PyTorch Brand Guidelines
PyTorch Symbol: Our expression is best communicated when it is supported by the Symbol, a simple graphic that adds intrigue and curiosity to our system. The symbol allows us to speak in metaphors. … Symbol Clearspace: While our system encourages flexible use of elements, it is important to present the symbol in its entirety, maintaining legibility and clear space around it. Please keep clear space of at least 1/2 the symbol's width at all times. … Symbol Sizing: When sizing or scaling the symbol, never exceed a …
12 pages | 34.16 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
… achieved with a simple Huffman Tree (figure 2-1, bottom). Each leaf node in the tree is a symbol, and the path to that symbol is the bit-string assigned to it. This allows us to encode the given data in as few … aggregate, this would be better than encoding each symbol with the same number of bits. The lookup table (figure 2-1, middle) that contains the symbol-code mapping is transmitted along with the encoded … the code from the lookup table to retrieve the symbols back. Since the codes are unique for each symbol (in fact, they are prefix codes: no code is a prefix of some other code, which eliminates ambiguity …
33 pages | 1.96 MB | 1 year ago
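The Chapter 2 snippet above describes the core of Huffman coding: leaf nodes are symbols, root-to-leaf paths are prefix codes, and a symbol-to-code lookup table travels with the encoded data. The following is a minimal Python sketch of that idea, not code taken from the book; the huffman_codes helper and the toy string are assumptions made for illustration.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    # Count symbol frequencies (the role of figure 2-1, top, in the excerpt).
    freq = Counter(data)
    # Heap entries: (frequency, tie_breaker, node); a node is either a leaf
    # symbol or a (left, right) pair for an internal node.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {heap[0][2]: "0"}  # degenerate case: only one distinct symbol
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (left, right)))
        tie += 1
    # Walk the tree: the path to each leaf becomes that symbol's prefix code.
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

data = "aaaaabbbcc"  # 'a' is most frequent, so it gets the shortest code
table = huffman_codes(data)
encoded = "".join(table[s] for s in data)
print(table)
print(f"{len(encoded)} bits vs {2 * len(data)} bits with fixed 2-bit codes")
```

Because frequent symbols receive shorter codes, the encoded length beats a fixed-width encoding whenever the symbol distribution is skewed, which is exactly why the excerpt transmits the lookup table alongside the encoded data.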
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
… sparser regions? Recall that Huffman encoding does this by trying to create a Huffman tree based on symbol frequency. As a result it comes up with a variable-length code, where a smaller length code is assigned …
34 pages | 3.18 MB | 1 year ago
3 results in total













