《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
“I have made this longer than usual because I have not had time to make it shorter.” — Blaise Pascal
In the last chapter, we discussed a few ideas to improve deep learning efficiency. Now, we will elaborate on one of those ideas: compression techniques. Compression techniques aim to reduce the model footprint (size, latency, memory, etc.). We can reduce the … In this chapter, we introduce Quantization, a model compression technique that addresses both these issues. We’ll start with a gentle introduction to the idea of compression. Details of quantization and its applications …
33 pages | 1.96 MB | 1 year ago
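This excerpt only names quantization’s goal (a smaller, faster model); the chapter’s actual recipe is not visible here. As a minimal sketch of the core idea it introduces, affine 8-bit quantization of a float tensor, here in plain numpy with illustrative helper names that are not from the book:

```python
import numpy as np

def quantize_uint8(x):
    """Affine quantization: map floats in [min, max] onto the 256 uint8 levels."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)  # 4x smaller than float32
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover an approximation of the original float tensor."""
    return q.astype(np.float32) * scale + lo

weights = np.random.randn(256, 256).astype(np.float32)
q, scale, lo = quantize_uint8(weights)
print("max abs error:", np.abs(weights - dequantize(q, scale, lo)).max())
```

Storing `q` plus the two floats `scale` and `lo` cuts this tensor to roughly a quarter of its float32 size, at the cost of a small, bounded rounding error.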
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
“The problem is that we attempt to solve the simplest questions cleverly, thereby rendering them unusually complex. One should seek the simple solution.” — Anton Pavlovich Chekhov
In this chapter, we will discuss two advanced compression techniques. By ‘advanced’ we mean that these techniques are slightly more involved than quantization (as discussed in the second chapter) … the quality of our models. Did we get you excited yet? Let’s learn about these techniques together!
Model Compression Using Sparsity: Sparsity or Pruning refers to the technique of removing (pruning) …
34 pages | 3.18 MB | 1 year ago
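The snippet’s definition of pruning is cut off mid-sentence. Under the usual reading (remove the weights that contribute least, typically those with the smallest magnitude), a minimal numpy sketch could look like this; the helper name and the 75% sparsity level are illustrative, not from the chapter:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.randn(4, 4)
print(magnitude_prune(w, sparsity=0.75))  # ~12 of the 16 entries become zero
```

The resulting zeros can then be stored in a sparse format or skipped at inference time, which is where the footprint and latency savings come from.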
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
“The more that you read, the more things you will know. The more that you learn, the more places you'll go.” ― Dr. Seuss
Model quality is an important benchmark to evaluate … translation accuracy would garner better consumer support. In this chapter, our focus will be on the techniques that enable us to achieve our quality goals. High quality models have an additional benefit … In the first chapter, we briefly introduced learning techniques such as regularization, dropout, data augmentation, and distillation to improve quality. These techniques can boost metrics like accuracy, precision …
56 pages | 18.93 MB | 1 year ago
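Of the techniques the excerpt lists, distillation is the least self-explanatory. A minimal numpy sketch of the commonly used distillation loss follows; the temperature `T`, the mixing weight `alpha`, and the T² scaling come from the standard Hinton et al. formulation, not from this excerpt:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """alpha * CE(student, hard labels) + (1 - alpha) * T^2 * KL(teacher || student)."""
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    pt, ps = softmax(teacher_logits, T), softmax(student_logits, T)
    soft = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(axis=-1).mean() * T * T
    return alpha * hard + (1 - alpha) * soft

s, t = np.random.randn(8, 10), np.random.randn(8, 10)
y = np.random.randint(0, 10, size=8)
print(distillation_loss(s, t, y))
```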
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
“Tell me and I forget, teach me and I may remember, involve me and I learn.” – Benjamin Franklin
This chapter is a continuation of Chapter 3, where we introduced learning techniques. To recap, learning techniques can help us meet our model quality goals. Techniques like distillation and data augmentation improve the model quality without increasing the … achieve impressive quality with a small number of labels. As we described in chapter 3’s ‘Learning Techniques and Efficiency’ section, labeling of training data is an expensive undertaking. Factoring in the …
31 pages | 4.03 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction
… in deep learning models. We will also introduce core areas of efficiency techniques (compression techniques, learning techniques, automation, efficient models & layers, infrastructure). Our hope is that you will have the tools at your disposal to achieve what you want. The subsequent chapters will delve deeper into techniques, infrastructure, and other helpful topics where you can get your hands dirty with practical projects … possible classes. This helped with creating a testbed for researchers to experiment with. Along with techniques like Transfer Learning to adapt such models for the real world, and a rapid growth in data collected …
21 pages | 3.17 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
… Schmidt in ANALOG magazine (1991)
So far, we have discussed generic techniques which are agnostic to the model architecture. These techniques can be applied in NLP, vision, speech, or other domains. However … deep learning era). Techniques like Principal Components Analysis, Low-Rank Matrix Factorization, etc. are popular tools for dimensionality reduction. We will explain these techniques in further detail … in chapter 2. We could also incorporate compression techniques such as sparsity, k-means clustering, etc., which will be discussed in the later chapters. 2. Even after compression, the vocabulary itself is large: …
53 pages | 3.92 MB | 1 year ago
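As a minimal sketch of the Low-Rank Matrix Factorization idea this excerpt names, applied to the large-embedding-table problem it raises: factor a vocab × dim table into two thin matrices via SVD. The random stand-in table and the rank of 32 are illustrative assumptions, not values from the book:

```python
import numpy as np

vocab, dim, rank = 10_000, 256, 32
E = np.random.randn(vocab, dim).astype(np.float32)  # stand-in embedding table

U, s, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :rank] * s[:rank]  # (vocab, rank)
B = Vt[:rank]               # (rank, dim); a lookup is now A[token_id] @ B

print("params:", E.size, "->", A.size + B.size)  # ~8x fewer parameters here
print("relative error:", np.linalg.norm(E - A @ B) / np.linalg.norm(E))
```

For a truly random matrix this truncation loses a lot of signal; the technique pays off when the matrix has genuine low-rank structure, which trained embedding tables often do.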
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
… a variety of techniques in the last few chapters to improve efficiency and boost the quality of deep learning models. These techniques are just a small subset of the available techniques. It is often tedious … The deep learning field is growing at a rapid pace. Over the past few years, we have seen newer architectures, techniques, and training procedures pushing the performance benchmarks higher. Figure 7-1 shows some of the … Let’s understand this using the earlier example of choosing quantization and/or clustering techniques for model optimization. We have a search space which has two boolean-valued parameters: quantization …
33 pages | 2.48 MB | 1 year ago
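The excerpt’s search space, two boolean-valued parameters (quantization on/off, clustering on/off), has only four configurations, so it can be enumerated exhaustively. A minimal sketch, with a hypothetical `evaluate` standing in for training and measuring each optimized model:

```python
import itertools
import random

def evaluate(config):
    """Hypothetical stand-in: optimize a model per `config`, return a quality metric."""
    return random.random()

search_space = {"quantization": [False, True], "clustering": [False, True]}

best_config, best_score = None, float("-inf")
for values in itertools.product(*search_space.values()):
    config = dict(zip(search_space, values))
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, best_score)
```

Larger search spaces quickly make exhaustive enumeration infeasible, which is the usual motivation for smarter search strategies.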
Lecture 1: Overview
September 6, 2023. Course Information: We will investigate fundamental concepts, techniques, and algorithms in machine learning. The topics include linear regression, logistic regression … Semi-supervised Learning: As the name suggests, it is in between Supervised and Unsupervised learning techniques w.r.t. the amount of labeled and unlabeled data required for training. With the goal of reducing …
57 pages | 2.41 MB | 1 year ago
Lecture 4: Regularization and Bayesian Statistics
… See “A comparison of numerical optimizers for logistic regression” by Tom Minka on optimization techniques (gradient descent and others) for logistic regression (both MLE and MAP) …
25 pages | 185.30 KB | 1 year ago
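As a minimal sketch of one optimizer the excerpt mentions, gradient descent for the MAP estimate of logistic regression (the L2 penalty corresponds to a zero-mean Gaussian prior on w; `lam`, `lr`, and `iters` are illustrative values, not from the lecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg_map(X, y, lam=0.1, lr=0.1, iters=1000):
    """Minimize mean negative log-likelihood + (lam/2) * ||w||^2 by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ w) - y) / n + lam * w
        w -= lr * grad
    return w

X = np.random.randn(200, 3)
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
print(fit_logreg_map(X, y))  # recovers the direction of the true weights
```

Setting `lam = 0` recovers the plain MLE version.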
Lecture 3: Logistic Regression
… classifying instances into one of more than two classes. The existing multiclass classification techniques can be categorized into: transformation to binary, extension from binary, and hierarchical classification …
29 pages | 660.51 KB | 1 year ago
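Of the three categories the excerpt lists, transformation to binary is the simplest to illustrate, e.g. one-vs-rest: train one binary scorer per class and predict the class whose scorer is most confident. A minimal sketch, with a hypothetical least-squares scorer standing in for any real binary classifier:

```python
import numpy as np

def fit_binary(X, y01):
    """Hypothetical binary scorer: least-squares fit; higher score = more positive."""
    w, *_ = np.linalg.lstsq(X, y01, rcond=None)
    return lambda X_: X_ @ w

def one_vs_rest_predict(X_train, y_train, X_test, n_classes):
    models = [fit_binary(X_train, (y_train == c).astype(float)) for c in range(n_classes)]
    scores = np.stack([m(X_test) for m in models], axis=1)  # one column per class
    return scores.argmax(axis=1)                            # most confident class wins

X, y = np.random.randn(300, 4), np.random.randint(0, 3, size=300)
print(one_vs_rest_predict(X[:200], y[:200], X[200:], n_classes=3)[:10])
```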
16 results in total













