Lecture 1: Overview
…because they require specific, detailed skills or knowledge tuned to a specific task (the knowledge-engineering bottleneck). Develop systems that can automatically adapt and customize themselves to individual …
0 码力 | 57 pages | 2.41 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniquesto benchmark learning techniques. It is followed by a short discussion on exchanging model quality and model footprint. An in-depth discussion of data augmentation and distillation follows right after. discuss a few label invariant image and text transformation techniques. Image Transformations This discussion is organized into the following two categories: spatial transformation and value transformation cheaper than human labor costs to produce training samples. We have chosen four techniques for deeper discussion to cover both statistical and deep learning based synthetic data generation models. Back Translation0 码力 | 56 页 | 18.93 MB | 1 年前3
机器学习课程-温州大学-08机器学习-集成学习Ridgeway G . Special Invited Paper. Additive Logistic Regression: A Statistical View of Boosting: Discussion[J]. Annals of Statistics, 2000, 28(2):393-400. [6] FRIEDMAN J H . Stochastic gradient boosting[J]0 码力 | 50 页 | 2.03 MB | 1 年前3
Lecture Notes on Gaussian Discriminant Analysis, Naivei=1 1(y(i) = y) (23) Remark: We assume binary features (Xj ∈ {0, 1} for ∀j ∈ [n]) in the above discussion. What if Xj ∈ {1, 2, · · · , v}? Can we get similar results? Check it by yourselves! 4.4 Laplace0 码力 | 19 页 | 238.80 KB | 1 年前3
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automationreaders to play with other choices and see how they affect the results. It's now time to conclude our discussion on automation with a short introduction to Automated ML or AutoML in the final section. Summary0 码力 | 33 页 | 2.48 MB | 1 年前3
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniquesan n-dimensional matrix) to denote inputs, weights and the bias respectively. To simplify this discussion, let’s assume the shape (an array describing the size of each dimension) of X as [batch size, D1]0 码力 | 33 页 | 1.96 MB | 1 年前3
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review, training data-efficient (specifically, label efficient) models. We will describe the general principles of Self-Supervised learning which are applicable to both language and vision. We will also demonstrate0 码力 | 31 页 | 4.03 MB | 1 年前3
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniquesverify the compression gains). However, given that we computed the clusters earlier from first principles, we hope that you also got an understanding of what is happening under the covers (or wrappers0 码力 | 34 页 | 3.18 MB | 1 年前3
动手学深度学习 v2.0approach. Vol. 5. GMD‐Forschungszentrum Informationstechnik Bonn. [James, 2007] James, W. (2007). The principles of psychology. Vol. 1. Cosimo, Inc. [Jia et al., 2018] Jia, X., Song, S., He, W., Wang, Y., Rong0 码力 | 797 页 | 29.45 MB | 1 年前3
共 9 条
- 1













