《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
Firstly, regularization and dropout are fairly straightforward to enable in any modern deep learning framework. Secondly, data augmentation and distillation can bring significant efficiency gains during … the dataset for various transformations [3]. [3] Menghani, Gaurav. "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better." arXiv preprint arXiv:2106.08962 (2021). It is typical human behavior, when making a big decision (a big purchase or an important life event), to discuss it with friends and family to decide whether it is a good one. We rely on their perspectives …
0 credits | 56 pages | 18.93 MB | 1 year ago
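To illustrate the first point in the snippet, here is a minimal sketch (our own illustration, not code from the book) of enabling dropout and L2 regularization in Keras; the layer sizes and coefficients are arbitrary:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        64, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # L2 penalty on the weights
    tf.keras.layers.Dropout(0.5),  # randomly zeroes 50% of activations at train time
    tf.keras.layers.Dense(10, activation="softmax"),
])
```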
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
Not all features might be equally important; thus, selecting the most informative features is crucial for making the training step efficient. In the case of visual, textual, and other multimodal data, we often … A line separating the two classes is called a decision boundary, and this is only one possible decision boundary. Refer to Figure 4-3, where we draw one such decision boundary. If we had more than two features … (https://en.wikipedia.org/wiki/Linear_separability) Figure 4-3: Extending Figure 4-2, we draw a decision boundary to separate the two classes of animals (suitable and not suitable for the petting zoo).
0 credits | 53 pages | 3.92 MB | 1 year ago
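A minimal sketch (our own, with hypothetical weights w and bias b) of how a linear decision boundary w·x + b = 0 assigns a class to each side, in the spirit of the petting-zoo example:

```python
import numpy as np

w, b = np.array([1.0, -1.0]), 0.0        # hypothetical boundary parameters

def side(x):
    # Points with w.x + b > 0 fall on one side of the boundary, < 0 on the other.
    return np.sign(w @ x + b)

print(side(np.array([2.0, 0.5])))   # +1.0, e.g. "suitable for the petting zoo"
print(side(np.array([0.5, 2.0])))   # -1.0, "not suitable"
```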
Lecture 3: Logistic Regression
Logistic Regression: A Closer Look … What's the underlying decision rule in logistic regression? At the decision boundary, both classes are equiprobable; thus, we have $\Pr(Y = 1 \mid X = x; \theta) = \Pr(Y = 0 \mid X = x; \theta) \Rightarrow \frac{1}{1 + \exp(-\theta^T x)} = \frac{1}{1 + \exp(\theta^T x)} \Rightarrow \exp(\theta^T x) = 1 \Rightarrow \theta^T x = 0$. Therefore, the decision boundary of logistic regression is nothing but a linear hyperplane. … $\{(x^{(i)}, z^{(i)})\}_{i=1,\cdots,m}$ to obtain $f_k$. Higher $f_k(x)$ implies higher probability that $x$ is in class $k$. Making the decision: $y^* = \arg\max_k f_k(x)$. Example: Using SVM to train each binary classifier.
0 credits | 29 pages | 660.51 KB | 1 year ago
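The derived rule $\theta^T x = 0$ translates directly into code; a minimal sketch (our own, with made-up parameter values) of the logistic regression decision rule:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, x):
    # Thresholding sigmoid(theta.x) at 0.5 is equivalent to testing theta.x >= 0.
    return int(theta @ x >= 0.0)

theta = np.array([1.0, -2.0, 0.5])   # hypothetical learned parameters
x = np.array([1.0, 0.3, 2.0])        # hypothetical input (bias term folded in)
print(predict(theta, x), sigmoid(theta @ x))   # class 1, probability ~0.80
```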
Lecture 6: Support Vector Machine
The Margin: The hyperplane actually serves as a decision boundary differentiating positive labels from negative labels. We make a more confident decision if a larger margin is given, i.e., the data sample … (denoted by $d^*$). $\alpha, \beta$ are dual feasible if $\alpha \succeq 0$, $(\alpha, \beta) \in \operatorname{dom} G$, and $G > -\infty$; often simplified by making the implicit constraint $(\alpha, \beta) \in \operatorname{dom} G$ explicit. … Weak Duality … original space X … Kernelized SVM Prediction: Define the decision boundary $\omega^{*T} \phi(x) + b^*$ in the higher-dimensional feature space, where $\omega^* = \sum_{i: \alpha_i^* > 0} \alpha_i^* y^{(i)} \phi(x^{(i)})$.
0 credits | 82 pages | 773.97 KB | 1 year ago
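A minimal sketch (our own; assumes an RBF kernel and already-solved dual variables $\alpha^*$, $b^*$) of the kernelized SVM prediction rule quoted above, using the kernel trick so $\phi$ never has to be computed explicitly:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # K(a, b) = phi(a).phi(b) for the implicit RBF feature map phi
    return np.exp(-gamma * np.sum((a - b) ** 2))

def svm_predict(x, support_vectors, alphas, labels, b):
    # f(x) = sum over support vectors of alpha_i * y_i * K(x_i, x) + b
    f = sum(a * y * rbf_kernel(sv, x)
            for sv, a, y in zip(support_vectors, alphas, labels) if a > 0)
    return np.sign(f + b)
```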
Lecture 1: Overview
… interacting. 4. Semi-supervised learning: partially supervised learning. 5. Active learning: actively making queries. … Supervised Learning: In the ML literature … Classification and Regression: Classification is finding decision boundaries; regression is fitting a curve/plane to data. (Figure: a curve fitted to data points, axes x and t.) … results of unsupervised clustering to the expectations of the user. With lots of unlabeled data, the decision boundary becomes apparent. … Semi-supervised Learning
0 credits | 57 pages | 2.41 MB | 1 year ago
QCon北京2018-《从键盘输入到神经网络--深度学习在彭博的应用》-李碧野 (QCon Beijing 2018: "From Keyboard Input to Neural Networks: Deep Learning at Bloomberg")
The Terminal delivers a diverse array of information on a single platform to facilitate financial decision-making. … What is … Data Technologies … Automation …
0 credits | 64 pages | 13.45 MB | 1 year ago
keras tutorial
About the Tutorial: Keras is an open source deep learning framework for Python. It has been developed by an artificial intelligence researcher at Google named François Chollet … the field of deep learning and neural network frameworks. This tutorial is intended to make you comfortable in getting started with the Keras framework concepts. Prerequisites: Before proceeding with the concepts given in this tutorial, we assume that the readers have a basic understanding of deep learning frameworks. In addition to this, it will be very helpful if the readers have a sound knowledge of Python.
0 credits | 98 pages | 1.57 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
… Apple's CoreML as well, which are covered in chapter 10. If you are not familiar with the TensorFlow framework, we refer you to the book Deep Learning with Python [1]. All the code examples in this book are available … with access to CPU, GPU, and TPU resources. You can also run this locally on your machine using the Jupyter framework or with other cloud services. The solution to this specific exercise is in this notebook. Solution: … (acceptable tolerance) value. Exercise: Data Dequantization. "But you wouldn't clap yet. Because making something disappear isn't enough; you have to bring it back. That's why every magic trick has a …"
0 credits | 33 pages | 1.96 MB | 1 year ago
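A minimal sketch (our own, not the book's notebook; assumes a non-constant input so the scale is nonzero) of linear quantization and dequantization, recovering the original values to within one quantization step, i.e. an acceptable tolerance:

```python
import numpy as np

def quantize(x, num_bits=8):
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min) / (2 ** num_bits - 1)   # step size per integer level
    q = np.round((x - x_min) / scale).astype(np.uint8)
    return q, scale, x_min

def dequantize(q, scale, x_min):
    return q.astype(np.float32) * scale + x_min     # approximate reconstruction

x = np.random.randn(5).astype(np.float32)
q, scale, x_min = quantize(x)
x_hat = dequantize(q, scale, x_min)
assert np.allclose(x, x_hat, atol=scale)            # within acceptable tolerance
```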
Experiment 2: Logistic Regression and Newton's Method
… find the decision boundary in the classification problem. The decision boundary is defined as the line where $P(y = 1 \mid x; \theta) = g(\theta^T x) = 0.5$, which corresponds to $\theta^T x = 0$. Plotting the decision boundary: (figure: Exam 1 score vs. Exam 2 score, with "Admitted" and "Not admitted" points and the decision boundary). 5. What is the probability that a student with a score of 20 on Exam 1 and a score of … achieving convergence? 2. Show how $L$ is decreased iteratively by Newton's method. 3. Plot the decision boundary. 4. What is the probability that a student with a score of 20 on Exam 1 and a score of …
0 credits | 4 pages | 196.41 KB | 1 year ago
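A minimal sketch (our own; assumes X already contains a bias column and the Hessian is well-conditioned) of the Newton updates the experiment asks for:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_logreg(X, y, num_iters=10):
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (h - y) / m              # gradient of the loss L
        H = (X.T * (h * (1 - h))) @ X / m     # Hessian of L
        theta -= np.linalg.solve(H, grad)     # Newton step: theta -= H^{-1} grad
    return theta
```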
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
… an expensive undertaking. Factoring in the costs of training human labelers on a given task, and then making sure that the labels are reliable, human labeling gets very expensive very quickly. Even after that … dissimilar. How do we go about creating positive pairs? One example of such a recipe is the SimCLR framework [12, 13] (refer to Figure 6-10). SimCLR creates positive pairs by using different data augmentations … enforce agreement between the two views. Figure 6-10: Contrastive learning as implemented in the SimCLR framework. The input is augmented to generate two views. Using the shared encoder, hidden … [13] Chen, Ting, et al. "A Simple Framework for Contrastive Learning of Visual Representations." arXiv preprint arXiv:2002.05709 (2020).
0 credits | 31 pages | 4.03 MB | 1 year ago
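A minimal sketch (our own simplification of the SimCLR idea, not the book's code; the encoder and augmentation below are hypothetical stand-ins) of measuring agreement between two augmented views of the same input:

```python
import tensorflow as tf

encoder = tf.keras.layers.Dense(16)   # stand-in for the shared encoder
augment = lambda x: x + tf.random.normal(tf.shape(x), stddev=0.1)  # stand-in augmentation

def agreement(x):
    z1 = encoder(augment(x))                 # embedding of view 1
    z2 = encoder(augment(x))                 # embedding of view 2 (positive pair)
    z1 = tf.math.l2_normalize(z1, axis=-1)
    z2 = tf.math.l2_normalize(z2, axis=-1)
    return tf.reduce_sum(z1 * z2, axis=-1)   # cosine similarity to maximize

x = tf.random.normal([4, 8])                 # a toy batch
print(agreement(x))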
34 results in total