keras tutorial
…API) with relu activation (using the Activation module). The Sequential model also exposes the Model class for creating customized models. We can use the sub-classing concept to create our own complex models and our own customized layers. A customized layer is created by sub-classing the Keras.Layer class, much like sub-classing Keras models. Core Modules: Keras also provides a lot of built-in … HDF5Matrix: data = HDF5Matrix('data.hdf5', 'data'). to_categorical: converts a class vector into a binary class matrix.
>>> from keras.utils import to_categorical
>>> labels = [0, 1, 2, 3, 4, …
[98 pages · 1.57 MB · 1 year ago]
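The to_categorical snippet in this excerpt is truncated; as a rough sketch of what it computes (a dependency-free reimplementation for illustration, not the Keras source — the real keras.utils.to_categorical returns a NumPy array rather than nested lists):

```python
def to_categorical(labels, num_classes=None):
    """One-hot encode a class vector into a binary class matrix.

    Sketch only: mirrors the behavior described in the excerpt,
    inferring the number of classes from the data when not given.
    """
    if num_classes is None:
        num_classes = max(labels) + 1
    return [[1.0 if j == y else 0.0 for j in range(num_classes)]
            for y in labels]

labels = [0, 1, 2, 3, 4]
one_hot = to_categorical(labels)
print(one_hot)  # each row has a single 1.0 at the label's index
```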
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
…three of which are the keywords that the device will accept: hello, weather, and time. The fourth class (none) indicates the absence of an acceptable keyword in the input signal. Figure 3-4: Workflow of … with the top (softmax) layer replaced by a new softmax layer with 102 units (one unit for each class). Additionally, we add the recommended resnet preprocessing layer at the bottom (right after the input … of the whale. The class labels before and after the rotation are identical. In contrast, the label-mixing transformations operate over multiple samples together and recalculate the class labels. However …
[56 pages · 18.93 MB · 1 year ago]
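The excerpt distinguishes label-invariant transformations (a rotation keeps the label) from label-mixing ones that recalculate labels across samples. A minimal sketch of one such label-mixing transform (mixup-style blending; the Beta(α, α) sampling and convex label combination are the standard recipe, though the chapter's exact variant may differ):

```python
import random

def mix_samples(x1, y1, x2, y2, alpha=0.2):
    """Blend two samples and recompute the one-hot labels with the
    same convex combination (a mixup-style label-mixing transform)."""
    lam = random.betavariate(alpha, alpha)  # mixing weight in (0, 1)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

random.seed(0)
x, y = mix_samples([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])
```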
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
…authors report a top-1 accuracy of … on ImageNet when fine-tuning with only 1% of labels (13 labels per class). The SimCLR fine-tuned checkpoint with ResNet-50 as the encoder architecture also achieved a better … and label-efficiency. As an example, achieving >70% accuracy on ImageNet with only 13 labels per class is a hard task, because ImageNet is a 1000-way classification problem. Therefore you should consider … the case where we have a model solving a multi-class classification problem with … classes. The ground-truth labels will either be the index of the correct class for a given example, or a one-hot vector …
[31 pages · 4.03 MB · 1 year ago]
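The excerpt contrasts hard labels (a class index or one-hot vector) with the richer signal a teacher model can provide. A sketch of temperature-scaled softmax, the usual way a teacher's logits are softened in distillation (the logit and temperature values below are made up for illustration):

```python
import math

def soft_targets(logits, temperature=1.0):
    """softmax(logits / T): higher T yields a softer (flatter) distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

hard = soft_targets([5.0, 1.0, 0.0], temperature=1.0)
soft = soft_targets([5.0, 1.0, 0.0], temperature=4.0)
```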
Keras: 基于 Python 的深度学习库
…forward pass, to create your own fully customized models (the Model subclassing API was introduced in Keras 2.2.0). Here is a simple multilayer perceptron written with a Model subclass:
import keras
class SimpleMLP(keras.Model):
    def __init__(self, use_bn=False, use_dp=False, num_classes=10):
        super(SimpleMLP …
…pass a dictionary or list of modes to use a different sample_weight_mode on each output. • weighted_metrics: a list of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing. • target_tensors: by default, Keras creates a placeholder for the model's targets, which is fed with the target data during training. If instead you would like to use your own target tensors (conversely, Keras …
…epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
[257 pages · 1.19 MB · 1 year ago]
Lecture 3: Logistic Regression
…result can be represented by a binary variable y ∈ {0, 1}:
y = 0: "Negative Class" (e.g., benign tumor); y = 1: "Positive Class" (e.g., malignant tumor)
…continuous values). Binary classification problem: y ∈ {0, 1}, where 0 represents the negative class and 1 denotes the positive class; y^(i) ∈ {0, 1} is also called the label of the i-th training example …
…The one-vs-all (OvA or OvR, one-against-all, OAA) strategy trains a single classifier per class, with the samples of that class as positive samples and all other samples as negative ones. Inputs: a learning algorithm …
[29 pages · 660.51 KB · 1 year ago]
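A minimal sketch of the logistic (sigmoid) hypothesis this lecture builds on, with the usual 0.5 decision threshold (the parameter values below are made up for illustration):

```python
import math

def sigmoid(z):
    """The logistic function, mapping any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(theta, x):
    """Label 1 (positive class) iff h_theta(x) = sigmoid(theta . x) >= 0.5."""
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1 if sigmoid(z) >= 0.5 else 0

print(predict([2.0, -1.0], [1.0, 0.5]))  # theta . x = 1.5 > 0, so label 1
```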
动手学深度学习 v2.0
…text, mapping each handwritten character to the corresponding known character. This "which one?" kind of problem is called a classification problem. Classification asks the model to predict which category (formally, class) a sample belongs to. For example, handwritten digits have 10 classes, with labels set to the digits 0–9. The simplest classification problem has only two classes, called binomial classification; for example, the dataset may consist of animal images with labels drawn from {…
n = 10000
a = torch.ones([n])
b = torch.ones([n])
Since we will frequently benchmark running time in this book, we define a timer:
class Timer:  #@save
    """Record multiple running times."""
    def __init__(self):
        self.times = []
        self.start()
    def start(self):
…In the evaluate_accuracy function, we create two variables in an Accumulator instance to store the number of correct predictions and the total number of predictions, respectively. Both accumulate as we iterate over the dataset.
class Accumulator:  #@save
    """Accumulate sums over n variables."""
    def __init__(self, n):
        self.data = [0.0] * n
    def add(self, …
[797 pages · 29.45 MB · 1 year ago]
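The Accumulator snippet in this excerpt is cut off at add(); here is a completed sketch consistent with how the excerpt describes it (two running sums that grow while iterating over the dataset). The method bodies beyond __init__ are reconstructed, not quoted from the book:

```python
class Accumulator:
    """Accumulate sums over n variables."""
    def __init__(self, n):
        self.data = [0.0] * n

    def add(self, *args):
        # add one value to each running sum
        self.data = [a + float(b) for a, b in zip(self.data, args)]

    def reset(self):
        self.data = [0.0] * len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

metric = Accumulator(2)   # e.g. correct predictions, total predictions
metric.add(3, 5)
metric.add(1, 5)
print(metric[0] / metric[1])  # accuracy so far: 4 / 10 = 0.4
```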
PyTorch Tutorial
…On the Princeton CS server (ssh cycles.cs.princeton.edu) • Non-CS students can request a class account. • Miniconda is highly recommended, because it lets you manage your own Python installation … Cross Entropy … Model: in PyTorch, a model is represented by a regular Python class that inherits from the Module class. It has two components: __init__(self), which defines the parts that make up the model … Model class • predefined 'layer' modules • 'Sequential' layer modules. Dataset: in PyTorch, a dataset is represented by a regular Python class that inherits from the Dataset class. You can …
[38 pages · 4.09 MB · 1 year ago]
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
…inputs. Hence, we have count=2 for both of them. Their detailed usage will be explained shortly.
class CNNCell():
    """It composes a cell based on the input configuration.
    Arguments:
        stride: A positive …
The CNNCell() class is responsible for constructing cells given the predicted (or randomly sampled) cell config and the hidden cell inputs. make_cell() is the entry method of the class, called … The following code defines a ChildManager class, which is responsible for spawning child networks, training them, and computing rewards. The layers constant defined in the class indicates the stacking order of the …
[33 pages · 2.48 MB · 1 year ago]
《TensorFlow 2项目进阶实战》2-快速上手篇:动手训练模型和部署服务
Historical aliases of tf.keras.Model: • tf.compat.v1.keras.Model • tf.compat.v1.keras.models.Model • tf.compat.v2.keras.Model • tf.compat.v2.keras.models.Model • tf.keras.models.Model
plt.figure()
plt.imshow(train_images[1])
plt.colorbar()
plt.grid(False)
plt.show()
Preprocess data:
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', …
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)  # 'camp' in the original is a typo for 'cmap'
plt.xlabel(class_names[train_labels[i]])  # index with brackets, not a call
plt.show()
Build the model → Train and evaluate → Make predictions → Visualize …
[52 pages · 7.99 MB · 1 year ago]
AI大模型千问 qwen 中文文档
…uses deepspeed to accelerate the training process. The script is very concise and easy to understand.
@dataclass
class ModelArguments:
    model_name_or_path: Optional[str] = field(default="Qwen/Qwen-7B")

@dataclass
class DataArguments:
    data_path: str = field(default=None, …
    … = field(default=None, metadata={"help": "Path to the evaluation data."})
    lazy_preprocess: bool = False

@dataclass
class TrainingArguments(transformers.TrainingArguments):
    cache_dir: Optional[str] = field(default=None)
    …"Sequences will be right padded (and possibly truncated)."},)
    use_lora: bool = False

@dataclass
class LoraArguments:
    lora_r: int = 64
    lora_alpha: int = 16
    lora_dropout: float = 0.05
    lora_target_modules: …
[56 pages · 835.78 KB · 1 year ago]
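The dataclass snippets in this excerpt are fragmentary; here is a self-contained sketch of the same configuration pattern using only the stdlib dataclasses module (transformers.TrainingArguments and the fields elided by the snippet are deliberately left out, not reconstructed):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelArguments:
    # default model path, as shown in the excerpt
    model_name_or_path: Optional[str] = field(default="Qwen/Qwen-7B")

@dataclass
class LoraArguments:
    lora_r: int = 64
    lora_alpha: int = 16
    lora_dropout: float = 0.05

model_args = ModelArguments()
lora_args = LoraArguments(lora_r=8)  # override one field at construction
print(model_args.model_name_or_path, lora_args.lora_r)
```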
37 results in total