keras tutorial
... through the installation of Keras, basics of deep learning, Keras models, Keras layers, Keras modules, and finally concludes with some real-time applications. Audience: This tutorial is prepared ... From the table of contents: Core Modules (p. 19); 6. Keras ― Modules (p. 20); Available modules (...).
98 pages | 1.57 MB | 1 year ago

PyTorch Release Notes
Component versions: cuBLAS 12.1.3.1, NVIDIA cuDNN 8.9.3, NVIDIA NCCL 2.18.3, NVIDIA RAPIDS™ 23.06, Apex, rdma-core 39.0, NVIDIA HPC-X 2.15, OpenMPI 4.1.4+, GDRCopy 2.3, TensorBoard 2.9.0, Nsight Compute ... with lower memory utilization. Transformer Engine also includes a collection of highly optimized modules for popular Transformer architectures and an automatic mixed precision-like API that can be used ... For more information about AMP, see the Training With Mixed Precision Guide. Tensor Core Examples: the tensor core examples provided in GitHub and NGC focus on achieving the best performance and convergence ...
365 pages | 2.94 MB | 1 year ago

AI大模型千问 qwen 中文文档 (Qwen documentation)
    class LoraArguments:
        lora_r: int = 64
        lora_alpha: int = 16
        lora_dropout: float = 0.05
        lora_target_modules: List[str] = field(
            default_factory=lambda: ["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", ...]
... lora_alpha: the alpha value for LoRA
    • lora_dropout: the dropout rate for LoRA
    • lora_target_modules: the target modules for LoRA. By default we tune all linear layers
    • lora_weight_path: the path to the weight ...
    lora_config = LoraConfig(
        r=lora_args.lora_r,
        lora_alpha=lora_args.lora_alpha,
        target_modules=lora_args.lora_target_modules,
        lora_dropout=lora_args.lora_dropout,
        bias=lora_args.lora_bias,
        task_type="CAUSAL_LM",
        ...
56 pages | 835.78 KB | 1 year ago

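A minimal sketch of how a LoRA configuration like the one excerpted above is typically applied with the Hugging Face peft library; the checkpoint name is illustrative and not taken from the document, and only the target modules visible in the (truncated) excerpt are reused:

    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    # Illustrative checkpoint; any causal LM whose projection layers match
    # the target_modules names below would work the same way.
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")

    lora_config = LoraConfig(
        r=64,                    # lora_r in the excerpt
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
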
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
... HyperBand, the recommended factor is 3. We will use the same. Now, let's go on and load the required modules and the dataset.
    import tensorflow as tf
    import tensorflow_datasets as tfds
    import keras_tuner as ...
... dropout_rate=DROPOUT_RATE):
    # Initialize the core model
    core_args = dict(input_shape=(IMG_SIZE, IMG_SIZE, 3), include_top=False)
    core = apps.resnet50.ResNet50(**core_args)
    core.trainable = False
    # Setup the top ...
    layers.Lambda(lambda x: tf.cast(x, tf.float32)),
    layers.Lambda(lambda x: apps.resnet.preprocess_input(x)),
    core,
    layers.Flatten(),
    layers.Dropout(dropout_rate),
    layers.Dense(NUM_CLASSES, activation='softmax')
33 pages | 2.48 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
... create_model():
    # Initialize the core model
    core_args = dict(input_shape=(IMG_SIZE, IMG_SIZE, 3), include_top=False)
    core = apps.resnet50.ResNet50(**core_args)
    core.trainable = False
    # Create the full model with input, preprocessing, core and softmax layers.
    model = tf.keras.Sequential([
        layers.Input([IMG_SIZE, IMG_SIZE, 3], dtype=tf.uint8),
        layers.Lambda(lambda x: tf.cast(x, tf.float32)),
        layers.Lambda(lambda x: apps.resnet.preprocess_input(x)),
        core,
        layers.Flatten(),
        layers.Dropout(DROPOUT_RATE),
        layers.Dense(NUM_CLASSES, activation='softmax')
    ])
    adam = optimizers.Adam(learn...
56 pages | 18.93 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
... the hashing trick. It helps to reduce the vocabulary with little or no performance trade-off. The core idea of the hashing trick is as follows: 1. Choose the desired vocabulary size N, and the number ... The entire Jupyter notebook is here for you to experiment with. We begin with loading the necessary modules. The Oxford-IIIT dataset is available through the tensorflow_datasets package. We apply the standard ...
53 pages | 3.92 MB | 1 year ago

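A minimal sketch of the hashing trick described in the excerpt above, assuming a chosen bucket count N; the hash function and token names are illustrative, not taken from the book:

    import hashlib

    N = 1024  # desired vocabulary size: every token maps to one of N buckets

    def token_to_bucket(token: str) -> int:
        # Hash the token and reduce it modulo N. Occasional collisions are the
        # trade-off that keeps the embedding table at a fixed, small size.
        digest = hashlib.md5(token.encode("utf-8")).hexdigest()
        return int(digest, 16) % N

    print(token_to_bucket("efficient"), token_to_bucket("architectures"))
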
动手学深度学习 v2.0 (Dive into Deep Learning v2.0)
    ... display
    from matplotlib import pyplot as plt
    from matplotlib_inline import backend_inline
    d2l = sys.modules[__name__]
Most of the code in this book is based on PyTorch. PyTorch is an open-source deep learning framework that is very popular in the research community. All of the code in this book has been tested with the latest version of PyTorch. However, ...
... MF (Intel 80186); 1990: 10 K samples (optical character recognition), 10 MB memory, 10 MF (Intel 80486); 2000: 10 M samples (web pages), 100 MB, 1 GF (Intel Core); 2010: 10 G samples (advertising), 1 GB, 1 TF (Nvidia C2050); 2020: 1 T samples (social networks), 100 GB, 1 PF (Nvidia DGX-2). Clearly, random-access memory ...
    ... module in enumerate(args):
        # Here, module is an instance of a Module subclass. We store it in the
        # Module class's member variable _modules, whose type is OrderedDict.
        self._modules[str(idx)] = module
    def forward(self, X):
        # OrderedDict guarantees that members are traversed in the order they were added
        (continues ...)
797 pages | 29.45 MB | 1 year ago

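The forward pass is cut off in the excerpt above; a self-contained sketch of the same idea follows. The class name MySequential and the layer sizes are illustrative, not quoted from the book:

    import torch
    from torch import nn

    class MySequential(nn.Module):
        def __init__(self, *args):
            super().__init__()
            for idx, module in enumerate(args):
                # Store each submodule under a string key in _modules (an OrderedDict).
                self._modules[str(idx)] = module

        def forward(self, X):
            # OrderedDict guarantees the submodules are visited in insertion order.
            for module in self._modules.values():
                X = module(X)
            return X

    net = MySequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
    print(net(torch.rand(2, 20)).shape)  # torch.Size([2, 10])
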
深度学习与PyTorch入门实战 - 43. nn.Module (Deep Learning and PyTorch in practice, lesson 43: nn.Module)
    ▪ Conv2d
    ▪ ConvTranspose2d
    ▪ Dropout
    ▪ etc.
2. Container
    ▪ net(x)
3. parameters
4. modules
    ▪ modules: all nodes
    ▪ children: direct children
5. to(device)
6. save and load
7. train/test
8. ...
16 pages | 1.14 MB | 1 year ago

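A minimal sketch exercising the nn.Module features listed above; the two-layer network is illustrative, not taken from the slides:

    import torch
    from torch import nn

    net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))  # a container; call it as net(x)

    out = net(torch.rand(3, 4))                            # forward pass through the container
    n_params = sum(p.numel() for p in net.parameters())    # parameters: every trainable tensor
    all_nodes = list(net.modules())                         # modules: the container plus all submodules
    direct = list(net.children())                           # children: only the direct submodules

    device = "cuda" if torch.cuda.is_available() else "cpu"
    net = net.to(device)                                    # move parameters and buffers to a device

    torch.save(net.state_dict(), "net.ckpt")                # save ...
    net.load_state_dict(torch.load("net.ckpt"))             # ... and load a checkpoint

    net.train()   # training-mode behavior (e.g. Dropout active)
    net.eval()    # evaluation/test-mode behavior
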
PyTorch Tutorial
Sample Code in practice
Complex Models
    • Complex Model Class
    • Predefined 'layer' modules
    • 'Sequential' layer modules
Dataset
    • In PyTorch, a dataset is represented by a regular Python class ...
38 pages | 4.09 MB | 1 year ago

全连接神经网络实战. pytorch 版 (Fully connected neural networks in practice, PyTorch edition)
... brings great benefits to training. Define a function inside NeuralNetwork:
    def weight_init(self):
        # Iterate over every layer of the network
        for m in self.modules():
            # If the layer is a linear (fully connected) layer
            if isinstance(m, nn.Linear):
                print(m.weight.shape ...
... 8), nn.ReLU(), nn.Linear(8, 4), )
    def weight_init(self):
        for m in self.modules():
            if isinstance(m, nn.Linear):
                m.weight.data.normal_(0.0, 1.0)  # ...
29 pages | 1.40 MB | 1 year ago

19 results in total













