《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
  Embeddings for Smaller and Faster Models: We humans can intuitively grasp similarities between different objects. For instance, when we see an image of a dog or a cat, it is likely that we would find them both … to reduce the dimensionality via max-pooling, and to average out the entire sequence dimension via global average pooling.

    def get_cnn_model(embedding_layer):
        int_sequences_input = tf.keras.Input(shape=(None, …
        … activation="relu")(x)
        x = layers.MaxPooling1D(3)(x)
        x = layers.Conv1D(128, 5, activation="relu")(x)
        # Global average pooling averages out the entire sequence.
        x = layers.GlobalMaxPooling1D()(x)
        x = layers.Dense(128 …

  0 points | 53 pages | 3.92 MB | 1 year ago
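  The excerpt's code is cut off mid-function; a minimal runnable sketch of such a text-CNN head, assuming an integer-sequence input and a prebuilt embedding_layer (the final Dense width and the number of output classes are assumptions, not values from the book), might look like:

    import tensorflow as tf
    from tensorflow.keras import layers

    def get_cnn_model(embedding_layer, num_classes=4):
        # Variable-length sequences of token ids.
        int_sequences_input = tf.keras.Input(shape=(None,), dtype="int64")
        x = embedding_layer(int_sequences_input)
        x = layers.Conv1D(128, 5, activation="relu")(x)
        x = layers.MaxPooling1D(3)(x)
        x = layers.Conv1D(128, 5, activation="relu")(x)
        # Collapse the whole sequence dimension into one fixed-size vector
        # (max pooling here; swap in GlobalAveragePooling1D to average instead).
        x = layers.GlobalMaxPooling1D()(x)
        x = layers.Dense(128, activation="relu")(x)
        outputs = layers.Dense(num_classes, activation="softmax")(x)
        return tf.keras.Model(int_sequences_input, outputs)

    # Usage with a hypothetical 20k-token vocabulary and 64-d embeddings:
    model = get_cnn_model(layers.Embedding(20000, 64))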
Keras: 基于 Python 的深度学习库
  If the model you want to load contains custom layers, or other custom classes or functions, you can pass them to the loading mechanism via the custom_objects argument:

    from keras.models import load_model
    # Assume your model contains an instance of the AttentionLayer class
    model = load_model('my_model.h5', custom_objects={'AttentionLayer': AttentionLayer})

  … works the same way:

    from keras.models import model_from_json
    model = model_from_json(json_string, custom_objects={'AttentionLayer': AttentionLayer})

  3.3.7 Why is the training error much higher than the test error? Keras models have two modes: training and testing. Regularization mechanisms such as Dropout … a MobileNet model, you need to import the custom objects relu6 and DepthwiseConv2D and pass them in via custom_objects. Example code:

    model = load_model('mobilenet.h5', custom_objects={
        'relu6': mobilenet.relu6,
        'DepthwiseConv2D': mobilenet.DepthwiseConv2D})

  0 points | 257 pages | 1.19 MB | 1 year ago
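  The excerpt shows only the loading side; for completeness, a small round-trip sketch using tf.keras (the AttentionLayer body here is a made-up placeholder, only meant to exercise custom_objects):

    import tensorflow as tf
    from tensorflow.keras import layers

    class AttentionLayer(layers.Layer):
        # Toy behavior: reweight features by their own softmax scores.
        def call(self, inputs):
            return inputs * tf.nn.softmax(inputs, axis=-1)

    model = tf.keras.Sequential([layers.InputLayer(input_shape=(4,)), AttentionLayer()])
    model.save('my_model.h5')
    # Without custom_objects, load_model cannot resolve the AttentionLayer class.
    restored = tf.keras.models.load_model(
        'my_model.h5', custom_objects={'AttentionLayer': AttentionLayer})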
【PyTorch深度学习-龙龙老师】-测试版202112
  … the graph-execution stage; this code must be run with TensorFlow 1.x:

    # Create the execution environment
    sess = tf.InteractiveSession()
    # The initialization step must also be run as an operation
    init = tf.global_variables_initializer()
    sess.run(init)  # Run the initialization op to complete initialization
    # To run the output endpoint, the input endpoints must be fed values
    c_numpy = sess.run(c_op, …

  … create a new pooling layer:

    global_average_layer = layers.GlobalAveragePooling2D()
    # Use the previous layer's output as this layer's input and test its output
    x = tf.random.normal([4, 7, 7, 2048])
    # The pooling layer reduces the shape from [4,7,7,2048] to [4,1,1,2048];
    # after dropping the singleton dimensions it becomes [4,2048]
    out = global_average_layer(x)

  … after the feature sub-network, the new pooling layer and the fully connected layer, we wrap everything into a new network with a Sequential container:

    # Re-wrap into our own network model
    mynet = Sequential([resnet, global_average_layer, fc])
    mynet.summary()

  The structure of the new network then prints as: Layer (type)  Output Shape …

  0 points | 439 pages | 29.91 MB | 1 year ago
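  Since resnet and fc are defined outside the excerpt, a self-contained version of the wrapper it describes might look like this (the ResNet50 backbone, ImageNet weights, and the 100-way Dense head are assumptions for illustration):

    import tensorflow as tf
    from tensorflow.keras import layers, Sequential

    # Pretrained feature sub-network, without its original classification head.
    resnet = tf.keras.applications.ResNet50(include_top=False, weights='imagenet')
    global_average_layer = layers.GlobalAveragePooling2D()
    fc = layers.Dense(100)  # new task-specific fully connected layer

    mynet = Sequential([resnet, global_average_layer, fc])
    mynet.build(input_shape=(None, 224, 224, 3))
    mynet.summary()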
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
  … teaching a child to recognize common household objects such as a toy, a cup or a saucer. The number of times you have to point at an object and call out its kind before the child identifies it correctly … as possible. Extending the teaching-a-child analogy, consider the number of distinct examples of objects (labels) you must show a child before they can learn to identify them with high accuracy. All cups … light. The same process can be repeated for other objects. If the child learns to recognize these objects accurately with a smaller number of distinct objects being shown, we have made this process more label …

  0 points | 56 pages | 18.93 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
  … multiple valleys or local minima, of which only one is the true global minimum. In practice the optimizer does not know where the global minimum lies, so it might get stuck in local minima / valleys when … a deep learning model can have anywhere from thousands to billions of parameters, so finding the exact global minimum is intractable. That's why we use optimization algorithms that work for convex functions, …

  0 points | 31 pages | 4.03 MB | 1 year ago
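  As a toy illustration of the excerpt's point (not from the book): plain gradient descent on a one-dimensional non-convex function lands in different valleys depending on where it starts, and nothing tells it whether the valley it found is the global one.

    def f(x):
        # Non-convex: two valleys, the left one deeper (the global minimum).
        return x**4 - 5 * x**2 + x

    def grad(x):
        return 4 * x**3 - 10 * x + 1

    def descend(x, lr=0.01, steps=2000):
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    print(descend(-2.0))  # settles in the left valley (the global minimum here)
    print(descend(+2.0))  # settles in the right valley (only a local minimum)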
Lecture 1: Overview
  … documents available on the web. Image/Video Understanding: given an image or a video, determine what objects it contains, what semantics it contains, and what actions it contains. … a subset of the data; domain knowledge about the clusters; information about the 'similarity' between objects; user preferences. These may be pairwise constraints or a labeled subset: must-link or cannot-link constraints …

  0 points | 57 pages | 2.41 MB | 1 year ago
rwcpu8 Instruction Install miniconda pytorch
  Miniconda and PyTorch on rwcpu8.cse.ust.hk. Using the Global Miniconda and PyTorch: if you don't want to install Miniconda and PyTorch yourself, you can use the global Miniconda and PyTorch installed at /export/data/miniconda3 …

  0 points | 3 pages | 75.54 KB | 1 year ago
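  A quick sanity check that your session actually picked up the shared installation rather than a personal one (the /export/data/miniconda3 path comes from the excerpt; the rest is a generic check):

    import sys
    import torch

    # The interpreter path should sit under the shared install directory.
    print(sys.executable)   # expect something like /export/data/miniconda3/bin/python
    print(torch.__version__, torch.cuda.is_available())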
复杂环境下的视觉同时定位与地图构建
  How can global optimization be performed efficiently to eliminate reconstruction drift? VisualSFM results. ENFT: efficient non-consecutive feature tracking. Consecutive-frame tracking based on two-pass matching:
  • Extract SIFT features
  • First pass: compare descriptors (globally distinctive)
  Planar motion segmentation:
  • Estimate several planar motions
  • Use the inlier matches obtained from the first matching pass
  [figure: alignment between frame t and frame t+1]
  Jointly perform feature matching over image pairs and optimization of the match matrix:
  • Refine the match matrix with the feature-matching results of the selected image pairs;
  • Use the updated match matrix to more reliably select image pairs with shared content for further matching.
  Difficulties of large-scale structure from motion:
  • Global bundle adjustment: a very large number of variables, high memory demand, time-consuming computation
  • Iterative local bundle adjustment: large errors are hard to spread evenly across the whole sequence; easily trapped in local optima
  • Pose graph optimization (Pose …

  0 points | 60 pages | 4.61 MB | 1 year ago
《TensorFlow 2项目进阶实战》4-商品检测篇:使用RetinaNet瞄准你的货架商品
  … Challenge (ILSVRC) • The PASCAL Visual Object Classes (VOC) Challenge (Pascal VOC) • Microsoft Common Objects in Context (MS-COCO). PASCAL VOC dataset: 4 top-level categories (person, animal, vehicle, household) and 20 subcategories: • person • …

  0 points | 67 pages | 21.59 MB | 1 year ago
深度学习与PyTorch入门实战 - 14. Tensor高阶
  https://blog.openai.com/generative-models/
  ▪ Where
  ▪ Gather
  where example; gather example: retrieving a global label
  ▪ argmax(pred) to get relative labels
  ▪ Under some condition, our labels are distinct from …

  0 points | 8 pages | 501.85 KB | 1 year ago
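  A minimal sketch of the relative-to-global label lookup these slides hint at, using torch.gather (the label values and shapes here are made up for illustration):

    import torch

    # Hypothetical mapping from 5 relative class indices to global label ids.
    global_label = torch.tensor([100, 101, 102, 103, 104])
    pred = torch.randn(4, 5)      # logits for 4 samples over 5 relative classes
    rel = pred.argmax(dim=1)      # relative labels, shape [4]
    # gather picks, per sample, the global id sitting at its relative index.
    glob = torch.gather(global_label.expand(4, 5), dim=1,
                        index=rel.unsqueeze(1)).squeeze(1)
    print(rel, glob)              # e.g. tensor([2, 0, 4, 1]) -> tensor([102, 100, 104, 101])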
17 results in total.