《TensorFlow 快速入门与实战》7-实战TensorFlow人脸识别 (TensorFlow Quick Start and in Action, Part 7: Face Recognition with TensorFlow)
Excerpt (largely lost to encoding damage; recoverable fragments only): … GPU … CVPR (Computer Vision and Pattern Recognition), 2015 … FaceNet … LFW (Labeled Faces in the Wild) … "… face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 815-823)." … KYC … "High-Dimensional Feature and Its Efficient Compression for Face Verification." 2013, Computer Vision and Pattern Recognition.
0 points | 81 pages | 12.64 MB | 1 year ago
机器学习课程-温州大学-12机器学习-关联规则 (Machine Learning Course, Wenzhou University, Lecture 12: Association Rules)
FP-Growth Algorithm. 01 Association Rules Overview; 02 The Apriori Algorithm; 03 The FP-Growth Algorithm. The idea of FP-growth (Frequent Pattern Growth): FP-growth is an association-analysis algorithm proposed by Jiawei Han in 2000. It adopts the following divide-and-conquer strategy: compress the database holding the frequent itemsets into a frequent-pattern tree (FP-Tree), … reducing the search for frequent itemsets. FP-growth builds on the Apriori principle: it discovers frequent itemsets by storing the dataset in an FP (Frequent Pattern) tree, although it does not by itself derive the association rules between items. FP-growth needs only two scans of the database, whereas Apriori rescans the dataset for every candidate frequent itemset, which makes FP-growth the more efficient of the two. Its two biggest differences from Apriori: first, it generates no candidate sets; second, it traverses the database only twice, which greatly improves efficiency. FP-Tree (Frequent Pattern Tree): an FP-tree is a tree structure built from the database's initial itemsets, whose purpose is to mine the most frequent patterns. Each node of the FP-tree represents one item of an itemset; the root represents null, and the lower nodes represent itemsets. As the tree is formed, the nodes are kept … [excerpt truncated]
0 points | 49 pages | 1.41 MB | 1 year ago
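The two-scan construction described in the excerpt can be sketched in a few lines of Python. This is a minimal illustration of building the FP-tree itself (scan 1 counts item frequencies; scan 2 inserts each transaction with its frequent items sorted by descending count). The mining step and the node-link headers from the slides are omitted, and all names below are my own, not from the course material.

```python
from collections import defaultdict

class FPNode:
    """One node of the FP-tree: an item, its count, and child links."""
    def __init__(self, item, parent):
        self.item = item
        self.count = 1
        self.parent = parent
        self.children = {}

def build_fp_tree(transactions, min_support):
    # Scan 1: count how often each item occurs and keep the frequent ones.
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[item] += 1
    frequent = {i: c for i, c in counts.items() if c >= min_support}

    # Scan 2: insert every transaction into the tree rooted at a null node,
    # with infrequent items dropped and the rest sorted by descending count
    # (ties broken alphabetically) so that shared prefixes are compressed.
    root = FPNode(None, None)
    for t in transactions:
        items = sorted((i for i in t if i in frequent),
                       key=lambda i: (-frequent[i], i))
        node = root
        for item in items:
            if item in node.children:
                node.children[item].count += 1
            else:
                node.children[item] = FPNode(item, node)
            node = node.children[item]
    return root, frequent
```

On the four transactions {a,b}, {b,c}, {a,b,c}, {b} with min_support=2, every sorted path starts at b (the most frequent item), so the whole database compresses into a single b-rooted subtree, which is the compression the slides describe.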
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
… plot in Figure 4-1 with the newly assigned labels in the third column of Table 4-2, we can see a pattern. It is possible to linearly separate the data points belonging to the two classes using a line. … randomly chosen or even learnt during the training process. The Fixed/Factorized/Random and Learnable Pattern groups in Figure 4-19 show the examples of efficient transformers based on these optimizations. Some … "… with depthwise separable convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. … on mobile and edge devices. Let's say you want to design a mobile application … [excerpt truncated]
0 points | 53 pages | 3.92 MB | 1 year ago
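The "linearly separate with a line" idea from the excerpt is easy to check numerically: a single line w·x + b = 0 puts every point of one class on one side and every point of the other class on the other. The points and the line below are invented toy values, not the data from the book's Figure 4-1.

```python
# Toy 2-D data as (point, class label). These values are made up for
# illustration; they are not the dataset from the book.
points = [((0, 0), 0), ((1, 0), 0), ((2, 3), 1), ((3, 2), 1)]

# A hypothetical separating line x + y = 2.5, written as w.x + b = 0.
w, b = (1.0, 1.0), -2.5

def side(p):
    """Predict the class from which side of the line the point falls on."""
    return 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else 0

# If every point lands on the side matching its label, the classes are
# linearly separable by this line.
separable = all(side(p) == label for p, label in points)
```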
AI大模型千问 qwen 中文文档 (Qwen large-model Chinese documentation)
Excerpt (code, truncated; several characters in the first regex class were lost to encoding damage):
.replace("\n\n", "")
sent_sep_pattern = re.compile('([.。!?]["’”」』]{0,2}|(?=["‘“「『]{1,2}|$))')
sent_list = []
for ele in sent_sep_pattern.split(text):
    if sent_sep_pattern.match(ele) and sent_list:
… [excerpt truncated]
0 points | 56 pages | 835.78 KB | 1 year ago
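Filled out, the truncated loop above becomes a complete sentence splitter: re.split keeps the captured punctuation because of the capture group, and the loop glues each separator back onto the preceding sentence. The character classes here are a guess at the damaged original (only the punctuation visible in the excerpt is kept), so treat this as a sketch rather than the documented Qwen code.

```python
import re

# Terminal punctuation (optionally followed by closing quotes), or a
# zero-width split before opening quotes / end of string. Some characters
# from the original class were lost and are omitted here.
SENT_SEP = re.compile('([.。!?]["’”」』]{0,2}|(?=["‘“「『]{1,2}|$))')

def split_sentences(text: str) -> list:
    text = text.replace("\n\n", "")
    sentences = []
    for piece in SENT_SEP.split(text):
        if SENT_SEP.match(piece) and sentences:
            sentences[-1] += piece   # reattach punctuation to the sentence
        elif piece:
            sentences.append(piece)
    return sentences
```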
机器学习课程-温州大学-06机器学习-KNN算法 (Machine Learning Course, Wenzhou University, Lecture 06: The KNN Algorithm)
… Tsinghua University Press, 2019. [3] Zhou Zhihua. Machine Learning [M]. Beijing: Tsinghua University Press, 2016. [4] Cover T M, Hart P E. Nearest neighbor pattern classification [J]. IEEE Transactions on Information Theory, 1967, 13(1): 21-27. [5] Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning [M]. New York: Springer, 2001. [6] Bishop C M. Pattern Recognition and Machine Learning [M]. New York: Springer, 2006. [7] Boyd S, Vandenberghe L. … [excerpt truncated]
0 points | 26 pages | 1.60 MB | 1 year ago
机器学习课程-温州大学-13机器学习-人工神经网络 (Machine Learning Course, Wenzhou University, Lecture 13: Artificial Neural Networks)
… Learning [EB/OL]. Stanford University, 2014. https://www.coursera.org/course/ml [4] Bishop C M. Pattern Recognition and Machine Learning [M]. New York: Springer, 2006. [5] Minsky M, Papert S, et al. … Back Propagating Errors [J]. Nature, 1986, 323(6088): 533-536. [7] Bishop C M. Neural Networks for Pattern Recognition [J]. Advances in Computers, 1993, 37: 119-166. [8] LeCun Y, Bengio Y. Convolutional networks … [excerpt truncated]
0 points | 29 pages | 1.60 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
… "… architectures for scalable image recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. … blocks for the child networks. NASNet searches for the cells that are fitted … "… neural architecture search for mobile." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019. … directly on the target devices and weighing the model accuracy based on the latency … [excerpt truncated]
0 points | 33 pages | 2.48 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
… matrix multiplication. If the sparsity is unstructured, CPUs, GPUs, TPUs, and other accelerators cannot assume a pattern, and thus have to do the full matrix multiplication anyway. Structured sparsity, as the name suggests, … "Compressing Deep Models by Low Rank and Sparse Decomposition," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 67-76, doi: 10.1109/CVPR.2017.15.
0 points | 34 pages | 3.18 MB | 1 year ago
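The point the excerpt makes about structured versus unstructured sparsity can be illustrated with a toy matrix-vector product: when zeros are guaranteed to come as whole rows (one simple form of structure), the kernel can skip those rows outright, whereas scattered zeros still cost a full dot product. This is a pure-Python sketch with invented names, not how real accelerators implement it.

```python
def sparse_row_matvec(rows, x):
    """Multiply a row-structured sparse matrix by a vector.

    Rows that are entirely zero are stored as None; because the zeros are
    structured, the whole dot product for such a row is skipped. With
    unstructured sparsity, each zero would still sit inside a dense row
    and every multiply-add would be performed anyway.
    """
    out = []
    for row in rows:
        if row is None:               # pruned (structured-zero) row: skip
            out.append(0.0)
        else:
            out.append(sum(w * xi for w, xi in zip(row, x)))
    return out
```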
keras tutorial
Permute Layers: Permute is also used to change the shape of the input using a pattern. For example, if Permute with argument (2, 1) is applied to a layer having input shape (batch_size, …
… input_shape
(None, 8, 16)
>>> layer_2.output_shape
(None, 16, 8)
where (2, 1) is set as the pattern. RepeatVector Layers: RepeatVector is used to repeat the input a set number, n, of times. For … [excerpt truncated]
0 points | 98 pages | 1.57 MB | 1 year ago
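The Permute((2, 1)) example in the excerpt just swaps the two non-batch axes, turning (None, 8, 16) into (None, 16, 8). The same reshaping can be mimicked on plain nested lists with zip, which makes the pattern concrete without needing Keras installed (the helper name below is my own):

```python
def permute_2_1(batch):
    """Swap axes 1 and 2 of a (batch, rows, cols) nested list, mimicking
    what Keras's Permute((2, 1)) does to each sample; the batch axis,
    like Keras's None dimension, is left untouched."""
    return [[list(col) for col in zip(*sample)] for sample in batch]
```

A batch of shape (1, 2, 3) comes out with shape (1, 3, 2), matching the (None, 8, 16) to (None, 16, 8) shape change shown above.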
动手学深度学习 v2.0 (Dive into Deep Learning v2.0)
… Try different activation functions; which works best? 3. Try different schemes for initializing the weights; which method works best? Discussions … 4.4 Model Selection, Underfitting, and Overfitting. As machine learning scientists, our goal is to discover patterns. But how can we be sure that a model has truly discovered a generalizing pattern, rather than simply memorized the data? For example, suppose we want to find patterns linking patients' genetic data to their dementia status, where the labels are drawn from the set {dementia, mild cognitive impairment, healthy} … tikhonov regularization. Neural Computation, 7(1), 108-116. [Bishop, 2006] Bishop, C. M. (2006). Pattern recognition and machine learning. Springer. [Bodla et al., 2017] Bodla, N., Singh, B., Chellappa, … oriented gradients for human detection. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (pp. 886-893). Bibliography … [DeCock, 2011] De Cock, D. (2011). Ames … [excerpt truncated]
0 points | 797 pages | 29.45 MB | 1 year ago
28 results in total













