Deep Learning with Python — 费良宏
2016 goals: web crawler + deep learning + natural language processing = ? Microsoft, Apple, AWS. The most exciting events of the year: 2016-01-28, "Mastering the game of Go with deep neural networks and tree search"; March 2016, AlphaGo defeats Lee Sedol (9-dan) 4:1. Artificial intelligence vs. machine learning vs. deep learning; automatic classification of text. Semi-supervised learning sits between supervised and unsupervised learning (algorithms: graph inference, Laplacian SVM). Reinforcement learning learns which actions to take from observation (algorithms: Q-learning, temporal-difference learning). Machine learning method and workflow: choose input features (what to predict from), a target (what to predict), and a prediction function (regression, clustering, dimensionality reduction, ...), i.e. Xn -> F(Xn) -> T(X). Frameworks: Torch (NYU, 2002; used by Facebook AI and Google DeepMind), Theano (University of Montreal, ~2010, academic), Keras ("Deep Learning library for Theano and TensorFlow"), Caffe (Berkeley, convolutional networks, 贾扬清), TensorFlow (Google), Spark MLlib.
49 pages | 9.06 MB | 1 year ago
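The slides compress supervised learning into the pipeline Xn -> F(Xn) -> T(X): choose input features, fit a prediction function, read off the target. A minimal sketch of that pipeline in Python (scikit-learn, the synthetic data, and the linear model are illustrative choices, not code from the talk):

```python
# Toy end-to-end version of the slide's pipeline: features X -> prediction function F -> target T.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # input features: what we predict from
true_w = np.array([2.0, -1.0, 0.5])
t = X @ true_w + rng.normal(scale=0.1, size=200)     # target: what we want to predict

F = LinearRegression().fit(X, t)                     # fit the prediction function F
print("predictions T(x):", F.predict(X[:3]))         # apply F to new inputs
print("learned weights:", F.coef_)                   # close to true_w
```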
Kubernetes on vSphere Deep Dive — KubeCon China, VMware SIG
Deep dive into Kubernetes scheduling: performance and high-availability options for vSphere. Steve Wong, Hui Luo — VMware Cloud Native Applications Business Unit, November 12, 2018.
25 pages | 2.22 MB | 1 year ago
Rust原子操作高性能实践 (Rust Atomic Deep Dive) — 王璞 (Pu Wang, DatenLord)
Third China Rust Developers Conference, 2023-06-17. Agenda: 01 What — what are atomic operations in Rust? 02 Why — why are atomic operations needed? 03 How — memory ordering in atomic operations.
19 pages | 1.88 MB | 1 year ago
2022年美团技术年货 合辑 (Meituan Tech Yearbook 2022, collected volume)
From the table of contents: NeurIPS 2021 | Twins: rethinking the design of efficient visual attention models (p. 339); Meituan takes first place on the FewCLUE few-shot learning leaderboard — Prompt Learning plus self-training in practice (p. 353); summary of the championship method in the DSTC10 open-domain dialogue evaluation competition (p. 368); KDD 2022 | selected papers from the Meituan technical team (p. 382); ACM SIGIR 2022 | selected papers from the Meituan technical team.
Inspired by the RepVGG style [4], a re-parameterizable and more efficient backbone (EfficientRep Backbone) and neck (Rep-PAN) were designed; a simpler and more effective Efficient Decoupled Head keeps accuracy while further reducing the extra latency a conventional decoupled head introduces; and the training strategy adopts the anchor-free paradigm supplemented with SimOTA [2] ...
... the extra latency introduced by the convolutions. In an ablation on the nano-size model, compared with a decoupled head of the same channel width, accuracy improves by 0.2% AP while speed improves by 6.8%. (Figure 6: Efficient Decoupled Head structure.) 2.3 A more effective training strategy: to further improve detection accuracy, we draw on advances from other detection frameworks in academia and industry: the anchor-free paradigm and SimOTA ...
1356 pages | 45.90 MB | 1 year ago
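The "re-parameterizable" (RepVGG-style) design mentioned in the snippet relies on a simple identity: parallel convolution branches used during training can be folded into one convolution for inference. A minimal PyTorch sketch of that fusion with toy shapes and the batch-norm branches omitted — an illustration of the general technique, not Meituan's YOLOv6 code:

```python
# Train-time block with parallel 3x3 and 1x1 branches, then the inference-time fusion
# into a single 3x3 convolution. Batch-norm branches are omitted to keep the identity obvious.
import torch
import torch.nn as nn
import torch.nn.functional as F

conv3 = nn.Conv2d(8, 8, kernel_size=3, padding=1, bias=True)   # 3x3 branch
conv1 = nn.Conv2d(8, 8, kernel_size=1, padding=0, bias=True)   # 1x1 branch

x = torch.randn(1, 8, 16, 16)
y_train = conv3(x) + conv1(x)                                   # multi-branch output used in training

# Fold both branches into one conv: zero-pad the 1x1 kernel to 3x3, add kernels and biases.
fused = nn.Conv2d(8, 8, kernel_size=3, padding=1, bias=True)
fused.weight.data = conv3.weight.data + F.pad(conv1.weight.data, [1, 1, 1, 1])
fused.bias.data = conv3.bias.data + conv1.bias.data

y_infer = fused(x)                                              # single-conv output used at inference
print(torch.allclose(y_train, y_infer, atol=1e-5))              # True: same function, lower latency
```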
2021 中国开源年度报告 (China Open Source Annual Report 2021)
... and more and more schools offering open source courses. We hope that, as a follow-up, this can also reach the study of computer fundamentals, compiler principles, software engineering, and other theoretical knowledge ...
... the most eye-catching one in China is PingCAP/TiDB, whose open source strategy and tactics are worth learning. 堵俊平: In the past two years, a clear trend is that more and more startups are taking part in open source. This is partly because the ToB track has become a hot spot for both the market and policy, and partly because the open innovation that open source represents is now recognized by investors, especially where open source meets data (databases and big data) and ...
... communicate in a way that is open and transparent, preserves the discussion record, and lowers the learning cost for newcomers. Domestic developers are currently used to discussing issues in WeChat ...
199 pages | 9.63 MB | 1 year ago
机器学习课程-温州大学-08深度学习-深度卷积神经网络 (Wenzhou University machine learning course, lecture 08: deep convolutional neural networks)
4. Practical techniques for convolutional networks: use open-source implementations (slide 26); data augmentation (slide 27); transfer learning (slide 28): transfer learning uses the parameters of an already trained model as the starting parameters of a new model, and is a very important and commonly used strategy in deep learning; transfer-learning steps (slide 29): 1. use a pretrained model: net = ...
Andrew Ng, http://www.deeplearning.ai. References (slide 31): LeNet-5: Gradient-Based Learning Applied to Document Recognition (Yann LeCun, 1998); AlexNet: ImageNet Classification with Deep Convolutional Neural Networks (Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, 2012); VGG: Very Deep Convolutional Networks for Large-Scale Image Recognition (Karen Simonyan and Andrew Zisserman, 2015); GoogLeNet/Inception Net: Going ...
32 pages | 2.42 MB | 1 year ago
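The truncated "net = ..." step above is the standard transfer-learning recipe the slides describe: load a pretrained network, freeze its weights, and train only a new task-specific head. A minimal Keras sketch in Python, assuming a MobileNetV2 backbone, 160x160 inputs, and 10 classes (all illustrative assumptions, not the course's code):

```python
# Step 1 of the recipe: reuse a pretrained model as the starting point ("net = ..."),
# freeze it, and train only a small new head for the new task.
from tensorflow import keras

base = keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False                                   # keep the pretrained parameters fixed

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),        # new head; 10 target classes assumed
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # only the new head is trained
```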
openEuler 21.09 技术白皮书 (openEuler 21.09 Technical White Paper)
... slash CPU core costs; new technologies such as VMs, big data, artificial intelligence (AI), and deep learning require higher computing power and larger memory capacities, while a system that offers limited potential ...
... low-speed memory area. This tiered memory structure increases memory capacity and ensures efficient, stable running of core services. etMem is ideal for applications that use a large amount of ...
... a deployment and management project initiated by the openEuler SIG sig-CloudNative. It provides efficient and stable cluster deployment (online and offline) for a single cluster across multiple architectures ...
36 pages | 3.40 MB | 1 year ago
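The tiered-memory scheme the white paper attributes to etMem keeps frequently accessed (hot) pages in DRAM and demotes rarely accessed (cold) pages to a slower, larger tier. A purely conceptual Python sketch of that hot/cold split — this is not etMem's implementation or interface; the capacity and access log are invented:

```python
# Conceptual hot/cold page tiering: rank pages by access frequency, keep the hottest
# in the fast tier (DRAM) and demote the rest to the slower, larger tier.
from collections import Counter

DRAM_CAPACITY = 4                                   # toy number of pages the fast tier can hold
access_log = ["p1", "p2", "p1", "p3", "p1", "p4", "p5", "p2", "p1"]

heat = Counter(access_log)                          # access frequency per page
ranked = [page for page, _ in heat.most_common()]   # hottest pages first
fast_tier = ranked[:DRAM_CAPACITY]                  # stay in DRAM
slow_tier = ranked[DRAM_CAPACITY:]                  # demoted to extended (low-speed) memory

print("DRAM:", fast_tier)
print("extended memory:", slow_tier)
```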
从推荐模型的基础特点看大规模推荐类深度学习系统的设计 — 袁镱 (Designing large-scale deep learning systems for recommendation from the basic characteristics of recommendation models)
... bit direct compression → compensated for by the training algorithm: [2020] Compressed Communication for Distributed Deep Learning: Survey and Quantitative Evaluation; [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed ...
... leads to degraded model quality. Facebook [KDD 21] Compositional Embeddings Using Complementary Partitions for Memory-Efficient Recommendation Systems; Twitter [RecSys 21] Model Size Reduction Using Frequency Based Double Hashing ...
[IJCAI 2021] UNBERT: User-News Matching BERT for News Recommendation; [CIKM 2021] Self-Supervised Learning on Users' Spontaneous Behaviors for Multi-Scenario Ranking in E-commerce; on-device re-ranking, scenario 1 ... scenario X [CIKM 2021] ...
22 pages | 6.76 MB | 1 year ago
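The Facebook and Twitter papers cited in the snippet shrink the per-ID embedding tables that dominate recommendation-model memory by composing a few small tables indexed with complementary hashes of the ID. A minimal PyTorch sketch of that idea — the table sizes, the quotient/remainder hashing, and the sum combiner are illustrative choices, not the talk's design:

```python
# One huge per-ID table is replaced by two small tables indexed with complementary
# hashes (here: quotient and remainder); the final embedding combines both lookups.
import torch
import torch.nn as nn

class HashedEmbedding(nn.Module):
    def __init__(self, buckets_a=1000, buckets_b=997, dim=16):
        super().__init__()
        self.table_a = nn.Embedding(buckets_a, dim)
        self.table_b = nn.Embedding(buckets_b, dim)
        self.buckets_a, self.buckets_b = buckets_a, buckets_b

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        idx_a = ids % self.buckets_a                      # "remainder" hash
        idx_b = (ids // self.buckets_a) % self.buckets_b  # "quotient" hash
        return self.table_a(idx_a) + self.table_b(idx_b)

emb = HashedEmbedding()
ids = torch.tensor([3, 123_456_789, 987_654_321])         # IDs from a huge sparse space
print(emb(ids).shape)                                      # torch.Size([3, 16]) with only ~32k parameters
```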
Kotlin 1.9.10 官方文档 中文版
... optimal performance. KotlinDL is a high-level Deep Learning API written in Kotlin and inspired by Keras. It offers simple APIs for training deep learning models from scratch, importing existing Keras models for inference, and leveraging transfer learning for tweaking existing pre-trained models to your tasks. Kotlin DataFrame is a library for structured data processing. It aims to reconcile Kotlin's ...
... mathematics. OptaPlanner — a solver utility for optimization planning problems. Charts — a scientific JavaFX charting library under development. Apache OpenNLP — a machine-learning-based toolkit for processing natural language text. CoreNLP — a natural language processing toolkit. Apache Mahout — a distributed framework for regression, clustering, and recommendation.
3753 pages | 29.69 MB | 1 year ago
Kotlin 官方文档中文版 v1.9optimal performance. KotlinDL is a high-level Deep Learning API written in Kotlin and inspired by Keras. It offers simple APIs for training deep learning models from scratch, importing existing Keras Keras models for inference, and leveraging transfer learning for tweaking existing pre-trained models to your tasks. Kotlin DataFrame is a library for structured data processing. It aims to reconcile Kotlin's mathematics. OptaPlanner——一个用于优化规划问题的求解器实用程序 Charts——一个正在开发中的科学 JavaFX 图表库 Apache OpenNLP - a machine learning based toolkit for the processing of natural language text CoreNLP——一个自然语言处理工具包 Apache Mahout——一个回归、聚类与推荐的分布式框架0 码力 | 2049 页 | 45.06 MB | 1 年前3
共 275 条
- 1
- 2
- 3
- 4
- 5
- 6
- 28













