IT文库
Category

All · Cloud Computing &amp; Big Data (152) · VirtualBox (85) · Pandas (29) · Apache Kyuubi (25) · Machine Learning (10) · Apache Flink (2) · dapr (1)

Language

All · English (150) · Chinese, Simplified (2)

Format

All · PDF (132) · Other (20)
This search took 0.029 seconds and found about 152 results.
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    …but reaches that point in fewer epochs, hence needing fewer samples. Distillation is a learning technique which has been shown to reduce the number of samples that a model needs to see to converge… Let's go through the different text transformations with code examples. Synonym Replacement is a technique to replace words with their synonyms. It is a simple idea to augment the dataset without compromising… candidates) >> ['we', 'enjoyed', 'our', 'short', 'holiday', 'in', 'mexico'] …Random Insertion technique inserts a word at a random position in the sentence. The inserted word, typically, is a synonym…
    0 credits | 56 pages | 18.93 MB | 1 year ago
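The Chapter 3 snippet above sketches two text-augmentation transformations, Synonym Replacement and Random Insertion. A minimal illustrative sketch, not the book's own code — the `SYNONYMS` table here is a hypothetical stand-in for a WordNet or embedding-based lookup:

```python
import random

# Toy synonym table; a real implementation would query WordNet or an
# embedding nearest-neighbour index (this table is a made-up example).
SYNONYMS = {
    "holiday": ["vacation", "break"],
    "enjoyed": ["loved", "liked"],
    "short": ["brief"],
}

def synonym_replacement(tokens, n=1, rng=random):
    """Replace up to n tokens that have known synonyms."""
    out = list(tokens)
    candidates = [i for i, t in enumerate(out) if t in SYNONYMS]
    for i in rng.sample(candidates, min(n, len(candidates))):
        out[i] = rng.choice(SYNONYMS[out[i]])
    return out

def random_insertion(tokens, n=1, rng=random):
    """Insert a synonym of a random known token at a random position."""
    out = list(tokens)
    for _ in range(n):
        known = [t for t in out if t in SYNONYMS]
        if not known:
            break
        word = rng.choice(known)
        out.insert(rng.randrange(len(out) + 1), rng.choice(SYNONYMS[word]))
    return out

sentence = ["we", "enjoyed", "our", "short", "holiday", "in", "mexico"]
```

Both transformations keep the sentence grammatical enough to serve as a new training example while leaving the label unchanged.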
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    …for the model to learn anything. You should treat label smoothing as yet another regularization technique. In fact this paper17 goes into details about when label smoothing helps. The original Inception… sequences are likely easier), etc. For example Li et al.21 proposed training GPT-3 like models with a technique called sequence length warmup where the model training starts with the input truncated to a limit… training progresses. The authors reported a 10x data saving and 17x training time savings with this technique. …Pacing the training example difficulty: Next, we need a pacing function to tune the difficulty…
    0 credits | 31 pages | 4.03 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    …subjective to the specific problem. In this chapter, we introduce Quantization, a model compression technique that addresses both these issues. We'll start with a gentle introduction to the idea of compression… apples this way. We can call this lossy compression because we lost the odd parts. The choice of the technique depends on several factors like customer preference, consumption delay, or resource availability… the desired tradeoff goals. In the next section we introduce Quantization, a popular compression technique which is also used in various fields of computer science in addition to deep learning. Quantization…
    0 credits | 33 pages | 1.96 MB | 1 year ago
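The Chapter 2 snippet introduces Quantization as a lossy compression technique. The following is a minimal sketch of the common affine 8-bit scheme under that idea — an assumption of the standard formulation, not code taken from the book:

```python
import numpy as np

def quantize_8bit(x):
    """Affine (asymmetric) 8-bit quantization of a float array.

    Maps the observed [min, max] range linearly onto the 256 uint8
    codes; each stored value needs 1 byte instead of 4.
    """
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0
    if scale == 0.0:  # constant input: avoid division by zero
        scale = 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map the 8-bit codes back to approximate float values."""
    return q.astype(np.float32) * scale + lo
```

The reconstruction error per element is bounded by half the step size `scale / 2`, which is the "lossy" part of the trade-off the snippet describes.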
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    …these techniques together! Model Compression Using Sparsity: Sparsity or Pruning refers to the technique of removing (pruning) weights during the model training to achieve smaller models. Such models are… saliency score metric is using the magnitude of the weights, which has become a popular pruning technique because of its simplicity and effectiveness. Later on in this chapter, we have a project that relies… well-trained neural net by as much as 8x without a drop in classification accuracy. Yet another technique could be momentum based pruning2 which uses the magnitude of the momentum of the weights to evaluate…
    0 credits | 34 pages | 3.18 MB | 1 year ago
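The Chapter 5 snippet describes magnitude-based pruning: weights with the smallest absolute values are treated as least salient and zeroed out. A minimal one-shot sketch of that idea (real frameworks such as the TensorFlow Model Optimization toolkit apply it gradually during training via masks):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest |w|.

    Ties at the threshold may prune slightly more than the requested
    fraction; gradual, schedule-driven pruning is left out for brevity.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

After pruning, the zeroed weights can be stored in a sparse format, which is where the size reduction the snippet mentions comes from.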
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction

    …more efficient model by trimming the number of parameters if needed. An example of a learning technique is Distillation (see Figure 1-10), which helps a smaller model (student) that can be deployed, to… to a 40% smaller model (DistillBERT), while retaining 97% of the performance. Another learning technique is Data Augmentation. It is a nifty way of addressing the scarcity of labeled data during training… resources, so they have to be carefully used. Automated Hyper-Param Optimization (HPO) is one such technique that can be used to replace / supplement manual tweaking of hyper-parameters like learning rate…
    0 credits | 21 pages | 3.17 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    …has two boolean valued parameters: quantization and clustering. A $$True$$ value means that the technique is turned on and a $$False$$ value means it is turned off. This search space1 has four possible… Bayesian Optimization: Bayesian Optimization Search (BOS) is a sequential model based search technique where the search is guided by actively estimating the value of the objective function at different… objective function looks and plans the next trials based on that knowledge, it is a model-based technique. Moreover, since the selection of trials depends on the results of the past trials, this method…
    0 credits | 33 pages | 2.48 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    …high costs associated with manual embeddings. One example of an automated embedding generation technique is the word2vec family of algorithms6 (apart from others like GloVe7) which can learn embeddings… softmax calculation. In the real world, as an efficient approximation, we use the Negative Sampling technique so that we only look at the output probability of the label class (which should be closer to 1.0)… reduction, it still may not be suitable for a range of mobile and edge devices. Do you recall a technique that can reduce it further? Yes, Quantization! We will leave it for you as an exercise. Tell us…
    0 credits | 53 pages | 3.92 MB | 1 year ago
  • PDF: Flow control and load shedding - CS 591 K1: Data Stream Processing and Analytics Spring 2020

    …Credit-based flow control (CFC) is a link-by-link, per-virtual-channel congestion control technique used in ATM network switches. To exchange data through an ATM network, each pair of endpoints… Vasiliki Kalavri | Boston University 2020 …Credit-based flow control: This classic networking technique turns out to be very useful for load management in modern, highly-parallel stream processors and…
    0 credits | 43 pages | 2.42 MB | 1 year ago
  • PDF: 机器学习课程-温州大学-05机器学习-机器学习实践 (Machine Learning Course, Wenzhou University, Unit 05: Machine Learning in Practice)

    …situations. Common ways of handling class imbalance are sampling and cost-sensitive learning; sampling methods include under-sampling, over-sampling, and combined sampling. Handling imbalanced data… SMOTE (Synthetic Minority Over-sampling Technique) is one of the more commonly used over-sampling algorithms. Its idea is to synthesize new minority-class samples rather than simply copy existing ones, as illustrated: (a) original samples; (b) select a minority-class sample; (c) find the samples near ?… Tsinghua University Press, 2019. [8] Chawla, N. V., Bowyer, K. W., Hall, L. O., et al. SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 2002, 16: 321–357. …Thank you!
    0 credits | 33 pages | 2.14 MB | 1 year ago
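The snippet above outlines SMOTE: new minority-class samples are synthesized by interpolating between a minority sample and one of its k nearest minority-class neighbours, instead of duplicating existing samples. A minimal sketch of that interpolation step — illustrative only; production code would use imbalanced-learn's `SMOTE`:

```python
import numpy as np

def smote_sample(X_minority, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples (basic SMOTE sketch).

    For each new point: pick a minority sample x, one of its k nearest
    minority neighbours x_nn, and return x + gap * (x_nn - x) with
    gap drawn uniformly from [0, 1).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    X = np.asarray(X_minority, dtype=float)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest neighbour indices
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = nn[i, rng.integers(min(k, len(X) - 1))]
        gap = rng.random()
        new.append(X[i] + gap * (X[j] - X[i]))
    return np.vstack(new)
```

Because each synthetic point lies on the segment between two real minority samples, the new points stay inside the minority class's region of feature space rather than being exact copies.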
  • PDF: Oracle VM VirtualBox 3.2.4 User Manual

    …memory ballooning is not supported on Mac OS X hosts. 4.9 Page Fusion: Page Fusion is a novel technique to further improve VM density on the host, i.e. a way of overcommitting resources. It was first… the redundancy (deduplication) and thereby free additional memory. Traditional hypervisors use a technique often called "page sharing" or "same page merging" where they go through all memory and compute… removed altogether, and the GIF writer has been simplified to produce "uncompressed GIFs". This technique does not use the LZW algorithm; the resulting GIF files are larger than usual, but are readable…
    0 credits | 306 pages | 3.85 MB | 1 year ago
152 results in total.