Zhang Xiaofeng: Python and the Cloud, a Brief Look at Python-Native Applications on AWS
Amazon SageMaker: Ground Truth, Notebooks, Algorithms + Marketplace, Reinforcement Learning, Training, Optimization, Deployment, Hosting. AI services: vision, speech, language, chatbots, forecasting, recommendation (Personalize, Forecast, Lex, Translate, Comprehend). Direct Marketing with Amazon SageMaker XGBoost and Hyperparameter Tuning: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/xgboost_direct_marketing/hpo_xgboost_dir…
0 credits | 42 pages | 8.12 MB | 1 year ago
Machine Learning Course, Wenzhou University: 06 Deep Learning, Optimization Algorithms
…and 0.001, so more search resources become available there, and likewise between 0.001 and 0.01, and so on. Methods for hyperparameter tuning (slides plot Hyperparameter 1 against Hyperparameter 2). Coarse-to-fine hyperparameter tuning. Panda approach vs. caviar approach: the choice is determined by available compute. Batch Norm…
31 pages | 2.03 MB | 1 year ago
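The course snippet above describes searching learning rates on a log scale, so that each decade (0.0001 to 0.001, 0.001 to 0.01, and so on) gets an equal share of trials. A minimal sketch of that idea follows; it is an illustrative example, not code from the slides, and the search interval 1e-4 to 1 is an assumption taken from the snippet's examples.

```python
import math
import random

def sample_learning_rate(low=1e-4, high=1.0):
    """Sample uniformly on a log scale so each decade
    (1e-4..1e-3, 1e-3..1e-2, ...) gets an equal share of trials."""
    exponent = random.uniform(math.log10(low), math.log10(high))
    return 10 ** exponent

random.seed(0)
samples = [sample_learning_rate() for _ in range(10_000)]

# With four decades in [1e-4, 1], roughly a quarter of the samples
# land in each decade, e.g. in [1e-4, 1e-3).
in_first_decade = sum(1e-4 <= s < 1e-3 for s in samples) / len(samples)
```

Sampling uniformly on the raw interval [1e-4, 1] would put about 90% of trials above 0.1; the log-scale transform is what gives the lower decades the extra "search resources" the slide fragment refers to.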
Data Science in Cloud Native, KubeCon Asia 2018 (final)
…rvice/ Pipeline images: https://github.com/pachyderm/pachyderm/tree/master/doc/examples/ml/hyperparameter. Thank you!
47 pages | 14.91 MB | 1 year ago
Dive into Deep Learning (动手学深度学习) v2.0
Ultimately, what we really care about is producing a model that performs well on data it has never seen; but "training" can only fit the model to the data we actually observe. The task of fitting a model therefore breaks down into two key problems: optimization, the process of fitting the model to the observed data, and generalization, the mathematical principles and practitioners'… (https://discuss.d2l.ai/t/1751)
…(batch size). η denotes the learning rate. The values of the batch size and learning rate are usually specified manually in advance rather than obtained by training the model. Parameters that can be tuned but are not updated during training are called hyperparameters; hyperparameter tuning is the process of choosing them. Hyperparameters are usually adjusted according to the results of training iterations, which are evaluated on an independent validation dataset…
…∇²f). By the definition of a positive semidefinite matrix, this is equivalent to H ⪰ 0. 11.2.3 Constraints: a nice property of convex optimization is that it lets us handle constraints effectively, i.e., it enables us to solve constrained optimization problems of the form minimize_x f(x) subject to c_i(x) ≤ 0 for all i ∈ {1, …, N} (11.2.16), where f is the objective function and the c_i are constraint functions. For example, the first constraint c_1(x)…
797 pages | 29.45 MB | 1 year ago
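The excerpt above separates parameters (updated by training) from hyperparameters such as the learning rate η and the batch size (fixed before training starts). A minimal minibatch SGD loop for linear regression makes the distinction concrete; this is a hypothetical sketch in plain NumPy, not code from the book, and the target function y = 2x + 1 is an invented example.

```python
import numpy as np

def sgd_step(w, b, X, y, lr):
    """One minibatch SGD step for linear regression under squared loss.
    lr and the minibatch size len(X) are hyperparameters: chosen in
    advance, never updated by the training loop itself."""
    err = X @ w + b - y             # predictions minus targets
    grad_w = X.T @ err / len(X)     # gradient of mean squared loss w.r.t. w
    grad_b = err.mean()             # gradient w.r.t. the bias
    return w - lr * grad_w, b - lr * grad_b

# Fit y = 2x + 1 from slightly noisy samples with fixed hyperparameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 1))
y = 2 * X[:, 0] + 1 + 0.01 * rng.normal(size=256)

w, b = np.zeros(1), 0.0           # parameters: updated by training
lr, batch = 0.1, 32               # hyperparameters: set by hand
for epoch in range(200):
    for i in range(0, len(X), batch):
        w, b = sgd_step(w, b, X[i:i + batch], y[i:i + batch], lr)
```

After training, w and b should sit close to the true values 2 and 1; in practice one would pick lr and batch by evaluating candidate values on a held-out validation set, exactly the hyperparameter tuning loop the excerpt describes.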
TiDB: Principles and Practice
GitHub: https://github.com/zimulala
Agenda ● A brief introduction to NewSQL ● TiDB ● Plan optimization ● DistSQL ● Online DDL ● TiKV ● Feelings ● Q & A. A brief introduction to NewSQL: 1970s … 2010 … A small number of functions and features are not yet implemented.
Plan optimization. Logical optimization: applies logical transformations based mainly on equivalence rules of relational algebra. Physical optimization: optimizes the query based mainly on data access, join method, join order, sorting, and similar techniques. TP: Parse, Logical Plan, Physical Plan, Exec; Stat, CBO, RBO.
Plan optimization, logical plan ● Prune columns ● select count(*) from t group by id rewrites to select 1 from t when id is a unique index ● Aggregation push down. Plan optimization, decorrelation: select * from t where t.id in (select id+1 as c2 from s where s.c1 < 10 and s.n…
23 pages | 496.41 KB | 6 months ago
openEuler 21.03 Technical White Paper
…appropriate computing power unit, improve parallel processing capability through software optimization, and unleash the full power of diversified computing architectures. Continuous contribution to the supply chain: the process of building an open source OS is also a process of supply-chain aggregation and optimization, and a reliable open source software supply chain is fundamental to a large-scale commercial OS. …introduces more than 20 enhancements in terms of functionality and performance. 1. Scheduler optimization: improved fairness of Completely Fair Scheduler (CFS) tasks and NUMA-aware asynchronous…
21 pages | 948.66 KB | 1 year ago
Apache ShardingSphere v5.5.0 document
Table of contents (excerpt): Merger, p. 528; 12.4.6 Query Optimization, p. 528; 12.4.7 Parse Engine; … p. 542; Rewriting for Optimization, p. 543; 12.4.10 Execute Engine. …is still under development. Although largely available to users, it still requires significant optimization. Sub-query: the Federation execution engine provides support for subqueries and outer queries…
602 pages | 3.85 MB | 1 year ago
The Next G of PHP, 鸟哥 (Laruence) @ PHPCON 2017
[LONG] $T7 = $a4 + $b6; // $T7: [LONG, DOUBLE] return $T7; } · Data flow analysis optimization · Type inference system · Enhancement of range inference · Enhancement of type inference …or loop iterations; 4: JIT functions with @jit annotation. Optimization level: 0: JIT off; 1: minimal optimization; 2: basic optimization; 3: optimize based on type inference; 4: optimize based on type inference and call-tree; …inner-procedure analyses. JUST-IN-TIME COMPILER · Inline opcodes dispatch · opcache.jit=1201. BASIC OPTIMIZATION: function calc($a, $b) { $a = $a * 2 % 1000; $b = $b * 3 % 1000; return $a + $b;…
25 pages | 297.68 KB | 1 year ago
openEuler 21.09 Technical White Paper
Supply chain: the process of building an open source OS relies on supply-chain aggregation and optimization. To ensure reliable open source software for a large-scale commercial OS, openEuler comprises …a high-performance compiler for the Kunpeng 920 processor through software and hardware collaboration, memory optimization, SVE, and the math library. • The compiler fully utilizes the hardware features of Kunpeng processors …vectorization phase. • SVE optimization: significantly improves program running performance on ARM-based machines that support SVE instructions. • SLP vectorization optimization: analyzes and vectorizes…
36 pages | 3.40 MB | 1 year ago
Greenplum 5.0 and Roadmap
Optimizer ORCA • The first open source cost-based optimizer for big data • Applies a broad set of optimization strategies at once: considers many more plan alternatives and optimizes a wider range of queries • …emerging technologies. (Postgres Conference China 2016.) Performance, query-optimization vision: our new cost-based optimizer, Orca, will become the default optimizer in GPDB for all …index support for a larger class of predicates • Reduce optimization time: auto-disable unnecessary transformations • Investigation: optimization levels.
27 pages | 2.66 MB | 1 year ago
128 results in total