IT文库
Search completed in 0.017 seconds; about 30 matching results found.

Categories: Cloud Computing & Big Data (30), Machine Learning (30)
Languages: English (19), Simplified Chinese (11)
Formats: PDF (30)
  • PDF: Lecture Notes on Support Vector Machine

    Positive class: ωᵀx + b ≥ 1; negative class: ωᵀx + b ≤ −1; margin γ = 1/∥ω∥ (Figure 2: Hard-margin SVM). The aim of the above optimization problem is to find a hyperplane (parameterized by ω and b) with the margin γ = 1/∥ω∥ maximized over the training set. … 2.2 Preliminary Knowledge of Convex Optimization; 2.2.1 Optimization Problems and Lagrangian Duality. We now consider the following optimization problem: min_ω f(ω) (9), s.t. g_i(ω) ≤ 0, i = 1, …, k, with the inequality constraints g_1(ω), …, g_k(ω) and the equality constraints h_1(ω), …, h_l(ω). We construct the Lagrangian of the above optimization problem as L(ω, α, β) = f(ω) + Σ_{i=1}^{k} α_i g_i(ω) + Σ_{j=1}^{l} β_j h_j(ω) (12). In fact, L(ω, α…
    0 points | 18 pages | 509.37 KB | 1 year ago
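    For reference, here is a cleaned-up rendering of the problem the snippet describes: the standard hard-margin SVM primal, reconstructed to be consistent with the excerpt's margin γ = 1/∥ω∥ (not a verbatim quote from the notes).

```latex
% Hard-margin SVM primal: maximizing the margin gamma = 1/||omega||
% is equivalent to minimizing ||omega||^2 / 2 subject to every training
% point being classified with functional margin at least 1.
\min_{\omega,\, b} \; \tfrac{1}{2}\lVert\omega\rVert^2
\quad \text{s.t.} \quad y^{(i)}\bigl(\omega^{\top} x^{(i)} + b\bigr) \ge 1,
\quad i = 1, \dots, m
```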
  • PDF: Lecture 6: Support Vector Machine

    Outline: 1 SVM: A Primal Form; 2 Convex Optimization Review; 3 The Lagrange Dual Problem of SVM; 4 SVM with Kernels; 5 Soft-Margin SVM; 6 Sequential Minimal Optimization (SMO) Algorithm. Feng Li (SDU), SVM, December 28, 2021, 15/82. Convex Optimization Review: Optimization Problems, Lagrangian Duality, KKT Conditions. Reference: S. Boyd and L. Vandenberghe, 2004. Convex Optimization. Cambridge University Press. Feng Li (SDU), SVM, December 28, 2021, 16/82. Optimization Problems: consider the following optimization problem: min_ω f(ω) s.t. g_i(ω) ≤ 0, i = 1, …, k; h_j(ω) = 0, j = 1, …, l, with …
    0 points | 82 pages | 773.97 KB | 1 year ago
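    As a companion to the outline above, a standard sketch of the Lagrangian and dual problem the slides reference (textbook form, e.g. Boyd & Vandenberghe; not copied from the lecture):

```latex
% Lagrangian of: min f(omega) s.t. g_i(omega) <= 0, h_j(omega) = 0
L(\omega, \alpha, \beta) = f(\omega)
  + \sum_{i=1}^{k} \alpha_i\, g_i(\omega)
  + \sum_{j=1}^{l} \beta_j\, h_j(\omega)

% The dual function minimizes L over omega; the dual problem then
% maximizes over the multipliers with alpha_i >= 0.
\theta_D(\alpha, \beta) = \min_{\omega} L(\omega, \alpha, \beta),
\qquad \max_{\alpha \succeq 0,\, \beta} \; \theta_D(\alpha, \beta)
```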
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    … this using the earlier example of choosing quantization and/or clustering techniques for model optimization. We have a search space with two boolean-valued parameters: quantization and clustering. … Some commonly tuned hyperparameters are the learning rate, the momentum of the optimization algorithm, and the training batch size. Other aspects of the training pipeline, like data augmentation, … may influence each other; hence, we need a sophisticated approach to tune them. Hyperparameter Optimization (HPO) is the process of choosing hyperparameter values that lead to an optimal model. HPO …
    0 points | 33 pages | 2.48 MB | 1 year ago
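    A minimal sketch of searching that two-boolean space exhaustively; train_and_evaluate is a hypothetical stand-in for a full training run, not a function from the book:

```python
import itertools

def train_and_evaluate(quantization: bool, clustering: bool) -> float:
    # Hypothetical stand-in for a real training run that applies the
    # selected optimizations and returns validation accuracy.
    return 0.90 + 0.02 * quantization + 0.01 * clustering

# Two boolean-valued parameters give a grid of only 2 x 2 = 4 trials.
best_score, best_config = -1.0, None
for quantization, clustering in itertools.product([False, True], repeat=2):
    score = train_and_evaluate(quantization, clustering)
    if score > best_score:
        best_score, best_config = score, (quantization, clustering)

print(best_config, best_score)  # -> (True, True) 0.93
```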
  • PDF: Machine Learning Pytorch Tutorial

    Pytorch ● Dataset & Dataloader ● Tensors ● torch.nn: models, loss functions ● torch.optim: optimization ● save/load models. Prerequisites: we assume you are already familiar with 1. Python3 … deep neural networks. Training Neural Networks: define a neural network, a loss function, and an optimization algorithm (more info about the training process in last year's lecture video). Training & Testing … calculation. Training & Testing Neural Networks in Pytorch: define the neural network, loss function, and optimization algorithm; training, validation, testing. Step 2: torch.nn.Module. Load data. torch.nn: network …
    0 points | 48 pages | 584.86 KB | 1 year ago
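    A minimal, self-contained sketch wiring together the pieces the tutorial lists (Dataset/DataLoader, a torch.nn model, a loss function, torch.optim, and saving); the toy data and architecture are illustrative, not from the slides:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dataset & DataLoader: wrap random (input, label) tensors.
x, y = torch.randn(64, 10), torch.randn(64, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = nn.MSELoss()                                  # loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # optimization

for epoch in range(5):                   # training loop
    for xb, yb in loader:
        optimizer.zero_grad()            # reset accumulated gradients
        loss = criterion(model(xb), yb)  # forward pass
        loss.backward()                  # backward pass
        optimizer.step()                 # parameter update

torch.save(model.state_dict(), "model.ckpt")  # save; restore via load_state_dict
```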
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    … the input model and wraps the prunable blocks for sparse training using the TFMOT (TensorFlow Model Optimization) library. In this case, we prune 50% of the weights in each prunable block using magnitude-based pruning. … UpdatePruningStep() works in conjunction with the TFMOT pruning wrappers to update the wrappers after each optimization step: update_pruning = tfmot.sparsity.keras.UpdatePruningStep(); callbacks = [update_pruning]. … centroids_init = np.linspace(x_sorted[0], x_sorted[-1], num_clusters). # Construct the variables in this optimization problem. # We will not update 'x', and hence it is not trainable. x_var = tf.Variable(initial_value=x_sorted, …
    0 points | 34 pages | 3.18 MB | 1 year ago
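    A runnable sketch of the 50% magnitude-based pruning setup the excerpt describes, applied to a toy model (the model and data are placeholders, not the book's):

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy model standing in for the book's input model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1),
])

# Wrap the prunable blocks so 50% of each block's weights are pruned
# by magnitude throughout training.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0))
pruned.compile(optimizer='adam', loss='mse')

# UpdatePruningStep advances the pruning wrappers after each optimizer step.
update_pruning = tfmot.sparsity.keras.UpdatePruningStep()
callbacks = [update_pruning]

x, y = np.random.randn(256, 10), np.random.randn(256, 1)
pruned.fit(x, y, epochs=2, callbacks=callbacks)
```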
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction

    … requires large computational resources, so they have to be used carefully. Automated Hyper-Parameter Optimization (HPO) is one such technique that can be used to replace or supplement manual tweaking of hyper-parameters. … allocate resources to promising ranges of hyper-parameters, like Bayesian Optimization (Figure 1-12 illustrates Bayesian Optimization). These algorithms construct 'trials' of hyper-parameters, where each trial …; what differs across them is how future trials are constructed based on past results. Figure 1-12: Bayesian Optimization over two dimensions x1 and x2. Red contour lines denote a high loss value, and blue contour lines …
    0 points | 21 pages | 3.17 MB | 1 year ago
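    A minimal Bayesian-optimization sketch over two hyper-parameters, using scikit-optimize's gp_minimize (a library choice assumed here for illustration; the chapter does not prescribe it):

```python
from skopt import gp_minimize

def trial(params):
    # Hypothetical stand-in for one trial: train with (x1, x2)
    # and return the validation loss to be minimized.
    x1, x2 = params
    return (x1 - 0.3) ** 2 + (x2 - 0.7) ** 2

# Each new trial is proposed from a surrogate model fitted to past
# results, concentrating evaluations in promising regions of the space.
result = gp_minimize(trial, dimensions=[(0.0, 1.0), (0.0, 1.0)],
                     n_calls=20, random_state=0)
print(result.x, result.fun)
```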
  • PDF: keras tutorial

    … Keras is an optimal choice for deep learning applications. Features: Keras leverages various optimization techniques to make the high-level neural network API easier and more performant. It supports the … Optimizers are used in the learning phase to find the error (the deviation from the actual output) and perform optimization so that the error is minimized. Fit the model: the actual learning process will optimize the … layer (and the model) by dynamically applying penalties on the weights during the optimization process. To summarize, a Keras layer requires at least the details below to create a complete layer: …
    0 points | 98 pages | 1.57 MB | 1 year ago
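    A minimal Keras example of the steps the excerpt names (define layers, choose an optimizer and loss, then fit); the toy data is a placeholder:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(8,)),
    keras.layers.Dense(1, activation='sigmoid'),
])

# The optimizer drives the learning phase: it reduces the loss, i.e.
# the deviation from the actual output, during fitting.
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

x = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=(100, 1))
model.fit(x, y, epochs=5, batch_size=16)  # the actual learning process
```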
  • PDF: Visual Simultaneous Localization and Mapping in Complex Environments (复杂环境下的视觉同时定位与地图构建)

    • The number of variables is very large • heavy memory requirements • computationally expensive • iterative local bundle adjustment • large errors are hard to spread evenly across the whole sequence • easily trapped in local optima • Pose Graph Optimization: optimize only the relative poses between cameras and marginalize out all 3D points; it is an approximation of bundle adjustment, not the optimal solution. Segment-based adaptive bundle adjustment: split the long sequence into several short segments; run independent SfM on each segment … Results on the Street sequence: ENFT-SLAM vs. ORB-SLAM (Non-consecutive Track Matching, Segment-based BA, Bag-of-words Place Recognition, Pose Graph Optimization + Traditional BA).
    0 points | 60 pages | 4.61 MB | 1 year ago
  • PDF: Lecture Notes on Linear Regression

    … |θᵀx^(i) − y^(i)|. 2 Gradient Descent. The Gradient Descent (GD) method is a first-order iterative optimization algorithm for finding the minimum of a function. If the multi-variable function J(θ) is differentiable … Stochastic gradient descent, also known as incremental gradient descent, is a stochastic approximation of the gradient descent optimization method. In each iteration, the parameters are updated according to the gradient of the error …
    0 points | 6 pages | 455.98 KB | 1 year ago
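    A small sketch of batch gradient descent for linear regression, assuming the usual least-squares cost J(θ) = (1/2m) Σ_i (θᵀx^(i) − y^(i))²; the data and learning rate are illustrative:

```python
import numpy as np

# Illustrative data: y ≈ 1 + 2*x plus noise; first column is the bias term.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, 100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)

theta, alpha = np.zeros(2), 0.5  # parameters and learning rate
for _ in range(200):
    # Gradient of J(theta) = (1/(2m)) * sum((theta^T x_i - y_i)^2)
    grad = X.T @ (X @ theta - y) / len(y)
    theta -= alpha * grad  # first-order step toward the minimum

print(theta)  # approximately [1.0, 2.0]
```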
  • PDF: Machine Learning Course, Wenzhou University, Lecture 09: Support Vector Machines (机器学习课程-温州大学-09机器学习-支持向量机)

    [6] Stephen Boyd, Lieven Vandenberghe. Convex Optimization. Cambridge: Cambridge University Press, 2004. [7] J. Platt. Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines …
    0 points | 29 pages | 1.51 MB | 1 year ago