IT文库

Category: All | Cloud Computing & Big Data (39) | Machine Learning (39)
Language: All | Chinese (Simplified) (25) | English (14)
Format: All | PDF (39)

This search took 0.046 s and found about 39 results.
  • PDF: Lecture Notes on Support Vector Machine

    …mapping the data into a higher-dimensional feature space where it exhibits linear patterns, we can employ the linear classification model in the new feature space. [Figure 3: Non-linear data vs. linear classification.] As shown in Fig. 4(a), each sample is represented by a single feature x (i.e., the data samples lie in a one-dimensional space), and no linear separator exists for this data; the data become linearly separable in the new higher-dimensional feature space [Figure 4: Feature mapping for a 1-dimensional feature space]. Another example is given in Fig. 5; the data sample can…

    18 pages | 509.37 KB | 1 year ago
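The feature-mapping idea in this excerpt — lifting 1-D samples x to (x, x²) so that a linear separator exists in the new space — can be sketched as follows. The sample data and the threshold 1.5 are hypothetical, chosen so the classes are separable only after the mapping:

```python
import numpy as np

# Hypothetical 1-D data: class 0 sits in the middle, class 1 on both
# sides, so no single threshold on x alone separates the classes.
x = np.array([-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0])
y = np.array([1, 1, 0, 0, 0, 1, 1])

# Map each sample x -> (x, x^2); in the lifted 2-D space the classes
# are separated by the horizontal line x^2 = 1.5.
phi = np.stack([x, x ** 2], axis=1)

pred = (phi[:, 1] > 1.5).astype(int)  # a linear separator in the new space
print(np.array_equal(pred, y))  # -> True
```

The same trick, applied implicitly via kernels, is what lets an SVM handle non-linear data without ever materializing the lifted features.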
  • PDF: 从推荐模型的基础特点看大规模推荐类深度学习系统的设计 (袁镱) — Designing Large-Scale Deep-Learning Recommendation Systems from the Basic Properties of Recommendation Models

    Goals: O1, online/offline training of models with hundreds of billions of features (TB scale), plus online inference serving and continuous deployment; O2, deep optimizations for recommendation workloads reaching industry-leading performance. Core characteristics of recommender systems — Feature 1 (basic): users interact with the system 24/7, so learning is streaming; items and users arrive and depart (or are forgotten), so the embedding space changes dynamically, and the set of frequently hit keys drifts slowly over time. Feature 2 (spatio-temporal): within a short window only some items and users are hit, so only part of the parameters are used. Feature 3 (machine learning): embeddings encode information sparsely. Design dimensions for such a system: distributed systems, large models, and optimization algorithms, targeting high performance with no loss in model quality. Training framework: distributed training on a parameter-server architecture…

    22 pages | 6.76 MB | 1 year ago
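The dynamically growing, sparsely accessed embedding table this excerpt describes can be sketched as a hash map that materializes vectors on first hit, a common pattern in parameter-server designs. The class and key names below are illustrative, not from the slides:

```python
import numpy as np

class DynamicEmbeddingTable:
    """Sparse embedding storage: rows exist only for keys actually hit,
    so new items/users can appear (and stale ones be evicted) at any time."""
    def __init__(self, dim, seed=0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.table = {}  # feature key -> embedding vector

    def lookup(self, keys):
        # Materialize an embedding the first time a key is hit.
        for k in keys:
            if k not in self.table:
                self.table[k] = self.rng.normal(0.0, 0.01, self.dim)
        return np.stack([self.table[k] for k in keys])

    def evict(self, keys):
        # Forget departed items/users to keep the table bounded.
        for k in keys:
            self.table.pop(k, None)

emb = DynamicEmbeddingTable(dim=8)
batch = emb.lookup(["user:42", "item:7", "item:9"])
print(batch.shape, len(emb.table))  # only the keys that were hit are stored
```

A production parameter server adds sharding, optimizer state per key, and time-based expiry, but the storage contract is the same: only touched keys occupy memory.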
  • PDF: Lecture 5: Gaussian Discriminant Analysis, Naive Bayes

    …X is a random variable indicating the feature vector, and Y is a random variable indicating the label; we perform a trial to obtain a sample x. Each image is represented by a vector of features; the feature vectors are random, since the images are randomly given, so the random variable X represents the feature vector (and thus the image), and the labels are random as well. Instead of a (deterministic) hypothesis function y = hθ(x), how do we model the (probabilistic) relationship between feature vector X and label Y? P(Y = y | X = x) = P(X = x | Y = y) P(Y = y) / P(X = x)…

    122 pages | 1.35 MB | 1 year ago
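The posterior in the excerpt follows mechanically once the class-conditional likelihoods and priors are fixed; a minimal sketch with hypothetical numbers for a binary label:

```python
# Hypothetical likelihoods P(X=x | Y=y) and priors P(Y=y) for a binary label.
likelihood = {0: 0.10, 1: 0.40}  # P(X = x | Y = y)
prior = {0: 0.70, 1: 0.30}       # P(Y = y)

# Evidence P(X = x) via the law of total probability.
evidence = sum(likelihood[y] * prior[y] for y in (0, 1))

# Bayes' theorem: P(Y = y | X = x).
posterior = {y: likelihood[y] * prior[y] / evidence for y in (0, 1)}
print(posterior[1])  # 0.4*0.3 / (0.1*0.7 + 0.4*0.3) = 0.12/0.19 ≈ 0.6316
```

Note the denominator never needs to be modeled separately: it is just the normalizer that makes the posteriors sum to 1.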
  • PDF: Lecture 6: Support Vector Machine

    …changing the feature representation. Feature mapping: consider a binary classification problem in which each sample is represented by a single feature x and no linear separator exists for the data. Now map each example as x → {x, x²}; each example then has two features ("derived" from the old one). In another example, each sample is defined by x = {x₁, x₂}, and again no linear separator exists for the data…

    82 pages | 773.97 KB | 1 year ago
  • PDF: Lecture Notes on Gaussian Discriminant Analysis, Naive…

    …a cat in a given image. We assume X = [X₁, X₂, …, Xₙ]ᵀ is a random variable representing the feature vector of the given image, and Y ∈ {0, 1} is a random variable representing whether there is a cat in it. P(Y = y | X = x) is the probability that the image is labeled y given that it is represented by feature vector x; P(X = x | Y = y) is the probability that the image has feature vector x given that it is labeled y; and P(Y = y) is the prior. In linear and logistic regression we use a hypothesis function y = hθ(x) to model the relationship between feature vector x and label y, whereas here we rely on Bayes' theorem to characterize the relationship…

    19 pages | 238.80 KB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    …assign values between 0.0 and 1.0 to these two features for different animals; the higher the value, the more that particular feature represents the given animal. In Table 4-1 we manually assigned values for the 'cute' and 'dangerous' features for illustration. Going through Table 4-1, cat and dog have high values for the 'cute' feature and low values for the 'dangerous' feature; on the other hand, a snake is dangerous and not cute for most people. With a two-dimensional embedding for each animal, where each feature represents one dimension, we can place the animals on a 2-D plot…

    53 pages | 3.92 MB | 1 year ago
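Hand-crafted 2-D embeddings like those in the excerpt can be compared with cosine similarity; the (cute, dangerous) values below are hypothetical stand-ins for the book's Table 4-1:

```python
import math

# Hypothetical (cute, dangerous) embeddings, each value in [0.0, 1.0].
embeddings = {
    "cat":   (0.9, 0.1),
    "dog":   (0.8, 0.2),
    "snake": (0.1, 0.9),
}

def cosine(u, v):
    """Cosine similarity: dot product normalized by vector lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Animals with similar feature profiles end up close in embedding space.
print(cosine(embeddings["cat"], embeddings["dog"]) >
      cosine(embeddings["cat"], embeddings["snake"]))  # -> True
```

Learned embeddings work the same way, except the per-dimension values are fit from data rather than assigned by hand, and the dimensions usually have no human-readable meaning.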
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    …a Normal and a Reduction cell. A normal cell's output feature map is identical in size to its input feature map; in contrast, a reduction cell halves the output feature map. Figure 7-7 shows two child networks… primitive operations. The concatenation operation happens along the filter dimension to keep the feature map intact. Figure 7-9 shows the Normal and Reduction cells predicted by NASNet with the cifar10… A helper in the chapter's code, defined as …branches): """It transforms the input branches to an identical feature space. It is useful when a cell receives inputs with different feature spaces.""", unpacks (hidden_1, width_1), (hidden_2, width_2) = branches…

    33 pages | 2.48 MB | 1 year ago
  • PDF: 阿里云上深度学习建模实践 (程孟力) — Deep-Learning Modeling Practice on Alibaba Cloud

    Memory-allocation optimization; ParallelStringOp (split/type conversion); sequence features (side info); op fusion (hash + embedding); overlapped execution (turning FG into an op); incremental updates of item features. Challenges: 3. engineering optimization is complex; 4. data acquisition is hard. Deep models are non-linear, with many parameters… [Figure: a transformer-based model (VIOLET): ViT over video frames and BERT over title/OCR text; CLS token; title, OCR, and image features fused via MHSA; MVM, VTM, and MTM training objectives; transformer decoder.]

    40 pages | 8.51 MB | 1 year ago
  • PDF: 动手学深度学习 v2.0 (Dive into Deep Learning)

    …a data point (or data sample). The target we try to predict (for example, the house price) is called the label (or target); the independent variables the prediction is based on (area and age) are called features (or covariates). We usually use n for the number of samples in a dataset; for the sample with index i, the input is x⁽ⁱ⁾ = [x₁⁽ⁱ⁾, x₂⁽ⁱ⁾]ᵀ and the corresponding label is y⁽ⁱ⁾. … In the network of Fig. 3.1.2 (linear regression drawn as a single-layer neural network, with the weight and bias values omitted), the inputs are x₁, …, x_d, so the number of inputs in the input layer (the feature dimensionality) is d; the output is o₁, so the output layer has one output. Since the input values are all given and we count only the layers where computation happens, the input layer is not counted. … imposes a large penalty, which biases the learning algorithm toward models that spread weight evenly across many features; in practice this can make them more robust to observation error in any single variable. By contrast, an L1 penalty concentrates weight on a small set of features and drives the rest to zero; this is called feature selection and may be desirable in other settings. Using the same notation as (3.1.10), the minibatch SGD update for L2-regularized regression is: w ← (1 − ηλ)w − (η/|B|) Σ…

    797 pages | 29.45 MB | 1 year ago
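The L2-regularized minibatch update quoted in the excerpt can be sketched directly. The gradient term here assumes squared-error linear regression (consistent with the surrounding chapter); the toy data and hyperparameters are illustrative:

```python
import numpy as np

def sgd_weight_decay_step(w, b, X, y, eta=0.1, lam=0.01):
    """One minibatch step of L2-regularized linear regression:
    w <- (1 - eta*lam) * w - (eta/|B|) * sum_i x_i (w.x_i + b - y_i)."""
    batch = len(y)
    err = X @ w + b - y                      # residuals on the minibatch
    grad_w = X.T @ err / batch               # data-fit gradient
    w = (1 - eta * lam) * w - eta * grad_w   # shrink-then-step form
    b = b - eta * err.mean()                 # bias is typically not decayed
    return w, b

# Toy data generated by y = 2*x1; the second feature is irrelevant.
X = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
y = np.array([2.0, 4.0, 6.0])
w, b = np.zeros(2), 0.0
for _ in range(500):
    w, b = sgd_weight_decay_step(w, b, X, y)
print(w)  # w[0] near 2 (slightly shrunk by the decay), w[1] stays 0
```

The multiplicative factor (1 − ηλ) is why this is called "weight decay": every step shrinks the weights toward zero before applying the data gradient.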
  • PDF: 深度学习与PyTorch入门实战 - 37. 什么是卷积 — Hands-On Deep Learning with PyTorch, 37: What Is Convolution

    What is convolution (presenter: 龙良曲). Feature maps. What's wrong with Linear: 4 hidden layers [784, 256, 256, 256, 256, 10] mean ~390K parameters and ~1.6 MB of memory — prohibitive on 80386-era hardware (http://slazebni.cs.illinois…). …com/convolutional-neural-networks-cnn-step-1-convolution-operation/; convolution; CNN on feature maps (https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/). Next lesson: convolutional neural networks.

    18 pages | 1.14 MB | 1 year ago
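The ~390K-parameter figure in this excerpt can be checked by summing the weight-matrix sizes (biases omitted) for the stated layer widths [784, 256, 256, 256, 256, 10]:

```python
# Layer widths from the excerpt; counting weights only (biases omitted).
sizes = [784, 256, 256, 256, 256, 10]
params = sum(a * b for a, b in zip(sizes, sizes[1:]))
print(params)  # -> 399872, i.e. ~390K; at 4 bytes/float that is ~1.6 MB
```

Both slide numbers check out: 399872 weights ≈ 390K, and 399872 × 4 bytes ≈ 1.6 MB — the motivation for replacing dense layers with weight-sharing convolutions.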