IT文库

Category: Cloud Computing & Big Data (25), Machine Learning (25)
Language: English (17), Chinese, Simplified (8)
Format: PDF (25)

The search took 0.058 seconds and found about 25 results.
  • PDF document: PyTorch Release Notes

    For examples, see the PyTorch website and the PyTorch project. This document provides information about the key features, software enhancements and improvements, known issues, and how to run this container (PyTorch RN-08516-001_v23). For details, see the Deep Learning Frameworks Support Matrix. Key features and enhancements in these releases include the PyTorch container image versions 23.07 and 23.06.
    0 points | 365 pages | 2.94 MB | 1 year ago
  • PDF document: 动手学深度学习 v2.0 (Dive into Deep Learning v2.0)

    … independent and identically distributed (i.i.d.). Samples are sometimes called data points or data instances; each sample typically consists of a set of attributes called features (or covariates), and a machine-learning model makes its predictions from these attributes. In the supervised-learning problem above, the quantity to predict is a special attribute called the label (or target). … true_b = 4.2; features, labels = synthetic_data(true_w, true_b, 1000). Note that each row of features contains a two-dimensional data sample and each row of labels contains a one-dimensional label value (a scalar): print('features:', features[0], '\nlabel:', labels[0]) prints features: tensor([1.4632, 0.5511]) and label: tensor([5.2498]). A scatter plot of the second feature features[:, 1] against labels shows the linear relationship between them: d2l.set_figsize(); d2l.plt.scatter(features[:, 1].detach().numpy(), …
    0 points | 797 pages | 29.45 MB | 1 year ago
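    As context for the excerpt, a minimal sketch of the synthetic_data helper it calls, assuming the book's setup y = Xw + b + Gaussian noise with true_w = [2, -3.4] (the weight values are the book's running example; treat them as an assumption here):

        import torch

        def synthetic_data(w, b, num_examples):
            # Features X ~ N(0, 1); labels y = Xw + b plus small Gaussian noise.
            X = torch.normal(0, 1, (num_examples, len(w)))
            y = torch.matmul(X, w) + b
            y += torch.normal(0, 0.01, y.shape)
            return X, y.reshape((-1, 1))

        true_w = torch.tensor([2.0, -3.4])
        true_b = 4.2
        features, labels = synthetic_data(true_w, true_b, 1000)
        print('features:', features[0], '\nlabel:', labels[0])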
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    Convolutional Neural Nets (CNNs) were another important breakthrough that enabled learning spatial features in the input. Recurrent Neural Nets (RNNs) facilitated learning from sequences and temporal data. … Having an algorithmic way to meaningfully represent these inputs using a small number of numerical features will help us solve tasks related to these inputs. Ideally this representation is such that similar inputs have similar representations. We will call this representation an Embedding. An embedding is a vector of features that represents aspects of an input numerically. It must fulfill the following goals: a) to compress …
    0 points | 53 pages | 3.92 MB | 1 year ago
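    To make the embedding idea in the excerpt concrete, a minimal PyTorch sketch (the vocabulary size and embedding dimension are illustrative choices, not taken from the book):

        import torch
        import torch.nn as nn

        # Map a vocabulary of 10,000 token ids to 64-dimensional vectors.
        embedding = nn.Embedding(num_embeddings=10_000, embedding_dim=64)

        token_ids = torch.tensor([[1, 42, 307], [5, 5, 900]])  # batch of 2 sequences
        vectors = embedding(token_ids)
        print(vectors.shape)  # torch.Size([2, 3, 64])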
  • PDF document: pytorch 入门笔记-03- 神经网络 (PyTorch beginner notes 03: neural networks)

        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]       # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

    net = Net()
    print(net)
    Net(
      (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
      (conv2): … stride=(1, 1))
      (fc1): Linear(in_features=400, out_features=120, bias=True)
      (fc2): Linear(in_features=120, out_features=84, bias=True)
      (fc3): Linear(in_features=84, out_features=10, bias=True)
    )
    The model must define …
    0 points | 7 pages | 370.53 KB | 1 year ago
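    The printed module list matches the classic LeNet-style example network; below is a self-contained sketch with the pooling steps filled in. The conv2 layer is inferred from fc1's 400 = 16 * 5 * 5 inputs, so treat it as an assumption rather than the notes' exact code:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Net(nn.Module):
            def __init__(self):
                super().__init__()
                self.conv1 = nn.Conv2d(1, 6, 5)
                self.conv2 = nn.Conv2d(6, 16, 5)          # inferred, see note above
                self.fc1 = nn.Linear(16 * 5 * 5, 120)     # 400 inputs, as in the printout
                self.fc2 = nn.Linear(120, 84)
                self.fc3 = nn.Linear(84, 10)

            def forward(self, x):
                x = F.max_pool2d(F.relu(self.conv1(x)), 2)
                x = F.max_pool2d(F.relu(self.conv2(x)), 2)
                x = x.view(-1, 16 * 5 * 5)
                x = F.relu(self.fc1(x))
                x = F.relu(self.fc2(x))
                return self.fc3(x)

        net = Net()
        out = net(torch.randn(1, 1, 32, 32))  # dummy 32x32 grayscale input
        print(out.shape)                      # torch.Size([1, 10])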
  • PDF document: QCon北京2018-《深度学习在微博信息流排序的应用》-刘博 (QCon Beijing 2018: Applications of Deep Learning in Weibo Feed Ranking, Liu Bo)

    Strong expressive power; flexible network structure. [Slide diagram: user features, relation features, contextual features, and content features; continuous features are normalized, categorical features are one-hot encoded and embedded; the dense tower is ReLU(256) → ReLU(128) → ReLU(64). Deep-learning practice: DeepFM.]
    0 points | 21 pages | 2.14 MB | 1 year ago
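    A minimal PyTorch sketch of the kind of ranking tower the slide describes: embedded categorical features concatenated with normalized continuous features, then ReLU(256) → ReLU(128) → ReLU(64). All names and sizes here are illustrative assumptions, not taken from the talk:

        import torch
        import torch.nn as nn

        class RankingTower(nn.Module):
            def __init__(self, num_categories=1000, emb_dim=16, num_continuous=32):
                super().__init__()
                self.embed = nn.Embedding(num_categories, emb_dim)
                self.mlp = nn.Sequential(
                    nn.Linear(emb_dim + num_continuous, 256), nn.ReLU(),
                    nn.Linear(256, 128), nn.ReLU(),
                    nn.Linear(128, 64), nn.ReLU(),
                    nn.Linear(64, 1),  # ranking score
                )

            def forward(self, cat_ids, cont_feats):
                x = torch.cat([self.embed(cat_ids), cont_feats], dim=-1)
                return self.mlp(x)

        model = RankingTower()
        score = model(torch.randint(0, 1000, (4,)), torch.randn(4, 32))
        print(score.shape)  # torch.Size([4, 1])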
  • PDF document: keras tutorial

    … learning applications. Features: Keras leverages various optimization techniques to make its high-level neural network API easier to use and more performant. It supports the following features: consistent, simple … the input of the next subsequent layer. By using this approach, we can process a huge amount of features, which makes deep learning a very powerful tool. Deep-learning algorithms are also useful for …
    0 points | 98 pages | 1.57 MB | 1 year ago
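    As a concrete illustration of each layer's output feeding the next layer, a minimal Keras model (the layer sizes are arbitrary choices for this sketch):

        from tensorflow import keras
        from tensorflow.keras import layers

        # The output of each Dense layer becomes the input of the next one.
        model = keras.Sequential([
            keras.Input(shape=(20,)),
            layers.Dense(64, activation="relu"),
            layers.Dense(32, activation="relu"),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        model.summary()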
  • PDF document: Keras: 基于 Python 的深度学习库 (Keras: a deep learning library based on Python)

    … from keras.layers import Embedding; from keras.layers import LSTM; model = Sequential(); model.add(Embedding(max_features, output_dim=256)); model.add(LSTM(128)); model.add(Dropout(0.5)); model.add(Dense(1, activation='sigmoid')) … noise_shape: an integer tensor giving the shape of the binary dropout mask that will be multiplied with the input. For example, if your input has shape (batch_size, timesteps, features) and you want the dropout mask to be the same at every timestep, you can use noise_shape=(batch_size, 1, features). seed: a Python integer used as a random seed. References: Dropout: A Simple … output_shape == (None, 3, 32). Arguments: n: integer, the repetition factor. Input shape: 2D tensor of shape (num_samples, features). Output shape: 3D tensor of shape (num_samples, n, features). 5.2.9 Lambda: keras.layers.Lambda(function, output_shape=None …
    0 points | 257 pages | 1.19 MB | 1 year ago
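    A self-contained version of the sequence-classification model in the excerpt; max_features is an assumed vocabulary size, since the excerpt does not give its value:

        from tensorflow import keras
        from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

        max_features = 20000  # assumed vocabulary size

        model = keras.Sequential([
            keras.Input(shape=(None,)),      # variable-length integer sequences
            Embedding(max_features, 256),
            LSTM(128),
            Dropout(0.5),
            Dense(1, activation='sigmoid'),
        ])
        model.compile(loss='binary_crossentropy', optimizer='rmsprop',
                      metrics=['accuracy'])
        model.summary()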
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    … from scratch. For models that share the same domain, it is likely that the first few layers learn similar features, hence training new models from scratch for these tasks is likely wasteful. Regarding the first … the data we provide to the model in this fine-tuning stage is not being used for learning rudimentary features, but rather for learning how to map the high-level representations it learned in the pretraining stage to solving … (footnotes: OpenAI GPT-3 API, https://openai.com/api/; GitHub Copilot, https://github.com/features/copilot) … import tensorflow_datasets as tfds; with tf.device('/job:localhost'): ds = tfds.load('ag_news_subset' …
    0 points | 31 pages | 4.03 MB | 1 year ago
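    The dataset named in the excerpt is ag_news_subset; below is a minimal sketch of loading it with TFDS and training a small classifier, assuming the dataset's supervised (text, label) keys. The classifier itself is illustrative and is not the book's fine-tuning code, which starts from a pretrained encoder:

        import tensorflow as tf
        import tensorflow_datasets as tfds

        train_ds = tfds.load('ag_news_subset', split='train', as_supervised=True)
        train_ds = train_ds.shuffle(10_000).batch(32).prefetch(tf.data.AUTOTUNE)

        vectorizer = tf.keras.layers.TextVectorization(max_tokens=20_000,
                                                       output_sequence_length=64)
        vectorizer.adapt(train_ds.map(lambda text, label: text))

        model = tf.keras.Sequential([
            vectorizer,
            tf.keras.layers.Embedding(20_000, 64),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(4),  # AG News has 4 topic classes
        ])
        model.compile(optimizer='adam',
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])
        model.fit(train_ds, epochs=1)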
  • PDF document: Lecture Notes on Gaussian Discriminant Analysis, Naive Bayes

    … \beta_j(y) = \frac{\sum_{i=1}^{m} 1(y^{(i)} = y \wedge x^{(i)}_j = x)}{\sum_{i=1}^{m} 1(y^{(i)} = y)} \quad (23). Remark: We assume binary features (X_j \in \{0, 1\} for \forall j \in [n]) in the above discussion. What if X_j \in \{1, 2, \cdots, v\}? Can we get similar … p_1(x_1 \mid y)\, p_2(x_2 \mid y) \cdots p_{\bar{j}}(\bar{x} \mid y) \cdots p_n(x_n \mid y) = 0 for \forall y. It is shown that, even if the remaining features all have very "strong" conditional probabilities, p(y \mid x) is forcibly set to zero due to only … each sample may involve a different number of features. We assume that the i-th training sample x^{(i)} has n_i features. For \forall i \in [m], x^{(i)} has each of its features drawn from a sample space [v] = \{1, 2, \cdots …
    0 points | 19 pages | 238.80 KB | 1 year ago
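    A minimal NumPy sketch of the estimator in Eq. (23) for binary features; the function and variable names are mine, not the notes':

        import numpy as np

        def estimate_beta(X, y, label):
            # beta_j(label): fraction of samples with y == label in which feature j is 1.
            mask = (y == label)
            return X[mask].mean(axis=0)  # shape: (n_features,)

        # Toy data: 5 samples, 3 binary features.
        X = np.array([[1, 0, 1],
                      [1, 1, 0],
                      [0, 0, 1],
                      [1, 1, 1],
                      [0, 1, 0]])
        y = np.array([1, 1, 0, 1, 0])
        print(estimate_beta(X, y, label=1))  # approximately [1.0, 0.667, 0.667]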
  • PDF document: Lecture 5: Gaussian Discriminant Analysis, Naive Bayes

    … a cat; some of them may not. Whether there is a cat is random. An image is represented by a vector of features; the feature vectors are random, since the images are randomly given. Random variable X representing … Warm Up (Cont'd): Suppose we have n features X = [X_1, X_2, \cdots, X_n]^T. The features are independent of each other: P(X = x \mid Y = y) = P(X_1 = x_1, \cdots, X_n = x_n \mid Y = y) … x^{(i)} is an n-dimensional vector; each feature x^{(i)}_j \in \{0, 1\} (j = 1, \cdots, n) and y^{(i)} \in \{0, 1\}. The features and labels can be represented by random variables \{X_j\}_{j=1,\cdots,n} and Y, respectively.
    0 points | 122 pages | 1.35 MB | 1 year ago
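    Written out, the conditional-independence assumption in the excerpt gives the standard Naive Bayes factorization and, via Bayes' rule, the posterior (textbook form, not copied from the slides):

        P(X = x \mid Y = y) = \prod_{j=1}^{n} P(X_j = x_j \mid Y = y),
        \qquad
        P(Y = y \mid X = x) = \frac{P(Y = y)\,\prod_{j=1}^{n} P(X_j = x_j \mid Y = y)}
                                   {\sum_{y'} P(Y = y')\,\prod_{j=1}^{n} P(X_j = x_j \mid Y = y')}.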