IT文库
Category: Cloud Computing & Big Data (43), Machine Learning (43)
Language: Chinese (Simplified) (23), English (20)
Format: PDF (43)

Search completed in 0.062 seconds, finding about 43 matching results.
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    train_ds, val_ds = make_dataset('oxford_flowers102') The dataset contains variable-sized samples. Go ahead and resize them to 264x264. This is a required step because our model expects fixed-sized images. … "…after the introduction of built-in predictive text assistance, despite it then needing more effort to write (and read)." [4] Wei, Jason, and Kai Zou. "EDA: Easy data augmentation techniques for boosting performance on text classification tasks." … Here is a code example that implements random shuffling: # NLTK Import try: from
    0 码力 | 56 pages | 18.93 MB | 1 year ago
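
    The book's shuffling example is cut off in the snippet above; here is a minimal sketch of a random-shuffle text augmentation in the spirit of EDA (the function name and whitespace tokenization are illustrative, not the book's code):

    import random

    def random_shuffle(sentence, seed=None):
        """Return a copy of `sentence` with its word order randomly permuted."""
        rng = random.Random(seed)
        tokens = sentence.split()  # naive whitespace tokenization
        rng.shuffle(tokens)        # permute in place; the bag of words is preserved
        return " ".join(tokens)

    print(random_shuffle("the quick brown fox jumps over the lazy dog", seed=0))
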
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    Our pruned model performed with an accuracy of 84.71%, a slight drop in performance. Let's go ahead and strip the pruning weights that the TFMOT library added to the model, as shown below. … = simulate_clustering(x, num_clusters, num_steps=5000, learning_rate=2e-1) The following is the log of the above training: Computing the centroids. Step: 1000, Loss: 0.04999. Step: 2000, Loss: 0.03865 … the number of clusters. Figure 5-7 (b) shows the plot; note that both the x and y axes are in log scale. Finally, figure 5-7 (c) compares the reconstruction errors of quantization and clustering
    0 码力 | 34 pages | 3.18 MB | 1 year ago
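
    The stripping step mentioned above is a single call in the TFMOT library; a minimal sketch, with a tiny toy model standing in for the book's pruned model:

    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # A tiny stand-in model, wrapped for magnitude pruning and (normally) fine-tuned.
    base = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])
    pruned_model = tfmot.sparsity.keras.prune_low_magnitude(base)

    # strip_pruning removes the pruning wrappers (masks and step counters),
    # leaving plain Keras layers whose weight tensors keep the learned zeros.
    stripped_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
    stripped_model.summary()
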
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    activation='softmax') ]) Our model, input data, and the hyperparameter trial set are ready. Let's go ahead and train the model, each time choosing one item from the trial set. Each model is trained for 2000 … def build_hp_model(hp): if hp: learning_rate = hp.Float("learning_rate", min_value=1e-4, max_value=1e-2, sampling="log") dropout_rate = hp.Float("dropout_rate", min_value=.1, max_value=.8, step=.1) return creat… … {'default': 0.0001, 'conditions': [], 'min_value': 0.0001, 'max_value': 0.01, 'step': None, 'sampling': 'log'} dropout_rate (Float) {'default': 0.1, 'conditions': [], 'min_value': 0.1, 'max_value': 0.8, 'step':
    0 码力 | 33 pages | 2.48 MB | 1 year ago
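
    The hp.Float calls in the snippet are the Keras Tuner API; a minimal sketch of the same search-space definition (the model body is a placeholder, since the book's helper is truncated above):

    import keras_tuner as kt
    import tensorflow as tf

    def build_hp_model(hp):
        # Log sampling suits scale-like parameters such as the learning rate;
        # a fixed linear step suits bounded rates like dropout.
        learning_rate = hp.Float("learning_rate", min_value=1e-4, max_value=1e-2, sampling="log")
        dropout_rate = hp.Float("dropout_rate", min_value=0.1, max_value=0.8, step=0.1)
        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dropout(dropout_rate),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        return model

    tuner = kt.RandomSearch(build_hp_model, objective="val_accuracy", max_trials=5)
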
  • PDF document: 动手学深度学习 v2.0 (Dive into Deep Learning)

    …the set of real matrices with a rows and b columns • A ∪ B: the union of sets A and B • A ∩ B: the intersection of sets A and B • A \ B: subtraction of set B from set A (the relative complement of B with respect to A) Functions and operators • f(·): a function • log(·): the natural logarithm • exp(·): the exponential function • 1_X: the indicator function • (·)⊤: transpose of a vector or matrix • X−1: inverse of matrix X • ⊙: element-wise (Hadamard) product • [·, ·]: concatenation • |X|: cardinality of a set … with open(data_file, 'w') as f: f.write('NumRooms,Alley,Price\n') # column names f.write('NA,Pave,127500\n') # each row is a data sample f.write('2,NA,106000\n') f.write('4,NA,178100\n') f.write('NA,NA,140000\n') To load the raw dataset from the CSV file just created … 'identity_transform', 'independent', 'kl', 'kl_divergence', 'kumaraswamy', 'laplace', 'lkj_cholesky', 'log_normal', 'logistic_normal', 'lowrank_multivariate_normal', 'mixture_same_family', 'multinomial',
    0 码力 | 797 pages | 29.45 MB | 1 year ago
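
    In the book, the CSV written above is read back with pandas and the missing entries are imputed; the following is a sketch from memory, not a verbatim excerpt (the file path is assumed):

    import os
    import pandas as pd

    data_file = os.path.join('..', 'data', 'house_tiny.csv')  # assumed location
    data = pd.read_csv(data_file)

    # Split inputs from the target, impute missing numeric values with the
    # column mean, and expand the categorical column into indicator columns.
    inputs, outputs = data.iloc[:, 0:2], data.iloc[:, 2]
    inputs = inputs.fillna(inputs.mean(numeric_only=True))
    inputs = pd.get_dummies(inputs, dummy_na=True)
    print(inputs)
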
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    work. In the following section we will explain them through a toy example, but feel free to jump ahead if you are familiar with the motivation behind them. [1] Dimensionality reduction is the process of … Skipgram is going to be identical, and is left as an exercise to the reader! We always wanted to write this in our books, after having read it in many textbooks throughout our lives. Hopefully, we have
    0 码力 | 53 pages | 3.92 MB | 1 year ago
  • PDF document: 深度学习与PyTorch入门实战 (Deep Learning with PyTorch: Hands-On Introduction) - 10. Broadcasting

    https://blog.openai.com/generative-models/ ▪ Expand: without copying data. Key idea: ▪ insert a size-1 dim in front ▪ expand dims of size 1 to the same size as the target ▪ Feature maps: [4, 32, 14, 14] ▪ Bias: [32, 1, 1] => [1, 32, 1, 1] => [4, 32, 14, 14]
    0 码力 | 12 pages | 551.84 KB | 1 year ago
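
    The shape walk-through above maps directly onto PyTorch calls; a minimal sketch of the unsqueeze-then-expand pattern (expand returns a view, so no data is copied):

    import torch

    feature_maps = torch.randn(4, 32, 14, 14)  # [N, C, H, W]
    bias = torch.randn(32, 1, 1)               # one value per channel

    # Insert a size-1 dim in front: [32, 1, 1] -> [1, 32, 1, 1],
    # then expand each size-1 dim to the target size (a view, not a copy).
    out = feature_maps + bias.unsqueeze(0).expand(4, 32, 14, 14)
    print(out.shape)  # torch.Size([4, 32, 14, 14])
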
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    As always, we recommend that, to build an intuition for what works better and when, you go ahead and try these ideas both on academic datasets, which are easier to play with, and on your own model and
    0 码力 | 31 pages | 4.03 MB | 1 year ago
  • PDF document: Keras: 基于 Python 的深度学习库 (Keras: The Python-Based Deep Learning Library)

    sampling_factor))) We assume word frequencies follow Zipf's law (s=1) to derive a numerical approximation of frequency(rank): frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank)), where gamma is the Euler-Mascheroni constant. Parameters • size: integer, the number of words that may be sampled. … y_pred) 7.2.8 logcosh logcosh(y_true, y_pred) Logarithm of the hyperbolic cosine of the prediction error. For small x, log(cosh(x)) is approximately equal to (x ** 2) / 2; for large x, approximately abs(x) - log(2). This means 'logcosh' behaves much like mean squared error, but is not strongly affected by the occasional wildly wrong prediction. Arguments • y_true: tensor of true target values. … TensorBoard [source] keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=0, batch_size=32, write_graph=True, write_grads=False, write_images=False, embeddings_freq=0, embeddings_layer_names=None
    0 码力 | 257 pages | 1.19 MB | 1 year ago
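
    The two approximations quoted for log(cosh(x)) are easy to verify numerically; a quick NumPy check (ours, not the Keras implementation):

    import numpy as np

    def logcosh(x):
        return np.log(np.cosh(x))

    for x in (0.1, 5.0):
        # small x: close to x**2 / 2; large x: close to abs(x) - log(2)
        print(x, logcosh(x), x**2 / 2, abs(x) - np.log(2))
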
  • PDF document: Lecture Notes on Gaussian Discriminant Analysis, Naive

    Given a training set \{(x^{(i)}, y^{(i)})\}_{i=1,\dots,m}, the log-likelihood is defined as

    \ell(\psi, \mu_0, \mu_1, \Sigma) = \log \prod_{i=1}^m p_{X,Y}(x^{(i)}, y^{(i)}; \psi, \mu_0, \mu_1, \Sigma)
                                     = \log \prod_{i=1}^m p_{X|Y}(x^{(i)} \mid y^{(i)}; \mu_0, \mu_1, \Sigma) \, p_Y(y^{(i)}; \psi)
                                     = \sum_{i=1}^m \log p_{X|Y}(x^{(i)} \mid y^{(i)}; \mu_0, \mu_1, \Sigma) + \sum_{i=1}^m \log p_Y(y^{(i)}; \psi)   (8)

    where \psi, \mu_0, \mu_1, and \Sigma are parameters. Substituting Eq. (5)-(7) into Eq. (8) gives us the full expression of \ell(\psi, \mu_0, \mu_1, \Sigma):

    \ell(\psi, \mu_0, \mu_1, \Sigma) = \sum_{i: y^{(i)}=0} \log \left[ \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (x^{(i)} - \mu_0)^T \Sigma^{-1} (x^{(i)} - \mu_0) \right) \right]
        + \sum_{i: y^{(i)}=1} \log \left[ \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (x^{(i)} - \mu_1)^T \Sigma^{-1} (x^{(i)} - \mu_1) \right) \right]
        + \sum_{i=1}^m \log \psi^{y^{(i)}} (1 - \psi)^{1 - y^{(i)}}
    0 码力 | 19 pages | 238.80 KB | 1 year ago
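
    Equation (8) transcribes directly into NumPy/SciPy for a numeric sanity check; a sketch of ours, not part of the notes:

    import numpy as np
    from scipy.stats import multivariate_normal

    def gda_log_likelihood(X, y, psi, mu0, mu1, Sigma):
        # Sum log p(x|y) (Gaussian) and log p(y) (Bernoulli) over the samples.
        ll = 0.0
        for x_i, y_i in zip(X, y):
            mu = mu1 if y_i == 1 else mu0
            ll += multivariate_normal.logpdf(x_i, mean=mu, cov=Sigma)
            ll += y_i * np.log(psi) + (1 - y_i) * np.log(1 - psi)
        return ll
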
  • PDF document: rwcpu8 Instruction Install miniconda pytorch

    . But the default shell initialization script set by cssystem is ~/.cshrc_user, so you should write the content of ~/.tcshrc to ~/.cshrc_user: source "/export/data/miniconda3/etc/profile.d/conda … if ~/.tcshrc exists, ~/.cshrc_user won't be loaded, so you need to remove ~/.tcshrc: … 4. Log out and log in again. If Miniconda is successfully installed, you should be able to see the usage of conda
    0 码力 | 3 pages | 75.54 KB | 1 year ago