IT文库

Category: All | Cloud Computing & Big Data (34) | Machine Learning (34)
Language: All | English (21) | Chinese (Simplified) (13)
Format: All | PDF (34)

This search took 0.083 seconds and found about 34 results.

  • PDF: PyTorch Brand Guidelines

    … Light Gray (Digital+Print), Medium Gray (Digital+Print), Dark Gray (Digital+Print): #F6F6F6, R246 G246 B246, C00 M00 Y00 K04, Pantone Cool Grey 1 C … use #FFFFFF as the background color, and use the Coding colors (Dark Gray, Light Gray, Green, Yellow), referencing the other PyTorch brand colors. At the same time, please ensure the clarity and legibility of … Coding Text in Green, Light Gray, and Dark Gray (Digital); Coding Background in Dark and Light (Digital); Hex #2B7D6D, Hex #F4A623 …
    0 points | 12 pages | 34.16 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    … similar to the baseline, but does so in fewer epochs. We could ideally save an epoch's worth of training time by terminating the training early, if we adopt this hypothetical sample-efficient model training. … effective utilization of the training data. Labeling data is often an expensive process, both in terms of time consumption and fiscal expenditure, because it involves human labelers looking at each example … the four classes, three of which are the keywords that the device will accept: hello, weather and time. The fourth class (none) indicates the absence of an acceptable keyword in the input signal. Figure …
    0 points | 56 pages | 18.93 MB | 1 year ago
  • PDF: 人工智能发展史

    … toronto.edu/~fritz/absps/cvq.pdf … probability distributions. Meanwhile: speech sequences ▪ no memory ▪ time-delay NN: http://www.cs.toronto.edu/~fritz/absps/waibelTDNN.pdf ▪ moving window ▪ inspired LeCun … Vector Machine: 1992 … http://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf … Dark time ▪ paper got rejected ▪ Hinton moved to CIFAR seeking funding ▪ conspiracy: rebrand "neural …
    0 points | 54 pages | 3.87 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    … that is, it relies on the momentum of the weights, which is an exponentially smoothed estimate of … over time. For instance, the momentum of a weight at training step t is given by: … (footnote 2: Dettmers, Tim, and Luke Zettlemoyer) … scores, but they will all try to approximate the importance of a given weight at a certain point of time in the training process, to minimize the loss function. The better we can estimate this importance … granularities visually. Figure 5-4: An example of sparsified weight matrices (zeroed weights are dark), each with 33% sparsity at various granularity levels. It shows the parameter layout for a convolutional …
    0 points | 34 pages | 3.18 MB | 1 year ago
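
    The momentum formula itself is cut off in the snippet above, so the following is only a minimal sketch of the general idea it describes: keep an exponentially smoothed estimate of each weight's gradients and use its magnitude as an importance score when deciding what to prune. The decay factor, the random gradients, and the 33% threshold are illustrative assumptions, not the book's exact method.

        import numpy as np

        def update_momentum(momentum, grad, beta=0.9):
            """Exponentially smoothed (EMA) gradient estimate; beta is an assumed decay factor."""
            return beta * momentum + (1.0 - beta) * grad

        # Toy example: track momentum for a small weight matrix over a few steps.
        rng = np.random.default_rng(0)
        weights = rng.normal(size=(4, 4))
        momentum = np.zeros_like(weights)

        for step in range(100):
            grad = rng.normal(size=weights.shape)  # stand-in for real gradients
            momentum = update_momentum(momentum, grad)

        # Use |momentum| as the importance score and zero out the lowest-scoring third.
        threshold = np.quantile(np.abs(momentum), 0.33)
        mask = np.abs(momentum) >= threshold
        sparse_weights = weights * mask
        print(f"sparsity: {1.0 - mask.mean():.2f}")
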
  • PDF: 【PyTorch深度学习-龙龙老师】-测试版202112

    … counted in: cpu_time = timeit.timeit(cpu_run, number=3); gpu_time = timeit.timeit(gpu_run, number=3); print('warmup:', cpu_time, gpu_time). # Formal measurement: run 10 times and take the average time: cpu_time = timeit.timeit(cpu_run, number=10); gpu_time = timeit.timeit(gpu_run, number=10); print('run time:', cpu_time, gpu_time). Plotting the computation times of the CPU and GPU environments for different matrix sizes as curves, as shown in Figure 1.21, one can see that for matrix … and matrix … (references: … Fort Lauderdale, FL, USA, 2011; [3] J. Mizera-Pietraszko and P. Pichappan, Lecture Notes in Real-Time Intelligent Systems, Springer International Publishing, 2017.)
    0 points | 439 pages | 29.91 MB | 1 year ago
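
    The snippet shows the timing calls but not the cpu_run / gpu_run functions they measure; the sketch below fills those in with assumed matrix-multiplication closures (the matrix size n is arbitrary) so the CPU-versus-GPU comparison can actually be run.

        import timeit

        import torch

        n = 1000  # assumed matrix size; the book sweeps this to plot Figure 1.21-style curves

        cpu_a = torch.randn(n, n)
        cpu_b = torch.randn(n, n)

        def cpu_run():
            return torch.matmul(cpu_a, cpu_b)

        if torch.cuda.is_available():
            gpu_a, gpu_b = cpu_a.cuda(), cpu_b.cuda()

            def gpu_run():
                c = torch.matmul(gpu_a, gpu_b)
                torch.cuda.synchronize()  # wait for the kernel so the measurement is meaningful
                return c

            # Warm-up: the first GPU calls pay CUDA initialization cost and are not counted.
            cpu_time = timeit.timeit(cpu_run, number=3)
            gpu_time = timeit.timeit(gpu_run, number=3)
            print('warmup:', cpu_time, gpu_time)

            # Formal measurement: run 10 times each and compare.
            cpu_time = timeit.timeit(cpu_run, number=10)
            gpu_time = timeit.timeit(gpu_run, number=10)
            print('run time:', cpu_time, gpu_time)
        else:
            print('CUDA not available; CPU timing only:',
                  timeit.timeit(cpu_run, number=10))
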
  • PDF: AI大模型千问 qwen 中文文档

    … json │ │ └── vocab.json … Then run python server.py to start the web service, open `http://localhost:7860/?__theme=dark`, and enjoy the Qwen Web UI. 1.6.2 Next steps: TGW offers many more uses; you can even enjoy role-playing in it and use different kinds of quantized models, and you can train adapters such as LoRA …
    0 points | 56 pages | 835.78 KB | 1 year ago
  • PDF: PyTorch Release Notes

    … tested against each NGC monthly container release to ensure consistent accuracy and performance over time. ResNeXt101-32x4d model: this model was introduced in the Aggregated Residual Transformations for … leverages mixed-precision arithmetic by using Tensor Cores on NVIDIA V100 GPUs for 1.3x faster training time while maintaining target accuracy. This model script is available on GitHub and NGC. Tacotron …
    0 points | 365 pages | 2.94 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    … the model, input data and the hyperparameter trial set are ready. Let's go ahead and train the model, each time choosing one item from the trial set. Each model is trained for 2000 iterations. At the end of a trial … on the hyperparameters for the final training. For large models, this is very expensive in terms of time and resources. Alternatively, we can base the search approach on the budget allocation to cap the … 24s] val_accuracy: 0.6313725709915161; Best val_accuracy So Far: 0.7284313440322876; Total elapsed time: 00h 17m 23s; Results summary: results in hpo/hyperband, showing 3 best trials; Trial summary; Hyperparameters: …
    0 points | 33 pages | 2.48 MB | 1 year ago
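
    As a rough, framework-agnostic illustration of the trial loop described above (train once per item in the hyperparameter trial set and keep the best validation score), here is a minimal sketch; train_and_evaluate is a hypothetical stub and the trial values are placeholders, not the chapter's actual tuner configuration.

        import itertools
        import random

        # Hypothetical trial set: the cross-product of a few hyperparameter choices.
        learning_rates = [1e-2, 1e-3, 1e-4]
        batch_sizes = [32, 64]
        trial_set = list(itertools.product(learning_rates, batch_sizes))

        def train_and_evaluate(lr, batch_size, iterations=2000):
            """Stand-in for real training; returns a fake validation accuracy."""
            random.seed(hash((lr, batch_size)) % (2 ** 32))
            return 0.6 + 0.2 * random.random()

        best_score, best_trial = float("-inf"), None
        for lr, batch_size in trial_set:
            val_accuracy = train_and_evaluate(lr, batch_size)
            print(f"lr={lr}, batch_size={batch_size} -> val_accuracy={val_accuracy:.4f}")
            if val_accuracy > best_score:
                best_score, best_trial = val_accuracy, (lr, batch_size)

        print("best trial:", best_trial, "val_accuracy:", round(best_score, 4))

    The Hyperband output quoted in the snippet comes from a budget-aware search that stops poorly performing trials early, rather than exhausting every trial to completion as this simple loop does.
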
  • PDF: 机器学习课程-温州大学-时间序列总结

    … You can also pass a list containing multiple datetime objects to the index parameter, which likewise creates a Series with a timestamp index: date_list = [datetime(2018, 1, 1), datetime(2018, 1, 15), …]; time_se = pd.Series(np.arange(6), index=date_list). A DataFrame can be given a timestamp index in the same way: time_df = pd.DataFrame(data_demo, index=date_list). Selecting subsets via the timestamp index: the simplest way is to use a positional index to get a specific value, e.g. time_se[3]; you can also look up the value for a date built with datetime, e.g. date_time = datetime(2015, 6, 1); date_se[date_time]; or index directly with a date string in a parseable format, e.g. date_se['20150530'], date_se['2018/01/23']; or, to get the data for a whole year or month, index with just that year or month, e.g. date_se['2015'].
    0 points | 67 pages | 1.30 MB | 1 year ago
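
    Because the slide fragments above are truncated, here is a small self-contained sketch of the same pandas operations; the dates and values are made-up placeholders, and .iloc/.loc are used as the explicit forms of positional and label access in recent pandas versions.

        from datetime import datetime

        import numpy as np
        import pandas as pd

        # A Series with a timestamp index built from a list of datetime objects.
        date_list = [datetime(2018, 1, d) for d in (1, 5, 10, 15, 20, 25)]
        time_se = pd.Series(np.arange(6), index=date_list)

        print(time_se.iloc[3])                      # positional access
        print(time_se.loc[datetime(2018, 1, 15)])   # access by datetime object
        print(time_se.loc['2018/01/20'])            # access by a parseable date string
        print(time_se.loc['2018-01'])               # partial string: all rows in Jan 2018
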
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    … Chapter 2 - Compression Techniques. "I have made this longer than usual because I have not had time to make it shorter." (Blaise Pascal) In the last chapter, we discussed a few ideas to improve the deep … One of the simplest approaches towards efficiency is compression to reduce data size. For the longest time in the history of computing, scientists have worked tirelessly towards storing and transmitting information … Table 2-1: footprint metrics (model size, inference latency on target device, training time for convergence, peak RAM consumption) versus quality metrics (accuracy, precision, recall, F1, AUC).
    0 points | 33 pages | 1.96 MB | 1 year ago
Related search terms: PyTorch Brand Guidelines, Efficient Deep Learning Book (EDL) chapter techniques, 人工智能发展史, Advanced Compression, 深度学习 (deep learning), AI大模型千问 qwen 中文文档, Release Notes, Automation, 机器学习课程-温州大学-时间序列总结