IT文库

Category

All · Cloud Computing & Big Data (230) · VirtualBox (104) · Apache Kyuubi (44) · Machine Learning (32) · Pandas (26) · Apache Flink (5) · Kubernetes (3) · dapr (3) · rancher (3) · OpenShift (2)

Language

All · English (208) · Chinese (Simplified) (20) · Chinese (Traditional) (1) · Chinese (Simplified) (1)

Format

All · PDF (207) · Other (23)
This search took 0.035 seconds and found about 230 results.
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction

    seeking efficiency in deep learning models. We will also introduce core areas of efficiency techniques (compression techniques, learning techniques, automation, efficient models & layers, infrastructure). Our you just read this chapter, you would be able to appreciate why we need efficiency in deep learning models today, how to think about it in terms of metrics that you care about, and finally the tools at your practical projects. With that being said, let’s start off on our journey to more efficient deep learning models. Introduction to Deep Learning Machine learning is being used in countless applications today.
    0 points | 21 pages | 3.17 MB | 1 year ago
  • PDF: PyTorch Release Notes

    tested on Pascal GPU architectures. ‣ Transformer Engine is a library for accelerating Transformer models on NVIDIA GPUs. It includes support for 8-bit floating point (FP8) precision on Hopper GPUs which an 8X increase in computational throughput over FP32 arithmetic. APEX AMP is included to support models that currently rely on it, but torch.cuda.amp is the future-proof alternative and offers a number recognition (ASR) that provides near state-of-the-art results on LibriSpeech among end-to-end ASR models without external data. This model script is available on GitHub and NGC. ‣ BERT model: Bidirectional
    0 points | 365 pages | 2.94 MB | 1 year ago
  • PDF: keras tutorial

    using Keras. This tutorial walks through the installation of Keras, basics of deep learning, Keras models, Keras layers, Keras modules and finally conclude with some real-time applications. Audience ................................................................................ 52 9. Keras ― Models ................................................................................................ ................................................................... 89 17. Keras ― Pre-Trained Models ................................................................................................
    0 points | 98 pages | 1.57 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    chapter, our focus will be on the techniques that enable us to achieve our quality goals. High quality models have an additional benefit in footprint constrained environments like mobile and edge devices where with samples and labels, distillation transfers knowledge from a large model or ensemble of models to smaller models. The obvious question at this point is: why are we talking about them in the same breadth subsection elaborates it further. Using learning techniques to build smaller and faster efficient models Overall, as summarized in table 3-1, improving sample efficiency enables faster model training,
    0 points | 56 pages | 18.93 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    learning which has been instrumental in the success of natural language models like BERT. Self-Supervised learning helps models to quickly achieve impressive quality with a small number of labels. As while retaining the same labeling costs i.e., training data-efficient (specifically, label efficient) models. We will describe the general principles of Self-Supervised learning which are applicable to both tasks requires new models to be trained from scratch. For models that share the same domain, it is likely that the first few layers learn similar features. Hence training new models from scratch for these
    0 points | 31 pages | 4.03 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    temporal data. These breakthroughs contributed to bigger and bigger models. Although they improved the quality of the solutions, the bigger models posed deployment challenges. What good is a model that cannot will deepdive into their architectures and use them to transform large and complex models into smaller and efficient models capable of running on mobile and edge devices. We have also set up a couple of programming our journey with learning about embeddings in the next section. Embeddings for Smaller and Faster Models We humans can intuitively grasp similarities between different objects. For instance, when we see
    0 points | 53 pages | 3.92 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    lead to degradation in quality. In our case, we are concerned about compressing the deep learning models. What do we really mean by compressing though? As mentioned in chapter 1, we can break down the metrics footprint. In the case of deep learning models, the model quality is often correlated with the number of layers, and the number of parameters (assuming that the models are well-tuned). If we naively reduce work with Tensorflow 2.0 (TF) because it has exhaustive support for building and deploying efficient models on devices ranging from TPUs to edge devices at the time of writing. However, we encourage you to
    0 points | 33 pages | 1.96 MB | 1 year ago
  • PDF: AI Large Model Qianwen (qwen) Chinese Documentation

    series of the Qwen Team, Alibaba Group. Now the large language models have been upgraded to Qwen1.5. Both language models and multimodal models are pretrained on large-scale multilingual and multimodal data "You are a helpful assistant."}, {"role": "user", "content": "Tell me something about large language models."} ], }' Alternatively, you can use the Python client from the openai Python package, as shown below: from openai import OpenAI # Set OpenAI's "You are a helpful assistant."}, {"role": "user", "content": "Tell me something about large language models."}, ] ) print("Chat response:", chat_response) 1.2.3 Next Steps Now you can freely explore the many uses of the Qwen models. To learn more, feel free to consult the other sections of this documentation.
    0 points | 56 pages | 835.78 KB | 1 year ago
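The excerpt above shows a chat request sent to an OpenAI-compatible endpoint serving Qwen, first via curl and then via the openai Python client. As a minimal sketch of the same request body using only the standard library (the model id below is an assumption for illustration, not taken from the excerpt; substitute whatever id your server exposes):

```python
import json

# Recreate the chat request body from the excerpt's curl example.
payload = {
    "model": "Qwen/Qwen1.5-7B-Chat",  # hypothetical model id
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me something about large language models."},
    ],
}
body = json.dumps(payload)
print(body)
```

The same payload is what the openai client builds internally when you call `chat.completions.create` with a custom `base_url`.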
  • PDF: Elasticity and state migration: Part I - CS 591 K1: Data Stream Processing and Analytics Spring 2020

    Queuing theory models: for latency objectives • Control theory models: e.g., PID controller • Rule-based models, e.g. if CPU utilization > 70% => scale out • Analytical dataflow-based models Action Predictive: at-once for all operators 8 ??? Vasiliki Kalavri | Boston University 2020 Queuing theory models 9 • Metrics • service time and waiting time per tuple and per task • total time spent processing predictive, at-once for all operators ??? Vasiliki Kalavri | Boston University 2020 Queuing theory models 9 • Metrics • service time and waiting time per tuple and per task • total time spent processing
    0 points | 93 pages | 2.42 MB | 1 year ago
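The excerpt lists queuing-theory metrics (service time and waiting time per tuple) used for latency objectives. As a hedged sketch, the classical M/M/1 model (our choice for illustration; the course slides may use a different model) gives the expected total time in the system as 1/(mu - lambda):

```python
def mm1_latency(arrival_rate: float, service_rate: float) -> float:
    """Expected time a tuple spends in an M/M/1 queue (waiting + service).

    W = 1 / (mu - lambda); the queue is only stable when lambda < mu.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# e.g. 5 tuples/s arriving at a task that serves 10 tuples/s
print(mm1_latency(5.0, 10.0))  # → 0.2
```

A rule like "scale out when predicted latency exceeds the objective" can then be driven by this estimate.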
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    with an eye towards conceptual understanding as well as practically using them in your deep learning models. We start with sparsity. If your goal was to optimize your brain for storage, you can often trim ensures the decoded value deviates less from the original value and can help improve the quality of our models. Did we get you excited yet? Let’s learn about these techniques together! Model Compression Using of removing (pruning) weights during the model training to achieve smaller models. Such models are called sparse or pruned models. The simplest form of pruning is to zero out a certain, say p, percentage
    0 points | 34 pages | 3.18 MB | 1 year ago
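The excerpt describes the simplest form of pruning: zeroing out the p percent of weights with the smallest magnitude. A minimal pure-Python sketch of that idea (the function name is ours, not from the book):

```python
def magnitude_prune(weights, p):
    """Zero out the fraction p of weights with the smallest absolute value."""
    k = int(len(weights) * p)
    # indices of the k smallest-magnitude weights
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

# 40% of 5 weights: the 2 smallest-magnitude entries are zeroed
print(magnitude_prune([0.5, -0.1, 0.9, 0.05, -0.7], 0.4))
# → [0.5, 0.0, 0.9, 0.0, -0.7]
```

In practice pruning is applied gradually during training so the remaining weights can adapt, but the selection criterion is the same.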
230 results in total
Related searches: Efficient Deep Learning Book · EDL · Chapter · Introduction · PyTorch Release Notes · keras tutorial · Techniques · Advanced Technical Review · Architectures · Compression · AI大模型千问 · qwen · 中文文档 · Elasticity and state migration · Part · CS 591 K1 · Data Stream Processing and Analytics · Spring 2020
Documents on this site are uploaded by users or compiled from the internet, are offered free of charge for anyone to download and study, and are not provided for profit. If your rights have been infringed, please contact us and we will remove the material.
IT文库 ©1024 - 2025
Powered By MOREDOC AI v3.3.0-beta.70