IT文库

Category: All · Miscellaneous (12) · Artificial Intelligence (12) · Backend Development (11) · Julia (10) · Rust (1)
Language: All · Chinese, Traditional (10) · zh (5) · English (3) · [zh] (1) · fj (1) · Japanese (1) · kor (1) · Chinese, Simplified (1)
Format: All · PDF (23)

This search took 0.198 seconds and found about 23 matching results.
  • PDF: Deploy VTA on Intel FPGA

    Accelerated Visual Perception, Liangfu Chen, Harman International Industries, 11/16/2019. Motivation: Moore's Law is slowing down. Hardware: Terasic DE10-Nano board. Software: CMA (Contiguous Memory Allocation) as a Linux kernel module (see https://pynq.readthedocs): set up the environment variables, navigate to 3rdparty/cma, build the kernel module, and copy the kernel module.
    12 pages | 1.35 MB | 5 months ago

  • PDF: TVM Meetup: Quantization

    Compilation flow: target-independent Relay passes produce a target-optimized Int8 Relay graph; target-dependent Relay passes (e.g. layout optimization) then lower it to schedules for Intel x86, ARM CPU, Nvidia GPU, ARM GPU, and more targets, with schedule templates written in TVM Tensor IR and tuned by AutoTVM. Outline: QNN Dialect · Design · Operators · Results on Intel Cascade Lake. Quantized operators … (A hedged code sketch of this flow follows this entry.)
    19 pages | 489.50 KB | 5 months ago

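The excerpt above describes TVM's QNN dialect: quantized operators are expressed as QNN ops, legalized by target-independent Relay passes into an Int8 Relay graph, and then lowered with target-dependent passes for a specific backend. The following is a minimal, hedged sketch of that flow against TVM's Python API; the scale, zero point, and target string are illustrative assumptions, not values from the talk.

```python
# Minimal sketch of the QNN dialect flow (assumes TVM with relay.qnn available).
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 8), dtype="float32")
scale = relay.const(0.05, "float32")   # illustrative quantization scale
zero_point = relay.const(0, "int32")   # illustrative zero point

# Express quantize/dequantize as QNN dialect ops; Relay's QNN passes later
# legalize them into plain integer Relay ops for the chosen target.
q = relay.qnn.op.quantize(data, scale, zero_point, out_dtype="int8")
dq = relay.qnn.op.dequantize(q, scale, zero_point)

mod = tvm.IRModule.from_expr(relay.Function([data], dq))

# Target-dependent lowering, here for an x86 server CPU (e.g. Cascade Lake).
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm -mcpu=cascadelake")
```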
  • PDF: TVM@AliOS

    Agenda: TVM @ AliOS overview; TVM @ AliOS on ARM CPU, Hexagon DSP, and Intel GPU; misc (AliOS: "驱动万物智能", "powering intelligence into everything"). Benchmarks: 2x on MobileNetV2 vs. TFLite and 1.34x vs. QNNPACK; 1.6x vs. OpenVINO on Intel GPU (Apollo Lake Gold); the AliOS AR-Nav product in the Roewe RX5 MAX SUV released with TVM adopted. Models: face landmark, pedestrian & vehicle detection, voice GUI, gesture, LaneNet, NLU, DMS, FaceID, multimodal interaction; CPU (ARM, Intel) … accelerated …
    27 pages | 4.86 MB | 5 months ago

  • PDF: Bring Your Own Codegen to TVM

    Presenters: Zhi Chen and Cody Yu, Amazon SageMaker Neo, Deep Engine Science. Example showcase: the Intel MKL-DNN (DNNL) library. 1. Import packages (import numpy as np; from tvm import relay). 2. Load a pretrained … The Relay runtime (VM, graph runtime, interpreter) dispatches between your codegen's target device and general devices (CPU/GPU/FPGA). Mark supported operators or subgraphs: 1. implement an operator-level annotator, OR 2. implement … (A hedged annotation sketch follows this entry.)
    19 pages | 504.69 KB | 5 months ago

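The excerpt above mentions marking supported operators with an operator-level annotator so subgraphs can be offloaded to an external codegen such as DNNL. Below is a hedged sketch of how that annotate-merge-partition step commonly looks in TVM's BYOC flow; the "dnnl" compiler name matches TVM's bundled DNNL example, but the annotator's exact signature varies across TVM versions, so treat this as an illustration rather than the talk's code.

```python
# Hedged sketch of BYOC operator-level annotation and graph partitioning.
import tvm
from tvm import relay

# Operator-level annotator: returning True offloads nn.conv2d to the "dnnl"
# external codegen. (Some TVM versions pass (attrs, args) instead of the
# whole call expression.)
@tvm.ir.register_op_attr("nn.conv2d", "target.dnnl")
def _conv2d_supported(expr):
    return True

def partition_for_dnnl(mod):
    """Annotate supported ops, merge regions, and split out DNNL subgraphs."""
    seq = tvm.transform.Sequential([
        relay.transform.AnnotateTarget("dnnl"),
        relay.transform.MergeCompilerRegions(),
        relay.transform.PartitionGraph(),
    ])
    return seq(mod)
```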
  • PDF: Trends Artificial Intelligence

    NVIDIA AI ecosystem over four years: >100% growth in developers / startups / apps (source: NVIDIA, 2021 & 2025; NVIDIA computing ecosystem, 2021-2025). Tech CapEx spend has been partly instigated by material improvements in GPU performance: NVIDIA GPU performance up about 225x over eight years on a GPT-MoE inference workload (source: NVIDIA, 5/25; NVIDIA GPU series over time, 2016-2024: Pascal, Volta, Ampere, Hopper, Blackwell). Note: GPU = graphics processing unit.
    340 pages | 12.14 MB | 4 months ago

  • PDF: 亿联TVM部署 (TVM deployment at Yealink)

    … performance gain by autotuning. 3. TVM can support many kinds of hardware platforms: Intel/ARM CPU, Nvidia/ARM GPU, VTA … 1. Get a .log file from AutoTVM on Ubuntu. 2. Use the .log … (A hedged sketch of reusing the tuning log follows this entry.)
    6 pages | 1.96 MB | 5 months ago

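The workflow in the excerpt above (tune with AutoTVM on a development machine, save the .log, then reuse it when building for the target) corresponds to applying the tuning records at compile time. A minimal hedged sketch, assuming a Relay module and params are already loaded; the log file name and target are placeholders.

```python
# Hedged sketch: reuse an AutoTVM tuning log when compiling a Relay model.
import tvm
from tvm import relay, autotvm

def build_with_tuning_log(mod, params, log_file="tune.log", target="llvm"):
    # Pick the best schedules recorded during autotuning, then build as usual.
    with autotvm.apply_history_best(log_file):
        with tvm.transform.PassContext(opt_level=3):
            return relay.build(mod, target=target, params=params)
```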
  • PDF: Julia 1.11.4

    MPI.jl and Elemental.jl provide access to the existing MPI ecosystem of libraries. 4. GPU computing: the Julia GPU compiler provides the ability to run Julia code natively on GPUs; there is a rich ecosystem … array operations distributed across workers, as outlined above. A mention must be made of Julia's GPU programming ecosystem, which includes: 1. CUDA.jl, which wraps the various CUDA libraries and supports compiling … (GNU); others optionally permit placing hidden arguments directly after the character argument (Intel, PGI). For example, Fortran subroutines of the form subroutine test(str1, str2) character(len=*) …
    2007 pages | 6.73 MB | 3 months ago

  • PDF: Julia 1.11.5 Documentation

    Excerpt identical to the Julia 1.11.4 entry above (MPI.jl/Elemental.jl, the Julia GPU compiler and CUDA.jl, and Fortran hidden string-length arguments).
    2007 pages | 6.73 MB | 3 months ago

  • PDF: Julia 1.11.6 Release Notes

    Excerpt identical to the Julia 1.11.4 entry above (MPI.jl/Elemental.jl, the Julia GPU compiler and CUDA.jl, and Fortran hidden string-length arguments).
    2007 pages | 6.73 MB | 3 months ago

  • PDF: Julia 1.12.0 RC1

    Excerpt identical to the Julia 1.11.4 entry above (MPI.jl/Elemental.jl, the Julia GPU compiler and CUDA.jl, and Fortran hidden string-length arguments).
    2057 pages | 7.44 MB | 3 months ago
