TVM: Where Are We Going, by Tianqi Chen
Excerpt: Current deep learning landscape: frameworks and inference engines, DL compilers, kernel libraries (cuDNN, NNPACK, MKL-DNN; hand-optimized vs. open-source and automated). TVM is an open-source, automated end-to-end optimization framework for deep learning. TVM stack: high-level differentiable IR; tensor expression and optimization search space; LLVM, CUDA, Metal; VTA for edge FPGA, cloud FPGA, and ASIC; AutoTVM optimization with a device fleet. Existing deep learning frameworks build a high-level data flow graph and offload to heavily optimized primitive tensor operators such as Conv2D (e.g. cuDNN).
31 pages | 22.64 MB | 5 months ago
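The excerpt above describes how existing frameworks walk a high-level data flow graph and offload each node to a heavily optimized kernel (e.g. Conv2D to cuDNN). A minimal sketch of that dispatch pattern in plain Python; the registry, graph encoding, and operator names here are hypothetical stand-ins, not TVM or cuDNN APIs:

```python
# Hypothetical sketch: a framework interprets a high-level data flow graph
# and offloads every node to a registered, pre-optimized kernel.
KERNELS = {}

def register(op_name):
    """Register a kernel implementation under an operator name."""
    def wrap(fn):
        KERNELS[op_name] = fn
        return fn
    return wrap

@register("add")
def add_kernel(a, b):
    return [x + y for x, y in zip(a, b)]

@register("relu")
def relu_kernel(a):
    return [max(x, 0.0) for x in a]

def run_graph(graph, inputs):
    """graph: list of (op_name, input_keys, output_key) tuples,
    executed in topological order over a name -> tensor environment."""
    env = dict(inputs)
    for op, in_keys, out_key in graph:
        env[out_key] = KERNELS[op](*[env[k] for k in in_keys])  # offload
    return env

graph = [("add", ("x", "y"), "s"), ("relu", ("s",), "out")]
result = run_graph(graph, {"x": [-3.0, 1.0], "y": [1.0, 2.0]})
print(result["out"])  # -> [0.0, 3.0]
```

In a real framework the registry would map each operator to a vendor library call; TVM's pitch in these slides is to generate and auto-tune those kernels instead of hand-writing them.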
TVM Meetup Nov. 16th - Linaro
Excerpt: Nov. 16th, 2019. Bringing together the Arm ecosystem. Linaro AI Initiative: provide best-in-class deep learning performance by leveraging neural network acceleration in IP and SoCs from the Arm ecosystem, through…
7 pages | 1.23 MB | 5 months ago
PAI & TVM Meetup - Shanghai 20191116
Excerpt: Loss scaling in TF:

    loss = loss_fn()
    opt = tf.train.AdamOptimizer(learning_rate=...)
    # Choose a loss scale manager which decides how to pick the right loss scale
    # throughout…

26 pages | 5.82 MB | 5 months ago
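The truncated snippet above refers to a loss scale manager for mixed-precision training. As a hedged sketch of the underlying idea, independent of the TensorFlow API (the class name, default constants, and update policy below are illustrative): dynamic loss scaling multiplies the loss before backprop so small fp16 gradients do not underflow, divides the gradients back by the same factor, backs the scale off when gradients overflow, and grows it again after a run of clean steps.

```python
class DynamicLossScaler:
    """Illustrative dynamic loss scaler: halve the scale on overflow
    (inf/nan gradients), grow it after `growth_interval` clean steps.
    Constants mirror common defaults but are assumptions, not an API."""

    def __init__(self, init_scale=2.0**15, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=2000):
        self.scale = init_scale
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def scale_loss(self, loss):
        # Applied before computing gradients.
        return loss * self.scale

    def unscale(self, grads):
        # Applied to gradients before the optimizer step.
        return [g / self.scale for g in grads]

    def update(self, found_overflow):
        # Called once per step with the result of the inf/nan check.
        if found_overflow:
            self.scale *= self.backoff_factor
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                self.scale *= self.growth_factor
                self._good_steps = 0

scaler = DynamicLossScaler(init_scale=8.0, growth_interval=2)
scaler.update(found_overflow=True)   # overflow: scale 8.0 -> 4.0
scaler.update(found_overflow=False)
scaler.update(found_overflow=False)  # two clean steps: scale 4.0 -> 8.0
print(scaler.scale)  # -> 8.0
```

The "loss scale manager" in the excerpt plays the role of `update` here: it decides when to shrink or grow the scale over the course of training.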
3 results in total