Dynamic Model in TVM
Presenters: Haichen Shen, Yao Wang (Amazon SageMaker Neo, Deep Engine Science, AWS AI). © 2019, Amazon Web Services, Inc. or its Affiliates. All rights reserved.
Models with dynamic shapes:
○ Dynamic inputs: batch size, image size, sequence length, etc.
○ Output shapes of some ops are data dependent: arange, nms, etc.
○ Control flow: concatenation within a while loop
Limitations of TVM for graph models. Supporting dynamic models in TVM:
● Support Any-dim in typing
● Use a shape function to compute the type at runtime
● Virtual ...
24 pages | 417.46 KB | 5 months ago
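The "shape function" bullet - computing an op's output type at runtime because it depends on input values, not just input shapes - can be illustrated with arange, one of the ops the snippet names. This is a conceptual plain-Python sketch (hypothetical names, not TVM's actual API; assumes a positive step):

```python
import math

def arange_shape_func(start, stop, step):
    """Output shape of arange(start, stop, step), resolvable only at
    runtime because it depends on the input *values* (hypothetical
    name, not TVM's API; assumes step > 0)."""
    n = max(0, math.ceil((stop - start) / step))
    return (n,)

def arange(start, stop, step):
    # Resolve the Any-dim via the shape function, then fill the buffer.
    shape = arange_shape_func(start, stop, step)
    return [start + i * step for i in range(shape[0])]
```

For example, `arange_shape_func(0, 10, 2)` yields `(5,)`: the output length cannot be known from the (scalar) input shapes alone, which is why a static type system needs an Any-dim placeholder here.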
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DeepSeek-AI (research@deepseek.com). Abstract: We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and ... DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models. The model checkpoints are available at https://github.com/deepseek-ai/Deep ...
Table of contents (fragment): Experimental Setups ... 11; 3.1.1 Data Construction ... 11; 3.1.2 Hyper-Parameters
52 pages | 1.23 MB | 1 year ago
Trends Artificial Intelligence
... datapoints turned into this beast. As soon as we updated one chart, we often had to update another – a data game of whack-a-mole... a pattern that shows no sign of stopping... and will grow more complex as competition ... related to the artificial intelligence technology evolution is indeed unprecedented, as supported by the data. This document is filled with user, usage and revenue charts that go up-and-to-the-right... often supported ... Change Happening Faster Than Ever? Yes, It Is
• AI User + Usage + CapEx Growth = Unprecedented
• AI Model Compute Costs High / Rising + Inference Costs Per Token Falling = Performance Converging + Developer ...
340 pages | 12.14 MB | 4 months ago
Google "Prompt Engineering v7"
Table of contents (fragment): ... writing styles 59; For few-shot prompting with classification tasks, mix up the classes 59; Adapt to model updates 60; Experiment with output formats 60; JSON Repair 61; Working with Schemas 62; Experiment ...
... language model input and output: a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You don't need to be a data scientist ... can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model's training data, the model configurations, your word-choice, style and tone, structure, and context.
68 pages | 6.50 MB | 6 months ago
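The snippet's tip about mixing up the classes in few-shot classification prompts can be sketched as a small prompt builder. The review texts, labels, and function name below are invented for illustration; the only point taken from the source is shuffling the examples so the model cannot latch onto class order:

```python
import random

def build_few_shot_prompt(examples, query, seed=0):
    """Assemble a classification prompt whose few-shot examples are
    shuffled, per the whitepaper's mix-up-the-classes tip (all example
    data here is hypothetical)."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = examples[:]
    rng.shuffle(shuffled)              # mix up the class order
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in shuffled]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("Great battery life", "POSITIVE"),
    ("Screen cracked in a week", "NEGATIVE"),
    ("Does what it says", "POSITIVE"),
    ("Support never replied", "NEGATIVE"),
]
prompt = build_few_shot_prompt(examples, "Fast shipping, works fine")
```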
OpenAI - AI in the Enterprise
They started with three model evals:
01 Language translation: measuring the accuracy and quality of translations produced by a model.
02 Summarization: evaluating how a model condenses information, using ...
... resilient to change. Evals are built around tasks that measure the quality of the output of a model against a benchmark: is it more accurate? More compliant? Safer? Your key metrics will depend on ...
... employees can focus on the things only people can do. And because AI can process huge amounts of data from many sources, it can create customer experiences that feel more human because they're more relevant ...
25 pages | 9.48 MB | 5 months ago
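The eval pattern the excerpt describes - measuring the quality of a model's output for a task against a benchmark - can be sketched as a minimal harness. Every name below, the toy summarizer "model", and the exact-match scorer are illustrative assumptions, not OpenAI's eval framework:

```python
def run_eval(task_fn, cases, score_fn, threshold=0.9):
    """Run `task_fn` over benchmark cases, score each output against the
    expected answer, and report whether the average clears the threshold
    (illustrative sketch, not a real eval framework)."""
    scores = [score_fn(task_fn(inp), expected) for inp, expected in cases]
    avg = sum(scores) / len(scores)
    return {"average": avg, "passed": avg >= threshold}

# Hypothetical summarization "model": keep only the first sentence.
def toy_summarizer(text):
    return text.split(".")[0] + "."

def exact_match(output, expected):
    return 1.0 if output == expected else 0.0

cases = [
    ("TVM compiles models. It targets many backends.", "TVM compiles models."),
    ("Evals measure quality. They use benchmarks.", "Evals measure quality."),
]
report = run_eval(toy_summarizer, cases, exact_match)
```

The same harness shape works for the translation eval: swap in a translation task function and a scorer (e.g., a similarity metric instead of exact match).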
OpenAI "A practical guide to building agents"
... error-prone, for example performing vendor security reviews. 03 Heavy reliance on unstructured data: scenarios that involve interpreting natural language, extracting meaning from documents, or interacting ...
... design foundations. In its most fundamental form, an agent consists of three core components: 01 Model: the LLM powering the agent's reasoning and decision-making. 02 Tools: external functions or APIs the ...
... the workflow. Not every task requires the smartest model: a simple retrieval or intent classification task may be handled by a smaller, faster model, while harder tasks like deciding whether to approve ...
34 pages | 7.00 MB | 6 months ago
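The guide's point that not every task needs the smartest model can be sketched as a trivial router. The model tiers and task names here are hypothetical, chosen only to mirror the examples in the snippet (retrieval and intent classification as simple; approval-style decisions as hard):

```python
def route_model(task_type):
    """Pick a model tier for a task: a small, fast model for simple
    retrieval or intent classification, a stronger model for
    judgment-heavy work (tiers and task names are hypothetical)."""
    simple = {"retrieval", "intent_classification"}
    hard = {"refund_approval", "vendor_security_review"}
    if task_type in simple:
        return "small-fast-model"
    if task_type in hard:
        return "large-reasoning-model"
    return "default-model"
```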
XDNN TVM - Nov 2019
[Slide residue: benchmark labels (VGG16, ResNet-50, GoogleNet-V3; Aristotle on a 7020 FPGA vs. iPhone 8 Plus / Kirin 970) and a block diagram (CPU, memory controller, bus, data mover, image and weight read/write schedulers, smart memory fabric).] © Copyright 2018 Xilinx.
Inference flow: MxNet with CPU layers and FPGA layers under a runtime; an image, model weights, and a calibration set feed a quantizer and compiler, followed by tensor-graph optimization of the framework tensor graph. A truncated Python fragment registers the accelerator op: "... attrs['output_layout'], attrs['model_name'], outs[0], *ins), name=name) return out". Example of an FPGA node in the TVM graph (JSON, truncated): { "nodes": [ { "op": "null", "name": "data", "inputs": [] } ...
16 pages | 3.35 MB | 5 months ago
TVM@AliOS
[Slide residue, partly mis-encoded Chinese; AliOS slogan roughly "driving intelligence for everything". Recoverable items: an accelerated NLU model (2018.10); milestones in 2019.4 and 2019.8 at Yunqi Conf; an AR-Nav product demo; a Lanenet model at 1.6x on Intel; the AliOS TVM architecture; a face-landmark model; upstreaming to master; optimization on INT8 and FP32. AliOS TVM on ARM CPU INT8: QNNPACK-style convolution with NHWC layout and a tensorized GEMM; the remaining cache and data-flow diagram labels are unrecoverable.]
27 pages | 4.86 MB | 5 months ago
TVM Meetup: Quantization
Quantize operator:
fn (%input_data: Tensor[(2, 5), float32]) {
  qnn.quantize(%input_data, out_dtype="uint8", output_zero_point=127, output_scale=0.5f)
}
Lowered form (truncated):
def @main(%input_data: Tensor[(2, 5), float32]) -> Tensor[(2, 5), uint8] {
  %0 = divide(%input_data, 0.5f /* ty=float32 */) /* ty=Tensor[(2, 5), float32] */;
  %1 = round(%0) /* ty=Tensor[(2, 5), float32] */;
  %2 = cast(%1, dtype="int32") /* ty=Tensor[(2 ...
Quantized conv2d:
fn (%data: Tensor[(1, 3, 2, 3), uint8], %weight: Tensor[(3, 3, 2, 2), uint8]) {
  qnn.conv2d(%data, %weight, …, out_dtype="int32", input_zero_point=1, kernel_zero_point=1)
}
def @main(%data: Tensor[(1 ...
19 pages | 489.50 KB | 5 months ago
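The lowering in this snippet divides by the output scale (0.5), rounds, and casts to int32; the snippet truncates before the zero-point shift and uint8 clamp, which the sketch below fills in from the standard affine-quantization formula (plain Python for one value, not TVM code):

```python
def quantize(x, scale=0.5, zero_point=127, qmin=0, qmax=255):
    """Affine quantization mirroring the snippet's lowering: divide by
    scale, round, cast to int, then add the zero point and clamp to the
    uint8 range. The add-and-clamp steps are assumed from the standard
    formula, since the snippet is cut off before them."""
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))
```

With the snippet's parameters, 0.0 maps to the zero point 127, and out-of-range values saturate at 0 or 255.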
TVM@Alibaba AI Labs
[Slide residue: an integer-arithmetic scheme mixing int8, int16, and int32 (the exact formula is garbled in extraction). Benchmark chart: MobileNetV2_1.0_224 on an MTK8167S CPU (ARM32 A35, 1.5 GHz) vs. a PowerVR GPU; axis tick labels dropped.]
PowerVR support by TVM: [architecture diagram labels] NNVM compiler (execution graph, model layer functions, computation-graph optimizations, params); TVM tensor operators (algorithm and schedule); backends with CUDA, Mali, ROCm, and PVR TOPI; machine-learning automated optimizer (schedule explorer plus cost model). PVR TOPI: TOPI for PVR, including what ...
12 pages | 1.94 MB | 5 months ago
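The garbled int8/int16/int32 fragment appears to describe narrow-integer products accumulated into a wider register. A plain-Python sketch of that general idea follows (not Alibaba's actual kernel; the 32-bit wrap-around emulation is an assumption for illustration, since Python integers are unbounded):

```python
def int8_dot(a, b):
    """Dot product of two int8 vectors with an int32 accumulator.
    Each int8*int8 product fits comfortably in int16 (|prod| <= 16384);
    the running sum is kept in an emulated 32-bit register to avoid
    overflowing the narrow types."""
    assert all(-128 <= x <= 127 for x in a + b), "inputs must be int8"
    acc = 0
    for x, y in zip(a, b):
        prod = x * y                        # int16-sized partial product
        acc = (acc + prod) & 0xFFFFFFFF     # wrap like an int32 accumulator
    # Reinterpret the 32-bit pattern as a signed int32.
    return acc - (1 << 32) if acc >= (1 << 31) else acc
```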
26 results in total.