Dynamic Model in TVM
Presenters: Haichen Shen, Yao Wang (Amazon SageMaker Neo, Deep Engine Science, AWS AI). © 2019, Amazon Web Services, Inc. or its Affiliates. All rights reserved.
Snippet: "Supporting dynamic models in TVM: ● Support Any-dim in typing ● Use a shape function to compute the type at runtime ● Virtual …
    input_name = "data"
    input_shape = [tvm.relay.Any(), 3, 224, 224]
    dtype = "float32"
    block = get_model('resnet50_v1', pretrained=True)
    mod, params = relay.frontend.from_mxnet(block, shape={input_name: …"
24 pages | 417.46 KB | 5 months ago
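The Any-dim plus shape-function idea in this snippet can be sketched in plain Python: a dimension stays symbolic at compile time, and a per-op shape function produces the concrete output shape once the real input arrives at runtime. This is an illustration of the concept only, not TVM's API; `ANY` and `dense_shape_func` are invented names.

```python
# Illustrative sketch of TVM-style dynamic shapes; NOT TVM's actual API.
ANY = object()  # stands in for tvm.relay.Any(): a dimension unknown at compile time

def dense_shape_func(data_shape, weight_shape):
    """Shape function for a dense (matmul) op: (N, K) x (M, K) -> (N, M).

    At compile time N may be ANY; at runtime the concrete value is known,
    so the same function yields a fully concrete output shape.
    """
    n, k = data_shape
    m, k2 = weight_shape
    assert k == k2, "reduction dims must match"
    return (n, m)

# Compile time: the batch dimension is symbolic.
static_out = dense_shape_func((ANY, 128), (10, 128))
assert static_out[0] is ANY and static_out[1] == 10

# Runtime: the actual batch size arrives, and the output shape becomes concrete.
runtime_out = dense_shape_func((32, 128), (10, 128))
print(runtime_out)  # -> (32, 10)
```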
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DeepSeek-AI (research@deepseek.com)
Snippet: "Abstract: We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and … DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models. The model checkpoints are available at https://github.com/deepseek-ai/Deep… Contents: Related Work, 21; A. Contributions and Acknowledgments, 27; B. DeepSeek-V2-Lite: A 16B Model Equipped with MLA and DeepSeekMoE, 29; B.1 Model Description …"
52 pages | 1.23 MB | 1 year ago
OpenAI, "A practical guide to building agents"
Snippet: "… design foundations. In its most fundamental form, an agent consists of three core components: 01 Model: the LLM powering the agent's reasoning and decision-making. 02 Tools: external functions or APIs the … the workflow. Not every task requires the smartest model: a simple retrieval or intent classification task may be handled by a smaller, faster model, while harder tasks, like deciding whether to approve a refund, may benefit from a more capable model. An approach that works well is to build your agent prototype with the most capable model for every task to establish a performance baseline. From there …"
34 pages | 7.00 MB | 6 months ago
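The routing advice in this excerpt (small model for simple tasks, capable model for high-stakes ones, most capable model everywhere while prototyping) can be sketched as follows. The model names and route table are illustrative assumptions, not from the guide.

```python
# Hypothetical model-routing sketch; names and routes are invented.
SMALL_MODEL = "small-fast-model"
LARGE_MODEL = "large-capable-model"

ROUTES = {
    "classify_intent": SMALL_MODEL,  # simple task -> cheaper, faster model
    "retrieve_doc": SMALL_MODEL,
    "approve_refund": LARGE_MODEL,   # high-stakes decision -> capable model
}

def pick_model(task, baseline=False):
    """While prototyping (baseline=True), use the most capable model for every
    task to establish a quality baseline; afterwards, route by task."""
    if baseline:
        return LARGE_MODEL
    return ROUTES.get(task, LARGE_MODEL)  # default to capable when unsure

print(pick_model("classify_intent"))        # -> small-fast-model
print(pick_model("classify_intent", True))  # -> large-capable-model
```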
DeepSeek从入门到精通 (20250204) ["DeepSeek from Beginner to Master"]
Snippet (translated from Chinese): "… apply 'multi-angle' prompts to explore different perspectives; 3. use 'deepening' prompts to expand initial ideas; 4. design 'reversal' prompts to find alternative solutions. Prompt-chain design for divergent thinking rests on theories of creative cognition: according to the Geneplore model (Generate-Explore), creative thinking comprises two main phases … Prompt-chain design for convergent thinking follows the FOCUS framework: Filter (evaluate and select the best ideas), Optimize (refine the selected ideas) … Suppose you need to write an article on 'climate change' aimed at 'raising public awareness and prompting action': assertive, directive, commissive, expressive, declarative [speech-act types]. Topic Focus Mechanism (TFM): locking onto core content. Theoretical basis: TFM draws on prototype theory and frame semantics from cognitive linguistics … TFM implementation steps: 1. Define the topic prototype: list the topic's key features and representative examples …"
104 pages | 5.37 MB | 8 months ago
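The FOCUS-style chain described in this excerpt can be sketched as a simple prompt-template pipeline: each step's template is filled in and the resulting prompts are issued in order. The templates below are invented for illustration and are not the document's wording.

```python
# Illustrative two-step convergent-thinking chain (Filter, then Optimize).
# Step names follow the excerpt's FOCUS framework; template text is invented.
FOCUS_STEPS = [
    ("Filter", "From the ideas below, select the {n} strongest for the goal "
               "'{goal}' and justify each choice:\n{ideas}"),
    ("Optimize", "Improve the selected ideas so they better serve the goal "
                 "'{goal}'. For each, state one concrete refinement."),
]

def build_chain(goal, ideas, n=3):
    """Return the ordered list of prompts forming the chain."""
    joined = "\n".join(f"- {i}" for i in ideas)
    # str.format ignores unused keyword arguments, so every template
    # can be filled with the same context dictionary.
    return [tmpl.format(n=n, goal=goal, ideas=joined) for _, tmpl in FOCUS_STEPS]

chain = build_chain(
    goal="raise public awareness of climate change and prompt action",
    ideas=["personal carbon stories", "local impact maps", "policy explainers"],
)
print(len(chain))  # -> 2
```

Each prompt in `chain` would then be sent to the model in turn, feeding earlier answers into later steps.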
清华大学 DeepSeek 从入门到精通 (Tsinghua University, "DeepSeek from Beginner to Master")
Snippet (translated from Chinese): essentially the same excerpt as the previous entry: prompt-chain design for divergent thinking based on the Geneplore model (Generate-Explore); convergent-thinking chains built on the FOCUS framework (Filter: evaluate and select the best ideas; Optimize: refine the selected ideas); a worked example on a 'climate change' article; the five speech-act types (assertive, directive, commissive, expressive, declarative); and the Topic Focus Mechanism (TFM), grounded in prototype theory and frame semantics from cognitive linguistics.
103 pages | 5.40 MB | 8 months ago
Google, "Prompt Engineering v7"
Snippet: "… writing styles, 59; For few-shot prompting with classification tasks, mix up the classes, 59; Adapt to model updates, 60; Experiment with output formats, 60; JSON Repair, 61; Working with Schemas, 62; Experiment … When thinking about a large language model's input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. … can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model's training data, the model configurations, your word choice, style and tone, structure, and context …"
68 pages | 6.50 MB | 6 months ago
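The "mix up the classes" tip from this excerpt can be sketched as follows: when assembling a few-shot classification prompt, shuffle the examples so the model never sees all of one class grouped together. The example texts and labels are invented.

```python
# Sketch of a few-shot classification prompt builder that interleaves classes.
import random

def build_few_shot(examples, text, seed=0):
    """examples: list of (text, label) pairs. Shuffle so classes are mixed
    rather than grouped; a fixed seed keeps the prompt reproducible."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    lines = [f"Text: {t}\nLabel: {l}" for t, l in shuffled]
    lines.append(f"Text: {text}\nLabel:")  # the model completes this label
    return "\n\n".join(lines)

prompt = build_few_shot(
    [("great product", "positive"), ("terrible service", "negative"),
     ("love it", "positive"), ("waste of money", "negative")],
    "arrived quickly and works",
)
print(prompt.endswith("Label:"))  # -> True
```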
Trends – Artificial Intelligence
Snippet: "Change Happening Faster Than Ever? Yes, It Is • AI User + Usage + CapEx Growth = Unprecedented • AI Model Compute Costs High / Rising + Inference Costs Per Token Falling = Performance Converging + Developer … [chart residue omitted: USA LLM #1, USA LLM #2, and China series over 2/24–4/25; 'Details on Page 293'] …"
340 pages | 12.14 MB | 4 months ago
OpenAI, "AI in the Enterprise"
Snippet: "They started with three model evals: 01 Language translation: measuring the accuracy and quality of translations produced by a model. 02 Summarization: evaluating how a model condenses information, using … resilient to change. Evals are built around tasks that measure the quality of a model's output against a benchmark: is it more accurate? More compliant? Safer? Your key metrics will depend on … more tokens. To increase efficiency, OpenAI and Indeed worked together to fine-tune a smaller GPT model that was able to deliver similar results with 60% fewer tokens. Helping job seekers find the …"
25 pages | 9.48 MB | 5 months ago
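The eval loop this excerpt describes (score model outputs against a benchmark on a key metric) reduces to something like the following. The exact-match grader and the stub "model" are illustrative stand-ins; real evals would call an actual model and use task-appropriate graders.

```python
# Minimal eval-harness sketch: run a model over benchmark cases, report accuracy.
def run_eval(model_fn, cases):
    """cases: list of (input, expected). Returns accuracy in [0, 1]."""
    hits = sum(1 for x, want in cases if model_fn(x) == want)
    return hits / len(cases)

# A stub standing in for a real model/API call.
def echo_upper(text):
    return text.upper()

cases = [("hola", "HOLA"), ("bonjour", "BONJOUR"), ("ciao", "ciao")]
accuracy = run_eval(echo_upper, cases)
print(accuracy)  # -> 0.6666666666666666 (2 of 3 exact matches)
```

Swapping in a smaller fine-tuned model, as in the Indeed example, then becomes a matter of rerunning the same cases and comparing the metric.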
XDNN TVM - Nov 2019 (Xilinx)
Snippet: "© Copyright 2018 Xilinx. Inference flow: MxNet; CPU layers / FPGA layers; runtime; image, model weights, calibration set; quantizer; compiler; tensor-graph optimization framework … Example of an FPGA node:
    lambda ins, outs: tvm.call_packed(
        'tvm.accel.accel_fused', attrs['path'], attrs['output_layout'],
        attrs['model_name'], outs[0], *ins),
    name=name)
    return out
… Xilinx performance pipelines; references to our latest results: https://github.com/Xilinx/AI-Model-Zoo (embedded, i.e. ZCU104/Ultra96), https://github.com/Xilinx/ml-suite/blob/master/examples/caffe/Benchmark_README …"
16 pages | 3.35 MB | 5 months ago
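The `tvm.call_packed` fragment above dispatches, by name, to a function registered with the runtime, which in turn hands the tensors to the external accelerator. The idea can be sketched in plain Python with an invented registry; this is not TVM's implementation, and the function name and summing "accelerator" are made up.

```python
# Toy name-based dispatch registry mimicking the call_packed idea.
_PACKED_FUNCS = {}

def register_packed(name):
    """Decorator that registers a function under a string name."""
    def deco(fn):
        _PACKED_FUNCS[name] = fn
        return fn
    return deco

def call_packed(name, *args):
    """Look up a registered function by name and invoke it."""
    return _PACKED_FUNCS[name](*args)

@register_packed("accel.fused")
def accel_fused(out, *ins):
    # Stand-in for the FPGA runtime: sum the inputs elementwise into `out`.
    for i in range(len(out)):
        out[i] = sum(t[i] for t in ins)
    return out

out = [0, 0, 0]
call_packed("accel.fused", out, [1, 2, 3], [10, 20, 30])
print(out)  # -> [11, 22, 33]
```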
Facebook -- TVM AWS Meetup Talk
Snippet: "… methods not delivering generalized performance. Why TVM? TVM for speech synthesis: WaveRNN-style model architecture; autoregressive sampling net running faster than real time; compute split between … First PyTorch model used a 3,400 us sampling-net runtime (image from LPCNet). 'Exit, Pursued By A Bear': 3,400 us (baseline) vs. 40 us (target), an 85x speedup. 'Enter, TVM and model co-design': PyTorch WaveRNN, Sparse Transformers, etc.; reduce precision with int8/float16; very helpful to keep the model in core-private L1 d-caches; use rational approximations for transcendentals (exp, tanh, erf, etc.) …"
11 pages | 3.08 MB | 5 months ago
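The last bullet, rational approximations for transcendentals, can be made concrete with the classic [3/2] Padé approximant of tanh, which replaces the transcendental with one multiply-heavy ratio. The function name is mine, and the accuracy shown holds for small |x| only.

```python
# [3/2] Pade approximant of tanh: x*(15 + x^2) / (15 + 6*x^2).
# Matches the Taylor series x - x^3/3 + 2x^5/15 through the x^5 term,
# so it is very accurate near zero and cheap to evaluate (no exp calls).
import math

def tanh_pade(x):
    x2 = x * x
    return x * (15.0 + x2) / (15.0 + 6.0 * x2)

# Within ~1e-2 of math.tanh over roughly |x| <= 1.5.
for x in (-1.5, -0.5, 0.0, 0.5, 1.0, 1.5):
    assert abs(tanh_pade(x) - math.tanh(x)) < 1e-2

print(round(tanh_pade(1.0), 4))  # -> 0.7619
```

For larger |x| a real kernel would clamp the output toward ±1 or switch to a wider approximant; error grows as the ratio saturates more slowly than tanh.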
22 results in total.