Dynamic Model in TVM
Presenter: Haichen Shen, Yao Wang, Amazon SageMaker Neo, Deep Engine Science, AWS AI. © 2019, Amazon Web Services, Inc. or its Affiliates. All rights reserved. Support dynamic models in TVM: ● Support Any-dim in typing ● Use shape function to compute the type at runtime ● Virtual ... input_name = "data" input_shape = [tvm.relay.Any(), 3, 224, 224] dtype = "float32" block = get_model('resnet50_v1', pretrained=True) mod, params = relay.frontend.from_mxnet(block, shape={input_name: ...
0 points | 24 pages | 417.46 KB | 5 months ago
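The idea this deck's snippet truncates, leaving a dimension symbolic ("Any") at compile time and computing concrete output types at runtime via a per-op shape function, can be illustrated with a small TVM-free toy. All names below are hypothetical stand-ins, not the Relay API:

```python
# Toy illustration (hypothetical names, not the TVM API): a symbolic
# "Any" dimension plus a per-op shape function evaluated at runtime.

ANY = object()  # stands in for tvm.relay.Any()

def conv2d_shape_fn(data_shape, out_channels, kernel, stride, pad):
    """Shape function: given the runtime input shape, compute the
    concrete output shape of a conv2d (NCHW layout)."""
    n, _, h, w = data_shape
    oh = (h + 2 * pad - kernel) // stride + 1
    ow = (w + 2 * pad - kernel) // stride + 1
    return (n, out_channels, oh, ow)

# Compile time: the batch dimension is Any, spatial dims are fixed.
static_type = (ANY, 3, 224, 224)

# Runtime: the actual batch size arrives with the data, and the shape
# function resolves the full output type.
runtime_shape = (8, 3, 224, 224)
out = conv2d_shape_fn(runtime_shape, out_channels=64, kernel=7, stride=2, pad=3)
print(out)  # (8, 64, 112, 112)
```

In real TVM the shape function is attached to the operator and run by the VM; the toy only shows why a symbolic dimension is enough to type-check the graph ahead of time.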
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DeepSeek-AI (research@deepseek.com). Abstract: We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and ... DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models. The model checkpoints are available at https://github.com/deepseek-ai/Deep... Contents: Related Work, 21; A. Contributions and Acknowledgments, 27; B. DeepSeek-V2-Lite: A 16B Model Equipped with MLA and DeepSeekMoE, 29; B.1 Model Description ...
0 points | 52 pages | 1.23 MB | 1 year ago
Trends Artificial Intelligence
Change Happening Faster Than Ever? Yes, It Is • AI User + Usage + CapEx Growth = Unprecedented • AI Model Compute Costs High / Rising + Inference Costs Per Token Falling = Performance Converging + Developer ... [chart comparing USA LLM #1, USA LLM #2, and China; details on page 293] [chart: adoption/cost timelines spanning 0 to 72 years for Electric Power, Computer Memory, and AI Inference; sources: Richard Hirsh, John McCallum, OpenAI; details on page 138] AI Monetization Threats = Rising Competition + Open-Source Momentum + China's Rise
0 points | 340 pages | 12.14 MB | 4 months ago
XDNN TVM - Nov 2019
[diagram: xDNN accelerator block layout: image/weights read schedulers, PE array, dispatcher, instruction fetcher, decoder, register map, write-back scheduler, control signals, misc. calc, average/max pooling, external memory] © Copyright 2018 Xilinx. Inference Flow: MxNet, CPU Layers, FPGA Layers, Runtime, Image, Model Weights, Calibration Set, Quantizer, Compiler, Tensor Graph Optimization, Framework Tensor Graph to ... ins, outs: tvm.call_packed('tvm.accel.accel_fused', attrs['path'], attrs['output_layout'], attrs['model_name'], outs[0], *ins), name=name) return out ... Example of FPGA node
0 points | 16 pages | 3.35 MB | 5 months ago
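The code fragment in this snippet dispatches an FPGA subgraph through a named runtime function, which is what tvm.call_packed does: look up a registered "packed function" by name and invoke it. A toy registry illustrating that dispatch pattern (hypothetical names, no TVM dependency):

```python
# Toy sketch of the dispatch idea behind tvm.call_packed (hypothetical
# names, no TVM dependency): accelerator ops are lowered to a call into
# a named runtime function looked up in a global registry.

_PACKED_FUNCS = {}

def register_packed(name):
    """Decorator: register fn under `name`, as a runtime would."""
    def wrap(fn):
        _PACKED_FUNCS[name] = fn
        return fn
    return wrap

def call_packed(name, *args):
    """Look up a registered function by name and invoke it."""
    return _PACKED_FUNCS[name](*args)

@register_packed("tvm.accel.accel_fused")
def accel_fused(model_path, output_layout, model_name, out, *ins):
    # Stand-in for launching the fused FPGA subgraph; here we just
    # record what would be dispatched.
    out.append((model_path, output_layout, model_name, len(ins)))
    return out

result = []
call_packed("tvm.accel.accel_fused", "/opt/model", "NCHW", "resnet50", result, "in0", "in1")
print(result)
```

In the real flow the registered function is implemented by the accelerator runtime and receives tensors, not strings; the toy only shows the by-name indirection that lets a compiled graph call out to external hardware.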
PAI & TVM Meetup - Shanghai 20191116
TensorCore Intrinsics • Authored by @Hzfengsy • Intrinsics: tvm_load_matrix_sync, tvm_mma_sync ... • New Memory Scopes: wmma.matrix_a/b, accumulator • Tensorization on warp-level schedule. Motivation: load/store for higher bandwidth utilization • Double buffer to hide memory load latency • Storage align to reduce bank conflicts in shared memory • Virtual threads for data reuse (ongoing). Performance on V100 ... Computing Platform Division. INT8 Inference on PAI-Blade: Model Analysis, Graph Optimization, Blade Graph Optimizer, TensorRT, Customized Optimizer, TAO Compiler
0 points | 26 pages | 5.82 MB | 5 months ago
TVM: Where Are We Going
Automation is the Future: clear winner on emerging models in product; competitive on benchmark-type models; quickly enables other optimizations: fusion, layout, parallelization; portable performance across ... Specialized Accelerators: Tensor Compute Primitives, Unified Buffer, Acc FIFO, Explicitly Managed Memory Subsystem (TPUs). Tensorization Challenge: compute primitives: scalar, vector, tensor. Challenge: Build ...
0 points | 31 pages | 22.64 MB | 5 months ago
Manus AI: Year One of Agents Begins
What makes up an agent stack? • 1⃣ Front-end: ... e.g. Streamlit, Flask, Gradio, Node.js, NEXT.js • 2⃣ Memory: ... e.g. Zep, Memg, Cognee, Letta • 3⃣ Authentication: ... • Orchestration: ... e.g. LangGraph, Autogen, Haystack, Swarm, Multi-agent Orchestrator • 7⃣ Model Routing: ... e.g. Martian, OpenRouter, Not Diamond • 8⃣ Foundational Models: ...
0 points | 23 pages | 4.87 MB | 5 months ago
Google 《Prompt Engineering v7》
Contents excerpt: ... writing styles 59; For few-shot prompting with classification tasks, mix up the classes 59; Adapt to model updates 60; Experiment with output formats 60; JSON Repair 61; Working with Schemas 62; Experiment ... When thinking about a large language model's input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You can ... be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model's training data, the model configurations, your word choice, style and tone, structure, and context
0 points | 68 pages | 6.50 MB | 6 months ago
OpenAI 《A practical guide to building agents》
Agent design foundations: In its most fundamental form, an agent consists of three core components: 01 Model, the LLM powering the agent's reasoning and decision-making; 02 Tools, external functions or APIs the ... the workflow. Not every task requires the smartest model: a simple retrieval or intent-classification task may be handled by a smaller, faster model, while harder tasks, like deciding whether to approve a refund, may benefit from a more capable model. An approach that works well is to build your agent prototype with the most capable model for every task to establish a performance baseline. From there ...
0 points | 34 pages | 7.00 MB | 6 months ago
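The three components this guide names (a model, tools, and instructions) can be sketched as a minimal data structure. Everything below is a hypothetical illustration, not OpenAI's SDK:

```python
# Minimal sketch (hypothetical names) of an agent as the guide describes
# it: a model id, behavioural instructions, and a set of callable tools.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Agent:
    model: str                 # id of the LLM powering reasoning
    instructions: str          # guidance that shapes the agent's behaviour
    tools: Dict[str, Callable] = field(default_factory=dict)  # external functions/APIs

    def call_tool(self, name, *args):
        """Dispatch to a registered tool by name."""
        return self.tools[name](*args)

agent = Agent(
    model="some-llm",
    instructions="Handle refund questions; escalate anything ambiguous.",
    tools={"lookup_order": lambda oid: {"id": oid, "status": "shipped"}},
)
print(agent.call_tool("lookup_order", 42))
```

In a real system the model would choose which tool to call based on the instructions and conversation state; the sketch only fixes the shape of the three-part bundle.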
OpenAI - AI in the Enterprise
They started with three model evals: 01 Language translation, measuring the accuracy and quality of translations produced by a model; 02 Summarization, evaluating how a model condenses information, using ... resilient to change. Evals are built around tasks that measure the quality of a model's output against a benchmark: is it more accurate? More compliant? Safer? Your key metrics will depend on ... more tokens. To increase efficiency, OpenAI and Indeed worked together to fine-tune a smaller GPT model that was able to deliver similar results with 60% fewer tokens. Helping job seekers find the ...
0 points | 25 pages | 9.48 MB | 5 months ago
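The eval pattern described in this snippet, a task plus a benchmark that scores model output, reduces to a small harness. A toy version (hypothetical, not OpenAI's eval tooling):

```python
# Toy eval harness (hypothetical, not OpenAI's tooling): score a model's
# outputs for a task against a benchmark of expected answers.

def run_eval(model_fn, benchmark):
    """benchmark: list of (input, expected) pairs. Returns accuracy in [0, 1]."""
    correct = sum(1 for prompt, expected in benchmark if model_fn(prompt) == expected)
    return correct / len(benchmark)

# A stand-in "model" for illustration: a canned translation table.
canned = {"hello": "bonjour", "cat": "chat", "dog": "chien"}
model_fn = lambda word: canned.get(word, "?")

# The benchmark fixes the metric, so reruns after a model or prompt
# change are directly comparable.
benchmark = [("hello", "bonjour"), ("cat", "chat"), ("bird", "oiseau")]
print(run_eval(model_fn, benchmark))  # 2 of 3 correct
```

Real evals swap in an LLM call for `model_fn` and a graded or rubric-based comparison for the equality check, but the benchmark-as-fixed-yardstick structure is the same.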
共 24 条
- 1
- 2
- 3













