Plug-in Based Software Architecture for Robotics
Outline ● What is plugin architecture? ● Why use plugin architecture? ● Designing a simplified plugin architecture ● Library used in robotics to implement a plugin-based system ○ Pluginlib ● Case study for plugin architecture - MoveIt ● Limitations ● Summary Introduction • Abi Sivaraman • Robotics Engineer at PickNik Robotics • I work with robotic arms • MoveIt Maintainer What is plugin architecture? A software design pattern that allows developers to add functionality to a larger system without having to alter the source code of the system itself. Plug-ins are self-contained …
0 credits | 75 pages | 2.40 MB | 6 months ago
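The excerpt describes the pattern only in prose; as a rough illustration (plain Python with hypothetical names, not the C++ pluginlib API the talk itself covers), a host can define an interface and discover implementations through a registry rather than hard-coding them:

```python
# Illustrative sketch of the plugin pattern in plain Python (hypothetical
# names; the talk itself uses the C++ pluginlib library from ROS).
from abc import ABC, abstractmethod

class MotionPlannerPlugin(ABC):
    """Interface the host system is written against; plugins implement it."""
    @abstractmethod
    def plan(self, start, goal):
        ...

PLUGIN_REGISTRY = {}

def register_plugin(name):
    """Decorator so a plugin can announce itself without the host changing."""
    def wrap(cls):
        PLUGIN_REGISTRY[name] = cls
        return cls
    return wrap

@register_plugin("straight_line")
class StraightLinePlanner(MotionPlannerPlugin):
    def plan(self, start, goal):
        # A trivially simple "planner": go directly from start to goal.
        return [start, goal]

# The host picks an implementation by name at runtime (e.g. from a config
# file), so new planners can be added without touching the host's code.
planner = PLUGIN_REGISTRY["straight_line"]()
print(planner.plan((0.0, 0.0), (1.0, 1.0)))
```

In pluginlib, roughly the same role is played by shared libraries declared in a package's plugin description file instead of a Python dictionary.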
Building API server-side architecture for Beginners
GopherCon - @hgsgtk © BASE, Inc. Talk abstract • A practical approach to build server-side architecture in a Go project Talk structure: Problem of building architecture for beginners • Approach to build architecture • Summary … Why I need server-side architecture: Keep a design easy to change • -> Separate external input/output and business …
0 credits | 38 pages | 690.29 KB | 1 year ago
Real-Time Unified Data Layers: A New Era for Scalable Analytics, Search, and AI
v1.1 Table of Contents: Introduction; 1. The Interconnection of Analytics, Search, and AI; 2. What is a Real-Time Unified Data Layer … unprecedented volumes of data across a growing number of sources and formats, data engineering and architecture teams must design systems that not only scale but also deliver real-time access and insights. … personalize experiences and ensure performance. … 2. The Interconnection of Analytics, Search, and AI: Analytics, search, and AI are deeply interconnected in how they process, interpret, and extract value …
0 credits | 10 pages | 2.82 MB | 5 months ago
The RISC-V Reader: An Open Architecture Atlas, First Edition, 1.0.0 - 2021
… uptake in many different computing sectors. The book also contains many insights about computer architecture in general, as well as explaining the particular design choices we made in creating RISC-V. … the point, and complete. The book’s commentaries provide a gratuitous history, motivation, and architecture critique. —C. Gordon Bell, Microsoft and designer of the Digital PDP-11 and VAX-11 instruction … handy little book effortlessly summarizes all the essential elements of the RISC-V Instruction Set Architecture, a perfect reference guide for students and practitioners alike. —Professor Randy Katz, University …
0 credits | 232 pages | 5.16 MB | 1 year ago
High-Performance Cross-Platform Architecture: C++20 Innovations
… embedded software • Started using C++ in 1995 • First cross-platform project in 1994 … Cross-Platform Architecture Goals • Take advantage of all platforms • Focus on the compiler • Minimize boilerplate and unnecessary … requiring implementations that differ depending upon the target machine architecture. • Features may be hardware: CPU architecture, SIMD instruction set, DMA controller, GPIO module, etc. • Features …
0 credits | 75 pages | 581.83 KB | 6 months ago
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
… plethora of choices that we face when training a deep learning model in the computer vision domain. A Search Space for n parameters is an n-dimensional region such that a point in such a region is a set of values for each of those parameters. The parameters can take discrete or continuous values. It is called a "search" space because we are searching for a point which minimizes (or maximizes) an Evaluation Function … example for choosing quantization and/or clustering techniques for model optimization. We have a search space which has two boolean valued parameters: quantization and clustering. A True value means …
0 credits | 33 pages | 2.48 MB | 1 year ago
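To make the two-parameter example concrete, a minimal sketch (illustrative Python; evaluate() is a placeholder, not the book's training code) can enumerate the boolean search space and keep the best point:

```python
# Minimal sketch of the two-parameter boolean search space mentioned in the
# excerpt (quantization on/off, clustering on/off). evaluate() is a stand-in,
# not the book's actual training/evaluation code.
from itertools import product

search_space = {"quantization": [True, False], "clustering": [True, False]}

def evaluate(point):
    # In practice this would train (or optimize) a model with the chosen
    # techniques and return a metric such as validation accuracy.
    return 0.90 + 0.02 * point["quantization"] + 0.01 * point["clustering"]

# Enumerate every point in the search space and keep the best one.
points = [dict(zip(search_space, values))
          for values in product(*search_space.values())]
best = max(points, key=evaluate)
print(best, evaluate(best))
```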
《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction
… problems. Machine learning in turn is one approach towards artificial intelligence. Deep learning with neural networks has been the dominant methodology of training new machine learning models for the past decade … Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems 25 (2012): 1097-1105. … do linear algebra operations … the ImageNet dataset. 2 Glorot, Xavier, Antoine Bordes, and Yoshua Bengio. "Deep sparse rectifier neural networks." Proceedings of the Fourteenth International Conference on Artificial Intelligence and …
0 credits | 21 pages | 3.17 MB | 1 year ago
2020 Meituan Technology Yearbook: Algorithms (2020美团技术年货 算法篇)
Feature: binaryBusinessTime; ReadKV is an IO-type OP: ReadKV('mtptpoionlinefeatureexp','_id',_id,'ba_search.platform_poi_business_hour_new.binarybusinesstime','STRING') // FeatureA : CtxDateInfo; ParseJSON … and related multi-objective work; we welcome peers in the industry to exchange ideas. References: [1] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems. 2017: 5998-6008. [2] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training … Song W, Shi C, Xiao Z, et al. AutoInt: Automatic feature interaction learning via self-attentive neural networks[C]//Proceedings of the 28th ACM International Conference on Information and Knowledge …
0 credits | 317 pages | 16.57 MB | 1 year ago
2022 Meituan Technology Yearbook: Complete Collection (2022年美团技术年货 合辑)
… Optimal Transport Assignment for Object Detection, https://arxiv.org/abs/2103.14259 [8] Computer Architecture: A Quantitative Approach [9] SIoU Loss: More Powerful Learning for Bounding Box Regression, https:// … CIKM paper: Trilateral Spatiotemporal Attention Network for User Behavior Modeling in Location-based Search [23]. … Figure 19: User behavior sequence network based on the trilateral spatiotemporal attention mechanism. In practical modeling, more online components are involved than in the competition, which focuses mainly on peak accuracy on the offline dataset. … Debiasing … and Yue Hu. 2020. Graph neural architecture search. In IJCAI, Vol. 20. 1403–1409. [12] Matheus Nunes and Gisele L Pappa. 2020. Neural Architecture Search in Graph Neural Networks. In Brazilian Conference …
0 credits | 1356 pages | 45.90 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
… shuffle_weights(bert_classifier) return bert_classifier … Let’s invoke the training with the BERT-Small model architecture, but not its weights (we will set the keep_tfhub_weights parameter to False). bert_small_fro … Using a pre-trained BERT-Base model achieves a best accuracy of 93.97%, while using the same architecture but not the pre-trained model achieves a best accuracy of 90.07%. Refer to figure 6-9. Figure … directly optimize for similarity between and , but the authors found that it was better to add a small neural network referred to as the ‘projection head’ (represented by the function ) to first project the …
0 credits | 31 pages | 4.03 MB | 1 year ago
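The shuffle_weights fragment above refers to a baseline that keeps the model architecture but discards the pre-trained information. A rough sketch of what such a helper could look like (illustrative Keras/NumPy code, not the book's exact implementation):

```python
# Rough sketch of a weight-shuffling baseline like the one the excerpt
# alludes to: keep the architecture, but destroy the pre-trained signal by
# permuting every weight tensor. This is a guess at what such a helper might
# do, not the book's exact shuffle_weights implementation.
import numpy as np
import tensorflow as tf

def shuffle_weights(model, seed=42):
    rng = np.random.default_rng(seed)
    shuffled = [rng.permutation(w.flatten()).reshape(w.shape)
                for w in model.get_weights()]
    model.set_weights(shuffled)
    return model

# Tiny stand-in model; the chapter applies the idea to a BERT classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])
model = shuffle_weights(model)  # same architecture, randomized weights
```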
1,000 results in total













