Notes for installing Keras on Anaconda3
Excerpt: "…installation works: library(keras); mnist <- dataset_mnist(); train_images <- mnist$train$x; train_labels <- mnist$train$y; test_images <- mnist$test$x; test_labels <- mnist$test$y  # data structure checking"
0 码力 | 3 pages | 654.13 KB | 8 months ago

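The excerpt above is R code flattened onto a single line. Below is a minimal runnable sketch of the same steps, assuming the keras R package and its TensorFlow backend are installed; the str()/dim() calls at the end are an added stand-in for the excerpt's "# data structure checking" comment and are not part of the original.

    # Load the keras R interface and fetch MNIST (downloaded on first use)
    library(keras)
    mnist <- dataset_mnist()

    # Split into training and test arrays, as in the excerpt
    train_images <- mnist$train$x
    train_labels <- mnist$train$y
    test_images  <- mnist$test$x
    test_labels  <- mnist$test$y

    # Data structure checking: expect 60000 x 28 x 28 for train_images
    # and 10000 x 28 x 28 for test_images
    str(train_images)
    dim(test_images)
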
Leveraging the Power of C++ for Efficient Machine Learning on Embedded Devices
Excerpt: "Hand gesture recognition — Data: Google MediaPipe's Rock-Paper-Scissors dataset for hand gesture recognition; contains 125 images for each class; images have various sizes and 3 color channels per pixel (RGB). Laurence Moroney's Rock-Paper-Scissors dataset (published by Sani Kamal on Kaggle); contains 964 images for each class, split into training data (840). Training in Python, on a laptop: Small dataset — 2m15s, 23 MB model; Big dataset — 7m10s, 23 MB model. Test the Rock-Paper-Scissors model using the testing data of the big dataset (124 images per class)."
0 码力 | 51 pages | 1.78 MB | 6 months ago

清华大学 DeepSeek+DeepResearch 让科研像聊天一样简单 (Tsinghua University: DeepSeek + DeepResearch — making research as easy as chatting)
Excerpt (example data-analysis prompts; the original pairs each with a Chinese gloss): "…describe the data?"; "Show me the top trends in a visual format."; "Can you clean this dataset?"; "Can you create a heatmap using this data?"; "Can you segment this data and create a table?"; "…[word] cloud?" (做一个词云, make a word cloud); "Can you create a chart using this data?"; "What are the rows and columns in this dataset?"; "Can you make the graphs more beautiful?"; "Can you write a one-sentence recap of … the main takeaway from this dataset?"; "Can you explain this dataset like I'm 5 years old?"; "Can you create a presentation based on this dataset?"; "Can you create 10 graphs…"
0 码力 | 85 pages | 8.31 MB | 8 months ago

Trends Artificial Intelligence
Excerpt: "…with significant use). Source: Epoch AI (5/25). Training Dataset Size (Number of Words) for Key AI Models, 1950–2025, per Epoch AI; Training Dataset Size – Number of Words: +260% / Year. AI Technology Compounding … Estimates … CapEx Spend – Big Technology Companies = Inflected With AI's Rise … AI Model Training Dataset Size = 250% Annual Growth Over Fifteen Years, per Epoch AI. Note: In AI language models, tokens represent basic units of text (e.g., words or sub-words) used during training. Training dataset sizes are often measured in total tokens processed. A larger token count typically reflects more diverse…"
0 码力 | 340 pages | 12.14 MB | 4 months ago

Can Data-Oriented-Design be Improved?
Excerpt: "How DoD is used in actual code: platform-specific code; procedural/imperative code; problem/dataset-specific code; hand-optimized cache lines and struct layout. How can we improve it? We could … Data-oriented code: platform-specific code; procedural/imperative code; problem/dataset-specific code; hand-optimized cache lines and struct layout. Opposite philosophy: cross-platform…" (the same bullet list is repeated across several slides in the excerpt)
0 码力 | 39 pages | 1.18 MB | 6 months ago

Cooperative C++ Evolution
Excerpt: "…are all of them sorted by how often they appear as user-defined identifiers in that particular dataset: assertexpr 0, ccassert 0, co_assert 0, contract_assert 0, contractassert 0, cppassert 3734 matches … but we also did *not* make yield a full keyword, which has 9533 matches in that dataset, and chose co_yield instead… Where is the bar?" (1) Naming is hard; (2) Compatibility is a major…
0 码力 | 85 pages | 5.73 MB | 6 months ago

TVM Meetup: Quantization
Excerpt: "Quantization within TVM — Automatic Quantization: the TVM stack ingests an FP32 graph and a small dataset, finds suitable quantization scales, and produces a quantized graph. Compiling pre-quantized models…"
0 码力 | 19 pages | 489.50 KB | 5 months ago

清华大学 普通人如何抓住DeepSeek红利 (Tsinghua University: How Ordinary People Can Seize the DeepSeek Dividend)
Excerpt (translated): "Answer: Right — for example, use the datasets library (Hugging Face's datasets library) to load the SQuAD dataset (Stanford Question Answering Dataset), a well-known question-answering dataset built from Wikipedia data, with data predating 2020. AI-hallucination question extraction: loading questions from multiple datasets. Examines how large language models (LLMs) behave when simulating human opinion dynamics and social phenomena (such as polarization and the spread of misinformation), in particular…"
0 码力 | 65 pages | 4.47 MB | 8 months ago

TiDB v8.5 Documentation
Excerpt: "…experience the convenience and high performance of TiDB HTAP by querying an example table in a TPC-H dataset. TPC-H is a popular decision support benchmark that consists of a suite of business-oriented ad-hoc [queries] … for production. 3.2.2.2 Step 2. Prepare test data: In the following steps, you can create a TPC-H dataset as the test data to use TiDB HTAP. If you are interested in TPC-H, see General Implementation Guidelines. ### The directory must be empty, and the storage space must be greater than the size of the dataset to be imported. ### For better import performance, it is recommended to use a directory different…"
0 码力 | 6730 pages | 111.36 MB | 10 months ago

TiDB v8.4 Documentation
Excerpt: identical to the TiDB v8.5 entry above (TPC-H dataset example for TiDB HTAP and test-data preparation).
0 码力 | 6705 pages | 110.86 MB | 10 months ago

37 results in total.