jsc::chunk_evenly: A Range Adaptor for Distributing Work Across Tasks
A range adaptor for distributing work across tasks. Covers an assembly comparison (GCC RISC-V 64-bit: manual_loop(long, long) vs. jsc::chunk_evenly), a code fragment returning std::ptrdiff_t that compares chunk_index against the remainder, avoiding unnecessary waiting (if tasks are scheduled as early as possible, then distributing work evenly across tasks can improve performance), and future directions (supporting random access in jsc::chunk_evenly_view<>).
1 page | 1.38 MB | 6 months ago

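The excerpt's `chunk_index == remainder` fragment hints at the standard even-split arithmetic such an adaptor would perform. The talk's actual implementation isn't shown here; below is a minimal sketch of that arithmetic with names of my own choosing (`chunk_bounds` is not part of jsc):

```cpp
#include <cstddef>
#include <utility>

// Split an n-element range into k chunks whose sizes differ by at most one.
// Chunks with index < (n % k) receive one extra element.
// Returns the half-open [begin, end) element bounds of chunk i.
std::pair<std::ptrdiff_t, std::ptrdiff_t>
chunk_bounds(std::ptrdiff_t n, std::ptrdiff_t k, std::ptrdiff_t i) {
    const std::ptrdiff_t base      = n / k;
    const std::ptrdiff_t remainder = n % k;
    // Earlier chunks absorb the remainder, one element each.
    const std::ptrdiff_t begin = i * base + (i < remainder ? i : remainder);
    const std::ptrdiff_t size  = base + (i < remainder ? 1 : 0);
    return {begin, begin + size};
}
```

For 10 elements over 3 tasks this yields chunks of sizes 4, 3, 3, so no task waits on one oversized straggler.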
Get Off My Thread: Techniques for Moving Work to Background Threads
How do we move work off the current thread? Possible ways: spawn a new thread for each event handler, pass data to a dedicated background thread, submit tasks to a generic thread pool, or submit tasks to a special-purpose executor. There are lots of ways to spawn new threads. On managing thread handles: pending_threads.push_back(std::move(handle)); and completed tasks can be removed by periodically checking:
void check_for_done_threads() {
    for (auto it = pending_threads.begin(); it != pending_threads.end(); /* ... */) { /* ... */ }
}
90 pages | 6.97 MB | 6 months ago

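The excerpt's sweep loop is truncated; a self-contained sketch of the same idea follows. It is my reconstruction, not the talk's code: each spawned thread is paired with a done flag (a name and mechanism I chose) so the periodic check joins only threads that have already finished and never blocks on a running one.

```cpp
#include <atomic>
#include <memory>
#include <thread>
#include <vector>

// Pair each spawned thread with a done flag so a periodic sweep can
// join and erase only the threads that have finished.
struct PendingThread {
    std::thread handle;
    std::shared_ptr<std::atomic<bool>> done;
};

std::vector<PendingThread> pending_threads;

template <class F>
void spawn(F work) {
    auto done = std::make_shared<std::atomic<bool>>(false);
    std::thread t([work, done]() mutable { work(); done->store(true); });
    pending_threads.push_back({std::move(t), std::move(done)});
}

void check_for_done_threads() {
    for (auto it = pending_threads.begin(); it != pending_threads.end();) {
        if (it->done->load()) {
            it->handle.join();                   // already finished: join is quick
            it = pending_threads.erase(it);      // erase returns the next iterator
        } else {
            ++it;
        }
    }
}
```

Call check_for_done_threads() from the thread that owns pending_threads (e.g. once per event-loop tick); the sketch is not itself thread-safe for concurrent spawners.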
Rethinking Task-Based Concurrency and Parallelism for Low Latency C++
Queues for logic (work) are not a good idea: we couple logic and data together as tasks because task queues typically contain more than one type of task (different types of logic). Queues have other shortcomings that further complicate issues, and by their nature they bring tasks to threads, requiring tasks to convey both data and logic, which often leads to task class hierarchies and pointers. The proposed alternative scales better, with approximately 1/2N memory requirement (N = number of nodes). A work contract is an enhanced "task" that separates data from logic: it contains its own logic and supports asynchronous and recurring execution.
142 pages | 2.80 MB | 6 months ago

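To make the "separate data from logic" idea concrete, here is a hypothetical toy version of a work contract, assuming the description above (the real library in the talk differs; class and member names are mine): the contract owns its logic up front, and scheduling merely flips a flag instead of pushing a task object through a queue.

```cpp
#include <atomic>
#include <functional>

// Toy work contract: logic is installed once at construction; scheduling
// only sets a flag, so no task object ever travels through a queue.
class WorkContract {
public:
    explicit WorkContract(std::function<void()> logic)
        : logic_(std::move(logic)) {}

    // Any thread may request execution; cheap and allocation-free.
    void schedule() { scheduled_.store(true, std::memory_order_release); }

    // A worker thread polls its contracts; returns true if the logic ran.
    bool try_execute() {
        if (scheduled_.exchange(false, std::memory_order_acq_rel)) {
            logic_();
            return true;
        }
        return false;
    }

private:
    std::function<void()> logic_;
    std::atomic<bool> scheduled_{false};
};
```

Because the flag can be set again after execution, the same contract naturally supports the recurring execution the talk mentions.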
Taro: Task Graph-Based Asynchronous Programming Using C++ Coroutine
Each worker has a high-priority queue (HPQ) that stores suspended tasks and a low-priority queue (LPQ) that stores new tasks; the goal is to resume a suspended task on the same worker as soon as possible. Example (Worker 1): 1. offload GPU kernels in task A onto the stream; 2. suspend task A; 3. go to sleep. Taro: https://github.com/dian-lun-lin/taro
84 pages | 8.82 MB | 6 months ago

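A sketch of the two-queue discipline described above, stripped of coroutines and GPU streams (the structure and names are my assumption, not Taro's code): resumed tasks land in the HPQ, newly submitted ones in the LPQ, and the worker always drains the HPQ first.

```cpp
#include <deque>
#include <optional>
#include <string>

// Per-worker queues: suspended-then-ready tasks outrank fresh ones,
// so a task that was suspended on this worker resumes here soonest.
struct WorkerQueues {
    std::deque<std::string> hpq;  // suspended tasks that became ready
    std::deque<std::string> lpq;  // newly submitted tasks

    void on_resumed(std::string task)   { hpq.push_back(std::move(task)); }
    void on_submitted(std::string task) { lpq.push_back(std::move(task)); }

    // Next task to run, preferring the high-priority queue.
    std::optional<std::string> next() {
        auto take = [](std::deque<std::string>& q) {
            std::string t = std::move(q.front());
            q.pop_front();
            return t;
        };
        if (!hpq.empty()) return take(hpq);
        if (!lpq.empty()) return take(lpq);
        return std::nullopt;
    }
};
```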
Coroutines and Structured Concurrency in Practice
Task::detach() allows the task to run alongside the rest of the program's uncaught exceptions, but detached tasks are considered harmful: there is no way to figure out task lifetime, so there is no automatic object lifetime management. The talk's examples (all marked "don't do this") detach asio coroutines that reference a local buffer, e.g. void bad(tcp::socket& s) declares a local std::array buffer of 1024 bytes and launches a read completed with asio::detached; even the "slightly better" variants that allocate the buffer (auto buf = ...) and await with asio::use_awaitable still end in asio::detached.
103 pages | 1.98 MB | 6 months ago

Design Patterns for Error Handling in C++ Programs Using Parallel Algorithms and Executors
Eschew raw pointers. Outline: parallelism makes error handling harder, and C++ parallel algorithms and tasks specifically so; MPI (Message Passing Interface) offers 3 decades of distributed-memory parallel programming experience. An ExecutionPolicy permits changes in execution order, so a throw in the loop body leads to terminate (for all policies currently in the Standard). With asynchronous tasks (C++11 async), an uncaught exception in the task gets captured, and waiting on the result throws the passed-along exception. There is a separate path for handling an ancestor task's uncaught exception: when_all expresses a dependency on more than one task, and if more than one parent throws, when_all captures any one exception.
32 pages | 883.27 KB | 6 months ago

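The std::async behaviour described above is easy to demonstrate: the exception thrown inside the task does not escape on the worker thread, but is stored in the future and rethrown from get(). (The helper function below is mine, for illustration.)

```cpp
#include <future>
#include <stdexcept>
#include <string>

// An uncaught exception inside the task is captured by the shared state
// and rethrown on the thread that calls future::get().
std::string run_and_report() {
    auto fut = std::async(std::launch::async, [] {
        throw std::runtime_error("task failed");
    });
    try {
        fut.get();                      // rethrows the captured exception here
        return "no error";
    } catch (const std::runtime_error& e) {
        return e.what();
    }
}
```

Contrast this with a parallel algorithm under an execution policy, where (as the excerpt notes) the same throw would call std::terminate instead of propagating.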
Techniques to Optimise Multi-threaded Data Building During Game Development
Pre-process data to create a read-only world cache for fast queries, at a memory cost, and subdivide the world into regions to process as independent tasks; changing one region should not affect others. Some regions will take longer to process because they contain more data, though the hope is that all jobs are similar. Dealing with exponentially long tasks (a task that takes hours instead of minutes, which could be bad data, a bad algorithm, or a code bug), one method is: 1. build the tasks once; 2. upload to the cache.
99 pages | 2.40 MB | 6 months ago

From Your First Line of Code to Your Largest Repo: How Visual Studio Code Can Help You Develop More Efficiently in C++
Developers report spending time on more satisfying work: 88% feel more productive, and 96% are faster with repetitive tasks. Features covered: inline text suggestions; chat that understands the context of code, workspace, and settings; vcpkg; CMake; the debugger; CMake Tasks (define tasks to automate your workflow instead of manually typing shell commands); GitHub Codespaces; Call Hierarchy (view incoming/outgoing calls); Test Explorer (run tests, set breakpoints); and multi-language support. Presented by Microsoft C++.
31 pages | 2.76 MB | 6 months ago

Deciphering C++ Coroutines
Green threads, aka stackful coroutines: auto spawn_task() { return spawn_green_thread(outer_function); } — everything must be synchronized correctly, and this is not a solution that scales well, though it can be good for a small number of tasks. Spawning up a stack of nested lazy tasks:
template <typename T> struct Async { /* ... */ struct promise_type { /* ... */ std::suspend_always /* ... */ }; };
Async<...> middle_function(); Async<...> outer_function();
with the scheduler propagated between promises via promise().my_scheduler = h.promise().my_scheduler;
156 pages | 1.79 MB | 6 months ago

Back to Basics: Concurrency
1. Concurrency — Definition: multiple things can happen at once, the order matters, and sometimes tasks have to wait on shared resources. 2. Parallelism — Definition: everything happens at once, instantaneously.
141 pages | 6.02 MB | 6 months ago
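The "tasks have to wait on shared resources" part of the concurrency definition is worth one concrete example (mine, not the talk's): two threads make progress concurrently, but each must take its turn on a mutex-guarded counter.

```cpp
#include <mutex>
#include <thread>

// Two tasks run concurrently but serialize on a shared counter:
// each increment waits for the mutex, so no update is lost.
int concurrent_count(int per_thread) {
    int counter = 0;
    std::mutex m;
    auto work = [&] {
        for (int i = 0; i < per_thread; ++i) {
            std::lock_guard<std::mutex> lock(m);  // wait on the shared resource
            ++counter;
        }
    };
    std::thread a(work), b(work);
    a.join();
    b.join();
    return counter;
}
```

Without the lock_guard the two unsynchronized increments would race and the final count could be smaller; with it, the interleaving order may vary but the total is deterministic.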
共 110 条
- 1
- 2
- 3
- 4
- 5
- 6
- 11













