Partitioning Algorithm
A partitioning algorithm allows applications to optimize parallel algorithms using different scheduling methods, such as static partitioning, dynamic partitioning, and guided partitioning.
Define a Partitioner for Parallel Algorithms
A partitioner defines how to partition and distribute iterations to different workers when running parallel algorithms in Taskflow, such as tf::Taskflow::for_each. The following example creates four parallel-iteration tasks using four different partitioners:
tf::Taskflow taskflow;
std::vector<int> data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};

// create different partitioners
tf::GuidedPartitioner guided_partitioner;
tf::StaticPartitioner static_partitioner;
tf::RandomPartitioner random_partitioner;
tf::DynamicPartitioner dynamic_partitioner;

// create four parallel-iteration tasks, one per partitioner
taskflow.for_each(data.begin(), data.end(), [](int i){}, guided_partitioner);
taskflow.for_each(data.begin(), data.end(), [](int i){}, static_partitioner);
taskflow.for_each(data.begin(), data.end(), [](int i){}, random_partitioner);
taskflow.for_each(data.begin(), data.end(), [](int i){}, dynamic_partitioner);
Each partitioner implements a specific algorithm to split iterations into a set of chunks and distribute those chunks to workers. A chunk is the basic unit of work a worker runs during the execution of a parallel-iteration algorithm. The following figure illustrates the scheduling diagrams of the three major partitioners, tf::StaticPartitioner, tf::DynamicPartitioner, and tf::GuidedPartitioner:

Depending on the application, the partitioning algorithm can have a significant impact on performance. For example, if a parallel-iteration workload performs a regular amount of work per iteration, tf::StaticPartitioner typically delivers the best performance because of its low scheduling overhead; if the amount of work per iteration is irregular and unbalanced, tf::GuidedPartitioner or tf::DynamicPartitioner usually outperforms tf::StaticPartitioner by balancing the load across workers.
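To make this concrete, the sketch below (with made-up data and workload lambdas purely for illustration) pairs a roughly constant-cost loop with a static partitioner and a value-dependent-cost loop with a guided partitioner:
tf::Executor executor;
tf::Taskflow taskflow;
std::vector<int> items = {1, 100, 3, 2000, 7, 500, 42, 10000};

// roughly constant cost per iteration: a static partitioner keeps scheduling overhead low
taskflow.for_each(items.begin(), items.end(),
  [](int x){ volatile int y = x + 1; (void)y; },
  tf::StaticPartitioner()
);

// cost grows with the element value: a guided partitioner adapts chunk sizes to balance the load
taskflow.for_each(items.begin(), items.end(),
  [](int x){ volatile long sum = 0; for(int i = 0; i < x; ++i) sum += i; },
  tf::GuidedPartitioner()
);

executor.run(taskflow).wait();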
Define a Static Partitioner
The static partitioner splits iterations into iter_size/chunk_size chunks and distributes the chunks to workers in order. If no chunk size is given (i.e., chunk_size is 0), Taskflow partitions the iterations into chunks that are approximately equal in size. The following code creates a static partitioner with a chunk size of 100:
tf::StaticPartitioner static_partitioner(100);
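As a hypothetical usage sketch (assuming a tf::Taskflow object named taskflow, as in the earlier example), iterating over 1000 elements with this chunk size produces 1000/100 = 10 chunks that are assigned to workers in order:
std::vector<int> data(1000, 0);
tf::StaticPartitioner static_partitioner(100);

// 1000 iterations / chunk size 100 = 10 chunks, distributed to workers in order
taskflow.for_each(data.begin(), data.end(), [](int& d){ d = 1; }, static_partitioner);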
Define a Dynamic Partitioner
The dynamic partitioner splits iterations into iter_size/chunk_size chunks and distributes the chunks to workers on demand, without any specific order. If not specified, the default chunk size is 1. The following code creates a dynamic partitioner with a chunk size of 2:
tf::DynamicPartitioner dynamic_partitioner(2);
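As another hypothetical sketch (again assuming a taskflow object), these 2-iteration chunks are claimed by whichever workers become available, rather than in a fixed order:
std::vector<int> data(1000, 0);
tf::DynamicPartitioner dynamic_partitioner(2);

// 1000 iterations / chunk size 2 = 500 chunks, claimed by idle workers on demand
taskflow.for_each(data.begin(), data.end(), [](int& d){ d = 1; }, dynamic_partitioner);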
Define a Guided Partitioner
The guided partitioner dynamically decides the chunk size. The size of each chunk is proportional to the number of unassigned iterations divided by the number of threads, and it gradually decreases to the specified minimum chunk size (default 1). The last chunk may be smaller than the specified chunk size. The following code creates a guided partitioner with a minimum chunk size of 10:
tf::GuidedPartitioner guided_partitioner(10);
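For illustration only (the exact numbers depend on the number of workers and the scheduler), partitioning 1000 iterations among 4 workers with this partitioner might hand out chunks of roughly 250, 188, 141, ... iterations, shrinking toward the minimum chunk size of 10:
std::vector<int> data(1000, 0);
tf::GuidedPartitioner guided_partitioner(10);

// chunk sizes start near (remaining iterations / number of workers) and shrink toward 10,
// e.g., with 4 workers: ~250, ~188, ~141, ... (illustrative numbers only)
taskflow.for_each(data.begin(), data.end(), [](int& d){ d = 1; }, guided_partitioner);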
In most situations, the guided partitioner achieves decent performance due to its adaptive parallelism, especially for workloads that are irregular and unbalanced across iterations. As a result, the guided partitioner is the default partitioner for our parallel algorithms.
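Accordingly, omitting the partitioner argument falls back to this default. A minimal sketch, assuming the same data and taskflow objects as above:
// no partitioner given: the default (guided) partitioner is used
taskflow.for_each(data.begin(), data.end(), [](int i){});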