dispatch_apply

Section: C Library Functions (3)

Darwin  

NAME

dispatch_apply - schedule blocks for iterative execution  

SYNOPSIS

#include <dispatch/dispatch.h>

void
dispatch_apply(size_t iterations, dispatch_queue_t queue,
        void (^block)(size_t));

void
dispatch_apply_f(size_t iterations, dispatch_queue_t queue, void *context,
        void (*function)(void *, size_t));

DESCRIPTION

The dispatch_apply() function provides data-level concurrency through a "for (;;)"-loop-like primitive:
size_t iterations = 10;

// 'idx' is zero indexed, just like:
// for (idx = 0; idx < iterations; idx++)

dispatch_apply(iterations, DISPATCH_APPLY_AUTO, ^(size_t idx) {
        printf("%zu\n", idx);
});

Although any queue can be used, it is strongly recommended to use DISPATCH_APPLY_AUTO as the queue argument to both dispatch_apply() and dispatch_apply_f(), as shown in the example above, since this allows the system to automatically use worker threads that match the configuration of the current thread as closely as possible. No assumptions should be made about which global concurrent queue will be used.

Like a "for (;;)" loop, the dispatch_apply() function is synchronous. If asynchronous behavior is desired, wrap the call to dispatch_apply() with a call to dispatch_async() against another queue.
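For example, a minimal sketch of that pattern (the helper name, completion block, and the choice of global queue are illustrative, not part of the libdispatch API):

#include <dispatch/dispatch.h>
#include <stdio.h>

/* Hypothetical helper: runs the iterations off the calling thread and
 * notifies 'done_queue' once every index has been processed. */
void
apply_async(size_t count, dispatch_queue_t done_queue, void (^done)(void))
{
        dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
                /* dispatch_apply blocks this worker thread, not the caller. */
                dispatch_apply(count, DISPATCH_APPLY_AUTO, ^(size_t idx) {
                        printf("%zu\n", idx);
                });
                dispatch_async(done_queue, done);
        });
}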

Sometimes, when the block passed to dispatch_apply() is simple, the use of striding can tune performance. Calculating the optimal stride is best left to experimentation. Start with a stride of one and work upwards until the desired performance is achieved (perhaps using a power of two search):

#define STRIDE 3

/* Each invocation of the block handles STRIDE consecutive indices. */
dispatch_apply(count / STRIDE, DISPATCH_APPLY_AUTO, ^(size_t idx) {
        size_t j = idx * STRIDE;
        size_t j_stop = j + STRIDE;
        do {
                printf("%zu\n", j++);
        } while (j < j_stop);
});

/* Handle any remainder when count is not a multiple of STRIDE. */
size_t i;
for (i = count - (count % STRIDE); i < count; i++) {
        printf("%zu\n", i);
}
 

IMPLIED REFERENCES

Synchronous functions within the dispatch framework hold an implied reference on the target queue. In other words, the synchronous function borrows the reference of the calling function (this is valid because the calling function is blocked waiting for the result of the synchronous function, and therefore cannot modify the reference count of the target queue until after the synchronous function has returned).

This is in contrast to asynchronous functions which must retain both the block and target queue for the duration of the asynchronous operation (as the calling function may immediately release its interest in these objects).  

FUNDAMENTALS

dispatch_apply() and dispatch_apply_f() attempt to quickly create enough worker threads to efficiently iterate work in parallel. By contrast, a loop that passes work items individually to dispatch_async() or dispatch_async_f() will incur more overhead and does not express the desired parallel execution semantics to the system, so may not create an optimal number of worker threads for a parallel workload. For this reason, prefer to use dispatch_apply() or dispatch_apply_f() when parallel execution is important.

The dispatch_apply() function is a wrapper around dispatch_apply_f().
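For illustration, a minimal sketch of the function-pointer variant (the context structure and worker function names are invented for this example):

#include <dispatch/dispatch.h>

struct ctx {
        const double *in;
        double *out;
};

/* Worker invoked once per index, receiving the shared context. */
static void
square_one(void *context, size_t idx)
{
        struct ctx *c = context;
        c->out[idx] = c->in[idx] * c->in[idx];
}

void
square_all(const double *in, double *out, size_t count)
{
        struct ctx c = { in, out };
        /* Same semantics as dispatch_apply, without requiring Blocks. */
        dispatch_apply_f(count, DISPATCH_APPLY_AUTO, &c, square_one);
}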

CAVEATS

Unlike dispatch_async(), a block submitted to dispatch_apply() is expected to be either independent or dependent only on work already performed in lower-indexed invocations of the block. If the block's index dependency is non-linear, it is recommended to use a for-loop around invocations of dispatch_async().
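A minimal sketch of that for-loop alternative follows; the use of a dispatch group to wait for completion, the queue choice, and the function name are assumptions made for illustration:

#include <dispatch/dispatch.h>
#include <stdio.h>

void
submit_with_loop(size_t count)
{
        dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);
        dispatch_group_t group = dispatch_group_create();
        size_t idx;

        for (idx = 0; idx < count; idx++) {
                dispatch_group_async(group, q, ^{
                        /* Each block may coordinate with the others however the
                         * (non-linear) dependency structure requires. */
                        printf("%zu\n", idx);
                });
        }

        /* The loop itself does not block; wait for the batch explicitly. */
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        dispatch_release(group);
}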

SEE ALSO

dispatch(3), dispatch_async(3), dispatch_queue_create(3)


 
