These functions support efficient temporal synchronization, background concurrency and data-level concurrency. These same functions can also be used for efficient notification of the completion of asynchronous blocks (a.k.a. callbacks).
For example, a critical section protected by a pthread mutex:

    int r = pthread_mutex_lock(&my_lock);
    assert(r == 0);
    // critical section
    r = pthread_mutex_unlock(&my_lock);
    assert(r == 0);
The dispatch_sync() function may be used with a serial queue to accomplish the same style of synchronization. For example:
    dispatch_sync(my_queue, ^{
        // critical section
    });
In addition to providing a more concise expression of synchronization, this approach is less error-prone: the critical section cannot be exited accidentally without restoring the queue to a reentrant state.
The dispatch_async() function may be used to implement deferred critical sections when the result of the block is not needed locally. Deferred critical sections have the same synchronization properties as the above code, but are non-blocking and therefore more efficient to perform. For example:
    dispatch_async(my_queue, ^{
        // critical section
    });
The dispatch_async() function may also be used to execute background operations on a global concurrent queue. For example:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // background operation
    });
This approach is an efficient replacement for pthread_create(3).
Completion callbacks can be expressed as nested calls to the dispatch_async() function. For example:

    void
    async_read(object_t obj, void *where, size_t bytes,
               dispatch_queue_t destination_queue,
               void (^reply_block)(ssize_t r, int err))
    {
        // There are better ways of doing async I/O.
        // This is just an example of nested blocks.
        dispatch_retain(destination_queue);
        dispatch_async(obj->queue, ^{
            ssize_t r = read(obj->fd, where, bytes);
            int err = errno;
            dispatch_async(destination_queue, ^{
                reply_block(r, err);
            });
            dispatch_release(destination_queue);
        });
    }
As the dispatch framework was designed, we studied recursive locks. We found that the vast majority of recursive locks are deployed retroactively when ill-defined lock hierarchies are discovered. As a consequence, the adoption of recursive locks often mutates obvious bugs into obscure ones. This study also revealed an insight: if reentrancy is unavoidable, then reader/writer locks are preferable to recursive locks. Disciplined use of reader/writer locks enables reentrancy only when reentrancy is safe (the "read" side of the lock).
Nevertheless, if it is absolutely necessary, what follows is an imperfect way of implementing recursive locks using the dispatch framework:
    void
    sloppy_lock(object_t object, void (^block)(void))
    {
        if (object->owner == pthread_self()) {
            return block();
        }
        dispatch_sync(object->queue, ^{
            object->owner = pthread_self();
            block();
            object->owner = NULL;
        });
    }
The above example does not solve the case where queue A runs on thread X which calls dispatch_sync() against queue B which runs on thread Y which recursively calls dispatch_sync() against queue A, which deadlocks both examples. This is bug-for-bug compatible with nontrivial pthread usage. In fact, nontrivial reentrancy is impossible to support in recursive locks once the ultimate level of reentrancy is deployed (IPC or RPC).
Unlike the asynchronous functions, dispatch_sync() does not need to retain the target queue beyond the duration of the call; it is therefore valid to release the queue immediately after the synchronous call returns. For example:

    queue = dispatch_queue_create("com.example.queue", NULL);
    assert(queue);
    dispatch_sync(queue, ^{
        do_something();
        //dispatch_release(queue); // NOT SAFE -- dispatch_sync() is still using 'queue'
    });
    dispatch_release(queue); // SAFELY balanced outside of the block provided to dispatch_sync()
This is in contrast to asynchronous functions which must retain both the block and target queue for the duration of the asynchronous operation (as the calling function may immediately release its interest in these objects).
The dispatch_async() function is a wrapper around dispatch_async_f(). The application-defined context parameter is passed to the function when it is invoked on the target queue.

The dispatch_sync() function is a wrapper around dispatch_sync_f(). The application-defined context parameter is passed to the function when it is invoked on the target queue.