Why GCD?
This is the first post in a series about Grand Central Dispatch.
In short: better parallelism.
In the last decade, CPU speed improvements have hit a wall. To continue to get more performance out of the same clock speed, CPU manufacturers have been including more and more cores on their dies (my MacBook Pro has 8 cores and my Mac Pro at work has 24).
The problem is, programs can’t take advantage of these extra cores unless they know how to farm out their work effectively.
Parallelism deals with the problem of doling out multiple jobs to be done at the same time. This is a very hard problem in general, but GCD makes it a little easier to manage.
You may not have a need to write a massively parallel program, but you can still use GCD to make a more responsive program by not having your UI or service wait for jobs to complete. Typically, you want to toss jobs to other cores while your main program continues serving requests asynchronously. GCD makes this, and other similar tasks, easier.
But first, let’s see what we had to work with before we had GCD.
Threads
The traditional way of farming out work to be done in parallel is to use threads. On a multi-core computer, each new thread can be assigned to a different core to run in parallel with other threads.
On a single core, the kernel regularly preempts a running thread to allow other threads to run. This gives the illusion of parallel execution.
But using threads also means living with some significant drawbacks. Passing data into and out of a thread is often a challenge. Passing control or signals from one thread to another is tricky. And for problems that can be solved in a highly parallel manner, figuring out how many threads you should create on the system is not easy.
Because creating and tearing down threads is a relatively expensive affair, many threaded programs end up with a thread pool, a collection of pre-made threads waiting to be used. Of course, you also have to manage these thread pools, which adds complexity to your code.
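To make the data-passing drawback concrete, here is a minimal pthreads sketch (pthreads is the standard POSIX threading API; the worker function and job string here are invented for illustration). Notice that everything crosses the thread boundary as a single untyped pointer that has to be cast by hand:

#import <Foundation/Foundation.h>
#import <pthread.h>

// The routine the new thread runs. pthreads pass data in and out as a
// single void *, so anything richer must be packed up and cast manually.
static void *worker(void *input) {
    @autoreleasepool {
        NSString *job = (__bridge NSString *)input;
        NSLog(@"Working on %@", job);
    }
    return NULL;
}

int main(void) {
    pthread_t thread;
    NSString *job = @"resize the image";
    pthread_create(&thread, NULL, worker, (__bridge void *)job);
    pthread_join(thread, NULL); // Block until the worker finishes.
    return 0;
}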
Synchronization
When you have multiple threads of execution, you invariably run into a class of problems called race conditions, where the result of a computation depends on which thread gets to a piece of data first.
A classic example of a race condition is the “bank account problem”:
@interface BankAccount : NSObject
@property (nonatomic, assign) double balance;
@end

@implementation BankAccount
@end

// ...

// Create an account holding $100, shared by both threads.
BankAccount *account = [[BankAccount alloc] init];
account.balance = 100;

// ...

// Thread 1 - withdraw $10
void thread1() {
    double balance = account.balance;
    account.balance = balance - 10;
}

// ...

// Thread 2 - accrue 10% interest
void thread2() {
    double balance = account.balance;
    account.balance = balance * 1.10;
}

// account.balance = ?
Depending on the order that these statements are executed on the two threads (remember, these threads are running at the same time), the ending account balance will be different.
Assuming threads can only be preempted between the statements you see above, how many possible ending balances can this code snippet produce? What are their values?
Race conditions can occur even on single-core machines, because the kernel can preempt a thread at any point, including between the read and the write in the functions above.
The solution to these kinds of issues traditionally takes the form of synchronization objects. The classic objects include:
- Semaphore - guards a pool of limited resources. Threads wait until a resource becomes available.
- Mutual Exclusion (Mutex) - allows only one thread into a protected section of code at a time. Other threads wait while a thread holds the mutex (see the sketch after this list).
- Condition Variable (Condvar) - threads wait until some condition becomes true.
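As promised, here is a sketch of how a mutex could repair the bank-account race above. It assumes the BankAccount class and the shared account from earlier; the accountLock variable is invented for illustration, and Foundation’s NSLock is just one of several mutex flavors:

// A lock shared by both threads, guarding access to `account`.
static NSLock *accountLock;

// Thread 1 - withdraw $10, now atomically
void thread1() {
    [accountLock lock];
    double balance = account.balance;
    account.balance = balance - 10;
    [accountLock unlock];
}

// Thread 2 - accrue 10% interest, now atomically
void thread2() {
    [accountLock lock];
    double balance = account.balance;
    account.balance = balance * 1.10;
    [accountLock unlock];
}

With the lock held around each read-modify-write pair, only two outcomes remain: the ones you would get by running the threads one after the other, in either order.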
Queues, queues, queues
GCD takes the problems of thread management and synchronization and turns them on their head using a new abstraction: queues.
Queues, simply put, are data structures that can guarantee that tasks are executed in a serial manner. Multiple queues can be created in a program, and while these queues can process tasks in parallel relative to each other, enqueued tasks will be executed sequentially within each queue.
Queues have distinct advantages over threads. Firstly, the library takes care of thread management. Queues are assigned to threads as needed, and threads may be destroyed when they are no longer used. Further, the library can consult with the system to discover the optimal number of threads to be allocated at the time. And last, because queues don’t allocate threads unless they need to, queues are cheap. A program can have dozens of queues without straining the system.
In short: queues give you the benefit of threads, without requiring you to think about threads.
There are three kinds of queues in GCD:
- Serial — Performs tasks one at a time, in the order they are enqueued. One task must finish before the next one is started.
- Concurrent — Starts tasks in the order they are enqueued, but each following task is started concurrently, without waiting for the previous one to finish.
- Main — A pre-made serial queue that executes on the main thread. This queue interoperates with NSRunLoop. Your program always begins execution in this queue.
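For a taste of the API ahead of the next installment, here is how the first two kinds might be created. dispatch_queue_create() is the real GCD call; the label strings are arbitrary reverse-DNS identifiers used only for debugging:

// A serial queue: enqueued blocks run one at a time, in FIFO order.
dispatch_queue_t serialQueue =
    dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

// A concurrent queue: blocks start in FIFO order but may run simultaneously.
dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

// The main queue is never created by you; you simply ask for it.
dispatch_queue_t mainQueue = dispatch_get_main_queue();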
Blocks
To use queues, you enqueue blocks.
You can also enqueue function invocations on queues, but this series of posts will focus exclusively on GCD’s block-based API.
Blocks are pieces of code that can capture their context. Here’s an example:
#import <Foundation/Foundation.h>

typedef void (^Block)(void);

Block printer(NSString *string) {
    return ^{
        NSLog(@"%@", string);
    };
}

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        Block a = printer(@"Hello");
        Block b = printer(@"World");
        a(); // prints "Hello"
        b(); // prints "World"
        a(); // prints "Hello" again
    }
    return 0;
}
In the example above, printer() returns a block that captures the string parameter. Each time this block is called, the code is run with the variables that were captured when the block was created.
Hello, Dispatch World!
We now have everything we need to write our first “Hello World” program using GCD. Here it is:
#import <Foundation/Foundation.h>

int main() {
    // Enqueue two blocks on the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"Hello");
    });
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"World");
    });

    // Nothing is printed yet because we are also running in the main queue.
    dispatch_main(); // Prints "Hello", then "World".
    return -1; // Never reached.
}
Let’s see how this code works:
A call to dispatch_async() tells GCD to enqueue a block, then continue execution without waiting for the block to finish. In our example, we call dispatch_async() twice to enqueue two blocks on the main queue.
Note that at this time the program itself is running in the main queue (recall that the main queue exists before your program runs, and that main() is always executed on that queue), so nothing is printed yet. The two blocks that we enqueued are still waiting “behind” the call to main().
To start executing other blocks on the main queue, we call dispatch_main(), which stops the initial execution of main() and begins executing other enqueued blocks on the main queue. First, the block that logs “Hello” is executed. When that block completes, the next block is executed, which prints “World”.
If your program uses Foundation classes, it’s best to use [[NSRunLoop currentRunLoop] run] instead of dispatch_main(), because the latter won’t support run loop sources such as NSTimer.
When the main queue is drained, dispatch_main() simply idles. The program never terminates! If you want the program to quit when it’s done, you must enqueue a call to exit(0) on the main queue. Go ahead, try it!
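One way to do it, sketched against the same program: enqueue a third block before calling dispatch_main(). Because the main queue is serial, it runs only after “Hello” and “World” have printed.

// Enqueued last, so it runs last.
dispatch_async(dispatch_get_main_queue(), ^{
    exit(0); // Quit once the earlier blocks have finished.
});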
Congratulations! You’ve written your first GCD program. In the next installment, we’ll talk about queues as synchronization objects.