Two significant trends of the past decade have had an enormous effect on the field of software development. First, the continued decrease in the cost of performing computations is no longer driven by increases in clock speed and transistor density, as illustrated by Figure 19.1. Rather, the cost of computation is now falling because it has become economical to make hardware containing multiple CPUs.
Second, computations now routinely involve enormous latency. Latency is, simply put, the amount of time required to obtain a desired result. There are two principal causes of latency. Processor-bound latency occurs when the computational task itself is large; if a computation requires performing 12 billion arithmetic operations and the total processing power available is only 6 billion operations per second, at least 2 seconds of processor-bound latency will be incurred between asking for the result and obtaining it. I/O-bound latency, by contrast, is latency incurred by the need to obtain data from an external source such as a disk drive or a web server. Any computation that requires fetching data from a web server physically located far from the client machine will incur latency equivalent to millions of processor cycles.
These two trends together create an enormous challenge for modern software developers. Given that machines have more computing power than ever, how are we to make effective use of that power to deliver results to the user quickly and without compromising on the user experience? How do we avoid creating frustrating user interfaces that freeze up when a high-latency operation is triggered? Moreover, how do we go about splitting CPU-bound work among multiple processors to decrease the time required for the computation?
The standard technique for engineering software that keeps the user interface responsive and CPU utilization high is to write multithreaded programs that perform multiple computations in parallel. Unfortunately, multithreading logic is notoriously difficult to get right; we spend the next four chapters exploring what makes multithreading difficult and learning how to use higher-level abstractions and new language features to ease that burden.
The first higher-level abstraction was the Parallel Extensions library, which was released with .NET 4.0. It includes the Task Parallel Library (TPL), which is discussed in this chapter, and Parallel LINQ (PLINQ), which is discussed in Chapter 21. The second higher-level abstraction is the Task-based Asynchronous Pattern (TAP) and its accompanying language support.
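To give a first taste of the TPL abstraction just mentioned, here is a minimal sketch (assuming a console application targeting .NET 4.5 or later, where `Task.Run` is available) that offloads a CPU-bound computation to the thread pool; the details are covered later in this chapter:

```csharp
using System;
using System.Threading.Tasks;

class TaskSketch
{
    static void Main()
    {
        // Task.Run queues CPU-bound work to a thread pool thread,
        // leaving the calling thread free to do other work.
        Task<long> sum = Task.Run(() =>
        {
            long total = 0;
            for (int i = 1; i <= 1000000; i++)
                total += i;
            return total;
        });

        // Accessing Result blocks until the task completes.
        Console.WriteLine(sum.Result); // 500000500000
    }
}
```

The point is not the arithmetic but the shape of the API: the work is expressed as a delegate handed to the library, and the TPL decides how to schedule it across the available processors.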
Although I strongly encourage you to use these higher-level abstractions, I also cover some of the lower-level threading APIs from previous versions of the .NET runtime at the end of this chapter. Thus, if you need to fully understand multithreaded programs written before these later features were available, you still have access to that material.
I begin this chapter with a few beginner topics in case you are new to multithreading.