Welcome to Herb Sutter’s Jungle

In an effort to keep posting something here until I’m in the right place mentally to write about things that probably interest you, my dear friends, family, and online diabetes peeps, here’s another computing performance excerpt and link. (Working on this stuff is the 9-5 part of your favorite international playboy’s life.)


More than a half-decade after Herb Sutter wrote that the “free lunch” of ever-faster processors was over, he’s back with his prophet’s wisdom about where we’re going in his January Dr. Dobb’s article, “Welcome to the Jungle”. I’ll give you a moment to decide whether to get the Guns N’ Roses song out of your head or use it as a backdrop for this juicy quotation:

If hardware designers merely use Moore’s Law to deliver more big fat cores, on-device hardware parallelism will stay in double digits for the next decade, which is very roughly when Moore’s Law is due to sputter, give or take about a half decade. If hardware follows Niagara’s and MIC’s lead to go back to simpler cores, we’ll see a one-time jump and then stay in triple digits. If we all learn to leverage GPUs, we already have 1,500-way parallelism in modern graphics cards (I’ll say “cores” for convenience, though that word means something a little different on GPUs) and likely reach five digits in the decade timeframe.

But all of that is eclipsed by the scalability of the cloud, whose growth line is already steeper than Moore’s Law because we’re better at quickly deploying and using cost-effective networked machines than we’ve been at quickly jam-packing and harnessing cost-effective transistors. It’s hard to get data on the current largest cloud deployments because many projects are private, but the largest documented public cloud apps (which don’t use GPUs) are already harnessing over 30,000 cores for a single computation. I wouldn’t be surprised if some projects are exceeding 100,000 cores today. And that’s general-purpose cores; if you add GPU-capable nodes to the mix, add two more zeroes.

The big takeaway for software engineers like me is that we’d best be learning how to develop solutions using the emerging APIs so that we can harness all of those extra orders of magnitude of scalability. (I’ve sketched a couple of toy examples below.) That involves figuring out how to . . .

  • Deal with the processor axis’ lower section [of Sutter's chart] by supporting compute cores with different performance (big/fast, slow/small).
  • Deal with the processor axis’ upper section by supporting language subsets, to allow for cores with different capabilities, including cores that don’t fully support mainstream language features.
  • Deal with the memory axis for computation, by providing distributed algorithms that can scale not just locally but also across a compute cloud.
  • Deal with the memory axis for data, by providing distributed data containers, which can be spread across many nodes.
  • Enable a unified programming model that can handle the entire [memory/locality/processor] chart with the same source code.
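
To make that list a little less abstract, here’s a minimal sketch of the local end of it in plain C++11. Everything in it (the parallel_sum name, the naive chunking scheme) is my own toy illustration, not anything from Herb’s article; the point is that the divide-the-work, combine-the-results shape is the same whether the “cores” are local hardware threads or nodes scattered across a cloud.

    #include <future>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Sum a big vector by splitting the work across however many
    // hardware threads the local machine reports. The same
    // divide/apply/combine shape is what the distributed-algorithm
    // and distributed-container bullets generalize across nodes.
    double parallel_sum(const std::vector<double>& data) {
        unsigned workers = std::thread::hardware_concurrency();
        if (workers == 0) workers = 1;   // the runtime may not know
        std::size_t chunk = data.size() / workers;

        std::vector<std::future<double>> parts;
        for (unsigned i = 0; i < workers; ++i) {
            auto first = data.begin() + i * chunk;
            auto last  = (i + 1 == workers) ? data.end() : first + chunk;
            parts.push_back(std::async(std::launch::async, [first, last] {
                return std::accumulate(first, last, 0.0);  // one slice
            }));
        }

        double total = 0.0;
        for (auto& part : parts) total += part.get();      // combine
        return total;
    }

Swap std::async for a job queue that spans machines and you’ve got the distributed-algorithms bullet; wrap the vector in a container that shards itself across nodes and you’ve got the distributed-containers one.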

Perhaps our most difficult mental adjustment, however, will be to learn to think of the cloud as part of the mainstream machine — to view all these local and non-local cores as being equally part of the target machine that executes our application, where the network is just another bus that connects us to more cores. That is, in a few years we will write code for mainstream machines assuming that they have million-way parallelism, of which only thousand-way parallelism is guaranteed to always be available (when out of WiFi range). . . .

If you haven’t done so already, now is the time to take a hard look at the design of your applications, determine what existing features — or better still, what potential and currently unimaginable demanding new features — are CPU-sensitive now or are likely to become so soon, and identify how those places could benefit from local and distributed parallelism. Now is also the time for you and your team to grok the requirements, pitfalls, styles, and idioms of hetero-parallel (e.g., GPGPU) and cloud programming (e.g., Amazon Web Services, Microsoft Azure, Google App Engine).
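
And since Herb name-checks GPGPU, here’s the flavor of the hetero-parallel side in C++ AMP, the Microsoft library he was championing around that time. This is a minimal sketch assuming a C++ AMP-capable compiler (Visual C++ 2012 or later); the scale_on_gpu name and the trivial scale-everything job are mine, purely for illustration:

    #include <amp.h>     // C++ AMP; ships with Visual C++ 2012
    #include <vector>

    // Multiply every element by a factor on whatever accelerator
    // (typically the GPU) the C++ AMP runtime selects.
    void scale_on_gpu(std::vector<float>& data, float factor) {
        using namespace concurrency;
        array_view<float, 1> view(static_cast<int>(data.size()), data);
        parallel_for_each(view.extent, [=](index<1> i) restrict(amp) {
            view[i] *= factor;   // runs once per element, in parallel
        });
        view.synchronize();      // copy results back to host memory
    }

That restrict(amp) marker is the “language subsets” bullet from the list above in the flesh: code so marked may only use the feature subset that the target cores actually support.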


p.s. — I can’t believe that it’s been almost four years since I took a course with Herb out in Washington. That was some hard-core learnin’.
