The False Hope Of Apple’s Snow Leopard

The problem began several years ago. The processor community realized that although they could keep making chips with smaller transistors, they could no longer make chips with substantially faster clock speeds. There were two separate but related problems.

First, as clock speeds climbed, chips began to run too hot. Second, that heat was a symptom of rising energy consumption, which in turn meant that the cost of operating new processors would be too high. So the industry changed direction and decided to put more processor cores on the same chip without increasing the clock speed. What began then was an urgent race to figure out how to take advantage of multiple processors in mainstream computing.

Last week, Apple announced that its next operating system, Snow Leopard, is going to revolutionize computing by taking much better advantage of these multi-core processors. Perhaps in relative terms, compared to Leopard, XP, or Vista, this is true. Apple’s multi-core technology is called Grand Central, and I am sure it will bring important speed improvements. But from everything I can tell, there is nothing here that will bring back the kind of performance-doubling speed increases, across all applications, that we used to see.

The problem is that most algorithms and program logic cannot be made to run faster by spreading them across many processors. This is not a swipe at Apple; the problem is industry-wide. It’s just a recognition of basic principles of program logic, and an admonition not to get your hopes up about the real long-term impact of the industry’s efforts in this area.

The problem with multi-core computing is really very simple. As most of us have experienced, not every problem *can* be solved better or faster with more people. Some problems can be solved faster by adding a few people, but most cannot. In truth, most problems are best solved, or can only be solved, by one person at a time. And so it is with computing. The vast majority of problems can only be solved by one logic thread at a time. The reason is obvious: for most process-oriented work, step B depends on the results of step A, step C depends on the results of step B, and so on.
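To make this concrete, here is a minimal Python sketch, my own illustration rather than anything from Snow Leopard, of a chain of dependent steps. Because each step needs the previous step’s output, no number of extra cores can run them side by side.

```python
# Hypothetical example: each step consumes the previous step's result,
# so the chain must run one step at a time, on a single core.
def step(previous_result: int) -> int:
    # Stand-in for real work whose input is the prior step's output.
    return previous_result * 31 % 1_000_003

def run_pipeline(steps: int) -> int:
    result = 1
    for _ in range(steps):
        result = step(result)  # step N+1 cannot start until step N finishes
    return result

print(run_pipeline(1_000_000))
```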

Of course there are problems, important problems, that *can* be solved by multiple processors. In fact, there are problems that can leverage every single processor you can throw at them. Graphics is one such problem. Similarly, nearly every conception of how we might model human or near-human intelligence can leverage parallel computing almost without limit. This includes old-school AI techniques like neural networks, and newer conceptions of how to model the brain’s neocortex, like the promising work at Numenta, a company founded by creators of the original Palm Pilot. Parallel computing will also help with many far more mundane problems. So I am not saying we won’t continue to see significant benefits from shrinking transistors.
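By contrast, here is a hypothetical sketch of the kind of “embarrassingly parallel” work graphics represents: each pixel is independent of every other, so the job really can be spread across every core you have. The shade_pixel function and its numbers are made up purely for illustration.

```python
from concurrent.futures import ProcessPoolExecutor

def shade_pixel(pixel_index: int) -> int:
    # Stand-in for per-pixel work; each pixel depends on no other pixel.
    return (pixel_index * 2654435761) % 255

if __name__ == "__main__":
    pixels = range(1_000_000)
    # Independent work items can be farmed out to every available core.
    with ProcessPoolExecutor() as pool:
        frame = list(pool.map(shade_pixel, pixels, chunksize=10_000))
    print(len(frame))
```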

But the problem is with that core thread, the main “thinker” inside the computer. You might think of it as the ringmaster. That guy is just not getting any faster. Though it may learn to leverage a couple of processors to some degree, it will top out very quickly. This core thread is at the heart of PC performance today, and its days of rapid speed gains are finished. For now, all we will really see are impressive, domain-specific performance increases. Some of these will indeed be important. But the era of wholesale speed improvements tied to new processor generations is gone, probably forever.
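Amdahl’s law captures why that ringmaster tops out so quickly. Here is a rough sketch, assuming, purely for illustration, that half of a program’s work parallelizes perfectly while the rest stays on the core thread.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    # Overall speedup when only `parallel_fraction` of the work scales with cores.
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Even if half the work parallelizes perfectly, piling on cores tops out fast.
for cores in (2, 4, 8, 64):
    print(cores, round(amdahl_speedup(0.5, cores), 2))
# 2 cores -> 1.33x, 4 -> 1.6x, 8 -> 1.78x, 64 -> 1.97x; it never reaches 2x.
```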

Post Author: Ruby H. Rosenbaum
