Twice a year we get the Top 500 list of supercomputers in the world. By and large the list is pointless. The Number 1 machine 18 months ago is now Number 17 (which shows how fast this stuff changes), and the vast majority of machines are used by research institutions and governments; it's not exactly relevant to IT/business computing.
Back when I was a staffer with a news publication, this inevitably meant one thing: a string of self-congratulatory press releases from Intel Corp. (Nasdaq: INTC), Advanced Micro Devices Inc. (AMD) (NYSE: AMD), IBM Corp. (NYSE: IBM), and Hewlett-Packard Co. (NYSE: HPQ). Well, one more company has joined that list of vendors with well-earned bragging rights: Nvidia Corp. (Nasdaq: NVDA). The company that powers our video games is now becoming a player in high-performance computing -- something it has pursued for a few years.
The GPU (graphics processing unit) is a logical choice for many supercomputing tasks. A GPU is nothing more than a glorified math co-processor. (Remember the good old days of adding an 80387 co-processor to your 386 computer?) It does one thing: process floating-point math very, very fast. GPUs have hundreds of math processing cores, which is exactly what graphics processing requires.
Nvidia, looking for new growth markets, decided several years back to repurpose those massive calculators as high-performance computing processors under the Tesla brand name, complete with its own programming language, called CUDA, to make apps use the GPU instead of the CPU.
It was a slow road for a while as CUDA matured. Nvidia couldn't get the big names in supercomputing hardware behind Tesla. IBM threw its weight behind the Cell processor it co-developed for the Sony PlayStation 3, while other firms stuck with x86 technology.
Slowly these firms have come around. As Nvidia has advanced the technology and improved CUDA, it has also gotten more universities to teach the platform to computer science students, deepening the talent pool. It had to go with ODMs for the first few generations of Tesla, but now IBM, HP, and Dell Inc. are offering servers with integrated Tesla processors.
The result? Three of the top five computers on the Top 500 supercomputer list use Nvidia's Tesla, and six on the list overall use it. Only one, a German cluster, uses AMD's Radeon GPUs. AMD has really been left behind in GPU computing, making virtually no effort in that area at all, despite the fact that many people consider its Radeon video cards highly competitive with Nvidia's.
What I find bothersome is that the top Nvidia computers are all in Asia. Tianhe-1A, the top computer on the list at 2.566 petaflops of sustained Linpack performance, is in China. It uses 14,336 six-core Intel Xeon X5670 CPUs and 7,168 Nvidia Tesla 2050 compute boards. Nebulae, a 1.271-petaflop beast, is also in China, and Tsubame 2.0 (1.192 petaflops) is in Japan.
China has a third Tesla system and Japan has a second, both further down the list. The lone American Tesla machine, ranked No. 72, is at Lawrence Livermore National Laboratory.
So, why is Asia getting GPU computing so right? Or to put it another way: Why are we not getting it right? Frozen budgets? Don't know where to start? You can always experiment with Amazon.com Inc. (Nasdaq: AMZN)'s new GPU-based EC2 service. The service provides access to Tesla GPUs like the ones being used in Asia.
It may not give you petaflops of power, but it's pretty good. One researcher used the instances to brute-force passwords hashed with SHA-1 (a cryptographic hash function, not an encryption protocol, as it's sometimes mislabeled), recovering them in less than two hours. I'm not sure that's what Amazon intended when it created this service, but it does show the power available for rent.
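The attack itself is conceptually simple brute force: hash every candidate string until one matches the stolen digest -- exactly the kind of embarrassingly parallel workload a GPU chews through billions of times per second. Here's a toy, CPU-bound sketch of the idea (the function names and the deliberately tiny search space are my own, purely for illustration):

```python
import hashlib
import itertools
import string

def sha1_hex(candidate: str) -> str:
    """Return the hex SHA-1 digest of a candidate password."""
    return hashlib.sha1(candidate.encode("utf-8")).hexdigest()

def brute_force(target_digest: str, alphabet: str, max_len: int):
    """Try every string of up to max_len characters from alphabet;
    return the one whose SHA-1 digest matches target_digest, or
    None if the search space is exhausted."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if sha1_hex(candidate) == target_digest:
                return candidate
    return None

# Tiny demo: recover a three-letter lowercase "password" from its digest.
target = sha1_hex("gpu")
recovered = brute_force(target, string.ascii_lowercase, 3)
print(recovered)  # → gpu
```

The loop above checks about 18,000 candidates; the real attack ran the same idea across thousands of GPU cores against a vastly larger keyspace, which is why weak password hashes fall so quickly to rented hardware.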
So hop to it, American IT. You're being left behind by the Chinese and Japanese. Again.