Jen-Hsun Huang, CEO of Nvidia, appeared on Charlie Rose last week and touched on a wide range of subjects, including his early years at a boarding school in Kentucky, the founding of Nvidia, CPUs, and GPUs. Amazingly, Charlie spent the last 10+ minutes of the show on the CUDA architecture (starting around minute 29:54 of the broadcast).
GPUs excel at mathematical computations, but until a few years ago there wasn't an easy way to access the compute engine behind such manycore processors. With CUDA, a C programmer uses a few simple extensions to access abstractions (thread groups, shared memories, synchronization) suited to fine-grained parallel programming. Nvidia's goal is to make every language now available on the CPU also available on the GPU. The next wave of languages they are targeting includes FORTRAN, Java, and C++.
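To give a flavor of those C extensions, here is a minimal sketch (my own illustration, not taken from Nvidia's materials) of a CUDA kernel that uses all three abstractions: each thread block loads values into shared memory, synchronizes at a barrier, and then cooperatively reduces them to a partial sum.

```cuda
#include <cstdio>

// Block-level sum reduction: one partial sum per thread block.
__global__ void blockSum(const float *in, float *out, int n) {
    __shared__ float tile[256];          // shared memory, visible to one block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                     // barrier: wait until all loads finish

    // Tree reduction within the block, halving the active threads each step
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];       // thread 0 writes the block's sum
}

int main() {
    const int n = 1024, threads = 256, blocks = n / threads;
    float host_in[n], host_out[4];
    for (int i = 0; i < n; ++i) host_in[i] = 1.0f;

    float *dev_in, *dev_out;
    cudaMalloc(&dev_in, n * sizeof(float));
    cudaMalloc(&dev_out, blocks * sizeof(float));
    cudaMemcpy(dev_in, host_in, n * sizeof(float), cudaMemcpyHostToDevice);

    blockSum<<<blocks, threads>>>(dev_in, dev_out, n);  // launch a grid of blocks
    cudaMemcpy(host_out, dev_out, blocks * sizeof(float), cudaMemcpyDeviceToHost);

    float total = 0.0f;
    for (int i = 0; i < blocks; ++i) total += host_out[i];
    printf("sum = %f\n", total);

    cudaFree(dev_in);
    cudaFree(dev_out);
    return 0;
}
```

The point is how little separates this from ordinary C: the `__global__` and `__shared__` qualifiers, the `<<<blocks, threads>>>` launch syntax, and the `__syncthreads()` barrier are essentially the whole extension surface a programmer needs to learn.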
In the interview Jen-Hsun acknowledged that feedback from a few users encouraged them to start working on CUDA. To their credit, they acted quickly: the CUDA web site highlights interesting applications, mostly in scientific computation, energy exploration, and mathematical modeling. Other heavy users are hedge funds and other computational finance outfits.
Coincidentally, we talked to Nvidia late last year as part of our upcoming report on big data. For big data problems, they cited users who accelerated database computations such as sorts and relational joins, and bioinformatics researchers who used CUDA for their pattern matching algorithms. Their users also report that the combination of CPU/GPU in servers leads to smaller clusters and a substantial reduction in energy costs.
For now, the CUDA architecture is the province of C programmers and my fellow number crunchers. But Nvidia is allocating resources to make its tools even easier to use, and once that happens, surprising applications will emerge. Given that Apple and Intel have signaled that they too think GPUs are interesting, I'm fairly confident that simpler programming tools will emerge soon.