The GPU Technology Conference: NVIDIA's New Focus in a Changing Market

by Michael "SKYMTL" Hoenig     |     October 9, 2009

The CUDA Architectural Ecosystem

Before we even get started with this article, I think it is necessary to give everyone reading a quick crash course on NVIDIA's CUDA and why many consider it an essential part of their programming toolbox. CUDA also just so happened to be the focus of the GTC.

In order to make the GPU into a processing powerhouse, NVIDIA introduced CUDA, or Compute Unified Device Architecture. It was first released to the public less than three years ago and allows developers to harness the number-crunching power of NVIDIA GPUs through the C programming language (with NVIDIA extensions). In the grand scheme of implementing a new technology, CUDA's current upswing in popularity is nothing short of incredible given how little time it has been available.
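To give a feel for what "C with NVIDIA extensions" looks like in practice, here is a minimal sketch of a CUDA program: a SAXPY kernel (y = a*x + y) in which each GPU thread handles one array element. The kernel name, array sizes, and launch configuration are illustrative choices, not anything from the article.

```cuda
#include <cstdlib>

// The __global__ qualifier is one of NVIDIA's C extensions: it marks a
// function that runs on the GPU but is launched from CPU code.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    // Each thread computes its own global index...
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                 // ...and guards against running past the array
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host-side input data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate GPU memory and copy the inputs across.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // The <<<blocks, threads>>> launch syntax is another NVIDIA extension:
    // enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Copy the result back and clean up.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dx);
    cudaFree(dy);
    free(hx);
    free(hy);
    return 0;
}
```

Aside from the `__global__` qualifier, the built-in thread/block indices and the triple-angle-bracket launch, everything here is ordinary C, which is exactly why the learning curve has been gentle enough for CUDA to spread as quickly as it has.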

However, NVIDIA will be the first to remind anyone that their ultimate goal is not to replace CPU processing with CUDA-enabled GPUs. Rather, NVIDIA’s goal is to give developers the ability to run parallel workloads containing large data sets on the GPU while leaving the CPU to crunch through slightly more mundane instruction sets.

Here is how NVIDIA describes it:

NVIDIA® CUDA™ is a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. It includes the CUDA Instruction Set Architecture (ISA) and the parallel compute engine in the GPU.

One of the main focuses of the GTC, as well as the keynotes held there, was the use of the GPU not as a CPU replacement but as a tool working in parallel with the CPU. Above we can see a chart that illustrates exactly why NVIDIA thinks a co-processing ecosystem combining CPUs and GPUs will benefit the industry.

When it comes to serial code, a run-of-the-mill CPU is very efficient but falls flat on its face when asked to run parallel instructions. Meanwhile, a GPU eats through parallel code like nobody's business but can't efficiently run serial code. It all comes down to using the right tool for the job, and with a combination of CPUs and GPUs, a company is well equipped to handle any processing-intensive tasks it might have.
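The split described above can be sketched in a few lines of CUDA. In this hypothetical example, the data-parallel step (squaring every element independently) is offloaded to the GPU, while the serial step (a running sum, where each iteration depends on the previous one) stays on the CPU; the function names and sizes are my own for illustration.

```cuda
#include <cstdio>

// Data-parallel work: every element can be squared independently,
// so the GPU assigns one thread per element.
__global__ void square(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * data[i];
}

int main(void)
{
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i)
        host[i] = 2.0f;

    // Ship the data to the GPU, run the parallel step, bring it back.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    square<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    // Serial work: a running sum has a loop-carried dependency, so the
    // CPU walks through it one element at a time.
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += host[i];
    printf("sum = %f\n", sum);
    return 0;
}
```

Each processor does what it is built for: the GPU chews through the thousand independent multiplications at once, while the CPU handles the dependency chain that cannot be parallelized naively.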

Perhaps unknown to many people is the fact that NVIDIA considers CUDA an ecosystem that encompasses both open (OpenCL, OpenMP, etc.) and closed languages, libraries and compilers. Since it supports many of the industry's most popular programming languages, support has been growing quickly, with dozens of universities now teaching CUDA, and even the large OEMs of the industry are beginning to jump on board as well.
