How AMD will lose the GPU war.

Everyone knows the two gorillas in the graphics chip business: Nvidia and ATI/AMD. The race for performance has recently taken a more interesting turn as developers realized that these chips have enormous processing power that, up until now, was really only tapped to run video games and 3D applications. In those earlier days of the GPU race, a successful generation of chips came from good engineering, then successful fab production, then good driver development to pull the performance out of the chips. For a long time, ATI produced chips that looked great on paper and performed quite well, but it built a reputation for poor software support, driver problems, etc… After AMD purchased ATI, some effort was invested in improving the quality of ATI’s drivers, and this resulted, I think, in a more well-rounded product line that has been very successful in taking back some of Nvidia’s market share.

But as of about a year ago, the game has changed drastically, and I see Nvidia ahead in its efforts to develop the chips, or rather the applications, of the future. In truth, the effort is not in chip design per se, but rather a combination of hardware design AND software application. The interesting change I am alluding to is GPU processing: offloading computations from your computer’s CPU onto the GPU. Current GPUs have at least as many transistors as your CPU and are far better optimized for certain types of math; in fact, they are built solely to crunch through that kind of highly parallel arithmetic, and they do it much, much faster than your CPU. So developers took a look at all this untapped power and started figuring out ways to make use of it.
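To make that concrete, here is a minimal, generic CUDA sketch of my own (an illustration only, not code from any product mentioned in this post) of the kind of data-parallel math a GPU chews through: the CPU version walks the array one element at a time on a single core, while the GPU version hands each element to its own thread.

```cuda
// Generic illustration of offloading simple data-parallel math to the GPU.
// Compile with: nvcc offload.cu -o offload
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// CPU version: a single core loops over every element in turn.
void scale_cpu(const float* in, float* out, int n, float k) {
    for (int i = 0; i < n; ++i) out[i] = in[i] * k;
}

// GPU version: thousands of lightweight threads each handle one element.
__global__ void scale_gpu(const float* in, float* out, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * k;
}

int main() {
    const int n = 1 << 20;  // one million floats
    std::vector<float> host_in(n, 2.0f), host_out(n, 0.0f);

    // Offloading means copying the data to the card, running the kernel
    // there, and copying the result back.
    float *dev_in, *dev_out;
    cudaMalloc((void**)&dev_in,  n * sizeof(float));
    cudaMalloc((void**)&dev_out, n * sizeof(float));
    cudaMemcpy(dev_in, host_in.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale_gpu<<<blocks, threads>>>(dev_in, dev_out, n, 3.0f);
    cudaMemcpy(host_out.data(), dev_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("first element: %f\n", host_out[0]);  // expect 6.0

    cudaFree(dev_in);
    cudaFree(dev_out);
    return 0;
}
```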

Applications are coming out in the graphics and media arenas, and these apps are taking advantage of GPU processing in ways that make them run 10 or 20 times faster than they would on the CPU alone. We are talking about a performance boost equivalent to several generations of CPUs. A mini quantum leap. This is currently being applied to video processing, for playback or transcoding, to 3D rendering, and to general number crunching for financial modeling, engineering, etc… All of a sudden, a box under your desk will be able to do the processing work of 10 or more computers. I could go on about the opportunities this creates, but you get the point.

Currently, Nvidia is at the forefront of developing applications that use its chips, and one of its technologies is called CUDA, a framework that provides parallel processing capabilities to applications. ATI is working on its own system, and I am not sure why the two makers couldn’t agree to develop something together, but it probably has to do with wanting to be first to market… ATI has been pushing OpenCL and its own Stream Processing, but so far the results are less than impressive.
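For a sense of what CUDA gives application developers, here is a small, hedged sketch (again my own illustration, not how Premiere or any renderer actually does it) of the sort of housekeeping the framework handles: a program can ask the CUDA runtime which Nvidia cards are installed and what they are capable of, and fall back to a CPU code path if none are found.

```cuda
// Generic sketch: probing for CUDA-capable GPUs before offloading any work.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // If the runtime reports no usable device, an application would simply
    // keep its traditional CPU path.
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found; falling back to the CPU path.\n");
        return 0;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, compute capability %d.%d, %d multiprocessors\n",
               i, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}
```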

Online reviews of hardware H.264 video transcoding show that ATI’s offering does not run as quickly and produces results of only passable quality; one would be better off using something else altogether. Usually, the finger is pointed at immature or poorly conceived software rather than at the hardware, which is pretty stellar.

I work in 3D graphics, and there are VERY exciting developments for us end users in the form of GPU-based rendering. 3D rendering is the process of calculating 3D imagery for use in production. It is very computationally intensive, so effects shops and animation studios rely on titanic server farms to process all their cool CG. Sony Imageworks, for instance, has about 5,000 CPUs at its disposal. WETA used around 4,000 CPUs to render Lord of the Rings. We are talking massive equipment and facility investments, not to mention huge energy bills. So when companies like Chaos Group and Mental Images (owned by Nvidia) announced last year that their renderers were starting to render with the graphics chip rather than just the CPU, and demonstrated 10-fold speed improvements at SIGGRAPH, people freaked out. However, these improvements were only available with Nvidia cards, because the developers had a mature toolset in the form of CUDA to work with, and they are still waiting for something solid from the ATI camp.

The same goes for the recent release of Adobe’s latest versions of Premiere and After Effects. The GPU processing, dubbed the Mercury Playback Engine, allows for fast playback of heavy footage. This can turn a $2,000 PC into the ultimate video editing system. Again, it only works with CUDA, apparently because CUDA is more mature than ATI’s work and there was nothing equivalent to build on for ATI cards. So the latest, most exciting trend in computing is being spearheaded by Nvidia, and if you are in the market for a card right now, you would do well to consider Nvidia’s proven success with software tools that you will very much want at your disposal. Until ATI can prove it has something working in this promising new field, consumers will want to look to Nvidia, because it now offers so much more value for the money than ATI. When I say now, I really mean six months from now; but if you are buying hardware now, you will want it to run the latest software in six months…

Save for a fairly mediocre video encoding tool, ATI has not shown even an inkling of progress on GPU processing, at a time when Nvidia is making huge inroads. I think this is a major strategic mistake, and ATI should be throwing everything into software development to catch up in this paradigm-shifting field. I hope they do. Their hardware is great, but there is little point in making a good chip if it can do less for its users than the competition’s. GPU computing can benefit every computer user; if AMD wants to keep selling graphics chips, it must help software vendors build tools that take advantage of its hardware in new ways. There is plenty of this running on Nvidia hardware at the moment, and ATI/AMD is missing the race. All it takes is a little software development, you would think. The question is, why is AMD lagging? In contrast, Nvidia has bought up various companies in the 3D arena to develop software solutions based on its chips, so that it won’t need to compete with AMD on raw performance anymore; customers will want to buy its chips because they do more things for them…

For example, content creators such as game studios, post houses, and video editors will have more incentive to outfit their studios with Nvidia hardware to gain access to promising new toolsets. These include rendering, dynamic simulations, distributed computing of complex math for science or finance, etc… On the consumer end, media encoding and decoding is another arena that benefits from more processing power. Encryption could be another area that makes use of the GPU… I love AMD’s products, and I hope they will put some R&D effort into expanding their cards’ capabilities soon.
Raphael
