GPUs


AMD Celebrates 30 Years of Gaming and Graphics Innovation

AMD sent us word that tomorrow they will be hosting a Livecast celebrating 30 years of graphics and gaming innovation. Thirty years is a long time, and certainly we have a lot of readers who weren’t even around when AMD had its beginnings. Except we’re not really talking about the founding of AMD; the company got its start in 1969. This is instead a celebration of their graphics division, formerly ATI, which was founded in… August 1985.

AMD is apparently looking at a year-long celebration of the company formerly known as ATI, Radeon graphics, and gaming. While they’re being a bit coy about the exact contents of the Livecast, we do know that there will be three game developers participating along with a live overclocking event. If we’re lucky, maybe AMD even has a secret product announcement, but if so they haven’t provided any details. And while we can now look forward to a year of celebrating AMD graphics, most likely capped by a final party come next August, why not start out with a brief look at where AMD/ATI started and where they are now?

[Image: Commodore 64 computer. Source: Wikimedia, Evan-Amos]

I’m old enough that I may have owned one of ATI’s first products, as I began my addiction to technology way back in the hoary days of the Commodore 64. While the C64 initially started shipping a few years earlier, Commodore was one of ATI’s first customers, and they were largely responsible for the infusion of money that kept ATI going in the early days.

By 1987, ATI began moving into the world of PC graphics with their “Wonder” brand of chips and cards, starting with an 8-bit PC/XT-based board supporting monochrome or 4-color CGA. Over the next several years ATI would move to EGA (640×350 with an astounding 16 colors) and VGA (16-bit ISA and 256 colors). If you wanted a state-of-the-art video card like the ATI VGA Wonder in 1988, you were looking at $500 for the 256K model or $700 for the 512K edition. But all of this is really old stuff; where things start to become interesting is in the early ’90s with the launch and growing popularity of Windows 3.0.

[Image: ATI Mach 8 ISA card. Source: Wikimedia, Misterzeropage]

The Mach 8 was ATI’s first true graphics processor. It was able to offload 2D graphics functions from the CPU and render them independently, and at the time it was one of the few video cards that could do this. Sporting 512K-1MB of memory, it was still an ISA card (or was available in MCA if you happened to own an IBM PS/2).

Two years later came the Mach 32, the first 32-bit capable chip, with support for ISA, EISA, MCA, VLB, and PCI slots. The Mach 32 shipped with either 1MB or 2MB of DRAM/VRAM and added high-color (15-bit/16-bit) and later True Color (the 24-bit color that we’re still mostly using today) to the mix, along with a 64-bit memory interface. And two years after that came the Mach 64, which brought support for up to 8MB of DRAM, VRAM, or the new SGRAM. Later variants of the Mach 64 also started including 3D capabilities (and were rebranded as Rage, see below), and we’re still not even in the “modern” era of graphics chips yet!


Rage Fury MAXX

Next in line was the Rage series of graphics chips, ATI’s first line built with 3D acceleration as a key feature. We could talk about competing products from 3dfx, NVIDIA, S3, and others here, but let’s just stick with ATI. The Rage line appropriately began with the 3D Rage I in 1996, which was mostly an enhancement of the Mach64 design with added 3D support. The 3D Rage II was another Mach64-derived design, with up to twice the performance of the 3D Rage. The Rage II also found its way into some Macintosh systems, and while it was initially a PCI part, the Rage IIc later added AGP support.

That part was followed by the Rage Pro, which is when graphics chips first started handling geometry processing (circa 1998 with DirectX 6.0 if you’re keeping track), and you could get the Pro cards with up to 16MB of memory. There were also low-cost variations of the Rage Pro in the Rage LT, LT Pro, and XL models, and the Rage XL may hold the distinction of being one of the longest-used graphics chips in history; as late as 2005 or thereabouts, many servers were still shipping with that chip on the motherboard providing graphics output. In 1998 ATI released the Rage 128 with AGP 2X support and up to 32MB of RAM (the enhanced Rage 128 Pro added AGP 4X support, among other things, a year later). The Rage 128 Ultra even supported 64MB in its top configuration, but that wasn’t the crowning achievement of the Rage series. No, the biggest achievement for Rage was the Rage Fury MAXX, ATI’s first card to drive two GPUs with alternate frame rendering, providing up to twice the performance.
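
For those unfamiliar with the technique, alternate frame rendering simply deals out whole frames to the GPUs in turn. Here’s a minimal conceptual sketch in C++; the Gpu and Frame types and the render() call are hypothetical stand-ins, not ATI’s actual driver logic:

#include <cstdint>
#include <vector>

struct Frame { uint64_t index; };

struct Gpu {
    int id;
    void render(const Frame& f) {
        // Hypothetical: submit this frame's work to this GPU's queue.
        (void)f;
    }
};

// Hand out whole frames round-robin: with two GPUs, each renders every other
// frame, for up to 2x throughput at the cost of a frame of latency and the
// need for each GPU to hold a full copy of the scene.
void renderLoopAFR(std::vector<Gpu>& gpus, uint64_t frameCount) {
    for (uint64_t i = 0; i < frameCount; ++i) {
        gpus[i % gpus.size()].render(Frame{i});
    }
}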


Radeon 9700 Pro

And last but not least we finally enter the modern era of ATI/AMD video cards with the Radeon line. Things start to get pretty dense in terms of releases at this point, so we’ll mostly gloss over things and just hit the highlights. The first-generation Radeon brought support for DirectX 7 features, the biggest being hardware support for transform and lighting calculations, basically a way of offloading additional geometry calculations. The second-generation Radeon chips (sold under the Radeon 8000 and lower-numbered 9000 models) added DirectX 8 support, the first appearance of programmable pixel and vertex shaders in GPUs.

Perhaps the best of the Radeon breed was the R300 line, with the Radeon 9600/9700/9800 series cards delivering DirectX 9.0 support and, more importantly, holding onto a clear performance lead over their chief competitor NVIDIA for nearly two solid years! It’s a bit crazy to realize that we’re now into our tenth (or eleventh, depending on how you want to count) generation of Radeon GPUs, and while the overall performance crown is often hotly debated, one thing is clear: games and graphics hardware wouldn’t be where they are today without the input of AMD’s graphics division!

That’s a great way to finish things off, and tomorrow I suspect AMD will have much more to say on the subject of the changing landscape of computer graphics over the past 30 years. It’s been a wild ride, and when I think back to the early days of computer games and then look at modern titles, it’s pretty amazing. It’s also interesting to note that people often complain about spending $200 or $300 on a reasonably high-performance GPU, when the reality is that the top-performing video cards have often cost several hundred dollars; I remember buying an early 1MB True Color card for $200 back in the day, and that was nowhere near the top-of-the-line offering. The amount of compute performance we can now buy for under $500 is awesome, and I can only imagine what the advances of another 30 years will bring us. So, congratulations to AMD on 30 years of graphics innovation, and here’s to 30 more years!

Intel Demonstrates Direct3D 12 Performance and Power Improvements

Since the introduction of Direct3D 12 and other low-level graphics APIs, the bulk of our focus has been on the high end. One of the most immediate benefits to these new APIs is their ability to better scale out with multiple threads and alleviate CPU bottlenecking, which has been a growing problem over the years due to GPU performance gains outpacing CPU performance gains.
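
To make the threading point concrete, here is a minimal sketch of the submission model Direct3D 12 enables: each worker thread records draw calls into its own command list, and the main thread submits everything in one batch. This is illustrative only; pipeline state, error handling, and the actual draw recording are omitted, and the RecordAndSubmit wrapper is our own naming rather than anything from Intel’s demo:

#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned workers)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;

    for (unsigned i = 0; i < workers; ++i) {
        // Each thread gets its own allocator and command list, so recording
        // needs no shared lock; Direct3D 11's single device context
        // effectively serialized this step.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        threads.emplace_back([cl = lists[i].Get()] {
            // ... record this thread's share of the scene's draw calls ...
            cl->Close();
        });
    }
    for (auto& t : threads) t.join();

    // A single submission of everything the workers recorded in parallel.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}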

However, at the opposite end of the spectrum, away from the performance benefits, are the efficiency benefits, and those gains haven’t been covered nearly as well. Intel is taking up that subject this week at SIGGRAPH 2014, where the company is showcasing both the performance and efficiency gains Direct3D 12 delivers on their hardware.

When it comes to power efficiency Intel stands to be among the biggest beneficiaries of Direct3D 12, due to the fact that they exclusively ship their GPUs as part of an integrated CPU/GPU product. Because the GPU and CPU portions of their chips share a thermal and power budget, reducing the software/CPU overhead of Direct3D lets Intel offer both improved performance and reduced power usage with the exact same silicon in the same thermal environment. With Intel’s recent focus on power consumption, mobile form factors, and chips like Core M, Direct3D 12 is an obvious boon to Intel.

Intel wisely demonstrated this improvement using a modern low-power mobile device: the Microsoft Surface Pro 3. For this demo Intel is using the Core i5-4300U version, Microsoft’s middle-of-the-road model, which clocks up to 2.9GHz on the CPU and features Intel’s HD 4400 GPU with a maximum clockspeed of 1.1GHz. In our testing, we found the Surface Pro 3 to be thermally constrained, throttling when faced with medium-to-long-duration GPU tasks. Broadwell should go a long way toward improving the situation, and so should Direct3D 12 for current and future Intel devices.

To demonstrate the benefits of Direct3D 12, Intel put together a tech demo that renders 50,000 unique asteroid objects floating in space. The demo can operate in maximum performance mode with the frame rate unrestricted, as well as a fixed frame rate mode to limit CPU and GPU utilization in order to reduce power consumption. The demo can also dynamically switch between making Direct3D 11 and Direct3D 12 API calls. Additionally, an overlay shows power consumption of both the CPU and GPU portions of the Intel processor.
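
As a rough illustration of what the fixed frame rate mode buys, here is a minimal sketch of a frame cap; renderFrame() is a hypothetical stand-in for the demo’s real rendering work, and Intel’s actual pacing logic may differ:

#include <chrono>
#include <thread>

void renderFrame();  // hypothetical: record and submit one frame's work

void cappedLoop(double targetFps, const bool& running)
{
    using clock = std::chrono::steady_clock;
    const auto frameTime = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / targetFps));
    auto deadline = clock::now() + frameTime;

    while (running) {
        renderFrame();
        // Sleeping rather than busy-waiting is the point: between frames the
        // CPU and GPU can drop into low-power states, trading frame rate the
        // user doesn't need for a lower total power draw.
        std::this_thread::sleep_until(deadline);
        deadline += frameTime;
    }
}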

Intel states this demo data was taken after steady-state thermals were reached.

In the performance mode, Direct3D 11 reaches 19 frames per second, and the power consumption is roughly evenly split between CPU and GPU, confirming that while this is a graphical demo, there is significant CPU activity and overhead from handling so many draw calls.

After dynamically switching to Direct3D 12 while in performance mode, the frames per second jumps nearly 75% to 33fps and the power consumption split goes from 50/50 (CPU/GPU) to 25/75. The lower CPU overhead of making Direct3D 12 API calls versus Direct3D 11 API calls allows Intel’s processor to maintain its thermal profile but shift more of its power budget to the GPU, improving performance.

Finally, in the power efficiency focused fixed frame rate mode, switching between Direct3D 11 and 12 slightly reduces GPU power consumption but dramatically reduces CPU power consumption, all while maintaining the same 19fps frame rate. Intel’s data shows a 50% total power reduction, virtually all of which comes from CPU power savings. As Intel notes, not only do they save power from having to do less work overall, but they also save power because they are able to better distribute the workload over more CPU cores, allowing each core in turn to run at a lower clockspeed and voltage for greater power efficiency.
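
The clockspeed-and-voltage point is worth unpacking with the standard first-order model of CMOS dynamic power. As an idealization to show the shape of the effect (not Intel’s measurements):

\[
P \propto C V^{2} f
\]
% Idealized: spread a fixed workload across n cores so each runs at f/n, and
% assume voltage can scale down roughly in proportion to frequency.
\[
P_{n\ \text{cores}} \propto n \cdot C \left(\frac{V}{n}\right)^{2} \frac{f}{n} = \frac{C V^{2} f}{n^{2}}
\]

Real silicon cannot scale voltage anywhere near that far, so the achievable savings fall well short of this n-squared ideal, but the direction of the effect is exactly what Intel’s CPU power numbers show.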

To put these numbers in perspective, a 50% reduction in power consumption is about what we would see from a new silicon process (e.g. moving from 22nm to 14nm), so achieving such a reduction with software alone is a very significant result and a feather in Microsoft’s cap for Direct3D 12. If this carries over to when DirectX 12 games and applications launch in Q4 2015, it could help usher in a new era of mobile gaming and high-end graphics. It is not often we see such a substantial power and performance improvement from a software update.

Source: Intel, Microsoft
