
NVIDIA GeForce GTX 580 Review


SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,840
Location
Montreal
Core Temperature & Acoustics / Power Consumption

Core Temperature & Acoustics


For all temperature testing, the cards were placed on an open test bench with a single 120mm 1200RPM fan placed ~8” away from the heatsink. The ambient temperature was kept at a constant 22°C (+/- 0.5°C); if it rose above 23°C at any point during a test, all benchmarking was stopped. For this test we used the 3DMark Batch Size test at its highest triangle count with 4xAA and 16xAF enabled and looped it for one hour to determine the peak load temperature as measured by GPU-Z.

For idle tests, we let the system sit at the Windows 7 desktop for 15 minutes and recorded the peak temperature.
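Peak temperatures like these can be pulled out of a sensor log after the run rather than watched by hand. Below is a minimal sketch (not part of our actual test procedure) that scans a GPU-Z-style CSV sensor log for the hottest reading; the column header here is an assumption, so match it to whatever your logging tool actually writes:

```python
import csv
import io

def peak_temperature(log_text, column="GPU Temperature [C]"):
    """Return the highest reading in the named column of a
    GPU-Z-style CSV sensor log. The column name is assumed --
    check the header line of your own log file."""
    reader = csv.DictReader(io.StringIO(log_text))
    return max(float(row[column].strip()) for row in reader)

# Hypothetical three-sample log for illustration only
sample = """Date,GPU Temperature [C]
2010-11-09 10:00:00,41.0
2010-11-09 10:00:01,78.0
2010-11-09 10:00:02,79.5
"""
print(peak_temperature(sample))  # -> 79.5
```

For an hour-long loop you would point the same function at the full log file instead of an inline string.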


GTX-580-89.jpg

The efficiency of the vapor chamber can clearly be seen in this test. Even though the TDP of the GF110 core is within spitting distance of the older GF100, temperatures are never allowed to climb above the 80 degree mark. Even at these much lower temperatures (compared to the GTX 480), the fan never increases its speed past the 60% mark, which means a very quiet gaming experience. Honestly, the difference between the GTX 580 and the outgoing GTX 480 in this test is simply unbelievable.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that logs the whole system's power consumption twice every second. In order to stress the GPU as much as possible we once again used the Batch Render test in 3DMark06 and let it run for 30 minutes to determine the peak load power consumption, while letting the card sit at a stable Windows desktop for 30 minutes determined the peak idle power consumption. We have also included several other tests.

Please note that after extensive testing, we have found that simply plugging in a power meter to a wall outlet or UPS will NOT give you accurate power consumption numbers due to slight changes in the input voltage. Thus we use a Tripp-Lite 1800W line conditioner between the 120V outlet and the power meter.
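For readers wanting to reproduce this at home, reducing a meter's 2 Hz sample log to peak and average figures is trivial to script. This is a hypothetical sketch only: the UPM meter's actual export format will differ, so adapt the parsing to whatever your meter produces.

```python
def summarize_power(samples):
    """Reduce a series of whole-system wattage readings (logged
    at 2 Hz by the meter) to peak and average draw in watts."""
    return {
        "peak": max(samples),
        "average": sum(samples) / len(samples),
    }

# Hypothetical samples standing in for a real 30-minute log
load_log = [305.2, 311.8, 309.4, 314.0]
print(summarize_power(load_log))
```

The same reduction applied to the idle log gives the idle figures in the chart above.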

GTX-580-88.jpg

Both idle and load power consumption are kept well in check even though the GTX 580 has its full 512 cores enabled. The reason behind this is explained on the next page. We would nonetheless recommend a good 600W PSU for this card.
 

The Effect of Heat on Power Consumption


We have seen in the past that power consumption on NVIDIA’s GF100 architecture is highly affected by temperature, and one of the GTX 580’s main claims to fame is its ability to run circles around the GTX 480 when it comes to temperatures. As we saw in the power consumption tests, there is definitely something happening behind the scenes and we’re about to find out what it is.

In order to test how heat affects power consumption on this card, we used our typical Folding @ Home GPU3 benchmark, which uses a 611 point WU, while logging power consumption at set temperature intervals. Since the GTX 480 runs so hot, we used EVGA’s Precision utility to increase its fan speed to 70% for the sub-50 degree tests.
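Charting consumption against temperature boils down to bucketing each (temperature, wattage) sample into an interval and averaging the power within it. A rough illustration of that bookkeeping follows; the bin width and sample values are ours for demonstration, not readings from the meter:

```python
from collections import defaultdict

def power_by_temp_bin(pairs, bin_width=10):
    """Group (temp_C, watts) samples into fixed-width temperature
    bins and average the power draw inside each bin."""
    bins = defaultdict(list)
    for temp, watts in pairs:
        bins[int(temp // bin_width) * bin_width].append(watts)
    return {b: sum(w) / len(w) for b, w in sorted(bins.items())}

# Hypothetical samples: two in the 40s, one in the 60s
print(power_by_temp_bin([(45, 300), (47, 310), (65, 350)]))
# -> {40: 305.0, 60: 350.0}
```

Plotting those per-bin averages produces a power-versus-temperature curve like the one charted below.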

GTX-580-92.jpg

First things first: when trying to determine actual power consumption, TDP doesn’t mean a damn thing, especially when an architecture like Fermi is affected so negatively by increased temperatures.

Since NVIDIA has been able to keep the GTX 580’s temperatures well in check (they don’t increase above the 70-degree mark in Folding @ Home), power consumption stays well below the peaks achieved by the hot-running GTX 480. For the record, the GTX 480 hit a massive 88 degrees Celsius when folding.

Even when operating at the same temperature, this new card consumes a good 5% less power than its older sibling. This may not seem like much of an achievement, but it is actually a significant step in the right direction when you take into account the additional cores and increased clock speeds of the GTX 580.
 

Overclocking Results


The GTX 580 is clocked quite high compared to many of the overclocks achieved on air-cooled GTX 480 cards, but it still has a good amount of headroom left in the tank. To push the clock frequencies, we used EVGA’s Precision overclocking utility and kept the default fan speed profile. It should also be noted that memory speed was increased until the GDDR5’s error correction kicked in and performance stopped scaling with additional clock speed bumps.

Here are the results:

Core Speed: 902MHz
Processor Clock: 1804MHz
Memory Speed: 4408MHz (QDR)

GTX-580-96.jpg
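Assuming the GTX 580’s reference clocks of 772MHz core, 1544MHz processor and 4008MHz (effective) memory, the results above work out to roughly a 17% gain on the core and shaders and 10% on the memory. A quick sanity check:

```python
def overclock_gain(stock_mhz, oc_mhz):
    """Percentage frequency gain of an overclock over stock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Stock clocks below are the GTX 580's published reference speeds
for name, stock, oc in [("Core", 772, 902),
                        ("Processor", 1544, 1804),
                        ("Memory", 4008, 4408)]:
    print(f"{name}: +{overclock_gain(stock, oc):.1f}%")
```

Frequency gains of this size rarely translate one-for-one into framerate gains, but they give a sense of the headroom left in the silicon.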
 

Conclusion


A few weeks ago, if someone had suggested NVIDIA would release a bona fide flagship GPU before the end of 2010, many would have pointed, laughed and gone about their business. To say the release of the GTX 580 prior to AMD’s own high end refresh has shocked the market is an understatement of epic proportions, but that’s exactly what has happened.

When you take a step back and look at the numbers posted by the GTX 580, it looks like this is the card people have been hoping for since the Fermi architecture was first introduced back in 2009. Its performance is simply biblical for a single GPU card across a large cross section of today’s most demanding games and it seems to have plenty left in the tank for future titles as well. AMD’s closest single GPU competitor, the HD 5870, is just a speck in the GTX 580’s rearview mirror.

NVIDIA put a lot of thought into the rationalization of the Fermi architecture for their refreshed series of DX11 cards, but the main focus was upon efficiency. The GTX 580 really is a breath of fresh air for NVIDIA’s lineup considering the experiences many had with the power hungry and sometimes loud GTX 480. Through the use of a vapor chamber heatsink design, they have been able to lower temperatures and noise even though overall TDP has stayed similar to that of the GTX 480. These lower thermals have in turn led to a significant reduction in power consumption even though the GTX 580 sports higher clocks and has an additional SM enabled. It should also be mentioned that even with reduced power consumption, the GF110 still guzzles down the juice.

GTX-580-95.jpg

When the GTX 580 is directly compared to the GTX 480, it really is amazing to see what a few well-thought-out architectural tweaks can accomplish when teamed up with a flexible architecture. Make no mistake about it: an AVERAGE improvement of 18% over a high end card that was released about 7 months ago is no small feat, and yet NVIDIA has done that and more. There were even several games where the GTX 580 displayed a 30% or higher increase in framerates (and yes, about 10% in others). Meanwhile, its highest gains seemed to be reserved for the areas that matter most for enthusiast-branded products: high resolution, high image quality situations. This also highlights in sharp contrast one of the HD 5000 series’ failings: anti-aliasing performance.

AMD will naturally rave about the benefits of their HD 5970 and with good reason; through driver updates it is now a viable ultra high end card that offers mostly consistent framerates. Knowing this, would we take it over the GTX 580? No, but the reason is actually a bit more complicated than some simple graphs would have you believe. You see, it took nearly a year for the HD 5970 to reach the performance level it currently boasts and there have been plenty of teething problems along the way. Dual GPU cards also live and die by driver support, and all too many times users have had to wait months to enjoy full performance in the newest games. Granted, AMD’s Catalyst app profiles have done a lot to alleviate this issue, but we will always choose a single GPU solution over Crossfire, SLI or even dual GPU cards like the HD 5970.

Naturally, the final chapter of the 2010 Christmas GPU rush is far from set in stone, but NVIDIA is off to one hell of a start. The GTX 580 is undoubtedly the flagship product that many hoped would come from the Fermi architecture and the suddenness of its release shouldn’t be lost on anyone. It is meant to compete against whatever AMD has planned in the next month or so, but somehow NVIDIA beat their rivals to the punch, and that in itself speaks volumes about a true dedication towards moving the gaming market forward. How will this play out once the competition hits its stride? We have absolutely no idea, but it goes without saying that NVIDIA hit this one straight out of the park.



 