NVIDIA GTX TITAN vs. SLI & Crossfire

Author: SKYMTL
Date: March 13, 2013
Part Number: GTX TITAN

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer GD235HZ
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel i7 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 314.14 Beta
AMD 13.2 Beta 7



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores are the average of 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Time Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes that a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values, meanwhile, are simply two data points representing the one-second intervals in which the fewest and the most frames were rendered. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, but it isn’t quite representative of what you’ll actually see on the screen.

FRAPS also has the capability to log average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs we have begun to use. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what can actually happen within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing.
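To make the distinction concrete, here is a minimal Python sketch of how the average and the per-second “FPS over time” numbers are derived. The timestamps are made up for illustration; a real FRAPS log from a 60 second run would contain thousands of entries.

from collections import Counter

# Hypothetical frame timestamps in milliseconds, as FRAPS would log them.
timestamps_ms = [0.0, 16.7, 33.4, 52.0, 68.7, 1005.0, 1021.7, 2050.0]

benchmark_length_s = timestamps_ms[-1] / 1000.0

# Average FPS: total rendered frames divided by the run length,
# e.g. 4,000 frames over 60 seconds = 66.67 FPS.
average_fps = len(timestamps_ms) / benchmark_length_s

# "FPS over time": count how many frames fall inside each one-second bucket.
frames_per_second = Counter(int(t // 1000) for t in timestamps_ms)
for second in sorted(frames_per_second):
    print(f"second {second}: {frames_per_second[second]} FPS")

print(f"average: {average_fps:.2f} FPS")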

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation rather than averaging things out over a full second. The higher the frame time, the longer that particular frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.

While frame times are an interesting metric to cover, it’s important to put them into a more straightforward context as well. In its frame time analysis, FRAPS reports a timestamp (again in milliseconds) for each rendered frame, in the order the frames were rendered. For example, if Frame 20 occurred at 19ms and Frame 21 at 30ms of the benchmark run, subtracting Frame 20’s timestamp from Frame 21’s tells us that Frame 21 took 11ms to render. Repeating this calculation across the entire run shows frame time consistency.
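As a quick illustration, the following sketch performs that subtraction across a whole log. The timestamps here are hypothetical, not actual FRAPS data:

# Hypothetical FRAPS-style timestamps (ms), one per rendered frame.
timestamps_ms = [0.0, 9.0, 19.0, 30.0, 46.0, 59.0]

# Frame time = difference between consecutive timestamps,
# e.g. the frame at 30ms minus the frame at 19ms = 11ms to render.
frame_times_ms = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
print(frame_times_ms)  # [9.0, 10.0, 11.0, 16.0, 13.0]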

In order to put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second, or about 17 milliseconds; 33ms equals 30 FPS, 50ms is 20FPS and so on. Contrary to framerate results, higher frame times are actually worse since they represent a longer “waiting” period between each frame.

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 40ms, or 25FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.
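The conversion itself is just 1000 divided by the frame time in milliseconds. A short sketch, including the 40ms / 25FPS cutoff discussed above:

def ms_to_fps(frame_time_ms: float) -> float:
    """Convert a single frame time in milliseconds to an equivalent framerate."""
    return 1000.0 / frame_time_ms

for ft in (16.7, 33.3, 40.0, 50.0):
    flag = "  <- below the 25 FPS threshold" if ft > 40.0 else ""
    print(f"{ft:5.1f} ms = {ms_to_fps(ft):5.1f} FPS{flag}")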

Consistency is a major factor here as well. Too much variation between adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 40ms (25 FPS) several times over the course of a second would lead to an experience which is anything but fluid. And even though deviations between slightly lower frame times (say 10ms and 30ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation, the better the experience.
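One simple way to quantify that consistency is to flag large swings between adjacent frame times. The sketch below uses made-up frame times and an arbitrary 15ms threshold purely for illustration; it is not the exact methodology used in this review:

# Hypothetical frame times (ms) with visible spikes in the middle.
frame_times_ms = [13.0, 14.0, 40.0, 13.0, 39.0, 14.0, 15.0]

# Flag any adjacent pair whose frame times differ by more than 15ms;
# repeated swings like 13ms -> 40ms -> 13ms are what shows up as stutter.
SPIKE_THRESHOLD_MS = 15.0
for i in range(1, len(frame_times_ms)):
    delta = abs(frame_times_ms[i] - frame_times_ms[i - 1])
    if delta > SPIKE_THRESHOLD_MS:
        print(f"frame {i}: {frame_times_ms[i - 1]}ms -> {frame_times_ms[i]}ms "
              f"(swing of {delta}ms)")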

Since the entire point of this exercise is to determine how much the frame time varies within each second, we will see literally thousands of data points being represented. So expect some truly epic charts.
 
 
 
