
NVIDIA GeForce GTX 780 Review

SKYMTL

HardwareCanuck Review Editor
The GeForce GTX 780 may be this year’s most unexpected graphics card. After AMD publicly stated the HD 7970 GHz Edition would remain their fastest single GPU solution throughout 2013, many expected NVIDIA to maintain the status quo as well. The GTX 680 has consistently held a dominant position within the market while other 600-series cards are well placed to combat AMD’s product stack. Maxwell, with its brand new architecture and a focus on dynamic parallelism, is due to launch next year, so it only made sense that valuable resources would be directed towards next generation parts. Well, that hasn’t happened, and the new GTX 700-series is already upon us.

A number of well executed plans allowed the transition from the 600 to the 700 series to happen so quickly. Yields of NVIDIA’s largest Kepler core, GK110, have gradually improved, opening the door to higher end variations of numerous products. NVIDIA also ensured their smaller, more efficient cores were able to compete against AMD’s flagship GPUs, virtually guaranteeing performance dominance once the larger ASIC was ready for prime time.

GTX-780-NV-18.jpg

As one might expect, NVIDIA didn’t have to reinvent the wheel in order to create the GTX 780. They’re simply using the same GK110 core first seen in TITAN while cutting back on a number of elements. Instead of TITAN’s 14 active SMX modules, two more have been disabled, leaving 12 SMX units for a total of 2304 CUDA cores and 192 texture units.

Naturally, this reduction cuts back on the GTX 780’s performance somewhat in comparison to TITAN, but double precision processing has been significantly scaled back as well. Unlike TITAN, its DP throughput ratio is akin to that of a GTX 680, or roughly 1/24th the speed of single precision. For budding GPGPU developers, stunted double precision abilities will be disappointing, but gamers (the market targeted by the GTX 780) won’t be impacted by this in any way.

The GTX 780 still retains many of the hallmarks which made TITAN such a dominant presence. Its six 64-bit memory controllers offer a 384-bit interface alongside 6 ROP partitions (for a total of 48 ROPs) and 1536KB of L2 cache. These all represent a significant improvement over the specifications of a GTX 680.

GTX-780-NV-57.jpg

For the time being TITAN will remain front and center as NVIDIA's single GPU flagship, allowing the GTX 780 to replace the GTX 680. Expect the former GK104 frontrunner to gradually fade from the shelves as it makes way for other GTX 700-series cards. While we can’t discuss the specifics, expect NVIDIA to quickly roll out the 700-series, battering AMD at every conceivable price point.

With a decisive edge in core count, memory size (up from 2GB to 3GB), bandwidth, ROPs and texture units, NVIDIA claims the GTX 780 performs roughly 40% faster than its predecessor. Due to the use of a massive GK110 core, this new card can’t hit the same default Boost clocks as a GTX 680, but the raw specifications should more than make up for any potential shortfall. The larger 3GB GDDR5 frame buffer also eliminates many of the GTX 680’s perceived bottlenecks at ultra high resolutions and extreme anti-aliasing settings.

One of the more interesting aspects of the GTX 780 is its adherence to the 250W TDP specification from the GK110-equipped TITAN. Since TDP can’t be directly equated with power consumption, this doesn’t mean both cards will require the same amount of juice. Rather, due to the core similarity, the GTX 780 has the capability to produce as much heat as its bigger brother, even though two additional SMX modules have been disabled.

A number of other features are present here as well. GPU Boost 2.0 makes an encore appearance, keeping clock speeds, voltage and power in harmony for optimal performance. NVIDIA will also allow limited software-centric voltage modifications, though as with TITAN they likely won’t result in a significant amount of additional clock speed overhead.

GTX-780-NV-15.jpg

Pricing is key to the GTX 780’s success and NVIDIA had some big expectations to live up to here. Their GTX 680 was released at a surprisingly low $499, causing instant headaches for AMD’s HD 7970. However, unlike the situation a year or so ago, NVIDIA’s GTX 780 finds itself in exclusive possession of a given performance segment. Other than some judicious price cuts, AMD can’t respond either. The Tahiti architecture, regardless of optimizations realized through silicon respins, just can’t be pushed any further given its current thermal and power consumption limits. This means that, on paper, AMD doesn’t have anything which can compete against the GTX 780 and won’t have a concerted response for at least the next six to seven months.

With these factors taken into account alongside the GTX 780’s potential performance, a price of $649 doesn’t seem all that extreme. It perfectly demonstrates this new card’s position well above the outgoing GTX 680 and AMD’s GHz Edition while ensuring the TITAN remains an exclusive niche product.

GTX-780-NV-14.jpg

The GTX 780 isn’t meant to be an upgrade path for GTX 680 users, though many enthusiasts will likely switch over anyway. Rather, it focuses on offering a large framerate boost for GTX 580 users who found the GTX 680 didn’t provide enough of a performance uplift to justify such a large investment. Current AMD users should also appreciate this new card’s efficiency, acoustics, smooth gaming experience and driver support considering the current Radeon lineup’s rather spotty reputation in these key areas.

NVIDIA’s timing seems to be impeccable. The GTX 780 is being released with a hard launch at a relatively competitive price and at a time when AMD would need to significantly revise their roadmap in order to deliver a timely response. And as we will see in this review, there’s a whole lot to like about the latest iteration of GK110.
 
A Closer Look at the GTX 780


GTX-780-NV-1.jpg

With the GTX 780 using the same core as NVIDIA’s TITAN, it shouldn’t come as a surprise that its visual design follows very much the same lines. Even the length is the same at 10.5”. Truth be told, the only way to distinguish the two reference cards from one another is the imprinted logo right in front of the backplate.

GTX-780-NV-2.jpg
GTX-780-NV-3.jpg

Unlike the GTX 690’s design, there aren’t any space-age materials being used here, but the overall effect is still amazing in our opinion. Most board partners will be releasing custom designs soon after launch, but the reference version maintains the status quo for high end GeForce cards with a metal and plastic heatsink shroud. NVIDIA’s addition of a secondary fan intake grille over the VRMs helps speed up airflow while also ensuring critical components are adequately cooled.

GTX-780-NV-4.jpg

While this design may look overly industrial, its windowed view of the heatsink and glowing LED logo have won many gamers over. Some applications like EVGA’s Precision can even control the LED’s color, allowing for a nearly countless number of combinations. We’ve also heard through the grapevine that NVIDIA will be releasing a special SLI bridge to complement their high end cards, which will also incorporate a lit logo.

GTX-780-NV-17.png

While the heatsink carries all of the hallmarks of class-leading engineering like vapor chamber technology and aluminum fins that are specifically designed to channel airflow, NVIDIA has also enhanced their fan controller. The older controller tended to adapt almost too well to changing conditions, causing peaks and valleys in the fan’s rotational speed. As a result, gamers were sometimes able to hear the quick yet subtle changes.

This new design relies on software to stabilize fan speeds within an optimal range, ensuring output is constant without sacrificing core temperatures. While it may cause slightly higher decibel readings in some cases, the noise will be much less noticeable to end users due to its consistency.
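
NVIDIA hasn’t published the controller’s logic, but the behaviour described above can be sketched in a few lines: a deadband around the temperature target plus heavy smoothing keeps the fan speed steady through small thermal fluctuations. Everything in this sketch (thresholds, step sizes, function names) is our own illustration, not NVIDIA’s firmware.

[CODE]
# Illustrative sketch of a "stabilized" fan curve: a deadband around the
# temperature target plus heavy smoothing keeps the fan speed constant
# through small thermal fluctuations. All values here are hypothetical.

def stabilized_fan_speed(temp_c, prev_speed, target_c=80.0,
                         min_speed=30.0, max_speed=50.0,
                         deadband_c=2.0, smoothing=0.1):
    """Return the new fan speed (percent) for the current core temperature."""
    error = temp_c - target_c
    if abs(error) <= deadband_c:
        return prev_speed                      # inside the deadband: hold steady
    # Outside the deadband, ramp linearly toward the appropriate bound...
    raw = prev_speed + error                   # 1% duty per degree of error (arbitrary)
    raw = max(min_speed, min(max_speed, raw))
    # ...but blend slowly so the change is gradual rather than a spike.
    return prev_speed + smoothing * (raw - prev_speed)

# Example: core temperature creeping up from 78C to 84C and back down.
speed = 40.0
for temp in (78, 79, 81, 83, 84, 84, 83):
    speed = stabilized_fan_speed(temp, speed)
    print(f"{temp}C -> {speed:.1f}% fan")
[/CODE]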


The connector selection on NVIDIA’s GTX 780 remains the same as previous 600-series cards with two DVI connectors alongside HDMI and DisplayPort outputs. Meanwhile, the move to a 6+8 pin power connector layout reflects the higher power consumption of the GK110 core, and 3-way SLI support has been retained.

GTX-780-NV-8.jpg

Even the bare PCB layout hasn’t changed all that much from the one we saw on TITAN. NVIDIA is still using an advanced all-digital 6-phase PWM for the core while the 3GB of GDDR5 memory receives an additional two phases. The only major difference between TITAN and the GTX 780 from this perspective is the former’s use of additional memory modules to achieve its 6GB capacity.
 

GPU Boost 2.0 Explained & Tested


When the GTX 680 was introduced, GPU Boost quickly became one of its most talked-about features. At its most basic, NVIDIA’s GPU Boost monitored the power requirements of your graphics card and dynamically adjusted clock speeds in order to keep it at a certain power target. Since most games don’t take full advantage of a GPU’s resources, in many cases this meant Kepler-based cards were able to operate at higher than reference frequencies.

In order to better define this technology, NVIDIA created a somewhat new lexicon for enthusiasts which was loosely based upon Intel’s current nomenclature. Base Clock is the minimum speed at which the GPU is guaranteed to operate under strenuous gaming conditions, while Boost Clock refers to the average graphics clock rate when the system detects sufficient TDP overhead. As we saw in the GTX 680 review, the card was able to Boost above the stated levels in non-TDP-limited scenarios, but the technology was somewhat limited in the way it monitored on-chip conditions. For example, even though the ASIC’s Power Limit could be modified to a certain extent, the monitoring solution treated TDP as its sole reference point instead of factoring in additional (and essential) items like temperature.

GTX-TITAN-15.gif

In an effort to bypass GPU Boost’s original limitations, the GPU Boost 2.0 available on TITAN and the GTX 780 will use the available temperature headroom when determining clock speeds. This should lead to a solution that takes into account critical metrics before making a decision about the best clocks for a given situation. The Power Target is also taken into account but you can now tell EVGA Precision or other manufacturers’ software to prioritize either temperatures or power readings when determining Boost clocks.
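
NVIDIA hasn’t disclosed the exact math behind Boost 2.0, but conceptually it boils down to shedding 13MHz boost bins until both the temperature and power targets are satisfied, with the prioritized metric weighted more heavily. The sketch below is purely illustrative; the clock limits, weighting factors and function name are our own assumptions, not NVIDIA’s algorithm.

[CODE]
# Hypothetical sketch of a Boost 2.0-style clock decision: start from the
# maximum boost bin and step down until both the temperature target and the
# power target are satisfied. Numbers and names are illustrative assumptions.

def pick_boost_clock(temp_c, power_w, temp_target_c=80.0, power_target_w=250.0,
                     base_mhz=863, max_boost_mhz=1006, bin_mhz=13,
                     priority="temperature"):
    """Return an estimated boost clock (MHz) given current temperature/power."""
    # How far over each limit we are, normalized so the two are comparable.
    temp_overage = max(0.0, (temp_c - temp_target_c) / temp_target_c)
    power_overage = max(0.0, (power_w - power_target_w) / power_target_w)

    # The prioritized metric is weighted more heavily when deciding how many
    # 13MHz bins to shed (13MHz is the commonly quoted Kepler boost bin size).
    if priority == "temperature":
        overage = 2.0 * temp_overage + power_overage
    else:
        overage = temp_overage + 2.0 * power_overage

    bins_to_drop = int(overage * 20)           # arbitrary aggressiveness factor
    clock = max_boost_mhz - bins_to_drop * bin_mhz
    return max(base_mhz, clock)                # never drop below the base clock

print(pick_boost_clock(temp_c=75, power_w=230))   # headroom left: full boost
print(pick_boost_clock(temp_c=90, power_w=255))   # over both targets: throttled
[/CODE]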

While some of the other technologies described in this article will eventually find their way into Kepler and even Fermi cards, GPU Boost 2.0 will remain a GK110 exclusive for the time being.

GTX-780-NV-13.jpg

In order to better demonstrate this new Boost Clock calculation, we ran some simple tests on a GTX 780 using different temperature targets while running at a 106% Power Offset (the max available on this card). For this test we used EVGA’s Precision utility with priority on the Temp Target.

GTX-780-NV-54.jpg

The success of GPU Boost 2.0 becomes readily apparent when different temperature targets and their resulting clock speeds are compared against one another. At default, the GTX 780 is set to run with a target of 80°C and a relatively pedestrian fan speed of between 30% and 50%, making it very quiet.

As we can see, voltage and clock speeds steadily decrease as the target is lowered. This is because the built-in monitoring algorithm is trying to strike a delicate balance between maximizing clock speeds and voltage, while also taming noise output and temperatures. With a TDP of some 250W, accomplishing such a feat isn’t easy at lower temperatures so Boost clocks are cut off at the knees in some cases.

Increasing the Temperature Target above 80°C has a positive impact, to a certain extent. Since there are limits imposed at GK110’s default Boost voltage (1.125V in this case), clock speeds tend to plateau around the 940MHz mark without modifying the GPU Clock Offset or core voltage.

This new GPU Boost algorithm rewards lower temperatures, which will be a huge boon for people using water cooling or those willing to put up with slightly louder fan speeds. Simply keeping the Temp Target at its default 80°C setting and cooling the core to anything under that point will allow for moderately better clock speeds (up to 1GHz in our tests) with a minimum of effort. If you’re the enterprising type, a combination of voltage and a higher GPU Offset could allow a better cooling solution to start paying dividends in no time. Also remember that ambient in-case temperatures play a huge part in GPU Boost’s calculations, so a well ventilated case could lead to clock speed improvements.

GTX-TITAN-16.gif

As you can imagine, mucking around with the temperature offset could potentially have a dramatic effect upon fan speeds but NVIDIA has taken care of that concern. In the new GPU Boost, their fan speed curves dynamically adjust themselves to the new Temperature Target and will endeavor to remain at a constant frequency without any distracting rotational spikes.
 

Kepler’s Overclocking: Overhauled & Over-Volted


One of the major criticisms leveled at the early Kepler cards like the GTX 680 was their bewildering lack of core voltage control. Some board partners eventually added the capability to modify their cards’ voltage values but they were quickly removed in favor of a supposedly “safer” approach to overclocking. Enthusiasts weren’t happy and neither were NVIDIA’s partners since they had to eliminate certain advertised features from their products.

With GK110, voltage control is making a return... to a certain extent, and regrettably it won’t cascade down to lower-end SKUs like the GTX 680. NVIDIA will be allowing voltage changes, but the actual upper limits will remain under strict control and could be eliminated altogether on some products should a board partner decide to err on the side of caution.

GTX-TITAN-28.gif

On the previous page, we detailed how GK110’s Boost Clock is largely determined by a complex calculation which takes into account temperatures, clock speeds, fan speeds, Power Target and core voltage. As temperatures hit a given point, Boost 2.0 will adapt performance and voltage in an effort to remain at a constant thermal point. This methodology remains in place when using GK110’s expanded overclocking suite. However, in this case, the application of additional voltage will give Boost another “gear” so to speak, allowing for higher clock speeds than would normally be achieved.

GTX-TITAN-29.gif

Since over-voltage control falls under the auspices of GPU Boost 2.0, the associated limitations regarding thermal limits and their relation to final in-game clock speeds remain in place. Increasing voltage will of course have a negative impact upon thermal load, which could lead to Boost throttling performance back until an acceptable temperature is achieved. Therefore, even when a voltage increase is combined with a higher Power Limit and GPU Clock Offset, an overclock may still be artificially limited by the software’s ironclad grip unless temperatures are reined in. This is why you’ll likely want to look at improving cooling performance before assuming an overclock will yield better results or, while not recommended, expand the Temperature Target to somewhere above the default 80°C mark.

GTX TITAN’s maximum default voltage is currently set at roughly 1.125V, which results in clock speeds that (in our case at least) run up to the 940MHz mark, provided there is sufficient thermal overhead. EVGA’s Precision tool meanwhile allows this to be bumped up by a mere 0.035V, which likely won’t result in a huge amount of additional overclocking headroom. However, while the maximum Boost clock may not be significantly impacted by the additional voltage, it should allow the GTX 780 to reach higher average speeds more often, thus improving overall performance. Also expect other board partners to set different maximum voltages in their VBIOS.

In order to put NVIDIA’s new voltage controls to the test, we ran the GTX 780 at default clock speeds (GPU and Memory offsets were pegged at “0”) with the Power Target set to 106%, keeping the core temperature at a constant 75°C against a Temp Target of 80°C. In practice, this should allow that extra 0.035V to push the maximum Boost speed to another level.

GTX-780-NV-53.jpg

While the frequency increase we’re seeing here is rather anemic, NVIDIA’s inclusion of limited control over core voltage should be welcomed with open arms. Regardless of the end result, seeing a card like the GTX 780 operating above 1GHz is quite impressive. Some enthusiasts will likely turn their noses up at the severe handicap placed upon the maximum allowable limits, but anything more could negatively impact ASIC longevity.

In addition, don’t expect the same support from every board partner since NVIDIA hasn’t made the inclusion of voltage control mandatory, nor is the maximum voltage set in stone. Some manufacturers may simply decide to forego the inclusion of voltage modifiers within their VBIOS, eliminating this feature altogether.
 

GeForce Experience's ShadowPlay & OCing Gets "Reasons"

GeForce Experience’s ShadowPlay


When NVIDIA first announced GeForce Experience, many enthusiasts just shrugged and moved on with their gaming lives. However, this deceptively simple looking piece of software could very well revolutionize PC gaming, allowing for high fidelity image quality without the need to tweak countless in-game settings.

GTX-780-NV-19.png

For regular PC gamers, finding just the right settings which optimize a given hardware configuration’s performance is part of the fun. Unfortunately for novices and casual gamers who are used to the “load and play” mentality of console and tablet games, the process of balancing framerates and image quality can prove to be a daunting one. With GeForce Experience, NVIDIA takes the guesswork out of the equation by linking their software with a cloud-based service which uses a broadly established database to automatically find the best possible in-game settings for your hardware. This could potentially open up PC gaming to a much larger market.

GeForce Experience’s goals are anything but modest and, judging from a highly successful Beta phase (over 2.5 million people downloaded the application), NVIDIA will likely begin rolling out the final version in the next few months. The software’s next evolutionary steps are being developed in parallel with its ongoing refinement, so new features are being added in preparation for launch. GFE will soon be used as the backbone for NVIDIA’s SHIELD handheld gaming device, and a brand new addition aptly named ShadowPlay has entered the picture too.

GTX-780-NV-20.png

With recorded and live gaming sessions becoming hugely popular on video streaming services, ShadowPlay aims to offer a way to seamlessly log your onscreen activities without the problems of current solutions. Applications like FRAPS, which have long been used for in-game recording, are inherently inefficient since they tend to require a huge amount of resources, bogging down performance during situations when you need it the most. In addition, their file formats aren’t all that space conscious, with 1080P videos of over 10 minutes routinely eating up over a gigabyte of storage space.

By leveraging the compute capabilities of NVIDIA’s GeForce graphics cards, ShadowPlay can automatically buffer up to 20 minutes of previous in-game footage. In many ways it acts like a PVR by recording in the background using a minimum of resources, ensuring a gamer will never notice a significant performance hit when it is enabled. There is also a Manual function which can start and stop recording with the press of a hotkey. All videos are encoded in real time using H.264 / MPEG4 compression by some of the GPU’s compute modules, making for relatively compact files.

Since ShadowPlay’s recording and encoding is processed on the fly, it can be done asynchronously to the onscreen framerate so there won’t be any FRAPS-like situations where you’ll need to game at 30 or 60 FPS when recording.
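
ShadowPlay’s internals aren’t public, but its PVR-like shadow mode is conceptually just a rolling buffer of encoded video segments, where the oldest footage is discarded as new footage arrives so only the last stretch of gameplay survives. The sketch below illustrates that idea; the class, segment length and method names are our own inventions, not NVIDIA’s API.

[CODE]
# Conceptual sketch of a ShadowPlay-style rolling buffer: encoded video
# segments are appended continuously, and anything older than the retention
# window is dropped. Class/field names are hypothetical, not NVIDIA's API.

from collections import deque

class ShadowBuffer:
    def __init__(self, retention_s=20 * 60, segment_s=2):
        self.segment_s = segment_s
        # Only enough slots to cover the retention window are kept.
        self.segments = deque(maxlen=retention_s // segment_s)

    def on_segment_encoded(self, h264_bytes):
        """Called each time the GPU encoder finishes a short H.264 segment."""
        self.segments.append(h264_bytes)   # old segments fall off automatically

    def save_last_minutes(self, minutes, path):
        """Hotkey handler: dump the most recent footage to disk."""
        wanted = int(minutes * 60 / self.segment_s)
        with open(path, "wb") as f:
            for seg in list(self.segments)[-wanted:]:
                f.write(seg)

# Usage: feed dummy segments, then "save the last 5 minutes" on a hotkey press.
buf = ShadowBuffer()
for i in range(1200):                      # 40 minutes of 2-second segments
    buf.on_segment_encoded(b"\x00" * 1024) # placeholder for encoded data
buf.save_last_minutes(5, "clip.h264")
[/CODE]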


NVIDIA Adds “Reasons” to Overclocking


After TITAN launched, NVIDIA spent a good amount of time talking to overclockers in order to get their feedback about GPU Boost 2.0 and its impact upon clock speeds. From voltage to the Power Limit to the Temperature Limit, Boost imparts a large number of limiting factors onto core frequencies, but enthusiasts had no way of knowing exactly which of these was limiting their overclocks. In order to present this much-needed information in a meaningful way, NVIDIA has implemented what they call “Reasons” into overclocking software. In layman’s terms, “Reasons” simply allows you to see which settings have to be modified in order to achieve higher overclocks, and it does so in a brilliantly simple manner.

GTX-780-NV-12.jpg

As we can see above, within the Monitoring tab of EVGA Precision there are now five new categories: Temp Limit, Power Limit, Voltage Limit, OV Max Limit and Utilization Limit. Each of these logs the information for a specific Boost modifier, all of which can artificially hold back clock speeds. The graphs are presented in such a way that a reading of “1” means that a limit is being reached while a “0” means there’s still some overhead. Just take note that getting a “1” in OV (over voltage) Max Limit is a serious red flag. It means the ASIC is overly stressed, possibly leading to core damage so either the voltage or clock speeds should be dialed back as soon as possible.

In the example above, our card is being held back by the Power Limit and Voltage Limit, so increasing both within Precision should theoretically lead to higher clock speeds. This is definitely helpful, but with such stringent limitations placed on the Voltage and Power Limit modifications, one of these will always become the bottleneck regardless of how well a card is cooled.
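
Reading those five flags is dead simple; the rough sketch below shows how a single monitoring sample might be interpreted. The field names and advice strings are ours, not the actual values exposed by Precision or any other utility.

[CODE]
# Rough sketch of interpreting "Reasons"-style limit flags: a 1 means that
# particular limiter is currently capping the boost clock. Names are ours,
# not the actual fields exposed by Precision / GPU Tweak / Afterburner.

LIMIT_ADVICE = {
    "temp_limit":        "Raise the Temp Target or improve cooling.",
    "power_limit":       "Raise the Power Target (up to 106% on this card).",
    "voltage_limit":     "Apply the (small) over-voltage offset if available.",
    "ov_max_limit":      "WARNING: back off voltage/clocks immediately.",
    "utilization_limit": "GPU isn't fully loaded; nothing is being held back.",
}

def explain_limits(sample):
    """Return advice strings for every limiter that reads 1 in this sample."""
    return [LIMIT_ADVICE[name] for name, active in sample.items() if active]

# The scenario described above: held back by the power and voltage limits.
sample = {"temp_limit": 0, "power_limit": 1, "voltage_limit": 1,
          "ov_max_limit": 0, "utilization_limit": 0}
for advice in explain_limits(sample):
    print(advice)
[/CODE]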

According to NVIDIA, this feature will be available in EVGA’s Precision, ASUS’ GPU Tweak, MSI’s AfterBurner and most other vendor-specific overclocking software.
 
Testing Methodologies & FCAT Explained

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 320.18 Beta
NVIDIA 320.14 Beta
AMD 13.5 Beta 2



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Time Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
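
Expressed as a quick calculation (using the same numbers as the example above), the FRAPS-style summary boils down to this; the per-second values in the second half are purely illustrative.

[CODE]
# The FRAPS-style summary from the example above: an average from total
# frames over total time, plus a min/max taken from per-second FPS samples.

total_frames = 4000
benchmark_seconds = 60
average_fps = total_frames / benchmark_seconds
print(f"Average: {average_fps:.2f} FPS")           # 66.67 FPS

# Hypothetical per-second frame counts for the same run; min/max are just
# the single best and worst seconds, which hides everything in between.
per_second_fps = [72, 68, 65, 59, 70, 66]          # illustrative values only
print(f"Min: {min(per_second_fps)} FPS, Max: {max(per_second_fps)} FPS")
[/CODE]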

FRAPS, on the other hand, also has the capability to log average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications, a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation rather than averaging things out over a full second. The higher the frame time, the longer the wait between frames. This level of detail just isn’t possible with standard benchmarking methods.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second or about 17 milliseconds, 33ms equals 30 FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they would represent a longer interim “waiting” period between each frame.
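
Since frame time and instantaneous framerate are simply reciprocals of one another, the conversion takes one line each way:

[CODE]
# Converting between frame time (ms) and instantaneous frame rate (FPS):
# the two are reciprocals, so 1000 ms divided by one gives the other.

def ms_to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

def fps_to_ms(fps):
    return 1000.0 / fps

print(f"{fps_to_ms(60):.1f} ms")   # ~16.7 ms for a constant 60 FPS
print(f"{ms_to_fps(33):.1f} FPS")  # ~30 FPS
print(f"{ms_to_fps(50):.1f} FPS")  # 20 FPS
[/CODE]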

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms, or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation the better the experience.
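
To make the consistency point concrete, a simple pass over a list of captured frame times can flag both slow frames and large swings between adjacent frames. The 28ms ceiling follows the discussion above, while the 10ms swing threshold and the sample runs are our own illustrative choices.

[CODE]
# Simple consistency check over captured frame times (ms): flag frames that
# exceed the ~28 ms (35 FPS) ceiling discussed above, and flag large jumps
# between adjacent frames that would be felt as stutter. The 10 ms swing
# threshold is our own illustrative choice.

def analyze_frame_times(frame_times_ms, ceiling_ms=28.0, swing_ms=10.0):
    slow = [t for t in frame_times_ms if t > ceiling_ms]
    swings = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])
              if abs(b - a) > swing_ms]
    return {
        "frames": len(frame_times_ms),
        "slow_frames": len(slow),
        "big_swings": len(swings),
        "worst_ms": max(frame_times_ms),
    }

# One smooth run and one "spiky" run: the second would feel far less fluid.
smooth = [17, 18, 17, 16, 18, 17, 17, 18]
spiky = [13, 28, 13, 27, 14, 28, 13, 28]
print(analyze_frame_times(smooth))
print(analyze_frame_times(spiky))
[/CODE]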

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves the use of a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives on an internal PCI-E RAID card) connected to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is displayed by the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, frame rates and other aspects.

We've also now transitioned all testing to FCAT which means standard frame rates are also being logged and charted through the tool. This means all of our frame rate (FPS) charts use onscreen data rather than the software-centric data from FRAPS, ensuring dropped frames are taken into account in our global equation.

As you might expect, this is an overly simplified explanation of FCAT but expect our full FCAT article and analysis to be posted sometime in June. In the meantime, you can consider this article a transitional piece, though FCAT is being used for all testing with FRAPS being completely cast aside.
 
Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


Benchmark run video: http://www.youtube.com/embed/RvFXKwDCpBI?rel=0

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.


2560x1440

GTX-780-NV-37.jpg

GTX-780-NV-30.jpg

In Assassin’s Creed III, the new GTX 780 ends up significantly faster than the GTX 680 and HD 7970 GHz Edition, behaving more like a TITAN than a previous generation flagship GPU. It also seems like the GTX 780’s faster clock speeds allow it to overcome some of the handicap brought about by its two disabled SMX modules.



Crysis 3 (DX11)


Benchmark run video: http://www.youtube.com/embed/zENXVbmroNo?rel=0

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a hefty system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the heavy system resource needs of this game, post-process FXAA was used in place of MSAA.


2560x1440

GTX-780-NV-38.jpg

GTX-780-NV-31.jpg

Crysis 3 is one of the most GPU-bound games around and even the GTX 780 dips towards the 30 FPS mark every now and then. However, once again, this new GK110-based GPU is significantly ahead of its predecessor and AMD’s HD 7970 GHz Edition.
 

Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


Benchmark run video: http://www.youtube.com/embed/IFeuOhk14h0?rel=0

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


2560x1440

GTX-780-NV-39.jpg

GTX-780-NV-32.jpg

While Dirt: Showdown may seriously favor AMD’s cards, the GTX 780 is able to maintain framerates which are perfectly playable and slightly ahead of the GHz Edition. Compared against the GTX 680, the difference is like night and day.


Far Cry 3 (DX11)


Benchmark run video: http://www.youtube.com/embed/mGvwWHzn6qY?rel=0

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



2560x1440

GTX-780-NV-40.jpg

GTX-780-NV-33.jpg

Once again we see the GTX 780 benchmarking much closer to TITAN than either of the two other cards. It provides excellent performance without the associated cost of a TITAN while also significantly outpacing the GTX 680 and HD 7970 GHz Edition.
 

Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


Benchmark run video: http://www.youtube.com/embed/8UXx0gbkUl0?rel=0

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around, and this time Agent 47 goes rogue so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, which is one of the most graphically intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560x1440

GTX-780-NV-41.jpg

GTX-780-NV-34.jpg

NVIDIA’s performance in Hitman Absolution has never been what one would call impressive but the GTX 780 provides a massive improvement over the GTX 680. Against the HD 7970 GHz on the other hand, it provides just under 20% higher onscreen framerates which lines up almost perfectly with its comparative price.



Max Payne 3 (DX11)


Benchmark run video: http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


2560x1440

GTX-780-NV-42.jpg

GTX-780-NV-35.jpg

Like many of Rockstar’s games, Max Payne 3 has always been a memory hog which is likely one of the reasons why the 3GB GTX 780 holds such a significant advantage over its 2GB predecessor. However, AMD’s HD 7970 GHz puts up a compelling fight, coming in at just 15% slower.
 

Tomb Raider

Tomb Raider (DX11)


Benchmark run video: http://www.youtube.com/embed/okFRgtsbPWE

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. It is not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560x1440

GTX-780-NV-43.jpg

GTX-780-NV-36.jpg

It seems like NVIDIA has yet to fully optimize performance for Tomb Raider after successive patches essentially broke several optimizations. This results in the HD 7970 GHz providing some relatively good competition while the GTX 680 lags quite far behind.
 
