
NVIDIA GeForce GTX 980 Performance Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,840
Location
Montreal
The launch of the GeForce GTX 980 is supposed to provide gamers with a high end graphics card but, looking past its obvious performance features, the underlying Maxwell architecture represents a significant shift in GPU engineering philosophy. By approaching GPU design in a way that mirrors NASA’s “faster, better, cheaper” approach from the mid-’90s, NVIDIA is looking to offer something unique, forward-thinking and highly efficient without offering up performance as a sacrificial lamb. Considering what was achieved with Kepler and its various iterations, what we’re seeing today is nothing more than the tip of the iceberg.

NVIDIA’s Maxwell architecture was actually heralded by a seemingly nondescript pipe-cleaning part, the GTX 750 Ti. Its GM107 core allowed for impressive performance per watt and set the stage perfectly for today’s follow-on launch of the GTX 980 and GTX 970. GM107’s design advances have been carried over en masse to the larger GM204 core which powers these higher-end Maxwell parts.

<iframe width="640" height="360" src="//www.youtube.com/embed/TmAXedJ4FKQ?list=UUTzLRZUgelatKZ4nyIKcAbg" frameborder="0" allowfullscreen></iframe>​

Unlike in previous generations, necessity has dictated that efficiency improvements come from the core architecture itself rather than from a switch to a different manufacturing process. Whereas Kepler’s optimizations on the performance per watt front partially originated from the move to 28nm, Maxwell doesn’t rely on anything quite as obvious to achieve its goals. While it still uses the same highly refined 28nm manufacturing process as its predecessors, power consumption and heat production are both far, far lower. By optimizing at the architectural and draw command levels, NVIDIA can remain on a mature, less expensive manufacturing process while deftly avoiding the potential supply issues which are typically created when moving to a new process technology.

GTX-980-123-17.jpg

Many aspects of this launch will look like NVIDIA’s Kepler introduction from more than two years ago. Back then a so-called “mid-range” part was also launched first since the competition had absolutely nothing to respond with, and the focus was on squeezing the most performance out of an energy-efficient core. Not much has changed in the last two years since NVIDIA’s architectural goals are still firmly tied to minimizing TDP while offering class-leading performance across both the mobile and desktop spaces. Due to a number of factors, these aspects eluded AMD’s grasp with Hawaii, which will likely allow Maxwell to surge ahead on a number of different fronts.

In many ways Maxwell arguably has further-reaching implications within the notebook space than it does on the desktop. While desktop gamers aren’t typically constrained by worries over power consumption, mobile users want gaming on the go and, up until now, achieving playable framerates while maintaining battery efficiency has been nearly impossible. Maxwell will change this equation and has already launched within a version of NVIDIA’s GTX 860M which is specifically targeted towards the thin and light segment.

GTX-980-123-50.jpg

NVIDIA’s new GM204 core will be used on a pair of graphics cards: the aforementioned GTX 980 and GTX 970. With the GTX 980 being the spiritual successor to the GTX 780 Ti / GTX 780 and boasting an incredible 5 Teraflops of single precision throughput, one would expect it to have significantly better specifications. On paper at least, it doesn’t, but it can still easily outperform the GTX 780 Ti in many instances. There are just 2048 CUDA cores but, due to Maxwell’s broad-scale efficiency improvements, those cores are far better utilized and can provide up to 40% better performance on a per-unit basis. The Texture Unit count has also seen a significant reduction but the internal processing format results in drastic uplifts here as well.

One area where NVIDIA has invested additional hardware resources is the Render Output Units. Since the Maxwell core can theoretically process a much greater amount of data than the GTX 780 Ti ever could, additional raster pipelines were required. The inclusion of 64 ROPs should also benefit the overall performance of the GPU’s local memory buffer, which has already received a shot of adrenaline.

Speaking of memory, don’t let those memory performance numbers in the chart above fool you. NVIDIA is doing some clever backroom compression and efficiency optimizations which allow that 7Gbps GDDR5 memory to achieve throughput that’s roughly equal to a speed of 9.3Gbps. In theory that results in an effective bandwidth of 298 GB/s which is still well short of the GTX 780 Ti’s 336 GB/s and the R9 290X’s 320 GB/s. However, there’s now 4GB on tap for improved high resolution framerate consistency.
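For anyone who wants to check the math behind those figures, it’s straightforward: bus width divided by eight, multiplied by the data rate. A quick sketch using the numbers above (our own arithmetic, assuming the GTX 980’s 256-bit bus, not NVIDIA’s internal figures):

```python
# Memory bandwidth in GB/s: (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 7.0))  # GTX 980 raw: 224.0 GB/s
print(bandwidth_gbs(256, 9.3))  # GTX 980 effective with compression: ~297.6 GB/s
print(bandwidth_gbs(384, 7.0))  # GTX 780 Ti: 336.0 GB/s
print(bandwidth_gbs(512, 5.0))  # R9 290X: 320.0 GB/s
```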

Much of the GM204’s stated improvements stem from its ability to reach extremely high clock frequencies, a direct result of the core’s incredibly low TDP overhead. Even though the GTX 980 can hit over 1200MHz on a regular basis and the GTX 970 isn’t far behind at 1085MHz, their TDPs are only 165W and 145W respectively. Yes, you read that right: these new cards are, in a broad sense, nearly 100W more efficient than their predecessors. That leads to a much cooler-running, less power hungry system. We can almost hear the ITX crowd rejoicing now….
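Those clock speeds also line up nicely with the 5 Teraflop figure quoted earlier. A rough sanity check, assuming the usual two FLOPs per CUDA core per clock (the standard FMA counting convention) and an illustrative boost clock of about 1225MHz:

```python
# Theoretical single precision throughput: cores * 2 FLOPs * clock speed.
cores, clock_hz = 2048, 1225e6          # ~1225MHz boost clock is our assumption
tflops = cores * 2 * clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")                                      # ~5.02
print(f"{tflops * 1000 / 165:.1f} GFLOPS per watt at a 165W TDP")  # ~30.4
```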

GTX-980-123-5.jpg

With all of that taken into account, there are several things other than clock speeds which set the GTX 970 apart. NVIDIA has cut out a trio of SMMs (Maxwell’s version of the SMX module), which results in 1664 cores and 104 Texture Units. Meanwhile, it retains the same back end as its bigger sibling so the ROP partitions, cache structure and memory interface remain intact.

Much like NVIDIA’s launch of the GTX 680, the GTX 980 and GTX 970 will be highly disruptive cards for the entire graphics market. The GTX 980’s price of $549 undercuts the GTX 780 Ti by a massive $150 and manages to be just a bit more expensive than AMD’s substantially lower performing R9 290X. As a matter of fact, it is so disruptive that NVIDIA is discontinuing their GTX 780 Ti and GTX 780 without announcing any price cuts, likely because there are very few of those cards left in the channel. Meanwhile, the GTX 760 will fall to the lower $219 bracket.

The GTX 970 on the other hand will land at exactly the same price as the GTX 770 and will push that card into EOL status as well. NVIDIA’s numbers have this card beating the higher priced R9 290 by a substantial margin so this one-two punch will likely have AMD looking long and hard at their pricing structure.

By introducing a card that replaces the GTX 780 Ti at a much lower price point, NVIDIA is once again showing that they can create an efficient next generation mid-tier core that competes with the best that was on offer only a few hours ago. They seem to have done so without moving to a different manufacturing process, which makes these achievements all the more impressive on paper. However, past the paper specifications and a seemingly awesome price point, we still have to find out how the GTX 980 fares in true comparative testing.
 
In-Depth with the Maxwell Architecture & GM204


Maxwell represents the next step in the evolution of NVIDIA’s Fermi DX11 architecture, a process which started with Kepler and now continues into this generation. This means many of the same “building blocks” are being used but they’ve gone through some major revisions in an effort to further reduce power consumption while also optimizing die area usage and boosting processing efficiency.

At this point some of you may be thinking that by focusing solely on TDP numbers, NVIDIA is offering up performance as a sacrificial lamb. This couldn’t be further from the truth. By improving on-die efficiency and lowering power consumption and heat output, engineers are giving themselves more room to work with. Let’s take a high end part like the GK110-based GTX 780 Ti as an example. Like its predecessors, NVIDIA was constrained to a TDP of 250W, so they basically crammed as much as possible into a die which hit that plateau at reasonable engine frequencies. As Maxwell comes into the fold, the 250W ceiling won’t change but the amount of performance which can be squeezed out of those 250W will improve dramatically. In short, Maxwell’s evolutionary design will have a profound effect upon NVIDIA’s flagship parts’ overall performance while also bringing substantial performance per watt benefits to every market segment. GM204 and the previous pipe-cleaning GM107 core are just the starting points of what will be a top-to-bottom initiative.

Before we move on, some mention has to be made of the 28nm manufacturing process NVIDIA has used for the GM204 core because it’s an integral part of the larger Maxwell story. With the new core layout, efficiency has taken a significant step forward, allowing NVIDIA to provide GK110-matching performance from a smaller, less power hungry core. GPU manufacturers used to rely upon manufacturing process improvements to address their need for lower TDP, but now NVIDIA has made reduced power requirements an integral part of their next generation architecture. This puts the need for a smaller process node into question for now but it certainly doesn’t preclude the use of 20nm or smaller nodes sometime in the future, particularly when larger Maxwell designs beckon. NVIDIA likely feels that with their enhanced, highly optimized core redesign, jumping to an unproven manufacturing process really isn’t necessary for great results.

GTX-980-123-16.png

Much like with Kepler and Fermi, the basic building block of all Maxwell cores is the Streaming Multiprocessor. This is where the similarities end since the main processing stages of Maxwell’s SMs have undergone some drastic changes.

While Kepler’s SMXs each housed a single core logic block consisting of a quartet of Warp Schedulers, eight Dispatch Units, a large 65,536 x 32-bit Register File, 16 Texture Units and 192 CUDA cores, Maxwell’s design breaks these up into smaller chunks for easier management and more streamlined data flows. While the number of schedulers, Dispatch Units and the Register File size remain the same, they’re now separated into four distinct processing blocks, each containing 32 CUDA cores and a purpose-built Instruction Buffer for better routing. In addition, load / store units are now joined to just four cores rather than Kepler’s six, allowing each SMM to process 32 threads per clock despite its lower number of cores. This layout ensures the CUDA cores aren’t all fighting for the same resources, thus reducing computational latency.

Maxwell’s Streaming Multiprocessor design was created to shrink the SM’s physical size, allowing more units to be used on-die in a more power-conscious manner. This has been achieved by lowering the number of CUDA cores from Kepler’s 192 to 128 while the Texture Unit allotment goes from 16 to just 8. However, due to the inherent processing efficiency within Maxwell, these cuts don’t amount to much since each individual CUDA core is now able to offer up to 35% more performance while the Texture Units have also received noteworthy improvements. In an apples to apples comparison, an SMM can deliver 90% of an SMX’s performance while taking up much less space.
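That 90% figure falls straight out of the numbers in this paragraph, as a quick back-of-the-envelope calculation shows:

```python
# 128 Maxwell cores, each up to ~35% faster, versus Kepler's 192 cores per SMX.
smm_vs_smx = (128 * 1.35) / 192
print(f"{smm_vs_smx:.0%}")  # 90%
```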

GTX-980-123-22.png

The Maxwell SM (or SMM) uses NVIDIA’s third generation PolyMorph Engine which still includes various fixed function stages like a dedicated tessellator and parts of the vertex fetch pipeline alongside a shared instruction cache and 96KB of shared memory. There are some internal changes which have substantially improved throughput, particularly at higher tessellation levels. This also marks a significant change from the GM107, which used the older PolyMorph 2.0 engine.

There are still a number of shared resources here as well but in some cases they have been thoroughly overhauled. For example, each pair of processing blocks has access to 24KB of L1 / Texture cache (for a total of 48KB per SMM), servicing 64 CUDA cores. This is up significantly in comparison to Kepler as well as GM107.

GTX-980-123-15.png

With the changes outlined above, we can now get a better understanding of how NVIDIA utilized the SMM to create their new GM204 core. In this iteration, the GTX 980 receives the full allotment with 16 Streaming Multiprocessors for a total of 2048 CUDA cores while the GTX 970 has a trio of SMMs disabled and includes 1664 cores.

The back-end functionality is where there have been some significant changes. There are four 64-bit memory controllers which boast improved efficiency (more on that on the next page) and these are tied to four ROP partitions, each of which contains 16 ROPs, a huge increase over previous core designs. Meanwhile, on-die L2 cache has been boosted to 2048KB, broken into four partitions of 512KB. This substantial increase in on-chip cache means there will be fewer requests to the DRAM, reducing power consumption and eliminating certain bandwidth bottlenecks when paired up with the other memory enhancements built into Maxwell.

As with Fermi and Kepler, scaling on the ROP partitions, memory controllers and L2 is done in a linear fashion so eliminating a 64-bit controller will also result in lower ROP and L2 allotments.
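In other words, the back end scales in fixed per-partition chunks. A minimal sketch of that relationship, assuming GM204’s per-partition allotments from the paragraph above (a 64-bit controller, 16 ROPs and 512KB of L2 each):

```python
# Hypothetical helper: back-end resources scale linearly with partition count.
def gm204_backend(partitions: int) -> dict:
    return {
        "bus_width_bits": 64 * partitions,
        "rops": 16 * partitions,
        "l2_kb": 512 * partitions,
    }

print(gm204_backend(4))  # full GM204: 256-bit bus, 64 ROPs, 2048KB of L2
print(gm204_backend(3))  # a cut-down part would lose a quarter of each
```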
 

Refreshed Memory Caching & A New Video Engine

Refreshed Memory Caching


As resolutions continue to increase and DX12 promises to enhance GPU to CPU communications, the graphics card’s onboard memory will begin playing an increasingly expanded role in overall game performance. Knowing this, the GTX 980’s 4GB, 7Gbps, 256-bit memory interface may look a bit anemic and anything but “next gen” at first glance. On paper at least it provides less bandwidth than the GTX 780 Ti’s layout, but there’s far more going on behind the scenes than what first meets the eye.

In creating GM204, NVIDIA has thoroughly revised their memory subsystem rather than enhancing speed or moving towards a wider interface. Focusing on core architectural improvements over raw power was born of necessity since a larger 384-bit or 512-bit interface would have taken up a significant amount of die space while GDDR5 modules operating at frequencies higher than 7Gbps aren’t available yet.

GTX-980-123-1.png

While the core’s caching and memory hierarchy has gone largely unchanged from GK110, NVIDIA’s engineers have instituted a number of features that allow GM204 to utilize its available bandwidth more efficiently. These enhancements begin with a new third generation lossless data compression algorithm applied as the core’s data is written out to memory. Additional bandwidth savings are achieved when secondary processing stages like the core’s Texture Units read data stored within the memory buffer since the information is compressed yet again for quick transfer.

In order to enhance output bandwidth and memory subsystem efficiency, NVIDIA is actually using multiple layers of compression. In principle this approach works well in some instances (like on anti-aliased surfaces) but basic compression methods struggle to cope with lighter AA and non-AA situations. This is where the delta color compression mode comes into play.

First instituted in Fermi, DCC quickly calculates the difference between each pixel within a data block and its adjacent neighbor and then attempts to compress those values together into the smallest data packet possible. The effectiveness of this method is largely determined by the algorithm’s built-in calculation possibilities, which is why, in this third generation, NVIDIA has added more delta compression “choices”. This leads to less data being flagged as incompressible and sent through the rendering pipeline in raw lossless format, hogging bandwidth.
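To illustrate the basic idea (this is our own toy sketch, not NVIDIA’s actual algorithm), delta compression stores one anchor value and then only the small differences between neighboring pixels; smooth gradients compress nicely while noisy blocks fall back to raw storage:

```python
# Toy delta compression: keep the first pixel, then only per-pixel deltas.
def delta_compress(block: list[int], max_delta_bits: int = 4):
    limit = 2 ** (max_delta_bits - 1)
    deltas = [b - a for a, b in zip(block, block[1:])]
    if all(-limit <= d < limit for d in deltas):
        return block[0], deltas   # compressible: anchor value + small deltas
    return None                   # incompressible: sent raw, hogging bandwidth

print(delta_compress([100, 101, 103, 104]))  # (100, [1, 2, 1])
print(delta_compress([100, 200, 15, 240]))   # None -> raw lossless format
```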

GTX-980-123-2.png

Alongside the compression algorithm improvements, core caching and general data access has been addressed as well. As a result, the GM204 core is able to reduce the number of bytes that have to be fetched from local memory and thus reduce memory bandwidth overhead.

Practically speaking, NVIDIA claims up to a 25% improvement in efficiency when compared directly against Kepler’s capabilities, which gives Maxwell the equivalent of roughly 9.3Gbps of effective bandwidth in some situations. It should be interesting to see whether this will help in higher resolution scenarios, where lower bandwidth cards tend to struggle.


Maxwell’s New Video Engine Explained


NVIDIA’s Kepler architecture had an extremely robust video engine and NVENC encoder, as evidenced by its capability to pre-process and stream high definition content to NVIDIA’s SHIELD. However, in a market that’s increasingly gravitating towards 4K resolutions alongside technologies like G-SYNC and DisplayPort’s Adaptive Sync, some revisions were in order to make sure Maxwell was natively able to support upcoming display features.

GTX-980-123-3.png

One of the primary additions to this new video engine is its capability to provide an extreme amount of output bandwidth. It boasts full support for upcoming 5K (5120×3200) resolutions at 60Hz, and up to four 4K MST displays can be driven from a single card. Alternately, these capabilities also open the door to 120Hz and 144Hz 4K panels. In addition, GM204-based products will be the first to boast native support for HDMI 2.0, which brings with it 4K / 60Hz capabilities along with an enhanced HD audio backbone. The embedded DisplayPort (eDP) protocol has been rolled into Maxwell as well. In order to create a connectivity standard that is compatible with these formats, the GTX 900-series uses a single dual-link DVI output as well as three DisplayPort 1.2 connectors and a single HDMI 2.0 port.

Further helping things along is an updated NVENC encoder that now includes H.265 support and features H.264 encoding that’s roughly 2.5x faster than Kepler’s. Not only will this allow for 4K video capture at 60Hz within ShadowPlay (an option which will be added to NVIDIA’s application immediately) but H.265’s compression improvements have far-reaching ramifications for home video streaming. With H.265, a user could technically stream a 4K video from their gaming PC to a next generation SHIELD portable device and then pass that signal on to their 4K HDTV. Alternately, the encoding efficiency could also lead to drastically reduced lag times when using NVIDIA’s GameStream technology.
 

Dynamic Super Resolution / Enhanced DX12 Support

Dynamic Super Resolution Explained


With 4K becoming all the rage these days, gamers who can’t afford to spend a fortune on a display may feel like they’re being left behind. However, some users have taken the slightly different path of downsampling in an effort to squeeze the best possible image quality out of their current screens. In a downsampling scenario the graphics processor renders a given scene at a higher resolution than the screen can display and then scales the image back down to fit the screen’s native resolution. Under certain circumstances this method can provide significantly higher image quality, though it can also introduce noticeable artifacts and it tends to be quite challenging to set up properly.

Dynamic Super Resolution (DSR) does the hard work for you by providing a toggle within GeForce Experience and NVIDIA’s driver control panel that quickly enables downsampling. It also applies a 13-tap Gaussian filter during the conversion process, which is supposed to eliminate or at least reduce any onscreen artifacts.
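NVIDIA hasn’t published the exact kernel it uses, but for the curious, a generic 13-tap Gaussian filter looks something like this (the sigma value here is purely illustrative):

```python
import math

# Build a normalized 13-tap Gaussian kernel; the weights sum to 1.
def gaussian_kernel(taps: int = 13, sigma: float = 2.0) -> list:
    half = taps // 2
    w = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-half, half + 1)]
    total = sum(w)
    return [v / total for v in w]

print([round(v, 3) for v in gaussian_kernel()])
```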

GTX-980-123-4.png

Left: DSR Off / Right: DSR On

There are two different settings for DSR: 2X and 4X, each of which represents a different downsampling level. For example, setting 2X on a 1080P monitor will cause the original “background” render to be performed at 2715x1527 (twice the original pixel count) while 4X will essentially quadruple the initial pre-output resolution. The same goes for high resolution 1440P monitors, though pushing things to 4X would be performance-crippling since the GPU(s) would be forced to render the scene at 5120x2880.
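Since the DSR factor multiplies the total pixel count, each axis gets scaled by the square root of that factor, which is where the odd-looking 2715x1527 number comes from. Our reading of the math:

```python
import math

# DSR render resolution: each axis scales by the square root of the factor.
def dsr_resolution(width: int, height: int, factor: float):
    scale = math.sqrt(factor)
    return round(width * scale), round(height * scale)

print(dsr_resolution(1920, 1080, 2))  # (2715, 1527), matching the text
print(dsr_resolution(1920, 1080, 4))  # (3840, 2160)
print(dsr_resolution(2560, 1440, 4))  # (5120, 2880)
```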

When taken at face value, the potential of improved image quality through the use of a simple toggle switch seems quite tempting but there are some potential drawbacks as well. First and foremost, even though the onscreen image still uses your screen’s native (lower) resolution, the graphics card is rendering a much more demanding scene in the background. In some situations this could dramatically reduce performance, particularly on more affordable graphics cards. In addition, you will still be somewhat limited by the actual pixel count of whatever screen you have.

Naturally, more testing will be needed and NVIDIA has opened the possibility of better DSR performance as Maxwell matures. Plus it is compatible with all GeForce GPUs so everyone can give it a try.


Enhanced DX12 Support


Maxwell is the first GPU architecture that is designed with full native support for DX12. Now this may sound like an odd statement since Microsoft and NVIDIA announced that existing DX11 cards (Fermi, Kepler and Maxwell) would be compatible with the new API but what hasn’t been revealed is the actual extent of that compatibility. Now, a few more details have emerged.

GTX-980-123-14.png

While one of DX12’s primary goals is to streamline system communication by granting developers expanded control over how their applications access the GPU and CPU pipelines, there are some additional features as well. Some of these remain behind closely guarded NDAs, but we do now know that both Conservative Raster and Raster Ordered Views will only be supported on NVIDIA’s GM204 core and won’t be compatible with any other DX11-class GPU (that includes any previous version of Maxwell like the GM1xx products). We’ll cover these two technologies as additional details about DX12 are released.
 

A Closer Look at the GTX 980


GTX-980-123-6.jpg

From the outside at least, there’s not much to distinguish the GTX 980 from its predecessors. It uses the same blower-style heatsink with a great looking windowed aluminum and anodized black cover alongside the usual glowing GeForce logo. Internally, the actual cooler design is identical to the one found on NVIDIA’s reference GTX 770.

NVIDIA hasn’t moved away from their SLI interface and the GTX 980 is compatible with quad card setups, so a pair of edge-mounted connectors is present.

For those wondering, the card’s length of about 10.5” mirrors that of the GTX 780, GTX 780 Ti and GTX 770.

GTX-980-123-9.jpg

Around back things start to get interesting with a full-length backplate, even though NVIDIA hasn’t located any GDDR5 modules on the PCB’s underside. Nonetheless, the addition of this small touch gives a much more “finished” look to the GTX 980’s overall design language.

GTX-980-123-10.jpg

You may have noticed in the last image that there is a small raised area on the GTX 980’s backplate. By removing a screw, this can actually be taken off. According to NVIDIA this was conceived as a way to increase airflow when multiple cards are placed close to one another in SLI. Normally the bottom card’s fan would be starved for fresh air but, by removing this section, a narrow but clear path is opened directly above the fan’s outside blades, allowing for improved performance. Most motherboards have sufficient space between slots, but smaller micro-ATX boards that support SLI typically place the GPUs extremely close together.

GTX-980-123-12.jpg

Power is provided by a pair of six-pin connectors which actually allow for far more input power than the GTX 980 needs, so this design still leaves a good amount of room for overclocking.

GTX-980-123-11.jpg

The most drastic change brought about with this generation of cards is the rear I/O panel layout. Not only does the GTX 980 receive a fully perforated shielded area to provide additional airflow but there is also a trio of DisplayPort 1.2 outputs, an HDMI 2.0 port and the usual DVI connector. This allows a GTX 980 to drive up to four 4K displays at once.

GTX-980-123-18.jpg

The bare card reveals an extremely basic layout and component selection. Even though NVIDIA says component commonality between Kepler cards and the GTX 980 was maximized in order to facilitate production for board partners, the reference cards use a different layout. This means any full coverage water blocks that were compatible with the reference GTX 770, GTX 780 and GTX 780 Ti won’t fit here.
 
Test System & Setup

Main Test System

Processor: Intel i7 4930K @ 4.7GHz
Memory: G.Skill Trident 16GB @ 2133MHz 10-10-12-29-1T
Motherboard: ASUS P9X79-E WS
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell 2412M (1440P) / ASUS PQ321Q (4K)
OS: Windows 8.1 Professional


Drivers:
AMD 14.7 Beta
NVIDIA 344.07 Beta


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 2 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
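In miniature, the FRAPS-style bookkeeping described above boils down to this (the per-second frame counts here are hypothetical, purely for illustration):

```python
# Frame counts logged once per second over a hypothetical 5-second run.
frames_per_second = [70, 66, 59, 72, 68]

print(sum(frames_per_second) / len(frames_per_second))  # average FPS: 67.0
print(min(frames_per_second))  # minimum FPS: the slowest single second
print(max(frames_per_second))  # maximum FPS: the fastest single second
```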

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The higher the frame time, the longer that individual frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.
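A small sketch of why frame times are so much more revealing, using hypothetical timestamps: a single 45ms frame reads as a visible stutter even though the run still averages a healthy-sounding framerate:

```python
# Milliseconds at which each frame appeared on screen (hypothetical capture).
timestamps_ms = [0.0, 16.6, 33.4, 50.1, 95.0, 111.5]

frame_times = [round(b - a, 1) for a, b in zip(timestamps_ms, timestamps_ms[1:])]
print(frame_times)       # [16.6, 16.8, 16.7, 44.9, 16.5] -- the spike is a stutter
print(max(frame_times))  # worst-case frame time in ms
print(round(1000 * len(frame_times) / timestamps_ms[-1], 1))  # average FPS: ~44.8
```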

We are now using FCAT for ALL benchmark results, other than 4K.
 

Assassin’s Creed IV: Black Flag / Battlefield 4

Assassin’s Creed IV: Black Flag


<iframe width="640" height="360" src="//www.youtube.com/embed/YFgGnFoRAXU?rel=0" frameborder="0" allowfullscreen></iframe>​

The fourth iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark we use a run-through of the Havana area which features plenty of NPCs, distant views and high levels of detail.


2560 x 1440

GTX-980-123-39.jpg

GTX-980-123-30.jpg


Battlefield 4


<iframe width="640" height="360" src="//www.youtube.com/embed/y9nwvLwltqk?rel=0" frameborder="0" allowfullscreen></iframe>​

Given its teething problems since release, BF4 has been a bone of contention among gamers. In this sequence we use the Singapore level, which combines three of the game’s major elements: a decayed urban environment, a water-inundated city and a forested area. We chose not to include multiplayer results simply because their inherent randomness makes apples to apples comparisons impossible.

2560 x 1440

GTX-980-123-40.jpg

GTX-980-123-31.jpg
 

Call of Duty: Ghosts / Far Cry 3

Call of Duty: Ghosts


<iframe width="640" height="360" src="//www.youtube.com/embed/gzIdSAktyf4?rel=0" frameborder="0" allowfullscreen></iframe>​

The latest Call of Duty game may have been ridiculed for its lackluster gameplay but it remains one of the best-looking games out there. Unfortunately, due to mid-level loading, getting a “clean” runthrough without random slowdowns is nearly impossible, even with a dual SSD system like ours, so any massive framerate dips should be ignored as anomalies of poor loading optimization. For this benchmark we used the first sequence of the fifth chapter, entitled Homecoming, as every event is scripted so runthroughs will be nearly identical.

2560 x 1440

GTX-980-123-41.jpg

GTX-980-123-32.jpg


Far Cry 3


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



2560 x 1440

GTX-980-123-43.jpg

GTX-980-123-34.jpg
 

Hitman Absolution / Metro: Last Light

Hitman Absolution


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, which is one of the most graphically intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560 x 1440

GTX-980-123-44.jpg

GTX-980-123-35.jpg


Metro: Last Light


<iframe width="640" height="360" src="http://www.youtube.com/embed/40Rip9szroU" frameborder="0" allowfullscreen></iframe>​

The latest iteration of the Metro franchise once again sets high water marks for graphics fidelity and makes use of advanced DX11 features. In this benchmark we use the Torchling level, which represents a scene you’ll be intimately familiar with after playing this game: a murky sewer underground.


2560 x 1440

GTX-980-123-45.jpg

GTX-980-123-36.jpg
 

Thief / Tomb Raider

Thief


<iframe width="640" height="360" src="//www.youtube.com/embed/p-a-8mr00rY?rel=0" frameborder="0" allowfullscreen></iframe>​

When it was released, Thief was arguably one of the most anticipated games around. From a graphics standpoint it is something of a tour de force. Not only does it look great but the engine combines several advanced lighting and shading techniques that are among the best we’ve seen. One of the most demanding sections is actually within the first level, where you must scale rooftops amidst a thunderstorm. The rain and lightning flashes add to the graphics load; since the flashes occur randomly, you will likely see interspersed dips in the charts below.


2560 x 1440

GTX-980-123-46.jpg

GTX-980-123-37.jpg



Tomb Raider


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. Not only is this one of the most popular games around, it is also one of the best looking, using the entire bag of DX11 tricks to deliver a properly atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560 x 1440

GTX-980-123-47.jpg

GTX-980-123-38.jpg
 