September 18, 2012, 10:16 AM
SKYMTL
HardwareCanuck Review Editor
Join Date: Feb 2007
Location: Montreal
Posts: 11,836

I'm really sorry I didn't see this when it was first posted. I'll take a crack at answering this. PLEASE FEEL FREE TO REPOST THIS as I am only saying this once.

First and foremost, I can tell you straight off the bat that neither NVIDIA nor AMD requires that certain benchmarks be used. Both companies issue what's called a Reviewer's Guide, which gives reviewers an idea of general performance numbers for the graphics card in question against the competition. Under no circumstances are the games mentioned in the RG a requirement. Rather, they are a rough guideline that reviewers are supposed to use to ensure their review unit is operating within specified norms.

On the other hand, both AMD and NVIDIA love it when reviewers go above and beyond the call of duty. That means calling out and possibly benchmarking features that they talk about in marketing materials and that they feel are beneficial to end users. Discussing PhysX, Eyefinity or 3D Vision may seem like fanboyism to some, but ignoring them is a huge mistake IMO. The trick is not to let too many of those "extra" features impact one's overall conclusion, as they are mostly value-added items rather than an integral part of the gaming experience for most people.

Now, the real question lies in why many writers use games like Sleeping Dogs, Dirt Showdown, Shogun 2, Batman AC and, in the past, Far Cry 2, Just Cause 2, Metro 2033 and others within reviews. Every one of those titles has a major feature that some reviewers appreciate: a built-in or stand-alone benchmarking tool. NVIDIA and AMD love having developers add these tools since it virtually guarantees that certain lazy reviewers will use these sponsored titles in their benchmarking suites. Brilliant, isn't it? This is why we see so many Gaming Evolved or TWIMTBP titles in reviews.

To expand upon my point about reviewer laziness, let's focus on some situations where accuracy has been thrown to the wind in favor of production speed. While built-in benchmarks can be accurate, all too often their performance isn't remotely reflective of in-game situations. Take a look at Sleeping Dogs or Batman: Arkham City for examples of this; a player will NEVER be doing a "flythrough" of a game scene. Ever. We also can't forget that both NVIDIA (examples: Just Cause 2 & Batman: AC) and AMD (examples: Sleeping Dogs & Shogun 2) have been known to optimize drivers for specific instances occurring within those predetermined sequences.

Unfortunately, most reviewers don't care about accuracy; they just care about getting comparative benchmark numbers, regardless of whether a GPU performs a certain way when actually playing a game or not. Plus, what could be easier than running a script which does the preset benchmarking for them, spits out results and doesn't involve actually sitting down for hours to play the games? You can pretty much see who does this (and yes, even sites that say they use runthroughs are sometimes lying) by simply running through the built-in benchmark on a similar system and comparing the results.
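
To give a rough idea of what I mean by "comparing the results," here's a toy Python sketch of that cross-check. Every title and FPS figure in it is made up purely for illustration; it's not pulled from any actual review or tool.

[CODE]
# Toy illustration of the cross-check described above: line up a review's
# published averages against what the canned built-in benchmarks produce on a
# comparable system. All FPS numbers here are made up for the example.
published = {"Batman: AC": 71.3, "Just Cause 2": 88.0, "Metro 2033": 41.2}
built_in = {"Batman: AC": 70.9, "Just Cause 2": 87.6, "Metro 2033": 41.5}

for title, pub_fps in published.items():
    diff = abs(pub_fps - built_in[title]) / built_in[title] * 100
    print(f"{title}: {diff:.1f}% apart from the built-in result")

# If every title lands within a fraction of a percent of the canned benchmark,
# odds are the "custom runthrough" was never actually played.
[/CODE]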

With that being said, benchmarking a single runthrough has issues as well, since using one instead of a repeatable built-in benchmark sequence introduces a certain amount of variance into the equation. Variance is bad when a mere 5% differential could separate one GPU from another and skew a conclusion. We try to eliminate that by running through the same scene three separate times. However, HWC also uses built-in sequences (right now the only one we're using is Dirt 3 since it can be highly modified from its original form) but ONLY after extensive validation is done that ensures the sequence A) depicts an actual gameplay sequence and B) lines up with in-game performance.
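
For anyone curious what that three-run averaging and validation look like in practice, here's a bare-bones Python sketch of the general idea. The FPS figures and the 5% tolerance are placeholder assumptions for illustration, not our actual data or internal tooling.

[CODE]
# Bare-bones sketch of averaging repeated manual runthroughs and sanity-checking
# a built-in sequence against them. All numbers are placeholders, not real data.
from statistics import mean

def runthrough_average(fps_runs):
    """Average FPS over repeated manual runs and report the run-to-run spread."""
    avg = mean(fps_runs)
    spread_pct = (max(fps_runs) - min(fps_runs)) / avg * 100
    return avg, spread_pct

def built_in_lines_up(built_in_fps, runthrough_avg, tolerance_pct=5.0):
    """Only trust a built-in sequence if it tracks real gameplay performance."""
    diff_pct = abs(built_in_fps - runthrough_avg) / runthrough_avg * 100
    return diff_pct <= tolerance_pct

# Hypothetical: three manual runthroughs of the same scene on one card.
runs = [62.4, 60.9, 61.7]
avg, spread = runthrough_average(runs)
print(f"Runthrough average: {avg:.1f} FPS (run-to-run spread: {spread:.1f}%)")

# Hypothetical built-in benchmark result at the same settings.
print("Built-in sequence usable:", built_in_lines_up(58.1, avg))
[/CODE]

The point of the spread check is simple: if the run-to-run variance is bigger than the gap between the two cards you're comparing, the conclusion isn't worth much.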

Sorry for this post's length, but there were quite a few things to cover.