NVIDIA RTX 2080 Ti, 2080 & 2070 Explained
Two years, three months and three days. That’s how long it’s been since NVIDIA announced their GTX 1080, the first of many graphics cards featuring the Pascal architecture. That’s an eternity in technology years, but Pascal is now on its way out, replaced with a brand new architecture code-named Turing, and the products built on it could change the way we look at games now and into the future.
When compared to the situation two years ago, the market is a very different place right now. Despite leading a fast-paced hype train, AMD’s Vega proved to be too little too late, even though their entire lineup got a significant sales boost from the cryptocurrency craze. The number of popular games that actually need every ounce of leading edge graphics horsepower has also dwindled. Even mid-tier GPUs can now push enough pixels to satisfy 1080p and even 1440p displays in virtually every popular game around. Optimization has come a long way, and for proof of that look no further than titles like Doom and the Call of Duty series, both of which look stunning yet have impressively modest hardware requirements.
So where do we go from here? There’s obviously a phenomenal amount of performance potential still being left on the table, since current-generation hardware has reached the peak of what many modern game engines and rendering techniques can accomplish. But outside the gaming space, NVIDIA has made rapid advances in the realms of artificial intelligence and purpose-built ray tracing. Their intent is to further improve performance for traditional techniques, but also to leverage AI and ray tracing within the game environment to enhance visuals and create truly next-generation games.
In the past, such techniques were available, but their hardware requirements were so dauntingly high that using them in real-time game rendering was impossible. NVIDIA’s intent is to give developers a suite of tools that allow these features to be implemented in future games without those incredibly high performance penalties. That’s where RTX steps into the equation.
Under The Hood
At the heart of Turing’s success or failure lies that RTX ecosystem, which is essentially a holistic platform of hardware, software and APIs meant to fuse advanced AI deep learning and ray tracing into the more traditional graphics pipeline. That means ray tracing is harnessed through Microsoft’s DXR, Vulkan or NVIDIA’s own OptiX engine, while AI is accessed via NVIDIA’s NGX deep learning features.
Meanwhile, the ubiquitous graphics elements we’ve all come to know, like the Streaming Multiprocessor for rasterization and compute-focused CUDA cores, are still present and accounted for. Basically, RTX will (should?) allow the purpose-built elements within NVIDIA’s Turing architecture to hum along.
The Turing architecture includes two new core types alongside the graphics pipeline: RT cores, which are specifically engineered to excel in scenarios where ray tracing is required, and Tensor cores, which home in on deep neural network processing. Tensor cores first appeared in Volta, and both core types debuted together in NVIDIA’s new Quadro RTX series. What’s been done here is pretty interesting since there are now very specific cores for very specific tasks, almost like fixed-function stages in other CPU and GPU architectures.
Since it has dedicated cores for different workloads, what Turing does is internally parallelize as many functions as possible in an effort to properly direct its four primary resource pools (RT, Tensor, CUDA and raster engine). Within each frame, the architecture can rasterize, process AI and perform complex ray tracing nearly in parallel rather than sequentially, offering a massive speedup over traditional chip designs. Hence NVIDIA claims it can offer up to 10 times the performance of a standard GTX 1080 Ti in operations that combine ray tracing, shading and DNN workloads.
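The intuition behind that parallel-versus-sequential claim can be sketched with a toy model. This is not NVIDIA’s scheduler and the per-stage timings below are invented purely for illustration; it only shows why overlapping independent resource pools bounds the frame time by the slowest stage rather than the sum of all stages.

```python
# Toy model of overlapping Turing's resource pools within a frame.
# Stage timings (ms) are invented for illustration, not measured values.
STAGE_MS = {"raster": 6.0, "rt_cores": 4.0, "tensor_dnn": 2.0}

def frame_time_sequential(stages):
    """Frame time if each pool must wait for the previous one to finish."""
    return sum(stages.values())

def frame_time_concurrent(stages):
    """Frame time if the pools run fully in parallel; the slowest dominates."""
    return max(stages.values())

seq = frame_time_sequential(STAGE_MS)
par = frame_time_concurrent(STAGE_MS)
print(f"sequential: {seq:.1f} ms, concurrent: {par:.1f} ms, speedup: {seq / par:.1f}x")
```

Real workloads have dependencies between stages, so the actual overlap is never this perfect, but the direction of the benefit is the same.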
But what about operations that don’t use RTX-specific chip sections like the RT and Tensor cores? If you think that means a huge chunk of Turing’s 18.9 billion transistors won’t be utilized by the vast majority of games, you are absolutely right. NVIDIA is banking heavily on developers getting onboard to properly harness these new features, since without RTX being enabled in a game engine, Turing won’t be utilized to anywhere near its fullest potential.
Like it or not, developer relations and buy-in will, more than ever, determine the ultimate success or failure of NVIDIA’s latest architecture. In striving to offer a truly forward-looking, next-generation graphics architecture with its roots planted firmly in professional spheres, Turing may end up being a jack of all trades but master of none. However, these new graphics cards have a pretty substantial fallback plan as well. In order to discuss that, let’s finally get to the specifications of the new Turing-based GeForce graphics cards.
RTX – This Isn’t A Cheap Date
While the RTX-level functions won’t be put to good use until games are released that support them, the GeForce RTX series nonetheless packs an almighty punch despite pricing that could only be called stratospheric. In order to balance out performance in the current crop of titles and many of those on the horizon, NVIDIA has improved their Streaming Multiprocessor architecture within Turing. They’ve done this by optimizing the SM design, enhancing its feature set and also expanding the number of available CUDA cores in each price bracket.
The RTX 2080 Ti, RTX 2080 and RTX 2070 all have notably more traditional processing cores under their hoods. In addition, NVIDIA has moved to the all-new high-speed GDDR6 memory (notice a distinct lack of HBM here, folks?) which offers up to 14 Gbps of speed. So despite these cards having the same amount of physical memory as their predecessors, bandwidth has shot through the roof. And while we don’t yet know the number of ROPs or texture units, nor what other kinds of optimizations have been wrought within the typical shader pipeline, the resulting performance speedup in non-RTX games should still be substantial.
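To see where that bandwidth jump comes from, peak memory bandwidth is just the bus width multiplied by the per-pin data rate, divided by eight bits per byte. The 14 Gbps figure comes from NVIDIA’s GDDR6 numbers above; the bus widths used here (352-bit for the 2080 Ti, 256-bit for the 2080 and 2070) are the widely reported values and should be treated as assumptions until final specs land.

```python
def gddr6_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: (bus width * per-pin rate) / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

# Assumed bus widths; 14 Gbps per pin is NVIDIA's quoted GDDR6 speed.
print(gddr6_bandwidth_gbs(352, 14))  # 616.0 GB/s (RTX 2080 Ti)
print(gddr6_bandwidth_gbs(256, 14))  # 448.0 GB/s (RTX 2080 / 2070)
```

For comparison, the same formula applied to the GTX 1070’s 256-bit, 8 Gbps GDDR5 gives 256 GB/s, which is why "same capacity, far more bandwidth" holds.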
Looking past the excitement of NVIDIA launching three GPUs in one fell swoop, one thing that’s concerning is of course the premium you’ll be paying over Pascal-based GPUs. From a reference card perspective, the RTX 2080 Ti, RTX 2080 and RTX 2070 will respectively cost $300, $150 and $120 more than the products they are supposed to replace. High-end gaming just got a lot less affordable, but if performance improvements align with those price increases, this pill may be a bit easier to swallow.
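For the curious, those premiums can be sanity-checked from the list prices. The Turing figures are NVIDIA’s announced reference MSRPs; the Pascal figures are the commonly cited street MSRPs at the time and are assumptions on my part, not official comparisons.

```python
# Reference-card MSRPs in USD. Pascal prices are assumed, commonly cited
# figures (GTX 1080 Ti $699, GTX 1080 $549, GTX 1070 $379), not official.
pairs = {
    "RTX 2080 Ti vs GTX 1080 Ti": (999, 699),
    "RTX 2080 vs GTX 1080":       (699, 549),
    "RTX 2070 vs GTX 1070":       (499, 379),
}

for matchup, (turing, pascal) in pairs.items():
    print(f"{matchup}: +${turing - pascal}")  # +$300, +$150, +$120
```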
The Founders Edition cards, on the other hand, take pricing to a whole new level since NVIDIA is treating them as premium commodities this time around rather than as reference cards. Basically, it will be up to the board partners to design and build “reference spec” cards that hit MSRP while NVIDIA themselves take a stab at the custom-cooled, pre-overclocked segment. Essentially, we are looking at another $100 to $200 tacked onto the already-high RTX 2000 series pricing from the previous chart.
The new Founders Edition cards have taken a very different approach from previous iterations by forgoing the blower-style heatsink and instead leveraging an axial-style design that we typically see from board partners. NVIDIA also claims these cards will boast an upgraded PWM, an integrated vapor chamber, a backplate for better cooling and near-silent operation even when overclocked.
Turing brings some important advancements in other areas as well, some of which are in evidence on the Founders Edition cards. It supports the new standardized VR connection called VirtualLink which passes VR data over a USB-C interface and implements 8K video output via DisplayPort. For creators there’s now the possibility for hardware accelerated 8K HEVC encoding and a bitrate improvement of about 25%. Finally, there’s also an NVLink interface that supplements the traditional multi-card SLI. All in all, there will be a lot to talk about as we get closer to the official release date on September 20th.
Parting Thoughts – A Personal Opinion
I’m going to start wrapping this article up, but not in the usual way. Rather, I want to talk on a bit more personal level about my experience watching Jen-Hsun Huang present Turing and the new GeForce RTX series. In short, while I wasn’t in attendance, watching it was frustrating and exciting all at the same time. I think those feelings were shared by the vast majority of people commenting in the live stream’s chat.
In an attempt to justify their move towards higher levels of rendering through ray tracing and AI, NVIDIA ran a very real risk of alienating their intended audience: the very gamers who will buy these cards. Explaining new concepts (or in this case old concepts that finally have the horsepower to become reality) is never easy and even my eyes glazed over a few times while hoping they’d just get on to gaming. And yet when it came to showing off how these new features impacted actual games, it turned out there was plenty to be excited about.
NVIDIA is striving for something new and they should be applauded for it. Here’s the challenge though: at present there aren’t any games that can take advantage of the very features you’re paying a premium for. A lot of this architecture’s success or failure depends upon the number of developers who are willing to play in NVIDIA’s RTX sandbox. Will it catch on, or could RTX become the next TrueAudio? Only time will tell, but at this early stage there certainly seems to be a lot of potential. The initial list of some 21 supported games is also a great start.
Turing seems to be an architecture that is well optimized for current games, but also one that will be honed over the coming months and even years. Its technologies are forward-looking and impressive, at least on paper if not yet in practice. Think of it this way: if the game engine is a blank canvas and current game technologies the paint brushes, Turing could very well be the rainbow of colored paints forward-looking developers use to create a masterpiece. On the other hand, those painters also need to be willing to look beyond traditional black and white.
This has been a decade-long development process for NVIDIA, and for many gamers the concepts they’re pushing are confusing, enticing and frightening all at the same time. Not to mention expensive. However, in a market that’s devoid of any real competition, it is undeniably great to see both evolutionary and revolutionary ideas flourishing.