The day has finally arrived: NVIDIA GeForce RTX is here and we’re able to tell you all about it. We’ve spent the last week with both the GeForce 2080 and 2080 Ti and feel like we’ve only scratched the surface. There’s a lot going on with these cards and they’re poised to change the face of game rendering as we know it, so let’s not waste any time and get right into it. This is our review of the NVIDIA GeForce RTX 2080 and 2080 Ti.
- MSRP: $1199 (RTX 2080 Ti Founders Edition)/$799 (RTX 2080 Founders Edition)
- CUDA Cores: 4352 (2080 Ti)/2944 (2080)
- RTX Ops: 78T (2080 Ti)/60T (2080)
- Giga Rays/s: 10 (2080 Ti)/8 (2080)
- Base Clock: 1350MHz (2080 Ti)/1515MHz (2080)
- Boost Clock: 1635MHz OC (2080 Ti)/1800MHz OC (2080)
- Memory Speed: 14 Gbps (2080 Ti)/14 Gbps (2080)
- Memory Config: 11GB GDDR6 (2080 Ti)/8GB GDDR6 (2080)
- Memory Interface Width: 352-bit (2080 Ti)/256-bit (2080)
- Memory Bandwidth (GB/s): 616 GB/s (2080 Ti)/448 GB/s (2080)
- Displays Supported: 4
- Maximum Resolution: 7680x4320
- Standard Connectors: DisplayPort 1.4, HDMI 2.0, USB Type-C
- Maximum GPU Temperature: 89C (2080 Ti)/88C (2080)
- Graphics Card Power: 260W (2080 Ti)/225W (2080)
- Recommended Power Supply: 650W (2080 Ti)
- Power Connectors: 8 pin + 8 pin (2080 Ti)/6 pin + 8 pin (2080)
- Dimensions: 4.556” (H) x 10.5” (L) x 2-slot (W)
- Key Features: Real-time ray tracing, GPU Boost 4.0, GeForce Experience, NVIDIA Ansel, NVIDIA Highlights, G-Sync Compatible, Microsoft DirectX 12/Vulkan/OpenGL 4.5 support, DisplayPort 1.4, HDMI 2.0b, HDCP 2.2, NVIDIA NVLink (SLI), USB Type-C and VirtualLink
To say NVIDIA’s 20-series was highly anticipated would be the understatement of the year in both tech and gaming circles. Given that level of excitement and the new premium pricing, I was happy to see NVIDIA give their cards a facelift. The cards look, feel, and perform better than ever before.
The biggest change is that each GPU now features an extended vapor chamber and dual-fan cooling. The 2080 and 2080 Ti share the same design here and, in our testing, it results in a quieter, cooler card.
Compared to the 1080, the 2080 line is certainly taller but shares the same length and width as its predecessor, so it should fit in your existing case just fine. To accommodate the larger dual axial fans, it does adopt a more rectangular shape, putting it in line with a number of partner cards rather than the narrow chassis we’ve grown used to.
Without question, though, these are the nicest video cards I have ever held. The aluminum body feels great and more premium than any other card we’ve had in for testing. They’re simple, but the mix of black and aluminum looks absolutely fantastic. They’ve even gone so far as to bevel the edges and polish the text, so the card appears trimmed in whatever case lighting you have.
The cards also feature a gorgeous new backplate. Without a hot-spot tool, I can’t make claims on its effectiveness, but it certainly looks good. The whole card gets warm to the touch, notably so on the backplate, so it’s safe to say it’s dissipating some heat. Even if it wasn’t, though, it’s a bit of trim that really ties the look together.
Do note, however, that these cards seem substantially harder to disassemble than prior generations. We don’t take things to that extent here at MMORPG, but other reviewers have reported a much tougher time of it, so if you’re looking to mod either of these for water cooling, be prepared to do a lot of unscrewing.
Another neat addition is the move to NVLink over the traditional SLI fingers found on past cards. NVLink is a new connection method that NVIDIA says will allow for “50X the transfer bandwidth” of last-gen SLI. We haven’t had the chance to test this for ourselves, but we’re curious whether this indicates a renewed commitment to SLI after support seemed to wane in the 10-series. When properly supported by games, that kind of bandwidth could lead to much better scaling in multi-GPU systems.
Lastly, we have the new rear I/O. With support for up to four displays, both cards offer three DisplayPort 1.4 outputs, a single HDMI 2.0b port with HDCP 2.2 support, and VirtualLink USB Type-C as a new standard for VR headsets.
Turing Architecture - RTX 20-Series is RICH in Potential
The thing to understand about the RTX cards is this: they’re changing the game for much more than raising the FPS in today’s titles. In fact, I’d say the 2080 is only likely to get better with time as games are patched and developed to take advantage of this new technology and these new rendering methods. Both of these cards are capable of 4K60 or better (especially in the case of the 2080 Ti), but what’s going on behind the scenes has the potential to lead to dramatic improvements in what developers can do with their games. We’re not just talking about making games look pretty, either.
Put simply, inside the 2080 and 2080 Ti, you have a three-part processor that allows for simultaneous processing, enabling never-before-possible techniques like Deep Learning Super-Sampling (DLSS), which is an AI-powered method of anti-aliasing.
On the left, you have the Turing SM, which is responsible for shading duties, offering 14 teraflops of performance. On the right, you have the tensor cores, previously found in supercomputers, which are responsible for deep learning neural networks. On the bottom is the ray tracing core which, you guessed it, handles ray tracing for vastly more realistic lighting.
When it comes to individual specs, the 2080 Ti features an impressive 4352 CUDA cores with a stock boost clock of 1635MHz on the Founders Edition, overclocked by NVIDIA by 90MHz over reference. That’s hardly the limit of what these cards are capable of thanks to the new NVIDIA Scanner one-click overclock, but we’ll get to that soon. The card also packs 11GB of GDDR6 VRAM running at 14Gbps on a 352-bit bus, giving us an incredible 616 GB/s of bandwidth. It’s also specced for 10 Giga Rays a second and 78T RTX-OPS, up from 76T on the reference edition.
The 2080, on the other hand, features 2944 CUDA cores and a boost clock of 1800MHz, again up-clocked by 90MHz from reference. This model features 8GB of GDDR6, again running at 14Gbps but on a 256-bit bus, for a total bandwidth of 448GB/s. It’s designed to deliver 60T RTX-OPS (up from 57T on reference) and 8 Giga Rays a second.
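Those bandwidth figures follow directly from the rest of the spec sheet; a quick sketch of the arithmetic (per-pin data rate times bus width, divided by eight bits per byte):

```python
# Memory bandwidth from spec-sheet numbers:
# bandwidth (GB/s) = data rate per pin (Gbps) * bus width (bits) / 8 bits-per-byte
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(memory_bandwidth_gbs(14, 352))  # 2080 Ti -> 616.0 GB/s
print(memory_bandwidth_gbs(14, 256))  # 2080    -> 448.0 GB/s
```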
Both cards promise, and deliver, 4K 60FPS gameplay and full HDR support.
But back to that GPU, because it’s about more than the specs we’re used to.
Hybrid Rendering, RTX-OPS
A typical rendering load is likely to use all three of the elements in the picture above (more so if it’s actively ray tracing, which we have been unable to test). As the Turing chip takes on the workload, it physically offloads tasks to the different cores, allowing a degree of synchronous multi-tasking that’s never been possible before. Because of this, though, the cores won’t be utilized at their maximum 100% of the time, which makes quoting maximum capability a bit misleading. As a result, “RTX-OPS” is a figure derived from an algorithm based on the average utilization of each core in a gaming scenario.
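NVIDIA hasn’t published the exact formula here, but the idea of a utilization-weighted figure can be sketched like this; the per-core peak throughputs and utilization weights below are purely hypothetical, for illustration only:

```python
# Illustrative sketch of a utilization-weighted ops figure (NOT NVIDIA's
# actual RTX-OPS formula): weight each core type's peak throughput by its
# average utilization in a gaming workload, then sum.
def weighted_ops(peak_tops, utilization):
    return sum(peak_tops[core] * utilization[core] for core in peak_tops)

# Hypothetical numbers purely for demonstration
peaks = {"shader": 14.0, "tensor": 110.0, "rt": 45.0}   # peak TOPS per core type
util  = {"shader": 0.8,  "tensor": 0.2,  "rt": 0.4}     # avg. utilization
print(weighted_ops(peaks, util))  # 51.2
```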
Mesh Shading, Deep Learning Super-Sampling (DLSS), Ray Tracing
Hybrid Rendering allows for a brand new shading model that expands what’s possible in video games. This new model is called Mesh Shading. Since Turing allows parallel processing, it’s able to pull the geometry, vertex, and tessellation shading stages into its own workflow, where previously those techniques relied heavily on the CPU.
Mesh Shading also opens the door to Dynamic Level of Detail (LOD) control on the GPU, again removing the CPU bottleneck developers previously faced. Using Hybrid Rendering in conjunction with Dynamic LOD, Turing can ensure that what you’re seeing up close is high resolution and detailed, while everything far away and out of sight isn’t sucking down system resources and killing performance.
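A minimal sketch of the kind of distance-based LOD decision described above, the sort of per-object choice mesh shading lets the GPU make for itself; the thresholds and level count here are made up for illustration:

```python
# Sketch of distance-based dynamic LOD selection. Thresholds are illustrative:
# level 0 = full-detail mesh, higher levels = progressively coarser meshes.
def select_lod(distance: float, thresholds=(10.0, 50.0, 200.0)) -> int:
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # beyond the last threshold: coarsest mesh (or cull)

print(select_lod(5.0))    # 0: close by, full detail
print(select_lod(120.0))  # 2: mid-distance, coarser mesh
print(select_lod(999.0))  # 3: far away, cheapest representation
```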
Together, these elements mean developers can now put many, many more objects on screen. The demo I experienced featured over 340,000 asteroids at one point. Three hundred forty thousand. It looked amazing.
The other game-changer is DLSS, Deep Learning Super-Sampling. Using Turing’s Tensor Cores, the 2080 and 2080 Ti use their neural network to analyze the scene as it’s being rendered, intelligently filling in portions of the image to provide a picture akin to NVIDIA’s TAA anti-aliasing method at a fraction of the performance cost. We were able to compare TAA and DLSS in a new Final Fantasy XV benchmark and saw an FPS boost from 40 to 55 - a 37.5% uplift - with no visible difference in quality. (Ed. note - performance testing on page 2 was conducted before this updated demo was made available and thus uses the standard download and High preset without DLSS.)
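For the record, the uplift arithmetic on those benchmark numbers:

```python
# DLSS vs. TAA in the Final Fantasy XV benchmark: 40 FPS -> 55 FPS
taa_fps, dlss_fps = 40, 55
uplift = (dlss_fps - taa_fps) / taa_fps * 100
print(f"{uplift:.1f}% faster with DLSS")  # 37.5% faster with DLSS
```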
Finally, we come to ray tracing and, combined with AI and NVIDIA’s rendering algorithms, RTX. Unfortunately, all we can say is that it’s rich with potential. At the time of this review, no game has released a patch to enable it and the demos, while beautiful, aren’t the same as going hands on in a game. When those games do come out, you can expect a follow up article breaking down the reality of real-time ray tracing versus the promise.
If you’re unfamiliar with the concept, ray tracing allows lighting, shadows, and reflections to be more realistic than ever before. In the video above, you can see it in action. By calculating individual rays and rendering with RTX, these cards are able to model the reflection, refraction, and absorption of light in a way that’s never before been possible. If games can use it while still delivering reasonable frame rates, we’ll truly be in the next generation of video game graphics.
GPU Boost 4.0 and One-Click Overclocking (NVIDIA Scanner)
Before getting to the benchmarks, I’d be remiss not to mention two other features: GPU Boost 4.0 and NVIDIA Scanner. GPU Boost is NVIDIA’s system for intelligently ramping up clock speeds to deliver the best performance possible. In GPU Boost 3.0, users could adjust the power and temperature targets and their GTX card would run uncapped, ramping itself up until it reached those thresholds before dropping back down to the base clock speed. With GPU Boost 4.0, we now have the freedom to adjust the actual performance curves being applied, lengthening the time before the card slows itself down and providing a plateau of heightened performance.
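The curve behavior described above can be sketched roughly like this; the throttle point and ramp shape are illustrative, not NVIDIA’s actual curve:

```python
# Rough sketch of the GPU Boost idea: hold the boost clock (the "plateau")
# while there is thermal headroom, then ramp down toward base clock as the
# card approaches its temperature limit. All numbers are illustrative.
def boost_clock_mhz(temp_c, base=1350, boost=1635, throttle_start=75, temp_limit=89):
    if temp_c <= throttle_start:
        return float(boost)          # plateau: full boost clock
    if temp_c >= temp_limit:
        return float(base)           # at the limit: back to base clock
    # linear ramp-down between the throttle point and the temperature limit
    frac = (temp_limit - temp_c) / (temp_limit - throttle_start)
    return base + (boost - base) * frac

print(boost_clock_mhz(60))  # 1635.0: cool card, full boost
print(boost_clock_mhz(82))  # 1492.5: halfway through the ramp-down
print(boost_clock_mhz(89))  # 1350.0: at the limit, base clock
```

Raising the power or temperature target in a tool like Precision X1 effectively pushes `throttle_start` and `temp_limit` outward, widening that plateau.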
More exciting to me was the new NVIDIA Scanner tool, which allows you to safely overclock your card with the press of a single button. I tested this within EVGA’s new Precision X1 software (it’s an API, so developers can adopt it into their own programs) and, simply by pressing “Scan” and waiting 15 minutes for it to finish its tests, I was able to apply a 1870MHz overclock on my 2080 Ti without adjusting any thermal or power limits. Wow. By simply raising the power target, not the temp, the system reassessed and came back with a 1956MHz overclock! Since the software is developed by NVIDIA directly, they’re able to use their knowledge of the architecture and voltage requirements to do this without pushing too much wattage through the card or causing unnecessary crashes during the testing process. Color me impressed: it’s simple, safe, and works well.