Nvidia's GeForce RTX 2080 graphics card reviewed

Nvidia’s GeForce RTX 2080 Ti has already proven itself the fastest single graphics card around by far for 4K gaming, but the $1200 price tag on the Founders Edition card we tested—and even higher prices for partner cards at this juncture—mean all but the one percent of the one percent are going to be looking at cheaper Turing options.

So far, that mission falls to the GeForce RTX 2080. At a suggested price of $700 for partner cards or $800 for the Founders Edition we’re testing today, the RTX 2080 is hardly cheap. To be fair, Nvidia introduced the GTX 1080—which this card ostensibly replaces—at $600 for partner cards and $700 for its Founders Edition trim, but that card’s price fell to $500 after the GTX 1080 Ti elbowed its way onto the scene. Right now, pricing for the RTX 2080 puts it in contention with the GeForce GTX 1080 Ti. That’s not a comfortable place to be, given that software support for Turing’s unique features is in its earliest stages. Our back-of-the-napkin math puts the RTX 2080’s rasterization capabilities about on par with those of the 1080 Ti, and rasterization resources are the dukes the middle-child Turing card has to put up today.

On top of that, plenty of gamers are just plain uncomfortable with any generational price increase from the GTX 1080 to the RTX 2080. That’s because recent generational advances in graphics cards have delivered new levels of graphics performance to the same price points we’ve grown used to. For example, AMD was able to press Nvidia hard on this point as recently as the Kepler-Hawaii product cycle, most notably with the $400 R9 290. Once Maxwell arrived, the $330 GeForce GTX 970 thoroughly trounced the Kepler GTX 770 on performance and the R9 290 on value, and the $550 GTX 980 outclassed the GTX 780 Ti for less cash. The arrival of the $650 GTX 980 Ti some months later didn’t push lesser GeForce cards’ prices down much, but it did prove an exceptionally appealing almost-Titan. AMD delivered price- and performance-competitive high-end products shortly after the 980 Ti’s release in the form of the R9 Fury X and R9 Fury.

Overall, life for PC gamers in the Maxwell-Hawaii-Fiji era was good. Back then, competition from the red and green camps was vigorous, and that competition provided plenty of reason for Nvidia and AMD to deliver more performance at the same price points—or at least to cut prices on existing products when new cards weren’t in the offing.

Pascal’s release in mid-2016 echoed this cycle. At the high end, the GTX 1080 handily outperformed the GTX 980 Ti, while the GTX 1070 brought the Maxwell Ti card’s performance to a much lower price point. AMD focused its contemporaneous efforts on bringing higher performance to more affordable price points with new chips on a more efficient fabrication process, and Nvidia responded with the GTX 1060, GTX 1050 Ti, and GTX 1050. Some months later, we got a Titan X Pascal at $1200, then a GTX 1080 Ti at $699. The arrival of the 1080 Ti pushed GTX 1080 prices down to $500. Life was, again, good.

The problem today is that AMD has lost its ability to keep up with Nvidia’s high-end product cycle. The RX Vega 56 and RX Vega 64 arrived over a year after the GTX 1070 and GTX 1080, and they only achieved performance parity with those cards while proving much less power-efficient. Worse, Vega cards proved frustratingly hard to find for their suggested prices. Around the same time, a whole bunch of people got the notion to do a bunch of cryptographic hashing with graphics cards, and we got the cryptocurrency boom. Life was definitely not good for gamers from late summer 2017 to the present, but it wasn’t entirely graphics-card makers’ fault.

Cryptocurrency miners’ interest in graphics cards has waned of late, so graphics cards are at least easier to buy for gamers of every stripe. The problem for AMD is that Vega 56 and Vega 64 cards are still difficult to get for anything approaching their suggested prices, even as Pascal performance parity has remained an appealing prospect for gamers without 4K displays. On top of that, AMD has practically nothing new on its Radeon roadmap for gamers at any price point for a long while yet. Sure, AMD is fabricating a Vega compute chip at TSMC on 7-nm FinFET technology, but that part doesn’t seem likely to descend from the data center any time soon.

No two ways about it, then: the competitive landscape for high-end graphics cards right now is dismal. As any PC enthusiast knows, a lack of competition in a given market leads to stagnation, higher prices, or both. In the case of Turing, Nvidia is still taking the commendable step of pushing performance forward, but it almost certainly doesn’t feel threatened by AMD’s Radeon strategy at the moment. Hence, we’re getting high-end cards with huge, costly dies and price increases to match whatever fresh performance potential is on tap. Nvidia is a business, after all, and businesses’ first order of business is to make money. The green team’s management can’t credibly ignore simple economics.


A block diagram of the TU102 GPU. Source: Nvidia

On that note, the RTX 2080 draws its pixel-pushing power from a smaller GPU than the 754-mm² TU102 monster under the RTX 2080 Ti’s heatsink. The still-beefy 545-mm² TU104 maintains the six-graphics-processing-cluster (GPC) organization of TU102, but each GPC only contains eight Turing streaming multiprocessors, or SMs, versus 12 per GPC in TU102. Those 48 SMs offer a total of 3072 FP32 shader ALUs (or CUDA cores, if you prefer). Thanks to Turing’s concurrent integer execution path, those SMs also offer a total of 3072 INT32 ALUs. Nvidia has disabled two SMs on TU104 to make an RTX 2080. Fully operational versions of this chip are reserved for the Quadro RTX 5000.
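The SM math above is simple enough to sketch out. Here's a quick back-of-the-napkin check, assuming the 64-FP32-ALUs-per-SM figure implied by the totals in the text:

```python
# TU104 layout as described: 6 GPCs x 8 SMs, with (assumed) 64 FP32 ALUs per Turing SM
GPCS = 6
SMS_PER_GPC = 8
FP32_PER_SM = 64

total_sms = GPCS * SMS_PER_GPC               # 48 SMs on a fully enabled TU104
total_fp32 = total_sms * FP32_PER_SM         # 3072 FP32 ALUs (CUDA cores)

# Nvidia disables two SMs to make an RTX 2080
rtx_2080_sms = total_sms - 2                 # 46 SMs
rtx_2080_fp32 = rtx_2080_sms * FP32_PER_SM   # 2944 CUDA cores

print(total_fp32, rtx_2080_fp32)
```

Those figures line up with the 2944 shader processors listed for the RTX 2080 FE in the table below.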

| Card | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Memory path (bits) | Memory bandwidth | Memory size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RX Vega 56 | 1471 | 64 | 224/112 | 3584 | 2048 | 410 GB/s | 8 GB |
| GTX 1070 | 1683 | 64 | 108/108 | 1920 | 256 | 259 GB/s | 8 GB |
| RTX 2070 FE | 1710 | 64 | 120/120 | 2304 | 256 | 448 GB/s | 8 GB |
| GTX 1080 | 1733 | 64 | 160/160 | 2560 | 256 | 320 GB/s | 8 GB |
| RX Vega 64 | 1546 | 64 | 256/128 | 4096 | 2048 | 484 GB/s | 8 GB |
| RTX 2080 FE | 1800 | 64 | 184/184 | 2944 | 256 | 448 GB/s | 8 GB |
| GTX 1080 Ti | 1582 | 88 | 224/224? | 3584 | 352 | 484 GB/s | 11 GB |
| RTX 2080 Ti FE | 1635 | 88 | 272/272 | 4352 | 352 | 616 GB/s | 11 GB |
| Titan Xp | 1582 | 96 | 240/240 | 3840 | 384 | 547 GB/s | 12 GB |
| Titan V | 1455 | 96 | 320/320 | 5120 | 3072 | 653 GB/s | 12 GB |

The massive TU104 die only invites further comparisons between the RTX 2080 and the GTX 1080 Ti. The GP102 chip in the 1080 Ti measures 471 mm² in area, although it’s given over entirely to rasterization resources. That means GP102 has more ROPs than TU104 has in its entirety—88 of which are enabled on the GTX 1080 Ti—and a wider memory bus, at 352 bits versus 256 bits. Coupled with GDDR5X RAM running at 11 Gbps per pin, the GTX 1080 Ti boasts 484.4 GB/s of memory bandwidth.

Like the RTX 2080 Ti, the 2080 relies on the latest-and-greatest GDDR6 RAM to shuffle bits around. On this card, Nvidia taps 8 GB of GDDR6 running at 14 Gbps per pin on a 256-bit bus for a total of 448 GB/s of memory bandwidth. Not far off the 1080 Ti, eh? While the GTX 1080 Ti has a raw-bandwidth edge on the 2080, we know that the Turing architecture boasts further improvements to Nvidia’s delta-color-compression technology that promise higher effective bandwidth than the raw figures for GeForce 20-series cards would suggest. The TU104 die has eight memory controllers capable of handling eight ROP pixels per clock apiece, for a total of 64. All of TU104’s ROPs are enabled on the RTX 2080.
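The bandwidth figures above fall straight out of per-pin transfer rate times bus width. A quick sketch (the helper name is mine):

```python
def bandwidth_gbs(gbps_per_pin, bus_width_bits):
    """Raw memory bandwidth in GB/s: per-pin rate (Gb/s) times bus width
    in bits (one pin per bit), divided by 8 bits per byte."""
    return gbps_per_pin * bus_width_bits / 8

rtx_2080 = bandwidth_gbs(14, 256)     # GDDR6 at 14 Gbps on 256 bits: 448 GB/s
gtx_1080_ti = bandwidth_gbs(11, 352)  # GDDR5X at 11 Gbps on 352 bits: 484 GB/s
print(rtx_2080, gtx_1080_ti)
```

The raw gap between the two cards is only about 8%, before accounting for Turing's improved color compression.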

| Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering (int/fp16, Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic (TFLOPS) |
| --- | --- | --- | --- | --- |
| RX Vega 56 | 94 | 330/165 | 5.9 | 10.5 |
| GTX 1070 | 108 | 202/202 | 5.0 | 7.0 |
| RTX 2070 FE | 109 | 246/246 | 5.1 | 7.9 |
| GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| RX Vega 64 | 99 | 396/198 | 6.2 | 12.7 |
| RTX 2080 | 115 | 331/331 | 10.8 | 10.6 |
| GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| RTX 2080 Ti | 144 | 473/473 | 9.8 | 14.2 |
| Titan Xp | 152 | 380/380 | 9.5 | 12.1 |
| Titan V | 140 | 466/466 | 8.7 | 16.0 |
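The shader-arithmetic column is where our back-of-the-napkin claim about RTX 2080-versus-1080 Ti parity comes from, and it derives directly from the first table: shader count times boost clock times two FLOPs per fused multiply-add. A sketch (helper name is mine):

```python
def peak_fp32_tflops(shaders, boost_mhz):
    """Peak FP32 throughput: each ALU retires one fused multiply-add
    (2 FLOPs) per clock at the boost frequency."""
    return shaders * boost_mhz * 1e6 * 2 / 1e12

rtx_2080 = round(peak_fp32_tflops(2944, 1800), 1)     # 10.6 TFLOPS
gtx_1080_ti = round(peak_fp32_tflops(3584, 1582), 1)  # 11.3 TFLOPS
print(rtx_2080, gtx_1080_ti)
```

By this measure the 1080 Ti actually edges out the 2080 on paper, which is exactly why Turing's architectural extras have to carry the value argument.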

As a Turing chip, TU104 boasts execution resources new to Nvidia gaming graphics cards. First up, TU104 has 384 total tensor cores for running deep-learning inference workloads, of which 368 are active on the RTX 2080. Compare that to 576 total and 544 active tensor cores on the RTX 2080 Ti. For accelerating bounding-volume hierarchy traversal and triangle intersection testing during ray-tracing operations, TU104 has 48 RT cores, 46 of which are active on the RTX 2080. TU102 boasts 72 RT cores in total, and 68 of those are active on the RTX 2080 Ti.
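All of those tensor- and RT-core totals follow from the SM counts, since the per-SM ratios implied by the text work out to eight tensor cores and one RT core per Turing SM (an inference from the totals, not an Nvidia-stated figure here):

```python
# Per-SM counts inferred from the totals quoted in the text
TENSOR_PER_SM = 8
RT_PER_SM = 1

def turing_units(active_sms):
    """Return (tensor cores, RT cores) for a Turing chip with the given SM count."""
    return active_sms * TENSOR_PER_SM, active_sms * RT_PER_SM

print(turing_units(46))  # RTX 2080: 46 active SMs
print(turing_units(68))  # RTX 2080 Ti: 68 active SMs
print(turing_units(72))  # fully enabled TU102: 72 SMs
```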

The RTX 2080 Founders Edition we’re testing today has the same swanky cooler as the RTX 2080 Ti FE on top of its TU104 GPU. Underneath that cooler’s fins, however, Nvidia has provided only an eight-phase VRM versus 13 phases on the 2080 Ti, and the card draws power through six-pin and eight-pin connectors rather than the dual eight-pin plugs on the RTX 2080 Ti. Nvidia puts the stock board power of the 2080 FE at 225 W, down slightly from the GTX 1080 Ti’s 250-W spec but way up from the GTX 1080’s 180-W figure. Given the RTX 2080’s massive die and the extra execution resources it carries versus the GTX 1080 Founders Edition, however, the 45-W increase isn’t that surprising.
