I'm not going to pretend I know what you guys are talking about, but I skimmed through and didn't see anything about Vulkan. I thought it was going to be a big deal.
AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
Unfortunately, it hasn't been a runaway success for PC gaming. I don't think any 2018 games use it so far. Star Citizen announced it would switch to Vulkan, but lately the focus for Vulkan really seems to have been mobile games on Android and iOS.
But you're assuming the consumer die size is going to be 754 mm²? Comparing a workstation card's die to a 1080 Ti isn't the right comparison. The consumer variant might be that big, but in my opinion it won't be, and if it is, it'll need some good cooling for sure. Yes, TSMC 12nm is just a refinement of TSMC 16nm, but we're talking about a mature process now with great bins, and that small refinement will also bring some efficiencies. Who knows where the RT cores will reside; they are likely built into the SMs, but they might also sit in a separate area, like the center block.
Did you really completely miss the presence of the rate computations that were the point of my post? In order for a Turing GeForce card to make sense, it has to be faster than a Pascal card of the same die size. If they naively scale down the chip they've shown, it would probably be slower. Maybe there's a bunch of junk that they could chop out to save on die space, but we don't know how much.
The Turing architecture itself introduces the ability to run floating-point and integer workloads in parallel, which should help improve other aspects of performance. That will be a carry over for sure.
What are you talking about? GPUs have been able to mix floating-point and integer data types in their computations at least since the dawn of programmable shaders and probably for as long as there have been GPUs that could do 3D graphics.
"The simple is the seal of the true and beauty is the splendor of truth" -Subrahmanyan Chandrasekhar Authored 139 missions in VendettaOnline and 6 tracks in Distance
Vulkan is huge for Linux gaming, since DXVK can now translate Direct3D 11 to Vulkan so that people can play Windows games on Linux at nearly native performance. Funnily enough, some Windows games even run slightly faster when translated to Vulkan on Linux. That isn't typical, but DXVK is already in a surprisingly good state in terms of compatibility and performance.
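For anyone curious what that looks like in practice, here is a minimal sketch of launching a Direct3D 11 game under Wine with DXVK already installed into the prefix. The prefix path and Game.exe are placeholders, not a specific title:

```python
import os
import subprocess

# Minimal sketch: run a (hypothetical) D3D11 game under Wine with DXVK.
# Assumes DXVK's d3d11/dxgi DLLs are already installed into this Wine prefix.
env = dict(
    os.environ,
    WINEPREFIX=os.path.expanduser("~/.wine-games"),  # placeholder prefix path
    DXVK_HUD="fps,devinfo",  # DXVK overlay, handy to confirm the Vulkan path is active
)
subprocess.run(["wine", "Game.exe"], env=env, check=True)
```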
Even if some games actually allow ray tracing, it will be slower than what we have now, and I don't see that selling in the marketplace right now. The RT stuff will be for high-end workstations; you won't see it in the consumer line, because it just doesn't make sense to put it there.
AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
Excellent, 2080 and 2080 Ti announcement on Monday incoming! Good to see NVLink and the Type-C VirtualLink connector included on the gamer editions.
2080 Ti is going to be a cut-down chip from the Quadro RTX 8000, featuring 4352 of its 4608 CUDA cores.
2080 is also a cut-down chip from the Quadro RTX 5000, featuring 2944 of its 3072.
Codenames look to be TU104 (2080) and TU102 (Ti). The 2080 die size is undetermined, but the Ti looks to be bigger. It won't matter much at all if it delivers performance and the power envelope is efficient and not crazy compared to what there is today. I'm thinking it'll be a bigger die than a Vega 64 but won't draw as much power.
Based on those specs, it would seem like the rumored RTX 2080 would be both slower and more expensive to build than a GeForce GTX 1080 Ti. That doesn't seem like a good way to make money.
Making a GeForce card based on the top bin would make somewhat more sense if they price it accordingly--such as $1000 or more. They'd probably disable the tensor cores and ray tracing stuff so as not to cannibalize their Quadro market. Alternatively, if they're having yield problems with the tensor cores or ray tracing parts, that would give them a way to sell defective parts as GeForce cards. Still, in terms of units sold, the market for professional cards is massively smaller than for consumer cards.
Even so, in the past, Nvidia hasn't been shy at all about using salvage parts for Quadro cards. In their current lineup, a Quadro P4000 is basically a further cut down GTX 1070, while a Quadro P2000 is a further cut down GTX 1060. It wouldn't really be that shocking if all three of the Quadro Turing parts announced thus far are based on the same die, though I suspect that it's two dies.
AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
With these extra leaks today, I'm now looking at it like this for the 2080:
3072/4608 CUDA cores = 2/3 (~67%) of the count, which most likely requires ~67% of the die area.
256/384 memory bus = ~67%, so about ~67% of the memory controllers, and similarly ~67% of the die space for memory controllers.
384/576 tensor cores = ~67% of the tensor cores.
In almost every way this is 2/3 (~67%) of the RTX 8000.
754 × 0.67 ≈ 500 mm².
The RTX 5000 (RTX 2080) - TU104 - is essentially 2/3 of the RTX 8000's (RTX 2080 Ti, TU102?) functional units, so it should be about 2/3 of the die area, roughly ~500 mm², give or take minor amounts.
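As a back-of-the-envelope check on that estimate, here is a quick sketch; it simply assumes die area scales linearly with the count of each functional unit, which ignores fixed-size blocks like display and media engines:

```python
# Back-of-envelope: the rumored 2080 (TU104?) has roughly 2/3 of the RTX 8000's
# functional units; assume die area scales linearly with that ratio.
full_die_mm2 = 754  # reported die size of the big Turing chip (RTX 8000 / TU102)
ratios = {
    "CUDA cores":  3072 / 4608,
    "memory bus":   256 / 384,
    "tensor cores": 384 / 576,
}
for name, r in ratios.items():
    print(f"{name:12s} {r:.1%}")  # each comes out to ~66.7%
scale = sum(ratios.values()) / len(ratios)
print(f"estimated die: {full_die_mm2 * scale:.0f} mm^2")  # ~503 mm^2
```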
The claimed RTX 2080 Ti offers 19% more flops and 27% more memory bandwidth than a GTX 1080 Ti, at a cost of 60% more die size. For a top-end part, that can be justified: if you want the top end, you pay what it costs. And selling 754 mm^2 dies for $1000 each can be plenty profitable.
The problem comes when you try to scale it down. If there's an RTX 2080 that is 2/3 of the RTX 2080 Ti, you then have fewer flops and less bandwidth than a GTX 1080 Ti, but from a larger die than GP102. What's the point of making a card that does that? Why not just keep selling GTX 1080 Tis, or rebrand them as GTX 2080s if you have to?
I realize that architectural differences can offer advantages, as one architecture can use resources more efficiently than another. But I'm not convinced that there are major gains to be had here, as Maxwell/Pascal is already pretty good at that--and far better than anything that preceded it. You can make huge gains by fixing things that are broken, but it's much harder to greatly improve on things that already worked well.
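For what it's worth, those percentages line up with the commonly cited figures; here is a quick sketch of the arithmetic (the 2080 Ti clock is a rumored boost number, so treat it as an assumption):

```python
# Rough check of the +19% flops / +27% bandwidth / +60% die-size figures above.
def tflops(cuda_cores, boost_ghz):
    return 2 * cuda_cores * boost_ghz / 1000.0  # one FMA (2 ops) per core per cycle

gtx_1080_ti = {"cores": 3584, "boost": 1.582, "bw_gbs": 484, "die_mm2": 471}  # GP102
rtx_2080_ti = {"cores": 4352, "boost": 1.545, "bw_gbs": 616, "die_mm2": 754}  # rumored

flops_gain = (tflops(rtx_2080_ti["cores"], rtx_2080_ti["boost"])
              / tflops(gtx_1080_ti["cores"], gtx_1080_ti["boost"]) - 1)
print(f"flops:     +{flops_gain:.0%}")                                           # ~ +19%
print(f"bandwidth: +{rtx_2080_ti['bw_gbs'] / gtx_1080_ti['bw_gbs'] - 1:.0%}")    # ~ +27%
print(f"die size:  +{rtx_2080_ti['die_mm2'] / gtx_1080_ti['die_mm2'] - 1:.0%}")  # ~ +60%
```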
Quadro cards first saw the light of day nearly 20 years ago.
As always in niche markets, the performance-to-price ratio is key. And in much the same way that AMD has the edge in crypto mining, these cards - and accredited software such as Autodesk (computer-aided design) and Maya (animation) - have the edge in this particular niche.
One objective of the new generation will be to protect the existing market - to see off CPU advances that might nibble away at the low end in particular; the bottom end of the Quadro range can be had for maybe a couple hundred dollars.
As far as "PC use" goes: Quadros have been around for nearly 20 years, some cheaper than current gaming cards, and the reason Quadro cards are not talked about when it comes to games is that they have never been "gaming" cards. The price-to-performance ratio for games is simply wrong.
Needless to say, Nvidia will be hoping to grow the market - and the prices talked about in the presentation are lower than what the current cards used to cost, for more capability. (Pretty sure the top end was $10k+; suggested prices seem to have changed, though.) To grow the market, though, they will have to supply the cards - supplies of the current-generation cards were usually restricted.
In games - maybe we will see more cut scenes.
AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
The claimed RTX 2080 Ti offers 19% more flops and 27% more memory bandwidth than a GTX 1080 Ti, at a cost of 60% more die size.
Also, approximately 20% more CUDA cores (2080 Ti vs. 1080 Ti), and 70% more than the base 1080. The RTX 2080 sees a more moderate boost over the GTX 1080, at 15% more CUDA cores.
More flops comes from more shaders. That's where the flops come from: shaders are what do the main floating-point operations. Or at least, that's where the flops that count come from.
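To make that concrete: peak FP32 throughput is essentially shader count × clock × 2, since each shader can retire one fused multiply-add (two operations) per cycle. Here is a small sketch using the known and rumored core counts; the 1.6 GHz clock is just an illustrative assumption so the shader count is the only variable:

```python
# Peak FP32 flops come straight from the shader (CUDA core) count and clock.
def peak_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0

# Same illustrative 1.6 GHz clock for every part, purely to isolate shader count.
for name, shaders in [("GTX 1080", 2560), ("GTX 1080 Ti", 3584),
                      ("RTX 2080 (rumored)", 2944), ("RTX 2080 Ti (rumored)", 4352)]:
    print(f"{name:22s} {peak_tflops(shaders, 1.6):5.1f} TFLOPS")
```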
Comparing the rumored RTX 2080 to a GeForce GTX 1080 is the wrong comparison. It needs to be compared to the GTX 1080 Ti, as that's the card that is the problem for it. The salient thing is price and performance, not marketing names. If Nvidia decided that their next top end card would be a GeForce GT 2020, performance was awesome, and the price was really cheap, we'd all buy it without worrying that the name sounded low end.
AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
I'm not going to pretend I know what you guys are talking about, but I skimmed through and didn't see anything about Vulkan. I thought it was going to be a big deal.
Here is a bit more info mate - https://developer.nvidia.com/rtx/raytracing "NVIDIA is developing a ray-tracing extension to the Vulkan cross-platform graphics and compute API. Available soon, this extension will enable Vulkan developers to access the full power of RTX. NVIDIA is also contributing the design of this extension to the Khronos Group as an input to potentially bringing a cross-vendor ray-tracing capability to the Vulkan standard."
Mutterings that Witcher 3 and FFXV might get ray-tracing features
Features that get added to core APIs often start as extensions. Sometimes one GPU vendor proposes and supports something, and then other says, yeah, that's a good idea. Let's add it. Sometimes one vendor proposes one extension, another vendor proposes a slightly different extension to do about the same thing, and then they hash out their differences.
And sometimes one vendor proposes something, and everyone else says, no, that's a stupid idea. We're not going to support it or anything like it. That's why, if you want cross-vendor support, it matters tremendously what AMD thinks of ray tracing.
It's possible that Navi will be as much into ray tracing as Turing, in which case, ray tracing support will probably be added to the industry standard APIs soon enough. Even if that happens, it will probably be several years before it gets used much outside of a handful of sponsored titles. Or it might not, and if it's not in Navi, it's probably several years after that before AMD's next major overhaul of their GPU architecture.
My guess is that if AMD thinks that supporting ray tracing will get it heavily used in games, they will. If they think that even going all in with a heavy ray tracing push still won't get games to use it at all, then they won't.
One very important background piece to remember is that neither AMD nor Nvidia wants the discrete GPU market to die. They don't want GPU vendors to end up like sound card vendors. They want people to still feel like paying $500 for a GPU offers real value over an integrated GPU.
As integrated GPUs become more and more powerful, the argument for buying a discrete GPU for a given workload becomes weaker. In order to continue to justify the discrete GPUs, they need to convince people that they need to run more intense workloads. They've been pushing VR and higher resolutions for a while, but that can only go so far.
If ray tracing becomes the norm in games, then their problem of needing graphics to be more computationally expensive is solved for many years, and likely beyond the end of Moore's Law. Thus, AMD certainly has incentives to want ray tracing to catch on eventually.
Game developers might not, however. The mobile market especially struggles with complex graphics even using rasterization. Ray tracing will probably be viable there sometime around never. And even for desktops, anything that you have a ray tracing implementation of also needs a rasterization implementation, and that can easily add a ton of cost. If your game needs people to have a $500 GPU to run the game acceptably even at low settings, you're not going to sell very many copies.
That said, ray tracing is in some senses intrinsically simpler than rasterization, as it doesn't need the boatloads of complicated fakery that rasterization does for lighting effects. Making a ray tracing implementation only could easily end up being a lot simpler and cheaper than making a rasterization implementation only. That won't be viable in desktops for a long time because it will kill the size of your potential market. But if a game console has a GPU architecture heavily built for ray tracing, a game for that console that is ray tracing only is fine.
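To make the "no fakery" point concrete, here is a toy sketch (plain Python, nothing GPU- or RTX-specific): once a renderer can ask whether anything blocks the path from a point to the light, correct hard shadows fall out of a single visibility query, with none of the shadow-map machinery a rasterizer needs:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return distance t along the ray to the sphere, or None if it misses."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 1e-4 else None

def in_shadow(point, light_pos, occluders):
    """Cast a single 'shadow ray' from the shaded point toward the light."""
    direction = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(d * d for d in direction))
    direction = [d / dist for d in direction]
    return any(
        (t := hit_sphere(point, direction, c, r)) is not None and t < dist
        for c, r in occluders
    )

# One light overhead, one sphere hanging between it and the first point.
print(in_shadow((0, 0, 0), (0, 10, 0), [((0, 5, 0), 1.0)]))  # True: occluded
print(in_shadow((5, 0, 0), (0, 10, 0), [((0, 5, 0), 1.0)]))  # False: unobstructed
```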
Of course, betting on ray tracing as being the point of your console has the potential to backfire spectacularly if it ends up not working out how you hope. We're talking Virtual Boy level of potential disaster here, not just mistakes like the PS3's Cell processor or the Xbox One's pushing Kinect so hard.
AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
From the transcript of Nvidia's fiscal call this week - https://seekingalpha.com/article/4199978-nvidia-corporation-nvda-ceo-jensen-huang-q2-2019-results-earnings-call-transcript?part=single - here is Jensen Huang:

And so, I think the answer to your question is, developers all have access to RTX. It’s in Microsoft’s DirectX; it’s in the most popular game engine in the world; and you’re going to start to see developers use it. We’re going to see some really exciting stuff this year.
NVIDIA RTX with Turing is the greatest advance since CUDA, nearly a decade ago. I’m incredibly proud of our Company for tackling this incredible challenge, reinventing the entire graphic stack, and giving the industry a surge of excitement as we reinvent computer graphics. Stay tuned as we unfold the exciting RTX story.
There’s just simply no comparison to Turing. Turing is a reinvention of computer graphics; it is the first ray-tracing GPU in the world; it’s the first GPU that will be able to ray trace light in an environment and create photorealistic shadows and reflections and be able to model things like area lights and global illumination and indirect lighting. The images are going to be so subtle and so beautiful, when you look at it, it just looks like a movie. And yet it’s backwards compatible with everything that we’ve done.
This new hybrid rendering model, which extends what we’ve built before but added to it two new capabilities, artificial intelligence and accelerated ray-tracing, is just fantastic. So, everything of the past will be brought along and benefits, and it’s going to create new visuals that weren’t possible before.
We also did a good job on laying the foundations of the development platform for the developers. We partnered with Microsoft to create DXR, Vulkan RT is also coming, and we have OptiX, which is used by ProViz renderers and developers all over the world. And so, we have the benefit of laying the foundation stack by stack by stack over the years. And as a result, on the day that Turing comes out, we’re going to have a richness of applications that gamers will be able to enjoy.
Reading between the lines, it does seem to be foundational layering at this point, and the wheels are spinning up for enablement.
I suspect we'll hear more tomorrow morning at the announcement of supporting games.
That's a bunch of stupid marketing junk. Calling something the greatest advance since CUDA is like calling someone the greatest baseball player since Willie Bloomquist. No matter which current player you're talking about, such a claim doesn't work. Also, Turing isn't the first GPU to attempt ray tracing; Imagination made a ray tracing GPU a while back that everyone ignored.
One enormous question is, how much die space does the ray tracing fixed function logic take? If it's 1% of the die, AMD might add it so that they can say, we can do that, too. If it takes 20% of the die and AMD declines to add it, then they end up with an enormous efficiency advantage over Nvidia in games that don't use ray tracing--which includes basically every game ever made and nearly all of the ones that will release in the next several years. Or if it's variable, we might see a situation like with tessellation where Nvidia adds a zillion hardware tessellators and AMD adds one, which only gave Nvidia an advantage in synthetic benchmarks or sponsored titles that did intentionally stupid and wasteful things with tessellation.
Even so, fixed function ray tracing logic in the GPU accessible from industry standards at least has a chance of working out and being a huge deal. That's a lot more than you can say for GPU PhysX done via CUDA. Using a GPU for physics or other non-graphical computations while playing a game isn't intrinsically a bad idea, and some games do that today, but doing it via CUDA sure was a terrible idea.
AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
Turing may not be the first GPU to attempt ray tracing, but the RTX cards are the first to have the hardware onboard to properly utilize the new software they'll be used for. When RTX starts to ramp up (and it will), Nvidia will be the only game in town that actually has the required hardware onboard to take advantage of the new features. RTX is nothing like GameWorks. Unity, Unreal Engine, and many others play better with Nvidia cards, but you need specific RT hardware to even properly use this. Every other generation is going to need to emulate it, and good luck emulating ray tracing, deep learning, AND rasterization (Pascal struggled even to preempt async compute). How devs choose to implement RTX will vary depending on how close they are to Nvidia, and my bet is that older hardware is going to be just as slow as AMD at using RTX features. AMD won't have any dedicated RT hardware until Q3 next year, if it chooses to implement it, but it will.
At the end of the day, if you're looking for an upgrade, the RT stuff will be a bonus and shouldn't be a strong factor in whether to buy an RTX card. The decision should be based on the raw performance difference versus what you have today.
If you're a developer, how much work are you going to put into ray tracing graphical features that you know that the overwhelming majority of your player base will have to turn off? If you're a sponsored title and Nvidia pays you to implement a few things, then maybe you do. For anything beyond the handful of sponsored titles, you're not going to seriously consider it until several years from now. And that's even if Navi is very heavy on ray tracing.
For comparison, the first GPU that I'm aware of to have hardware tessellation units was the Radeon HD 2900 XT in 2007. More than a decade later, tessellation still isn't that common in games. And to my eyes, that's a bigger deal than ray tracing.
It's possible that in 2025, ray tracing will be common enough for a Turing card to be substantially better than a Pascal or Vega card that offers comparable performance in rasterization. Even that is hardly guaranteed; it's also possible that ray tracing catches on, but ends up being done differently from what Nvidia implemented in Turing, so that the ray tracing of 2025 can't use Turing's fixed function logic. There never were any games that could use AMD's first three generations of hardware tessellators. Besides, people who pay $700 for a video card likely aren't planning on keeping it for that long before the next upgrade.
I'm not saying that Nvidia shouldn't add it. If ray tracing is to catch on, it has to start somewhere. But on a list of marketing bullet points, I'd certainly see it as less important today than the Adaptive Sync that Nvidia still doesn't support.
Oh, here is a link to a PNY RTX 2080 Ti @ $1000: http://www.pny.com/RTX-2080-Ti-Overclocked-XLR8-Edition?sku=VCG2080T11TFMPB-O
https://www.nvidia.com/en-us/titan/titan-v/
That has more flops and more memory bandwidth than the rumored RTX 2080 Ti.
"NVIDIA is developing a ray-tracing extension to the Vulkan cross-platform graphics and compute API. Available soon, this extension will enable Vulkan developers to access the full power of RTX. NVIDIA is also contributing the design of this extension to the Khronos Group as an input to potentially bringing a cross-vendor ray-tracing capability to the Vulkan standard."
Mutterings that Witcher 3 and FFXV might get ray-tracing features
Or PhysX, another great open API.
GameWorks. G-Sync. The list just goes on and on.
I remain highly skeptical about RTX.
"A turtle does not move when it sticks out its neck."