https://techreport.com/news/34019/nvidia-lists-games-that-will-support-turing-cards-hardware-capabilities

There are a lot of games on both lists. By my count, it is:

11 games that will support some sort of hybrid rendering
11 games that will support DLSS
5 games that will support both
DLSS is a new form of anti-aliasing. With SSAA, in some sense, we already have the perfect anti-aliasing from an image quality perspective, so DLSS is pretty much guaranteed to be inferior to that. DLSS has to be computationally cheaper than SSAA in order to have a point, while still offering better image quality than cheaper forms of anti-aliasing such as FXAA. Will it? Maybe it will, or maybe it won't. Regardless, offering DLSS in addition to other forms of anti-aliasing doesn't break anything. And in games that are light enough on your GPU for a decently high degree of SSAA to be fine, DLSS can't possibly offer anything right out of the gate, so don't expect to see it take over.
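To put rough numbers on why SSAA is the image quality ceiling but rarely affordable, here's a back-of-the-envelope sketch; the resolution and sample factors are my own illustrative assumptions, not anything from Nvidia:

```python
# Rough shading-cost arithmetic for anti-aliasing, purely illustrative.
# Assumption: SSAA cost scales with the number of samples shaded per frame,
# while FXAA (and presumably DLSS) is a post-process pass with roughly fixed cost.

WIDTH, HEIGHT = 2560, 1440          # hypothetical render target

def shaded_samples(ssaa_factor: int) -> int:
    """Samples shaded per frame with N-x supersampling."""
    return WIDTH * HEIGHT * ssaa_factor

native = shaded_samples(1)
for factor in (2, 4, 8):
    cost = shaded_samples(factor)
    print(f"{factor}x SSAA shades {cost:,} samples "
          f"({cost / native:.0f}x the native cost)")
# 4x SSAA shades 14,745,600 samples (4x the native cost) -- which is why
# SSAA is only "fine" in games that leave most of the GPU idle, and why a
# cheaper substitute has to beat FXAA on quality to justify existing.
```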
That leaves hybrid rendering, which I'm extremely skeptical of. Imagine that you want to run a game at ultra settings but discover that your GPU isn't powerful enough to handle it. Now suppose there's a setting to run the game at ultra on half of the screen and medium on the other half. Does that sound like a good idea? The obviously mismatched halves would probably look worse than medium across the entire screen.
If you've got the hardware available to do full ray tracing for the entire game, yeah, that's going to look better than rasterization. Probably a lot better, not just a little. But a mix of partial ray tracing and partial rasterization? In order to do the lighting effects properly with ray tracing, you have to have ray tracing data for everything in the entire scene. Otherwise, it's going to end up looking all sorts of broken, and likely worse than pure rasterization. I'm sure that they can keep the ways it looks broken out of demo videos, and maybe out of the game entirely if it's sufficiently on rails to not let you have much of a look around.
But what if you have some reflections that look really nice, until you notice that half of the objects in the game simply don't appear in the reflection? What if you have object A cast a flawless shadow on object B, until you realize that object C is between them and neither casts a shadow on B nor has a shadow cast on it by A? The details will vary wildly by how they implement it, of course. But I'm very skeptical that they can avoid the half ultra/half medium settings problem.
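Here's a toy sketch of the failure mode I mean; the object names and the 'traceable' flag are made up, and real engines will differ in the details, but it shows how anything left out of the ray tracing data simply vanishes from reflections:

```python
# Minimal sketch of the "half ultra / half medium" problem in hybrid rendering.
# Everything here is hypothetical: object names, the 'traceable' flag, and the
# idea that an engine might only feed part of the scene to the ray tracer.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    traceable: bool      # was this object given ray-tracing data?

scene = [
    SceneObject("player", traceable=True),
    SceneObject("building", traceable=True),
    SceneObject("crowd_npc", traceable=False),   # skipped to save performance
    SceneObject("parked_car", traceable=False),
]

# The rasterizer draws everything...
rasterized = [o.name for o in scene]

# ...but a reflection ray can only hit what's in the acceleration structure.
acceleration_structure = [o for o in scene if o.traceable]
reflected = [o.name for o in acceleration_structure]

print("on screen: ", rasterized)
print("in mirror: ", reflected)   # crowd_npc and parked_car just aren't there
```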
AMD has had ray tracing since 2006, but not at a level sufficient to be worth using. Not sure how RTX solves the ray tracing issues; I guess we'll have to wait and see. I don't see it being used widely anytime soon, but you need to start somewhere.
I'm not going to pretend I know what you guys are talking about, but I skimmed through and didn't see anything about Vulkan. I thought it was going to be a big deal.
Here is a bit more info mate - https://developer.nvidia.com/rtx/raytracing "NVIDIA is developing a ray-tracing extension to the Vulkan cross-platform graphics and compute API. Available soon, this extension will enable Vulkan developers to access the full power of RTX. NVIDIA is also contributing the design of this extension to the Khronos Group as an input to potentially bringing a cross-vendor ray-tracing capability to the Vulkan standard."
Mutterings that Witcher 3 and FFXV might get ray-tracing features
Sorry, putting ray tracing in The Witcher 3 would slow it to a crawl even on the best GPU.
You could make ray tracing run however fast you need to just by having a simpler game world with a lot fewer objects to draw. You might be able to get acceptable performance in a game like Street Fighter, where you have only two characters in a wide open arena, even if you can't do so in an MMORPG.
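Roughly what that trade-off looks like in numbers; all of the figures below are my own illustrative assumptions, not anything published:

```python
# Back-of-the-envelope ray budget, with made-up numbers for illustration only.
import math

width, height = 1920, 1080
rays_per_pixel = 2          # e.g. one primary ray plus one shadow/reflection ray
target_fps = 60

rays_per_second = width * height * rays_per_pixel * target_fps
print(f"{rays_per_second / 1e9:.2f} billion rays/s for this modest setup")
# ~0.25 billion rays/s before counting any extra bounces or misses.

# Per-ray cost also grows with scene complexity (roughly log-ish with a BVH),
# so a two-fighter arena is a very different ask than an MMORPG city:
for objects in (100, 10_000, 1_000_000):
    print(f"{objects:>9} objects -> ~{math.log2(objects):.0f} BVH levels per ray")
```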
The point NVIDIA is making is that you can make it all run fine on the RTX platform, which has the required hardware to do this. That's the whole point of this generation of top-end cards. The in-game feature will be for folks with RTX cards. How well all that actually works will come out in the reviews; I wouldn't take their word for it at this point in time.
Again, it's not à la GameWorks. There are two extra sets of processors on the die, RT and Tensor cores, that deal with this. I suspect you're not going to get an RT option for legacy hardware, but you wouldn't really want to enable it anyway, which goes back to the point of having RTX cards for that.
We will have to wait for reviews to see performance with and without ray tracing. The only ray tracing test I've seen mentioned so far is the Battlefield V beta.
Confirmed ray-tracing-enabled games so far:
Battlefield V
Metro Exodus
Shadow of the Tomb Raider
Other games with planned NVIDIA RTX support so far:
Ark: Survival Evolved
Assetto Corsa Competizione
Atomic Heart
Control
Dauntless
In Death
Enlisted
Final Fantasy XV
The Forge Arena
Fractured Lands
Hitman 2
Justice
JX3
Mechwarrior 5: Mercenaries
PlayerUnknown's Battlegrounds
Remnant from the Ashes
Serious Sam 4: Planet Badass
We Happy Few
You don't seem to get it. Sure, these games will use a little ray tracing here or there, but the main graphics will still be rasterized. Ray tracing takes way too much overhead, even with these faster cards. FPS is paramount in most of these games, and high frame rates and ray tracing are like oil and water.
Fully ray-traced games are years away and will require faster hardware. Your list is just silly. Ark is already unoptimized enough, and you think they are going to add ray tracing to it? Sorry, same with PUBG: FPS is a priority for that game, so it won't happen anytime soon.

https://www.forbes.com/sites/jasonevangelho/2018/08/21/nvidia-rtx-20-graphics-cards-why-you-should-jump-off-the-hype-train/#2c000ddd3f8e

Might be a good time to pick up a 1080.
https://techreport.com/news/34022/nvidia-releases-its-first-official-benchmarks-for-the-rtx-2080

They're relative benchmarks at unspecified settings, and probably the settings most favorable to the RTX 2080. They also game the benchmarks by comparing different tiers of cards: the RTX 2080 is pitted against the GTX 1080 rather than the GTX 1080 Ti, which is the more relevant comparison.
They also show the RTX 2080 as being much faster with DLSS on than without it. I'm not sure what that means, as usually enabling an extra feature brings a performance hit. It's possible that it's just a new, creative way of making a deceptive graph. Perhaps they're using the same bar for the GTX 1080 to mean two different things, and the gap between the RTX 2080 and the GTX 1080 is larger if you go out of your way to sabotage the latter.
If that's not what it means, then my best guess is that DLSS is actually just a way to improve frame rates at the expense of lower image quality. That would be a peculiar feature to offer as part of a new generation of parts, so I hope that's not what it means.
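If that guess turns out to be right, the arithmetic is easy to sketch. The internal render resolution below is pure speculation on my part, just to show how a "twice as fast with DLSS" bar could fall out of pixel counts alone:

```python
# Speculative arithmetic: IF "DLSS on" really means rendering fewer pixels and
# reconstructing the rest, the frame-rate gain falls out of the pixel counts.
# The internal resolution below is an assumption, not a published figure.

output = 3840 * 2160            # the 4K target Nvidia benchmarked at
internal = 2560 * 1440          # hypothetical internal render resolution

print(f"pixels actually shaded: {internal / output:.0%} of native 4K")
print(f"naive upper bound on speedup: {output / internal:.2f}x")
# ~44% of the pixels, so up to ~2.25x -- suspiciously close to a
# "twice as fast with DLSS" marketing bar, if that is what is going on.
```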
Looking forward to real-world testing by reviewers. Interesting Nvidia RTX performance numbers: 4K, reference version, not Founders. Very little context, but they do show that SM optimizations have pushed performance beyond what RTX 2080 vs GTX 1080 core counts would have you believe. Also, DLSS (Deep Learning Super Sampling) gives HUGE benefits over traditional AA methods. Basically, they are saying the 2080 is 50% faster than a 1080 on average at 4K and two times faster with Nvidia's DLSS tech.
Also, DLSS (Deep Learning Super Sampling) gives HUGE benefits over traditional AA methods.
That's probably not what the graph says. As I read it, the graph only says that DLSS brings a much bigger performance hit on a GTX 1080 than on an RTX 2080. That could plausibly mean that it's a fairly bad idea on an RTX 2080 and a really awful idea on a GTX 1080.
I think that DLSS and hybrid rendering have a very large case of "if you have a hammer..."
Basically, Nvidia created the tensor cores for machine learning compute purposes. They created whatever they did with ray tracing for professional graphics purposes. And then, once they had the chip, they thought they'd like to sell it to gamers, too. Above, I said that making a 754 mm^2 GPU on a cutting edge process node and then selling the whole card for $500 wasn't a way to make a profit. It sure is if you can sell a whole lot of them for $1000 each, though.
The problem is that the die size, and hence the cost of the card, are bloated by using a bunch of die space on things not intended for games. So how do you get gamers to pay a premium for that? Nvidia has decided to treat that as a software problem. They've got some hardware that isn't intended for games, and now they have to find some way to get games to make use of it.
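To put the die size point in perspective, here's the usual rough gross-dies-per-wafer approximation; it ignores yield, defect rates, and wafer pricing, so treat it as a sketch rather than a costing:

```python
# Standard rough approximation for gross dies per wafer; this ignores yield,
# scribe lines, and wafer cost, so it is a sketch, not an actual bill of materials.
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    d, s = wafer_diameter_mm, die_area_mm2
    return math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s)

for die_area in (314, 471, 754):   # GP104, GP102, and the big Turing die
    print(f"{die_area:>4} mm^2 -> ~{gross_dies_per_wafer(300, die_area):.0f} dies per 300 mm wafer")
# Roughly a third as many candidate dies per wafer as GP104 (GTX 1080),
# before a single one is binned or sold.
```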
Turing Shader Performance -

This is a relative comparison versus Pascal. If this chart stands true, then each Turing core could be ~50% faster than a Pascal core in applications such as 3DMark. This might be very fruitful information for further comparisons.

And

It should be pointed out that these are just per-core performance gains at the same clock speeds, without adding the benefits of the other technologies that Turing comes with. That would further increase performance in a wide variety of gaming applications, as we have already seen the gaming performance of a GeForce RTX 2080 to be 50% faster than the GTX 1080 on average and twice as fast with the new DLSS technology.

https://youtu.be/CT2o_FpNM4g

^ relates to the 50% more performance claim
What does 50% performance "per core" mean? If they're claiming 50% IPC gains out of shaders, which is about what you'd mean by "per core" on a CPU, then color me extremely skeptical of that, at least outside of some oddball corner cases. Maxwell/Pascal is already quite good there.
My big takeaway from the link is that it's 64 shaders per compute unit. Hopefully, that means that they beefed up the local memory capacity and bandwidth as compared to Maxwell/Pascal, though the slide didn't cite local memory capacity or register file size, nor did it explicitly say that it's still 32 local memory banks.
If they did move to fewer shaders per compute unit without shrinking the cache sizes or bandwidths, akin to GP100 or GV100, that would certainly be welcome for compute purposes. But it probably wouldn't be terribly important for gaming. Don't expect huge graphical performance gains from catching up to where AMD has been since 2012. Maxwell/Pascal already demonstrated that you don't need that for graphics.
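For what it's worth, the published CUDA core counts make a blanket 50% per-core claim hard to square with Nvidia's own 50% average figure. A quick sanity check, assuming roughly comparable clock speeds (which is itself an assumption):

```python
# Sanity check on the "50% faster per core" claim, using the publicly listed
# CUDA core counts and assuming broadly similar clock speeds (an assumption).

gtx_1080_cores = 2560
rtx_2080_cores = 2944

core_ratio = rtx_2080_cores / gtx_1080_cores
print(f"core count ratio: {core_ratio:.2f}x")              # ~1.15x

claimed_per_core_gain = 1.5
implied_overall = core_ratio * claimed_per_core_gain
print(f"implied overall speedup: {implied_overall:.2f}x")  # ~1.73x

# Nvidia's own average claim is ~1.5x without DLSS, so a 50% per-core gain
# can't hold across the board -- it has to be a corner case or a different
# clock assumption, which is reason enough for skepticism.
```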
I said that I didn't think that Nvidia would do a paper launch of Turing. And then they did the paperiest of paper launches. They want you to pre-order cards at well above already very high MSRPs. They want you to do this before there are any independent reviews or even plausible leaks, with only cherry-picked numbers offered by Nvidia itself. And all of this while we have no real idea how the cards will perform under realistic gaming loads.
Normally, I'd say that smacks of desperation. But I don't see any reason for Nvidia to be desperate right now. Quite the opposite: I'd think they'd want people to buy up Pascal as soon as possible. Rumors say that they produced too many and now need to get rid of them, and you can now get a GTX 1080 or GTX 1080 Ti for substantially below MSRP.
But instead of letting gamers buy those cards, they seem to be saying, no, don't buy them yet. Wait for the next generation. Which is an absolutely bizarre thing for them to say.
The only way that makes sense to me is if AMD has something very good coming very soon, and Nvidia wants people to pre-order Turing before they find out about the new AMD cards. But if that were the case, then why wouldn't we hear from AMD about it? AMD is behind here, so if anyone should want gamers to wait on a purchase, it should be AMD. You'd think that AMD would be less a fan of gamers going out and buying Pascal cards than Nvidia would be.
Or maybe it's what Ridelynn said: they're just trying to manipulate their stock price. Maybe they're trying to squeeze some extra revenue into one quarter rather than the next.
I still doubt my 1080 will be unable to play any games coming out over the next few years even at or close to max settings. I game at 1440p and have no desire to game at 4K. Maybe things will run better on the 20 series but it will not make the 1080's run bad all of a sudden. I'll probably sit out this round of cards.
The point of ray tracing is to make your GTX 1080 run poorly all of a sudden.
Unless RT ends up more of a fad that never catches on. Marketing is a wonderful thing.
Or the third option: to get the extra goodness of hybrid ray tracing, you need dedicated hardware that doesn't exist in, or work well on, legacy hardware. Usually the simplest thing is the right thing.
Yeah, I'm still interested to see the low-end 20 series benchmarks compared to the 1080, but at this point I think my plan remains to upgrade to a 1080 in the next 6-7 months.
A hybrid rendering approach of part ray tracing and part rasterization is definitely not the simplest thing.
RT is not a fad - it's been around forever and produces the best graphics results.
The big downside of RT is the huge computational cost that limits its viability in real-time gameplay.
So hardware that can do real-time RT is inevitable; whether it's RTX or something else entirely is only a matter of time.
The question isn't whether you can do ray tracing in real time. The question is how complicated of scenes, how high of resolutions, and how high of frame rates. The answer to that will always be that rasterization can do more complicated scenes at higher resolutions and higher frame rates than ray tracing.
I don't think it will ever get to the point that rasterization goes away entirely. Just think of how many things still struggle with rasterization today. How many games are there that let you have 100 characters running around on the screen without causing frame rate problems, for example?
It's possible that some games will eventually move to ray tracing if they know that the world is going to be very simple. But I'd bet on the end of Moore's Law and GPU improvements slowing to a crawl before client-side ray tracing becomes mainstream in games.

I should patent that before it becomes the big buzzword.
One thing about ray tracing is that scaling it to multiple GPUs is nearly trivial to do. Every GPU must have its own copy of all of the data to render. But having each GPU render half of a frame and then combining them is easy.
If you tried to do that naively with rasterization, you wouldn't get such large gains, because you don't find out where on screen an object lands until relatively late in the rendering pipeline, so all of that work would have to be replicated on both GPUs.
With ray tracing, you're fundamentally drawing one pixel at a time, not one model or some portion of it. And you know immediately which half of the window that pixel is on, so you don't have to replicate the work.
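A bare-bones sketch of why the split is so clean; the scene, the trace function, and the "devices" here are stand-ins rather than any real API:

```python
# Toy illustration of split-frame ray tracing across two GPUs.
# Each "device" holds a full copy of the scene and traces only its own rows;
# no mid-pipeline data has to cross between them. All names are made up.

WIDTH, HEIGHT = 640, 360

def trace_pixel(scene, x, y):
    # Stand-in for real ray tracing: fire a ray for pixel (x, y) into `scene`.
    return (x ^ y) & 0xFF          # fake shade, just so this runs

def render_rows(scene, row_range):
    # One GPU's share of the frame: it needs only its rows plus the whole scene.
    return {y: [trace_pixel(scene, x, y) for x in range(WIDTH)]
            for y in row_range}

scene = {"objects": ["everything, replicated on every device"]}

top_half    = render_rows(scene, range(0, HEIGHT // 2))       # "GPU 0"
bottom_half = render_rows(scene, range(HEIGHT // 2, HEIGHT))  # "GPU 1"

frame = {**top_half, **bottom_half}   # stitch the halves back together
assert len(frame) == HEIGHT
```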
This obviously means that when gamers complain that a $1200 GPU isn't fast enough for the ray tracing games, Nvidia will reply that you need quad SLI with those $1200 GPUs to do it right.
Honestly, why do I get the feeling this is just a marketing spiel to justify the huge increase in graphics card prices? And that this whole ray tracing nonsense will end up just the same as tessellation: a very nice idea, but completely unattainable right now. But don't forget to preorder those sexy, if completely overpriced, RTXs on the way out!
"Ray tracing" may or may not turn out to be "nonsence" for games.
So why have NVidia bothered? For corporate applications "ray tracing" is absolutely not nonsence. There is a whole raft of industries that make extensive use of "ray tracing" (computational fluid dynamics etc etc). From wind tunnels to particle physics, design engineers to artists, animators and film makers. Companies have been buying NVidia Quadro (and their AMD equivalent) for nearly two decades. Part of NVidia's financial calculation - I assume - is that Turing will "refresh" their Quadro line of cards and that sales will underpin the extra development costs. And if it provides an extra marketing push for their game cards that will be a bonus.