As for Nvidia and CUDA / coding - CUDA is the de facto standard. If AMD can deliver a better architecture in every way, just like ATI did in the Radeon 9700 to X800 era, people will buy them.
But since the GeForce 8800 launch there hasn't been anything better than Nvidia. AMD needs to be better in every way for at least two generations, just like ATI was, and then they can have 60% market share. That hasn't happened in a long time.
Fury to Vega was a huge disappointment seeing as it went from 28 nm to 14 nm. Vega is still a disappointment from a gaming perspective. They threw a bunch of stuff in it for compute / professional use that bloated the die size and power use, and some of the promised features that were supposed to help the gaming side were MIA. With Turing we have a bunch of stuff thrown in that also bloats the die size; the jury is out on the feature benefits (two weeks' time).
I wouldn't say DLSS is a joke. Tensor cores are not used for ray tracing (they are used for DLSS, AI image training, and image denoising).
In the video I linked above, Tom mentions at least once that training your networks for DLSS, at least, will be FREE, as part of the Nvidia developer relations program.
Most games use Microsoft DXR, which AMD can implement as well via Radeon ProRender technology. AMD's next card will have an AI focus, but as I mentioned above, Nvidia has the advantage of dedicated acceleration hardware now. When AMD follows suit it will validate Nvidia's approach.
Most of the people who test these cards are saying that the 35-45% speed increase is really bloated; they don't expect more than a 15-20% increase.
Of the last 5 Nvidia generational cycles, only 1 had more than a 30% uplift, which was the GTX 1000 series; that was the biggest uplift since the GTX 280, way back in 2008. So GTX 1000 may have spoiled people a bit with expectations.
GTX 480->580 14.9%
GTX 580->680 28.8%
GTX 680->780 20.4%
GTX 780->980 29.9%
GTX 980->1080 56.3%
GTX 1080->2080 looks like 30% to 50% depending on the game, and 80% or more when using DLSS vs. older high-quality AA. (So much for DLSS being a joke.)
So the 2000 series will have a better-than-average performance uplift, even in OLD games, to go along with a wealth of new features.
Let’s wait for the reviews to find out.
The problem with your comparison is that you don't take prices into account. Right now the comparison is whether RTX 2080 gives us enough performance boost over GTX 1080 Ti to be worth the price. Comparing it to GTX 1080 is as wrong as comparing it to GTX 1050 because those slower cards are also much lower priced.
AMD generally had better GPU architectures than Nvidia from June 2008 (launch of the Radeon HD 4770) until September 2014 (launch of the GeForce GTX 980). Nvidia generally had better GPU architectures before that and more recently. Toward the start of the period where AMD was ahead, it was really just about a 2x die size advantage for a given level of performance, so Nvidia was able to compete on price but just not make very much money. Toward the end, GCN was massively better than Kepler at compute, but Kepler was more narrowly focused on graphics, so they managed to be about even on graphics. You could argue that neither of those constituted AMD being way ahead, but from December 2009 (Radeon HD 5000 series showing up in large volumes) until May 2012 (GeForce 600 series showing up in large volumes), it was a huge chasm.
If we're talking about games, DirectX is the de facto standard. I don't know if there has ever been a significant game that used CUDA outside of sponsored titles where Nvidia paid someone specifically to use CUDA. Even if you could have made a weak case for using CUDA in a game about 8 years ago, DirectX, OpenGL, and Vulkan have all long had compute shaders, so the only plausible case for using CUDA anymore is if Nvidia pays you a bunch of money specifically to use CUDA.
If you're talking about compute, then CUDA is the de facto standard if you're not writing GPU code of your own, but only want to run code written or paid for by Nvidia. Nvidia pays people specifically to use CUDA, with the express goal of making GPU code incompatible with AMD GPUs. That's kind of like saying that you're using C++ in some sense if you play a game written in C++, and is very different from choosing C++ as a language in which you will write your own code.
If you're writing your own GPU code for compute purposes, then CUDA works well if your only goal is to write Hello World, declare victory, and never touch a GPU ever again. It can be made to work for pretty simple programs. But if you need to write anything complicated, and you also need to optimize and debug it, you'd be nuts to use CUDA. It simply doesn't have the features available to make optimization and debugging a practical thing to do without making an enormous mess. It's also pretty much unmaintainable, so if the first GPU you target isn't also the last, you'll have plenty of reasons to sorely regret using CUDA. OpenCL is usually the way to go there.
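For reference, here is a hedged sketch of the "simple program" tier I mean - a generic vector add, not code from anything discussed in this thread. At this level CUDA is perfectly pleasant; the complaints above are about what happens once you go far past it.

```cuda
// Minimal, generic vector-add sketch (illustration only, not from any real project):
// roughly the level of program where CUDA gives you no trouble at all.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory keeps the example short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The optimization, debugging, and maintenance pain starts well beyond this point, and that is the part the tooling argument is about.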
The problem with saying that DLSS is "free" is that you're ignoring the maintenance costs, which is to say, most of the potential costs. If every possible combination of graphics settings required a completely separate x86 executable, that would be such an awful mess as to make having more than a handful combinations of graphical settings impractical. But it would still be "free" in the sense that you don't have to pay extra for the compiler.
If you want DLSS to work properly, it's probably going to be that kind of mess. That's much worse than costing money. Plenty of developers will happily pay money to get better tools that let them avoid that sort of mess.
On ray tracing, Nvidia looks like they're doing the right things for the right reasons. But ray tracing is so slow that I think it's going to be a long time before real-time ray tracing particularly matters to gaming. We might never get there. But pre-rendered lighting effects using ray tracing are very much a thing already, and making that cheaper or better is also good.
So, if nVidia is doing all the "deep learning" for free on their rack servers and pushing it out via drivers, what exactly are your tensor cores doing then?
Not exactly seeing how this is going to be better than SSAA (which is generally the highest quality), or why we necessarily need it over shader-based AA like FXAA or SMAA, which already can be used with a negligible performance hit and can produce pretty good results.
Sounds like the same thing as rasterized baked-in lighting/shadows... only with antialiasing.
And then there's the argument that if you are rendering at a sufficiently high PPI/resolution, you don't need AA anyway....
Remember Temporal AA (TXAA)? There are like... a dozen games that supported it in total, and it was supposed to be one of the big tech improvements of Kepler. I think DLSS is just the next buzzword they can print on the box and use to generate hype in this introduction period; a handful of devs will get paid to use it for pretty screenshots, and no one will remember it even exists in 12 months' time.
Nvidia needs to stop with the proprietary nonsense because the development companies are not going to waste their time developing for it.
You know, it will be interesting...
Historically, nVidia would pay some developers (typically large AAA projects, but not always, anything that generates a lot of hype really) to use their proprietary stuff. The developers are more than happy to oblige - they get specific attention to add something that looks awesome in their game along with the technical assistance to implement and debug it, the developer gets some nice screenshots out of it, and nVidia gets to slap their logo on it.
So, if... ~if~ this does turn into a one-horse race -- and that is not a foregone conclusion; the high-end cards are sexy, but the sausage of the industry gets made in the <=$200 market, where AMD is still very competitive. But let's say it does: AMD's 7nm is a trainwreck, and nVidia starts going down the path of Intel (minimal updates over time -- Ampere could be the last big R&D effort nVidia has to make in the GPU space, analogous to Sandy Bridge).
So nVidia's proprietary stuff... by sheer weight of the market, it becomes the de facto standard.
On the surface, that sounds like game over - nVidia wins. I'm not so sure it actually plays out like that though. Remember 3dfx? The company that used to be the premier name in 3D graphics (and, interestingly enough, was purchased by nVidia). 3dfx had their own API - Glide. It competed with the (then very fledgling) DirectX and (focused on fidelity, not speed) OpenGL.
Sure, there was competition to 3dfx - S3, Cirrus Logic, Trident... none of them were even close to 3dfx's capabilities though. If you were around and gaming at the time, seeing something run on an S3 ViRGE was "wow" (also an interesting fact: the S3 ViRGE was labeled a "Virtual Reality Graphics Engine", in 1995). But then you saw the same game on a Voodoo, and your jaw dropped. It was a totally different experience. Graphics went from chunky, pixelated, and choppy to silky smooth overnight. Nothing would be the same ever again.
So... 3dfx was the de facto graphics leader for several years. No one was even close. Where is Glide today? Where is 3dfx today?
Turns out, the market does have some influence. Those high-end (and high-dollar) cards are expensive, and make for great halo products with sexy screenshots. But they aren't the driving force in the market; all those smaller cards - the Trio64s, the Rages, the Blades... - didn't support Glide.
Quake also helped in this arena. id Software didn't believe in proprietary either (or, at the very least, Carmack didn't). GLQuake, for a lot of people, was the first OMG experience that opened up everyone's eyes to PC gaming. It ran extraordinarily well on Voodoo, but it did so using OpenGL - pushing an open standard. Developers started gravitating to this - it meant they could write code once and not have to include support for a lot of different proprietary standards... which, when you aren't getting paid to do so and aren't getting specific (and free) technical assistance, all of a sudden doesn't look nearly as attractive to throw your budget dollars at.
So, to wrap up this long post - nVidia is a juggernaut. But it doesn't necessarily mean they get to throw their weight around. The market still drives the car.
NVidia's proprietary stuff can only have the weight of the market behind it if they can beat AMD to the consoles.
Otherwise the devs will build for whatever the consoles have, and after that it's a question of whether NVidia's proprietary stuff is good enough to develop on top of those console graphics.
AMD has consoles now, but historically it's been fairly rare for someone to have the consoles and also have a PC market presence. And it isn't uncommon for generation breaks in consoles to go in completely different architectural directions.
I'm waiting for someone like Imagination to make another play at it. Imagination did have the Sega Dreamcast in the past, so it wouldn't be unheard of, and that particular company is hungry again since it lost its Apple contract.
Granted, given where graphics technology is today versus 20 years ago, it's not surprising to see the consolidation of patents and talent. And it would take a good bit to break away from an AMD APU - there are a lot of benefits there for consoles. I could easily see Microsoft making a break to try to distance themselves from the XB1 and PS4 and put themselves back out in front, like they had with the 360. Sony is less likely to break from AMD at this point; the PS4 was good to them, and AMD has a good roadmap that fits in line with Sony's.
So, if nVidia is doing all the "deep learning" for free on their rack servers and pushing it out via drivers, what exactly are your tensor cores doing then?
Machine learning generally has two stages: training and inference. Training is done on Nvidia's servers and inference is done locally.
The idea of training is that we can render a bunch of frames and know exactly what each pixel should be. We can also try to drop a pixel and compute what it should be from its neighbors. Or perhaps you should think of "samples" rather than pixels, with some supersampling going on. The idea of training in machine learning is to try a ton of different formulas to see which formula for computing a pixel from its neighbors tends to get closest to the actual color of the pixel. That gets computed on Nvidia's servers.
The idea of inference is that, once they have the formula, they can say, for the particular game that you happen to be playing, here's the formula to compute the missing pixels from their neighbors, or rather, additional fake samples from the ones that we actually computed. Actually taking the values of the neighbors and plugging them in to create extra samples for use in anti-aliasing is done on the GPU as part of rendering the game.
Rather than having a single formula that works for all games ever, they can customize it on a per-game basis. And having the tensor cores allows them to use a much more expensive formula to compute the missing pixels than before. So it's plausible that there could be some benefit.
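As a toy illustration of the inference side only: the sketch below applies a fixed 3x3 "formula from the neighbors", which is the simplified picture described above, not the real DLSS network (that is far larger and its internals aren't public). The weights are made-up placeholders standing in for whatever a per-game training run would produce.

```cuda
// Toy sketch of "apply a learned formula to a pixel's neighbors to guess an
// extra sample". NOT real DLSS; the weights below are invented placeholders.
#include <cstdio>
#include <cuda_runtime.h>

__constant__ float kWeights[9];   // hypothetical per-game weights, shipped via the driver

__global__ void reconstructSamples(const float* in, float* out, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float acc = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = min(max(x + dx, 0), w - 1);   // clamp at the image border
            int sy = min(max(y + dy, 0), h - 1);
            acc += kWeights[(dy + 1) * 3 + (dx + 1)] * in[sy * w + sx];
        }
    out[y * w + x] = acc;                          // the "guessed" extra sample
}

int main() {
    const int w = 8, h = 8;
    float hostWeights[9] = {0.05f, 0.1f, 0.05f, 0.1f, 0.4f, 0.1f, 0.05f, 0.1f, 0.05f};
    cudaMemcpyToSymbol(kWeights, hostWeights, sizeof(hostWeights));

    float *in, *out;
    cudaMallocManaged(&in, w * h * sizeof(float));
    cudaMallocManaged(&out, w * h * sizeof(float));
    for (int i = 0; i < w * h; ++i) in[i] = (i % 2) ? 1.0f : 0.0f;  // dummy "rendered" samples

    dim3 block(8, 8), grid((w + 7) / 8, (h + 7) / 8);
    reconstructSamples<<<grid, block>>>(in, out, w, h);
    cudaDeviceSynchronize();
    printf("out[0] = %f\n", out[0]);

    cudaFree(in); cudaFree(out);
    return 0;
}
```

The point of the tensor cores is that the real "formula" can be a much more expensive network evaluated every frame rather than nine multiplies like this.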
But if you use a formula for the wrong game, or for the wrong settings in the right game, it's likely to be worse outright than a one size fits all model that would work well for an "average" game. Thus, you should expect support to be very limited.
The outputs from machine learning are very opaque and you really can't track where they came from. They don't have any theoretical basis other than, we did a ton of computations and here's the output. If it's customized to your particular data, it can work pretty well. But if it's trained on data not substantially similar to your own, you should expect a case of garbage in, garbage out. So for a general purpose thing that works with arbitrary games, it will probably be inferior to what you could get from something like FXAA that has a good theoretical basis of why it ought to work.
As should be incandescently obvious, trying to guess extra samples from the ones that you have isn't going to be nearly as good as actually computing the extra samples via SSAA. So this isn't going to improve on image quality at the high end. You can't beat computing the right values exactly.
It's plausible that there could be cases where both the image quality and the performance hit are somewhere between FXAA and SSAA, with the image quality closer to the latter and the performance closer to the former. If that happens, then it kind of has a point in the handful of sponsored titles that implement it at the particular settings that they use. But even that is far from guaranteed.
^ Great video that explains RT in the BFV demo
It's really too soon to call the performance hit. DICE was working on Volta cards, and only got the RTX cards with real RT HW, 2 weeks before they showed the demo. There is a lot of room for optimization, because, after all, 2 weeks is 2 weeks. There was a clear statement from the Battlefield developers, that the sub 60 fps was a result of the short time they had with the card and that they have clear improvements planned.
RT has always been the gold standard for image quality in offline rendering, and the only reason it wasn't commonly used at the very beginning of real-time 3D was its massive compute costs. I still remember playing with mirrored surfaces in U:ED back on my Voodoo 4, and being amazed at how cool RT was and also how it would drag my system to a crawl.
RT obviates the need for multiple rasterization techniques, such as AO, which are becoming quite computationally expensive in their own right. So it's only natural to move to the simpler coding solution as soon as the compute resources become available.
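To illustrate that point: once you can trace arbitrary rays, ambient occlusion stops being a separate screen-space technique and becomes a handful of short occlusion rays per shading point. Below is a toy, self-contained sketch that estimates AO against a single hard-coded sphere - no RTX, DXR, or OptiX API involved, purely to show the idea.

```cuda
// Toy ray-traced ambient occlusion: shoot a few short random rays from each
// shading point and count how many escape. Traces against one hard-coded
// sphere instead of a real scene; illustration only.
#include <cstdio>
#include <math.h>
#include <cuda_runtime.h>
#include <curand_kernel.h>

struct Vec3 { float x, y, z; };

__device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does a ray from origin o along normalized direction d hit a unit sphere at c within maxT?
__device__ bool hitsSphere(Vec3 o, Vec3 d, Vec3 c, float maxT) {
    Vec3 oc = {o.x - c.x, o.y - c.y, o.z - c.z};
    float b = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - 1.0f);
    if (disc < 0.0f) return false;
    float t = -b - sqrtf(disc);
    return t > 1e-3f && t < maxT;
}

__global__ void ambientOcclusion(float* ao, int n, Vec3 occluder) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    curandState rng;
    curand_init(1234, i, 0, &rng);

    Vec3 p = {i * 0.1f, 0.0f, 0.0f};   // shading points along a ground plane (normal = +Y)
    const int kRays = 64;
    int unoccluded = 0;
    for (int r = 0; r < kRays; ++r) {
        // Random direction in the upper hemisphere (not importance-sampled; toy only).
        Vec3 d = {curand_uniform(&rng) * 2.0f - 1.0f,
                  curand_uniform(&rng),
                  curand_uniform(&rng) * 2.0f - 1.0f};
        float len = sqrtf(dot(d, d)) + 1e-6f;
        d.x /= len; d.y /= len; d.z /= len;
        if (!hitsSphere(p, d, occluder, 2.0f)) ++unoccluded;   // short occlusion ray
    }
    ao[i] = (float)unoccluded / kRays;   // 1 = fully open, 0 = fully occluded
}

int main() {
    const int n = 16;
    float* ao;
    cudaMallocManaged(&ao, n * sizeof(float));
    Vec3 occluder = {0.5f, 1.5f, 0.0f};   // a unit sphere hovering above the points
    ambientOcclusion<<<1, n>>>(ao, n, occluder);
    cudaDeviceSynchronize();
    for (int i = 0; i < n; ++i) printf("AO at x=%.1f: %.2f\n", i * 0.1f, ao[i]);
    cudaFree(ao);
    return 0;
}
```

A real renderer would fire those rays into the full scene through the RT hardware instead of a hand-rolled sphere test, but the structure is the same.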
It's my belief that NVIDIA chose this moment to release RT as a strategic decision to move the industry forward knowing that they could still beat AMD in conventional rendering even with the "wasted" die area for games that don't benefit from hardware accelerated RT.
However, I also don't predict any sort of abandonment of "traditional" rendering pathways happening until every sub ~$300 GPU has reasonable RT performance.
In the short term, I expect RT to have the same kind of industry effect as tessellation. NVIDIA will be a lot better at it than AMD for several generations, and will leverage their advantage by "optimizing" ray counts in green titles just like they did with tessellation factors.
While I still expect the occasional feature or optimization to be released in the next 5-10 years, rasterization techniques will gradually begin to stagnate. RT will become a focal point for gaming benchmarks very shortly after the hardware becomes available, which will begin the process of game companies drawing attention away from rasterization.
It's really too soon to call the performance hit. DICE was working on Volta cards, and only got the RTX cards with real RT HW, 2 weeks before they showed the demo. There is a lot of room for optimization, because, after all, 2 weeks is 2 weeks. There was a clear statement from the Battlefield developers, that the sub 60 fps was a result of the short time they had with the card and that they have clear improvements planned.
RT has always been the gold standard for image quality in offline rendering, and the only reason it wasn't commonly used at the very beginning of real-time 3D was its massive compute costs. I still remember playing with mirrored surfaces in U:ED back on my Voodoo 4, and being amazed at how cool RT was and also how it would drag my system to a crawl.
RT obviates the need for multiple rasterization techniques, such as AO, which are becoming quite computationally expensive in their own right. So it's only natural to move to the simpler coding solution as soon as the compute resources become available.
It's my belief that NVIDIA chose this moment to release RT as a strategic decision to move the industry forward knowing that they could still beat AMD in conventional rendering even with the "wasted" die area for games that don't benefit from hardware accelerated RT.
However, I also don't predict any sort of abandonment of "traditional" rendering pathways happening until every sub ~$300 GPU has reasonable RT performance.
In the short term, I expect RT to have the same kind of industry effect as tessellation. NVIDIA will be a lot better at it than AMD for several generations, and will leverage their advantage by "optimizing" ray counts in green titles just like they did with tessellation factors.
While I still expect the occasional feature or optimization to be released in the next 5-10 years, rasterization techniques will gradually begin to stagnate. RT will become a focal point for gaming benchmarks very shortly after the hardware becomes available, which will begin the process of game companies drawing attention away from rasterization.
Oh give me a break, RT is only useful if FPS is not important to the game. Any game where FPS is important will heavily limit any RT in those situations. RT is an impractical tech at this point in time. As to the Nvidia proprietary nonsense, no one is going to use it.
Why are folks assuming people are paying $ for raytracing? I’m pretty sure most people are paying $$$$ for the fastest graphics card available. That includes the best card for FPS games.
NVIDIA is not alone in this; dozens of developers are making the same claim, with their demos running multiple times faster on Turing compared to Volta and even more compared to Pascal.
Please tell me how it's impractical. How so? Based on the DICE devs getting the hardware two weeks before a demo was made? Do you know the cost to implement, or even the performance hit in real scenarios? No, you don't; there are no real reviews yet.
It's really too soon to call the performance hit. DICE was working on Volta cards, and only got the RTX cards with real RT HW, 2 weeks before they showed the demo. There is a lot of room for optimization, because, after all, 2 weeks is 2 weeks. There was a clear statement from the Battlefield developers, that the sub 60 fps was a result of the short time they had with the card and that they have clear improvements planned.
RT has always been the gold standard for image quality in offline rendering, and the only reason it wasn't commonly used at the very beginning of real-time 3D was its massive compute costs. I still remember playing with mirrored surfaces in U:ED back on my Voodoo 4, and being amazed at how cool RT was and also how it would drag my system to a crawl.
RT obviates the need for multiple rasterization techniques, such as AO, which are becoming quite computationally expensive in their own right. So it's only natural to move to the simpler coding solution as soon as the compute resources become available.
It's my belief that NVIDIA chose this moment to release RT as a strategic decision to move the industry forward knowing that they could still beat AMD in conventional rendering even with the "wasted" die area for games that don't benefit from hardware accelerated RT.
However, I also don't predict any sort of abandonment of "traditional" rendering pathways happening until every sub ~$300 GPU has reasonable RT performance.
In the short term, I expect RT to have the same kind of industry effect as tessellation. NVIDIA will be a lot better at it than AMD for several generations, and will leverage their advantage by "optimizing" ray counts in green titles just like they did with tessellation factors.
While I still expect the occasional feature or optimization to be released in the next 5-10 years, rasterization techniques will gradually begin to stagnate. RT will become a focal point for gaming benchmarks very shortly after the hardware becomes available, which will begin the process of game companies drawing attention away from rasterization.
While ray tracing makes a lot of awful fakery commonly paired with rasterization superfluous, including the ambient occlusion that you cite, it's not like you're replacing one big performance hit with another. The reason why so many games have some sort of ambient occlusion but not ray tracing, in spite of ambient occlusion not particularly making the game look better, is that it doesn't carry the killer performance hit of ray tracing. It also helps that ambient occlusion can easily be turned off and still have a perfectly good approach to rendering.
I'm actually somewhat baffled by what Nvidia is doing. I suspect that Turing was originally aimed at 10 nm and then ported back to 12 nm when Nvidia realized that 10 nm was going to be bad for GPUs. But if Nvidia releases a full lineup on 12 nm and then AMD releases a full lineup on 7 nm, AMD is going to slaughter Nvidia for that generation until Nvidia gets to 7 nm. It would be really shocking for Nvidia to release a full lineup on 12 nm, and then another full lineup on 7 nm six months later. If they do that, then the engineering costs of creating Turing dies on 12 nm will mostly be wasted. Recall that Nvidia has only launched five new GPUs for GeForce cards in the last three years. Launching Turing now on 12 nm is a huge screw-up unless 7 nm is a lot further away than it looks.
I don't think your comparison to tessellation works. For starters, your history of it is all wrong. Recall that AMD had four entire generations of GPUs launch with tessellation before Nvidia launched their first GPU with it. Even when Fermi decided to go with one tessellator per compute unit while AMD implemented them at basically the level of what Nvidia calls a graphics processing cluster, that massive difference in raw tessellation power was unnoticeable outside of synthetic benchmarks. Many years later, tessellation still isn't used very much even today. I don't think I've yet played a game that used it apart from the amateur project that I worked on for a while; I certainly haven't seen it in any video settings menu.
And then there's also the fundamental issue that tessellation is a major performance optimization. Go all-in on tessellation and you decrease the rendering load considerably, as you have massively fewer polygons to render on the far-away objects, which is to say, most of them. That's quite the opposite of what ray tracing does. The problem with tessellation was that it wasn't available until hardware was powerful enough to not particularly need it, at least outside of cell phones that don't use it for other reasons.
I wouldn't say that rasterization techniques will begin to stagnate in the future so much as that they did so several years ago. DirectX 12 and Vulkan brought not so much new features that would make games look better, but the ability to implement old features more efficiently. The previous version, DirectX 11, released in 2009.
I do think you're right that ray tracing will end up being important in benchmarking video cards. A benchmark that shows that you can get 200 frames per second with a GTX 1080 Ti or 150 with a Vega 64 isn't showing that one card is faster than the other. It's showing that they're both really fast, and not really a reason to prefer one card over the other. Change those frame rates to 40 and 30 and we're talking about a difference that matters, at least for the few people willing to turn ray tracing on.
That could also have the salutary effect of meaning that reviews focus more on games trying to do more demanding things that genuinely do matter and less on games that found creative ways to run inefficiently. Right now, the latter is the main way to get low frame rates on top end hardware.
I'm extremely skeptical that Turing is "multiple times faster" than Volta in anything other than ray tracing. Outside of ray tracing, I'd expect a Titan V to beat an RTX 2080 Ti at just about everything.
You can make a case for an RTX 2080 Ti as being the fastest card available other than the Titan V that costs $3000. It's much less likely that there will be a good case for buying the RTX 2080. And once 7 nm GPUs arrive next year, it's unlikely that there will be much of a case for buying either at all.
Looking back, at least as far back as Tesla (the last 10 years):
nVidia averages about 1.5 years between marketing generations. I use the term marketing because not all of those were actual chip/technology generations.
There have been two notable exceptions:
400 to 500 series: both of those were Fermi, and they were separated by only about 7 months. There were some reasons (not saying good reasons) why either the 400 series came out too early, or the 500 series was rushed out to replace it so soon - depending on how you want to look at it.
10 to 20 series: Pascal to Turing has had more than two years separating them, the longest period between releases in recent history. There are reasons here as well why there is so much distance between them - you can say either lack of competition forcing them to move, or milking the mining sales, or difficulty getting Volta out for whatever reason, or lack of RAM availability, or any number of other theories.
I think this goes back to a good point - why release Turing now on 12nm if 7nm is around the corner? If 7nm is available to AMD, it's also available to nVidia. That doesn't necessarily mean nVidia's engineers have figured it out (2009 is a good indicator of this: nVidia was still on 55nm with the GTX 285 when AMD was using 40nm on the 5870. nVidia put out the 400 series on 40nm to compete but didn't quite have it down, and then had to push the 500 series respin out very quickly. During that period, AMD had a significant advantage.)
So, could we see this play out again on 7nm? Possible. But i doubt it.
I think Turing came out just because the mining boom was waning significantly, and nVidia needed to do something to push sales in their primary revenue market. I think they sat on Pascal as long as they could, given that AMD/Intel haven't given them any competition to need to push anything else. I think Turing will end up being Volta architecture with the addition of whatever proprietary tech makes up the RTX cores, and that Titan V will be a good performance metric for games that do not utilize RTX (only a few weeks left until this is debunked or confirmed, I suppose).
I don't think AMD will sweep in with 7nm and take anything away from nVidia - at least not yet. AMD focused on the mid range, and has been focused there for a long time now. I don't believe Turing was put out to counter an AMD 7nm Vega/Navi variant. I don't think nVidia feels particularly threatened by AMD right now in the gaming space. Turing was strictly a play to Wall Street, and an effort to get people who have been driving Pascal 1080/1080Tis something new to throw their money at.
The real money maker is in the low/mid tier sales: The 1050/1060 tier, and nVidia hasn't even announced anything to replace Pascal there, at least officially. That's the area that AMD is very much still competitive with. But a new halo product goes a long way with marketing - after all, how many people do you know who just go and buy an nVidia lower tier card, pay more money for it, because they believe that nVidia across the board will be faster than AMD (or have better drivers, or run cooler, or whatever)? It's very common, even among people on this forum.
So nVidia's announcement of a new halo product will drive some revenue, no doubt, but it's not the primary revenue driver, which is why I think it's mostly aimed at Wall Street. And I don't think AMD has much at all to do with an oddly timed 12nm product. I think it was strictly because Pascal sales were finally starting to slump and nVidia wants to keep their stock price high.
If AMD does happen to provide some 7nm competition, it isn't unheard of for nVidia to roll out a new generation sooner than normal. A 30 series could come out in as little as a few months' time... If that were the case, I would expect the 30 series to lead with the revenue drivers: the 3050s and 3060s, and it's possible we wouldn't even see a 2050/2060 released at all. Or it could be that nVidia is planning their 7nm transition to occur with the 2050/2060 ... that's been done in the past as well (the 200 series, for example, the halo cards were on 55nm, but the lower tiers came out later on 40nm).
My guess would be: if AMD can beat a 2080Ti with a 7nm Navi, nVidia will call the new product a 30 series so they can re-release a new flagship on 7nm to retake the performance crown and push that out within 9 months of Navi's release. If 7nm Navi doesn't quite beat out a 2080Ti, nVidia will retain the 20 series name and just move their lower tier to 7nm, and save 7nm halo product for a later 30 series in 2020.
I think NVidia hasn't announced anything on 1050/1060 tier because whatever their replacement is, it likely won't support ray tracing. They need to keep their low end cards competitively priced, and with all the dedicated ray tracing hardware it's just not possible.
Even if they had GTX 2050 and GTX 2060 coming we wouldn't see any announcement yet because it would hurt their ray tracing hype.
Or it could be that nVidia is planning their 7nm transition to occur with the 2050/2060 ... that's been done in the past as well (the 200 series, for example, the halo cards were on 55nm, but the lower tiers came out later on 40nm).
That's a possibility that I hadn't considered. It might be that Nvidia has decided that 7 nm isn't ready for 400 mm^2 chips, but is ready for 100 mm^2, and so the big dies have to be on 12 nm. But higher end parts have much more to gain from better process nodes, and AMD has promised a big Vega on 7 nm this year. So if that's the case, it would point to Nvidia having severely botched the process node somehow.
It's also possible that Turing will really only be the higher end parts. Rumors already say that only the already announced RTX 2070 and higher will have the "RTX" moniker, while the rest are GTX. It's possible that the GeForce GTX 2060 and GTX 2050 will simply be a rebranded GeForce GTX 1070 Ti and GTX 1060, respectively. That would be enough for the fanboys to say "look how much faster the GTX 2060 is than the GTX 1060", as proof that the 2000 series is awesome.
The reason why the GeForce 500 series came so closely on the heels of the 400 series is that the entire 400 series was severely broken. The 500 series was nothing more than a base layer respin of the 400 series. You can do that in six months, so Nvidia did. It's very unusual for a GPU die to be so horribly broken as to justify a base layer respin, as that costs a ton of money, takes long enough that it would commonly be late in the GPU's intended product cycle before the respin launched, and most importantly, usually doesn't gain you much. That's as opposed to metal layer respins, which are far more common.
you know who just go and buy an nVidia lower tier card, pay more money for it, because they believe that nVidia across the board will be faster than AMD (or have better drivers, or run cooler, or whatever)? It's very common.
The market share across everything is 70% Nvidia / 30% AMD. AMD does not have a good brand any more; they aren't competitive in many segments for gamers. The "belief", as you put it, is very much true: faster cards, better drivers, and the best performance are apparent and proven at many levels for Nvidia. Steam market share, board partner sales, general press reviews, etc. - everything points to AMD not competing for a long time now... and it boils down to a 30% overall market share. If you believe all that is because Nvidia has better "marketing", that's crazy. Poor Vega. It is even worse in the professional market.
AMD is not even considered by professionals when it comes to GPU selection for 3D rendering; only a handful of small teams go for AMD solutions. The gap was enormous before; now it is on a whole new level.
NVIDIA owns not just the gaming market; they completely obliterate AMD on the professional/compute stuff as well - automotive, AI / neural networks / deep learning, visualization / rendering.
With the RT cores there's a chance that all the OpenCL-based render engines become obsolete - not that there are many, or that they're popular. AMD has nearly nothing (GPUOpen, ProRender, etc., but these are not on the same page as NVIDIA's solutions, not in the same league).
If AMD wants to stay relevant in gaming they have to focus on consoles, I bet NV will do everything in their power to get a console deal for the next round.
Even if AMD had the hardware, which they don't, people would still have no option other than to choose Nvidia for this line of work, as all the major renderers with enough features to be considered production-grade (Redshift, Octane) are CUDA-based.
The companies that will integrate RT cores into their render engines: Autodesk (Arnold), Chaos Group (V-Ray GPU & Project Lavina), Blender (Cycles), Epic (Unreal), Otoy (OctaneRender), Pixar (RenderMan), Redshift, Weta.
—— Here is a short portion of a recent interview:
CG Channel: What's the status of OpenCL in V-Ray GPU? Your blog posts only mention CUDA.
Lon Grohs (Chaos Group): We keep some support for it, but it’s not really advised. We’ve had a good relationship with AMD, but there have been driver problems and all kinds of things. And then of course they’ve released their own renderer, which complicates things.
The reality is that [something] like 99% of our customers are on Nvidia hardware. And Nvidia has really helped us push our development forward.
CGC: In your blog posts, you also mention that you’re looking Microsoft’s new DirectX Raytracing.
LG: [When DXR was announced at GDC this year] a lot of people [thought] we must have been caught off-guard, that we weren’t expecting it. No, no, no. We’ve known it was coming. We’re the largest company in the world solely dedicated to ray tracing technology. We’ve been dedicated to optimising ray tracing for 20 years. So we’re pretty excited about it.
We used to have to make the [case for] why ray tracing is superior to rasterised graphics. But now we’re in a new conversation: ray tracing in real time. And yes, you [have it] as a concept, but you don’t have the ray tracing that you’re used to … all of the complexity that you see when you’re watching Game of Thrones or The Avengers. That’s not what real-time ray tracing is doing yet. It’s very limited. It’s one bounce of shadows, one of reflections. Which is definitely the way to go; it’s just not as ‘real’ as full ray tracing.
—— One just has to wonder how AMD is run; I mean, how on Earth is this acceptable at any level? AMD still doesn't understand that no one will integrate / code their solutions into any software; they have to devote the resources to make things happen. This is evident today with the Vega NGG and primitive shader fiasco.
I used to game on ATI/AMD. I simply provide wallet share to whatever I can afford that suits my needs. As my needs change and evolve, so have my purchasing decisions.
I don’t think there is any fault in what I state. A healthy competitor drives down prices.
When you say AMD is not competitive in "many segments for gamers", I think you mean "the top end". Because AMD is competitive everywhere else for gamers. The only way that you can make that into segments plural is if you mean something like the $700 segment, the $800 segment, the $1000 segment, the $1200 segment, and the $3000 segment.
For professional GPUs, it depends on what you're doing. There are some programs where AMD is competitive, and there are some where AMD simply isn't. If what you need is something where you know that AMD isn't competitive, then of course you buy Nvidia.
Compute is that way, too, and there are plenty of compute algorithms where a Vega 64 will completely obliterate a GTX 1080 Ti or a Tesla P100. I'd bet on AMD's upcoming 7 nm Vega doing the same to a Tesla V100 and for the same reasons: whipping it in some algorithms while losing soundly in others. If you're willing to look at a per node level rather than per GPU, being on 7 nm is likely to give AMD a large efficiency advantage, too.
Outside of mining (which is a form of compute), the money in compute GPUs is largely dominated by people buying GPUs to run code written or paid for by Nvidia, and AMD hasn't been competitive at writing their own code. But if you want to run algorithms where Nvidia can't hand you completed code to do what you need, that's not a factor. Which is to say, in the overwhelming majority of things that could run well on GPUs, Nvidia having already written a bunch of code is completely irrelevant. At the moment, most of the world just says, well then, we just won't use GPUs at all for that. CUDA has done a lot to poison the well and convince people that GPU programming is harder than necessary if you need anything non-trivial.
Gamers have mostly bought Nvidia for the last 16 months or so because the miners had a fairly strong preference for AMD and were buying up the AMD GPUs before gamers could get to them. If AMD and Nvidia each have about as good of GPUs with an MSRP of $200, but miners have made it so that you have to pay $400 to get the AMD GPU or $300 for the Nvidia, then of course you buy Nvidia. That doesn't mean that AMD isn't competitive; they were selling GPUs as fast as they could make them. It only means that a huge mining bubble distorted the market.
The ONLY ray tracing you will use is on the high end 2080 and 2080 ti cards. Ray tracing on a 2060 or a 2050 is a complete joke. Too much of a performance hit. The 2070 can probably be included in that too. The best buy for a GPU right now is the 1080, as they are going for under $500 just about everywhere.
https://forums.guru3d.com/threads/rtx-2080ti-versus-gtx-1080ti-game-benchmarks-first-real-leaks-appear-online.422790/#post-5580623
As for Nvidia and CUDA / coding - CUDA is the defacto standard.
If AMD can deliver better architecture in every way just like ATI did in radeon 9700 to x800 period people will buy them.
But since geforce 8800 launch there hasn’t been anything better than Nvidia.
They need be better in every way for at least 2 generations just like ATI and they can have 60% market share. That hasn’t happened in such a longtime.
Fury-Vega is a huge disappointment seeing as it went from 28-14nm. Vega is still a disappointment from a gaming perspective. They threw a bunch of stuff in it for compute / professional use that bloated the die size and power use and some of the promised features to help the gaming side were MIA. With Turing we have a bunch of stuff thrown in that also bloated the die size, the jury is out on the feature benefits (2 weeks time).
I wouldn’t say DLSS is a joke. Tensor Cores are not used for Raytracing (they are used for DLSS and image AI training and image de-noising).
During the Video, I linked above,Tom mentions at least once, that training your networks for DLSS at least will be FREE, as part of the Nvidia developer relations program.
Most games use Microsoft DXR which AMD can implement as well via Radeon ProRender technology. AMDs next card will have AI focus but as I mentioned above Nvidia have advantage of dedicated hardware for acceleration now. When AMD follows suit it will validate Nvidia’s approach.
Of the last 5 Nvidia generational cycles, only 1 had more than 30% uplift, which was GTX 1000 series, which had the biggest uplift since GTX 280, way back in 2008. So GTX 1000 may have spoiled people a bit with expectations.
GTX 480->580 14.9%
GTX 580->680 28.8%
GTX 680->780 20.4%
GTX 780->980 29.9%
GTX 980->1080 56.3%
GTX 1080->2080 Looks like 30% to 50% depending on game, and 80% or more when using DLSS vs older high quality AA. (So much for DLSS being a joke. )
So the 2000 series will have Better than Average performance uplift, even in OLD games, to go along with a wealth of new features.
Let’s wait for the reviews to find out.
If we're talking about games, DirectX is the de facto standard. I don't know if there has ever been a significant game that used CUDA outside of sponsored titles where Nvidia paid someone specifically to use CUDA. Even if you could have made a weak case for using CUDA in a game about 8 years ago, DirectX, OpenGL, and Vulkan have all long had compute shaders, so the only plausible case for using CUDA anymore is if Nvidia pays you a bunch of money specifically to use CUDA.
If you're talking about compute, then CUDA is the de facto standard if you're not writing GPU code of your own, but only want to run code written or paid for by Nvidia. Nvidia pays people specifically to use CUDA, with the express goal of making GPU code incompatible with AMD GPUs. That's kind of like saying that you're using C++ in some sense if you play a game written in C++, and is very different from choosing C++ as a language in which you will write your own code.
If you're writing your own GPU code for compute purposes, then CUDA works well if your only goal is to write Hello World, declare victory, and never touch a GPU ever again. It can be made to work for pretty simple programs. But if you need to write anything complicated, and you also need to optimize and debug it, you'd be nuts to use CUDA. It simply doesn't have the features available to make optimization and debugging a practical thing to do without making an enormous mess. It's also pretty much unmaintainable, so if the first GPU you target isn't also the last, you'll have plenty of reasons to sorely regret using CUDA. OpenCL is usually the way to go there.
The problem with saying that DLSS is "free" is that you're ignoring the maintenance costs, which is to say, most of the potential costs. If every possible combination of graphics settings required a completely separate x86 executable, that would be such an awful mess as to make having more than a handful combinations of graphical settings impractical. But it would still be "free" in the sense that you don't have to pay extra for the compiler.
If you want DLSS to work properly, it's probably going to be that kind of mess. That's much worse than costing money. Plenty of developers will happily pay money to get better tools that let them avoid that sort of mess.
On ray tracing, Nvidia looks like they're doing the right things for the right reasons. But ray tracing is so slow that I think it's going to be a long time before real-time ray tracing particularly matters to gaming. We might never get there. But pre-rendered lighting effects using ray tracing are very much a thing already, and making that cheaper or better is also good.
So, if nVidia is doing all the "deep learning" for free on their rack servers and pushing it out via drivers, what exactly are your tensor cores doing then?
Not exactly seeing how this is going to be better than SSAA (which is generally the highest quality), or why we necessarily need it over shader-based AA like FXAA or SMAA, which already can be used with a negligible performance hit and can produce pretty good results.
Sounds like the same thing as rasterized baked in lighting/shadows... only with Antialiasing.
And then there's the argument, if you are rendering at a sufficiently high enough PPI/Resolution, you don't need AA anyway....
Remember Temporal AA (TXAA)? There are like... a dozen games that supported it total, and it was supposed to be one of the big tech improvements of Kepler. I think DLSS is just the next buzzword they can print on the box and use to generate hype in this introduction period; a handful of devs will get paid to use it for pretty screenshots, and no one will remember it even exists in 12 months time.
Historically, nVidia would pay some developers (typically large AAA projects, but not always, anything that generates a lot of hype really) to use their proprietary stuff. The developers are more than happy to oblige - they get specific attention to add something that looks awesome in their game along with the technical assistance to implement and debug it, the developer gets some nice screenshots out of it, and nVidia gets to slap their logo on it.
So, if... ~if~ this does turn into a one horse race -- and that is not a foregone conclusion, the high end cards are sexy, but the sausage of the industry are in the <=$200 market, where AMD is still very competitive. But let's say it does, AMD's 7nm is a trainwreck, and nVidia starts going down the path of Intel (minimal updates over time -- Ampere could be the last big R&D effort nVidia has to make in the GPU space, analogous to Sandy Bridge).
So nVidia's proprietary stuff... by shear weight of the market, it becomes the de facto standard.
On the surface, that sounds like game over - nVidia wins. I'm not so sure it actually plays out like that though. Remember 3dfx? The company that used to be the premier name in 3D graphics (and interestingly enough, were purchased by nVidia). 3dfx had their own API - Glide. It competed with the (then very fledgling) DirextX and (focused on fidelity, not speed) OpenGL.
Sure, there was competition to 3dfx - S3, Cirrus Logic, Trident... none of them were even close to 3dfx's capabilities though. If you were around and gaming at the time, seeing something run on a S3 Virge was "wow" (also interesting fact - the S3 Virge was labeled as a "Virtual Reality Graphics Engine", in 1995... But then seeing the same game on a Voodoo, and your jaw dropped. It was a totally different experience. Graphics went from chunky, pixelated, and choppy to silky smooth overnight. Nothing would be the same ever again.
So... 3dfx was the defacto graphics leader for several years. No one was even close. Where is Glide today? Where is 3dfx today?
Turns out, the market does have some influence. Those high end (and high dollar) cards are expensive, and make for great halo products with sexy screenshots. But they aren't the driving force in the market, all those smaller cards - the Trio64s, the Rages, the Blades.... They didn't support Glide.
Quake also helped in this arena. ID didn't believe in proprietary either (or, at the very least, Carmack didn't). glQuake, for a lot of people, was the first OMG experience that opened up everyone's eyes to PC gaming. It ran extraordinarily well on Voodoo, but it did so using OpenGL - pushing an open standard. Developers started gravitating to this - it meant they could write code once, and not have to include support for a lot of different proprietary standards... which when you aren't getting paid to do so, and you aren't getting specific (and free) technical assistance, all of a sudden doesn't look nearly as attractive to throw your budget dollars at.
So, to wrap up this long post - nVidia is a juggernaut. But it doesn't necessarily mean they get to throw their weight around. The market still drives the car.
Otherwise the devs will build for whatever the consoles have, and after that it's a question if NVidia's proprietary stuff is good enough to develop on top of those console graphics.
I'm waiting for someone like Imagination to make another play at it. Imagination did have the Sega Dreamcast in the past, so it wouldn't be unheard of, and that particular company is hungry again since it lost its Apple contract.
Granted, given where graphics technology is today versus 20 years ago, it's not surprising to see the consolidation of patents and talent. And it would take a good bit to break away from an AMD APU - there are a lot of benefits there for consoles. I could easily see Microsoft making a break to try to distance themselves from the XB1 and PS4 and put themselves back out in front, like they had with the 360. Sony is less likely to break from AMD at this point; the PS4 was good to them, and AMD has a good roadmap that fits in line with Sony's plans.
The idea of training is that we can render a bunch of frames and know exactly what each pixel should be. We can also try to drop a pixel and compute what it should be from its neighbors. Or perhaps you should think of "samples" rather than pixels, with some supersampling going on. The idea of training in machine learning is to try a ton of different formulas to see which formula for computing a pixel from its neighbors tends to get closest to the actual color of the pixel. That gets computed on Nvidia's servers.
The idea of inference is that, once they have the formula, they can say, for the particular game that you happen to be playing, here's the formula to compute the missing pixels from their neighbors, or perhaps rather, additional fake samples from the ones that we actually computed. Actually taking the values of the neighbors and plugging it in to create extra samples for use in anti-aliasing is done on the GPU as part of rendering the game.
Rather than having a single formula that works for all games ever, they can customize it on a per-game basis. And having the tensor cores allows them to use a much more expensive formula to compute the missing pixels than before. So it's plausible that there could be some benefit.
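If it helps to see that split in miniature, here's a toy sketch (plain NumPy; a linear least-squares fit stands in for the neural network, and random numbers stand in for rendered frames - none of this is Nvidia's actual pipeline):

```python
# Toy sketch of the train/infer split described above. A linear least-squares
# fit stands in for the neural network, random numbers stand in for rendered
# frames; none of this is Nvidia's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)

# "Training" (done offline, on the vendor's servers in the real scheme):
# for many samples we know the true, high-quality value and the 8 neighboring
# values that the game actually rendered.
neighbors = rng.random((10_000, 8))
true_pixel = neighbors.mean(axis=1) + 0.01 * rng.standard_normal(10_000)

# Search for the per-game "formula": the weights that best predict the true
# value from its neighbors (least squares here; a deep network in practice).
weights, *_ = np.linalg.lstsq(neighbors, true_pixel, rcond=None)

# "Inference" (done on the player's GPU, every frame): apply the learned
# weights to neighbors from a freshly rendered frame to fill in a missing sample.
new_neighbors = rng.random(8)
print(f"predicted sample: {new_neighbors @ weights:.3f}")
```

The point is only the division of labor: the expensive fitting happens once, offline, per game, while the per-frame work is just applying the learned weights - which is the part the tensor cores are there to speed up.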
But if you use a formula for the wrong game, or for the wrong settings in the right game, it's likely to be worse outright than a one size fits all model that would work well for an "average" game. Thus, you should expect support to be very limited.
The outputs from machine learning are very opaque and you really can't track where they came from. They don't have any theoretical basis other than, we did a ton of computations and here's the output. If it's customized to your particular data, it can work pretty well. But if it's trained on data not substantially similar to your own, you should expect a case of garbage in, garbage out. So for a general purpose thing that works with arbitrary games, it will probably be inferior to what you could get from something like FXAA that has a good theoretical basis of why it ought to work.
As should be incandescently obvious, trying to guess extra samples from the ones that you have isn't going to be nearly as good as actually computing the extra samples via SSAA. So this isn't going to improve on image quality at the high end. You can't beat computing the right values exactly.
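Here's a toy NumPy experiment to put numbers on that, with a made-up 1D "scene" full of hard edges standing in for real shading: the SSAA result averages real extra samples, the guessed result averages samples interpolated from the coarse render, and the two don't match.

```python
# Toy numbers behind the claim: averaging real extra samples (SSAA) vs.
# averaging extra samples guessed by interpolating the coarse render.
# The 1D "scene" and the resolutions are made up for illustration.
import numpy as np

def scene(x):
    return np.where(x % 0.3 < 0.15, 0.1, 0.9)   # hypothetical high-contrast stripes

pixels = np.linspace(0, 1, 32, endpoint=False)    # one coarse sample per pixel
coarse = scene(pixels)

# 4x SSAA: actually shade 4 sub-samples per pixel, then average them.
subs = pixels[:, None] + np.linspace(0, 1 / 32, 4, endpoint=False)[None, :]
ssaa = scene(subs).mean(axis=1)

# "Guessed" AA: invent the same 4 sub-samples by interpolating the coarse render.
guessed = np.interp(subs.ravel(), pixels, coarse).reshape(subs.shape).mean(axis=1)

print("mean abs error vs the real SSAA result:", np.abs(guessed - ssaa).mean())
```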
It's plausible that there could be cases where both the image quality and the performance hit are somewhere between FXAA and SSAA, with the image quality closer to the latter and the performance closer to the former. If that happens, then it kind of has a point in the handful of sponsored titles that implement it at the particular settings that they use. But even that is far from guaranteed.
https://www.tomshardware.com/news/battlefield-v-ray-tracing,37732.html
Mainly because it is a major hindrance to fps.
A great video that explains RT in the BFV demo.
It's really too soon to call the performance hit. DICE was working on Volta cards and only got the RTX cards with real RT hardware two weeks before they showed the demo. There is a lot of room for optimization because, after all, two weeks is two weeks. There was a clear statement from the Battlefield developers that the sub-60 fps was a result of the short time they had with the card and that they have clear improvements planned.
RT has always been the gold standard for image quality in offline rendering, and the only reason it wasn't commonly used at the very beginning of real-time 3D was its massive compute costs. I still remember playing with mirrored surfaces in U:ED back on my Voodoo 4, and being amazed at how cool RT was and also how it would drag my system to a crawl.
RT obviates the need for multiple rasterization techniques, such as ambient occlusion (AO), which are becoming quite computationally expensive in their own right. So it's only natural to move to the simpler coding solution as soon as the compute resources become available.
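As a rough illustration of why the ray-traced path is the "simpler coding solution", here's a toy ambient occlusion estimator in plain Python - a made-up scene of spheres and a handful of random hemisphere rays, nowhere near production code, but the whole technique fits in two small functions, whereas a screen-space approximation needs depth buffers, normal reconstruction, blur passes, and a pile of tuning.

```python
# Toy ray-cast ambient occlusion: shoot random hemisphere rays from a surface
# point and count how many escape the (made-up) scene of spheres.
import math, random

random.seed(1)

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere intersection test (direction assumed normalized);
    # returns True on any forward hit.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2
    return t > 1e-4

def ambient_occlusion(point, normal, occluders, samples=64):
    unoccluded = 0
    for _ in range(samples):
        # Pick a random direction and flip it into the hemisphere around the normal.
        d = [random.gauss(0, 1) for _ in range(3)]
        length = math.sqrt(sum(x * x for x in d))
        d = [x / length for x in d]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-x for x in d]
        if not any(ray_hits_sphere(point, d, c, r) for c, r in occluders):
            unoccluded += 1
    return unoccluded / samples  # 1.0 = fully open, 0.0 = fully occluded

# A point on the ground with one sphere hovering nearby (hypothetical scene).
print(ambient_occlusion((0, 0, 0), (0, 1, 0), [((0, 1.5, 0), 1.0)]))
```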
It's my belief that NVIDIA chose this moment to release RT as a strategic decision to move the industry forward knowing that they could still beat AMD in conventional rendering even with the "wasted" die area for games that don't benefit from hardware accelerated RT.
However, I also don't predict any sort of abandonment of "traditional" rendering pathways happening until every sub ~$300 GPU has reasonable RT performance.
In the short term, I expect RT to have the same kind of industry effect as tessellation. NVIDIA will be a lot better at it than AMD for several generations, and will leverage their advantage by "optimizing" ray counts in green titles just like they did with tessellation factors.
While I still expect the occasional feature or optimization to be released in the next 5-10 years, rasterization techniques will gradually begin to stagnate. RT will become a focal point for gaming benchmarks very shortly after the hardware becomes available, which will begin the process of game companies drawing attention away from rasterization.
NVIDIA is not alone in this; dozens of developers are making the same claim, with their demos running multiple times faster on Turing compared to Volta, and even more compared to Pascal.
Please tell me how it's impractical. Based on what - DICE devs getting the hardware two weeks before the demo was made? Do you know the cost to implement, or even the performance hit in real scenarios? No, you don't. There are no real reviews yet.
I'm actually somewhat baffled by what Nvidia is doing. I suspect that Turing was originally aimed at 10 nm and then ported back to 12 nm when Nvidia realized that 10 nm was going to be bad for GPUs. But if Nvidia releases a full lineup on 12 nm and then AMD releases a full lineup on 7 nm, AMD is going to slaughter Nvidia for that generation until Nvidia gets to 7 nm. It would be really shocking for Nvidia to release a full lineup on 12 nm, and then another full lineup on 7 nm six months later. If they do that, then the engineering costs of creating Turing dies on 12 nm will mostly be wasted. Recall that Nvidia has only launched five new GPUs for GeForce cards in the last three years. Launching Turing now on 12 nm is a huge screw-up unless 7 nm is a lot further away than it looks.
I don't think your comparison to tessellation works. For starters, the history of it is usually remembered wrong. Recall that AMD had four entire generations of GPUs launch with tessellation before Nvidia launched their first GPU with it. Even when Fermi went with one tessellator per compute unit while AMD implemented them at roughly the level of what Nvidia calls a graphics processing cluster, that massive difference in raw tessellation power was unnoticeable outside of synthetic benchmarks. Many years later, tessellation still isn't used very much. I don't think I've yet played a game that used it apart from the amateur project that I worked on for a while; I certainly haven't seen it in any video settings menu.
And then there's also the fundamental issue that tessellation is a major performance optimization. Go all-in on tessellation and you decrease the rendering load considerably, since you have massively fewer polygons to render on the far-away objects, which is to say, most of them. That's quite the opposite of what ray tracing does. The problem with tessellation was that it wasn't available until hardware was powerful enough to not particularly need it, at least outside of cell phones, which skip it for other reasons.
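A back-of-the-envelope sketch of that optimization (the factor formula and triangle counts here are made up; real engines compute factors per patch edge on the GPU):

```python
# Why distance-based tessellation cuts the load: the farther away a patch is,
# the lower its tessellation factor and the fewer triangles it produces.

def tess_factor(distance, base=64.0, min_f=1.0, max_f=64.0):
    # Crude, hypothetical distance-based factor, clamped to a sane range.
    return max(min_f, min(max_f, base / max(distance, 1.0)))

def triangles_for_patch(factor):
    # A quad patch tessellated at factor N yields on the order of 2 * N^2 triangles.
    return int(2 * factor * factor)

for d in (1, 5, 20, 100):
    f = tess_factor(d)
    print(f"distance {d:>3}: factor {f:5.1f} -> ~{triangles_for_patch(f):>5} triangles")
```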
I wouldn't say that rasterization techniques will begin to stagnate in the future so much as that they did so several years ago. DirectX 12 and Vulkan brought not so much new features that would make games look better as the ability to implement old features more efficiently. The previous version, DirectX 11, was released in 2009.
I do think you're right that ray tracing will end up being important in benchmarking video cards. A benchmark that shows that you can get 200 frames per second with a GTX 1080 Ti or 150 with a Vega 64 isn't showing that one card is faster than the other. It's showing that they're both really fast, and not really a reason to prefer one card over the other. Change those frame rates to 40 and 30 and we're talking about a difference that matters, at least for the few people willing to turn ray tracing on.
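Put in frame times rather than frame rates, using the numbers above, the difference is obvious:

```python
# Frame times make the point clearer than frame rates (frame time = 1000 ms / fps).
for fast, slow in ((200, 150), (40, 30)):
    t_fast, t_slow = 1000 / fast, 1000 / slow
    print(f"{fast} fps vs {slow} fps -> {t_fast:.1f} ms vs {t_slow:.1f} ms "
          f"(gap: {t_slow - t_fast:.1f} ms per frame)")
```

A gap of 1.7 ms per frame is hard to notice; a gap of 8.3 ms per frame is the difference between smooth and choppy.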
That could also have the salutary effect of meaning that reviews focus more on games trying to do more demanding things that genuinely do matter and less on games that found creative ways to run inefficiently. Right now, the latter is the main way to get low frame rates on top end hardware.
You can make a case for an RTX 2080 Ti as being the fastest card available other than the Titan V that costs $3000. It's much less likely that there will be a good case for buying the RTX 2080. And once 7 nm GPUs arrive next year, it's unlikely that there will be much of a case for buying either at all.
nVidia averages about 1.5 years between marketing generations. I use the term marketing because not all of those were actual chip/technology generations.
There have been two notable exceptions:
400 to 500 series: both of those were Fermi, and they were separated by only about seven months. There were some reasons (not saying good reasons) why either the 400 series came out too early, or the 500 series was rushed out to replace it so soon - depending on how you want to look at it.
10 to 20 series: Pascal to Turing has had more than two years separating them, the longest period between releases in recent history. There are reasons here as well why there is so much distance between them - you can say either lack of competition forcing them to move, or milking the mining sales, or difficulty getting Volta out for whatever reason, or lack of RAM availability, or any number of other theories.
I think this goes back to a good point - why release Turing now on 12nm if 7nm is around the corner? If 7nm is available to AMD, it's also available to nVidia. That doesn't necessarily mean nVidia engineers have figured it out (2009 is a good indicator of this: nVidia was still selling 55nm parts like the GTX 285 while AMD was shipping the 5870 on 40nm. nVidia put out the 400 series on 40nm to compete but didn't quite have the process down, and then had to push the 500 series respin out very quickly. During that period, AMD had a significant advantage.)
So, could we see this play out again on 7nm? Possible. But I doubt it.
I think Turing came out just because the mining boom was waning significantly, and nVidia needed to do something to push sales in their primary revenue market. I think they sat on Pascal as long as they could, given that AMD/Intel haven't given them any competition to need to push anything else. I think Turing will end up being Volta architecture with the addition of whatever proprietary tech makes up the RTX cores, and that Titan V will be a good performance metric for games that do not utilize RTX (only a few weeks left until this is debunked or confirmed, I suppose).
I don't think AMD will sweep in with 7nm and take anything away from nVidia - at least not yet. AMD focused on mid range, and has been focused there for a long time now. I don't believe Turing was put out to counter an AMD 7nm Vega/Navi variant. I don't think nVidia feels particularly threatened by AMD right now in the gaming space. Turing was strictly a play to Wall Street, and an effort to give people who have been driving Pascal 1080/1080Tis something new to throw their money at.
The real money maker is in the low/mid tier sales: the 1050/1060 tier, and nVidia hasn't even announced anything to replace Pascal there, at least officially. That's the area where AMD is very much still competitive. But a new halo product goes a long way with marketing - after all, how many people do you know who just go and buy a lower-tier nVidia card, and pay more money for it, because they believe that nVidia will be faster across the board than AMD (or have better drivers, or run cooler, or whatever)? It's very common, even among people on this forum.
So nVidia's announcement of a new halo product will drive some revenue, no doubt, but it's not the primary revenue driver, which is why I think it's mostly aimed at Wall Street. And I don't think AMD has much at all to do with an oddly timed 12nm product. I think it was strictly because Pascal sales were finally starting to slump and nVidia wants to keep their stock price high.
If AMD does happen to provide some 7nm competition, it isn't unheard of for nVidia to roll out a new generation sooner than normal. A 30 series could come out in as little as a few months' time... If that were the case, I would expect the 30 series to lead with the revenue drivers: the 3050s and 3060s, and it's possible we wouldn't even see a 2050/2060 released at all. Or it could be that nVidia is planning their 7nm transition to occur with the 2050/2060... that's been done in the past as well (the 200 series, for example: the halo cards were on 55nm, but the lower tiers came out later on 40nm).
My guess would be: if AMD can beat a 2080Ti with a 7nm Navi, nVidia will call the new product a 30 series so they can re-release a new flagship on 7nm to retake the performance crown and push that out within 9 months of Navi's release. If 7nm Navi doesn't quite beat out a 2080Ti, nVidia will retain the 20 series name and just move their lower tier to 7nm, and save 7nm halo product for a later 30 series in 2020.
Even if they had a GTX 2050 and GTX 2060 coming, we wouldn't see any announcement yet, because it would hurt their ray tracing hype.
It's also possible that Turing will really only be the higher end parts. Rumors already say that only the already announced RTX 2070 and higher will have the "RTX" moniker, while the rest are GTX. It's possible that the GeForce GTX 2060 and GTX 2050 will simply be a rebranded GeForce GTX 1070 Ti and GTX 1060, respectively. That would be enough for the fanboys to say "look how much faster the GTX 2060 is than the GTX 1060", as proof that the 2000 series is awesome.
The reason why the GeForce 500 series came so closely on the heels of the 400 series is that the entire 400 series was severely broken. The 500 series was nothing more than a base layer respin of the 400 series. You can do that in six months, so Nvidia did. It's very unusual for a GPU die to be so horribly broken as to justify a base layer respin, as that costs a ton of money, takes long enough that it would commonly be late in the GPU's intended product cycle before the respin launched, and most importantly, usually doesn't gain you much. That's as opposed to metal layer respins, which are far more common.
AMD is not even considered by professionals when it comes to GPU selection for 3D rendering; only a handful of small teams go for AMD solutions. The gap was enormous before; now it is on a whole new level.
NVIDIA owns not just the gaming market; they completely obliterate AMD on the professional/compute side as well - automotive, AI / neural networks / deep learning, visualization / rendering.
With the RT cores there's a chance that all the OpenCL-based render engines become obsolete - not that there are many of them, or that they're popular. AMD has nearly nothing (GPUOpen, ProRender, etc., but these are not on the same page as NVIDIA's solutions - not in the same league).
If AMD wants to stay relevant in gaming they have to focus on consoles, I bet NV will do everything in their power to get a console deal for the next round.
Even if AMD had the hardware, which they don't, people would still have no option other than to choose Nvidia for this line of work, as all the major renderers with enough features to be considered production-grade (Redshift, Octane) are CUDA-based.
The companies that will integrate RT cores to their render engines:
Autodesk (Arnold), Chaos Group (VRAY GPU & Project Lavina), Blender (Cycles), Epic (Unreal), Otoy (OctaneRender), Pixar (Renderman), Redshift, Weta
——
Here is a short portion of a recent interview:
CG Channel: What’s the status of OpenCL in V-Ray GPU? Your blog posts only mention CUDA.
Lon Grohs (Chaos Group): We keep some support for it, but it’s not really advised. We’ve had a good relationship with AMD, but there have been driver problems and all kinds of things. And then of course they’ve released their own renderer, which complicates things.
The reality is that [something] like 99% of our customers are on Nvidia hardware. And Nvidia has really helped us push our development forward.
CGC: In your blog posts, you also mention that you’re looking at Microsoft’s new DirectX Raytracing.
LG: [When DXR was announced at GDC this year] a lot of people [thought] we must have been caught off-guard, that we weren’t expecting it. No, no, no. We’ve known it was coming. We’re the largest company in the world solely dedicated to ray tracing technology. We’ve been dedicated to optimising ray tracing for 20 years. So we’re pretty excited about it.
We used to have to make the [case for] why ray tracing is superior to rasterised graphics. But now we’re in a new conversation: ray tracing in real time. And yes, you [have it] as a concept, but you don’t have the ray tracing that you’re used to … all of the complexity that you see when you’re watching Game of Thrones or The Avengers. That’s not what real-time ray tracing is doing yet. It’s very limited. It’s one bounce of shadows, one of reflections. Which is definitely the way to go; it’s just not as ‘real’ as full ray tracing.
——
One just has to wonder how AMD is run. I mean, how on Earth is this acceptable at any level? AMD still doesn't understand that no one else will integrate / code their solutions into any software; they have to devote the resources to make things happen themselves. This is evident today with Vega and the NGG / Primitive Shader fiasco.
I used to game on ATI/AMD. I simply give my wallet share to whatever I can afford that suits my needs. As my needs change and evolve, so have my purchasing decisions.
I don’t think there is any fault in what I state. A healthy competitor drives down prices.
For professional GPUs, it depends on what you're doing. There are some programs where AMD is competitive, and there are some where AMD simply isn't. If what you need is something where you know that AMD isn't competitive, then of course you buy Nvidia.
Compute is that way, too, and there are plenty of compute algorithms where a Vega 64 will completely obliterate a GTX 1080 Ti or a Tesla P100. I'd bet on AMD's upcoming 7 nm Vega doing the same to a Tesla V100 and for the same reasons: whipping it in some algorithms while losing soundly in others. If you're willing to look at a per node level rather than per GPU, being on 7 nm is likely to give AMD a large efficiency advantage, too.
Outside of mining (which is a form of compute), the money in compute GPUs is largely dominated by people buying GPUs to run code written or paid for by Nvidia, and AMD hasn't been competitive at writing their own code. But if you want to run algorithms where Nvidia can't hand you completed code to do what you need, that's not a factor. Which is to say, in the overwhelming majority of things that could run well on GPUs, Nvidia having already written a bunch of code is completely irrelevant. At the moment, most of the world just says, well then, we just won't use GPUs at all for that. CUDA has done a lot to poison the well and convince people that GPU programming is harder than necessary if you need anything non-trivial.
Gamers have mostly bought Nvidia for the last 16 months or so because the miners had a fairly strong preference for AMD and were buying up the AMD GPUs before gamers could get to them. If AMD and Nvidia each have about as good of GPUs with an MSRP of $200, but miners have made it so that you have to pay $400 to get the AMD GPU or $300 for the Nvidia, then of course you buy Nvidia. That doesn't mean that AMD isn't competitive; they were selling GPUs as fast as they could make them. It only means that a huge mining bubble distorted the market.
I did not see this article before just now, and I did not write the article I am linking to:
https://seekingalpha.com/amp/article/4205391-amd-drink-kool-aid