TL;DR Quotes:
"Saying that AMD is "deep in development" of their own ray tracing technology and that this development "is concurrent between hardware and software."
“I think ray tracing is an important technology, and it’s something we’re working on as well, both from a hardware and software standpoint.”
Tom’s Hardware: How do you think ray tracing, in particular, fits into the future of GPUs, and are you working to address that type of rendering with specific optimizations for Radeon VII?
Lisa Su: I think that ray tracing is an important capability and we view that as important. We are continuing to work on ray tracing both on the hardware side and the software side, and you’ll hear more about our plans as we go through the year.
---
Recent rumours suggested that AMD's 7nm Navi GPU was supposed to have been discussed at CES, yet we heard next to nothing. The sources behind those same rumours came back saying that the Radeon VII was a stopgap rushed out because Navi had run into serious hardware problems, which in turn explains the total silence on Navi at CES.
Is there a connection between Navi's absence at CES and today's news that AMD is working on ray tracing? Unclear, but with these interviews in the tech press, AMD is effectively confirming that Nvidia was right to pursue ray tracing, simply because AMD is following suit. We don't yet know what dedicated hardware, if any, AMD will use, but it will be interesting to see. There are even hints at possible ray tracing support in next-gen consoles and an E3 (June) announcement.
Another way to read it is "we are late to ray tracing and won't be ready until lots of RT-supported games are on the market", which is mighty convenient, IMO. Nevertheless, ray tracing in gaming is in a sad state right now anyway.
Sources -
https://www.overclock3d.net/news/gpu_displays/amd_has_ray_tracing_gpus_in_development/1
https://www.tomshardware.com/reviews/amd-7nm-lisa-su-interview,5961-2.html
https://www.pcworld.com/article/3332205/amd/amd-ceo-lisa-su-interview-ryzen-raytracing-radeon.html
Comments
It would probably be nearly worthless right now, but I bet a Vega would make a showing.
In a couple of years and another architecture jump or two, it's entirely feasible that AMD could support this with little to no dedicated hardware required.
Ray tracing is in DirectX now, and that surely came after Microsoft had extensive consultation with Nvidia, AMD, and even Intel. Hopefully Navi will have a bit of dedicated silicon for ray tracing in whatever way makes sense. Getting it right is more important than getting it first, as the ecosystem isn't ready for heavy ray tracing yet.
Ray tracing won't really be that important until there are games that have to have ray tracing on to work properly. That's surely several years away. But it's not going to happen the day after AMD launches their first ray tracing card. The sooner that both AMD and Nvidia have full lineups out that can handle it--not just a few high-end cards that are way out of the budget of most people--the sooner that clock can start ticking.
As for rumors of hardware problems, there are always hardware problems, especially for a new architecture, a new process node, or both at once. If it means an extra metal layer respin, that merits a shrug. It's if it requires a base layer respin (Fermi may be the only GPU example of this ever) or the architecture just doesn't work how they hoped (see Kepler, Radeon HD 2000 series, Bulldozer, NetBurst) that it's trouble.
As for AMD's GCN architecture and its derivatives being heavy on compute, they're built to be awesome at compute tasks that fit nicely onto GPUs. That's why they were so good at mining, for example. For things that don't fit easily on GPUs, the architecture is considerably less forgiving than Nvidia's Maxwell/Pascal. One way to put it is that a GeForce GTX 1080 will tend to beat a Radeon RX Vega 64 at compute tasks in which both get crushed by a desktop Core i7 or Ryzen 7 CPU--which is to say, tasks where using a GPU is ridiculous.
The reason that ray-tracing is so expensive is not that the computations are intrinsically expensive. Rather, it's that they completely break a bunch of the optimizations that GPUs have to do rasterization efficiently--optimizations that Vega relies on to be good at compute--and in ways that don't suggest alternative optimizations to make ray-tracing efficient.
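To make that concrete, here is a toy Python sketch (my own simplification, not real traversal code): it models a "warp" of 32 rays walking a perfect binary tree in lockstep and counts how many distinct nodes the group has to fetch. Coherent primary rays mostly agree on their path; incoherent secondary rays scatter across the tree, which is exactly the cache- and SIMD-unfriendly behaviour that rasterization-oriented optimizations assume away. The tree depth, warp size, and left/right decision model are all assumptions made for illustration.

```python
import random

# Toy model (illustration only): a perfect binary "BVH" of fixed depth. Each
# ray's traversal is reduced to one left/right decision per level -- a gross
# simplification of real BVH traversal, but enough to show the divergence issue.
DEPTH = 8   # levels below the root
WARP = 32   # rays a GPU SIMD unit processes in lockstep

def traverse(decisions):
    """Return the set of node ids (1-indexed heap layout) one ray visits."""
    node, visited = 1, set()
    for go_right in decisions:
        visited.add(node)
        node = node * 2 + (1 if go_right else 0)
    visited.add(node)
    return visited

def warp_footprint(rays):
    """Distinct nodes the whole warp must fetch while executing in lockstep."""
    footprint = set()
    for ray in rays:
        footprint |= traverse(ray)
    return len(footprint)

# Coherent primary rays: neighbouring pixels tend to make the same decisions.
coherent = [[False] * DEPTH for _ in range(WARP)]
# Incoherent secondary rays (e.g. after a diffuse bounce): decisions look random.
incoherent = [[random.random() < 0.5 for _ in range(DEPTH)] for _ in range(WARP)]

print("coherent warp touches  ", warp_footprint(coherent), "nodes")    # DEPTH + 1
print("incoherent warp touches", warp_footprint(incoherent), "nodes")  # many more
```

In this toy model the coherent warp touches only nine nodes while the incoherent one touches far more, which is the same pattern that turns fast, regular memory traffic into scattered fetches once rays start bouncing.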
Arcturus (post-Navi) was actually planned for a late-2019 release in roadmap slides from mid-2017 (back then it was just called "Next Gen"). It's Navi that's awfully late. Even back then, Navi was supposed to have an early-2018 release, i.e. it was initially planned as a 10nm chip at best, 14nm at worst.
I hope so too. AMD FireRays / Radeon Rays 2.0, which targets developers exploring ray-tracing capabilities on AMD GPUs, CPUs, and APUs via asynchronous compute, is a bit different from Nvidia's RTX ray-tracing technology, which as we know sits on the DirectX Raytracing API. It will be interesting to see whether Radeon Rays lives on or whether AMD takes a different approach, considering RR is open source (built on the OpenCL 1.2 standard), which gives it leverage to be deployed on non-AMD hardware and across multiple OS environments (consoles?). The thing is, I doubt Navi was ever planned to have RT silicon, and I suspect a course change happened, possibly resulting in the delay we see today. Their competitor has shown its hand and its capability, so it's time to call.
I think 2019 will be a big year and a big step forward for ray tracing in games. I fully agree with you; it sounds like AMD's tactic is to wait until the "ecosystem" is better.
If it's in DirectX now, then AMD surely knew that Nvidia was working on ray tracing years ago. Microsoft is not going to ambush their partners with abrupt changes to the API. Nor will they add something to the core API just because one vendor requests it. That sort of thing is heavily discussed far ahead of time, as the precise details of the API need to be agreeable to everyone who wants to implement it. Vendor-specific extensions are a different matter, but if real-time ray-tracing were only available via vendor-specific extensions, it would be dead on arrival.
Really? At the current price point of the RTX 2060, I do not see it flying off the shelves. Again, the sweet spot for graphics cards is at most $250, and more like $200. I don't see Nvidia discounting it either. And it is obvious that the card is not really going to do ray tracing, considering the performance hit. Most people don't play at 4K, and a $200 card will do great at 1080p or 1440p. So I don't see much volume for Nvidia's RTX 20-series lineup.
The sales volume won't be anywhere near what a $200 GPU would have, but it will still be good enough.
RTX 2080 die size: 545 mm² (12nm)
GTX 1080 Ti die size: 471 mm² (16nm)
Gotta admit I don't know that you can just directly proportion it and get an accurate comparison... but I don't know that you have to. With a die shrink in there and it still being the larger die, you can tell the tensor cores are taking up *a lot* of space. I couldn't tell you exactly how much, but I suspect it's a lot more than 20%...
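For a rough sense of scale from the numbers above, here is the raw arithmetic (bearing in mind that TSMC's "12nm" is essentially a refined 16nm with little density gain, and that the RTX 2080 and GTX 1080 Ti are different chip tiers, so this is only a ballpark):

```python
# Die sizes quoted above (mm^2); the ratio is only a rough comparison, since
# the RTX 2080 (TU104) and GTX 1080 Ti (GP102) are different tiers of chip.
rtx_2080 = 545.0
gtx_1080ti = 471.0
print(f"RTX 2080 die is {(rtx_2080 / gtx_1080ti - 1) * 100:.0f}% larger")  # ~16%
```

So the Turing die is roughly 16% larger on a node of nearly the same density, and that extra area has to cover RT cores, tensor cores, and the other architectural changes, not tensor cores alone.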
With Nvidia at 70% market share and AMD losing 4% last quarter, it makes sense that they care about their shareholders and their revenue. Having a competitor soak up the disposable income of your customers (and potential customers) while your own competing product is missing (and late) isn't good at all.
Tensor cores are basically a machine learning ASIC portion of the chip, in the sense that the video decode block that all modern GPUs have is a video decode ASIC. Building them or something like them makes a ton of sense if you're building a machine learning ASIC. But it's a completely stupid thing to put into a consumer GPU because it wastes a ton of die space for basically no benefit.
Also, part of Turing's die space increase is probably because they doubled the amount of register space per shader. While that only brings Turing up to parity with what GCN has been offering since 2012, the die space must be significant or else Nvidia would have offered it several years ago.
Remember that Nvidia had nothing for the over $100 market (and not a very good lineup for the $100 and under market, either) for about six months in late 2009 and early 2010. That's a much, much worse position than AMD is in now, but it hardly meant the end of Nvidia. AMD is still competitive in the under $300 market and has the Vega cards (including the upcoming Radeon VII) that aren't a completely terrible option above that.
For that matter, let's not forget that we're talking about a company that withdrew from the x86 server market entirely for a few years. That's a much bigger market than the entire GPU market, let alone the $350+ portion of it. Or that Nvidia withdrew from the cell phone market entirely, which is also an enormous market.