Microsoft, Nvidia, and AMD have now all announced new ray-tracing initiatives within a day of each other. This is not a coincidence.
https://techreport.com/news/33392/directx-12-dxr-and-nvidia-rtx-bring-real-time-ray-tracing-to-3d-engines
https://techreport.com/news/33399/amd-casts-a-light-on-its-radeon-rays-real-time-ray-tracing-tools
There are a lot of ways to do 3D graphics, and in the early days it wasn't immediately obvious which approach would win out. Rasterization basically won because it can run fast, and all but the earliest 3D GPUs are heavily optimized to do rasterization efficiently. But there are other approaches, and ray-tracing is one of them.
The idea of rasterization is that all of the models in a scene get broken up into a bunch of triangles. For each triangle, you figure out which pixels on the screen it covers. For each of those pixels, you figure out whether anything else previously drawn is in front of it, and if not, you color the pixel with an appropriate color from the model. If you later discover a new triangle in front of a previously colored pixel, you can overwrite the previous color with the new one. Repeat for all the rest of the triangles in all the rest of the models and you're done with the frame.
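To make that loop concrete, here's a minimal C++ sketch of a rasterizer with a depth buffer. The types and the flat per-triangle depth and color are my own simplifications for illustration, not anything from a real engine; a real rasterizer interpolates depth and color across the triangle and only visits pixels inside its bounding box.

```cpp
#include <limits>
#include <vector>

// Simplified types for the sketch: flat-colored, screen-space triangles
// with a single depth value each.
struct Pixel    { float r = 0, g = 0, b = 0; };
struct Vec2     { float x, y; };
struct Triangle { Vec2 a, b, c; float depth; Pixel color; };

// Signed-area edge test: which side of edge ab the point p lies on.
static float edge(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A pixel is covered if it lies on the same side of all three edges.
static bool covers(const Triangle& t, float px, float py) {
    Vec2 p{px, py};
    float w0 = edge(t.a, t.b, p), w1 = edge(t.b, t.c, p), w2 = edge(t.c, t.a, p);
    return (w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0);
}

void rasterize(const std::vector<Triangle>& triangles,
               int width, int height, std::vector<Pixel>& framebuffer)
{
    // Depth buffer: how close the nearest triangle drawn so far is at each pixel.
    std::vector<float> depth(width * height, std::numeric_limits<float>::infinity());

    for (const Triangle& tri : triangles) {                     // every triangle of every model
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                if (!covers(tri, x + 0.5f, y + 0.5f)) continue; // triangle doesn't cover this pixel
                if (tri.depth < depth[y * width + x]) {         // nothing drawn so far is in front
                    depth[y * width + x]       = tri.depth;     // remember the new nearest depth
                    framebuffer[y * width + x] = tri.color;     // overwrite the old color
                }
            }
        }
    }
}
```

The depth buffer is what lets triangles arrive in any order: a later triangle only overwrites a pixel if it is nearer than whatever was drawn there before.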
The downside of rasterization is that you can't do lighting effects. Or rather, you can do lighting effects wrongly, but you can't make them accurate except in the simplest of cases. If you look closely at shadows in a game cast upon anything other than a large, flat object, they're guaranteed to be wrong. They darken parts of the object the shadow falls on, and it looks like it might be a shadow if you don't look too closely, but it won't be the correct parts that get darkened. What you can do with reflections and transparency is likewise very restricted.
The idea of ray-tracing is that, for each pixel on the screen, you cast a ray from the camera and see what it hits. Or maybe several rays per pixel, averaging the colors, if you want anti-aliasing. When you do it this way, you can have a ray hit a reflective object, then cast a new ray from there to see what else it hits. That lets you draw very accurate reflections. You can cast rays from whatever a ray hits toward light sources to see if they make it there without running into anything else, and get highly accurate shadows that way. You can do transparency by blending the color of a semi-transparent object with whatever the ray hits after passing through it.
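Structurally, the per-pixel work looks something like the recursive sketch below. The types and the intersectScene/lightVisible stubs are placeholders I made up; a real tracer walks an acceleration structure (a BVH or similar) to find the nearest hit. The point is just how shadows, reflections, and transparency all fall out of casting more rays.

```cpp
// Simplified types for the sketch.
struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

struct Hit {
    bool  hit = false;          // did the ray strike anything?
    Vec3  point{}, normal{};    // where, and the surface orientation there
    Color surface{};            // base color of the surface
    float reflectivity = 0;     // 0 = matte, 1 = perfect mirror
    float transparency = 0;     // 0 = opaque, 1 = fully clear
};

// Placeholder scene query: a real tracer searches an acceleration structure
// for the nearest intersection along the ray.
Hit intersectScene(Vec3 /*origin*/, Vec3 /*dir*/) { return {}; }

// Placeholder: is there an unobstructed path from p to the light source?
bool lightVisible(Vec3 /*p*/) { return true; }

Color background() { return {0.2f, 0.3f, 0.5f}; }

// Linear blend of two colors.
Color mix(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t };
}

// Mirror a direction about a surface normal.
Vec3 reflect(Vec3 d, Vec3 n) {
    float k = 2.0f * (d.x * n.x + d.y * n.y + d.z * n.z);
    return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
}

// Called once per pixel (or several times for anti-aliasing) with a ray
// from the camera through that pixel.
Color trace(Vec3 origin, Vec3 dir, int depth)
{
    if (depth <= 0) return background();      // stop bouncing eventually

    Hit h = intersectScene(origin, dir);
    if (!h.hit) return background();          // ray escaped the scene

    Color c = h.surface;

    // Shadows: cast a ray toward the light; darken if something blocks it.
    if (!lightVisible(h.point)) c = mix(c, {0, 0, 0}, 0.7f);

    // Reflections: bounce the ray off the surface and keep tracing.
    if (h.reflectivity > 0)
        c = mix(c, trace(h.point, reflect(dir, h.normal), depth - 1), h.reflectivity);

    // Transparency: continue straight through and blend with what's behind.
    if (h.transparency > 0)
        c = mix(c, trace(h.point, dir, depth - 1), h.transparency);

    return c;
}
```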
The downside of ray-tracing is that it's slow. Really, really slow. Especially on GPUs. GPUs are built to do things in a very heavily SIMD manner, where many different threads (32 on Nvidia, 64 on AMD) do the same thing at the same time, each on their own data. Nvidia calls such a group of threads a "warp", while AMD calls it a "wavefront". That data could be a vertex of a model, a pixel from a triangle, or various other things. But running threads in lockstep like this tremendously simplifies the scheduling.
GPU memory controllers also rely very heavily on memory accesses being coalesced in order to work well. Whenever you touch GDDR5 or HBM2 memory (and probably GDDR5X, though I'm not 100% sure about that), you have to access a 128-byte chunk. Ideally, you have 32 threads in a warp each grab 4 bytes from the same 128-byte chunk, so that the memory controllers can do all the reads at once just by pulling the 128 bytes in and distributing each thread's requested portion to it. Or maybe 8 of the 32 threads in a warp each grab 16 bytes out of a 128-byte chunk, or several different threads grab the same memory, or whatever. But you want a whole lot of cases of different threads grabbing data from the same 128-byte cache line at the same time.
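Here's a toy C++ model of that constraint (my own simplification, not a description of any real memory controller): take the 32 addresses a warp wants to load and count how many distinct 128-byte segments they touch, since each distinct segment costs roughly one memory transaction.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <set>

// Count how many distinct 128-byte segments a warp's 32 loads touch.
// 1 is the ideal (fully coalesced); 32 is the worst case (fully scattered).
int segmentsTouched(const std::uint64_t (&addr)[32], std::size_t bytesPerThread = 4)
{
    std::set<std::uint64_t> segments;
    for (std::uint64_t a : addr) {
        segments.insert(a / 128);                         // segment holding the first byte
        segments.insert((a + bytesPerThread - 1) / 128);  // and the last byte, if it spills over
    }
    return static_cast<int>(segments.size());
}

int main()
{
    std::uint64_t coalesced[32], scattered[32];
    for (int lane = 0; lane < 32; ++lane) {
        coalesced[lane] = 0x1000 + 4 * lane;      // 32 threads x 4 bytes from one 128-byte chunk
        scattered[lane] = 0x1000 + 4096 * lane;   // every thread in a different chunk
    }
    std::printf("coalesced: %d transaction(s)\n", segmentsTouched(coalesced)); // prints 1
    std::printf("scattered: %d transaction(s)\n", segmentsTouched(scattered)); // prints 32
    return 0;
}
```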
Ray-tracing completely breaks this. After reflection or transparency or hitting the edge of an object, adjacent pixels might have their rays go off in wildly different directions. Many optimizations that GPUs have been able to do to make rasterization efficient simply don't work for ray-tracing. That will make ray-tracing massively slower than rasterization for a given complexity of scene.
From the perspective of the GPU vendors, that's the attraction of it. They need to give you some reason why your current video card isn't good enough and you need to buy a new one. They've been pushing higher resolutions, higher frame rates, and VR, but that only goes so far before it gets kind of ridiculous.
Just don't expect ray-tracing to come to MMORPGs anytime soon. If you thought standing around in town with 50 other players nearby was bad with rasterization, just wait until you see the number of seconds per frame that you'd get from a comparably messy scene with proper ray-tracing and all the lighting effects enabled that are the point of using ray-tracing in the first place.
Comments
https://www.pcgamer.com/nvidia-talks-ray-tracing-and-volta-hardware/
If it turns out to be true, then NVidia might be about to launch a line of very expensive ray-tracing-enabled GPUs, and a cheaper product line of GPUs not meant for ray-tracing.
This post is only wild guesswork, but with all the money NVidia is investing in their AI/deep learning hardware, I bet they'd love any excuse to sell it to high-end gamers as well.
Fair enough, but do you think our top-end graphics games need more photorealism? I'd be on the fence there: on the one hand, it's better-quality graphics; on the other, maybe not being able to make graphics any better would turn game designers' minds to gameplay?
And we're definitely sucking up all those new GPUs and games with better graphics.
Ray tracing on GPU has been around for a while, but with a Microsoft API it's obviously getting a big push.
GPUs are pretty heavily optimized for floating-point FMA operations, where fma(a, b, c) = a * b + c, as a single instruction, with all of the variables being floats. The tensor cores can basically do that same operation, except that a, b, and c are half-precision 4x4 matrices, and with 1/8 of the throughput of doing the same thing with floats. That's a huge win if you need massive amounts of it, as doing the matrix multiply-add naively would be 64 instructions. Being able to do that at 1/8 of the throughput of one instruction is an 8x speed improvement.
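To see where those numbers come from, here's the arithmetic written out in plain C++ (using float rather than half, since standard C++ has no half-precision type; the structure is what matters):

```cpp
#include <cmath>

// One scalar fused multiply-add: what a GPU issues as a single instruction.
float scalar_fma(float a, float b, float c)
{
    return std::fma(a, b, c);   // a * b + c, rounded once
}

// A naive 4x4 matrix multiply-add, D = A * B + C, built from scalar FMAs.
// Each of the 16 output elements needs 4 FMAs, so 4 * 16 = 64 in total;
// that's where the "64 instructions" figure comes from.
void matrix_fma_4x4(const float A[4][4], const float B[4][4],
                    const float C[4][4], float D[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];
            for (int k = 0; k < 4; ++k)
                acc = std::fma(A[i][k], B[k][j], acc);   // 64 of these per matrix op
            D[i][j] = acc;
        }
}
```

If a tensor core performs all of matrix_fma_4x4 as one operation at 1/8 the rate of a single scalar FMA, then 64 scalar FMAs' worth of work finishes in the time of 8 of them, which is the 8x speedup above.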
The problem is that basically nothing fits that setup. Pretty much nothing in graphics does. Pretty much nothing in non-graphical compute does. Nvidia thinks that machine learning will, which strikes me as plausible, though I haven't had a look at the fine details. But I'd think of the tensor cores as special-purpose silicon to do one dedicated thing (i.e., machine learning), kind of like the video decode block or tessellation units in a GPU, or the AES-NI instructions in a CPU.
What's far more likely is that Nvidia is getting considerable mileage out of their beefed-up L2 cache in Maxwell and later GPUs. So long as the scene is simple enough that most memory accesses can go to L2 cache rather than off-chip DRAM, the memory bandwidth problems wouldn't be as bad.
That's very much opinion. Where should they stop? I mean, I'm sure there were people who thought things were fine years ago. Suddenly new breakthroughs and we get absolutely beautiful images/vistas/characters.
While I'm very much capable of enjoying a game that has dated graphics or is only "so good" I'm all for them pushing the bounds of technology to bring me breathtaking worlds.
I say "bring it".
There used to be a thriving, competitive market for sound cards. Then they got good enough, and then integrated sound chips got plenty good enough for most people. Now hardly anyone buys a discrete sound card anymore. The GPU vendors really, really want for that to not happen to GPUs.
Not saying that ray tracing is bad, just saying that having MS's backing isn't necessarily a guarantee of success. It's a large company and they aren't afraid to shotgun a lot of different approaches, knowing that not all of them are going to succeed.
For some people, the best is never enough. But for most people, good enough is good enough. I think we are almost there with 1080p/60Hz - nearly every discrete card from the last couple of generations can run nearly every title at that target with moderate visuals, and IGP/APU is almost there (maybe it is there for the Vega APUs). For most people, yeah, they can tell the difference between the Medium and Ultra notches on visuals, but it's probably not worth the extra $500-700 it takes to get there.
I don't know what level it takes before we just say good enough - other than we are getting closer and closer all the time.
(Side Note: Maybe we should stop worrying about pixel count and start worrying about PPI instead)
https://www.davincicoders.com/codingblog/2017/2/28/exponential-growth-of-computing-power
I'm sure with enough interest, the API/hardware developers will come up with something for large animated scenes that will work well with their cards.
Any card with enough hardware for proper ray tracing today would either:
a) Be so costly that no one buys it, or
b) Leave the game looking like crap because there's no hardware left for other graphic effects.
There's no conspiracy and the devs aren't being lazy. They just haven't managed to fit enough hardware into a cheap enough package, so they've had to compromise and leave some of the most hardware-demanding effects out.
Here's one of those engine demos with ray tracing you were talking about. If you take another look, it does use ray tracing, but it looks bad compared to other games from that age.