Some have noticed that DirectX 12 benchmarks look far more pro-AMD than DirectX 11. Some have speculated that DirectX 12 would itself be very favorable to AMD due to similarities to Mantle or some such. I'm very skeptical of that.
But what will be very favorable to AMD is if games do a lot of non-graphical compute. That's true whether the graphics API is DirectX 12, DirectX 11, Vulkan, or for that matter, OpenGL. Why is that? Let's look at the caches you rely heavily on for compute. Just for fun, let's compare a GeForce GTX 1080 and a Radeon RX 480 straight up, without any modifications for price, power consumption, or die size.
Register file capacity (total, for the entire card):
Radeon RX 480: 9 MB
GeForce GTX 1080: 5 MB
Local memory capacity (again, entire card):
Radeon RX 480: 2.25 MB
GeForce GTX 1080: 1.875 MB
Local memory bandwidth at max turbo (again, entire card):
Radeon RX 480: 5.31 TB/s
GeForce GTX 1080: 4.03 TB/s
(I use 1 TB = 2^40 bytes here.)
Think leaning heavily on any of those is going to make AMD look good? I don't list register bandwidth because it's enough to keep the shaders busy all of the time, so it's not a meaningful difference and would basically constitute repeating TFLOPS numbers.
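If anyone wants to check my arithmetic, here's a rough back-of-the-envelope sketch of where those totals come from. The per-unit figures (64 KB of LDS and 256 KB of registers per GCN compute unit, 96 KB of shared memory and 256 KB of registers per Pascal SM, 128 bytes per clock of local memory throughput per CU/SM) are my reading of the public architecture material, and I'm using the reference boost clocks, so treat this as an estimate rather than vendor-published totals:

```
// Back-of-the-envelope totals for the figures above.
#include <cstdio>

int main() {
    const double KB = 1024.0, MB = 1024.0 * KB, TB = 1024.0 * 1024.0 * MB;  // 1 TB = 2^40 bytes

    // Radeon RX 480: 36 compute units, ~1266 MHz boost clock
    const double cus = 36, amd_clk = 1.266e9;
    printf("RX 480 registers: %.2f MB\n",   cus * 256 * KB / MB);        // 4 SIMDs x 64 KB of VGPRs per CU
    printf("RX 480 LDS:       %.3f MB\n",   cus * 64 * KB / MB);         // 64 KB of LDS per CU
    printf("RX 480 LDS bw:    %.2f TB/s\n", cus * 128 * amd_clk / TB);   // 32 banks x 4 bytes per clock

    // GeForce GTX 1080: 20 SMs, ~1733 MHz boost clock
    const double sms = 20, nv_clk = 1.733e9;
    printf("GTX 1080 registers: %.2f MB\n",   sms * 256 * KB / MB);      // 64K 32-bit registers per SM
    printf("GTX 1080 shared:    %.3f MB\n",   sms * 96 * KB / MB);       // 96 KB of shared memory per SM
    printf("GTX 1080 shared bw: %.2f TB/s\n", sms * 128 * nv_clk / TB);  // 32 banks x 4 bytes per clock
    return 0;
}
```

Run that and the numbers above fall out, give or take rounding.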
And remember, we're comparing Nvidia's high end to AMD's mid range. And AMD still wins handily at some key compute resources. And it's been like that continuously since GCN arrived in 2012.
Adding all that cache capacity and bandwidth does have costs, and it means that if games have no use for the extra caches and bandwidth, you take a considerable efficiency hit. That's a large chunk of why GCN was less efficient than Maxwell in many games, and may be just about the entire reason why Polaris is often less efficient than Pascal.
But if you do need the extra compute resources? Then AMD's efficiency looks a lot better. And if you make a game that really leans hard on local memory for compute? A Radeon RX 480 beating a GTX 1080 outright wouldn't necessarily be an outlier among such compute-heavy games, though we haven't seen one yet.
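To make "leans hard on local memory" concrete, here's a minimal sketch of the kind of kernel I mean, written in CUDA syntax purely for convenience (the HLSL compute or OpenCL version of the same idea uses groupshared or __local storage the same way). Nothing here is from any particular game; it's just the generic stage-a-tile-then-hammer-it pattern, a wide 1D blur in this case:

```
// Sketch: stage a tile (plus halo) of an image row in on-chip local memory,
// then have every thread read 2*RADIUS+1 neighbors back out of that tile.
// At roughly one 4-byte local-memory read per multiply-add, a kernel like
// this is paced by LDS/shared-memory bandwidth, not by ALUs or DRAM.
#include <cstdio>

#define BLOCK  256
#define RADIUS 32

__global__ void blur1d(const float* __restrict__ in, float* out,
                       const float* __restrict__ weights, int width) {
    __shared__ float tile[BLOCK + 2 * RADIUS];           // the on-chip scratchpad
    int gx = blockIdx.x * BLOCK + threadIdx.x;

    // One DRAM read per thread (plus halo), clamped at the image edges.
    tile[RADIUS + threadIdx.x] = in[min(gx, width - 1)];
    if (threadIdx.x < RADIUS) {
        tile[threadIdx.x] = in[max(gx - RADIUS, 0)];
        tile[RADIUS + BLOCK + threadIdx.x] = in[min(gx + BLOCK, width - 1)];
    }
    __syncthreads();

    // 65 local-memory reads per output value.
    float acc = 0.f;
    for (int k = -RADIUS; k <= RADIUS; ++k)
        acc += weights[k + RADIUS] * tile[RADIUS + threadIdx.x + k];
    if (gx < width) out[gx] = acc;
}

int main() {
    const int width = 1 << 20;
    float *in, *out, *w;
    cudaMallocManaged(&in, width * sizeof(float));
    cudaMallocManaged(&out, width * sizeof(float));
    cudaMallocManaged(&w, (2 * RADIUS + 1) * sizeof(float));
    for (int i = 0; i < width; ++i) in[i] = 1.f;
    for (int i = 0; i < 2 * RADIUS + 1; ++i) w[i] = 1.f / (2 * RADIUS + 1);
    blur1d<<<width / BLOCK, BLOCK>>>(in, out, w, width);
    cudaDeviceSynchronize();
    printf("out[12345] = %f\n", out[12345]);              // ~1.0 for all-ones input
    return 0;
}
```

A lot of the compute passes games actually run (blurs, screen-space AO, tiled light culling, particle work) follow roughly this shape, which is why compute-heavy renderers lean so hard on that local memory bandwidth column.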
So will game developers go that route? If you're developing strictly for PC, probably not. But if the PS4 or Xbox One is your target, maybe you do, since both consoles have GCN GPUs inside. It takes time for developers to figure out what they can really do with new hardware that they couldn't do with the old hardware. This is why AAA PS3 games that launched in 2012 often looked considerably better than analogous ones from 2006. So this will still take a while to play out.
Now, Nvidia does know how to make a compute-heavy architecture. They've done it in the past. It was called Fermi. It was a disaster. Or maybe it was just so far ahead of its time that driver support will end before its time comes. I'm more serious about that than you think.
Comments
That's some shady stuff right there. AMD's launch was a huge fail, because not only can you get an R9 380 for less than the price (around £150-160), but it's a better card overall, giving equivalent results.
Don't get me wrong, I wouldn't mind a decent AMD card to use, but the driver support as of late is horrifying. I am sitting comfortably with a 980 Ti and it's everything I need right now: no issues, no driver problems yet. My next card will be either the 1080 or 1080 Ti, both of which are top dogs right now regardless of the number crunching.
But that's also irrelevant here. Even with flawless asynchronous compute support, Pascal is going to struggle with any compute tasks that need a lot of register capacity or local memory usage.
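To put that in concrete terms: on either vendor, the number of thread groups that can be resident on a CU/SM at once is capped by how much register file and local memory each group asks for, and fewer resident groups means less latency hiding. Here's a small, self-contained sketch of how that pressure shows up on the CUDA side; the 48 KB scratch buffer is just a made-up example of a local-memory-hungry workgroup, not something from a real game:

```
// Sketch: a kernel's register/shared-memory appetite throttles how many
// thread blocks can be resident per SM at once.
#include <cstdio>

__global__ void hungryKernel(float* data) {
    __shared__ float scratch[48 * 1024 / sizeof(float)];  // 48 KB of shared memory per block
    scratch[threadIdx.x] = data[threadIdx.x];
    __syncthreads();
    data[threadIdx.x] = scratch[(threadIdx.x + 1) % blockDim.x];
}

int main() {
    int blocksPerSM = 0;
    // How many 256-thread blocks of this kernel fit on one SM at once?
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, hungryKernel, 256, 0);
    printf("Resident blocks per SM: %d\n", blocksPerSM);   // likely 2 on a 96 KB-per-SM Pascal part
    return 0;
}
```

Swap in a bigger scratch buffer or a higher per-thread register count and watch that number drop; that's the mechanism behind struggling with heavy register or local memory usage.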
That's the trouble with looking at individual benchmarks. Technically it sounds great, and you have some awesome numbers to work with, but then when you switch to 'real world' benchmarks, i.e., actually using the things, you get completely different results.
Driver support? I didn't hear about any issues with the driver, except the power draw and its fix, which apparently works as advertised. And counting one problem twice isn't exactly the nice thing to do.
Further, it's not like the new Nvidia cards would be any better. Lots of issues with VR devices, and the card does not render shaders correctly. That is either a problem with the card itself (which would be even worse) or the driver.
How can a card fail at the one thing it's supposed to do, the actual graphics, and nobody cares? That's like a car that randomly accelerates or brakes, but at least it uses only 2 l/100 km.
I also don't get how having software-locked additional VRAM on the cheaper version is a problem or shady. It's almost the exact opposite of what Nvidia did with its previous generation. If you don't want it, don't activate it, and if you do, you get more than you paid for without any disadvantage whatsoever.
What is said to be fact or true is usually nothing of the sort once you get said product home in YOUR PC.
I prefer to see consumers be much more diligent: wait for bargain prices, and NEVER get caught up in the NOW and overspend, wasting your money.
Never forget Three Mile Island, and never trust a government official or company spokesman.
If you google the issues I have stated, you would see that these are facts and not something I made up just to annoy AMD fans.
Basically, it doesn't matter how big the numbers are on the 480; it still gets beaten by far by the 1080. It doesn't need huge compute to be the better card, it does well with the tech it's got. You can make all the excuses you want or come up with something that will no doubt try to discredit the 1080, but the truth of the matter is that it beats the 480 hands down.
GP100 is a far more compute-heavy chip than the rest of the Pascal line, and will be competitive with AMD there. But it's also huge and expensive--and you can safely bet that it will be a whole lot more expensive than a GeForce GTX 1080.
The HBM2 on GP100 is, indeed, a newer generation of HBM than the first generation that AMD used on Fiji. But AMD Vega will use HBM2, too. That's not an advantage for Nvidia over AMD.
JEDEC standards are free to use for everyone, regardless of who originally developed them. Hynix and Samsung have both started production of HBM2. Micron hasn't yet, or at least hasn't announced it yet, but probably will soon. And seriously, who cares if your video card has memory from Hynix as opposed to Samsung?
Pascal does use TSMC's 16 nm finfet process node. Polaris uses Global Foundries' 14 nm finfet process node. It's not clear whether that's an advantage or disadvantage for Nvidia, but it's likely a net wash.
Finfets aren't what enables GDDR5X. What enables GDDR5X is the existence of GDDR5X. Which at the moment is only in token quantities, but there will be a ton of it coming soon. Still, HBM2 is a clearly superior technology to GDDR5X if you ignore cost of production, so it's not clear how long GDDR5X will be around. It could hang around for a long time at the lower end if it's substantially cheaper, kind of like how low end cards commonly use DDR3 today.
GDDR5X is not compressed memory. While both Nvidia and AMD have moved toward better texture compression with recent architectures, that has nothing to do with GDDR5 or GDDR5X. They can do the same with HBM2, or for that matter, DDR3.
But you've completely missed the point of this thread. AMD has fared better relative to Nvidia in DirectX 12 benchmarks than in DirectX 11 ones. I'm arguing that it's likely not the API itself that is responsible; rather, a movement toward heavier use of compute will make AMD look much better.
Importantly, this is a comparative advantage, not an absolute advantage. If a GTX 1080 is 80% faster than a Radeon RX 480 in an "average" game, then one where it's 120% faster is relatively more favorable to Nvidia, and one where it's only 40% faster is relatively more favorable to AMD. Games that go heavier on compute are going to tend to be relatively more favorable to AMD.
And the rest of your post(s) is just as nonsensical.
ROTTR (Rise of the Tomb Raider), new patch which implemented async compute, 980 Ti comparison, DX11/DX12, async on/off.
Going from async on to async off, well, now you're just into the semantics of how you are using DX12, and most people aren't going to care that much (unless it has a dramatic effect, like it does for AMD).
What DirectX 12 and Vulkan might be changing is making it easier for games to use more of that brute force hardware AMD has available.
It's priceless.
And that's not sarcastic either; I genuinely find this hilarious and accurate.
I think that came about with Vista, where they started requiring some level of GPU support for Aero, and a lot of the cheaper computers just didn't have it. So for Windows 7, they relaxed it a bit so that some lower end machines could be "Compliant".
Now, AMD and Nvidia generally don't do that because it would open them up to all sorts of shenanigans. If you claim to support some feature but you're really running it on the CPU, maybe you lose half of your performance while your competitor is fine. Then your competitor pushes that feature hard in all sponsored titles as a way to claim their video cards are faster than yours.
See some old PhysX numbers where Nvidia-sponsored games would run PhysX on the GPU for Nvidia and on the CPU for AMD for an example. Of course, then, even-handed sites threw that out as irrelevant, as both were running code written by Nvidia (or possibly Ageia, depending on how old it was). But if you write a software renderer for your own hardware and claim feature support, comparing that to your competitor's hardware support is legitimate, and you're going to get slaughtered.
Ugh, PhysX was the dumbest thing in the world. I remember when they (whoever created PhysX originally) were trying to sell people separate $250 physics cards that I think two games at the time could use; it made a ~2% fps difference and didn't improve the gameplay experience in any appreciable way.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
Bounce around on a trampoline
Octopi with realistic tentacles
Open seas ship movements
Bust big balloons
Stop moving objects with realistic inertia.
I really can't think of anything better for Ageia to have invented the physics accelerator for, honestly.
The rest is just... ignore it.