So, this post is mainly for Quiz. I know you've been somewhat of a proponent of AMD's GCN architecture. Where it seemed to have its failings is that it was so massively parallel that a lot of games weren't able to properly utilize all of the cores, and thus it suffered in comparison with Nvidia's architecture.
However, with the leaked specs for Pascal, it seems that Nvidia is taking a page from AMD: the new x80 card has 4096 shader cores, the x80 Ti has 5120, and the x80 Titan has 6144.
That's identical to what a Fury X has in the case of the x80, and upwards of 50% more in the case of the x80 Titan (6144 is exactly 50% more than the Fury X's 4096). I'm curious what the general consensus is on this: will it help push graphics engine developers to properly utilize the parallelism, now that both Nvidia and AMD are on the same page?
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
Comments
Where high-end GPUs shine is in unusual setups like Super Resolution or 4K. The more pixels on the screen, the more GPU cores you need. They also have uses in GPGPU computing, which was big in Bitcoin mining a couple of years ago. My guess is Nvidia knows they need 60 fps at 4K as a minimum with this next generation.
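To put rough numbers on that (just back-of-the-envelope math on my part, not from any spec sheet):

```cuda
// Host-only back-of-the-envelope math (no GPU required): how much more
// per-frame work 4K asks for compared to 1080p.
#include <cstdio>

int main()
{
    long long p1080 = 1920LL * 1080;   // 2,073,600 pixels
    long long p4k   = 3840LL * 2160;   // 8,294,400 pixels
    printf("1080p: %lld pixels, 4K: %lld pixels (%.1fx the work)\n",
           p1080, p4k, (double)p4k / (double)p1080);
    // At 60 fps that's roughly 500 million pixels shaded per second at 4K,
    // before overdraw, extra passes, or supersampling are counted.
    return 0;
}
```

So 4K is about four times the per-frame pixel work of 1080p, which is where all those extra cores would earn their keep.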
Game engine developers usually code for the broadest possible platform. So, for example, they wouldn't ship an engine that ran great on AMD hardware because of the number of cores but was absolutely crap on Nvidia; instead they'd aim for something that ran "OK on both," or "good" on Nvidia and "close but not quite as good" on GCN. My hope is that now that both vendors have a ton of cores, engine developers might choose to build engines better suited to the architectures, since they're more alike and a "lowest common denominator" approach is no longer needed.
Edit: I forgot to clarify. The days of every individual game developer making their own engine seem to be largely in the past. It appears that most game companies are going to use things like Unreal Engine, CryEngine, etc. The companies making and focusing on the engines have a vested interest in making sure those engines are efficient and perform well on all platforms. As I was saying above, my hope is that now that Nvidia and AMD are more aligned in architecture, engine developers will be able to produce more efficient or more focused designs.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
You do realize that's a click-bait site, and they post articles that may or may not have a shred of truth to them.
Unfortunately for Nvidia, not having a DX12 GPU in 2016 and 2017 will hurt their sales. A lot.
As far as that "leak" goes, it's FAKE; someone just played "continue the progression" with that chart. Even the most rabid Nvidia fanbois proclaimed it as fake, that's how bad it is.
Parallelism has nothing to do with core count. Nvidia can't do parallelism (well, only in a very limited way); they already do all they can. Getting to DX12 level requires an architectural change, so they're stuck at DX11 until Volta in 2018, because from everything we've seen Pascal is just a shrunken Maxwell.
In my opinion AMD is climbing fast. We know AMD is doing well with DX12/Vulkan while Nvidia has nothing to show. I agree with a poster in another thread that the days of studios cooking up their own game engines are likely gone now that all the major engines have lowered the barrier to entry, so Nvidia is likely to lose more ground, as it's in those engine developers' best interest to make good use of all the tech and not cripple anything.
But that's precisely my point. Nvidia did have the larger portion of the market, so it's a safe assumption that game engine devs would code so things ran best on Nvidia first and AMD second.
Now that the architectures are much more similar (at least in regard to core counts), it should be a good thing for both Nvidia AND AMD. Personally, I believe massive parallelism is how we'll make significant gains in graphics, but the software has to be able to utilize it.
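To make "utilize it" a bit more concrete, here's a minimal compute sketch of my own (CUDA just for illustration, and the kernel body is a stand-in for real shading work, not anything from an actual engine): the frame is expressed as one independent thread per pixel, so a chip with more cores simply chews through more of them at once, while a chip that can't be fed enough independent work sits partly idle.

```cuda
// Illustration only: one thread per pixel for a full-screen pass.
// At 4K that's ~8.3 million independent work items per frame, which is
// exactly the kind of workload that keeps thousands of shader cores busy.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void tonemap(float *pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    pixels[i] = pixels[i] / (1.0f + pixels[i]);  // stand-in per-pixel work (simple Reinhard tonemap)
}

int main()
{
    const int width = 3840, height = 2160;       // 4K frame
    float *pixels;
    cudaMalloc((void **)&pixels, sizeof(float) * width * height);

    dim3 block(16, 16);                          // 256 threads per block
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y); // ~32,000 blocks for the GPU to schedule
    tonemap<<<grid, block>>>(pixels, width, height);
    cudaDeviceSynchronize();

    printf("Launched %u x %u blocks of %u threads\n", grid.x, grid.y, block.x * block.y);
    cudaFree(pixels);
    return 0;
}
```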
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
It's not just about the past; it's also about the future. Don't you think it would be quite a black eye for a game developer if their game that releases today can't run well on Pascal or Polaris?
And then there are shaders, texture units, render outputs, raster engines, tessellation units, local memory, constant memory cache, texture cache, L1 cache, L2 cache, and global memory, all doing stuff at the same time. And that's even though the L1 cache is the same physical cache as local memory on some Nvidia architectures and the same as the texture cache on others.
The graphics APIs actually do a really good job of covering this up so that game developers don't have to worry about the fine details. So long as you're doing stuff that is reasonably common for graphics, the hardware will usually handle it intelligently. It's when you move away from graphics or want to do something really unorthodox that you need to be aware of all of the ways that things can go horribly wrong.
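If anyone is curious what one of those memory spaces looks like when you do have to care, here's a small sketch of my own in CUDA terms (illustration only, not from any real engine): a block-wide sum that stages values in on-chip shared memory (the "local memory" mentioned above) so most of the traffic never touches global memory. Get the staging or the synchronization wrong and you silently get garbage, which is exactly the sort of detail the graphics APIs normally keep you from having to think about.

```cuda
// Illustration only: a block-wide reduction staged through on-chip shared
// memory ("local memory" above) so only one read and one write per block
// hit global memory.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void blockSum(const float *in, float *out, int n)
{
    __shared__ float tile[256];               // on-chip, shared by the whole block
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    tile[tid] = (i < n) ? in[i] : 0.0f;       // one read from global memory
    __syncthreads();                          // every value must be staged first

    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            tile[tid] += tile[tid + stride];  // all of this traffic stays on-chip
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = tile[0];            // one write back to global memory
}

int main()
{
    const int n = 1 << 20;
    const int threads = 256, blocks = (n + threads - 1) / threads;
    float *in, *out;
    cudaMallocManaged((void **)&in, n * sizeof(float));
    cudaMallocManaged((void **)&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    blockSum<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();

    double total = 0;
    for (int b = 0; b < blocks; ++b) total += out[b];
    printf("sum = %.0f (expected %d)\n", total, n);

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```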
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche