Shader core count in Pascal significantly higher...

Hrimnir Member Rare Posts: 2,415

So, this post is mainly for Quiz; I know you've been somewhat of a proponent of AMD's GCN architecture. Where it seemed to fall short is that it was so massively parallel that a lot of games weren't able to properly utilize all of the cores, and thus it suffered in comparison with Nvidia's architecture.

However, with the leaked specs for Pascal, it seems that Nvidia is taking a page from AMD: the new x80 card reportedly has 4096 shader cores, the x80 Ti has 5120, and the x80 Titan has 6144.

That is identical to what a Fury X has in the case of the x80, and up to 50% more in the case of the x80 Titan (rough ratios sketched below). I'm curious what the general consensus is on this, and whether it will help push graphics engine developers to properly utilize the parallelism now that both Nvidia and AMD are on the same page.
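
For reference, a quick back-of-the-envelope check of the rumored core counts against the Fury X's 4096 shaders (a minimal sketch; the Pascal numbers come from the disputed leak, so treat them as placeholders rather than confirmed specs):

```python
# Rumored Pascal shader counts from the leak vs. the Fury X's 4096 GCN shaders.
fury_x_shaders = 4096
rumored_pascal = {"x80": 4096, "x80 Ti": 5120, "x80 Titan": 6144}

for card, shaders in rumored_pascal.items():
    extra = (shaders / fury_x_shaders - 1) * 100
    print(f"{card}: {shaders} shaders ({extra:+.0f}% vs. Fury X)")

# x80:       4096 shaders (+0% vs. Fury X)
# x80 Ti:    5120 shaders (+25% vs. Fury X)
# x80 Titan: 6144 shaders (+50% vs. Fury X)
```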

"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

- Friedrich Nietzsche

Comments

  • Cleffy Member Rare Posts: 6,414
    edited March 2016
    It depends; it's not as if developers artificially limit the number of cores the GPU uses. It would be problematic for a developer to care that much about GPU architecture. They pretty much target a level of performance and design their assets around that target, so by default developers will underutilize any high-end GPU.
    Where high-end GPUs shine is in unusual setups like Super Resolution or 4K: the more pixels on screen, the more GPU cores you need (rough arithmetic below). They also have uses in GPGPU computing, which was big in Bitcoin mining a couple of years ago. My guess is Nvidia knows this next generation needs to hit 60 fps at 4K as a minimum.
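
    To put rough numbers on the resolution point, a minimal sketch of how the per-frame pixel load scales from 1080p to 4K (nothing GPU-specific assumed, just pixel counts and a 60 fps target):

```python
# Pixels shaded per frame and per second at a 60 fps target.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
target_fps = 60
base_pixels = 1920 * 1080

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} px/frame, "
          f"{pixels * target_fps / 1e6:.0f}M px/s at {target_fps} fps, "
          f"{pixels / base_pixels:.1f}x the 1080p load")

# 4K is exactly 4x the pixels of 1080p, so a card that is comfortable at
# 1080p/60 can fall far short of 60 fps at 4K even before other bottlenecks.
```
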
  • Hrimnir Member Rare Posts: 2,415
    edited March 2016
    Cleffy said:
    *snip: quoted in full above*
    That's actually a very good point about the higher resolutions.

    My hope rests on the fact that game engine developers usually code for the broadest possible platform. So, for example, they would not ship an engine that ran great on AMD hardware because of the number of cores but was absolutely terrible on Nvidia; instead it would run "OK on both," or "good" on Nvidia and "close but not quite as good" on GCN. Perhaps now that both vendors have a ton of cores, engine developers can build engines better suited to the architectures, since they are more similar and a "lowest common denominator" approach is no longer necessary.

    Edit: I forgot to clarify. The days of every individual game developer writing their own engine seem to be largely in the past. It appears that most game companies are going to use things like Unreal Engine, CryEngine, etc. The companies that make and focus on the engines have a vested interest in making sure those engines are efficient and perform well on all platforms. As I was saying above, my hope is that now that Nvidia and AMD are more aligned in architecture, this will enable engine developers to produce more efficient or more focused designs.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • thinktank001 Member Uncommon Posts: 2,144
    DMKano said:

    The thing is, game devs are painfully aware that 3 out of 4 gamers own an Nvidia card. Sure, the games run on AMD just fine, but Nvidia holds the majority market share and has for a long time:

    [chart: discrete GPU market share over time]

    See the last figure on the right: Nvidia has 75% of the market share and is trending upward, while AMD is trending downward.

    Again, Korean dev studios develop 100% on Nvidia, and the vast majority of US studios use Nvidia as well; that's what's running on devs' desktops.

    Sure, there's testing done on AMD, and a few devs will run AMD in their machines, but they're a minority.

    Nvidia is simply dominating the PC GPU market right now and has been for a long time. And no, I am NOT saying that AMD is bad; this is not an "AMD vs. Nvidia, which is better" debate, I'm simply stating who has the market share.

    You do realize that is a click-bait site, and they post articles that may or may not have a shred of truth.
  • Malabooga Member Uncommon Posts: 2,977
    edited March 2016
    Nvidia is on the decline: last quarter AMD managed to win back almost 3% of market share, and that's despite Nvidia's "historic sales." And that's just with the 300/Fury series.

    Unfortunately for Nvidia, not having a DX12 GPU in 2016 and 2017 will hurt its sales. A lot.

    As far as that "leak" goes, it's FAKE; someone played "continue the progression" on that chart. Even the most rabid Nvidia fanboys have called it fake, that's how bad it is.

    Parallelism has nothing to do with core count. Nvidia can't really do parallelism (well, only in a very limited way); they already do everything they can, and getting to the DX12 level requires an architectural change. They're stuck at DX11 until Volta in 2018, since from everything we've seen Pascal is just a shrunken Maxwell.
  • mastersam21 Member Uncommon Posts: 70
    edited March 2016
    DMKano said:
    *snip: post quoted in full above*
    The reason Nvidia has the majority market share has nothing to do with them having better products; a lot of it comes from anti-competitive tactics built around proprietary tech like CUDA, PhysX, G-Sync, etc. That graph also doesn't include consoles, which of course skews things heavily in Nvidia's favor (I know it's just discrete GPUs, but for a better picture of the current market we should include them).

    In my opinion AMD is climbing fast. We know AMD is doing well with DX12/Vulkan while Nvidia has nothing to show yet. I agree with a poster in another thread that the days of studios cooking up their own game engines are likely gone, with all the major engines lowering the barrier to entry, so Nvidia is likely to lose more ground, since it's in those engine developers' best interest to make good use of every vendor's tech and not cripple anything.
  • Hrimnir Member Rare Posts: 2,415
    DMKano said:
    *snip: post quoted in full above*


    But that's precisely my point.  Nvidia did have the larger portion of the market, so it's a safe assumption that game engine devs would code for it to run best on NVidia first and AMD second.

    Now that the architectures are much more similar (at least in regard to core counts), it should be a good thing for both Nvidia AND AMD. Personally, I believe massive parallelism is how we will make significant gains in graphics, but the software has to be able to utilize it.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Quizzical Member Legendary Posts: 25,499
    The only game developers who code primarily for one architecture are console developers.  Even if you only care about Nvidia, Maxwell, Kepler, Fermi, and Tesla are very different architectures.

    It's not just about the past; it's also about the future.  Don't you think it would be quite a black eye for a game developer if their game that releases today can't run well on Pascal or Polaris?
  • Quizzical Member Legendary Posts: 25,499
    There's a lot more to GPU parallelism than just shader counts.  There's the heavily SIMD nature of the architectures, so that in order to schedule anything at all, it's as expensive as if you were executing 32 threads at a time on Nvidia or 64 on AMD (see the arithmetic sketch at the end of this post).  There's switching which warps get scheduled every single clock cycle to help cover up the high latency of GPU registers.  There's interleaving instructions within a thread to also help cover up register latency.  On Nvidia (but not AMD!), there are different sets of shaders with different instructions present, so that one set of shaders can execute rarer instructions while the shaders that lack them execute the more common instructions (mainly floating-point arithmetic), keeping them all busy.  And there's keeping a ton of threads resident at a time to cover up the very long latency of anything that has to go off chip, by letting other threads keep working in the meantime.

    And then there are shaders, texture units, render outputs, raster engines, tessellation units, local memory cache, constant memory cache, texture cache, L1 cache, L2 cache, and global memory all doing stuff at the same time, even though the L1 cache is the same physical cache as local memory on some Nvidia architectures and the same as the texture cache on others.

    The graphics APIs actually do a really good job of covering this up so that game developers don't have to worry about the fine details.  So long as you're doing stuff that is reasonably common for graphics, the hardware will usually handle it intelligently.  It's when you move away from graphics or want to do something really unorthodox that you need to be aware of all of the ways that things can go horribly wrong.
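
    A minimal sketch of the warp-granularity and latency-hiding points above, as plain arithmetic (the 32- and 64-wide widths are the real Nvidia warp and AMD wavefront sizes; the latency and per-warp work figures are illustrative assumptions, not measured values):

```python
import math

# Work is issued in fixed-width groups: 32-thread warps on Nvidia,
# 64-thread wavefronts on GCN. Anything smaller still occupies a full group.
def simd_utilization(work_items: int, simd_width: int) -> float:
    groups = math.ceil(work_items / simd_width)
    return work_items / (groups * simd_width)

for simd_width, label in [(32, "Nvidia warp"), (64, "AMD wavefront")]:
    for n in (8, 33, 100):
        used = simd_utilization(n, simd_width)
        print(f"{label:13s} (width {simd_width}): {n:3d} items -> "
              f"{used:.0%} of scheduled lanes doing useful work")

# Latency hiding, Little's-law style, with made-up round numbers: if an off-chip
# memory access takes ~400 cycles and each warp has ~10 cycles of independent
# work before it stalls, a scheduler wants ~40 warps resident so that something
# is always ready to issue.
assumed_memory_latency = 400    # cycles, illustrative
independent_work_per_warp = 10  # cycles, illustrative
warps_needed = assumed_memory_latency // independent_work_per_warp
print(f"~{warps_needed} resident warps per scheduler to hide that latency")
```
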
  • Hrimnir Member Rare Posts: 2,415
    This is why I wanted you to respond.  I really don't know much about the internal architecture differences outside of the obvious.  However I know in a post a while back you talked about how GCN might have too many shader cores and could have been suffering as a result.  Thus why I wanted your input.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Quizzical Member Legendary Posts: 25,499
    Hrimnir said:
    This is why I wanted you to respond.  I really don't know much about the internal architecture differences outside of the obvious.  However I know in a post a while back you talked about how GCN might have too many shader cores and could have been suffering as a result.  Thus why I wanted your input.
    Upon further review, my newer guess is that it's not a problem of too many shaders, but of too little of something else.  Nvidia's top-end card has 6 raster engines, while AMD's Fiji has only four, no increase over Hawaii.  That, or the render outputs, or perhaps some other fixed-function graphics unit, is more likely the bottleneck for AMD (rough numbers below).
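
    To put rough numbers on that guess, a minimal comparison using commonly published reference specs for the Fury X and the GTX 980 Ti (reference clocks only; real boost behavior varies, so treat the output as ballpark figures rather than benchmarks):

```python
# name: (shader count, ROPs, raster engines, reference clock in GHz)
cards = {
    "Fury X":     (4096, 64, 4, 1.05),
    "GTX 980 Ti": (2816, 96, 6, 1.00),  # base clock; boost is higher in practice
}

for name, (shaders, rops, rasters, ghz) in cards.items():
    tflops = shaders * 2 * ghz / 1000   # 2 FLOPs per shader per clock (FMA), in TFLOPS
    fill = rops * ghz                   # 1 pixel per ROP per clock -> Gpixels/s
    print(f"{name}: {tflops:.1f} TFLOPS FP32, {fill:.0f} Gpixel/s fill rate, "
          f"{rasters} raster engines")

# Fury X:     8.6 TFLOPS FP32, 67 Gpixel/s fill rate, 4 raster engines
# GTX 980 Ti: 5.6 TFLOPS FP32, 96 Gpixel/s fill rate, 6 raster engines
#
# Fiji wins comfortably on raw shader throughput but trails on ROP and raster
# throughput, which is consistent with the "too little of something else" guess.
```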