AMD announces more details about 7 nm Vega

Quizzical Member LegendaryPosts: 25,499
There are a variety of sources on this.  For example:

https://techreport.com/news/34243/amd-radeon-instinct-mi50-and-mi60-bring-7-nm-gpus-to-the-data-center

AMD says that the GPU will be rated at 14.7 TFLOPS.  For comparison, a Radeon RX Vega 64 is 12.5 TFLOPS.  That's faster, but not hugely so.  AMD also says that it will have 4 stacks of HBM2 rather than 2, and clocked higher, which will more than double the memory bandwidth.
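As a sanity check, the headline figure is just shader count × 2 ops per cycle (a fused multiply-add) × clock speed, and HBM2 bandwidth is stacks × 1024-bit bus × data rate. The shader counts and clocks below are my assumptions for illustration, not official AMD specifications:

```python
# Back-of-the-envelope check of the headline numbers.
# Shader counts and clock speeds here are assumptions for illustration,
# not official AMD specifications.

def tflops(shaders, clock_ghz):
    # 2 ops per shader per cycle (one fused multiply-add)
    return shaders * 2 * clock_ghz / 1000.0

def hbm2_bandwidth_gbs(stacks, data_rate_gbps):
    # each HBM2 stack has a 1024-bit bus; divide by 8 bits per byte
    return stacks * 1024 * data_rate_gbps / 8.0

print(tflops(4096, 1.526))          # Vega 64 at ~1.53 GHz boost: ~12.5 TFLOPS
print(tflops(4096, 1.8))            # same shader count at ~1.8 GHz: ~14.7 TFLOPS
print(hbm2_bandwidth_gbs(2, 1.89))  # Vega 64, 2 stacks: ~484 GB/s
print(hbm2_bandwidth_gbs(4, 2.0))   # 4 stacks at 2.0 Gbps: 1024 GB/s
```

At the same 4096 shaders, a clock around 1.8 GHz lands on the quoted 14.7 TFLOPS, and four faster stacks would more than double Vega 64's ~484 GB/s.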

Today's presentation was about data centers, not gaming, so AMD didn't say whether the GPU will come to Radeon cards.  I don't know, but I'd lean toward "no": while it could be a new top end card for AMD, it would probably still be slower than a GeForce RTX 2080 (non-Ti), so what's the point?  If they do make Radeon cards out of it, or perhaps out of the salvage parts, it still won't be competitive with Nvidia's top end, which limits how AMD can price it.

But that's not to say that all that AMD got out of the die shrink was another 20% performance.  It's a compute card, and they put in a lot of compute stuff that Vega 10 doesn't have.  For starters, it has half-speed double-precision compute, as compared to 1/16 speed in a Vega 64.  It also has full ECC memory.  Four stacks of HBM2 probably means 32 GB of memory, which is a lot more than you need in a consumer card.  PCI Express 4.0 means double the bandwidth to get data to and from the GPU.  Infinity Fabric allows a GPU-to-GPU connection akin to NVLink.  They've added 8-bit and 4-bit packed integer instructions like what Nvidia added to Turing, though without doing a full matrix multiply-add with them.  On paper, it looks competitive with a Tesla V100, and in about 40% of the die space.  That's what AMD got out of the die shrink.
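The packed-integer point is easiest to see as arithmetic: if each 32-bit lane can hold four 8-bit or eight 4-bit values instead of one float, peak integer throughput scales accordingly. A rough sketch derived from the 14.7 TFLOPS figure (the derived peaks are illustrative, not AMD's published numbers):

```python
# Illustrative peak-rate arithmetic derived from the quoted 14.7 TFLOPS.
# Real instruction mixes and published peak figures may differ.

fp32_tflops = 14.7
fp64_tflops = fp32_tflops / 2   # half-rate double precision (vs 1/16 on Vega 64)
int8_tops   = fp32_tflops * 4   # four 8-bit values packed per 32-bit lane
int4_tops   = fp32_tflops * 8   # eight 4-bit values packed per 32-bit lane

print(fp64_tflops, int8_tops, int4_tops)  # ~7.35, ~58.8, ~117.6
```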

But all that compute stuff costs money, and if you're going to disable it in consumer GPUs, you can't charge extra for it there.  It's highly probable that using the same die size and power for a Navi card on the same architecture would offer a far superior gaming card, so those wanting a new, high end gaming GPU from AMD will probably have to wait for Navi.

Comments

  • gervaise1 Member EpicPosts: 6,919
    As you say, they will have gotten "gains" out of the shift to 7nm. We just don't know what yet. However, the info available and the testing that has been done on Apple's A12X suggest that the gains could be significant.
  • Quizzical Member LegendaryPosts: 25,499
    Gorwe said:
    The answer is "no" until at least 2020 imo. Then we can expect Radeon new Vega 7nm gfx cards. Just imo.
    AMD has said that the 7 nm Vega will be shipping later this year, which probably means widely available to buy early next year.
  • Quizzical Member LegendaryPosts: 25,499
    Gorwe said:
    Hm. That's much earlier than I expected. Ah, one thing: you didn't specify which Vega. The Radeon, or...?
    Radeon Instinct will be available early next year.  I'd bet on Radeon Pro, too, as this should offer something clearly superior to their current Radeon Pro WX 9100.  For ordinary Radeon, it's either early next year or never, and my guess is the latter.  If you're looking for a high end gaming GPU from AMD, you'll need to wait for Navi.
  • AmazingAvery Age of Conan Advocate Member UncommonPosts: 7,188
    Gorwe said:
    So, no gaming product until Navi? When's that?
    Navi is more of a successor to Polaris than Vega, as it will most likely target the mid range rather than the high end market. A version of Navi will most likely turn up in the PS5 in 2020, if mainstream speculation is to be believed, which also includes no 7nm Vega for gamers. For desktop gamers, the rumour is that you're looking at more than a year from now, and then a bit more added on for Navi. You can read the rumours here



  • Quizzical Member LegendaryPosts: 25,499
    AmazingAvery said:
    Navi is more of a successor to Polaris than Vega as it will most likely target mid range and not the high end market. [...]
    You say that as though Polaris and Vega are meaningfully different architectures, rather than merely what AMD decided to name the cards that they launched in a particular year.  They're fundamentally both still GCN, with only minor tweaks.
  • Hrimnir Member RarePosts: 2,415
    Quizzical said:
    There are a variety of sources on this. [...] Those wanting a new, high end gaming GPU from AMD will probably have to wait for Navi.
    Welp, that pretty much seals the deal.  We can count on Nvidia to have a market monopoly for the foreseeable future.

    Guess we should all get used to $600 "midrange" cards from now on.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • AmazingAvery Age of Conan Advocate Member UncommonPosts: 7,188
    edited November 2018
    Hrimnir said:
    Welp, that pretty much seals the deal.  We can count on Nvidia to have a market monopoly for the foreseeable future.

    Guess we should all get used to $600 "midrange" cards from now on.
    This is a super interesting viewing, on topic with what you're saying :)
    A good analysis of the why and how imo.





  • Quizzical Member LegendaryPosts: 25,499
    Hrimnir said:
    Welp, that pretty much seals the deal.  We can count on Nvidia to have a market monopoly for the foreseeable future.

    Guess we should all get used to $600 "midrange" cards from now on.
    For high-end gaming GPUs, only if you regard the launch of Navi as not being part of the foreseeable future.  Which it kind of isn't, as if it's a major architectural overhaul (which is long overdue by now), then we really have no idea how it will perform.
  • Quizzical Member LegendaryPosts: 25,499
    AmazingAvery said:
    This is a super interesting viewing, on topic with what you're saying :)
    A good analysis of the why and how imo.

    A link to a 37 minute video is not an acceptable forum argument.
  • Quizzical Member LegendaryPosts: 25,499
    We don't really know how soon Navi will arrive, as AMD hasn't talked much about it yet.  With 7 nm Vega shipping later this year and 7 nm Zen 2 chips long since returned from the fabs and working (which does not necessarily imply being ready for mass production), it looks like AMD understands TSMC's 7 nm process well enough to launch Navi relatively early next year if the design is ready.  Which it might not be, especially if it's a major new architecture.
  • AmazingAvery Age of Conan Advocate Member UncommonPosts: 7,188
    Quizzical said:
    A link to a 37 minute video is not an acceptable forum argument.
    Who's arguing? And who's to say some or all of that video isn't a) on topic and b) insightful to some?
    Besides, it's pretty accurate IMO and there is no need to deflect from its truthfulness.



  • Hrimnir Member RarePosts: 2,415
    Quizzical said:
    For high-end gaming GPUs, only if you regard the launch of Navi as not being part of the foreseeable future.  Which it kind of isn't, as if it's a major architectural overhaul (which is long overdue by now), then we really have no idea how it will perform.

    Well, if history is any indicator, the consumer high end parts based on the same general architecture as the professional/commercial stuff usually have very similar TFLOPS.  A node shrink should produce much more than a ~18% performance improvement.  Now, yes, I'm aware that TFLOPS isn't the whole picture; however, it's generally a good run-of-the-mill comparison.  Yes, new faster versions of HBM could definitely help out, but realistically we might, maybe, see a 25% gain over a Vega 64.
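    The ~18% is just the ratio of the two headline TFLOPS numbers:

```python
# The quoted gain is simply the ratio of the peak TFLOPS figures.
vega64_tflops = 12.5
vega20_tflops = 14.7  # the announced 7 nm part

gain_pct = (vega20_tflops / vega64_tflops - 1) * 100
print(round(gain_pct, 1))  # ~17.6, i.e. the ~18% quoted above
```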

    That barely puts it in GTX 1080 Ti territory, much less RTX 2080 Ti territory.

    Now let's fast forward to 6-9 (or more) months from now, when we *might* see consumer variants of this card.  By that point, Nvidia will be very close, probably only a few months out, to releasing their own 7 or 8nm parts.  Even if we assume the same 18-25% improvement purely from the process node, that still puts Nvidia way, WAY above AMD's part.

    So again, Nvidia would, as they are now, still be in a perfectly "safe" position to price the mid-high/high end however they want.  Unless AMD can somehow produce this new card at a 300-400 USD price range, the video card market is basically screwed from a consumer standpoint.

    I personally believe Nvidia 100% achieved their goal, which in my mind (again, speculation, as I can't prove it) was to get rid of old stock of Pascal (they've done that spectacularly) and to introduce price stickiness into the equation.  By pricing the new cards so astronomically higher, consumers, who unfortunately tend to have very short memories, will think that when Nvidia brings pricing down 10-20% over the next few months they are getting a "deal", all the while forgetting that they are basically paying the same price/perf as if they had bought a 1080 Ti two years prior.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Cleffy Member RarePosts: 6,414
    A die shrink will always equate to a lower price for practically the same design. However, I don't think I will buy a GPU that is just a die shrink of a Vega 64.
  • AmazingAvery Age of Conan Advocate Member UncommonPosts: 7,188
    Cleffy said:
    A die shrink will always equate to a lower price for practically the same design. However, I don't think I will buy a GPU that is just a die shrink of a Vega 64.
    Yes, these Instinct cards are not really that impressive at 7nm; Volta is still superior even though it is 12nm, and with the same power consumption! The power efficiency of Volta is massive: it can run double the number of transistors at lower energy consumption, and it has tensor cores, which elevates it well above the MI60. AMD slides that initially stated 1.35x performance at the same power have shrunk down to 1.25x performance at the same power... underwhelming. Not sure it is really a very good product for AMD to get a foothold in a market they haven't really been able to crack so far.

    At a 1.25x increase in tflop count only, this card would be slower than a GTX 1080 Ti in gaming, considering the bottlenecks of the Vega architecture at higher frequencies, and would thus lose to an RTX 2080, which got slammed for its performance not that long ago.

    With only this many tflops, it makes absolutely zero sense for AMD to release this as a consumer gaming card, which it seems people want. You would be looking at speeds at best 20% faster than a Vega 64, which would put it slightly faster than an RTX 2070, but with much more difficulty pricing near that region because of the 7nm die and the 4 stacks of HBM2. If there ever is a consumer variant of Vega at 7nm, I wouldn't be buying it either.



  • Ozmodan Member EpicPosts: 9,726
    Hrimnir said:
    Well, if history is any indicator, the consumer high end parts based on the same general architecture as the professional/commercial stuff usually have very similar TFLOPS. [...] Unless AMD can somehow produce this new card at a 300-400 USD price range, the video card market is basically screwed from a consumer standpoint.
    You seem to forget the elephant in the room: the $1200 price tag on a 2080 Ti.  That is completely out of most people's price range.  Most consumers do not have that kind of money for a graphics card.  If AMD can stick to a $600 price or lower, they would do quite well in that category.  For most people, the affordable price for a graphics card is around $300, not $1200.  The 2070 is a product without a spot, in my opinion; it offers nothing that another card can't do as well for less or about the same price.
  • Quizzical Member LegendaryPosts: 25,499
    AmazingAvery said:
    Cleffy said:
    A die shrink will always equate to a lower price for practically the same design. However, I don't think I will buy a GPU that is just a die shrink of a Vega 64.
    Yes, these Instinct cards are not really that impressive at 7nm. [...] If there ever is a consumer variant of Vega at 7nm, I wouldn't be buying it either.
    We don't know the power consumption, and hence, don't know how energy efficient it will be.  It would have to be clocked pretty aggressively to use as much power as a Tesla V100.  Maybe it is, but even if so, they could almost certainly have a lower clocked version that is more energy efficient.

    Volta's tensor cores might be important for machine learning.  They're certainly intended for machine learning, but I haven't studied it enough to know how useful they are there.  For pretty much everything else, they're absolutely useless.  I don't mean to dismiss them entirely, but which hardware is best depends tremendously on what you're doing.

    It's extremely unlikely that this will be AMD's highest performance GPU at graphics on 7 nm, or even their largest die.  If you want to compare it to 14 nm, then comparing it to AMD's first GPU there (Polaris 10) shows a much larger than 18% improvement.  Even if you want to compare it to Vega 10, it has a much smaller die size and probably much lower power consumption.

    A comparison to a GTX 1080 Ti really isn't that helpful unless AMD decides to offer a Radeon card based on it.  The GTX 1080 Ti is focused on graphics, while the new Vega is a compute card that isn't guaranteed to even be able to do graphics at all.  It probably can, but we don't know that for certain.  It will almost certainly blow the GTX 1080 Ti out of the water on compute.  Its real competition is the Tesla V100.

    And while you focus purely on FLOPS, the new Vega has more than double the memory bandwidth of a Radeon RX Vega 64.  It's also possible that they seriously beefed up the L2 cache, which is an area where GCN/Polaris lagged behind Maxwell/Pascal; I'm not sure if Vega 10 is more competitive there, but my guess is "no".
  • Quizzical Member LegendaryPosts: 25,499
    Ozmodan said:
    You seem to forget the elephant in the room: the $1200 price tag on a 2080 Ti.  That is completely out of most people's price range.  Most consumers do not have that kind of money for a graphics card.  If AMD can stick to a $600 price or lower, they would do quite well in that category.  For most people, the affordable price for a graphics card is around $300, not $1200.  The 2070 is a product without a spot, in my opinion; it offers nothing that another card can't do as well for less or about the same price.
    I'd bet on a Radeon Instinct MI60 costing a whole lot more than $1200.  At minimum, it's a higher end version of a Radeon Pro WX 9100, which already costs more than $1200.
  • AmazingAvery Age of Conan Advocate Member UncommonPosts: 7,188
    Quizzical said:
    We don't know the power consumption, and hence, don't know how energy efficient it will be. [...] And while you focus purely on FLOPS, the new Vega has more than double the memory bandwidth of a Radeon RX Vega 64.
    300W -
    You have 13B transistors consuming 300W on 7nm, vs 21B transistors doing the same thing on 12nm (basically 16nm)

    Improved cache latency
    IF links have latency of 60-70 ns! | NVLink is ~10 µs, i.e. 10,000 ns



  • Quizzical Member LegendaryPosts: 25,499
    300W.
    You have 13B transistors consuming 300W on 7nm, vs 21B transistors doing the same thing on 12nm (basically 16nm).

    Improved cache latency.
    IF links have latency of 60-70 ns! NVLink is 10 µs, or 10,000 ns.
    Ah, I hadn't caught an announcement of the official TDP.  Regardless, my point stands that they could surely scale that TDP way back by lowering the clocks and voltages a little.  If you want to compare it to consumer cards, then just crippling the double precision compute would bring the TDP way down, as that's going to be the thing that cranks out a ton of heat.

    As for infinity fabric, it sounds like it's the same thing that they use on their CPUs, with about the same latency.  Not that the latency matters on a GPU.  For that matter, not that infinity fabric or NVlink will matter beyond a handful of oddball cases, as transferring data directly from one GPU to another is a peculiar thing to do and hardly ever useful.  Getting data from the CPU to the GPU and back is the big thing, and PCI Express 4.0 will help a lot there.
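The PCI Express 4.0 doubling is easy to verify: the per-lane rate goes from 8 GT/s to 16 GT/s, with the same 128b/130b line encoding in both generations, so usable bandwidth scales directly with the rate:

```python
# Usable PCIe x16 bandwidth in GB/s per direction: per-lane rate (GT/s)
# x 16 lanes x 128b/130b encoding efficiency, divided by 8 bits/byte.

def pcie_x16_gbs(gt_per_s):
    lanes = 16
    encoding = 128 / 130  # 128b/130b line encoding overhead
    return gt_per_s * lanes * encoding / 8

gen3 = pcie_x16_gbs(8)   # ~15.8 GB/s each direction
gen4 = pcie_x16_gbs(16)  # ~31.5 GB/s each direction
```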

    Or, they could have gotten the same performance at lower power by having more compute units clocked lower, at the expense of a larger die size.  Which is basically what a Tesla V100 did.

    Still, trying to infer something about the energy efficiency of a process node by comparing a compute card on one node to a graphics card on another isn't a clean comparison.  It would be like saying that AMD's Zen cores are really energy inefficient because an 8-core EPYC 7261 can burn as much as 170 W with a max turbo of only 2.9 GHz.  It's not the CPU cores burning all that power; it's a bunch of other stuff moving data around, whether PCI Express, memory controllers, infinity fabric, or whatever.

    And it still tells us exactly nothing about Navi, which is what will matter to gamers.
  • Sal1Sal1 Member UncommonPosts: 430
    edited November 2018
    I retract my post.
  • RidelynnRidelynn Member EpicPosts: 7,383
    edited November 2018
    I don’t think high end performance crown means as much as most people think.

    The last time AMD had the crown (and it wasn’t that long ago) - it wasn’t like AMD marketshare all of a sudden spiked and went through the roof.

    So I don’t think AMD not having a card in direct competition with a 1080 Ti or 2080 really affects that much... so they don’t have a $600+ GPU. That doesn’t make their sub-$300 market any less significant.

    What I think has hurt their marketshare and perception more than anything is the mining craze. Gamers haven’t been able to get their hands on GPUs at what should be competitive prices. You may think it’s all roses and rainbows for AMD because they got the card sales regardless, but now that mining is tapering off, the loss of gaming marketshare, and AMD’s perception in the gaming community, really hurts them more than not having a halo card.
  • RidelynnRidelynn Member EpicPosts: 7,383
    As a fun thought experiment... let’s just say for S&G’s that Navi does beat the snot out of a 2080Ti (not that i expect that, this is strictly hypothetical)

    I would expect a few things to occur:

    AMD would try to price it “to compete”, and since the competition is at $1,000+, I would expect AMD to be as well.

    Folks would still say AMD drivers suck.

    nVidia would react - we would see either a technical response to reclaim the crown, a price war of sorts, or both.

    People will claim that CUDA/hardware PhysX/RTX/Gsync actually matter over performance.

    AMD marketshare would not budge significantly
  • CleffyCleffy Member RarePosts: 6,414
    Even if AMD sold a card that was 25% faster than anything nVidia has, while consuming 150 watts, for $300, they would not increase their market share. That's just how far the deck is stacked against them. That happened with the VLIW architecture and the start of the GCN architecture. AMD released a superior product, yet nVidia was still able to sell a more expensive card that got worse performance.
    Any proprietary stuff from nVidia doesn't matter. Most of that shit gets dropped after a couple years and only a handful of developers ever support it. Only thing that has stuck is CUDA since nVidia dominates in professional cards.
  • QuizzicalQuizzical Member LegendaryPosts: 25,499
    Ridelynn said:
    As a fun thought experiment... let’s just say for S&G’s that Navi does beat the snot out of a 2080Ti (not that i expect that, this is strictly hypothetical)

    I would expect a few things to occur:

    AMD would try to price it “to compete”, and since the competition is at $1,000+, I would expect AMD to be as well.

    Folks would still say AMD drivers suck.

    nVidia would react - we would see either a technical response to reclaim the crown, a price war of sorts, or both.

    People will claim that CUDA/hardware PhysX/RTX/Gsync actually matter over performance.

    AMD marketshare would not budge significantly
    I'd regard it as highly probable that there eventually will be either an AMD Navi GPU or an AMD GPU of an architecture highly derivative of Navi that does beat a GeForce RTX 2080 Ti.  It's decently likely that there will be a GeForce RTX 3080 or whatever that likewise beats the RTX 2080 Ti before that happens, but we'll get there eventually.

    That doesn't mean that those GPUs will cost $1200, though.  Die shrinks allow a given level of performance for much less money.  The reason that the GeForce RTX 2080 Ti costs a fortune to buy is that it costs a fortune to build.  I've said that so many times that people are probably sick of hearing it from me, but it's true.  If a die shrink of Turing on 7 nm gives a 400 mm^2 die in an RTX 3080 that is a little faster than an RTX 2080 Ti, even if AMD still isn't competitive at the high end, Nvidia would probably charge something like $700 for it.
  • HrimnirHrimnir Member RarePosts: 2,415
    Ridelynn said:
    I don’t think high end performance crown means as much as most people think.

    The last time AMD had the crown (and it wasn’t that long ago) - it wasn’t like AMD marketshare all of a sudden spiked and went through the roof.

    So I don’t think AMD not having a card in direct competition with a 1080 Ti or 2080 really affects that much... so they don’t have a $600+ GPU. That doesn’t make their sub-$300 market any less significant.

    What I think has hurt their marketshare and perception more than anything is the mining craze. Gamers haven’t been able to get their hands on GPUs at what should be competitive prices. You may think it’s all roses and rainbows for AMD because they got the card sales regardless, but now that mining is tapering off, the loss of gaming marketshare, and AMD’s perception in the gaming community, really hurts them more than not having a halo card.
    There's a difference between not having the crown and where they are now.  They can barely compete with nvidia's midrange LAST GEN product, much less the midrange of the new gen.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche
