AMD announces first Vega cards

Quizzical Member Legendary Posts: 25,524
Today's announcement is for the Radeon Instinct, which is not a gaming card but a GPU compute product aimed at machine learning.  Still, announcing one card based on a chip tells you something about the chip, so let's have a look.  You can find stories in a lot of places, but here's the full slide deck:

http://www.anandtech.com/Gallery/Album/5195

One slide says that a server with 4 Vega GPUs has 100 TFLOPS of performance.  A little math gives you 25 TFLOPS per GPU.  For comparison, the Pascal-based Titan X, Nvidia's current top of the line, sports a hair under 11 TFLOPS.

Another slide says that Vega can do half precision at 2x the speed of single precision.  Most likely the 25 TFLOPS number is half precision, which would give us 12.5 TFLOPS single precision.  That's still more than anything Nvidia offers, albeit by a smaller margin than the Fury X had over the Maxwell-based Titan X.  And it's more than double the Radeon RX 480.
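
Rough arithmetic in Python, just to lay those numbers out side by side.  The 100 TFLOPS per 4-GPU server and the 2:1 FP16:FP32 ratio come straight from the slides, and the ~11 TFLOPS Titan X figure is the one above; the ~5.8 TFLOPS RX 480 peak is my own approximate number, so treat the ratios as ballpark.

```python
# Back-of-the-envelope math from the slide numbers (all figures are peak TFLOPS).
server_tflops_fp16 = 100.0   # 100 TFLOPS per 4-GPU server, assumed to be half precision
gpus_per_server = 4

vega_fp16 = server_tflops_fp16 / gpus_per_server   # 25 TFLOPS FP16 per GPU
vega_fp32 = vega_fp16 / 2                          # slide says FP16 runs at 2x FP32 -> 12.5 TFLOPS

titan_x_pascal_fp32 = 11.0   # "a hair under 11 TFLOPS", as cited above
rx_480_fp32 = 5.8            # approximate RX 480 peak (my number, not from the slides)

print(f"Vega FP16 per GPU: {vega_fp16:.1f} TFLOPS")
print(f"Vega FP32 per GPU: {vega_fp32:.1f} TFLOPS")
print(f"vs Titan X (Pascal): {vega_fp32 / titan_x_pascal_fp32:.2f}x")
print(f"vs RX 480:           {vega_fp32 / rx_480_fp32:.2f}x")
```

That works out to roughly 1.14x the Pascal Titan X and about 2.2x the RX 480 in single precision, consistent with the margins described above.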

That's just computational capability, of course, and a lot of other things matter.  For starters, Fiji went rather light on fixed-function graphics hardware in order to pack in so many shaders.  Hopefully Vega won't make the same mistake, but the presentation of a product intended for machine learning understandably ignores that.

Nvidia has long had more clever scheduling than AMD, making it easier for games to fully exploit their hardware.  GCN narrowed that gap considerably but did not eliminate it, and we'll have to see if Vega is basically just bigger Polaris+HBM2 or if AMD offers some major architectural improvements, akin to what Nvidia did with Maxwell as compared to Kepler.

Comments

  • Cleffy Member Rare Posts: 6,414
    I'm not sure how well Vega would do if they deviated too much from GCN. One of the benefits AMD has had is that their architectures last a long time, which has translated into the cards aging better, since the drivers only have to support VLIW and GCN. It's also usually not a good idea to do a die shrink and an architecture change at the same time. Since they already did the die shrink with Polaris, and given the gap of time between the releases, I feel Vega is just an extension of Polaris. If it's truly a new architecture, then I do believe AMD performed an engineering miracle completing this GPU six months in advance.
    Also, I would have to wonder how it will perform in GPGPU. Going from Maxwell to Kepler, nVidia lost some of its GPGPU functionality and gained in graphics rendering.
  • Quizzical Member Legendary Posts: 25,524
    Cleffy said:
    Also, I would have to wonder how it will perform in GPGPU. Going from Maxwell to Kepler, nVidia lost some of its GPGPU functionality and gained in graphics rendering.
    I'm not sure what you meant to say there, but I'm pretty sure that's not it.  For starters, Kepler preceded Maxwell.  While the top end Kepler chip had double precision support and ECC memory (and the rest of the Kepler lineup didn't!), it was otherwise a rather terrible compute architecture.  Yes, Maxwell was better than Kepler at graphics, but it was also better and often by a larger margin at compute.

    If the bottleneck is something that happens on the GPU die (as opposed to global memory or PCI Express), for a Maxwell-based Titan X to only double the performance of a Kepler-based Titan was actually a decently favorable result to Kepler.  And Maxwell offered those huge gains in essentially the same die size and power consumption and on the same process node.  Kepler was broken in a number of ways that were less important to graphics than compute, and Maxwell largely fixed what was broken in Kepler.
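
    To put rough numbers on that last point: the peak figures below are approximate boost-clock numbers I'm supplying myself, not anything from this thread, so treat this as a sketch.  Maxwell's paper peak was only about 1.4x Kepler's, so a 2x real-world gap means Kepler was still extracting roughly 70% of Maxwell's per-peak-FLOP efficiency.

    ```python
    # Approximate peak FP32 throughput in TFLOPS (my own ballpark boost-clock figures).
    titan_kepler_peak = 4.7      # GTX Titan (GK110), roughly 2688 cores near 875 MHz
    titan_x_maxwell_peak = 6.6   # GTX Titan X (GM200), roughly 3072 cores near 1075 MHz
    observed_speedup = 2.0       # Maxwell Titan X doubling Kepler Titan on on-die compute workloads

    peak_ratio = titan_x_maxwell_peak / titan_kepler_peak       # ~1.4x on paper
    kepler_relative_efficiency = peak_ratio / observed_speedup  # Kepler's share of Maxwell's per-FLOP efficiency

    print(f"Peak ratio on paper:          {peak_ratio:.2f}x")
    print(f"Kepler efficiency vs Maxwell: {kepler_relative_efficiency:.0%}")
    ```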
  • frostymug Member Rare Posts: 645
    I read an article on this yesterday. A 3+ petaflop single rack is insane, especially if it comes in around the price point Koduri claimed: ~$125k (some rough math below).


    Bitcoin mining was born too soon
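
    Rough math on the rack, assuming the 3 PFLOPS figure is half precision and reusing the 25 TFLOPS-per-GPU number from the OP; reading the ~$125k as the price of the whole rack is my interpretation of the claim, so take the cost-per-TFLOPS line with a grain of salt.

    ```python
    # Back-of-the-envelope on the rack claim, reusing numbers from this thread.
    rack_tflops_fp16 = 3000.0   # "3+ petaflop" rack, assumed half precision
    gpu_tflops_fp16 = 25.0      # per-GPU figure from the OP
    gpus_per_server = 4

    gpus_needed = rack_tflops_fp16 / gpu_tflops_fp16   # ~120 GPUs
    servers_needed = gpus_needed / gpus_per_server     # ~30 of the 4-GPU servers

    rack_price = 125_000.0      # Koduri's claimed price point, assumed here to cover the whole rack
    dollars_per_tflops = rack_price / rack_tflops_fp16

    print(f"GPUs per rack:        ~{gpus_needed:.0f}")
    print(f"Servers per rack:     ~{servers_needed:.0f}")
    print(f"Cost per FP16 TFLOPS: ~${dollars_per_tflops:.0f}")
    ```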
  • frostymug Member Rare Posts: 645
    They gave a little more info on (Ry)Zen today. 

    http://www.pcgamer.com/amd-ryzen-details-and-expectations/

    If this doesn't turn into another Bulldozer (all talk, no walk), then AMD could be looking at the second coming of the Athlon 64, and Intel could be looking at finally needing to do something drastic rather than incremental upgrades.


  • Cleffy Member Rare Posts: 6,414
    edited December 2016
    Quizzical said:
    Cleffy said:
    Also, I would have to wonder how it will perform in GPGPU. Going from Maxwell to Kepler, nVidia lost some of its GPGPU functionality and gained in graphics rendering.
    I'm not sure what you meant to say there, but I'm pretty sure that's not it.  For starters, Kepler preceded Maxwell.  While the top end Kepler chip had double precision support and ECC memory (and the rest of the Kepler lineup didn't!), it was otherwise a rather terrible compute architecture.  Yes, Maxwell was better than Kepler at graphics, but it was also better and often by a larger margin at compute.

    If the bottleneck is something that happens on the GPU die (as opposed to global memory or PCI Express), for a Maxwell-based Titan X to only double the performance of a Kepler-based Titan was actually a decently favorable result to Kepler.  And Maxwell offered those huge gains in essentially the same die size and power consumption and on the same process node.  Kepler was broken in a number of ways that were less important to graphics than compute, and Maxwell largely fixed what was broken in Kepler.
    Whoops, meant Fermi->Kepler.

    Also, I don't think this will be the same as Athlon 64 versus Pentium 4. The Pentium 4 was just flawed in its approach; it took too many steps to get anything done. The Athlon was more efficient and could do significantly more with significantly less. Something like an Athlon 3200+ competed with a 3.0 GHz Pentium 4 for a fraction of the price, and you didn't even have to buy insanely expensive Rambus RAM.
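
    To make that efficiency point concrete: treating delivered performance as roughly IPC times clock, matching a 3.0 GHz Pentium 4 at the Athlon's much lower clock implies a sizable per-clock advantage. The ~2.2 GHz clock below is my recollection for the 3200+, not something from this thread.

    ```python
    # Rough IPC comparison, treating delivered performance as IPC * clock.
    athlon_clock_ghz = 2.2   # Athlon 3200+ ran around 2.0-2.2 GHz (my figure, from memory)
    p4_clock_ghz = 3.0       # the Pentium 4 it competed with, per the post above

    # If the two chips land at comparable performance, the IPC ratio is the inverse of the clock ratio.
    implied_ipc_advantage = p4_clock_ghz / athlon_clock_ghz
    print(f"Implied Athlon per-clock advantage: ~{implied_ipc_advantage:.2f}x")   # ~1.36x
    ```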
  • Quizzical Member Legendary Posts: 25,524
    Cleffy said:
    Quizzical said:
    Cleffy said:
    Also, I would have to wonder how it will perform in GPGPU. Going from Maxwell to Kepler, nVidia lost some of its GPGPU functionality and gained in graphics rendering.
    I'm not sure what you meant to say there, but I'm pretty sure that's not it.  For starters, Kepler preceded Maxwell.  While the top end Kepler chip had double precision support and ECC memory (and the rest of the Kepler lineup didn't!), it was otherwise a rather terrible compute architecture.  Yes, Maxwell was better than Kepler at graphics, but it was also better and often by a larger margin at compute.

    If the bottleneck is something that happens on the GPU die (as opposed to global memory or PCI Express), for a Maxwell-based Titan X to only double the performance of a Kepler-based Titan was actually a decently favorable result to Kepler.  And Maxwell offered those huge gains in essentially the same die size and power consumption and on the same process node.  Kepler was broken in a number of ways that were less important to graphics than compute, and Maxwell largely fixed what was broken in Kepler.
    Whoops, meant Fermi->Kepler.

    Also, I don't think this will be the same as Athlon 64 versus Pentium 4. The Pentium 4 was just flawed in its approach; it took too many steps to get anything done. The Athlon was more efficient and could do significantly more with significantly less. Something like an Athlon 3200+ competed with a 3.0 GHz Pentium 4 for a fraction of the price, and you didn't even have to buy insanely expensive Rambus RAM.
    That makes a lot more sense.  Thanks for the clarification.