AMD Vega GPUs will launch "over the next couple of months".


Comments

  • Vrika Member Legendary Posts: 7,992
    Quizzical said:
    3)  Preliminary drivers don't perform well and that's all that we're seeing.  By the time the GPU launches, the problems will be fixed and Vega will be a competitive architecture.  Maybe a little better than Pascal or maybe a little worse, but at least competitive with a Titan Xp.  It's quite possible that the reason AMD held off on the consumer version of Vega is that they knew the drivers simply weren't ready for gaming and there are huge performance improvements coming.  There are always massive improvements to be had over the very early drivers on a new architecture.  The public never sees those very early drivers, but with Vega it might have seen an earlier version than usual, as AMD has strong incentives to get Vega out there as soon as possible.
    AMD was already making a lot of noise about Vega coming soon back when the RX 400 cards were released, and it looks like they might have had to delay due to HBM2 supply. If they had to delay because of that, then they would have had plenty of extra time to work on Vega drivers.

    Of course this is a lot of ifs and speculation.
  • Ridelynn Member Epic Posts: 7,383
    Depends on how much of an iteration the new Vega Compute Unit is over GCN.
  • 13lake Member Uncommon Posts: 719
    edited July 2017
    Quizzical said:
    13lake said:
    AMD confirmed to PCPer, if I'm not mistaken, that the only difference between the gaming and pro driver is which Radeon settings are currently visible and in use.

    There is no actual difference whatsoever between the two driver modes.
    Gaming and professional drivers have been entirely separate for many years now.  Both AMD and Nvidia do this.  Look at how the Quadro P5000 (basically a lower clocked GTX 1080 with different drivers) handily destroys the Titan Xp in some of the professional tests.

    If the only driver options in the Vega Frontier Edition are going to be which settings are visible, then either it doesn't support the professional drivers, it doesn't support the gaming drivers, or both.  That wouldn't make any sense at all.
    Let me clarify: the gaming driver for Frontier Vega is no different from the professional driver for Frontier Vega (I'm not talking about what differences RX Vega's upcoming gaming drivers will or won't have).

    And here's proof that what I posted was true (PCPer got word from AMD that the differences are cosmetic only). Whether or not that's exactly what he was told is beyond the scope of my initial observation.

    See 24:55 of this video; the link starts at that point:
    https://youtu.be/bhGAS_oGN3c?t=1495
  • Phry Member Legendary Posts: 11,004
    By the look of things, the Vega GPUs aren't really for gamers but for game developers. You certainly wouldn't use them in a gaming PC, because the performance just isn't there when it comes to gaming, and while drivers will no doubt improve the performance a little, it's unlikely to make much difference from the results that have already been obtained. :/
  • Quizzical Member Legendary Posts: 25,507
    13lake said:
    Quizzical said:
    13lake said:
    AMD confirmed to PCPer, if I'm not mistaken, that the only difference between the gaming and pro driver is which Radeon settings are currently visible and in use.

    There is no actual difference whatsoever between the two driver modes.
    Gaming and professional drivers have been entirely separate for many years now.  Both AMD and Nvidia do this.  Look at how the Quadro P5000 (basically a lower clocked GTX 1080 with different drivers) handily destroys the Titan Xp in some of the professional tests.

    If the only driver options in the Vega Frontier Edition are going to be which settings are visible, then either it doesn't support the professional drivers, it doesn't support the gaming drivers, or both.  That wouldn't make any sense at all.
    Let me clarify: the gaming driver for Frontier Vega is no different from the professional driver for Frontier Vega (I'm not talking about what differences RX Vega's upcoming gaming drivers will or won't have).

    And here's proof that what I posted was true (PCPer got word from AMD that the differences are cosmetic only). Whether or not that's exactly what he was told is beyond the scope of my initial observation.

    See 24:55 of this video; the link starts at that point:
    https://youtu.be/bhGAS_oGN3c?t=1495
    Thank you for the clarification.
  • Ridelynn Member Epic Posts: 7,383
    13lake said:
    Quizzical said:
    13lake said:
    AMD confirmed to PCPer, if I'm not mistaken, that the only difference between the gaming and pro driver is which Radeon settings are currently visible and in use.

    There is no actual difference whatsoever between the two driver modes.
    Gaming and professional drivers have been entirely separate for many years now.  Both AMD and Nvidia do this.  Look at how the Quadro P5000 (basically a lower clocked GTX 1080 with different drivers) handily destroys the Titan Xp in some of the professional tests.

    If the only driver options in the Vega Frontier Edition are going to be which settings are visible, then either it doesn't support the professional drivers, it doesn't support the gaming drivers, or both.  That wouldn't make any sense at all.
    Let me clarify: the gaming driver for Frontier Vega is no different from the professional driver for Frontier Vega (I'm not talking about what differences RX Vega's upcoming gaming drivers will or won't have).

    And here's proof that what I posted was true (PCPer got word from AMD that the differences are cosmetic only). Whether or not that's exactly what he was told is beyond the scope of my initial observation.

    See 24:55 of this video; the link starts at that point:
    https://youtu.be/bhGAS_oGN3c?t=1495
    Correct me if I'm wrong:

    So that means Frontier won't get a lot better at gaming, because gaming support is already in its drivers. I'm not saying driver tweaks can't improve it at all, just that there won't be a miraculous 70% jump from game-oriented drivers that don't exist yet.

    But that still (somehow) doesn't say much about the RX Vega.
  • MrMonolitas Member Uncommon Posts: 263
    AMD claims it's not a gaming GPU, OK... but then it costs 999 dollars. RX Vega is supposed to be lower priced and oriented toward gaming. If it has the same performance at a lower price, I really don't see any problem. Look at Ryzen: the lowest CPU has almost the same performance as the 1800X.
  • Vrika Member Legendary Posts: 7,992
    edited July 2017
    albers said:
    AMD claims it's not a gaming GPU, OK... but then it costs 999 dollars. RX Vega is supposed to be lower priced and oriented toward gaming. If it has the same performance at a lower price, I really don't see any problem. Look at Ryzen: the lowest CPU has almost the same performance as the 1800X.
    A high-end 8-core Ryzen is twice as fast as a low-end 4-core Ryzen if you can find a program that scales well to multiple threads. With CPUs the problem is that most programs don't scale, and that's why the low-end CPU is often nearly as fast as the high-end CPU.

    With GPUs there are no similar scaling problems, so if AMD makes a cheaper Vega with only 50% of the processing capacity (like they did with the cheaper Ryzens), this time it'll also have only 50% of the performance in games.
  • MrMonolitas Member Uncommon Posts: 263
    Well, in certain applications, yes, but not in gaming. It shows more or less the same performance.
  • Ridelynn Member Epic Posts: 7,383
    edited July 2017
    There are times when you see "same gaming performance, lower price". Titan <-> x80Ti is usually one of those cases. Titan contains a lot of double precision and other functions that aren't really useful for gaming, but do help out some workstation folks. Both of those cards in a given generation have roughly the same architecture, and roughly the same number of CUDA cores.

    Vega ~may~ be a similar situation; we don't know yet.

    But if a card contains 50% fewer Compute Units / CUDA Cores / what have you, it's typically half as fast at everything.
  • Quizzical Member Legendary Posts: 25,507
    Ridelynn said:
    There are times when you see "same gaming performance, lower price". Titan <-> x80Ti is usually one of those cases. Titan contains a lot of double precision and other functions that aren't really useful for gaming, but do help out some workstation folks. Both of those cards in a given generation have roughly the same architecture, and roughly the same number of CUDA cores.

    Vega ~may~ be a similar situation; we don't know yet.

    But if a card contains 50% fewer Compute Units / CUDA Cores / what have you, it's typically half as fast at everything.
    The original Kepler-based Titan and its dual-GPU equivalent, the Titan Z, are the only Titan cards that have had much in the way of double-precision compute.  The Maxwell and Pascal versions of Titan cards didn't physically have the double-precision on the die.  Indeed, a big chip that cuts out the non-gaming stuff is the reason GP102 (GTX 1080 Ti, Titan Xp) exists.
  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188
    VEGA should have been out 10 months ago. It's late and not good enough at all considering the development lead time. HBM is a fail and expensive. The perf/W advantage is with NVIDIA.

    HBM has been a bad bet for AMD, unfortunately: it's currently very late, way below bandwidth targets, likely running at higher voltages, and probably has less than ideal yields.

    AMD is also stuck with a slightly less optimized process for high-performance silicon. I reckon that's worth at least 10% performance or 15% lower power consumption at these clocks. All these things combined add up to a nightmare for AMD. I was apprehensive at first about Vega being a disappointment, but all this talk is a wake-up call. Vega isn't going to be overly competitive. Nvidia will continue to dominate the high end.

    AMD is too busy making CPUs. It's a concerning situation. There's simply no money to develop a full line of GPUs like they used to. Hopefully that will change with the success of Ryzen. Otherwise it'll look as though ATi selling to AMD was a huge mistake.

    Vega has a dilemma similar to the one a few previous AMD GPUs have had: getting the clock speeds up to the desired levels requires far too much power. They run quite efficiently at lower voltages, providing much better perf/W. It was suggested that this GPU can compete with similar perf/W when slowed down to GTX 1070 performance levels. However, moving up from there throws the wattage out the window in a big hurry.

    So if the architecture is just too slow and they can't fix it with clock speed... we're only going to get a ~$350 graphics card that AMD has horrible margins on, which is a shame.

    AMD's performance targets were completely wrong for both Polaris and Vega. They likely never expected such high clocks from Pascal. Consequently they had to bump clocks for both Polaris and Vega far above optimal.

    The RX Vega series codenames are rumoured to be Vega XTX, Vega XT and Vega XL, with the XTX and the XL apparently coming with 4096 stream processors and possibly only 8 GB of HBM2. Add to that 375 W and 285 W with performance less than a 1080 Ti, and it looks hot, heavy and inefficient. If true, that's super disappointing.

    IMO, predictions for what's coming in a few weeks:
    Competition for the 1080, 1070, and 1060 9 Gbps.
    Free bonus from AMD: 50% more power consumption. Many of their previous GPUs have used more power than the rated TDP, so I wouldn't be surprised if these GPUs follow suit.
    Either way, TDP and power consumption are still pretty directly related, and the fact that Vega's TDP is significantly higher than its NVidia counterparts should be an indication that it will be more power hungry as well... or that Vega will be significantly less efficient. Then there will be those touting HBCC even though it has no benefit to current games, or to games in the works for a long time, unless you have a game using more than 8 GB of VRAM (calling it a gimmick now).
  • Ozmodan Member Epic Posts: 9,726
    Well, you neglected to note that AMD cannot make enough graphics cards at the moment. They sell out as soon as they hit the shelves, since their design is far preferable to Nvidia's offerings for mining. They are currently outselling Nvidia's products 3 to 1.

    Let's see what the gaming version of Vega looks like before making a lot of assumptions about it. Nvidia's cards still have major issues with DX12; I'm just waiting to see a good game make use of that standard.

  • Phry Member Legendary Posts: 11,004
    Ozmodan said:
    Well, you neglected to note that AMD cannot make enough graphics cards at the moment. They sell out as soon as they hit the shelves, since their design is far preferable to Nvidia's offerings for mining. They are currently outselling Nvidia's products 3 to 1.

    Let's see what the gaming version of Vega looks like before making a lot of assumptions about it. Nvidia's cards still have major issues with DX12; I'm just waiting to see a good game make use of that standard.

    DirectX 12 is, for the moment, irrelevant. DirectX 11 is probably more popular from a gaming viewpoint than 12 is. Having games support DX12 is not unrealistic, but only 15 or 16% of PCs out there can even use DX12, so it's a bit of a limited marketplace: a nice bonus for those that have it, but an irrelevant feature for the majority that don't. :/
  • Vrika Member Legendary Posts: 7,992
    Ozmodan said:

    Well, you neglected to note that AMD cannot make enough graphics cards at the moment. They sell out as soon as they hit the shelves, since their design is far preferable to Nvidia's offerings for mining. They are currently outselling Nvidia's products 3 to 1.
    Do you have any actual data to back up that 3:1 claim?

    AMD is currently selling all the cards they can make to miners, and their cards are better for mining than NVidia's, but GTX 1070 and 1060 cards are also in short supply because of how many miners are buying them. At the moment both manufacturers must be making as many cards as they can, and it's just a matter of who can manufacture and deliver to merchants fastest.

    I'd guess if anything NVidia is outselling AMD. They've been more popular for the last couple of years so they should have more manufacturing capacity.

    Do you have any source for that 3:1 claim, or are you just inventing numbers that please you?
     
  • Ozmodan Member Epic Posts: 9,726
    Vrika said:
    Ozmodan said:

    Well, you neglected to note that AMD cannot make enough graphics cards at the moment. They sell out as soon as they hit the shelves, since their design is far preferable to Nvidia's offerings for mining. They are currently outselling Nvidia's products 3 to 1.
    Do you have any actual data to back up that 3:1 claim?

    AMD is currently selling all the cards they can make to miners, and their cards are better for mining than NVidia's, but GTX 1070 and 1060 cards are also in short supply because of how many miners are buying them. At the moment both manufacturers must be making as many cards as they can, and it's just a matter of who can manufacture and deliver to merchants fastest.

    I'd guess if anything NVidia is outselling AMD. They've been more popular for the last couple of years so they should have more manufacturing capacity.

    Do you have any source for that 3:1 claim, or are you just inventing numbers that please you?
    My source was a Microcenter manager. They have plenty of Nvidia cards, but zero AMD cards. They have put a two-card limit on video cards and they still cannot keep any AMD cards in stock. The 1060s don't have enough memory to justify mining, and the 1070s have jumped in price to close to the 1080 range, hence they have lost popularity.
  • Vrika Member Legendary Posts: 7,992
    edited July 2017
    Ozmodan said:
    Vrika said:
    Ozmodan said:

    Well, you neglected to note that AMD cannot make enough graphics cards at the moment. They sell out as soon as they hit the shelves, since their design is far preferable to Nvidia's offerings for mining. They are currently outselling Nvidia's products 3 to 1.
    Do you have any actual data to back up that 3:1 claim?

    AMD is currently selling all the cards they can make to miners, and their cards are better for mining than NVidia's, but GTX 1070 and 1060 cards are also in short supply because of how many miners are buying them. At the moment both manufacturers must be making as many cards as they can, and it's just a matter of who can manufacture and deliver to merchants fastest.

    I'd guess if anything NVidia is outselling AMD. They've been more popular for the last couple of years so they should have more manufacturing capacity.

    Do you have any source for that 3:1 claim, or are you just inventing numbers that please you?
    My source was a Microcenter manager. They have plenty of Nvidia cards, but zero AMD cards. They have put a two-card limit on video cards and they still cannot keep any AMD cards in stock. The 1060s don't have enough memory to justify mining, and the 1070s have jumped in price to close to the 1080 range, hence they have lost popularity.
    Maybe it's different in different parts of the world. Here in Finland, looking at the webpage of one of the largest electronics stores we have:

    They have no GTX 1060 models in stock, only one GTX 1070 model in stock, and they even put out a press release back in June about not having graphics cards in stock.

    They don't appear to have any AMD cards in stock at all, so it's still easier to get an NVidia card than an AMD card, but both NVidia and AMD are having the same problem, and at the moment I'd say it's a race of who can manufacture fastest.
     
  • saurus123 Member Uncommon Posts: 678
    These cards will be out of stock on the first day due to miners.
  • Quizzical Member Legendary Posts: 25,507
    saurus123 said:
    These cards will be out of stock on the first day due to miners.
    That depends on how they're priced and whether ethereum mining breaks the memory controller as it does with a GTX 1080.  You can buy a Frontier Edition today:

    https://www.newegg.com/Product/Product.aspx?Item=N82E16814105073

    Not saying you should, but I am saying that the miners didn't buy them all up.
  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188
    Ozmodan said:
    Vrika said:
    Ozmodan said:

    Well, you neglected to note that AMD cannot make enough graphics cards at the moment. They sell out as soon as they hit the shelves, since their design is far preferable to Nvidia's offerings for mining. They are currently outselling Nvidia's products 3 to 1.

    Do you have any source for that 3:1 claim, or are you just inventing numbers that please you?
    My source was a Microcenter manager. They have plenty of Nvidia cards, but zero AMD cards. They have put a two-card limit on video cards and they still cannot keep any AMD cards in stock. The 1060s don't have enough memory to justify mining, and the 1070s have jumped in price to close to the 1080 range, hence they have lost popularity.
    Insight from one retail channel isn't good enough to go on for the whole world.
    Absolutely, though, mining has had an impact on pricing. That is about the only good thing for AMD right now. It doesn't change the fact that VEGA has big architecture issues.
  • Quizzical Member Legendary Posts: 25,507
    Torval said:
    Quizzical said:
    saurus123 said:
    These cards will be out of stock on the first day due to miners.
    That depends on how they're priced and whether ethereum mining breaks the memory controller as it does with a GTX 1080.  You can buy a Frontier Edition today:

    https://www.newegg.com/Product/Product.aspx?Item=N82E16814105073

    Not saying you should, but I am saying that the miners didn't buy them all up.
    How does it break the memory controller? I haven't been following this much at all. That is an interesting factoid.
    Let's make a simple memory bandwidth synthetic benchmark.  What you do is to make an array of data, and then do a bunch of random accesses to that array.  Then change the size of the array and repeat.
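    For concreteness, here's a minimal sketch of that kind of sweep. This is an illustration only: it assumes CUDA, and the kernel name, launch dimensions, and the LCG used to scramble the indices are made up for the example rather than taken from any real tool.

```cuda
// Sketch of a random-access memory "cliff" benchmark (illustrative only).
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

// Each thread chases pseudo-random indices through a table of n 32-bit words.
__global__ void random_access(const uint32_t* table, size_t n,
                              uint32_t* out, int iters)
{
    uint32_t tid = blockIdx.x * blockDim.x + threadIdx.x;
    uint32_t idx = tid % n;
    uint32_t acc = 0;
    for (int i = 0; i < iters; ++i) {
        acc ^= table[idx];
        // Cheap LCG so the next address is effectively random.
        idx = (idx * 1664525u + 1013904223u) % n;
    }
    out[tid] = acc;  // keep the compiler from optimizing the loop away
}

int main()
{
    const int iters = 1024, threads = 256, blocks = 4096;
    // Sweep the table from tens of KB up to 1 GB; watch for drops where the
    // table falls out of L1, then L2, then (on some GPUs) a further cliff.
    for (size_t bytes = 32 << 10; bytes <= (size_t(1) << 30); bytes *= 2) {
        size_t n = bytes / sizeof(uint32_t);
        uint32_t *table = nullptr, *out = nullptr;
        cudaMalloc((void**)&table, bytes);
        cudaMalloc((void**)&out, size_t(blocks) * threads * sizeof(uint32_t));
        cudaMemset(table, 1, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        random_access<<<blocks, threads>>>(table, n, out, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.f;
        cudaEventElapsedTime(&ms, start, stop);
        double reads = double(blocks) * threads * iters;
        printf("%8zu KB: %8.1f M random reads/s\n", bytes >> 10,
               reads / (ms * 1e3));

        cudaFree(table);
        cudaFree(out);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }
    return 0;
}
```

    The absolute numbers don't matter much; what the rest of this post is about is where the drops show up as the table grows.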

    If you try this with a wide enough variety of sizes on a modern GPU, you'll see that you have a lot of bandwidth when the table is small, as you're reading from a bunch of L1 caches in parallel.  Then as the array gets larger, there's a cliff somewhere in the tens of KB where performance drops sharply because it no longer fits in L1 and has to go to L2 instead.  Once the array gets to several MB in size, there's another cliff where performance drops off because it no longer fits in L2 cache and has to go to off-die global memory instead.

    On AMD GPUs, that's the last cliff, and you can go to however large of an array the GPU will let you allocate and there isn't another big drop off.  On Nvidia GPUs, there is another big drop off when the table goes over some particular size.  It varies by GPU, but is on the order of several hundred MB.  The memory controller still works in terms of giving you correct answers, but for random access to an excessively large buffer, it's a lot slower than you'd expect from the paper specifications.  That's basically what Ethereum mining does.

    So the next question is, why is there this big drop-off--and why only on Nvidia and not on AMD?  There, the answer is that AMD and Nvidia have wildly different memory controllers, even if you're looking at GDDR5 for both.  If you allocate a 1 GB array on a CPU, you don't get a contiguous 1 GB of physical memory.  Rather, if you know the physical memory address of one spot in the array, you have a pretty good guess of what the physical address of something 4 logical bytes away is.  But if you try to go a logical 100 MB away, you have no clue about the physical address, as the CPU memory controller will randomize the addresses of pages of memory.

    Nvidia GPUs do that, too.  AMD GPUs don't.  On an AMD GPU, if you allocate a 1 GB buffer and you know the physical address of one place in the buffer, you know the physical address of everywhere in the entire buffer.

    My theory is that in order to randomize the layout of global memory, when you try to read in something from memory, the Nvidia GPU has to first look up where that page in global memory is located to find the physical address, and then go read in the data.  It will try to cache recent accesses so that most of the time, the first lookup is in some dedicated cache that makes it essentially free.  If your buffer gets so large that it can't cache the location of all of the memory pages and you're not reading from the same ones repeatedly, then it ends up having to do two lookups instead of one:  one to find the location of a page, and the second to go get the data from it.  That's how you get an extra cliff in your performance as the arrays get larger.  AMD GPUs don't have that problem.

    As with many things in life, there are no perfect solutions, but only trade-offs.  If you try reading in some chunk of data, and then something 2 KB later, and then 4 KB later, and then 6 KB later, and so forth, striding by some multiple of 2 KB each time, on an AMD GPU you can have all of those memory accesses going to the same physical memory channel.  (The 2 KB value might vary by GPU, but it's something small like that.)  On Nvidia, the addresses are randomized, so your memory reads are divided pretty evenly among the memory channels.  With that access pattern, Nvidia GPUs perform how you'd hope from the paper specs and AMD GPUs perform horribly.

    The strided read problem with AMD GPUs is something that usually isn't that hard to work around if you know that you need to do so.  For example, you can occasionally leave some padding so that instead of striding by a multiple of 2 KB each time, you're jumping by that plus 128 bytes, and then you're hitting all of the memory channels evenly.  The large array read problem with Nvidia GPUs is baked into the hardware and you can't work around it.  The big problem with the AMD approach is that it makes the entry barrier to learning GPU programming higher than it otherwise would be, as you have to stop and think about whether you're accessing the physical memory channels evenly, whereas on Nvidia, you can ignore that and it just works.
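    To make that padding trick concrete, here's a rough sketch (again assuming CUDA; the kernel and variable names are invented, and the ~2 KB period and 128-byte pad are just the figures quoted above, not measured values):

```cuda
// Sketch of the stride-padding workaround (illustrative only).
#include <cstdio>
#include <cuda_runtime.h>

// Thread `col` walks down a "column": one float from each row, where row r
// starts at byte offset r * pitchBytes.
__global__ void column_walk(const float* buf, size_t pitchBytes,
                            int rows, float* out)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float acc = 0.f;
    for (int r = 0; r < rows; ++r)
        acc += buf[r * (pitchBytes / sizeof(float)) + col];
    out[col] = acc;
}

int main()
{
    const int rows = 4096, threads = 256, blocks = 8;   // 2048 columns
    const size_t rowBytes = 8 * 1024;   // a multiple of the small channel period

    const size_t pads[2] = { 0, 128 };
    for (int p = 0; p < 2; ++p) {
        // pad == 0: every row starts a multiple of the channel period apart,
        // so each column can end up pounding on a single memory channel.
        // pad == 128: rows shift by 128 bytes, rotating columns across channels.
        size_t pitch = rowBytes + pads[p];
        float *buf = nullptr, *out = nullptr;
        cudaMalloc((void**)&buf, size_t(rows) * pitch);
        cudaMalloc((void**)&out, size_t(blocks) * threads * sizeof(float));
        cudaMemset(buf, 0, size_t(rows) * pitch);

        cudaEvent_t t0, t1;
        cudaEventCreate(&t0);
        cudaEventCreate(&t1);
        cudaEventRecord(t0);
        column_walk<<<blocks, threads>>>(buf, pitch, rows, out);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);

        float ms = 0.f;
        cudaEventElapsedTime(&ms, t0, t1);
        printf("pad %3zu B: %.3f ms\n", pads[p], ms);

        cudaFree(buf);
        cudaFree(out);
        cudaEventDestroy(t0);
        cudaEventDestroy(t1);
    }
    return 0;
}
```

    On hardware that doesn't camp on a single channel, the two timings should be essentially identical; the point is only that a small layout change is enough to dodge the pathological case once you know to look for it.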

    Ethereum mining is random lookups into large arrays that start at 1 GB and get larger as time passes.  Apparently it's already too large for some Nvidia GPUs.  The people who bought a GTX 1060 or GTX 1070 because it can handle the current size well are likely to be disappointed when the array grows enough that their card hits that last cliff and performance drops sharply.  A Radeon RX 570 should perform well until the table gets so large that you run out of memory.

    AMD has talked about Vega's HBM2 being a "high bandwidth cache".  I'm not sure what they mean by that, but I've interpreted it as meaning that they're moving to randomizing the global memory addresses like Nvidia does.  If that's the case and ethereum miners try to buy up all of the Vega cards, they might not be happy with the performance.  It's plausible that Polaris 10 and Fiji based GPUs could be the backbone of Ethereum mining for a lot longer than most people expect.
  • Quizzical Member Legendary Posts: 25,507
    Ozmodan said:
    Vrika said:
    Ozmodan said:

    Well, you neglected to note that AMD cannot make enough graphics cards at the moment. They sell out as soon as they hit the shelves, since their design is far preferable to Nvidia's offerings for mining. They are currently outselling Nvidia's products 3 to 1.

    Do you have any source for that 3:1 claim, or are you just inventing numbers that please you?
    My source was a Microcenter manager. They have plenty of Nvidia cards, but zero AMD cards. They have put a two-card limit on video cards and they still cannot keep any AMD cards in stock. The 1060s don't have enough memory to justify mining, and the 1070s have jumped in price to close to the 1080 range, hence they have lost popularity.
    Insight from one retail channel isn't good enough to go on for the whole world.
    Absolutely, though, mining has had an impact on pricing. That is about the only good thing for AMD right now. It doesn't change the fact that VEGA has big architecture issues.
    Let's wait until we see the Radeon RX Vega launch with gaming-optimized drivers before we declare it a failure.
  • Cleffy Member Rare Posts: 6,414
    nVidia fabs a higher volume of chips than AMD since they sell twice as many GPUs. So even though there is limited stock of AMD GPUs, it doesn't mean AMD has huge profit margins; it didn't help with the R9 290s when Bitcoin miners were buying them up, either.
  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188
    RX Vega XT: 4096 stream processors, 8 GB HBM2, ASIC power 220 W.

    A GTX 1080's ASIC power is around 120-130 W; for Vega to match 1080 performance, the rumoured ASIC power is around 300 W. That would put Pascal at more than 2× Vega's perf/W, which is very bad.
    A 1080 overclocked on a custom water loop, with only a 217 W power limit, still comes out ahead of RX Vega (300 W?):
    http://www.3dmark.com/3dm11/12258223
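    As a rough check of that "more than 2×" figure, using the numbers quoted above (roughly 125 W for the 1080's ASIC and the rumoured 300 W figure for Vega at similar performance; both are rumours rather than measurements):

```latex
% Equal performance is assumed on both sides, so perf/W scales as 1 / power.
% The wattages are the rumoured figures quoted in the post above.
\frac{\text{perf/W}_{\text{GTX 1080}}}{\text{perf/W}_{\text{Vega, rumoured}}}
  \approx \frac{300\,\mathrm{W}}{125\,\mathrm{W}} \approx 2.4
```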
    The XL had better be under $350 if it's to have any hope of selling at all.

  • Ridelynn Member Epic Posts: 7,383
    Umm, the GTX 1080 has a TDP of 180W at stock clocks.

    And from the supposed leak you have copied and pasted here, Vega XT is 220W, which isn't all that different.

    Sure, 300W > 120W, but nothing we are talking about here hits those numbers you just pulled out of thin air.