
New Nvidia Ampere Video Cards Rumours/Leaks

AmazingAvery - Age of Conan Advocate, Member, Uncommon - Posts: 7,188
edited May 2020 in Hardware
Decent round-up of new info that has leaked out on Nvidia Ampere - watch the full video here - https://www.youtube.com/watch?v=uqillUSZvMM

1) 0:08 Ampere Rumor Intro
2) 2:17 Ampere Architecture Overview
3) 9:15 Ampere Launch Timeframe
4) 10:50 Assessing RDNA 2 vs Ampere
5) 12:38 Adding my info to Other Leaks
6) 14:14 Final Thoughts on Competition & Pricing
It's a good video and the points make for interesting discussion. The potential performance, if true, looks awesome and impressive, but it's all going to come down to pricing. Take everything here with a grain of salt; this post is meant for discussion/speculation, as nothing is out yet.

https://images.mmorpg.com/images/galleries/full/192020/68e107a4-bbb6-4ffa-ae0a-7d78cf09c77d.jpg

https://images.mmorpg.com/images/galleries/full/192020/1e12b398-9325-4887-ac10-6d9e9c405104.jpg

https://images.mmorpg.com/images/galleries/full/192020/48ad985c-538c-40b9-b901-58ae8cce6cb7.jpg

https://imgur.com/7wJKNh2
https://imgur.com/DodRhWh
https://imgur.com/nHITWqN

Is the image pasting broken in the forums?




Comments

  • Ridelynn - Member, Epic - Posts: 7,383
    A 3060 beating a 2080Ti....

    I do think it's technically possible for nVidia to do. I think nVidia held back a good bit on Turing on the tech side (and tried to make up for it on the price tag; excuse my poor joke there), and I think it's plausible that nVidia could put out a component that's as fast as these rumors seem to lean toward.

    And it wouldn't be too far of a stretch: the 1070 pretty well matched a 980 Ti...

    But there has historically been a big gulf between an x70 and an x60 in terms of pricing. And a vast difference between 2080Ti pricing and ... pretty much everything else.

    It would be ... difficult ... for nVidia to come out one generation later with a $299 card that equals a $1200 card that was, just the day before, current on the market. Of course, nothing would stop nVidia from re-valuing their current tech tiers, and now a 3060 may have an MSRP of $599.

    If I had purchased a 2080Ti and that happened, I don't know. On one hand, you'd have the 3080Ti to crow over. On the other, that 3060 just tanked the value of your 2080Ti. You always expect some depreciation, but a drop by a factor of 4 is a bitter pill.
  • Quizzical - Member, Legendary - Posts: 25,499
    What's the source on that?  It looks like it's a random person who is just guessing.  And it doesn't look like an especially good guess, either.

    Let's start with the whopper.  Claiming that ray-tracing isn't going to lower performance anymore is wildly wrong.  The reason that the whole industry didn't go full ray tracing decades ago is that it's intrinsically expensive.  No amount of RTX is going to fix that.

    "Pascal version of Turing" is probably not what he meant.  Turing was a successor architecture to Pascal.  He might have meant a Pascal version of Maxwell, which was basically a die shrink.

    Higher core clocks is a dumb bullet point to list under higher IPC.  Instructions per clock is not affected by the clock rate.

    Double the tensor cores per compute unit is a weird thing to claim.  Tensor cores in Volta/Turing could already match the full register bandwidth of simple fma.  Doubling that would be bizarre unless they double the shaders per compute unit entirely--which they might, though that would just take them back to Maxwell/Pascal numbers.  Actually, Turing could have easily doubled the tensor cores per compute unit just by changing the granularity that Nvidia decided to describe as a compute unit.  (Yes, yes, "streaming multiprocessor", but that's a dumb name so I don't use it.)

    Tensor-accelerated VRAM compression is a weird thing to claim.  I don't know how Nvidia's RAM compression works.  I generally assumed that it was something tied to the memory controllers, which the tensor cores are not.  And you don't put full shaders in memory controllers, much less tensor cores.  You put exactly the things needed to access memory efficiently there and nothing else.
  • AmazingAvery - Age of Conan Advocate, Member, Uncommon - Posts: 7,188
    Quizzical said:
    What's the source on that?  It looks like it's a random person who is just guessing.  And it doesn't look like an especially good guess, either.
    He mentions his sources, most of this comes from someone at Nvidia but later in the video there are a few mentioned. He also has a good track record with his information received and the accuracy of it.
    Quizzical said:
    Let's start with the whopper.  Claiming that ray-tracing isn't going to lower performance anymore is wildly wrong.  The reason that the whole industry didn't go full ray tracing decades ago is that it's intrinsically expensive.  No amount of RTX is going to fix that.
    He said ray tracing should be up to 4 times better per tier, though his information there is still vague. He expects at least double the RT cores, and he does expect a correlation there. Overall the improvement is such that (speculation) a 3060 could be close to a 2080 Ti in terms of performance. RTX ON shouldn't lower performance as much any more; the hit shouldn't be as big as it is today.
    PS the industry is totally going RT (AMD and Intel).
    Quizzical said:
    "Pascal version of Turing" is probably not what he meant.  Turing was a successor architecture to Pascal.  He might have meant a Pascal version of Maxwell, which was basically a die shrink.
    Where he said Pascal, he was paraphrasing what some people are expecting: a jump like Pascal over Maxwell, with similarly significant gains. He also brought up Kepler trying to run DX12 - as in, don't expect Turing to age well, the way Kepler supported DX11 but not 11.1 and so on.
    Quizzical said:
    Higher core clocks is a dumb bullet point to list under higher IPC.  Instructions per clock is not affected by the clock rate.
    His summary from his leaker is to expect 10-20% IPC improvements from things like the new architecture and doubled L2 cache, plus higher core clocks - all listed under the performance heading.
    Quizzical said:
    Double the tensor cores per compute unit is a weird thing to claim.  Tensor cores in Volta/Turing could already match the full register bandwidth of simple fma.  Doubling that would be bizarre unless they double the shaders per compute unit entirely--which they might, though that would just take them back to Maxwell/Pascal numbers.  Actually, Turing could have easily doubled the tensor cores per compute unit just by changing the granularity that Nvidia decided to describe as a compute unit.  (Yes, yes, "streaming multiprocessor", but that's a dumb name so I don't use it.)
    We'll have to wait and see for more info on that point. It's a new architecture and supposedly an evolution, so it could be quite different from Turing.
    Quizzical said:
    Tensor-accelerated VRAM compression is a weird thing to claim.  I don't know how Nvidia's RAM compression works.  I generally assumed that it was something tied to the memory controllers, which the tensor cores are not.  And you don't put full shaders in memory controllers, much less tensor cores.  You put exactly the things needed to access memory efficiently there and nothing else.
    It could be that Ampere compensates for GDDR6 bandwidth with tensor-core-accelerated VRAM compression and NVCache - similar-ish to what the XSX does with BCPack compression and the Velocity architecture. Then again, in the Turing design the memory controllers are far from the tensor cores. Hypothetically there could be some ML-learned streaming/caching algorithm applied, but that isn't said here. We'll have to see.
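    To put rough numbers on the idea, here's a purely illustrative Python sketch of what a given VRAM compression saving would buy in effective capacity and bandwidth. Every figure is hypothetical (the 12 GB card and 2080 Ti-class bandwidth are stand-ins), and nothing here reflects how Nvidia actually implements compression:

```python
# Purely illustrative: what "compensating for GDDR6 bandwidth with
# compression" means arithmetically. All inputs are hypothetical stand-ins;
# nobody outside Nvidia knows how (or whether) such a scheme works.

physical_vram_gb = 12.0   # hypothetical card capacity
bandwidth_gbs = 616.0     # 2080 Ti-class GDDR6 bandwidth, for scale

for saving in (0.20, 0.30, 0.40):
    ratio = 1.0 / (1.0 - saving)   # e.g. 20% smaller data ~ 1.25x effective
    print(f"{saving:.0%} VRAM saving -> effectively "
          f"{physical_vram_gb * ratio:.1f} GB capacity, "
          f"{bandwidth_gbs * ratio:.0f} GB/s of traffic served")
```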





  • AmazingAvery - Age of Conan Advocate, Member, Uncommon - Posts: 7,188
    Ridelynn said:
    It would be ... difficult ... for nVidia to come out on generation later with a $299 card that equals a $1200 card that was, just the day before, current on the market. Of course, nothing would be stopping nVidia from re-valuing their current tech tiers, and now a 3060 may have an MSRP of $599.
    Agree! I don't see a cheaper option coming to the table for consumers versus what happened with Turing pricing.




  • Quizzical - Member, Legendary - Posts: 25,499
    Quizzical said:
    Let's start with the whopper.  Claiming that ray-tracing isn't going to lower performance anymore is wildly wrong.  The reason that the whole industry didn't go full ray tracing decades ago is that it's intrinsically expensive.  No amount of RTX is going to fix that.
    He said ray tracing should be up to 4 times better per tier, though his information there is still vague. He expects at least double the RT cores, and he does expect a correlation there. Overall the improvement is such that (speculation) a 3060 could be close to a 2080 Ti in terms of performance. RTX ON shouldn't lower performance as much any more; the hit shouldn't be as big as it is today.
    PS the industry is totally going RT (AMD and Intel).

    Never mind 4x ray-tracing performance.  You could have 40x ray-tracing performance and it would still be a huge hit to performance.  Higher ray-tracing performance lets you do more things with it and results in a better looking game.  But maxing it is still going to be a huge performance hit for many years to come.
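    To make that point concrete, here's a toy frame-time model in Python. All numbers are invented, and real renderers overlap raster and RT work, so treat this as a sketch of the argument, not of any actual GPU:

```python
# Toy frame-time model: even a big jump in ray-tracing throughput doesn't make
# RT "free", because games tuned for faster hardware raise the ray budget.
# All numbers below are invented for illustration.

def frame_time_ms(raster_ms, rt_work_units, rt_units_per_ms):
    """Total frame time if raster and RT work run back to back."""
    return raster_ms + rt_work_units / rt_units_per_ms

raster = 8.0     # hypothetical raster cost per frame, ms
rt_work = 80.0   # hypothetical RT workload, arbitrary units

for speedup in (1, 4, 40):
    t = frame_time_ms(raster, rt_work, rt_units_per_ms=10.0 * speedup)
    print(f"{speedup:>2}x RT throughput, same ray budget:     {t:5.1f} ms/frame")

# But a game tuned for the faster hardware scales the ray budget up to match:
for speedup in (1, 4, 40):
    t = frame_time_ms(raster, rt_work * speedup, rt_units_per_ms=10.0 * speedup)
    print(f"{speedup:>2}x RT throughput, {speedup:>2}x ray budget:   {t:5.1f} ms/frame")
```

    With a fixed ray budget, bigger RT speedups shrink the hit toward zero; but a game tuned for the faster hardware spends that headroom on more rays, and the frame-time cost stays.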
  • AmazingAvery - Age of Conan Advocate, Member, Uncommon - Posts: 7,188
    Quizzical said:
    Quizzical said:
    Let's start with the whopper.  Claiming that ray-tracing isn't going to lower performance anymore is wildly wrong.  The reason that the whole industry didn't go full ray tracing decades ago is that it's intrinsically expensive.  No amount of RTX is going to fix that.
    He said ray tracing should be up to 4 times better per tier, though his information there is still vague. He expects at least double the RT cores, and he does expect a correlation there. Overall the improvement is such that (speculation) a 3060 could be close to a 2080 Ti in terms of performance. RTX ON shouldn't lower performance as much any more; the hit shouldn't be as big as it is today.
    PS the industry is totally going RT (AMD and Intel).

    Never mind 4x ray-tracing performance.  You could have 40x ray-tracing performance and it would still be a huge hit to performance.  Higher ray-tracing performance lets you do more things with it and results in a better looking game.  But maxing it is still going to be a huge performance hit for many years to come.
    Well, RT performance is limited by shader cores and not RT cores. What the leak suggests is that the hardware is being improved, but we don't know the details. So it's possible, but we need more info.



  • Cleffy - Member, Rare - Posts: 6,414
    edited May 2020
    The only thing I expect is that it will be notably faster than its predecessor, since it's on a smaller process node. I think the initial flagship will be about 50% faster than the current flagship.
    NVidia typically releases a halo product early and has a revision on that halo product.
    AMD typically releases a product that matches their previous best hardware at a reasonable price, then makes a product that doubles as a toaster.
  • AmazingAvery - Age of Conan Advocate, Member, Uncommon - Posts: 7,188
    edited May 2020
    New video up from the same guy with more info - 


    https://www.youtube.com/watch?v=oCPufeQmFJk

    Summary - 

    Physical card design has been simplified, with fewer screws and possibly a 3-fan design.
    3x DP 2.0
    1x HDMI 2.1
    1x USB-C
    all on PCIe 4.0 interface
    GA102 Performance Specs
    5376 CUDA Cores
    220-230W
    Card boosts above 2.2GHz
    18 Gbps - 864 GB/s bandwidth (40% more than the 2080 Ti)
    Typical overall performance 50% faster than the 2080 Ti, and even 70% faster in some titles
    384-bit Bus Width
    Looks to be leveraging the Twitter leak saying 84 SMs
    21+ TFLOPS
    Ampere RT Cores can process intersections 4x faster than Turing
    Double Tensor Cores for denoising
    Less RT performance loss
    Solid ~10% IPC increase over Turing
    Double L2 Cache
    High end on 7nm EUV
    NVCache is Nvidia's answer to HBCC
    Leverages both DDR and SSD for enhanced load times and VRAM
    Dynamically utilizes bandwidth from SSD, VRAM and DDR for multiple tasks simultaneously

    Tensor Memory Compression - Uses Tensor cores to compress and decompress items stored in VRAM
    Can shave 20-40% off VRAM usage
    Possibly 1 quarter behind AMD in terms of release.
    Rushing to get this out the door (this year).
    Lining up for a mid-September launch (in line with the Cyberpunk 2077 launch)
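    For what it's worth, the leaked numbers above are at least internally consistent. A quick sanity check of the arithmetic in Python (every input is a rumor, not an Nvidia spec):

```python
# Sanity-checking the leaked GA102 figures in the summary above.
# All inputs come from the leak; none are confirmed by Nvidia.

cuda_cores = 5376    # rumored shader count
boost_ghz = 2.2      # "boosts above 2.2GHz"
bus_bits = 384       # rumored bus width
gbps_per_pin = 18    # rumored GDDR6 data rate

# FP32: one fused multiply-add = 2 FLOPs per core per clock.
tflops = cuda_cores * 2 * boost_ghz / 1000
print(f"{tflops:.1f} TFLOPS at boost")  # ~23.7, so "21+ TFLOPS" fits a lower sustained clock

# Bandwidth: bus width (pins) * data rate per pin / 8 bits per byte.
gbs = bus_bits * gbps_per_pin / 8
print(f"{gbs:.0f} GB/s")                # 864 GB/s, matching the list

# The 2080 Ti ships a 352-bit bus at 14 Gbps:
ti_gbs = 352 * 14 / 8                   # 616 GB/s
print(f"{gbs / ti_gbs - 1:.0%} more bandwidth than the 2080 Ti")  # ~40%
```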



  • Quizzical - Member, Legendary - Posts: 25,499
    If the guy was just making things up last time, then why should we care about a new batch of him making things up?

    For what it's worth, none of the memory vendors are talking about 18 Gbps GDDR6, or at least not publicly.  You can't buy GDDR6 from Hynix at all, though they're working on it:

    https://www.skhynix.com/products.do?ct1=36&ct2=49&lang=eng

    Micron will sell you 14 Gbps GDDR6 if you want it.  They're working on 16 Gbps GDDR6, but that is still in the sampling stage:

    https://www.micron.com/products/graphics-memory/gddr6/part-catalog

    If you were to buy the 16 Gbps GDDR6 chips that Micron is now sampling for use with a 384-bit bus, that would force you to use at least 24 GB, by the way.
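    A minimal sketch of that capacity arithmetic, assuming the sampled parts are 16 Gb (2 GB) dies as Micron's catalog suggests:

```python
# Minimum VRAM implied by pairing a 384-bit bus with 16 Gb GDDR6 dies.
# The 32-bit-per-chip interface is standard GDDR6; the 384-bit bus is the rumor.

bus_width_bits = 384    # rumored Ampere bus width
bits_per_chip = 32      # every GDDR6 chip exposes a 32-bit interface
chip_gbit = 16          # assumed density of the sampled parts (16 Gb = 2 GB)

chips = bus_width_bits // bits_per_chip   # 12 chips needed to fill the bus
min_vram_gb = chips * chip_gbit // 8      # 12 chips * 2 GB each
print(f"{chips} chips -> at least {min_vram_gb} GB of VRAM")
```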

    Samsung does offer 16 Gbps GDDR6, as well as sampling that same clock speed at a larger capacity:

    https://www.samsung.com/semiconductor/dram/gddr6/

    For what it's worth, "sampling" basically means that it's available, but only in small quantities.  That will allow AMD, Nvidia, and their various board partners to get a handful of working memory chips that they can use to test their own GPUs and PCB designs and so forth.  That's wildly inappropriate for a high volume commercial launch.

    To get from 18 Gbps GDDR6 not even sampling yet in May to a hard launch in September of the same year is not happening.  Attempting that would at best repeat the GTX 1080/1070 situation of a very soft launch that took about six months to become available at MSRP.  There's no reason to do that when you can readily get all the memory you need at a slightly lower clock speed.  A hard launch in September on 16 Gbps GDDR6 is far more plausible.
  • Ridelynn - Member, Epic - Posts: 7,383
    I wouldn't put it past nV to paper launch the heck out of it just so they can keep people from buying AMD cards.

    Even if only a handful of people can get them, and prices are 2-3x MSRP, you know the Team Green fans will fork it over or wait in line, and a lot of people that don't know any better but listen to the youtuber reviews will all do the same.
  • Quizzical - Member, Legendary - Posts: 25,499
    Ridelynn said:
    I wouldn't put it past nV to paper launch the heck out of it just so they can keep people from buying AMD cards.

    Even if only a handful of people can get them, and prices are 2-3x MSRP, you know the Team Green fans will fork it over or wait in line, and a lot of people that don't know any better but listen to the youtuber reviews will all do the same.
    Why do a paper launch with 18 Gbps GDDR6 when you could do a hard launch with 16 Gbps GDDR6 and then a refresh with 18 Gbps GDDR6 when the memory is actually ready?
  • GladDog - Member, Rare - Posts: 1,097
    If Nvidia actually had something viable based on this, they would be loud and brash about it, as they usually are.  AMD also does this, but Nvidia is notorious about lauding themselves to the heavens.  I doubt that it would be 'leaked' as a marketing tool unless they were ready to officially release specs.

    I ignore all of this stuff.  I have an RTX2060 video card, but I waited until LONG after initial release, after the tech was proven and had gotten cheap.  I'll do the same with the next gen stuff.


    The world is going to the dogs, which is just how I planned it!


  • Vrika - Member, Legendary - Posts: 7,989
    GladDog said:
    If Nvidia actually had something viable based on this, they would be loud and brash about it, as they usually are. 
    It's NVidia's style to be really quiet until they are close to launching. I don't believe in this rumor, but they may have something and they're just keeping it quiet until they're ready to make a loud announcement.
     
  • AmazingAvery - Age of Conan Advocate, Member, Uncommon - Posts: 7,188
    Vrika said:
    GladDog said:
    If Nvidia actually had something viable based on this, they would be loud and brash about it, as they usually are. 
    It's NVidia's style to be really quiet until they are close to launching. I don't believe in this rumor, but they may have something and they're just keeping it quiet until they're ready to make a loud announcement.
    Yep, they are waiting to see what Navi 2X brings.



  • Vrika - Member, Legendary - Posts: 7,989
    Vrika said:
    GladDog said:
    If Nvidia actually had something viable based on this, they would be loud and brash about it, as they usually are. 
    It's NVidia's style to be really quiet until they are close to launching. I don't believe in this rumor, but they may have something and they're just keeping it quiet until they're ready to make a loud announcement.
    Yep, they are waiting to see what Navi 2X brings.
    I don't think they're doing that either. I think they just aren't quite ready yet, except for the professional cards where we should get some announcement on Thursday.
     
  • AmazingAvery - Age of Conan Advocate, Member, Uncommon - Posts: 7,188
    Vrika said:
    Vrika said:
    GladDog said:
    If Nvidia actually had something viable based on this, they would be loud and brash about it, as they usually are. 
    It's NVidia's style to be really quiet until they are close to launching. I don't believe in this rumor, but they may have something and they're just keeping it quiet until they're ready to make a loud announcement.
    Yep, they are waiting to see what Navi 2X brings.
    I don't think they're doing that either. I think they just aren't quite ready yet, except for the professional cards where we should get some announcement on Thursday.
    Yup, agree with that too.
    1. They want to see what Big Navi is - the new console info that came out showed them what capability fits inside what power envelope.
    2. Paper launch to win the performance crown.
    3. They split supply across fabs (lower-end gaming cards come from Samsung and high volume from TSMC), so there are questions about the capacity to deliver in a timely fashion, and I think they screwed up a bit there and are behind.



  • Quizzical - Member, Legendary - Posts: 25,499
    Vrika said:
    GladDog said:
    If Nvidia actually had something viable based on this, they would be loud and brash about it, as they usually are. 
    It's NVidia's style to be really quiet until they are close to launching. I don't believe in this rumor, but they may have something and they're just keeping it quiet until they're ready to make a loud announcement.
    How much noise Nvidia makes about upcoming launches depends tremendously on their current competitive position.  In early 2010, they were leaking like a sieve about upcoming Fermi cards because they knew that with the lineup as it was, just about anyone who spent more than $100 on a video card was going to buy AMD, and they wanted to convince people to wait.  At the other extreme, in mid-2018, rumors say that Nvidia artificially delayed the launch quite a bit to give excess Pascal inventory time to sell off, as they knew that they were the clear market leader even without Turing.
  • Ridelynn - Member, Epic - Posts: 7,383
    Quizzical said:
    Ridelynn said:
    I wouldn't put it past nV to paper launch the heck out of it just so they can keep people from buying AMD cards.

    Even if only a handful of people can get them, and prices are 2-3x MSRP, you know the Team Green fans will fork it over or wait in line, and a lot of people that don't know any better but listen to the youtuber reviews will all do the same.
    Why do a paper launch with 18 Gbps GDDR6 when you could do a hard launch with 16 Gbps GDDR6 and then a refresh with 18 Gbps GDDR6 when the memory is actually ready?
    Because 18 > 16

    No one said it makes sense either way, but that's how marketing often goes. And if there is a "shortage", even if it's completely artificial, it drives more people to rush out to get them, because of FOMO.
  • Ridelynn - Member, Epic - Posts: 7,383
    Vrika said:
    GladDog said:
    If Nvidia actually had something viable based on this, they would be loud and brash about it, as they usually are. 
    It's NVidia's style to be really quiet until they are close to launching. I don't believe in this rumor, but they may have something and they're just keeping it quiet until they're ready to make a loud announcement.
    Yep, they are waiting to see what Navi 2X brings.
    What I do think nV is waiting on is a day when they need the stock market price to pop. That's when I think we'll see official details start to dribble out.
  • Cleffy - Member, Rare - Posts: 6,414
    I think the story on why nVidia is now behind on Ampere production is pretty funny. They wanted to shop around for 7nm production to get a better price. So AMD swept in and bought all of TSMC's remaining 7nm production, including what would have been allocated to Ampere.
  • jusomdude - Member, Rare - Posts: 2,706
    Finding it hard to believe a $300 card would perform as well as a $1200 one in just one generation. If they want to increase the price of the 3060 by $100 or more, that's really a kick in the nuts to the mainstream.
  • Quizzical - Member, Legendary - Posts: 25,499
    jusomdude said:
    Finding it hard to believe a $300 card would perform as well as a $1200 one in just one generation. If they want to increase the price of the 3060 by $100 or more, that's really a kick in the nuts to the mainstream.
    It's not that implausible.  It would basically mean doubling performance in two years.

    But do remember that the original source is just some random person who is making things up.
  • Ridelynn - Member, Epic - Posts: 7,383
    Quizzical said:
    jusomdude said:
    Finding it hard to believe a $300 card would perform as well as a $1200 one in just one generation. If they want to increase the price of the 3060 by $100 or more, that's really a kick in the nuts to the mainstream.
    It's not that implausible.  It would basically mean doubling performance in two years.

    But do remember that the original source is just some random person who is making things up.
    The jump in performance isn't unheard of.

    But the drop in price would be.
  • Quizzical - Member, Legendary - Posts: 25,499
    Any drop in price from a $1200 consumer GPU would be unprecedented, as the $1200 consumer GPU itself is unprecedented.
  • Ridelynn - Member, Epic - Posts: 7,383
    Quizzical said:
    Any drop in price from a $1200 consumer GPU would be unprecedented, as the $1200 consumer GPU itself is unprecedented.
    *was unprecedented.