
Did Nvidia just kill AMD?

13 Comments

  • Quizzical (Legendary Member, 25,499 posts)
    Originally posted by Pellagren
    What a strange thread.  Nvidia has made so much more in profit, the only true measure of success of a company and its products, that the argument is really inane.  AMD/ATI catering to an increasingly small and focused group of consumers will be their ultimate demise.  Free markets, more often than not, choose the winner.

    Because Nvidia's Quadro cards holding most of the market for professional graphics while AMD's Bulldozer CPU architecture and its derivatives struggle proves so much about GeForce versus Radeon cards?  Right.

    To the contrary, it's Nvidia that is catering to a smaller market than AMD, largely because AMD has an x86 license and Nvidia doesn't.  But that also has nothing to do with a comparison of GeForce and Radeon discrete cards.

  • Hrimnir (Rare Member, 2,415 posts)

    I finally got a chance to read AnandTech's article on the new 980/970 cards, and I gotta say, NVIDIA is in a VERY good position right now.  I mean VERY.

    It's not just the price/perf of the cards; when you look into the details, a few things come up:

    1. These are still manufactured on the 28nm process, which means supply won't be an issue, and all of these efficiency gains were made without shrinking the manufacturing process, which is historically how both ATI and NVIDIA achieved more efficient GPUs.

    2. The 980 performs roughly the same as a 780 Ti, but does so with only 2048 CUDA cores vs. the 780 Ti's 2880.

    3. The 980 is a 5.2b-transistor part, compared to 7.1b on the 780 Ti, again on the same 28nm process.

    4. The 980 produces all of this with only a 165W TDP, vs. 250W on the 780 Ti (and the AMD cards are even worse on GFLOPS/watt).

    All this means that at literally any time Nvidia wants, they could just drop the hammer on AMD: release a card, call it a 980 Ti or something like that, with 2880 CUDA cores, still with probably a 225-250W TDP, and it would outperform the current cards by 20-25% or better.  They could do this tomorrow if they wanted.

     

    IMO this is bad bad bad news for AMD.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Mickyknox (Uncommon Member, 61 posts)
    They both have fast cards, but for me it comes down to driver support, and Nvidia in my book is top dog for drivers. Not to mention that program for your games, GeForce Experience, is one hell of a program.
  • Cleffy (Rare Member, 6,414 posts)
    Originally posted by Hrimnir

    I finally got a chance to read AnandTech's article on the new 980/970 cards, and I gotta say, NVIDIA is in a VERY good position right now.  I mean VERY.

    It's not just the price/perf of the cards; when you look into the details, a few things come up:

    1. These are still manufactured on the 28nm process, which means supply won't be an issue, and all of these efficiency gains were made without shrinking the manufacturing process, which is historically how both ATI and NVIDIA achieved more efficient GPUs.

    2. The 980 performs roughly the same as a 780 Ti, but does so with only 2048 CUDA cores vs. the 780 Ti's 2880.

    3. The 980 is a 5.2b-transistor part, compared to 7.1b on the 780 Ti, again on the same 28nm process.

    4. The 980 produces all of this with only a 165W TDP, vs. 250W on the 780 Ti (and the AMD cards are even worse on GFLOPS/watt).

    All this means that at literally any time Nvidia wants, they could just drop the hammer on AMD: release a card, call it a 980 Ti or something like that, with 2880 CUDA cores, still with probably a 225-250W TDP, and it would outperform the current cards by 20-25% or better.  They could do this tomorrow if they wanted.

     

    IMO this is bad bad bad news for AMD.

    You don't just drop a 7.1 billion transistor GPU on the market. It has to be designed and spun first, which takes six months. By that time we don't know what will be out; for all we know AMD might have moved on to a smaller process node.

    GPU design is all about trade-offs, then iterating your GPU around how those trade-offs panned out. I think it would be a relatively dumb investment for nVidia to spin up another 28nm chip. We also don't know how Maxwell pans out in the greater scope of things, like in OpenGL or DirectX 11.2-12. All we really know is how it performs in Battlefield 4. Similarly, we know how AMD performed with digital currency mining. Really just two different priorities, reflecting where the two companies think the market is heading.

    The problem with nVidia is that it still needs to support all the proprietary shit they threw on it that only a handful of developers ever use: PhysX, nVision, GSync, and so on. nVidia has also been worse in driver support over the last decade.

  • BillSussman (Member, 42 posts)
    Originally posted by Jockan
    Originally posted by Torcip
    So with how crazy low priced Nvidia's new flagship Maxwell cards are, how is AMD going to respond?

    Same as always. AMD is the one who made Nvidia start going lower on prices.

    Pity AMD sucks ass though. I really wish I'd never built my latest rig using AMD parts. AMD cards are weak as piss. My 7950s are fully maxed and still don't outperform most Nvidia cards (even cheaper ones).

  • akkedis86 (Uncommon Member, 123 posts)

    I do not really care for flagship marketing when my 280X is $50 cheaper than a competing GeForce 770 OC.

    Everybody seems to have missed that AMD recently released the 285 at a rock-bottom price.

    AMD also has the Mantle API for consoles, which helps with closer communication with the hardware (DirectX 12 stole some ideas from Mantle).

    APUs may not have the CPU performance that Intel has on their hybrid chips, but their graphics far outperform Intel's. So it is realistic to imagine that an application that can take advantage of OpenCL, or some sort of hardware acceleration, would run great. As is increasingly the case.

    This is also good news for CUDA, which I am not a big fan of, for no particular reason.

    AMD is doing massive restructuring. They have stopped competing with Intel on purpose, and are focusing on APU technologies, and soon ARM SoCs.

    Nvidia has a history of forcing competitors out of the GPU business to try to build a monopoly. Don't believe me? Google it. The fact is, they do not want competition.

    Nvidia's Tegra is not the best ARM chip out there by a long shot. The fact is, were it not for competition, you would be paying a lot more for your cards.

    In my opinion, MSI + AMD = an awesome gaming and overclocking experience at a really affordable price.

    AMD is the overclocker's dream. Once again, CPUs are irrelevant in today's environment, and I bet AMD will strike back soon.

    Maybe not on their flagship cards, but in the consumer range, which the average person buys.

  • BillSussman (Member, 42 posts)
    Originally posted by akkedis86

    I do not really care for flagship marketing when my 280X is $50 cheaper than a competing GeForce 770 OC.

    Everybody seems to have missed that AMD recently released the 285 at a rock-bottom price.

    AMD also has the Mantle API for consoles, which helps with closer communication with the hardware (DirectX 12 stole some ideas from Mantle).

    APUs may not have the CPU performance that Intel has on their hybrid chips, but their graphics far outperform Intel's. So it is realistic to imagine that an application that can take advantage of OpenCL, or some sort of hardware acceleration, would run great. As is increasingly the case.

    This is also good news for CUDA, which I am not a big fan of, for no particular reason.

    AMD is doing massive restructuring. They have stopped competing with Intel on purpose, and are focusing on APU technologies, and soon ARM SoCs.

    Nvidia has a history of forcing competitors out of the GPU business to try to build a monopoly. Don't believe me? Google it. The fact is, they do not want competition.

    Nvidia's Tegra is not the best ARM chip out there by a long shot. The fact is, were it not for competition, you would be paying a lot more for your cards.

    In my opinion, MSI + AMD = an awesome gaming and overclocking experience at a really affordable price.

    AMD is the overclocker's dream. Once again, CPUs are irrelevant in today's environment, and I bet AMD will strike back soon.

    Maybe not on their flagship cards, but in the consumer range, which the average person buys.

    Yeah, overclocker's dream, huh... My 7950s are maxed out, yet shittier stock NVIDIA cards still outperform them.

  • Hrimnir (Rare Member, 2,415 posts)
    Originally posted by Cleffy
    Originally posted by Hrimnir

    I finally got a chance to read AnandTech's article on the new 980/970 cards, and I gotta say, NVIDIA is in a VERY good position right now.  I mean VERY.

    It's not just the price/perf of the cards; when you look into the details, a few things come up:

    1. These are still manufactured on the 28nm process, which means supply won't be an issue, and all of these efficiency gains were made without shrinking the manufacturing process, which is historically how both ATI and NVIDIA achieved more efficient GPUs.

    2. The 980 performs roughly the same as a 780 Ti, but does so with only 2048 CUDA cores vs. the 780 Ti's 2880.

    3. The 980 is a 5.2b-transistor part, compared to 7.1b on the 780 Ti, again on the same 28nm process.

    4. The 980 produces all of this with only a 165W TDP, vs. 250W on the 780 Ti (and the AMD cards are even worse on GFLOPS/watt).

    All this means that at literally any time Nvidia wants, they could just drop the hammer on AMD: release a card, call it a 980 Ti or something like that, with 2880 CUDA cores, still with probably a 225-250W TDP, and it would outperform the current cards by 20-25% or better.  They could do this tomorrow if they wanted.

     

    IMO this is bad bad bad news for AMD.

    You don't just drop a 7.1 billion transistor GPU on the market. It has to be designed and spun first, which takes six months. By that time we don't know what will be out; for all we know AMD might have moved on to a smaller process node.

    GPU design is all about trade-offs, then iterating your GPU around how those trade-offs panned out. I think it would be a relatively dumb investment for nVidia to spin up another 28nm chip. We also don't know how Maxwell pans out in the greater scope of things, like in OpenGL or DirectX 11.2-12. All we really know is how it performs in Battlefield 4. Similarly, we know how AMD performed with digital currency mining. Really just two different priorities, reflecting where the two companies think the market is heading.

    The problem with nVidia is that it still needs to support all the proprietary shit they threw on it that only a handful of developers ever use: PhysX, nVision, GSync, and so on. nVidia has also been worse in driver support over the last decade.

    Clearly you didn't actually read my post.  The 780 Ti is ALREADY a 7.1b-transistor chip.

    And in case you don't feel like googling, here are about a dozen different tests, done in far more than BF4:

    http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review

    NVIDIA GPU Specification Comparison
      GTX 980 GTX 970 GTX 780 Ti GTX 770
    CUDA Cores 2048 1664 2880 1536
    Texture Units 128 104 240 128
    ROPs 64 64 48 32
    Core Clock 1126MHz 1050MHz 875MHz 1046MHz
    Boost Clock 1216MHz 1178MHz 928MHz 1085MHz
    Memory Clock 7GHz GDDR5 7GHz GDDR5 7GHz GDDR5 7GHz GDDR5
    Memory Bus Width 256-bit 256-bit 384-bit 256-bit
    VRAM 4GB 4GB 3GB 2GB
    FP64 1/32 FP32 1/32 FP32 1/24 FP32 1/24 FP32
    TDP 165W 145W 250W 230W
    GPU GM204 GM204 GK110 GK104
    Transistor Count 5.2B 5.2B 7.1B 3.5B
    Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 28nm
    Launch Date 09/18/14 09/18/14 11/07/13 05/30/13
    Launch Price $549 $329 $699 $399
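    A quick sanity check on the efficiency claims in that table: peak FP32 throughput is roughly 2 FLOPs (one fused multiply-add) per CUDA core per clock, so the columns can be compared directly. A minimal sketch in Python, using the boost clocks and TDPs from the table above (theoretical peaks, not measured gaming performance):

    ```python
    # Theoretical FP32 throughput: 2 FLOPs (one FMA) per CUDA core per clock.
    cards = {
        # name: (CUDA cores, boost clock in GHz, TDP in watts)
        "GTX 980":    (2048, 1.216, 165),
        "GTX 780 Ti": (2880, 0.928, 250),
    }

    for name, (cores, ghz, tdp) in cards.items():
        gflops = 2 * cores * ghz  # GFLOPS = 2 * cores * clock (GHz)
        print(f"{name}: {gflops:.0f} GFLOPS, {gflops / tdp:.1f} GFLOPS/W")
    ```

    On these theoretical numbers the 980 lands around 4,981 GFLOPS at about 30.2 GFLOPS/W, versus about 5,345 GFLOPS at 21.4 GFLOPS/W for the 780 Ti, which is the poster's point in miniature: similar peak throughput from far fewer cores and watts.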
     

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Cryptor (Uncommon Member, 523 posts)
    Nvidia killed AMD off a good 5 years ago, if not more.
  • aRtFuLThinG (Uncommon Member, 1,387 posts)
    Originally posted by Jockan
    Originally posted by Gestankfaust
    Originally posted by Jockan
    Originally posted by Torcip
    So with how crazy low priced Nvidia's new flagship Maxwell cards are, how is AMD going to respond?

     

     

    Same as always. AMD is the one who made Nvidia start going lower on prices.

    That's one take on it. If you've only been following hardware for about 5 years.

     

     

    How far do we have to go back? LMAO

    Maybe he is suggesting that we should go all the way back to the days of Voodoo3 vs S3 Virge, or even EGA vs Monochrome, lol :P

     

  • Cleffy (Rare Member, 6,414 posts)
    Originally posted by Hrimnir

    Clearly you didn't actually read my post.  The 780 Ti is ALREADY a 7.1b-transistor chip.

    And in case you don't feel like googling, here are about a dozen different tests, done in far more than BF4:

    http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review

    NVIDIA GPU Specification Comparison
      GTX 980 GTX 970 GTX 780 Ti GTX 770
    CUDA Cores 2048 1664 2880 1536
    Texture Units 128 104 240 128
    ROPs 64 64 48 32
    Core Clock 1126MHz 1050MHz 875MHz 1046MHz
    Boost Clock 1216MHz 1178MHz 928MHz 1085MHz
    Memory Clock 7GHz GDDR5 7GHz GDDR5 7GHz GDDR5 7GHz GDDR5
    Memory Bus Width 256-bit 256-bit 384-bit 256-bit
    VRAM 4GB 4GB 3GB 2GB
    FP64 1/32 FP32 1/32 FP32 1/24 FP32 1/24 FP32
    TDP 165W 145W 250W 230W
    GPU GM204 GM204 GK110 GK104
    Transistor Count 5.2B 5.2B 7.1B 3.5B
    Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 28nm
    Launch Date 09/18/14 09/18/14 11/07/13 05/30/13
    Launch Price $549 $329 $699 $399
     

    Clearly you didn't read what I said. You don't spin a new GPU chip overnight. The GTX 980 is clocked much higher than the 780 Ti. There is no way they are putting on 40% more transistors without some major drawback.

    Like I said, GPUs are trade-offs, and no GPU has it all. This GPU is very impressive from nVidia, but it is still crammed full of all the nVidia proprietary tools. It's difficult to say a 5.2-billion-transistor chip is magically better than a 7.1-billion one purely based on efficiency gains. There is a trade-off going on here. As we know from nVidia's history, they are very keen on optimizing drivers for benchmarked games and working with developers to support the way they do things. But will this business strategy work in an AMD-dominated gaming market? Up until a week ago AMD was the best pick for GPU in every price category, and all console games are developed around AMD's GCN architecture.

    To me, in the current development environment, I don't see how nVidia can crush AMD while still making room for technologies that will never be used.

  • Hrimnir (Rare Member, 2,415 posts)
    Originally posted by Cleffy
    Originally posted by Hrimnir

    Clearly you didn't actually read my post.  The 780 Ti is ALREADY a 7.1b-transistor chip.

    And in case you don't feel like googling, here are about a dozen different tests, done in far more than BF4:

    http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review

    NVIDIA GPU Specification Comparison
      GTX 980 GTX 970 GTX 780 Ti GTX 770
    CUDA Cores 2048 1664 2880 1536
    Texture Units 128 104 240 128
    ROPs 64 64 48 32
    Core Clock 1126MHz 1050MHz 875MHz 1046MHz
    Boost Clock 1216MHz 1178MHz 928MHz 1085MHz
    Memory Clock 7GHz GDDR5 7GHz GDDR5 7GHz GDDR5 7GHz GDDR5
    Memory Bus Width 256-bit 256-bit 384-bit 256-bit
    VRAM 4GB 4GB 3GB 2GB
    FP64 1/32 FP32 1/32 FP32 1/24 FP32 1/24 FP32
    TDP 165W 145W 250W 230W
    GPU GM204 GM204 GK110 GK104
    Transistor Count 5.2B 5.2B 7.1B 3.5B
    Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 28nm
    Launch Date 09/18/14 09/18/14 11/07/13 05/30/13
    Launch Price $549 $329 $699 $399
     

    Clearly you didn't read what I said. You don't spin a new GPU chip overnight. The GTX 980 is clocked much higher than the 780 Ti. There is no way they are putting on 40% more transistors without some major drawback.

    Like I said, GPUs are trade-offs, and no GPU has it all. This GPU is very impressive from nVidia, but it is still crammed full of all the nVidia proprietary tools. It's difficult to say a 5.2-billion-transistor chip is magically better than a 7.1-billion one purely based on efficiency gains. There is a trade-off going on here. As we know from nVidia's history, they are very keen on optimizing drivers for benchmarked games and working with developers to support the way they do things. But will this business strategy work in an AMD-dominated gaming market? Up until a week ago AMD was the best pick for GPU in every price category, and all console games are developed around AMD's GCN architecture.

    To me, in the current development environment, I don't see how nVidia can crush AMD while still making room for technologies that will never be used.

    You do realize your argument is completely bunk.  They've already done it; it's called GTX 980 vs GTX 970.  Holy crap, it has fewer CUDA cores and SMMs?!?!?!?!?!

    And yes, I read your post.  You're not "spinning" shit overnight.  You're taking literally the EXACT same architecture and adding more SMMs. Holy mother of god, how did I figure that out so easily?  It couldn't be that... wait... no... they didn't do it with the... 780 Ti??!!!!  Or the Titan, or any of those based on exactly the same architecture...

    Imagine you have an inline 4-cylinder engine, and you add 2 more cylinders, and OMFG, you have an inline 6-cylinder engine.  No special ultra engineering required.

    And I'm sorry, but you come off as a fanboy.  Nvidia has no more proprietary BS than AMD does, with the sole exception of G-Sync.  I also provided you a link which shows no fewer than NINE different games it was benchmarked on.  Are you seriously going to claim that it's all Nvidia secret-sauce driver optimizations?

    Here's your tinfoil hat, bro.

    Oh, btw, that article goes into great detail about precisely how Nvidia achieved those efficiencies with a chip with 2 billion fewer transistors.  In case you're being lazy, which I suspect you are, here's an excerpt:

    "Starting with the Maxwell 1 SMM, NVIDIA has adjusted their streaming multiprocessor layout to achieve better efficiency. Whereas the Kepler SMX was for all practical purposes a large, flat design with 4 warp schedulers and 15 different execution blocks, the SMM has been heavily partitioned. Physically each SMM is still one contiguous unit, not really all that different from an SMX. But logically the execution blocks which each warp scheduler can access have been greatly curtailed.

    The end result is that in an SMX the 4 warp schedulers would share most of their execution resources and work out which warp was on which execution resource for any given cycle. But on an SMM, the warp schedulers are removed from each other and given complete dominion over a far smaller collection of execution resources. No longer do warp schedulers have to share FP32 CUDA cores, special function units, or load/store units, as each of those is replicated across each partition. Only texture units and FP64 CUDA cores are shared."
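    The partitioning described in that excerpt reduces to simple arithmetic. A small sketch, using the published per-SM figures (192 FP32 cores and 4 warp schedulers per Kepler SMX; 128 FP32 cores and 4 schedulers per Maxwell SMM; warps are 32 threads wide):

    ```python
    # Kepler SMX: 4 warp schedulers share 192 FP32 cores (48 per scheduler,
    # which is not a multiple of the 32-wide warp, so execution resources
    # must be shared across schedulers).
    # Maxwell SMM: 4 partitions of 32 cores each, one per scheduler.
    WARP = 32
    kepler_per_scheduler  = 192 / 4   # 48.0
    maxwell_per_scheduler = 128 / 4   # 32.0

    print(kepler_per_scheduler % WARP)   # 16.0: Kepler straddles warp boundaries
    print(maxwell_per_scheduler % WARP)  # 0.0: Maxwell lines up with one warp exactly
    ```

    That alignment is why the SMM can give each scheduler "complete dominion" over its own cores: 32 cores per scheduler is exactly one warp's width, so nothing needs to be negotiated between schedulers cycle by cycle.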

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Hrimnir (Rare Member, 2,415 posts)
    Originally posted by aRtFuLThinG
    Originally posted by Jockan
    Originally posted by Gestankfaust
    Originally posted by Jockan
    Originally posted by Torcip
    So with how crazy low priced Nvidia's new flagship Maxwell cards are, how is AMD going to respond?

     

     

    Same as always. AMD is the one who made Nvidia start going lower on prices.

    That's one take on it. If you've only been following hardware for about 5 years.

     

     

    How far do we have to go back? LMAO

    Maybe he is suggesting that we should go all the way back to the days of Voodoo3 vs S3 Virge, or even EGA vs Monochrome, lol :P

     

    Don't forget about Matrox, man.  Those were the shizz.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • TheLizardbones (Common Member, 10,910 posts)

    Kind of a silly addendum to the discussion, but Nvidia released some pics rendered using the tech that is in their 980 and 970 cards.  They used Voxel Global Illumination to show how the photos taken on the moon could look exactly as they do, without resorting to any conspiracy theories.

     

    http://www.cnet.com/news/nvidias-new-gpu-sinks-moon-landing-hoax-using-virtual-light/

     

    Kind of neat.  image

    I can not remember winning or losing a single debate on the internet.

  • Quizzical (Legendary Member, 25,499 posts)
    Originally posted by Hrimnir

    All this means that at literally any time Nvidia wants, they could just drop the hammer on AMD: release a card, call it a 980 Ti or something like that, with 2880 CUDA cores, still with probably a 225-250W TDP, and it would outperform the current cards by 20-25% or better.  They could do this tomorrow if they wanted.

     

    IMO this is bad bad bad news for AMD.

    Could Nvidia make a larger Maxwell card with more of everything?  Sure.  Is it a trivial thing to do?  No.  Could they launch such a card tomorrow?  Not if they decided today that they were going to make it.  If they decided to make such a card today, launch could easily be a year away or more.  Recall that Nvidia had GF100 (GTX 480) chips back from the fab in September 2009, and it was obvious that the chips were a mess.  Nvidia set out to fix the chips--not even design a new chip or scale anything up, but just fix a broken chip--and the GeForce GTX 580 didn't launch until November 2010.

    Will Nvidia make a larger Maxwell chip that is just a scaled-up GTX 980?  That's extremely unlikely.  For the last several generations, going back to at least the GeForce GTX 480 and Radeon HD 5870, the top chip of a generation from each of AMD and Nvidia has always had some stuff present for GPU compute that wasn't physically present in the lower end cards.  Considering how important Nvidia thinks GPU compute is to its future as the market for lower end laptop and desktop discrete cards shrinks due to ever-increasing performance from integrated graphics, I regard it as very unlikely that Nvidia would abandon that approach.

    So perhaps the better question is, will Nvidia create a new top end Maxwell GPU chip on 28 nm with added GPU compute stuff?  There, Nvidia probably either started work on that exact chip long ago or else it's not going to exist.  If Nvidia starts today and the chip launches at the same time that AMD launches a 16 nm chip that destroys it in every way imaginable, the chip is obsolete the day it launches.  You don't want to put many millions of dollars into developing a new chip that is obsolete on launch day.

    Could Nvidia make a 2880 shader Maxwell chip, as you propose?  That I'd regard as extremely unlikely, as it would take a major redesign.  Maxwell SMMs have 128 shaders each.  2880 shaders means 22 1/2 SMMs.  Cutting a chip in half doesn't mean you cut the performance in half.  It means the chip doesn't work.
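    The shader arithmetic in that paragraph is easy to verify: with 128 shaders per SMM, a buildable Maxwell part must carry a multiple of 128 shaders. A quick check in Python:

    ```python
    SHADERS_PER_SMM = 128  # Maxwell groups FP32 shaders into 128-wide SMMs

    def smm_count(shaders):
        """Return the SMM count for a shader total, or None if it isn't buildable."""
        return shaders // SHADERS_PER_SMM if shaders % SHADERS_PER_SMM == 0 else None

    print(smm_count(2048))  # 16   -> GTX 980
    print(smm_count(1664))  # 13   -> GTX 970
    print(smm_count(2880))  # None -> 22.5 SMMs; not a valid Maxwell configuration
    ```

    So a hypothetical "2880-shader Maxwell" would need either 22 SMMs (2816 shaders) or 23 SMMs (2944 shaders), not 2880 exactly, which is the point being made about fractional SMMs.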

    Will Nvidia make a new, top end Maxwell chip on 28 nm with a die as big as previous top end chips?  If you asked me six months ago, I'd have regarded it as probable.  I'd still regard it as likely, but not guaranteed.  Both AMD and Nvidia usually start with the top end chips and work their way down the stack, so it would be unusual for the big chip to come last.  Unusual doesn't mean impossible, as there have been exceptions such as the Radeon HD 6870 beating the Radeon HD 6970 to market because the latter was delayed when TSMC canceled its 32 nm process node.

    -----

    Since the 28 nm process node has been around for a while, Nvidia should be able to order a ton of GM204 chips.  That doesn't guarantee that Nvidia already has a ton of cards available to buy today.  Most of the SKUs of both the GTX 970 and GTX 980 are out of stock on Newegg at the moment.

    A lot depends on when you decide to launch the card.  Cards are available when they're available, but you have enough to send to press for reviews months before you have enough for mass retail availability.  If you decide to do a paper launch as soon as reviews are ready, it takes months before it's easy to find and buy such a card.  If you wait a few months, you can do a hard launch and anyone who wants a card can easily find it at MSRP.  What did Nvidia do?  Watch which SKUs stay in stock and we'll find out.

    I don't think it would make much sense for Nvidia to do a paper launch like they did with the GeForce GTX 680.  At that point, Nvidia was in desperate straits, having lost the last three generations to AMD badly, on top of being a generation behind as AMD already had its entire Radeon HD 7000 series available at retail.  Nvidia did a paper launch early to say, hey, we're finally going to catch up this generation; wait a bit and you can get a good card from Nvidia--or at least price drops from AMD.  But Nvidia was in no such desperate situation a week ago, as Kepler was already competitive with AMD's GCN cards.

    -----

    While Maxwell cards are still on 28 nm, it's not the same 28 nm process node; it's a 28 nm HPM process node that Kepler and GCN cards weren't built on because it wasn't available.  How much of Maxwell's gains were due to process node improvements rather than architecture improvements?  Probably not that much, but we don't know for certain.  It's likely that a lot of the gains were due to designing for process nodes that are now well understood while the architecture was being designed, which is an advantage that neither AMD's GCN cards nor Nvidia's Kepler cards had.  It's far from automatic that Maxwell will maintain such a lead with the shrink to 16/14 nm.

    -----

    One thing that I think hasn't gotten nearly enough attention is the 2 MB L2 cache; for comparison, the GeForce GTX 780 Ti had 512 KB of L2 cache, in spite of being a much larger chip.  The lower end GeForce GTX 750 Ti with 5 SMMs had 2 MB of L2 cache, and the new GeForce GTX 980 with 16 SMMs still has 2 MB of L2 cache.  If Nvidia thinks 2 MB of cache is exactly the right amount to do something cool with texture caching or some such, that would speak favorably of Nvidia's efforts to scale up Maxwell to huge chips or shrink it to new process nodes.  It would not speak favorably of Nvidia's chances to finally be competitive in the sub-$150 market, as 2 MB of cache would eat up a considerable fraction of a small die.
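    The cache observation can be put in numbers. Per-SM counts follow from dividing shader totals by the per-SM width (2880 / 192 = 15 SMXs for the 780 Ti; 640 / 128 = 5 and 2048 / 128 = 16 SMMs for the 750 Ti and 980), using the cache sizes quoted in the post:

    ```python
    # L2 cache (KB) and SM counts for the cards discussed above.
    l2_kb = {"GTX 780 Ti": 512, "GTX 750 Ti": 2048, "GTX 980": 2048}
    sms   = {"GTX 780 Ti": 15,  "GTX 750 Ti": 5,    "GTX 980": 16}

    for card in l2_kb:
        per_sm = l2_kb[card] / sms[card]
        print(f"{card}: {l2_kb[card]} KB L2 total, {per_sm:.0f} KB per SM")
    ```

    The striking part is that the L2 total is pinned at 2 MB across Maxwell parts regardless of SM count, rather than scaling per SM the way Kepler's did, which is exactly what makes the "2 MB is the magic number" reading plausible.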

  • Quizzical (Legendary Member, 25,499 posts)
    Originally posted by BillSussman

    Yea overclockers dream huh.. My 7950s are maxed out. Yet shittier stock NVIDIA cards still out perform it. 

    Because a card that isn't the top end when it launches would normally be expected to still beat the high end 2 1/2 years later if you just overclock it a little?  Right.

  • sn072856 (Uncommon Member, 56 posts)

    One of the advantages of being an ancient troll is that I've had time to be there and do that...

    I've used all sorts of video cards over the years.

    This is transient.

    Next year it will be all about the next great thing...

    DEAL WITH IT.

    It's never going to change...

    I have a life, it's just different from yours...

  • Mikeha (Epic Member, 9,196 posts)

    Everybody's waiting to see what AMD brings next. Even my Nvidia buddy at work had no idea these cards had even released. lol

  • Ridelynn (Epic Member, 7,383 posts)


    Originally posted by Jockan
    Everybody's waiting to see what AMD brings next. Even my Nvidia buddy at work had no idea these cards had even released. lol


    The 285X I think gives us a good indication of what's coming down the pike. It's the first Tonga card we've seen, and a Tonga XT has been rumored to come out by the end of the year.

    That being said, the 285X is more or less a respin of the 280. It added very little in terms of performance over the 280; we just saw it come up to date in terms of features. I would expect Tonga XT to do pretty much the same thing for Hawaii XT: maybe a bit in performance, but not much, mostly just bringing the cards up to feature parity.

    As far as adding more CUDA cores to a 980: it's the GM204 die we are seeing. I fully expect a GM210 die to come out, but not until it absolutely needs to. We saw the same thing with Kepler: the GK104 die was released as the 680; that didn't mean there wasn't a GK110 die, and we didn't see the GK110 until it was released as Titan. The GK110 was rumored to be delayed due to production issues, so they pushed the GK104 out early (and that was a paper launch as it was). Will Maxwell be the same? I doubt it, but I also doubt that they have the GM210 in their pocket and are just waiting to release it. If I were nVidia (and I'm not, just to clarify): the GM204 already beats Hawaii XT (and probably will beat Tonga XT), so why push GM210 if you don't have to? I would sit on it for a few months, put a few more months of R&D into it (Turbo 3.0?), and then push it to production either when a real threat in performance looks to be coming down the pike (which AMD doesn't appear likely to deliver at 28nm), or when sales of GM204 start to lag, at which point you push GM210 at a premium price, cut the price of GM204, and pick up sales on both fronts.

  • aRtFuLThinG (Uncommon Member, 1,387 posts)
    Originally posted by Hrimnir
    Originally posted by aRtFuLThinG
    Originally posted by Jockan
    Originally posted by Gestankfaust
    Originally posted by Jockan
    Originally posted by Torcip
    So with how crazy low priced Nvidia's new flagship Maxwell cards are, how is AMD going to respond?

     

     

    Same as always. AMD is the one who made Nvidia start going lower on prices.

    That's one take on it. If you've only been following hardware for about 5 years.

     

     

    How far do we have to go back? LMAO

    Maybe he is suggesting that we should go all the way back to the days of Voodoo3 vs S3 Virge, or even EGA vs Monochrome, lol :P

     

    Don't forget about Matrox, man.  Those were the shizz.

    Ommmm... dam matrox r tasty

  • Datawarlock (Member, 338 posts)
    Meh. I've seen a couple posts almost bragging that AMD has caused others to bring prices down. Big deal? AMD never could compete with the big dogs at the higher prices, they HAVE to try to get all the prices down. Seriously, as of me writing this share prices for AMD are at $3.69, $60m in shares held.... while Nvidia is at $18.81 and has $92m in shares held. So really, who cares what fanboy says what about which vendor? The market data tells all.
  • Ridelynn (Epic Member, 7,383 posts)


    Originally posted by Datawarlock
    Meh. I've seen a couple posts almost bragging that AMD has caused others to bring prices down. Big deal? AMD never could compete with the big dogs at the higher prices, they HAVE to try to get all the prices down. Seriously, as of me writing this share prices for AMD are at $3.69, $60m in shares held.... while Nvidia is at $18.81 and has $92m in shares held. So really, who cares what fanboy says what about which vendor? The market data tells all.

    There's a lot more behind that financial data than the niche gamer/GPU segment. Both businesses have many revenue streams beyond gamers buying discrete video cards.

    Really, it says very little about the quality or performance of a company's products.

  • Datawarlock Member Posts: 338
    Originally posted by Ridelynn

     


    Originally posted by Datawarlock
    Meh. I've seen a couple posts almost bragging that AMD has caused others to bring prices down. Big deal? AMD never could compete with the big dogs at the higher prices, they HAVE to try to get all the prices down. Seriously, as of me writing this share prices for AMD are at $3.69, $60m in shares held.... while Nvidia is at $18.81 and has $92m in shares held. So really, who cares what fanboy says what about which vendor? The market data tells all.

     

    There's a lot more to that financial data than just a niche gamer/GPU segment. Both businesses have a lot of other revenue streams than just gamers buying discrete video cards.

    Really it says very little about the quality or performance of a companies product.

    It still goes to show that AMD can't compete with Nvidia at the same price level, so they have to keep trying to drag everyone else down to theirs. Nvidia can afford to take a hit on something; AMD really can't.

  • Cleffy Member RarePosts: 6,414
    Originally posted by Hrimnir

    You do realize your argument is completely bunk. They've already done it - it's called the GTX 980 vs the GTX 970. Holy crap, the 970 has fewer CUDA cores and SMMs?!

    And yes, I read your post. You're not "spinning" shit overnight. You're taking literally the EXACT same architecture and adding more SMs - holy mother of god, how did I figure that out so easily? It couldn't be that... wait... no... they didn't do it with the... 780 Ti?! Or the Titan, or any of those based on exactly the same architecture...

    Imagine you have an inline 4-cylinder engine, and you add 2 more cylinders, and OMFG, you have an inline 6-cylinder engine. No special ultra engineering required.

    And I'm sorry, but you come off as a fanboy. Nvidia has no less proprietary BS than AMD does, with the sole exception of G-Sync. I also provided you a link that benchmarks no fewer than NINE different games. Are you seriously going to claim that it's all Nvidia secret-sauce driver optimizations?

    Here's your tinfoil hat bro.

    Oh, btw, that article goes into great detail about precisely how Nvidia achieved those efficiencies with a chip with 2 billion fewer transistors. In case you're being lazy, which I suspect you are, here's an excerpt:

    "Starting with the Maxwell 1 SMM, NVIDIA has adjusted their streaming multiprocessor layout to achieve better efficiency. Whereas the Kepler SMX was for all practical purposes a large, flat design with 4 warp schedulers and 15 different execution blocks, the SMM has been heavily partitioned. Physically each SMM is still one contiguous unit, not really all that different from an SMX. But logically the execution blocks which each warp scheduler can access have been greatly curtailed.

    The end result is that in an SMX the 4 warp schedulers would share most of their execution resources and work out which warp was on which execution resource for any given cycle. But on an SMM, the warp schedulers are removed from each other and given complete dominion over a far smaller collection of execution resources. No longer do warp schedulers have to share FP32 CUDA cores, special function units, or load/store units, as each of those is replicated across each partition. Only texture units and FP64 CUDA cores are shared."
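    A toy way to see why the partitioned SMM layout helps, per the excerpt above - if four schedulers share one pool of execution units, every issue has to pass through arbitration; give each scheduler its own slice and that step disappears. All numbers here are invented for illustration, not real hardware figures:

```python
# Toy model of the SMX (shared) vs SMM (partitioned) issue logic.
# Unit counts and the 1-cycle arbitration penalty are made-up assumptions.

def cycles_to_issue(warps, schedulers=4, units_per_scheduler=32, shared=True):
    """Count cycles for `warps` warp-instructions to reach an execution unit.

    shared=True  -> all units sit in one pool; every cycle the schedulers
                    arbitrate for them (modeled as one extra cycle per issue).
    shared=False -> each scheduler owns its own units, so no arbitration.
    """
    total_units = schedulers * units_per_scheduler
    issued, cycles = 0, 0
    while issued < warps:
        cycles += 1            # one issue cycle
        if shared:
            cycles += 1        # arbitration overhead for the shared pool
        issued += total_units  # up to one warp-instruction per unit per cycle
    return cycles

smx = cycles_to_issue(1024, shared=True)   # shared pool, Kepler-style
smm = cycles_to_issue(1024, shared=False)  # partitioned, Maxwell-style
print(smx, smm)
```

    Grossly simplified, but it captures the claim: same total execution resources, fewer wasted cycles once each scheduler has "complete dominion" over its own partition.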

    I don't think you are reading what I am saying. Companies don't magically make things happen overnight. The GTX 980 is their top bin, barring a Titan replacement. When your die size increases too much, you begin to have adverse effects from heat, so more transistors = lower clocks. You also don't just slap another SMM on a chip; it takes time to integrate more SMMs into an existing design.
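    The "more transistors = lower clock" point can be sketched with the usual dynamic-power rule of thumb, P ≈ k·C·V²·f: at a fixed power budget, extra switching capacitance (more transistors) has to come out of frequency. The constant and capacitance ratios below are invented for illustration; only the 165 W figure and the 5.2B-vs-7.1B transistor counts come from the thread:

```python
# Back-of-the-envelope trade-off: dynamic power ~ k * C * V^2 * f,
# so at a fixed budget, a bigger die (more capacitance) clocks lower.
# k, voltage, and the capacitance ratio are made-up illustrative values.

def max_clock_ghz(power_budget_w, rel_capacitance, voltage=1.0, k=100.0):
    """Highest clock that fits the budget, solving P = k * C * V^2 * f for f."""
    return power_budget_w / (k * rel_capacitance * voltage ** 2)

small_die = max_clock_ghz(165, rel_capacitance=1.0)   # ~5.2B-transistor part
big_die = max_clock_ghz(165, rel_capacitance=1.37)    # ~7.1B, same 165 W budget
print(round(small_die, 2), round(big_die, 2))
```

    Real chips also trade voltage, leakage, and binning, but the direction of the trade-off is the same.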

    Like I said before, designing GPUs is about balancing trade-offs. Even what they call "efficiency" is really just a different set of trade-offs. For instance, the fps may be high but the micro-stutter bad. At the end of the day you have transistors that are all equal; applied to a given task, they may be running at 100% capacity, or held back because one resource is too scarce while another sits overabundant. The problem with nVidia is that they have a lot of proprietary tools most developers don't use: CUDA, PhysX, 3D Vision, G-Sync, VXGI, SHIELD, and VR Direct. AMD, on the other hand, delegates add-ons like that to OpenCL, which works on all GPUs. To me there is nothing efficient about dedicating resources to maybe 1% of games.

    nVidia, like AMD, targets benchmark games. Every game in the AnandTech benchmark suite is a GeForce Experience title, which means nVidia has already optimized those games for its hardware. There are also a few AMD-sponsored titles and Mantle titles. How these cards do in a benchmark may not show how they perform in real life, since both makers ship special driver optimizations just for those titles.

    Now, to answer you one last time: I don't see how nVidia is ever going to throw down the hammer on AMD in the current environment, for every reason I have mentioned before. The majority of game developers must target AMD's GCN architecture in an x86 environment (that's what both new consoles are). Until a week ago, AMD was the best choice in every price bracket. nVidia's cards carry transistors wasted on their proprietary technologies, while AMD's cards have 1 billion more transistors.

    I also don't see nVidia spinning up a more powerful GPU die on 28nm.

  • Leon1e Member UncommonPosts: 791

    Neither AMD nor nVidia is breaking any laws of physics. The whole point of this thread is stupid whiteknighting and troll bait. Both companies work on similar tech with similar machinery (they even share the same manufacturing plants). The real question is who releases something at an "acceptable" price so consumers lose their minds and buy it out - the same way bitcoiners and price/performance folks bought out the R9 290X last year. Now it's nVidia's turn. It is not, and never has been, a catch-up game. You simply do not develop an architecture and manufacture it overnight. Whoever believes that is a dumbass and should go back to school.

    I personally tend to put the fanboyism aside and support AMD for what it is: the cheaper alternative that stays up to par with the overpriced gear (although at current prices I wouldn't call nVidia overpriced - but we'll see what they demand for a 980 Ti next year; they were quick to gouge $1000 for the Titan, and the 290X mopped the floor with it at half the price) and the one using open technologies: OpenCL, FreeSync, and once they finish the Mantle SDK they will make it an open platform as well. I feel comfortable with AMD, knowing they are transparent about their tech and a good enough alternative.

    I would also like to point out that in compute workloads AMD cards beat nVidia's, which is why most amateur bitcoin miners invest in AMD GPUs. They just really should hire a better driver dev team. Not that the driver is unstable or anything - I simply don't believe it is juicing those GPUs properly.
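    For context on why miners cared: the core mining workload is just hashing a block header twice through SHA-256 and checking the result against a target - pure integer work, which is where AMD's GCN cards had the throughput edge. A rough CPU-side sketch of that loop (the header bytes and difficulty here are toy values, not real Bitcoin parameters):

```python
# Toy version of the bitcoin proof-of-work loop: double-SHA256 a header
# with a varying nonce until the digest meets a (very easy) target.
# "toy-block-header" and the 1-zero-byte difficulty are made up.
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, difficulty_zero_bytes: int, max_nonce: int = 1_000_000):
    """Brute-force a nonce whose double-SHA256 starts with N zero bytes."""
    target = b"\x00" * difficulty_zero_bytes
    for nonce in range(max_nonce):
        digest = double_sha256(header_prefix + nonce.to_bytes(4, "little"))
        if digest.startswith(target):
            return nonce, digest
    return None, None

nonce, digest = mine(b"toy-block-header", difficulty_zero_bytes=1)
print(nonce, digest.hex())
```

    Real miners run billions of these hashes per second on GPU (and later ASIC) hardware, so integer ops per watt is the whole game.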

     

    Oh, btw, to the doomsayers: AMD's revenue is up, in part thanks to its graphics division. So that's some food for thought.
