
Nvidia announces Turing architecture


Comments

  • VrikaVrika Member LegendaryPosts: 7,992
    Gorwe said:
    Honestly, why do I think this is just a marketing spiel to justify the huge increase in graphics card prices?
    I don't think so.

    While NVidia wants to sell their GPUs at a good price, right now they're having to make chips that are too large, and that's not good for their profits either.

    I think it's more a play for the future. NVidia could have made the RTX 2080 better if they had gone in a different direction, but they were quickly running into a wall where games didn't have any good means to use all that GPU power. The change to ray-tracing removes that wall for the foreseeable future, and might allow NVidia to make the RTX 3080 and RTX 4080 better products.
  • gervaise1gervaise1 Member EpicPosts: 6,919
    Vrika said:
    Gorwe said:
    Honestly, why do I think this is just a marketing spiel to justify the huge increase in graphics card prices?
    I don't think so.

    While NVidia wants to sell their GPUs at a good price, right now they're having to make chips that are too large, and that's not good for their profits either.

    I think it's more a play for the future. NVidia could have made the RTX 2080 better if they had gone in a different direction, but they were quickly running into a wall where games didn't have any good means to use all that GPU power. The change to ray-tracing removes that wall for the foreseeable future, and might allow NVidia to make the RTX 3080 and RTX 4080 better products.

    It would be surprising if there wasn't an element of corporate future strategy in this.

    Graphics cards are currently going the way of sound cards. When that day finally arrives AMD will still be around, but what of NVidia? Break into the CPU market?

    So "pushing" ray tracing for games could indeed be part of a strategy to keep alive the need for graphics cards - at least for "serious gaming".

    Additionally, NVidia dominates corporate "ray tracing" card sales, so Turing will help there. And maybe the combination of gaming + corporate volume will help keep manufacturing unit costs low enough.
  • pantaropantaro Member RarePosts: 515
    DMKano said:
    Gorwe said:
    Honestly, why do I think this is just a marketing spiel to justify the huge increase in graphics card prices? And that this whole Ray Tracing nonsense will end up just the same as Tessellation: a very nice idea, but completely unattainable right now. But! Don't forget to preorder those sexy, if completely overpriced, RTXs on the way out! ;)

    I think that the majority will wait to see if the RTX line beats the GTX 1000 series in normal games - this is why actual 3rd-party game benchmarks are so critical.


    If yes - it will sell, because the majority don't care for ray tracing and only want whatever is fastest.

    Based on what Nvidia demoed I care a great deal about ray tracing now, but I totally agree with your statement. There isn't going to be a ton of games supporting this for quite some time, and considering I have a 1080 Ti, for me to really care any more requires a pretty big leap in performance and some real benchmarks.
  • RidelynnRidelynn Member EpicPosts: 7,383
    I agree with both Vrika and Gervaise1. Even though it looks like they are trying to debate each other, I think they are both getting at the same heart of the matter, just from different perspectives.
  • RidelynnRidelynn Member EpicPosts: 7,383
    https://www.marketwatch.com/story/amd-dethrones-amazon-in-investor-money-flows-2018-08-24

    The summary from the article:

    AMD’s stock has surged this year, burning short-sellers. What are the chances it can rise 10-fold, as Nvidia did?

    Now, this isn’t directly because of competition or product lineup between the two companies, so I wouldn’t read it in that manner.

    Rather, there is only so much investment money available to go around, and nVidia wants to ensure it continues to get priority access to that. AMD is a threat because they are threatening that particular line of funding - not because of direct competition with Ryzen or Vega or anything else.

    The takeaway from this that was left unsaid: Intel is screwed.
  • VrikaVrika Member LegendaryPosts: 7,992
    edited August 2018
    Ridelynn said:
    https://www.marketwatch.com/story/amd-dethrones-amazon-in-investor-money-flows-2018-08-24

    The summary from the article:

    AMD’s stock has surged this year, burning short-sellers. What are the chances it can rise 10-fold, as Nvidia did?

    Now, this isn’t directly because of competition or product lineup between the two companies, so I wouldn’t read it in that manner.

    Rather, there is only so much investment money available to go around, and nVidia wants to ensure it continues to get priority access to that. AMD is a threat because they are threatening that particular line of funding - not because of direct competition with Ryzen or Vega or anything else.

    The takeaway from this that was left unsaid: Intel is screwed.
    Neither Intel nor NVidia is really dependent on investors' money. They're both profitable companies and have been profitable for a long time.

    It's good for AMD because it's helping their financial situation a lot, but it's not really hurting Intel or NVidia.
     
  • RidelynnRidelynn Member EpicPosts: 7,383
    So nVidia and Intel don’t care what their stock price is? Those are all investors. Executive perks are often directly linked to stock performance, not necessarily corporate performance. Corporate worth is often derived from net stock price.
  • VrikaVrika Member LegendaryPosts: 7,992
    Ridelynn said:
    So nVidia and Intel don’t care what their stock price is? Those are all investors. Executive perks are often directly linked to stock performance, not necessarily corporate performance. Corporate worth is often derived from net stock price.
    They care, I didn't mean that.

    Just that it won't affect their ability to operate normally, develop new products, and even make investments if there's something worth investing in.
     
  • AmazingAveryAmazingAvery Age of Conan AdvocateMember UncommonPosts: 7,188
    DLSS (Deep Learning Super-Sampling) looks really REALLY interesting. (It's Nvidia's super-sampling method that puts the AI tensor cores embedded within the GPUs to work.)

    You can even have it on RTX without ray tracing, and NVIDIA "say" it will boost FPS by 30% to 100%, so long as devs implement it; from that list a few pages back there are some games getting it, and I can see that list growing. The purpose of this tech is to assist when RT is activated and turned on too, but as mentioned I'm particularly interested to see it in general.

    Nvidia quote :
    Those numbers get even bigger with deep learning super-sampling, or DLSS, unveiled Monday.

    DLSS takes advantage of our tensor cores’ ability to use AI. In this case it’s used to develop a neural network that teaches itself how to render a game. It smooths the edges of rendered objects and increases performance.

    That’s one example of how Turing can do things no other GPU can. This makes Turing’s full performance hard to measure.

    But looking at today’s PC games — against which GPUs without Turing’s capabilities have long been measured — it’s clear Turing is an absolute beast.
    Looks like the NDA embargo on all the specifics of the tech gets lifted on Sept 14th.

    Short demo video here where Nvidia had two machines, one with a GTX 1080 Ti running TAA (Temporal Anti-Aliasing) and the other an RTX 2080 Ti using DLSS. The demo was the Infiltrator one by Epic and the performance looks pretty sweet.

    https://youtu.be/x-mVK3mj_xk



  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    The amount of loss from downsampling and encoding that video is so great that you wouldn't be able to tell the difference in image quality between a high degree of SSAA and no anti-aliasing at all.

    Any other GPU could do the same DLSS, albeit slower.  Vega's half-precision logic means that tensor operations will run at 1/4 the speed of Turing's, for example.  Even simple floats, such as GPUs had over a decade ago, could do it at 1/8 of the speed.  So if Nvidia wanted to show off the image quality of DLSS, they could do it today on already released hardware.
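
    As a concrete illustration of that point, here is a minimal sketch of the kind of inference a learned upscaler performs. The layer shape and weights below are made up for illustration and this is not Nvidia's DLSS; the point is only that the work consists of ordinary multiply-adds, which any FP16/FP32 hardware can execute, just more slowly than dedicated tensor cores:

    # Minimal learned-upscaler sketch in plain NumPy: one 3x3 convolution + ReLU,
    # then a 2x2 "pixel shuffle" to double the resolution. Weights are random
    # placeholders for a trained network; the arithmetic is all multiply-adds.
    import numpy as np

    def conv3x3(image, kernels):
        """Naive 3x3 convolution: image (H, W, C_in), kernels (3, 3, C_in, C_out)."""
        h, w, _ = image.shape
        c_out = kernels.shape[-1]
        padded = np.pad(image, ((1, 1), (1, 1), (0, 0)))
        out = np.zeros((h, w, c_out), dtype=image.dtype)
        for dy in range(3):
            for dx in range(3):
                # Each tap is a plain multiply-accumulate over the input channels.
                out += padded[dy:dy + h, dx:dx + w, :] @ kernels[dy, dx]
        return out

    rng = np.random.default_rng(0)
    low_res = rng.random((270, 480, 3), dtype=np.float32)                    # stand-in for a low-res frame
    weights = (rng.standard_normal((3, 3, 3, 12)) * 0.1).astype(np.float32)  # hypothetical "trained" weights

    features = np.maximum(conv3x3(low_res, weights), 0.0)   # convolution + ReLU
    # Rearrange the 12 channels into 2x2 blocks of RGB pixels to double the resolution.
    h, w, _ = features.shape
    high_res = features.reshape(h, w, 2, 2, 3).transpose(0, 2, 1, 3, 4).reshape(h * 2, w * 2, 3)
    print(high_res.shape)   # (540, 960, 3)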

    It's plausible that DLSS could be pretty nifty on Turing.  It's even plausible that it could be useful on lots of other hardware without the tensor cores.  Nvidia's FXAA is useful on AMD GPUs, for example.

    But depending on how much they're genuinely leaning on machine learning (as opposed to that just being a stupid marketing buzzword that is only tangentially relevant), it's also very plausible that the image quality will be garbage with a lot of artifacting.  Machine learning may be able to identify complex things much better than random chance, but it's also prone to getting things wildly wrong in ways that humans would spot as obviously ridiculous.  And that's even if you don't require it to infer something hundreds of millions of times per second as it would have to here.
  • OzmodanOzmodan Member EpicPosts: 9,726
    DLSS (Deep Learning Super-Sampling) looks really REALLY interesting. (It's Nvidia's super-sampling method that puts the AI tensor cores embedded within the GPUs to work.)

    You can even have it on RTX without ray tracing, and NVIDIA "say" it will boost FPS by 30% to 100%, so long as devs implement it; from that list a few pages back there are some games getting it, and I can see that list growing. The purpose of this tech is to assist when RT is activated and turned on too, but as mentioned I'm particularly interested to see it in general.

    Nvidia quote :
    Those numbers get even bigger with deep learning super-sampling, or DLSS, unveiled Monday.

    DLSS takes advantage of our tensor cores’ ability to use AI. In this case it’s used to develop a neural network that teaches itself how to render a game. It smooths the edges of rendered objects and increases performance.

    That’s one example of how Turing can do things no other GPU can. This makes Turing’s full performance hard to measure.

    But looking at today’s PC games — against which GPUs without Turing’s capabilities have long been measured — it’s clear Turing is an absolute beast.
    Looks like the NDA embargo on all the specifics of the tech gets lifted on Sept 14th.

    Short demo video here where Nvidia had two machines, one with a GTX 1080 Ti running TAA (Temporal Anti-Aliasing) and the other an RTX 2080 Ti using DLSS. The demo was the Infiltrator one by Epic and the performance looks pretty sweet.

    https://youtu.be/x-mVK3mj_xk

    Until I see actual demos that prove this actually is the case, it is just marketing gibberish as far as I am concerned.  I still think getting a 1080 right now is light years better than anything else on the market, including the 20xx lineup.  I bought an AMD chip and motherboard today and got a 1080 for under 400.  That is a deal.
  • AmazingAveryAmazingAvery Age of Conan AdvocateMember UncommonPosts: 7,188
    Ozmodan said:
     got a 1080 for under 400.  That is a deal.
    That is a good deal. Hopefully it works out for you when the 2080 and 2080 Ti are released (in terms of price/performance relative to your own personal values). There should be lots more info in 2 weeks' time.




  • AmazingAveryAmazingAvery Age of Conan AdvocateMember UncommonPosts: 7,188
    Here is a good video - 

    Most interesting were the comments about performance around the 25-minute mark. The guy is asked directly about the performance uplift in traditional games with Turing. He says 35-45% faster by tier (i.e. 2080 over 1080 & 2080 Ti over 1080 Ti) is what we can expect across a variety of games without DLSS.

    What I took away - 
    DLSS: Mainly that it will be trained for each game, and maybe each setting/level for each particular game, and that NVidia will do the DLSS training for free on their network as part of developer relations.

    Nvidia has the advantage of dedicated hardware for accelerating ray tracing: a special ray-trace core, which is in fact an ASIC integrated into the array of FP32 CUDA cores of each SM block inside the Turing die, working through the low-level OptiX API as an extension of DXR. The advantage is higher RT performance. The disadvantage is that the game must support that NV extension. However, with the training done for free through Nvidia, no one knows the particulars of that, but it could be lucrative for devs. Especially compared to AMD's NGG and primitive shader failures, this seems to be a more positive approach to dev relations. (A toy sketch of the kind of intersection test those RT cores accelerate follows after this summary.)

    TPU: Will be accessible through CUDA interface, developer can use them for whatever they want in games, and he expects all kinds of new uses developed outside NVidia.

    NVLink: Shot down the memory sharing idea that many had. Basically new NVLink is just a faster SLI bridge. SLI still requires just as much developer work as it ever did, same old SLI modes, all working the same way.
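
    As a toy illustration of the intersection work mentioned in the ray-tracing point above, here is an ordinary Möller-Trumbore ray/triangle test in plain Python. An RT core runs this kind of test (plus the BVH traversal that feeds it) in fixed-function hardware millions of times per frame; this sketch is illustrative only, not Nvidia's implementation:

    # Toy ray/triangle intersection (Moller-Trumbore). Returns the hit distance t,
    # or None on a miss. This is the per-triangle test that RT hardware accelerates.
    import numpy as np

    def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
        edge1, edge2 = v1 - v0, v2 - v0
        pvec = np.cross(direction, edge2)
        det = np.dot(edge1, pvec)
        if abs(det) < eps:                       # ray is parallel to the triangle's plane
            return None
        inv_det = 1.0 / det
        tvec = origin - v0
        u = np.dot(tvec, pvec) * inv_det
        if u < 0.0 or u > 1.0:                   # outside the triangle in the u direction
            return None
        qvec = np.cross(tvec, edge1)
        v = np.dot(direction, qvec) * inv_det
        if v < 0.0 or u + v > 1.0:               # outside in the v direction
            return None
        t = np.dot(edge2, qvec) * inv_det
        return t if t > eps else None            # hit only if the triangle is in front of the ray

    origin = np.array([0.0, 0.0, -5.0])
    direction = np.array([0.0, 0.0, 1.0])        # shooting straight down +z
    v0, v1, v2 = np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0])
    print(ray_triangle_intersect(origin, direction, v0, v1, v2))   # ~5.0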



  • VrikaVrika Member LegendaryPosts: 7,992
    Torval said:
    The thing that keeps getting overlooked in many of these conversations is that the 2070 and better are only relevant if you're gaming in 4K. Otherwise they offer nothing over the 1070 or better.

    Enthusiasts who always install current gen video cards are going to buy them. For nearly everyone else there is less incentive now to upgrade to a 20xx series than almost any previous upgrade cycle. I'm sure we can think of some outlying examples, but why upgrade? RT? lol, no.
    On the other hand, not every GPU purchase is an upgrade from the previous generation. Many people are upgrading much older GPUs, buying new computers, or replacing broken hardware. Nvidia doesn't need to catch everyone in an upgrade cycle; tech isn't developing fast enough for that to be possible. As long as they can catch the people who'd be looking to buy a new GPU anyway, and beat AMD, they're doing well enough.


     
  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    Torval said:
    The thing that keeps getting overlooked in many of these conversations is that the 2070 and better are only relevant if you're gaming in 4K. Otherwise they offer nothing over the 1070 or better.

    Enthusiasts who always install current gen video cards are going to buy them. For nearly everyone else there is less incentive now to upgrade to a 20xx series than almost any previous upgrade cycle. I'm sure we can think of some outlying examples, but why upgrade? RT? lol, no.
    Go heavy on ray-tracing and suddenly you'll need more performance than a GTX 1070 offers at something like 200x150 or larger resolutions.
  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    Most interesting were the comments about performance around the 25-minute mark. The guy is asked directly about the performance uplift in traditional games with Turing. He says 35-45% faster by tier (i.e. 2080 over 1080 & 2080 Ti over 1080 Ti) is what we can expect across a variety of games without DLSS.

    What I took away - 
    DLSS: Mainly that it will be trained for each game, and maybe each setting/level for each particular game, and that NVidia will do the DLSS training for free on their network as part of developer relations.

    Nvidia has the advantage of dedicated hardware for accelerating ray tracing: a special ray-trace core, which is in fact an ASIC integrated into the array of FP32 CUDA cores of each SM block inside the Turing die, working through the low-level OptiX API as an extension of DXR. The advantage is higher RT performance. The disadvantage is that the game must support that NV extension. However, with the training done for free through Nvidia, no one knows the particulars of that, but it could be lucrative for devs. Especially compared to AMD's NGG and primitive shader failures, this seems to be a more positive approach to dev relations.

    TPU: Will be accessible through CUDA interface, developer can use them for whatever they want in games, and he expects all kinds of new uses developed outside NVidia.

    NVLink: Shot down the memory sharing idea that many had. Basically new NVLink is just a faster SLI bridge. SLI still requires just as much developer work as it ever did, same old SLI modes, all working the same way.
    Thank you for the text summary.  Too many people would have left it at "please watch this hour long video that may or may not have anything interesting in it", which is useless.

    My takeaway is this:  ray-tracing has a future.  DLSS does not.  And more generally, the tensor unit does not, at least outside of people running non-gaming code written by Nvidia like they do now.

    Being able to access some new unit via an extension to DirectX or Vulkan is infinitely preferable to having to use a different API entirely.  Especially when that different API is the rotting blob of incompatibility known as CUDA.  If it has to tie in CUDA, then it will never be used outside of sponsored titles made by developers who don't realize what a mess it will make to try to call CUDA for part of a game.

    Using a DirectX extension is far less bad than that.  Using a DirectX extension won't mean that a driver upgrade makes your host code no longer compile the way that CUDA can.  Using a DirectX extension won't risk breaking compatibility with future Nvidia GPUs the way that CUDA can.  Using a DirectX extension also means that it's possible for AMD to support it in the future, or perhaps have something added to the core API that is broadly supported by both AMD and Nvidia and requires only a minor tweak to shader code to drop the extension and move to the core API.

    If the training is done custom for various games by Nvidia, then only a handful of sponsored games will ever be supported, and only at a handful of settings.  It could turn into, if you turn on DLSS, the rest of your graphics settings are locked.  Or perhaps they'll let you change settings, with the caveat that it makes DLSS look like garbage because it was trained assuming settings different from your own.  Patches and mods for games could easily cause the same problem.

    More generally, machine learning is only viable in situations where a substantial error rate is acceptable.  Suppose, for example, that you have 1000 pictures.  Ten of them are pictures of cats.  You want some pictures of cats for whatever reason and don't want to have to look through your pictures one at a time.  A machine learning algorithm might be able to narrow your list from 1000 pictures down to 12, eight of which are actually pictures of cats.  That can save you a lot of time so long as you didn't need to track down all of your cat pictures.
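
    Put in terms of the usual classification metrics, that example works out as follows (a quick illustrative calculation using the numbers above):

    # The cat-picture example in numbers: 1000 photos, 10 actual cats, the model
    # returns 12 candidates and 8 of them really are cats.
    total_photos = 1000
    actual_cats = 10
    flagged = 12             # photos the model returned as "cat"
    true_positives = 8       # flagged photos that really are cats

    precision = true_positives / flagged       # 8/12 ~ 0.67: a third of the hits are wrong
    recall = true_positives / actual_cats      # 8/10 = 0.80: two cats were missed entirely
    print(f"precision={precision:.2f}, recall={recall:.2f}")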

    Rendering graphics for games is far less tolerant of outright errors like that.  Ever had a monitor with a stuck or otherwise defective pixel?  Getting just one pixel wrong can look bad.  If the machine learning algorithm picks the right color for pixels 99.9% of the time, then that will look like thousands of bad pixels in each frame.  They'll be different pixels from one frame to the next, which will result in flickering effects.  That's terrible.
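
    For a sense of scale, here is the arithmetic behind that claim as a quick calculation (assuming a 1080p frame and 60 fps, figures the post itself doesn't specify):

    # Rough numbers behind the "thousands of bad pixels" claim, assuming 1080p at 60 fps.
    width, height = 1920, 1080
    pixels_per_frame = width * height              # 2,073,600 pixels
    error_rate = 0.001                             # i.e. 99.9% of pixels colored correctly
    bad_per_frame = pixels_per_frame * error_rate
    print(f"{bad_per_frame:,.0f} wrong pixels per frame")        # ~2,074
    print(f"{bad_per_frame * 60:,.0f} wrong pixels per second")  # ~124,416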

    I have little doubt that they can produce a few demos where DLSS looks nice.  I have very strong doubts that that will be more broadly applicable outside of carefully tuned demos.

    So why did they bother to include the tensor unit?  My speculation is that it's for the same reason that Carrizo had half rate double precision on its integrated GPU:  it's much cheaper to copy/paste large chunks of a chip that you've already done than to completely redo it.  In the case of Carrizo, it was from AMD's Hawaii GPU.  (Fun fact:  the integrated GPU in an FX-8800P can beat a GeForce GTX 1080 Ti outright at double-precision compute.)  In the case of Turing, it's probably from GV100, which has the tensor cores because Nvidia expected many of the GPUs to be used primarily for machine learning.  If it would cost you $100 million in engineering to properly redo a block to remove something you don't need, or $50 million to fab the extra wafers it takes to just regard it as wasted die space and move on, going with the copy/paste approach is an easy call.
  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    Most interesting were the comments about performance around the 25-minute mark. The guy is asked directly about the performance uplift in traditional games with Turing. He says 35-45% faster by tier (i.e. 2080 over 1080 & 2080 Ti over 1080 Ti) is what we can expect across a variety of games without DLSS.
    I say it's better to organize things into tiers by engineering, not marketing.  The GeForce RTX 2080 is the same tier as the GeForce GTX 1080 Ti.  The GeForce RTX 2080 Ti is a new, higher tier.  In order to have a point, the RTX 2080 needs to beat the GTX 1080 Ti by a considerable margin, not just the GTX 1080.
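
    A rough back-of-the-envelope version of that argument (the ~30% gap assumed between a GTX 1080 and a GTX 1080 Ti is an illustrative assumption, not a figure from the thread):

    # Rough illustration of the tiering point. Assume (purely for illustration)
    # that a GTX 1080 Ti is about 30% faster than a GTX 1080. Then a
    # "35-45% over the 1080" RTX 2080 only clears the 1080 Ti by a few percent.
    gtx_1080 = 1.00
    gtx_1080_ti = gtx_1080 * 1.30          # assumed ~30% gap between the old tiers
    for uplift in (0.35, 0.45):
        rtx_2080 = gtx_1080 * (1 + uplift)
        print(f"RTX 2080 at +{uplift:.0%} vs GTX 1080 Ti: {rtx_2080 / gtx_1080_ti - 1:+.0%}")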
  • SlyLoKSlyLoK Member RarePosts: 2,698
    AMDs new 7nm cards are releasing this year. I would wait for them if you want to upgrade. Supposedly 1.25x better than Nvidias new Turing marketing hype on a smaller die.
  • AmazingAveryAmazingAvery Age of Conan AdvocateMember UncommonPosts: 7,188
    SlyLoK said:
    AMDs new 7nm cards are releasing this year. I would wait for them if you want to upgrade. Supposedly 1.25x better than Nvidias new Turing marketing hype on a smaller die.
    No, there will not be any 7nm AMD gaming cards this year (just pro 7nm workstation ones) and IMO there won't be any 7nm AMD gaming cards for another 8-12 months.



  • VrikaVrika Member LegendaryPosts: 7,992
    SlyLoK said:
    AMDs new 7nm cards are releasing this year. I would wait for them if you want to upgrade. Supposedly 1.25x better than Nvidias new Turing marketing hype on a smaller die.
    It's also going to have 32GB of HBM2 memory. It's going to be a professional GPU, not a consumer GPU.
  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    SlyLoK said:
    AMDs new 7nm cards are releasing this year. I would wait for them if you want to upgrade. Supposedly 1.25x better than Nvidias new Turing marketing hype on a smaller die.
    AMD has announced that they're going to release a Vega card on a 7 nm process node for compute this year.  It's not clear whether there will ever be Radeon cards based on the same die.  AMD has never made a discrete GPU that didn't have a Radeon version, but Nvidia hadn't done the analogous thing either until two years ago.  Today, Nvidia's two most recent top-end GPUs haven't had a GeForce card at all.

    Even if there are Radeon cards, it's also not clear how much they'll cost.  Early wafers on 7 nm could be very expensive, or could suffer poor yields.  You can live with 30% yields for a while if you can pass on the cost to your customers by charging $4000 per card, but not if you're charging $500 per card.  It's also possible that there will be Radeon cards based on it eventually, but AMD will wait until yields are better and the cost to build the cards comes down.

    There will be a bunch of 7 nm cards coming from AMD (and from Nvidia) eventually, but I wouldn't assume that there will be any Radeon or GeForce launches that particularly matter this year.  (Yes, I'm including 12 nm Turing in that assessment.)  Next year is a very different story, as I'd expect to see a lot of Radeon and GeForce cards on 7 nm, and that will bring down the price tag considerably for any level of performance that currently costs you over $200.
  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    Vrika said:
    SlyLoK said:
    AMDs new 7nm cards are releasing this year. I would wait for them if you want to upgrade. Supposedly 1.25x better than Nvidias new Turing marketing hype on a smaller die.
    It's also going to have 32GB of HBM2 memory. It's going to be a professional GPU, not a consumer GPU.
    Just because it has 4 stacks of HBM2 doesn't mean that all cards based on that GPU must have 32 GB of memory.  It's trivial to make a version with 4-high stacks that has 16 GB, or 2-high stacks that has 8 GB.  Or they might use salvage parts that only have 2 or 3 stacks of HBM2 enabled on Radeon cards.

    It's also possible that they'll do something like what they did with the Radeon Vega Frontier Edition:  nominally a professional card, but can also run the consumer drivers, and cheaper than a GeForce RTX 2080 Ti.  Probably also slower than the RTX 2080 Ti, but not necessarily with a price tag of several thousand dollars.

    I'm not predicting that AMD will do this or that.  I'm saying that we don't really know.  With the Radeon Vega Frontier Edition, there weren't even any rumors that such a part would exist until not very long before it launched.  For what it's worth, I don't think that AMD even decided to make such a card until not very long before they announced it.
  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    Torval said:
    The availability and cost of HBM2 will also affect when and whether they build consumer cards configured like that. Nvidia is sticking to GDDR6 for 20xx which I find interesting, but not entirely surprising.
    Cost, yes, but availability is not in question.  I'd also expect the cost per stack to be lower than it was when they launched the first Vega cards a year ago.
  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    Torval said:
    Quizzical said:
    Torval said:
    The availability and cost of HBM2 will also affect when and whether they build consumer cards configured like that. Nvidia is sticking to GDDR6 for 20xx which I find interesting, but not entirely surprising.
    Cost, yes, but availability is not in question.  I'd also expect the cost per stack to be lower than it was when they launched the first Vega cards a year ago.
    I remember reading something about supply issues late last year so I wasn't sure if that was still true. If so that would be a problem and cost versus profit margin is always an issue.

    I did a quick search just now and didn't see anything come up concerning supply issues recently so that is a plus. I'd really like to see the industry move off of GDDR because it is essentially at the end of its life cycle. So if production maturity equals lower costs, then also a plus.

    I'm still on a 970 and it's holding up okay, but I hate the small memory configuration. I plan on waiting it out until next fall unless I can get a decent deal on a side grade card with more memory - 1060/6, 1070+, 580. We'll see what the holiday sales look like.
    Availability is always an issue when a memory standard is new, but HBM2 isn't new anymore.  GDDR6 still is, but a year from now, I expect it to be easy to get in large volumes.

    I don't expect GDDR* to ever go away entirely, simply because it will always be cheaper than HBM*.  HBM* requires a base die for each stack, as well as an interposer to connect all of the stacks of memory to the die.  GDDR* doesn't, and that makes it cheaper.

    One huge question is how big the price difference will be.  If HBM2 costs $50 more than GDDR5, then that's acceptable if you have to do it to be competitive in $700 cards, but you flatly can't use it for $200 cards.  If the price difference is $10, then using it in $500 cards is an easy call, and maybe you do in $200 cards, too, but not in $100 cards.
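
    To see why the size of that premium decides which price tiers can absorb it, here is the same trade-off as a quick calculation (using the hypothetical $50 and $10 figures from the paragraph above):

    # The hypothetical HBM2-over-GDDR premium as a share of the card's price.
    for premium in (50, 10):
        for card_price in (100, 200, 500, 700):
            share = premium / card_price
            print(f"${premium} premium on a ${card_price} card: {share:.0%} of the price")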

    It's similar to the reasons why DDR3/DDR4 haven't gone away entirely.  It's cheaper than GDDR5, so it gets used in low end cards where saving a few dollars is a huge deal.  But it's pretty much unusable in $100+ cards because it would cripple performance.

    If you can wait a year, I expect both AMD and Nvidia to offer $300 cards on 7 nm process nodes that will handily beat anything you can get on that same budget right now.  That's not based on inside information or even claimed leaks.  It's just based on the general principle that process nodes tend to get more mature, and if TSMC 7 nm is mature enough for AMD to launch a decently large Vega die this year, it's probably going to be mature enough to launch a whole lineup a year from now outside perhaps of low end cards where it may still be too expensive.
  • OzmodanOzmodan Member EpicPosts: 9,726
    Here is a good video - 

    Most interesting were the comments about performance around the 25-minute mark. The guy is asked directly about the performance uplift in traditional games with Turing. He says 35-45% faster by tier (i.e. 2080 over 1080 & 2080 Ti over 1080 Ti) is what we can expect across a variety of games without DLSS.

    What I took away - 
    DLSS: Mainly that it will be trained for each game, and maybe each setting/level for each particular game, and that NVidia will do the DLSS training for free on their network as part of developer relations.

    Nvidia has the advantage of dedicated hardware for accelerating ray tracing: a special ray-trace core, which is in fact an ASIC integrated into the array of FP32 CUDA cores of each SM block inside the Turing die, working through the low-level OptiX API as an extension of DXR. The advantage is higher RT performance. The disadvantage is that the game must support that NV extension. However, with the training done for free through Nvidia, no one knows the particulars of that, but it could be lucrative for devs. Especially compared to AMD's NGG and primitive shader failures, this seems to be a more positive approach to dev relations.

    TPU: Will be accessible through CUDA interface, developer can use them for whatever they want in games, and he expects all kinds of new uses developed outside NVidia.

    NVLink: Shot down the memory sharing idea that many had. Basically new NVLink is just a faster SLI bridge. SLI still requires just as much developer work as it ever did, same old SLI modes, all working the same way.
    Most of the people who test these cards are saying that the 35-45% speed increase is really inflated; they don't expect more than a 15-20% increase.  DLSS is a joke: no one is going to code for a card-specific driver, and the same thing goes for CUDA.  Just because Nvidia invents something does not mean anyone will use it.