https://www.anandtech.com/show/15701/nvidias-geforce-gtx-1650-gddr6-released-gddr5-price-parity

The basic idea of the card itself is that you take a GeForce GTX 1650, give it GDDR6 memory instead of GDDR5, set the main clock speed on the GPU a little slower, and clock the memory a lot higher because GDDR6 can handle it. Only two SKUs have shown up on Newegg so far, but it looks like prices are about the same as for the GDDR5 version: typically a little higher than the nominal $150 MSRP.
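As a rough back-of-the-envelope illustration of what the memory swap buys, peak bandwidth is just the per-pin data rate times the bus width. The 8 Gbps and 12 Gbps data rates and the 128-bit bus below are my assumptions from commonly listed GTX 1650 specs, not figures from the article:

    # Minimal sketch: peak memory bandwidth = per-pin data rate * bus width.
    # The 8 Gbps (GDDR5), 12 Gbps (GDDR6), and 128-bit bus figures are
    # assumptions based on commonly listed GTX 1650 specs.
    def peak_bandwidth_gb_s(data_rate_gbps_per_pin, bus_width_bits):
        return data_rate_gbps_per_pin * bus_width_bits / 8  # bits -> bytes

    print(peak_bandwidth_gb_s(8, 128))   # GDDR5 version: 128.0 GB/s
    print(peak_bandwidth_gb_s(12, 128))  # GDDR6 version: 192.0 GB/s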
The card itself isn't really that interesting, as minor refreshes of $150 cards usually aren't. But what is interesting is that this likely heralds the end of GDDR5. Or perhaps rather, the end of new products being launched that use GDDR5; existing products that use GDDR5 will continue to be produced for quite a while.
GDDR5 has been an exceptionally long-lived DRAM standard. It was introduced with the Radeon HD 4870 way back in June 2008. The same memory standard was still used for the original GeForce GTX 1650 that launched less than a year ago. To have two major, high-volume consumer products launch more than a decade apart on exactly the same memory standard is unusual. I can't find any other DRAM standard with that sort of longevity.
Though it was once the high-end memory standard, GDDR5 maintained a considerable presence in mid-range cards even as Nvidia moved their high end to GDDR5X and AMD moved to HBM and then HBM2. It wasn't as fast as the newer memory standards, but it was a lot cheaper. And it was still fast enough for $100 and $200 cards, while cheaper memory standards such as DDR4 were not.
Nvidia and AMD have both since moved to GDDR6 for their high-end consumer parts. With the launch of Turing, Nvidia went with GDDR6 for the high end and GDDR5 for the mid range. With the launch of Navi, AMD went with GDDR6 for the high end, and also GDDR6 for the mid range. The Radeon RX 5500 was the last card that could plausibly have used GDDR5, but AMD went with GDDR6 even there.
The reason why both AMD and Nvidia have moved to GDDR6 for sub-$200 cards is that its price has come down. Both would have kept using GDDR5 if it were still a lot cheaper than GDDR6. But being cheaper for a given quantity of memory was GDDR5's only remaining advantage, and once that advantage is gone, there's no point in continuing to use it in new products.
In a sense, it's not surprising that GDDR6 would eventually be cheaper than GDDR5. Long-run price decreases for memory are driven by die shrinks: new process nodes let you cram more memory onto each wafer, making a given quantity of memory cheaper to manufacture. Memory vendors Samsung, Hynix, and Micron build the latest memory standards on their newest and best process nodes. Older standards seen as being on the way out keep being produced on older nodes, but don't get moved forward; that allows some continued use of those older nodes rather than having to shut them down entirely. But running older nodes still costs money, so eventually the products produced on newer nodes are cheaper.
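To make the die-shrink arithmetic concrete, here's a toy sketch of cost per gigabit as wafer cost divided by (good dies per wafer times capacity per die). Every number in it is a hypothetical placeholder I made up, not a real wafer cost, die size, or yield:

    # Toy model of why die shrinks cut memory cost per bit.
    # All numbers below are hypothetical placeholders, not real figures.
    def cost_per_gigabit(wafer_cost, wafer_area_mm2, die_area_mm2, die_yield, gigabits_per_die):
        good_dies = (wafer_area_mm2 / die_area_mm2) * die_yield  # ignores edge losses
        return wafer_cost / (good_dies * gigabits_per_die)

    # Same 8 Gb die capacity, but a shrink halves the die area: cost per bit
    # drops sharply even if the newer wafers cost more to process.
    print(cost_per_gigabit(3000, 70000, 60, 0.9, 8))  # older node (hypothetical)
    print(cost_per_gigabit(4000, 70000, 30, 0.9, 8))  # newer node (hypothetical)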
Sometimes it can take a while for the newer standards to become cheaper than the older ones. Memory prices are very volatile, as the three major vendors choose how much to produce well in advance, then often find out that demand for memory is somewhat higher or lower than they expected. Wafer starts are only chosen months in advance, but building the fabs is a large fraction of production costs, and that has to be decided years in advance. You can leave your fabs partially idle, but that doesn't refund the cost of building them. Such mismatches of supply to demand can cause wild fluctuations in prices, and those fluctuations can happen almost independently for different standards. Create too much of memory type A and not enough of type B, and type B will be a lot more expensive than A for a while, regardless of which is newer.