In my opinion, if AMD thought the 1080 was going to be great, they wouldn't be holding a press conference. The other thing: AMD has working cards for both series, while Nvidia only showed one card.
Starting to believe there's blood in the water right now, and it isn't AMD's.
Why would Nvidia need to show the 1070? It's using the ol' GDDR5, so there's no bottleneck on the memory side, and on the chip side it's just a matter of time until crippled 1080 chips pile up and, through the wondrous and mysterious magic of 'binning', voila!
And then there will be plenty of 1070s ready for the June 10th launch date.
Here is the bigger question. Is it on a Friday afternoon?
I see what you did there.
Really, though, it's not likely, purely because afternoon in the United States is the middle of the night in Macau. If they held an event at 2 am local time, then we'd really suspect that something is wrong.
Looking forward to this conference. The fact that AMD did a new design for the die makes me more excited than Nvidia's "just do what we did before and add moar powa."
AMD will definitely have working cards plural, both for Polaris 10 and 11. They did when they showed them off four months ago. The question is whether they'll have enough that everyone who wants one can buy it. And that's not clear at all.
Depending on supplies, they may put all Polaris 11 into laptops first. It's going to be a great laptop part (at least until an analogous Pascal chip shows up) but "meh" in desktops for a while, so it makes no sense to have a shortage in laptops while selling cards more cheaply for desktops unless the desktops only get the broken cards that can't go into laptops. There are rumors that Apple is going to buy a zillion of them, too.
They're holding it in Macau and invited journalists from around the world to come to Macau. It's a bizarro tech year, I guess.
Anyway, not much in the way of specs yet. A little, but not much more than Nvidia showed. Hopefully we'll see full specs at the conference. That would be sweet.
There are a few annoying things that reinforce that I'll be waiting for spring of 2017 to see what AMD and Nvidia poo out.
First, they have two different architectures under R9. That will be confusing unless they brand them clearly, or unless those are two separate architectures for portable and desktop. Reading through integrator specs can be annoying enough as it is.
Only the Fury X successor will have HBM2. Disappointing.
According to the spec sheet, they are using GDDR5 and GDDR5X across the card versions. That will add to the confusion when looking at cards. Do you have the Pro or the XT? Does it have 5 or 5X?
Hopefully all of that will pan out in terms clearly understood by us laymen as to which hardware is configured with which technology.
What spec sheet? AMD hasn't released any hard specs on Polaris other than to say that it uses GDDR5.
What exactly is wrong with Macau? The Asian market is developing really fast and is the biggest market in the world, with huge room still to grow. The US and EU are where they are. It's also an exotic location, and not the boring US. Texas... blah *yawn*
And I'm not really sure where you got those specs. Polaris 10 will be an R9 part, Polaris 11 an R7 part.
Using HBM2 memory on cards that don't need it is... pointless. It just makes cards more expensive for no reason.
These cards will use GDDR5; no GDDR5X parts for now. Again, you'd be paying more for something that has no practical use on those cards, AND GDDR5X is in very short supply.
A GPU requires a certain amount of bandwidth, and going beyond that is pointless. If GDDR5 can provide that bandwidth, GDDR5X or HBM2 just makes the GPU more expensive for no reason. Later in the year, when they release faster cards, those will feature GDDR5X or HBM2 depending on bandwidth requirements.
To put it in perspective: the Titan X and GTX 980 Ti have GDDR5 memory on a 384-bit bus, and that's enough bandwidth for cards that fast (~330 GB/s).
The 1080 has GDDR5X (but of the slowest variety) on a 256-bit bus, giving roughly the same bandwidth as the Titan X and GTX 980 Ti (~320 GB/s).
The 1070 has GDDR5 on a 256-bit bus and way less bandwidth, on par with the GTX 980/GTX 970 (~230-240 GB/s).
Polaris 10 and 11 will use GDDR5 memory. Polaris 10 cards will have the same bandwidth as the 1070 (GDDR5 on a 256-bit bus).
The difference between GDDR5 and GDDR5X isn't that big for now: GDDR5 = 8000 MHz, GDDR5X = 10000 MHz. GDDR5X goes up to 14000 MHz, but nothing beyond the slowest variety is available at the moment and won't be for some time; that's unfortunately a fact.
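Those bandwidth figures fall out of a simple formula: effective per-pin data rate times bus width. Here's a quick sketch; the per-pin rates are the ones quoted in this thread, and real sustained bandwidth runs somewhat below these peak numbers:

```python
def peak_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin rate (Gbps) times bus width (bits) / 8."""
    return data_rate_gbps * bus_width_bits / 8

# GTX 1080: GDDR5X at 10 Gbps on a 256-bit bus
print(peak_bandwidth_gbs(10, 256))  # 320.0 GB/s
# Titan X / GTX 980 Ti: GDDR5 at 7 Gbps on a 384-bit bus
print(peak_bandwidth_gbs(7, 384))   # 336.0 GB/s
# GTX 1070: GDDR5 at 8 Gbps on a 256-bit bus (peak; the ~230-240 GB/s
# quoted above would correspond to a somewhat lower effective rate)
print(peak_bandwidth_gbs(8, 256))   # 256.0 GB/s
```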
And most likely P11 won't be available for desktop immediately: they will bin the best (lowest-voltage) chips for laptop parts, and the chips that don't meet those requirements will be put in desktop parts at the start. So it's just a matter of getting a decent supply of those chips.
China alone accounts for most mid-to-high-end and high-end/enthusiast GPUs sold. It's pretty obvious why they chose Macau. China has the biggest middle class with the highest disposable income, so of course any tech company is going to cater to them more and more in the future.
And realistically, both HBM2 and GDDR5X aren't ready for mass production (or are just starting it), so for now both vendors are sticking with existing, relatively cheaper parts (kind of iffy in the 1080's case).
In the near term, the point of Polaris 11 is lower power. Yes, you can get that level of performance today, but getting it in 50 W instead of 100 W or whatever is a huge deal in a laptop. Eventually, it will be cheaper to build a Polaris 11 GPU than a comparable GPU on 28 nm, but we're likely not there yet.
I just want to point out that people are grossly exaggerating the importance of memory bandwidth. It hasn't been a real bottleneck for a long time. It's kind of like a water pump and a hose. If you have a pump that can only do 500 gallons a minute and a hose that's capable of 600, then you have a bottleneck. However, if you have an 800-gallon-per-minute pump and a 600-gallon hose, then that extra 200 is essentially wasted.
That's not the best analogy in the world but it gets the point across.
Game engines and such have been using very advanced compression algorithms and tricks for textures, to the point that the need for memory bandwidth has been heavily reduced. There are already tons of cases where people overclock the memory on a card by 15-20% and don't see statistically significant FPS increases.
It's simply one of those things that, as far as gaming goes, just isn't a bottleneck on any of the medium- to high-end cards.
Now, moving into commercial- and business-grade stuff, where people are going to be moving massive datasets in and out of memory constantly, that's a whole different ball game. Which is why Nvidia is going balls deep on Pascal with HBM2 at full bandwidth. That's an area where it actually is relevant.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
AMD doesn't have to beat nVidia - the fastest card isn't what brings in the bulk of your revenue, it's just a marketing bullet point for the most part. It's what you have in the <$200 market that brings home the bacon. That's why AMD is still competitive in the GPU arena, not because they have the faster cards, but because they are very aggressive on their performance vs. price points.
And, if anything, Polaris is exactly targeted at that - not raw performance, but performance vs. price points.
Whether memory bandwidth is the bottleneck depends on what you're doing. Assuming you can get perfect coalescence, if 1% of the data you use has to come from or be written to global memory, that's a huge bottleneck on most GPUs. If you're being stupid and ignoring memory coalescence, replace 1% by 0.03% and the statement is still true.
So I should probably explain memory coalescence. If you touch global memory at all (whether read or write) on a GPU with GDDR5, you have to access 128 bytes. If you only wanted 4 bytes, it's as expensive as if you needed the whole 128 byte chunk that it is part of. You can't just grab an arbitrary collection of 128 bytes, either, as there are alignment requirements. You can grab bytes 0-127, or bytes 128-255, or whatever. If you want bytes 64-191, you're reading in 256 bytes, as that means you need both 0-127 and 128-255.
The way this usually works is that a single thread only needs four bytes, but you try to have 32 consecutive threads read consecutive chunks of four bytes to use the whole 128 bytes together. Or eight bytes or sixteen bytes or whatever per thread. If you read the same memory repeatedly, it can get cached in L1 and L2 caches for you to avoid having to go all the way to global memory.
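A toy model can make that coalescence cost concrete. This is just an illustration of the 128-byte-segment rule described above, not actual GPU machinery: count how many distinct aligned segments a warp's accesses touch.

```python
SEGMENT = 128  # aligned transaction size in bytes, per the rule above

def segments_touched(byte_addresses):
    """Number of distinct aligned 128-byte segments these accesses hit."""
    return len({addr // SEGMENT for addr in byte_addresses})

# 32 threads each loading 4 consecutive bytes: one segment, fully coalesced.
print(segments_touched([tid * 4 for tid in range(32)]))    # 1

# A misaligned run over bytes 64-191 spans two segments, so 256 bytes move.
print(segments_touched(range(64, 192, 4)))                 # 2

# 32 threads striding 128 bytes apart: 32 segments, 32x the memory traffic.
print(segments_touched([tid * 128 for tid in range(32)]))  # 32
```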
In graphics, you should be able to get pretty good coalescence in reading stuff in at the start of vertex shaders. With texture accesses and writing out the final depth and color, you get substantially less good coalescence. If you want to see a GPU slow to a crawl, make a huge texture (loosely, much larger than L2 cache) and have each primitive grab a random-ish location from the texture without being lined up so that nearby primitives grab nearby texels.
Anyway, where your bottleneck is can easily vary within a game. You could easily be compute bound for some programs (in the graphics sense; a single game can use many graphics programs) and memory bound for others. Adding more memory bandwidth helps the latter a lot and the former not at all. So you can easily get situations where 10% more memory bandwidth gives you 5% more performance.
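That "extra bandwidth helps some passes and not others" effect is easy to model. A sketch with made-up pass times (the numbers are purely illustrative, not measured from any game): each pass runs at whichever of compute or memory is its bottleneck, and scaling bandwidth only shrinks the memory side.

```python
def frame_time(passes, bw_scale=1.0):
    """Total frame time when each pass runs at its own bottleneck.

    Each pass is (compute_ms, memory_ms at baseline bandwidth); scaling
    bandwidth by bw_scale shrinks only the memory side of each pass.
    """
    return sum(max(compute, memory / bw_scale) for compute, memory in passes)

passes = [(2.0, 1.0),   # compute-bound pass: extra bandwidth is wasted here
          (1.0, 3.0)]   # memory-bound pass: speeds up with bandwidth

base = frame_time(passes)           # 2.0 + 3.0 = 5.0 ms
boosted = frame_time(passes, 1.10)  # 10% more bandwidth: 2.0 + 3.0/1.1 ms
print(base / boosted)               # ~1.06: roughly 6% faster overall
```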
Once you move to non-graphical compute, requirements can vary a lot more wildly. You can have situations where your memory bandwidth usage is more naturally expressed in bytes per second than gigabytes per second. For example, bitcoin mining. You can also have situations where you can't even use 1% of your compute capabilities because you're so memory bottlenecked. For example, find sums of random-ish entries in a 1 GB array. Or you can have situations where the bottleneck should have compute, but someone writes stupid code and creates a huge memory bottleneck where there should have been none.
But the GTX 1080 isn't for compute. If you're paying $4000 for a Tesla card, you want the top end chip. That's what GP100 is for. The reason the GTX 1080 is going with GDDR5X and not GDDR5 is because Nvidia believes that it needs the extra bandwidth for graphics.
The note says that they're going to launch Bristol Ridge, so WCCF interpreted that to mean "Polaris". Which it isn't. It's possible that they'll launch both at the same time, but that's not what the invitation says.
wat?
Please do not respond to me. Even if I ask you a question, it's rhetorical.
https://www.techpowerup.com/222347/amd-to-launch-first-polaris-graphics-cards-by-late-may
http://videocardz.com/59808/amd-vega-gpu-allegedly-pushed-forward-to-october