AMD said recently the power issue is going to be fixed with a driver. Check the AMD subreddit for the source.
Seems like a really bad fix for a hardware design problem, which needs higher power delivery to the card (as in more connectors beyond the 6-pin).
It's not a hardware design problem; the card is designed to work at a 150 W TDP and it does that. That's pretty much how all reference cards are designed. It's only when you decide to overclock that it goes out of spec, but so do many other cards. It's really nothing new.
Doesn't seem so bad to me... Not sure what a better alternative would be for people that already bought the ref cards. If it's that big of an issue for you, just wait for the vendor cards.
An alternative would be to return the cards and get the money back until the new ones with more power connectors are released.
That's what I would do if I didn't cancel my pre-order already.
Everyone who bought one is within the return period right now.
I'm sorry, but overclocking is always done at the consumer's risk. Where has AMD said that the reference card is for overclocking? In fact I've said many times that if you want to overclock, skip reference cards, as they have neither the power delivery nor the cooling that's really up to the task.
A 64 W average and a not-so-pretty graph that goes above the spec pretty much constantly. Now add a factory OC and it's way above spec. Now add a manual OC and you're looking at a card that consumes 90-100 W. There are 750 Tis without an additional power connector and there are those which have it. The 6-pin power connector isn't there to look pretty; factory-overclocked and manually overclocked 750 Tis use > 75 W.
Moral of the story: if you want to overclock, don't buy reference cards or cards that don't have power connectors (when others in the same class have them, like the GTX 950 and GTX 750 Ti).
As the graph is about a no-connector 750 Ti, I pity those poor motherboards that might not last more than ~2 years (considering that the chance they're cheap boards, given someone bought a 750 Ti, is pretty much 100%).
The problem is they don't specify which 750 Ti those graphs are for, and they do mention the Gigabyte one on the previous page. That Gigabyte one has an additional 6-pin connector, which would make this graph irrelevant to say the least.
MSI Gaming, Palit StormX, even the reference card when overclocked... none have additional power connectors... and there are probably more; Asus comes to mind, probably EVGA on some models.
The kicker is that they all bragged about "no additional power connector" while they were achieving that by going over PCIe slot spec.
Same goes for the GTX 950 with no additional power connector (Asus; I don't know if any others released such cards).
I'm not in favor of a GTX 950 without a 6-pin connector or a heavily overclocked GTX 750 Ti, either. Those didn't get so much attention because they're low-volume SKUs that scarcely got reviewed at all, and certainly not with the high-profile reviews of a new card launch.
We can argue about how bad going out of specification is, but not about whether it is bad. Let's back up and take a principled look.
There are industry standards in many things, from PCI Express to SATA to USB to ATX and beyond. Industry standards are a good thing. Industry standards are what let you buy a CPU from Intel, a video card from Sapphire, a motherboard from MSI, an SSD from Crucial, memory from Corsair, a power supply from Seasonic, a monitor from Asus, and a game from Blizzard and reasonably expect that it will all work together. And still reasonably expect that it will all work together if you instead buy a CPU from AMD, a video card from EVGA, a motherboard from Gigabyte, an SSD from Samsung, memory from G.Skill, a power supply from XFX, a monitor from Dell, and a game from NCsoft. Or mix and match parts however you please, including many other options for many components, subject to a relative handful of compatibility restrictions (e.g., correct processor socket and memory standard).
Industry standards allow for competition. If you don't like what Corsair does with memory, you can buy G.Skill. If you don't like G.Skill, you can buy Mushkin. Or Patriot, Kingston, or a lot of other brands. Industry standards, where following the standard means your part works with everything, greatly lower the barriers to entry. That means more competition, lower prices, and things more likely to just work. Those are all good things.
Just because most or even all of the existing hardware you need your component to be compatible with does X doesn't mean it's okay to ignore the standard and make your part compatible only with hardware that does X. Even if it mostly works, you've now introduced some non-standard hardware into the system. Someone in the future could make hardware that follows the standards perfectly but doesn't do X, and now your hardware isn't compatible with theirs. And it's 100% your fault, not theirs, and your customers suffer for it. Thinking it's okay to ignore standards if you can get away with it is how we got the debacle of Internet Explorer 6-8, among other things, and everyone suffered for that, not just its users.
Malabooga's chart on a 750 Ti shows power draw at 1 ms resolution. The reason you have these massive vertical lines is that within a second, power draw can fluctuate wildly. But when you're pulling too much power, the question of how much and for how long makes a huge, huge difference. 100 W coming in for 1 ms is not at all similar to 100 W coming in for an entire second, which is itself not similar to averaging 100 W for an entire hour. The last one can be catastrophic if you've only got a PCI Express slot. The first is worth a shrug. There's only one brief moment in the entire run where the power draw stayed over 75 W for as long as 1/3 of a second, and the 64 W average is well in spec.
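To make that spike-versus-average distinction concrete, here is a minimal sketch with entirely made-up numbers (nothing below comes from any review's actual data): it generates a fake 1 ms power trace and compares the worst-case average over windows of different lengths.

```python
# Sketch: the same spiky 1 ms power trace looks very different depending on
# the averaging window. All numbers are invented for illustration.
import random

random.seed(42)
# Fake 10-second trace at 1 ms resolution: ~60 W baseline, brief ~100 W spikes.
trace_ms = [60 + (40 if random.random() < 0.05 else 0) + random.uniform(-5, 5)
            for _ in range(10_000)]

def worst_window_average(samples, window):
    """Highest average over any contiguous run of `window` samples."""
    total = sum(samples[:window])
    worst = total
    for i in range(window, len(samples)):
        total += samples[i] - samples[i - window]
        worst = max(worst, total)
    return worst / window

for label, win in [("1 ms", 1), ("100 ms", 100), ("1 s", 1000)]:
    print(f"worst {label} average: {worst_window_average(trace_ms, win):.1f} W")
# The worst 1 ms "average" is just the tallest spike, far above the longer
# averages -- which is why a millisecond excursion past 75 W and a sustained
# draw past 75 W are very different problems.
```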
I'm not sure how much is too much, as I'm not an electrician. But I do know that the 120 V supply from the wall will routinely run as high as 170 V for milliseconds at a time. Yes, yes, there's a difference between AC and DC, and AC spikes up and down by a factor of sqrt(2) over the nominal voltage.
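For what it's worth, that 170 V figure is just the AC peak of a nominally 120 V (RMS) supply:

$$V_{\text{peak}} = \sqrt{2}\,V_{\text{RMS}} = \sqrt{2} \times 120\ \text{V} \approx 170\ \text{V}$$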
But the standards are there for a reason, and violating them with your stock products is usually not okay, with only rare exceptions. I especially don't like seeing that happen on power delivery, as that can result in "hardware is dead", which can't be fixed by the next patch. If end users push things out of spec by overclocking, that's a risk they take, and if everything fries, that's on the end user. I'm okay with letting end users take risks, whether out of stupidity or deliberately, with high-quality components chosen to mitigate those risks. But it shouldn't be out of spec at stock settings. Especially when it's a simple fix to get back in spec.
Now, there are rare exceptions. For example, the PCI Express standard only anticipates boards that draw up to 300 W. If you give a board two 8-pin connectors and then pull 350 W, I'm okay with that, at least assuming that the power draw is distributed appropriately among the various connectors. If you're building a server or an HPC with eight boards on a node, 350 W each can make it awfully hard to cool, especially when you've got thousands of the servers or nodes or whatever in the room. That might be why the standard caps it at 300 W, though I don't know that. But for a single card in a desktop, cooling 350 W is not trivial, but it is doable if you work around it, in part because it doesn't mean you're pumping 1 MW into a room like you might if you tried an analogously simple setup in a data center. And it also doesn't make the card non-compliant with other hardware for subtle reasons.
I like to have parts just work too, but at least part of the fault is on the user when parts die and they didn't research the card before buying, or just ignored the problems rather than waiting for a fix. If the power issue were the only difference between reference and partner cards, I'd probably just buy the reference one, since the power problem should be fixed soon.
I had read that Polaris does still have PowerTune, that GCN 1.4 bumped it to include better control over voltage than it previously had, and that it includes a second boost clock above the base clock that the card can ramp up to, bringing it more in line with Nvidia's Boost. And they now allow much more control over PowerTune in the latest driver (at least according to screenshots in the HardOCP review; I don't have an AMD card to verify).
Which means... AMD probably knows exactly what they are doing with this. The power issue could easily be resolved by just adjusting PowerTune parameters, but that would come at the expense of performance. And these reference cards just aren't made for overclocking at all (which should surprise exactly no one).
Source: https://www.reddit.com/r/Amd/comments/4qoclm/german_site_explores_the_potential_for/
Pulling more power than the board reference design is actually fairly normal. It is just rarely noticed. It is WAY more normal than you think. And the reference power limits on the connectors are just that: reference only. Most boards (except the super cheap ones) can actually pull 2x to 3x the nominal power through the connectors safely.
Some people have been undervolting the card and reducing power draw by up to 34 watts while sticking to default clocks. Although undervolting might make some cards unstable, so results may vary.
I'm sorry, but the 750 Ti was a very popular card, especially the AIB ones, and especially the ones with no additional power connector, precisely because they had no additional power connector. Same goes for the GTX 950.
So how did it become such a problem only now? It has been out there for years. And yes, they can "fix" it by limiting power options in drivers, and that's about it. The card doesn't really go out of spec at stock, just when overclocked.
Or possibly the current drivers are not good enough to properly handle the newest GPU, which just came out. The current driver version is about a month old; I'm not sure it's capable of working well with a GPU that is one day old.
Like Quizzical said earlier in a couple of posts, handling the issue in the driver is not the optimal fix. It means they need to clock down the card. Benchmarks and reviews were done with the out-of-spec card showing better performance than what you'll get once the driver fixes it. So you've bought a card based on advertised performance that it won't deliver.
The best thing to do is return the reference card, if possible, and buy a partner card with better power delivery and cooling.
That's actually not true; the card doesn't go out of spec at stock, just when overclocked, and Nvidia cards have been doing that for years now with no problems whatsoever.
Wasn't the whole deal that the 480 was actually going out of spec at stock, without any overclocking whatsoever?
That is what I gather - Tom's, which was the first to report it as far as I can tell, found power readings out of spec at stock clocks and voltages.
It could be that AMD sent out special "review" samples that had additional overclocking on them (it wouldn't be the first time that has happened), and only those review boards are affected, but I don't have anything to confirm that.
I certainly agree, if you overclock, you are taking responsibility for the consequences. But the card should conform at stock.
As seen on the graphs (non-OC and OC), when not OCed it stays below 75 W the vast majority of the time with occasional spikes above, but the 960 had constant spikes in the range of 200-300 W and that was dismissed as "harmless"... yeah. The article that sums it up: http://wccftech.com/amd-rx-480-pcie-power-issue-detailed-overclocking-investigated/
Good read. I agree with you completely that with these reference cards, one would be best off waiting two weeks and buying a partner board. The board partners should have the power delivery issues fixed through their respective BIOSes. I did want to add something that you left out, however. People have been having great luck with undervolting these cards. It can drop power usage by 30+ watts AND provide much more overclocking headroom on reference boards. What are your thoughts on this? It leads me to the conclusion that binning might have been an issue at GlobalFoundries, and in order to maximize supply, AMD increased the voltage on these Polaris chips.
The industry standards are good guidelines, I agree.
However, some standards can be rather flexible, while others already give you the actual maximums.
@Quizzical Consider this: if you look at the power connectors (Molex connector specs) you can see that the 75 W limit imposed by PCIe on the 6-pin is very conservative. It's fine to go over.
The theoretical maximum the 6-pin can provide is: 12 V * 11 A * 2 = 264 W
The theoretical maximum the 8-pin can provide (where pin 2 is also a 12 V line): 12 V * 11 A * 3 = 396 W
The reason the PCIe spec lists a maximum value of 75 W is the maximum current the PCB traces can receive from the connector. The PCIe spec assumes it's ~3 A per pin:
12 V * 3 A * 2 = 72 W
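The same arithmetic as a quick sketch (the pin counts and the 11 A / ~3 A per-pin figures are taken from the post above; treat them as this post's assumptions, not official numbers):

```python
# Connector capacity under different per-pin current assumptions.
# 11 A/pin is the Molex figure cited above; ~3 A/pin is the conservative
# assumption attributed here to the PCIe spec.
def capacity_watts(pins_12v: int, amps_per_pin: float, volts: float = 12.0) -> float:
    return volts * amps_per_pin * pins_12v

print(capacity_watts(2, 11.0))  # 6-pin at the Molex maximum: 264.0 W
print(capacity_watts(3, 11.0))  # 8-pin at the Molex maximum: 396.0 W
print(capacity_watts(2, 3.0))   # 6-pin at the conservative ~3 A: 72.0 W
```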
It's the card manufacturers who create the cards' circuit boards, and as long as their PCB can handle more power from the connector, there is nothing wrong with using it. Technology has advanced enough already that card manufacturers list a maximum of 75 W only to show PCIe compatibility, not because they don't pull in more power.
Having explained all that, the issue with the newest Radeon RX 480 is the fact that it draws more from the PCIe slot. The traces providing current to the PCIe slot are NOT something graphics card manufacturers can control, and this is the real reason why this card is so dangerous. You are basically relying on your mobo manufacturer to make up for AMD's mistakes. If your motherboard can't handle the current, it'll either shut down your PC or fry.
The reason why the Radeon draws so much from the PCIe slot is that, for some very strange reason, its PCIe and connector power intakes are linked. Usually what graphics cards do is keep PCIe slot power at a sensible level and simply increase the power drawn from the connector.
For the Radeon RX 480 it seems it's impossible to increase one without increasing the other, which is puzzling to say the least, since previous AMD cards handled it fine.
Not sure a new driver can change that; it seems like either a faulty architecture or a faulty BIOS.
Sources:
Molex specs: http://www.tomshardware.co.uk/forum/274631-28-power-spec-power-plug#2674391
AMD power intake:
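To illustrate why linked intakes matter, here's a toy model (the split ratios and wattages are my assumptions for illustration; the RX 480's actual power controller behavior isn't documented in this thread):

```python
# Toy model: how much of the total board power lands on the PCIe slot
# depends entirely on how the card splits its draw.
SLOT_LIMIT_12V = 66.0  # W on the slot's 12 V pins, per the figure discussed later

def slot_draw(total_watts: float, slot_fraction: float) -> float:
    """12 V power pulled through the slot for a given split."""
    return total_watts * slot_fraction

for total in (110.0, 150.0, 165.0):
    linked = slot_draw(total, 0.50)   # assumed RX 480-style ~50/50 split
    biased = slot_draw(total, 0.35)   # assumed typical split biased to the 6-pin
    print(f"{total:5.0f} W total: linked {linked:5.1f} W, "
          f"biased {biased:5.1f} W (slot limit {SLOT_LIMIT_12V} W)")
# With a fixed ~50/50 split, raising total board power (e.g. overclocking)
# drags the slot past its limit; biasing the extra draw onto the connector
# keeps the slot in spec.
```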
I'm sorry, but Asus, Gigabyte, MSI, and EVGA produce motherboards as well as graphics cards (Sapphire used to as well).
1) We are not talking about Asus, Gigabyte, MSI, and EVGA graphics cards here. They'll probably release cards with their own board designs and their own BIOS tweaks, which might fix those mistakes.
2) The fact that you make motherboards doesn't mean you can predict that the GPU manufacturer will go out of spec and push the PCIe slot beyond its limits on his graphics card.
Hell, I imagine most of the time the 480 will sit in cheap motherboards, since it's not a high-end card. And those motherboards are usually cheap because they save every dollar possible, which in turn means the circuits on them won't be able to reliably handle higher currents.
3) In your previous post higher up the page you falsely claim that the max intake of the non-OC card rarely exceeds 75 W. You need to look at a few additional details!
The white line you see oscillating around 75 W is the 12 V PCIe line, which should not exceed 66 W. There is also a 3.3 V PCIe line on that graph (the red one) which contributes to the total power and is perfectly fine, not exceeding specs.
You might argue, 75 W against 66 W - it's only ~14% above the spec. Sure...
It's like driving a 75 tonne truck on a bridge that has warning signs "Max 66 tonnes".
If (that's the case of the power connector and graphics card PCB) you were the one building that bridge and you know it can easily handle a 75 tonne truck, sure, go ahead and drive.
If (that's the case with the mobo PCB from various manufacturers) you were not the one building that bridge and have no idea if the bridge will be able to handle 75 tonnes, you are not going to risk your life. Or at least shouldn't.
Just to add, PCPer is interpreting the data differently; they likely got the same results but filtered them to show a "more readable way that better meshes with the continuous power delivery capabilities of the system".
This is fine if you want to know how much you will pay on your electricity bill, but potentially dangerous if you intend to calculate the capacity of your PSU or, as in this case, the PCIe slot.
1. https://www.overclockers.co.uk/pc-components/graphics-cards/amd/radeon-rx-480 - Asus/Gigabyte/MSI cards are there.
2. You are both a GPU and a mobo manufacturer. And even more budget cards like the 750 Ti/GTX 950 with no power connectors have done it for years now, and those are usually paired with the worst/cheapest budget motherboards.
3. The spec on PCIe is 5.5 A +/- 8%, so 5.94 A = 71.28 W (the arithmetic is worked out below).
4. Bridges are built for much more than a slight increase. Do you even have any idea how bridges are stress-tested before they're put to use? Pretty much everything has safety margins that go way over the rated working spec.
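Point 3's number, written out (assuming the 5.5 A ± 8% tolerance figure is right):

$$12\ \text{V} \times (5.5\ \text{A} \times 1.08) = 12\ \text{V} \times 5.94\ \text{A} = 71.28\ \text{W}$$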
Work = power × time. The concern is exactly the continuous power rather than some millisecond spikes. Your PSU and PCIe slot should be able to handle short power spikes, since the work done during a millisecond is negligible, and thus the heat is negligible too. No heat, no danger. The graphs represent a 10-second period, if I am reading them right, and they simply smoothed out the curve to eliminate high-frequency "noise".
PS: I discussed the PCPer data only. Tom's data shows exactly the same thing, just in a less useful form, since the sampling frequency presented makes the data unreadable.
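A rough sanity check on "the work done during a millisecond is negligible" (my arithmetic, using the 100 W figure from earlier in the thread):

$$E = P\,t: \quad 100\ \text{W} \times 1\ \text{ms} = 0.1\ \text{J}, \qquad 100\ \text{W} \times 1\ \text{s} = 100\ \text{J}$$

A millisecond spike dumps a thousandth of the heat into the slot's traces that a sustained one-second overdraw would.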
The bridge was a metaphor...
As far as point 3 goes, this just proves that even if you look at the best-case scenario (71 W), the RX 480 is still over the specification, since, as you said yourself, the card "stays below 75W vast majority of the time with occasional spike above".
That is not fine. It's not fine even if other cards, like the 750, did it too. Which, by the way, you disprove yourself, since on the previous page you posted a graph of the 750 averaging 64 W...
sacredfool said: Your PSU and PCIe slot should be able to handle short power spikes since the work done during a millisecond is negligible and thus the heat is negligible too.
That depends on how high those spikes go...
Switches, mechanical or electronic, have always spiked, and stuff works fine. It's a perfectly normal occurrence in electrical and electronic engineering.
The highest spikes on the 12 V rail (looking at Tom's now) have been only around 150 W. That is double the spec (the max being 71 W, as mala pointed out), but it's not dangerous at all, since the spec deals with continuous power rather than spikes.
Dude, that's a REFERENCE 750 Ti; there are plenty of custom OCed 750 Tis with no power connector out there, as well as GTX 950s; in fact their main selling point was "no additional power connector". And you don't have a clue what you're talking about (hint: "they only produce GPUs", when most of these manufacturers produce motherboards TOO; even arguing the point... ahem).
Yes, the bridge was a metaphor, and a good one to show that, yes, all things have a safety margin built in... surprise surprise.
And the 960 had constant spikes in the range of 200-300 W, and it spent most of its time spiking to 150 W.
And that was all fine until now; nobody paid much attention. And that's because it's a non-issue lol.