So today I did a little experiment to see the effects. The test game was The Witcher 3, as it pretty much takes everything the GPU has to give. GPU: RX 480.
1. 1365 MHz, 1.15v (1.125 after vdroop) = ~165W
2. 1280 MHz, 1.15v (1.125 after vdroop) = ~157W
3. 1280 MHz, 1.055v (1.037 after vdroop) = ~125W
As is clearly visible, voltage has several times more impact on power consumption than frequency. So anyone with an AMD card who doesn't want to OC their card can reduce their power consumption significantly without any loss in performance, and is advised to play around a bit and undervolt their card; Wattman is a great way to do it, as it is built into the drivers.
Or OTOH, if you want to OC your card without touching voltages, go ahead, as the increased frequency alone will have a fairly small impact on power consumption.
(This is also true for CPUs.)
This is not possible on NVidia cards, as lowering voltages automatically reduces performance proportionally to the voltage reduction, any OCing automatically raises voltages/power consumption, and you need a third-party program to do it.
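To put rough numbers on that claim, here is a quick back-of-the-envelope check (a Python sketch) of the three measurements above against the usual dynamic-power approximation P ~ k * f * V^2. It ignores leakage/static power and board losses, so it is only a ballpark, and the voltages are the post-vdroop figures from the list.

    # Sanity check of the measurements above against dynamic power ~ f * V^2.
    # Ignores leakage/static power and board losses, so only approximate.
    settings = [
        ("1365 MHz @ 1.125 V", 1365, 1.125, 165),  # (label, MHz, volts, measured watts)
        ("1280 MHz @ 1.125 V", 1280, 1.125, 157),
        ("1280 MHz @ 1.037 V", 1280, 1.037, 125),
    ]
    base_label, base_f, base_v, base_w = settings[0]
    for label, f, v, w in settings:
        predicted = base_w * (f / base_f) * (v / base_v) ** 2
        print(f"{label}: measured {w} W, predicted ~{predicted:.0f} W")

The frequency-only change predicts ~155 W (measured 157 W), while the added voltage change predicts ~131 W (measured 125 W), so the squared voltage term is doing most of the work, as claimed above.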
Yeah, you gave it less power, making it use fewer watts. It's like using a 12V battery in a machine that wants to use 24V. Yes, it will work, but not as well as it should.
Your understanding of this is astounding, you should write a paper on it and show those NVidia/AMD/Intel engineers how deluded they are! rofl
You will lose performance because you have cut power that the card needs to run at its full potential. Back to my example: you put a 12V battery in a 24V power tool. It runs, but half as well as it would with full power, unless somehow the GPU doesn't need the power. You are basically underpowering the GPU.
Voltage is not the same thing as wattage. Rather, power (in watts) is voltage (in volts) times current (in amps).
If it's stable, changing the voltage changes power consumption, but doesn't change performance. The drawback of setting the voltage too low is that it doesn't work right anymore.
However, for chips set at a fixed voltage, it's common to be set at a higher voltage than the chip actually needs. Not all chips need the same voltage to get a given clock speed, and they set it high enough that most chips can function properly. In some cases, tinkering with voltages yourself to see how low you can set it on the particular chip you have while remaining stable can save you 5% or 10% or whatever of your power consumption without affecting performance.
Stability actually depends on voltage, temperature, and clock speed. If you know that you keep your hardware cooler than most people, that will let you get away with a (slightly) lower voltage at the same clock speed. And the stock settings tend to be pretty conservative, as if someone has a poorly ventilated case that makes his video card get hot and crash as a result, he'll probably blame the GPU vendor, not himself. Because people who would blame themselves are much less likely to make that mistake.
Most RX 480s have a default voltage of 1.15v regardless of their actual frequency, even the reference card that runs at a max of 1266 MHz (but it hits another limit, the power limit, so it actually runs at 1180-1190 MHz).
That's why people who undervolted their reference cards saw a pretty good improvement in performance WITH reduced power consumption: with reduced voltage the card used much less power, so it wasn't hitting the stock power limit and ran a constant max boost of 1266 MHz. Another effect was reduced fan noise and temperatures, as it had to dissipate less heat.
Another way to hit a constant 1266 MHz was raising the power limit, but that just raised temperatures/fan noise, so it's clear which way is preferable.
So yeah, in some circumstances reducing voltage can even improve performance lol
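For anyone wondering how undervolting can raise sustained clocks, here is a toy model of a boost algorithm that caps the clock at whatever fits inside a fixed GPU power budget, assuming chip power ~ k * f * V^2. The power limit and the constant k below are made-up illustrative values, not AMD's real firmware numbers.

    # Toy model: sustained clock = highest clock the power budget allows, capped at max boost.
    # POWER_LIMIT_W and k are illustrative values, not AMD's actual numbers.
    POWER_LIMIT_W = 110.0
    MAX_BOOST_MHZ = 1266.0

    def sustained_clock(volts, k=0.072):   # k in W per (MHz * V^2), chosen for illustration
        allowed_mhz = POWER_LIMIT_W / (k * volts ** 2)
        return min(MAX_BOOST_MHZ, allowed_mhz)

    for v in (1.150, 1.055):
        print(f"{v:.3f} V -> sustained ~{sustained_clock(v):.0f} MHz")

With these numbers, at 1.15 V the budget runs out below max boost (the card throttles), while at 1.055 V the same budget is enough to hold 1266 MHz, which is the behaviour described above.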
It is possible that the card was set to use more power than it should for maximum output. We are also talking about a card, not just a chip here, which involves fans, RAM, and the chip. The fans are possibly running slower, or the RAM isn't getting all its power either. So he could have found a sweet spot that AMD missed, but that's not easy to determine without doing a full analysis. But looking at the numbers, he dropped 25W. That's too much of a drop not to slow down something in that card.
You are the one saying that AMD engineers didn't know how to properly volt the card and you figured out a better way. You made that claim not me.
You really think they test each card individually and make a custom BIOS for each and every card?
NVidia tied their power management and algorithms in such a way that you cannot decouple them, and even if you could lower the voltage without loss of performance, you can't: lower the voltage and you lose performance whether it's warranted or not, because the card will run at whatever NVidia programmed it to. And as you can see, engineers and statistics are far from perfect. AMD didn't; they gave you the ability to fine-tune your card to your heart's delight lol
This is pure gold lol. 25W? Basic math much? lol
Guess what, that voltage doesn't affect ANYTHING on the card except the GPU itself. You don't even have the most basic understanding of how a graphics card works lol
OK, you want to play this game, I guess. You have somehow dropped 27W without losing any performance. Yeah, keep telling yourself that. You are using ALL the power that the card is accounted for, the power that supplies the entire card, not just the GPU processor. You're telling me that your card is running 157W for just the processor alone? No, look at the specs. The entire card uses 157W, and that includes the fans. Look, man, if you buy the card you are supposed to expect 150W power consumption from it. You know, that's how you decide what power supply you need.
Now let's talk about your misunderstanding of voltage. Right now the plug in your house has 115V of power supplied, just sitting there. It doesn't get used unless you plug something into the socket. Then when you begin to use the power, it begins to use watts of power at 115V. If you lower the voltage, then you lower the amount of power that can be drawn out, which is why your watts went down. Because you have limited the power that can be drawn out. Which shows that something in the card is trying to draw 27W of power that you have cut off.
I won't educate you (as we have all learned that it's completely pointless to even try), and I've already told you not to embarrass yourself, but you insist on doing so; well, it's hilarious anyway, so we can all get a good laugh at least lol
Let's talk about how computer chips actually work at a high level. And yes, I'm glossing over a whole lot of details here.
At a given moment in time, there will be a number of places on the chip that have a logical 0 or 1. This is represented by positive or negative net charges. Positively charged protons in metal don't move around very much, but negatively charged electrons sure do. So they can change whether the region is a 0 or a 1 by moving electrons in or out.
In order to do computations, you have to make it so that whether a given region has a 0 or 1 at some moment in time depends on whether particular other regions had a 0 or 1 at a previous moment in time. A very simple case is just moving a 0 or 1 around from one spot to another. A little more complicated is basic logic circuits, such as that one bit can be the xor of two previous bits. That is, a region should have a 0 if two particular other regions were the same (both 0 or both 1) at a previous time, or a 1 if the other two regions were different.
So the chip can do various things to move electrons around accordingly. Note that this requires information getting to one region from multiple other regions, so it's not just moving electrons in or out of a region in isolation. If you have to find out about information halfway across the chip, it takes a while for the electrons to get there.
But you don't want to take all day to xor two bits together. You want to finish it so that you can move on and do the next computation that uses the result of the previous one. What is customarily done is to assume that the result of a computation will be done within some amount of time so that we can start using its output for the next. That amount of time can be the length of a clock cycle, though there are technical reasons why this isn't necessarily the case, from multiple clock domains on a chip to wanting to do some sequence of operations in a single clock cycle.
At a higher clock speed, you have less time for each clock cycle. So long as the electrons shuffle around to where they're supposed to be in time, everything works and it just means higher performance. But if you set the clock speed too high, you're assuming that electrons will be in position sooner than they actually get there, and so it doesn't work. If you set the clock speed lower than necessary, it still works, at least up to some limits as the portions of a chip don't magically hold their charges forever, but you get reduced performance.
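To put a number on how tight that window is, here is a trivial sketch using the clock speeds mentioned earlier in the thread (period is just 1/frequency):

    # Time available per clock cycle at a few of the clocks discussed above.
    for mhz in (1266, 1280, 1365):
        print(f"{mhz} MHz -> {1000.0 / mhz:.3f} ns per cycle")

Going from 1266 MHz to 1365 MHz means every signal has to settle in about 0.73 ns instead of 0.79 ns; a difference of well under a tenth of a nanosecond decides whether the chip works.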
Voltage comes in via Ohm's Law: V = IR. That is, voltage is current times resistance, or more appropriately to this discussion, I = V/R. If a given wire carrying electrons to move them around has a given resistance, at a higher voltage, you get a higher current--that is, the electrons get where they're going faster. Getting the electrons where they are going faster means that you can shorten the time of a clock cycle and have it still work. That is, you can clock the chip higher, which gets you more performance.
Temperature matters, too, as conductors tend to have higher resistance at higher temperatures, at least within the normal operating range of a computer. Thus, at a higher temperature, you need higher voltage to get the same current as before and maintain the same clock speed. Or perhaps rather, you accept a lower current and have to reduce the clock speed to give charges more time to move around. This is why better cooling allows higher clock speeds, and extreme cooling such as liquid nitrogen allows extremely high clock speeds.
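To give a feel for the size of the temperature effect, here is a rough illustration using the textbook temperature coefficient of bulk copper (about +0.39% resistance per degree C). Real on-die timing depends on transistor behaviour at least as much as on wire resistance, so treat this strictly as a ballpark:

    # Relative resistance of copper vs. temperature: R(T) ~ R_20 * (1 + alpha * (T - 20)).
    ALPHA_CU = 0.0039   # per degree C, bulk copper

    def rel_resistance(temp_c, ref_c=20.0):
        return 1.0 + ALPHA_CU * (temp_c - ref_c)

    for t in (40, 60, 80):
        print(f"{t} C: ~{rel_resistance(t):.2f}x the 20 C resistance")

So metal at 80 C is roughly 23% more resistive than the same metal at room temperature, which is part of why cooler chips tolerate lower voltages or higher clocks.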
And then quantum mechanics gets in the way. You can't actually decide, I'm going to move exactly 73 electrons from here to there to have exactly this charge. Or rather, if you decide that, physics probably won't cooperate with your decision. There are random errors in how many electrons get moved where. So long as the nominal charge for a 0 is far enough apart from the nominal charge for a 1, being off by several electrons is a small enough error that it can still figure out whether it's supposed to be a 0 or a 1 and it works.
But if stuff isn't getting there fast enough, this can break down, as what should have been a 0 ends up as closer to the nominal charge for a 1 or vice versa. The error propagates as the wrong output to one computation becomes the wrong input to the next, and then it and everything that depends on it can end up as random junk. And then you end up with impossible results that crash the computer or otherwise cause things not to work.
You can't ever eliminate the possibility of errors entirely, and with billions of transistors on a chip and billions of clock cycles per second, you'd better make sure that the probability of any particular bit being in error is awfully small in order for the expected time per error to be long enough that users will tolerate it. Having a bit error show up somewhere on the chip once per century is okay for most purposes, but once per second is not.
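The arithmetic behind that, with made-up but plausible chip numbers (the transistor count and clock are illustrative, and treating every transistor as an independent chance to fail each cycle is a simplification):

    # How small the per-bit, per-cycle error probability must be for a tolerable error rate.
    transistors = 1e10            # illustrative order of magnitude for a modern GPU
    clock_hz = 1.3e9              # ~1.3 GHz
    chances_per_second = transistors * clock_hz
    seconds_per_century = 100 * 365.25 * 24 * 3600

    for label, interval_s in [("one error per second", 1.0),
                              ("one error per century", seconds_per_century)]:
        p = 1.0 / (chances_per_second * interval_s)
        print(f"{label}: per-bit error probability must be below ~{p:.1e}")

Even the "acceptable" once-per-century case requires a per-bit, per-cycle error probability on the order of 10^-29, which is why even small shifts in voltage margin matter.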
But quantum mechanics isn't done getting in the way. Those tiny wires that are only tens or hundreds of nanometers thick aren't quite identical at an atomic scale. Having a few more copper atoms here and a few fewer there means that some wires have more resistance than others intended to be identical to them. More resistance means lower current, and so the clock speed has to be reduced to compensate for the inferior wires.
But remember, you've still got billions of transistors on a chip, and now if just one wire can't move charges fast enough, it causes errors, those errors propagate, and the whole thing falls apart. So you have to set the clock speed to be slow enough that every single wire on the entire chip can move its data around fast enough.
How fast a signal the worst wire on a chip can handle varies by random chance from one chip to the next. That's why some can handle higher clock speeds than other, nominally identical dies. Chip vendors tend to try to be conservative, and pick clock speeds and voltages that most of their chips can handle at the temperatures that their customers will run the hardware at. They don't truly plan for a worst case scenario on temperatures, but they do plan for the worst case that a not really all that unreasonable customer might plausibly inflict on their hardware, and it needs to still work in that case.
If you've got better cooling than the chip vendor plans for, you keep your chip cooler, and that means reduced resistance. That allows a lower voltage to still give the necessary current. Alternatively, if you simply got relatively lucky with which chip you were given, so that the worst wire on the chip has lower resistance than on some other nominally identical chips that also work, you can set a lower voltage and still get the necessary current.
Chip vendors do bin their chips somewhat, but finding the exact limits of every chip depends on operating temperatures that they don't know. They don't know how hot your room will be, nor how good of ventilation your case will have. So they try to leave a considerable buffer with higher voltages or lower clock speeds so that the chip still works even if your cooling isn't very good.
You can adjust those clock speeds and voltages yourself to get closer to the limits of what the chip can handle before it starts giving errors at an unacceptably high rate. But if you push too close to the limits, you can make your system unstable. A given clock speed and voltage could run fine in one game and crash in another that pushes the chip harder, thus generating more heat, making it run hotter, increasing resistance, and thereby reducing current below what is necessary. Or something that works fine on a cold day might crash on a hot day. Having enough of a buffer to keep your system stable is a good thing.
And then while I said that the protons in metal don't move much, it's not actually true that they don't move at all. Electromigration can move atoms around. As a chip ages, unfortunate changes from electromigration can increase the resistance of particular wires, thus reducing the current flow at a given voltage. That means that you can't have as high of a clock speed as before. It can also mean that if you had the clock speed and voltage close to what the limit of a chip can handle, and then the chip can't handle as much as before, settings that used to be stable no longer are. Stock settings are intended to have enough of a buffer to remain stable for years, in spite of the inevitable electromigration.
Reducing the voltage does reduce power consumption, though. Power is current times voltage, or P = IV. But remember that the current itself depends on the voltage, as I = V/R. Substitution yields P = V^2/R, and power is proportional to voltage squared. More generally, power consumption for an entire chip is roughly proportional to voltage squared. Reducing the voltage can thus drop power consumption considerably. The only downside to reducing voltage is that if you reduce it too far, the system crashes. Which is quite a downside, after all.
115 V is a voltage, not a power. The V stands for Volts. It can draw whatever power it needs by changing the current, as power is current times voltage, that is, P = IV. Well, at least up to the limit that the wires in your house can handle before the circuit breakers feel the need to intervene to stop you from burning your house down. But that's not a meaningful concern for most desktops, as you have to be pulling well over 1000 W from the wall for that to be an issue. But a desktop can readily pull 100 W from the wall one second, then 200 W the next, then back to 100 W right after that.
Besides, your power supply will convert that 115 V AC to +12 V DC before passing it along to the video card, and then the video card will convert that to the voltages actually needed by various chips on the card. Changing the voltage of a GPU changes that last conversion that happens on the video card itself, not the 115 V AC from the wall or +12 V DC from the power supply to the video card, at least up to slight changes in the voltage caused by changing current draw.
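One way to see what each conversion stage means in practice: for a fixed power draw, the current scales inversely with the voltage (P = IV). A small sketch, ignoring conversion losses and pretending all 125 W goes through the GPU core rail (in reality the memory, fans, etc. take their share, and the ~1.05 V core figure is just the undervolted value from earlier):

    # Current needed at each voltage stage for the same power (losses ignored).
    power_w = 125.0
    stages = [("wall outlet (AC)", 115.0), ("PSU +12 V rail", 12.0), ("GPU core (VRM output)", 1.05)]
    for label, volts in stages:
        print(f"{label}: {volts:g} V -> {power_w / volts:.1f} A")

About 1 A at the wall becomes over 100 A at the roughly 1 V core voltage, which is why the card has its own multi-phase VRM rather than running the GPU straight off the 12 V input.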
More volts will always be less watts in the math. Such as a 220VAC HVAC system consumes less power than a 120VAC one. But voltage is irrelevant in these cases. CMOS, MOSFET, and TTL are +3.3v positive, +1.3v positive, and +5v positive respectively (logic positive, not voltage); changing the voltage will not increase performance, in fact it will do harm, as the DC ripple can and will force a constant positive logic (all ones) if the output is too high (if this is occurring, the circuit drew too much heat and burned the semiconductors open). Then there's also a slew of other digital and electrical theory phenomena, such as propagation delay, and a natural 1.7V voltage drop per general-purpose diode; IC circuits have lower, as there is less of a variance gap between unlike particles (typically silicon and germanium, the main ingredients of a semiconductor -- diodes, transistors).
In terms of performance to power consumption, the first step would be to lessen the ripple from the AC-to-DC conversion of the incoming power, the simple fix being just to parallel more capacitors and configure zener diodes properly as voltage regulator circuits. Usually this is already resolved both on the motherboard and the power supply of the machine. At least I would hope the engineers designed them that way; you get what you pay for. The smoother the DC input, the more reliable the information and the less need for error correction checks.
Power supply rails distribute more than just a +12VDC. There's also a 5.5VDC, 3.3VDC, and 1.5VDC needed to operate modern motherboards.
Edit: To clarify my first statement "More volts will always be less watts" Yes, P=IE, but V=IR. Thus more EMF (electro-motive force, Voltage) means less current.
I've had two identical Asus 270-DC2OC cards (serial numbers one next to the other).
They have a default voltage of 1.215v, and I managed to change their p-states and put 1.1v for the highest-usage one.
And they worked perfectly for a time with a factory 50 MHz overclock compared to stock 270s on the most demanding of tests.
After a year or so, one of the cards started crashing in only some games. Then it started crashing constantly, so I went back, loaded the default BIOS, put it back to 1.215v, and it works perfectly to this day.
The other, even though I don't have it with me anymore, has been working for 3 years now @ 1.1v 950 MHz (50 MHz overclock) with reduced wattage output and no problems whatsoever.
Dude, what is your problem? You went from using 157W down to 125W. That is 27W less power used. The volts are not equal to the power, but the volts limit the power. You can't get as much power from 12V as you can from 24V, that is just how it works. You limited how much power the card is able to use by lowering the voltage. The card only uses the power that you let it use, and you put a block on it. So yeah, DUH, the power consumption is lower because you limited how much it can use. WTF are you talking about, basic math? You think volts = power consumption?
Watts = amps x volts
You decrease the volts and you decrease the watts. Watts is the amount of power used. Volts limits how much power can be used in this situation. He is changing the volts not the amps in this equation.
157 - 125 = 32.
Because there are no borrows, that's actually true in any numerical base, not just decimal.
Saying you can't get as much power, you're talking about power consumption, not performance.
And that claim isn't even true within the relevant range. 12 V at 10 A is the same 120 W as 24 V at 5 A. There are limits to how much current given hardware can safely handle, but those aren't relevant to this discussion as the GPU can handle what it needs to.
That is true, but Amps = Volts / Resistance. So you aren't just changing one variable when you change voltage.
It gets a lot more complicated in semiconductor math, though, because although it's a DC voltage, it has a lot more variables than just Ohm's and Kirchhoff's laws. There's an alternating component because of the clock frequency, and a capacitive component because of the nature of semiconductors.
In general in semiconductors, power is directly proportional to frequency, and proportional to the square of voltage, with a constant based on the specifics of the process node.
http://www.ti.com/lit/an/scaa035b/scaa035b.pdf
The speed of the semiconductor won't change based on voltage alone, unless you change the clock frequency. Adjusting the voltage downward will result in lower power consumption (which is measurable both via a power monitor, or via heat production), provided there is at least enough voltage there to make the physics work.
So when you say that all the power that's missing (34 or 27 or whatever watts) has to slow down the GPU, that's not true, because the clock didn't necessarily change. You will see a lower amount of heat production, and that's where all that missing power has gone. You have to have enough voltage to make the physics work, but beyond that, you're just cranking up the thermostat on your silicon heating elements.
I was about to edit the image post to say this exact thing.
Don't forget ELI the ICE man. In a capacitive circuit current leads voltage.
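To put rough numbers on the f x V^2 relationship from the semiconductor post a couple of replies up (a sketch only; the effective switched capacitance below is a made-up value picked so the output lands near the wattages quoted earlier in the thread, not a real RX 480 figure):

    # CMOS dynamic (switching) power: P ~ C_eff * V^2 * f, per the app note linked above.
    def dynamic_power(c_eff_farads, volts, freq_hz):
        return c_eff_farads * volts ** 2 * freq_hz

    C_EFF = 9.5e-8   # effective switched capacitance in farads; illustrative only
    for volts, freq_hz in [(1.125, 1.28e9), (1.037, 1.28e9)]:
        watts = dynamic_power(C_EFF, volts, freq_hz)
        print(f"{volts:.3f} V @ {freq_hz/1e9:.2f} GHz -> ~{watts:.0f} W switching power")

Same clock, roughly an 8% lower voltage, and the predicted switching power drops by about 15%; the measured drop in the first post was even bigger, since leakage and regulator losses fall too.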