Yeah, I know, it's obviously the chip burning up unnecessary energy until malabooga fixed it and now it's running smoothly. There's no possible way he took energy from the RAM or anything else, although they probably run on the same circuit. That's impossible, but it's much more probable that AMD just didn't optimize their chips properly, because, well, that's impossible to do.
Yet somehow they cannot write a simple program that optimizes their cards by checking power consumption vs. volts vs. clock speed. Yeah, that's a really hard program to make.
Well no, no energy would actually be taken from anything else in the device... That's not how electricity works.
Series circuits are voltage droppers/dividers; parallel circuits are current dividers (current doesn't drop, and it can usually be viewed as a constant, though it can be manipulated).
Multiple subsystems are powered individually from the power supply on parallel circuits, which is why there are 20-24 pins going to your motherboard, plus the 4 to 8 pins powering the CPU and the 6-12 going to your GPU. Of each connector, typically only two pins are data (Data+ and Data-, like USB). Current is drawn at the power supply; if the current gets too high, the safety system kicks in, a fuse or in rare cases a breaker. This is all paralleled, and there are multiple voltage levels to suit different microprocessor standards (CMOS, FET, MOSFET, TTL, etc.) and their varying voltages.
The more devices powered, the more total current is drawn from the PSU. If the circuits were all in series, the voltage drops along the chain would never leave enough voltage to exceed the threshold the semiconductors need to conduct.
A quick edit to add something; I found it amusing to think of an easy way to describe it.
Imagine a semiconductor is like a wall, and every 0.1 volt is a soldier sent to man a battering ram. The ram needed to break down the wall takes more men depending on the wall's strength. The enemy's wall is pretty tough: it needs a ram manned by at least 7 soldiers (about 0.7 V) in order to charge through. On their way through the castle walls they run into some resistance; some soldiers fret about what's ahead and might change sides. Some come through indecisive, though that's rare.
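To put rough numbers on the series-versus-parallel point above: this is a back-of-the-envelope sketch, and the device count and currents are made up rather than real PSU figures.
```python
# Rough illustration of why PC power delivery is parallel, not series.
# All numbers here are invented for the example.

rail_voltage = 12.0   # volts on the PSU rail
threshold = 0.7       # roughly what a silicon junction needs to conduct
n_devices = 30        # imagine 30 loads spread across the board

# Parallel: every load sees the full 12 V, and the branch currents simply
# add up at the PSU, which is why total draw grows with each device powered.
branch_currents = [0.5] * n_devices            # say each load draws 0.5 A
print(f"Parallel: each load sees {rail_voltage} V, PSU supplies {sum(branch_currents)} A")

# Series: the same 12 V would be divided along the chain (equal shares assumed
# here), so each device would only see a small fraction of it.
per_device = rail_voltage / n_devices
print(f"Series: ~{per_device:.2f} V per device, "
      f"{'above' if per_device >= threshold else 'below'} the {threshold} V needed to switch")
```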
So, yeah... PowerTune.
You gotta give the chips enough voltage to work - each chip is a little bit different, so you default to the minimum voltage that allows all chips to work. For many individual cards, that voltage may be higher than needed, and may be able to be tuned down, but it's very much a per-card thing. Yeah, it may result in extra power being drawn in some cases, but it guarantees that every card will work at default settings, and you have PowerTune to make sure you don't exceed your design maximum.
Could you do that automagically? Maybe you could devise a process that gives each card a custom firmware that declares what the optimum voltage for each individual die is. And then you need to track each firmware revision versus serial number of the GPU, so you can make sure firmware upgrades in the future contain the optimum custom voltage setting.
But is it worth the hassle and expense of doing so, or do you just burn a few extra watts that's well within the engineering capacity of the chip and cooler, and make it a generic number across the entire line?
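If someone did want to automate that per-card tuning, it would presumably look something like the sketch below: search for the lowest voltage that still passes a stress test, then record that value (plus a margin) for the individual card. This is purely hypothetical; run_stress_test, the millivolt range, and the margin are placeholder names and numbers, not anything AMD actually ships.
```python
# Hypothetical factory-calibration sketch: find the lowest stable voltage for
# one particular die, then record it for that card's firmware/serial number.

def run_stress_test(voltage_mv: int) -> bool:
    """Placeholder for a real burn-in suite; True if the card makes no errors."""
    raise NotImplementedError

def find_min_stable_voltage(vmax_mv=1150, vmin_mv=900, step_mv=10, margin_mv=25):
    best = vmax_mv
    v = vmax_mv
    while v >= vmin_mv and run_stress_test(v):
        best = v              # lowest voltage seen so far that still passed
        v -= step_mv          # keep stepping down until the card misbehaves
    return best + margin_mv   # add headroom for temperature and aging

# The result would then have to be written into that specific card's firmware
# and tracked against its serial number, which is exactly the hassle described above.
```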
Really, dude? I mean, all he freaking did was tweak the voltage a bit until he found something that didn't waste all the energy going into the card. You're telling me it's very hard to build that into the driver, to check the frequency against the power consumption and voltage? You don't have to build firmware for that; it's just changing numbers in an existing driver.
Did Malabooga have to rebuild his firmware to do what he did? So why the heck are you thinking the factory would have to rebuild firmware for something this simple?
You will lose performance because you have cut power that the card needs to run at its full potential. Back to my example: you put a 12 V battery in a 24 V power tool. It runs, but half as well as it would with full power. Unless somehow the GPU doesn't need the power, you are basically underpowering the GPU.
If it's stable, changing the voltage changes power consumption, but doesn't change performance. The drawback of setting the voltage too low is that it doesn't work right anymore.
However, for chips set at a fixed voltage, it's common to be set at a higher voltage than the chip actually needs. Not all chips need the same voltage to get a given clock speed, and they set it high enough that most chips can function properly. In some cases, tinkering with voltages yourself to see how low you can set it on the particular chip you have while remaining stable can save you 5% or 10% or whatever of your power consumption without affecting performance.
Stability actually depends on voltage, temperature, and clock speed. If you know that you keep your hardware cooler than most people, that will let you get away with a (slightly) lower voltage at the same clock speed. And the stock settings tend to be pretty conservative, because if someone has a poorly ventilated case that makes his video card get hot and crash as a result, he'll probably blame the GPU vendor, not himself; people who would blame themselves are much less likely to make that mistake in the first place.
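For a rough sense of the numbers: dynamic switching power scales roughly with voltage squared times clock speed (P ≈ C·V²·f), so trimming the core voltage by about 5% at the same clock cuts power by roughly 10%, provided the chip stays stable. A quick sketch with invented figures:
```python
# Back-of-the-envelope: at a fixed workload, dynamic power scales roughly as
# V^2 * f, so a modest undervolt at the same clock saves power, not speed.
# The 150 W and voltage figures below are invented for illustration.

def dynamic_power(p_ref, v_ref, v_new, f_ref, f_new):
    return p_ref * (v_new / v_ref) ** 2 * (f_new / f_ref)

p_stock = 150.0                    # watts at stock settings (illustrative)
v_stock, v_tuned = 1.150, 1.090    # volts; clock speed unchanged

p_tuned = dynamic_power(p_stock, v_stock, v_tuned, f_ref=1000, f_new=1000)
print(f"~{p_stock - p_tuned:.0f} W saved ({100 * (1 - p_tuned / p_stock):.1f}%) "
      "at the same clock, if the chip is still stable at the lower voltage")
```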
It is possible that the card was set to use more power than it should for maximum output. We are also talking about a card here, not just a chip, which involves fans, RAM, and the chip itself. The fans are possibly running slower, or the RAM isn't getting all its power either. So he could have found a sweet spot that AMD missed, but that's not easy to determine without doing a full analysis. Looking at the numbers, though, he dropped 25 W. That's too big a drop not to slow down something in that card.
No, sorry, but the chip was using a lot of unnecessary power. That happens sometimes, and since everything on the card uses its own power source, it didn't affect anything but the die itself when he tweaked the voltage.
If it's so easy, it sounds like AMD has a well-paying job for you. Or you could release it third-party, make yourself internet famous, and probably get hired by NVIDIA as a firmware engineer!
What exactly did I miss here? He changed the voltage a bit and saw he was still running at the normal frequency. What other steps were required for him to post those results?
Ok now go and make your driver or whatever do that automatically.
I think it's the ideas you guys presented in this thread that are wrong; otherwise AMD and NVIDIA would both make it happen. You've already got a program that reads and sets frequency, watts, and amps. It's not hard to get the same program to automatically make adjustments to the amps and read the output; I mean, you are already reading them with a program. Set the parameters, and if needed make sure the user knows it will take all night running simulations to be fully optimized.
There's a reason messing with the numbers voids the warranty. There is probably so much risk involved they choose not to do it.
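For what it's worth, the awkward part of "just read the numbers and adjust them" isn't the adjusting, it's proving that a lower voltage is actually safe, because an unstable chip often returns wrong answers instead of crashing outright. Below is a hypothetical sketch of the overnight verification pass such a tool would need; compute_workload and reference_output are placeholders, not real driver calls.
```python
# Sketch of the verification step an automatic undervolting tool would need.
# An unstable voltage frequently produces silent wrong answers rather than a
# crash, so results have to be checked, not just whether the run finished.

import time

def compute_workload():
    """Placeholder for a GPU kernel whose correct output is known in advance."""
    raise NotImplementedError

def reference_output():
    """Placeholder for the known-good result of compute_workload()."""
    raise NotImplementedError

def voltage_is_stable(hours: float = 8.0) -> bool:
    deadline = time.time() + hours * 3600
    while time.time() < deadline:
        try:
            if compute_workload() != reference_output():
                return False   # silent corruption: the card "worked" but lied
        except Exception:
            return False       # outright crash, hang, or driver reset
    return True                # survived the whole soak test at this voltage
```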
What? I'm done lecturing and clearing up misunderstandings here. It didn't have the positive effect I was aiming for. Keeping it constructive.
Here's a non-scholarly article to start with: https://en.wikipedia.org/wiki/Constant_current. Follow the references and read for a few hours?
So I could spend a few hours learning all those details and that will explain why you cannot simply make a program read the numbers and make changes accordingly? Now I'm beginning to see who has really been trolled in this thread.
Yes, exactly. Reading and studying the topic will do exactly that.
Simply put, the answer is that you can't program something that isn't designed to be programmed, and specifically not designed to be tweaked on the fly. If the machine is making constant changes, it is also making constant logic errors. The waveforms are never perfectly square; they wave around, and waving them around more makes it that much more difficult for the error-correction circuits. It's just not wise. It's not stable. That's why.
I didn't mean a program that would be constantly making adjustments. The stuff is already there; it's just a matter of reading the numbers and adjusting them. I mean, look, you download a program and start adjusting the GPU settings. It's not hard to make the program read the numbers and make the adjustments for you. The only difference is that instead of YOU typing the number, the computer is doing it based upon its readings.
Like I said, there's probably a reason they don't do it, which has to do with the lifespan of the card itself. I mean, look at the OP: he simply changed the volts and he's running more efficiently and at the same speed. How are we miscommunicating right now?
Lower the volts, look at the speed; if the speed isn't holding, raise the volts; repeat. This is what the OP claims he did.
Maybe if you lower the voltage down far enough, way down past zero, into anti-voltage, the TDP will flip flop, and we will have discovered cold fusion.
Or maybe we should have a discussion based upon a bogus claim from a random forum user that we cannot seem to reproduce or find anywhere else on the internet, which would explain why AMD and NVIDIA both aren't interested in that software I suggested.
We already tried having a discussion, but for some reason, we keep getting pulled back into this one-ring circus to stare at the clowns.
Now that wasn't so hard was it?
It's using unnecessary power in the sense that vehicles that carry rarely used safety equipment are carrying unnecessary weight and reducing their gas mileage as a result. You can usually get away without it, but if you need it, suddenly it's not so unnecessary anymore.
Or like how a crane can lift 2, 3, sometimes 4 times its load rating before it fails. They used a lot more materials and spent a lot more money in fabrication, but it's fail-safe at its intended load rating.
I'm really trying to understand this, and Google isn't helping me too much. But are we saying that in GPUs the current isn't constant? Or that the die itself has a fluctuating current?
Because with CPUs the current is constant, so what makes this different?
C stands for capacitance, not current. The current isn't constant.
Also note that if you power the graphics card from the 12 V rail (and you do), that doesn't mean all its components get 12 V on them, but you still give the card 12 V from your PSU.
Lowering the GPU's voltage within the chip's factory specifications doesn't affect its processing power, but in some cases it might affect its reliability if you set the voltage too low: the GPU might start to make calculation errors or even crash.
I guess that if you wanted to dynamically set the GPU voltage to the lowest still-safe level, you would constantly have to double-check (or rather check multiple times, at multiple voltage levels) all the calculations the GPU makes, and that would result in a significant drop in performance. The chip's age and temperature are also among the many factors that affect the minimum voltage required for a particular piece to operate safely. That's why the manufacturer sets the voltage higher than needed, to a safe value at which every single chip works properly regardless of age, temperature, etc. That doesn't mean that, at your own risk, you can't set your voltage slightly lower than the factory default and still stay safe for years.
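To put illustrative numbers on the 12 V point (real cards differ, and regulator efficiency is ignored): the card's regulators step the PSU's 12 V down to roughly 1 V at the die, so the current at the core is far higher than what the cable carries, and both swing with the load rather than staying constant.
```python
# Illustrative only: how the 12 V PSU rail relates to the ~1 V GPU core.
# Figures are invented; conversion losses in the VRM are ignored.

core_voltage = 1.0      # volts at the GPU die (roughly)
rail_voltage = 12.0     # volts delivered over the PSU connector

for label, core_power in [("idle", 20.0), ("gaming load", 180.0)]:
    core_current = core_power / core_voltage   # current into the die
    rail_current = core_power / rail_voltage   # current on the 12 V cable
    print(f"{label}: ~{core_current:.0f} A at the core, "
          f"~{rail_current:.1f} A from the 12 V rail")
```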
https://en.wikipedia.org/wiki/SpeedStep