I can't see myself making the jump any time soon, especially with current prices. Perhaps if I was constantly moving large files around, or wanted to speed up the unpacking of huge games on Steam like GTA V, I would, but frankly those are rare occurrences. If, when the time comes to replace my SSD again, the prices are more in line, I might, but I wouldn't upgrade to one just for the sake of it.
Originally posted by Kabaal: "I can't see myself making the jump any time soon, especially with current prices. [...]"
Well, you gotta let the early adopters bring the price down. That is what decides what is mainstream or not. I sometimes become one of those early adopters, but on this one, even though I want it, I will wait.
He claimed that NVMe is just as cheap to build as a SATA drive, and therefore the price isn't fair. It isn't, though, since it features an expensive controller that isn't found on SATA drives now that it communicates over PCIe, which Intel conveniently pointed out.
Perhaps you missed the word "if" in the statement you quoted. Even if it's more expensive to build a PCI Express SSD today, that doesn't mean that it will always be the case. Today, most SSD controllers are built for SATA and if you want to use something else, you need an extra controller to convert it. But there are plenty of SSD controllers that will be built in the future, and if they expect PCI Express to be the universal standard, they might have a PCI Express controller built in instead of or in addition to the SATA.
I'm not saying that this or that will definitely happen. I don't know what will happen, as which is more expensive to build depends on some very low level things in ASIC design that I'm not familiar with. And it also matters how much more expensive to build: there's an enormous difference between adding $20 to the cost and adding a nickel to the cost.
Some people said SSD were pointless when they came out too. Boy, were those people on the wrong side of history. It will be the same with this.
Some of the early SSDs were pointless, or at least awfully close to it. 2 write operations per second is not enough for a useful device, as JMicron demonstrated.
People overclock to get a few more FPS, or use liquid cooling for a few degrees lower temps. These drives may not be mind-blowingly faster in real-world use, but they will be faster than SATA SSDs for sure, even if it's just a few seconds less loading. It all depends on what's worth it to you.
There is so much wrong with this post, I wouldn't even know where to begin. I see other people have explained it to you so I guess there is no need.
James T. Kirk: All she's got isn't good enough! What else ya got?
Let's break it down a bit:
For game loading, it can vary a bit by game, but usually the CPU is going to be the big bottleneck.
For transferring video from a camera, whatever cable you're using to do the transfer is probably the bottleneck. SATA 3 is already faster than USB 3.0.
For copying files, if it's over a network (especially the Internet, but even a LAN), the network is usually the bottleneck. For copying files within a single computer, the slower of the drives is the bottleneck, so in order to see any benefit, both drives would have to be PCI Express. And who spends meaningful time copying files from one SSD to another SSD within the same computer, anyway?
For backups, the backup medium is probably the bottleneck. This is definitely the case if you're copying from an SSD to a hard drive or thumb drive. If backing up over a network, the network may be the bottleneck. If you bought an SSD for pure backup purposes, you're doing it wrong.
For installing software, getting the software is probably the bottleneck, especially if you have to download it. To install from a DVD, the DVD is likely to be the bottleneck, though CPU work also could be.
For launching an OS or software, the CPU is usually going to be the bottleneck, though it does vary by software. A good SATA SSD can be the bottleneck for loading some software, but that's not typical.
Edit: for some of those things, if you're using a hard drive, the hard drive will probably be the bottleneck. That's why going from a hard drive to a good SSD is such a big deal. But my comparisons above are for comparing a good SATA SSD to some other, much faster storage.
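A rough way to check this on your own machine is to time nothing but the raw reads for whatever you're loading and compare that against the total load time. The Python sketch below (the directory argument is a placeholder, not anything from this thread) just walks a folder and measures read throughput; if the raw read time is a small slice of the observed load time, a faster drive won't move the needle much. The OS file cache will skew a second run, so results are only rough.

import os
import sys
import time

CHUNK = 4 * 1024 * 1024  # read in 4 MiB chunks

def read_tree(root):
    """Read every file under root and return the total bytes read."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    while True:
                        buf = f.read(CHUNK)
                        if not buf:
                            break
                        total += len(buf)
            except OSError:
                pass  # skip files we can't open
    return total

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    start = time.perf_counter()
    nbytes = read_tree(root)
    elapsed = time.perf_counter() - start
    mib = nbytes / (1024 * 1024)
    print("Read %.0f MiB in %.1f s (%.0f MiB/s)" % (mib, elapsed, mib / max(elapsed, 1e-9)))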
But to argue that making storage 5 times as fast is irrelevant, and to claim that "the speed of PCIe is irrelevant", is like someone who only browses the web saying "GPUs are irrelevant, because I only browse the web, and my bottleneck is the connection to Twitter's server".
Whether the performance boost is irrelevant depends on what you're going to do with it. There are enterprise uses where this is a big deal. But if you're not doing something insanely storage intensive, it's not.
It's kind of like saying that if you're never going to do 3D graphics at all, getting a GeForce GTX Titan X over a GeForce GTX 750 isn't likely to benefit you. That doesn't mean the Titan X is junk; it's the best consumer video card on the market today. But it only benefits people who push a GPU hard.
I can tell a difference between the PCIe SSD in my MBP and the SATA SSDs in my other computers. It's not like going from an HDD to an SSD, but it's still enough of a difference to be noticeable. And that isn't even NVMe. Real-world benchmarks are showing improvements over SATA - not by orders of magnitude like the jump from HDD to SSD, but still an improvement.
How do you know that the SSD is the difference? Apple doesn't exactly make it easy to do clean comparisons of individual hardware components.
When I upgraded from an OCZ Agility 1 to a Seagate 600, I didn't notice any performance change. I kept both SSDs in the system for a while, too, to make sure that I had everything working on the new one before wiping the old. And if there are performance gains to be had from faster storage, going from an old Indilinx Barefoot controller to a good modern SSD controller would be more likely to show visible gains than going from the latter to a PCI Express SSD.
I'm instant-loading everything in GTA 5 with an old AMD Phenom II and DDR2 RAM, all because of an 850 EVO.
As long as consoles use HDDs, I'm pretty sure I'm safe on load times. Paying $1,000 for an Intel-brand SSD? Not unless video game developers suddenly become good again and actually care about pushing the boundaries of current-gen hardware.
What does anyone expect from someone who openly bashes Linux in his sig?
99% won't pay 400 dollars for a 400 GB SSD.
SSDs at the moment are fast enough, and for gaming on a high-end PC they're fast enough, especially at that price/performance. I almost never transfer huge files on the same SSD, so for that I don't need a faster SSD. There's bound to be a bottleneck somewhere else with an SSD this fast, I'm sure of that. Unless it makes a huge difference in games, which I doubt, and unless the price drops, only a very small group will be willing to pay such ridiculous prices.
I've got a high-end PC, but I won't pay this price for an SSD; my two SSDs for gaming are fast enough, with almost zero load times in games.
SSD M.2 failed; I'm almost positive this will fail also unless the price drops. I don't see any future with this new tech.
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB: Asus V Deluxe Z77
CPU: Intel Core i7-3770K
GPU: AMD Fury X (waiting for big Vega 10 or 11, HBM2? (bit unclear now))
Memory: Corsair Plat. DDR3 1866 MHz 16 GB
PSU: Corsair AX1200i
OS: Windows 10 64-bit
Originally posted by Classicstar: "99% won't pay 400 dollars for a 400 GB SSD. [...]"
I agree about M.2 - it's really only meant for Laptops/Tablets and other small form factors. It probably won't see much longevity.
And NVMe is expensive right now - I agree, it's not worth it to a lot of people. But if it delivers a noticeable performance benefit, it will be worth it to some people (if nothing else the MAX MAX crowd), and then the costs will come down as production ramps up; as the costs come down, that price/performance point slides down and it becomes worth it to more people - and that's how a technology builds inertia. Right now, a lot of that cost is in the speed premium, a good bit is due to lack of competition, and part of it is because there isn't a massive industrial base to produce them compared to SATA (the controller chips, the interface components, etc.). Competition can lower the price dramatically, and then economy of scale can edge it down even further.
The Apple controller is a proprietary (imagine that) 4-lane PCIe controller.
I have several Apple machines at home and at work. Intel CPUs haven't evolved that much speed-wise, and Apple is actually de-emphasizing the GPU (most models just use Iris now rather than a discrete card). It's not hard to compare a Sandy or Ivy laptop with a SATA SSD against a Haswell one with a PCIe SSD.
I don't know about bottlenecking on the PCIe bus just yet. An SSD, even on NVMe, is going to be significantly slower than feeding/reading GDDR5 - and there are setups now with enough PCIe bandwidth to feed 3 and 4 GPUs.
Do tri/quad SLI setups bottleneck on the PCIe bus yet, or is their bottleneck still elsewhere?
Nope. The MBP uses an M.2 Samsung 851, with a proprietary pin-out so you can't replace it yourself. And since it's M.2, that means it uses the Intel IRST SATA Express controller. Not that that's a bad thing, just the same thing everybody else is doing.
As for bottlenecks, they're not really part of the equation. SATA Express uses PCIe 2.0 lanes and most GPUs are using PCIe 3.0 lanes; as long as you have enough lanes, you're fine.
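For reference, the raw per-interface arithmetic works out roughly like this (a back-of-the-envelope Python sketch; these are protocol line rates minus encoding overhead, not what any real drive or GPU sustains):

# Usable bandwidth per interface: line rate x encoding efficiency x lanes.
links = {
    # name: (line rate in Gb/s per lane, payload/total encoding ratio, lanes)
    "SATA III":    (6.0, 8 / 10, 1),
    "USB 3.0":     (5.0, 8 / 10, 1),
    "PCIe 2.0 x4": (5.0, 8 / 10, 4),
    "PCIe 3.0 x4": (8.0, 128 / 130, 4),
}
for name, (gbps, eff, lanes) in links.items():
    mb_s = gbps * 1e9 / 8 * eff * lanes / 1e6  # bits -> bytes, minus overhead
    print("%-12s ~%5.0f MB/s" % (name, mb_s))
# SATA III ~600, USB 3.0 ~500, PCIe 2.0 x4 ~2000, PCIe 3.0 x4 ~3938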
Hmm.. I don't know about that.
That would be the teardown of the MacBook, not the MacBook Pro. Check the teardown of the MBP; there is a GIGANTIC Samsung logo sitting on the SSD.
But there really is no such thing as a PCIe SSD; it's a marketing term to trick people. What they actually are is PCIe-based drive controllers with the drive physically attached to the card. So, for instance, the PCIe SSD in a Mac Pro is actually just a PCIe SATA controller with two SSDs attached and set to RAID 0.
I would suspect the SSD in the MB 2015 is a proprietary M.2 using SATA Express anyway. Going with a PCIe SATA controller doesn't make sense when you're trying to make something super light and small (the Intel SATA controller is built into the mainboard chipset, and if you use it, you don't need to add an additional SATA controller). Since the MB is somewhat... low power, having something as fast as an 851 wouldn't matter a whole bunch.
I'm also pretty sure that's why NVMe is such a big deal - it's not a SATA controller strapped to a PCI card, it's a native interface.
~And~
MBP Teardown:
https://www.ifixit.com/Teardown/MacBook+Pro+13-Inch+Retina+Display+Early+2015+Teardown/38300
Samsung S4LN058A01 PCIe 3.0 x4 AHCI flash controller
It is an M.2 form factor, but it's a good deal faster than a generic SATA drive, so I'd say it's doing something:
http://www.anandtech.com/show/8979/samsung-sm951-512-gb-review
~And~
All SSDs are essentially RAID 0 devices, with all the NAND chips working in parallel... so nothing new there. That's why larger-capacity drives often perform better than lower-capacity drives - they have more chips in their "array".
The PCIe version of the NVMe cards is an NVMe controller with the SSDs strapped to it.
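To illustrate the "more chips, more parallelism" point, here's a toy calculation in Python. The per-die throughput, channel count, and interleave factor are assumptions made up for the example, not figures from any datasheet:

# Why bigger SSDs are often faster: more NAND dies to stripe writes across,
# until the controller's channels (or the host interface) become the limit.
DIE_WRITE_MB_S = 40      # assumed per-die program throughput
CHANNELS = 8             # assumed controller channel count
INTERLEAVE = 4           # assumed dies the controller can keep busy per channel

for capacity_gb, dies in [(128, 8), (256, 16), (512, 32)]:
    busy_dies = min(dies, CHANNELS * INTERLEAVE)
    print("%3d GB (%2d dies): ~%d MB/s sequential write before other limits"
          % (capacity_gb, dies, busy_dies * DIE_WRITE_MB_S))
# -> 320, 640, and 1280 MB/s with these made-up numbers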
OK, let's first get the terminology that those sites are messing with out of the way. M.2 is not PCIe; it's called SATA Express. The point of SATA Express is using PCIe channels to go from the drive to the system controller. Doing this requires a mainboard with support for it, as well as a completely separate socket (since even though it uses the same "wiring", they are totally incompatible with each other).
An M.2 SSD achieves the speeds it gets by doing a 4x RAID 0. To do so with cabling would take a chunk of room, so using the PCIe channels (which are high bandwidth / low latency) ends up working significantly better than any other option, with the added bonus of not needing an additional controller attached to your drive (as would be required for standard PCIe-style SSD cards).
NVMe actually uses an identical system (the little "e" actually stands for Express, like SATA Express), where it uses PCIe channels (4) to go from the drive to the controller (the NVMe controller). So that isn't really the big deal. The big deal is that NVMe interacts with SSDs in an SSD fashion (right now we deal with them like they were still spindles).
Definitely dig past the marketing silliness and you get to the NVMe stuff. It massively increases command queues. An NVMe drive will have 64,000 command queues, which literally means you can perform 64,000 input/output operations simultaneously. Consider that SATA has 1 command queue and SATA Express x4 has 4. Like you said, SSDs are kinda RAID 0-ish, but it's more that they are capable of being accessed and interacted with in a massively parallel fashion, rather than being RAID 0.
I would suspect that we won't see any real performance gains from NVMe until programs are built to really utilize the technology. Everything right now deals with SSDs like they were super spindles, so until programs are constructed with the concept of being able to do massive simultaneous I/O operations, the drives will still be treated like super spindles.
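A toy way to see the queue-depth point in practice: issue the same random reads one at a time versus many in flight. This Python sketch won't get anywhere near 64K outstanding commands (and the interpreter adds its own overhead), but on an SSD the parallel runs should finish noticeably faster. The file path is just a placeholder, it needs a Unix-like OS for os.pread, and the numbers only mean anything if the file isn't already sitting in the OS cache:

import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "testfile.bin"    # placeholder: any large file on the drive under test
BLOCK = 4096             # 4 KiB random reads
N_READS = 2000

def run(depth):
    """Time N_READS random reads issued with the given number in flight."""
    fd = os.open(PATH, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK
                   for _ in range(N_READS)]
        start = time.perf_counter()
        if depth == 1:
            for off in offsets:
                os.pread(fd, BLOCK, off)
        else:
            with ThreadPoolExecutor(max_workers=depth) as pool:
                list(pool.map(lambda off: os.pread(fd, BLOCK, off), offsets))
        return time.perf_counter() - start
    finally:
        os.close(fd)

if __name__ == "__main__":
    for depth in (1, 4, 32):
        print("queue depth %2d: %.2f s for %d reads" % (depth, run(depth), N_READS))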
http://anandtech.com/show/9269/ocz-introduces-zdrive-6000-pcie-ssd-series-with-nvme-support
And then there were two. If all of the SSD controller vendors were pushing NVMe in whatever form factor, maybe it wouldn't cost a dime over AHCI. It takes time to get there, though.
Wow. 9W idle, 25W under load. That's a lot for a drive. That's a lot for even a mechanical drive.
Intel's NVMe drive is 4 W idle / "up to 25 W" under load.
Most SATA drives are well under 1 W idle, and well under 5 W peak. But I suppose you also don't have to account for the power draw of the SATA controller - which I have no idea what that number is, but it's certainly > 0.
I don't understand why people nitpick about how much power a drive uses. Who cares if it's 10 W or 20 W? It's a drop in the bucket.
I mean, seriously, if you're that concerned about power usage, then you should chastise yourself if you ever leave a room and don't turn off all the lights.
Got a 40 W light bulb that you leave on for a couple of hours because you forgot to turn it off? Etc.
Honestly, people should be worried about it where it matters: CPUs and GPUs. The rest of the components in a computer are, relatively speaking, irrelevant in the overall power usage.
Hard drives especially. In an enterprise or database server environment where the drive is being pegged 24 hours a day, and you have several hundred of them, then yes, a doubling of power usage is huge.
In a consumer situation, where that drive might be at full load a total of 20-30 minutes in an entire day, you're talking about peanuts here.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."