Intel about to launch NVMe SSD.


Comments

  • Hrimnir (Member Rare, Posts: 2,415)
    Originally posted by Torval
    I like the industry focus on the reduction of power consumption. I would like to see less power-hungry desktops along with the laptops and mobile devices. I use LED lightbulbs in the house too. :p

    I don't disagree with you, I just think people need to pick their battles better. Worrying about a 7W difference in a hard drive is ridiculous.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Ridelynn (Member Epic, Posts: 7,383)


    Originally posted by Hrimnir
    Originally posted by Torval
    I like the industry focus on the reduction of power consumption. I would like to see less power-hungry desktops along with the laptops and mobile devices. I use LED lightbulbs in the house too. :p
    I don't disagree with you, I just think people need to pick their battles better. Worrying about a 7W difference in a hard drive is ridiculous.

    It's an increase of 5x over the previous form factor; that's significant.

    It's actually more power than the typical CPU requires in a netbook and some slim laptops (those using U/Y series CPUs).

    So, if we should be concerned with CPU power draw, and now the SSD requires more power than the CPU, shouldn't we be concerned with that?

  • Hrimnir (Member Rare, Posts: 2,415)
    Originally posted by Ridelynn

    Originally posted by Hrimnir

    Originally posted by Torval
    I like the industry focus on the reduction of power consumption. I would like to see less power-hungry desktops along with the laptops and mobile devices. I use LED lightbulbs in the house too. :p
    I don't disagree with you, I just think people need to pick their battles better. Worrying about a 7W difference in a hard drive is ridiculous.

    It's an increase of 5x over the previous form factor; that's significant.

    It's actually more power than the typical CPU requires in a netbook and some slim laptops (those using U/Y series CPUs).

    So, if we should be concerned with CPU power draw, and now the SSD requires more power than the CPU, shouldn't we be concerned with that?

    That's like saying you should be concerned about a paper cut on your finger when you have a sucking chest wound. It's all about priorities.

    Yes, in a laptop environment that's hugely important; a 2 or 3W difference is massive. So, I give you that.

    My comments were more in regard to a non-enterprise, personal-use desktop environment. Having an HDD take 10W more power than a previous HDD in a total system using 400-500W+ is peanuts. Worrying about a 10W difference in an HDD when you have a video card using 250W is just plain dumb, IMO.

    Do I still think companies should focus on power efficiency? Absolutely. But PC users are adults, and they can make decisions based on what's important to their needs. If I'm buying an SSD for my gaming rig, I literally don't care what the power usage of the drive is. It's not even a blip on the radar. However, the difference between an AMD card using 270W and an Nvidia card using 170W is a much bigger deal.

    The other thing I wanted to clarify is that we have to look at usage patterns. A hard drive is only going to be operating at full load a small fraction of the time, as it will complete a task and go back to idle or close to idle. Even in a game, if you examine the access pattern, it averages out to a pretty low load.

    In an enterprise environment, where that hard drive is getting pegged or mostly pegged for extended periods, and they may have a NAS or something with 50 of them operating constantly, that adds up fast and increases costs fast. So it is important, just not in a PC gaming rig scenario, IMO.

     

    Edit: A few years back, Intel and AMD were focusing on low-power CPUs for servers. Then someone, I can't remember who, did some tests and found that the high-power CPUs were completing the tasks faster than the slower, low-power CPUs, and ultimately the supposedly high-power CPU was using a significantly lower amount of power overall than the low-power parts. If you have a 200W part that's completing the task in 1/5th of the time of the 40W part, you're still using less power.

    Just a little side story to support my point that the picture isn't always as clear as comparing power ratings on a device. You have to look at the usage patterns.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Ridelynn (Member Epic, Posts: 7,383)

    You're talking about one usage pattern specifically: desktop gaming PCs. Sure, 10W on a 500W budget isn't huge.

    I'm talking about SSDs in general. SSDs get used in a lot more than desktop gaming PCs, and in a lot of cases a 5-10x difference in power consumption is meaningful.

    Even talking about SSD usage: the difference in idle wattage, 0.3-0.5W up to 5W, is a >10x increase in IDLE power draw, and only a 5x increase in in-use draw. So your usage comparison actually works out even worse in this case; the NVMe drive is much less efficient than a typical SATA drive under typical workloads (where it sits idle most of the time and only bursts data occasionally).
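
    A rough sketch of how that plays out as an average (the 0.3-0.5W and 5W idle figures are from above; the 5W/25W in-use figures and the 90/10 idle-to-active split are assumptions for illustration, not measurements):

        # Back-of-the-envelope average draw for a mostly-idle drive.
        idle_fraction, active_fraction = 0.90, 0.10   # assumed duty cycle

        sata_idle, sata_active = 0.4, 5.0     # watts, rough SATA SSD figures
        nvme_idle, nvme_active = 5.0, 25.0    # watts, figures quoted in this thread

        sata_avg = idle_fraction * sata_idle + active_fraction * sata_active
        nvme_avg = idle_fraction * nvme_idle + active_fraction * nvme_active

        print(f"SATA average: {sata_avg:.2f} W")      # ~0.86 W
        print(f"NVMe average: {nvme_avg:.2f} W")      # ~7.0 W
        print(f"Ratio: {nvme_avg / sata_avg:.1f}x")   # ~8x, dominated by the idle draw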

    Now, the one thing I don't factor in is the power draw of the SATA controller; it could very well be that a typical SATA controller draws 5-10W all by itself, I don't really know. Given that an NVMe drive does not require a SATA controller, and that a SATA controller is baked into nearly every motherboard chipset out there, I don't know how to factor all of that in. Although... a Z97 chipset has 6 SATA ports on it (along with everything else it supports: ethernet, video out, USB, etc.) in a TDP of 4.1W, so that doesn't bode well for the NVMe case. And we are seeing first-generation NVMe controllers; they have room to get much more efficient.

    And to nitpick: in the example of a 40W CPU being 1/5 the speed of a 200W one, those two CPUs use the same amount of energy per "compute unit". It's not really relevant to this discussion, and I understand your point, but you should use numbers that add up, and it doesn't follow in this case because of the idle power draw.
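
    To spell out that nitpick (using the hypothetical 40W/200W numbers from the earlier post):

        # Energy = power x time. If the 200W part finishes in 1/5 the time of
        # the 40W part, both use the same energy for the task.
        task_hours_40w = 1.0                        # hypothetical task time for the 40W part
        energy_40w = 40 * task_hours_40w            # 40 Wh
        energy_200w = 200 * (task_hours_40w / 5)    # 40 Wh

        print(energy_40w, energy_200w)  # 40.0 40.0 -> identical energy per task
        # The fast part only pulls ahead once you count the time after it finishes,
        # when it sits at a low idle draw while the slow part is still working.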

    If we want to see NVMe become a standard (or become a standard faster), it has to have logical use cases outside of just a high-performance desktop PC. Otherwise it will remain an expensive niche product, just like all the aftermarket PCIe drives that have come before it.

  • Hrimnir (Member Rare, Posts: 2,415)
    Originally posted by Ridelynn

    You're talking about one usage pattern specifically: desktop gaming PCs. Sure, 10W on a 500W budget isn't huge.

    I'm talking about SSDs in general. SSDs get used in a lot more than desktop gaming PCs, and in a lot of cases a 5-10x difference in power consumption is meaningful.

    Even talking about SSD usage: the difference in idle wattage, 0.3-0.5W up to 5W, is a >10x increase in IDLE power draw, and only a 5x increase in in-use draw. So your usage comparison actually works out even worse in this case; the NVMe drive is much less efficient than a typical SATA drive under typical workloads (where it sits idle most of the time and only bursts data occasionally).

    Now, the one thing I don't factor in is the power draw of the SATA controller; it could very well be that a typical SATA controller draws 5-10W all by itself, I don't really know. Given that an NVMe drive does not require a SATA controller, and that a SATA controller is baked into nearly every motherboard chipset out there, I don't know how to factor all of that in. Although... a Z97 chipset has 6 SATA ports on it (along with everything else it supports: ethernet, video out, USB, etc.) in a TDP of 4.1W, so that doesn't bode well for the NVMe case. And we are seeing first-generation NVMe controllers; they have room to get much more efficient.

    And to nitpick: in the example of a 40W CPU being 1/5 the speed of a 200W one, those two CPUs use the same amount of energy per "compute unit". It's not really relevant to this discussion, and I understand your point, but you should use numbers that add up, and it doesn't follow in this case because of the idle power draw.

    If we want to see NVMe become a standard (or become a standard faster), it has to have logical use cases outside of just a high-performance desktop PC. Otherwise it will remain an expensive niche product, just like all the aftermarket PCIe drives that have come before it.

    I get what you're saying, I do. But I think you're blowing it out of proportion.

    The best example I could give is a hypothetical doctor saying you have 5x the chance of getting cancer if you eat Ritz crackers instead of saltines. If that chance is 0.000001% with the saltines and 0.000005% with the Ritz, it still amounts to a stupidly small chance. Just spinning it as "well, it's FIVE TIMES THE CHANCE!!!!1!!" doesn't paint the whole picture.

    Again, IMO, in a desktop scenario, really any consumer-level desktop scenario, the power draw of a hard drive should not even be on your radar. In a laptop environment, absolutely, it should be a concern.

    As to the server CPU thing, it's not directly relevant to the discussion, but it is relevant to the point I was trying to make: you can't just compare one specific statistic about the part.

    To dive more into that: the idle states of most CPUs are pretty close to each other. Intel and AMD have gotten very good at making even stupidly fast processors idle at roughly the same (very low) power. So, for the sake of numbers, let's say an i7 idles at 12W and a Celeron idles at 10W. The Celeron at full load might be using, say, 35W, whereas the i7 may be using, say, 95W. At first glance it looks like the i7 is using roughly 3x as much power. What comes into play, though, is how long it is at full load. If the 35W part takes 30 minutes to complete the task, but the i7 does it in 8 minutes, then you only ended up using about 72% as much energy with the i7 as you did with the Celeron.
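
    The arithmetic behind that 72% figure (all of the wattages and times are the made-up ones above, and this only counts energy spent on the task itself):

        celeron_load_w, celeron_minutes = 35, 30
        i7_load_w, i7_minutes = 95, 8

        celeron_wh = celeron_load_w * celeron_minutes / 60   # 17.5 Wh
        i7_wh = i7_load_w * i7_minutes / 60                   # ~12.7 Wh

        print(f"i7 uses {i7_wh / celeron_wh:.0%} of the Celeron's energy")  # ~72%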

    Again, I stress that the point here is to look at usage patterns and decide from there. If we honestly look at it, most of our computers sit idle for ~18-20 hours a day (provided we're not working from home or something). In those 4-6 hours of use, how much of that time is the hard drive being pegged? 10%? 20%? So if you have one hard drive that uses 10W more than another, but it's only using that extra 10W for a total of 15-20 minutes of combined full-load time, what have you really saved at the end of the day? Less than having kept a small lightbulb in a lamp on for 20 minutes, basically.
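
    And the lightbulb comparison, with the same rough numbers (the 20 minutes of extra 10W draw per day is the assumption above):

        extra_watts = 10
        full_load_hours_per_day = 20 / 60

        extra_wh_per_day = extra_watts * full_load_hours_per_day
        print(f"{extra_wh_per_day:.1f} Wh/day")  # ~3.3 Wh, about a small LED bulb left on for 20 minutes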

    As far as NVMe being adopted, I think it will likely be more the desktop/server market (where the additional performance matters much more). In a server environment especially, the additional performance FAR outweighs the power usage, because it allows the drive to complete the task much faster and return to an idle state.

    Just to also respond: you're correct about the SATA controllers being a factor. It's hard to say; I would guess 2-3W isn't unreasonable, but who knows. I tried reading some PDFs on the Marvell controllers, and all they say is "industry-leading low power consumption" without quoting any actual numbers. So, who knows.

    The point is, even a 10x increase of nothing (or almost nothing) is still nothing ;-).

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Ridelynn (Member Epic, Posts: 7,383)

    Just some math

    With a SATA SSD, a laptop that has a 75Wh battery may get around 5 hours of typical (non-gaming) use. With an NVMe drive, you just dropped that to about 3.5 hours; you lose 1.5 hours of use just because of an insignificant increase in power consumption.
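
    The battery math behind that (the ~6W of extra average draw for the NVMe drive is an assumption picked to match the runtimes above, not a measured number):

        battery_wh = 75
        sata_runtime_h = 5.0

        avg_draw_sata = battery_wh / sata_runtime_h        # 15 W average system draw
        extra_nvme_w = 6                                   # assumed extra draw with NVMe
        nvme_runtime_h = battery_wh / (avg_draw_sata + extra_nvme_w)

        print(f"{nvme_runtime_h:.1f} h")                   # ~3.6 h, roughly 1.5 h lost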

    A blade server hosting 24 SSDs, a typical rack config, goes from about 25W idle and 120W under load to 120W idle and 600W under load. For one rack. Now you're not only looking at the straight power use, you're also looking at significant HVAC costs. Data centers are already up in the megawatts of power consumption.
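
    And the rack numbers are just the per-drive wattages implied above times 24:

        drives = 24
        sata_idle_w, sata_load_w = 1.0, 5.0    # W per drive, rough SATA SSD figures
        nvme_idle_w, nvme_load_w = 5.0, 25.0   # W per drive, figures quoted in this thread

        print(drives * sata_idle_w, drives * sata_load_w)   # 24 W idle, 120 W under load
        print(drives * nvme_idle_w, drives * nvme_load_w)   # 120 W idle, 600 W under load
        # Cooling load scales with this too, so HVAC costs rise along with the power bill.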

    And our gaming rig: 5W idle may not be a lot, but 25W under load is enough to start worrying about needing to actively cool it. One lone drive may not be significant, but many gaming rigs have multiple drives, and they add up quickly on a power budget. Try using an NVMe drive in a console or in an SFF PC and you'd have issues quickly.

    This is first generation; the power increase is very alarming, and not at all insignificant, even in a desktop case. I'm very much for the additional benefits NVMe brings, but if this is the cost (not money cost, but tradeoff cost), then I'm not so enthusiastic about it.

    Anyway, you do have a point: it's not 100+W, it's 25W. Someone willing to throw $700 at one of these can probably afford 25W. I'm just hoping it doesn't stay in that niche, and the cost isn't the only thing that could hold it there.

  • Hrimnir (Member Rare, Posts: 2,415)
    Originally posted by Ridelynn

    Just some math

    With a SATA SSD, a laptop that has a 75Wh battery may get around 5 hours of typical (non-gaming) use. With an NVMe drive, you just dropped that to about 3.5 hours; you lose 1.5 hours of use just because of an insignificant increase in power consumption.

    A blade server hosting 24 SSDs, a typical rack config, goes from about 25W idle and 120W under load to 120W idle and 600W under load. For one rack. Now you're not only looking at the straight power use, you're also looking at significant HVAC costs. Data centers are already up in the megawatts of power consumption.

    And our gaming rig: 5W idle may not be a lot, but 25W under load is enough to start worrying about needing to actively cool it. One lone drive may not be significant, but many gaming rigs have multiple drives, and they add up quickly on a power budget. Try using an NVMe drive in a console or in an SFF PC and you'd have issues quickly.

    This is first generation; the power increase is very alarming, and not at all insignificant, even in a desktop case. I'm very much for the additional benefits NVMe brings, but if this is the cost (not money cost, but tradeoff cost), then I'm not so enthusiastic about it.

    Anyway, you do have a point: it's not 100+W, it's 25W. Someone willing to throw $700 at one of these can probably afford 25W. I'm just hoping it doesn't stay in that niche, and the cost isn't the only thing that could hold it there.

     

    I don't disagree with any of this. 100% agree as far as laptops, and mostly with server environments (I still wonder how much the drive completing the task faster and returning to an idle state could affect overall power consumption, but that's pure speculation at this point).

    You did bring up a good point as far as cooling the drive in a desktop environment; that could present some challenges and is another factor to consider, for sure.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche
