If this comes across as confused and nonsensical, I apologize in advance.
I recall reading that occupying multiple PCIe slots can actually limit how much bandwidth the individual slots get. From my understanding, some motherboards reduce the PCIe speed of the fastest slot if other slots are occupied. Is that correct?
If M.2 drives use PCIe lanes, does this mean they will reduce the performance of my graphics card's PCIe slot? If I have my GTX 1080 in one slot and buy two M.2 drives for the other slots, do I have to worry about impacting my graphics card's performance?
This does not make sense to me at all - am I mixing up different things in my mind?
Comments
I am not 100% sure on this, but I know that if I put two 1080 Tis in my current system it would be a waste of $900, because they would drop to x8/x8, meaning each card gets 50% of the bandwidth it could have. Compare that to buying an i7 with 28 PCIe lanes or, better, an i9 with 44 PCIe lanes: with 44 lanes you could run two 1080 Tis in x16/x16 SLI, throw in an SSD on top, and it would all be fine.
Personally I wouldn't buy the 28-lane part; I would go with the 44-lane one for dual SLI and a hard drive, unless I didn't care about running SLI.
https://www.newegg.com/Product/Product.aspx?Item=N82E16819117795&ignorebbr=1
Look on the CPU's spec page where it says: Max Number of PCI Express Lanes
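To make that lane math concrete, here's a rough back-of-the-envelope sketch in Python. The lane counts are illustrative examples only, not taken from any particular board manual:

    # Rough PCIe lane budgeting -- illustrative numbers, check your actual CPU and board.
    cpu_options = {"mainstream 16-lane CPU": 16,
                   "28-lane HEDT i7": 28,
                   "44-lane HEDT i9": 44}

    def fits_16x16_sli_plus_nvme(total_lanes, nvme_drives=1):
        gpu_lanes = 16 * 2               # two cards, each at a full x16
        nvme_lanes = 4 * nvme_drives     # each NVMe m.2 drive wants x4
        return total_lanes >= gpu_lanes + nvme_lanes

    for name, lanes in cpu_options.items():
        print(name, fits_16x16_sli_plus_nvme(lanes))
    # Only the 44-lane part has room for x16/x16 SLI plus an x4 SSD (36 lanes needed).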
Personally I would just go with a regular SSD and connect it to one of the internal SATA III ports (the fastest SATA ports on the board), then connect other regular hard drives and move your personal folders (Downloads, Users, etc.) over to those drives as you wish; this way you don't waste writes on the SSD.
The graphics card uses CPU lanes pretty much exclusively, while an SSD will likely use lanes provided by the chipset.
The more mainstream consumer platforms are more likely to have some contention, as 16 lanes for a video card and 4 each for two m.2 slots is 24 PCI Express lanes right there. It's also possible that if you connect two m.2 drives, they'll be competing with each other for bandwidth rather than each having its own full x4 connection; my guess is that that's more common than splitting lanes off of the x16 connection for the video card. That's just a guess, however, and could easily be wrong.
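If the two m.2 slots do end up sharing a link rather than each getting a dedicated x4, the effect when both drives are busy looks roughly like this (illustrative numbers only, not any specific board or drive):

    # Two m.2 drives: dedicated x4 each vs. one shared x4-worth of bandwidth.
    x4_gb_s = 3.9                         # ~PCIe 3.0 x4 per direction
    drive_peak_gb_s = 3.5                 # typical fast NVMe drive on its own
    print(min(drive_peak_gb_s, x4_gb_s))        # dedicated link: ~3.5 GB/s each
    print(min(drive_peak_gb_s, x4_gb_s / 2))    # shared link, both busy: ~1.95 GB/s each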
Is it theoretically possible that an m.2 drive on direct CPU PCI Express lanes could impact GPU performance? Sure, you can contrive some situation where it does. But I don't think it would be terribly commonplace, if it's found anywhere in the real world at all. Streaming high-bit-rate video to the m.2 controller at a gigabyte per second or more (you might hit that with an uncompressed 4K stream) while SLI/CrossFire gaming? Running a high-volume production database while crunching GPU AI calculations?
It is possible to build a PCI Express x16 connection such that two slots share its bandwidth and either can use just about all of the x16 bandwidth, even if the other slot has something in it that merely isn't using very much bandwidth at the moment. That's far more expensive to build, however, so it's almost never done.
For what it's worth, PCI Express 3.0 x16 gives you 16 GB/s of theoretical bandwidth, but real-world measured bandwidth generally tops out at around 10 GB/s, even in simple synthetic cases where you copy a bunch of data and do nothing else. You have to jump through some hoops to even get that 10 GB/s, so you could run into meaningful problems from PCI Express data transfers even while using far less bandwidth than that.
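For reference, the ~16 GB/s figure falls straight out of the link math for PCIe 3.0; the ~10 GB/s real-world ceiling is just the measured behaviour described above:

    # Theoretical PCIe 3.0 x16 bandwidth per direction.
    lanes = 16
    transfer_rate_gt_s = 8            # PCIe 3.0 signals at 8 GT/s per lane
    encoding = 128 / 130              # 128b/130b line encoding overhead
    gb_per_s = lanes * transfer_rate_gt_s * encoding / 8   # 8 bits per byte
    print(round(gb_per_s, 2))         # ~15.75 GB/s theoretical; measured copies land nearer 10 GB/s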
Some programs can overwhelm a PCI Express 3.0 x16 connection such that the GPU is mostly waiting for data to come in and out. Games tend not to need all that much bandwidth, though, as stuff gets buffered on the GPU. If you run out of video memory so that the game has to constantly shuffle things in and out as they get used, you can get a huge PCI Express bottleneck in a hurry.
The real fix to that is more video memory or turning down settings, not more PCI Express bandwidth. This is much like saying that if you're running out of system memory and paging to disk constantly, the real fix is getting more system memory, not getting a faster SSD to make paging to disk less painful.
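A purely made-up example of why more bandwidth isn't the fix once you've spilled out of video memory (all numbers hypothetical):

    # Hypothetical: a game re-streams 1 GB of assets per frame because VRAM is full.
    spill_per_frame_gb = 1.0              # made-up overflow working set
    pcie_real_gb_s = 10.0                 # realistic PCIe 3.0 x16 throughput from above
    transfer_ms = spill_per_frame_gb / pcie_real_gb_s * 1000
    print(transfer_ms)                    # 100 ms spent just moving data -> ~10 fps ceiling
    # Doubling the link only halves that; enough VRAM (or lower settings) removes it entirely.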
If you wanted to get full bandwidth to three m.2 SSDs all at once, that would still be possible if you split the processor's x16 connection into x8/x4/x4 and used the two x4s for m.2 SSDs (the third drive could hang off the chipset and have that x4 uplink to itself). But needing to do that would be pretty rare for consumer use, so I'd expect motherboards rarely, if ever, to be set up that way.
The chipset has 20 PCI Express lanes coming off of it that have dedicated access to the chipset, but the entire chipset only has an x4 connection to the CPU. (The connection from the chipset to the CPU isn't truly PCI Express, but that doesn't matter for this comparison.) Thus, everything coming off of the chipset has to share that x4 connection to the CPU. If you want one SSD to use the full x4 bandwidth and nothing else is using any bandwidth at all, it can. But if you want two SSDs to both use the full x4 bandwidth at once, they can't because they share the x4 bandwidth to the CPU. They can get their data to the chipset just fine, but the chipset can't get it all to the CPU fast enough.
If the SSDs were instead plugged into the x16 connection from the CPU split as x8/x4/x4, with all three of those used for SSDs, then they could each have their own x4 connection just fine and all use their full bandwidth simultaneously. But going through the chipset, they could all use perhaps 1 GB/s each at once, or any one could use its full bandwidth while the other two are idle, but they can't all use full bandwidth at once because they'd overwhelm the single x4 connection from the chipset to the CPU.
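Here's that chipset bottleneck as a rough sketch. It assumes the chipset uplink is worth roughly a PCIe 3.0 x4 link, and the drive speed is a typical fast NVMe figure rather than any specific model:

    # Three NVMe drives behind a chipset whose uplink to the CPU is ~PCIe 3.0 x4.
    uplink_gb_s = 3.9                     # ~x4 worth of bandwidth, shared by everything on the chipset
    drive_peak_gb_s = 3.5                 # what one fast NVMe drive can do on a dedicated x4
    drives = 3
    print(min(drive_peak_gb_s, uplink_gb_s))            # one drive alone: ~3.5 GB/s
    print(min(drive_peak_gb_s, uplink_gb_s / drives))   # all three at once: ~1.3 GB/s each
    # On CPU lanes split x8/x4/x4, each drive keeps its own x4 and the uplink isn't in the way.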
Thus, if Jean-Luc were to decide to get three of his SSDs, put them in m.2 slots, and push them all at once, they would probably top out at around 3 GB/s of reads total. For consumer use, that's plenty. For some enterprise uses, it's not. A new AMD Epyc CPU would let you build a single-socket server with perhaps 30 or so m.2 SSDs all using their full bandwidth simultaneously, since it has enough lanes for that many dedicated PCI Express x4 connections to the CPU, at least as long as system memory bandwidth (where the data ultimately comes from or goes to) isn't overwhelmed.
If you got a dedicated RAID card that can handle m.2 slots (I assume that such a thing exists, though I don't know of any) and plugged it into the x16 slot instead of a video card, then you'd be able to push all of the SSDs at once. Performance in some games would suffer, however.
But if you try to put three m.2 drives in RAID 0 on that motherboard, you're likely to be disappointed with the results. The HEDT and server platforms exist for a reason.