Old September 18, 2015, 04:35 PM
SKYMTL
HardwareCanuck Review Editor
 
Join Date: Feb 2007
Location: Montreal
Posts: 13,603

Quote:
Originally Posted by Vittra
Something has been nagging me about these ATX boards with dual PCI-E 3.0 x4 M.2 connectors.

As DMI 3.0 only provides about 32 Gb/s (~4 GB/s) of bandwidth between the PCH and CPU, and each of two powerful U.2/M.2 drives can theoretically consume up to ~4 GB/s on its own (PCI-E 3.0 x4), how is bandwidth priority determined? Will it throttle the drives to ensure ethernet/USB/etc. connectivity is not adversely affected?

I suppose the only realistic usage scenario where this could happen is when RAID 0 is thrown into the mix. Not something I personally care to do, but I'm still curious what the outcome would be.
I think you are approaching this in an overly simplistic manner. While the DMI interconnect carries communication between the CPU and PCH, the CPU doesn't need to process every I/O request from the attached devices. A LOT of the processing is done locally on the SSD, GPU or other device. That's why every add-in card that requires significant bandwidth has onboard cache.
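For reference, here's a rough back-of-the-envelope sketch of the numbers being discussed. It assumes PCIe 3.0's 8 GT/s per lane with 128b/130b line encoding, and that DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link (protocol overhead beyond line encoding is ignored):

```python
# Back-of-the-envelope PCIe / DMI bandwidth check.
# Assumptions: PCIe 3.0 = 8 GT/s per lane with 128b/130b encoding;
# DMI 3.0 is treated as a PCIe 3.0 x4 equivalent link.

def pcie3_bandwidth_gbs(lanes):
    """Approximate usable bandwidth of a PCIe 3.0 link in GB/s."""
    gt_per_s = 8.0            # 8 GT/s per lane
    encoding = 128.0 / 130.0  # 128b/130b line encoding
    return lanes * gt_per_s * encoding / 8.0  # bits -> bytes

per_drive = pcie3_bandwidth_gbs(4)  # one x4 M.2/U.2 drive, ~3.94 GB/s
dmi_link = pcie3_bandwidth_gbs(4)   # DMI 3.0 uplink to the CPU, ~3.94 GB/s

print(f"One x4 drive:     {per_drive:.2f} GB/s")
print(f"DMI 3.0 uplink:   {dmi_link:.2f} GB/s")
print(f"Two drives want   {2 * per_drive:.2f} GB/s over that one uplink")
```

So two x4 drives running flat out could ask for roughly double what the DMI uplink can move, which is why the question only really matters for sustained sequential RAID 0 workloads; random I/O and device-local processing rarely saturate the link.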