
The Intel SSD 750 Series Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,840
Location
Montreal
Intel has a historically stellar track record in the SSD market, with their full stable of consumer, enterprise and datacenter drives continually receiving top grades on our pages. One of the highlights has been the 730-series, which combined (at the time) class-leading performance, great endurance ratings and Intel's iron-clad warranty structure. It also sparked a great deal of discussion when, just before the holiday season, Intel reduced its price by a good 30% across all capacity levels. While many snapped up the drives, others were left wondering if the fire-sale signaled a replacement was just around the corner, and now we have the answer. Say hello to the new Intel 750-series SSDs.

The 750-series may be the spiritual successor to the 730 but it does things a bit differently. Gone are the standard SATA interface, the focus on power consumption and compatibility with notebooks. In their place is a raw, unadulterated enthusiast drive architecture that utilizes an NVMe backbone and has been built from the ground up to provide massive performance without regard for niceties like efficiency or adherence to smaller form factors. We've already seen what NVMe can do when tied at the hip to Intel's current enterprise-centric architectures, so there's potential for many of those benefits to now trickle down from the enterprise market into enthusiasts' hands.

1.png

For the time being Intel will be offering the 750-series in two primary capacities: 400GB and a massive 1.2TB. While both perform quite similarly to one another, price is what really separates the men from the boys in this case, with the 400GB going for $389 while the 1.2TB hits a high water mark of $1,029. In the grand scheme of things neither costs an astronomical amount when their class-leading performance is taken into account, and both actually feature lower dollar-per-GB ratios than OCZ's RevoDrive 350 and G.Skill's Phoenix Blade.

At its heart the 750-series is in many ways a DC P3700 which has been redesigned for use outside the enterprise market. This means it does away with the antiquated Advanced Host Controller Interface (AHCI) standard and incorporates a new consumer-oriented NVMe controller and the Non-Volatile Memory Host Controller Interface (NVMHCI) standard.

As the SATA interface as we know it becomes saturated, companies have been looking at other connectivity standards for system storage. In the eyes of many, those options primarily focused upon SATA Express, its small form factor M.2 standard and of course devices that interfaced with the system directly through a PCIe slot. Among these solutions there is one common denominator: the PCIe bus is being utilized to boost throughput far past what previous generation connectivity standards could achieve. While the triple-port SATA Express connector seen last year has thus far failed to gain much interest as evidenced by the dearth of products implementing it, M.2 and PCIe-based storage devices are out in the wild and gradually nibbling away at SATA 3’s dominance.

intro.jpg

With the 750-series’ introduction, Intel is muddying the waters by drastically varying the way it interfaces with your system from one model to another. Much like its enterprise-centric P3700 sibling, should you choose the add-in card form factor, Intel’s 750-series comes equipped with a standard x4 PCIe 3.0 connector, interfacing directly with a motherboard’s compatible PCI-E slots. Both capacities will be offered in this form.

These 400GB and 1.2TB drives will also come in a potentially more convenient and compact 2.5”, 15mm Z-height design which uses an SFF-8639 plug alongside an SFF-8643 adapter rather than the typical SATA Express host-side connector we’re accustomed to. This causes some unique challenges on the connectivity front but also greatly expands the amount of bandwidth on tap. Regardless of the form factor, the 750-series has some very particular bandwidth needs that will limit platform compatibility to X99-based systems, but we’ll get further into that particular nugget a bit later. Other than those factors, both drives are clones of one another.

Even with just a quick glance at the 750's specifications you can see that this radical switch in underlying architecture results in massive changes - mostly for the good. While these new drives do not support any encryption abilities and write endurance is a fraction of what’s offered on the P3700, the improved read and write performance puts them in a completely different class from the 730-series predecessor. Don’t expect this to be a paper launch either, since we’ve been told the 400GB and 1.2TB capacities will be available in their 2.5” and add-in versions starting today.

At launch Intel’s 750-series will be in two very different competitive positions as well. The performance figures certainly aren’t unique in the PCIe storage market since drives like OCZ’s RevoDrive 350 and G.Skill’s Phoenix Blade are able to match or even beat it in some metrics, though none currently come close to its price per GB ratio. On the other hand, Intel is offering up a titanic amount of performance for anyone looking into a 2.5” drive since nothing in that segment comes remotely close. What remains to be seen is how these competitors respond to Intel’s thrown gauntlet.

With all of these things being said, it becomes abundantly obvious that Intel’s 750-series targets the enthusiast and semi-professional markets with laser-like precision. For most users a “standard” SATA-based drive will cost substantially less while still providing adequate everyday performance. They’ll have time to wait until NVMe-based alternatives begin reaching lower price points. For everyone else, the 750-series seems to offer a tantalizing blend of next-generation performance alongside a relatively inexpensive price.

mfg.jpg
 
NVMe & What it Really Means to You



nvme.jpg

When people first hear the acronym NVMe, many might assume that it stands for 'Non-Volatile Memory, Enterprise' and it is some new type of enterprise grade NAND. While a good and logical guess, it is not even close to what NVMe is and what it is meant to do. Non-Volatile Memory Express is actually an entirely new standard for SSDs and how they communicate with a system’s PCIe bus.

As most enthusiasts and professionals know, SATA-based devices carry a whole host of legacy issues that intrinsically hold Solid State Drives back. Put simply, the SATA standard was never designed with extremely high performance NAND-based devices in mind; it was designed for Hard Disk Drives. While SATA's governing body has certainly done its best to improve the standard via SATA 6Gbps and the new SATA Express, at the end of the day it still relies upon the outmoded Advanced Host Controller Interface, which requires an intermediary controller between the CPU and the storage device.

By requiring an I/O Controller Hub (or PCH), AHCI - like IDE before it - allows associated drives to be less expensive, as the drive's controller doesn’t need to do the heavy lifting. Rather, the extra processing gets offloaded to the PCH / CPU combination. This increases latency and creates a performance bottleneck when dealing with ultra-high performance devices which require a massive amount of data to be processed off-device.

In the past this didn’t cause too much of an issue since spindle-based drives didn’t need extremely high performance controllers. However, solid state devices have seen their capabilities increase exponentially in a relatively short amount of time, vastly outstripping what even a 6Gbps SATA interface can provide. Even in a perfect world where SATA-IO could quickly implement upgraded standards, there would still be a large latency bottleneck since the SATA controller would still have to receive commands from the CPU, pass them on to the SSD controller, receive the information and then retransmit it.

Without scalability or a quick means to keep pace with SSD technology, many have concluded that SATA is a dead end with an outmoded standards process. Instead something different is needed, something built from the ground up with next generation storage performance as its primary focus.

While waiting for a new standard to emerge, Solid State Drive manufacturers turned to the PCIe bus to circumvent the SATA or SAS controller. Unfortunately, this in turn required a PCIe hub controller and special proprietary drivers to allow a PCIe SSD to connect to the PCIe bus and effectively communicate with the system. For the most part these new overheads simply reduced, instead of eliminated, the underlying problems with existing SSD communication designs.

Obviously these issues have been known for some time, and back in 2007 (a mere four years after SATA was implemented) Intel helped create a new open standard called the Non-Volatile Memory Host Controller Interface Specification (NVMHCI for short). After two years of work, a consortium of over ninety companies founded the NVM Express Workgroup, which would be in charge of developing NVMHCI into a workable, open connectivity and communication standard. It is out of this workgroup that the standard we now know as Non-Volatile Memory Express (NVMe) was created.

lat2.jpg

As previously stated, NVMe has been designed from the ground up with the unique abilities and demands of Solid State Drives in mind. As such, overall latency, available bandwidth and scalability are the most important areas NVMe seeks to address. To tackle them, the NVM Express Workgroup opted to use PCIe as its foundation. However, instead of just making a hodge-podge standard that relies upon PCIe host bus adapters to work, NVMe-compatible controllers are able to talk directly to the CPU since they natively 'speak' PCIe.
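One concrete example of that ground-up design is command queuing: the AHCI specification exposes a single command queue 32 entries deep, while NVMe allows up to 65,535 I/O queues, each up to 65,536 commands deep. A quick sketch of the difference in how much I/O each interface can keep in flight:

```python
# Queue limits taken from the AHCI and NVMe specifications.
ahci_outstanding = 1 * 32          # one queue, 32 commands deep
nvme_outstanding = 65535 * 65536   # up to 64K queues, each up to 64K commands deep

# NVMe can keep vastly more I/O outstanding, which is what lets it
# exploit NAND parallelism across many CPU cores at once.
print(nvme_outstanding // ahci_outstanding)  # prints 134215680
```

This queuing headroom is a big part of why NVMe scales with SSDs in a way AHCI never could.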

By removing this middleman controller, a lot of the latency issues associated with PCIe-based Solid State Drives are also removed. Equally important, this also eliminates the need for custom drivers and their associated overhead, further reducing latency as there are fewer layers between the SSD controller and CPU. In the case of the Intel DC P3700 800GB, for example, its NVMe design allows it to boast an impressively low 20 microsecond read and write latency.

As an added bonus, NVMe-based devices require fewer controller chips, which reduces power consumption and cooling requirements. This is why higher end drives consume only 20-25 watts of power compared to last generation, lower performing drives like Intel's 910 which required 25 to 28 watts.

Obviously NVMe solves the latency issue which was actually starting to bottleneck PCIe-based drives, but it solves future performance issues as well. By using the PCIe bus, the NVMe Workgroup is able to meet emerging needs faster than SATA's or SAS' workgroups could ever hope to, as most of the hard work is done for them. For example, PCIe 2.0 NVMe devices have access to a 2GB/s bus, PCIe 3.0 NVMe devices will be able to hit nearly 4GB/s before saturating the bus, and future versions (PCIe 4.0 compatibility has already been announced) will have nearly 8GB/s of bandwidth to work with, and so on and so forth. Needless to say, both performance and future proofing have been neatly taken care of as they are built directly into the NVMe standards.
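Those bandwidth figures fall straight out of PCIe's per-lane signaling rates and encoding overheads. Here is a quick sketch, assuming a four-lane link (which is what NVMe drives like the 750-series use):

```python
def pcie_x_bandwidth_gb_s(gen, lanes=4):
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    # (raw rate in GT/s, encoding efficiency) per generation:
    # Gen2 uses 8b/10b encoding, Gen3 and Gen4 use 128b/130b.
    params = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}
    gt_per_s, efficiency = params[gen]
    return gt_per_s * efficiency * lanes / 8  # divide by 8: bits -> bytes

print(round(pcie_x_bandwidth_gb_s(2), 2))  # 2.0  GB/s (PCIe 2.0 x4)
print(round(pcie_x_bandwidth_gb_s(3), 2))  # 3.94 GB/s (PCIe 3.0 x4)
print(round(pcie_x_bandwidth_gb_s(4), 2))  # 7.88 GB/s (PCIe 4.0 x4)
```

Notice that the Gen3 x4 result works out to 3.94GB/s, which is exactly the "up to" figure Intel quotes for the 750-series.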

lat.jpg

As an additional benefit from having no legacy issues to support, the number of channels and even number of NAND ICs per channel can be scaled up beyond what AHCI based devices can reasonably support. This allows increased capacity drives which not only reduces the cost per gigabyte of each NVMe device, but also allows for fewer NVMe drives to meet given capacity and performance requirements of a build. That’s something that will be of utmost importance to enterprise consumers. For example, the performance offered by one NVMe DC P3700 can replace up to eight Intel DC S3700 drives, while also offering increased steady state performance and decreased latency.

While NVMe is for the time being an enterprise-only affair, this should quickly change, and workstation or even mass market NVMe-based drives will likely start appearing in the near future. On the surface, the idea of NVMe may not appeal to consumer motherboard manufacturers used to offering only SATA compatibility, and the idea of finding room for another port standard certainly does not appeal to many engineers. Luckily NVMe has another ace up its sleeve: the Small Form Factor 8639 specification and SATA Express.

8639_sm.jpg

In a bout of inter-bureaucratic cooperation rarely seen, SATA Express has been designed from the ground up to use either 'legacy' AHCI or NVMHCI as its standard. Of course, there will be a certain performance loss when running NVMe over SATA Express (at heart it is still a SATA-IO and not an NVM-EW created standard), but this allows an intermediary step between AHCI-compliant solid state drives and NVMHCI-compliant devices. Thanks to Intel pushing SATA Express (via their Z97 PCH controller), compatible ports are sure to become standard on most Intel-based consumer grade motherboards. On its own this would certainly help ease the inevitable transition from AHCI to the superior NVMe standard, but the truly ingenious part is the new 8639 Small Form Factor specification.

SFF 8639 is an emerging standard which takes the usual SATA / SAS / SATA Express power and data ports and extends them to also support NVMe-based devices. To ensure there is no confusion, NVMe devices using SFF 8639, like the 750 Series, have a slightly different pinout configuration. Much like SATA connectors will not work on SAS drives, NVMe connectors will not (or should not) work with the other types, thus eliminating the 'it fits, but doesn’t work' risk that could have otherwise cropped up.

This boon to workstation and home users is more of a side effect, as the SFF 8639 standard is meant to quickly allow servers to be upgraded to NVMe with just a backplane swap. Whatever the reason, the end result is that NVMe will not just be replacing 'PCIe' Solid State Drives, but AHCI-based 'SATA' and 'SAS' SSDs as well. Taken as a whole, NVMe is easily the biggest advancement in storage subsystem development since the creation of solid state drives, and compatible devices are about to start pouring into the market.
 
The Intel 750 PCIe Add-In Card



ang_sm.jpg

NVMe controllers and their associated NVMHCI interface were first introduced to most consumers via the Intel Data Center P3700 series. However, that series was focused only on the enterprise marketplace and as such many home enthusiasts paid it very little attention beyond marveling at what power it offered deep-pocketed business users. The Intel 750, on the other hand, sets out to change people's minds about the cost associated with NVMe and how it can indeed be more than just an 'enterprise' standard.

ang3_sm.jpg

For most consumers the new Intel 750 'half-height' board will be of most interest. As stated in the introduction, this form factor shares more than a passing resemblance to last year's P3700 series, as it uses basically the same PCB and controller architecture. It uses a long half-height board layout that makes use of a four-lane PCIe 3.0 connector. Also like the datacenter series, there is a large metal cover that obscures the components, provides additional cooling and makes the whole affair look like a passively cooled video card. Additionally, the Intel 750 doesn’t require an external power source, as it receives its full 25 watts (typical) of power via the PCIe bus.

back_sm.jpg

The half-height version of the Intel 750 uses only one PCB upon which all the various components are installed. Unlike the DC P3700, which uses 36 NAND ICs, there are 'only' thirty-two 20nm IMFT-fabricated standard MLC NAND modules covering both sides of this PCB. The use of standard MLC instead of eMLC allows the 750 series to hit a much lower price point while still providing more than enough durability for home enthusiasts. Reflecting its lower price point and home consumer orientation, the amount of over-provisioning is also much less than on the DC P3700, but as with the NAND type used this is more than acceptable, as the need for such heavy over-provisioning simply does not exist outside datacenter environments.

front_sm.jpg

On the PCB there is also a single Intel 'CH29AE41AB0' first generation NVMe controller, and five RAM cache ICs for a total of 2.5GB of onboard cache. This is the same controller that the Intel DC P3700 uses, but with twice the amount of RAM; the 750's firmware has also been refined for the home user market instead of the enterprise marketplace. This, the NAND type used, the number of NAND ICs, and the smaller amount of NAND over-provisioning explain the differences in performance specifications between the 750 and DC P3700 series.

cap_sm.jpg

Besides the actual form factor, the 'half-height' version does differ slightly from its 2.5" SFF sibling in a few other minor ways. Firstly, the half-height version does not use just one capacitor and instead has two smaller capacitors located directly on the board - just as the DC P3700 series does. While the physical specifications of the capacitors may differ from the SFF-8639 version's, they still provide full in-flight data-flushing protection.

led_sm.jpg

Since this form factor affords much more room on the PCB, this version also comes with four small LEDs near the rear of the board, nearest the mesh backplate. These provide basic diagnostic abilities, and with just a glance you can tell if the drive is active, inactive, or has failed the internal tests it routinely runs. However, both the different number of capacitors and the diagnostic LEDs are negligible differences which won’t affect overall performance.
 

The Intel 750 2.5” Drive



bottom_sm.jpg

While most enthusiasts will likely gravitate towards the half-height add-in card version of this drive, the Small Form Factor iteration of the Intel 750 series is much more interesting for those who are constrained by their motherboard’s layout. From the outside it may not look all that different from the Intel 730 series it replaces, but it does hint at things to come if you look closely.

comp_sm.jpg

The first hint that it houses something radically different is the sheer thickness of the drive itself. It may indeed use a 2.5" form-factor but, unlike the Intel 730 which uses a 7mm Z-Height design, the Intel 750 is a full 15mm high. You certainly won’t be using this in a notebook.

comp2_sm.jpg

The real difference between the 750 and other 2.5” drives is on the connectivity front with this new generation utilizing the SFF-8639 interface. This means that it may look somewhat similar to the older SATA connections on the Intel 730, but the 750 cannot use SATA controllers, nor even connect via the fairly common yet under-utilized SATA Express interface. As we will see on the next page, Intel has instituted some additions to mitigate the pain of upgrading but in doing so they’ll likely push most users to the add-in card version.

open_sm.jpg

Even at a quick glance, the internal architecture profoundly differs from the previous 730 series. Like most consumer grade 2.5" form factor solid state drives, the 730 series used only one PCB. As you can see, the 750 uses two that are interconnected via a data cable very reminiscent of PCIe extension cables. For all intents and purposes this means the 750 design team simply took the half-height 750 version, cut the PCB in two and crammed it inside a 2.5", 15mm Z-height case. One thing that has been carried over from the 730 series is full enterprise grade data loss protection, including flushing of in-flight data, just as the half-height PCIe version offers. Unlike the 730, these power loss protection abilities are provided via one large capacitor instead of two smaller ones.

open2_sm.jpg

The internal components include 32 IMFT 20nm MLC NAND ICs covering both sides of both PCBs. This means that not every NAND IC will be in contact with the external metal case, but heat buildup should be a non-issue for home users who simply will not stress a 750 series drive like an enterprise consumer would a DC P3700.

ram_sm.jpg

Beyond the differences in physical layout, the SFF-8639 1.2TB version has the exact same amount of onboard RAM cache as its half-height sibling. This means it uses five RAM ICs for a whopping 2.5GB of onboard cache available to Intel's first generation NVMe controller.

top_sm.jpg

To keep this high power controller cool, Intel didn’t add an internal heatsink; rather, the entire external metal enclosure acts as one large heat spreader. This explains the metal fins on one side of the 2.5" drive.

cable_sm.jpg

Beyond these differences there are a few more minor points of deviation from the half-height version that are worth mentioning. Obviously this model does not have any externally visible diagnostic LEDs, and the actual connector is different. Also, unlike the half-height version, which is entirely powered via the PCIe bus, the SFF-8639 model does not have this luxury. Instead, its SFF-8639 to SFF-8643 adapter cable comes with a standard SATA power connector to provide the required 25 watts of power.
 
Motherboard Compatibility Limitations Explained



Intel’s new 750-series isn’t like any drive currently on the market since it requires upstream and downstream bandwidth provisions that are beyond what a typical two-lane PCI-E 2.0 interface can provide. According to Intel these drives need a quartet of Gen3 PCI-E lanes to reach their performance targets. At first glance this may sound relatively innocuous but there are some far-reaching repercussions upon backwards compatibility with existing platforms. By requiring at least four PCI-E 3.0 lanes Intel has effectively limited native compatibility to X99 motherboards or high end Z97 boards that utilize certain PLX bridge chips.

In addition to all of that, the 2.5” version requires the aforementioned SFF-8639 cable because the current SATA Express standard, with its dual PCI-E 2.0 lanes, supposedly can’t provide sufficient bandwidth. Yes, you read that right: the 1GB/s offered up by the SATA-E interface on most motherboards would bottleneck the 750-series. It requires “up to” 3.94GB/s, proving that the SATA Express connector present on so many current motherboards may already be woefully outdated.
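The arithmetic behind that bottleneck claim is straightforward; a quick sketch using the standard PCIe per-lane figures:

```python
# SATA Express carries two PCIe 2.0 lanes at roughly 500 MB/s of usable
# bandwidth each, while the 750-series wants four PCIe 3.0 lanes.
sata_express_gb_s = 2 * 0.5      # ~1.0 GB/s total
intel_750_peak_gb_s = 3.94       # Intel's quoted "up to" figure

shortfall = intel_750_peak_gb_s / sata_express_gb_s
print(round(shortfall, 2))  # prints 3.94 -> the drive wants ~4x what SATA-E offers
```

In other words, even a best-case SATA Express link would leave roughly three quarters of the 750-series' peak throughput on the table.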

Let’s explain this situation in a bit more detail, starting with Z97.

Z97-5.png

Current Z97 boards provide an excellent cost-effective platform for gamers but many of them are saddled with inherent PCI-E lane limitations. Haswell and Devil’s Canyon processors feature just sixteen native PCI-E 3.0 lanes which can either be utilized as a single x16 route for add-in graphics card needs or two x8 pathways for Crossfire / SLI configurations. Meanwhile, the Z97 PCH doesn’t offer any PCI-E 3.0 capabilities whatsoever and its Flex I/O ports (which can be configured for SATA, USB 3.0 or PCI-E use) are only Gen2 compliant, making that a dead end as well.

With these things taken into account, adding a 750-series SSD into the secondary PCI-E x16 slot should technically allow the drive to run at full speed provided you are willing to live with slower GPU access on the primary slot. However, Intel has gone to great lengths to indicate their new drives are certified exclusively for the X99 Haswell-E platform but even those PCI-E heavy boards will face some challenges.

5960X123-3.png

When taken at face value, Haswell-E processors offer a metric ton of PCI-E lanes, which happens to be their primary selling feature over the standard Haswell lineup. However, there are still a finite number of possibilities since boards based off of Wellsburg-X still have to deliver the graphics flexibility enthusiasts crave while now also making room for things like high speed PCI-E NVMe SSDs and USB 3.1. Luckily, USB 3.1 capabilities can be linked directly to some of the eight PCI-E 2.0 lanes originating from the PCH, but the Intel 750-series’ bandwidth needs can’t live there too since the DMI 2.0 interface would quickly become saturated.

With the CPU’s native lanes being the only game in town that could possibly support the 750-series, everything seems to be solved. Not so fast. While the i7-5960X and i7-5930K both support 40 Gen3 PCI-E lanes, the less expensive i7-5820K only has 28 lanes on tap. That presents a small problem for motherboard manufacturers who want broad support for ultra-fast storage subsystems.

In the diagrams below we will show you how ASUS is accomplishing this delicate balancing act on their upcoming X99 Sabertooth, a board which natively supports both of the Intel 750 series’ form factors.

2.png

In the best case scenario where a 40-lane CPU is being used, ASUS offers compatibility via eight PCI-E Gen3 lanes that have been split off from the processor. Via a Q-Switch, these can either be pushed to a PCI-E x8 slot on the motherboard, or that slot can be disabled and four of those lanes can provide bandwidth for an onboard PCI-E x4 M.2 connector. That connector works with an adapter (more about this on the next page) to convert M.2 to SFF-8643. The slot and M.2 port cannot be used at the same time; it’s either one or the other.

This layout still allows for dual x16 graphics cards to be used and seriously limits the sacrifices one would have to make when running Intel’s latest SSDs.

3.png

Getting this all accomplished with a 28-lane Haswell-E processor is a completely different story. In this scenario, the Sabertooth runs dual graphics card configurations in a x16 / x8 layout which leaves just four lanes. These four lanes are once again split off from the primary allocation and directed towards providing PCI-E storage solutions with optimal bandwidth. This time around the motherboard’s physical PCI-E x8 slot receives a maximum of four lanes that can be redirected towards the M.2 slot. This may not offer the broader flexibility that the 40-lane configuration does but it still allows for users to install a 750-series without too many sacrifices.
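The lane juggling described above can be summarized in a tiny function. This is a hypothetical model based purely on the review's description of ASUS' Q-Switch arrangement; the function and its return values are our own invention, not ASUS documentation:

```python
def sabertooth_lane_split(cpu_lanes, use_nvme_storage):
    """Return (gpu_lanes, x8_slot_lanes, m2_storage_lanes) for the X99 Sabertooth.

    Hypothetical model of the Q-Switch behavior described in the text.
    """
    if cpu_lanes == 40:       # i7-5960X / i7-5930K
        gpu = (16, 16)        # dual x16 graphics remain possible
        spare = 8             # eight Gen3 lanes split off from the CPU
    elif cpu_lanes == 28:     # i7-5820K
        gpu = (16, 8)         # dual cards drop to x16 / x8
        spare = 4             # only four lanes left over
    else:
        raise ValueError("model covers only 40- and 28-lane Haswell-E CPUs")
    if use_nvme_storage:
        # the Q-Switch disables the physical slot and feeds four lanes to M.2
        return gpu, 0, 4
    return gpu, spare, 0

print(sabertooth_lane_split(40, True))   # ((16, 16), 0, 4)
print(sabertooth_lane_split(28, True))   # ((16, 8), 0, 4)
```

Either way, the 750-series always ends up with its four Gen3 lanes; what changes is how much is left over for everything else.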

5_sm.jpg

Throughout the last few paragraphs we have been talking about ASUS’ implementation on their new Sabertooth, but from a physical perspective there’s nothing to say that other manufacturers can’t offer similar solutions. Nearly every X99 board on the market can run the Intel 750 in its add-in card form, but when we start looking at SFF-8639 / SFF-8643 compatibility on the next page, things will start to look particularly dire.
 
That Pesky SFF-8643 Connector & How to Use It



Hopefully over the last few pages you noticed that the standard 2.5” version of Intel’s 750-series utilizes a data connector that very few of us have seen before. Called SFF-8643, it is essentially a more compact, higher bandwidth, next generation version of the SATA Express connector we’ve seen on so many Z97 and X99 motherboards. Sounds cool, right? Not so fast, because if you thought the PCI-E lane allocation needed for compatibility was confusing, this thing will set your noggin a-spinnin’.

connector.jpg

The cable that Intel is using for these drives is an interesting one. As you can see, one end has the usual SFF-8643 connection to the motherboard but the other has both a SATA power adapter and an SFF-8639 connector.

Intel is fully cognizant of the fact that the number of consumer motherboards which offer this PCIe interface option is precisely zero right now, but that hasn’t stopped them. Instead, they are using an interface normally seen on SAS 12Gb/s solid state drives and associated enterprise-class motherboards.

Make no mistake; the Intel 750 is not a Serial Attached SCSI drive. It cannot communicate with SAS controllers unless the SAS controller also supports NVMe. The 750 simply uses this connector for compatibility and bandwidth reasons since it offers more than double the potential throughput of today’s SATA-Express interface despite the standard’s inclusion of NVMe within its base specifications.

hyper_kut_sm.jpg

This move to SFF-8643 for what is essentially a “standard sized” SSD presents a huge problem for motherboard vendors since none of their current models support the interface. With that being said ASUS already offers a so-called 'Hyper Kit' with some of their latest 2011-v3 motherboards like the X99 Sabertooth.

As you can see the Hyper Kit is nothing more than a SFF-8643 to M.2 x4 adapter. Such a simple solution is possible because the Intel 750 is bootable and attaches to your system via standard PCIe lanes. On the previous page we saw how ASUS was able to wrangle the processor’s native PCI-E lanes to offer this solution and we’re certain other vendors will follow in their footsteps.

1_sm.jpg
2_sm.jpg

Unfortunately, these adapters do come with some major caveats right now. They may be a compact 60mm in length (i.e. M.2's '2260' standard) but they do not conform to any M.2 height standard. The M.2 specification calls for Z-heights measured in millimeters, and motherboard layout design teams make the assumption that devices using this specialized port are long and slim, but not very tall.

3_sm.jpg
4_sm.jpg

The end result of this is nothing short of a bloody mess for dual GPU systems. On the Sabertooth the adapter completely blocks the installation of most enthusiast graphics cards in the second x16 PCI-E slot.

All in all, the addition of SFF-8643 connectivity to the 2.5” Intel 750 series looks and feels incomplete on the motherboard side. Right now, the ASUS Sabertooth is the only X99 board with the necessary adapter and the execution leaves much to be desired for those using dual GPU systems. We do applaud Intel for pushing the boundaries in this case but without broader compatibility, this convenient form-factor may end up going back to the enterprise market whence it came.
 

Testing Methodology



Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read / write speed and data burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs & HDDs to think about. For the absolute best results you really need a dedicated hardware RAID controller with dedicated RAM for drives to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our test-bed is a more standard motherboard with no mods or high end gear added to it. This helps replicate what your experience as an end user will be like.

Even with the hardware taken care of, the software itself can have a negative or positive impact on the results. As with the hardware, obtaining the absolute best numbers requires tweaking the OS setup; since most people will not do this, our standard OS configuration is used. Except for the Windows 7 load time test, we have minimized this variable by testing each drive as a secondary drive, with an Intel DC S3700 800GB solid state drive serving as the boot drive.

For synthetic tests we used a combination of the ATTO Disk Benchmark, HDTach, HD Tune, Crystal Disk Benchmark, IOMeter, AS-SSD, Anvil Storage Utilities and PCMark 7.

For real world benchmarks we timed how long a single 10GB RAR file took to copy to and then from the devices. We also used 10GB of small files (from 100KB to 200MB) totalling 12,000 files in 400 subfolders.
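To make the real-world transfer test concrete, a timing harness along these lines can be used. This is a sketch only: the review does not publish its exact scripts, and the file paths and sizes below are hypothetical stand-ins.

```python
import shutil
import time
from pathlib import Path

def timed_copy(src: Path, dst: Path) -> float:
    """Copy src to dst and return the elapsed wall-clock seconds."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - start

def throughput_mbs(bytes_copied: int, seconds: float) -> float:
    """Convert a timed copy into an MB/s figure."""
    return bytes_copied / seconds / 1e6
```

Copying to the drive and then from it (with different `src`/`dst` locations) yields separate write and read figures, and running the harness several times and averaging smooths out caching effects.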

For all testing an ASUS Sabertooth TUF X99 (LGA 2011-v3) motherboard was used, running Windows 7 Ultimate 64-bit. Drives were tested either in AHCI mode using Intel RST 10 drivers, or in NVMHCI mode using Intel's NVMe drivers.

All tests were run four times and the averaged results are presented.

Between test suite runs (with the exception of IOMeter, which was done after every run) the drives were cleaned with HDDErase, SaniErase or the manufacturer's toolbox utility and then quick formatted, to ensure they were in optimum condition for the next test suite.

Processor: Core i7 5930K
Motherboard: ASUS Sabertooth TUF X99
Memory: 16GB G.Skill Ripjaws DDR4
Graphics card: NVIDIA GeForce GTX 780
Hard Drive: Intel DC S3700 800GB, Intel P3700 800GB
Power Supply: XFX 850

SSD FIRMWARE (unless otherwise noted):

OCZ Vertex 2 100GB: 1.33
Vertex 460 240GB: 1.0
Intel 730 240GB: L2010400
Samsung 840 Pro 256GB: DXM06B0Q
Plextor M6e 256GB: 1.03
AMD R7 240GB: 1.0
Crucial MX200: MU01
G.Skill Phoenix 480GB: 2.71
Intel 750: 8EV10135
Intel P3700: 8DV10043

Samsung MDX controller:
Samsung 840 Pro 256GB- Custom firmware w/ 21nm Toggle Mode NAND

SandForce SF1200 controller:
OCZ Vertex 2 - ONFi 2 NAND

SandForce SF2281 controller:
G.Skill Phoenix 480GB - Custom firmware w/ 128Gbit ONFi 3 NAND

Marvell 9183 controller:
Plextor M6e 256GB- Custom firmware w/ 21nm Toggle Mode NAND

Marvell 9188 controller:
Plextor M6s - Custom firmware w/ 21nm Toggle Mode NAND

Marvell 9189 controller:
Crucial MX200 - Custom firmware w/ 128Gbit ONFi 3 NAND

Barefoot 3 controller:
AMD R7 (M00) - 19nm Toggle Mode NAND w/ custom firmware
OCZ Arc 100 (M10) - 19nm Toggle Mode NAND

Intel X25 G3 controller:
Intel 730 - Custom firmware w/ ONFi 2 NAND

Intel NVMe G1 Controller:
Intel P3700 - Custom firmware w/ HET NAND
Intel 750 - Custom firmware w/ MLC 20nm NAND
 

Read Bandwidth / Write Performance

Read Bandwidth


<i>For this benchmark, HDTach was used. It shows the potential read speed you are likely to experience with these drives. The long test was run to give a slightly more accurate picture. We don't put much stock in burst speed readings and thus no longer include them. The most important figure is the average speed, which tells you what to expect from a given drive in normal, day-to-day operations. The higher the average, the faster your entire system will feel.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/Intel_750/read.jpg" border="0" alt="" />
</div>

Write Performance


<i>For this benchmark HD Tune Pro was used. To run the write benchmark on a drive you must first remove all partitions from it; only then will the test run. Unlike some other benchmarking utilities, HD Tune Pro writes across the full area of the drive, so it easily exposes any weakness a drive may have.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/Intel_750/write.jpg" border="0" alt="" />
</div>
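Conceptually, a full-area write sweep like HD Tune Pro's amounts to writing fixed-size chunks end to end and logging the throughput of each one, which exposes any slow regions. A minimal sketch, assuming an ordinary file target rather than a raw device and an arbitrary 8MB chunk size:

```python
import os
import time

def write_sweep(path, total_bytes, chunk_bytes=8 * 1024 * 1024):
    """Write total_bytes in chunk_bytes pieces, returning MB/s per chunk."""
    buf = os.urandom(chunk_bytes)  # incompressible data defeats controller compression
    rates = []
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            t0 = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force the write to the device, not the page cache
            rates.append(chunk_bytes / (time.perf_counter() - t0) / 1e6)
            written += chunk_bytes
    return rates
```

Plotting `rates` against position gives the familiar write curve; a drive that throttles or runs out of fast cache shows up as a dip partway through the sweep.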

As you can see Intel has taken advantage of the extra-wide bus that PCIe x4 affords and created a home consumer drive that is simply mind-bogglingly fast. So much so that it takes a four-drive RAID 0 SATA device like the G.Skill Blade to beat it, and then only in sequential write tests. Equally important, both form factors post results so close as to be effectively identical; the minor differences can be considered noise, well within the margin of error for such tests.
 

ATTO Disk Benchmark


<i>The ATTO Disk Benchmark tests the drive's read and write speeds using progressively larger transfer sizes. For these tests, ATTO was set to run from its smallest to largest value (0.5KB to 8192KB) with the total length set to 256MB. The program then reports an extrapolated performance figure in megabytes per second.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/Intel_750/atto_r.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/Intel_750/atto_w.jpg" border="0" alt="" /></div>
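ATTO's approach of stepping through doubling block sizes can be sketched as follows. This is an illustrative approximation, not ATTO's actual implementation: the file target is a stand-in, and only the write side is shown.

```python
import os
import time

def block_size_sweep(path, total_bytes, min_kb=0.5, max_kb=8192):
    """Measure write throughput (MB/s) at doubling block sizes, ATTO-style."""
    results = {}
    size = int(min_kb * 1024)
    while size <= int(max_kb * 1024):
        buf = b"\xa5" * size
        reps = max(1, total_bytes // size)  # keep total work roughly constant
        t0 = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(reps):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # flush to the device before stopping the clock
        results[size] = reps * size / (time.perf_counter() - t0) / 1e6
        size *= 2
    return results
```

The resulting dictionary maps block size to throughput; small blocks stress per-command overhead (where NVMe's shorter path pays off) while large blocks approach the sequential ceiling.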

As with the sequential read and write results, the ATTO numbers are very, very good. By removing the SATA controller from the equation, Intel can offer a drive that provides not only great large-file performance but excellent small-file performance as well. Even against previous SATA-based designs like the four-controller RAID 0 G.Skill Blade, the single-controller Intel 750 easily holds its own, although the Blade does post some incredible write numbers.

Once again both 750 versions perform within tolerances of each other so consumers will not be forced into one form factor over another. Instead they will be able to choose which one best fits their custom build.
 

Crystal DiskMark / PCMark 7

Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. The program measures sequential and random read/write speeds and allows you to set the number of test iterations to run. We left the number of iterations at 5 and the test size at 100MB.


cdm_r.jpg


cdm_w.jpg



PCMark 7


While numerous suites of tests make up PCMark 7, only one is pertinent here: the HDD Suite. It consists of numerous tests that try to replicate real-world drive usage, everything from how long a simulated virus scan takes to complete, to Windows start-up time, to game load time. However, we consider this nothing more than another suite of synthetic tests, so while each test is scored individually we have opted to include only the overall score.


pcm7.jpg



Can you say 'paradigm shift'? Because this is what a paradigm shift looks like. Once enthusiasts see results like these they are going to be salivating at the thought of owning a drive this bloody fast. In fact, the 750's results were so good that we had to include the DC P3700, since comparing it against anything else did not give a full picture of what this drive can do. That is bloody impressive no matter how you look at it.
 
