Toshiba OCZ TR200 960GB & 480GB SSD Review

SKYMTL

HardwareCanuck Review Editor
New SSD launches have been few and far between as of late. That shouldn't come as any surprise since the solid state storage segment has recently seen a reduction in its number of market players, pressure from NAND shortages and gradually increasing BOM costs due to supply constraints. There is also a growing split in the market, with new SSDs acting like bookends: either entry level SATA-based models or extremely high end drives. There isn't much of a middle ground since most of those budget-minded options are already pushing up against the SATA interface's limits. With that in mind, it shouldn't come as any surprise that the lion's share of new devices has come from a handful of first party NAND producers like Samsung, Toshiba and Micron / Crucial, while previous bellwethers are increasingly being pushed to the fringes.

One example of those new market realities was Toshiba's purchase of OCZ, and that partnership is now beginning to bear some very appealing fruit. We have highly praised their enthusiast level RD400 series, mid-level VX500 drives and entry level T-series, with the latter being the most frequently updated. After the Trion 150, TL100 and TR150, that evolution now sees another cycle with the TR200. And like many other lower end SSDs these days, the TR200 experiences mission creep that pushes it deep into the VX-series' territory. Luckily, with the lines blurring so much, would-be buyers can get more performance for their money.

Regardless of anyone's personal opinions about the long-term forecast for SATA and TLC NAND-based storage in the mainstream marketplace, one thing remains clear: the combination of the SATA interface and TLC NAND is here to stay in the entry level and value oriented segments. The reason is simple. While there is indeed a legion of PC enthusiasts who have been using Solid State Drives for years now, the vast majority of buyers have only recently awoken to the limitations of their hard disk drives. This is the corner of the market the all new Toshiba OCZ TR200 series hopes to conquer, and it is the blueprint upon which the series has been built: in order to keep costs low while still offering good performance, the TLC / SATA combination is being used once again.

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/int.jpg" border="0" alt="" /></div>


Following in the footsteps of many, many… many other OCZ models, the all new TR200 series comes in three capacities for now: a small 240GB drive with an MSRP of $89.99 USD, a 480GB model with an MSRP of $149.99 USD, and a large 960GB drive for those edge case value buyers who really need a lot of fast storage and are willing to pay a whopping $289.99 for the privilege of only needing one SATA port. Today we will be taking a closer look at both the 480GB and 960GB versions. One interesting thing to note is that the TR200 240GB is just $13 more than a 32GB Optane module, proving once again that lower-cost SSDs have the capability to really undermine Intel's plans for a holistic Optane ecosystem.
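
For a quick sense of how those MSRPs stack up against one another, here is a simple cost-per-gigabyte calculation using the launch prices listed above (list prices only; street pricing will obviously differ):

<pre>
# Rough cost-per-gigabyte comparison of the TR200 line-up at launch MSRP.
msrp = {240: 89.99, 480: 149.99, 960: 289.99}  # capacity in GB -> USD

for capacity_gb, price in msrp.items():
    print(f"TR200 {capacity_gb}GB: ${price:.2f} -> ${price / capacity_gb:.3f} per GB")

# TR200 240GB: $89.99 -> $0.375 per GB
# TR200 480GB: $149.99 -> $0.312 per GB
# TR200 960GB: $289.99 -> $0.302 per GB
</pre>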

<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/intro.jpg" border="0" alt="" /></div>

On the surface, another SATA drive based on TLC NAND does not sound all that exciting. After all, seemingly every SSD rebrander and manufacturer has released an AHCI SATA drive that relies upon 3D TLC NAND of one flavor or another in the past year or so. This market is so saturated that we could dedicate a <i>long</i> review just to going over all the options, and we would likely come up with a dozen drives that perform exactly the same.

In the case of Toshiba's TR200 series, appearances can be deceiving as this is not 'Just Another TLC Drive'. You see, Toshiba has an ace up their sleeve that they hope will upend the market co-dominance of Samsung and Crucial. This potential ace in the hole can be summed up in a neat and tidy acronym: BiCS 3. BiCS stands for Bit Cost Scalable. What it actually is… is a bit more complicated; so much so that we have dedicated the next page to going over it in detail. For the nonce, suffice it to say that it is Toshiba's take on '3D' NAND. However, this third generation of BiCS technology is a potential game changer which changes how fast TLC NAND can write information to its cells. In time, as this technology matures, it may actually remove the need for the pseudo-SLC buffers that have a tendency to fail at the most inopportune of times.

So what has this change allowed Toshiba's OCZ arm to accomplish? Well, the TR200's on-paper specifications pretty much align with those of the TR150 with respect to read / write speeds and even NAND endurance, but there are some slight deviations that need to be highlighted as well. Basically, OCZ is sacrificing a bit of low level sequential write and random read bandwidth for better random write speeds. That is an interesting choice to make, but it could be one that ultimately benefits these drives' real world performance.

When compared against the direct competition, things do look a bit challenging for the TR200. While its pricing aligns with the likes of WD’s Blue series and Crucial’s BX300 drives, it falls behind in some performance categories while offering less NAND endurance. But we still have to remember these are simply on-paper specs and real world performance will be the deciding factor here.

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/top.jpg" border="0" alt="" /></div>


Beyond this critical and potentially game changing addition, the TR200 series is pretty much a Toshiba drive built to their standards - and make no mistake, it may carry OCZ branding but this is a Toshiba model through and through. Just as with the TR200's predecessor, the Trion 150, this means an all metal 7mm 2.5" chassis and a lack of onboard capacitors for data loss protection (it uses a firmware based solution instead). There is also a marked change away from the typical OCZ blue color scheme towards a more trendy key lime green.

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/tc58.jpg" border="0" alt="" /></div>


The TR200 also relies upon a Toshiba controller that is best described as 'extensively tested'. In the TR200's case it is the TC58NC1010GS9, a revised version of the TC58 that has appeared in many Solid State Drive models from Toshiba and OCZ over the past few <i>years</i>. Unfortunately, this controller is not paired with any external RAM cache to keep performance high when I/O requests get hot and heavy. The drive does however come with OCZ's 'hassle free' 3 year warranty, which is arguably one of the best in the industry <i>and</i> easier than Toshiba's standard RMA process.

On the positive side, while the TR200 series makes use of new NAND technology, BiCS 3 was extensively field tested in the OEM world. Specifically, before Toshiba was ready to release this tech into the consumer market they released the Toshiba XG5 to OEMs in May of this year to ensure that any bugs or quirks were worked out <i>before</i> home users ever got it.

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/soft.jpg" border="0" alt="" /></div>


Since this is an 'OCZ' branded drive, the TR200 is also fully compatible with the latest version of OCZ's SSD Utility app. This application allows for easy monitoring, troubleshooting, and even firmware upgrading all in one package. Unfortunately, while OCZ were one of the first to release an 'SSD Toolbox' application, they have failed to keep up with changing times. As such, experienced SSD users expecting to find an easy way to change the drive's over-provisioning or even boost performance via a RAM cache will be disappointed – both are still MIA from this application. It does include a nifty little benchmark utility that will measure read and write performance – and it is obviously the tool of choice for OCZ's RMA department when determining whether a drive is in need of replacement.

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/board2.jpg" border="0" alt="" /></div>


Cracking open the chassis we can see that, thanks to 3D TLC NAND, even the largest 960GB model only requires a half-length PCB – just like previous Trion and TL models. Specifically, if you do decide to void your warranty you will find eight Toshiba NAND ICs (of varying capacity depending on the TR200 model) and the upgraded Toshiba TC58 controller. As expected there is no RAM IC nor any onboard capacitors, since the controller supposedly needs neither. Instead the controller uses Toshiba's firmware based algorithms to provide data loss protection in the event of an unexpected loss of power. Basically, the drive flushes data to the NAND on a fairly regular basis and keeps the amount of 'in-flight' data to a minimum. This does however cost the new model much needed controller CPU cycles. As we will show later in the review, the missing RAM and the firmware based protection are two weak links in the 'new' TR200's chain.
 
A Closer Look at BiCS Technology


<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/BiCS.jpg" border="0" alt="" /></div>

Toshiba has been working on their BiCS technology since 2007. In the intervening years they have further refined and evolved the basic idea of vertically stacking NAND. Back in 2007 it was indeed a groundbreaking idea but since then others have successfully done it. What Toshiba has done however is to think outside the box and create TLC NAND that is arguably superior to all others.

There are numerous ways in which this BiCS 3 NAND is arguably superior to the IMFT 3D NAND we have previously looked at. The first is in exactly how the NAND is designed and how it is programmed. In a floating gate (FG) NAND cell, like IMFT's 3D NAND, the 'floating' gate is a conductor: to change the charge stored in a cell, the gate itself provides the pathway. In a charge trap flash (CTF) NAND cell, the storage layer is an insulator that traps electrons rather than conducting them.

This may not sound like much of a difference, as both are still NAND (Not-AND) flash designs, but it really is significant. In standard floating gate designs, high write loads cause stress on the crystal lattice of the gate and eventually tiny fractures start to appear in that lattice. Over time these fractures, or 'oxide defects', build up and allow electrons to flow freely into and out of the cell regardless of the gate's state. This causes a short circuit in the cell, rendering it unable to hold a charge. In layman's terms, these fractures are what 'kill' a NAND cell. The smaller the fabrication process, the smaller the gate and the more severe the impact even small fractures can have. This is why, historically, every new smaller fabrication node came with a reduction in cell life. It is also a large part of why IMFT's first generation '3D' NAND was not built on a smaller node than the preceding '2D' NAND process, and may never go smaller - or as Micron states on their 3D NAND page (https://www.micron.com/products/nand-flash/3d-nand): "This vertical approach lets us expand the size of each 3D NAND cell—the lithography is actually larger than our latest planar NAND."

Even more concerning, this issue becomes much more acute when dealing with NAND cells which hold 3 bits – and therefore 8 voltage states – as 'bit-flipping' (or more accurately an error in the voltage of the cell) can readily happen after only a small portion of the gate is damaged from use. This is why TLC was always considered by enthusiasts to be the red-headed stepchild of the industry.
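
As a quick back-of-the-envelope illustration of why every extra bit per cell makes voltage drift so much more dangerous, the number of states a cell must distinguish is two raised to the number of bits stored, so the usable margin between adjacent states shrinks accordingly. The snippet below is a deliberately simplified model that assumes an idealized, evenly divided voltage window:

<pre>
# Toy illustration: more bits per cell = more voltage states = thinner margins.
# Assumes an idealised 1.0 (normalised) voltage window evenly divided between states.
for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    states = 2 ** bits
    margin = 1.0 / (states - 1)          # normalised gap between adjacent states
    print(f"{name}: {bits} bit(s) -> {states} states, relative margin ~{margin:.2f}")

# SLC: 1 bit(s) -> 2 states, relative margin ~1.00
# MLC: 2 bit(s) -> 4 states, relative margin ~0.33
# TLC: 3 bit(s) -> 8 states, relative margin ~0.14
</pre>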

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/BiCS2.jpg" border="0" alt="" />
</div>

Conversely, a charge trap cell does not suffer from this issue until a relatively large portion of the storage layer is damaged. This is because these small fractures only create a short circuit at the <i>electron</i> level, draining off only the electrons that are 'touching' the fracture and leaving the rest of the electrons in the cell to hold the voltage well enough that the controller can still accurately tell what the cell's state is. In layman's terms, CTF is more durable than floating gate and requires a lot less handholding – or 'housekeeping' – by the controller, freeing up valuable cycles for more important things such as real-time I/O requests.

Equally important, these fractures are also a lot less likely to occur in the first place. Charge trap cells do not need the same aggressive erase technique that floating gates require to ensure a cell is wiped clean of residual charge. Instead they can use a gentler, lower voltage, yet <i>faster</i> process that still ensures the cell is completely clean and ready to store new data.

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/charge.jpg" border="0" alt="" />
</div>

To imagine this difference, take two scenarios. In one you have a water balloon filled with water. In the other you have the same latex balloon, but it is stretched over a glass of water that has been tipped upside down. For these scenarios the latex balloon is the gate and the water represents the electrons in a cell. In both cases you are going to randomly prick the balloon with a very sharp, very small needle. This needle represents the random damage that happens to a NAND cell's gate when it is being heavily used, or simply being erased. The end result is that while the balloon over the glass will indeed lose water over time, it will only 'drip' out of the holes, whereas the balloon filled with water is going to catastrophically fail after only a few pin pricks. That is the difference between FG and CTF designs, albeit exaggerated for effect.

Floating gate manufacturers are well aware of this issue and have done their best to mitigate the damage done during erases by simply using less voltage for a longer period of time. This makes for a slower erase cycle, and one that is still more damaging than its charge trap counterpart. Barring going back to 50nm process nodes, there is only so much manufacturers can do with floating gate designs to increase longevity. Even then, it is a tradeoff between early cell death and higher levels of performance. Toshiba were simply the first to see these issues associated with ever decreasing node sizes, and were willing to spend money on coming up with a solution.

This major difference between typical floating gate and charge trap designs is indeed significant, but it is not the only reason CTF is seen by many experts as the more elegant solution. Another reason is that, while it is still a <i>possibility</i>, the chance of capacitive coupling occurring is greatly reduced compared to floating gate designs. Capacitive coupling happens when these 'oxide defects' do not drain the cell voltage enough for the controller to notice and mark the cell as dead, but enough that free electrons bleed over to adjacent cells. When enough of this bleed over happens the gates themselves become linked, as there is now a pathway connecting them together that is <i>not</i> monitored by the controller.

In simplistic terms, capacitive coupling means that when the controller initiates a charge state change in any one of these coupled cells, the voltages of <i>all</i> the linked cells can <i>randomly</i> change. This in turn can flip a 0 into a 1 in not just one bit but up to three bits <i>per linked</i> cell, which leads to read errors in these damaged cells and requires the controller to fall back on its ECC to try and recover the data. The net result is not only a risk of data loss, but performance also suffers. In an effort to reduce this issue, floating gate NAND based drives shuffle and rewrite data on a very short schedule – usually measured in weeks – both to keep the problem from becoming serious and to uncover misbehaving cells early. This does however reduce NAND life, as the long term cumulative effect of all this writing, erasing, and re-writing contributes to the very wear it is trying to work around.

These are the main reasons Toshiba was the first NAND manufacturer to start thinking about a new way of creating NAND that did not suffer from these issues. It is also why they were the first to start expending significant resources on research and development of charge trap based NAND storage (circa 2007). Even though Samsung was first to the marketplace with a charge trap design, via its 'V-NAND' technology, Toshiba's greater experience should translate into fewer real-world issues. Put another way, the chances of the early generation Samsung EVO issues happening to Toshiba and their BiCS NAND are a lot lower.

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/float.jpg" border="0" alt="" />
</div>

Further helping cement Toshiba's take on charge trap flash as arguably the superior option to both Samsung's V-NAND and IMFT's 3D NAND is the fact that Toshiba has not taken a linear path with their NAND cell layout. As we detailed in our Crucial MX300 review, IMFT 3D NAND, much like Samsung V-NAND, is laid out like a large apartment building with vertical and horizontal interconnects ('elevators' and 'hallways') and the NAND cells themselves in 'apartments' in between these controller routes.

In BiCS these vertical and horizontal paths are not separate and distinct. Instead, each vertical pathway is U-shaped, with vertical lines of NAND interconnected at the 'base of the building'. Amongst other things, this in theory should allow for faster inter-IC transfer of data from one 'apartment' block to another – for example during internal housecleaning when a NAND block needs to be refreshed or erased. What is not theory is that BiCS TLC NAND is significantly faster than IMFT or Samsung TLC NAND when it comes to erase cycles.

With other NAND manufacturers' '3D' NAND, the controller can only initiate an erasure at the single page (IMFT) or two page (Samsung) level. With BiCS, the controller can erase <i>three pages</i> per cycle. This is important as all 3D NAND – including BiCS – relies upon a pseudo-SLC write buffer to achieve acceptable write performance. This buffer may vary in size from manufacturer to manufacturer, but once this group of TLC cells acting in SLC mode is exhausted the controller has to rely upon the rest of the TLC NAND. Until the pseudo-SLC buffer can be flushed of data, erased, and made ready for new incoming write requests, performance drops precipitously. TLC NAND is not only rather slow to complete write requests compared to SLC and MLC, but the controller has to dedicate some of its cycles to the internal transfer of data from SLC to TLC NAND, other cycles to handling incoming write requests, and others still to erasing the SLC write buffer. That is a lot more demand than the controller can keep up with when it is only able to erase one or two pages per cycle. As such, TLC BiCS based drives should be able to get out of this 'emergency mode' faster than their counterparts.
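
To see why pages-per-erase-cycle matters for how quickly a drive escapes that 'emergency mode', consider the deliberately simplified model below. All of the timing numbers are made up for illustration; the only real takeaway is that reclaim time scales inversely with how many pages can be erased per housekeeping cycle:

<pre>
# Toy model (illustrative numbers only): time for the controller to reclaim an
# exhausted pseudo-SLC buffer, assuming reclaim speed scales with how many
# pages can be erased per housekeeping cycle. Real firmware is far more complex.
PAGES_IN_SLC_BUFFER = 60_000      # hypothetical buffer size in pages
CYCLES_PER_SECOND = 2_000         # hypothetical housekeeping cycles per second

for design, pages_per_cycle in (("IMFT-style", 1), ("Samsung-style", 2), ("BiCS-style", 3)):
    seconds = PAGES_IN_SLC_BUFFER / (pages_per_cycle * CYCLES_PER_SECOND)
    print(f"{design}: ~{seconds:.1f}s to free the buffer")

# IMFT-style: ~30.0s to free the buffer
# Samsung-style: ~15.0s to free the buffer
# BiCS-style: ~10.0s to free the buffer
</pre>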

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/board1.jpg" border="0" alt="" /></div>


The reason charge trap based NAND storage is not as common as its floating gate counterpart all boils down to manufacturing cost. Floating gate storage is a very mature process with most of the major bugs worked out, or at least mitigated as best as possible. The charge trap concept, on the other hand, may date back to the 1960s, but creating a NAND production process that scales up to industrial levels has proven difficult. It is not as simple a process as FG and has taken Toshiba many, many years to perfect. A perfect example of this: the Toshiba OCZ TR200 series may be the first mass market consumer drive released to the general public with BiCS, but that is not to say it uses first generation BiCS NAND. Instead it uses the <i>third</i> generation (aka 'BiCS 3'). Compare this ultra conservative approach of not releasing a product before it is fully baked with Samsung's chase for 'first to market!' accolades – with consumers paying the price – and Toshiba certainly appears to have the better, more mature technology. As they should, since they were the ones who pioneered the idea of using a charge trap for NAND based storage in the first place.

Taken as a whole, Toshiba's fresh take on NAND design is extremely exciting and may help breathe new life into a technology that some – like IMFT with their 3D XPoint phase change memory technology – believe is quickly becoming irrelevant and antiquated. It remains to be seen whether this newer approach can indeed stave off the almost inevitable change-over from NAND to newer 'solid state' technologies.
 

Test System & Testing Methodology


Properly testing a modern Solid State Drive to fully understand its abilities is not a simple undertaking. It takes time and it takes <i>experience</i> as relying upon tried and true applications is no longer good enough. Modern solid state drives come with a whole arsenal of tricks to ensure that the end-user never sees the true capabilities of a drive long enough to form a negative opinion. They have gotten so good at coming up with workarounds that minimize any underlying issues that even less experienced reviewers can be fooled.

This certainly is a laudable goal; at the end of the day an SSD is not meant for reviewers, it is meant for users. As such, anything that can make the overall experience a more positive one has to be considered a good thing. It does however make it difficult to make an informed decision if a drive is never truly pushed past its performance envelope – as only then can you, the potential buyer, know if a given model is right for you.

This new testing methodology is the distillation of a decade's worth of Solid State Drive reviewing. In those years we have seen all the tricks and all the workarounds, and we have spent a lot of time and effort on creating an improved methodology designed to strip them all away. Only then can we show you, our readers, exactly what a drive is made of. To do this we have blended the new with the old. Long term readers will notice that many of our tests are similar to the way we used to do things, but even here things have changed greatly. The size, the scope, and even the underlying methodology have been improved.

In the past we, like other review sites, would test a drive when it was empty of all other data. This is unrealistic, and while we did do some limited partial and full drive performance testing, it was based on an unrealistically optimistic scenario. As such, from now on <i>all</i> solid state drives will be tested only after they have first been filled to 50% of their capacity. The only exceptions are testing applications that require an empty drive to work. For example, HD Tach requires not only an empty drive but one that is also unpartitioned in order to run. These are now the exceptions, not the rule.
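
For the curious, the general idea behind the 50% pre-fill is easy to sketch. The snippet below is a minimal illustration rather than our actual tooling, and the mount point is a placeholder:

<pre>
# Minimal sketch (not our actual tooling): pre-fill a freshly formatted test
# drive to roughly 50% of its capacity with incompressible data before running
# the benchmarks. The mount point below is a placeholder.
import os, shutil

TARGET = r"E:\fill"               # hypothetical mount point of the drive under test
FILE_SIZE = 1 << 30               # 1 GiB per filler file
BLOCK = 64 << 20                  # write in 64 MiB blocks to keep RAM use sane

os.makedirs(TARGET, exist_ok=True)
capacity = shutil.disk_usage(TARGET).total
written, index = 0, 0

while written + FILE_SIZE <= capacity // 2:        # stop at ~50% of capacity
    path = os.path.join(TARGET, f"fill_{index:04d}.bin")
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(os.urandom(BLOCK))             # random data defeats compression
        f.flush()
        os.fsync(f.fileno())                       # ensure data actually reaches the drive
    written += FILE_SIZE
    index += 1

print(f"Wrote {written / (1 << 30):.0f} GiB of filler data")
</pre>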

Long term readers will also notice a few new additions to our testing suite. These custom tests are worst case scenarios that we have come up with, but ones that are still within the realm of possibility – as all tests are focused on showing overall performance in as realistic a manner as possible.

For the time being, all testing will be carried out on a Ryzen 7 based system. Specifically, we are using an overclocked Ryzen 7 1700 running at 4.0GHz with its Infinity Fabric clocked at 1600MHz (via DDR4-3200 RAM). AMD's Ryzen series was selected as it has become very popular with a wide range of consumers – everything from budget to enthusiast. More importantly, Intel based platforms are proving to be rather short-lived, with lifecycles best described in months, not years. We will reassess the testbed on a regular basis and update it to what we consider the best option. We have no brand loyalty; our only criterion is using what best showcases the strengths and weaknesses of a review sample.

For all of the benchmarks, appropriate lengths are taken to ensure an equal comparison through methodical setup, installation, and testing. The following outlines our testing methodology setup:

A) Windows is installed using a full format.

B) Chipset drivers and accessory hardware drivers (audio, network, GPU) are installed.

C) To ensure consistent results, a few tweaks are applied to Windows 10 Pro and the NVIDIA control panel (see the automation sketch after this list):
• UAC – Disabled
• Indexing – Disabled
• Superfetch – Disabled
• System Protection/Restore – Disabled
• Problem & Error Reporting – Disabled
• Remote Desktop/Assistance - Disabled
• Windows Security Center Alerts – Disabled
• Windows Defender – Disabled
• Screensaver – Disabled
• Power Plan – High Performance
• V-Sync – Off

D) All available Windows updates are then installed.

E) All programs are installed and then updated, followed by a defragment.

F) All networking is disabled so as to eliminate this source of variability and overhead

G) Benchmarks are each run four to ten times, and unless otherwise stated the results are then averaged.
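
For readers who want to replicate part of this setup, here is a rough sketch of how a few of the item C tweaks could be scripted. It is illustrative only: it must be run from an elevated prompt, service names can vary between Windows builds, and items like Defender, UAC and the NVIDIA V-Sync setting are still changed by hand:

<pre>
# Rough sketch: script a handful of the Windows tweaks listed above.
# Run from an elevated prompt; this is illustrative, not our exact procedure.
import subprocess

COMMANDS = [
    # Power Plan -> High Performance (SCHEME_MIN is the built-in alias)
    ["powercfg", "/setactive", "SCHEME_MIN"],
    # Superfetch (service name "SysMain") -> Disabled
    ["sc", "config", "SysMain", "start=", "disabled"],
    # Windows Search indexing -> Disabled
    ["sc", "config", "WSearch", "start=", "disabled"],
]

for cmd in COMMANDS:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
</pre>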


The full system specs are as follows:

Case: Lian-Li PC-T70W
Motherboard Chipset: AMD X370
CPU: AMD Ryzen 7 1700 @ 4GHz
RAM: DDR4-3200 16-16-16-18
OS: 64-Bit Windows 10 RS2 Pro
OS Drive: 1x 1TB Crucial MX300 SSD
Graphics card: EVGA GeForce GTX 1070 SC Gaming
Power Supply: Seasonic Focus Gold 850FX


<i>Special Thanks to Crucial for their support and supplying the MX300 SSD.
Special thanks to AMD for their support and supplying the DDR4 RAM Kit. </i>
 

Read Bandwidth / Write Performance

Read Bandwidth


<i>For this benchmark, HD Tune was used. It shows the potential read speed you are likely to experience with these drives. While this application provides numerous results, the most important is the Average Speed number. This is the figure that tells you what to expect from a given drive in normal, day to day operations.</i>

<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/read.jpg" border="0" alt="" /></div>


Write Performance


<i>For this benchmark HD Tune Pro was used. To run the write benchmark on a drive you must first remove all partitions from it; only then will the application allow the test to run. Unlike some other benchmarking utilities, HD Tune Pro writes across the full area of the drive, so it easily exposes any weakness a drive may have.</i>

<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/write.jpg" border="0" alt="" /></div>


As expected, the read performance is pretty much what we have come to expect from a modern AHCI SATA solid state drive. That is to say it is good, but not outstanding. The write performance is also fairly decent given that the pseudo-SLC buffer is an anemic 8GB on the TR200 480GB model and a mediocre 15GB on the 960GB model. To put this into perspective, that is about half of what Crucial's BX300 or MX300 series come equipped with.

When this rather small buffer is exhausted, performance plummets to roughly the 100MB/s mark – or about the same as what a decent hard disk drive posts. On the positive side, this is still better than what the TL100 and TR150 did in the exact same scenario, where write speeds fell as low as 50MB/s.
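
To put some rough numbers on what buffer exhaustion means for a big transfer, here is a simple back-of-the-envelope model. The in-buffer write speed is an assumed SATA-class figure rather than a measured TR200 number; the 8GB / 15GB buffer sizes and the roughly 100MB/s post-buffer floor are the ones discussed above:

<pre>
# Back-of-the-envelope: effective write speed for a large transfer once the
# pseudo-SLC buffer runs dry. IN_BUFFER_SPEED is an assumed SATA-class figure,
# not a measured TR200 result; buffer sizes and the post-buffer floor come
# from the discussion above.
IN_BUFFER_SPEED = 450.0    # MB/s while writing into the pseudo-SLC buffer (assumed)
POST_BUFFER_SPEED = 100.0  # MB/s once the buffer is exhausted
TRANSFER_GB = 60           # size of the incoming write, in GB

for model, buffer_gb in (("TR200 480GB", 8), ("TR200 960GB", 15)):
    fast_mb = min(buffer_gb, TRANSFER_GB) * 1000
    slow_mb = max(TRANSFER_GB - buffer_gb, 0) * 1000
    seconds = fast_mb / IN_BUFFER_SPEED + slow_mb / POST_BUFFER_SPEED
    print(f"{model}: 60GB write takes ~{seconds:.0f}s "
          f"(~{TRANSFER_GB * 1000 / seconds:.0f} MB/s effective)")

# TR200 480GB: 60GB write takes ~538s (~112 MB/s effective)
# TR200 960GB: 60GB write takes ~483s (~124 MB/s effective)
</pre>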
 

ATTO Disk Benchmark


<i>The ATTO disk benchmark tests the drive's read and write speeds using progressively larger transfer sizes. For these tests, ATTO was set to run from its smallest to largest value (0.5KB to 8192KB), the total length was set to 256MB, and the queue depth was left at its default of 4. The program then reports an extrapolated performance figure in megabytes per second. Of all the results, there are four we consider the most important: 0.5KB, 2KB, 4KB, and 8192KB. The first three show how a given drive handles those critical small transfers, while the largest shows what the drive can do under optimal conditions.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/atto_w.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/atto_r.jpg" border="0" alt="" /></div>

Once again the TR200 is proving to be noticeably better than the Trion 150 it replaces. By the same token, the TC58 controller is getting a tad long in the tooth. While this is the latest and greatest revision, it is still a controller based on a rather old design. Toshiba really needs to step up and replace it with a newer, faster, and just plain <i>better</i> controller.
 

Crystal DiskMark / PCMark 7

Crystal DiskMark


<i>Crystal DiskMark is designed to quickly test the performance of your drives. Currently, the program can measure sequential and random read/write speeds, and allows you to set the number of test iterations to run. We left the number of runs at 5 and the test size at 100MB.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/cdm_w.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/cdm_r.jpg" border="0" alt="" /></div>


PCMark 7


<i>While there are numerous suites of tests that make up PCMark 7, only one is pertinent here: the Storage 2.0 test. Storage 2.0 consists of numerous tests that try to replicate real world drive usage – everything from how long a simulated virus scan takes to complete, to MS Vista start up time, to game load time. However, we do not consider this anything other than just another suite of synthetic tests. For this reason, while each test is scored individually, we have opted to include only the overall score.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/pcm.jpg" border="0" alt="" /></div>


Regardless of the new and better NAND technology being used in this drive, we are seeing a few major weaknesses of this 'new' series appear. First, the lack of a RAM buffer means that when the automatic data-loss protection kicks in… performance tanks. Mix in an older controller and the results are decent but certainly not outstanding. This is a shame, as the majority of the performance boost is coming from the BiCS NAND and it is being handicapped by the controller – as we will show in detail later in the review.
 

AS-SSD / Anvil Storage Utilities Pro

AS-SSD


<i>AS-SSD is designed to quickly test the performance of your drives. Currently, the program can measure sequential and small 4K read/write speeds as well as 4K speeds at a queue depth of 6. While its primary goal is to accurately test Solid State Drives, it does equally well on all storage media; it just takes longer to run, as each test reads or writes 1GB of data.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/asd_w.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/asd_r.jpg" border="0" alt="" /></div>


Anvil Storage Utilities Pro


<i>Much like AS-SSD, Anvil Pro was created to quickly and easily – yet accurately – test your drives. While it is still in the beta stage, it is a versatile and powerful little program. Currently it can test numerous read/write scenarios, but two in particular stand out for us: 4K at a queue depth of 4 and 4K at a queue depth of 16. A queue depth of four with 4K transfers equates to what most users will experience in an OS scenario, while a depth of 16 will be encountered only by power users and the like. We have also included the 4K queue depth 1 results to help put those two numbers in their proper perspective. All settings were left at their defaults and the test size was set to 1GB.</i>

<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/anvil_w.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/anvil_r.jpg" border="0" alt="" /></div>

Once again, both the 480GB and 960GB capacities of the TR200 offer decent performance for an entry level series, but neither is really up to par when compared against similarly priced models from the competition.
 

IOMeter Latency Torture Test


<i>In a perfect world the response time of a storage device would be as close to instantaneous as possible. That of course is impossible; instead, any delay under 0.100 of a second (100 milliseconds) is considered the gold standard of storage responsiveness. This is because 100ms is generally considered the smallest interval of time humans can perceive, and anything above it will result in the occasional noticeable 'stutter'. However, a single solitary 200ms pause is better than a significant cluster of 150ms pauses. As such, any and all results must be considered in their totality and not judged on a single data point. This is why we have included four charts instead of just two. The first two charts represent the full results of a 10 minute IOMeter read test and a 10 minute write test. The last two show the average read/write results as well as the maximum read/write response time that occurred during these tests.

To obtain these results we configured IOMeter to use a 10 second ramp up followed by a 10 minute run for each test, using the entire drive's capacity. We also configured IOMeter to record the results in one second increments (the smallest time slice allowable). The first test used 4K aligned data chunks that were 100% random, 100% write, using the Full Random pattern. The second used 4K aligned data chunks that were 100% random, 100% read. This is done to show how the controller handles emergency housecleaning even when inundated with read I/O requests.

In this test we are not focusing on steady-state results or other Enterprise oriented factors. We are simply looking at overall latency under what can be considered a realistic worst-case scenario for home users, via a method that can still reliably strip away the various protection mechanisms the controller has in its arsenal to keep up appearances. These are not the bad old days when 'SSD stutter' was truly a thing; instead, this test is designed solely to highlight how good or bad a controller and NAND combination really is.

All tests were run four times and the most common result was used.</i>
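
For readers curious how per-second samples like these get crunched, the sketch below shows the general idea. It assumes a simplified one-column CSV export with a single maximum-latency value (in milliseconds) per second of the run, which is not IOMeter's raw result format:

<pre>
# Sketch of the post-processing idea: given one max-latency sample (in ms) per
# second of a 10 minute run, count how often the drive blows past the 100ms
# "perceptible stutter" threshold. Assumes a simplified one-column CSV export.
import csv

THRESHOLD_MS = 100.0

def summarise(path: str) -> None:
    with open(path, newline="") as f:
        samples = [float(row[0]) for row in csv.reader(f) if row]
    stalls = [s for s in samples if s > THRESHOLD_MS]
    print(f"{path}: {len(samples)} samples, "
          f"avg {sum(samples) / len(samples):.1f}ms, "
          f"max {max(samples):.1f}ms, "
          f"{len(stalls)} samples over {THRESHOLD_MS:.0f}ms")

summarise("tr200_480gb_random_write.csv")   # hypothetical exported file name
</pre>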

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/random_1.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/random_2.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/random_3.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/random_4.jpg" border="0" alt="" /></div>

This is actually a pretty good snapshot of what a 'perfect storm' of design decisions can culminate in. Three key factors led to these rather poor results. The first is the lack of a RAM buffer. The second is the controller cycles robbed by the firmware data-loss protection. The third is pairing a rather old controller with TLC NAND that has an anemically sized pseudo-SLC write buffer.

Better TLC NAND technology or not, improved results or not, this new series is still a TLC based drive. Even the best TLC NAND is still slower than MLC NAND. In time BiCS may change this but performance is still not there yet. Drive manufacturers understand this and it is why they include beefy 'SLC' write buffers.

The TR200 does not have a large write buffer, so this combination of decisions by Toshiba puts more stress on the controller to keep ahead of I/O demands. As such, it shows highly variable performance when stressed. And since it uses firmware rather than hardware-based data loss protection, this variability is even more extreme than it otherwise could be. At set intervals (both time and data based) the controller has to stop working on read and write requests to back up in-flight data, just in case power is unexpectedly lost. You can actually see that happening in the charts above.
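
A toy model makes the mechanism easy to visualize. The numbers below are invented for illustration and are not taken from the TR200's firmware; the point is simply that periodic flush pauses turn an otherwise flat latency line into the spiky pattern seen in the charts:

<pre>
# Toy model (made-up numbers, not TR200 firmware behaviour): show how periodic
# data-protection flushes turn an otherwise steady ~1ms response time into
# visible latency spikes, similar in shape to the charts above.
STEADY_LATENCY_MS = 1.0     # assumed normal response time
FLUSH_EVERY_N_IOS = 500     # assumed interval between firmware flush points
FLUSH_PAUSE_MS = 150.0      # assumed pause while in-flight data is committed

spikes = 0
total_ms = 0.0
for io in range(1, 10_001):                      # simulate 10,000 I/O requests
    latency = STEADY_LATENCY_MS
    if io % FLUSH_EVERY_N_IOS == 0:              # flush point reached
        latency += FLUSH_PAUSE_MS
        spikes += 1
    total_ms += latency

print(f"{spikes} latency spikes, average latency {total_ms / 10_000:.2f}ms")
# -> 20 latency spikes, average latency 1.30ms (a flat average hides rare 151ms stalls)
</pre>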

To a certain extent this issue could have been avoided by simply using a large external RAM cache buffer. This buffer would have kept read and write I/O requests from <i>appearing</i> to randomly stall and made the TR200 series <i>seem</i> a lot more responsive than it really is. This is why most modern solid state drives come with onboard RAM, but as Toshiba has opted not to include this critical feature, the randomness is on full display for anyone willing to test for it.

Toshiba really needs to rethink their whole approach to consumer drives and <i>either</i> use their new BiCS TLC NAND <i>or</i> firmware data loss protection. They also need to start using a much more capable controller that can use an external RAM cache buffer. Crucial was able to grasp why combining all these cost-cutting measures into one drive is such a bad idea and we hope Toshiba follows suit ASAP. Until then this drive will indeed react badly when pushed past its rather narrow performance window. 
 

Windows 10 / Adobe CC Load Time

Windows 10 Start Up with Boot Time A/V Scan Performance


<i>When it comes to hard drive performance there is one area that even the most oblivious user notices: how long it takes to load the Operating System. We have chosen Windows 10 RS2 64bit Pro as our Operating System with all 'fast boot' options disabled in the BIOS. In previous load time tests we would use the Anti-Virus splash screen as our finish line; this however is no longer the case. We have not only added in a secondary Anti-Virus to load on startup, but also an anti-malware program. We have set Malwarebytes 2 to initiate a quick scan on Windows start-up and the completion of the quick scan will be our new finish line. </i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/boot.jpg" border="0" alt="" /></div>


Adobe CC 2017 Load Time


<i>Photoshop is a notoriously slow loading program under the best of circumstances, and while the latest version is actually pretty decent, once you add in a bunch of extra brushes and such you get a really great torture test that can bring even the best of the best to their knees. Let's see how our review unit fared in the Adobe crucible!</i>

<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/adobe.jpg" border="0" alt="" />
</div>

Once again these results are going to impress newcomers to Solid State Drives. However, that is a rather low bar to clear, as almost any SSD is sure to impress novices who have never experienced the sheer performance boost an SSD offers. When compared to what other, comparably priced series can do, the TR200 is starting to fall behind the curve. Considering this is a brand-new series, being inferior to previously released competition is not a position any manufacturer wishes to find itself in on launch day.
 

Firefox / Real World Data Transfers

Firefox Portable Offline Performance


<i>Firefox is notorious for being slow at loading tabs in offline mode once the number of pages to be opened grows larger than a dozen or so. We can think of few worse scenarios than having 120 tabs set to reload in offline mode upon Firefox startup, but that is exactly what we have done here.

By having 120 pages open in Firefox Portable, setting Firefox to reload the last session upon next start and then setting it to offline mode, we are able to easily recreate this worst case scenario. Since we are using Firefox Portable, all files are conveniently located in one place, making it simple to repeat the test as necessary. In order to ensure repeatability, before touching the Firefox Portable files we backed them up into a .rar archive and only ever extract a copy of it to the test device.</i>

<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/ff.jpg" border="0" alt="" /></div>


Data Transfer Torture Test


<i>New to our testbed suite is a simultaneous read and write test using real world data. Unlike almost all other tests in our arsenal, this is a test that pits the controller and NAND against themselves. The faster the controller reads data from the NAND, the more pressure it puts on itself to write to the NAND – and vice versa. This is truly a no-win scenario for the controller. Instead, it has to find the optimal balance between reads and writes in real time, while also juggling housekeeping and the other behind the scenes tasks that keep ready-to-use NAND available for writing.

By doing this we not only strip away all cache-based and short term performance boosting algorithms, we also see exactly how good the firmware, the controller, and even the NAND are at handling high stress environments. To further expose a controller and NAND's true abilities, we have opted for a single 60GB file for the large file test and 20GB for the small file test. This way even the largest pseudo-SLC buffer will be unable to mask any underlying weakness.

On the surface, the chance of the average home user running into a scenario that requires simultaneous read and write performance seems minimal at best. The reality, however, is that it is a very common occurrence. Most PC users do not have multiple solid state drives and instead rely upon a single storage device to handle all their needs. As such, when they download a Steam game and then install it, this is the type of scenario they run into, albeit to a more limited extent.

In order to allow for consistency from run to run we have chosen RichCopy to carry out this arduous task. Also, in order to replicate a home user environment as closely as possible, we have limited RichCopy to a single thread / queue depth.</i>
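
RichCopy is what we actually use for these runs, but the core concept is simple enough to sketch: copy a large file from one folder to another on the same drive, single threaded, and time it, forcing the drive to service reads and writes simultaneously. A rough, hypothetical equivalent:

<pre>
# Rough sketch of the concept behind the test (we use RichCopy for the real
# runs): a single-threaded copy within the same drive forces it to service
# reads and writes at the same time. Paths and file name are placeholders.
import os, shutil, time

SOURCE = r"E:\testdata\large_60gb.bin"   # hypothetical 60GB source file
DEST = r"E:\copytest\large_60gb.bin"     # destination on the SAME drive

os.makedirs(os.path.dirname(DEST), exist_ok=True)
size_mb = os.path.getsize(SOURCE) / (1 << 20)

start = time.perf_counter()
shutil.copyfile(SOURCE, DEST)            # single thread, default queue depth
elapsed = time.perf_counter() - start

print(f"Copied {size_mb:.0f} MB in {elapsed:.1f}s -> {size_mb / elapsed:.0f} MB/s")
</pre>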

<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/copy_lg.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/TR200/copy_sm.jpg" border="0" alt="" /></div>


As with the IOMeter Latency torture test, these results just underscore exactly what is wrong with this drive. Basically, it is slower than the competition because Toshiba is still using an older controller that has no external RAM buffer.
 
