

Reminder: OCZ SSD Giveaway & Ask The Experts Recap


As a final reminder to anyone who has yet to enter, our OCZ SSD giveaway closes later this week, on Friday. We’ll be giving away 3 SSDs altogether: 512GB and 1TB versions of the recently launched OCZ VX500 SATA SSD, and a 512GB OCZ RD400 NVMe M.2 SSD. So if you like free stuff and are in the United States, don’t hesitate to enter.

Meanwhile our Ask The Experts session was a big success, so thank you to everyone who submitted questions for OCZ to answer. In case you missed it, here’s a recap of the questions that were answered.


What, if anything, has Toshiba done with the Indilinx IP/engineers? Is there a Barefoot successor (i.e. a high performance controller) in the works?

Back when we acquired Indilinx it really was somewhat early days in terms of manufacturers developing their own controllers and firmware, not only to push the envelope in performance but also to take advantage of the latest NAND nodes to drive both capacity and cost improvements. I’m sure you and all the readers here remember how expensive SSDs were per GB just a few years ago, and by bringing the Indilinx team and IP in house we were able to drive development that positively impacted performance and cost, making SSDs more accessible to consumers.

Today the Indilinx team and IP have been completely integrated into our R&D team, which also includes the team located in Oxford, UK, originally acquired from PLX. This combined team is the group responsible for creating the Barefoot 3 controller, which is still being used today in our Vector 180 SATA SSD. Over the years the team has continued to develop and enhance the firmware as well, which is why we were able to leverage the BF3 controller for so many different product lines and generations. These engineers are a big part of why we became part of Toshiba over two years ago, and they are now working on next generation controllers and firmware for both enterprise and client SSDs. As part of Toshiba the team now realizes two major benefits: first, early and complete access to next generation NAND from Toshiba, which enables optimization for future NAND technologies; and second, additional resources. For example, all those hardware and firmware engineers here in California are now part of our still growing Toshiba America Storage Research & Design Center (SRDC), based in Folsom, CA, and we are continuing to invest in R&D. While I can’t go into too much detail on what they are working on, I can say that the team is indeed developing next-gen controllers that will be utilized in the Toshiba-OCZ brand for consumers.

When will the VX500 be available in America and in Europe? Will it replace the Trion 150 in pricing anytime soon?

We just launched the VX500 Series, but it is available now in North America and will be available in Europe this week as well, as stock makes its way to our distributor and reseller partners.

Here in the US you can find the VX500 at e-tailers now, for example:

Newegg: http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=OCZ+VX500&N=-1&isNodeId=1

Amazon: https://www.amazon.com/Toshiba-OCZ-VX500-256GB-VX500-25SAT3-256G/dp/B01LYVB7F8/ref=sr_1_4?ie=UTF8&qid=1475770873&sr=8-4&keywords=OCZ+VX500

Though many SATA SSDs are pushing the value envelope with TLC and 3D NAND (ours included), we still saw the need for a mid-range SSD that provides strong performance and endurance for mainstream users. This is why we engineered the VX500 Series with higher endurance MLC NAND and optimized the firmware for write-intensive environments. Because the VX500 utilizes MLC NAND it is going to be more performance and endurance oriented than the TR and TL Series SSDs, which both make use of TLC NAND, and we also bundle it with data migration software. The extra endurance enables us to provide a 5-year warranty on the VX line as well. So while we will try to make the VX500 as accessible as possible to consumers, the build costs are higher with MLC, and users can still expect the TR and TL lines to be the more aggressively priced options for value oriented consumers.
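For readers who like to translate endurance ratings into daily write budgets, here is a quick sketch of the usual math; the TBW figure and the `dwpd` helper are illustrative assumptions, not official VX500 specifications.

```python
# Hypothetical endurance math: drive writes per day (DWPD) implied by
# a total-bytes-written (TBW) rating over a warranty period.
# The 185 TBW figure below is illustrative, not an official VX500 spec.
def dwpd(tbw_tb, capacity_gb, warranty_years=5):
    total_writes_gb = tbw_tb * 1000          # TB -> GB
    days = warranty_years * 365
    return total_writes_gb / (capacity_gb * days)

# A 512GB drive rated for 185 TBW over a 5-year warranty:
print(round(dwpd(185, 512), 2))  # roughly 0.2 full drive writes per day
```

In other words, a mainstream rating like this still lets a typical user fill the drive every few days for the full warranty period.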

What do you guys see as the most important aspect to focus on for making a successful SSD: capacity, performance, or price?

The importance of all 3 of these factors has shifted over the years with things like controller limitations, SATA 3 saturation, and manufacturing technology affecting each of them. Obviously all 3 of those need to be in balance to have a well-rounded product, but every product focuses on 1-2 of those aspects at the expense of the other(s). Certainly with OCZ and Toshiba now being best buds, you guys have more ability than ever to be able to pick and choose what the tech specs of your drives are and really push the envelope in any of those 3 areas and so it’ll be exciting to watch OCZ products over the next few years!

In addition to those 3 areas, reliability and quality are also areas we focus on, to ensure that customers have a great experience when using a Toshiba-OCZ product. We see price as having a direct correlation to capacity as die capacities continue to increase. As you stated, with the SATA bus saturating, in the SATA space we are more focused on bringing affordable SSD products to market and migrating customers from conventional hard drives over to solid state technology. We are currently more focused on pushing the performance envelope on NVMe platforms, as there is much more headroom there.

What do you see in the near future (next 1-5 years) for NVMe/ PCIe/ SATA and other interfaces? What kind of longevity technologies are expected in the near future (next 1-5 years)?

The initial concept work on NVMe began back in 2009, with the 1.0 specification released in 2011. Although NVMe drives were piloted in some OEM applications, we really didn’t see consumer adoption begin until last year, and it didn’t really take off until this year. So yeah, propagation of new standards takes quite a bit of time (see: DDR3/4, USB-C, Thunderbolt, etc.). In the immediate future (1-5 years) NVMe will still be ramping up and hitting its stride. Beyond that, NVMe was specifically architected for generic non-volatile storage, not just NAND. So the working group tried to future proof it for whatever next generation storage technologies come down the line, in the hopes that we can continue to use it rather than have to start this whole standards body process from scratch again.

So I see NVMe taking over as the dominant interface going forward, with minor iterations on the specification. The other bump will be to PCIe gen. 4, but still running NVMe.

What do you consider to be your target markets? Do you have plans to offer higher capacity PCIe drives for workstation type applications?

Our target markets have changed over the years, but I’m pleased that we are now getting back to our enthusiast consumer roots. Today the target market for the Toshiba-OCZ brand is focused on end-users, especially performance oriented users. While we have previously serviced OEM and enterprise markets, those customers are now being served by the core Toshiba side of our business. The Toshiba-OCZ branded products are being developed specifically with end-user platforms, applications, and workloads in mind. For us, workstation customers absolutely reside in our enthusiast and power user target market.

In terms of future PCIe solutions, we not only plan to offer higher capacity drives for workstation type applications, we are already developing them. Our RD400 M.2 PCIe/NVMe SSD was among the first 1TB M.2 drives readily available to end-users, and the RD400 found a lot of homes in the systems of both gamers and workstation users. With the latest Toshiba NAND we will be driving density to 2TB and beyond with the next generation of PCIe SSDs.

Are there plans to support TCG OPAL + IEEE 1667 in the future? It is marketed as “eDrive” (Encrypted Drive) by Microsoft for BitLocker and is also supported in both Linux and Windows by open source tools provided by the Drive Trust Alliance: https://www.drivetrust.com/apps/

The simple answer is yes; adding eDrive support is something we are looking into for our SSDs, and by extension our software user tools.

What is the number one problem with failed SSDs? Is it NAND wear, or controller/firmware issues that cause the drive to lock itself into a panic mode state, no longer accepting any commands from the BIOS to identify the drive?

I will say that I have rarely seen SSDs fail due to NAND wear. This doesn’t mean there aren’t any. Thankfully, the percentage of people who would put a client drive in a high-end, high traffic server is minuscule. The latter scenario you describe is something we have seen. As a result, we’ve worked with engineering and end users to better understand what occurred at the time of failure so that we can make corrections where appropriate.

What does OCZ see as the future/successor of current 2.5” SATA drives? Their high capacity is nice, but SATA III less so. M.2/PCIe drives are fast, but their capacity is limited. Is it U.2? Or do we stick with SATA 3 for longer?

I think you actually answered your own question; the only value 2.5″ SATA III drives currently provide comes from capacity, cost, and socket adoption. So once NAND density can provide affordable high capacity in an M.2 form factor and more motherboards in consumers’ hands have M.2 slots, 2.5″ SATA will start to fade rapidly. That point is probably sooner than people realize, within the next few years.

In terms of U.2 it has its applications, but will be relegated to more of a high end niche while M.2 dominates the market. Both form factors currently utilize the same interface, so there’s no base bandwidth difference. U.2’s main advantage is surface area, which provides benefits to capacity and thermal management. The disadvantages compared to M.2 are cost and cabling. As noted above, NAND density will hit a point where an M.2 can provide sufficient capacity in that form factor.
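To make the “same interface” point concrete: both form factors discussed here ride PCIe 3.0 x4 for NVMe, so the raw link ceiling is identical regardless of connector. A back-of-the-envelope calculation (the constants are standard PCIe 3.0 figures, not drive-specific numbers):

```python
# Theoretical PCIe 3.0 x4 link bandwidth, shared by M.2 and U.2 NVMe drives.
GT_PER_S = 8.0          # PCIe 3.0 transfer rate per lane (GT/s)
ENCODING = 128 / 130    # 128b/130b line encoding overhead
LANES = 4

gb_per_s = GT_PER_S * ENCODING * LANES / 8  # bits -> bytes
print(round(gb_per_s, 2))  # ~3.94 GB/s either way
```

So any real-world performance gap between the two comes from thermals and drive design, not from the connector itself.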

I know, I know, you can never have enough capacity, but from what we’ve seen there’s effectively a price ceiling consumers are willing to pay for their storage. OCZ actually had one of the first 1TB drives on the market back in 2011. It cost around $1,300 and we sold, well, let’s say not as many as we had hoped. The 1TB segment didn’t really take off until 2 years later, when Crucial released a 1TB M500 for $500, and then it started flying. Of course there are always going to be the enthusiasts for whom money is no object, give me the best possible, but in general it seems customers don’t want to spend more on their storage than on their graphics cards. So keeping that in mind, as long as we can fit enough capacity under that price ceiling in an M.2, the value proposition for a U.2 starts to get less appealing.
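The price history above works out to a stark $/GB difference; a trivial sketch using the figures mentioned (treating 1 TB as 1000 GB for simplicity):

```python
# Rough $/GB comparison using the launch prices quoted above.
drives = {
    "OCZ 1TB (2011)": 1300,
    "Crucial M500 1TB (2013)": 500,
}

for name, price in drives.items():
    print(f"{name}: ${price / 1000:.2f}/GB")
# OCZ 1TB (2011): $1.30/GB
# Crucial M500 1TB (2013): $0.50/GB
```

Dropping from $1.30/GB to $0.50/GB is what moved 1TB drives under that consumer price ceiling.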

Now thermals are the other advantage U.2 has: it’s definitely easier to cool that bigger surface area. And better cooling can improve performance, either by operating in a higher power envelope or by limiting the instances of thermal throttling. But while instances of thermal throttling on M.2 can be demonstrated in bench testing, it happens much less frequently in real world workloads. (Protip: once motherboard manufacturers realized putting the M.2 sockets directly under the main GPU was not the best idea, things got a LOT better.) So we can eke out a bit more performance on a U.2, but it’s going to cost a non-trivial amount more than the M.2: added connector cost, cables, heatsink, bigger PCB, more NAND packages, etc. Making a number up, let’s say $50 more for a drive that thermal throttles less frequently but is otherwise the same capacity and performance as an M.2.

The last factor is that despite Intel’s best efforts we still don’t have wide adoption of native U.2 sockets on consumer motherboards, so you’re plugging into a U.2 > M.2 adapter anyway. So I definitely think there’s a potential enthusiast niche for a high end U.2 and we may still productize one eventually, but it would be a much smaller market segment than M.2. Also, most of the reviewers and system builders I’ve talked to much prefer M.2. It’s a cleaner look, there’s less cabling to run, and less use of the drive bays means either better small form factor PCs or better airflow and more watercooling room.

I’d love to hear what OCZ have as their silver bullet against Samsung SSD’s. Samsung just look like a behemoth waiting to take over the entirety of the SSD market. Say it ain’t so, OCZ! Tell us what you’ve got that’ll make yours the SSD to buy.

Long answer short, there really isn’t a silver bullet per se. We do respect the competition, and it’s certainly a good thing for consumers that there are multiple providers in the space. While there is no easy answer here we do have a strategy, and at the highest level that is to focus on the real needs of end-user consumers. We often get the question: after OCZ was acquired by Toshiba, why did we keep the “OCZ” brand? It was all a matter of focus. Today OCZ is a sub-brand of Toshiba, and the team was not only kept intact but was given additional resources with which to improve quality and reliability, and to invest in developing next generation products specifically with performance oriented end users in mind. Our team has tried to stay nimble in this fast moving market, keeping that innovative DNA while being able to leverage all that being part of a fab has to offer.

An example of all of this coming together was the recent launch of our RD400. Rather than just launch an M.2 module, we wanted to service the complete spectrum of end users and offered it in both standalone module and AIC PCIe/NVMe versions. This gives end users the flexibility to adopt the product even if their platforms/desktops do not have an M.2 slot. It also enables end-users to leverage the product now, then later depopulate the drive from the AIC adapter and use it in a mobile platform. We thought about how we could develop products that were not only fast, but easy for our customers to use both now and in the future.

Combine all that with the fact that Toshiba makes some of the highest quality NAND in the world, and that we are able to develop products with complete access to those future storage technologies, and we believe we can create compelling solutions. We were born from the consumer market, and while we know that the competition is fierce, we also believe that if we develop products that really address the needs of our valued end-users such as yourself, then we can compete.

Any 2TB or 4TB SSDs in the pipeline?

Yes! Our pipeline has some very exciting products, and among them are higher density offerings!

So far it seems like drive manufacturers have kept bringing down SSD prices by increasingly utilizing TLC NAND. Now that TLC is so prevalent, what’s the next shift to bring down prices further?

Larger die capacities will continue to drive $/GB down, and, as already seen in some of our products, going DRAM-less also helps reduce drive cost, especially for lower capacity drives (120GB-240GB). System-on-chip designs, where the controller and flash are integrated into a single package, also reduce silicon cost as well as PCB size and cost.
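A rough illustration of why larger dies cut cost (all numbers hypothetical): doubling die capacity shrinks the number of NAND dies, and therefore packages, needed for a given drive, which in turn shrinks the PCB and assembly cost.

```python
# Illustrative sketch: fewer, larger dies mean fewer packages and a
# smaller, cheaper PCB. Die capacities here are made-up examples.
def dies_needed(drive_gb, die_gb):
    """Number of NAND dies required for a given drive capacity."""
    return -(-drive_gb // die_gb)  # ceiling division

# A 480GB drive built from 32GB dies vs. 128GB dies:
print(dies_needed(480, 32))   # 15 dies
print(dies_needed(480, 128))  # 4 dies
```

The same effect is why the $/GB benefit of bigger dies shows up fastest at lower drive capacities.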

What are your thoughts on 3D XPoint? Do you guys see this as competition to your business or do you see it as a separate category?

We’re still waiting to see how the 3D XPoint stuff turns out, but based on the presentations Intel has provided thus far it seems to be positioned as a separate tier of storage between DRAM and SSD, leaning more towards expanding the DRAM pool. I believe it’ll more directly compete with NVDIMM technologies in the enterprise space. But I guess we’ll find out!

How do you see PCs in 5 years’ time? Dominated by Laptops or some other (possibly new) form factor? The distinction between RAM and Storage becoming blurred or completely removed? What’s the vision you’re building towards?

I think 5 years is still a very short amount of time (see my answer on how it took longer than 5 years just to get NVMe out), so by then, more of the same. Laptops and convertibles (I’m currently answering all these off my Surface) will dominate the workplace and a lot of homes. Desktop PCs will stick around for content creation and gaming (PC masterrace, yo), but laptops/convertibles and tablets are starting to get powerful enough to handle the bulk of what people traditionally used PCs for (my mom uses her iPad more than anything else).

I think the rise of mobile is interesting and we’ve invested in BGA SSDs for that reason, but phones as primary computing devices are still a ways off. We’ll need to see how XPoint and other future storage technologies shake out, but it’s doubtful DRAM ever goes away completely. It may just shrink and get integrated into SoCs so it becomes less visible, but there’s still a role for it.

For years, conventional wisdom has suggested that SSDs are not suitable for long term storage of data. With SSDs increasing in size and declining in price, consumers will certainly not think twice about buying SSDs for long term storage. Do the warnings concerning the integrity of data in untouched files over the period of years still hold for modern SSDs?

Yes, limited data retention on unpowered SSDs is an unfortunate side effect of how NAND is architected. Kristian wrote about it here when one of the scare articles came out: http://www.anandtech.com/show/9248/the-truth-about-ssd-data-retention

The JEDEC standard for client SSDs requires drives to retain data unpowered for 1 full year once their endurance rating has been exhausted. This retention is affected by temperature, and it was that realization that prompted the scare a year ago. As a manufacturer we always insist on the importance of maintaining proper backups; it’s just a necessary fact of life.
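The temperature effect mentioned above is commonly modeled with an Arrhenius-style acceleration factor: the warmer the unpowered drive, the faster retention degrades. This is a generic sketch with an assumed activation energy of 1.1 eV for illustration; it is not an OCZ figure or a rating for any specific drive.

```python
import math

# Arrhenius acceleration factor: how much faster charge loss proceeds
# at a higher storage temperature relative to a baseline. The 1.1 eV
# activation energy is an assumed illustrative value.
BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=1.1):
    t_use = t_use_c + 273.15      # Celsius -> Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

# Storing a drive at 40C instead of 25C shortens its effective
# retention window by roughly this factor:
print(round(acceleration_factor(25, 40), 1))
```

The exponential form is why a seemingly modest rise in storage temperature can cut the retention window by several times, which is exactly what drove the scare coverage.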

But to answer your long term storage question: no, I would not recommend SSDs for long term storage. What we see is storage media shifting down the traditional hierarchy, with SSDs taking over the primary computing storage slot traditional hard drives occupied, and hard drives shifting into the long term storage slot where tape has been dominant.

For example, a few years ago the conventional wisdom was to have a small SSD as your OS drive and a larger HDD for media and other data storage. Today my personal setup is an NVMe drive as my OS (*cough*OCZ RD400*cough*), with a 1TB TLC drive as my Steam folder (something, something, TR150), and then all my large files sit on a 4TB NAS. I then have both cloud and external drive backups. So data retention isn’t something I ever worry about in my personal life because if I ever leave my PC off for over a year I’m probably dead (and want my data/browser history gone!) and most of my files are on my NAS and backups anyways.

Will we see NVMe drives eventually reach price parity with current SATA/AHCI drives? Does the increasing capacity of SSDs make power loss protection any more important?

Absolutely, yes, NVMe drives will hit price and capacity parity with current SATA drives at which point SATA will fade away. This will probably happen sooner than people think, within a few years.

In a pure technical sense, as the capacity of an SSD increases so does the size of the mapping table, so it’s logical to think power loss protection becomes more important. There are two approaches to handling unexpected power loss: increasing the hold-up time the drive has to clean up and safely shut down by adding capacitors, or improving the robustness of the firmware and decreasing the amount of work that needs to be done during that hold-up time, reducing the amount of additional capacitance required. I unfortunately can’t go into too much technical detail here as this strays into special sauce territory, but think of similar work and techniques done at the filesystem level, like journaling, write verification, and increasingly sophisticated error correction. We tackle the problem from both ends, so the presence, or lack thereof, of large power loss protection capacitors is not necessarily indicative of how robustly a drive manages unexpected power loss. In fact, if you check your SMART data you’re likely to see some number of unexpected power losses recorded that you didn’t even notice.
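If you want to check that SMART counter yourself, a tool like smartctl (from smartmontools) dumps the attribute table; the exact attribute name for power loss events varies by vendor. A minimal parsing sketch over sample output (the sample line and attribute name are illustrative, not from a real drive):

```python
import re

# Sample text in the style of `smartctl -A` output; the attribute
# name and raw value here are illustrative.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
174 Unexpect_Power_Loss_Ct  0x0032   100   100   000    Old_age   27
"""

def unexpected_power_losses(smart_text):
    """Return the raw count from the first power-loss attribute, or None."""
    for line in smart_text.splitlines():
        if re.search(r"power_loss", line, re.IGNORECASE):
            return int(line.split()[-1])
    return None

print(unexpected_power_losses(SAMPLE))  # 27
```

A nonzero count on a desktop drive usually just reflects hard resets and power cuts the user never noticed, which is the point being made above.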

Funnily enough, the trend in the datacenter space is not to build additional power loss protection in at the drive level, but to push the cost down as much as possible and build in redundancy at higher levels. Relevant xkcd: https://xkcd.com/1737/

How do you see your SSDs holding up in the event of a catastrophic, electronic destroying solar flare? Also, would you have any preservation suggestions to keep data contained on one of your SSDs intact so that it can be put into a time capsule and eventually figured out when rediscovered by a different hypothetical civilization centuries later? And lastly, are you working on SSD solutions to replace eMMC in tablets?

Heh, interesting question. I can’t honestly say we’ve generated a catastrophic solar flare in the test lab, but as long as the drives are not physically fried, it would depend on how many bits got flipped. The drives are designed with multiple error correction checkpoints in the datapath specifically to catch silent errors from bitflips, so I guess it depends on your definition of “catastrophic”. 🙂

In terms of the long term storage I touched on this somewhat here: https://forums.anandtech.com/thread…iveaway-with-ocz.2487888/page-2#post-38507023

But no, due to how NAND functions I don’t believe SSDs in their current state are suitable for crazy long term storage like time capsules. In the timeframes you’re suggesting even HDDs will have their issues. There are optical discs explicitly designed for archival purposes, but I’ll confess to not being an expert on storage at the millennia scale, sorry. Maybe try this? http://www.popularmechanics.com/tec…hard-drive-the-first-immortal-storage-medium/

And yes, earlier this year at CES we demoed an NVMe BGA SSD. These are perfect for mobile applications and much, much faster than eMMC or UFS.

http://www.anandtech.com/show/10546/toshiba-announces-new-bga-ssds-using-3d-tlc-nand

With a refresh of the Vertex line (VX) will we see the Vector line refreshed soon as well to replace the Vector 180?

We tagged the VX500 with an homage to the Vertex name as it slots in quite well as a mainstream product. We envision the Vector brand as a higher end, enthusiast product. The rub is that I believe SATA is shifting towards a value market; it’s hard to get excited about squeezing a few more IOPS out of an enthusiast SATA product when we’re bus constrained and massively faster NVMe drives are already here. Why pay a price premium for an awesome SATA drive that nets you a few percentage points of performance when you can buy an NVMe drive for not a whole lot more and get 5x the performance? We’re already seeing the NVMe market split into both high end and cheaper NVMe, so I feel this is the segment that’s going to get a lot more exciting, while SATA becomes cheap bulk storage.

The RD400 was branded to recall the RevoDrive line that has been our PCIe hallmark, so that leaves the Vector brand in an odd position. That said, Vector holds a special place in my heart, as the original Vector was the first product I solo managed and launched, so I’m gonna have to find some way to bring it back.

Reminder: OCZ SSD Giveaway & Ask The Experts Recap

Reminder: OCZ SSD Giveaway & Ask The Experts Recap

As a final reminder to anyone who has yet to enter, our OCZ SSD giveaway closes on later this week on Friday. We’ll be giving away 3 SSDs altogether: 512GB and 1TB versions of the recently launched OCZ VX500 SATA SSD, and a 512GB OCZ RD400 NVMe M.2 SSD. So if you like free stuff and are in the United States, don’t hesitate to enter.

Meanwhile our Ask The Experts session was a big success, so thank you to everyone who submitted questions for OCZ to answer. In case you missed it, here’s a recap of the questions that were answered.


What, if anything, has Toshiba done with the Indilinx IP/engineers? Is there a Barefoot successor (i.e. a high performance controller) in the works?

Back when we acquired Indilinx it really was somewhat early days in terms of manufacturers developing their own controllers and firmware to not only push the envelope in performance but also to try and take advantage of the latest NAND nodes when it came to driving both capacity and cost improvement. I’m sure you and all the readers here remember how expensive per GB SSDs were just a few years ago, and by bringing the Inidlinx team and IP in house we were able to drive development that positively impacted performance and cost to make SSDs more accessible to consumers.

Today the Indilinx team and IP have been completely integrated into our R&D team which also includes the team located in Oxford UK, which was originally acquired from PLX. This combined team is the group that was responsible for creating the Barefoot 3 controller, that is still being used today in our Vector 180 SATA SSD. Over the years the team has continued to develop and enhance the firmware as well, which is why we were able to leverage the BF3 controller for so many different product lines and generations. These engineers are a big part of why we also became part of Toshiba over two years ago, and are now working on next generation controllers and firmware for both enterprise and client SSDs. As part of Toshiba the team now also realizes two major benefits, first the early and complete access to next generation NAND from Toshiba which enables for optimization for future NAND technologies, and second the additional resources. For example all those hardware and firmware engineers here in California are now part of our still growing Toshiba America Storage Research & Design Center (SRDC) which is based in Folsom, CA. and we are continuing to invest in R&D. While I can’t go into too much detail on what they are working on I can say that the team is indeed developing next-gen controllers that will be utilized in the Toshiba-OCZ brand for consumers.

When VX500 will be available in America and in Europe? Will it replace Trion 150 in pricing anytime soon?

We just launched the VX500 Series but it is available now in North America and will be available in Europe this week as well as stock makes its way to our distributor and reseller partners.

Here in the US you can find the VX500 at e-tailers now, for example:

Newegg: http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=OCZ+VX500&N=-1&isNodeId=1

Amazon: https://www.amazon.com/Toshiba-OCZ-VX500-256GB-VX500-25SAT3-256G/dp/B01LYVB7F8/ref=sr_1_4?ie=UTF8&qid=1475770873&sr=8-4&keywords=OCZ+VX500

Though many SATA SSDs are pushing the value envelope and using TLC and 3D NAND (ours included) we still saw the need for a mid-range SSD that provides strong performance and endurance for mainstream users. This is why we engineered the VX500 Series with higher endurance MLC NAND, and optimized the firmware for write-intensive environments. Because the VX500 utilizes MLC NAND it is going to be more performance and endurance oriented than the TR and TL Series SSDs, which both make use of TLC NAND, and we also bundle it with data migration software. The extra endurance enables us to provide a 5 year warranty on the VX line as well. So while we will try and make the VX500 as accessible as possible to consumers the build costs are higher with MLC and users can still expect the TR and TL lines to be the more aggressively priced lines for more value oriented consumers.

What do you guys see as the most important aspect to focus on for making a successful SSD: Capacity, Performance, Price

The importance of all 3 of these factors has shifted over the years with things like controller limitations, SATA 3 saturation, and manufacturing technology affecting each of them. Obviously all 3 of those need to be in balance to have a well-rounded product, but every product focuses on 1-2 of those aspects at the expense of the other(s). Certainly with OCZ and Toshiba now being best buds, you guys have more ability than ever to be able to pick and choose what the tech specs of your drives are and really push the envelope in any of those 3 areas and so it’ll be exciting to watch OCZ products over the next few years!

In addition to those 3 areas, reliability and quality are also areas we focus on, to ensure that the customers have a great experience when using a Toshiba-OCZ product. We really see price to have a direct correlation to capacity as the die capacity continue to increase. As you’d stated, with SATA bus saturating, for the SATA space we are more focused on bringing affordable SSD product to the market, migrating customers from conventional hard drive over to solid state technology. We are currently more focused on pushing the performance envelop on the NVMe platforms as there is much more head room.

What do you see in the near future (next 1-5 years) for NVMe/ PCIe/ SATA and other interfaces? What kind of longevity technologies are expected in the near future (next 1-5 years)?

The initial concept work on NVMe began back in 2009, with the 1.0 specification released in 2011. Although NVMe drives were piloted in some OEM applications, we really didn’t see consumer adoption begin till last year and not really take off until this year. So yeah, propagation of new standards takes quite a bit of time (see: DDR3/4, USB-C, Thunderbolt, etc.) In the immediate future (1-5 years) NVMe will still be ramping up and hitting its stride. In terms of the next 5 years, NVMe was specifically architected for generic non-volatile storage, not just NAND. So the working group tried to future proof it for whatever next generation storage technologies come down the line in the hopes that we can continue to use it rather than have to start this whole standards body process from scratch again.

So I see NVMe taking over as the dominant interface going forward, with minor iterations on the specification. The other bump will be to PCIe gen. 4, but still running NVMe.

What do you consider to be your target markets? Do you have plans to offer higher capacity PCIe drives for workstation type applications?

Our target markets have changed over the years, but I’m pleased that we are now getting back to our enthusiast consumer roots. Today the target market for the Toshiba-OCZ brand is focused on end-users, and especially performance oriented users. While we have previously serviced OEM and enterprise markets those customers are being driven by the core Toshiba side of our business. The Toshiba-OCZ branded products are being developed specifically with end-user platforms, applications, and workloads in mind. For us Workstation customers absolutely reside in our enthusiast and power user target market.

In terms of future PCIe solutions we not only plan to offer higher capacity drives for workstation type applications we are already developing them. Our RD400 M.2 PCIe/NVMe SSD was among the first 1TB M.2 drives that was readily available for end-users, and the RD400 found a lot of homes in the systems of both gamers and workstation users. With the latest Toshiba NAND we will be driving the density 2TB and beyond with the next generation PCIe SSDs.

Are there plans to support TCG OPAL + IEEE 1667 in the future? It is marketed as “eDrive” (Encrypted Drive) by Microsoft for Bitlocker and also supported in both Linux and Windows by open source tools provided the Drive Trust Alliance https://www.drivetrust.com/apps/

Simple answer is yes; adding e-Drive support is something we are looking into for our SSDs, and by extension our software user tools.

What is the number one problem with failed SSDs? Does it have to do with NAND wear, or is it controller/firmware issues that cause the drive to lock itself into a panic-mode state, no longer accepting any commands from the BIOS to identify the drive?

I will say that I have rarely seen SSDs fail due to NAND wear. This doesn’t mean there aren’t any, but thankfully the percentage of people that would put a client drive in a high-end, high-traffic server is minuscule. The latter scenario you described is something we have seen. As a result, we’ve worked with engineering and end users to better understand what occurred at the time of failure so that we can make corrections where appropriate.

What does OCZ see as the future/successor of current 2.5” SATA drives? Their high capacity is nice, but SATA III less so. M.2/PCIe drives are fast, but their capacity is limited. Is it U.2? Or do we stick with SATA 3 for longer?

I think you actually answered your own question: the only value 2.5″ SATA III drives currently provide is capacity, cost, and socket adoption. So once NAND density can provide affordable high capacity in an M.2 form factor and more motherboards in consumers’ hands have M.2 slots, 2.5″ SATA will start to fade rapidly. That point is probably sooner than people realize, within the next few years.

In terms of U.2 it has its applications, but will be relegated to more of a high end niche while M.2 dominates the market. Both form factors currently utilize the same interface, so there’s no base bandwidth difference. U.2’s main advantage is surface area, which provides benefits to capacity and thermal management. The disadvantages compared to M.2 are cost and cabling. As noted above, NAND density will hit a point where an M.2 can provide sufficient capacity in that form factor.

I know, I know, you can never have enough capacity, but from what we’ve seen there’s effectively a price ceiling consumers are willing to pay for their storage. OCZ actually had one of the first 1TB drives on the market back in 2011. It cost around $1,300 and we sold, well, let’s say not as many as we had hoped. The 1TB segment didn’t really take off until two years later, when Crucial released a 1TB M500 for $500; then it started flying. Of course there are always going to be the enthusiasts for whom money is no object and who just want the best possible, but in general it seems customers don’t want to spend more on their storage than their graphics cards. So keeping that in mind, as long as we can fit enough capacity under that price ceiling in an M.2, the value proposition for a U.2 starts to get less appealing.

Now, thermals are the other advantage U.2 has: it’s definitely easier to cool that bigger surface area. And better cooling can improve performance, either by operating in a higher power envelope or by limiting the instances of thermal throttling. But while instances of thermal throttling on M.2 can be demonstrated in bench testing, it happens much less frequently in real-world workloads. (Protip: once motherboard manufacturers realized putting the M.2 sockets directly under the main GPU was not the best idea, things got a LOT better.) So we can eke out a bit more performance on a U.2, but it’s going to cost a non-trivial amount more than the M.2: added connector cost, cables, heatsink, bigger PCB, more NAND packages, etc. Making a number up, let’s say $50 more for a drive that thermal throttles less frequently, but otherwise has the same capacity and performance as an M.2.

The last factor is that despite Intel’s best efforts we still don’t have wide adoption of native U.2 sockets on consumer motherboards, so you’re plugging into a U.2-to-M.2 adapter anyways. So I definitely think there’s a potential enthusiast niche for a high-end U.2 and we may still productize one eventually, but it would be a much smaller market segment than M.2. Also, most of the reviewers and system builders I’ve talked to much prefer M.2. It’s a cleaner look, there’s less cabling to run, and less use of the drive bays means either better small-form-factor PCs or better airflow and watercooling room.

I’d love to hear what OCZ has as their silver bullet against Samsung SSDs. Samsung just looks like a behemoth waiting to take over the entirety of the SSD market. Say it ain’t so, OCZ! Tell us what you’ve got that’ll make yours the SSD to buy.

Long answer short, there really isn’t a silver bullet per se. We do respect the competition, and it’s certainly a good thing for consumers that there are multiple providers in the space. While there is no easy answer here, we do have a strategy, and at the highest level that is to really focus on the real needs of end-user consumers. We often get the question: after OCZ was acquired by Toshiba, why did we keep the “OCZ” brand? It was all a matter of focus. Today OCZ is a sub-brand of Toshiba, and the team was not only kept intact but given additional resources with which to improve quality and reliability and to invest in developing next-generation products specifically with performance-oriented end users in mind. Our team has tried to stay nimble in this fast-moving market, keeping that innovative DNA, while being able to leverage everything that being part of a fab has to offer.

An example of all of this coming together was the recent launch of our RD400. Rather than just launch an M.2 module, we wanted to specifically service the complete spectrum of end users and offered it in both standalone module and AIC PCIe/NVMe versions. This gives end users the flexibility to adopt the product even if their platforms/desktops do not have an M.2 slot. It also enables end users to leverage the product now, depopulate the drive from the AIC adapter, and use it in a mobile platform in the future. We thought about how we could develop products that were not only fast, but easy for our customers to use now and in the future.

Combine all that with the fact that Toshiba makes some of the highest quality NAND in the world, and that we are able to develop products with complete access to those future storage technologies, and we believe we can create compelling solutions. We were born from the consumer market, and while we know that the competition is fierce, we also believe that if we develop products that really address the needs of our valued end users such as yourself, then we can compete.

Any 2TB or 4TB SSDs in the pipeline?

Yes! Our pipeline has some very exciting products, and among them are higher density offerings!

So far it seems like drive manufacturers have kept bringing down SSD prices by increasingly utilizing TLC NAND. Now that TLC is so prevalent, what’s the next shift to bring down prices further?

Larger die capacities will continue to drive $/GB down, and as already seen in some of our products, going DRAM-less also helps reduce drive cost, especially for the lower capacity drives (120GB-240GB). System-on-chip designs, where the controller and flash are integrated into a single package, also reduce silicon cost, as well as reducing PCB size and cost.

What are your thoughts on 3D XPoint? Do you guys see this as competition to your business, or do you see it as a separate category?

We’re still waiting to see how the 3D XPoint stuff turns out, but based on the presentations Intel has provided thus far it seems to be positioned as a separate tier of storage between DRAM and SSD, leaning more towards expanding the DRAM pool. I believe it’ll more directly compete with NVDIMM technologies in the enterprise space. But I guess we’ll find out!

How do you see PCs in 5 years’ time? Dominated by Laptops or some other (possibly new) form factor? The distinction between RAM and Storage becoming blurred or completely removed? What’s the vision you’re building towards?

I think 5 years is still a very short amount of time (see my answer on how it took longer than 5 years just to get NVMe out), so by then, more of the same. Laptops and convertibles (I’m currently answering all these off my Surface) will dominate the workplace and a lot of homes. Desktop PCs will stick around for content creation and gaming (PC masterrace, yo), but laptops/convertibles and tablets are starting to get powerful enough to handle the bulk of what people traditionally used PCs for (my mom uses her iPad more than anything else).

I think the rise of mobile is interesting, and we’ve invested in BGA SSDs for that reason, but phones as primary computing devices are still a ways off. We’ll need to see how XPoint and other future storage technologies shake out, but it’s doubtful DRAM ever goes away completely. It may just shrink and get integrated into SoCs so it becomes less visible, but there’s still a role for it.

For years, conventional wisdom has suggested that SSDs are not suitable for long term storage of data. With SSDs increasing in size and declining in price, consumers will certainly not think twice about buying SSDs for long term storage. Do the warnings concerning the integrity of data in untouched files over the period of years still hold for modern SSDs?

Yes, data retention on unpowered SSDs is an unfortunate side effect of how NAND is architected. Kristian wrote about it some here when one of the scare articles came out: http://www.anandtech.com/show/9248/the-truth-about-ssd-data-retention

The JEDEC standard for client SSDs requires drives to retain data unpowered for one full year once the endurance rating has been exhausted. This retention period is affected by temperature, and it was that realization that prompted the scare a year ago. As a manufacturer we always insist on the importance of maintaining proper backups; it’s just a necessary fact of life.
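To put a rough feel on the temperature effect, here’s a quick sketch of the Arrhenius acceleration model that the JEDEC retention methodology is built on. The 1.1 eV activation energy is a commonly cited figure for NAND charge retention, not an OCZ-published number, so treat the results as illustrative only:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K
EA_EV = 1.1     # commonly cited NAND retention activation energy (assumption)

def retention_acceleration(t_baseline_c: float, t_storage_c: float) -> float:
    """Arrhenius acceleration factor: how much faster stored charge leaks
    at t_storage_c relative to t_baseline_c (temperatures in Celsius)."""
    t1 = t_baseline_c + 273.15
    t2 = t_storage_c + 273.15
    return math.exp((EA_EV / K_B) * (1.0 / t1 - 1.0 / t2))

# A drive spec'd for a year of unpowered retention at 30C gives up that
# margin quickly if stored somewhere hot, e.g. a 55C attic:
af = retention_acceleration(30.0, 55.0)
retention_months = 12.0 / af
```

The direction of the result is the whole point: cooler storage stretches retention dramatically, hotter storage shrinks it, which is exactly why the JEDEC figures are specified at a fixed temperature.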

But to answer your long-term storage question: no, I would not recommend SSDs for long-term storage. What we see is storage media shifting down the traditional hierarchy, with SSDs taking over the main computing storage slot traditional hard drives occupied, and hard drives shifting into the long-term storage slot where tape has been dominant.

For example, a few years ago the conventional wisdom was to have a small SSD as your OS drive and a larger HDD for media and other data storage. Today my personal setup is an NVMe drive as my OS (*cough*OCZ RD400*cough*), with a 1TB TLC drive as my Steam folder (something, something, TR150), and then all my large files sit on a 4TB NAS. I then have both cloud and external drive backups. So data retention isn’t something I ever worry about in my personal life because if I ever leave my PC off for over a year I’m probably dead (and want my data/browser history gone!) and most of my files are on my NAS and backups anyways.

Will we see NVMe drives eventually reach price parity with current SATA/AHCI drives? Does the increasing capacity of SSDs make power loss protection any more important?

Absolutely, yes, NVMe drives will hit price and capacity parity with current SATA drives at which point SATA will fade away. This will probably happen sooner than people think, within a few years.

In a pure technical sense, as the capacity of an SSD increases so does the size of the mapping table, so it’s logical to think power loss protection becomes more important. There are two approaches to handling unexpected power loss: increasing the hold-up time the drive has to clean up and safely shut down by adding capacitors, or improving the robustness of the firmware and decreasing the amount of work that needs to be done during this hold-up time, reducing the amount of additional capacitance required. I unfortunately can’t go into too much technical detail here as this strays into special sauce territory, but think of similar work and techniques done at the filesystem level, like journaling, write verification, and increasingly sophisticated error correction. We tackle the problem from both ends, so the presence, or lack thereof, of large power loss protection capacitors is not necessarily indicative of how robustly a drive manages unexpected power loss. In fact, if you check your SMART data you’re likely to see some number of unexpected power losses recorded that you didn’t even notice.
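For the curious, `smartctl` (from the smartmontools package) is the usual way to read that counter. Attribute names vary by vendor and firmware, so the sketch below just scans a dump for a few common labels; the sample output is made up for illustration and is not from any specific drive:

```python
# Attribute names vary by vendor/firmware; these are common labels
# (assumption: your particular drive may use a different one).
POWER_LOSS_LABELS = (
    "Unexpect_Power_Loss_Ct",
    "Unexpected_Power_Loss_Count",
    "Unsafe_Shutdown_Count",
    "Power-Off_Retract_Count",
)

def count_unexpected_power_losses(smartctl_output: str):
    """Scan `smartctl -A` output for an unexpected-power-loss attribute
    and return its raw value, or None if the drive doesn't report one."""
    for line in smartctl_output.splitlines():
        if any(label in line for label in POWER_LOSS_LABELS):
            fields = line.split()
            if fields and fields[-1].isdigit():
                return int(fields[-1])
    return None

# Illustrative sample dump (not real drive output):
sample = """
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  9 Power_On_Hours          0x0032   100   100   000    Old_age   4412
174 Unexpect_Power_Loss_Ct  0x0030   100   100   000    Old_age   7
"""
losses = count_unexpected_power_losses(sample)  # 7 in this sample
```

On a real system you would feed it the output of `smartctl -A /dev/sdX` (or `smartctl -a` for NVMe drives, where the value appears as "Unsafe Shutdowns" in the health log).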

Funnily enough, the trend in the datacenter space is not to build in any additional power loss protection at the drive level, but to push the cost down as much as possible and build in redundancy at higher levels. Relevant xkcd: https://xkcd.com/1737/

How do you see your SSDs holding up in the event of a catastrophic, electronic destroying solar flare? Also, would you have any preservation suggestions to keep data contained on one of your SSDs intact so that it can be put into a time capsule and eventually figured out when rediscovered by a different hypothetical civilization centuries later? And lastly, are you working on SSD solutions to replace eMMC in tablets?

Heh, interesting question. I can’t honestly say we’ve generated a catastrophic solar flare in the test lab, but as long as the drives are not physically fried it would depend on how many bits got flipped. The drives are designed with multiple error correction checkpoints in the datapath specifically to catch silent errors from bitflips, so I guess it depends on your definition of “catastrophic”. 🙂
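The on-drive checkpoints are proprietary hardware ECC, but the detection idea can be shown in miniature with an ordinary checksum; this is a toy illustration of catching a silent bitflip, not how the actual datapath is implemented:

```python
import zlib

def protect(data: bytes):
    """Pair data with a CRC32 checksum, loosely mimicking the integrity
    checkpoints an SSD datapath uses to catch silent corruption."""
    return data, zlib.crc32(data)

def verify(data: bytes, checksum: int) -> bool:
    """Recompute the checksum and compare; any single bitflip will fail."""
    return zlib.crc32(data) == checksum

payload, crc = protect(b"user data block")

# A stray bitflip (cosmic ray, solar flare, take your pick):
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]

ok_clean = verify(payload, crc)      # True: data is intact
ok_flipped = verify(corrupted, crc)  # False: the silent error is caught
```

A real drive goes further and actually corrects the flipped bits with its ECC; detection like this is just the first checkpoint.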

In terms of the long term storage I touched on this somewhat here: https://forums.anandtech.com/thread…iveaway-with-ocz.2487888/page-2#post-38507023

But no, due to how NAND functions I don’t believe SSDs in their current state are suitable for crazy long-term storage like time capsules. In the timeframes you’re suggesting, even HDDs will have their issues. There are optical discs explicitly designed for archival purposes, but I’ll confess to not being an expert on storage at the millennia scale, sorry. Maybe try this? http://www.popularmechanics.com/tec…hard-drive-the-first-immortal-storage-medium/

And yes, earlier this year at CES we demoed an NVMe BGA SSD. These are perfect for mobile applications and much, much faster than eMMC or UFS.

http://www.anandtech.com/show/10546/toshiba-announces-new-bga-ssds-using-3d-tlc-nand

With a refresh of the Vertex line (VX) will we see the Vector line refreshed soon as well to replace the Vector 180?

We tagged the VX500 with the homage to the Vertex name as it slots in quite well as a mainstream product. We envision the Vector brand as a higher-end, enthusiast product. The rub is that I believe SATA is shifting towards a value market; it’s hard to get excited about squeezing a few more IOPS out of an enthusiast SATA product when we’re bus constrained and massively faster NVMe drives are already here. Why pay a price premium for AN AWESOME SATA drive that nets you a few percentage points of performance when you can buy an NVMe drive for not a whole lot more and get 5x the performance? We’re already seeing the NVMe market split into both high-end and cheaper NVMe segments, so I feel this is the segment that’s going to get a lot more exciting, while SATA becomes cheap bulk storage.

The RD400 was branded to recall the RevoDrive line that’s been our PCIe hallmark, so that leaves the Vector brand in an odd position. That said, Vector holds a special place in my heart as the original Vector was the first product I solo managed and launched, so I’m gonna have to find some way to bring it back.

Western Digital Introduces WD Blue And WD Green SSDs

Western Digital Introduces WD Blue And WD Green SSDs

Five months ago, Western Digital completed its acquisition of SSD and NAND flash manufacturer SanDisk, adding consumer SSDs and more enterprise SSDs to their existing portfolio of hard drives and HGST enterprise SSDs. WD is now introducing two families of WD-branded consumer SSDs, each derived from existing SanDisk product lines.

The WD Blue SSD is based on the SanDisk X400 SATA SSD with minimal hardware changes but has modified firmware and different usable capacities. Like the X400, the WD Blue is available as either a 2.5″ or M.2 drive and uses SanDisk 15nm TLC NAND with the Marvell 88SS1074 controller. Our review of the 1TB WD Blue SSD shows that it improves on some of the X400’s weaknesses but sacrifices some performance on many tests, producing a drive that is not quite as fast overall. The MSRP for the WD Blue is about the same as current actual retail prices for the SanDisk X400, which positions it as a mid-range SATA SSD and puts it up against formidable competition from the new wave of drives using the more affordable 3D TLC NAND from Micron.

Western Digital WD Blue Specifications
Capacity 250GB 500GB 1000GB
Form Factor 2.5″ 7mm SATA or M.2 2280 SATA
Controller Marvell 88SS1074
NAND SanDisk 15nm TLC
Sequential Read 540 MB/s 545 MB/s 545 MB/s
Sequential Write 500 MB/s 525 MB/s 525 MB/s
4KB Random Read 97k IOPS 100k IOPS 100k IOPS
4KB Random Write 79k IOPS 80k IOPS 80k IOPS
Average Power 70 mW
Max Power 4.4 W
Encryption No
Endurance (TBW) 100 TB 200 TB 400 TB
Warranty Three years
MSRP $79.99 $139.99 $299.99

The WD Green SSD is an entry-level product line with limited capacity options. Based on the SanDisk SSD Plus, it uses a Silicon Motion controller in a DRAM-less configuration with SanDisk 15nm TLC NAND. The WD Green has a similar purpose to drives like the Samsung 750 EVO and the recently-announced OCZ TL100: to offer the lowest possible price while still providing acceptable reliability and a noticeable performance jump over hard drives. Higher capacities are omitted from the product line because the total price would be too high for the most cost-sensitive consumers, even if the price per GB is marginally lower than that of a more mainstream budget drive.

While the Green label has connotations of better-than-average power efficiency when applied to WD’s hard drives, the low performance of DRAM-less SSDs usually leads to poor energy efficiency during active use, and the idle power savings tend to be minimal.

The WD Green will be available later this quarter, and pricing has not been announced.

Western Digital WD Green Specifications
Capacity 120GB 240GB
Form Factor 2.5″ 7mm SATA or M.2 2280 SATA
Controller Silicon Motion SM2256S
NAND SanDisk 15nm TLC
Sequential Read 540 MB/s 545 MB/s
Sequential Write 405 MB/s 435 MB/s
4KB Random Read 37k IOPS 37k IOPS
4KB Random Write 63k IOPS 68k IOPS
Idle Power 30 mW
Encryption No
Endurance (TBW) 40 TB 80 TB
Warranty Three years

 


The Western Digital Blue (1TB) SSD Review: WD Returns to SSDs

After completing the acquisition of SanDisk, Western Digital is entering the consumer SSD market under its own brand with new SSDs derived from existing SanDisk product lines. As with their hard drives, the Blue SSD is a mainstream mid-range product, in this case using the SanDisk X400’s combination of SanDisk 15nm TLC and Marvell’s 88SS1074 controller.

Today we’ll be taking a look at the 1TB drive, how it compares to its sibling, the X400, and whether it can find its place in the highly competitive mainstream SSD market.