Storage


OCZ Introduces Z-Drive 6000 Enterprise PCIe SSD Series with NVMe Support

Back at CES, OCZ teased us by showcasing the Z-Drive 6000, but the drive was still under development, so details were rather scarce. Today OCZ is finally lifting the curtain and making a formal announcement of the Z-Drive 6000 series, the company’s first NVMe-compliant SSD.

We’ve talked about NVMe in the past, but in short it’s a host interface and driver stack that replaces the ancient AHCI. NVMe has been designed for SSDs from the ground up, and its main benefits are scalability (up to 64K outstanding commands per queue, and up to 64K queues, versus a single queue of 32 commands in AHCI) and a streamlined software stack that reduces both latency and CPU overhead for higher and more efficient performance. The Z-Drive 6000 series supports the native Windows (8.1 & Server 2012 R2), Linux, UNIX, Solaris, and VMware NVMe drivers, although OCZ will also offer custom NVMe drivers for Windows, Linux, and VMware for drive management reasons. The current native drivers lack some necessary management features (e.g. I still haven’t found a way to secure erase an NVMe drive with the in-box drivers), so in order to have those features available at launch, OCZ is offering a custom driver. That said, OCZ is fully invested in improving the native open source drivers; the problem is the long turnaround time for updates, which is what created the need for vendor-specific drivers.
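
To give a concrete sense of the management gap OCZ is referring to: on Linux, a secure erase is normally issued through the NVMe Format admin command, which the open source nvme-cli tool exposes. The snippet below is a minimal illustrative sketch, not OCZ tooling, and the exact option names should be verified against the installed nvme-cli version.

    # Illustrative sketch only: issuing an NVMe secure erase on Linux through the open source
    # nvme-cli tool (verify option names against `nvme format --help` on your system).
    import subprocess

    def secure_erase(device="/dev/nvme0n1"):
        # --ses=1 requests a user data erase via the NVMe Format admin command
        subprocess.run(["nvme", "format", device, "--ses=1"], check=True)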

OCZ Z-Drive 6000 Series Specifications
  | 6000 | 6300
Capacities | 800GB, 1,600GB & 3,200GB | 800GB, 1,600GB & 3,200GB
Form Factors | 2.5″ 15mm (SFF-8639) | 2.5″ 15mm & HHHL AIC
Interface | PCIe 3.0 x4 (NVMe 1.1b) | PCIe 3.0 x4 (NVMe 1.1b)
Controller | PMC-Sierra “Princeton” | PMC-Sierra “Princeton”
NAND | Toshiba A19nm 128Gbit MLC | Toshiba A19nm 128Gbit eMLC
Endurance | 1 DWPD | 3 DWPD
Encryption | AES-256 | AES-256
Power Loss Protection | Yes | Yes
Warranty | Five Years | Five Years
Price | ~$1.70/GB | ~$2.00/GB

The Z-Drive 6000 series comes in two flavors: 6000 and 6300. The underlying controller and firmware architectures are the same in both models and the difference lies solely in the NAND: the 6300 uses more durable eMLC, which increases the endurance to three drive writes per day from one in the 6000 with regular MLC. Due to its lower endurance, the 6000 is aimed more towards read-intensive applications such as online archiving and media streaming, whereas the 6300 is suited to mixed workloads such as big data analytics and financial transaction processing.
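
To put those ratings into perspective, DWPD converts to total writes over the five-year warranty with simple arithmetic; the figures below are my own back-of-the-envelope math, not OCZ specifications.

    # Rough total-writes estimate from the DWPD rating (my own math, not an OCZ spec)
    def total_writes_tb(capacity_gb, dwpd, warranty_years=5):
        return capacity_gb * dwpd * 365 * warranty_years / 1000

    print(total_writes_tb(800, 1))   # Z-Drive 6000 800GB:  ~1,460TB over five years
    print(total_writes_tb(3200, 3))  # Z-Drive 6300 3.2TB: ~17,520TB over five years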

Unlike Intel, OCZ doesn’t offer a model for write-intensive workloads with very high endurance (10 DWPD). OCZ explained to me that the reason is mostly cost efficiency: a high-endurance drive is doable, but it would require additional over-provisioning (which is what Intel does), and that would increase the cost. OCZ’s market research concluded that most customers are looking for lower-cost NVMe drives to make the transition, so OCZ didn’t see a large enough niche for expensive high-endurance drives. At roughly $1.70 and $2.00 per gigabyte, the Z-Drive 6000 series is certainly price competitive against Intel’s P3x00 series, and the premium over enterprise SATA/SAS SSDs isn’t too large.
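
For a rough sense of absolute pricing, multiplying the quoted per-gigabyte figures out gives the estimates below (my own calculation assuming linear scaling; OCZ has not published exact MSRPs).

    # Approximate per-drive prices from the quoted $/GB figures (assumption: linear scaling)
    for capacity_gb in (800, 1600, 3200):
        # 6000 at ~$1.70/GB vs 6300 at ~$2.00/GB
        print(capacity_gb, capacity_gb * 1.70, capacity_gb * 2.00)
    # roughly $1,360/$2,720/$5,440 for the 6000 and $1,600/$3,200/$6,400 for the 6300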

At the time of launch, the Z-Drive 6000 series will be available in capacities of 800GB, 1.6TB and 3.2TB. OCZ does, however, have a 6.4TB Z-Drive 6300 in development, which is scheduled to be available in Q4’15. The reason for the delay lies in the NAND: to fit roughly 8TB of flash inside a 2.5″ chassis, OCZ needs to use 16-die packages, and currently the price of those is, from what I have heard, approximately three times higher per gigabyte than 8-die stacks. Production and yields are expected to ramp up over the course of the year, so the 6.4TB Z-Drive 6300 will arrive once the NAND is available in high volume and at a reasonable price.
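
As a quick sanity check on the packaging math (my own calculation, assuming the 6.4TB model carries 8,192GiB of raw flash, i.e. double the 3.2TB model's 4,096GiB):

    # Die and package count for the upcoming 6.4TB model (assumption: 8,192GiB raw flash)
    die_gib = 128 / 8             # a 128Gbit die is 16GiB
    dies_needed = 8192 / die_gib  # 512 dies
    print(dies_needed / 16)       # 32 packages with 16-die stacks
    print(dies_needed / 8)        # 64 packages with 8-die stacks, too many for a 2.5-inch chassis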

The Z-Drive 6000 series employs PMC-Sierra’s “Princeton” controller, a native NVMe controller with support for 16 NAND channels. The controller supports a PCIe 3.0 x8 interface, but OCZ has decided to split that into two independent PCIe 3.0 x4 connections to the host. The benefit of dual-porting is redundancy and data availability: if one of the host systems goes down (due to a hardware failure, for instance), the data is still accessible through the second host. This is a feature borrowed from SAS, and the Z-Drive 6000 is actually the first NVMe drive with dual-port support, something OCZ said its customers have been asking for.

Typical enterprise-class features, such as AES-256 encryption, full power loss protection and end-to-end data protection, are all included as well. The Z-Drive 6000 series also features user-configurable power modes (15W, 20W and 25W), which can be used to limit power consumption (and performance) in more thermally constrained environments. A 2.5″ drive drawing 25W will definitely run hot and require plenty of airflow for cooling, although OCZ says it paid close attention to the chassis design to maximize heat dissipation and avoid throttling issues.

OCZ Z-Drive 6000 Series Performance Specifications
Model / Capacity | 6000 800GB | 6000 1,600GB | 6000 3,200GB | 6300 800GB | 6300 1,600GB | 6300 3,200GB
Raw NAND Capacity | 1,024GiB | 2,048GiB | 4,096GiB | 1,024GiB | 2,048GiB | 4,096GiB
128KB Sequential Read | 2.2GB/s | 2.9GB/s | 2.9GB/s | 2.2GB/s | 2.9GB/s | 2.9GB/s
128KB Sequential Write | 1.3GB/s | 1.9GB/s | 1.9GB/s | 1.0GB/s | 1.4GB/s | 1.4GB/s
4KB Random Read | 600K IOPS | 700K IOPS | 700K IOPS | 600K IOPS | 700K IOPS | 700K IOPS
4KB Random Write | 115K IOPS | 160K IOPS | 160K IOPS | 75K IOPS | 120K IOPS | 120K IOPS
Mixed 4KB 30R / 70W | 290K IOPS | 330K IOPS | 330K IOPS | 230K IOPS | 280K IOPS | 280K IOPS
Idle Power Consumption | 9W | 9W | 9W | 9W | 9W | 9W
Active Power Consumption | 25W | 25W | 25W | 25W | 25W | 25W
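
The gap between raw and usable capacity in the table also implies a healthy amount of over-provisioning; the quick calculation below is my own, as OCZ doesn't quote an OP percentage.

    # Over-provisioning implied by the spec table (my own math)
    raw_gb = 1024 * 2**30 / 1e9           # 1,024GiB expressed in GB: ~1,099.5GB
    usable_gb = 800
    print((raw_gb - usable_gb) / raw_gb)  # ~0.27, i.e. roughly 27% of the raw NAND is reserved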

OCZ focused specifically on read performance, and at up to 700K random read IOPS the Z-Drive is definitely top of the class, as Intel specs the P3700 at only 460K IOPS. Random write and mixed performance look excellent too, so it will be very interesting to see how the Z-Drive 6000 stacks up against the Intel and Samsung drives in objective third party testing. Note that the Z-Drive 6300 with eMLC is a bit slower in writes because eMLC NAND has higher program latencies: the voltage distribution for each voltage state is narrower, so programming requires more precision, which is achieved by increasing the number of program and verify iterations.

OCZ’s own testing data puts the Z-Drive 6000 well ahead of the competition in terms of performance and consistency in both reads and writes. I would of course take that data with a grain of salt, but if the Z-Drive 6000 series is really as good as OCZ’s marketing suggests, then OCZ has one hell of a drive on its hands. OCZ is currently sampling the Z-Drive 6000 series to key customers and partners, so I would expect wider availability later this year. All in all, it’s a very potent drive that could very well help OCZ gain some market share in the enterprise space.

The Truth About SSD Data Retention

In the past week, quite a few media outlets have posted articles claiming that SSDs will lose data in a matter of days if left unpowered. While there is some (read: very, very little) truth to that, the claim has created a lot of chatter and confusion in forums, and even I have received a few questions about its validity. Rather than responding to individual emails and tweets, I thought I would explain the matter in depth for everyone at once.

First of all, the presentation everyone is talking about can be found here. Contrary to what some sites reported, it’s not a presentation from Seagate: it’s an official JEDEC presentation from Alvin Cox, the Chairman of the JC-64.8 subcommittee (i.e. the SSD committee) at the time, meaning that it’s supposed to act as an objective source of information for all SSD vendors. It is true that Mr. Cox works as a Senior Staff Engineer at Seagate, but that is irrelevant because the whole purpose of JEDEC is to bring manufacturers together to develop open standards. The committee members and chairmen all work for some company, and the JC-64.8 subcommittee is currently led by Frank Chu from HGST.

Before we go into the actual data retention topic, let’s outline the conditions that must be met when a manufacturer determines the endurance rating for an SSD. First, the drive must maintain its capacity, meaning that it cannot retire so many blocks that the user capacity decreases. Second, the drive must meet the required UBER (uncorrectable bit errors per number of bits read) spec as well as stay within the functional failure requirement. Finally, the drive must retain data without power for a set amount of time to meet the JEDEC spec. Note that all of these conditions must be met after the maximum rated amount of data has been written, i.e. if a drive is rated at 100TB, it must meet these specs after 100TB of writes.

The table above summarizes the requirements for both client and enterprise SSDs. As we can see, the data retention requirement for a client SSD is one year at 30°C, which is above typical room temperature. Retention does depend on temperature, so let’s take a closer look at how it scales with temperature.

EDIT: Note that the data in the table above is based on material sent by Intel, not Seagate.

At a 40°C active and 30°C power-off temperature, a client SSD is rated to retain data for 52 weeks, i.e. one year. As the table shows, data retention is proportional to the active temperature and inversely proportional to the power-off temperature, meaning that a higher power-off temperature results in shorter retention. In the worst case scenario, where the active temperature is only 25-30°C and the power-off temperature is 55°C, the data retention can be as short as one week, which is what many sites have touted with their “data loss in a matter of days” claims. Yes, it can technically happen, but not in a typical client environment.
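
The temperature scaling behind that table follows an Arrhenius-style acceleration model. As a rough illustration, here is my own sketch, assuming the ~1.1eV activation energy commonly cited for NAND charge loss (not a figure from the presentation) and ignoring the active-temperature component that the full JEDEC model also includes:

    # Rough Arrhenius acceleration of charge loss between two power-off temperatures
    # (illustrative only; the activation energy is an assumed value)
    import math

    def acceleration_factor(t_low_c, t_high_c, ea_ev=1.1):
        k = 8.617e-5  # Boltzmann constant in eV/K
        t_low, t_high = t_low_c + 273.15, t_high_c + 273.15
        return math.exp((ea_ev / k) * (1.0 / t_low - 1.0 / t_high))

    print(52 / acceleration_factor(30, 55))  # ~2 weeks of retention at 55C vs 52 weeks rated at 30C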

In reality, a power-off temperature of 55°C is not realistic at all for a client user because the drive will most likely be stored somewhere in the house (closet, basement, garage, etc.) at room temperature, which tends to be below 30°C. The active temperature, on the other hand, is usually at least 40°C because the drive and the other components in the system generate heat that pushes the temperature above room temperature.

As always, there is a technical explanation for the data retention scaling. The conductivity of a semiconductor scales with temperature, which is bad news for NAND because when it’s unpowered the electrons are not supposed to move, as that would change the charge of the cell. In other words, as the temperature increases, the electrons escape the floating gate faster, which ultimately changes the voltage state of the cell and renders the data unreadable (i.e. the drive no longer retains data).

For active use, temperature has the opposite effect. Because higher temperature makes the silicon more conductive, the flow of current during program/erase operations is higher and causes less stress on the tunnel oxide, improving the endurance of the cell, since endurance is practically limited by the tunnel oxide’s ability to hold the electrons inside the floating gate.

All in all, there is absolutely zero reason to worry about SSD data retention in a typical client environment. Remember that the figures presented here are for a drive that has already passed its endurance rating, so for new drives the data retention is considerably higher, typically over ten years for MLC NAND based SSDs. If you buy a drive today and stash it away, the drive itself will become obsolete quicker than it loses its data. Besides, given the cost of SSDs, it’s not cost efficient to use them for cold storage anyway, so if you’re looking to archive data I would recommend hard drives for cost reasons alone.

Intel Releases SSD DC S3510

In February Intel refreshed its enterprise SATA SSD lineup with the DC S3610 and S3710 SSDs, but left the entry-level S35xx series untouched. That changes today with the launch of the DC S3510, which succeeds the popular S3500 that has been around since late 2012.

Similar to its big brothers, the S3510 features Intel’s second generation SATA 6Gbps controller that was first introduced in the high capacity S3500 models late last year. Intel has remained quiet about the specifics of the second generation controller (and the SATA 6Gbps controller as a whole), but we do know that it adds support for larger capacities, which suggests the internal caches and DRAM controller could be larger. 

The most significant change in the S3510 is the NAND. The S3510 switches to IMFT’s latest 16nm 128Gbit MLC node, which is a rather surprising move given that all of Intel’s client SSDs still use 20nm NAND. The reason is that Intel didn’t invest in IMFT’s 16nm node, meaning that Micron produces and owns all of the 16nm NAND output. Intel and Micron reconsider the partnership and investments separately for each generation, and for 16nm Intel decided not to invest, likely because Intel’s focus nowadays is on the enterprise while 16nm, with its lower endurance, is geared more towards the client market, and because Intel wanted to concentrate more heavily on the companies’ upcoming 3D NAND.

That said, Intel and Micron do have strong supply agreements in place, which give Intel access to Micron’s 16nm NAND despite not investing in its development and production. I suspect the use of 16nm NAND is why the S3510 wasn’t launched alongside the S3610 and S3710 earlier this year: validating a new NAND node is time consuming, and it may be that the 16nm node simply wasn’t mature enough for the enterprise back then. In any case, the S3510 is the first enterprise SSD to utilize sub-19nm NAND, which is a respectable achievement in its own right.

Intel SSD DC S3510 Specifications
Capacity | 80GB | 120GB | 240GB | 480GB | 800GB | 1.2TB | 1.6TB
Controller | Intel 2nd Generation SATA 6Gbps Controller (all capacities)
NAND | Micron 16nm 128Gbit Standard Endurance Technology (SET) MLC (all capacities)
Sequential Read | 375MB/s | 475MB/s | 500MB/s | 500MB/s | 500MB/s | 500MB/s | 500MB/s
Sequential Write | 110MB/s | 135MB/s | 260MB/s | 440MB/s | 460MB/s | 440MB/s | 430MB/s
4KB Random Read | 68K IOPS | 68K IOPS | 68K IOPS | 68K IOPS | 67K IOPS | 67K IOPS | 65K IOPS
4KB Random Write | 8.4K IOPS | 5.3K IOPS | 10.2K IOPS | 15.1K IOPS | 15.3K IOPS | 20K IOPS | 15.2K IOPS
Avg Read Power | 1.93W | 2.14W | 2.21W | 2.32W | 2.39W | 2.61W | 2.69W
Avg Write Power | 1.91W | 2.14W | 3.06W | 4.45W | 4.74W | 5.24W | 5.59W
Endurance | 45TB | 70TB | 140TB | 275TB | 450TB | 660TB | 880TB

On the performance side, the S3510 provides slightly better random write performance at larger capacities than its predecessor (you can find the S3500 specs here), but other than that the S3510 is a very close match for the S3500. Typical of enterprise SSDs, the S3510 features AES-256 hardware encryption and full power loss protection that protects all data, including in-flight user writes, from sudden power loss.

Comparison of Intel’s Enterprise SATA SSDs
  | S3510 | S3610 | S3710
Form Factors | 2.5″ | 2.5″ & 1.8″ | 2.5″
Capacity | Up to 1.6TB | Up to 1.6TB | Up to 1.2TB
NAND | 16nm MLC | 20nm HET MLC | 20nm HET MLC
Endurance | 0.3 DWPD | 3 DWPD | 10 DWPD
Random Read Performance | Up to 68K IOPS | Up to 84K IOPS | Up to 85K IOPS
Random Write Performance | Up to 20K IOPS | Up to 28K IOPS | Up to 45K IOPS

The endurance rating is also equal to the S3500’s and comes in at 0.3 drive writes per day for five years, which is a typical rating for entry-level enterprise SSDs that are mostly aimed at read-intensive workloads like media streaming. For more write-centric applications, Intel offers the S3610 and S3710 with higher endurance and better write performance (at a higher cost, of course). I haven’t received the S3510 MSRPs from Intel yet, but I suspect the S3510 is priced at around $0.80 per gigabyte; I’ll confirm this as soon as I hear back from Intel.
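
The TBW figures in the spec table are consistent with that rating; a quick check (my own arithmetic):

    # Converting the spec table's TBW figures back to drive writes per day (my own check)
    def dwpd(tbw_tb, capacity_gb, warranty_years=5):
        return tbw_tb * 1000 / (capacity_gb * 365 * warranty_years)

    print(round(dwpd(275, 480), 2))   # 480GB: ~0.31 DWPD
    print(round(dwpd(880, 1600), 2))  # 1.6TB: ~0.30 DWPD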

All in all, even though the industry is transitioning more and more towards PCIe and NVMe, there is still a huge market for SATA drives. Many applications don’t necessarily benefit much from higher performance, and hyperscale customers in particular are looking at cost and compatibility, which is where SATA is still the king of the hill.