ADATA Launches ISSS314 and IM2P3388 Industrial SSDs: 3D NAND, Extreme Temps

ADATA has introduced two new families of 3D NAND-based SSDs aimed at industrial applications. Dubbed the ISSS314 and the IM2P3388, these drives are designed to handle extreme temperatures as well as humidity levels, allowing them to work reliably in very tough environmental conditions. The more powerful IM2P3388 drives use a PCIe interface and offer high performance levels along with a powerful ECC engine and encryption, whereas the less speedy ISSS314 uses a SATA interface and offers very low power consumption that barely tops 2.5 W.

The IM2P3388: M.2, High Performance, Extreme Temps, Encryption, TCG Opal

The ADATA IM2P3388 is an M.2 drive that uses an NVMe PCIe 3.0 x4 interface and is based on 3D MLC NAND. This specific drive is designed to withstand ESD and EMI, up to 20G of vibration and 1500G/0.5ms shock, extreme temperatures from –40°C to +90°C, as well as high humidity (5%-95% RH, non-condensing). To put it into perspective: the IM2P3388 drives can operate in Antarctica or in the Lut Desert in Iran. In the real world, ADATA’s new SSDs will serve inside space-constrained industrial or commercial PCs, servers, military-grade systems, and embedded computers.

The IM2P3388 drives are based on a Silicon Motion controller that ADATA does not name, though we suspect it is the SM2260 with some additional customization. As for the NAND, the IM2P3388 SSDs use carefully selected 3D MLC that can handle high temperatures for prolonged periods. The IM2P3388 takes advantage of all the capabilities of the controller and therefore supports AES-256 encryption, the TCG Opal 2.0 spec, end-to-end data protection, and so on. In addition, the drive has multiple sensors that monitor its condition.

ADATA IM2P3388 SSD Specifications
Capacity 128 GB 256 GB 512 GB 1 TB
Model Number Commercial IM2P3388-128GB IM2P3388-256GB IM2P3388-512GB IM2P3388-001TB
Industrial IM2P3388-128GC IM2P3388-256GC IM2P3388-512GC IM2P3388-001TC
Controller Silicon Motion SM2260 (?)
NAND Flash 3D MLC NAND
Form-Factor, Interface M.2-2280, PCIe 3.0 x4, NVMe 1.2
Operating Temperature Commercial -10°C to 80°C
Industrial -40°C to 90°C
Vibration Resistance 20G (10 – 2000 Hz)
Shock Resistance 1500G/0.5 ms half sine wave
Operating Humidity 5% – 95% RH non-condensing
Sequential Read ~1000 MB/s (?) ~2000 MB/s (?) 2500 MB/s
Sequential Write ~300 MB/s (?) ~600 MB/s (?) 1100 MB/s
Random Read IOPS unknown
Random Write IOPS unknown
Pseudo-SLC Caching Supported
DRAM Buffer Yes, capacity unknown
TCG Opal Encryption Yes
Power Consumption Up to 4.8W
Power Management DevSleep, Slumber
Warranty unknown
MTBF >2,000,000 hours

As for performance, ADATA specifies the drive to offer up to 2.5 GB/s sequential read speeds and up to 1.1 GB/s sequential write speeds (when pSLC caching is used), but does not specify random performance. ADATA’s IM2P3388 will be available in 128 GB, 256 GB, 512 GB, and 1 TB configurations. Keeping in mind the high density of modern flash chips, expect the entry-level models to be slower than their higher-capacity counterparts. In general, expect the performance of the IM2P3388 to be comparable to the XPG SX8000 drives featuring the SM2260 and 3D MLC.

The ISSS314: 2.5”, Extreme Temps, Low Power, Starting at 32 GB

The ADATA ISSS314 SSDs come in a traditional 2.5”/7 mm drive form-factor and use a SATA 6 Gbps interface. In order to satisfy the diverse needs of customers, ADATA will offer the ISSS314 in 32 GB, 64 GB, 128 GB, 256 GB, and 512 GB configurations. The higher-end models will provide up to 560 MB/s sequential read and up to 520 MB/s sequential write speeds, whereas the entry-level drives will be considerably slower. As for power consumption, the new SSDs are rated to only use up to 2.5 W, which puts them into the energy efficient category.

The ISSS314 SSDs are based on an unknown controller as well as 3D MLC and 3D TLC NAND memory sorted using ADATA’s proprietary A+ testing methodology to select the higher-quality chips. The industrial ISSS314 drives based on 3D MLC memory are rated to withstand shock, EMI, and extreme temperatures from –40°C to +85°C, and thus are aimed at industrial applications. By contrast, commercial 3D MLC ISSS314 SSDs are rated for –10°C to +80°C operation. Meanwhile, the 3D TLC-powered ISSS314 is guaranteed to work in a temperature range from 0°C to +70°C, but can also withstand shocks, ESD, EMI, and so on. As for features, all the ISSS314 SSDs have S.M.A.R.T., a temperature sensor, hardware power detection, and flash protection.

ADATA ISSS314 Specifications
Capacity 32 GB 64 GB 128 GB 256 GB 512 GB
Model Number MLC Commercial ISSS314-032GB ISSS314-064GB ISSS314-128GB ISSS314-256GB ISSS314-512GB
Industrial ISSS314-032GC ISSS314-064GC ISSS314-128GC ISSS314-256GC ISSS314-512GC
TLC Commercial ISSS314-128GD ISSS314-256GD ISSS314-512GD
Controller Silicon Motion SM2258 (?)
Form-Factor/Interface 2.5″/7 mm/SATA
NAND MLC Commercial 3D MLC
Industrial 3D MLC
TLC Commercial 3D TLC
Operating Temp. MLC Commercial -10°C to 80°C
Industrial -40°C to 85°C
TLC Commercial 0°C to 70°C
Vibration Resistance 20G (10 – 2000 Hz)
Shock Resistance 1500G/0.5 ms half sine wave
Operating Humidity 5% – 95% RH non-condensing
Sequential Read unknown 560 MB/s
Sequential Write unknown 520 MB/s
Random Read IOPS Up to 90K IOPS (taken from SM2258, actual will be lower)
Random Write IOPS Up to 80K IOPS (taken from SM2258, actual will be lower)
Pseudo-SLC Caching Supported
DRAM Buffer Yes, capacity unknown
TCG Opal Encryption No
Power Consumption Up to 2.5W
Power Management DevSleep
Warranty unknown
MTBF 2,000,000 hours

ADATA does not publish recommended prices for its industrial and commercial SSDs. Since such products rarely show up in mainstream retail, their actual prices for customers typically fluctuate depending on the order size and other factors.

Alphacool Releases Two New SSD Coolers: Passive HDX-2 and Watercooled HDX-3

This week Alphacool announced the availability of their new M.2 SSD coolers, the HDX-2 and HDX-3. Some may recall the original HDX M.2 cooler was a simple, passive, clip-on heatsink for M.2 SSDs, designed to help prevent the thermal throttling that tends to plague synthetic test results on some M.2 drives. With the advent of the HDX-2 and HDX-3, Alphacool has moved beyond the simple clip-on cooler to a PCIe x4 add-in card. This design change allows a full-sized heatsink to be mounted on the card, giving more surface area to cool the attached M.2 device. The HDX-3 takes the HDX-2's passive setup a step further and uses a waterblock instead of the large heatsink to remove the heat generated by these SSDs.

 

The HDX-2 comes in at 100 x 81.5 x 20 mm, with the HDX-3 being slightly longer at 120 mm while sharing the same width and height. According to Alphacool, the PCIe x4 card allows a maximum bandwidth of around 3900 MB/s, which existing M.2 drives on the market will not be able to saturate. Though the card is double-sided, each cooler holds a single M.2 device. Contact between the M.2 device and the heatsink is provided by the included thermal pads, which cover both single- and double-sided M.2 drives. The HDX-2 and HDX-3 both support a single M.2 SSD up to 80 mm long. Both devices connect to the PCIe slot and mount to the case for a stable platform.
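
The ~3938 MB/s maximum quoted for the card is essentially just the theoretical throughput of a PCIe 3.0 x4 link rather than anything specific to the cooler. A quick back-of-the-envelope check, using standard PCIe 3.0 link parameters (these are general PCIe figures, not numbers supplied by Alphacool):

```python
# Theoretical throughput of a PCIe 3.0 x4 link:
# 8 GT/s per lane with 128b/130b encoding, 1 bit per transfer.
lanes = 4
transfers_per_second = 8e9          # 8 GT/s per lane
encoding_efficiency = 128 / 130     # 128b/130b line coding
bytes_per_transfer = 1 / 8          # bits -> bytes

per_lane_mb_s = transfers_per_second * encoding_efficiency * bytes_per_transfer / 1e6
print(round(per_lane_mb_s * lanes))  # -> 3938 MB/s, before protocol overhead
```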

Alphacool HDX-2 and HDX-3 M.2 SSD Coolers
Technical Data HDX-2 HDX-3
Dimensions (LxWxH) 100 x 81.5 x 20 mm 120 x 81.5 x 20 mm
Material Aluminum Copper, Acetal
Threads N/A 2x G 1/4″
PCIe Form Factor PCIe 3.0 x4
Compatibility M.2 2280 PCIe SSDs
Max. Bandwidth PCIe Card 3938 MB/s

The HDX-2 uses large aluminum heatsinks on both sides of the included PCIe card, easily covering the M.2 drive it aims to cool. The heatsinks are black with αCOOL and HDX-2 stenciled on them in white, and feature cooling fins to increase the cooling area – the heatsinks cover the entire PCB of the PCIe card. Installation involves mounting the drive to the PCB, applying the thermal pads to the drive, and then mounting the heatsink to the board.

The HDX-3 block is made of nickel-plated copper with the top made from a single piece of acetal. The block mounts to one side of the PCIe card, leaving the other side open. Alphacool says water flows over the entire SSD to help keep things cool. The water enters and exits the block at the end opposite the PCIe connector using standard G 1/4″ threads.

Pricing and availability were not listed in the press release. 

Intel Launches Movidius Neural Compute Stick: Deep Learning and AI on a $79 USB Stick

Today Intel subsidiary Movidius is launching their Neural Compute Stick (NCS), a version of which was showcased earlier this year at CES 2017. The Movidius NCS adds to Intel’s deep learning and AI development portfolio, building off of Movidius’ April 2016 launch of the Fathom NCS and Intel’s later acquisition of Movidius itself in September 2016. As Intel states, the Movidius NCS is “the world’s first self-contained AI accelerator in a USB format,” and is designed to allow host devices to process deep neural networks natively – or in other words, at the edge. In turn, this provides developers and researchers with a low power and low cost method to develop and optimize various offline AI applications.

Movidius’s NCS is powered by their Myriad 2 vision processing unit (VPU), and, according to the company, can reach over 100 GFLOPS of performance within a nominal 1W of power consumption. Under the hood, the Movidius NCS works by translating a standard, trained Caffe-based convolutional neural network (CNN) into an embedded neural network that then runs on the VPU. In production workloads, the NCS can be used as a discrete accelerator for speeding up or offloading neural network tasks. Otherwise, for development workloads, the company offers several developer-centric features, including layer-by-layer neural network metrics to allow developers to analyze and optimize performance and power, and validation scripts to allow developers to compare the output of the NCS against the original PC model in order to ensure the accuracy of the NCS’s model.
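
To make that compile-then-deploy flow a bit more concrete, below is a minimal, self-contained sketch of the host-side workflow the description implies: a trained Caffe model is compiled offline into a graph for the VPU, the graph is loaded onto the stick, and FP16 inputs are then streamed through it over USB. The class and method names here are hypothetical stand-ins for illustration only, not the actual Neural Compute SDK API.

```python
# Hypothetical sketch of the NCS deployment flow described above.
# None of these names come from the real Neural Compute SDK.
import numpy as np

class MockNeuralComputeStick:
    """Stands in for a USB-attached Myriad 2 VPU accelerator."""

    def load_graph(self, graph_path: str) -> None:
        # In the real flow, a trained Caffe CNN is first compiled offline
        # into an embedded graph, which is then uploaded to the VPU here.
        self.graph_path = graph_path

    def infer(self, frame_fp16: np.ndarray) -> np.ndarray:
        # The network runs natively on the stick ("at the edge");
        # the host only ships inputs in and results out over USB.
        assert frame_fp16.dtype == np.float16
        return np.zeros(1000, dtype=np.float16)  # e.g. GoogLeNet class scores

stick = MockNeuralComputeStick()
stick.load_graph("googlenet.graph")  # hypothetical pre-compiled graph file
frame = np.zeros((224, 224, 3), dtype=np.float16)
scores = stick.infer(frame)
print(scores.shape)  # (1000,)
```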

The 2017 Movidius NCS vs. 2016 Fathom NCS

According to Gary Brown, VP of Marketing at Movidius, this ‘Acceleration mode’ is one of several features that differentiate the Movidius NCS from the Fathom NCS. The Movidius NCS also comes with a new “Multi-Stick mode” that allows multiple sticks in one host to work in conjunction in offloading work from the CPU. For multiple stick configurations, Movidius claims that they have confirmed linear performance increases up to 4 sticks in lab tests, and are currently validating 6 and 8 stick configurations. Importantly, the company believes that there is no theoretical maximum, and they expect that they can achieve similar linear behavior for more devices. Ultimately, though, scalability will depend at least somewhat on the neural network itself, and developers trying to use the feature will want to experiment to determine how well they can reasonably scale.
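
As a rough illustration of what Multi-Stick mode implies on the host side, the sketch below (building on the mock stick idea above) fans a batch of frames out across several sticks with one worker thread per device, so the host CPU is only feeding inputs and collecting outputs. The function names are hypothetical, and the real SDK may schedule work across sticks differently.

```python
# Hypothetical sketch of fanning inference out across several compute sticks.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def infer_on_stick(stick_id: int, frame: np.ndarray) -> np.ndarray:
    # Placeholder for a blocking USB round-trip to stick number `stick_id`.
    return np.zeros(1000, dtype=np.float16)

num_sticks = 4  # Movidius reports near-linear scaling up to 4 sticks
frames = [np.zeros((224, 224, 3), dtype=np.float16) for _ in range(64)]
stick_ids = [i % num_sticks for i in range(len(frames))]  # round-robin assignment

# Up to num_sticks inferences are in flight at once, spread across the sticks.
with ThreadPoolExecutor(max_workers=num_sticks) as pool:
    results = list(pool.map(infer_on_stick, stick_ids, frames))

print(len(results))  # 64 output tensors
```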

Meanwhile, the on-chip memory has increased from 1 GB on the Fathom NCS to 4 GB LPDDR3 on the Movidius NCS, in order to facilitate larger and denser neural networks. And to cap it all off, Movidius has been able to reduce the MSRP to $79 – citing Intel’s “manufacturing and design expertise” – lowering the cost of entry even more.

Like other players in the edge inference market, Movidius is looking to promote and capitalize on the need for low-power but capable inference processors for stand-alone devices. That means targeting use cases where the latency of going to a server would be too great, a high-performance CPU too power hungry, or where privacy is a greater concern. In which case, the NCS and the underlying Myriad 2 VPU are Intel’s primary products for device manufacturers and software developers.

Movidius Neural Compute Stick Products
  Movidius Neural Compute Stick Fathom Neural Compute Stick
Interface USB 3.0 Type A USB 3
On-chip Memory 4Gb LPDDR3 1Gb/512Mb LPDDR3
Deep Learning Framework Support Caffe Caffe, TensorFlow
Native Precision Support FP16 FP16, 8bit
Features Acceleration mode, Multi-Stick mode N/A
Nominal Power Envelope 1W 1W
SoC Myriad 2 VPU Myriad 2 VPU (MA2450)
Launch Date 7/20/2017 4/28/2016
MSRP $79 $99

As for the older Fathom NCS, the company notes that the Fathom NCS was only ever released in a private beta (which was free of charge). So the Movidius NCS is the de facto production version. For customers who did grab a Fathom NCS, Movidius says that Fathom developers will be able to retain their current hardware and software builds, but the company will be encouraging developers to switch over to the production-ready Movidius NCS.

Stepping back, it’s clear that the Movidius NCS offers stronger and more versatile features beyond the functions described in the original Fathom announcement. As it stands, the Movidius NCS offers native FP16 precision, with over 10 inferences per second at FP16 precision on GoogleNet in single-inference mode, putting it in the same range as the 15 nominal inferences per second of the Fathom. While the Fathom NCS was backwards compatible with USB 1.1 and USB 2, it was noted that the decreased bandwidth reduced performance; presumably, this applies for the Movidius NCS as well.

SoC-wise, while the older Fathom NCS had a Myriad 2 MA2450 variant, a specific Myriad 2 model was not described for the Movidius NCS. A pre-acquisition 2016 VPU product brief outlines 4 Myriad 2 family SoCs to be built on a 28nm HPC process, with the MA2450 supporting 4Gb LPDDR3 while the MA2455 supports 4Gb LPDDR3 and secure boot. Intel’s own Myriad 2 VPU Fact Sheet confirms the 28nm HPC process, implying that the VPU remains fabbed with TSMC. Given that the 2014 Myriad 2 platform specified a TSMC 28nm HPM process, as well as a smaller 5mm x 5mm package configuration, it’s possible that a different, more refined 28nm VPU powers the Movidius NCS. In any case, it was mentioned that the 1W power envelope applies to the Myriad 2 VPU, and that in certain complex cases, the NCS may operate within a 2.5W power envelope.

Ecosystem Transition: From Google’s Project Tango to Movidius, an Intel Company

Close followers of Movidius and the Myriad SoC family may recall Movidius’ previous close ties with Google: the two announced a partnership around the Myriad 1 in 2014, which culminated in the Myriad 1’s appearance in Project Tango. Further agreements in January 2016 saw Google sourcing Myriad processors and Movidius’ entire software development environment in return for Google contributions to Movidius’ neural network technology roadmap. In the same vein, the original Fathom NCS also supported Google’s TensorFlow, in contrast to the Movidius NCS, which is only launching with Caffe support.

As an Intel subsidiary, Movidius has unsurprisingly shifted into Intel’s greater deep learning and AI ecosystem. On that matter, Intel’s acquisition announcement explicitly linked Movidius with Intel RealSense (which also found its way into Project Tango) and computer vision endeavors; though explicit Movidius integration with RealSense is yet to be seen – or if in the works, made public. In the official Movidius NCS news brief, Intel does describe Movidius fitting into Intel’s portfolio as an inference device, while training and optimizing neural networks falls to the Nervana cloud and Intel’s new Xeon Scalable processors respectively. To be clear, this doesn’t preclude Movidius NCS compatibility with other devices, and to that effect Mr. Brown commented: “If the network has been described in Caffe with the supported layer types, then we expect compatibility, but we also want to make clear that NCS is agnostic to how and where the network was trained.”

On a more concrete note, Movidius has a working demonstration of a Xeon/Nervana/Caffe/NCS workflow, where an end-to-end pipeline built around a Xeon-based training scheme generates a Caffe network optimized by Nervana’s Intel Caffe format, which is then deployed via NCS. Movidius plans to debut this demo at the Computer Vision and Pattern Recognition (CVPR) conference in Honolulu, Hawaii later this week. In general, Movidius and Intel promise to have plenty to talk about in the future, with Mr. Brown commenting: “We will have more to share about technical integrations later on, but we are actively pursuing the best end-to-end experience for training through to deployment of deep neural networks.”

Upcoming News and NCS Demos at CVPR

Alongside the Xeon/Caffe/Nervana/NCS workflow demo, Movidius has a slew of other things to showcase at CVPR 2017. Interestingly, Intel has described their presentations and demos as two separate Movidius and RealSense affairs, implying that the aforementioned Movidius/RealSense unification is still in the works.

For Movidius, Intel describes three demonstrations: “SDK Tools in Action,” “Multi-Stick Neural Network Scaling,” and “Multi-Stage Multi-Task Convolutional Neural Network (MTCNN).” The first revolves around the Movidius Neural Compute SDK and the platform API. The multi-stick demo showcases four Movidius NCS units accelerating object recognition. Finally, the third demo showcases Movidius NCS support for MTCNN, “a complex multi-stage neural network for facial recognition.” Meanwhile, Intel is introducing the RealSense D400 series, a depth-sensing camera family.

The multi-stick demo is presumably the configuration the company says has been validated on three different host platforms: a desktop CPU, a laptop CPU, and a low-end SoC. The company also has a separate acceleration demo, where the Movidius NCS accelerates a Euclid developer module and offloads its CPU, “freeing up the CPU for other tasks such as route planning or running application-level tasks.” The result is around double the framerate and a two-thirds reduction in power.
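
Assuming those two figures refer to the same workload, they imply a sizable efficiency gain, roughly as follows:

```python
# Implied efficiency gain if the two quoted figures apply to the same workload.
framerate_gain = 2.0        # "around double the framerate"
power_fraction = 1 - 2/3    # "two-thirds reduction" -> roughly 1/3 of the power

perf_per_watt_gain = framerate_gain / power_fraction
print(round(perf_per_watt_gain, 1))  # -> 6.0x performance per watt
```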

All in all, Intel clearly sees, and outright states, that they consider the Movidius NCS to be a means of democratizing deep learning application development. As recently as this week, we’ve seen a similar approach: Intel’s 15.46 integrated graphics driver brought support for CV and AI workload acceleration on Intel integrated GPUs, tying in with Intel’s open source Compute Library for Deep Neural Networks (clDNN) and the associated Computer Vision SDK and Deep Learning Deployment Toolkits. On a wider scale, Intel has already publicly positioned itself for deep learning in edge devices by way of their ubiquitous iGPUs, and Intel’s ambitions are highlighted by its recent history of machine learning and autonomous automotive oriented acquisitions: Mobileye, Movidius, Nervana, Yogitech, and Saffron.

As Intel pushes forward with machine learning development by way of edge devices, it will be very interesting to see how their burgeoning ecosystem coalesces. Like the original Fathom, the Movidius NCS is aimed at lowering the barriers to entry, and, as the Fathom launch video supposes, it points to a future where drones, surveillance cameras, robots, and any other device can be made smart by “adding a visual cortex” in the form of the NCS.

With that said, however, technology is only half the challenge for Intel. Neural network inference at the edge is a popular subject for a number of tech companies, all of whom are jockeying for the lead position in what they consider a rapidly growing market. So while Intel has a strong hand with their technology, success here will mean that they need to be able to break into this new market in a convincing way, which is something they’ve struggled with in past SoC/mobile efforts. The fact that they already have a product stack via acquisitions may very well be the key factor here, since being late to the market has frequently been Intel’s Achilles’ heel in the past.

Wrapping things up, the Movidius NCS is now available for purchase for an MSRP of $79 through select distributors, as well as at CVPR.

EKWB Releases New RGB Monoblock for MSI X299 Motherboards

This week, the Slovenia-based liquid cooling manufacturer EKWB (EK Water Blocks) released a new monoblock custom-made for specific MSI X299 motherboards, named the EK-FB MSI X299 Gaming Pro Carbon RGB Monoblock. MSI claims this solution provides up to 30% lower VRM and CPU temperatures as measured at the back of the PCB. It has a built-in 4-pin RGB LED strip compatible with MSI’s Mystic Light software in order to customize the lighting experience.

Based on the EK-Supremacy Evo cooling engine, EK states it has a high-flow design and can be used in systems running a weaker pump. The EK-FB MSI X299 Gaming Pro Carbon Monoblock directly cools Intel LGA2066 socket CPUs as well as the potentially hot-running VRM area found on many X299 boards. It does so with direct contact on both the CPU and MOSFETs, with liquid flowing directly over those critical parts inside the block.

Sparing little expense, the base is made out of nickel-plated electrolytic copper, with the top constructed of acrylic glass. According to MSI, the cold plate portion of the block has been redesigned to ensure it “…has better mechanical contact with the IHS…thus enabling better thermal transfer”. An example of this is the raised circle against which the CPU IHS is pressed. The required barbs are the common G 1/4″ type.

The included 4-pin RGB LED strip connects to the motherboard’s 4-pin RGB LED header, or to other 3rd party 4-pin LED controllers. The LED strip cover can be removed and replaced with another compatible RGB LED strip, or flipped around for better cable management, orientation, and aesthetics. 

Though the name of this monoblock mentions one specific motherboard, MSI says it is compatible with the following boards in their X299 lineup:

  • MSI X299 Gaming Pro Carbon AC
  • MSI X299 Gaming Pro Carbon
  • MSI X299 Gaming M7 ACK

The EK-FB MSI X299 Gaming Pro Carbon RGB Monoblock is available for pre-order now through the EK Webshop and its Partner Reseller Network. Pricing, including VAT, will be 119.95€. EK says shipping of pre-orders will start on Thursday, July 27th.
