Enterprise


Synology Launches RC18015xs+ / RXD1215sas High-Availability Cluster Solution

Synology is no stranger to high-availability (HA) systems. Synology High Availability is touted as one of the features that differentiate Synology’s NAS units from other vendors’ for small business and enterprise usage. Put simply, Synology HA allows two NAS units of the same model to be connected to each other directly through their LAN ports, while also being connected to the main network through their other LAN ports. One of the NAS units is designated as the active unit, while the other passively tracks updates made to it. In case of any failure in the active unit, the passive one can seamlessly take over without any downtime.
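
To make the active/passive arrangement concrete, here is a minimal sketch of the kind of heartbeat-and-failover loop such a setup relies on. This is purely illustrative – the port, timeout, and promotion hook are hypothetical placeholders, not Synology’s actual implementation.

```python
import socket

# Passive node: listen for heartbeats from the active node over the
# dedicated LAN link; promote ourselves if the active node goes silent.
# Port number, timeout, and the promotion hook are assumed for illustration.

HEARTBEAT_PORT = 9999        # hypothetical port on the direct HA link
TIMEOUT_SECONDS = 5.0        # silence threshold before declaring failure

def promote_to_active():
    """Placeholder: take over the shared service IP and start serving."""
    print("Active node lost heartbeat -- passive node taking over")

def passive_node_loop():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", HEARTBEAT_PORT))
    sock.settimeout(TIMEOUT_SECONDS)
    while True:
        try:
            sock.recvfrom(64)          # any datagram counts as a heartbeat
        except socket.timeout:
            promote_to_active()
            break

if __name__ == "__main__":
    passive_node_loop()
```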

Synology is now extending this concept to a high-availability cluster. The products being introduced today are the RackStation RC18015xs+ compute node and the 12-bay RXD1215sas expansion unit.

Unlike Synology’s traditional RackStation products, the compute node doesn’t come with storage bays. It is just a 1U server sporting a Xeon E3-1230 v2 CPU (4C / 8T Ivy Bridge running at 3.3 GHz). The specifications of the RC18015xs+ are provided below.

The PCIe 3.0 x8 slot allows for installation of 10 GbE adapters, if required. The compute node is priced at $4000. The expansion unit comes with the following specifications, and it is priced at $3500.

In order to set up a high-availability cluster, two compute nodes and at least one expansion unit are needed (as shown in the diagram on top). The operation of the cluster and its high-availability features are similar to Synology HA. Performance numbers are on the order of 2,300 MBps and 330K IOPS using dual 10G adapters. All DSM (v5.2) features such as SSD caching and virtualization certifications are available. High availability is also ensured through redundancy of hardware components (PSUs, SAS connectors, fans, etc.).
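
As a quick sanity check on that throughput figure (our arithmetic, not Synology’s): dual 10 GbE links give 2,500 MB/s of raw aggregate bandwidth, so the quoted number is close to line rate.

```python
# Back-of-the-envelope check of the quoted 2,300 MBps against dual 10 GbE
links = 2
link_rate_gbps = 10.0
aggregate_mbps = links * link_rate_gbps * 1000 / 8   # 2,500 MB/s raw
print(f"Aggregate line rate: {aggregate_mbps:.0f} MB/s")
print(f"2,300 MB/s is {2300 / aggregate_mbps:.0%} of line rate")
```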

The other important aspect of today’s announcement is the use of btrfs as the file system. As of now, the only COTS NAS units with btrfs support in this market segment have been those from Netgear and Thecus, so it is heartening to see Synology also adopting it. btrfs brings along many advantages, including snapshots with minimal overhead and protection against bit-rot. The unfortunate aspect is that it is currently only available in this high-availability cluster solution. We hope it becomes an option for other NAS models soon.
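
The low overhead comes from copy-on-write: a btrfs snapshot clones a subvolume’s metadata in roughly constant time instead of copying the data. As a rough illustration using the standard btrfs CLI (paths are hypothetical, and this must run as root on a btrfs volume – DSM wraps the same capability in its own UI):

```python
import subprocess
from datetime import datetime

# Create a read-only, copy-on-write snapshot of a btrfs subvolume.
# The paths are assumed for illustration; 'btrfs subvolume snapshot'
# is the standard command-line tool.
source = "/volume1/shared"
dest = f"/volume1/.snapshots/shared-{datetime.now():%Y%m%d-%H%M%S}"

subprocess.run(
    ["btrfs", "subvolume", "snapshot", "-r", source, dest],
    check=True,   # raise if the command fails
)
print(f"Snapshot created at {dest}")
```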

Coming to the pricing aspect, consumers need to buy two compute nodes and one expansion unit at a minimum, bringing the cost of a diskless configuration to $11,500. This is pretty steep, considering that Quanta’s cluster-in-a-box solutions (with similar computing performance) can be had along with Windows Server licenses for around half the price. Synology’s products have always carried a premium (deservedly so, for the ease of setup and maintenance), so the pricing strategy here is not a surprise.
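
For reference, the minimum-configuration math from the list prices above (no drives included):

```python
# Minimum diskless high-availability cluster from the list prices above
compute_node = 4000      # RC18015xs+, two required
expansion_unit = 3500    # RXD1215sas, at least one required
total = 2 * compute_node + expansion_unit
print(f"Minimum cluster cost: ${total:,}")   # $11,500
```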

Ruckus Unifies Controller Product Platforms under SmartZone Umbrella

The enterprise Wi-Fi market has been growing at a tremendous rate, thanks to the proliferation of smart wireless devices (even in business settings). There are many vendors targeting the enterprise WLAN space; Ruckus Wireless, Aruba Networks, and Ubiquiti Networks are examples. We talked briefly about Ruckus Wireless when we covered the launch of their cloud-based WLAN management service last year.

Until now, Ruckus has had different product lines for different market segments – small businesses, medium-sized enterprises, large-scale enterprises and carrier-grade infrastructure. However, as the number of WLAN clients for a given system increases, the traditional market delineation makes it difficult for customers to choose the correct product. To solve this problem, Ruckus is introducing an umbrella ‘SmartZone’ software platform for management and control, which allows customers to easily upgrade their infrastructure to meet future requirements (‘Wi-Fi as you grow’).

The new Ruckus SmartZone presents a single-pane software interface across different deployments (on-premises single controllers, clustered controllers, as well as virtualized WLAN controllers). These can scale from one up to 300K devices. In terms of hardware offerings, we have the SmartZone 100 WLAN Controller; each unit can support up to 25K clients and 2048 WLANs, with up to 10 Gbps of data throughput. Each can manage up to 1000 Ruckus ZoneFlex Access Points, and a cluster can have up to three units.
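
Using the quoted limits, a rough sizing rule for a SmartZone 100 deployment might look like the sketch below. The sizing logic is our own illustration, not Ruckus guidance:

```python
import math

# Limits quoted above for a SmartZone 100 controller and cluster
AP_LIMIT = 1000          # ZoneFlex APs per controller
CLIENT_LIMIT = 25_000    # clients per controller
CLUSTER_LIMIT = 3        # controllers per cluster

def controllers_needed(aps: int, clients: int) -> int:
    """Smallest controller count that covers both the AP and client load."""
    needed = max(math.ceil(aps / AP_LIMIT), math.ceil(clients / CLIENT_LIMIT))
    if needed > CLUSTER_LIMIT:
        raise ValueError("Exceeds a single SmartZone 100 cluster; "
                         "a virtual SmartZone may be a better fit")
    return needed

print(controllers_needed(aps=1800, clients=40_000))   # -> 2
```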

Ruckus is also supporting virtual SmartZones (vSZ) for easy scaling and flexibility, with software-defined networking (SDN) and support for network functions virtualization (NFV). The vSZ can support up to 30K APs and 300K clients. Rounding out the SmartZone platforms is the Ruckus SmartCell Gateway SCG-200. By integrating 3GPP gateway functions along with WLAN controller duties, the SCG-200 can help carriers integrate Ruckus WLANs into their existing mobile networks. Wi-Fi is being talked about as a credible solution for congested mobile network cells, and the SCG-200 targets that market trend.

The SmartZone 100 has an MSRP of $4,995 (for the 1 Gbps throughput model), while vSZ licenses will go for $995 for each deployed instance. Each ZoneFlex AP attached to either platform carries an additional $100 license fee.
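
Putting those list prices together, the cost scales linearly with AP count. A rough comparison of the hardware and virtual options, based only on the prices above:

```python
# Deployment cost from the list prices above (our arithmetic)
def hardware_cost(aps: int) -> int:
    return 4995 + 100 * aps     # SmartZone 100 (1 Gbps model) + per-AP licenses

def virtual_cost(aps: int) -> int:
    return 995 + 100 * aps      # one vSZ instance license + per-AP licenses

for aps in (10, 100, 500):
    print(f"{aps:4d} APs: hardware ${hardware_cost(aps):,}, "
          f"virtual ${virtual_cost(aps):,}")
```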

 


Avago Announces PLX PEX9700 Series PCIe Switches: Focusing on Data Center and Racks

One of the benefits of PCIe switches is that they are designed to be essentially transparent. In the consumer space, I would wager that 99% of users do not even know if their system has one, let alone what it does or how it is used. In most instances, PCIe switches help balance multiple PCIe configurations when a CPU and chipset support multiple devices. More advanced situations might include multiplexing PCIe lanes out into multiple ports, allowing more devices to be used and expanding the limitations of the design. For example, the PEX8608 found in the ASRock C2750D4I splits one PCIe x4 link into four PCIe x1 lanes, allowing for four controllers as endpoints rather than just one. Back in 2012 we did a deep dive on the PLX8747, which splits 8 or 16 PCIe lanes into 32 through the use of a FIFO buffer and a mux to allow for x8/x8/x8/x8 PCIe arrangements – the 8747 is still in use today in products like the ASRock X99 Extreme11 (which uses two) and the X99 WS-E/10G (which has one).
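
A toy model makes the fan-out arithmetic clear. The numbers mirror the two examples above; the class itself is just an illustration, not anything from PLX’s tooling:

```python
# Toy model of PCIe switch lane fan-out: a fixed set of upstream lanes
# feeds more downstream lanes than exist upstream, trading aggregate
# bandwidth for port count.

class PcieSwitch:
    def __init__(self, name: str, upstream_lanes: int, downstream_ports: list[int]):
        self.name = name
        self.upstream_lanes = upstream_lanes
        self.downstream_ports = downstream_ports   # lanes per downstream port

    def oversubscription(self) -> float:
        """Ratio of downstream lane capacity to upstream bandwidth."""
        return sum(self.downstream_ports) / self.upstream_lanes

# PEX8608 as used above: one x4 uplink split into four x1 ports
# PLX8747 as used above: x16 uplink split into x8/x8/x8/x8
for sw in (PcieSwitch("PEX8608", 4, [1, 1, 1, 1]),
           PcieSwitch("PLX8747", 16, [8, 8, 8, 8])):
    print(f"{sw.name}: {sw.oversubscription():.1f}x oversubscribed")
```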

Today’s announcement is from Avago, the company that purchased PLX back in June 2014, and covers a new range of PCIe switches focused on the data center and racks: the PEX9700 series. The iterative improvements in PCIe switches should ultimately come in latency and bandwidth, but there are several other features worth noting that might not be obvious from the outside, such as the creation of a switching fabric.

Typically, the PCIe switches we encounter in the consumer space connect one upstream host to several downstream ports, and each port can have a number of PCIe lanes as bandwidth (so four ports can total 16 lanes, etc.). This means there is one CPU host to which the PCIe switch can send the work from the downstream ports. The PEX9700 series is designed to communicate with several hosts at once – up to 24 at a time – allowing direct PCIe-to-PCIe communication, direct memory copies from one host to another, or shared downstream ports. PCIe is typically a host-to-device topology; however, the PEX9700 line allows multiple hosts to come together, with an embedded DMA engine on each port to probe host memory for efficient transfers.

Unlike previous PCIe switches from PLX, the new series also allows for downstream port isolation or containment, meaning that if one downstream device fails, the switch can isolate its data pathway and disable it until it is replaced. This can also be done manually, as the PEX9700 series will also come with a management port which, Avago states, will use software modules for different control applications.
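
Conceptually, the containment behavior looks something like the sketch below. The management interface here is entirely hypothetical – Avago exposes this through the management port and its software modules, not through a Python API:

```python
# Sketch of downstream-port containment: isolate only the failed port,
# leaving the rest of the switch routing traffic normally. All names
# here are invented for illustration.

class ManagedSwitchPort:
    def __init__(self, port_id: int):
        self.port_id = port_id
        self.enabled = True

    def on_device_error(self):
        # Contain the failure: stop routing traffic to this port only
        self.enabled = False
        print(f"Port {self.port_id} isolated; awaiting replacement")

    def on_device_replaced(self):
        # Hot-swap complete: bring the port back into service
        self.enabled = True
        print(f"Port {self.port_id} re-enabled")

port = ManagedSwitchPort(port_id=3)
port.on_device_error()
port.on_device_replaced()
```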

In the datacenter and within rack infrastructure, redundancy is a key consideration. As the PEX9700 switches allow host-to-host communication, they also allow control from multiple hosts, letting one host take over in the event of a failure. The switches can also agglomerate and talk to each other, allowing for multiple execution routes, especially with shared IO devices or in multi-socket systems for GPGPU use. Each switch also offers a level of hot-plugging and redundancy, allowing disabled devices to be removed, replaced and restarted. When it comes to IO, read requests that fail mid-flow are fed back to the host as information on the failed attempts, allowing instant retries when a replacement device is placed back into the system.

Avago is stating that the 9700 series will have seven products ranging from 5 to 24 ports (plus one for a management port) and from 12 to 97 lanes. The lineup also includes hot-plug capability, tunneled connections, clock isolation and, as mentioned before, downstream port isolation. These models are currently in full-scale production, as per today’s announcement, using TSMC’s 40nm process. In a briefing call today with Akber Kazmi, the Senior Product Line Manager for the PEX9700 series, he stated that validation of the designs took the best part of eight months, but that relevant tier-one customers already have the silicon in hand to develop their platforms.

For a lot of home users, this doesn’t mean much. We might see one of these switches in a future consumer motherboard focused on dual-socket GPGPU work, but the heart of these features lies in the ability for multiple nodes to access data quickly within a specific framework without having to invest in expensive technologies such as InfiniBand. Avago is stating a 150 ns latency per hop, with bandwidth ultimately limited by the upstream data path – the PCIe switch moves the bandwidth around to where it is most needed depending on downstream demand. The PEX9700 switches also allow for direct daisy-chaining or a cascading architecture through a backplane, reducing the cost of big switches and allowing for a peak bandwidth between two switches of a full PCIe 3.0 x16 interface, scaling up to 128 Gbps (minus overhead).
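
Working through those numbers: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so a full x16 link is 128 Gbps raw, or roughly 126 Gbps after encoding overhead (protocol overhead reduces this further), and each additional switch hop adds 150 ns:

```python
# Arithmetic behind the quoted figures (our calculation, not Avago's)
lanes = 16
raw_gbps = 8 * lanes                    # 8 GT/s per lane -> 128 Gb/s raw
effective_gbps = raw_gbps * 128 / 130   # 128b/130b encoding overhead
print(f"Effective inter-switch bandwidth: ~{effective_gbps:.1f} Gb/s")

# Cumulative latency for a few cascaded hops at 150 ns each
for hops in (1, 2, 3):
    print(f"{hops} hop(s): {hops * 150} ns")
```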

Personally, the GPGPU situation interests me a lot. In a dual-socket system with each socket feeding multiple GPUs through one PEX9700 switch per CPU (in this case, the PEX9797), with the switches interconnected, GPUs on one socket can talk to GPUs on the other without having to go all the way back up to the CPU and across the QPI bus. This saves both latency and bandwidth, and each of the PCIe switches can be controlled via its management port.

The PEX9700 series of switches bucks the status quo of requiring translation layers such as NICs or InfiniBand for host-to-host and host-to-device communication and everything in between, which is what Avago hopes the product stack will accomplish. The main areas where Avago sees a benefit are latency (fewer translation layers for communication), cost (scaling up to 128 Gbps minus overhead without a separate interconnect), and power (a single PEX9700 chip has a 3W-25W power rating), with energy cost savings on top of that. On paper at least, the capabilities of the new range could be disruptive. Hopefully we’ll get to see one in the flesh at Computex from Avago’s partners, and we’ll update you when we do.

Source: Avago