Review: Brocade's big, fat datacenter fabric switch

DCX Backbone is the cornerstone of Brocade's policy-driven network

At 230 pounds, the Brocade DCX Backbone would be on the lighter end of middle linebackers in the NFL, but it's well built to fill the middle of a storage network. Unveiled in late January, the DCX represents the first deliverable of Brocade's DCF (Data Center Fabric), the company's newly designed architecture that promises a more flexible, easier-to-manage, policy-driven network, one that embraces multiple connectivity protocols and is better able to respond to applications' demands and to support new technologies such as server virtualization.

In Brocade's vision, the DCX Backbone is the cornerstone of that architecture, with specs that suggest a level of performance never attained before. In fact, Brocade assures me that the DCX is capable of sustaining full transfer rates of 8Gbps on the 896 Fibre Channel ports supported in its largest, dual-chassis configuration.

[Photo: Brocade's DCX Backbone]

In addition to FC, the DCX supports just about any other connectivity protocol, including FICON (Fibre Connectivity), FCIP (FC over IP), Gigabit Ethernet, and iSCSI. That versatility brings to mind the SilkWorm Multiprotocol Router, the first product from Brocade aimed at consolidating multiple SANs (see my review, "Building the uber-SAN").

In the belly of the beast

I recently had the chance to visit Brocade's labs in San Jose to see what the DCX can do. Though my test configuration provided plenty of ports to spare, it's interesting to note that the DCX has dedicated ISL (Inter-Switch Link) ports that don't take away from the number of ports available for, say, storage arrays or application servers.

As impressive as the raw specs of the DCX may be, the DCX's most innovative features are software functions that provide better control of bandwidth allocation, let you restrict access to specific ports according to security policies, and allow you to create independent domains to separately manage different areas of the fabric.

I started my evaluation with the bandwidth monitoring features. In a traditional fabric, each connection acts like a garden hose, a passive conduit that has no ability to regulate the flow it carries. With DCX, Brocade's Adaptive Networking option lets you limit the I/O rate on selected ports, a feature that Brocade calls Ingress Rate Limiting.

Here is how it works. In my test configuration Brocade had installed two DCX units: one linked to six HBAs on three hosts, the other linked to a storage array. To better show the traffic management capabilities of the DCX, each host HBA was assigned a dedicated LUN (logical unit number) and a dedicated storage port. The two DCX chassis were connected using two 4Gbps ISLs.

Using a simple Iometer script, I generated a significant volume of traffic on each host. To measure how that traffic spread across the fabric, I invoked Top Talkers, the performance monitoring tool. Brocade's Fabric OS 6.0, which was running on both DCX chassis, adds the ability to define a Top Talkers profile either for specific ports or for the whole fabric.
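For reference, enabling a Top Talkers monitor on an individual port and then listing its busiest flows looks roughly like the following; the perfttmon syntax reflects my notes on Fabric OS 6.0, and the slot/port numbers are purely illustrative:

    perfttmon --add ingress 3/2
    perfttmon --show 3/2

If memory serves, a single command, perfttmon --add fabricmode, switches Top Talkers to fabric-wide monitoring instead of watching individual ports.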

As the name suggests, Top Talkers lists the source-destination pairs that are carrying the most traffic. It told me that I had four source-destination pairs that were exchanging more than 40MB of data per second, and a fifth that was flowing at a trickle.

The next step was to limit the traffic flowing from one of those hosts, in order to open more bandwidth to higher-priority streams. After moving to the CLI of the host-facing DCX, I typed portcfgqos --setratelimit 3/2 200, setting a maximum data rate of 200Mbps on slot 3, port 2 of the DCX, where the HBA in question was connected.

Moving back to the storage-facing DCX, I saw that Top Talkers was showing a much-reduced traffic rate on that pair (fourth in the list), which made more bandwidth available to the other pairs. Now the first three pairs were flowing at 51.1MBps, 45.6MBps, and 45.5MBps, respectively, while that fourth pair (previously running at 43.2MBps) dropped to 14.5MBps.

Zone flow control

The rate limit, which can be applied in 200-megabit increments, is an invaluable tool for preventing damaging bursts of data transfer. A typical real-world use would be to rein in bandwidth-intensive applications such as backups. Rate limits can be flipped on when needed, then reset with a similar command to bring those ports back to their previous, unrestricted flow.
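To give a sense of how lightweight the process is, the two commands below set and then clear a limit on the same slot 3, port 2 used in my test. The reset form is how I recall it from the Adaptive Networking documentation, so take it as a sketch rather than gospel:

    portcfgqos --setratelimit 3/2 200
    portcfgqos --resetratelimit 3/2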

[Image: A schematic of the DCX Backbone]

To prepare for the next test, I needed to reduce the bandwidth between the two DCX chassis to make it easier to exceed its data rate. Therefore, I disabled one of the ISL ports and set the other one to 1Gbps.
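For the record, throttling the inter-chassis bandwidth took just two commands; the slot and port numbers here are illustrative rather than the ones used in Brocade's lab:

    portdisable 4/0
    portcfgspeed 4/1 1

The first takes one ISL port offline; the second locks the remaining ISL at 1Gbps instead of letting it autonegotiate back to 4Gbps.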

Almost immediately, Brocade's EFCM (Enterprise Fabric Connectivity Manager) displayed the link between the two DCX chassis in bright red, indicating traffic congestion.

Sure enough, Top Talkers showed that the transfer rate had plunged to about 22MBps on each pair. Of course, no one in their right mind would choke an ISL like this in real life. But it does help show how you can use the DCX to assign a specific service level to each zone in the fabric.

Strangely enough, Brocade has devised a zone naming convention to assign those QoS levels: A zone named with the QOSH prefix is assigned a high service level, while a zone named with the QOSL prefix gets a low service level. The QOSM prefix identifies a zone with a medium service level, which is also the default for zones that don't follow the naming convention. High, medium, and low reserve 60, 30, and 10 percent of available bandwidth, respectively, for their zones.
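Assuming zone aliases already exist for the devices involved, putting a zone in the high-priority class is just a matter of picking the right name. The aliases and configuration name below are hypothetical, but the zonecreate, cfgadd, and cfgenable commands are standard Fabric OS zoning syntax:

    zonecreate "QOSH_payroll", "payroll_hba; array_port7"
    cfgadd "dcx_cfg", "QOSH_payroll"
    cfgenable "dcx_cfg"

Recreating the same zone with a QOSL prefix would relegate its traffic to the low-priority pool the next time the configuration is enabled.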
