Review: Cisco's Unified Computing System wows

Revolutionary. Cutting edge. State of the art. These words and phrases are bandied about for so many products in the IT field that they've become useless, bland, expected. The truth is that truly revolutionary products are few and far between. That said, Cisco's Unified Computing System fits the bill.

To fully understand what Cisco has done requires that you dispense with preconceived notions of blade servers and blade chassis. Rewire your concepts of KVM, console access, and network and storage interfaces. Reorganize how you think of your datacenter as islands of servers surrounded by storage arrays and networks. Cisco had the advantage of starting from scratch with a blade-based server platform, and it's made the most of it.


In short, UCS is built around a familiar concept -- the blade chassis -- but rearchitects it to enable both greater manageability and greater scalability. For the architectural background, read my summary, "How Cisco UCS reinvents the datacenter." This article focuses on the nitty-gritty details of UCS, and my experiences working with the system in a recent visit to Cisco's San Jose test labs.

UCS building blocks

A Cisco UCS chassis provides eight slots for half-width blades, each equipped with two Intel Nehalem processors, up to 96GB of RAM with 8GB DIMMs, two SAS drive slots, an LSI Logic SAS RAID controller, and a connection to the blade backplane. In addition, each blade is outfitted with a Cisco Converged Network Adapter, or CNA. The CNA is essentially the heart of the system, the component that makes UCS unlike traditional blade systems.

The CNA is a mezzanine board that fits a QLogic 4Gb Fibre Channel HBA and an Intel 10Gb Ethernet interface on a single board, connecting directly to the chassis network fabric. The presentation to the blade is two 10Gb NICs and two 4Gb FC ports, with two 10Gb connections to the backplane on the other side. The initial release does not support multiple CNAs per blade, nor does it strictly require one. But the CNA is integral to how the entire UCS platform operates, as it essentially decouples the blade from traditional I/O by pushing storage and network through two 10Gb pipes. This is accomplished through the use of FCoE (Fibre Channel over Ethernet). Everything leaving the blade is thus Ethernet, with the FC traffic broken out by the brains of the operation, the Fabric Interconnects (FIs).

So we have some number of CNA-equipped blades in a chassis. We also have two four-port 10Gb fiber interface cards in the same chassis and two FIs downstream that drive everything. It's not technically accurate to call the FIs switches, since the chassis function more like remote line cards populated with blades. No switching occurs in the chassis themselves; they are simply backplanes for blades that have direct connections to the FIs. Physically, the FIs are identical in appearance to Cisco Nexus 5000 switches, but they have more horsepower and storage to handle the FCoE to FC breakout tasks. They offer 20 10Gb ports, and they support a single expansion card each.

The expansion cards come in a few different flavors, supporting either four 4Gb FC ports and four 10Gb Ethernet ports, or six 10Gb Ethernet ports, or eight 4Gb FC ports. This is in addition to the twenty 10Gb ports built into each FI. There are also three copper management and clustering ports, as well as the expected serial console port. The FI is wholly responsible for the management and orchestration of the UCS solution, running both the CLI and GUI interface natively -- no outside server-based component is required.

Test Center Scorecard

                  Management  Performance  Reliability  Scalability  Interoperability  Value  Overall score
Weighting         20%         20%          20%          20%          10%               10%
Cisco UCS 1.0     8           10           8            11           9                 9      9.2 (excellent)

Connecting the dots

Perhaps a mental picture is in order. A baseline UCS configuration has two FIs running in active/passive mode for management, with all network communication running active/active across both FIs and each chassis. (Think of a Cisco Catalyst 6509 switch chassis with redundant supervisors -- even if one supervisor is standby, the Ethernet ports on that supervisor are usable. The two FIs work basically the same way.) They are connected to each other with a pair of 1Gb Ethernet ports, and they have out-of-band management ports connected to the larger LAN. The blade chassis is connected by two or four 10Gb links from each FEX (Fabric Extender) in the chassis, a set to each FI. That's it. A fully configured chassis with 80Gb of uplinks will have four power cords and eight SFP+ cables coming out of it -- nothing more. Conceivably, an entire rack of UCS chassis running 56 blades could be driven with only 56 data cables, or 28 if only four 10Gb links are required on each chassis.
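The cable math above is easy to verify with a quick back-of-the-envelope calculation. This is a hypothetical helper, not part of any Cisco tooling; it just encodes the topology described: two FEXes per chassis, each running some number of 10Gb SFP+ links to its FI.

```python
def chassis_cables(chassis_count, links_per_fex, power_cords=4):
    """Count the cables leaving a rack of UCS chassis.

    Each chassis holds two FEXes, and each FEX runs
    links_per_fex 10Gb SFP+ links to its Fabric Interconnect.
    """
    data = chassis_count * 2 * links_per_fex   # SFP+ data cables
    power = chassis_count * power_cords        # power cords
    return data, power

# A full rack of 7 chassis (56 half-width blades):
print(chassis_cables(7, 4))  # 80Gb per chassis -> (56, 28)
print(chassis_cables(7, 2))  # 40Gb per chassis -> (28, 28)
```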

From there, the pair of FIs are connected to the LAN with some number of 10Gb uplinks, and the remainder of the ports on the FI are used to connect to the chassis. A pair of FIs can drive 18 chassis at 40Gb per chassis with two 10Gb uplinks to the datacenter LAN, allowing for eight 4Gb FC connections to a SAN from an eight-port FC expansion card.

The basis of the UCS configuration is the DME (Data Management Engine), a memory-based relational database that controls all aspects of the solution. It is itself driven by an XML API that is wide open. Everything revolves around this API, and it's quite simple to script interactions with the API to monitor or perform every function of UCS. In fact, the GUI and the CLI are basically shells around the XML configuration, so there's no real disparity between what can and can't be done with the CLI and GUI, or even external scripts. UCS is a surprisingly open and accessible system. Following that tenet, backing up the entirety of a UCS configuration is simple: The whole config can be sent to a server via SCP, FTP, SFTP, or TFTP, although this action cannot be scheduled through the GUI or CLI.
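To give a flavor of how approachable the XML API is, here is a minimal sketch of a session login in Python. The method and attribute names (aaaLogin, outCookie, the /nuova endpoint) follow UCS XML API conventions, but treat the details as illustrative rather than definitive; only the pure string-building and parsing is exercised here, with the HTTP call left as a stub.

```python
import xml.etree.ElementTree as ET
import urllib.request

def build_login(user, password):
    # UCS XML API methods are bare XML elements POSTed over HTTP
    return '<aaaLogin inName="%s" inPassword="%s"/>' % (user, password)

def parse_cookie(response_xml):
    # A successful aaaLogin reply carries the session token in outCookie;
    # that cookie is then attached to every subsequent method call
    return ET.fromstring(response_xml).get("outCookie")

def call(fi_host, payload):
    # The management endpoint conventionally lives at http://<FI>/nuova
    req = urllib.request.Request("http://%s/nuova" % fi_host,
                                 data=payload.encode())
    return urllib.request.urlopen(req).read().decode()

# Example reply shape (abbreviated):
reply = '<aaaLogin response="yes" outCookie="1253-abc"/>'
print(parse_cookie(reply))  # 1253-abc
```

Because the GUI and CLI sit on top of this same interface, anything you can click through can also be scripted this way.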

The initial setup of a UCS installation takes about a minute. Through the console, an IP is assigned to the out-of-band management interface on the initial FI, and a cluster IP is assigned within the same subnet. A name is given to the cluster, admin passwords are set, and that's about it. The secondary FI will detect the primary and require only an IP address to join the party. Following that, pointing a browser at the cluster will provide a link to the Java GUI, and the UCS installation is ready for configuration.

Build me up, Scotty

The first order of business is to define the ports on the FIs. They can either be uplink ports to the LAN or server ports that connect to a chassis. Configuring these ports is done by right-clicking on a visual representation of each FI and selecting the appropriate function. It's simple, but also cumbersome because you cannot select a group of ports; you have to do them one by one. Granted, this isn't a common task, but it's annoying just the same. Once you've defined the ports, the chassis will automatically be detected, and after a few minutes, all the blades in the chassis will be visible and ready for assignment.

This is where it gets interesting. Before anything happens to the blades, various pools and global settings must be defined. These pools concern Fibre Channel WWNN (World Wide Node Name) and WWPN (World Wide Port Name) assignments, Ethernet MAC pool assignments, UUIDs (Universally Unique Identifiers), and management IP pools for the BMC (Baseboard Management Controller) interfaces of the blades. These ranges are entirely user-defined, as you can assign whatever range of addresses you like for the UUID, WWNN, WWPN, and MAC pools. In fact, it's so wide open that you can get yourself into trouble by inadvertently overlapping these addresses if you're not careful. Assigning pools is extremely simple, though, accomplished by specifying a starting address and the number of addresses to put into the pool. Make sure you get it right, however, because you cannot modify a pool later; you can only specify another pool using an adjacent range of addresses.
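The overlap pitfall is easy to illustrate. This hypothetical helper mimics how a pool expands from a starting address and a count (00:25:b5 is a prefix commonly used for UCS MAC pools, but any prefix works), and shows how two carelessly chosen ranges collide:

```python
def mac_pool(start_mac, size):
    """Expand a pool from a starting MAC and a count, as UCS does."""
    base = int(start_mac.replace(":", ""), 16)
    return [format(base + i, "012x") for i in range(size)]

def overlaps(pool_a, pool_b):
    # The GUI won't stop you from defining overlapping ranges,
    # so a sanity check like this is worth scripting
    return bool(set(pool_a) & set(pool_b))

a = mac_pool("00:25:b5:00:00:00", 64)
b = mac_pool("00:25:b5:00:00:20", 64)   # starts 32 addresses in -- collides
print(overlaps(a, b))  # True
```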

You also need to worry about firmware revisions. You can load several different versions of firmware for all blade components into the FIs themselves and assign those versions to custom definitions, ensuring that certain blades will run only certain versions of firmware for every component, from the FC HBAs to the BIOS of the blades themselves. Because UCS is so new, there are only a few possible revisions to choose from, and loading them on the FIs can be accomplished through FTP, SFTP, TFTP, and SCP. Once present on the FIs, firmware can then be pushed to each blade as required. You also can set up predefined boot orders -- say, CD-ROM, then local disk, followed by an FC LUN, and PXE (Pre-boot Execution Environment). These can also be assigned to each server instance as required and can include only one element if desired.

You can also define VLANs to present to the blades and choose which VLAN should be native. It's assumed that each server will trunk those 10Gb interfaces, but native VLAN assignment means this isn't a hard-and-fast requirement. In production, it's likely that each blade will trunk, so that assumption is valid. However, the FIs don't play nice with VTP (VLAN Trunk Protocol), so VLAN definitions are manual, not derived from the rest of the switched LAN. If you have a pile of VLANs that you need to present to your servers, be ready for lots of clicking and typing. Cisco hopes to remedy this in an upcoming release.

There are a few other odds and ends, such as scrub policies. These exist to determine what action to take when a service policy is pulled from a physical blade with local disk -- in other words, whether the local disk should be erased or left alone. Unfortunately, this "scrub" really isn't -- it just destroys the partition table, without actually overwriting the disks.

Once you've created your pools, you can start building your blades into actual servers. The options for building out servers are simple: Either a blade boots from the SAN or PXE, or it boots from local disk. Managing storage is outside the scope of UCS, so let's assume you have a competent storage administrator, and you need a bunch of LUNs assigned for your budding UCS installation. Through the UCS GUI, you can pull up a simple list of all WWNN and WWPN assignments and immediately export that list to CSV, making it extremely simple to pass that information off to the admin for the storage configuration. Talk about handy.
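That CSV export drops straight into scripts on the storage side. The column names below are hypothetical (the real export's headers may differ), but the pattern holds: pull the WWPNs out and hand them to whatever drives your zoning or LUN masking.

```python
import csv
import io

# Hypothetical export shape -- the real column names may differ
export = """server,wwnn,wwpn
blade-1,20:00:00:25:b5:00:00:01,20:00:00:25:b5:aa:00:01
blade-2,20:00:00:25:b5:00:00:02,20:00:00:25:b5:aa:00:02
"""

# Collect the WWPNs the storage admin needs for zoning and LUN masking
wwpns = [row["wwpn"] for row in csv.DictReader(io.StringIO(export))]
print(wwpns)
```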

But I digress -- we haven't even built a server yet.

Service profiles

Server builds are defined in service profiles, which are themselves derived from service profile templates. Service profile templates allow you to define specific server instances and automatically provision one or more servers. Once you've created one global profile, you can duplicate that profile to however many servers you may need to fulfill that task. The configuration profiles determine the firmware revision for each blade component; the WWNN, WWPN, and MAC pools to choose from; the boot orders you may have defined; and even the boot policy -- boot from SAN, local, or what have you. All of this is surprisingly simple to organize.
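Conceptually, a template is just a bundle of settings that gets stamped out per server. This sketch models that idea in plain Python; the field names and the firmware version string are hypothetical, chosen only to mirror the settings the paragraph above describes.

```python
import copy

# Hypothetical template fields mirroring a UCS service profile template
template = {
    "firmware": "1.0(1e)",           # assumed version string
    "boot_order": ["san", "pxe"],
    "wwpn_pool": "prod-wwpn",
    "mac_pool": "prod-mac",
}

# Stamp out N service profiles from one template, as UCS does
profiles = {f"web-{i:02d}": copy.deepcopy(template) for i in range(1, 4)}
print(sorted(profiles))  # ['web-01', 'web-02', 'web-03']
```

Each instantiated profile then draws its own unique WWNN, WWPN, and MAC addresses from the referenced pools, which is what lets one template safely provision many blades.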

You can also call upon the Ethernet and FC port designations you created earlier -- such as eth0, eth1, fc0, and fc1 -- that correspond to each FI, thus providing redundancy across each blade. I did run across a few bugs here; for example, port assignments that were clearly defined as Fabric A and Fabric B somehow merged into Fabric A when that template was applied to a server and had to be manually corrected. I was assured that this bug was being actively addressed. In the grand scheme of things, it's minor and highlights the fact that this is a 1.0 release.

There are two forms of service profile templates: initial and updating. Each has specific pros and cons, and it's unfortunately not possible to switch forms after a profile has been created; if you begin with an initial profile, the profile cannot later be used to propagate updates.
