Enterprise Windows: Clustering the Microsoft way

10.05.2006
I can't talk about the embargoed Longhorn meeting I had Tuesday, except to say that I got one Microsoft rep to say that Redmond doesn't think there'll be any more service packs after Longhorn (http://iws.infoworld.com/iws?t=all&s=freq&q=longhorn) sees daylight. But then he burst out laughing, so I'm not sure how reliable that is.

So instead of the wealth of ultrasecret Longhorn-maybe info I just got, let's talk about the not-so-hush-hush peek I got at the Microsoft Compute Cluster Server 2003 (MCCS2K3) (http://iws.infoworld.com/iws?t=all&s=&q=microsoft+compute+cluster) release candidate. A buddy got this installed at his lab because doing it at mine would have sliced too deeply into my beer-drinking schedule. Fortunately, he's smarter than I am anyway, so we're probably better off all around.

Microsoft is positioning MCCS2K3 to run on what it terms "inexpensive machines," but it's also pushing 64-bit CPU power. Then again, considering what companies such as Dell and HP are selling Core Duos for nowadays, those two things may not be so far apart. But if you're looking to run a cluster on 3-year-old Pentium 4s, you'll need to reconsider.

Installing Compute Cluster means beginning with a head node. Strangely, even though the head node is the beating heart of your computing cluster, MCCS2K3 doesn't support head-node fail-over. The head node is a lone wolf, a single machine leading the cluster pack. (I asked to call ours Oliver, but the jealous weenies voted me down.) The upshot is that if your head node does a face plant, it takes your cluster along for the splat, so make sure that this machine at least is running on robust hardware -- something with a RAID system and preferably multiple CPUs.

After that, the rest of the cluster can be as rickety as a 64-bit-capable workstation can be. Further, you can attach as many nodes as you like, according to the docs. No word on how licensing might affect that last statement, but I've got some Microsofties looking into that for me.
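
If you're wondering what those nodes actually chew on, it's mostly MPI jobs -- MCCS2K3 ships an MPICH2-derived MS-MPI stack alongside its scheduler. Here's a bare-bones sanity check we used: a minimal MPI "hello" in C that each rank runs on whatever node the scheduler hands it. Treat the submission side as a sketch -- something along the lines of "job submit ... mpiexec hello.exe" from the head node -- and let the RC docs have the final word on the exact switches.

/* hello_mpi.c -- minimal MPI sanity check for a new compute cluster.
   Each rank reports its number and which node it landed on. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char nodename[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                      /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);        /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);        /* how many in the job? */
    MPI_Get_processor_name(nodename, &namelen);  /* which compute node? */

    printf("Hello from rank %d of %d on %s\n", rank, size, nodename);

    MPI_Finalize();                              /* shut the runtime down */
    return 0;
}

If every node in the job echoes back its name, the scheduler, the interconnect, and the MPI stack are all at least on speaking terms.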

The docs also blithely refer to 10 Gigabit Ethernet as the preferred cluster interconnect medium. But when I went outside to check my 10Gig infrastructure tree, there weren't any ripe 10Gig switches or NICs hanging off the branches, so we just stuck with straight GbE over copper.