LeftHand boosts its SAN/iQ

26.02.2007
Many companies would embrace the superior performance and enhanced reliability of clustered storage were it not for the fear that adoption would cost a fortune and lock them into proprietary hardware.

Although that perception is, unfortunately, all too often justified, LeftHand Networks has long offered a clustered storage solution dubbed SAN/iQ that runs not only on the company's proprietary hardware but also on plain vanilla gear such as HP ProLiant servers. Further, it offers a range of much-needed features, including replicas and snapshots. SAN/iQ also provides a set of powerful automated capabilities, such as load balancing and reallocation of existing volumes across nodes, that remove a significant burden and cost from storage administration.

Last fall, LeftHand released SAN/iQ 6.6, adding support for larger disk drives to its clustering capabilities, as well as a redesigned management console with a treelike interface that makes administering a storage cluster more intuitive.

My test environment consisted of four HP ProLiant DL380 G4 servers, each fitted with six 146GB SCSI drives. The fifth machine in my cluster was an NSM 260 (Network Storage Module), a proprietary storage array from LeftHand with twelve 250GB SATA drives. Each machine ran LeftHand Networks SAN/iQ, which made it an active node of the clustered iSCSI storage network.

Throughout my testing, flexibility jumped out as a clear differentiator between SAN/iQ and traditional nonclustered solutions. For example, you can use SAN/iQ's management tools to easily combine the capacity of two or more nodes without disrupting live applications. Another remarkable feature: When you add a node, SAN/iQ automatically redistributes existing volumes across it. This translates into improved performance, because I/O operations are spread over more disks and more controllers.
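LeftHand doesn't publish the placement algorithm behind that redistribution, but the effect is easy to picture. The Python sketch below is purely illustrative: the node names and the naive round-robin layout are my assumptions, not SAN/iQ internals. It shows how restriping a volume's extents over a larger node set spreads its blocks across more spindles and controllers:

```python
# Illustrative only: SAN/iQ's real restriping logic isn't public, and the
# node names here are placeholders. A volume is modeled as numbered
# extents striped round-robin over the cluster's nodes.

def stripe(extent_count, nodes):
    """Map each volume extent to a node, round-robin."""
    return {e: nodes[e % len(nodes)] for e in range(extent_count)}

def restripe(layout, nodes):
    """List the (extent, old_node, new_node) moves after membership changes."""
    target = stripe(len(layout), nodes)
    return [(e, layout[e], target[e]) for e in sorted(layout) if layout[e] != target[e]]

# The review cluster: four DL380s plus the NSM 260, then a node is added.
nodes = ["dl380-1", "dl380-2", "dl380-3", "dl380-4", "nsm260"]
layout = stripe(24, nodes)                      # a volume of 24 extents
moves = restripe(layout, nodes + ["dl380-5"])
print(f"{len(moves)} of 24 extents migrate to the rebalanced layout")
```

A production system would presumably minimize data movement rather than recompute a full round-robin layout as this toy does, but the payoff is the same: once the rebalance finishes, reads and writes to the volume fan out over one more set of disks and one more controller.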

Admins can access the system's various features via SAN/iQ's management console, a Java application that runs on either a Windows or a Linux machine connected to the iSCSI network. Through the console, admins can combine nodes into a cluster; create a new volume from that pool of storage and assign it to an application server; set the level of data redundancy for each volume; and, if needed, limit the bandwidth used for background tasks such as restriping a volume to a new node.
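From the application server's point of view, the volume that comes out of this workflow is a standard iSCSI target. Assuming a Linux host with the stock open-iscsi tools installed (the portal address below is a placeholder for the cluster's IP; the console-side steps have no public API and aren't shown), attaching the volume looks roughly like this:

```python
# A minimal sketch of the application-server side, using the standard
# open-iscsi command-line tools via subprocess. The portal address is a
# placeholder for the SAN/iQ cluster's iSCSI portal.
import subprocess

PORTAL = "10.0.0.10:3260"  # placeholder address

# 1. Discover the targets (volumes) the cluster presents to this server.
out = subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    capture_output=True, text=True, check=True,
).stdout

# 2. Log in to each discovered target. Discovery lines have the form
#    "ip:port,tpgt iqn...", so split off the portal and the target name.
for line in out.splitlines():
    portal_part, iqn = line.split()
    portal = portal_part.split(",")[0]
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"],
        check=True,
    )
```

Once logged in, the volume shows up as an ordinary SCSI block device; the clustering underneath, replication level and restriping included, stays invisible to the application.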