Achieving meaningful storage management metrics

25.04.2006
Years ago, when IT folk and vendors still bandied about the term MIPS, an over-repeated witticism of the time was that the acronym actually stood for "meaningless indication of processor speed."

Effectively meeting business demands through service-level agreements, driving efficiency and addressing corporate governance policies all require relevant metrics. These key performance indicators (KPIs) should provide a meaningful aggregation of lower-level data to enable informed planning and decision making at senior levels.

Determining this information is not easy. Identifying appropriate metrics that truly provide the right insights and then assembling the necessary data from the disparate sources within the storage mosaic is a major endeavor.

Here are three examples of storage metric challenges:

Cost per gigabyte: Cost modeling is an arcane science. We have seen (and even developed) some extremely elaborate models, but effective cost models need not be overly complex to measure what is needed for a particular environment. They must be developed in conjunction with the finance and purchasing functions within the organization, and they must accurately reflect all significant operational and technical costs. Administration and management, port and bandwidth requirements, and data-policy factors can dwarf the cost of spinning disk.
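To make the point concrete, here is a minimal sketch of a fully loaded cost-per-gigabyte calculation. All category names and dollar figures are hypothetical placeholders, not benchmarks; a real model would be built with finance and purchasing, as noted above.

```python
# Illustrative only: a fully loaded cost-per-GB model with made-up numbers.
# The key idea is that disk hardware is just one line item among many.

def cost_per_gb(usable_gb: float, annual_costs: dict) -> float:
    """Divide the sum of all annual cost components by usable capacity."""
    if usable_gb <= 0:
        raise ValueError("usable capacity must be positive")
    return sum(annual_costs.values()) / usable_gb

# Hypothetical annual costs for 50 TB (51,200 GB) of usable capacity:
costs = {
    "hardware_depreciation": 120_000,
    "administration_and_management": 180_000,  # often dwarfs the disk itself
    "san_ports_and_bandwidth": 40_000,
    "data_protection_policies": 60_000,        # backup, replication, retention
}

print(f"${cost_per_gb(51_200, costs):.2f} per GB per year")
```

Note that in this invented example the spinning disk accounts for less than a third of the total -- exactly the distortion a naive cost-per-gigabyte figure hides.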

Backup success rate: Knowing that last night's backups succeeded is important -- for a backup administrator. But it doesn't go far enough. From a business perspective, what is critical to know, and all too often unreported, is the overall recoverability status of an application. Deriving this information requires more than the backup success rate. It demands an understanding of application interdependencies, application-to-server mapping and integration of other data-protection components, such as snapshots, split mirrors and replicated volumes.
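The roll-up from component status to application recoverability can be sketched as follows. This is not any vendor's API; the application name, server mapping and protection components are hypothetical, and it assumes an application is recoverable only if every server it depends on has at least one current protection copy.

```python
# Illustrative sketch: rolling component-level protection status up to an
# application-level recoverability answer. All names are hypothetical.

# Application-to-server mapping (the interdependencies noted above).
app_components = {
    "order_entry": ["db_server", "app_server", "web_server"],
}

# Status of each data-protection component per server.
protection_status = {
    "db_server":  {"backup": True,  "snapshot": True,  "replica": True},
    "app_server": {"backup": True,  "snapshot": False, "replica": True},
    "web_server": {"backup": False, "snapshot": False, "replica": False},
}

def app_recoverable(app: str) -> bool:
    """Recoverable only if every dependent server has at least one
    successful protection copy (backup, snapshot or replica)."""
    return all(
        any(protection_status[server].values())
        for server in app_components[app]
    )

print("order_entry recoverable:", app_recoverable("order_entry"))
```

In this made-up scenario the backup success rate looks respectable -- two of three servers backed up -- yet the application as a whole is unrecoverable because one server has no protection copy at all, which is precisely the gap a component-level metric conceals.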