Why 1TB disks will foster virtualization

26.01.2007
The first wave of network-based storage virtualization fell apart. It was too new and too untested, and it required companies to install it in one of the most sensitive parts of the corporate infrastructure: in front of expensive storage arrays and behind mission-critical, highly visible applications. That was a strategy destined for failure.

Fast forward to today, and network-based storage virtualization is hanging tough, growing up and spawning a second wave that this time should gain some momentum.

The reasons for this growing momentum are practical. Almost every storage manager, in shops of any size, now manages tens or hundreds of servers with multiple terabytes behind them. And with 1TB disk drives now a reality, the amount of storage under management is certain to grow.

Yet growing capacities can cause storage managers to take their eye off the ball. Configuring and installing new capacity and decommissioning old storage arrays are time-consuming tasks, and they turn routine work like data migrations into a logistical nightmare. Ten-gigabit Ethernet and iSCSI complicate this scenario.
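To get a feel for why multi-terabyte migrations become a logistical problem, consider a back-of-the-envelope calculation (a minimal Python sketch; the 60 percent usable-bandwidth figure and the sample capacities are illustrative assumptions, not measurements):

```python
# Rough migration-time estimate: copying N terabytes over a network link.
# The 0.6 efficiency factor is an illustrative assumption for protocol
# overhead and contention, not a measured value.

TB = 10**12  # bytes per terabyte (decimal, as drive vendors count)

def migration_hours(capacity_tb: float, link_gbps: float,
                    efficiency: float = 0.6) -> float:
    """Hours needed to copy capacity_tb terabytes over a link_gbps link."""
    usable_bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return capacity_tb * TB / usable_bytes_per_sec / 3600

for tb in (1, 10, 50):
    print(f"{tb:>3} TB: {migration_hours(tb, 1):6.1f} h over 1 GbE, "
          f"{migration_hours(tb, 10):5.1f} h over 10 GbE")
```

At these rates, a 50TB migration occupies a Gigabit Ethernet link for more than a week, and even 10-Gigabit Ethernet needs the better part of a day. Every added array multiplies that planning burden.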

Many corporate servers are still not SAN-connected because it is too expensive to connect them. Ten-gigabit Ethernet and iSCSI remove that cost barrier, but with more server and storage network connections comes more storage complexity. Unfortunately, low-cost network connectivity does not translate into simpler or cheaper storage management. If anything, it has the opposite effect.

So why will network-based storage virtualization take off this time? As companies SAN-connect more of their servers over lower-cost Ethernet, their storage provisioning and data migration problems will become more acute. That will force many of them to re-evaluate network-based storage virtualization's value proposition and recognize that it now makes sense, where just a few years ago its risks outweighed its rewards.