Cloud Makes Capacity Planning Harder: 3 Fight-Back Tips

13.01.2011
One of the issues we focus on in conversations with companies evaluating a move to cloud computing is the importance -- and challenge -- of capacity planning in a cloud environment. The bottom line is that cloud computing is going to make capacity planning much more difficult for CIOs who intend to keep all or most of their company's computing in internal data centers. Moreover, utilization becomes a high-risk topic, since utilization risk is shifted onto the cloud operator.

Why is this?

As a starting point, it's important to recognize that the scale of computing -- the sheer number of applications that an organization runs -- is about to explode. I have noted before that we in the industry typically underestimate, by a factor of 100 or more, the growth unleashed by new computing platforms. A recent comment by longtime analyst Amy Wohl on a Google Groups mailing list reinforced my perspective: "On the day the IBM PC was announced I had a one-on-one call with IBM about their new product (I couldn't get to the press announcement) and they assured me the total market for PCs was 5,000." Which explains why I found laughable the recent suggestion by Bernstein Research analyst Toni Sacconaghi that server demand will shrink. With all due respect, we are on the cusp of seeing server demand explode as more and more applications get envisioned, funded, and implemented. The odds of server demand shrinking are vanishingly small.

Which brings us to the issue of capacity planning. The traditional mode of capacity planning -- focused on new servers purchased through application-level capital investment requests -- is finished off by cloud computing. If an application group assumes that resources will be available on demand, and can be paid for by assigning an operating budget funding code, there is far less advance insight into total demand. Put another way, fewer signals about total demand are available, and the timeframe of insight is much shorter.

Some organizations feel they have dealt with this by imposing a limit on the number of servers that can be provisioned at any one time. The thinking is that a limit of, say, 10 servers is imposed, and any larger request has to go through an exception-handling process (a sketch of the rule appears below). Which is fine, but the assumption underpinning it is that the number of applications will remain relatively stable -- and if the total resources each app can request are limited, total resource demand can be limited, thereby making capacity planning manageable.
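
To make that policy concrete, here is a minimal sketch in Python, assuming a hypothetical provisioning front end and the 10-server cap mentioned above; it is not any particular cloud platform's API, just the shape of the rule.

```python
# Minimal sketch of a per-request provisioning cap with an exception path.
# PER_REQUEST_SERVER_CAP and handle_provisioning_request are hypothetical
# names for illustration, not any real cloud API.

PER_REQUEST_SERVER_CAP = 10  # the "say, 10 servers" limit from the text

def handle_provisioning_request(app_name: str, servers_requested: int) -> str:
    """Auto-approve small requests; route larger ones to manual review."""
    if servers_requested <= PER_REQUEST_SERVER_CAP:
        return f"auto-approved: {servers_requested} servers for {app_name}"
    return f"exception queue: {servers_requested} servers for {app_name} need review"

if __name__ == "__main__":
    print(handle_provisioning_request("reporting-app", 4))    # within the cap
    print(handle_provisioning_request("analytics-app", 25))   # triggers review
```

Note that the cap constrains each request, not the number of requesters -- which is exactly the gap the next paragraph describes.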

However, the assumption that the total number of applications is going to remain stable is tenuous at best. With lower costs, no need for application-level capital funding, and lower friction in obtaining resources, the total number of applications is undoubtedly going to skyrocket. So even if each application can request only a limited number of resources, capacity planning still becomes problematic once the number of applications grows dramatically, as the quick arithmetic below shows.
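
A back-of-the-envelope calculation makes the point; the application counts here are purely illustrative assumptions, not forecasts.

```python
# Illustrative arithmetic: a fixed per-application cap does not bound total
# demand if the number of applications keeps growing. Numbers are assumptions.

PER_APP_CAP = 10  # servers each application may request at once

for app_count in (50, 200, 1000):
    worst_case = app_count * PER_APP_CAP
    print(f"{app_count:>5} apps x {PER_APP_CAP} servers each = {worst_case:>6} servers of potential demand")
```

Even with the per-request cap in place, a tenfold jump in application count is a tenfold jump in worst-case demand -- and that is the variable the cap does nothing to control.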