How to Track Cost Allocation for Cloud Apps

25.09.2012
One of the most interesting aspects of cloud computing is the way it changes cost allocation over the lifetime of an application. Many people understand that pay-as-you-go is an attractive cost model, but fail to grasp the implications the new cost allocation model has for IT organizations.

The pay-as-you-go model addresses several obvious and painful limitations of the previous model, which was based on asset purchase; in other words, prior to application deployment, a significant capital investment had to be made to purchase computing equipment (servers, switches, storage, and so on).

However, that approach had one big advantage: once the investment was made, the financial decision was over. Assuming the application obtained the necessary capital, no further financial commitment was needed. Of course, this led to utilization problems, as applications commonly used only single-digit percentages of the computing resources assigned to them, but there were no ongoing bills or invoices for the application's resources.

Many people are excited about cloud computing because it uses a different cost allocation model over the lifetime of an application. Instead of a large upfront payment, you pay throughout the lifetime of the application; moreover, you pay only for the resources actually used, thereby avoiding the underutilized capital investment typical of the previous approach.

But pay-as-you-go cuts both ways: a resource left running after it's no longer needed keeps generating charges, hour after hour. That's a lot of wasted money. Add in the likelihood that these resources are not being tracked, and that spun-up instances are often started and then forgotten about, and organizations can easily rack up months or years of extra costs.
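
To make the risk concrete, here is a back-of-the-envelope calculation. The hourly rate, instance count and duration are purely illustrative assumptions, not figures from any provider's price list:

```python
# Back-of-the-envelope cost of forgotten instances.
# All numbers below are assumptions for illustration only.
HOURLY_RATE = 0.10          # assumed USD per instance-hour
HOURS_PER_MONTH = 24 * 30   # roughly 720 hours

forgotten_instances = 5     # instances started and never shut down
months_unnoticed = 6        # how long they run before anyone notices

wasted = forgotten_instances * HOURLY_RATE * HOURS_PER_MONTH * months_unnoticed
print(f"Estimated waste: ${wasted:,.2f}")   # Estimated waste: $2,160.00
```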

So what is the right approach for IT organizations to realize the benefits of cloud computing, but avoid the unfortunate cost effects outlined above? Here are five critical items to pay attention to:

1. Design: Your application must be designed so that the appropriate level of resources can be assigned and used. Think of this as a "just-in-time computing resource." This implies that the application must be designed as a collection of small, finely grained resources that can be added or subtracted as application load dictates.

Instead of one very large instance, the right design approach is to use multiple smaller instances whose number can grow or shrink as appropriate. Of course, this requires that the application handle adding or subtracting resources gracefully while in operation. There are implications in terms of state and session management, load balancing, and application monitoring and management, and these must be taken into account to ensure the application can respond to changing workloads.
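
As an illustration of the "many small instances" pattern, here is a minimal sketch using AWS Auto Scaling via boto3. The group name, size limits and CPU target are assumptions for the example; every major cloud provider offers an equivalent mechanism:

```python
import boto3

# Sketch: let the platform add or remove small instances as load changes,
# rather than running one large, mostly idle server.
# Assumes an existing Auto Scaling group; names and numbers are hypothetical.
autoscaling = boto3.client("autoscaling")

# Keep the fleet between 2 and 20 small instances.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    MinSize=2,
    MaxSize=20,
)

# Target 50% average CPU across the group: instances are added when the
# fleet runs hotter than that and removed when it runs cooler.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```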

2. Operations: Monitor utilization and terminate unneeded resources. As previously mentioned, many bad habits from the previous, upfront-investment approach persist among cloud computing users. Probably the worst is the habit of starting resources and never shutting them off, or, indeed, never monitoring them to determine whether they're being used at all.

In the pay-as-you-go world, every unused or underused resource is a hole down which you're pouring money. A dedicated cost-tracking service can be enormously helpful here, but to my mind the financial tracking needs to be married to operational tracking, in which developers and system administrators constantly monitor resources to evaluate usage, usage levels, and potential design optimizations that reduce cost while still maintaining operational efficiency and required performance levels.
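
Here is a sketch of the kind of operational check that catches forgotten or idle resources, assuming an AWS environment and boto3. The 5% threshold and seven-day look-back window are illustrative, not recommendations:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Sketch: flag running instances whose average CPU over the past week is
# suspiciously low -- likely candidates for shutdown or downsizing.
ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,                 # one datapoint per hour
            Statistics=["Average"],
        )["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
        if avg_cpu < 5.0:                # assumed "underused" threshold
            print(f"{instance_id}: {avg_cpu:.1f}% avg CPU over 7 days -- review")
```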

3. Finance: Evaluate total organizational spend. I've worked with a number of companies that have many, many AWS accounts and don't realize that by centralizing the spend, they would achieve greater discounts. While within some companies that decentralized approach is deliberate, everyone benefits from lower prices, so it makes sense to move to a consolidated bill.
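
As a sketch of what evaluating total organizational spend can look like once accounts are brought under consolidated billing, the following assumes the AWS Cost Explorer API is available and is called from the payer account; the date range is illustrative:

```python
import boto3

# Sketch: sum spend across all linked accounts from the payer account.
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)

total = 0.0
for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        account_id = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        total += amount
        print(f"Account {account_id}: ${amount:,.2f}")
print(f"Organization total: ${total:,.2f}")
```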

4. Procurement: Negotiate pricing. While AWS posts its prices, if there is sufficient spend, it will demonstrate flexibility. Certainly every other cloud service provider (CSP) out there is very flexible on pricing, especially in a situation in which the account would be moving from AWS. Of course, it's essential to ensure that other critical elements of the application, such as availability and security, can be achieved in another cloud environment.

Also, the application must be designed so that it can be transferred from one cloud environment to another. Even if one is "locked in" via application requirements or design, it's amazing how much pricing flexibility can be generated by even the threat of switching providers.

5. Management: Recognize that cloud computing is a new operational mode, and that cost tracking and application utilization monitoring are critical IT skills. Set up a group that examines ongoing financial performance to ensure maximum cost/benefit outcomes.

Don't staff the group with only finance people, either. Technical skills are required as well to enable a full 360-degree evaluation of application financial and technical performance. Above all, realize that IT is now in the service provider business, and service providers pay attention to operational costs all the time.
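
One concrete practice such a group can enforce is consistent cost-allocation tagging, so that spend can be broken down per application and compared with each application's technical performance. The sketch below assumes AWS and boto3; the tag keys, values and instance ID are hypothetical:

```python
import boto3

# Sketch: tag a resource so its spend can later be attributed to an
# application and an owning team. What matters is that every billable
# resource carries the same tags consistently.
ec2 = boto3.client("ec2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],   # hypothetical instance ID
    Tags=[
        {"Key": "application", "Value": "order-service"},
        {"Key": "cost-center", "Value": "retail-platform"},
        {"Key": "owner", "Value": "team-checkout"},
    ],
)
```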

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.
