Virtualization Technology News and Information
A Four-Step Checklist for Cloud Cost Efficiency

By Pavel Despot, Senior Product Marketing Manager, Akamai

Several years ago, as cloud computing was gaining traction, moving infrastructure to the cloud was thought to be a cost-saving tactic. Many organizations saw the cloud as a "deploy it and forget it" strategy that would save them money over on-premises computing. Now, organizations are seeing that those early decisions were no panacea, and today they're focused on cutting skyrocketing cloud costs.

As most CTOs and CIOs are finding out, cutting cloud costs isn't simple. The cost of running your infrastructure in the cloud comes from several interconnected components, each with its own price tag. Cloud-cost containment is like squeezing one end of a balloon and watching the other end inflate: you cannot reduce investment in one component without impacting the others. Saving money on cloud expenditures needs to be approached thoughtfully.

The good news is that there are effective ways to achieve cost-efficiency without wreaking havoc on an otherwise well-functioning cloud program. Let's look at four ways to get started.

Step One: Start with the Largest Expense

Weigh the savings from trimming your largest expense by a given percentage against the additional effort required; it's better to save 5% on a million-dollar line item than on a $10,000 one. Start by looking at the monthly bill for each account and create a summary by service and region. This can be done by building pivot tables from billing API data, allowing you to sort and see which account/service combination is the biggest line item.
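The pivot described above can be sketched in a few lines. This is a minimal illustration with made-up billing rows; in practice the rows would come from your provider's billing API or cost export, and the account, service, and cost figures here are purely hypothetical.

```python
from collections import defaultdict

# Hypothetical billing export rows: (account, service, region, monthly cost in USD).
# Real data would come from your cloud provider's billing API or CSV export.
billing_rows = [
    ("prod",    "compute", "us-east-1", 412_000.0),
    ("prod",    "egress",  "us-east-1", 188_500.0),
    ("prod",    "storage", "us-east-1",  61_200.0),
    ("staging", "compute", "eu-west-1",  24_900.0),
    ("staging", "storage", "eu-west-1",   3_100.0),
]

def summarize(rows):
    """Pivot billing rows into (account, service) totals, largest first."""
    totals = defaultdict(float)
    for account, service, _region, cost in rows:
        totals[(account, service)] += cost
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for (account, service), cost in summarize(billing_rows):
    print(f"{account:8} {service:8} ${cost:>12,.0f}")
```

The first row printed is your biggest line item, and the team behind it is where the cost conversation should start.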

Now that a big line item is staring you in the face, identify the team that owns that service so you can start the conversation about how to reduce their costs. Many environments are shared, with a variety of workloads deployed in each account; in that scenario, the VPC, billing tags, region, or other metadata are good starting points for tracking. As you move through this process, be honest about the effort the reductions you're considering will require. Reducing services is unlikely to run smoothly without some rollbacks and extra work.

Step Two: Consider Open Source Options

Chances are there are open source options available to replace whatever costly proprietary tools you're using today. Open source is frequently well-supported and can get you where you want to go. Of course, it's essential to consider the cost of extra team hours to make the change.

First, each team must decide whether an open source, "roll your own" solution is more economical than a proprietary one. There are strengths and weaknesses to either choice, especially depending on the maturity of the solutions. If the team can make the switch efficiently, there are savings to be had with open source, since the licensing and support fees baked into proprietary services go away.

Step Three: Pay Only for What You Use

Check your provider dashboards and monitoring tools for unused, idle, or low-load instances and scale them down. Doing this by hand may not be feasible if you're managing hundreds of containers or VM instances, but AWS, Azure, and GCP all have cost management tools that show which resources are being used and to what extent. Operations tools can tell you which machines and clusters are busy and which are consistently underutilized. That could be a good indication to decrease the minimum number of pods in a Kubernetes cluster, for example.
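Flagging underutilized instances boils down to comparing an averaged utilization metric against a threshold. A minimal sketch, assuming you've already pulled seven-day average CPU figures from your monitoring tool (the instance names, numbers, and threshold below are all hypothetical):

```python
# Hypothetical 7-day average CPU utilization per instance (percent).
# Real figures would come from your provider's monitoring or cost tools.
avg_cpu = {
    "web-1":   71.4,
    "web-2":   64.0,
    "batch-1":  3.2,
    "batch-2":  1.8,
    "cache-1": 12.5,
}

IDLE_THRESHOLD = 5.0  # percent; tune per workload

# Anything consistently below the threshold is a candidate to scale down.
idle = sorted(name for name, cpu in avg_cpu.items() if cpu < IDLE_THRESHOLD)
print("Candidates to scale down:", idle)
```

The threshold and lookback window are judgment calls: a batch worker that spikes nightly may look idle on a daily average, so verify with the owning team before scaling anything down.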

Similarly, consider lower-cost storage options if you have older, infrequently accessed objects in storage that you can't part with. Most of the time, these kinds of changes are easy lifts. One example is AWS S3's lifecycle rules, which can automatically transition older objects to a lower-cost, infrequent-access storage class.
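An S3 lifecycle rule of the kind described above is just a small configuration object. The sketch below shows the rule structure as a Python dict (the bucket prefix and day counts are assumptions for illustration); in practice you would pass a configuration like this to S3 via the console, the CLI, or boto3's `put_bucket_lifecycle_configuration`.

```python
# Sketch of an S3 lifecycle rule: move objects under a (hypothetical) "logs/"
# prefix to an infrequent-access class after 90 days, then archive after 365.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-down-old-objects",
            "Filter": {"Prefix": "logs/"},  # hypothetical prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90,  "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }
    ]
}
```

Other providers offer equivalents (Azure Blob lifecycle management, GCP Object Lifecycle Management), so the same tiering approach applies regardless of cloud.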

Step Four: Networking Is a Horse of a Different Color

Saving money on networking takes more work, but it's not impossible. The bandwidth consumed varies by application, but a byte is a byte, and that's how network costs are determined. You can't reduce the bytes going to your users, but you can reduce the bytes that egress from the cloud and rack up fees.

Here are some ways to reduce costs for networking.

  1. Reuse bytes to respond to multiple people. APIs can and should be cached (with obvious exceptions for authentication APIs) to retrieve and reuse data. It's a valid design pattern that lowers networking costs while also improving availability and performance.
  2. To save on egress fees, don't mitigate security risks directly from the cloud. You'll pay more than if you were to mitigate those risks at the edge. Denying malicious traffic at the edge incurs no ingress or compute costs; egress bytes are minimized, and what little is left benefits from rates that are nominal in comparison. In fact, since the edge already had to terminate the request, the additional effort of scanning is computationally "cheaper." Deploying and operating web application firewalls and DoS controls there is cheaper than centralizing them in a handful of clouds.
  3. Consider moving workloads out of a centralized cloud region. Egress economics at the edge are substantially different from those of any single cloud. For example, let's look at the Olympics. Akamai delivered 261 Tbps of peak traffic. At that rate, a 90-minute soccer match would transfer about 176,175 terabytes of data. The lowest list price for AWS egress is $0.05 per gigabyte, so 176,175 TB (180,403,200 GB) would cost $9,020,160. That's just egress. Never mind the hundreds of load balancers, VMs, and buckets you'd need to handle that request rate if you're going to centralize it in a handful of regions.
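The egress figure in point 3 can be reproduced in a few lines, which also makes it easy to plug in your own traffic numbers and egress rate:

```python
# Reproducing the egress estimate above (using the cited $0.05/GB list price).
PEAK_TBPS = 261           # peak traffic, terabits per second
MATCH_SECONDS = 90 * 60   # a 90-minute soccer match
PRICE_PER_GB = 0.05       # lowest AWS egress list price, USD per GB

terabytes = PEAK_TBPS * MATCH_SECONDS / 8  # terabits -> terabytes
gigabytes = terabytes * 1024               # TB -> GB (binary, matching the article)
cost = gigabytes * PRICE_PER_GB

print(f"{terabytes:,.0f} TB -> {gigabytes:,.0f} GB -> ${cost:,.0f} in egress")
```

Even at a fraction of that peak rate, sustained egress from a centralized region dwarfs most other line items, which is what makes edge delivery economics so different.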

Edge platforms have proven this design pattern for content caching and security. The recent availability of compute on edge platforms now allows us to do the same with application workloads. For smaller workloads, consider serverless at the edge. Edge nodes are (by design) deployed globally, close to end users for minimal latency. If you move an API endpoint to the edge, there's less Internet between it and your users, providing better performance, and there are no additional hops or costs along the way.

Getting a handle on cloud costs isn't trivial, but it doesn't have to be maddening. A logical approach, one that accounts for how services are consumed and for how unintended scale-out effects can spike costs, lets you get your arms around cloud computing costs and reduce them without negatively impacting developer or end-user experiences.

##

ABOUT THE AUTHOR

Pavel has over 20 years of experience designing and deploying critical, large-scale solutions for global carriers and other Fortune 500 companies. He is currently a Senior Product Marketing Manager for cloud in Akamai's Cloud Technologies Group. In his previous role as Principal Cloud Solutions Engineer, he led application modernization and security initiatives for Akamai's largest SaaS clients. Before joining Akamai, Pavel held various leadership roles on standards bodies, including the CTIA Wireless Internet Caucus (WIC), the CDMA Developers Group (CDG), and the Interactive Advertising Bureau (IAB). He has two patents in mobile network design and currently resides in the Boston area.

Published Friday, May 19, 2023 7:41 AM by David Marshall