Top AWS Cost-Saving Strategies You Need to Know

Find out how to stop wasting money on AWS services

From its beginning, DevOps has aimed to build fast, scalable, and cost-effective infrastructure. However, cost-effectiveness often gets overlooked once we handle infra at a large scale, with different types of resources spread across various environments and accounts.

Here are some ways to make your resources less of a pain in the pocket.

NAT Gateway Costs

When I say NAT gateways, some people will say they are already cheap; what is there to save here? But when doing things at scale, NAT gateways can account for a significant share of the network costs you pay.

Usually, we have a single NAT gateway in a VPC, and workloads in different availability zones all route through it. As of now, there is no charge for transferring data from an instance to S3 or certain other services, provided they are in the same region. However, there are charges between your EC2 instance and your NAT gateway if they sit in different availability zones, and these can become significant over time. To avoid this, use VPC endpoints for AWS internal services like S3, so that traffic bypasses the NAT gateway entirely.
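
As a rough illustration, here is a minimal boto3 sketch that creates a gateway VPC endpoint for S3 (the VPC and route table IDs are placeholders); gateway endpoints for S3 and DynamoDB have no hourly charge, and with one in place your S3 traffic no longer flows through the NAT gateway:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: traffic to S3 stays on the AWS network
# and bypasses the NAT gateway (no NAT data-processing charges).
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # match your region
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```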

Spot Instances

In most organizations, a large share of cloud costs goes toward EC2 instances. Whether you use them to deploy an app, run Kubernetes, or host bastion hosts, these instances cost a ton. While you can’t get away from the fixed cost of bastion instances, you can reduce the cost of your application and Kubernetes workloads by using spot instances/nodes.

At least do this for your lower environments. Nobody wants their production to go down because you were trying to save a few bucks. However, for lower environments like alpha, beta, and staging, spot instances can help significantly reduce your cost.
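
For illustration, here is a minimal boto3 sketch that launches a single spot instance (the AMI ID is a placeholder); the same idea carries over to spot node groups for Kubernetes:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request spot pricing instead of on-demand for this instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # "one-time" requests are not re-created after interruption
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```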

For your Kubernetes workloads, Karpenter provides an efficient way to scale your nodes, and it also makes running on spot instances easier.

Scaling Efficiently

  1. Using tools like Karpenter to scale your nodes can help a lot; it scales them up and down efficiently based on utilization.

  2. Quarterly performance testing can give you a sense of how many users you serve and how many pods or instances you need. It also helps you calculate the minimum, maximum, and desired values for efficient, cost-effective scaling (see the sketch after this list).

  3. Scaling on valid metrics. You can scale based on CPU, RAM utilization, or the request load on your application; identifying the right metric is key to efficient scaling. In Kubernetes, event-driven scaling with KEDA can be very beneficial, too.
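
As a rough sketch of point 2, here is some illustrative Python arithmetic for turning load-test results into replica bounds (the function name and the throughput numbers are made up for the example):

```python
import math

def replica_counts(baseline_rps, peak_rps, rps_per_pod, headroom=1.2):
    """Derive HPA-style min/desired/max from load-test numbers."""
    minimum = max(2, math.ceil(baseline_rps / rps_per_pod))  # >=2 for redundancy
    desired = math.ceil(peak_rps / rps_per_pod)              # cover tested peak
    maximum = math.ceil(desired * headroom)                  # spare room for spikes
    return minimum, desired, maximum

# Example: a load test showed one pod handles ~150 req/s.
print(replica_counts(baseline_rps=400, peak_rps=1800, rps_per_pod=150))
# -> (3, 12, 15)
```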

Using the right Deployment Strategy

Finding a deployment strategy that satisfies both your business and budget needs without hindering overall performance is an area worth researching.

There are multiple strategies, including:

  1. Rolling

  2. Blue Green

  3. Recreate

  4. Canary

I have listed these only to give a broader sense; there are other strategies as well. Every organization has its own needs, and I am not saying you should change your deployment strategy just for the cost factor. It’s totally fine for an organization to pay the higher cost of blue/green deployments if it suits their business needs. But testing other strategies can’t hurt if it doesn’t affect those needs.

The recreate option is a total winner on cost savings but a loser on zero downtime; blue/green is the exact opposite. In the middle sit rolling and canary, which combine zero or low downtime with cost-effectiveness and can be a great fit for you.
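
To make the trade-off concrete, here is a small illustrative Python sketch (the function and the numbers are hypothetical) of the peak capacity each strategy needs during a deploy, which is where the cost difference comes from:

```python
def peak_instances(replicas, strategy, max_surge=1):
    """Rough peak capacity needed while a deploy is in flight."""
    if strategy == "recreate":
        return replicas              # old pods stop before new ones start
    if strategy == "rolling":
        return replicas + max_surge  # only maxSurge extra pods at a time
    if strategy == "blue_green":
        return replicas * 2          # a full duplicate environment runs briefly
    raise ValueError(f"unknown strategy: {strategy}")

for s in ("recreate", "rolling", "blue_green"):
    print(s, peak_instances(10, s))
# recreate 10, rolling 11, blue_green 20
```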

Graviton Instances

AWS has its own range of ARM-based processors, called Graviton, which are significantly cheaper to run. If you are using Kubernetes, your application is already packaged as images, so you can transition from your current x86-based nodes to the new ARM-based Graviton nodes to save cost.

You can do this by building your images specifically for the ARM architecture or by using the multi-arch build support in Docker Buildx/BuildKit. I know there may be special use cases where you need the x86 architecture, but you can slowly switch most of your infra to Graviton.
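
For example, assuming Docker with the Buildx plugin is installed and you are logged in to your registry (the image tag below is a placeholder), a multi-arch build can be scripted like this:

```python
import subprocess

# May require a one-time "docker buildx create --use" first, to get a
# builder that supports multi-platform images.
subprocess.run(
    [
        "docker", "buildx", "build",
        "--platform", "linux/amd64,linux/arm64",     # x86 + Graviton-compatible arm64
        "--tag", "registry.example.com/app:latest",  # placeholder image tag
        "--push",  # multi-arch manifests must be pushed, not loaded locally
        ".",
    ],
    check=True,
)
```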

I have read (though not tested myself) that Graviton has a proven record of working better with interpreted languages than with compiled ones.

Graviton instances also offer high burstable capacity, so you can push your nodes further than the ARM or x86 instances you were using before. Effectively using the full strength of each node might help with cost reduction as well.

There is also a case study by Datadog from this year’s re:Invent on how they are transitioning to Graviton instances.

AWS Savings Plans to reduce EC2, Fargate, and Lambda costs

AWS Savings Plans offer a discount model where customers agree to spend a certain amount of money per hour over a set period. In return, AWS provides discounts on eligible usage types. For instance, a customer might commit to spending $1 per hour for one year. AWS then tracks the applicable usage each hour during this period and applies a discount based on the usage type and the Savings Plan.

For organisations that have the capacity to commit to fixed usage, Savings Plans can be a game changer. You can read more about how to use Savings Plans effectively in this article.
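
As a sketch, you can also pull AWS’s own Savings Plans recommendations programmatically via the Cost Explorer API; this boto3 example asks for one-year, no-upfront Compute Savings Plans suggestions based on the last thirty days of usage:

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer

response = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",   # Compute SPs cover EC2, Fargate, and Lambda
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

recommendation = response["SavingsPlansPurchaseRecommendation"]
for detail in recommendation.get("SavingsPlansPurchaseRecommendationDetails", []):
    print(
        "Commit $/hour:", detail["HourlyCommitmentToPurchase"],
        "Estimated savings:", detail["EstimatedSavingsAmount"],
    )
```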

Reserved Instances to reduce RDS, Redshift, ElastiCache, and Elasticsearch costs

Use one-year, no upfront Reserved Instances (RIs) to get a discount of up to 42% compared to On-Demand pricing. Follow the recommendations in AWS Cost Explorer’s RI purchase suggestions based on your RDS, Redshift, ElastiCache, and Elasticsearch usage. Be sure to set the parameters to one year, no upfront. This requires a one-year commitment, but the break-even point is usually reached within seven to nine months.

You can use this if you are willing to commit to it for one year.
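
Here is a minimal boto3 sketch of pulling those Cost Explorer RI recommendations for RDS (swap the Service string for Redshift, ElastiCache, or Elasticsearch/OpenSearch as needed):

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer

# One-year, no-upfront RI recommendations for RDS.
response = ce.get_reservation_purchase_recommendation(
    Service="Amazon Relational Database Service",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

for rec in response["Recommendations"]:
    for detail in rec["RecommendationDetails"]:
        print(
            "Buy:", detail["RecommendedNumberOfInstancesToPurchase"],
            "Est. monthly savings:", detail["EstimatedMonthlySavingsAmount"],
        )
```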

Assess the unused resources and optimize

Create automation to remove unused, stale resources such as detached EBS volumes and old snapshots.
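
As a starting point for such automation, here is a minimal boto3 sketch that lists detached ("available") EBS volumes; the delete call is left commented out so you can verify the findings before removing anything:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "available" status means the volume is not attached to any instance.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
):
    for volume in page["Volumes"]:
        print("Detached volume:", volume["VolumeId"], volume["Size"], "GiB")
        # Uncomment only after confirming the volume is safe to delete:
        # ec2.delete_volume(VolumeId=volume["VolumeId"])
```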

Also, you can turn on S3 Intelligent-Tiering, or configure lifecycle tiering according to your needs, to manage your S3 storage costs.
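
For example, a lifecycle rule can move objects to the Intelligent-Tiering storage class, which shifts them between access tiers automatically; this boto3 sketch (the bucket name is a placeholder) applies such a rule to a whole bucket:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    # Day 0: transition new objects immediately.
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```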

Conclusion

Here, I have covered a few strategies to save costs on AWS. You can also examine your own infrastructure and find where costs can be reduced. As every organisation's infrastructure is different, you have to see what suits you best and make sure that changing it won't hinder your business or customer needs.