
Why AWS’s ‘EC2 Resource Optimization Recommendations’ Only Scratch the Surface

AWS recently launched a new feature for EC2 users: Optimization Recommendations. This feature operates, they explained, by “calculating ideal configurations based on your past usage. Using these recommendations in the AWS Cost Management product suite, you can identify opportunities for cost efficiency and act on them by terminating idle instances and rightsizing under-used instances.”

By AWS’s definitions, idle instances are those with maximum CPU utilization below 1%, and underutilized instances are those with maximum CPU utilization between 1% and 40%.

Terminating idle instances and rightsizing under-used instances is all well and good. You’ll see some performance improvement and some modest savings. But this is only the tip of the iceberg where real optimization is concerned.

AWS’s announcement follows a long tradition: APM (Application Performance Monitoring) tools have long focused on scanning for VMs with low utilization. The equation is simple: overprovisioning = wasted money.

However, tweaking utilization is only the bluntest, most basic approach to optimization. Real optimization requires a far more complex approach, because apps are multi-faceted, fine-grained entities in which many different factors contribute to overall efficiency. Even a simple application composed of a few containers will have trillions of permutations of resources and basic parameters. The whole infrastructure is customizable: compute, memory, cache, storage, network (bandwidth and latency), thread management, job placement, database configuration, application runtime, the Java garbage collector. And that’s before you even get to the workload itself.
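To make that combinatorial claim concrete, here is a back-of-the-envelope sketch in Python. Every tunable and its number of candidate values is an illustrative assumption, not a measured inventory of any real service:

```python
# Illustrative count of configuration permutations for a small
# three-container application. Each value count below is an assumption
# made purely for the sake of arithmetic.
from math import prod

tunables_per_container = {
    "instance_type": 30,     # plausible EC2 instance choices
    "cpu_shares": 20,        # discrete CPU allocation steps
    "memory_limit": 20,      # discrete memory allocation steps
    "thread_count": 16,      # worker/request thread settings
    "gc_settings": 8,        # e.g. GC algorithm plus heap ratios
    "memory_pool_size": 10,  # cache/buffer pool sizing steps
}

per_container = prod(tunables_per_container.values())
total = per_container ** 3  # three independently tunable containers

print(f"{per_container:,} permutations per container")   # ~15.4 million
print(f"{total:,} permutations for the whole application")  # ~3.6e21
```

Even with these modest assumptions, the space runs to sextillions of combinations, comfortably past “trillions” and far beyond anything a utilization scan, or a human, can search by hand.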

Utilization is important. But real optimization would engage with a range of metrics, such as requests per second and response time, and a range of settings, like VM instance type, CPU shares, thread count, garbage collection, and memory pool sizes.
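As a sketch of what engaging with those metrics and settings could look like, the snippet below scores a candidate configuration against a throughput target, a latency budget, and hourly cost. The metric names, weights, and thresholds are all hypothetical:

```python
# Hypothetical scoring function for one candidate configuration.
# In practice rps and p95_ms would come from load tests or production
# telemetry, and hourly_cost from the cloud provider's pricing.
def score(rps: float, p95_ms: float, hourly_cost: float,
          rps_target: float = 500.0, p95_budget_ms: float = 250.0) -> float:
    """Higher is better: reward throughput, penalize latency and cost."""
    perf = min(rps / rps_target, 1.0)                 # saturates at target
    latency_penalty = max(p95_ms / p95_budget_ms - 1.0, 0.0)
    return perf - 0.5 * latency_penalty - 0.1 * hourly_cost

# Two made-up candidates: a large instance vs. a rightsized one.
print(score(rps=480, p95_ms=220, hourly_cost=1.53))   # large:      ~0.81
print(score(rps=460, p95_ms=240, hourly_cost=0.38))   # rightsized: ~0.88
```

Under this toy scoring, the cheaper rightsized candidate wins despite slightly lower throughput, a trade-off a pure utilization scan would never surface.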

We’re not bashing AWS here. The Optimization Recommendations look fine, and they’ll help customers. But real optimization has to go deeper. It requires a tool with full insight into every layer of the application, data, and cloud infrastructure stack, and the power to tune resource allocation with precision.

At Opsani, we’ve built this tool. We pay attention to utilization, but at a far more detailed level. Opsani slots onto the end of the CI/CD chain and automatically examines millions of configurations to identify the optimal combination of resource and parameter settings. Opsani is continuously reactive, retuning parameters in response to new traffic patterns, new code, new instance types, and so on. We pay attention to utilization, but we also pay attention to everything else.
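For intuition, here is a minimal sketch of the measure-adjust-evaluate loop behind this kind of continuous optimization. It is not Opsani’s implementation; every helper is a hypothetical stand-in, and measure() simply simulates a workload with a hidden optimum so the loop runs end to end:

```python
import random

# Minimal sketch of a continuous optimization loop: propose a config,
# apply it, measure the workload, keep the best-scoring settings.
# All helpers are hypothetical stand-ins, not a real product API.

def propose(current: dict) -> dict:
    """Perturb one tunable at random (a deliberately naive search)."""
    candidate = dict(current)
    key = random.choice(list(candidate))
    candidate[key] = max(1, candidate[key] + random.choice([-1, 1]))
    return candidate

def apply_config(config: dict) -> None:
    """Stand-in for pushing settings to a deployment, e.g. at the
    end of a CI/CD pipeline."""
    pass

def measure(config: dict) -> float:
    """Stand-in for load-testing; simulates a hidden optimum so the
    loop below actually converges when run."""
    optimum = {"cpu_shares": 6, "thread_count": 24, "memory_pool_mb": 512}
    return -sum(abs(config[k] - optimum[k]) for k in config)

best = {"cpu_shares": 8, "thread_count": 16, "memory_pool_mb": 512}
apply_config(best)
best_score = measure(best)

for _ in range(200):                    # each iteration is one experiment
    candidate = propose(best)
    apply_config(candidate)
    result = measure(candidate)
    if result > best_score:             # keep improvements, discard the rest
        best, best_score = candidate, result

print(best, best_score)
```

A production optimizer would use a far smarter search than random perturbation, and would re-run continuously as traffic and code change, but the loop structure is the same.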

The outcome? Infrastructure tuned precisely to the workload and goals of the application. On average, customers who implement Continuous Optimization with Opsani see a 2.5x increase in performance or a 40-70% decrease in cost.

Contact us today for a free demo.
