One costly DevOps myth is the idea that going fast means compromising on fundamentals: “Oh, my application is rebuilt ten times per day, so it doesn’t make sense to try to optimize it.” The reality is the exact opposite. If your apps are this dynamic, you actually need more optimization. You need it to be seamless, automated, and smart, with help from Artificial Intelligence and Machine Learning (AI & ML).
Containers, microservices, and agile DevOps processes allow developers to release code several times a day. It is true that this velocity makes optimization difficult for humans. A software performance engineer cannot reasonably spend hours on measurement and adjustment cycles when the next revision of the code could be deployed any second, rendering all their work obsolete.
At today’s DevOps velocity, it is simply not possible for humans to tune settings at the cloud infrastructure, server, and application levels across every microservice in an application. There are too many variables to tweak, too many commands to run, and too many metrics to measure. For an infrequently changing, monolithic application, you might get away with a human tuning the app for the best performance-to-cost ratio. But not with a microservices application.
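A quick back-of-the-envelope calculation shows why. The numbers below are purely illustrative assumptions (a handful of tunables per service, a modest service count), not measurements from any real application:

```python
# Rough sketch of the tuning search space for a microservices app.
# Every number here is an illustrative assumption.

# Hypothetical tunables per service and how many discrete values
# each one can take:
values_per_tunable = {
    "cpu_request": 10,     # e.g. 0.25 to 4.0 vCPUs in steps
    "memory_request": 10,  # e.g. 256 MiB to 8 GiB in steps
    "replicas": 8,         # e.g. 1 to 8 replicas
    "runtime_flag": 5,     # a few discrete runtime options
}

combos_per_service = 1
for n in values_per_tunable.values():
    combos_per_service *= n

num_services = 12  # a modest microservices application

total_combinations = combos_per_service ** num_services
print(f"Per-service combinations: {combos_per_service:,}")
print(f"Whole-app combinations:   {total_combinations:.2e}")
# → 4,000 combinations per service, about 1.68e+43 for the whole app
```

Even with these conservative assumptions, the whole-app search space dwarfs what any human could explore by hand, and every code release resets the clock.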
However, the solution to this predicament is not to abandon optimization altogether! The solution is to leverage AI, ML, and automation to achieve what humans cannot. With a tool like Opsani, optimization flows seamlessly with your CI/CD practices and keeps your application tuned for the best performance and cost at all times.
With Opsani, once runtime adjustments and measurements are defined, the load is characterized, and the desired performance goal is established, we use standard interfaces to adjust runtime parameters up and down the application stack and read the relevant performance metrics, delivering the best application performance-to-cost ratio. Not only can you optimize an isolated application with a load generator; you can also use a runtime canary topology to optimize apps in production.
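Conceptually, this kind of automated tuning is a repeated adjust-then-measure loop that searches for the settings maximizing the performance-to-cost ratio. The sketch below is a generic, hypothetical stand-in: the `adjust` and `measure` functions and the toy cost model are assumptions for illustration, not Opsani’s actual API or algorithm:

```python
import random

random.seed(0)  # reproducible run for this illustration

def adjust(settings):
    """Apply runtime settings to the target app.

    Placeholder: in a real system this would call the orchestrator
    or cloud provider API through a standard interface.
    """

def measure(settings):
    """Return (throughput, hourly cost) under the given settings.

    Toy model: throughput grows with CPU but saturates at 2 vCPUs,
    while cost grows linearly with CPU. Purely illustrative.
    """
    cpu = settings["cpu"]
    throughput = min(cpu, 2.0) * 100.0
    cost = cpu * 0.04
    return throughput, cost

def score(throughput, cost):
    """The optimization goal: performance-to-cost ratio."""
    return throughput / cost

# Start from an over-provisioned guess and hill-climb.
best = {"cpu": 4.0}
adjust(best)
best_score = score(*measure(best))

for _ in range(50):
    candidate = {"cpu": max(0.25, best["cpu"] + random.uniform(-0.5, 0.5))}
    adjust(candidate)
    candidate_score = score(*measure(candidate))
    if candidate_score > best_score:
        best, best_score = candidate, candidate_score

print(f"best cpu: {best['cpu']:.2f} vCPUs, score: {best_score:.1f}")
```

In this toy model the loop discovers that CPU beyond the saturation point only adds cost, so it walks the allocation back down, which is exactly the kind of over-provisioning an automated optimizer is built to find.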
Enterprises are defaulting to hopefully-big-enough “T-shirt” sizes when choosing cloud provider virtual machines, and this is producing massive overspend. It doesn’t have to be this way. Yes, DevOps makes manual app optimization too tricky for humans. But with a little help from AI & ML, optimizing application performance and trimming cloud spend is a breeze.