Why aren’t more enterprises properly performance tuning their cloud applications?
Short answer: it’s too hard.
People know that altering container or VM resource settings can have a huge impact on the cost and performance of cloud applications. But modern cloud-native microservice architectures are dauntingly complicated. Even a simple app, composed of a few containers, will have trillions of resource and basic parameter permutations.
This means a massive proliferation of possible configuration tweaks: not only changes to basic resource parameters, but also to performance targets like requests per second and response time, and to settings such as VM instance type, CPU shares, thread count, garbage collection algorithm and memory pool sizes.
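To make the scale concrete, here is a back-of-the-envelope sketch in Python. The parameter names and the number of candidate values for each are invented for illustration only:

```python
# Hypothetical illustration: even a handful of tunables, each with a
# modest number of candidate values, multiplies into an enormous
# configuration space. All names and counts below are invented.
from math import prod

tunables = {
    "vm_instance_type": 40,   # candidate instance types
    "replica_count":    20,   # 1..20 replicas
    "cpu_shares":       64,   # discrete CPU allocations
    "memory_limit_mb":  64,   # discrete memory sizes
    "thread_count":     32,
    "gc_algorithm":      4,
    "heap_size_mb":     48,
}

# Permutations for a single container
permutations = prod(tunables.values())
print(f"{permutations:,} configurations per container")

# Tuning several containers jointly multiplies the space again
containers = 3
total = permutations ** containers
print(f"{total:.3e} configurations across {containers} containers")
```

Even these modest, made-up ranges yield roughly twenty billion configurations for a single container, and the space compounds for every additional container tuned jointly.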
Getting the right instance type, the right number of instances, and the right settings within each instance involves numerous interdependencies, and they are constantly shifting. To confidently intervene in this complex system, that is, to properly performance tune, you would have to possess perfect knowledge of the whole infrastructure, spanning the application, data and cloud infrastructure layers of the stack. On top of this, you would need to be deeply familiar with the application workload itself.
It is highly unlikely that any one person in an organization possesses all of this knowledge. Whoever wrote the code is probably not an infrastructure specialist, and anyone comfortable with both layers is likely a generalist without deep expertise in either.
In summary: real performance tuning is beyond the reach of the human mind.
Deep Reinforcement Learning
We need help from artificial cognition. This is where Opsani applies a branch of machine learning called deep reinforcement learning (DRL).
Deep reinforcement learning is powered by neural networks, digital systems that mirror the human brain's capacity for pattern recognition. DRL converts observed patterns and learned responses into ever more refined algorithmic behavior. In Opsani's case, the system observes how changes to every sort of setting affect cloud app performance, then adjusts resource assignments and configuration settings to improve performance or reduce cost. The data and the effect of every alteration are retained in the neural network's perfect recall, so the quality of Opsani's interventions compounds over time.
And: Opsani can optimize those settings that are too complex for humans to touch: middleware configuration variables like JVM GC type and pool sizes, kernel parameters like page sizes and jumbo packet sizes, and application parameters like thread pools, cache timeouts and write delays.
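As an illustration only, the layered settings above might be represented like this. Every name and value below is hypothetical, not Opsani's actual schema, and the `flatten` helper simply shows how nested settings become a flat list of tunables an optimizer can work over:

```python
# Hypothetical layered configuration spanning middleware, kernel,
# and application settings. All names and values are invented.
config = {
    "middleware": {
        "jvm_gc_type": "G1",        # e.g. Serial, Parallel, G1
        "jvm_heap_mb": 2048,
        "jvm_gc_pool_sizes": {"young_mb": 512, "old_mb": 1536},
    },
    "kernel": {
        "page_size_kb": 4,
        "mtu_bytes": 9000,          # jumbo packets
    },
    "application": {
        "thread_pool_size": 32,
        "cache_timeout_s": 300,
        "write_delay_ms": 10,
    },
}

def flatten(cfg, prefix=""):
    """Flatten nested settings into dotted keys an optimizer can tune."""
    flat = {}
    for key, value in cfg.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

print(flatten(config))
```

Each dotted key becomes one dimension of the search space described above, which is why the interdependencies multiply so quickly.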
Because it is constantly gathering new data, Opsani constantly discovers more solutions.
The engine reacts constantly to new traffic patterns, new code, new instance types, and all other relevant factors. With each iteration, the system's predictions home in on the optimal solution, and as improvements are found they are automatically promoted.
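The measure-and-promote cycle can be sketched in a few lines. This stand-in uses plain random search rather than deep reinforcement learning, and the benchmark function and parameter ranges are invented, but it shows the promotion logic: try a candidate, measure it, and keep it only if it beats the best known configuration:

```python
# Drastically simplified sketch of a measure-and-promote tuning loop.
# A real system would learn from every measurement; this stand-in
# samples configurations at random to show promotion only.
import random

random.seed(0)

def measure(config):
    """Stand-in for a real benchmark: score a config (higher is better).
    In reality the score would come from observed throughput,
    latency, and cost under live traffic."""
    ideal = {"threads": 16, "heap_mb": 1024}   # invented optimum
    return -abs(config["threads"] - ideal["threads"]) \
           - abs(config["heap_mb"] - ideal["heap_mb"]) / 64

def random_config():
    return {"threads": random.choice([4, 8, 16, 32, 64]),
            "heap_mb": random.choice([256, 512, 1024, 2048])}

best = random_config()
best_score = measure(best)

for step in range(200):
    candidate = random_config()
    score = measure(candidate)
    if score > best_score:        # improvement found: promote it
        best, best_score = candidate, score

print("promoted config:", best, "score:", best_score)
```

The promotion rule guarantees the live configuration never regresses; the learning component's job is to propose better candidates than blind sampling ever could.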
Advanced performance tuning has historically been judged impossible, too complex to contemplate. No longer. Opsani is simple to integrate and gets up and running with a Docker run command. Most users see the benefits within hours.
On average, Opsani customers who implement Continuous Optimization (CO) see a 2.5x increase in performance or a 40-70% decrease in cost.
To learn more about Continuous Optimization, check out our whitepaper: