The movement toward cloud computing is swiftly reaching a crescendo. It’s the most viable architecture for accommodating the scale of statistical Artificial Intelligence, one of the most effective mediums for the modern necessity of remote collaboration, and nearly ubiquitous because of the assortment of cloud-native approaches empowering DevSecOps teams.
Yet one aspect of this paradigm remains as nebulous as ever (particularly when utilizing cloud-native methods such as containers and orchestration platforms like Kubernetes): its cost.
Whether pertaining to storage fees, various aspects of vendor lock-in, or the true consumption of resources for individual applications, cloud costs are rarely transparent.
According to Replex CTO Costantino Lattarulo, cloud costs particularly burgeon when various developer teams are working on or monitoring different apps via Kubernetes in multi-cloud settings because “they start spinning up Kubernetes for some of their teams to play around with it, then more or less suddenly go into production and see their cost increase dramatically. But, they have no insight into which of their teams is consuming these resources or who is responsible for the high costs.”
This growing concern is responsible for a deliberate broadening of traditional cost management, which is tacitly evolving to encompass this hidden dimension of cost manifest in cloud environments. Emergent solutions can now deliver timely insight into which applications are consuming how many resources across developer teams, enabling organizations to pare that consumption, streamline operations, and ultimately optimize their cloud computing deployments for well-governed cost effectiveness.
Doing so has quickly become imperative in today’s cloud-first world in which oftentimes, the cost of cloud use cases—especially when involving the aforementioned cloud-native methodologies—can hamper the overall productivity of mission-critical functions because it’s “nothing the cloud provider or existing tools show,” explained Replex CEO Patrick Kirchhoff.
Evolving Infrastructure Costs
Cloud architecture heralds a significant transition in how organizations manage traditional expenses for IT resources. It’s reconfigured the way organizations typically finance and procure their infrastructure, which has traditionally required upfront purchases for on-premise deployments. Even with on-premise hybrid clouds, however, cloud architecture has swung the focus of infrastructure costs to the DevOps personnel responsible for scaffolding and implementing applications.
On the one hand, this development attests to the flexibility and scalability for which the cloud is nearly universally renowned. On the other, it significantly decreases the transparency of the costs buttressing this architectural model. “The [developer] team is working on a different agenda,” Kirchhoff acknowledged. “They just want to work on their backlog of software features that they want to deploy. They don’t have cost savings on their agenda.”
Nonetheless, the platform architects and CIOs sponsoring these teams’ efforts unequivocally do. While developers can further their objectives by scaling better or processing faster simply by spinning up an additional node or container in Kubernetes, such license has very real, undesired ramifications for expenditures. Consequently, more organizations are turning to solutions that correlate “the underlying [bare metal or cloud] instances, Kubernetes in the middle, and then the applications within the clusters,” Kirchhoff observed.
By focusing on a number of different metrics pertaining to usage, these tools can pinpoint how many resources individual applications are consuming in specific clusters across teams and locations. By fortifying this information with pricing, organizations can “figure out how much it costs and how to optimize it,” Kirchhoff noted. “Each application runs in Kubernetes and maybe consumes capacity for many nodes below, but usually you don’t know how much of this capacity is used since it’s shared across Kubernetes clusters.”
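The correlation the article describes can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not Replex’s actual implementation: the application names, usage figures, and node price are invented, and the helper simply apportions a node’s hourly price across the applications sharing it in proportion to their measured usage.

```python
# Hypothetical sketch: allocate a shared node's hourly cost to the apps
# running on it, proportional to each app's measured CPU consumption.
# All names, usage figures, and prices here are illustrative assumptions.

def allocate_node_cost(node_price_per_hour, app_cpu_usage):
    """Split a node's hourly price across apps by their share of CPU use."""
    total = sum(app_cpu_usage.values())
    if total == 0:
        return {app: 0.0 for app in app_cpu_usage}
    return {app: node_price_per_hour * cpu / total
            for app, cpu in app_cpu_usage.items()}

# Example: three teams' apps sharing one node priced at $0.40 per hour.
usage = {"checkout": 1.2, "search": 0.6, "analytics": 0.2}  # cores used
costs = allocate_node_cost(0.40, usage)
for app, cost in costs.items():
    print(f"{app}: ${cost:.3f}/hour")
```

The design point is the one Kirchhoff raises: because capacity is shared across a cluster, per-app cost only becomes visible once usage metrics are joined with pricing for the underlying instances.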
Cost Predictions, Optimization
The foregoing method for managing costs pinpoints exactly how much capacity is being used and what’s actually needed to optimize apps in the cloud or on-premises. Equipped with this intelligence, developers can maximize their efforts to achieve enterprise goals in terms of performance and cost. The crux of this approach is an optimization engine relying on statistical measures to accurately forecast data for the metrics involved, which include aspects of CPU, RAM, and billing (when provided).
“The optimization engine looks at how much capacity is requested and how big are the underlying instances,” Kirchhoff revealed. The statistical approaches then devise predictions about how many resources are actually needed, which are readily compared with those being used.
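The article doesn’t specify which statistical measures the engine uses, so the following is only a toy stand-in for the idea: derive a sizing recommendation from a short history of observed usage. The sample data, the mean-plus-two-standard-deviations rule, and the headroom factor are all assumptions made for illustration.

```python
# Toy stand-in for a statistical sizing recommendation: from observed CPU
# usage samples, suggest a request large enough to cover typical demand.
# The rule (mean + 2 stdev, plus headroom) and the data are assumptions.
import statistics

def recommend_request(usage_samples, headroom=1.2):
    """Suggest a CPU request covering mean demand plus two standard
    deviations, scaled by a safety factor."""
    mean = statistics.fmean(usage_samples)
    spread = statistics.pstdev(usage_samples)
    return (mean + 2 * spread) * headroom

history = [0.8, 1.1, 0.9, 1.4, 1.0, 0.7]  # cores used, sampled hourly
print(f"recommended request: {recommend_request(history):.2f} cores")
```

A recommendation produced this way can then be compared against what the configuration files actually request, which is the comparison the next paragraphs describe.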
As a result, developers can perceive “how many resources does your application actually need versus how much capacity you defined in the configuration files,” Lattarulo mentioned. The billing information is critical for determining expenses on specific applications in precise quantities. Developers can use this data to allocate resources where they’re actually required, instead of squandering them on nodes no longer used for testing or research and development purposes.
This method also provides a blueprint for what Kirchhoff termed “optimization potential,” based on how many resources developers “requested and how much they actually used, and then they can adapt the configurations to what’s really needed and then see how much money they can save,” he posited.
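The “optimization potential” idea reduces to pricing the gap between requested and actually used capacity. The sketch below is a hypothetical illustration of that arithmetic; the per-core rate and the application figures are invented, not drawn from any real vendor or cluster.

```python
# Hypothetical sketch of "optimization potential": price the gap between
# the capacity an app requests and the capacity it actually uses.
# The per-core rate and all figures below are illustrative assumptions.

PRICE_PER_CORE_HOUR = 0.05  # assumed blended rate, not a real vendor price

def optimization_potential(requested_cores, used_cores):
    """Return (excess cores, estimated hourly savings) for one app."""
    excess = max(requested_cores - used_cores, 0.0)
    return excess, excess * PRICE_PER_CORE_HOUR

# Example: two apps with (requested, used) CPU cores.
apps = {"checkout": (4.0, 1.5), "search": (2.0, 1.8)}
for name, (req, used) in apps.items():
    excess, savings = optimization_potential(req, used)
    print(f"{name}: {excess:.1f} idle cores, ~${savings:.3f}/hour savings")
```

In practice this is the number a platform architect would pull through an API into existing provisioning tooling, as the following paragraph notes.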
The greater merit of the insight produced by cost governance solutions for multi-cloud settings is realized by applying it across different providers, cloud types, and geographically dispersed locations. The optimization potential Kirchhoff referenced is readily exposed via an API “so you can integrate it into your existing tool stack and potentially connect it to the infrastructure provisioning tools or other tools that might use this,” he remarked. Thus, developers can see how to reduce costs while increasing efficiency in the cloud, on-premises, and with Kubernetes. This visibility is also granted to platform architects and other stakeholders, most notably CIOs.
The result is heightened efficiency across Kubernetes deployments, which easily justifies these expenses to those responsible for their payment. “Initiatives like a Kubernetes rollout and cloud-native rollout through large organizations get slowed down because of cost,” Kirchhoff reflected. “We just want to make sure they can move faster within their budgetary constraints. This is where the platform architects and the team itself needs more visibility into their spend. What we often see is they have no idea about how much they spend on certain applications.”
Featured Image: NeedPix
Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance, and analytics.