The enterprise's inexorable march to the cloud is underpinned by the multi-cloud tenet, which, in hybrid cloud deployments, lets organizations seamlessly access resources across various clouds as well as on-premises settings.
According to Denodo Director of Product Marketing Saptarshi Sengupta, a relatively recent survey of enterprise cloud habits indicates multi-cloud adoption is gaining momentum. “This is departmental,” Sengupta acknowledged. “One department might be in Azure cloud. Another department might be in AWS’s or Google Cloud’s platform because they have various different tools that are strong in that particular cloud platform.”
Maximizing organizational value from the multi-cloud trend increasingly hinges on three different aspects of data management: orchestration platforms for containers like Kubernetes, high availability of resources in both cloud and on-premise settings, and data virtualization to connect to sources as needed.
Availing themselves of these capabilities will enable organizations to not only access data and tools between different clouds (and on-premise settings), but also shift resources between them to fortify their use cases of choice. As DH2i CEO Don Boxley noted, this combination allows users to “connect those things transparently without someone having to implement a VPN across multiple clouds, which would be really, really hard to do.”
Containers have almost become the de facto means of managing resources in the cloud, and orchestration mechanisms like Kubernetes are widely used to control them. “Kubernetes, containerization… all these things are getting very prominent these days,” Sengupta observed. Perhaps the best proof of the pervasive deployment of containers for multi-cloud use cases is the rising usage of stateful containers, those that actually house data for applications or analytics purposes.
Boxley referenced information from the Cloud Native Computing Foundation: “They do these annual surveys and in the latest one they’ve seen significant uptake in the number of containers and the number of stateful container applications is running at a pretty high rate. So, I think stateful is definitely coming into play.”
Containers are crucial for use cases in which the cloud’s scalability is necessary to manage things like spikes in demand for video streaming applications, concert ticket sales, or e-commerce activity; stateful containers are necessary for supporting such applications with databases like SQL Server. Containers enable organizations to spin up those workloads in different clouds and geographic regions to make good on the multi-cloud promise.
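As a sketch of how such a stateful workload is typically expressed, a SQL Server database on Kubernetes is commonly deployed as a StatefulSet with a persistent volume claim, so its data survives pod rescheduling. The names, storage size, and secret reference below are illustrative assumptions, not details from the article:

```yaml
# Illustrative StatefulSet for a stateful SQL Server container
# (resource names, sizes, and the secret are hypothetical)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql
spec:
  serviceName: mssql
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2022-latest
          ports:
            - containerPort: 1433
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql-secret
                  key: sa-password
          volumeMounts:
            - name: data
              mountPath: /var/opt/mssql   # database files persist here
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Because the same manifest can be applied to clusters in different clouds or geographic regions, it is one concrete way to spin up the workload wherever demand spikes.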
Ensuring continuous availability between workloads is critical for multi-cloud deployments, particularly when they involve precious data in stateful containers. Multi-cloud flexibility allows organizations to utilize those resources around the world for failover capabilities and other purposes. Sengupta mentioned a use case in which a mining company relied on this approach with “instances on-premises and in the cloud around the world; some in the U.S., some in Australia. These instances talk to the data sources closest to it.”
High availability options ensure that in the event of failure, organizations can simply shift resources between nodes, settings, and clouds for near continuous uptime. According to Boxley, competitive options fail over discreetly with “express microtunnels that enable users to connect to nodes across boundaries” and clouds to exploit the multi-cloud concept for business continuity.
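The failover pattern Boxley describes can be sketched generically: a client holds an ordered list of endpoints, primary first, and transparently retries against the next node when a connection fails. This is a minimal illustration of the idea, not DH2i's actual microtunnel implementation, and the node functions are hypothetical stand-ins for real network calls:

```python
# Minimal client-side failover sketch (illustrative; not a real microtunnel).
# Endpoints are tried in priority order; the first one that responds wins.

def failover_call(endpoints, request):
    """Try each endpoint in order, returning the first successful response."""
    last_error = None
    for endpoint in endpoints:
        try:
            return endpoint(request)  # in practice: a network call over the tunnel
        except ConnectionError as err:
            last_error = err          # this node is down; fall through to the next
    raise RuntimeError("all nodes unavailable") from last_error

# Simulated nodes: the primary is down, the secondary answers.
def primary(request):
    raise ConnectionError("primary node unreachable")

def secondary(request):
    return f"handled by secondary: {request}"

print(failover_call([primary, secondary], "SELECT 1"))
```

From the caller's point of view nothing changed: the request simply lands on whichever node is up, which is the essence of transparent failover for business continuity.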
Recent developments in this space provide availability groups for SQL Server. Organizations can avail themselves of these capabilities with stateful containers managed by Kubernetes for “another level of high availability,” Boxley explained. “Within the context of Kubernetes you’ve got the availability of the pods, the nodes, and in case of SQL Server, you’ve got availability at the database level as well.”
Data virtualization technologies naturally complement the multi-cloud concept by providing an abstraction layer in which organizations can link together all their resources across clouds, the cloud’s edge, and on-premises locations in what is widely referred to as a data fabric. This approach supports the applications, data, and tools in various clouds by presenting a cohesive virtualization layer for “handshaking this information from one department to another,” Sengupta commented. There are a number of critical advantages to coupling data virtualization with multi-cloud deployments, including:
- Time to Value: Without virtualization approaches, many organizations find themselves endlessly replicating data between sources, systems, and settings. According to Sengupta, with such batch processes, many “take months to do what we do in real-time.”
- Regulatory Compliance: Another pivotal advantage of fortifying multi-cloud deployments with data virtualization is that information assets remain where they are, which is optimal for complying with stringent regulations governing where data reside, who accesses them, and data sovereignty.
- Cost Reductions: Replicating data, managing unwieldy ETL processes, and incurring regulatory penalties all carry costs that can be avoided by obviating these concerns with prudent data virtualization implementations.
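The abstraction layer Sengupta describes can be illustrated in miniature: a thin federation function runs the same query against several sources and merges the rows at request time, instead of replicating data into a central store. The in-memory sqlite3 databases and the sales schema below are hypothetical stand-ins for real sources in different clouds:

```python
# Illustrative data-virtualization sketch: query two "cloud" sources in place
# and merge results on demand, with no batch replication into a central store.
import sqlite3

# Source 1: e.g. a sales table living in one cloud platform
aws = sqlite3.connect(":memory:")
aws.execute("CREATE TABLE sales (region TEXT, amount REAL)")
aws.executemany("INSERT INTO sales VALUES (?, ?)",
                [("us-east", 120.0), ("us-west", 80.0)])

# Source 2: e.g. the same logical table in another cloud platform
azure = sqlite3.connect(":memory:")
azure.execute("CREATE TABLE sales (region TEXT, amount REAL)")
azure.executemany("INSERT INTO sales VALUES (?, ?)",
                  [("eu-west", 95.0)])

def virtual_query(sql, sources):
    """Run the same query against every source and merge rows on the fly."""
    rows = []
    for conn in sources:
        rows.extend(conn.execute(sql).fetchall())
    return rows

# The consumer sees one logical table; the data never leaves its source system.
result = virtual_query("SELECT region, amount FROM sales", [aws, azure])
print(result)
```

The data stays in place, which is what makes the compliance and cost advantages above possible: nothing is copied, so there is nothing extra to govern, secure, or pay to move.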
Reliance on containers, high availability strategies, and data virtualization enables firms to reap the benefits of multi-cloud deployments while decreasing the difficulty—and risk—of doing so. Containers and orchestration platforms like Kubernetes mask the difficulty of flexibly deploying in various clouds and on-premises, while high availability protects business continuity, especially when “all that traffic is going back and forth via an express microtunnel so that if the primary node goes down, boom, everything fails over,” Boxley added. Data virtualization stitches resources together across clouds and on-premises so organizations can access the data they want wherever they are, at any point in time.
Featured Image: NeedPix