Google Cloud’s Anthos promises a single, consistent way to manage Kubernetes workloads in on-premises and public cloud environments.
Google Cloud launched the Anthos platform in April 2019, promising customers a way to run Kubernetes workloads on-premises, in Google Cloud and, above all, in the other major public clouds, including Amazon Web Services (AWS) and Microsoft Azure. Speaking at Google Cloud Next in San Francisco in 2019, Google CEO Sundar Pichai said the idea behind Anthos was to let developers "write once and run anywhere": a commitment to simplify the development, deployment and operation of containerized applications in the hybrid cloud by bridging incompatible cloud architectures.
That multi-cloud support took a while to arrive. Google announced Anthos for AWS in April 2020 and Anthos for Azure in December 2021, with the release of the Anthos Multi-Cloud API, finally fulfilling its initial promise of true hybrid and multi-cloud operability.
The Google Cloud Anthos console displaying Azure and AWS assets. (Credit: Google)
By providing a single platform for managing all Kubernetes workloads, Anthos lets customers concentrate their skills on one technology rather than relying on certified experts in a host of proprietary cloud solutions. Anthos also brings operational consistency across hybrid and public clouds: common configurations can be applied to all infrastructures, along with custom security policies tied to particular workloads and namespaces, wherever those workloads run. Finally, IT operators can monitor cluster telemetry and log information from a single console.
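In practice, that configuration-as-code capability is expressed through a ConfigManagement resource that points each registered cluster at a Git repository of policies. A minimal sketch, assuming a hypothetical policy repository (the repo URL and directory below are placeholders):

```yaml
# Sketch of an Anthos Config Management resource; syncRepo and policyDir
# are hypothetical placeholders for a customer's own policy repository.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  git:
    syncRepo: https://github.com/example/anthos-policy   # hypothetical repo
    syncBranch: main
    policyDir: "config-root"
    secretType: none
```

Applying the same resource to every registered cluster is what keeps configurations and security policies consistent across clouds.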
A platform of many parts
Anthos is the natural evolution of the Cloud Services Platform Google had been developing before 2019. It combines Google Cloud managed services, Google Kubernetes Engine (GKE), GKE On-Prem and the Anthos Config Management console for unified administration, policy and security across hybrid and multi-cloud Kubernetes deployments. Add Stackdriver for observability, GCP Cloud Interconnect for high-speed connectivity, Anthos Service Mesh (based on the open source Istio project) and the Cloud Run serverless deployment service (based on the open source Knative project), and GCP aims to provide a one-stop shop for managing Kubernetes workloads, wherever they run.
Because it is based on GKE, Anthos automatically applies Kubernetes updates and security patches as they are released. GKE On-Prem installations run on VMware vSphere or bare metal, with launch partners VMware, Dell EMC, HPE, Intel and Lenovo promising to deliver Anthos on hyperconverged infrastructure.
Competing with AWS, Oracle and Microsoft
The fear of vendor lock-in is very real for companies, and offering a flexible, open path to the cloud has become close to sacred for cloud providers. Yet some want to have their cake and eat it too, locking customers into their own ecosystem once they decide to move workloads to the cloud. Amazon Web Services eventually relented on the hybrid cloud front by announcing Outposts, which helps customers bridge the gap between on-premises and cloud workloads by combining AWS-configured hardware with the services and APIs AWS manages. Then, in December 2020, AWS extended its Amazon Elastic Kubernetes Service (EKS) to workloads running both on-premises and in the AWS cloud.
Oracle Cloud at Customer and Microsoft Azure Stack are similar hybrid cloud offerings from other major players, while the platform-as-a-service offerings Red Hat OpenShift and VMware Tanzu, both powered by Kubernetes, allow containerized enterprise applications to run in hybrid and public clouds. In trying to dethrone these big rivals, Google Cloud is betting heavily that Kubernetes is the future of enterprise infrastructure. Competitors are also pushing aggressively into Kubernetes management, of course, but as the petri dish in which Kubernetes was grown, Google insists it knows the best way to run the technology.
A simplified move to Anthos
To help customers get started, Google launched Migrate for Anthos following its 2018 acquisition of Velostrata, an Israeli company specializing in cloud migration that intelligently decouples storage from compute, so businesses can leave storage on-premises while running compute in the cloud. Migrate for Anthos converts workloads into containers for Kubernetes directly from physical servers and virtual machines. How does this work? Migrate for Anthos scans the file system of a server or virtual machine and converts it into a Kubernetes persistent volume. Application containers, service containers, networking and persistent volumes come together in a Kubernetes pod, a group of containers deployed together on the same host.
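The conversion flow described above can be sketched with the migctl CLI that ships with Migrate for Anthos; the source, VM ID and migration names below are hypothetical placeholders, and exact flags vary by product version.

```shell
# Sketch of a Migrate for Anthos migration (flags vary by version;
# my-vmware-source and my-vm are hypothetical placeholders).

# Create a migration plan for a VM from a previously configured source.
migctl migration create my-migration \
    --source my-vmware-source \
    --vm-id my-vm \
    --intent Image

# Once the plan completes, generate the container artifacts:
# Dockerfile, deployment YAML and the extracted persistent volumes.
migctl migration generate-artifacts my-migration
```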
For GCP customers, getting started with Anthos is as simple as creating a new GKE cluster, with the Istio service mesh enabled, from the console. For on-premises customers, the first step is to set up a GKE On-Prem cluster and migrate the current application. Once that cluster is registered with GCP, installing Istio is enough to gain workload visibility across all clusters. Then, by enabling Anthos Config Management on the GKE clusters, all Kubernetes and Istio policies can be managed in one place.
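The same onboarding steps can be driven from the gcloud CLI. A minimal sketch, assuming a hypothetical project and cluster name (command groups and flags differ across Anthos versions):

```shell
# Sketch of Anthos onboarding via gcloud; demo-cluster and the zone
# are placeholders, and flags may differ by Anthos/gcloud version.

# 1. Create a GKE cluster in Google Cloud (GKE On-Prem is set up separately).
gcloud container clusters create demo-cluster --zone us-central1-a

# 2. Register the cluster with the project so Anthos can manage it.
gcloud container hub memberships register demo-cluster \
    --gke-cluster us-central1-a/demo-cluster \
    --enable-workload-identity

# 3. Enable Anthos Config Management for the registered clusters.
gcloud beta container hub config-management enable
```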
Google Cloud Anthos pricing
Google Cloud Anthos is available on a pay-as-you-go or monthly subscription basis, with discounts for longer commitments. For cloud customers, Anthos costs $8 per vCPU per month pay-as-you-go, or $6 per vCPU per month on a subscription, regardless of the public cloud platform on which the workload runs.
For on-premises customers running on VMware or bare metal, Anthos costs $24 per vCPU per month pay-as-you-go. A free trial lets new customers consume up to $800 of usage over up to 30 days. (Prices are not listed in euros.)
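To see what those rates mean in practice, here is a small arithmetic sketch for a hypothetical cluster of 3 nodes with 4 vCPUs each (an assumed example, not a quoted configuration):

```shell
# Hypothetical example: 3 nodes x 4 vCPUs = 12 vCPUs under Anthos management.
vcpus=12
pay_as_you_go=$((vcpus * 8))   # cloud, pay-as-you-go: $8 per vCPU per month
subscription=$((vcpus * 6))    # cloud, subscription: $6 per vCPU per month
on_prem=$((vcpus * 24))        # on-premises: $24 per vCPU per month
echo "cloud PAYG: \$$pay_as_you_go, subscription: \$$subscription, on-prem: \$$on_prem"
# prints: cloud PAYG: $96, subscription: $72, on-prem: $288
```

At this scale the on-premises rate is three times the cloud pay-as-you-go rate, which is worth factoring into any hybrid deployment plan.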