Google bolsters the capabilities of Cloud Bigtable

Google announced a series of general availability updates for its Bigtable cloud database service at the end of January.

Google Cloud Bigtable is a managed NoSQL database service capable of handling analytical and operational workloads. The Bigtable updates include increased storage capacity: up to 5 TB of SSD storage is now available per node, double the previous 2.5 TB limit. HDD storage is now capped at 16 TB per node, up from 8 TB previously.

By comparison, Atlas, MongoDB’s DBaaS, can accommodate 4 TB of physical storage per cluster, although sharding makes it possible to go well beyond this threshold and store several tens of terabytes per cluster.

Amazon DocumentDB has a storage quota of 64 TB per cluster. Bigtable, for its part, relies on per-node storage to reach comparable capacity. Note that 5 TB of SSD storage and 16 TB of HDD storage are maximum capacities. “The best practice is to add enough nodes to your cluster so that you use only 70% of these limits,” the Google Cloud Platform (GCP) documentation recommends.
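
As a rough illustration of that sizing guideline, here is a minimal Python sketch of how one might estimate a node count from a dataset size. Only the 5 TB per-node SSD limit and the 70% target come from the figures above; the function name and the 30 TB example dataset are hypothetical.

```python
import math

SSD_LIMIT_TB_PER_NODE = 5.0   # new per-node SSD limit announced by Google
TARGET_UTILIZATION = 0.70     # GCP documentation recommends staying at ~70% of the limit

def min_nodes_for(dataset_tb: float) -> int:
    """Smallest node count that keeps storage at or below 70% of the per-node limit."""
    usable_tb_per_node = SSD_LIMIT_TB_PER_NODE * TARGET_UTILIZATION  # 3.5 TB per node
    return math.ceil(dataset_tb / usable_tb_per_node)

# Example: a hypothetical 30 TB SSD dataset would call for at least 9 nodes.
print(min_nodes_for(30))  # -> 9
```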

GCP also provides enhanced autoscaling capabilities, so that a NoSQL cluster can grow or shrink its compute and storage capacity without human intervention, based on demand. The Bigtable update also brings better visibility into database workloads, to pinpoint the root of an issue more quickly.

“The new features announced for Bigtable demonstrate the continued focus on automation and scaling, which is becoming a leading challenge for modern cloud services,” said Adam Ronthal, analyst at Gartner. “They also bring improved price/performance, the key metric for evaluating and managing any cloud service, and observability, which serves as the foundation for better financial management and optimization.”

Cloud Bigtable: native autoscaling, revised and corrected

One of the cloud’s long-standing promises has been the ability to scale resources elastically as needed, without end users having to provision new physical infrastructure.

Programmatic scaling has always been available in Bigtable, according to Anton Gething, Bigtable product manager at Google. He added that many Google customers have built their own autoscaling approaches for Bigtable through its APIs. Spotify, for example, has released an open source implementation of Cloud Bigtable autoscaling.
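
For a sense of what such a home-grown autoscaler looks like, here is a minimal sketch using the google-cloud-bigtable Python admin client. It is not Spotify’s implementation: the identifiers, thresholds and the idea of passing in a CPU figure (read from Cloud Monitoring, for instance) are assumptions for illustration.

```python
from google.cloud import bigtable

# Hypothetical identifiers and thresholds -- adjust to your own deployment.
PROJECT_ID, INSTANCE_ID, CLUSTER_ID = "my-project", "my-instance", "my-cluster"
SCALE_UP_CPU, SCALE_DOWN_CPU = 0.70, 0.30
MIN_NODES, MAX_NODES = 3, 30

def resize_if_needed(cpu_load: float) -> None:
    """Add or remove one node based on a CPU load figure supplied by the caller."""
    client = bigtable.Client(project=PROJECT_ID, admin=True)
    cluster = client.instance(INSTANCE_ID).cluster(CLUSTER_ID)
    cluster.reload()  # fetch the current node count from the API

    nodes = cluster.serve_nodes
    if cpu_load > SCALE_UP_CPU and nodes < MAX_NODES:
        cluster.serve_nodes = nodes + 1
        cluster.update()
    elif cpu_load < SCALE_DOWN_CPU and nodes > MIN_NODES:
        cluster.serve_nodes = nodes - 1
        cluster.update()
```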

“The latest version of Bigtable introduces a native autoscaling solution,” said Anton Gething.

He added that native autoscaling monitors the Bigtable servers directly, making it highly responsive. As a result, the size of a Bigtable deployment can be adjusted to match demand.

According to Anton Gething, users do not need to update their current deployment to benefit from the increased storage capacity. He pointed out that Bigtable separates compute and storage, allowing each type of resource to scale independently.

“This storage capacity update aims to optimize costs for storage-centric workloads that require more resources, without having to increase compute capacity,” said Anton Gething.

“In one of our experiments, autoscaling reduced average daytime workload costs by more than 40%,” GCP spokespersons boasted in a blog post.

“You only need two pieces of information: a target CPU utilization and a range within which your node count can be kept. No calculations, no programming and no complicated maintenance are required,” they added.

Two guardrails must be kept in mind, according to them: the maximum number of nodes cannot exceed 10 times the minimum number of nodes, and storage utilization cannot be configured manually; autoscaling handles it automatically.
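
Here is a small sketch of those two inputs and the first guardrail, under the assumption that they are captured in a configuration object of your own; the class and field names are illustrative and are not part of any Bigtable API.

```python
from dataclasses import dataclass

@dataclass
class AutoscalingConfig:
    """The two inputs Bigtable's native autoscaling asks for, per the GCP blog post."""
    cpu_target_percent: int  # target CPU utilization
    min_nodes: int           # lower bound of the node range
    max_nodes: int           # upper bound of the node range

    def validate(self) -> None:
        # Guardrail described by Google: the maximum cannot exceed 10x the minimum.
        if self.max_nodes > 10 * self.min_nodes:
            raise ValueError(
                f"max_nodes ({self.max_nodes}) may not exceed 10x min_nodes ({self.min_nodes})"
            )
        # Storage utilization is deliberately absent: autoscaling manages it automatically.

AutoscalingConfig(cpu_target_percent=60, min_nodes=3, max_nodes=30).validate()    # OK
# AutoscalingConfig(cpu_target_percent=60, min_nodes=1, max_nodes=20).validate()  # would raise
```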

Google Cloud opts for native autoscaling in Bigtable and adds more metrics to track workloads.

Workload optimization and observability

Another new capability comes to Bigtable: cluster group routing.

Anton Gething explained that in a replicated Google Cloud Bigtable instance, cluster groups provide finer control over high-availability deployments and better workload management. Prior to this update, he noted, a user of a replicated Bigtable instance could route traffic either to one of its clusters, in single-cluster routing mode, or to all of its clusters, in multi-cluster routing mode. The product manager said cluster groups now allow customers to route traffic to a subset of their clusters.
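
To situate where cluster groups fit, the snippet below shows the two pre-existing routing modes using app profiles in the google-cloud-bigtable Python admin client; the project, instance, profile and cluster names are hypothetical, and the exact client parameter for restricting a profile to a cluster group is left out, since it may vary by client version.

```python
from google.cloud import bigtable
from google.cloud.bigtable.enums import RoutingPolicyType

client = bigtable.Client(project="my-project", admin=True)  # hypothetical project
instance = client.instance("my-instance")                   # hypothetical instance

# Pre-existing option 1: pin an application's traffic to a single cluster.
analytics_profile = instance.app_profile(
    "analytics-profile",
    routing_policy_type=RoutingPolicyType.SINGLE,
    cluster_id="cluster-us-east1",
)

# Pre-existing option 2: let Bigtable route traffic to any available cluster.
serving_profile = instance.app_profile(
    "serving-profile",
    routing_policy_type=RoutingPolicyType.ANY,
)

# Calling .create() on a profile would persist it to the instance.
# The new cluster group routing sits between these two modes: traffic is
# restricted to a chosen subset of clusters rather than one or all of them.
```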

Google has also added a CPU usage metric specific to each app profile. This provides better visibility into the performance of a given application workload. Although Google already gave Bigtable administrators some visibility into CPU usage before this update, Anton Gething explained that GCP now offers new dimensions of visibility into both the methods used to access data and the tables being accessed.
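
Below is a hedged sketch of how one might pull that per-app-profile CPU dimension out of Cloud Monitoring with the google-cloud-monitoring Python client; the metric name is an assumption about how the new dimension is exposed, and the project ID is hypothetical.

```python
import time
from google.cloud import monitoring_v3

# Assumed metric name for the new per-app-profile CPU dimension; verify it
# against the Bigtable metrics list before relying on it.
METRIC = "bigtable.googleapis.com/cluster/cpu_load_by_app_profile_by_method_by_table"

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

series = client.list_time_series(
    request={
        "name": "projects/my-project",  # hypothetical project
        "filter": f'metric.type = "{METRIC}"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

# Each time series carries labels for the app profile, access method and table,
# which is what makes it possible to see which application consumes CPU and where.
for ts in series:
    print(ts.metric.labels, ts.resource.labels)
```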

“Without these additional dimensions, troubleshooting can be difficult,” Gething admits. “You had visibility into the cluster’s CPU usage, but you couldn’t determine which application profile was consuming CPU, or which application was accessing which table using which method.”
