Google announced a series of general availability updates for its Bigtable cloud database service at the end of January.
Google Cloud Bigtable is a managed NoSQL database service capable of handling both analytical and operational workloads. The new updates increase storage capacity: each node can now address up to 5 TB of SSD storage, double the previous 2.5 TB limit, while HDD storage rises to 16 TB per node, up from 8 TB previously.
By comparison, Atlas, MongoDB’s DBaaS, can accommodate 4 TB of physical storage per cluster, although sharding makes it possible to far exceed that threshold and store several tens of terabytes per cluster.
AWS DocumentDB has a storage quota of 64 TB per cluster. Bigtable, for its part, scales storage per node to reach comparable capacity. Note that 5 TB of SSD storage and 16 TB of HDD storage are maximum capacities. “The best practice is to add enough nodes to your cluster so that you use only 70% of these limits,” the Google Cloud Platform (GCP) documentation recommends.
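As a rough illustration of that sizing guideline, the sketch below computes the minimum node count for a given dataset under the 70% rule; the limits come from the article, and the 40 TB workload figure is hypothetical.

```python
import math

SSD_LIMIT_TB_PER_NODE = 5.0   # new per-node SSD ceiling (from the article)
TARGET_UTILIZATION = 0.70     # GCP-recommended usage ceiling (from the article)

def min_nodes_for(data_tb: float) -> int:
    """Smallest node count keeping storage at or below 70% of the SSD limit."""
    usable_per_node = SSD_LIMIT_TB_PER_NODE * TARGET_UTILIZATION  # 3.5 TB/node
    return max(1, math.ceil(data_tb / usable_per_node))

# Example: a hypothetical 40 TB dataset needs ceil(40 / 3.5) = 12 nodes,
# versus ceil(40 / 1.75) = 23 nodes under the old 2.5 TB per-node limit.
print(min_nodes_for(40.0))  # -> 12
```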
GCP has also enhanced autoscaling, so that a NoSQL DBMS cluster can grow or shrink its compute and storage capacity based on demand, without human intervention. The update also brings better visibility into database workloads, to pinpoint the root cause of an issue more quickly.
“The new features announced for Bigtable demonstrate the continued focus on automation and scaling, which is becoming a leading challenge for modern cloud services,” said Adam Ronthal, analyst at Gartner. “They also bring improved price/performance, the key metric for evaluating and managing any cloud service, and observability, which serves as the foundation for better financial management and optimization.”
Cloud Bigtable: autoscaling goes native
One of the long-standing promises of the cloud is the ability to scale resources elastically as needed, without requiring new physical infrastructure on the end user’s side.
Programmatic scaling has long been available in Bigtable, according to Anton Gething, Bigtable product manager at Google. Many Google customers, he added, have built their own autoscaling approaches for Bigtable through its APIs; Spotify, for example, has published an open source implementation of Cloud Bigtable autoscaling.
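For a sense of what that API-driven approach looks like, here is a minimal sketch of resizing a cluster with the Python admin client; the project, instance, and cluster IDs are hypothetical.

```python
# Minimal sketch of API-driven (pre-native) scaling with the Python
# admin client; all IDs below are invented for illustration.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")
cluster = instance.cluster("my-cluster")

cluster.reload()               # fetch current state, including node count
cluster.serve_nodes += 2       # e.g., add two nodes ahead of peak traffic
operation = cluster.update()   # returns a long-running operation
operation.result(timeout=300)  # block until the resize completes
```

Homegrown autoscalers like Spotify’s essentially wrap logic of this kind in a loop driven by monitoring metrics.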
“The latest version of Bigtable introduces a native autoscaling solution,” said Anton Gething. Native autoscaling monitors Bigtable servers directly, he added, making it highly responsive: the size of a Bigtable deployment can be adjusted to match demand.
According to Anton Gething, users do not need to update their current deployments to benefit from the increased storage capacity. Bigtable, he pointed out, separates compute and storage, allowing each type of resource to scale independently.
“This storage capacity update aims to optimize costs for storage-centric workloads that require more resources, without having to increase compute capacity,” said Anton Gething.
“In one of our experiments, autoscaling reduced average daytime workload costs by more than 40%,” GCP spokespersons boasted in a blog post.
“You need two pieces of information: a target CPU utilization and a range within which your node count can vary. No calculations, no programming, and no complicated maintenance required,” they added.
Two constraints should be kept in mind, according to them: the maximum number of nodes cannot be more than 10 times the minimum number of nodes, and storage utilization cannot be configured manually; autoscaling handles it automatically.
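In practice, that configuration boils down to the two values quoted above plus the node range. The sketch below assumes the Python client’s autoscaling parameters (min_serve_nodes, max_serve_nodes, cpu_utilization_percent); the IDs and values are illustrative, not a definitive setup.

```python
# A minimal sketch of enabling native autoscaling, assuming the Python
# client exposes min_serve_nodes / max_serve_nodes / cpu_utilization_percent;
# all IDs and values are hypothetical.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

cluster = instance.cluster(
    "my-cluster",
    min_serve_nodes=3,           # floor of the node range
    max_serve_nodes=30,          # ceiling: at most 10x the minimum
    cpu_utilization_percent=60,  # target CPU utilization
)
operation = cluster.update()
operation.result(timeout=300)
```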
Google Cloud opts for native autoscaling in Bigtable and more metrics to track workloads.
Workload optimization and observability
Another new capability arrives with this Bigtable release: cluster group routing.
Anton Gething explained that in a replicated Google Cloud Bigtable instance, cluster groups provide finer control over high-availability deployments and better workload management. Prior to this update, he noted, a user of a replicated Bigtable instance could route traffic either to one of its clusters, in single-cluster routing mode, or to all of its clusters, in multi-cluster routing mode. Cluster groups, he said, now allow customers to route traffic to a subset of their clusters.
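Routing is configured through app profiles. The sketch below assumes the admin API’s MultiClusterRoutingUseAny message accepts a cluster_ids list to express a cluster group; the project, instance, and cluster IDs are invented for illustration.

```python
# Hypothetical sketch of routing an app profile to a cluster group,
# assuming MultiClusterRoutingUseAny.cluster_ids expresses the subset;
# all IDs below are invented for illustration.
from google.cloud import bigtable_admin_v2

client = bigtable_admin_v2.BigtableInstanceAdminClient()

profile = bigtable_admin_v2.types.AppProfile(
    description="Route analytics traffic to a two-cluster subset",
    multi_cluster_routing_use_any=(
        bigtable_admin_v2.types.AppProfile.MultiClusterRoutingUseAny(
            cluster_ids=["cluster-east", "cluster-west"]  # the cluster group
        )
    ),
)

client.create_app_profile(
    parent=client.instance_path("my-project", "my-instance"),
    app_profile_id="analytics",
    app_profile=profile,
)
```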
Google has also added a CPU usage metric broken down by app profile, giving better visibility into the performance of a given application workload. Although Bigtable administrators already had some visibility into CPU usage before this update, Anton Gething explained that GCP now surfaces new dimensions of visibility into both the methods used to access data and the tables themselves.
“Without those additional dimensions, troubleshooting can be difficult,” Gething admits. “You had visibility into the cluster’s CPU usage, but you couldn’t determine which application profile was consuming CPU, or which application was accessing which table with which method.”
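The new dimensions surface through Cloud Monitoring like any other Bigtable metric. The sketch below queries the last hour of data; the metric type (cpu_load_by_app_profile_by_method_by_table) and its label names are assumptions based on the breakdown described above, and the project ID is hypothetical.

```python
# Sketch of pulling the per-app-profile CPU metric via Cloud Monitoring.
# The metric type and the app_profile/method/table label names are
# assumptions; "my-project" is a hypothetical project ID.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

series = client.list_time_series(
    request={
        "name": "projects/my-project",
        "filter": (
            'metric.type = '
            '"bigtable.googleapis.com/cluster/cpu_load_by_app_profile_by_method_by_table"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    labels = ts.metric.labels  # assumed: app_profile, method, table
    print(labels.get("app_profile"), labels.get("method"), labels.get("table"))
```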