Five reasons to restore data from the cloud

The transition to cloud computing is not always the smooth journey we imagine. While the cloud attracts an ever-increasing share of enterprise budgets (a trend industry analysts predict is set to continue), it is not a panacea. Some companies have been forced to backtrack and repatriate applications and data they had moved to the cloud. For now, these cases remain in the minority, but the possibility of repatriation should be considered in any cloud strategy.

The notion of on-site data repatriation is inherent in certain use cases, such as backup and restore. It can also be dictated by financial, practical or even regulatory considerations. Let’s look at the top five reasons to restore data from the cloud.

1- Reduce costs

Cloud computing is not always cheaper than on-premises solutions. And costs can change: vendors raise their prices, needs evolve, and companies often underestimate cloud operating costs. With an on-demand or pay-per-use service, the more you use the cloud (in the form of storage or processing resources), the higher the bill. Businesses risk seeing their storage needs quickly outgrow their budget. With on-premises systems, where hardware is purchased or leased, most costs do not vary with usage.

In the cloud, the more a service is used, the greater the cost. This applies to data storage in general, as well as to specific operations such as reading data from outside the cloud (even if only from company offices), using related resources such as security and administration tools, or writing to databases.
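To make that pay-per-use dynamic concrete, here is a minimal sketch in Python comparing a usage-based cloud bill against a flat on-premises cost. Every rate and volume in it is a hypothetical illustration, not any provider’s actual pricing.

```python
# Minimal sketch: usage-based cloud bill vs. flat on-premises cost.
# All rates and volumes below are hypothetical, not real pricing.

STORAGE_RATE = 0.023       # $ per GB-month stored (hypothetical)
EGRESS_RATE = 0.09         # $ per GB read out of the cloud (hypothetical)
ONPREM_FLAT_COST = 2500.0  # $ per month for owned/leased hardware (hypothetical)

def monthly_cloud_cost(stored_gb: float, egress_gb: float) -> float:
    """Cloud cost grows with usage: data held plus data read out."""
    return stored_gb * STORAGE_RATE + egress_gb * EGRESS_RATE

# As usage grows, the pay-per-use bill can overtake the fixed on-prem cost.
for stored_gb in (10_000, 50_000, 100_000):
    egress_gb = stored_gb * 0.2  # assume 20% of the data is read out monthly
    cloud = monthly_cloud_cost(stored_gb, egress_gb)
    cheaper = "cloud" if cloud < ONPREM_FLAT_COST else "on-premises"
    print(f"{stored_gb:>7} GB stored: cloud ${cloud:,.0f}/mo "
          f"vs on-prem ${ONPREM_FLAT_COST:,.0f}/mo -> {cheaper} cheaper")
```

With these illustrative numbers, the cloud wins at small volumes but the crossover arrives somewhere before 100 TB; the point is not the figures themselves but that the comparison should be re-run as usage grows.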

In addition, the cloud provider may raise its prices. Depending on the contract, companies can experience rapid cost increases, sometimes to the point where an on-premises solution is ultimately more economical.

2- Comply with security policies or regulations

Regulatory obligations should not, by themselves, be a reason to pull data back from the cloud, nor does repatriation mean that the initial migration was poorly planned. Moreover, there is no evidence that a public cloud deployment is less secure than an on-premises architecture, especially if adequate security rules are applied and the systems are properly configured.

However, while security breaches are rare among public cloud providers, it is common for customers to misconfigure their cloud infrastructure. In the event of data loss or a breach, the company may decide to bring its data back on site, if only to minimize the impact on its reputation.
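To illustrate the kind of misconfiguration at issue, the sketch below checks whether an AWS S3 bucket fully enables Block Public Access, one of the most common exposure points. It assumes the boto3 library and configured AWS credentials; the bucket name is a hypothetical placeholder, and a real audit would cover many more settings.

```python
# Minimal sketch: detect one common cloud misconfiguration, a publicly
# exposable S3 bucket. Assumes AWS credentials are configured; the
# bucket name is a hypothetical placeholder.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-company-data"  # hypothetical bucket name

def block_public_access_enabled(bucket: str) -> bool:
    """Return True only if all four Block Public Access settings are on."""
    try:
        conf = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return False  # no configuration at all: the bucket may be exposable
        raise
    return all(conf.values())

if not block_public_access_enabled(BUCKET):
    print(f"WARNING: {BUCKET} does not fully block public access")
```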

In the field of regulation, public cloud providers, including the hyperscalers, have taken steps to meet government and industry requirements. Specific cloud services are available for sensitive data, such as offerings compliant with HIPAA or PCI DSS.

However, the main problem is often the location of the data. Although major cloud storage providers now offer storage in specific geographies, a business may still decide, or be forced to decide, that the best solution is to move data to an on-premises system or a local data center.

“It’s wrong to think that regulations create significant barriers to moving applications to the cloud,” said Adam Stringer, business resilience expert at PA Consulting. “Regulators demand rigor, as with any other outsourcing arrangement, but there are many compelling examples of highly regulated companies moving to the cloud. The secret is careful planning.”

Investigations can also complicate the situation. If a regulator, police force, or court requires in-depth analysis of data, doing so in the cloud may be impossible, or at best very expensive. It may then be necessary to bring the data back in-house.

3- Reduce latency and data inertia

Although the cloud offers virtually unlimited storage capacity, it depends on internet connections, which introduce latency. Some applications (backup and restore, messaging and office automation, on-demand software packages) are not particularly sensitive to latency: the quality of enterprise connectivity is now good enough that users notice no slowdown.

But for other applications, including real-time analytics, databases, security tools, and those tied to sensors or other connected objects, latency can be far more critical. System architects must consider the latency between the data source, storage or processing resources, and the user, as well as the latency between the cloud services themselves.
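As a rough way to quantify that sensitivity, the following sketch measures round-trip latency to a service endpoint over repeated requests, the kind of baseline an architect might gather before deciding between cloud and on-premises placement. The endpoint URL is a hypothetical placeholder.

```python
# Minimal sketch: measure round-trip latency to an endpoint so a cloud
# service can be compared against an on-premises alternative.
# The URL is a hypothetical placeholder.
import statistics
import time
import urllib.request

ENDPOINT = "https://storage.example.com/health"  # hypothetical endpoint

def measure_latency_ms(url: str, samples: int = 10) -> list[float]:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read(1)  # first byte is enough to time a round trip
        timings.append((time.perf_counter() - start) * 1000)
    return timings

timings = measure_latency_ms(ENDPOINT)
print(f"median {statistics.median(timings):.1f} ms, "
      f"max {max(timings):.1f} ms over {len(timings)} samples")
```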

Technologies such as caching, network optimization, or equipment placed close to users (known as edge computing) can reduce latency. But sometimes the simplest solution is to bring the data back in-house. Network paths are shortened, and the IT team can fine-tune storage, processing, and networking to the application’s needs.

Beyond latency, data placement should also be considered in order to address inertia. If most of the data sits in the cloud and is processed there, data inertia is not an obstacle. But if data constantly flows between multiple clouds and on-premises storage or processing resources, there is a problem.

4- Fix poorly designed cloud migrations

Sometimes companies bring data back simply because the move to the cloud did not meet their expectations. In this case, they are looking to “save face,” according to Forrester analyst Naveen Chhabra. “They tried to adapt an application to the cloud when, from an architectural point of view, it didn’t make sense,” he explains.

Perhaps the workload was not designed for the cloud, or the transition was not properly planned or executed. “If the data architecture you move to the cloud is anarchic, you will simply reproduce that anarchy in the cloud,” said Adam Stringer of PA Consulting. “Moving to the cloud will not, on its own, solve IT design issues.”

And when companies want to use the cloud, whether as part of a redeployment or an entirely new project, they must apply the same or better design standards than they apply on premises. “Architectural rigor is as important for cloud deployments as it is on premises,” says Adam Stringer. “If they don’t get it right, companies will end up having to repatriate some of their assets.”

This does not mean that repatriation will be easy, or even that it will solve the problem. But at the very least, the IT team will have the opportunity to take stock, review what went wrong, and rethink how to use the cloud more effectively in the future.

5- Deal with provider failures

Cloud provider failure is a major reason for data repatriation, and here the customer probably has no choice. Ideally, the provider will give businesses advance notice and a realistic time frame to recover their data or transfer it to another cloud provider. But a provider may go bankrupt without notice, or technical or environmental problems may force it out of business overnight. Companies then have to rely on backup copies of their data, held on site or with another provider.

Fortunately, complete provider failure is rare. But experience from recent cloud service outages shows that, at a minimum, organizations need a plan to secure and recover their data in the event of a problem. And on-premises technology is likely to play a key role in any recovery plan, at least until the business finds new resources in the cloud.
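Such a plan can be rehearsed in code. As a minimal sketch, the script below uses boto3 to compare a cloud bucket’s object listing against an on-premises mirror and flag anything lacking a valid local copy; the bucket name and local path are hypothetical placeholders.

```python
# Minimal sketch: verify that an on-premises copy holds every object in
# a cloud bucket, so recovery remains possible if the provider fails.
# Bucket name and local path are hypothetical placeholders.
import pathlib
import boto3

BUCKET = "example-company-backups"           # hypothetical bucket
LOCAL_ROOT = pathlib.Path("/backups/cloud")  # hypothetical on-prem mirror

s3 = boto3.client("s3")
missing = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        local = LOCAL_ROOT / obj["Key"]
        # Size comparison is a cheap first check; a real plan would also
        # verify checksums and periodically test actual restores.
        if not local.exists() or local.stat().st_size != obj["Size"]:
            missing.append(obj["Key"])

if missing:
    print(f"{len(missing)} objects lack a valid on-premises copy")
else:
    print("On-premises copy is complete")
```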

“The question to ask before moving an application to the cloud is whether the move improves resilience for customers or the service offered to the market,” explains Adam Stringer of PA Consulting. “If the migration is simply about cutting costs, the costs that may later be incurred to restore resilience could wipe out any benefits.”
