The data mesh makes it possible to relaunch initiatives around data.

In a world where the diversity and quantity of data are exploding, data mesh-enabled architectures represent the future of data consumption. But how should companies prepare to implement and use them?

Today, every organization must increasingly drive its business through data. The explosion in the amount of information offers huge potential for exploitation, but is a double-edged sword for many companies. In practice, it brings added complexity, not only because of the challenges of aggregating data from different sources, but also because of managing and securing that data.

Data is increasingly diffuse and growing exponentially

It’s not just the amount of data that has increased over the years. It’s also its diversity, along with the relentless multiplication of data lakes, duplicated datasets, applications, and other resources, all in different formats and protocols. Some of these sources operate at a massive scale: clickstreams, the Internet of Things (IoT), and the data streams that users constantly generate are difficult to control.

This data is spread across multiple warehouses and applications, sometimes on premises, sometimes in the cloud or in multicloud environments. Faced with different business needs, analytical data storage is often scattered across different platforms, with similar or redundant content in many cases.

The result is different analytical processes running in separate systems, which typically creates silos. Repeatedly retrieving, cleaning, and transforming the same data in each silo causes delays, inconsistencies, and bottlenecks for the teams involved. As a result, the goals of real-time streaming, data democratization, and scalability are simply not achieved.

The data architectures of tomorrow

The data mesh is emerging as a new hope for companies looking to truly understand and use their data. It aims to remove bottlenecks and bring data decisions closer to those who understand the data. It offers an organizational approach, built on a unified infrastructure, that allows domains to create and share data products while applying standards for interoperability, quality, governance, and security.

A key part of this philosophy is a distributed model where each business unit – a “domain” – has its own data product managers. This allows the company to increase the speed and scale of its analytics. Indeed, domains know best how to use their own data, which reduces the number of iterations needed to reach results and improves their quality.

Instead of being a by-product, data becomes a decentralized, stand-alone product that anyone in the enterprise can use. This model also eliminates the bottleneck of centralized infrastructure and gives domains the autonomy to use the tools that best suit their needs.

Undoubtedly, data mesh-enabled architectures represent the future of data consumption. But how should companies prepare to implement and use them?

The “mesh” has nothing to fear from data virtualization

Once a company decides that a data mesh approach is the way to go, its IT managers must determine which organizational models and technologies will help implement it. Data virtualization and its management tools are prime candidates, as they are specifically designed to overlay multiple distributed systems with a unified, governed, and secure data layer.

This involves creating virtual models over any data source. These logical models apply a semantic layer, exposing all data in an understandable business form while shielding consumers from the complexity of source locations and formats. The ease of use and minimal replication enabled by virtualization significantly speed up the creation of data products compared with traditional solutions.
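As an illustration, the idea of a logical model over heterogeneous sources can be sketched in a few lines of Python. Everything here – the in-memory sources, the field names, and the `virtual_view` helper – is a hypothetical toy, not any vendor’s API; the point is that renaming and unification happen at read time, with no copy of the underlying data.

```python
def virtual_view(fetch, field_map):
    """Expose rows from a source under business-friendly names at read time."""
    def query():
        # Translation happens when the view is queried: the source data
        # stays where it is and is never duplicated.
        return [{biz: row[src] for biz, src in field_map.items()}
                for row in fetch()]
    return query

# Two hypothetical sources with different native schemas:
# a warehouse table and rows returned by a REST API.
warehouse_rows = [{"cust_id": 1, "rev_eur": 1200.0}]
api_rows = [{"customerId": 2, "revenue": 800.0}]

wh_view = virtual_view(lambda: warehouse_rows,
                       {"customer": "cust_id", "revenue": "rev_eur"})
api_view = virtual_view(lambda: api_rows,
                        {"customer": "customerId", "revenue": "revenue"})

def customers():
    # The unified "data product": one business schema over both sources.
    return wh_view() + api_view()
```

Consumers call `customers()` and see a single business schema, without knowing which rows came from the warehouse and which from the API.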

The most advanced solutions on the market allow access to data products through a variety of interfaces (SQL, REST, OData, GraphQL, MDX, etc.) without a developer having to write code. They also enforce governance and security rules. Data products can also be automatically published to a global enterprise catalog, which acts as a “marketplace” where users can browse and discover them.
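The catalog’s “marketplace” role can be sketched with a deliberately simplified Python example. The `Catalog` class, its methods, and the registered product are all invented for illustration; a real catalog would add versioning, ownership workflows, and access control.

```python
class Catalog:
    """A toy data-product catalog: register products, then discover them."""

    def __init__(self):
        self._products = {}

    def register(self, name, owner_domain, description, endpoints):
        # `endpoints` lists the interfaces the product is exposed through,
        # e.g. a SQL view name and a REST path.
        self._products[name] = {
            "owner": owner_domain,
            "description": description,
            "endpoints": endpoints,
        }

    def search(self, keyword):
        # Discovery by keyword over product names and descriptions.
        kw = keyword.lower()
        return [name for name, meta in self._products.items()
                if kw in name.lower() or kw in meta["description"].lower()]

catalog = Catalog()
catalog.register(
    "customer_revenue", "sales",
    "Monthly revenue per customer",
    {"sql": "vw_customer_revenue",
     "rest": "/api/products/customer_revenue"},
)
```

A user in another domain can then run `catalog.search("revenue")` to find the product and pick whichever endpoint suits their tooling.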

Data virtualization also meets governance requirements. Not only does it reduce data duplication and provide a single access point, but its virtual layer lets companies automate the enforcement of common security policies. For example, administrators can mask salary fields in all data products unless the user holds a specific HR role or sits at a particular level in the hierarchy.
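The salary-masking example can be made concrete with a minimal Python sketch of a policy applied in the virtual layer. The role name, field names, and `apply_policy` helper are assumptions for the example, not a real product’s policy engine.

```python
def apply_policy(rows, user_roles,
                 protected=frozenset({"salary"}), allowed_role="hr"):
    """Mask protected fields for users who lack the required role."""
    if allowed_role in user_roles:
        # Privileged users see the data as stored.
        return rows
    # Everyone else gets the same rows with protected fields masked.
    return [{k: ("***" if k in protected else v) for k, v in row.items()}
            for row in rows]

employees = [{"name": "Ana", "salary": 52000}]
```

Because the policy lives in one shared layer rather than in each consuming application, `apply_policy(employees, {"sales"})` masks the salary while `apply_policy(employees, {"hr"})` does not, and the rule only has to be defined once.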

Undoubtedly, the data mesh brings an unparalleled new approach to supporting decision-making and analytical systems. Focusing on delivering, governing, and using data to close gaps, prevent redundancy, and ensure consistency will eliminate bottlenecks that have plagued businesses for decades. By implementing such architectures on modern technologies such as data virtualization, companies can take a new step forward and truly exploit the full potential of their data.

By Vincent Fages-Gouyou, EMEA Product Management Director at Denodo
