
Breaking through data-architecture gridlock to scale AI

Large-scale data modernization and rapidly evolving data technologies can tie up AI transformations. Five steps give organizations a way to break through the gridlock.


Technology change is certainly not easy. But often, we find, the culprit is not technical complexity; it’s process complexity.


Traditional approaches to architecture design and evaluation can paralyze progress. Organizations overplan and overinvest in developing road-map designs, then spend months on technology assessments and vendor comparisons that often go off the rails as stakeholders debate the right path in a rapidly evolving landscape.


Once organizations have a plan and are ready to implement, their efforts are often stymied as teams struggle to bring these behemoth blueprints to life and put changes into production. Amid it all, business leaders wonder what value they’re getting from these efforts.


The good news is that data and technology leaders can break this gridlock by rethinking how they approach modernization. This article shares five practices that leading organizations use to accelerate their modernization efforts and deliver value faster. Their work offers a proven formula for those still struggling to get their efforts on track and to give their companies a competitive edge.


1. Take advantage of a road-tested blueprint

Data and technology leaders no longer need to start from scratch when designing a data architecture. The past few years have seen the emergence of a reference data architecture that provides the agility to meet today’s need for speed, flexibility, and innovation (Exhibit 1). It has been road-tested in hundreds of IT and data transformations across industries, and we have observed its ability to reduce costs for traditional AI use cases and to enable faster time to market and greater reusability for new AI initiatives.

2. Build a minimum viable product, and then scale

Organizations can realize results faster by taking a use-case approach. Here, leaders build and deploy a minimum viable product (MVP) that delivers the specific data components required for each desired use case and then make adjustments as needed based on user feedback.
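To make this concrete, consider a hypothetical churn-prediction use case. The sketch below (Python; the use case, table names, and columns are our assumptions for illustration, not from the article) delivers only the two data components that use case needs and defers everything else to later iterations.

```python
import pandas as pd

def build_churn_mvp(customers_path: str, usage_path: str) -> pd.DataFrame:
    """Deliver only the data components the churn use case needs:
    customer master data joined with aggregated usage history."""
    customers = pd.read_csv(customers_path)  # e.g., id, segment, tenure_months
    usage = pd.read_csv(usage_path)          # e.g., id, month, minutes_used

    # Aggregate raw usage into the single feature the MVP requires.
    monthly_usage = usage.groupby("id", as_index=False)["minutes_used"].mean()
    monthly_usage = monthly_usage.rename(columns={"minutes_used": "avg_minutes"})

    # Join into one consumable table; everything else waits for later iterations.
    return customers.merge(monthly_usage, on="id", how="left")

if __name__ == "__main__":
    features = build_churn_mvp("customers.csv", "usage.csv")
    features.to_parquet("churn_features.parquet")  # hand-off to the use-case team
```

User feedback on this first thin slice then shapes the next iteration, rather than a full architecture build-out.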

3. Prepare your business for change

Legitimate business concerns over the impact any changes might have on traditional workloads can slow modernization efforts to a crawl. Companies often spend significant time comparing the risks, trade-offs, and business outputs of new and legacy technologies to prove out the new technology.

However, we find that legacy solutions cannot match modern technologies, such as data lakes, on business performance, cost savings, or risk reduction.

Additionally, legacy solutions won’t enable businesses to achieve their full potential, such as the 70 percent cost reduction and greater flexibility in data use that numerous banks have achieved from adopting a data-lake infrastructure for their ingestion layer.

As a result, rather than engaging in detailed evaluations against legacy solutions, data and technology leaders better serve their organization by educating business leaders on the need to let go of legacy technologies. One telecom provider, for example, set up mandatory technology courses for its top 300 business managers to increase their data and technology literacy and facilitate decision making.

As part of the training, the data leadership team (including engineers, scientists, and practitioners) shared the organization’s new data operating model, recent technology advances, and target data architecture to help provide context for the work.


4. Build an agile data-engineering organization

In our experience, successful modernization efforts rely on an integrated team and an engineering culture centered on data to accelerate the implementation of new architectural components. Achieving this requires the right structural and cultural elements.

From an organizational perspective, we see a push toward reorienting the data organization toward a product and platform model, with two types of teams:

  • Data platform teams, consisting of data engineers, data architects, data stewards, and data modelers, build and operate the architecture. They focus on ingesting and modeling data, automating pipelines, and building standard APIs for consumption (see the sketch after this list), while ensuring high availability of critical data, such as customer data.

  • Data product teams, consisting mostly of data scientists, translators, and business analysts, focus on the use of data in business-driven AI use cases such as campaign management. (To see how this structure enables efficiency even in larger, more complex organizations, see sidebar, “Sharing data across subsidiaries.”)
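As an illustration of the “standard APIs for consumption” mentioned above, the sketch below uses FastAPI (our choice for illustration; the article prescribes no framework, and the endpoint and fields are hypothetical) to expose curated customer data behind a stable, versioned contract, so product teams never query the underlying stores directly.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In production this would read from the curated data layer;
# an in-memory dict stands in for it here.
CUSTOMERS = {"c-001": {"customer_id": "c-001", "segment": "retail", "tenure_months": 42}}

class Customer(BaseModel):
    customer_id: str
    segment: str
    tenure_months: int

@app.get("/v1/customers/{customer_id}", response_model=Customer)
def get_customer(customer_id: str) -> Customer:
    """Stable, versioned contract for data product teams to consume."""
    record = CUSTOMERS.get(customer_id)
    if record is None:
        raise HTTPException(status_code=404, detail="unknown customer")
    return Customer(**record)
```

Because consumers depend only on the versioned contract, the platform team can rework storage and pipelines underneath without breaking downstream use cases.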

The cultural elements are aimed at improving talent recruiting and management to ensure engineers are learning and growing.


5. Automate deployment using DataOps

Changing the data architecture and the associated data models and pipelines is a cumbersome activity. A big chunk of engineering time is spent on reconstructing extract, transform, and load (ETL) processes after architectural changes have been made, or on reconfiguring AI models to fit new data structures.


A method that aims to change this is DataOps, which applies a DevOps approach to data, just as MLOps applies it to AI. Like DevOps, DataOps is structured into continuous integration and deployment phases spanning the delivery life cycle: development, testing, deployment, and monitoring. The focus is on eliminating “low-value,” automatable activities from engineers’ to-do lists: instead of manually assessing code quality or managing test data and data quality, engineers can spend their time building.
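For example, such automation might take the form of data-quality gates that run in the continuous integration phase on every change. The pytest-style sketch below is a minimal illustration; the table, columns, and thresholds are assumptions for the example.

```python
import pandas as pd

def load_pipeline_output() -> pd.DataFrame:
    # In CI this would read the output of the ETL job under test.
    return pd.read_parquet("output/customer_features.parquet")

def test_no_duplicate_keys():
    df = load_pipeline_output()
    assert not df["customer_id"].duplicated().any(), "primary key must be unique"

def test_required_columns_present():
    df = load_pipeline_output()
    assert {"customer_id", "segment", "avg_minutes"} <= set(df.columns)

def test_null_rate_within_tolerance():
    df = load_pipeline_output()
    # Fail the build if more than 1 percent of rows lack a segment.
    assert df["segment"].isna().mean() <= 0.01
```

Checks like these run automatically on every commit, so no engineer has to verify key uniqueness, schema completeness, or null rates by hand.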


A structured and automated pipeline that uses synthetic data for testing and machine learning for data-quality checks can bring code and the accompanying ETL and data-model changes into production much faster.
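Below is a minimal sketch of how synthetic data and machine learning might combine in such a pipeline, assuming scikit-learn’s IsolationForest as the anomaly detector and NumPy-generated records as the synthetic baseline (the article prescribes neither).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "known-good" records stand in for production history,
# so the pipeline can be tested without touching sensitive data.
baseline = rng.normal(loc=[50.0, 12.0], scale=[10.0, 3.0], size=(1_000, 2))

# Train an anomaly detector on the healthy data profile.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A new batch from the changed pipeline: mostly normal, plus a corrupted row.
new_batch = np.vstack([rng.normal([50.0, 12.0], [10.0, 3.0], size=(99, 2)),
                       [[500.0, -40.0]]])  # out-of-range values

flags = detector.predict(new_batch)  # -1 marks suspected data-quality issues
print(f"flagged {(flags == -1).sum()} of {len(new_batch)} rows for review")
```

A gate like this can run after every ETL or data-model change, flagging suspect batches before they reach production.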

