Data mesh applied: a decentralized, connected, and context-aware data supply chain for AI
AI applications need boatloads of data - governed, high-quality data, connected to real-world business and operational domains. At the same time, organizations wanting to apply AI face an increasingly complex landscape of data management products and technology. Inside the organization we often find a brittle hairball of pipelines managed by backlogged data management teams struggling to deliver value to business stakeholders. And often the data that does get delivered isn't trusted, because its lineage, quality, and meaning can't be expressed or easily understood.
Data mesh can address these challenges by streamlining and accelerating the process of creating useful, governed, observable, discoverable data that can train machine learning algorithms and augment generative AI inference.
In this talk for the Industrial AI Consortium, Zhamak shares the challenges of dealing with complexity in the data supply chain, and how it's getting much harder as data is being consumed and generated at exponentially accelerated rates as enterprises turn to AI for competitive differentiation.
She shows how data mesh can simplify the creation of high-quality data products to drive the data supply chains that feed machine learning and GenAI applications.
Zhamak proposes a novel approach based on Nextdata's technology: a unified data product container that encapsulates and orchestrates the end-to-end data supply chain in a box for each and every domain-centric data product. The talk introduces some of the unique properties of Nextdata's data product containerization, such as self-orchestration of data processing, built-in context at the time of data production, and dynamic interconnectedness.
The next evolution of the data/AI technology operating model requires a leap in data management technology. This talk gives you a sneak peek.
Enjoy!