The variety feature of Big Data, represented by multi-model data, has brought a new dimension of complexity to all aspects of data management. Processing a set of distinct but interlinked data models is a challenging task.
In this paper, we focus on the problem of schema inference, i.e., deriving a description of the structure of data. While several proven approaches exist in the single-model world, their application to multi-model data is not straightforward. We introduce an approach that infers a common schema of multi-model data while capturing the specifics of the individual models. It can infer local integrity constraints as well as intra- and inter-model references. In line with the standard features of Big Data, it copes with overlapping models, i.e., data redundancy, and it is designed to process large amounts of data efficiently.
To the best of our knowledge, ours is the first approach addressing schema inference in the world of multi-model databases.
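As a toy illustration of what schema inference means in the simpler single-model setting (the paper's multi-model algorithm is substantially richer), one can derive a record schema from a collection of JSON-like documents, recording the observed type of each property and marking properties missing from some documents as optional. A minimal sketch, with all names hypothetical and not taken from the paper:

```python
def infer_schema(records):
    """Infer a property -> {type, optional} mapping from JSON-like records.

    Properties absent from some records are marked optional; properties
    observed with several types get a union type such as "int|str".
    """
    schema = {}
    for i, rec in enumerate(records):
        for key, value in rec.items():
            tname = type(value).__name__
            if key not in schema:
                # A property first seen after record 0 was missing earlier.
                schema[key] = {"type": tname, "optional": i > 0}
            elif tname not in schema[key]["type"].split("|"):
                schema[key]["type"] += "|" + tname  # union of observed types
        for key in schema:
            if key not in rec:
                schema[key]["optional"] = True
    return schema
```

For example, `infer_schema([{"id": 1, "name": "a"}, {"id": "x"}])` reports `id` as a required `int|str` and `name` as an optional `str`. A real multi-model inference additionally has to align such local schemas across models and detect the intra- and inter-model references the abstract mentions.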
Fri 28 Oct, 13:30 - 15:00 (Eastern Time, US & Canada)
Schema Inference for Multi-Model Data