Framework

Estimating the Quality of Ontology-Based Annotations by Considering Evolutionary Changes

Authors: 
Gross, A; Hartung, M; Kirsten, T; Rahm, E
Year: 
2009
Venue: 
6th Intl. Workshop on Data Integration in the Life Sciences (DILS)

Ontology-based annotations associate objects, such as genes and proteins, with well-defined ontology concepts to describe object properties semantically and uniformly. Such annotation mappings are utilized in different applications and analysis studies whose results strongly depend on the quality of the annotations used. To study the quality of annotations, we propose a generic evaluation approach that considers the annotation generation methods (provenance) as well as the evolution of ontologies, object sources, and annotations.

Rule-based Management of Schema Changes at ETL sources

Authors: 
Papastefanatos, G.; Vassiliadis, P.; Simitsis, A.; Sellis, T.; Vassiliou, Y.
Year: 
2009
Venue: 
Workshop on Managing Evolution of Data Warehouses (MEDWa 2009)

In this paper, we visit the problem of managing inconsistencies that emerge in ETL processes as a result of evolution operations occurring at their sources. We abstract Extract-Transform-Load (ETL) activities as queries and sequences of views. ETL activities and their sources are uniformly modeled as a graph that is annotated with rules for the management of evolution events. Given a change to an element of the graph, our framework detects the parts of the graph that are affected by this change and highlights the way they are tuned to respond to it.

Automatic Generation of Model Translations

Authors: 
Papotti, Paolo; Torlone, Riccardo
Year: 
2007
Venue: 
CAiSE

The translation of information between heterogeneous representations is a long-standing issue. With the wide spread of cooperative applications fostered by the advent of the Internet, the problem has gained more and more attention, but there are still few and partial solutions. In general, given an information source, different translations can be defined for the same target model. In this work, we first identify general properties that “good” translations should fulfill. We then propose novel techniques for the automatic generation of model translations.

An Approach to Heterogeneous Data Translation based on XML Conversion.

Authors: 
Papotti, Paolo; Torlone, Riccardo
Year: 
2004
Venue: 
CAiSE Workshops: Web Information Systems Modeling (WISM)

In this paper, we illustrate a preliminary approach to the translation of Web data between heterogeneous formats. This work fits into a larger project whose aim is the development of a tool for the management of data described according to a large variety of formats used on the Web and the (semi)automatic translation of schemes and instances from one model to another. Data translations operate over XML representations of instances and rely on a uniform representation of models that we call metamodel.

Heterogeneous Data Translation through XML Conversion.

Authors: 
Papotti, Paolo; Torlone, Riccardo
Year: 
2005
Venue: 
Journal of Web Engineering 4(3): 189-204 (2005)

Automatically Determining Compatibility of Evolving Services

Authors: 
Becker, Karin; Lopes, Andre; Milojicic, Dejan S.; Pruyne, Jim; Singhal, Sharad
Year: 
2008
Venue: 
ICWS 2008

A major advantage of Service-Oriented Architectures (SOA) is the composition and coordination of loosely coupled services. Because the development lifecycles of services and clients are decoupled, multiple service versions have to be maintained to continue supporting older clients. Typically, versions are managed within the SOA by updating service descriptions using conventions on version numbers and namespaces. In all cases, the compatibility among service descriptions must be evaluated, which can be hard, error-prone, and costly if performed manually, particularly for complex descriptions.

Information Systems Integration and Evolution: Ontologies at Rescue

Authors: 
Curino, Carlo A.; Tanca, Letizia; Zaniolo, Carlo
Year: 
2008
Venue: 
STSM

The life of a modern Information System is often characterized by (i) a push toward integration with other systems, and (ii) the evolution of its data management core in response to continuously changing application requirements. Most of the current proposals dealing with these issues from a database perspective rely on the formal notions of mapping and query rewriting.

Managing and querying transaction-time databases under schema evolution

Authors: 
Moon, Hyun J.; Curino, Carlo A.; Deutsch, Alin; Hou, Chien-Yi; Zaniolo, Carlo
Year: 
2008
Venue: 
VLDB

The old problem of managing the history of database information is now made more urgent and complex by fast-spreading web information systems. Indeed, systems such as Wikipedia are faced with the challenge of managing the history of their databases in the face of intense database schema evolution. Our PRIMA system addresses this difficult problem by introducing two key pieces of new technology.

MeDEA: A database evolution architecture with traceability.

Authors: 
Dominguez, Eladio; Lloret, Jorge; Rubio, Angel Luis; Zapata, Maria Antonia
Year: 
2008
Venue: 
Data Knowl. Eng. 65(3): 419-441 (2008)

One of the most important challenges that software engineers (designers, developers) still face in their everyday work is the evolution of working database systems. As a step toward solving this problem, in this paper we propose MeDEA, which stands for Metamodel-based Database Evolution Architecture. MeDEA is a generic evolution architecture that allows us to maintain traceability between the different artifacts involved in any database development process. MeDEA is generic in the sense that it is independent of the particular modeling techniques being used.

A Model for Schema Integration in Heterogeneous Databases

Authors: 
Gal, A.; Trombetta, A.; Anaby-Tavor, A.; Montesi, D.
Year: 
2003
Venue: 
IDEAS

Schema integration is the process by which schemata from heterogeneous databases are conceptually integrated into a single cohesive schema. In this work we propose a modeling framework for schema integration, capturing the inherent uncertainty accompanying the integration process. The model utilizes a fuzzy framework to express a confidence measure associated with the outcome of the schema integration process.
