Data Integration


Page 1: Data Integration

Data Integration

Helena Galhardas, DEI IST

(based on the slides of the course CIS 550 – Database & Information Systems, Univ. Pennsylvania, Zachary Ives)

Page 2: Data Integration

Agenda

- Overview of Data Integration
- Some systems:
  - The LSD System
  - The TSIMMIS System
  - Information Manifold

Page 3: Data Integration

A Problem

Even with normalization and the same needs, different people will arrive at different schemas. In fact, most people also have different needs! Often people build databases in isolation, then want to share their data:
- Different systems within an enterprise
- Different information brokers on the Web
- Scientific collaborators
- Researchers who want to publish their data for others to use

This is the goal of data integration: tie together different sources, controlled by many people, under a common schema.

Page 4: Data Integration

Motivating example (1)

FullServe: a company that provides internet access to homes, but also sells products to support the home computing infrastructure (e.g., modems, wireless routers, etc.).

FullServe is a predominantly American company and decided to acquire EuroCard, a European company that is mainly a credit card provider, but has recently started leveraging its customer base to enter the internet market.

Page 5: Data Integration

Motivating Example (2)

FullServe databases:

Employee Database
  FullTimeEmp(ssn, empId, firstName, middleName, lastName)
  Hire(empId, hireDate, recruiter)
  TempEmployees(ssn, hireStart, hireEnd, name, hourlyRate)
Training Database
  Courses(courseID, name, instructor)
  Enrollments(courseID, empID, date)
Services Database
  Services(packName, textDescription)
  Customers(name, id, zipCode, streetAdr, phone)
  Contracts(custID, packName, startDate)
Sales Database
  Products(prodName, prodId)
  Sales(prodName, customerName, address)
Resume Database
  Interview(interviewDate, name, recruiter, hireDecision, hireDate)
  CV(name, resume)
HelpLine Database
  Calls(date, agent, custId, text, action)

Page 6: Data Integration

Motivating Example (3)

EuroCard databases:

Employee Database
  Emp(ID, firstNameMiddleInitial, lastName)
  Hire(ID, hireDate, recruiter)
CreditCard Database
  Customer(CustID, cardNum, expiration, currentBalance)
  CustDetail(CustID, name, address)
Resume Database
  Interview(ID, date, location, recruiter)
  CV(name, resume)
HelpLine Database
  Calls(date, agent, custId, description, followup)

Page 7: Data Integration

Motivating Example (4)

Some queries employees or managers in FullServe may want to pose:

- The Human Resources Department may want to query for all of its employees, whether in the US or in Europe. Requires access to 2 databases on the American side and 1 on the European side.
- There is a single customer support hot-line, where customers can call about any service or product they obtain from the company. When a representative is on the phone with a customer, it's important to see the entire set of services the customer is getting from FullServe (internet service, credit card, or products purchased). Furthermore, it is useful to know whether the customer is a big spender on their credit card. Requires access to 2 databases on the US side and 1 on the European side.

Page 8: Data Integration

Another example: searching for a new job (1)

Page 9: Data Integration

Another example: searching for a new job (2)

Each form (site) asks for a slightly different set of attributes (e.g., keywords describing the job, location and job category, or employer and job type).

Ideally, we would like to have a single web site where we pose our queries, and have that site integrate data from all relevant sites on the Web.

Page 10: Data Integration

Goal of data integration

Offer uniform access to a set of autonomous and heterogeneous data sources:
- Querying disparate sources
- Large number of sources
- Heterogeneous data sources (different systems, different schemas, some structured, others unstructured)
- Autonomous data sources: we may not have full access to the data, or a source may not be available all the time

Page 11: Data Integration

Why is it hard?

- Systems reasons: even with the same HW and all relational sources, the SQL supported is not always the same
- Logical reasons: different schemas (e.g., FullServe and EuroCard temporary employees), different attributes (e.g., ID), different attribute names for the same knowledge (e.g., text and action), different representations of data (e.g., first name, last name) – also known as semantic heterogeneity
- Social and administrative reasons

Page 12: Data Integration

Building a Data Integration System

Create a middleware "mediator" or "data integration system" over the sources:
- Can be warehoused (a data warehouse) or virtual
- Presents a uniform query interface and schema
- Abstracts away the multitude of sources; consults them for relevant data
- Unifies different source data formats (and possibly schemas)

Sources are generally autonomous, not designed to be integrated:
- Sources may be local DBs or remote web sources/services
- Sources may require certain input to return output (e.g., web forms): "binding patterns" describe these
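As a sketch (in Python, with invented names and data; nothing here comes from an actual mediator), a source with a binding pattern can be modeled as a relation that refuses to answer until its required inputs are bound:

```python
# Sketch: a source with a binding pattern, i.e., attributes that must be
# bound to values before the source can be queried (like a web form).
# All names and data here are illustrative.

class Source:
    def __init__(self, name, attributes, bound, tuples):
        self.name = name
        self.attributes = attributes   # output schema
        self.bound = set(bound)        # attributes that MUST be given as input
        self.tuples = tuples

    def query(self, bindings):
        """Return matching tuples; refuse if a required input is missing."""
        missing = self.bound - bindings.keys()
        if missing:
            raise ValueError(f"{self.name} requires bindings for {sorted(missing)}")
        return [t for t in self.tuples
                if all(t[a] == v for a, v in bindings.items())]

# A web-form-like source: must supply an author to get books back.
books = Source("BookForm", ["author", "title"], ["author"],
               [{"author": "Aho", "title": "Compilers"},
                {"author": "Aho", "title": "Algorithms"}])

print(books.query({"author": "Aho"}))   # two tuples
```

A query planner consults these declared binding patterns to decide which sources can be called directly and which must be fed values produced by other sources first.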

Page 13: Data Integration

Logical components of a virtual integration system

Wrappers: programs whose role is to send queries to a data source, receive answers, and possibly apply some transformation to the answer.

Mediated schema: built for the data integration application and contains only the aspects of the domain relevant to the application. It will most likely contain a subset of the attributes seen in the sources.

Source descriptions: specify the properties of the sources the system needs to know in order to use their data. The main components are semantic mappings, which specify how attributes in the sources correspond to attributes in the mediated schema. Other information includes whether sources are complete.

Page 14: Data Integration

A data integration scenario

Page 15: Data Integration

Components of a data integration system

Page 16: Data Integration

Query reformulation (1)

Rewrite the user query, posed in terms of the relations in the mediated schema, into queries referring to the schemas of the data sources.

The result is called a logical query plan.

Ex:

SELECT title, startTime
FROM Movie, Plays
WHERE Movie.title = Plays.movie AND location = "New York" AND director = "Woody Allen"

Page 17: Data Integration

Query reformulation (2)

- Tuples for Movie can be obtained from source S1, but attribute title needs to be reformulated to name.
- Tuples for Plays can be obtained from S2 or S3. Since S2 is complete for showings in NY, we choose it.
- Since source S3 requires the title of a movie as input, and the title is not specified in the query, the query plan must first access S1 and then feed the movie titles returned from S1 as inputs to S3.
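This plan can be sketched in Python with toy in-memory relations (all data invented for illustration; S1's name attribute is renamed to title, and S3 is modeled as a function that requires a title as input):

```python
# Toy version of the reformulated plan: get Movie tuples from S1
# (renaming name -> title), get Plays tuples from S2 (complete for NY),
# and probe S3 with the titles obtained from S1. All data is made up.

s1 = [  # Movie source: (name, director)
    {"name": "Manhattan", "director": "Woody Allen"},
    {"name": "Alien", "director": "Ridley Scott"},
]
s2 = [  # Plays source, complete for New York: (movie, location, startTime)
    {"movie": "Manhattan", "location": "New York", "startTime": "20:00"},
]

def s3(title):  # source that requires a movie title as input (binding pattern)
    showings = {"Manhattan": [{"movie": "Manhattan", "location": "New York",
                               "startTime": "22:00"}]}
    return showings.get(title, [])

# Reformulated query: titles and start times of Woody Allen movies in NY.
movies = [{"title": t["name"]} for t in s1 if t["director"] == "Woody Allen"]
answers = [(m["title"], p["startTime"])
           for m in movies
           for p in s2 + s3(m["title"])
           if p["movie"] == m["title"] and p["location"] == "New York"]

print(answers)  # [('Manhattan', '20:00'), ('Manhattan', '22:00')]
```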

Page 18: Data Integration

Query optimization

Accepts a logical query plan as input and produces a physical query plan, which specifies the exact order in which sources are accessed, when results are combined, and which algorithms are used for performing operations on the data.

Page 19: Data Integration

Query execution

Responsible for executing the physical query plan. Dispatches the queries to the individual sources through the wrappers and combines the results as specified by the query plan.

It may also ask the optimizer to reconsider its plan based on its monitoring of the plan's progress (e.g., if source S3 is slow).

Page 20: Data Integration

Challenges of Mapping Schemas

In a perfect world, it would be easy to match up items from one schema with another:
- Every table would have a similar table in the other schema
- Every attribute would have an identical attribute in the other schema
- Every value would clearly map to a value in the other schema

Real world: as with human languages, things don't map clearly!
- May have different numbers of tables – different decompositions
- Metadata in one relation may be data in another
- Values may not exactly correspond
- It may be unclear whether a value is the same

Page 21: Data Integration

Different Aspects to Mapping

- Schema matching / ontology alignment: how do we find correspondences between attributes?
- Entity matching / deduplication / record linkage / etc.: how do we know when two records refer to the same thing?
- Mapping definition: how do we specify the constraints or transformations that let us reason about when to create an entry in one schema, given an entry in another schema?

Page 22: Data Integration

Why Schema Matching is Important

Enterprise 1

Enterprise 2 Knowledge Base 1

World-Wide Web

Knowledge Base 2

Home users

Data integrationData translationData warehousingE-commerce

Data integration

Ontology Matching

Application has more than one schema need for schema matching!

Informationagent

Page 23: Data Integration

Why Schema Matching is Difficult

- No access to the exact semantics of concepts
  - Semantics not documented in sufficient detail
  - Schemas not adequately expressive to capture semantics
- Must rely on clues in the schema & data
  - Using names, structures, types, data values, etc.
- Such clues can be unreliable
  - Synonyms: different names => same entity: area & address => location
  - Homonyms: same name => different entities: area => location or square-feet
- Done manually by domain experts
  - Expensive and time consuming
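A tiny Python sketch of a name-based matcher with a synonym table shows both the idea and its fragility (all names and synonym entries are invented):

```python
# Sketch: a naive name-based matcher using a synonym table, illustrating
# why name clues are unreliable (synonyms help, homonyms mislead).
# The synonym table and scores are invented for illustration.

SYNONYMS = {"area": {"location"}, "address": {"location"}, "phone": {"contact"}}

def name_match(a, b):
    """Score 1.0 for equal names, 0.8 for known synonyms, else 0.0."""
    if a == b:
        return 1.0
    if b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set()):
        return 0.8
    return 0.0

print(name_match("area", "location"))     # 0.8 (synonym caught)
print(name_match("area", "square-feet"))  # 0.0 (the homonym sense is missed)
```

Any fixed table misses the homonym problem entirely: "area" meaning square-feet gets no match, while "area" meaning location does, even though the names are identical. This is why systems like LSD combine several kinds of evidence instead of relying on names alone.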

Page 24: Data Integration

The LSD Project

"Reconciling Schemas of Disparate Data Sources: A Machine Learning Approach"
AnHai Doan, Pedro Domingos, Alon Halevy
University of Washington
SIGMOD 2001

Page 25: Data Integration

Charlie comes to town

Find houses with 2 bedrooms priced under 300K

homes.com   realestate.com   homeseekers.com

Page 26: Data Integration

Data Integration

Find houses with 2 bedrooms priced under 300K

(Diagram: the query is posed against a mediated schema; below it, three wrappers expose source schema 1, source schema 2, and source schema 3 for homeseekers.com, realestate.com, and homes.com.)

Page 27: Data Integration

The LSD (Learning Source Descriptions) Approach

Suppose a user wants to integrate 100 data sources:

1. User manually creates mappings for a few sources, say 3, and shows LSD these mappings
2. LSD learns from the mappings
3. LSD proposes mappings for the remaining 97 sources

Page 28: Data Integration

Semantic Mappings between Schemas

Mediated & source schemas = XML DTDs

(Diagram: two DTD trees. One schema: house { location, num-baths, contact { name, phone } }; the other: house { address, full-baths, half-baths, contact-info { agent-name, agent-phone } }. location ↔ address, name ↔ agent-name, and phone ↔ agent-phone are 1-1 mappings; num-baths ↔ (full-baths, half-baths) is a non 1-1 mapping.)

Page 29: Data Integration

Example

Mediated schema: address, price, agent-phone, description

Schema of realestate.com: location, listed-price, phone, comments

realestate.com data:
  location: Miami, FL; Boston, MA; ...
  listed-price: $250,000; $110,000; ...
  phone: (305) 729 0831; (617) 253 1429; ...
  comments: Fantastic house; Great location; ...

homes.com data:
  price: $550,000; $320,000; ...
  contact-phone: (278) 345 7215; (617) 335 2315; ...
  extra-info: Beautiful yard; Great beach; ...

Learned hypotheses:
  If "fantastic" & "great" occur frequently in data values => description
  If "phone" occurs in the name => agent-phone

Page 30: Data Integration

LSD Contributions

1. Use of multi-strategy learning
   - well-suited to exploit multiple types of knowledge
   - highly modular & extensible
2. Extend learning to incorporate constraints
   - handles a wide range of domain & user-specified constraints
3. Develop XML learner
   - exploits the hierarchical nature of XML

Page 31: Data Integration

Multi-Strategy Learning

Use a set of base learners. Each exploits certain types of information well:
- Name learner looks at words in the attribute names
- Naïve Bayes learner looks at patterns in the data values
- Etc.

Match schema elements of a new source:
- Apply the base learners; each returns a score
- For different attributes, one learner is more useful than another
- Combine their predictions using a meta-learner

Meta-learner:
- Uses training sources to measure base learner accuracy
- Weights each learner based on its accuracy
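A minimal Python sketch of the combination step (the weights and prediction scores here are invented; the real meta-learner learns its weights from training sources via stacking):

```python
# Sketch of multi-strategy prediction combination: each base learner returns
# (label, score) pairs for a schema element; the meta-learner takes a
# weighted sum using per-learner weights derived from measured accuracy.
# Learner names, weights, and scores are invented for illustration.

def combine(predictions_per_learner, weights):
    combined = {}
    for learner, preds in predictions_per_learner.items():
        w = weights[learner]
        for label, score in preds:
            combined[label] = combined.get(label, 0.0) + w * score
    return sorted(combined.items(), key=lambda kv: -kv[1])

preds = {
    "name_learner": [("address", 0.5), ("description", 0.5)],
    "naive_bayes":  [("address", 0.8), ("description", 0.2)],
}
weights = {"name_learner": 0.4, "naive_bayes": 0.6}

print(combine(preds, weights))
# address: 0.4*0.5 + 0.6*0.8 = 0.68; description: 0.4*0.5 + 0.6*0.2 = 0.32
```

The name learner alone is undecided here; the more accurate (hence more heavily weighted) value-based learner tips the combined prediction toward address.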

Page 32: Data Integration

Base Learners

Input:
- schema information: name, proximity, structure, ...
- data information: value, format, ...

Output: prediction weighted by confidence score

Examples:
- Name learner: agent-name => (name, 0.7), (phone, 0.3)
- Naive Bayes learner: "Kent, WA" => (address, 0.8), (name, 0.2); "Great location" => (description, 0.9), (address, 0.1)

Page 33: Data Integration

The two phases of LSD

(Diagram: in the training phase, the mediated schema, source schemas, and data listings are turned into training data for the base learners L1 ... Lk; in the matching phase, the learners' predictions are combined by the meta-learner in the mapping combination step, then passed through the constraint handler, which applies domain constraints and user feedback.)

Base learners: Name Learner, XML learner, Naive Bayes, Whirl learner

Meta-learner:
- uses stacking [Ting&Witten99, Wolpert92]
- returns a linear weighted combination of the base learners' predictions

Page 34: Data Integration

Training phase

1. Manually specify 1-1 mappings for several sources
2. Extract source data
3. Create training data for each base learner
4. Train the base learners
5. Train the meta-learner

Page 35: Data Integration

Training the Learners

Schema of realestate.com: location, listed-price, phone, comments
Mediated schema: address, price, agent-phone, description

Extracted realestate.com listings:

<location> Miami, FL </> <listed-price> $250,000</> <phone> (305) 729 0831</> <comments> Fantastic house </>
<location> Boston, MA </> <listed-price> $110,000</> <phone> (617) 253 1429</> <comments> Great location </>

Training data for the Name Learner:
(location, address), (listed-price, price), (phone, agent-phone), (comments, description), ...

Training data for the Naive Bayes Learner:
("Miami, FL", address), ("$ 250,000", price), ("(305) 729 0831", agent-phone), ("Fantastic house", description), ...
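A toy word-based Naive Bayes learner over such (value, label) pairs might look as follows in Python (a sketch with add-one smoothing, not LSD's actual implementation):

```python
# Sketch: a tiny word-based Naive Bayes learner trained on (value, label)
# pairs like those extracted from realestate.com. Uses add-one smoothing.
# This is an illustrative toy, not the learner LSD actually shipped.
import math
from collections import Counter, defaultdict

class TinyNB:
    def fit(self, pairs):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter()
        for value, label in pairs:
            self.label_counts[label] += 1
            self.word_counts[label].update(value.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, value):
        total = sum(self.label_counts.values())
        best, best_lp = None, -math.inf
        for label, n in self.label_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in value.lower().split():
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

nb = TinyNB().fit([("Miami, FL", "address"), ("Boston, MA", "address"),
                   ("Fantastic house", "description"),
                   ("Great location", "description")])
print(nb.predict("Fantastic location"))  # "description"
```

Even with four training pairs, value vocabulary ("fantastic", "great") separates descriptions from addresses, which is exactly the kind of hypothesis shown on the earlier example slide.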

Page 36: Data Integration

The matching phase

1. Extract and collect data
2. Match each source-DTD tag
3. Apply the constraint handler

Page 37: Data Integration

Figure 5 - SIGMOD’01 paper

Page 38: Data Integration

Applying the Learners

Schema of homes.com: area, day-phone, extra-info
Mediated schema: address, price, agent-phone, description

<area>Seattle, WA</>  <area>Kent, WA</>  <area>Austin, TX</>
<day-phone>(278) 345 7215</>  <day-phone>(617) 335 2315</>  <day-phone>(512) 427 1115</>
<extra-info>Beautiful yard</>  <extra-info>Great beach</>  <extra-info>Close to Seattle</>

The Name Learner and the Naive Bayes learner each produce weighted predictions per tag, e.g. for the area values: (address,0.8), (description,0.2); (address,0.6), (description,0.4); (address,0.7), (description,0.3).

The Meta-Learner combines them:
  area: (address,0.7), (description,0.3)
  day-phone: (agent-phone,0.9), (description,0.1)
  extra-info: (address,0.6), (description,0.4)

Page 39: Data Integration

Domain Constraints

Impose semantic regularities on sources; verified using schema or data.

Examples:
  a = address & b = address => a = b
  a = house-id => a is a key
  a = agent-info & b = agent-name => b is nested in a

Can be specified up front when creating the mediated schema, independent of any actual source schema.

Page 40: Data Integration

The Constraint Handler

Predictions from the Meta-Learner:
  area: (address,0.7), (description,0.3)
  contact-phone: (agent-phone,0.9), (description,0.1)
  extra-info: (address,0.6), (description,0.4)

Domain constraint: a = address & b = address => a = b

Candidate mappings and their combined scores:
  area: address, contact-phone: agent-phone, extra-info: address      (0.7 x 0.9 x 0.6 = 0.378, violates the constraint)
  area: address, contact-phone: agent-phone, extra-info: description  (0.7 x 0.9 x 0.4 = 0.252, satisfies it and is chosen)
  area: description, contact-phone: description, extra-info: description  (0.3 x 0.1 x 0.4 = 0.012)

Can specify arbitrary constraints; user feedback is expressed as a domain constraint (e.g., ad-id = house-id).

Extended to handle domain heuristics:
  a = agent-phone & b = agent-name => a & b are usually close to each other
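The handler's search can be sketched in Python using the meta-learner scores above; the constraint that two distinct attributes cannot both map to address prunes the top-scoring candidate:

```python
# Sketch of the constraint handler: enumerate combinations of per-attribute
# predictions, score each by the product of confidences, and discard those
# violating a domain constraint (here: at most one attribute maps to
# address). Scores come from the slide; the search strategy is simplified.
from itertools import product

predictions = {
    "area":          [("address", 0.7), ("description", 0.3)],
    "contact-phone": [("agent-phone", 0.9), ("description", 0.1)],
    "extra-info":    [("address", 0.6), ("description", 0.4)],
}

def satisfies_constraint(mapping):
    targets = [t for _, t in mapping]
    return targets.count("address") <= 1

attrs = list(predictions)
best = None
for choice in product(*predictions.values()):
    mapping = list(zip(attrs, (label for label, _ in choice)))
    score = 1.0
    for _, s in choice:
        score *= s
    if satisfies_constraint(mapping) and (best is None or score > best[1]):
        best = (mapping, score)

print(best)
```

The highest-scoring combination overall (0.7 x 0.9 x 0.6 = 0.378) maps both area and extra-info to address, so the constraint rules it out; the handler picks area: address, contact-phone: agent-phone, extra-info: description with score 0.252.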

Page 41: Data Integration

Exploiting Hierarchical Structure

Existing learners flatten out all structures. LSD developed an XML learner:
- similar to the Naive Bayes learner: an input instance = a bag of tokens
- differs in one crucial aspect: it considers not only text tokens, but also structure tokens

<description> Victorian house with a view. Name your price! To see it, contact Gail Murphy at MAX Realtors.</description>

<contact> <name> Gail Murphy </name> <firm> MAX Realtors </firm></contact>

Page 42: Data Integration

Related Work

Rule-based approaches:
- TRANSCM [Milo&Zohar98], ARTEMIS [Castano&Antonellis99], [Palopoli et al. 98], CUPID [Madhavan et al. 01]
- utilize only schema information

Learner-based approaches:
- SEMINT [Li&Clifton94], ILA [Perkowitz&Etzioni95]
- employ a single learner, limited applicability

Others:
- DELTA [Clifton et al. 97], CLIO [Miller et al. 00], [Yan et al. 01]

Multi-strategy learning in other domains:
- series of workshops [91,93,96,98,00], [Freitag98], Proverb [Keim et al. 99]

Page 43: Data Integration

Summary

LSD project: applies machine learning to schema matching.

Main ideas & contributions:
- use of multi-strategy learning
- extend learning to handle domain & user-specified constraints
- develop XML learner

System design: a contribution to generic schema matching
- highly modular & extensible
- handles multiple types of knowledge
- continuously improves over time

Page 44: Data Integration

End LSD

Page 45: Data Integration

Mappings between Schemas

LSD provides attribute correspondences, but not complete mappings.

Mappings generally are posed as views: define relations in one schema (typically either the mediated schema or the source schema), given data in the other schema.
- This allows us to "restructure" or "recompose + decompose" our data in a new way.

We can also define mappings between values in a view:
- We use an intermediate table defining correspondences – a "concordance table"
- It can be filled in using some type of code, and corrected by hand
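A concordance table acts as a join bridge between two ID spaces. A minimal Python sketch (the CustID/PennID values follow the mapping examples elsewhere in these slides; the table layout is illustrative):

```python
# Sketch: joining records across two schemas through a concordance table
# that maps IDs of one system to IDs of another. The ID values mirror the
# slides' CustID/PennID example; the rest is illustrative.

t1 = [{"CustID": 1234, "CustName": "Smith, J."}]    # schema 1
t2 = [{"PennID": 46732, "EmpName": "John Smith"}]   # schema 2
concordance = [{"CustID": 1234, "PennID": 46732}]   # value-level mapping

joined = [(a["CustName"], b["EmpName"])
          for c in concordance
          for a in t1 if a["CustID"] == c["CustID"]
          for b in t2 if b["PennID"] == c["PennID"]]

print(joined)  # [('Smith, J.', 'John Smith')]
```

Without the concordance table there is no join key at all: the two ID spaces are disjoint, which is precisely why the table must be computed (and often hand-corrected) rather than derived from the schemas.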

Page 46: Data Integration

A Few Mapping Examples

Movie(Title, Year, Director, Editor, Star1, Star2) to PieceOfArt(ID, Artist, Subject, Title, TypeOfArt):

PieceOfArt(I, A, S, T, "Movie") :- Movie(T, Y, A, _, S1, S2), I = T || Y, S = S1 || S2

MotionPicture(ID, Title, Year) and Participant(ID, Name, Role) to Movie(Title, Year, Director, Editor, Star1, Star2):

Movie(T, Y, D, E, S1, S2) :- MotionPicture(I, T, Y), Participant(I, D, "Dir"), Participant(I, E, "Editor"), Participant(I, S1, "Star1"), Participant(I, S2, "Star2")

Value mappings between tables:
  T1: CustID 1234, CustName "Smith, J."
  T2: PennID 46732, EmpName "John Smith"
Need a concordance table from CustIDs to PennIDs.

Page 47: Data Integration

Two Important Approaches

TSIMMIS [Garcia-Molina+97] – Stanford
- Focus: semistructured data (OEM), OQL-based language (Lorel)
- Creates a mediated schema as a view over the sources
- Spawned a UCSD project called MIX, which led to a company now owned by BEA Systems
- Other important systems in this vein: Kleisli/K2 @ Penn

Information Manifold [Levy+96] – AT&T Research
- Focus: local-as-view mappings, relational model, Web-based queriable sources
- Sources defined as views over the mediated schema
- Led to peer-to-peer integration approaches (Piazza, etc.)

Page 48: Data Integration

TSIMMIS

One of the first systems to support semistructured data, which predated XML by several years: "OEM".

An instance of a "global-as-view" mediation system: we define our global schema as views over the sources.

Page 49: Data Integration

XML vs. Object Exchange Model

<book>
  <author>Bernstein</author>
  <author>Newcomer</author>
  <title>Principles of TP</title>
</book>
<book>
  <author>Chamberlin</author>
  <title>DB2 UDB</title>
</book>

O1: book {
  O2: author { Bernstein }
  O3: author { Newcomer }
  O4: title { Principles of TP }
}
O5: book {
  O6: author { Chamberlin }
  O7: title { DB2 UDB }
}

Page 50: Data Integration

Queries in TSIMMIS

Specified in an OQL-style language called Lorel:
- OQL was an object-oriented query language that looks like SQL
- Lorel is, in many ways, a predecessor to XQuery

Based on path expressions over OEM structures:

select book
where book.title = "DB2 UDB" and book.author = "Chamberlin"

This is basically like XQuery, which we'll use in place of Lorel and the MSL template language. The previous query restated:

for $b in AllData()/book
where $b/title/text() = "DB2 UDB" and $b/author/text() = "Chamberlin"
return $b

Page 51: Data Integration

Query Answering in TSIMMIS

Basically, it's view unfolding, i.e., composing a query with a view:
- The query is the one being asked
- The views are the MSL templates for the wrappers

Some of the views may actually require parameters, e.g., an author name, before they'll return answers:
- Common for web forms (see Amazon, Google, …)
- XQuery functions (XQuery's version of views) support parameters as well, so we'll see these in action

Page 52: Data Integration

Example: View Unfolding/Expansion

A view consisting of branches and their customers:

create view all_customer as
  (select branch_name, customer_name
   from depositor, account
   where depositor.account_number = account.account_number)
  union
  (select branch_name, customer_name
   from borrower, loan
   where borrower.loan_number = loan.loan_number)

Find all customers of the Perryridge branch:

select customer_name
from all_customer
where branch_name = 'Perryridge'
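The view and query above can be run concretely; a small Python/SQLite sketch (with invented sample rows) shows the engine answering the query by unfolding the view definition:

```python
# The all_customer view from the slide, run in an in-memory SQLite database.
# The engine answers the final query by unfolding (expanding) the view
# definition into the query. Sample rows are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE depositor(customer_name TEXT, account_number INT);
    CREATE TABLE account(account_number INT, branch_name TEXT);
    CREATE TABLE borrower(customer_name TEXT, loan_number INT);
    CREATE TABLE loan(loan_number INT, branch_name TEXT);
    INSERT INTO depositor VALUES ('Hayes', 1), ('Jones', 2);
    INSERT INTO account VALUES (1, 'Perryridge'), (2, 'Brighton');
    INSERT INTO borrower VALUES ('Adams', 10);
    INSERT INTO loan VALUES (10, 'Perryridge');
    CREATE VIEW all_customer AS
        SELECT branch_name, customer_name
        FROM depositor, account
        WHERE depositor.account_number = account.account_number
        UNION
        SELECT branch_name, customer_name
        FROM borrower, loan
        WHERE borrower.loan_number = loan.loan_number;
""")

rows = db.execute(
    "SELECT customer_name FROM all_customer WHERE branch_name = 'Perryridge'"
).fetchall()
print(sorted(rows))  # [('Adams',), ('Hayes',)]
```

Hayes reaches Perryridge through the depositor/account branch of the union, Adams through borrower/loan: exactly the two subqueries the unfolded view contributes.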

Page 53: Data Integration

A Wrapper Definition in MSL

Wrappers have templates and binding patterns ($X) in MSL:

B :- B: <book {<author $X>}> // $$ = "select * from book where author=" $X //

This reformats a SQL query over Book(author, year, title).

In XQuery, this might look like:

define function GetBook($x AS xsd:string) as book {
  for $b in sql("Amazon.DB", "select * from book where author='" + $x + "'")
  return <book>{$b/title}<author>{$x}</author></book>
}

(Tree diagram: book with children title and author.)

The union of GetBook's results is unioned with others to form the view Mediator().

Page 54: Data Integration

How to Answer the Query

Given our query:

for $b in Mediator()/book
where $b/title/text() = "DB2 UDB" and $b/author/text() = "Chamberlin"
return $b

Find all wrapper definitions that:
- Contain enough "structure" to match the conditions of the query
- Or have already tested the conditions for us!

Page 55: Data Integration

Query Composition with Views

We find all views that define book with author and title, and we compose the query with each:

define function GetBook($x AS xsd:string) as book {
  for $b in sql("Amazon.DB", "select * from book where author='" + $x + "'")
  return <book> {$b/title} <author>{$x}</author></book>
}

for $b in Mediator()/book
where $b/title/text() = "DB2 UDB" and $b/author/text() = "Chamberlin"
return $b

Page 56: Data Integration

Matching View Output to Our Query's Conditions

Determine that $b/book/author/text() corresponds to $x by matching the pattern on the function's output:

define function GetBook($x AS xsd:string) as book {
  for $b in sql("Amazon.DB", "select * from book where author='" + $x + "'")
  return <book>{ $b/title } <author>{$x}</author></book>
}

let $x := "Chamberlin"
for $b in GetBook($x)/book
where $b/title/text() = "DB2 UDB"
return $b

Page 57: Data Integration

The Final Step: Unfolding

let $x := "Chamberlin"
for $b in ( for $b' in sql("Amazon.com", "select * from book where author='" + $x + "'")
            return <book>{ $b'/title }<author>{$x}</author></book> )/book
where $b/title/text() = "DB2 UDB"
return $b

How do we simplify further to get to here?

for $b in sql("Amazon.com", "select * from book where author='Chamberlin'")
where $b/title/text() = "DB2 UDB"
return $b

Page 58: Data Integration

Virtues of TSIMMIS

- Early adopter of semistructured data, greatly predating XML
- Can support data from many different kinds of sources
  - Obviously, doesn't fully solve the heterogeneity problem
- Presents a mediated schema that is the union of multiple views
- Query answering based on view unfolding
- Easily composed in a hierarchy of mediators

Page 59: Data Integration

Limitations of TSIMMIS' Approach

Some data sources may contain data with certain ranges or properties:
- "Books by Aho", "Students at UPenn", …
- If we ask a query for students at Columbia, we don't want to bother querying students at Penn…
- How do we express these?

The mediated schema is basically the union of the various MSL templates – as they change, so may the mediated schema.

Page 60: Data Integration

Schema mapping languages

Schema mapping: a set of expressions that describe a relationship between a set of schemata (typically two) – in our case, the mediated schema and the schemas of the sources.

Used to reformulate a query formulated in terms of the mediated schema into appropriate queries on the sources:
- The result is called a logical query plan (it refers only to the relations in the data sources)

Schema mapping languages: global-as-view, local-as-view, global-local-as-view.

Page 61: Data Integration

Components of a data integration system

Page 62: Data Integration

Properties of mapping languages

- Flexibility: the formalism should be able to express a wide variety of relationships between schemata
- Efficient reformulation: reformulation algorithms should have well-understood properties and be efficient
- Easy update: it must be easy to add and remove sources

Page 63: Data Integration

An Alternate Approach: The Information Manifold (Levy et al.)

When you integrate something, you have some conceptual model of the integrated domain:
- Define that as a basic frame of reference, everything else as a view over it
- "Local as View"

May have overlapping/incomplete sources:
- Define each source as a subset of a query over the mediated schema
- We can use selection or join predicates to specify that a source contains a range of values:

ComputerBooks(…) ⊆ Books(Title, …, Subj), Subj = "Computers"

Page 64: Data Integration

The Local-as-View Model

The basic model is the following:
- "Local" sources are views over the mediated schema
- Sources have the data – the mediated schema is virtual
- Sources may not have all the data from the domain – "open-world assumption"

The system must use the sources (views) to answer queries over the mediated schema.

Page 65: Data Integration

Query Answering

Assumption: conjunctive queries, set semantics.

Suppose we have a mediated schema:
  author(aID, isbn, year), book(isbn, title, publisher)

the query:
  q(a, t) :- author(a, i, _), book(i, t, p), t = "DB2 UDB"

and sources:
  s1(a, t) ⊆ author(a, i, _), book(i, t, p), t = "123"
  …
  s5(a, t, p) ⊆ author(a, i, _), book(i, t, p), p = "SAMS"

We want to compose the query with the source mappings – but they're in the wrong direction!

Yet: everything in s1, s5 is an answer to the query! The idea is to determine which views may be relevant to each subgoal of the query in isolation.

Page 66: Data Integration

Answering Queries Using Views

Numerous recently-developed algorithms for these:
- Inverse rules [Duschka et al.]
- Bucket algorithm [Levy et al.]
- MiniCon [Pottinger & Halevy]
- Also related: "chase and backchase" [Popa, Tannen, Deutsch]

Requires conjunctive queries.
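To give a flavor of these algorithms, here is a highly simplified Python sketch of the bucket-forming step: for each query subgoal, collect the views whose definitions mention that subgoal's relation. (Real algorithms such as the bucket algorithm and MiniCon also check variable mappings and head variables; s9 is an invented extra source.)

```python
# Much-simplified flavor of the "bucket" step in answering queries using
# views: a view lands in a subgoal's bucket if its definition uses that
# subgoal's relation. Relevance by relation name only -- real algorithms
# also verify that variables can be mapped consistently.

views = {  # view name -> relations used in its definition
    "s1": {"author", "book"},
    "s5": {"author", "book"},
    "s9": {"publisher"},     # hypothetical source, irrelevant to the query
}
query_subgoals = ["author", "book"]

buckets = {g: sorted(v for v, rels in views.items() if g in rels)
           for g in query_subgoals}

print(buckets)
# {'author': ['s1', 's5'], 'book': ['s1', 's5']}
```

Candidate rewritings are then formed by picking one view per bucket and checking that the combination actually entails the query, which is where the real containment reasoning (and the computational cost of LAV) comes in.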

Page 67: Data Integration

Advantages and Shortcomings of LAV

- Enables expressing incomplete information
- More robust way of defining mediated schemas and sources:
  - The mediated schema is clearly defined, less likely to change
  - Sources can be more accurately described
- Computationally more expensive!

Page 68: Data Integration

Summary of Data Integration

- Integration requires standardization on a single schema
  - Can be hard to get consensus
  - Today we have peer-to-peer data integration, e.g., Piazza [Halevy et al.], Orchestra [Ives et al.], Hyperion [Miller et al.]
- Some other aspects of integration were addressed in related papers:
  - Overlap between sources; coverage of data at sources
  - Semi-automated creation of mappings and wrappers
- Data integration capabilities in commercial products: BEA's Liquid Data, IBM's DB2 Information Integrator, numerous packages from middleware companies

Page 69: Data Integration

References

- Draft of the book on Data Integration by Alon Halevy (in preparation).
- AnHai Doan, Pedro Domingos, Alon Halevy, "Reconciling Schemas of Disparate Data Sources: A Machine Learning Approach", SIGMOD 2001.
- Hector Garcia-Molina et al., "The TSIMMIS Approach to Mediation: Data Models and Languages", Journal of Intelligent Information Systems (JIIS), 8(2): 117-132, 1997. http://infolab.stanford.edu/tsimmis/
- Alon Levy et al., "Querying Heterogeneous Information Sources using Source Descriptions", VLDB 1996.