
CS621 : Artificial Intelligence

Pushpak Bhattacharyya, CSE Dept., IIT Bombay

Lecture 15: IR models contd.

The IR Scenario (the IR system maker's view)

[Diagram: a user's information need is expressed as a query; documents (docs) are represented by index terms; the system matches the query against the document representations and produces a ranking.]

Definition of IR Model

An IR model is a quadruple

$[D, Q, F, R(q_i, d_j)]$

where:

– $D$: the set of document representations
– $Q$: the set of queries
– $F$: a framework for modeling documents, queries, and their relationships
– $R(q_i, d_j)$: a ranking function returning a real number expressing the relevance of document $d_j$ to query $q_i$
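As a concrete reading of the quadruple, here is a minimal sketch (not from the lecture; the class and method names are illustrative) of an IR model as an abstract interface:

```python
from abc import ABC, abstractmethod

class IRModel(ABC):
    """An IR model [D, Q, F, R]: D and Q are the representation spaces,
    F is the modeling framework embodied by a concrete subclass, and
    rank() plays the role of R(q_i, d_j)."""

    @abstractmethod
    def represent_document(self, text: str):
        """Map raw text to an element of D (e.g., a weight vector)."""

    @abstractmethod
    def represent_query(self, text: str):
        """Map a query string to an element of Q."""

    @abstractmethod
    def rank(self, q, d) -> float:
        """R(q_i, d_j): a real number expressing relevance of d to q."""
```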

Index Terms

• Keywords representing a document
• The semantics of the word helps recall the main theme of the document
• Generally nouns
• Numerical weights are assigned to index terms to indicate their importance

Classic IR Models - Basic Concepts

• The importance of the index terms is represented by weights associated with them
• Let
– $t$ be the number of index terms in the system
– $K = \{k_1, k_2, \ldots, k_t\}$ be the set of all index terms
– $k_i$ be an index term
– $d_j$ be a document
– $w_{ij}$ be the weight associated with the pair $(k_i, d_j)$
– $w_{ij} = 0$ indicate that the term does not belong to the document
– $\vec{d_j} = (w_{1j}, w_{2j}, \ldots, w_{tj})$ be the weighted vector associated with document $d_j$
– $g_i(\vec{d_j}) = w_{ij}$ be the function returning the weight associated with the pair $(k_i, d_j)$
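A minimal sketch of this representation (illustrative; the vocabulary and weights are toy values, not from the lecture):

```python
# Each document is a vector of weights over the index-term vocabulary K.
INDEX_TERMS = ["information", "retrieval", "model"]  # K = {k1, ..., kt}

def g(i: int, d: list[float]) -> float:
    """g_i(d_j) = w_ij: the weight of index term k_i in document vector d_j."""
    return d[i]

d1 = [0.5, 0.0, 0.8]   # w_2,1 = 0, i.e. "retrieval" does not occur in d1
print(g(2, d1))        # weight of "model" in d1 -> 0.8
```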

The Boolean Model

• Simple model based on set theory
• Only the AND, OR, and NOT connectives are used
• Queries are specified as Boolean expressions
– precise semantics
– neat formalism
– e.g., $q = k_a \wedge (k_b \vee \neg k_c)$
• Terms are either present or absent. Thus, $w_{ij} \in \{0, 1\}$
• Consider
– $q = k_a \wedge (k_b \vee \neg k_c)$
– in disjunctive normal form, $\vec{q}_{dnf} = (1,1,1) \vee (1,1,0) \vee (1,0,0)$
– $\vec{q}_{cc} = (1,1,0)$ is a conjunctive component

The Boolean Model

• $q = k_a \wedge (k_b \vee \neg k_c)$
• $sim(q, d_j) = \begin{cases} 1 & \text{if } \exists \vec{q}_{cc} \mid (\vec{q}_{cc} \in \vec{q}_{dnf}) \wedge (\forall k_i,\ g_i(\vec{d_j}) = g_i(\vec{q}_{cc})) \\ 0 & \text{otherwise} \end{cases}$

[Venn diagram over $k_a$, $k_b$, $k_c$ showing the conjunctive components (1,1,1), (1,1,0), and (1,0,0) of $\vec{q}_{dnf}$.]
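A minimal sketch of Boolean retrieval for this example query (illustrative; the term order is assumed to be $(k_a, k_b, k_c)$):

```python
from itertools import product

def query(ka: bool, kb: bool, kc: bool) -> bool:
    """q = ka AND (kb OR NOT kc)."""
    return ka and (kb or not kc)

# Enumerate the conjunctive components of the DNF: the 0/1 term
# assignments that satisfy q.
qdnf = [bits for bits in product([1, 0], repeat=3) if query(*map(bool, bits))]
print(qdnf)  # [(1, 1, 1), (1, 1, 0), (1, 0, 0)]

def sim(doc_bits: tuple) -> int:
    """sim(q, dj) = 1 iff dj's 0/1 vector equals some conjunctive component."""
    return 1 if doc_bits in qdnf else 0

print(sim((1, 1, 0)))  # 1: retrieved
print(sim((0, 1, 1)))  # 0: not retrieved (ka absent)
```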

Drawbacks of the Boolean Model

• Retrieval is based on a binary decision criterion with no notion of partial matching
• No ranking of the documents is provided (absence of a grading scale)
• The information need has to be translated into a Boolean expression, which most users find awkward
• The Boolean queries formulated by the users are most often too simplistic
• As a consequence, the Boolean model frequently returns either too few or too many documents in response to a user query

The Vector Model

• Use of binary weights is too limiting
• Non-binary weights provide consideration for partial matches
• These term weights are used to compute a degree of similarity between a query and each document
• A ranked set of documents provides for better matching

The Vector Model

• Define:
– $w_{ij} > 0$ whenever $k_i \in d_j$
– $w_{iq} \geq 0$, associated with the pair $(k_i, q)$
– $\vec{d_j} = (w_{1j}, w_{2j}, \ldots, w_{tj})$ and $\vec{q} = (w_{1q}, w_{2q}, \ldots, w_{tq})$
• In this space, queries and documents are represented as weighted vectors

The Vector Model

• $sim(q, d_j) = \cos(\theta) = \dfrac{\vec{d_j} \cdot \vec{q}}{|\vec{d_j}| \, |\vec{q}|} = \dfrac{\sum_{i=1}^{t} w_{ij} \, w_{iq}}{|\vec{d_j}| \, |\vec{q}|}$

• Since $w_{ij} \geq 0$ and $w_{iq} \geq 0$, we have $0 \leq sim(q, d_j) \leq 1$
• A document is retrieved even if it matches the query terms only partially

[Figure: $\vec{d_j}$ and $\vec{q}$ as vectors in term space, with $\theta$ the angle between them.]
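A minimal sketch of the cosine ranking formula (illustrative):

```python
import math

def cosine_sim(d: list[float], q: list[float]) -> float:
    """sim(q, dj) = (dj . q) / (|dj| * |q|); returns 0 for a zero vector."""
    dot = sum(wd * wq for wd, wq in zip(d, q))
    norm = math.sqrt(sum(w * w for w in d)) * math.sqrt(sum(w * w for w in q))
    return dot / norm if norm else 0.0

# A document matching only 2 of 3 query terms still gets a nonzero score.
print(round(cosine_sim([1.0, 0.0, 1.0], [1.0, 1.0, 1.0]), 3))  # 0.816
```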

The Vector Model

• $sim(q, d_j) = \dfrac{\sum_i w_{ij} \, w_{iq}}{|\vec{d_j}| \, |\vec{q}|}$

• How to compute the weights $w_{ij}$ and $w_{iq}$?
• A good weight must take into account two effects:
– quantification of intra-document contents (similarity)
• the tf factor: the term frequency within a document
– quantification of inter-document separation (dissimilarity)
• the idf factor: the inverse document frequency
– $w_{ij} = tf(i,j) \times idf(i)$

The Vector Model

• Let
– $N$ be the total number of docs in the collection
– $n_i$ be the number of docs which contain $k_i$
– $freq(i,j)$ be the raw frequency of $k_i$ within $d_j$
• A normalized tf factor is given by
– $f(i,j) = \dfrac{freq(i,j)}{\max_l freq(l,j)}$
– where the maximum is computed over all terms which occur within the document $d_j$
• The idf factor is computed as
– $idf(i) = \log(N / n_i)$
– the log is used to make the values of tf and idf comparable. It can also be interpreted as the amount of information associated with the term $k_i$.

The Vector Model

• The best term-weighting schemes use weights given by
– $w_{ij} = f(i,j) \times \log(N / n_i)$
– this strategy is called a tf-idf weighting scheme
• For the query term weights, a suggestion is
– $w_{iq} = \left(0.5 + \dfrac{0.5 \times freq(i,q)}{\max_l freq(l,q)}\right) \times \log(N / n_i)$
• The vector model with tf-idf weights is a good ranking strategy with general collections
• The vector model is usually as good as the known ranking alternatives. It is also simple and fast to compute.
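A minimal sketch of these weighting formulas (illustrative; the counts are toy values):

```python
import math

def tf(freq_ij: int, max_freq_j: int) -> float:
    """Normalized tf: f(i,j) = freq(i,j) / max_l freq(l,j)."""
    return freq_ij / max_freq_j

def idf(N: int, n_i: int) -> float:
    """idf(i) = log(N / n_i)."""
    return math.log(N / n_i)

def w_doc(freq_ij: int, max_freq_j: int, N: int, n_i: int) -> float:
    """Document weight: w_ij = f(i,j) * log(N / n_i)."""
    return tf(freq_ij, max_freq_j) * idf(N, n_i)

def w_query(freq_iq: int, max_freq_q: int, N: int, n_i: int) -> float:
    """Query weight: w_iq = (0.5 + 0.5*freq(i,q)/max_l freq(l,q)) * log(N/n_i)."""
    return (0.5 + 0.5 * freq_iq / max_freq_q) * idf(N, n_i)

# Toy numbers: the term occurs 3 times in dj (the most frequent term in dj
# occurs 5 times) and appears in 10 of 1000 docs.
print(round(w_doc(3, 5, 1000, 10), 3))    # 0.6  * log(100) ~ 2.763
print(round(w_query(1, 2, 1000, 10), 3))  # 0.75 * log(100) ~ 3.454
```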

The Vector Model

• Advantages:
– term-weighting improves the quality of the answer set
– partial matching allows retrieval of docs that approximate the query conditions
– the cosine ranking formula sorts documents according to their degree of similarity to the query
• Disadvantages:
– assumes independence of index terms; it is not clear that this is bad, though

The Vector Model: Example I

(binary document weights; the column $q \cdot d_j$ gives the dot product of the query and document vectors)

       k1  k2  k3  q.dj
  d1    1   0   1    2
  d2    1   0   0    1
  d3    0   1   1    2
  d4    1   0   0    1
  d5    1   1   1    3
  d6    1   1   0    2
  d7    0   1   0    1
  q     1   1   1

[Figure: documents d1-d7 plotted in the three-dimensional term space (k1, k2, k3).]

The Vector Model: Example II

(binary document weights, weighted query)

       k1  k2  k3  q.dj
  d1    1   0   1    4
  d2    1   0   0    1
  d3    0   1   1    5
  d4    1   0   0    1
  d5    1   1   1    6
  d6    1   1   0    3
  d7    0   1   0    2
  q     1   2   3

[Figure: documents d1-d7 plotted in the three-dimensional term space (k1, k2, k3).]

The Vector Model: Example III

(weighted documents and weighted query)

       k1  k2  k3  q.dj
  d1    2   0   1    5
  d2    1   0   0    1
  d3    0   1   3   11
  d4    2   0   0    2
  d5    1   2   4   17
  d6    1   2   0    5
  d7    0   5   0   10
  q     1   2   3

[Figure: documents d1-d7 plotted in the three-dimensional term space (k1, k2, k3).]
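A quick check of Example III's scores (illustrative):

```python
# Document weight vectors over (k1, k2, k3), as in Example III.
docs = {
    "d1": [2, 0, 1], "d2": [1, 0, 0], "d3": [0, 1, 3], "d4": [2, 0, 0],
    "d5": [1, 2, 4], "d6": [1, 2, 0], "d7": [0, 5, 0],
}
q = [1, 2, 3]

# Rank documents by the dot product q . dj.
scores = {name: sum(w * wq for w, wq in zip(d, q)) for name, d in docs.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, s)  # d5 17, d3 11, d7 10, d1 5, d6 5, d4 2, d2 1
```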

Probabilistic Model

• Objective: to capture the IR problem using a probabilistic framework
• Given a user query, there is an ideal answer set
• Querying is the specification of the properties of this ideal answer set (clustering)
• But what are these properties?
• Guess at the beginning what they could be (i.e., guess an initial description of the ideal answer set)
• Improve by iteration

Probabilistic Model

• An initial set of documents is retrieved somehow
• The user inspects these docs looking for the relevant ones (in truth, only the top 10-20 need to be inspected)
• The IR system uses this information to refine the description of the ideal answer set
• By repeating this process, it is expected that the description of the ideal answer set will improve
• Have always in mind the need to guess, at the very beginning, the description of the ideal answer set
• The description of the ideal answer set is modeled in probabilistic terms

Probabilistic Ranking Principle

• Given a user query q and a document $d_j$, the probabilistic model tries to estimate the probability that the user will find the document $d_j$ interesting (i.e., relevant). The model assumes that this probability of relevance depends only on the query and the document representations. The ideal answer set is referred to as $R$ and should maximize the probability of relevance. Documents in the set $R$ are predicted to be relevant.
• But,
– how to compute the probabilities?
– what is the sample space?

The Ranking

• Probabilistic ranking is computed as:
– $sim(q, d_j) = \dfrac{P(d_j \text{ relevant-to } q)}{P(d_j \text{ non-relevant-to } q)}$
– This is the odds of the document $d_j$ being relevant
– Taking the odds minimizes the probability of an erroneous judgement
• Definition:
– $w_{ij} \in \{0, 1\}$
– $P(R \mid \vec{d_j})$: probability that the given doc is relevant
– $P(\bar{R} \mid \vec{d_j})$: probability that the doc is not relevant

The Ranking

• $sim(d_j, q) = \dfrac{P(R \mid \vec{d_j})}{P(\bar{R} \mid \vec{d_j})} = \dfrac{P(\vec{d_j} \mid R) \, P(R)}{P(\vec{d_j} \mid \bar{R}) \, P(\bar{R})} \sim \dfrac{P(\vec{d_j} \mid R)}{P(\vec{d_j} \mid \bar{R})}$

(by Bayes' rule; $P(R)$ and $P(\bar{R})$ are the same for all documents, so they can be dropped for ranking)

• $P(\vec{d_j} \mid R)$: probability of randomly selecting the document $d_j$ from the set $R$ of relevant documents

The Ranking

• Assuming independence of index terms:

$sim(d_j, q) \sim \dfrac{P(\vec{d_j} \mid R)}{P(\vec{d_j} \mid \bar{R})} \sim \dfrac{\left[\prod_{g_i(\vec{d_j})=1} P(k_i \mid R)\right] \left[\prod_{g_i(\vec{d_j})=0} P(\bar{k_i} \mid R)\right]}{\left[\prod_{g_i(\vec{d_j})=1} P(k_i \mid \bar{R})\right] \left[\prod_{g_i(\vec{d_j})=0} P(\bar{k_i} \mid \bar{R})\right]}$

• $P(k_i \mid R)$: probability that the index term $k_i$ is present in a document randomly selected from the set $R$ of relevant documents
• $P(\bar{k_i} \mid R)$: probability that $k_i$ is not present in such a document

The Ranking

• Taking logs, dropping factors that are constant across documents, and using $P(\bar{k_i} \mid R) = 1 - P(k_i \mid R)$ and $P(\bar{k_i} \mid \bar{R}) = 1 - P(k_i \mid \bar{R})$:

$sim(d_j, q) \sim \sum_i w_{iq} \times w_{ij} \times \left( \log \dfrac{P(k_i \mid R)}{1 - P(k_i \mid R)} + \log \dfrac{1 - P(k_i \mid \bar{R})}{P(k_i \mid \bar{R})} \right)$

The Initial Ranking

• $sim(d_j, q) \sim \sum_i w_{iq} \times w_{ij} \times \left( \log \dfrac{P(k_i \mid R)}{1 - P(k_i \mid R)} + \log \dfrac{1 - P(k_i \mid \bar{R})}{P(k_i \mid \bar{R})} \right)$

• How to estimate the probabilities $P(k_i \mid R)$ and $P(k_i \mid \bar{R})$?
• Initial estimates based on assumptions:
– $P(k_i \mid R) = 0.5$
– $P(k_i \mid \bar{R}) = n_i / N$, where $n_i$ is the number of docs that contain $k_i$
– Use this initial guess to retrieve an initial ranking
– Improve upon this initial ranking

Improving the Initial Ranking

• Let
– $V$: set of docs initially retrieved
– $V_i$: subset of the retrieved docs that contain $k_i$
• Re-evaluate the estimates:
– $P(k_i \mid R) = \dfrac{V_i}{V}$
– $P(k_i \mid \bar{R}) = \dfrac{n_i - V_i}{N - V}$
• Repeat recursively

Improving the Initial Ranking

• To avoid problems with $V = 1$ and $V_i = 0$, add a small smoothing constant:
– $P(k_i \mid R) = \dfrac{V_i + 0.5}{V + 1}$
– $P(k_i \mid \bar{R}) = \dfrac{n_i - V_i + 0.5}{N - V + 1}$
• Alternatively,
– $P(k_i \mid R) = \dfrac{V_i + n_i/N}{V + 1}$
– $P(k_i \mid \bar{R}) = \dfrac{n_i - V_i + n_i/N}{N - V + 1}$
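A minimal sketch of the estimate-update step (illustrative; V and Vi are taken as set sizes):

```python
import math

def term_weight(V: int, Vi: int, N: int, ni: int) -> float:
    """Log-odds term weight using the (Vi + 0.5)/(V + 1) smoothed estimates.
    V: number of docs initially retrieved; Vi: retrieved docs containing ki;
    N: collection size; ni: number of docs in the collection containing ki."""
    p_ki_R = (Vi + 0.5) / (V + 1)               # P(ki | R)
    p_ki_notR = (ni - Vi + 0.5) / (N - V + 1)   # P(ki | not-R)
    return (math.log(p_ki_R / (1 - p_ki_R))
            + math.log((1 - p_ki_notR) / p_ki_notR))

# First pass: no feedback yet, so use P(ki|R) = 0.5 and P(ki|not-R) = ni/N;
# once V and Vi are observed, re-rank with term_weight() and repeat.
print(round(term_weight(V=20, Vi=8, N=1000, ni=50), 3))
```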

Pluses and Minuses

• Advantages:
– Docs are ranked in decreasing order of their probability of relevance
• Disadvantages:
– need to guess the initial estimates for $P(k_i \mid R)$
– the method does not take into account the tf and idf factors

Brief Comparison of Classic Models

• The Boolean model does not provide for partial matches and is considered the weakest classic model
• Salton and Buckley did a series of experiments indicating that, in general, the vector model outperforms the probabilistic model on general collections
• This also seems to be the view of the research community