Transcript of INF 2914 Web Search, Lecture 2: Crawlers

Page 1

INF 2914 Web Search

Lecture 2: Crawlers

Page 2

Today’s lecture Crawling

Page 3

Basic crawler operation
Begin with known "seed" pages
Fetch and parse them
Extract URLs they point to
Place the extracted URLs on a queue
Fetch each URL on the queue and repeat
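A minimal sketch of this loop in Python, using only the standard library; the seed list, the crude regex-based link extraction, and the page budget are illustrative choices, not part of the lecture.

```python
import re
import urllib.request
from collections import deque
from urllib.parse import urljoin

def basic_crawl(seeds, max_pages=100):
    frontier = deque(seeds)          # queue of URLs still to fetch
    seen = set(seeds)                # URLs already placed on the queue
    while frontier and max_pages > 0:
        url = frontier.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except OSError:
            continue                 # skip pages that fail to fetch
        max_pages -= 1
        # crude link extraction; a real crawler would use an HTML parser
        for href in re.findall(r'href="([^"]+)"', html):
            absolute = urljoin(url, href)   # expand relative URLs
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return seen
```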

Page 4

Crawling picture
[Diagram: the Web split into seed pages, URLs already crawled and parsed, the URL frontier, and the unseen Web]

Page 5

Simple picture – complications
Web crawling isn't feasible with one machine
All of the above steps distributed
Even non-malicious pages pose challenges
Latency/bandwidth to remote servers vary
Webmasters' stipulations: how "deep" should you crawl a site's URL hierarchy?
Site mirrors and duplicate pages
Malicious pages
Spam pages
Spider traps – incl. dynamically generated
Politeness – don't hit a server too often

Page 6

What any crawler must do
Be Polite: Respect implicit and explicit politeness considerations for a website
Only crawl pages you're allowed to
Respect robots.txt (more on this shortly)
Be Robust: Be immune to spider traps and other malicious behavior from web servers

Page 7

What any crawler should do
Be capable of distributed operation: designed to run on multiple distributed machines
Be scalable: designed to increase the crawl rate by adding more machines
Performance/efficiency: permit full use of available processing and network resources

Page 8

What any crawler should do
Fetch pages of "higher quality" first
Continuous operation: continue fetching fresh copies of a previously fetched page
Extensible: adapt to new data formats, protocols

Page 9

Updated crawling picture
[Diagram: crawling threads pull from the URL frontier; seed pages and URLs already crawled and parsed form the known Web, the rest is the unseen Web]

Page 10

URL frontier
Can include multiple pages from the same host
Must avoid trying to fetch them all at the same time
Must try to keep all crawling threads busy

Page 11

Explicit and implicit politeness
Explicit politeness: specifications from webmasters on what portions of a site can be crawled (robots.txt)
Implicit politeness: even with no specification, avoid hitting any site too often

Page 12

Robots.txt
Protocol for giving spiders ("robots") limited access to a website, originally from 1994
www.robotstxt.org/wc/norobots.html
Website announces its request on what can(not) be crawled
For a URL, create a file URL/robots.txt
This file specifies access restrictions

Page 13

Robots.txt example
No robot should visit any URL starting with "/yoursite/temp/", except the robot called "searchengine":

User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
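A hedged sketch of how a crawler might enforce such a file in Python, using the standard library's urllib.robotparser; the site URL and the "somebot" user-agent string are placeholders for illustration.

```python
from urllib.robotparser import RobotFileParser

# Parse the robots.txt shown above (loaded from a string here for illustration;
# a crawler would normally fetch http://yoursite.example/robots.txt instead).
rp = RobotFileParser()
rp.parse("""User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
""".splitlines())

print(rp.can_fetch("somebot", "http://yoursite.example/yoursite/temp/page.html"))       # False
print(rp.can_fetch("searchengine", "http://yoursite.example/yoursite/temp/page.html"))  # True
```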

Page 14

Processing steps in crawling
Pick a URL from the frontier (which one? – more on this later)
Fetch the document at the URL
Parse the fetched document
Extract links from it to other docs (URLs)
Check if the URL has content already seen
If not, add to indexes
For each extracted URL
Ensure it passes certain URL filter tests (e.g., only crawl .edu, obey robots.txt, etc.)
Check if it is already in the frontier (duplicate URL elimination)

Page 15

Basic crawl architecture
[Diagram: the URL frontier feeds a fetch module (WWW, DNS); fetched pages go to a parser, then a "content seen?" test against document fingerprints (Doc FP's), then a URL filter (with robots filters), then duplicate URL elimination against the URL set; surviving URLs are added back to the URL frontier]

Page 16

DNS (Domain Name System)
A lookup service on the internet: given a URL's host name, retrieve its IP address
Service provided by a distributed set of servers – thus, lookup latencies can be high (even seconds)
Common OS implementations of DNS lookup are blocking: only one outstanding request at a time
Solutions
DNS caching
Batch DNS resolver – collects requests and sends them out together
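A minimal sketch of the caching idea (not the batch resolver), assuming the standard library socket module and a plain in-process dict as the cache; a production crawler would add cache expiry and its own non-blocking resolver threads.

```python
import socket
from urllib.parse import urlsplit

_dns_cache = {}  # host -> IP address, filled on first lookup

def resolve(url):
    """Return the IP address for a URL's host, caching results across calls."""
    host = urlsplit(url).hostname
    if host not in _dns_cache:
        # socket.gethostbyname blocks, so in a real crawler this call would run
        # in dedicated resolver threads rather than on the fetch path
        _dns_cache[host] = socket.gethostbyname(host)
    return _dns_cache[host]
```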

Page 17

Parsing: URL normalization
When a fetched document is parsed, some of the extracted links are relative URLs
E.g., at http://en.wikipedia.org/wiki/Main_Page we have a relative link to /wiki/Wikipedia:General_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
During parsing, must normalize (expand) such relative URLs
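In Python this normalization is a one-liner with the standard library's urllib.parse.urljoin, shown here on the slide's own example.

```python
from urllib.parse import urljoin

base = "http://en.wikipedia.org/wiki/Main_Page"
print(urljoin(base, "/wiki/Wikipedia:General_disclaimer"))
# -> http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
```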

Page 18

Content seen?
Duplication is widespread on the web
If the page just fetched is already in the index, do not further process it
This is verified using document fingerprints or shingles
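A hedged sketch of the exact-duplicate case using a whole-document fingerprint (here a truncated SHA-1 over crudely normalized text); near-duplicate detection with shingles is more involved and not shown.

```python
import hashlib

_doc_fingerprints = set()  # fingerprints of documents already indexed

def content_seen(text):
    """Return True if an identical document has been indexed before."""
    normalized = " ".join(text.split()).lower()               # crude normalization
    fp = hashlib.sha1(normalized.encode("utf-8")).digest()[:8]
    if fp in _doc_fingerprints:
        return True
    _doc_fingerprints.add(fp)
    return False
```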

Page 19

Filters and robots.txt
Filters – regular expressions for URLs to be crawled or not
Once a robots.txt file is fetched from a site, there is no need to fetch it repeatedly
Doing so burns bandwidth and hits the web server
Cache robots.txt files

Page 20

Duplicate URL elimination
For a non-continuous (one-shot) crawl, test whether an extracted and filtered URL has already been passed to the frontier
For a continuous crawl – see details of frontier implementation

Page 21

Distributing the crawler
Run multiple crawl threads, under different processes – potentially at different nodes
Geographically distributed nodes
Partition the hosts being crawled among the nodes
A hash function is used for the partition
How do these nodes communicate?

Page 22

Communication between nodes
The output of the URL filter at each node is sent to the duplicate URL eliminator at all nodes
[Diagram: the basic crawl architecture of Page 15 with a host splitter inserted after the URL filter; URLs belonging to hosts owned by other nodes are sent to those nodes, and URLs arriving from other hosts enter this node's duplicate URL elimination]

Page 23

URL frontier: two main considerations
Politeness: do not hit a web server too frequently
Freshness: crawl some pages more often than others
E.g., pages (such as news sites) whose content changes often
These goals may conflict with each other.

Page 24

Politeness – challenges
Even if we restrict only one thread to fetch from a host, it can still hit that host repeatedly
Common heuristic: insert a time gap between successive requests to a host that is >> the time taken by the most recent fetch from that host
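A minimal sketch of that heuristic, assuming a per-host record of the last fetch and an illustrative multiplier of 10 for the gap.

```python
import time

POLITENESS_FACTOR = 10   # gap = 10x the duration of the last fetch (illustrative)
_last_fetch = {}         # host -> (finish_time, fetch_duration)

def record_fetch(host, started, finished):
    _last_fetch[host] = (finished, finished - started)

def next_allowed_time(host):
    """Earliest time at which this host may be contacted again."""
    if host not in _last_fetch:
        return 0.0
    finished, duration = _last_fetch[host]
    return finished + POLITENESS_FACTOR * duration

def wait_for(host):
    delay = next_allowed_time(host) - time.time()
    if delay > 0:
        time.sleep(delay)
```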

Page 25

URL frontier: Mercator scheme
[Diagram: incoming URLs pass through a prioritizer into K front queues; a biased front queue selector and back queue router move URLs into B back queues, a single host on each; a back queue selector hands URLs to the crawl thread requesting a URL]

Page 26

Mercator URL frontier
Front queues manage prioritization
Back queues enforce politeness
Each queue is FIFO

Page 27

Front queues
[Diagram: the prioritizer feeds front queues 1 through K; the biased front queue selector / back queue router draws from them]

Page 28

Front queues
Prioritizer assigns to each URL an integer priority between 1 and K
Appends URL to the corresponding queue
Heuristics for assigning priority
Refresh rate sampled from previous crawls
Application-specific (e.g., "crawl news sites more often")

Page 29

Biased front queue selector
When a back queue requests a URL (in a sequence to be described): picks a front queue from which to pull a URL
This choice can be round robin biased toward queues of higher priority, or some more sophisticated variant
Can be randomized
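A hedged sketch of the front-queue side described on the last two slides, assuming K priority levels, FIFO queues, and a simple randomized selector biased toward higher-priority queues; the number of queues and the weighting scheme are illustrative.

```python
import random
from collections import deque

K = 5                                       # number of front queues (illustrative)
front_queues = [deque() for _ in range(K)]  # queue 0 holds priority 1 (highest), queue K-1 holds priority K

def enqueue_front(url, priority):
    """Prioritizer output: append the URL to the front queue for its priority (1..K)."""
    front_queues[priority - 1].append(url)

def pull_from_front_queues():
    """Biased front queue selector: randomly pick a non-empty queue,
    weighting higher-priority queues more heavily, and pop its head."""
    nonempty = [i for i in range(K) if front_queues[i]]
    if not nonempty:
        return None
    weights = [K - i for i in nonempty]     # priority 1 gets weight K, priority K gets weight 1
    i = random.choices(nonempty, weights=weights, k=1)[0]
    return front_queues[i].popleft()
```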

Page 30

Back queues
[Diagram: the biased front queue selector / back queue router feeds back queues 1 through B; a heap and the back queue selector sit between the back queues and the crawl threads]

Page 31

Back queue invariants
Each back queue is kept non-empty while the crawl is in progress
Each back queue only contains URLs from a single host
Maintain a table from hosts to back queues
[Table: host name -> back queue number]

Page 32

Back queue heap
One entry for each back queue
The entry is the earliest time t_e at which the host corresponding to the back queue can be hit again
This earliest time is determined from
Last access to that host
Any time buffer heuristic we choose

Page 33

Back queue processing
A crawler thread seeking a URL to crawl:
Extracts the root of the heap
Fetches the URL at the head of the corresponding back queue q (looked up from the table)
Checks if queue q is now empty – if so, pulls a URL v from the front queues
If there's already a back queue for v's host, appends v to that back queue and pulls another URL from the front queues; repeat
Else adds v to q
When q is non-empty again, creates a heap entry for it
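A hedged sketch of this logic using Python's heapq; the fixed politeness gap stands in for the "time buffer heuristic" of the previous slide, and pull_from_front_queues / host_of are assumed helpers (e.g., the front-queue sketch above and urllib.parse).

```python
import heapq
import time

GAP = 2.0            # illustrative politeness gap (seconds); Mercator derives it from the last fetch
back_queues = {}     # back queue id -> list of URLs, all from a single host
host_to_queue = {}   # host -> back queue id currently assigned to it
heap = []            # entries (t_e, queue id): earliest time the queue's host may be hit again

def next_url(pull_from_front_queues, host_of):
    """Return the next URL to fetch while respecting per-host politeness."""
    t_e, q = heapq.heappop(heap)                  # root of the heap
    time.sleep(max(0.0, t_e - time.time()))       # wait until this host may be hit again
    url = back_queues[q].pop(0)                   # head of the corresponding back queue
    next_hit = time.time() + GAP                  # same host stays in q: space out the next hit
    if not back_queues[q]:                        # q drained: refill it from the front queues
        del host_to_queue[host_of(url)]
        while True:
            v = pull_from_front_queues()
            h = host_of(v)
            if h in host_to_queue:                # v's host already has a back queue: park v there
                back_queues[host_to_queue[h]].append(v)
            else:                                 # fresh host takes over q; it may be hit right away
                back_queues[q] = [v]
                host_to_queue[h] = q
                next_hit = time.time()
                break
    heapq.heappush(heap, (next_hit, q))           # re-create q's heap entry
    return url
```

In Mercator the heap entry for q is re-created only after the fetch completes; the sketch pushes it immediately for brevity.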

Page 34

Number of back queues B
Keep all threads busy while respecting politeness
Mercator recommendation: three times as many back queues as crawler threads

Page 35

Determining the page refresh frequency
Study refresh policies for a local database with N pages
The pages in the database are copies of pages found on the Web
Difficulty: when a Web page is updated, the crawler is not informed

Page 36

Framework
Measuring how up to date the database is
Freshness: F(e_i, t) = 1 if page e_i is up to date at time t, and 0 otherwise
F(S, t): average freshness of the database S at time t
Age: A(e_i, t) = 0 if e_i is up to date at time t, and t - t_m(e_i) otherwise, where t_m(e_i) is the time of the last modification of e_i
A(S, t): average age of the database S at time t

Page 37

Framework
The time-averaged freshness (and, analogously, age) is used to compare different page refresh policies:

$\hat{F}(e_i) = \lim_{t \to \infty} \frac{1}{t} \int_0^t F(e_i, t)\, dt$

$\hat{F}(S) = \lim_{t \to \infty} \frac{1}{t} \int_0^t F(S, t)\, dt$

Page 38

Framework
Poisson process N(t)
A stochastic process that counts the number of events occurring in the interval [0, t]
Properties
Memoryless: the number of events occurring in a bounded interval of time after time t is independent of the number of events that occurred before t
Events occur at rate $\lambda$: $\lim_{\Delta t \to 0} \Pr[N(t + \Delta t) - N(t) = 1] / \Delta t = \lambda$

Page 39

Framework
Hypothesis: a Poisson process is a good approximation for how pages are modified
The probability that a page is modified at least once in the interval (0, t] is $1 - \exp(-\lambda t)$
The higher the rate $\lambda$, the higher the probability of a change
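As a quick worked illustration (numbers chosen here, not taken from the slides): a page that changes on average $\lambda = 3$ times per day has probability $1 - e^{-3 \cdot 1} \approx 0.95$ of changing at least once in a day, but only $1 - e^{-3/24} \approx 0.12$ of changing within any given hour.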


Page 41

Framework
Evolution of the database
Uniform rate of change (a single λ for all pages)
Reasonable when the parameter λ is not known
Non-uniform rate of change (a different λ for each page)

Page 42

Refresh policies
Refresh frequency
One must decide how many refreshes can be performed per unit of time
We assume the N elements are refreshed every I units of time
Decreasing I increases the refresh rate
The ratio N/I depends on the number of crawlers, the available bandwidth, the bandwidth at the servers, etc.

Page 43

Refresh policies
Resource allocation
After deciding how many elements to refresh, we must decide the refresh frequency of each element
We can refresh all elements at the same rate, or refresh more often the elements that change more often

Page 44

Refresh policies
Example
A database contains 3 elements {e1, e2, e3}
Their change rates are 4/day, 3/day and 2/day, respectively
9 refreshes per day are allowed
At what rates should the refreshes be made?

Page 45

Refresh policies
Uniform policy
3 refreshes/day for every page
Proportional policy
e1: 4 refreshes/day, e2: 3 refreshes/day, e3: 2 refreshes/day
Which of the policies is better?
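A hedged numerical check of this question, assuming pages change as a Poisson process with rate λ and are refreshed at regular intervals of length 1/f, in which case the time-averaged freshness of a page works out to $(1 - e^{-\lambda/f}) \cdot f/\lambda$; the script is illustrative, not from the lecture.

```python
import math

def avg_freshness(lam, f):
    """Time-averaged freshness of a page with Poisson change rate lam,
    refreshed at regular intervals of length 1/f (both per day)."""
    if f == 0:
        return 0.0
    return (1.0 - math.exp(-lam / f)) * f / lam

rates = [4.0, 3.0, 2.0]            # changes/day for e1, e2, e3
uniform = [3.0, 3.0, 3.0]          # 9 refreshes/day split evenly
proportional = [4.0, 3.0, 2.0]     # refreshes proportional to change rate

for name, freqs in [("uniform", uniform), ("proportional", proportional)]:
    db = sum(avg_freshness(l, f) for l, f in zip(rates, freqs)) / len(rates)
    print(f"{name:12s} average freshness = {db:.3f}")
```

Under these assumptions the uniform split comes out slightly ahead (about 0.64 versus 0.63 average freshness), which anticipates the point made on the later slides that refreshing fast-changing pages proportionally harder does not pay off.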

Page 46

Optimal policy
Computing the optimal refresh frequency: given an average refresh frequency f = N/I and the average change rate λ_i of each page, compute the frequency f_i at which each page e_i should be refreshed.

Maximize

$\hat{F}(S) = \frac{1}{N} \sum_{i=1}^{N} \hat{F}(e_i) = \frac{1}{N} \sum_{i=1}^{N} \bar{F}(\lambda_i, f_i)$

subject to

$\frac{1}{N} \sum_{i=1}^{N} f_i = f$

Page 47

Optimal policy
Consider a database with 5 elements whose change rates are 1, 2, 3, 4 and 5, and assume that 5 refreshes per day are possible.

Page 48

Optimal policy
Optimal refresh frequency: maximizing the freshness yields the following curve
[Figure: optimal refresh frequency as a function of the change rate (plot not reproduced)]

Page 49

Optimal policy
Maximizing the freshness
Pages that change very rarely should be refreshed rarely
Pages that change too often should also be refreshed rarely
A refresh consumes resources and keeps the refreshed page fresh for only a very short time

Page 50

Experiments
270 sites were chosen; from each of them, 3000 pages are refreshed every day
The experiments were run from 9 PM to 6 AM over 4 months
10-second interval between requests to the same site

Page 51

Estimating change frequencies
The average change interval of a page is estimated by dividing the period over which the experiment was run by the total number of changes observed in that period
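For example (figures invented for illustration): a page observed over a 120-day experiment that changed 12 times in that window gets an estimated average change interval of 120/12 = 10 days, i.e. an estimated change rate of λ ≈ 0.1 per day.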

Page 52

Experiments
Verifying the Poisson process
Select the pages that change at a given average rate (e.g., once every 10 days) and plot the distribution of the intervals between changes

Page 53

Verifying the Poisson process
Horizontal axis: time between two changes
Vertical axis: fraction of the changes that occurred in the given interval
The straight lines are the predictions
For pages that change very often and pages that change very rarely, no conclusions could be drawn.

Page 54

Resource allocation
Based on the estimated frequencies, the expected freshness is computed for each of the three policies considered
We assume it is possible to visit 1G pages per month

Page 55

Resources
IIR Chapter 20
See also: "Effective Page Refresh Policies For Web Crawlers", Cho & Garcia-Molina

Page 56

Assignment 2 – Proposal
Study refresh policies for Web pages
"Effective Page Refresh Policies For Web Crawlers"
"Optimal Crawling Strategies for Web Search Engines"
"User-centric Web crawling" http://www.cs.cmu.edu/~olston/publications/wic.pdf

Page 57

Assignment 3 – Proposal
Study policies for distributing crawlers
"Parallel Crawlers"
Conduct a bibliographic survey of papers that cite it

Page 58

Parallel crawling
Why?
Aggregate resources of many machines
Network load dispersion
Issues
Quality of pages crawled (as before)
Communication overhead
Coverage; overlap

Page 59

Parallel crawling approach [Cho+ 2002]
Partition URLs; each crawler node is responsible for one partition
Many choices for the partition function, e.g., hash(host IP)
What kind of coordination among crawler nodes? 3 options:
firewall mode
cross-over mode
exchange mode
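A minimal sketch of host-based partitioning, assuming the hash is taken over the host name rather than its IP (a simplification of the slide's hash(host IP)); a stable hash such as MD5 is used so that every node agrees on the assignment.

```python
import hashlib
from urllib.parse import urlsplit

NUM_NODES = 4  # illustrative number of crawler nodes

def owner_node(url, num_nodes=NUM_NODES):
    """Map a URL to the crawler node responsible for its host.

    Hashing the host (not the full URL) keeps every page of a site on one
    node, which is what makes per-host politeness enforceable locally.
    """
    host = urlsplit(url).hostname or ""
    digest = hashlib.md5(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_nodes

# In exchange mode, a node that extracts a URL it does not own would send it
# to owner_node(url); in firewall mode it would simply not follow that link.
```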

Page 60

Coordination of parallel crawlers
[Diagram: a small web graph of pages a–i split between crawler node 1 and crawler node 2, illustrating the firewall, cross-over and exchange coordination modes]

Page 61

Assignment 4 – Proposal
Google File System; MapReduce
http://labs.google.com/papers.html

Page 62

Assignment 5 – Proposal
XML Parsing, Tokenization, and Indexing