Try It The Google Way .
TRY IT THE GOOGLE WAY !!!
FOUNDERS:
Larry Page (currently President of Products) and Sergey Brin (President of Technology)
Created the “BackRub” web search engine in 1996, with the aim of downloading the web onto their own machines and analyzing its link structure
HISTORY OF GOOGLE SO FAR:
In 1998, Larry and Sergey (Stanford graduates) renamed BackRub to Google and founded their company, “Google Inc.”
Later that year they received their first funding cheque, worth $100,000.
In 2000, the Google Toolbar and AdWords were introduced.
AOL officially added Google as its search partner.
In 2003, Google launched its AdSense program.
SOME ROUGH STATISTICS OF GOOGLE (AS OF AUGUST 29TH, 1996)
Number of web pages fetched: 24 million
Total indexable HTML URLs: 75.2306 million
Total content downloaded: 207.022 gigabytes
SERVICES PROVIDED BY GOOGLE APART FROM BEING A SEARCH ENGINE
WHAT MADE GOOGLE SO POPULAR?
Chief features: the PageRank algorithm and anchor text.
Other features: BigFiles, the repository, the document index, and hit lists.
PAGERANK ALGORITHM (BRINGING ORDER TO THE WEB)
A PageRank for 26 million web pages can be computed in a few hours on a medium-size workstation.
First, a citation (link) graph of the web is built, containing as many as 518 million hyperlinks.
This graph is then used to calculate the PageRank of each web page.
A simple formula produces the page ranks used to order results for any search.
PAGERANK FORMULA
PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
T1…Tn are the pages that cite (link to) page A.
d is the damping factor, a value between 0 and 1, usually set to 0.85.
C(Ti) is the number of links going out of page Ti.
PageRank can be computed with a simple iterative algorithm.
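The iterative computation can be sketched as follows. The four-page link graph here is a made-up example for illustration, not actual web data:

```python
# Iterative PageRank sketch over a tiny hypothetical link graph.
# `links` maps each page to the pages it links to (its outgoing citations).
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, d=0.85, iterations=50):
    """Repeatedly apply PR(A) = (1-d) + d * sum(PR(T)/C(T)) over pages T citing A."""
    pr = {page: 1.0 for page in links}  # start every page with the same rank
    for _ in range(iterations):
        new_pr = {}
        for page in links:
            # Sum contributions from every page t that links to `page`,
            # each divided by C(t), the number of t's outgoing links.
            incoming = sum(pr[t] / len(links[t])
                           for t in links if page in links[t])
            new_pr[page] = (1 - d) + d * incoming
        pr = new_pr
    return pr

ranks = pagerank(links)
# Page C collects links from A, B, and D, so it ends up ranked highest;
# page D has no incoming links and settles at the minimum, 1 - d = 0.15.
```

Note that a page with no incoming citations still receives the baseline rank (1 - d), which is what the damping factor contributes.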
ANCHOR TEXT
The text of a link usually describes the type of page it points to.
Google keeps a separate index that associates each link's anchor text with the page the link points to, rather than the page it appears on.
This makes it possible to retrieve even pages that have not been crawled.
In fact, the search engine can return a page it has never fetched at all, purely because hyperlinks point to it.
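The idea can be sketched as a small index keyed by link target. The structure and names here are illustrative assumptions, not Google's actual implementation:

```python
# Anchor-text index sketch: each link's text is stored against the page
# it points TO, not the page it appears on. (Hypothetical structure.)
from collections import defaultdict

anchor_index = defaultdict(list)  # target URL -> anchor texts of incoming links

def add_link(source_url, target_url, anchor_text):
    anchor_index[target_url].append(anchor_text)

def search_anchors(query):
    """Return targets whose incoming anchor text mentions the query word."""
    return [target for target, texts in anchor_index.items()
            if any(query.lower() in t.lower() for t in texts)]

# A page we never crawled can still be found via the text of links to it.
add_link("http://a.example/", "http://uncrawled.example/", "cheap flights")
add_link("http://b.example/", "http://uncrawled.example/", "flight deals")
results = search_anchors("flights")
```

Because the index is keyed by the target URL, the target page itself never needs to be fetched for it to appear in results.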
REPOSITORY
The repository contains the full HTML of every web page.
Each page is compressed using zlib, which achieves roughly a 3-to-1 compression ratio.
The documents are stored one after another, each prefixed by its docID, length, and URL.
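A record layout of this kind can be sketched with Python's zlib and struct modules. The exact field widths here are assumptions for illustration, not Google's actual on-disk format:

```python
import struct
import zlib

HEADER = "<QII"  # docID (8 bytes), URL length, compressed-body length

def pack_record(doc_id, url, html):
    """Store one document: fixed-width header, then URL, then zlib-compressed HTML."""
    compressed = zlib.compress(html.encode("utf-8"))
    url_bytes = url.encode("utf-8")
    header = struct.pack(HEADER, doc_id, len(url_bytes), len(compressed))
    return header + url_bytes + compressed

def unpack_record(blob):
    """Reverse pack_record: recover (docID, URL, original HTML)."""
    doc_id, url_len, body_len = struct.unpack_from(HEADER, blob, 0)
    offset = struct.calcsize(HEADER)
    url = blob[offset:offset + url_len].decode("utf-8")
    body = blob[offset + url_len:offset + url_len + body_len]
    return doc_id, url, zlib.decompress(body).decode("utf-8")

record = pack_record(42, "http://example.com/", "<html><body>hello</body></html>")
doc_id, url, html = unpack_record(record)
```

Storing the lengths in the header is what lets records be laid down one after another and scanned sequentially.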
HIT LISTS - A hit list records the occurrences of a particular word in a particular document, including position, font, and capitalization information.
DOCUMENT INDEX - The document index keeps information about each document. It is a fixed-width ISAM (Index Sequential Access Mode) index, ordered by docID.
BIGFILES - BigFiles are virtual files spanning multiple file systems, addressable by 64-bit integers.
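A hit of the kind described above can be packed into a compact fixed-width value. The field widths below (1 capitalization bit, 3 font-size bits, 12 position bits) are illustrative assumptions, not the exact layout Google used:

```python
# Compact hit-encoding sketch: one word occurrence packed into 16 bits.
# Layout (assumed): 1 bit capitalization | 3 bits font size | 12 bits position.

def pack_hit(capitalized, font_size, position):
    assert 0 <= font_size < 8 and 0 <= position < 4096
    return (int(capitalized) << 15) | (font_size << 12) | position

def unpack_hit(hit):
    return bool(hit >> 15), (hit >> 12) & 0x7, hit & 0xFFF

hit = pack_hit(True, 5, 1000)       # a capitalized word at position 1000
fields = unpack_hit(hit)
```

Packing each occurrence into a couple of bytes is what keeps hit lists small enough to store for every word in every document.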
GOOGLE ARCHITECTURE OVERVIEW
CRAWLING THE WEB
In order to scale to hundreds of millions of web pages, Google has a fast distributed crawling system. A single URLserver serves lists of URLs to a number of crawlers (typically around three). Both the URLserver and the crawlers are implemented in Python. Each crawler keeps roughly 300 connections open at once. At peak speeds, the system can crawl over 100 web pages per second using four crawlers. Googlebot is the search bot software used by Google, which collects documents from the web to build a searchable index for the Google Search engine.
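The URLserver/crawler split can be sketched minimally with a worker pool that keeps several fetches in flight at once. The fetch function is injected so the sketch runs without network access; a real crawler would wrap something like urllib.request.urlopen with timeouts and robots.txt checks, and would use far more than a handful of connections:

```python
# Minimal crawler sketch: a list of URLs (as handed out by a URLserver)
# is fetched concurrently by a pool of workers. (Hypothetical structure;
# Google's crawlers kept ~300 connections open each.)
from concurrent.futures import ThreadPoolExecutor

def crawl(urls, fetch, workers=8):
    """Fetch every URL concurrently and return a URL -> content mapping."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(urls, pool.map(fetch, urls)))

# Stand-in fetcher so the example is self-contained.
pages = crawl(["http://a.example/", "http://b.example/"],
              fetch=lambda url: "<html>%s</html>" % url,
              workers=2)
```

Keeping many requests in flight matters because each individual fetch spends most of its time waiting on DNS and the remote server, not on local CPU.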
WHAT ELSE CAN GOOGLE DO?
Refine search results, calculator, currency converter, time zones, specific “filetype” searches, advanced search, I'm Feeling Lucky, dictionary, language translation.
CREATED BY:
ANMOL BUBER (0713313015), ABHINAV SINGH (0713313003)