Transcript of "Challenges and Evolution of the LHC Production Grid" (Ian Fisk, April 13, 2011)

Page 1: Challenges and Evolution of the LHC Production Grid

Challenges and Evolution of the LHC Production Grid

April 13, 2011

Ian Fisk

Page 2: Evolution

Evolution

➡ Over its development, the WLCG production grid has oscillated between structure and flexibility

- Driven by capabilities of the infrastructure and the needs of the experiments


[Diagram: the MONARC-style hierarchy (CERN; Tier-1s in Germany, USA/FermiLab, UK, France, Italy, NL, USA/Brookhaven, ...; Tier-2s at labs and universities; physics department desktops; Open Science Grid) alongside the experiments' evolved access models: ALICE remote access, PD2P/Popularity, CMS full mesh.]

Page 3: Evolution

Evolution

➡ LHC Computing has grown up with Grid development

- Many previous experiments have achieved distributed computing

- LHC experiments started with a fully distributed computing model


‣ LHC Computing Grid was approved by CERN Council Sept. 20 2001

‣ First Grid Deployment Board was Oct. 2002

‣ LCG was built on services developed in Europe and the US.

‣ LCG has collaborated with a number of Grid Projects

‣ It evolved into the Worldwide LCG (WLCG)

‣ EGEE, EGI, NorduGrid, and Open Science Grid

‣ Services Support the 4 LHC Experiments

Grid Solution for Wide Area Computing and Data Handling

NORDUGRID

Page 4: WLCG Today

WLCG Today


• Today >140 sites

• ~150k CPU cores

• Hit 1M jobs per day

• >50 PB disk

Page 5: Architectures

Architectures

➡ To greater and lesser extents, the LHC computing models are based on the MONARC model

- Developed more than a decade ago

- Foresaw tiered computing facilities to meet the needs of the LHC experiments


[Excerpt shown on the slide:]

"... acknowledgement of the objective situation of network bandwidths and costs. Short distance networks will always be cheaper and higher bandwidth than long distance (especially intercontinental) networks. A hierarchy of centres with associated data storage ensures that network realities will not interfere with physics analysis. Finally, regional centres provide a way to utilise the expertise and resources residing in computing centres throughout the world. For a variety of reasons it is difficult to concentrate resources (not only hardware but more importantly, personnel and support resources) in a single location. A regional centre architecture will provide greater total computing resources for the experiments by allowing flexibility in how these resources are configured and located.

A corollary of these motivations is that the regional centre model allows to optimise the efficiency of data delivery/access by making appropriate decisions on processing the data (1) where it resides, (2) where the largest CPU resources are available, or (3) nearest to the user(s) doing the analysis. Under different conditions of network bandwidth, required turnaround time, and the future use of the data, different combinations of (1)-(3) may be optimal in terms of resource utilisation or responsiveness to the users.

Figure 4-1 shows a schematic of the proposed hierarchy.

4.3 Characteristics of Regional Centres

The various levels of the hierarchy are characterised by services and capabilities provided, constituency served, data profile, and communications profile. The offline software of each experiment performs the following tasks: initial data reconstruction (which may include several steps such as preprocessing, reduction and streaming; some steps might be done online); Monte Carlo production (including event generation, detector simulation and reconstruction); offline (re)calibration; successive data reconstruction; and ..."

[Figure ("Model Circa 2005"): CERN/CMS with 350k SI95, 350 TB disk and a tape robot; FNAL/BNL Tier-1 centres with 70k SI95, 70 TB disk and robots; Tier-2 centres with 20k SI95, 20 TB disk and robots; Tier-3 university working groups; links of 622 Mbit/s (N x 622 Mbit/s into CERN).]

Fig. 4-1 Computing for an LHC Experiment Based on a Hierarchy of Computing Centers. (Capacities for CPU and disk are representative and are provided to give an approximate scale.)

[Diagram: per-experiment Tier-0/Tier-1/Tier-2 topologies for ALICE, ATLAS, CMS and LHCb, ranging from a hierarchical "cloud" to a full "mesh".]

Page 6: Working Today

Working Today

➡ At the LHC most analysis work is conducted far away from the data archives, and storage is widely distributed

[Diagram: CERN (prompt processing, archival storage on tape) feeding Tier-1s (organized processing, storage, tape, data serving), which feed Tier-2s (chaotic analysis).]

Page 7: Processing Scale

Processing Scale

➡ 2010 was the first full year of running

- Adding Tier-1 and Tier-2 computing time, the LHC experiments used roughly 80 CPU millennia in 2010



Figure 41: Percentage of available CPU hours used for all Tier-1s and CERN in 2010 by VO

350M hours at Tier-1
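As a rough cross-check of these numbers (my own arithmetic, assuming roughly 8,766 hours per year), 80 CPU millennia is about 700 million CPU hours, consistent with ~350M hours at the Tier-1s plus a comparable amount at the Tier-2s:

```latex
80~\text{CPU millennia} = 8\times 10^{4}~\text{CPU years}
\approx 8\times 10^{4} \times 8766~\text{h} \approx 7\times 10^{8}~\text{CPU hours}
```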

Page 8: Scale of Storage

Scale of Storage

➡ Decreases in the cost of disk and the technology to run big disk farms

- The LHC is no longer talking about 10% disk caches

- In 2011 the majority of the currently accessed data could be disk resident

                  ALICE   ATLAS     CMS    LHCb
T0 Disk (TB)       6100    7000    4500    1500
T0 Tape (TB)       6800   12200   21600    2500
T1 Disk (TB)       7900   24800   19500    3500
T1 Tape (TB)      13100   30100   52400    3470
T2 Disk (TB)       6600   37600   19900      20
Disk Total (TB)   20600   69400   43900    5020
Tape Total (TB)   19900   42300   74000    5970


For comparison, the Tevatron experiments DZero and CDF: roughly 500 TB of disk each, with 5900 TB and 6600 TB of tape respectively.

Page 9: Scale of Archival Storage

Scale of Archival Storage

➡ The challenge is the growing volume of data that is produced

✦ With the current technology evolution CERN will have robotic capacity for half an exabyte

[Plots: experiment data in CERN Castor; experiment data in FNAL Enstore per day (26 PB total).]

Page 10: Analysis Disk Storage

Analysis Disk Storage

➡ Example from LHC

- Tier-2s are very heavily utilized

- Many of the challenging IO applications are conducted at centers with disk-only storage

➡ Tier-2s vary from 10s of TB at the smallest site to 1PB of disk at the larger sites

- There have been many more options to manage this much space

➡ In 2011 there are more than 60 PB of Tier-2 disk in the LHC

[Diagram: Tier-2 sites with a variety of storage systems (labels include GPFS and GFS).]


Country Usage Reports for Tier-2s

[Plots: Figure 17, total usage of Tier-2 sites by experiment (monthly percentage of hours used for ALICE, ATLAS, CMS, LHCb and the total); Figure 18, Australia Tier-2 usage; Figure 19, Austria Tier-2 usage.]

Page 11: Evolving Challenge - Data Management

Evolving Challenge - Data Management

➡ Data is backed up on tape. Organized processing centers have substantial disk caches

➡ Analysis centers have large disk resources

➡ Good options in technology for virtualizing disk storage

➡ What's the problem?

- There are almost 100 Tier-2 sites that make up WLCG

✦ Managing the space accessed by users efficiently is an interesting problem

Page 12: Placement

Placement

➡ Computing models for the LHC were based, to greater and lesser extents, on the MONARC computing model of 2000 and relied heavily on data placement

[Diagram: a Tier-1 with tape serving two Tier-2s.]

- Jobs were sent to datasets already resident on sites (a minimal sketch of this brokering follows below)

- Multiple copies of the data would be hosted on the distributed infrastructure

- General concern that the network would be insufficient or unreliable
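To make the placement-driven model concrete, here is a minimal sketch of my own (not any experiment's actual workload-management code; the catalogue and site names are invented): a job is only brokered to a site that already hosts its input dataset.

```python
# Minimal sketch of placement-based brokering: jobs follow the data.
# The replica catalogue and site names below are hypothetical.
replica_catalogue = {
    "T1_DE": {"/data/RunA/RECO", "/mc/sample1"},
    "T2_US_X": {"/data/RunA/AOD"},
    "T2_FR_Y": {"/data/RunA/AOD", "/mc/sample1"},
}

def broker(dataset, free_slots):
    """Send the job to a site that already hosts the dataset, preferring free CPU."""
    candidates = [site for site, held in replica_catalogue.items() if dataset in held]
    if not candidates:
        raise LookupError(f"no replica of {dataset}: the data must be placed first")
    return max(candidates, key=lambda site: free_slots.get(site, 0))

print(broker("/data/RunA/AOD", {"T2_US_X": 120, "T2_FR_Y": 40}))  # -> T2_US_X
```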

(Figure: Richard Mount)

Page 13: Distribution

Distribution

- Change from:

- To:

[Figures: "Transfers West" and "Transfers East" among Tier-1s and Tier-2s, contrasting the distribution pattern before and after the change.]

Page 14: Data Management

Data Management

➡ The experiments have chosen a variety of philosophies

- ATLAS started with replication of nearly all data out to the regions

- CMS divided its space among central background samples, physics groups, and the local community


[Plot: CMS space (TB) per physics group (Analysis Ops, B-Physics, b-tag, E-gamma, EWK, Exotica, Forward, HI, Higgs, Jet-MET, Local, Muon, QCD, SUSY, Tau-pFlow, Top, Tracker-DPG, Tracker-POG, Trigger), up to about 2000 TB.]

Page 15: Access

Access

➡ For a system intended to protect against weak networking, we're using a lot of network

- LHC experiments reprocessed a lot of data in 2010

- Refreshing large disk caches requires a lot of networking


ATLAS Demonstrator: PanDA Dynamic Data Placement (Kaushik De, Tadashi Maeno, Torre Wenaus, Alexei Klimentov, Rodney Walker, Graeme Stewart), GDB, October 2010

[Plot: number of accesses, scale up to ~700,000.]

ATLAS: traffic on the OPN up to 70 Gb/s during ATLAS reprocessing campaigns.

CMS: in 2010, 30% of samples subscribed by physicists were not used for 3 months.

[Plot annotation: 100-1000 CPU hours.]

Page 16: Placement

Placement

➡ In an environment that discounts the network, the sites are treated independently

- On the time scale of a job submitted to and running on a site, it is assumed the local environment cannot be changed

➡ From a data access perspective, in 2011 data available over the network from a disk at a remote site may be closer than data on the local tape installation

[Diagram: a Tier-2 connected over the network to several Tier-1s, each with tape.]

Page 17: Dynamic Replication

Dynamic Replication

➡ ATLAS introduced PanDA Dynamic Data Placement (PD2P)

➡ Jobs are sent to Tier-1 and data replicated to a Tier-2 at submission time
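A minimal sketch of the idea (my own illustration, not the actual PanDA/PD2P implementation; the catalogue, queue, and site names are invented): the first job runs at the Tier-1 holding the data while a Tier-2 replica is requested, so subsequent jobs can be brokered to the Tier-2.

```python
# Sketch of PD2P-style dynamic placement: run at the Tier-1 now, request a
# Tier-2 replica so later jobs can run there. All names are hypothetical.
replicas = {"/data/RunA/AOD": {"T1_DE"}}      # dataset -> sites holding a copy
pending_transfers = []                         # replication requests queued for the transfer system

def submit(dataset, tier2_candidates):
    sites = replicas[dataset]
    tier2s = sorted(s for s in sites if s.startswith("T2"))
    if tier2s:
        return tier2s[0]                       # a Tier-2 already has the data: run there
    tier1 = sorted(s for s in sites if s.startswith("T1"))[0]
    target = tier2_candidates[0]               # pick a Tier-2 (in reality by space, queue, ...)
    pending_transfers.append((dataset, target))
    replicas[dataset].add(target)              # register the replica (really: once the transfer completes)
    return tier1                               # this job still runs at the Tier-1

print(submit("/data/RunA/AOD", ["T2_US_X", "T2_FR_Y"]))  # first job  -> T1_DE
print(submit("/data/RunA/AOD", ["T2_US_X", "T2_FR_Y"]))  # later jobs -> T2_US_X
```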


[Diagram: Tier-1 to Tier-2 replication.]

From the GDB, October 2010:

Data Reuse

• Most data is not reused

• But that which is reused is quite heavily accessed

• 1.6M file accesses to PD2P replicated datasets


Page 18: Data Placement and Reuse

Data Placement and Reuse

➡ Dynamic placement now accounts for a lot of the networking

➡ Re-brokering jobs is increasing the reuse of samples and the efficiency


From the GDB, January 2011:

Reuse Improving
• Helped by re-brokering

PD2P Activity
• PD2P now responsible for significant data movement on the grid

Page 19: Popularity

Popularity

➡ If you want to understand how better to manage storage space, it is important to know how it is used

➡ It is an interesting challenge to track the utilization of 30 PB worth of files spread over more than 50 sites

- Equally important to know what's not accessed
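As an illustration of the bookkeeping involved, here is a minimal sketch of my own (the access-log format and dataset names are invented, and this is not the actual popularity service): per-dataset access counts make both the hot samples and the never-touched ones stand out.

```python
from collections import Counter

# Hypothetical access records harvested from sites: (dataset, site, n_accesses).
access_log = [
    ("/data/RunA/AOD", "T2_US_X", 1200),
    ("/data/RunA/AOD", "T2_FR_Y", 300),
    ("/mc/sample1",    "T2_DE_Z", 40),
]
all_datasets = {"/data/RunA/AOD", "/mc/sample1", "/mc/sample2"}

popularity = Counter()
for dataset, site, n in access_log:
    popularity[dataset] += n

never_accessed = all_datasets - set(popularity)  # just as important to know
print(popularity.most_common())                  # hottest samples first
print(never_accessed)                            # e.g. {'/mc/sample2'}
```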

[Embedded slide: "Architecture", D. Giordano (CERN), 06.04.2011.]

Page 20: Clean-Up and Replication

Clean-Up and Replication

➡ Once popularity is understood

- Popular data can be replicated multiple times

- Unused data replicas can be cleaned up


➡ Data popularity will be tracked at the file level

- Improves granularity and should improve reuse of the service
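To make the policy concrete, a toy sketch of my own (the thresholds and replica counts are invented; this is not an experiment's actual replication or clean-up agent): popularity counts drive both directions, adding replicas for hot data and retiring extra replicas of cold data.

```python
# Toy policy on top of per-dataset popularity counts. Thresholds are invented
# for illustration and would be tuned against real usage patterns.
def plan_actions(popularity, replica_count, hot_threshold=1000, cold_threshold=10):
    actions = []
    for dataset, n_accesses in popularity.items():
        copies = replica_count.get(dataset, 1)
        if n_accesses >= hot_threshold and copies < 4:
            actions.append(("add_replica", dataset))             # spread popular data wider
        elif n_accesses <= cold_threshold and copies > 1:
            actions.append(("remove_extra_replicas", dataset))   # keep one copy, free disk
    return actions

print(plan_actions({"/data/RunA/AOD": 1500, "/mc/sample1": 3},
                   {"/data/RunA/AOD": 2, "/mc/sample1": 3}))
```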

Page 21: Analysis Data

Analysis Data

➡ We like to think of high energy physics data as a series of embarrassingly parallel events

➡ In reality that is not how we either write or read the files

- More like:

➡ Big gains in how storage is used come from optimizing how events are read and streamed to an application (sketched below)

- Big improvements from the ROOT team and application teams in this area


[Illustration: events 1-8 as they are actually laid out in the file.]
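As a rough illustration of this kind of read optimization (a sketch only; the file, tree and branch names are hypothetical, while the calls are standard ROOT TTree I/O controls): the analysis enables only the branches it needs and lets the TTreeCache turn many small reads into a few large ones.

```python
import ROOT

# Hypothetical file/tree/branch names; SetCacheSize/SetBranchStatus/AddBranchToCache
# are the standard TTree controls for selective, cached reading.
f = ROOT.TFile.Open("analysis_sample.root")
tree = f.Get("Events")

tree.SetCacheSize(20 * 1024 * 1024)      # 20 MB TTreeCache: fewer, larger reads
tree.SetBranchStatus("*", 0)             # disable all branches ...
for name in ("muon_pt", "muon_eta"):     # ... then enable only what the analysis uses
    tree.SetBranchStatus(name, 1)
    tree.AddBranchToCache(name, True)

for i in range(tree.GetEntries()):
    tree.GetEntry(i)                     # reads only the enabled branches
```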

Page 22: Wide Area Access

Wide Area Access

➡ With properly optimized IO, other methods of managing the data and the storage are available

- Sending data directly to applications over the WAN

➡ It is not immediately obvious that this increases the wide area network transfers

- If a sample is only accessed once, then transferring it beforehand or reading it in real time sends the same number of bytes

- If we only read a portion of the file, then it might be fewer bytes
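A concrete comparison with invented numbers: pre-placing a 4 GB file that is read once costs the same 4 GB over the WAN as streaming it, but if the optimized job touches only a quarter of the branches, streaming moves less:

```latex
\text{bytes}_{\text{pre-placed}} = 4\ \text{GB}
\qquad\text{vs.}\qquad
\text{bytes}_{\text{WAN read}} \approx 0.25 \times 4\ \text{GB} = 1\ \text{GB}
```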


Page 23: xrootd Demonstrator

xrootd Demonstrator

➡ The current xrootd demonstrator in CMS is intended to support university computing

- Facilities in Nebraska and Bari with data served from a variety of locations

- The Tier-3 receiving the data runs essentially diskless

➡ A similar installation is being prepared in ATLAS

[Diagram, "Caching Case": user analysis at a Tier-3 site goes through a local xrootd redirector backed by xrootd caches and local data; misses are forwarded to the global xrootd redirector, which locates xrootd servers at remote sites.]

Notice xrootd can download from multiple sites at once! This helps one avoid overloaded sites; bittorrent-like.
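A toy model of that caching case (my own sketch; this is not the real xrootd protocol or API, and every site and file name is invented), just to make the lookup path explicit: the Tier-3 checks its cache, otherwise asks the global redirector which sites hold the file and can pull from several of them.

```python
# Toy model of the xrootd "caching case": local cache first, then ask the global
# redirector which remote sites can serve the file. All names are invented.
class GlobalRedirector:
    def __init__(self, site_catalog):
        self.site_catalog = site_catalog                  # site -> set of file paths

    def locate(self, path):
        return [s for s, files in self.site_catalog.items() if path in files]

class Tier3Client:
    def __init__(self, redirector):
        self.cache = {}                                    # path -> bytes (the local xrootd cache)
        self.redirector = redirector

    def open(self, path):
        if path in self.cache:
            return "served from local cache"
        sources = self.redirector.locate(path)             # may return several sites
        self.cache[path] = b"".join(                       # fetch chunks from all of them
            b"chunk-from-" + s.encode() for s in sources)
        return f"fetched from {sources}"

redirector = GlobalRedirector({"T2_US_Nebraska": {"/store/file.root"},
                               "T2_IT_Bari": {"/store/file.root"}})
t3 = Tier3Client(redirector)
print(t3.open("/store/file.root"))   # fetched from both sites, bittorrent-like
print(t3.open("/store/file.root"))   # served from local cache
```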

Page 24: Performance

Performance

➡ This Tier-3 has a 10 Gb/s network

➡ CPU efficiency is competitive

Example: T3 at Omaha

• We don't have the effort to efficiently maintain CMS PhEDEx at Omaha.

• This T3 only reads from the global xrootd system. Good continuous test.

• 6,000 wall hours in the last day.

Data served (xrootd traffic to Omaha): average 1.5 TB/day, max 8 TB/day. Won't win records, but shows it's not a joke.

Omaha analysis: CPU efficiency about 60% in Omaha; best USCMS T2 efficiency about 80%.

Page 25: Networking

Networking

➡ ALICE Distributes Data in this way

- Rate from the ALICE Xrootd servers is comparable in peaks to other LHC experiments

[Plot: data rate from the ALICE xrootd servers, with peaks around 1 GB/s.]

Page 26: Future?

Future?

➡ Once you have streams of objects and optimized IO, an analysis application like skimming does not look so different from video streaming

- Read in an incoming stream of objects; once in a while read the entire event

➡ Web delivery of content in a distributed system is an interesting problem, but one with lots of existing tools

- Early interest in Content Delivery Networks and other technologies capable of delivering a stream of data to lots of applications


Page 27: Outlook

Outlook

➡ The first year of running at the LHC went well

- We are learning rapidly how to operate the infrastructure more efficiently

- Making more dynamic use of the storage and making better use of the networking

- 2011 and 2012 are long runs at the LHC

✦ Data volumes and user activities are both increasing
