PROOF/Xrootd for a Tier3


Page 1: PROOF/Xrootd for a Tier3

Mengmeng Chen, Michael Ernst, Annabelle Leung, Miron Livny, Bruce Mellado, Sergey Panitkin, Neng Xu and Sau Lan Wu
BNL/Wisconsin

Special thanks to Gerri Ganis, Jan Iwaszkiewicz, Fons Rademakers, Andy Hanushevsky, Wei Yang, Dan Bradley, Sridhara Dasu, Torre Wenaus and the BNL team

Tools meeting at SLAC, 11/28/07

Page 2: PROOF/Xrootd for a Tier3

Outline

• Introduction

• PROOF benchmarks

• Our views on Tier3

• A Multilayer Condor System

• PROOF and Condor’s COD

• The I/O Queue

• Data Redistribution in Xrootd

• Outlook and Plans

Page 3: PROOF/Xrootd for a Tier3

PROOF/XROOTD

When the data come, it will not be possible for a physicist to do analysis with ROOT on a single node, due to the large data volumes.
• We need to move to a model that allows parallel processing for data analysis, i.e. distributed analysis.

As far as software for distributed analysis goes, US ATLAS is going for the Xrootd/PROOF system.
• Xrootd is a set of tools for serving data, maintained by SLAC, which is proven to support up to 1000 nodes with no scalability problems within this range.

• PROOF (the Parallel ROOT Facility, CERN) is an extension of ROOT allowing transparent analysis of large sets of ROOT files in parallel on compute clusters or multi-core computers

See Sergey Panitkin's talk at the PROOF workshop at CERN on Thursday for an overview of ATLAS efforts and experience.
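To make the PROOF model concrete, here is a minimal PyROOT sketch of a PROOF session; the master host, tree name, file URL and selector are hypothetical placeholders, not a tested configuration:

```python
# Minimal sketch of driving PROOF from PyROOT (hypothetical names throughout).
import ROOT

proof = ROOT.TProof.Open("proofmaster.example.edu")  # connect to the PROOF master
chain = ROOT.TChain("CollectionTree")                # assumed tree name
chain.Add("root://redirector.example.edu//data/aod/AOD.root")
chain.SetProof()                                     # route Process() through PROOF
chain.Process("MySelector.C+")                       # user-supplied TSelector
```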

Page 4: PROOF/Xrootd for a Tier3

PROOF in a Slide

PROOF: a dynamic approach to end-user HEP analysis on distributed systems, exploiting the intrinsic parallelism of HEP data.

[Diagram: the client sends commands/scripts to a top master; sub-masters, one per geographical domain, drive workers with access to MSS; a list of output objects (histograms, ...) is returned to the client. A PROOF-enabled facility: an Analysis Facility, Tier3.]

Page 5: PROOF/Xrootd for a Tier3

The End Point: Scalability

[Scalability plot, courtesy of the PROOF team]

Page 6: PROOF/Xrootd for a Tier3

Some Technical Details

Structure of the PROOF pool: redirector, worker, supervisor.

Procedure of a PROOF job:
1. The user submits the PROOF job.
2. The redirector finds the exact location of each file.
3. Workers validate each file.
4. Workers process the ROOT files.
5. The master collects the results and sends them to the user.
6. The user makes the plots.

Packetizers work like job schedulers (to be optimized for the Tier3; selecting one is sketched below):
• TAdaptivePacketizer (the default, with dynamic packet size)
• TPacketizer (optional, with fixed packet size)
• TForceLocalPacketizer (special: no network traffic between workers; workers only deal with files stored locally)
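For illustration, switching away from the default packetizer is a one-line session parameter; a minimal sketch, assuming a hypothetical master name:

```python
# Hedged sketch: selecting a non-default packetizer for a PROOF session.
import ROOT

proof = ROOT.TProof.Open("proofmaster.example.edu")  # hypothetical master
# Override the default TAdaptivePacketizer with the fixed-packet-size one.
proof.SetParameter("PROOF_Packetizer", "TPacketizer")
```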

Page 7: PROOF/Xrootd for a Tier3

Xrootd Test Farm at ACF BNL

• 10 machines allocated so far for the Xrootd test farm
  - Two dual-core Opteron CPUs at 1.8 GHz per node
  - 8 GB RAM per node
  - 4x 500 GB SATA drives per node, configured as a 2 TB partition
  - Gigabit network
• 5-node configuration used for tests: 1 redirector + 4 data servers, 20 CPU cores, ~10 TB of available disk space
• Behind the ACF firewall, i.e. visible from ACF only
• 2 people involved in setup, installation, configuration, etc.: ~0.25 FTE

Page 8: PROOF/Xrootd for a Tier3


Xrootd/PROOF Tests at BNL

• Evaluation of Xrootd as a data-serving technology
• Comparison to dCache and NFS servers
• Athena single-client performance with AODs
• I/O optimization for dCache and Xrootd
• Athena TAG-based analysis performance studies
• Athena scalability studies with AODs
• Evaluation of Xrootd/PROOF for ROOT-based analyses
  - Proof-of-principle tests (factor-of-N scaling)
  - "Real" analyses (Cranmer, Tarrade, Black, Casadei, Yu, ...): HighPtView, Higgs, ...
  - Started evaluation of different PROOF packetizers
• Evaluation and tests of the monitoring and administrative setup
• Integration with pathena and ATLAS DDM (T. Maeno)
• Disk I/O benchmarks, etc.

Page 9: PROOF/Xrootd for a Tier3

Integration with ATLAS DDM

Tested by Tadashi Maeno (see demonstration tomorrow).

Page 10: PROOF/Xrootd for a Tier3

PROOF Test Farms at GLOW-ATLAS

Big pool: 1 redirector + 86 computers
• 47 with AMD 4x 2.0 GHz cores, 4 GB memory
• 39 with Pentium 4 2x 2.8 GHz, 2 GB memory
• We use just the local disk for performance tests
• Only one PROOF worker runs on each node

Small pool A: 1 redirector + 2 computers
• 4x AMD 2.0 GHz cores, 4 GB memory, 70 GB disk
• Best performance with 8 workers running on each node

Small pool B: 1 redirector + 2 computers
• 8x Intel 2.66 GHz cores, 16 GB memory, 8x 750 GB on RAID 5
• Best performance with 8 workers running on each node; mainly for high-performance tests

Page 11: PROOF/Xrootd for a Tier3

Xrootd/PROOF Tests at GLOW-ATLAS

(Jointly with the PROOF team)

Focused on the needs of a university-based Tier3
• Dedicated farms for data analysis, including detector calibration and performance, and physics analysis with high-level objects

Various performance tests and optimizations
• Performance in various hardware configurations
• Response to different data formats, volumes and file multiplicities
• Understanding the system with multiple users

• Developing new ideas with the PROOF team
• Tests and optimization of packetizers: understanding the complexities of the packetizers

Page 12: PROOF/Xrootd for a Tier3

PROOF test webpage: http://www-wisconsin.cern.ch/~nengxu/proof/

Page 13: PROOF/Xrootd for a Tier3

The Data Files

• Benchmark files:
  - Big benchmark files (900 MB)
  - Medium benchmark files (400 MB)
  - Small benchmark files (100 MB)
• ATLAS format files:
  - EV0 files (50 MB)

The ROOT Version

URL: http://root.cern.ch/svn/root/branches/dev/proof
Repository UUID: 27541ba8-7e3a-0410-8455-c3a389f83636
Revision: 21025

Page 14: PROOF/Xrootd for a Tier3

The Data Processing Settings

• Benchmark files (provided by the PROOF team):
  - With ProcOpt.C (read 25% of the branches)
  - With Pro.C (read all the branches)
• ATLAS format files (H DPD):
  - With EV0.C
• Memory refresh: after each PROOF job, the Linux kernel keeps the data cached in physical memory. When we process the same data again, PROOF reads from memory instead of disk. In order to see the real disk I/O in the benchmark, we have to clean up the memory after each test (sketched below).
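A minimal sketch of such a "memory refresh" between runs, assuming root privileges and the standard Linux drop_caches interface (kernel 2.6.16+):

```python
# Hedged sketch: flush dirty pages and drop the Linux page cache so the
# next benchmark run reads from disk again. Requires root privileges.
import subprocess

def refresh_memory():
    subprocess.check_call(["sync"])                # flush dirty pages to disk
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")                             # drop page cache, dentries and inodes
```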

Page 15: PROOF/Xrootd for a Tier3

What Can We See from the Results?

• How many resources PROOF jobs need:
  - CPU
  - Memory
  - Disk I/O
• How PROOF jobs use those resources:
  - How do they use a multi-core system?
  - How much data do they load into memory?
  - How fast do they load it into memory?
  - Where do the data go after processing? (Cached memory)

Page 16: PROOF/Xrootd for a Tier3

[Plots: CPU usage (%), disk I/O (KB/s), memory usage (MB) and cached memory (%) vs. number of workers (1, 2, 4, 6, 8, 9, 10). Benchmark files, big size, read all the data. The jobs were running on a machine with an 8-core Intel 2.66 GHz CPU, 16 GB DDR2 memory and 8 disks on RAID 5.]

Page 17: PROOF/Xrootd for a Tier3

[Plots: CPU usage (%), disk I/O (KB/s), memory usage (MB) and cached memory (%) vs. number of workers (1-4). Benchmark files, big size, read all the data. The jobs were running on a machine with an 8-core Intel 2.66 GHz CPU, 16 GB DDR2 memory and a SINGLE DISK.]

Page 18: PROOF/Xrootd for a Tier3

[Plots: CPU usage (%), disk I/O (KB/s), memory usage (MB) and cached memory (%) vs. number of workers (1, 2, 4, 6, 8, 9, 10). Benchmark files, big size, read all the data. The jobs were running on a machine with an 8-core Intel 2.66 GHz CPU, 16 GB DDR2 memory and 8 disks on RAID 5, WITHOUT memory refresh.]

Page 19: PROOF/Xrootd for a Tier3

An Overview of the Performance Rate

• All the tests were run on the same machine using the default packetizer
• Using the Xrootd preload function seems to work well
• One should not start more than 2 workers on a single disk...

[Plot: average processing speed (events/sec) vs. number of workers]

Page 20: PROOF/Xrootd for a Tier3


Our Views on a Tier3 at GLOW

Putting PROOF into Perspective

Page 21: PROOF/Xrootd for a Tier3

Main Issues to Address
• Network traffic
• Avoiding empty CPU cycles
• Urgent need for CPU resources
• Bookkeeping, management and processing of large amounts of data

Core Technologies
• CONDOR: job management
• MySQL: bookkeeping and file management
• XROOTD: storage
• PROOF: data analysis

Page 22: PROOF/Xrootd for a Tier3

One Possible Way to Go...

• Computing pool: computing nodes with a small local disk.
• Storage pool: centralized storage servers (NFS, Xrootd, dCache, CASTOR). Heavy I/O load.
• Batch system: normally Condor, PBS, LSF, etc.
• The gatekeeper takes the production jobs from the Grid and submits them to the local pool.
• The users submit their own jobs to the local pool.
• Dedicated PROOF pool: CPU cores + big disks. The CPUs are idle most of the time.

Page 23: PROOF/Xrootd for a Tier3

The Way We Want to Go...

• Xrootd pool: CPU cores + big disks.
• Pure computing pool: CPU cores + a small local disk.
• Storage pool: very big disks. Less I/O load.
• The gatekeeper takes the production jobs from the Grid and submits them to the local pool.
• Local job submission: users' own jobs go to the whole pool.
• PROOF job submission: users' PROOF jobs go to the Xrootd pool.

Page 24: PROOF/Xrootd for a Tier3

A Multi-layer Condor System

• Production queue: no preemption, covers all the CPUs, maximum 3 days, no limit on the number of jobs. Suspension of ATHENA jobs is well tested; currently testing suspension of PanDA jobs.
• Local job queue: for private jobs; no limit on the number of jobs, no run-time limit, covers all the CPUs, higher priority.
• I/O queue: for I/O-intensive jobs; no limit on the number of jobs, no run-time limit, covers the CPUs in the Xrootd pool, higher priority.
• Fast queue: for high-priority private jobs; no limit on the number of jobs, with a run-time limit, covers all the CPUs, half with suspension and half without, highest priority.
• PROOF queue (Condor's COD?): for PROOF jobs; covers all the CPUs, no effect on the Condor queue, jobs get the CPU immediately.


Page 25: PROOF/Xrootd for a Tier3


PROOF + Condor’s COD Model

[Diagram: PROOF jobs and long production or local Condor jobs enter the Condor + Xrootd + PROOF pool; the Condor master handles COD requests, and the Xrootd redirector directs PROOF requests to the local storage on each machine.]

Use Condor’s Computing-on-Demand to free-up nodes (in ~2-3 sec) running long jobs with local Condor system

A lot of discussion with PROOF team and Miron about integration of PROOF and CONDOR scheduling. May not need COD in the end.
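As a rough illustration of the COD mechanism only, the sketch below uses the standard condor_cod command-line tool; the host name, task keyword and output parsing are assumptions, not the configuration used here:

```python
# Hedged sketch: claiming a busy node via Condor Computing-on-Demand (COD).
import subprocess

def run_cod_task(startd="slot1@node01.example.edu", keyword="proof_worker"):
    # Request a COD claim on the target startd. We assume the claim ID is the
    # last token of the command output; check your condor_cod version.
    out = subprocess.check_output(["condor_cod", "request", "-name", startd])
    claim_id = out.decode().split()[-1]
    # Activate a task pre-configured under <keyword> in the startd's config;
    # the batch job on the slot is suspended while the claim is active.
    subprocess.check_call(["condor_cod", "activate", "-id", claim_id,
                           "-keyword", keyword])
    return claim_id
```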

Page 26: PROOF/Xrootd for a Tier3

Xrootd File Tracking System Framework (to be integrated into the LRC DB)

[Diagram: each data server runs Local_xrd.sh and Xrootd_sync.py, which register files in a central database; the client writes data through the redirector to the data servers. The database records, per file: fileid, path, type, user, md5sum, fsize, time, dataserver, status.]
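A minimal sketch of what the tracking table could look like, assuming a MySQL backend and the column names shown in the diagram; the host, credentials and column types are guesses, since the actual schema is not given:

```python
# Hedged sketch: creating the file-tracking table with the MySQLdb driver.
import MySQLdb

conn = MySQLdb.connect(host="dbserver.example.edu", user="xrdtrack",
                       passwd="secret", db="xrootd_files")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS files (
        fileid     INT AUTO_INCREMENT PRIMARY KEY,
        path       VARCHAR(255) NOT NULL,
        type       VARCHAR(32),
        user       VARCHAR(64),
        md5sum     CHAR(32),
        fsize      BIGINT,
        time       DATETIME,
        dataserver VARCHAR(128),
        status     VARCHAR(16)
    )
""")
conn.commit()
```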

Page 27: PROOF/Xrootd for a Tier3


The I/O Queue

How it works (steps 0-5 in the diagram; a submission sketch follows below):
0. The tracking system provides file locations in the Xrootd pool.
1. The submitting node asks the MySQL database for the input file location.
2. The database provides the location of the file, and also the file's validation info.
3. The submitting node adds the location to the job requirements and submits to the Condor system.
4. Condor sends the job to the node where the input file is stored.
5. The node runs the job and puts the output file on the local disk.

[Diagram: submitting node, MySQL database server, Condor master and the Xrootd pool (CPU cores + big disks), connected by steps 0-5 above.]
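A hedged sketch of steps 1-3, assuming the tracking table above and hypothetical host and script names: look up which data server holds the input file, then pin the Condor job to that machine via its Requirements expression.

```python
# Hedged sketch of the I/O queue submission: query the tracking DB for the
# data server holding the input file, then build a Condor submit description
# that requires the job to run on that machine.
import MySQLdb

def submit_io_job(lfn):
    conn = MySQLdb.connect(host="dbserver.example.edu", user="xrdtrack",
                           passwd="secret", db="xrootd_files")
    cur = conn.cursor()
    cur.execute("SELECT dataserver, path FROM files"
                " WHERE path = %s AND status = 'valid'", (lfn,))
    dataserver, path = cur.fetchone()
    # Pin the job to the node that already holds the input file (step 3).
    submit = ('universe     = vanilla\n'
              'executable   = run_analysis.sh\n'
              'arguments    = %s\n'
              'requirements = (Machine == "%s")\n'
              'queue\n' % (path, dataserver))
    with open("job.sub", "w") as f:
        f.write(submit)
    # Then submit it: condor_submit job.sub
```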

Page 28: PROOF/Xrootd for a Tier3

I/O Queue Tests

• Direct access
  - Jobs go to the machines where the input files reside
  - Accesses the ESD files directly and converts them to CBNTAA files
  - Copies the output file to Xrootd on the same machine using xrdcp
  - Each file has 250 events
• xrdcp (see the sketch after this list)
  - Jobs go to any machine, not necessarily the ones which have the input files
  - Copies input and output files via xrdcp to/from the Xrootd pool
  - Converts the input ESD file to CBNTAA
• cp_nfs
  - Jobs go to any machine
  - Copies input and output files to/from NFS
  - Converts the input ESD file to CBNTAA
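For reference, the xrdcp mode amounts to something like the following; the redirector URL, file paths and conversion script are hypothetical stand-ins:

```python
# Hedged sketch of the xrdcp mode: stage the input in from the Xrootd pool,
# run the ESD -> CBNTAA conversion, stage the output back out.
import subprocess

REDIRECTOR = "root://redirector.example.edu/"

def stage_and_convert(esd_path, out_path):
    subprocess.check_call(["xrdcp", REDIRECTOR + esd_path, "input.ESD.root"])
    subprocess.check_call(["./convert_to_cbntaa.sh", "input.ESD.root",
                           "output.CBNTAA.root"])    # assumed conversion step
    subprocess.check_call(["xrdcp", "output.CBNTAA.root",
                           REDIRECTOR + out_path])
```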

Page 29: PROOF/Xrootd for a Tier3

I/O Queue Test Configuration

• Input file (ESD) size: ~700 MB
• Output file (CBNTAA) size: ~35 MB
• Each machine has ~10 ESD files
• 42 running nodes
• 168 CPU cores

Page 30: PROOF/Xrootd for a Tier3

I/O Queue Test Results

[Plot: results vs. number of jobs]

Time saved per job: ~230 sec

Page 31: PROOF/Xrootd for a Tier3

I/O Queue Test Results

[Plot: results vs. number of jobs]

Page 32: PROOF/Xrootd for a Tier3

Data Redistribution in Xrootd

• When and why do we need data redistribution?
• Case 1: one of the data servers dies and all the data on it are lost; we replace it with a new data server.
• Case 2: when we extend the Xrootd pool, we add new data servers to the pool. When new data come, all the new data will go to the new server because of the load-balancing function of Xrootd. The problem is that if we run PROOF jobs on the new data, all the PROOF jobs will read from this new server.

[Diagram: a new machine replaces the data server that is down.]

Page 33: PROOF/Xrootd for a Tier3

An Example of Xrootd File Distribution

[Plot: number of files vs. computer nodes in the Xrootd pool. All the files were copied through the Xrootd redirector; one machine was down, and one node happened to be filled with most of the files in one dataset.]

Page 34: PROOF/Xrootd for a Tier3

PROOF Performance on this Dataset

[Plot; the bottleneck is marked: "Here is the problem"]

Page 35: PROOF/Xrootd for a Tier3

After File Redistribution

[Plot: number of files vs. computer nodes in the Xrootd pool]

Page 36: PROOF/Xrootd for a Tier3

Before and After File Redistribution

[Plots: number of workers accessing files vs. running time, before and after the file redistribution]

Page 37: PROOF/Xrootd for a Tier3

PROOF Performance after Redistribution

[Plot]

Page 38: PROOF/Xrootd for a Tier3

The Implementation of DBFR

• We are working on a MySQL+Python based system
• We are trying to integrate this system into the LRC database
• Hopefully, this system can be implemented at the PROOF level, because PROOF already works with datasets

Page 39: PROOF/Xrootd for a Tier3

Summary

• Xrootd/PROOF is an attractive technology for ATLAS physics analysis, especially for the post-AOD phase
• The work of understanding this technology is in progress at BNL and Wisconsin
• Significant experience has been gained
  - Several ATLAS analysis scenarios were tested, with good results
  - Tested the machinery on HighPtView, CBNT, EV for Higgs
  - Integration with DDM was tested
  - Monitoring and farm management prototypes were tested
  - Scaling performance is under test
• We think PROOF is a viable technology for a Tier3
• Testing Condor's multi-layer system and COD, Xrootd file tracking and data redistribution, and the I/O queue
• Need to integrate the developed DB with the LRC
• Need to resolve the issue of multi-user utilization of PROOF

Page 40: PROOF/Xrootd for a Tier3


Additional Slides

Page 41: PROOF/Xrootd for a Tier3

The Basic Idea of DBFR

• Register the location of all the files in every dataset in the database (MySQL)
• With this information, we can easily get the file distribution of each dataset
• Calculate the average number of files each data server should hold
• Get a list of files which need to move out
• Get a list of machines which have fewer files than the average
• Match these two lists and move the files
• Register the new location of those files

(A sketch of this redistribution logic is given below.)
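A hedged sketch of the balancing logic for one dataset, assuming the tracking table from above; the actual schema, the moves themselves (e.g. via xrdcp) and the re-registration step are simplified away, and all names are hypothetical:

```python
# Hedged sketch of the DBFR plan: even out a dataset's file distribution.
import MySQLdb

def plan_moves(dataset_prefix):
    """Return (path, source_server, target_server) moves for one dataset."""
    conn = MySQLdb.connect(host="dbserver.example.edu", user="xrdtrack",
                           passwd="secret", db="xrootd_files")
    cur = conn.cursor()
    cur.execute("SELECT dataserver, path FROM files WHERE path LIKE %s",
                (dataset_prefix + "%",))
    per_server = {}
    for server, path in cur.fetchall():
        per_server.setdefault(server, []).append(path)
    counts = {s: len(f) for s, f in per_server.items()}
    avg = sum(counts.values()) // len(counts)
    # Files beyond the average on each overloaded server need to move out.
    surplus = [(s, p) for s, f in per_server.items() for p in f[avg:]]
    moves = []
    for src, path in surplus:
        dst = min(counts, key=counts.get)   # currently least-loaded server
        if counts[dst] >= avg:
            break                           # pool is already balanced
        moves.append((path, src, dst))
        counts[src] -= 1
        counts[dst] += 1
    return moves
```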

Page 42: PROOF/Xrootd for a Tier3

[Plots: CPU usage (%), disk I/O (KB/s), memory usage (MB) and cached memory (%) vs. number of workers (1, 2, 4, 5, 6, 7, 8, 9). Benchmark files, big size, read all the data. The jobs were running on a machine with an 8-core Intel 2.66 GHz CPU, 16 GB DDR2 memory and 8 disks on RAID 5, with Xrootd preload.]

Page 43: PROOF/Xrootd for a Tier3

Performance Rate

• Xrootd preloading doesn't change disk throughput much
• Xrootd preloading helps to increase the top speed by ~12.5%
• When we use Xrootd preload, disk I/O reaches ~60 MB/sec
• CPU usage reached 60%
• The best performance is achieved when the number of workers is less than the number of CPUs (6 workers give the best performance)

[Plot: average processing speed (events/sec) vs. number of workers]

Page 44: PROOF/Xrootd for a Tier3

[Plots: CPU usage (%), disk I/O (KB/s), memory usage (MB) and cached memory (%) vs. number of workers (1, 2, 4, 6, 7, 8, 9). Benchmark files, big size, read 25% of the data. The jobs were running on a machine with an 8-core Intel 2.66 GHz CPU, 16 GB DDR2 memory and 8 disks on RAID 5.]

Page 45: PROOF/Xrootd for a Tier3

Performance Rate

• Disk I/O reaches ~60 MB/sec, which is 5 MB/sec more than when reading all the data
• CPU usage reached 65%
• 8 workers give the best performance

[Plot: average processing speed (events/sec) vs. number of workers]