Open Science Grid
D. Petravick, FNAL
I2 Spring Meeting/HEP SIG
May 3, 2005
Open Science Grid Consortium
• The Open Science Grid (OSG) Consortium was formed in 2004 by teams from U.S. universities and national laboratories in order to build and support a production-quality peta-scale Grid infrastructure for large-scale science.
• Partners with EGEE, LCG, and TeraGrid.
• Non-U.S. institutions may be partners.
– The case under consideration is when an application brings in non-U.S. sites.
CMS and U.S. OSG Participants
OSG Organizational Structure
Two Important Technical Groups
• Storage (active for a year)
• Network (just forming)
• Buffering storage and networking are complementary activities.
Storage Technical Group
• The Storage Technical Group is led by Paul Sheldon (Vanderbilt) and Robert Kennedy (Fermilab).
• The main activity of the group is to deploy managed storage elements on the OSG (a transfer sketch follows this list).
– Independently implemented, interoperating SEs are being deployed.
• LBL DRM
• Fermilab/DESY dCache
• Interest from the IBP people (Tennessee)
• Possible interest from the xrootd people (SLAC)
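A minimal sketch of moving a file into such a storage element through the common SRM interface, assuming the era's srmcp client (from the Fermilab SRM distribution) is on the PATH; the host, port, and paths below are hypothetical placeholders, not real OSG endpoints:

    import subprocess

    # Hypothetical local file and SRM endpoint -- substitute a real
    # OSG storage element door before running.
    source = "file:////data/run123/raw.dat"
    dest = "srm://se.example.edu:8443/pnfs/example.edu/cms/run123/raw.dat"

    # srmcp negotiates a transfer protocol (typically GridFTP) with the
    # SRM door and then moves the file.
    result = subprocess.run(["srmcp", source, dest],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("SRM copy failed: " + result.stderr)

The point of the interoperating SEs above is that the same SRM-level command should work whether the site runs DRM or dCache behind the door.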
OSG Storage element
March Tape Ingest Demonstration
Current Production
Current (POSIX side) reads
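For context, a minimal sketch of what such a POSIX-side read from dCache looks like, assuming its dccp client is installed; the dcap door host and pnfs path are hypothetical:

    import subprocess

    # Hypothetical dcap door and pnfs path; dccp copies a file out of
    # dCache through its POSIX-style dcap protocol.
    source = "dcap://door.example.edu:22125/pnfs/example.edu/data/run123/raw.dat"
    local = "/tmp/raw.dat"

    result = subprocess.run(["dccp", source, local],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("dcap read failed: " + result.stderr)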
CMS plans
• SE at US Tier 1 (FNAL)
• SE at US Tier 2s (Caltech, Florida, MIT, Nebraska, Purdue, UCSD, Wisconsin)
• Partial deployment and use in the recently concluded Service Challenge.
• Storage embedded in the OSG.
Network Technical Group
• The Network Technical Group is led by Shawn McKee (Michigan) and Donald Petravick (Fermilab).
• We are beginning to organize.
– Have seeded a monitoring activity.
– Want to consider the needs of the HEP experiments exploiting the OSG.
Evident Areas of Interest
• Understand the impact of our application on the existing network infrastructures.
– Large data transfers for LHC experiments within the US are just being organized.
• HEP desires an “overlay network” composed from all available resources.
MonALISA Monitoring
• For the OSG-ITB
– Integration Test Bed
• http://gocmon.uits.iupui.edu:8888/
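A minimal sketch of polling that page from a script, assuming the gocmon endpoint above still answers over plain HTTP:

    import urllib.request

    # MonALISA web front end for the OSG Integration Test Bed
    # (URL taken from the slide above).
    url = "http://gocmon.uits.iupui.edu:8888/"

    with urllib.request.urlopen(url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    print("Fetched %d bytes from the ITB monitor" % len(page))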
HEP Overlay network
• ESnet and I2.
• UltraLight.
• US university initiatives.
• LHCNet.
• T[12] <-> T[12] worldwide
UltraLight Network: PHASE 2
• Move into production (2007)
• Optical switching fully enabled amongst primary sites
• Integrated international infrastructure
FNAL Inbound traffic
Summary
• The OSG is a significant computational resource.
• HENP stakeholders are counting on the OSG to provide a grid framework in the US for the LHC.
• Many contributions beyond the DOE labs and T2 centers.
• OSG is in a position to exploit resources opportunistically, including high-impact networks.
Grid Computing and National Infrastructure
• In HEP, tape is used in data movement, not only data archiving.
• Data movement is at national scales.
• D0 is one current HEP system.