
1

ESnet Joint Techs, Feb. 2005

William E. Johnston, ESnet Dept. Head and Senior Scientist

R. P. Singh, Federal Project Manager

Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads

Gizella Kapus, Resource Manager

and the ESnet Team

Lawrence Berkeley National Laboratory

2

ESnet’s Mission

• Support the large-scale, collaborative science of DOE’s Office of Science

• Provide high-reliability networking to support the operational traffic of the DOE Labs
o Provide network services to other DOE facilities

• Provide leading-edge network and Grid services to support collaboration

• ESnet is a component of the Office of Science infrastructure critical to the success of its research programs (program funded through the Office of Advanced Scientific Computing Research / MICS; managed and operated by ESnet staff at LBNL)

ESnet Physical Network – mid 2005: High-Speed Interconnection of DOE Facilities and Major Science Collaborators

[Figure: map of the ESnet IP core (packet over SONET optical ring) and the ESnet Science Data Network (SDN) core, with IP core hubs (SEA, SNV, CHI, NYC, DC, ATL, ELP, ALB) and SDN core hubs, serving 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), other sponsored (NSF LIGO, NOAA). Peering points include MAE-E, PAIX-PA, Equinix, NY-NAP, Starlight, and the Chicago NAP, plus Abilene high-speed peering points. International links: CERN (DOE link), GEANT (Germany, France, Italy, UK, etc.), SInet (Japan), Japan–Russia (BINP), CA*net4, GLORIAD, Kreonet2, MREN, Netherlands, StarTap, Taiwan (ASCC, TANet2), Singaren, and Australia. Link legend: international (high speed); 10 Gb/s SDN core; 10 Gb/s IP core; 2.5 Gb/s IP core; MAN rings (> 10 Gb/s); OC12 ATM (622 Mb/s); OC12 / GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less.]

ESnet Logical Network: Peering and Routing Infrastructure

[Figure: ESnet peering points (connections to other networks) — hub locations (SEA, SNV, NYC, CHI, ATL) and exchange points (MAE-E, MAE-W, FIX-W, PAIX-W with 16 peers, EQX-ASH, EQX-SJ, NGIX, the Chicago NAP / Distributed 6TAP with 18 peers, MAX GPOP, PNW-GPOP, CENIC/SDSC, CalREN2), with peer counts per point ranging from 1 to 36. Peers fall into university, international, and commercial categories. R&E peerings include Abilene (plus 6 directly connected universities), CA*net4, CERN, France, GLORIAD, Kreonet2, MREN, Netherlands, StarTap, Taiwan (ASCC, TANet2), GEANT (Germany, France, Italy, UK, etc.), SInet (Japan)/KEK, Japan–Russia (BINP), Australia, Singaren, and LANL TECHnet.]

ESnet supports collaboration by providing full Internet access:
• manages the full complement of global Internet routes (about 150,000 IPv4 routes from 180 peers) at 40 general/commercial peering points
• maintains high-speed peerings with Abilene and the international R&E networks
This is a lot of work, and is very visible, but it provides full Internet access for DOE.

5

Drivers for the Evolution of ESnet

• The network and middleware requirements to support DOE science were developed by the Office of Science research community, representing the major DOE science disciplines:

o Climate simulation
o Spallation Neutron Source facility
o Macromolecular Crystallography
o High Energy Physics experiments
o Magnetic Fusion Energy Sciences
o Chemical Sciences
o Bioinformatics
o (Nuclear Physics)

Available at www.es.net/#research

• The network is essential for:
o long term (final stage) data analysis
o “control loop” data analysis (influencing an experiment in progress)
o distributed, multidisciplinary simulation

August 2002 workshop organized by the Office of Science. Mary Anne Scott, Chair; Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White.

Workshop panel chairs: Ray Bair, Deb Agarwal, Bill Johnston, Mike Wilde, Rick Stevens, Ian Foster, Dennis Gannon, Linda Winkler, Brian Tierney, Sandy Merola, and Charlie Catlett.

6

Evolving Quantitative Science Requirements for Networks

Science Area                 | Today (End2End)             | 5 Years (End2End)                | 5-10 Years (End2End)                | Remarks
-----------------------------|-----------------------------|----------------------------------|-------------------------------------|------------------------------------------
High Energy Physics          | 0.5 Gb/s                    | 100 Gb/s                         | 1000 Gb/s                           | high bulk throughput
Climate (Data & Computation) | 0.5 Gb/s                    | 160-200 Gb/s                     | N x 1000 Gb/s                       | high bulk throughput
SNS NanoScience              | not yet started             | 1 Gb/s                           | 1000 Gb/s + QoS for control channel | remote control and time-critical throughput
Fusion Energy                | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB / 20 s burst) | N x 1000 Gb/s                       | time-critical throughput
Astrophysics                 | 0.013 Gb/s (1 TBy/week)     | N*N multicast                    | 1000 Gb/s                           | computational steering and collaborations
Genomics Data & Computation  | 0.091 Gb/s (1 TBy/day)      | 100s of users                    | 1000 Gb/s + QoS for control channel | high throughput and steering
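As a sanity check, the sustained rates quoted in the table can be reproduced from the stated data volumes. The sketch below is illustrative (the helper name `sustained_gbps` is ours); it assumes 1 TBy = 10^12 bytes and a continuous transfer, which matches the table's figures to within rounding:

```python
# Convert bulk-data volumes into the average rate needed to move them,
# reproducing table entries such as "1 TBy/week -> 0.013 Gb/s".

def sustained_gbps(bytes_moved, seconds):
    """Average rate in Gb/s to move `bytes_moved` bytes in `seconds` seconds."""
    return bytes_moved * 8 / seconds / 1e9

WEEK = 7 * 24 * 3600
DAY = 24 * 3600

print(round(sustained_gbps(1e12, WEEK), 3))  # Astrophysics: 1 TBy/week -> 0.013
print(round(sustained_gbps(1e12, DAY), 3))   # Genomics: 1 TBy/day -> ~0.09
print(round(sustained_gbps(500e6, 20), 3))   # Fusion: 500 MB in a 20 s burst -> 0.2
```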

7

ESnet is Currently Transporting About 350 Terabytes/Month

[Chart: ESnet monthly accepted traffic (TBytes/month), Jan. 1990 – Dec. 2004. Annual growth over the past five years has been about 2.0x per year.]
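The observed 2.0x annual growth compounds quickly. The snippet below is an illustrative projection from the ~350 TB/month base, not an ESnet plan; the growth factor and horizon are assumptions for the sketch:

```python
# Project monthly traffic under a sustained ~2.0x annual growth rate,
# starting from the roughly 350 TB/month carried in early 2005.

def projected_tb_per_month(base_tb, years, annual_factor=2.0):
    """Traffic after `years` of compounding at `annual_factor` per year."""
    return base_tb * annual_factor ** years

for years in range(6):
    print(2005 + years, round(projected_tb_per_month(350, years)))
# -> 350, 700, 1400, 2800, 5600, 11200 TB/month for 2005-2010
```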

A Small Number of Science Users Account for a Significant Fraction of All ESnet Traffic

[Chart: top flows — ESnet host-to-host, 2 months, 30-day averaged (TBytes/month) — broken out as DOE Lab–international R&E, Lab–U.S. R&E, and Lab–Lab traffic, and as international vs. domestic flows.]

• Top 100 host-host flows = 99 TBy
• Total ESnet traffic (Dec. 2004) = 330 TBy

Note that this data does not include intra-Lab traffic. ESnet ends at the Lab border routers, so science traffic on the Lab LANs is invisible to ESnet.
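The concentration is easy to quantify from the December 2004 figures above — one line of arithmetic:

```python
# Share of total ESnet traffic carried by the top 100 host-host flows,
# using the Dec. 2004 numbers from the slide above.

top100_tby = 99   # TBy/month, top 100 flows
total_tby = 330   # TBy/month, total ESnet traffic

share = top100_tby / total_tby
print(f"{share:.0%}")  # -> 30%
```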

9

Top Flows – ESnet Host-to-Host, 2 Months, 30-Day Averaged

[Chart: monthly volume (TBytes/month) of the top host-host flows. The largest flows are high energy physics data movement: Fermilab (US) → WestGrid (CA), Fermilab (US) → IN2P3 (FR), SLAC (US) → INFN CNAF (IT), SLAC (US) → RAL (UK), SLAC (US) → IN2P3 (FR), BNL (US) → IN2P3 (FR), and FNAL → Karlsruhe (DE). Other significant flows include LIGO → Caltech, LLNL → NCAR, FNAL → MIT, FNAL → SDSC, FNAL → Johns Hopkins, ?? → LBNL, NERSC ↔ NASA Ames, LBNL → U. Wisc., several BNL ↔ LLNL flows, and several NERSC ↔ LBNL flows.]

10

ESnet Traffic

• Since BaBar (the SLAC high energy physics experiment) production started, the top 100 ESnet flows have consistently accounted for 30%-50% of ESnet’s total monthly traffic

• As LHC (the CERN high energy physics accelerator) data starts to move, this will increase dramatically (200-2000 times)
o Both U.S. LHC tier 1 centers (the primary U.S. experiment data centers) are at DOE Labs – Fermilab and Brookhaven

• U.S. tier 2 (experiment data analysis) centers will be at universities – when they start pulling data from the tier 1 centers, the traffic distribution will change substantially

11

Monitoring DOE Lab ↔ University Connectivity

[Figure: current monitor infrastructure and target infrastructure across ESnet and Abilene, aiming for a uniform distribution of monitors around both networks. Legend: DOE Labs with monitors, universities with monitors, network hubs, and high-speed cross connects (ESnet ↔ Internet2/Abilene). Hubs shown: SEA, SNV, LA, SDG, DEN, ELP, ALB, HOU, KC, CHI, IND, ATL, NYC, DC, with links to Japan, AsiaPac, Europe, and CERN, and ORNL marked on ESnet. Initial site monitors: SDSC, LBNL, FNAL, NCS, BNL, OSU.]

12

ESnet Evolution

• With the current architecture, ESnet cannot address:
o the increasing reliability requirements
- Labs and science experiments are insisting on network redundancy
o the long-term bandwidth needs
- LHC will need dedicated 10/20/30/40 Gb/s into and out of FNAL and BNL
- Specific planning drivers include HEP, climate, SNS, ITER, and SNAP, et al.

• The current core ring cannot handle the anticipated large science data flows at affordable cost

• The current point-to-point tail circuits are neither reliable nor scalable to the required bandwidth

[Figure: the current ESnet core ring — New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP) — with point-to-point tail circuits out to the DOE sites.]

13

ESnet Strategy – A New Architecture

• Goals derived from science needs:
o Fully redundant connectivity for every site
o High-speed access to the core for every site (at least 20 Gb/s)
o 100 Gb/s national bandwidth by 2008

• Three-part strategy:
1) Metropolitan Area Network (MAN) rings to provide dual site connectivity and much higher site-to-core bandwidth
2) A Science Data Network core for
- large, high-speed science data flows
- multiply connecting MAN rings for protection against hub failure
- a platform for provisioned, guaranteed-bandwidth circuits
- an alternate path for production IP traffic
3) A high-reliability IP core (e.g., the current ESnet core) to address Lab operational requirements

14

ESnet MAN Architecture

[Figure: a MAN ring connects each Lab’s site gateway router and site equipment back to both the ESnet production IP core and the ESnet SDN core over 2-4 x 10 Gbps channels. Core routers (T320) and switches managing multiple lambdas sit at the ring endpoints. Each site receives two services: ESnet production IP service, and ESnet-managed λ / circuit services tunneled through the IP backbone. The ring also carries R&E and international peerings, and is instrumented with monitors tied into ESnet management and monitoring.]

New ESnet Strategy: Science Data Network + IP Core + MANs

[Figure: the ESnet IP core (existing hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), El Paso (ELP), and Atlanta (ATL)) plus the ESnet Science Data Network (2nd core) with its SDN hubs, possible new hubs at Seattle (SEA) and Albuquerque (ALB), metropolitan area rings connecting the primary DOE Labs, core loops joining the two cores, and links to GEANT (Europe), Asia-Pacific, and CERN.]

16

Tactics for Meeting Science Requirements – 2007/2008

[Figure: by 2007/2008 the production IP ESnet core (>10 Gbps?) is complemented by the ESnet Science Data Network (2nd core, 30-50 Gbps on National Lambda Rail) and metropolitan area rings at the major DOE Office of Science sites, with high-speed cross connects to Internet2/Abilene at hubs SEA, SNV, SDG, ALB, ELP, DEN, CHI, ATL, DC, and NYC. Link classes: production IP ESnet core, high-impact science core, lab-supplied, and major international links (Australia, CERN, Europe, Japan, AsiaPac) at 2.5 Gb/s, 10 Gb/s, 30 Gb/s, and 40 Gb/s; future phases indicated.]

• 10 Gbps enterprise IP traffic
• 40-60 Gbps circuit-based transport

17

ESnet Services Supporting Science Collaboration

• In addition to high-bandwidth network connectivity for the DOE Labs, ESnet provides several other services critical for collaboration

• That is, ESnet provides several “science services” – services that support the practice of science:
o Access to collaborators (“peering”)
o Federated trust – identity authentication
– PKI certificates
– crypto tokens
o Human collaboration – video, audio, and data conferencing

18

DOEGrids CA Usage Statistics

[Chart: cumulative number of certificates or requests (0 to 5250) since the production service began in June 2003 — user certificates, service certificates, expired (+revoked) certificates, total certificates issued, and total certificate requests.]

User certificates: 1386          Total no. of certificates: 3569
Service certificates: 2168       Total no. of requests: 4776
Host/other certificates: 15      Internal PKI SSL server certificates: 36

* Report as of Jan. 11, 2005. FusionGRID CA certificates are not included here.
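The reported counts are internally consistent — user, service, and host/other certificates sum to the total issued, which this small cross-check confirms:

```python
# Cross-check the DOEGrids CA counts reported above:
# user + service + host/other certificates should equal the total issued.

user, service, host_other = 1386, 2168, 15
total_issued = 3569

assert user + service + host_other == total_issued
print("counts consistent:", user + service + host_other)  # -> 3569
```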

19

DOEGrids CA Usage – Virtual Organization Breakdown (Total Certs 3569)

*Others: 38.9%
iVDGL: 17.9%
PPDG: 13.4%
FNAL: 8.6%
FusionGRID: 7.4%
ANL: 4.3%
NERSC: 4.0%
LBNL: 1.8%
ESG: 1.0%
ORNL: 0.7%
PNNL: 0.6%
ESnet: 0.6%
DOESG: 0.5%
LCG: 0.3%
NCC-EPA: 0.1%

(* = DOE-NSF collaboration)
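The percentages above sum to roughly 100% (the small excess is rounding in the slide), and each can be mapped back to an approximate certificate count out of the 3569 total. An illustrative check:

```python
# Virtual Organization breakdown from the slide above, as percentages
# of the 3569 total DOEGrids certificates.

vo_pct = {
    "Others (DOE-NSF collab.)": 38.9, "iVDGL": 17.9, "PPDG": 13.4,
    "FNAL": 8.6, "FusionGRID": 7.4, "ANL": 4.3, "NERSC": 4.0,
    "LBNL": 1.8, "ESG": 1.0, "ORNL": 0.7, "PNNL": 0.6, "ESnet": 0.6,
    "DOESG": 0.5, "LCG": 0.3, "NCC-EPA": 0.1,
}

total_pct = sum(vo_pct.values())
print(round(total_pct, 1))                 # ~100.1 due to per-VO rounding
print(round(3569 * vo_pct["FNAL"] / 100))  # FNAL: ~307 certificates
```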

20

ESnet Collaboration Services: Production Services

• Web-based registration and audio/data bridge scheduling

• Ad-Hoc H.323 and H.320 videoconferencing

• Streaming on the Codian MCU using Quicktime or REAL

• “Guest” access to the Codian MCU via the worldwide Global Dialing System (GDS)

• Over 1000 registered users worldwide

[Figure: production collaboration infrastructure — RADVISION ECS-500 gatekeepers (.3.86 and .3.175, on Dell servers), RADVISION ViaIP MCU (.3.171), RADVISION gateway (.4.185), Codian MCU (.3.172), Latitude M3 audio bridge (.3.167), and Latitude web server (.3.166, on Dell). H.323 audio and data traffic enters via an ESnet router; ISDN connectivity is provided via Eastern Research over 1 PRI and 6 T1s.]

21

ESnet Collaboration Services: H.323 Video Conferencing

• Radvision and Codian MCUs:
o 70 ports available on Radvision at 384 kbps
o 40 ports on Codian at 2 Mbps, plus streaming
o Usage has leveled off, but an increase is expected in early 2005 (new groups joining ESnet Collaboration)
o Radvision capacity to increase to 200 ports at 384 kbps by mid-2005

[Chart: H.323 MCU port hours per month, Sep. 2004 – Jan. 2005, on a scale of 0 to 4500 hours.]

22

Conclusions

• ESnet is an infrastructure that is critical to DOE’s science mission and that serves all of DOE

• ESnet is working to meet the networking requirements of DOE mission science through several new initiatives and a new architecture

• ESnet today is very different from the past, in its planning, its business approach, and its goals