Internet2: Technology Innovation and Distributed Infrastructure Guy Almes Internet2 Project <[email protected]> NANOG Meetings Denver — February 1, 1999


Internet2 Working Groups: Presentation to JET, 21 April 1998

Overview

• Universities, engineering, and applications
• Technical innovation
• Distributed infrastructure
• The challenge before us

Universities, by their nature,
• mix teaching and research
• collaborate with scholars at other universities

Thus, advanced applications for
• conferencing
• remote instrument access
• digital libraries

What networks will these need?

Applications and engineering

[Diagram: applications motivate engineering; engineering enables applications]

What makes this hard?

Combination of:
• high bandwidth
• wide area
• intrinsically bursty applications

Need for multicast
Need for quality of service
Need for measurements

Internet2 History / Status

Initiated 1-Oct-96 by 34 research universities
(NGI Program announced one week later)

UCAID incorporated Oct-97
Board of Directors drawn from university presidents
Staff mainly in three locations
Compact, growing set of international partners

History/Status, continued

We now have about 140 universities

A few dozen corporate members also make key contributions

Key goal: create and support advanced applications

Key infrastructure tactic: campus, gigapop, backbone structure

Working Group Progress

• IPv6
• Measurement
• Multicast
• Network Management
• Network Storage
• Quality of Service
• Routing
• Security
• Topology

Technical Innovation: Measurement

Chairs: David Wasley, Univ. of California, and Matt Zekauskas, Internet2 staff

Focus:
• Places to measure: at campuses, at gigaPoPs, within interconnect(s)
• Things to measure: traffic utilization; performance (delay and packet loss); traffic characterization

[Diagram: measurement points spanning Backbone ‘A’ and Backbone ‘B’]

Active Measurements of Performance

IETF IPPM WG defining one-way delay
Take all delay to be due to:
• propagation
• transmission
• queuing

Variation in delay suggests congestion
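As a concrete illustration of this delay model, here is a minimal Python sketch (not Surveyor or any IPPM implementation; the sample values, function name, and units are invented): the minimum observed delay approximates the fixed propagation-plus-transmission floor, and the excess above it approximates queuing delay, whose variation hints at congestion.

```python
# Sketch of the slide's model: propagation + transmission set a fixed floor,
# queuing adds a variable component, so variation above the minimum
# suggests congestion. Sample values are hypothetical one-way delays in ms.

def delay_components(delays_ms):
    """Split measured one-way delays into a fixed floor and queuing excess."""
    floor = min(delays_ms)                    # ~ propagation + transmission
    queuing = [d - floor for d in delays_ms]  # variable part ~ queuing delay
    return floor, queuing

samples = [20.1, 20.3, 24.8, 20.2, 31.5, 20.1]
floor, queuing = delay_components(samples)
print(floor)                     # 20.1
print(round(max(queuing), 1))    # 11.4 -> spikes hint at congestion
```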

Passive Measurements of Traffic Characterization

OC3MON and OC12MON
• developed by MCI vBNS engineering with the NLANR group at UCSD
• passive taps into fiber links
• extract IP packet headers
• gradually improving maturity

Help understand the nature of Internet use
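The kind of traffic characterization these passive monitors enable can be sketched as follows; this is a hypothetical Python tally over invented header records, not OC3MON code or its actual data format:

```python
# Sketch of OC3MON-style traffic characterization: tally packets and bytes
# per protocol from captured IP headers. The header records are invented.
from collections import Counter

def characterize(headers):
    """headers: iterable of (protocol, packet_length) from a passive tap."""
    pkts, byts = Counter(), Counter()
    for proto, length in headers:
        pkts[proto] += 1     # packet count per protocol
        byts[proto] += length  # byte count per protocol
    return pkts, byts

captured = [("TCP", 1500), ("TCP", 40), ("UDP", 512), ("TCP", 1500)]
pkts, byts = characterize(captured)
print(pkts["TCP"], byts["UDP"])   # 3 512
```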

Technical Innovation: Multicast

Chair: Kevin Almeroth, Univ. of California at Santa Barbara

Focus: make native IP multicast scalable and operationally effective
• must be coordinated across backbones, gigaPoPs, and campuses
• must be coordinated with unicast routing

1999: A Key Year for Multicast

In the past, multicast has meant the ‘MBone’
• core set of committed users and engineers
• ‘legacy’ non-scalable approaches to routing

Our hope:
• PIM-Sparse Mode
• MBGP, MSDP, etc.
• enable scalable use of high-speed multicast flows throughout the Internet2 structure
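One concrete point of coordination between multicast and unicast routing is the reverse-path forwarding (RPF) check used by PIM: a router accepts a multicast packet only if it arrived on the interface its unicast table would use to reach the source. A minimal sketch, with a made-up routing table and interface names:

```python
# Sketch of the RPF check that ties multicast forwarding to unicast routing.
# The table contents and interface names are hypothetical.

unicast_table = {          # source prefix -> interface toward that source
    "198.32.8.0/24": "pos0/0",
    "192.12.4.0/24": "pos1/0",
}

def rpf_check(source_prefix, arrival_if):
    """Accept the packet only if it arrived on the RPF interface."""
    return unicast_table.get(source_prefix) == arrival_if

print(rpf_check("198.32.8.0/24", "pos0/0"))  # True  -> forward downstream
print(rpf_check("198.32.8.0/24", "pos1/0"))  # False -> drop (possible loop)
```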

Technical Innovation: Quality of Service

Chair: Ben Teitelbaum, Internet2 staff

Focus: multi-network IP-based QoS
• relevant to advanced applications
• interoperability: carriers and kit
• architecture
• QBone distributed testbed

Big Problem #1: Understanding Application Requirements

Range of poorly understood needs:
• both intolerant and tolerant apps important
• many apps need absolute, per-flow QoS assurances
• adaptive apps may require a minimum level of QoS, but can exploit additional network resources if available

Big Problem #2: Scalability

# flows through core >> # flows through edge

Goal: keep per-flow state out of the core

Design principles:
• put “smarts” in edge routers
• allow core routers to be fast and dumb

Big Problem #3: Interoperability

[Diagram: end-to-end path: Campus Networks, GigaPoPs, Backbone Networks (vBNS, Abilene, …), GigaPoPs, Campus Networks]

Interoperability, between separately administered and designed clouds and between multiple implementations of network elements, is crucial if we are to provide end-to-end QoS.

DiffServ Architecture

[Diagram: DiffServ architecture from source to destination]
• Leaf router: police, mark flows
• Ingress edge router: classify, police, mark aggregates
• Core routers
• Egress edge router: shape aggregates
• Bandwidth Brokers (BB): perform admissions control, manage network resources, configure leaf and edge devices
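The division of labor in this architecture, per-flow marking at the leaf versus aggregate classification at the ingress edge, can be sketched as follows; the flow tuple, packet representation, and use of codepoint 46 (an EF-style "forward me first" value) are illustrative assumptions, not QBone specifics:

```python
# Sketch of the marking/classification split: the leaf router makes per-flow
# decisions and stamps a DSCP codepoint; the ingress edge router then works
# on aggregates by codepoint alone, so per-flow state stays at the edge.
# Flow tuples and the EF codepoint (46) are illustrative.

EF = 46  # "forward me first"-style codepoint for the premium aggregate

premium_flows = {("10.0.0.5", "10.1.0.9", 5004)}  # (src, dst, dport)

def leaf_mark(pkt):
    """Leaf router: per-flow decision, stamp the DSCP."""
    flow = (pkt["src"], pkt["dst"], pkt["dport"])
    pkt["dscp"] = EF if flow in premium_flows else 0
    return pkt

def ingress_classify(pkt):
    """Ingress edge router: aggregate decision, DSCP only."""
    return "premium" if pkt["dscp"] == EF else "best-effort"

pkt = leaf_mark({"src": "10.0.0.5", "dst": "10.1.0.9", "dport": 5004})
print(ingress_classify(pkt))   # premium
```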

Premium Service

Emulates a leased line
Contract: peak-rate profile
PHB = “forward me first” (e.g. priority queuing, WFQ)
Policing rule = drop out-of-profile packets
On egress, clouds need to shape Premium aggregates to mask induced burstiness
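A peak-rate profile with a drop-out-of-profile policing rule is commonly enforced with a token bucket; here is a minimal sketch (the rate, burst size, and timing values are illustrative, not QBone parameters):

```python
# Sketch of the "drop out-of-profile packets" policing rule: a token bucket
# enforcing a peak-rate profile at the leaf or ingress edge. Rates and
# packet sizes are illustrative.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # token fill rate in bytes/sec
        self.burst = burst_bytes     # bucket depth (max burst)
        self.tokens = burst_bytes    # start with a full bucket
        self.last = 0.0

    def police(self, now, pkt_bytes):
        """Return True if the packet is in profile, False to drop it."""
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

tb = TokenBucket(rate_bps=1_000_000, burst_bytes=1500)  # 1 Mb/s peak
print(tb.police(0.0, 1500))   # True  -- within the burst
print(tb.police(0.0, 1500))   # False -- out of profile, drop
print(tb.police(0.1, 1500))   # True  -- 0.1 s refills the bucket
```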

Internet2 “QBone”

A “meta-testbed” for absolute diff-serv services
Many Internet2 clouds already keenly interested in experimenting with diff-serv

Objectives:
• fostering interoperability among participant clouds
• encouraging collective problem solving
• creating opportunities for inter-disciplinary dialogue
• growing a snowball of participating clouds

• technical diversity
• topological diversity
• contiguity

Summary

Internet2’s WGs focus on the project’s needs
They complement IETF WGs
Membership is by invitation of the chair

Distributed Infrastructure

Campuses:• scalable 10/100 Mb/s• multicast

GigaPoPs:• scalable access to wide-area resources

Backbones:• vBNS• Abilene

Recent Progress and Challenges

Early gigaPoPs getting stronger
Recent major advances:
• CalREN2
• Great Plains Network
• Northern Crossroads

JET Collaboration

Joint Engineering Team:
• federal NGI agencies
• Internet2

NGIX effort:
• exchange points appropriate for Internet2 / NGI / similar non-US networks

Ideal: connect universities and labs with advanced performance/functionality

Abilene: Design and Status

Guy Almes, Internet2 Project
<[email protected]>
NANOG Meetings, Denver — February 1, 1999

Abilene and Internet2

Internet2 as infrastructure:
• 140+ campus LANs
• about 35 gigaPoPs
• a few interconnect backbones

Abilene is the 2nd backbone:
• OC-48 trunks from Qwest
• Cisco 12008 routers with IP/Sonet
• OC-3 and OC-12 access to gigaPoPs

Abilene Core at 29-Jan-99

[Map: router nodes at Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, New York]

Abilene Architecture

Core Architecture
Access Architecture
Network Operations Center
• at Indiana University

Schedule:
• 14-Apr-98: announced
• Sep-98: demonstrated
• 29-Jan-99: operational

Abilene Architecture: Core

Router Nodes located at Qwest PoPs:
• Cisco 12008 GSR
• ICS Unix PC: IPPM and network management
• Cisco 3640 remote access for NOC
• 100BaseT LAN and ‘console port’ access
• remote 48V DC power controllers

Initially, ten Router Nodes

Abilene: by end of February 1999

[Map: Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, New York]

Abilene Architecture: Access

Access Nodes:
• located at Qwest PoPs
• Sonet: connects local to long-distance

Initially, about 120 Access Nodes:
• this list grows as the Qwest Sonet plant grows

Abilene, with Some Access Nodes

[Map: Router Nodes plus Access Nodes, including Seattle, Eugene, Sacramento, Oakland, Los Angeles, Anaheim, Salt Lake City, Phoenix, Denver, Albuquerque, Lincoln, Kansas City, Oklahoma City, Dallas, Houston, New Orleans, Minneapolis, Chicago, Indianapolis, Detroit, Columbus, Cleveland, Nashville, Atlanta, Miami, Pittsburgh, Raleigh, Washington, Wilmington, Philadelphia, Trenton, Newark, New York, New Haven, Westfield, Boston]

Abilene NOC

Located at Indiana University
Excellent operations and engineering skills
Commitment evidenced in the Abilene rollout

Schedule

• Design work: Mar-98 and ongoing
• Rack design: May-98 to Jul-98
• Initial assembly / testing: Jul-98 to Aug-98
• Router Nodes / interior lines: Jul-98
• Demo network installed: Sep-98
• Production began: 29-Jan-99
• Completion of OC-48 core: mid-1999
• Continuing improvement: ongoing

Jun-99: Core Architecture

[Map: Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, New York]

Sep-99: Core Architecture

[Map: Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, New York, Washington]

Outline of Engineering Issues

Routing:
• OSPF, BGP4, Routing Arbiter Database

Multicast:
• PIM-Sparse Mode, MBGP, MSDP

Measurements:
• Surveyor: one-way delay and loss
• traffic utilization
• end-to-end flows with gigaPoP help
• OC3MON: passive measurements

Broader Internet2, NGI, and International Advanced Networking

• initial NGIX sites
• possible CA*net3 peering sites
• StarTap