The CorridorOne Project Argonne Berkeley Illinois Los Alamos Princeton Utah
Corridor One: An Integrated Distance Visualization Environment for SSI and ASCI Applications
Startup Thoughts and Plans
Rick Stevens, Argonne/Chicago
Participants:
Argonne, Berkeley, Illinois, Los Alamos, Princeton, Utah
CorridorOne: An Overview
• The Team
• Our Goals
• Applications Targets
• Visualization Technologies
• Middleware Technology
• Our Testbed
• Campaigns
• Timetable and First Year Milestones
The Team
• Rick Stevens, Argonne National Laboratory, [email protected]
• Maxine Brown, University of Illinois, [email protected]
• Tom DeFanti, University of Illinois, [email protected]
• Adam Finkelstein, Princeton University, [email protected]
• Thomas Funkhouser, Princeton University, [email protected]
• Chuck Hansen, University of Utah, [email protected]
• Andy Johnson, University of Illinois, [email protected]
• Chris Johnson, University of Utah, [email protected]
• Jason Leigh, University of Illinois, [email protected]
• Kai Li, Princeton University, [email protected]
• Dan Sandin, University of Illinois, [email protected]
• Jim Ahrens, Los Alamos National Laboratory, [email protected]
• Deb Agarwal, Lawrence Berkeley Laboratory, [email protected]
• Terrence Disz, Argonne National Laboratory, [email protected]
• Ian Foster, Argonne National Laboratory, [email protected]
• Nancy Johnston, Lawrence Berkeley Laboratory, [email protected]
• Stephen Lau, Lawrence Berkeley Laboratory, [email protected]
• Bob Lucas, Lawrence Berkeley Laboratory, [email protected]
• Mike Papka, Argonne National Laboratory, [email protected]
• John Reynders, Los Alamos National Laboratory, [email protected]
• Bill Tang, Princeton Plasma Physics Laboratory, [email protected]
Our Goals
• Grid Middleware and Advanced Networking
• Distributed Visualization and Data Manipulation Techniques
• Distributed Collaboration and Display Technologies
• Systems Architecture, Software Frameworks and Tool Integration
• Application Liaison, Experimental Design and Evaluation
Distributed Data and Visualization Corridor
Possible WAN Interconnection Points
Applications Targets
• ASCI and SSI Applications Drivers
– Climate Modeling (LANL)
– Combustion Simulation (LBNL and ANL)
– Plasma Science (Princeton)
– Neutron Transport Code (LANL)
– Center for Astrophysical Flashes (ANL)
– Center for Accidental Fires and Explosions (Utah)
– Accelerator Modeling (LANL)
Climate Modeling: Massive data sizes and time series
• POP Ocean model – 3000 x 4000 x 100 cells per timestep, 1000's of timesteps
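At that grid size, even a single field variable per timestep is several gigabytes. A quick back-of-envelope sketch in Python; the 4-byte cell size and the variable counts are illustrative assumptions, not from the slides:

```python
# Rough data-volume arithmetic for the POP ocean model grid
# (3000 x 4000 x 100 cells per timestep). The single-precision
# 4-byte cell size is an assumption for illustration.

CELLS_PER_TIMESTEP = 3000 * 4000 * 100
BYTES_PER_CELL = 4  # single-precision float (assumed)

def timestep_bytes(variables=1):
    """Raw size of one timestep for the given number of field variables."""
    return CELLS_PER_TIMESTEP * BYTES_PER_CELL * variables

print(f"one variable, one timestep: {timestep_bytes() / 1e9:.1f} GB")
print(f"one variable, 1000 timesteps: {timestep_bytes() * 1000 / 1e12:.1f} TB")
```

At roughly 4.8 GB per variable per timestep, a thousand-timestep run is terabytes per field, which is why the corridor emphasizes bulk data transfer and progressive techniques.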
Combustion Modeling: Adaptive Mesh Refinement
• Data is irregular, not given on a simple lattice
• Data is inherently hierarchical
PROBLEM DESCRIPTION: Particle-in-Cell Simulation of Plasma Turbulence (PPPL)
• Key issue for fusion is confinement of high-temperature plasmas by magnetic fields in 3D geometry (e.g., a donut-shaped torus)
• Pressure gradients drive instabilities, producing loss of confinement due to turbulent transport
• Plasma turbulence is a nonlinear, chaotic, 5-D problem
• Particle-in-cell simulation
– distribution function solved by the method of characteristics
– perturbed field solved by a Poisson equation
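As a toy illustration of the field-solve step (not the PPPL gyrokinetic code), here is a 1-D particle-in-cell-style sketch with numpy: deposit particle charge on a periodic grid, then solve the Poisson equation spectrally. Normalized units, nearest-grid-point deposition, and the grid size are all simplifying assumptions.

```python
import numpy as np

# 1-D sketch of the field solve in a particle-in-cell step:
# deposit particle charge on a periodic grid, then solve the
# Poisson equation  d2phi/dx2 = -rho  with an FFT.

def deposit(positions, n_cells, length=1.0):
    """Nearest-grid-point charge deposition onto a periodic grid."""
    idx = np.floor(positions / length * n_cells).astype(int) % n_cells
    rho = np.bincount(idx, minlength=n_cells).astype(float)
    rho -= rho.mean()  # neutralizing background so the mean charge is zero
    return rho

def solve_poisson(rho, length=1.0):
    """Spectral solve of d2phi/dx2 = -rho with periodic boundaries."""
    n = len(rho)
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    nonzero = k != 0
    phi_k[nonzero] = rho_k[nonzero] / k[nonzero] ** 2  # -k^2 phi_k = -rho_k
    return np.fft.ifft(phi_k).real

rho = deposit(np.random.default_rng(0).random(10_000), n_cells=64)
phi = solve_poisson(rho)
```

The real 5-D gyrokinetic problem also pushes particles along characteristics each step; this sketch shows only the grid/field half of that loop.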
GYROKINETIC TURBULENCE SIMULATIONS ON NEW MPPs (Science 281, 1835 (1998))
Turbulence reduction via sheared plasma flow, compared to the case with flow suppressed. Results obtained using the full MPP capabilities of the CRAY T3E supercomputer at NERSC.
[Figure: side-by-side turbulence snapshots, without flow vs. with flow]
MC++ : Monte Carlo Neutronics
• Neutronics simulation of multi-material shell
• Runs on all ASCI platforms
• Arbitrary number of particles
What Is The FLASH Problem?
• To simulate matter accumulation on the surface of compact stars, nuclear ignition of the accumulated (and possibly stellar) material, and the subsequent evolution of the star's interior, surface, and exterior
– X-ray bursts (on neutron star surfaces)
– Novae (on white dwarf surfaces)
– Type Ia supernovae (in white dwarf interiors)
Neutron star surface X-ray Burst
• Paramesh Data Structures
• Iris Explorer
• Isosurfaces
• Volume Visualization
• Animations of 100's of timesteps
• Resolution moving to billion-zone computations
Center for Accidental Fires and Explosions
Uintah Simulation Runs
[Diagram: simulation runs, driven by software versions, configuration parameters, and computing resources, produce datasets and visualizations; hypotheses, assumptions, and interpretations yield insight into fire spread, container dynamics, and HE materials]
C-SAFE Uintah PSE
Distributed/Parallel Uintah PSE
[Diagram: main Uintah PSE window on the local machine; computation on remote resources, viewed locally]
Accelerator Simulations
• Accelerator model – 300 million to 2 billion particles per timestep, 1000's of timesteps
• Phase space
• Electromagnetic fields
Distributed Visualization Technologies
• Remote and Distributed Rendering
• Protocols for Remote Visualization
• Progressive Refinement
• Deep Images and Image Based Rendering
• Compression for Visualization Streams
• Remote Immersive Visualization
• Data Organization for Fast Remote Navigation
• High-end Collaborative Visualization Environments
• Collaborative Dataset Exploration and Analysis
• User Interfaces and Computational Steering
• Distributed Network Attached Framebuffers
• Integration with Existing Tools
CorridorOne Architecture
[Diagram: the end-to-end CorridorOne pipeline, linked by SAN/LAN/WAN-based Bulk Data Transfer Services (BDTS) and Streaming Data Services (SDS), adding multicast, stream compression, and QoS toward the display end, with a control channel spanning network, services, servers, and performance:
– Data servers: mass storage, instruments, supercomputers
– Data analysis and manipulation servers: transposers, interpolation, sampling, feature detection (manipulation, feature detection, and sampling engines)
– "Distance" visualization servers: parallel + hardware-accelerated volume, image-based rendering, glyph, and surface visualization engines, paired with clients for distance use
– "Distance" visualization clients: remote volume, image-based rendering, glyph, and progressive refinement clients, interfaced with multiple display environments, with collaborative capabilities
– Display devices and user environments: large-format tiled displays, Workbench/ImmersaDesk, CAVEs, desktops (large format, collaborative, immersive)
– NGI network services connecting the tiers]
• Data Servers
• Analysis and Manipulation Engines
• Visualization Backend Servers
• Visualization Clients
• Display Device Interfaces
• Advanced Networking Services
Protocols for Remote and Distributed Visualization
Database retrieval → Geometry processing → Rasterization → Display
(high-level primitives → 3-D primitives → 2-D primitives → pixels)
[Diagram: the vtk pipeline Read → Isosurface → Decimation → Append → Renderer, split across the network at different stages]
• Distributed Scientific Visualization
• Passing data via messaging
– Serialization of vtk data structures, using C++ streams
• structured points, grids, unstructured grids, graphics
• Passing control via messaging
– Update protocol
Model-Based Remote Graphics
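A minimal sketch of the data-passing idea, in Python rather than C++ streams: flatten a structured-points dataset (dimensions, spacing, scalar array) into a byte message and reconstruct it on the receiving side. The wire format here is invented for illustration and is not vtk's actual serialization.

```python
import struct

# Serialize a structured-points dataset into a byte message that a
# remote visualization peer can reconstruct. The header carries the
# grid dimensions and spacing; the body carries the flat scalar field.
# Wire format is a hypothetical example, not vtk's real encoding.

HEADER_FMT = "<3i3d"  # nx, ny, nz as ints; dx, dy, dz as doubles

def pack_structured_points(dims, spacing, scalars):
    """dims: (nx, ny, nz); spacing: (dx, dy, dz); scalars: flat floats."""
    header = struct.pack(HEADER_FMT, *dims, *spacing)
    body = struct.pack(f"<{len(scalars)}f", *scalars)
    return header + body

def unpack_structured_points(msg):
    """Inverse of pack_structured_points."""
    nx, ny, nz, dx, dy, dz = struct.unpack_from(HEADER_FMT, msg)
    offset = struct.calcsize(HEADER_FMT)
    scalars = list(struct.unpack_from(f"<{nx * ny * nz}f", msg, offset))
    return (nx, ny, nz), (dx, dy, dz), scalars

msg = pack_structured_points((2, 2, 1), (1.0, 1.0, 1.0), [0.0, 1.0, 2.0, 3.0])
dims, spacing, scalars = unpack_structured_points(msg)
```

A real implementation would also tag each message with a dataset type so structured points, grids, and unstructured grids can share one channel, plus the update-protocol control messages.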
Example - Parallel Isosurface and Serial Rendering on a Linux Cluster
[Diagram: four parallel Read → Isosurface → Decimation branches, one per cluster node, feeding a single Append stage and a serial Renderer]
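The fan-out/fan-in structure of that diagram can be sketched in Python. The stage bodies below are stand-ins (no real marching cubes or mesh decimation happens), but the Read → Isosurface → Decimation → Append → Render shape matches the slide.

```python
from concurrent.futures import ThreadPoolExecutor

# Data-parallel Read -> Isosurface -> Decimation branches feeding a
# serial Append + Render stage. All stage bodies are illustrative
# stand-ins operating on lists of sample values.

def read(partition):
    # stand-in for reading one partition of the volume from disk
    return partition

def isosurface(block, level):
    # stand-in: keep samples at or above the iso-level
    return [v for v in block if v >= level]

def decimate(geometry, keep_every=2):
    # stand-in for mesh decimation: keep a fraction of the geometry
    return geometry[::keep_every]

def render(geometry):
    # the single serial rendering step; here, just report a count
    return f"rendered {len(geometry)} primitives"

def worker(partition, level):
    return decimate(isosurface(read(partition), level))

partitions = [[0.1, 0.9, 0.5], [0.7, 0.2, 0.8], [0.4, 0.6, 0.95]]
with ThreadPoolExecutor() as pool:
    pieces = list(pool.map(worker, partitions, [0.5] * len(partitions)))

appended = [g for piece in pieces for g in piece]  # Append stage
print(render(appended))
```

Swapping the thread pool for MPI ranks or cluster nodes keeps the same structure; only the Append stage needs to gather across the network.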
Progressive Refinement and Multi-resolution Techniques: Example Application
• Particle Accelerator Density Fields
– wavelet-based representation of structured grids
– isosurface visualization with vtk
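The progressive-refinement idea behind the wavelet representation can be illustrated with a 1-D Haar transform: ship the coarse averages first, then add detail levels until the original samples are recovered exactly. The real system worked on 3-D structured grids; this sketch assumes a power-of-two-length signal.

```python
# 1-D Haar wavelet sketch of progressive refinement: each detail
# level doubles the resolution of the reconstruction.

def haar_decompose(signal):
    """Return (coarsest_averages, [detail levels, coarse -> fine])."""
    details = []
    while len(signal) > 1:
        avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        details.append(det)
        signal = avg
    return signal, details[::-1]

def haar_refine(coarse, detail):
    """Apply one level of detail coefficients to double the resolution."""
    out = []
    for a, d in zip(coarse, detail):
        out.extend([a + d, a - d])
    return out

data = [4.0, 2.0, 6.0, 8.0, 5.0, 1.0, 3.0, 7.0]
coarse, details = haar_decompose(data)

approx = coarse
for level in details:        # transmit one more detail level each round
    approx = haar_refine(approx, level)

assert approx == data        # full detail reproduces the original exactly
```

Truncating the loop early gives the low-resolution preview a remote client can isosurface immediately while finer coefficients are still in flight.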
Multiresolution Display Development
[Figure: high-resolution inset image over a background image]
• Match display to the human visual system
– most cones are in the 5° foveal spot
• Optimal use of rendering power
– resolution where you need it
• Match display to data resolution
– resolution where the data is
Remote Volume Rendering
• “True” 3D presentation of 3D data
• Blending of user-defined color and opacity
• Reveals subtle details/structure in data that could be lost in isosurface rendering
Remote Visualization Using Image-Based Rendering
Front View Side View
ActiveMural, Giant Display Wall
• Argonne, Princeton, UIUC collaboration
• 8' x 16' display wall
– Jenmar Visual Systems BlackScreen™ technology, > 10,000 lumens
– 8 LCD / 15 DLP / 24 DLP projectors
– 8-20 megapixels
Network Attached — Virtual Frame Buffer
• Virtual framebuffer resolutions: 3796 x 1436 pixels (4x2 tiles), 5644 x 2772 pixels (6x4 tiles), ...
• VFB front-end server: serial semantics, local framebuffer interface
– output partitioning
– blending
– serial → parallel
– flexible transport
– shadow buffer
• VFB back-end servers (mapped one-to-one onto graphics outputs), each driving an accelerator → RAMDAC → projector chain
• Client interfaces: X-Windows? OpenGL? ggi? Message passing, SM or DSM
• VFB net command interface
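The front-end's output-partitioning step amounts to mapping the virtual framebuffer onto per-projector pixel rectangles. A minimal sketch, where `partition` is a hypothetical helper and the 3796 x 1436 (4x2) geometry is taken from the slide:

```python
# Map a W x H virtual framebuffer onto a cols x rows array of
# back-end servers, returning the pixel rectangle each one owns.
# Assumes the tile grid divides the framebuffer evenly.

def partition(width, height, cols, rows):
    """Return {(col, row): (x, y, tile_w, tile_h)} for each back-end."""
    tile_w, tile_h = width // cols, height // rows
    return {
        (c, r): (c * tile_w, r * tile_h, tile_w, tile_h)
        for r in range(rows)
        for c in range(cols)
    }

tiles = partition(3796, 1436, cols=4, rows=2)
```

Each back-end server would receive only the drawing traffic that intersects its rectangle, which is what lets the front end keep serial local-framebuffer semantics while the outputs run in parallel.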
The MicroMural
Portable Tiled Display for High Resolution Vis and Access Grid
[Figure: Access Grid node room with ambient mic (tabletop), presenter mic, presenter camera, and audience camera]
Access Grid Nodes
• Access Grid Nodes Under Development
– Library, Workshop
– ActiveMural Room
– Office
– Auditorium
Components of an AG Node
[Diagram: display computer, video capture computer, audio capture computer, and control computer attached to the network; mixer and echo canceller in the audio path; signals: RGB video, NTSC video, analog audio, digital video, digital audio, shared app/control, RS-232 serial]
Collaborative Dataset Exploration and Analysis: Collaboration & Network-Aware Visualization Tools
• TIDE is being built in collaboration with NCDM as a framework for navigating and viewing data-sets in Tele-Immersion.
– Low-Res navigation
– High-Res visualization
– Set viewpoints then raytrace
• Integrate annotation tools & multiperspective techniques.
• Support VTK and make it collaborative.
• Interface with other commonly used ASCI/SSI visualization tools such as HDF5.
TIDE showing Compressionof a Lattice (ASCI data)
Collaborative Dataset Exploration and Analysis: Annotation and Recording
• How do you record discoveries in tele-immersion?
• V-Mail & Virtual Post-It notes attach to spaces, objects, or states.
• Recording states and checkpoints.
• Useful for documenting spatially located features.
• Useful for asynchronous collaboration.
• Querying in VR.
• People tend to treat recordings as if they were still there.
Viewing V-Mail in Tele-Immersion
Collaborative Dataset Exploration and Analysis: Collaboration Techniques & Technology for Navigating Massive Data-sets
• Explore human factors to motivate the design of collaborative tools.
• Take advantage of having more than one expert to help with interpretation and/or manipulation. Provide multiple cooperative representations.
• e.g. Engineer and artist.
• e.g. Partition multi-dimensions across viewers.
• e.g. People with different security clearances.
• CAVE6D implementation and pilot study.
CAVE6D: Tele-Immersive tool forvisualizing Oceanographic Data
Middleware Technology
• Integrated Grid Architecture
• Grid Services Infrastructure
• Multicast Protocols for Rapid Image Transfer
• Analyzing the Use of Network Resources
The Grid from a Services View
Applications: chemistry, biology, cosmology, nanotechnology, environment
Application Toolkits: distributed computing, data-intensive, collaborative, remote visualization, problem solving, and remote instrumentation applications toolkits
Grid Services (Middleware): resource-independent and application-independent services, e.g., authentication, authorization, resource location, resource allocation, events, accounting, remote data access, information, policy, fault detection
Grid Fabric (Resources): resource-specific implementations of basic services, e.g., transport protocols, name servers, differentiated services, CPU schedulers, public key infrastructure, site accounting, directory service, OS bypass
Monitoring: Globus I/O & NetLogger
Teleimmersion Networking Requirements

Type        Latency    Bandwidth      Reliable  Multicast  Security  Streaming  DynQoS
Control     < 30 ms    64 Kb/s        Yes       No         High      No         Low
Text        < 100 ms   64 Kb/s        Yes       No         Medium    No         Low
Audio       < 30 ms    N x 128 Kb/s   No        Yes        Medium    Yes        Medium
Video       < 100 ms   N x 5 Mb/s     No        Yes        Low       Yes        Medium
Tracking    < 10 ms    N x 128 Kb/s   No        Yes        Low       Yes        Medium
Database    < 100 ms   > 1 GB/s       Yes       Maybe      Medium    No         High
Simulation  < 30 ms    > 1 GB/s       Mixed     Maybe      Medium    Maybe      High
Haptic      < 10 ms    > 1 Mb/s       Mixed     Maybe      High      Maybe      High
Rendering   < 30 ms    > 1 GB/s       No        Maybe      Low       Maybe      Medium

Stream types: audio, video, tracking, database and event transactions, simulation data, haptic drivers, remote rendering, text, control

• Immersive environment
• Sharing of objects and virtual space
• Coordinated navigation and discovery
• Interactive control and synchronization
• Interactive modification of environment
• Scalable distribution of environment
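A few rows of the requirements table, transcribed into a lookup that a session layer could consult when opening a stream. The `choose_transport` policy is an invented illustration, not part of the project.

```python
# Per-stream networking requirements from the teleimmersion table
# (a subset of rows, values transcribed from the slide).

REQUIREMENTS = {
    # type:      (latency_ms, reliable, multicast, streaming)
    "control":   (30,  True,  False, False),
    "text":      (100, True,  False, False),
    "audio":     (30,  False, True,  True),
    "video":     (100, False, True,  True),
    "tracking":  (10,  False, True,  True),
}

def choose_transport(stream_type):
    """Pick a (hypothetical) transport matching the stream's needs."""
    latency_ms, reliable, multicast, streaming = REQUIREMENTS[stream_type]
    if reliable:
        return "tcp"
    return "udp-multicast" if multicast else "udp"

print(choose_transport("audio"))    # udp-multicast
print(choose_transport("control"))  # tcp
```

Encoding the table this way makes the slide's point concrete: no single transport satisfies every stream, so the middleware has to mix reliable, multicast, and QoS-managed channels per media type.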
Corridor One Testbed
High Bandwidth Data Distribution
Achieved 35 MBytes/sec.
Midwest Networked CAVE and ImmersaDesk Sites Enabled by EMERGE
CorridorOne Application Campaigns
• Approximately two weeks in duration (approximately three or four each year)
• Focused testing and evaluation of one application area during that time
• Involving the participation of external applications scientists
• Part of the effort is qualitative: determining how the users will use the remote capabilities
• Part of the effort is a set of well-designed quantitative experiments to collect data
First Year Milestones
• Access Grid nodes up for supporting C1 collaboration (Oct 31)
• Integrate visualization tools, middleware and display technologies
• Conduct Phase 1 applications experiments beginning Dec 1-10
• For each applications domain area we will:
– Collect relevant problem datasets and determine possible visualization modalities
– Develop remote scientific visualization and analysis scenarios with the end users
– Prototype a distributed collaborative visualization application/demonstration
– Test the application locally and remotely with variable numbers of participants and sites
– Document how the tools, middleware and network were used and how they performed during the tests
– Evaluate the tests and provide feedback to Grid middleware developers, visualization tool builders, and network providers