iWarp Based Remote Interactive Scientific Visualization - CENIC 2008, Oakland, CA
Scott A. Friedman, UCLA Academic Technology Services, Research Computing Technologies Group

Transcript of iWarp Based Remote Interactive Scientific Visualization - CENIC 2008, Oakland, CA

  • iWarp Based Remote Interactive Scientific Visualization

    CENIC 2008, Oakland, CA
    Scott A. Friedman
    UCLA Academic Technology Services
    Research Computing Technologies Group

  • Our Challenge

    - Applications which make use of 10 gigabit network infrastructure
      - iWarp, leveraging our existing IB code
    - Extend access to our visualization cluster
      - Remote sites around the UCLA campus
      - Remote sites around the UC system
      - Remote sites beyond

  • Our Visualization Cluster

    - Hydra - InfiniBand based
      - 24 rendering nodes (2x NVIDIA G70)
      - 1 high-definition display node
      - 3 visualization center projection nodes
      - 1 remote visualization bridge node
    - Primarily a research system
      - High-performance interactive rendering (60 Hz)
      - Load-balanced rendering
      - Spatio-temporal data exploration and discovery
    - System requires both low latency and high bandwidth

  • Remote Visualization Bridge Node

    - Bridges from IB to iWarp/10G Ethernet
      - Appears like a display to the cluster
      - No change to the existing rendering system
    - Pixel data arrives over IB and is sent over iWarp to a remote display node
      - Uses the same RDMA protocol
      - Same buffer used for receives and sends
    - Very simple in principle - sort of
      - Transfers are pipelined along the entire path (see the sketch after this list)
      - Pipeline chunk size is optimized offline

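    The chunked, cut-through forwarding described above can be sketched roughly
    as follows. This is not the production bridge code: recv_chunk_ib() and
    send_chunk_iwarp() are hypothetical stubs standing in for the real RDMA
    operations, and the frame and chunk sizes are illustrative assumptions. The
    point is the structure - one shared frame buffer, with chunk i forwarded to
    the iWarp side while chunk i+1 is still arriving over IB.

        /*
         * Sketch of the bridge node's chunked, cut-through forwarding.
         * One frame buffer is shared by the IB receive side and the iWarp
         * send side; chunk i is forwarded as soon as it has arrived, while
         * chunk i+1 is still in flight.  recv_chunk_ib() and
         * send_chunk_iwarp() are stand-ins for the real RDMA operations.
         * Build with: cc -pthread bridge_sketch.c
         */
        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        #define FRAME_BYTES (1920 * 1080 * 3)  /* one HD frame, assuming 24-bit RGB  */
        #define CHUNK_BYTES (256 * 1024)       /* pipeline chunk size, tuned offline */
        #define NCHUNKS ((FRAME_BYTES + CHUNK_BYTES - 1) / CHUNK_BYTES)

        static unsigned char frame[FRAME_BYTES];  /* same buffer for recv and send  */
        static int chunks_ready = 0;              /* chunks received from IB so far */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

        static size_t chunk_len(int i)            /* last chunk may be short */
        {
            return i == NCHUNKS - 1 ? FRAME_BYTES - (size_t)i * CHUNK_BYTES
                                    : CHUNK_BYTES;
        }

        static void recv_chunk_ib(unsigned char *dst, size_t len)           /* stub */
        {
            memset(dst, 0xab, len);  /* pretend the incoming RDMA write landed here */
            usleep(100);
        }

        static void send_chunk_iwarp(const unsigned char *src, size_t len)  /* stub */
        {
            (void)src; (void)len;
            usleep(100);             /* pretend an iWarp RDMA send of this chunk */
        }

        static void *ib_receiver(void *arg)
        {
            (void)arg;
            for (int i = 0; i < NCHUNKS; i++) {
                recv_chunk_ib(frame + (size_t)i * CHUNK_BYTES, chunk_len(i));
                pthread_mutex_lock(&lock);
                chunks_ready = i + 1;            /* publish progress to the sender */
                pthread_cond_signal(&cond);
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t rx;
            pthread_create(&rx, NULL, ib_receiver, NULL);

            /* Forward each chunk as soon as it is available, so the iWarp send
             * of chunk i overlaps the IB receive of chunk i+1. */
            for (int i = 0; i < NCHUNKS; i++) {
                pthread_mutex_lock(&lock);
                while (chunks_ready <= i)
                    pthread_cond_wait(&cond, &lock);
                pthread_mutex_unlock(&lock);
                send_chunk_iwarp(frame + (size_t)i * CHUNK_BYTES, chunk_len(i));
            }

            pthread_join(rx, NULL);
            printf("forwarded one %d-byte frame in %d chunks\n", FRAME_BYTES, NCHUNKS);
            return 0;
        }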

  • Simple Diagram

    [Diagram: the visualization cluster drives a local HD node over IB; pixel
    data also flows over IB to the bridge node, which acts like a display to
    the cluster and forwards it over iWarp across the UCLA, CENIC, NLR, and
    SCinet networks to a remote HD node.]

  • What are we transmitting?

    - High-definition uncompressed video stream
      - 60 Hz at 1920x1080 ~ 396 MB/s (3 Gbps)
      - One frame every 16.6 ms (the arithmetic is sketched after this list)
    - Achieved three simultaneous streams at UCLA
      - Using a single bridge node
      - Just over 9.2 Gbps over the campus backbone
    - Actual visualization
      - Demo is a particle DLA simulation
      - Diffusion-limited aggregation - physical chemistry
      - N-body simulations

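    The rate figures above follow from simple arithmetic; the snippet below
    reproduces it assuming 24-bit RGB pixels (an assumption - the slide's
    ~396 MB/s is a little higher, which suggests a heavier per-pixel format or
    some framing overhead on top of the raw pixels).

        /* Back-of-envelope rate for one uncompressed 1080p60 stream. */
        #include <stdio.h>

        int main(void)
        {
            const double width = 1920, height = 1080, hz = 60, bytes_per_pixel = 3;

            double frame_bytes = width * height * bytes_per_pixel;  /* ~6.2 MB */
            double stream_Bps  = frame_bytes * hz;
            double stream_Gbps = stream_Bps * 8.0 / 1e9;

            printf("frame: %.2f MB, one frame every %.1f ms\n",
                   frame_bytes / 1e6, 1000.0 / hz);
            printf("one stream:    %.0f MB/s = %.2f Gbps\n",
                   stream_Bps / 1e6, stream_Gbps);
            printf("three streams: %.2f Gbps\n", 3.0 * stream_Gbps);
            return 0;
        }

    With these assumptions one stream comes out near 3 Gbps, and three streams
    land just under the 9.2 Gbps the slide reports for the campus backbone.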

  • Diffusion limited aggregation

  • Continuing Work

    - Local latency is manageable
      - ~60 usec around campus
    - Longer latencies / distances
      - UCLA to UC Davis ~20 ms RTT (CENIC)
      - UCLA to SC07 (Reno, NV) ~14 ms RTT (CENIC/NLR)
      - How much can we tolerate? A factor of interactivity (see the budget sketch after this list)
      - Jitter can be an issue; it is difficult to buffer, so typically we just toss data
    - Challenges
      - Proper application data pipelining helps hide latency
      - Buffering is not really an option because of interaction
      - Packet loss is a killer - some kind of provisioning is desired/needed
      - Remote DMA is not amenable to unreliable transmission
      - A reliable hybrid application protocol is likely the best solution

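    To put the round-trip times above against the 60 Hz frame period, the small
    sketch below expresses each quoted RTT in frame periods. The RTT values are
    the ones from the slide; reading them as "frame periods of added lag" is an
    illustrative way to think about the interactivity budget, not a measured
    result.

        /* Express the quoted round-trip times in 60 Hz frame periods: each
         * full frame period of RTT is roughly one more frame the remote
         * viewer lags behind their own input. */
        #include <stdio.h>

        int main(void)
        {
            const double frame_ms = 1000.0 / 60.0;     /* 16.7 ms per frame */
            const struct { const char *path; double rtt_ms; } links[] = {
                { "UCLA campus",             0.060 },
                { "UCLA to SC07 (Reno, NV)", 14.0  },
                { "UCLA to UC Davis",        20.0  },
            };

            for (int i = 0; i < 3; i++)
                printf("%-26s rtt %6.3f ms = %5.2f frame periods of lag\n",
                       links[i].path, links[i].rtt_ms, links[i].rtt_ms / frame_ms);
            return 0;
        }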

  • Thank you

    - UCLA: Mike Van Norman, Chris Thomas
    - UC Davis: Rodger Hess, Kevin Kawaguchi
    - SDSC: Tom Hutton, Matt Kullberg, Susan Rathburn
    - CENIC: Chris Costa
    - Open Grid Computing, Chelsio: Steve Wise, Felix Marti
