Transcript of "Introduction to ARC Systems" (Advanced Research Computing, Division of IT, Feb 20, 2018)

Page 1:

Introduction to ARC Systems

Presenter Name
Advanced Research Computing
Division of IT
Feb 20, 2018

Page 2:

Before We Start

• Sign in
• Request account if necessary

• Windows users: MobaXterm or PuTTY
• Web interface: ETX at newriver.arc.vt.edu

Page 3:

Today’s goals

• Introduce ARC
• Give overview of HPC today
• Give overview of VT-HPC resources
• Familiarize audience with interacting with VT-ARC systems

Page 4:

Should I Pursue HPC?

•Necessity: Are local resources insufficient to meet your needs?

– Very large jobs
– Very many jobs
– Large data

•Convenience: Do you have collaborators?

– Share projects between different entities
– Convenient mechanisms for data sharing

Page 5:


Research in HPC is Broad

• Earthquake Science and Civil Engineering
• Molecular Dynamics
• Nanotechnology
• Plant Science
• Storm modeling
• Epidemiology
• Particle Physics
• Economic analysis of phone network patterns
• Brain science
• Analysis of large cosmological simulations
• DNA sequencing
• Computational Molecular Sciences
• Neutron Science
• International Collaboration in Cosmology and Plasma Physics

Page 6:

Who Uses HPC?

• >2 billion core-hours allocated
• 1,400 allocations
• 350 institutions
• 32 research domains

Page 7:

Popular Software Packages

• Molecular Dynamics: Gromacs, LAMMPS, NAMD, Amber
• CFD: OpenFOAM, Ansys, Star-CCM+
• Finite Elements: Deal II, Abaqus
• Chemistry: VASP, Gaussian, PSI4, QCHEM
• Climate: CESM
• Bioinformatics: Mothur, QIIME, MPIBLAST
• Numerical Computing/Statistics: R, Matlab, Julia
• Visualization: ParaView, VisIt, Ensight
• Deep Learning: Caffe, TensorFlow, Torch, Theano
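On ARC systems these packages are typically provided through the module system (covered later). A minimal sketch, assuming a package such as GROMACS is installed as a module on your chosen cluster (the module name is illustrative):

  module spider gromacs      # search for available versions of the package
  module load gromacs        # load the default version into your environment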

Page 8:

Learning Curve

• Linux: Command-line interface
• Scheduler: Shares resources among multiple users
• Parallel Computing:
  – Need to parallelize code to take advantage of the supercomputer’s resources
  – Third-party programs or libraries make this easier

Page 9:

Advanced Research Computing

•Unit within the Office of the Vice President of Information Technology

• Provide centralized resources for:
  – Research computing
  – Visualization
• Staff to assist users
• Website: http://www.arc.vt.edu

Page 10:

ARC Goals

•Advance the use of computing and visualization in VT research

•Centralize resource acquisition, maintenance, and support for research community

•Provide support to facilitate usage of resources and minimize barriers to entry

•Enable and participate in research collaborations between departments

Page 11:

Personnel

•Associate VP for Research Computing: Terry Herdman

• Director, HPC: Terry Herdman
• Director, Visualization: Nicholas Polys
• Computational Scientists
  – John Burkardt
  – Justin Krometis
  – James McClure
  – Srijith Rajamohan
  – Bob Settlage
  – Ahmed Ibrahim

Page 12:

Personnel (Continued)

• System Administrators (UAS)
  – Matt Strickler
  – Josh Akers
• Solutions Architects
  – Brandon Sawyers
  – Chris Snapp
• Business Manager: Alana Romanella
• User Support GRAs: Mai Dahshan, Negin Forouzesh, Xiaolong Li

Page 13:

Personnel (Continued)

• System & Software Engineers
  – Nathan Liles
  – Open

Page 14:

Computational Resources

• NewRiver: Cluster targeting data-intensive problems
• DragonsTooth: Jobs that don't require a low-latency interconnect (i.e., pleasingly parallel) or that run long (e.g., 30-day wall time)
• BlueRidge: Large-scale cluster equipped with Intel Xeon Phi coprocessors
• Cascades: Large general compute cluster with GPU nodes
• Huckleberry: Machine learning cluster
• Cloud-based access (bring your user story)

Page 15:

Compute Resources

System | Usage | Nodes | Node Description | Special Features
BlueRidge | Large-scale CPU, MIC | 408 | 16 cores, 64 GB (2× Intel Sandy Bridge) | 260 Intel Xeon Phi; 4 K40 GPU; 18 128GB nodes
NewRiver | Large-scale, data intensive | 134 | 24 cores, 128 GB (2× Intel Haswell) | 8 K80 GPGPU; 16 “big data” nodes; 24 512GB nodes; 2 3TB nodes; 40 P100 nodes
DragonsTooth | Pleasingly parallel, long jobs | 48 | 24 cores, 256 GB (2× Intel Haswell) | 2TB SSD local storage; 30-day walltime
Huckleberry | Deep learning | 14 | POWER8, 256 GB (2× IBM “Minsky” S822LC) | 4× P100 per node; NVLink

Page 16:

Computational Resources

Name | NewRiver | DragonsTooth | BlueRidge | Huckleberry
Key Features, Uses | Scalable CPU, data intensive | Pleasingly parallel, long running jobs | Scalable CPU or MIC | Deep learning
Available | August 2015 | August 2016 | March 2013 | Sept 2017
Theoretical Peak (TFlops/s) | 152.6 | 0.806 | 398.7 | -
Nodes | 134 | 48 | 408 | 14
Cores | 3,288 | 576 | 6,528 | -
Cores/Node | 24 | 24 | 16 | -
Accelerators/Coprocessors | 8 Nvidia K80 GPU | N/A | 260 Intel Xeon Phi; 8 Nvidia K40 GPU | 56 Nvidia P100 GPUs
Memory Size | 34.4 TB | 12.3 TB | 27.3 TB | -
Memory/Core | 5.3 GB* | 10.6 GB | 4 GB* | -
Memory/Node | 128 GB* | 256 GB | 64 GB* | 512 GB

Page 17:

NewRiver

• 134 nodes
• 3,288 cores
• 34.4 TB memory
• EDR InfiniBand (IB) interconnect
• Data-intensive, large-memory, visualization, and GPGPU special-purpose hardware:
  – 8 K80 GPGPU
  – 16 “big data” nodes
  – 24 512GB nodes
  – 2 3TB nodes
  – 40 dual P100 nodes

Page 18:

NewRiver Nodes

Description | # | Hosts | CPU | Cores | Memory | Local Disk | Other Features
General | 100 | nr027-nr126 | 2 × E5-2680 v3 (Haswell) | 24 | 128 GB, 2133 MHz | 1.8 TB | -
GPU | 8 | nr019-nr026 | 2 × E5-2680 v3 (Haswell) | 24 | 512 GB | 3.6 TB (2 × 1.8 TB) | NVIDIA K80 GPU
I/O | 16 | nr003-nr018 | 2 × E5-2680 v3 (Haswell) | 24 | 512 GB | 43.2 TB (24 × 1.8 TB) | 2 × 200 GB SSD
Large Memory | 2 | nr001-nr002 | 4 × E7-4890 v2 (Ivy Bridge) | 60 | 3 TB | 10.8 TB (6 × 1.8 TB) | -
Interactive | 8 | newriver1-newriver8 | 2 × E5-2680 v3 (Haswell) | 24 | 256 GB | - | NVIDIA K1200 GPU

Page 19:

Storage Resources

Name | Intent | File System | Environment Variable | Per User Maximum | Data Lifespan | Available On
Home | Long-term storage of files | GPFS (NewRiver), NFS (other) | $HOME | 500 GB (NewRiver), 100 GB (other) | Unlimited | Login and compute nodes
Group | Shared data storage for research groups | GPFS (NewRiver) | $GROUP | 10 TB free per faculty researcher | Unlimited | Login and compute nodes
Work | Fast I/O, temporary storage | GPFS (NewRiver), Lustre (BlueRidge), GPFS (other) | $WORK | 20 TB (NewRiver), 14 TB (other), 3 million files | 120 days | Login and compute nodes
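In practice, the environment variables above are the easiest way to reach each location. A minimal illustration (the file name input.dat is hypothetical):

  echo $HOME $WORK           # show your home and work directory paths
  cd $WORK                   # work: fast, temporary storage (120-day lifespan)
  cp $HOME/input.dat .       # stage an input file from long-term home storage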

Page 20:

Storage Resources (continued)

Name | Intent | File System | Environment Variable | Per User Maximum | Data Lifespan | Available On
Archive | Long-term storage for infrequently-accessed files | CXFS | $ARCHIVE | - | Unlimited | Login nodes
Local Scratch | - | Local disk (hard drives) | $TMPDIR | Size of node hard drive | Length of job | Compute nodes
Memory (tmpfs) | Very fast I/O | Memory (RAM) | $TMPFS | Size of node memory | Length of job | Compute nodes
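Node-local scratch and tmpfs exist only for the length of a job, so data must be staged in and copied back inside the job script. A sketch, with hypothetical file and program names:

  cp $WORK/input.dat $TMPDIR/        # copy input to node-local disk
  ./my_program $TMPDIR/input.dat     # run against the fast local copy
  cp $TMPDIR/output.dat $WORK/       # copy results back before the job ends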

Page 21:

Visualization Resources

•VisCube: 3D immersive environment with three 10′ by 10′ walls and a floor of 1920×1920 stereo projection screens

•DeepSix: Six tiled monitors with combined resolution of 7680×3200

• ROVR Stereo Wall
• AISB Stereo Wall

Page 22:

Getting Started with ARC

•Review ARC’s system specifications and choose the right system(s) for you

–Specialty software

• Apply for an account online at the Advanced Research Computing website

•When your account is ready, you will receive confirmation from ARC’s system administrators

Page 23:

Resources

• ARC Website: http://www.arc.vt.edu
• ARC Compute Resources & Documentation: http://www.arc.vt.edu/hpc

•New Users Guide: http://www.arc.vt.edu/newusers

•Frequently Asked Questions: http://www.arc.vt.edu/faq

•Linux Introduction: http://www.arc.vt.edu/unix

Page 24:

Thank you

Questions?

Page 25:

Log In

• Log in via SSH (see the example after the table)
  – Mac/Linux have a built-in client
  – Windows users need to download a client (e.g., PuTTY)

System | Login Address (xxx.arc.vt.edu)
NewRiver | newriver1 to newriver8
DragonsTooth | dragonstooth1
BlueRidge | blueridge1 or blueridge2
HokieSpeed | hokiespeed1 or hokiespeed2
HokieOne | hokieone
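For example, from a Mac or Linux terminal (replace yourpid with your VT username and pick a login address from the table above):

  ssh [email protected]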

Page 26:

Browser-based Access

• Browse to http://newriver.arc.vt.edu
• Xterm: Opens an SSH session with X11 forwarding (but faster)

•Other profiles: VisIt, ParaView, Matlab, Allinea

•Create your own!

Page 27:


ALLOCATION SYSTEM

Page 28:

Allocations

• A “system unit” (roughly, one core-hour) account tracks system usage

•Applies only to NewRiver and BlueRidge

http://www.arc.vt.edu/allocations

Page 29:

Allocation System: Goals

•Track projects that use ARC systems and document how resources are being used

•Ensure that computational resources are allocated appropriately based on needs

– Research: Provide computational resources for your research lab
– Instructional: System access for courses or other training events

Page 30:

Allocation Eligibility

To qualify for an allocation, you must meet at least one of the following:

• Be a Ph.D.-level researcher (post-docs qualify)
• Be an employee of Virginia Tech and the PI for research computing
• Be an employee of Virginia Tech and the co-PI for a research project led by a non-VT PI

Page 31:

Allocation Application Process

•Create a research project in ARC database

•Add grants and publications associated with project

•Create an allocation request using the web-based interface

•Allocation review may take several days

•Users may be added to run jobs against your allocation once it has been approved

Page 32:

Allocation Tiers

•Research allocations fall into three tiers:

• Less than 200,000 system units (SUs)
  – 200-word abstract
• 200,000 to 1 million SUs
  – 1-2 page justification
• More than 1 million SUs
  – 3-5 page justification

Page 33:

Allocation Management

• Web-based:
  – User Dashboard -> Projects -> Allocations
  – System units allocated/remaining
  – Add/remove users
• Command line (examples below):
  – Allocation name and membership: glsaccount
  – Allocation size and amount remaining: gbalance -h -a <name>
  – Usage (by job): gstatement -h -a <name>
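For example, assuming an allocation named myproject (a hypothetical name):

  glsaccount                     # list your allocations and their members
  gbalance -h -a myproject       # system units allocated and remaining
  gstatement -h -a myproject     # usage statement, broken down by job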

Page 34:


USER ENVIRONMENT

Page 35:

Consistent Environment

• Operating system (CentOS)
• Storage locations
• Scheduler
• Hierarchical module tree for system tools and applications

Page 36:

Modules

• Modules are used to set the PATH and other environment variables
• Modules provide the environment for building and running applications
  – Multiple compiler vendors (Intel vs. GCC) and versions
  – Multiple software stacks: MPI implementations and versions
  – Multiple applications and their versions
• An application is built with a certain compiler and a certain software stack (MPI, CUDA)
  – Modules for software stack, compiler, applications
• User loads the modules associated with an application, compiler, or software stack
  – Modules can be loaded in job scripts

Page 37:

Module Commands

Command | Result
module | List options
module list | List loaded modules
module avail | List available modules
module load <module> | Add a module
module unload <module> | Remove a module
module swap <mod1> <mod2> | Swap two modules
module help <module> | Show help text for a module
module show <module> | Show what a module does to the environment
module spider <module> | Search modules
module reset | Reset to default modules
module purge | Unload all modules

Page 38:

Modules

• Available modules depend on:
  – The compiler (e.g., Intel, GCC) and
  – The MPI stack selected
• Defaults:
  – BlueRidge: Intel + mvapich2
  – NewRiver, DragonsTooth: none

Page 39:

Hierarchical Module Structure
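In a hierarchical module tree, application modules only become visible after the compiler and MPI modules they were built with have been loaded. A minimal sketch (module names are illustrative; versions vary by system):

  module purge               # start from a clean environment
  module load intel          # compiler level
  module avail               # now lists MPI stacks built with the Intel compiler
  module load mvapich2       # MPI level
  module avail               # now lists applications built with intel + mvapich2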

Page 40:


JOB SUBMISSION & MONITORING

Page 41:

Cluster System Overview

[Diagram: users on the VT campus, or off campus via VPN, connect over the Internet to the cluster's login node; the login node connects to compute nodes 1 through N and to shared storage (/home, /work).]

Page 42:

Cluster System Architecture

[Diagram: cluster system architecture. Login nodes face the Internet; compute nodes connect through an InfiniBand switch hierarchy (TopSpin 270/120 switches) to I/O nodes serving the WORK file system, and through a GigE switch hierarchy to the home server (RAID 5 HOME storage attached via Fibre Channel).]

Page 43:

Parallelism is the New Moore’s Law

•Power and energy efficiency impose a key constraint on design of micro-architectures

•Clock speeds have plateaued

•Hardware parallelism is increasing rapidly to make up the difference

Page 44:

Essential Components of HPC

• Supercomputing resources
• Storage
• Visualization
• Data management
• Network infrastructure
• Support


Page 45:

Terminology

• Core: A computational “unit”
• Socket: A single CPU (“processor”). Includes roughly 4-15 cores.

•Node: A single “computer”. Includes roughly 2-8 sockets.

•Cluster: A single “supercomputer” consisting of many nodes.

•GPU: Graphics processing unit. Attached to some nodes. General purpose GPUs (GPGPUs) can be used to speed up certain kinds of codes.

•Xeon Phi: Intel’s product name for its GPU competitor. Also called “MIC”.

Page 46:

Blade : Rack : System

• 1 node: 2 × 8 cores = 16 cores
• 1 chassis: 10 nodes = 160 cores
• 1 rack (frame): 4 chassis = 640 cores
• 1 system: 10 racks = 6,400 cores

Page 47:

HPC Storage

[Diagram: HPC storage layout. Compute nodes in the cluster mount $HOME, $WORK, and $SCRATCH over GPFS and NFS (file systems on the order of 200 TB and 140 TB), plus a $SHARE area reached through a storage gateway; a DMF-managed tape archive provides long-term storage.]

Page 48:

Shared vs. Distributed memory

Shared memory:
• All processors have access to a pool of shared memory
• Access times vary from CPU to CPU in NUMA systems
• Example: SGI UV, CPUs on the same node

Distributed memory:
• Memory is local to each processor
• Data exchange by message passing over a network
• Example: Clusters with single-socket blades

[Diagram: shared memory (several processors attached to one memory pool) vs. distributed memory (each processor has its own memory, linked by a network).]

Page 49:

Multi-core systems

•Current processors place multiple processor cores on a die

• Communication details are increasingly complex
  – Cache access
  – Main memory access
  – QuickPath / HyperTransport socket connections
  – Node-to-node connection via network


Page 50:

Accelerator-based Systems

• Calculations made in both CPUs and GPUs
• No longer limited to single-precision calculations
• Load balancing critical for performance
• Requires specific libraries and compilers (CUDA, OpenCL)

•Co-processor from Intel: MIC (Many Integrated Core)


Page 51:

Submitting a Job

• Submission via a shell script
  – Job description: Resources required, run time, allocation
  – Modules & dependencies
  – Execution statements
• Submit job script: qsub <job_script>
• Interactive options (example below):
  – Interactive job: qsub -I …
  – Interactive job with X11 forwarding: qsub -I -X …
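For instance, a one-node interactive session might be requested as follows (a sketch; adjust the walltime, node/core counts, queue, and allocation to the system you are using):

  qsub -I -l walltime=01:00:00 -l nodes=1:ppn=24 -q normal_q -W group_list=newriver -A AllocationName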

Page 52:

Job Monitoring

• Determine job status and, if pending, when it will run

Command | Meaning
checkjob -v JOBID | Get the status and resources of a job
showq | See what jobs are running and cluster utilization
showstart JOBID | Get expected job start time
qdel JOBID | Delete a job
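For example, for a job with ID 25770 (a hypothetical job number; replace yourpid with your username):

  checkjob -v 25770      # status and resources of the job
  showstart 25770        # estimated start time while the job is queued
  showq -u yourpid       # your jobs and overall cluster utilization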

Page 53:

Job Execution

•Order of job execution depends on a variety of parameters:

– Submission Time
– Queue Priority
– Backfill Opportunities
– Fairshare Priority
– Advanced Reservations
– Number of Actively Scheduled Jobs per User

Page 54:

Examples: ARC Website

•See the Examples section of each system page for sample submission scripts and step-by-step examples:

– http://www.arc.vt.edu/newriver
– http://www.arc.vt.edu/dragonstooth
– http://www.arc.vt.edu/blueridge
– http://www.arc.vt.edu/huckleberry-user-guide

Page 55:


A Step-by-Step Example

Page 56:

Getting Started

• Find your training account (hpcXX)
• Log into NewRiver
  – Mac: ssh [email protected]
  – Windows: Use PuTTY
• Host Name: cascades1.arc.vt.edu

Page 57:

Example: Running MPI_Quad

• Or copy from the training directory:
  cp /home/TRAINING/newriver/* ./
• Build the code

Page 58:

Compile the Code

• Intel compiler is already loaded:
  module list
• Compile command (executable is mpiqd):
  mpicc -o mpiqd mpi_quad.c
• To use GCC instead, swap it out:
  module swap intel gcc

Page 59:

Prepare Submission Script

• Copy the sample script:
  cp /home/TRAINING/ARC_Intro/br.qsub .
• Edit the sample script:
  – Walltime
  – Resource request (nodes/ppn)
  – Module commands (add Intel & mvapich2)
  – Command to run your job
• Save it (e.g., mpiqd.qsub)

Page 60:

Submission Script (Typical)

#!/bin/bash

#PBS -l walltime=00:10:00

#PBS -l nodes=2:ppn=16

#PBS -q normal_q

#PBS -W group_list=newriver

# The -A line is needed only on NewRiver/BlueRidge
#PBS -A AllocationName

module load intel mvapich2

cd $PBS_O_WORKDIR

echo "MPI Quadrature!"

mpirun -np $PBS_NP ./mpiqd

exit;

Page 61:

Submission Script (Today)

#!/bin/bash
#PBS -l walltime=00:10:00
#PBS -l nodes=2:ppn=8
#PBS -q normal_q
#PBS -W group_list=training
#PBS -A training
#PBS -l advres=NLI_ARC_Intro.26

module load intel mvapich2

cd $PBS_O_WORKDIR
echo "MPI Quadrature!"
mpirun -np $PBS_NP ./mpiqd

exit;

Page 62:

Submit the job

• Copy the files to $WORK:
  cp mpiqd $WORK
  cp mpiqd.qsub $WORK
• Navigate to $WORK:
  cd $WORK
• Submit the job:
  qsub mpiqd.qsub
• The scheduler returns the job number: 25770.master.cluster

Page 63:

Wait for job to complete

• Check job status:
  checkjob -v 25770
  showq -u hpcXX
• When complete:
  – Job output: mpiqd.qsub.o25770
  – Errors: mpiqd.qsub.e25770
• Copy results back to $HOME:
  cp mpiqd.qsub.o25770 $HOME

Page 64:

What is Job Submission?

[Diagram: the user connects by ssh to the login node and submits a job with qsub; the job waits in the queue managed by the master node, and the scheduler then dispatches it to compute nodes, where it runs (e.g., mpirun -np # ./a.out).]

Page 65:

Resources

• ARC Website: http://www.arc.vt.edu
• Compute Resources & Documentation: http://www.arc.vt.edu/hpc

•Storage Documentation: http://www.arc.vt.edu/storage

•New Users Guide: http://www.arc.vt.edu/newusers

•Frequently Asked Questions: http://www.arc.vt.edu/faq

•Linux Introduction: http://www.arc.vt.edu/unix

•Module Tutorial: http://www.arc.vt.edu/modules

•Scheduler Tutorial: http://www.arc.vt.edu/scheduler

Page 66:

Questions

Presenter Name, Email
Computational Scientist
Advanced Research Computing
Division of IT