PySPH: A Python framework for SPH


Prabhu Ramachandran, Chandrashekhar P. Kaushik

Department of Aerospace Engineering and Department of Computer Science and Engineering

IIT Bombay

SciPy 2010, Austin, TX, June 30 – July 1, 2010

Outline

Introduction

PySPH

Architecture


SPH examples

[Video slides: 2D and 3D SPH simulation examples — can they be run in parallel?]

Smoothed Particle Hydrodynamics

- Particle based, Lagrangian
- Gingold and Monaghan (1977), Lucy (1977)
- PDEs
- Complex problems
- Moving geometries
- Free surface problems
- ...


SPH basics

- The SPH approximation:

  f(r) = ∫ f(r′) W(r − r′, h) dr′

- W(r − r′, h): kernel, compact support
- h: size of the kernel
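The kernel W is normalized so that it integrates to one over its compact support. As a minimal sketch (a 1D cubic spline for illustration — PySPH's `CubicSpline2D`, used later in the scripts, is the 2D analogue; the 1D normalization constant 2/(3h) is a standard textbook value, not taken from PySPH):

```python
import numpy as np

def cubic_spline_1d(r, h):
    """1D cubic spline kernel W(r - r', h); support is |r| < 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # standard 1D normalization
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0  # outside the compact support

# Numerically check that the kernel integrates to ~1 over its support.
h = 0.1
r = np.linspace(-2 * h, 2 * h, 2001)
w = np.array([cubic_spline_1d(ri, h) for ri in r])
integral = w.sum() * (r[1] - r[0])
```

The compact support is what makes the nearest-neighbor search on the next slides pay off: only particles within 2h of each other interact.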

SPH basics

[Figure: the nearest neighbors of a particle lie within the kernel support radius kh]

SPH basics . . .

- Derivatives: transferred to the kernel
- Lagrangian
- PDE → ODE

Scale-up for larger problems requires parallelization

Motivation

Reproducible and open research


PySPH

- SPH + Parallel + Python
- pysph.googlecode.com
- Open Source (BSD)

Requirements and installation

- Python, setuptools
- numpy, Cython-0.12, mpi4py-1.2
- Mayavi-3.x (optional)
- Standard Python package install

High-level solver outline

Serial Dam break

solver = FSFSolver(time_step=0.0001,
                   total_simulation_time=10.,
                   kernel=CubicSpline2D())

# Create the two entities.
dam_wall = Solid(name='dam_wall')
dam_fluid = Fluid(name='dam_fluid')

# The particles for the wall.
rg = RectangleGenerator(...)
dam_wall.add_particles(rg.get_particles())
solver.add_entity(dam_wall)

Serial Dam break . . .

# Particles for the left column of fluid.
rg = RectangleGenerator(...)
dam_fluid.add_particles(rg.get_particles())
solver.add_entity(dam_fluid)

# Start the solver.
solver.solve()

Parallel Dam break

solver = ParallelFSFSolver(time_step=0.0001,
                           total_simulation_time=10.,
                           kernel=CubicSpline2D())

# Load particles in proc with rank 0.


Software architecture

- Particle kernel
- SPH kernel
- Solver framework
- Serial and Parallel solvers

Software architecture

[Figure: layered architecture — Solvers and the Solver Framework (Python & Cython) sit on the SPH, NNPS, Cell Manager, and Particle Arrays modules (Cython), all built on C-Arrays (Cython)]

Particle kernel

- C-arrays: numpy-like but faster and resizable
- ParticleArray: arrays of properties
- NNPS (Nearest Neighbor Particle Search)
- Cell, CellManager
- Caching
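The ParticleArray idea — named per-particle property arrays that grow together — can be sketched as follows (an illustrative toy built on numpy, not PySPH's actual Cython-backed class or its API):

```python
import numpy as np

class ParticleArraySketch:
    """Toy particle array: each property is a numpy array, and all
    properties stay the same length as particles are added."""

    def __init__(self, **props):
        self.props = {k: np.asarray(v, dtype=float) for k, v in props.items()}

    def __len__(self):
        # All property arrays have the same length; use any one of them.
        return len(next(iter(self.props.values())))

    def add_particles(self, **props):
        """Append new particles; a value must be given for every property."""
        for k in self.props:
            self.props[k] = np.concatenate(
                [self.props[k], np.asarray(props[k], dtype=float)])

pa = ParticleArraySketch(x=[0.0, 1.0], y=[0.0, 0.0], rho=[1.0, 1.0])
pa.add_particles(x=[2.0], y=[0.0], rho=[1.0])
```

PySPH's C-arrays exist precisely because repeated `np.concatenate` copies are slow; a resizable array amortizes growth the way Python lists do, while staying contiguous for Cython loops.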

SPH kernel

- SPH particle interaction: interaction between 2 SPH particles
- SPH summation: interaction of all particles
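The two notions compose: a pair interaction computes one particle's contribution to another, and the summation accumulates pair interactions over all particles. A sketch of summation density in 1D (function names are illustrative, not PySPH's API; a real code would restrict the inner loop to the nearest neighbors returned by the NNPS rather than all particles):

```python
def w_cubic(r, h):
    """1D cubic spline kernel with compact support |r| < 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q**3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def pair_density(xi, xj, mj, h):
    """One SPH pair interaction: particle j's contribution to rho_i."""
    return mj * w_cubic(xi - xj, h)

def summation_density(x, m, h):
    """SPH summation: accumulate all pair interactions for every particle."""
    rho = [0.0] * len(x)
    for i in range(len(x)):
        for j in range(len(x)):  # a real code loops only over neighbors
            rho[i] += pair_density(x[i], x[j], m[j], h)
    return rho

# Uniformly spaced unit-density particles on [0, 1].
n = 101
dx = 1.0 / (n - 1)
x = [i * dx for i in range(n)]
m = [dx] * n                     # mass = spacing for unit density
rho = summation_density(x, m, h=1.2 * dx)
```

In the interior the summation recovers the unit density closely; near the ends it underestimates, the usual SPH boundary deficiency.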

Solver framework

- Entities (made of particles)
- Solver components (do various things)
- Component manager (checks property requirements)
- Integrator
- Basic Solver

Parallel solver

- Parallel cell manager
- Parallel solver components

Parallel solver

[Figure: in the serial case, solver components operate on particles through a single CellManager; in the parallel case, each processor runs its own particles and solver components through a ParallelCellManager]

Challenges

- Particles move all over
- Fixed partitioning will not work
- Scale up
- Automatic load balancing?

Terminology: Cell

[Figure: the domain is divided into cells, each containing particles]

Terminology: Region

[Figure: regions are contiguous sets of cells owned by processors P1–P4; example cell coordinates such as (−1,−1,0), (1,1,0), (1,−1,0), and (2,0,0) are shown relative to the origin O]

Terminology: Domain decomposition

[Figure: the full domain decomposed into cells distributed among processors]

Approach

- Domain decomposition: Cells
- Cells: dynamically created/destroyed
- Processors manage Regions
- Cells moved to balance load
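The dynamic-cell idea boils down to mapping each particle position to an integer cell index, so cells come into existence only while particles occupy them. A minimal sketch (illustrative only; PySPH's CellManager is considerably more involved):

```python
import math

def cell_id(x, y, z, cell_size):
    """Map a position to the integer id of the axis-aligned cubical
    cell containing it (origin assumed at 0)."""
    return (math.floor(x / cell_size),
            math.floor(y / cell_size),
            math.floor(z / cell_size))

def bin_particles(positions, cell_size):
    """Build the cell -> particle-indices map on the fly: a cell exists
    only if some particle falls inside it."""
    cells = {}
    for i, (x, y, z) in enumerate(positions):
        cells.setdefault(cell_id(x, y, z, cell_size), []).append(i)
    return cells

positions = [(0.05, 0.0, 0.0), (0.15, 0.0, 0.0), (0.06, 0.0, 0.0)]
cells = bin_particles(positions, cell_size=0.1)
```

Rebinning after each step is cheap, which is what lets cells be created, destroyed, and handed between processors as particles move.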

Load Balancing

Donate cells:

- Boundary cells
- Cells with least number of local neighbors
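One way to realize this heuristic: count how many of a cell's 26 surrounding cells the same processor owns, and donate the cells with the fewest locally owned neighbors, since those sit on the boundary of the region and are cheapest to move. A sketch (illustrative, not PySPH's actual load balancer):

```python
def local_neighbor_count(cell, my_cells):
    """How many of this cell's 26 surrounding cells we also own."""
    cx, cy, cz = cell
    count = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) != (0, 0, 0) and \
                        (cx + dx, cy + dy, cz + dz) in my_cells:
                    count += 1
    return count

def pick_donation_candidates(my_cells, n=1):
    """Donate the cells with the fewest local neighbors: these lie on
    the region boundary, so moving them keeps the region compact."""
    return sorted(my_cells, key=lambda c: local_neighbor_count(c, my_cells))[:n]

# A 1x3 strip of cells: the two end cells are the donation candidates.
my_cells = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}
candidates = pick_donation_candidates(my_cells, n=2)
```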

Load balancing

[Plots: load balancing behavior and parallel efficiency measurements]

Current capabilities

- Fully automatic, load balanced, parallel framework
- Relatively easy to script
- Good performance
- Relatively easy to extend
- Free surface flows

Immediate improvements

- Solver framework redesign
- More documentation
- Reduce dependency on TVTK for easier installation
- Improved testing on various platforms
- Gas dynamics (coming soon)
- Solid mechanics (next year)

Thank you!