Accelerator Modeling: Present capabilities, future prospects, and applications to the HEP Program (with emphasis on SciDAC)
Robert D. Ryne, Lawrence Berkeley National Laboratory, with contributions from Kwok Ko (SLAC) and Warren Mori (UCLA)


Transcript of the presentation to the HEPAP AARD Subpanel, December 21, 2005

Page 1

Accelerator Modeling: Present capabilities, future prospects, and applications to the HEP Program

(with emphasis on SciDAC)

Presented to the HEPAP AARD Subpanel December 21, 2005

Robert D. Ryne

Lawrence Berkeley National Laboratory

with contributions from

Kwok Ko (SLAC) and Warren Mori (UCLA)

Page 2

SciDAC Accelerator Science & Technology (AST) Project: Overview

• Goals:
  — Develop new generation of parallel accelerator modeling codes to solve the most challenging and important problems in 21st century accel S&T
  — Apply the codes to improve existing machines, design future facilities, help develop advanced accelerator concepts
• Sponsored by DOE/SC HEP in collaboration with ASCR
• Primary customer: DOE/SC, primarily its HEP, also NP programs
  — codes have also been applied to BES projects
• Funding: $1.8M/yr (HEP), $0.8M/yr (ASCR/SAPP)
  — Strong leveraging from SciDAC ISICs
• Duration: Currently in 5th (final) year
• Participants:
  — Labs: LBNL, SLAC, FNAL, BNL, LANL, SNL
  — Universities: UCLA, USC, UC Davis, RPI, Stanford
  — Industry: Tech-X Corp.

Page 3

SciDAC Accelerator Science & Technology (AST) Project: Overview cont.

• Management:
  — K. Ko and R. Ryne, co-PIs
  — Senior mgmt team: K. Ko, R. Ryne, W. Mori, E. Ng
• Oversight and reviews by DOE/HEP program mgrs:
  — Vicky White
  — Irwin Gaines
  — Craig Tull (present)
• The project must:
  — advance HEP programs (R. Staffin)
  — through synergistic collaboration w/ ASCR that advances the state-of-the-art in advanced scientific computing (M. Strayer)

Page 4

SciDAC AST Overview: Focus Areas

• Organized into 3 focus areas:
  — Beam Dynamics (BD), R. Ryne
  — Electromagnetics (EM), K. Ko
  — Advanced Accelerators (AA), W. Mori
• All supported by SciDAC Integrated Software Infrastructure Centers (ISICs) and ASCR Scientific Application Partnership Program (SAPP)
• Most funding goes to BD and EM; AA is very highly leveraged

Page 5

Why do we need SciDAC???

• Why can’t our community do code development by ourselves, as we have done in the past?

• Why can’t it be done just as an activity tied to accelerator projects?

• Why can’t our community follow “business as usual”?

Page 6

Computational Issues

• Large scale:
  — simulations approaching a billion particles, mesh points
  — Huge data sets
  — Advanced data mgmt & visualization
• Extremely complex 3D geometry (EM codes)
• Complicated hardware with multiple levels of memory hierarchy, > 100K processors
• Parallel issues (a load-balancing sketch follows below):
  — Load balancing
  — Parallel sparse linear solvers
  — Parallel Poisson solvers
  — Particle/field managers
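To make the load-balancing item above concrete, here is a minimal illustrative sketch, in plain numpy with no MPI and not taken from any SciDAC code, of choosing a 1D domain decomposition so that each processor owns roughly the same number of particles when the beam density is strongly non-uniform. The processor count and particle distribution are hypothetical.

import numpy as np

# Hypothetical, strongly clustered particle coordinates (e.g. a bunched beam)
rng = np.random.default_rng(0)
z = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
nproc, nbins = 64, 4096

# Histogram the particles on a fine grid, then cut the cumulative count into
# nproc nearly equal pieces; each piece becomes one processor's subdomain.
counts, edges = np.histogram(z, bins=nbins)
cumulative = np.cumsum(counts)
targets = np.arange(1, nproc) * counts.sum() / nproc
cut_bins = np.searchsorted(cumulative, targets)
boundaries = np.concatenate(([z.min()], edges[cut_bins + 1], [z.max()]))

# Check the balance: max particles per subdomain vs. the ideal average (~1.0)
per_proc, _ = np.histogram(z, bins=boundaries)
print(per_proc.max() / per_proc.mean())

Production codes repeat this kind of repartitioning as the beam evolves, which is why dynamic load balancing appears again in the advanced-accelerator code list later in the talk.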

Page 7

Close collaboration w/ ASCR researchers (ISICs, SAPP) is essential

• A hallmark of the SciDAC project is that it is built upon collaboration of application/computational scientists with mathematicians, computer scientists, parallel performance experts, visualization specialists, and other IT experts.

• The AST project collaborates with several ISICs:
  — TOPS (Terascale Optimal PDE Solvers)
  — APDEC (Applied Partial Differential Equations Center)
  — TSTT (Terascale Simulation Tools & Technologies)
  — PERC (Performance Evaluation Research Center)

Page 8

Overview of the 3 focus areas

• Beam Dynamics (BD)
• Electromagnetic Modeling (EM)
• Advanced Accelerators (AA)

Page 9

Overview of the 3 focus areas

• Beam Dynamics (BD)
• Electromagnetic Modeling (EM)
• Advanced Accelerators (AA)

Page 10

Page 11

SciDAC Codes: Beam Dynamics

• Set of parallel, 3D multi-physics codes for modeling beam dynamics in linacs, rings, and colliders (a toy PIC step is sketched below):
  — IMPACT suite: includes 2 PIC codes (s-based, t-based); mainly for electron and ion linacs
  — BeamBeam3D: strong-weak, strong-strong, multi-slice, multi-bunch, multi-IP, head-on, crossing-angle, long-range
  — MaryLie/IMPACT: hybrid app that combines MaryLie + IMPACT
  — Synergia: multi-language, extensible framework; hybrid app involving portions of IMPACT + MXYZPTLK
  — Langevin3D: particle code for solving the Fokker-Planck equation from first principles
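For readers unfamiliar with the PIC method these codes share, here is a toy, self-contained sketch of one cycle of a 1D electrostatic particle-in-cell update (deposit, field solve, gather, push) in normalized units on a periodic domain. It is illustrative only and is not code from the IMPACT suite; the grid size, particle data, and time step are hypothetical.

import numpy as np

# Toy parameters (hypothetical): periodic 1D domain, normalized units
ng, L, np_part, dt = 128, 1.0, 10_000, 0.01
dx = L / ng
qm = -1.0                        # charge-to-mass ratio of the particles
q_macro = -L / np_part           # charge per macroparticle (total charge -L)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, np_part)          # positions
v = rng.normal(0.0, 0.1, np_part)         # velocities

def pic_step(x, v):
    # 1) Deposit charge onto the grid with linear (cloud-in-cell) weights
    cell = np.floor(x / dx).astype(int) % ng
    frac = x / dx - np.floor(x / dx)
    rho = np.zeros(ng)
    np.add.at(rho, cell, q_macro * (1.0 - frac) / dx)
    np.add.at(rho, (cell + 1) % ng, q_macro * frac / dx)
    rho -= rho.mean()                     # neutralizing ion background

    # 2) Solve Poisson's equation spectrally (eps0 = 1): -k^2 phi_k = -rho_k
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -d(phi)/dx

    # 3) Gather the field back to the particles with the same weights
    E_p = (1.0 - frac) * E[cell] + frac * E[(cell + 1) % ng]

    # 4) Push (leapfrog kick + drift) and apply periodic boundaries
    v_new = v + qm * E_p * dt
    x_new = (x + v_new * dt) % L
    return x_new, v_new

for _ in range(10):
    x, v = pic_step(x, v)

The s-based and t-based IMPACT codes differ in whether the push uses path length or time as the independent variable, but the deposit/solve/gather/push cycle is the same basic pattern.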

Page 12

The IMPACT suite is becoming widely used: > 300 email contacts in FY05, > 100 already in FY06

SLAC, LBNL, LANL, Tech-X, FNAL, ANL, ORNL, MSU, BNL, JLab, Cornell, NIU, RAL, PSI, GSI, KEK

Page 13

SciDAC code development involves large, multidisciplinary teams. Example: MaryLie/IMPACT code

Page 14

Development, reuse, and synthesis of code components. Examples: Synergia, e-cloud capability

Page 15

New algorithms and methodologies are key. Examples: (1) high aspect ratio Poisson solver; (2) self-consistent Langevin/Fokker-Planck

[Figure: self-consistent diffusion coefficients vs. velocity, compared with the Spitzer approximation]

First-ever 3D self-consistent Langevin/Fokker-Planck simulation (a toy Langevin kick is sketched below).

[Figure: electric field error vs. distance] Error in the computed electric field of a Gaussian distribution of charge (σx = 1 mm and σy = 500 mm). Even using a grid size of 64x8192, the standard method (blue curve) is less accurate than the Integrated Green Function method (purple) on a 64x64 grid.
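For orientation only, the sketch below shows the stochastic particle update that underlies Langevin treatments of collisions: an Euler-Maruyama step with a drag coefficient and a diffusion coefficient. In a self-consistent calculation like the one highlighted above, those coefficients are recomputed from the evolving particle distribution rather than taken from the Spitzer approximation; here they are hypothetical constants and the code is not Langevin3D.

import numpy as np

def langevin_kick(v, dt, nu_drag, D, rng):
    """One Euler-Maruyama step: dv = -nu_drag * v * dt + sqrt(2 D dt) * N(0,1)."""
    noise = rng.normal(size=v.shape)
    return v - nu_drag * v * dt + np.sqrt(2.0 * D * dt) * noise

# Hypothetical macroparticle velocities and coefficients (normalized units)
rng = np.random.default_rng(1)
v = rng.normal(scale=1.0, size=(100_000, 3))
for _ in range(5000):
    v = langevin_kick(v, dt=1.0e-3, nu_drag=0.5, D=0.25, rng=rng)

# The velocity spread relaxes toward sqrt(D / nu_drag) per axis
print(v.std(axis=0))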

Page 16

SciDAC beam dynamics applications benefit DOE/SC programs, esp. HEP

• Beam-beam simulation of Tevatron, PEP-II, LHC, RHIC
• ILC damping rings (space charge, wigglers)
• FNAL Booster losses
• CERN PS benchmark study
• RIA driver linac modeling
• SNS linac modeling
• LCLS photoinjector modeling
• CERN SPL (proposed proton driver) design
• J-PARC commissioning
• Publications:
  — 23 refereed papers since 2001 (including 5 Phys. Rev. Lett., 10 PRST-AB, 4 NIM-A, 2 J. Comp. Phys., Computer Physics Comm.), plus numerous conference proceedings papers
• USPAS course on computational methods in beam dynamics

Page 17

LHC beam-beam simulation: νx1 = νx2 = νy1 = νy2 = 0.31, ξ0 = –0.0034

First-ever 1M particle, 1M turn strong-strong b-b simulation (J. Qiang, LBNL)

Examples: Collider modeling using BeamBeam3D

PEP-II luminosity calculation shows importance of multi-slice modeling (J. Qiang, Y. Cai, SLAC; K. Ohmi, KEK)

Code scalability depends strongly on parallelization methodology (J. Qiang)

Parameter studies of antiproton lifetime in Tevatron
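To give a flavor of the physics inside such beam-beam simulations, here is a toy weak-strong kick from a round Gaussian opposing bunch, using the standard textbook formula rather than anything taken from BeamBeam3D; the bunch population, beam size, and energy below are hypothetical.

import numpy as np

r_e = 2.8179403262e-15  # classical electron radius [m]

def beambeam_kick(x, y, N, sigma, gamma, r0=r_e):
    """Weak-strong kick from a round Gaussian bunch:
    dr' = -(2 N r0 / gamma) * (1 - exp(-r^2 / (2 sigma^2))) / r  (attractive)."""
    r2 = np.asarray(x, dtype=float) ** 2 + np.asarray(y, dtype=float) ** 2
    small = r2 < 1e-30
    factor = np.where(small, 1.0 / (2.0 * sigma**2),
                      (1.0 - np.exp(-r2 / (2.0 * sigma**2))) / np.where(small, 1.0, r2))
    dxp = -2.0 * N * r0 / gamma * x * factor
    dyp = -2.0 * N * r0 / gamma * y * factor
    return dxp, dyp

# Hypothetical round-beam collider parameters; kick on a particle 1 sigma off axis
dxp, dyp = beambeam_kick(x=1.0e-4, y=0.0, N=1.0e10, sigma=1.0e-4, gamma=6000.0)
print(dxp, dyp)  # roughly -3.7e-05 rad, 0.0

Strong-strong simulations such as the million-particle LHC run above instead compute the fields of both beams self-consistently from their particle distributions each turn.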

Page 18

ILC damping ring modeling using ML/I

Results of MaryLie/IMPACT simulations of an ILC “dog-bone” damping ring (DR) design showing space-charge induced emittance growth using different space-charge models. Space charge is important for the ILC DR in spite of the high energy because of the combination of small emittance and large (16 km) circumference. Top (nonlinear space charge model): the beam exhibits small emittance growth. Bottom (linear space charge model): the beam exhibits exponential growth due to a synchro-betatron resonance. The instability is a numerical artifact caused by the simplified (linear) space-charge model. (M. Venturini, LBNL)

Page 19

FNAL booster modeling using Synergia

FNAL booster simulation results using Synergia showing the merging of 5 microbunches. SciDAC team members are working closely with experimentalists at the booster to help understand and improve machine performance. (P. Spentzouris and J. Amundson, FNAL; J. Qiang and R. Ryne, LBNL)

Page 20

Beam Dynamics under SciDAC 2 (HEP program)

• Support/maintain/extend successful codes developed under SciDAC 1 (BD, EM, AA)

• Develop new capabilities to meet HEP priorities: LHC, ILC, Tevatron, PEP-II, FNAL Main Injector, Booster, proton driver
  — Self-consistent 3D simulation of e-cloud, e-cooling, IBS, CSR
  — Start-to-end modeling with all relevant physical effects

• Enable parallel, multi-particle beam dynamics design & optimization

• Performance and scalability optimization on platforms up to the petascale (available by the end of the decade)

• Couple parallel beam dynamics codes to commissioning, operations, and beam experiments

Page 21

Overview of the 3 focus areas

• Beam Dynamics (BD)
• Electromagnetic Modeling (EM)
• Advanced Accelerators (AA)

Page 22

SciDAC AST – Electromagnetics

Under SciDAC AST, the Advanced Computations Dept. at SLAC is in charge of the Electromagnetics component to:

• Develop a comprehensive suite of parallel electromagnetic codes for the design and analysis of accelerators (Ron's talk)

• Apply the new simulation capability to accelerator projects across SC, including those in HEP, NP, and BES (Ron's talk)

• Advance computational science to enable terascale computing through ISICs/SAPP collaborations (this talk)

Page 23

ACD is working with the TOPS, TSTT, PERC ISICs as well as SAPP researchers on 6 computational science projects involving 3 national labs and 6 universities.

ACD’s ISICs/SAPP Collaborations

Parallel Meshing – TSTT (Sandia, U Wisconsin/PhD thesis)

Adaptive Mesh Refinement – TSTT (RPI)

Eigensolvers – TOPS (LBNL), SAPP (Stanford/PhD thesis, UC Davis)

Shape Optimization – TOPS (UT Austin, Columbia, LBNL), TSTT (Sandia, U Wisconsin)

Visualization – SAPP (UC Davis/PhD thesis)

Parallel Performance – PERC (LBNL, LLNL)

Page 24

Parallel Meshing & Adaptive Mesh Refinement

[Figure: mesh partitioned across processors 1-4]

Parallel meshing is needed for generating LARGE meshes to model multiple cavities in the ILC superstructure & cryomodule.

[Figures: RIA RFQ frequency convergence and Q convergence vs. number of unknowns; frequency in the 54.3-55.2 MHz range, wall-loss Q in the 5750-6100 range]

Adaptive Mesh Refinement improves the accuracy and convergence of frequency and wall-loss Q calculations.

Page 25

Eigensolvers & Shape Optimization

Omega3P

[Diagram: Omega3P eigensolver options for lossless and lossy materials, periodic structures, and external coupling: ESIL, ISIL with refinement, Implicit Restarted Arnoldi, SOAR, and a self-consistent loop]

Complex eigensolver for treating external coupling is essential for computing HOM damping in ILC cavities. (An illustrative sparse eigensolver sketch follows below.)

Shape optimization will replace the manual, iterative process of designing cavities to meet specific goals subject to constraints.

[Diagram: shape-optimization loop: geometric model, meshing, Omega3P, sensitivity, optimization; meshing sensitivity needed only for discrete sensitivity]
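As a small-scale illustration of the underlying numerical task, and not of Omega3P itself, the sketch below solves a sparse generalized eigenproblem K x = λ M x with shift-invert Lanczos, which is how finite-element cavity eigensolvers target modes near a chosen frequency. The 1D Laplacian toy matrices and the shift are hypothetical stand-ins for real stiffness and mass matrices.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy stiffness matrix K (1D Dirichlet Laplacian) and trivial mass matrix M
n = 2000
h = 1.0 / (n + 1)
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2
M = sp.identity(n, format="csc")

# Shift-invert: find the eigenvalues nearest the target sigma (an interior mode)
sigma = 500.0
vals, vecs = eigsh(K, k=5, M=M, sigma=sigma, which="LM")
print(np.sort(vals))

# Lossy materials and external coupling lead to complex or quadratic
# eigenproblems (hence SOAR in the diagram above); this toy stays real-symmetric.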

Page 26

Visualization & Parallel Performance

Visualization is critical to mode analysis in complex 3D cavities, e.g. mode rotation effects

[Figures: solve & postprocess time breakdown; communication pattern]

Parallel Performance studies are needed to maximize code efficiency and optimize use of computing resources.

Page 27

Proposed Projects for SciDAC 2

• Parallel adaptive h-p-q refinement, where h is the mesh size, p is the order of the FE basis, and q is the order of the geometry model
• Parallel shape optimization (goals w/ constraints) and prediction (cavity deformations from HOM measurements)
• Parallel particle simulation on unstructured grids for accurate device modeling (RF guns, klystrons)
• Integrated electromagnetics/thermal/mechanical modeling for complete design and engineering of cavities
• Parallel, interactive visualization cluster for mode analysis and particle simulations

SLAC will develop the NEXT level of simulation tools for NEXT-generation SC accelerators (ILC, LHC, RIA, SNS) by continuing to advance computational science in collaboration with the ISICs/SAPP component of SciDAC.

Page 28

Overview of the 3 focus areas

• Beam Dynamics (BD)
• Electromagnetic Modeling (EM)
• Advanced Accelerators (AA)

Page 29

Recent advances in modeling advanced accelerators: plasma based acceleration and e-clouds

W. B. Mori, C. Huang, W. Lu, M. Zhou, M. Tzoufras, F. S. Tsung, V. K. Decyk (UCLA); D. Bruhwiler, J. Cary, P. Messmer, D. A. Dimitrov, C. Nieter (Tech-X); T. Katsouleas, S. Deng, A. Ghalam (USC); E. Esarey, C. Geddes (LBL); J. H. Cooley, T. M. Antonsen (U. Maryland)

Page 30

Accomplishments and highlights: Code development

• Four independent high-fidelity particle-based codes:
  — OSIRIS: fully explicit PIC
  — VORPAL: fully explicit PIC + ponderomotive guiding center
  — QuickPIC: quasi-static PIC + ponderomotive guiding center
  — UPIC: framework for rapid construction of new codes; QuickPIC is based on UPIC (FFT based)

• Each code or framework is fully parallelized. They each have dynamic load balancing and particle sorting (a sorting sketch follows below). Each production code has ionization packages for more realism. Effort was made to make the codes scale to 1000+ processors.

• Highly leveraged
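A minimal sketch of the particle sorting mentioned above, illustrative only and not taken from OSIRIS, VORPAL, QuickPIC, or UPIC: particles are reordered by grid cell so that the deposit and gather loops touch memory contiguously. The grid dimensions and particle arrays are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
nx, ny, npart = 64, 64, 100_000
x = rng.uniform(0.0, nx, npart)   # particle positions in grid units
y = rng.uniform(0.0, ny, npart)
vx = rng.normal(size=npart)
vy = rng.normal(size=npart)

# Cell index of each particle (row-major ordering of cells)
cell = np.floor(y).astype(int) * nx + np.floor(x).astype(int)

# Reorder all particle arrays so particles in the same cell are contiguous
order = np.argsort(cell, kind="stable")
x, y, vx, vy, cell = x[order], y[order], vx[order], vy[order], cell[order]

# After sorting, counts per cell give the start offsets used by the deposit loop
counts = np.bincount(cell, minlength=nx * ny)
offsets = np.concatenate(([0], np.cumsum(counts)))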

Page 31

Full PIC: OSIRIS and VORPAL

• Successfully applied to various LWFA and PWFA problems
• Scale well to 1,000s of processors

[Figures: parallel speedup s(N); colliding laser pulses; particle beams; self-ionized particle-beam wake; 3D LWFA simulation]

Page 32

Quasi-static PIC: QuickPIC

Code features:
• Based on the UPIC parallel object-oriented plasma simulation Framework

Model features:
• Highly efficient quasi-static model for beam drivers (see the coordinate sketch below)
• Ponderomotive guiding center + envelope model for laser drivers
• Can be 100+ times faster than conventional PIC with no loss in accuracy
• ADK model for field ionization

Applications:
• Simulations for PWFA experiments E157/162/164/164X/167
• Study of the electron cloud effect in the LHC
• Plasma afterburner design

[Figures: afterburner; hosing; E164X]
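For context, this is the coordinate change on which quasi-static models of this kind are usually built, written in one common sign convention; it is background for the bullet above, not a specification of QuickPIC's internals:

\[
\xi = ct - z, \qquad s = z, \qquad
\left.\frac{\partial}{\partial z}\right|_{t} = \frac{\partial}{\partial s} - \frac{\partial}{\partial \xi}, \qquad
\left.\frac{\partial}{\partial t}\right|_{z} = c\,\frac{\partial}{\partial \xi}
\]

The quasi-static approximation drops ∂/∂s relative to ∂/∂ξ for the plasma response, so the wake can be solved slice by slice in ξ while the beam or laser driver advances in s with a much larger step; that separation of time scales is the source of the 100+ times speedup quoted above.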

Page 33

Recent highlights: LWFA simulations using full PIC

• 3 Nature papers (September 2004) in which mono-energetic electron beams with energies near 100 MeV were measured; supporting PIC simulations were presented. SciDAC members were collaborators on two of these Nature publications, and the SciDAC codes OSIRIS and VORPAL were used (VORPAL result on the cover).

• Phys. Rev. Lett. by Tsung et al. (September 2004), in which a peak energy of 0.8 GeV and a mono-energetic beam with a central energy of 280 MeV were reported from full-scale 3D PIC simulations.

Page 34

Modeling the self-ionized PWFA experiment (E164X) with QuickPIC

[Figure: E164X experiment vs. QuickPIC simulation, z in μm. Measured ΔEmax ~ 4 GeV (initial energy chirp considered); simulated ΔEmax ~ 5 GeV − 0.5 GeV (betatron radiation) = 4.5 GeV.]

[Schematic: E164X beamline, located in the FFTB (not to scale, ~25 m). Electron beam: N = 1-2 × 10^10, σz = 0.1 mm, E = 30 GeV; ionizing laser pulse (193 nm); Li plasma with ne ≈ 6 × 10^15 cm^-3, L ≈ 30 cm; Cerenkov radiator, streak camera (1 ps resolution), X-ray diagnostic, optical transition radiators, dump, spectrometer.]

Page 35

Afterburner simulation: 0.5 TeV to 1 TeV in 28 meters

[Figures: wakefield at s = 0 m and at s = 28.19 m]

Simulation done with QuickPIC in 5,000 node-hours; a full PIC run would have taken 5,000,000 node-hours!

Page 36

Vision for the future: SciDAC 2
High-fidelity modeling of 0.1 to 1 TeV plasma accelerator stages

• Physics goals:
  — A) Modeling 1 to 10 GeV plasma accelerator stages: predicting and designing near-term experiments
  — B) Extend plasma accelerator stages to the 250 GeV - 1 TeV range: understand physics & scaling laws
  — C) Use plasma codes to definitively model e-cloud physics:
    • 30 minutes of beam circulation time in the LHC
    • ILC damping ring

• Software goals:
  — A) Add pipelining into QuickPIC: allow QuickPIC to scale to 1000's of processors
  — B) Add self-trapped particles into QuickPIC and the ponderomotive guiding center VORPAL packages
  — C) Improve numerical dispersion* in OSIRIS and VORPAL
  — D) Scale OSIRIS, VORPAL, and QuickPIC to 10,000+ processors
  — E) Merge reduced models and full models
  — F) Add circular and elliptical pipes* into QuickPIC and UPIC for e-cloud
  — G) Add mesh refinement* into QuickPIC, OSIRIS, VORPAL, and UPIC
  — H) Develop better data analysis and visualization tools for complicated phase-space data**

• *Working with the APDEC ISIC
• **Working with the visualization center

Page 37

In Conclusion…

• Q: What is the scope of our research in regard to HEP short/medium/long-range applications?

• A: It is mainly short/medium.
  — Capabilities have been developed and codes applied to:
    • Short: PEP-II, Tevatron, FNAL Booster, LHC
    • Medium: ILC
    • Long: exploration of advanced accelerator concepts
      – these activities are highly leveraged and represent 10% of the SciDAC AST budget

Page 38

Final remarks

• Future HEP facilities will cost ~$0.5B to ~$10B
  — High-end modeling is crucial to:
    • Optimize designs
    • Reduce cost
    • Reduce risk
  — Given the magnitude of the investment in the facility, the $1.8M investment in SciDAC is tiny, but the tools are essential

• Laser/plasma systems are extraordinarily complex
  — High-fidelity modeling, used in concert with theory & experiment, is essential to understand the physics and help realize the promise of advanced accelerator concepts

Page 39

Acronyms used

• SciDAC: Scientific Discovery through Advanced Computing
• AST: SciDAC Accelerator Science & Technology project
• ASCR: Office of Advanced Scientific Computing Research
• BD: Beam Dynamics activities of SciDAC AST
• EM: Electromagnetics activities of SciDAC AST
• AA: Advanced Accelerator activities of SciDAC AST
• ISIC: SciDAC Integrated Software Infrastructure Center
  — TOPS: Terascale Optimal PDE Solvers center
  — TSTT: Terascale Simulation Tools and Technologies center
  — APDEC: Applied Partial Differential Equations center
  — PERC: Performance Evaluation Research Center
• SAPP: Scientific Application Partnership Program (ASCR-supported researchers affiliated with specific SciDAC projects)