High Performance Computing at Mercury Marine
Arden Anderson
Mercury Marine Product Development and Engineering
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
• Mercury Marine began as the Kiekhaefer Corp. in 1939
– Founded by E. Carl Kiekhaefer
• Mercury acquired by Brunswick Corporation in 1961
– Leader in active recreation: marine engines, boating, bowling, billiards, and fitness
• Today, USA’s Only Outboard Manufacturer
• Employs 4,200 People Worldwide
• Fond du Lac, WI campus includes
– Corporate Offices
– Technology Center, R&D Offices
– Outboard Manufacturing (Casting, Machining, Assembly to Distribution)
(Image captions: Mercury’s 1st Patent; Mercury Marine founded in Cedarburg, WI in 1939)
The Most Comprehensive Product Offering In Recreational Marine
• Outboard Engines (2.5 hp to 350 hp) – all new or updated in the last 5 years
• Sterndrive Engines (135 hp to 1250 hp) – all updated to the new emissions standard in the last year
• Props / Rigging / P&A
• Land ‘N’ Sea / Attwood
Diversified, Quality Products, Connected to Parent Corporation
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Poll Question
3) How many compute cores do you use for your largest jobs?
a. Less than 4
b. 4-16
c. 17-64
d. More than 64
Standard FEA
• Fatigue & Hardware Correlation
• Non-Linear Gaskets
• Sub-Modeling
• System Assemblies with Contact
Explicit FEA
• System level submerged object impact
– Method development was presented at the 2008 Abaqus Users Conference
Test = 35 MPH, CFD = 33 MPH
CFD
• Transient Internal Flow
– Flow distribution correlated to hardware
• External Flow
– Moving mesh propeller
– Cavitation onset
• Two Phase Flow
– Vessel drag, heave, and pitch
Heat Transfer
Enclosure Air Flow & Component Temperatures
Conjugate Heat Transfer for Temperature Distribution & Thermal Fatigue
Overview of Mercury Marine Design Analysis Group
Simulation Methods
• Structural Analysis
– Implicit Finite Element
– Explicit Finite Element
• Dynamic Analysis
• Fluid Dynamics
• Heat Transfer
• Engine Performance

Experience
• Aerospace
• Automotive and Off-Highway
• Composites
• Dynamic Impact and Weapons
• Gas and Diesel Engine
• Hybrid
• Marine

Analyst Workstations
• Pre and post processing
• Dual Xeon 5160 (4 core), 3.0 GHz
• Up to 16 GB RAM
• 64-bit Windows XP

HPC System
• FEA and CFD solvers
• 80 cores (10 nodes x 8 cores/node)
• Up to 40 GB RAM per node
• InfiniBand switch
• Windows HPC Server 2008
Poll Question
3) How many compute cores do you use for your largest jobs?
a. Less than 4
b. 4-16
c. 17-64
d. More than 64
This slide is a placeholder for coming back to poll question responses
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Evolution of Computing Systems at Mercury Marine
2004
• Pre and post processing on Windows PC, 2 GB RAM
• Computing on HP Unix workstations
– Single CPU
– 4-8 GB RAM
• ~$200k for 10 boxes
• Memory limitations on pre-post and limited model size
• Minimal parallel processing (CFD only)

2005
• Updated processing capabilities with Linux compute server
– 4-CPU Itanium, 32 GB RAM for FEA
– 6-CPU Opteron for CFD
• $125k server
• Increased model size with larger memory
• Parallel processing for FEA & CFD
• ~Same number of processors as the previous system, with large increases in speed and capability

2007
• Updated pre-post machines (2004 PCs) with 2 x 2-core Linux workstations
– 3.0 GHz
– 4-16 GB RAM
• More desktop memory for pre-processing
• Increased computing by clustering the new pre-post machines
• Small and mid-sized standard FEA on pre-post machines using multiple CPUs

Introduce Windows HPC Server in 2009…
2009 HPC Decision
Influencing Factors
• Emphasis on minimizing analysis time over maximizing computer & software utilization
• Limited availability of server-room Linux support
• Cost conscious
• Easy to implement
• Machine would run only Abaqus

Goals
• Reduce large run times by 2.5x or more
• Ability to handle larger future runs
• System must be supported by in-house IT
• Re-evaluate the software-versus-hardware balance
Why Windows HPC Server?
• Limited access to Unix/Linux support group
• Unix/Linux support group has database expertise – little experience in high performance computing
• HPC projects lower priority than company database projects
• Larger Windows platform support group
• Benchmarking showed competitive run times
• Intuitive use and easy management– Job Scheduler– Add Node Wizard– Heat Map
Mercury HPC System Detail, 2009
• Windows Server 2008 HPC Edition
• 32 Core Server + Head Node
• 4 Compute nodes with 8 cores per node
• 40 GB/Node – 160 GB total
Head Node (X3650)
– Processors: 2 x E5440 Xeon quad-core, 2.8 GHz / 12 MB L2 / 1333 MHz bus
– Memory: 16 GB, 667 MHz
– Hard drives: 6 x 1.0 TB SATA in RAID 10

4 Compute Nodes (X3450)
– Processors: 2 x E5472 Xeon quad-core, 3.0 GHz / 12 MB L2 / 1600 MHz bus
– Memory: 40 GB, 800 MHz
– Drives: 2 x 750 GB SATA, RAID 0

Interconnect: GigE switch
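As a quick cross-check of the headline numbers, here is a minimal Python sketch (all figures taken from this slide) that tallies the compute cores and memory of the 2009 cluster from the per-node specs; the head node is excluded from the compute-core count, matching the "32 Core Server + Head Node" figure.

```python
# Minimal sketch: aggregate compute cores and memory for the 2009 cluster,
# using only the per-node figures listed on the slide.
compute_nodes = [
    # model, node count, sockets per node, cores per socket, RAM (GB) per node
    {"model": "X3450", "count": 4, "sockets": 2, "cores_per_socket": 4, "ram_gb": 40},
]

total_cores = sum(n["count"] * n["sockets"] * n["cores_per_socket"] for n in compute_nodes)
total_ram_gb = sum(n["count"] * n["ram_gb"] for n in compute_nodes)

print(f"Compute cores: {total_cores}")      # 4 nodes x 2 sockets x 4 cores = 32
print(f"Compute RAM:   {total_ram_gb} GB")  # 4 nodes x 40 GB = 160 GB
```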
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Justification
• Request from management to reduce run turnaround time – some run times have reached 1-2 weeks as runs have become more detailed and complex
• Quicker feedback to avoid late tooling changes
• Need to minimize manpower downtime
• Large software costs – need to maximize software investment
Budget Breakdown
• Computers are small portion of budget
• Budget skewed towards software over hardware
• Rebalancing hardware/software in 2009 slightly shifted this breakdown
(Pie charts: 2009 and 2010 budget split among Manpower, Software, Computers, and Other)
Abaqus Token Balancing
• Previous Abaqus token count was high to enable multiple simultaneous jobs on smaller machines
• Re-balance tokens from several small jobs to fewer large jobs
Tokens required per job size:

CPUs    Tokens
4       8
8       12
16      16
32      21

Original license pool: 45 tokens
• 1 x 8-CPU job + 4 x 4-CPU jobs (44 tokens)

New license pool: 40 tokens (example job mixes)
• 2 x 16-CPU jobs + 1 x 4-CPU job (40 tokens)
• 1 x 32-CPU job + 2 x 4-CPU jobs (37 tokens)
• 3 x 8-CPU jobs (36 tokens)
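The CPU-to-token mapping above matches the commonly quoted Abaqus token scaling rule, tokens = floor(5 × N^0.422); treating that rule as an assumption (it is not stated on the slide), the following minimal Python sketch reproduces the table and checks whether a mix of simultaneous jobs fits within a given token pool.

```python
import math

def abaqus_tokens(cores: int) -> int:
    """Tokens needed for an N-core job, assuming the widely quoted
    scaling rule tokens = floor(5 * N**0.422). The slide's table
    (4 -> 8, 8 -> 12, 16 -> 16, 32 -> 21) is consistent with this rule."""
    return math.floor(5 * cores ** 0.422)

def mix_fits(core_counts, pool):
    """True if the simultaneous jobs (list of core counts) fit in the pool."""
    return sum(abaqus_tokens(c) for c in core_counts) <= pool

# Original pool: 45 tokens, sized for several small simultaneous jobs.
print(mix_fits([8, 4, 4, 4, 4], pool=45))  # True: 12 + 4*8 = 44 tokens

# New pool: 40 tokens, re-balanced toward fewer, larger jobs.
print(mix_fits([16, 16, 4], pool=40))      # True: 16 + 16 + 8 = 40 tokens
print(mix_fits([32, 4, 4], pool=40))       # True: 21 + 8 + 8 = 37 tokens
```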
HPC System Costs (2009)
• System Buy Price with OS: $37,000
• 2 Year Lease Price: $16,000 per year
• Software re-scaled to match new system
• Incremental cost: $7,300 per year
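One way to read these figures (an interpretation, not stated on the slide) is that the $7,300 per year incremental cost is the annual lease payment offset by savings from re-scaling the software licenses, which would imply roughly $8,700 per year in software savings. A minimal sketch of that arithmetic:

```python
# Hedged sketch of the 2009 cost figures. Only buy_price, lease_per_year,
# and incremental_per_year come from the slide; implied_software_savings is
# an inference (lease minus incremental), not a number Mercury reported.
buy_price = 37_000             # system purchase price with OS, USD
lease_per_year = 16_000        # 2-year lease, USD per year
incremental_per_year = 7_300   # net incremental cost after software re-scaling

implied_software_savings = lease_per_year - incremental_per_year
print(f"Implied annual software savings: ${implied_software_savings:,}")   # $8,700
print(f"Two-year lease total vs. buy price: ${2 * lease_per_year:,} vs ${buy_price:,}")
```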
Historic Productivity Increases
• Continual improvement in productivity
• Large increases in analysis complexity
(Charts: Productivity (Work / Budget) by year, 2005-2010)
Abaqus S4b Implicit Benchmark
• Cylinder Head Bolt-up
• 5,000,000 DOF
• 32 GB memory

Run Time in Hours
System | 4 CPU | 8 CPU | 16 CPU | 32 CPU
Mercury Itanium Server (Itanium 1.5 GHz, Gig-E, 32 GB) | 1.5 | – | – | –
Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node) | – | 0.50 | 0.61 | 0.38
Mercury “Real World” Standard FEA
• Block + Head + Bedplate
• 8,800,000 DOF
• 55 GB memory
• Preload + Thermal + Reciprocating Forces

Run Time in Hours (Days)
System | 4 CPU | 8 CPU | 16 CPU | 32 CPU
Mercury Itanium Server (Itanium 1.5 GHz, Gig-E, 32 GB) | 213 (9) | – | – | –
Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node) | – | 64 (3) | 37 (1.5) | 31 (1.3)

* Picture shown is of the Abaqus S4b benchmark model
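A small Python sketch computing the speedups implied by the table above, relative to the 213-hour run on the 4-CPU Itanium baseline (run times transcribed from the slide; the 8/16/32-core column assignment follows the table as reconstructed here):

```python
# Run times in hours for the ~8.8M-DOF "real world" implicit model,
# transcribed from the table above.
baseline_hours = 213.0                       # Mercury Itanium server, 4 CPUs
hpc_hours = {8: 64.0, 16: 37.0, 32: 31.0}    # Mercury HPC system

for cores, hours in hpc_hours.items():
    speedup = baseline_hours / hours
    print(f"{cores:>2} cores: {hours:5.1f} h  ({speedup:.1f}x faster than the old server)")
# Roughly 3.3x at 8 cores, 5.8x at 16 cores, and 6.9x at 32 cores.
```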
Mercury “Real World” Explicit FEA
• Outboard Impact
• 600,000 Elements
• dt = 3.5e-8s for 0.03s (857k increments)
Run Time in Hours
System | 8 CPU | 16 CPU | 32 CPU
Mercury Linux Cluster (4 nodes at 2 cores/node) | 58 | – | –
Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node) | 29.5 | 16 | 11
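As a sanity check on the increment count and to put the run times in perspective, here is a minimal Python sketch (figures transcribed from this slide) that recomputes the number of explicit time increments and the resulting increments-per-hour throughput at each core count.

```python
# Explicit outboard-impact run: 0.03 s of event time at dt = 3.5e-8 s.
event_time = 0.03     # seconds of simulated impact event
dt = 3.5e-8           # stable time increment, seconds

increments = round(event_time / dt)
print(f"Increments: {increments:,}")          # ~857,000, as stated on the slide

run_hours = {8: 29.5, 16: 16.0, 32: 11.0}     # Mercury HPC system run times
for cores, hours in run_hours.items():
    print(f"{cores:>2} cores: {increments / hours:,.0f} increments/hour")
```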
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Summary
• Mercury HPC has evolved over the last 5 years
• Each incremental step has led to greater throughput and increased capability, allowing us to better meet the demands of a fast-paced product development cycle
• Our latest HPC server has delivered improvements in run times as high as 8x at a very affordable price
• We expect further gains in meshing productivity as we re-size runs to the new computing system
Progress Continues: Mercury HPC System Detail, 2010 Updates
• Windows Server 2008 HPC Edition
• Add 48 cores to the existing server (combined total of 80 cores)
– 6 compute nodes with 8 cores per node
– 24 GB/node
• Now running FEA and CFD on HPC system (~70/30 split)
Head Node (X3650)
– Processors: 2 x E5440 Xeon quad-core, 2.8 GHz / 12 MB L2 / 1333 MHz bus
– Memory: 16 GB, 667 MHz
– Hard drives: 6 x 1.0 TB SATA in RAID 10

4 Compute Nodes (X3450)
– Processors: 2 x E5472 Xeon quad-core, 3.0 GHz / 12 MB L2 / 1600 MHz bus
– Memory: 40 GB, 800 MHz per node
– Drives: 2 x 750 GB SATA, RAID 0

6 Compute Nodes (X3550)
– Processors: 2 x E5570 Xeon quad-core, 3.0 GHz
– Memory: 24 GB RAM per node
– Drives: 2 x 500 GB SATA, RAID 0

Interconnect: InfiniBand switch
Thank You. Questions?
Contact Info and Links
• Arden Anderson– [email protected]
• Microsoft HPC Server Case Study
– http://www.microsoft.com/casestudies/Windows-HPC-Server-2008/Mercury-Marine/Manufacturer-Adopts-Windows-Server-Based-Cluster-for-Cost-Savings-Improved-Designs/4000008161
• Crash Prediction for Marine Engine Systems at the 2008 Abaqus Users Conference
– Available by searching the conference archives for Mercury Marine:
http://www.simulia.com/events/search-ucp.html