JDAT Production Hardware
Page 1 JSOC Review – 17 March 2005
JDAT Production Hardware
• Compute nodes (< 100)
– 2, 4, or 8 multicore processors per node
– 16+ GB per processor
– 64-bit Linux
• Database nodes (< 5)
– Oracle cluster
– 5 TB shared database volume
• I/O nodes (< 10)
– High-availability NFS server cluster
– Multiple Fibre Channel connections to non-shared disks and tape drives
– Multiple gigabit Ethernet connections to compute nodes
• RAID disk storage
– 400 TB initially
– 100 TB annual increment
– SATA drives (500 GB today)
• Tape archive
– Two PB-sized tape libraries initially
– ½ PB per library annual increment
– SAIT (500 GB, 30 MB/s today) or LTO (400 GB, 80 MB/s today)
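As a rough sanity check on the tape figures above, the cartridge counts per library work out as follows (a sketch assuming native, uncompressed capacities and taking "PB-sized" as exactly 1 PB):

```python
# Cartridge-count check for a 1 PB tape library.
# Assumption: native (uncompressed) capacities as quoted on the slide.
LIBRARY_CAPACITY_GB = 1_000_000  # "PB-sized" library, taken as 1 PB

SAIT_GB = 500  # SAIT cartridge, 500 GB today
LTO_GB = 400   # LTO cartridge, 400 GB today

sait_cartridges = LIBRARY_CAPACITY_GB // SAIT_GB  # 2000 cartridges
lto_cartridges = LIBRARY_CAPACITY_GB // LTO_GB    # 2500 cartridges

print(f"SAIT cartridges per PB library: {sait_cartridges}")
print(f"LTO cartridges per PB library:  {lto_cartridges}")
```

Either technology keeps a library in the low thousands of cartridges, well within the slot counts of large libraries of this era.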
Page 2
JDAT Prototype
Page 3
JDAT network
Page 4
Reality Check
• AIA/HMI combined data volume: 2 PB/yr = 60 MB/s
– read + write: ×2
– quick look + final: ×2
– one reprocessing: ×2
– 25% duty cycle: ×4
→ 2 GB/s (disk)
→ ½ GB/s (tape)
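The multipliers above compound as shown below (a sketch; the tape figure is reproduced by assuming the tape path is exempt from the 25% duty-cycle burst factor, which is not stated explicitly on the slide):

```python
# Compounding the Reality Check multipliers from the slide.
SECONDS_PER_YEAR = 365 * 24 * 3600
base_exact = 2e9 / SECONDS_PER_YEAR  # 2 PB/yr in MB/s ≈ 63.4
BASE_MB_S = 60                       # slide rounds to 60 MB/s

READ_WRITE = 2    # read + write
QUICK_FINAL = 2   # quick look + final
REPROCESSING = 2  # one reprocessing pass
DUTY_CYCLE = 4    # 25% duty cycle => 4x burst rate

# Disk sees all four factors; assume tape skips the duty-cycle factor
# (this assumption reproduces the slide's 1/2 GB/s tape figure).
disk_mb_s = BASE_MB_S * READ_WRITE * QUICK_FINAL * REPROCESSING * DUTY_CYCLE
tape_mb_s = BASE_MB_S * READ_WRITE * QUICK_FINAL * REPROCESSING

print(f"disk: {disk_mb_s} MB/s ~ 2 GB/s")   # 1920 MB/s
print(f"tape: {tape_mb_s} MB/s ~ 0.5 GB/s") # 480 MB/s
```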
• NFS over gigabit Ethernet: 50–100 MB/s per channel
– 4–8 channels per server, 5 servers (today)
• SAIT-1 native transfer rate: 25–30 MB/s per drive
– 10 SAIT-1 drives per library, 2 libraries (today)
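Aggregating the per-link rates above against the required sustained rates shows the "today" configuration is roughly adequate (a sketch using the slide's counts; real throughput depends on contention and striping):

```python
# Aggregate throughput check against the 2 GB/s (disk) and
# 1/2 GB/s (tape) requirements from the Reality Check slide.

# NFS over gigabit Ethernet: 50-100 MB/s per channel,
# 4-8 channels per server, 5 servers today.
nfs_low = 50 * 4 * 5     # 1000 MB/s
nfs_high = 100 * 8 * 5   # 4000 MB/s

# SAIT-1: 25-30 MB/s native per drive, 10 drives/library, 2 libraries.
tape_low = 25 * 10 * 2   # 500 MB/s
tape_high = 30 * 10 * 2  # 600 MB/s

print(f"NFS aggregate:  {nfs_low}-{nfs_high} MB/s (need ~2000)")
print(f"tape aggregate: {tape_low}-{tape_high} MB/s (need ~500)")
```

The tape drives just cover the ½ GB/s requirement at the low end, while the NFS aggregate spans the 2 GB/s disk requirement only toward the high end of its range.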
Page 5
HMI & AIA JSOC Architecture
[Architecture diagram. Components shown: MOC and DDS (GSFC, White Sands); Redundant Data Capture System and 30-Day Archive; HMI JSOC Pipeline Processing System at Stanford, with Catalog, Primary Archive, Offsite Archive, Offline Archive, Housekeeping Database, HMI & AIA Operations, and Data Export & Web Service; AIA Analysis System at LMSAL, with Local Archive, Quicklook Viewing, and High-Level Data Import; data consumers: Science Team, Forecast Centers, EPO/Public, and the World.]