Cyber Science Infrastructure in Japan - NAREGI Grid Middleware Version 1 and Beyond -
June 24, 2010
Kenichi Miura, Ph.D. (三浦 謙一)
Center for Grid Research and Development
National Institute of Informatics, Tokyo, Japan
-
Outline
• NAREGI Grid Middleware
• RENKEI Project
• Relation with the Next Generation Supercomputer Project
-
Hierarchical Computing Environment
(figure: three-tier hierarchy, labeled NLS / NIS / LLS)
• National Supercomputer Grid (Tokyo, Kyoto, Nagoya…) and the Next Generation Supercomputer System
• Domain-specific Research Organizations (IMS, KEK, NAOJ…) and their research communities
• Departmental Computing Resources and Laboratory-level PC Clusters
NAREGI Grid Middleware spans the tiers, with interoperability (GIN, EGEE, TeraGrid, etc.)
-
National Research Grid Initiative (NAREGI) Project:Overview
- Originally started as an R&D project funded by MEXT (FY2003–FY2007), with a budget of about 2 billion yen (~US$17M) in FY2003
- Promoted collaboration among national laboratories, universities, and industry in the R&D activities (IT and nano-science applications)
- Redirected as part of the Next Generation Supercomputer Development Project (FY2006– )
MEXT:Ministry of Education, Culture, Sports, Science and Technology
-
National Research Grid Initiative (NAREGI) Project: Goals
(1) To develop a Grid software system (R&D in grid middleware and the upper layer) as the prototype of a future Grid infrastructure for scientific research in Japan
(2) To provide a testbed to prove that a high-end Grid computing environment (100+ Tflop/s expected by 2007) can be practically utilized by the nano-science research community over the academic backbone network, SINET3
(3) To participate in international collaboration and interoperability efforts (U.S., Europe, Asia-Pacific) and to contribute to standardization activities (OGF GIN-RG, PGI-WG, etc.)
-
Organization of NAREGI
(figure: organization chart)
• MEXT (Ministry of Education, Culture, Sports, Science and Technology): policy and funding
• Center for Grid Research and Development (National Institute of Informatics): Project Leader Dr. K. Miura; grid middleware and upper-layer R&D; grid middleware integration and operation group
• Computational Nano-science Center (Institute for Molecular Science): Dir. Dr. F. Hirata; R&D on grand-challenge problems for grid applications (ISSP, Tohoku-U, AIST, Inst. Chem. Research, KEK, etc.)
• Joint R&D: TiTech, Kyushu-U, Osaka-U, Kyushu-Tech., Fujitsu, Hitachi, NEC
• Collaboration: Grid Technology Research Center (AIST), JAEA (ITBL), Industrial Association for Promotion of Supercomputing Technology
• Operation and collaboration: Computing and Communication Centers of 7 national universities, SINET3, and the Cyber Science Infrastructure (CSI) Coordination and Operation Committee; deployment and utilization flow back to the centers
-
NAREGI Software Stack
(figure: layered stack)
• Top: Grid-Enabled Nano-Applications
• Upper layer: Grid PSE, Grid Workflow Tool, Grid Visualization
• Middle layer: Super Scheduler, Information Service, Data Grid, Grid Programming Libraries (GridRPC, GridMPI)
• Lower layer: Grid VM; High-Performance & Secure Grid Networking; Certification
• Base: WSRF (NAREGI implementation + Globus 4)
• Computing resources at NII, IMS, research organizations, and university supercomputer centers, connected via SINET3
-
NAREGI Architecture
(figure: overall architecture)
• Portal with Workflow Tool, Grid PSE, Data Grid client, and SS Client; single sign-on via MyProxy, CA/RA, and UMS/VOMS
• Super Scheduler: workflow submission, reservation, job submission and control; resource queries against the Information Service
• Information Service: resource and accounting information
• GridVM with GRAM/IS and a local scheduler on each computing resource; GridMPI between resources
• Data Grid: Gfarm metadata server, GridFTP for Gfarm, and file servers forming a global file system; data transfer and file staging via GridFTP or local disk
• Application deployment and registration through the PSE
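The NAREGI portal hides these steps behind single sign-on, but the underlying flow on a Globus 4 based stack can be approximated from standard client tools. A minimal sketch in Python, assuming the usual MyProxy and GT4 WS-GRAM clients (myproxy-logon, globusrun-ws) are installed; the host names, user name, and job file below are placeholders, not NAREGI defaults:

    # Rough illustration of the single sign-on / submission flow shown above.
    # Host names, user name, and job description file are placeholders.
    import subprocess

    MYPROXY_SERVER = "myproxy.example.ac.jp"    # hypothetical MyProxy/UMS host
    GRAM_CONTACT   = "siteA.example.ac.jp:8443" # hypothetical GridVM/WS-GRAM endpoint

    def single_sign_on(user: str) -> None:
        """Retrieve a short-lived proxy credential from the MyProxy server."""
        subprocess.run(["myproxy-logon", "-s", MYPROXY_SERVER, "-l", user, "-t", "12"],
                       check=True)

    def submit_job(job_description: str) -> None:
        """Submit a job description to a GT4 WS-GRAM service (as GridVM wraps one)."""
        subprocess.run(["globusrun-ws", "-submit", "-F", GRAM_CONTACT,
                        "-f", job_description], check=True)

    if __name__ == "__main__":
        single_sign_on("grid_user")   # step: single sign-on via MyProxy
        submit_job("fmo_job.xml")     # step: workflow/job submission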
-
Workflow GUI
-
A Sample Workflow-based Grid FMO Simulation of Proteins
(figure: workflow graph mapped onto NII and IMS resources)
Input data and fragment data feed a set of monomer calculations and dimer calculations, whose results are combined in a total-energy calculation, with density exchange between steps and a final visualization (data components).
Source: Prof. Aoyagi (Kyushu Univ.)
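The fan-out of this workflow follows directly from the FMO method: every fragment gets a monomer calculation and every fragment pair a dimer calculation, and the workflow tool maps those tasks onto the NII and IMS resources. A small sketch of that decomposition (the fragment count and the round-robin resource mapping are illustrative, not the actual scheduler policy):

    from itertools import combinations

    def fmo_tasks(n_fragments: int, resources=("NII", "IMS")):
        """Enumerate FMO monomer and dimer tasks and round-robin them over resources."""
        monomers = [("monomer", (i,)) for i in range(n_fragments)]
        dimers   = [("dimer", pair) for pair in combinations(range(n_fragments), 2)]
        tasks = monomers + dimers
        # Round-robin assignment stands in for the workflow tool's resource mapping.
        return [(kind, frags, resources[k % len(resources)])
                for k, (kind, frags) in enumerate(tasks)]

    if __name__ == "__main__":
        for kind, frags, site in fmo_tasks(4):
            print(f"{kind:7s} fragments={frags} -> {site}")
        # Results of all monomer/dimer tasks feed the total-energy calculation.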
-
Scenario for Multi-site MPI Job Execution
(figure: co-allocated RISM–FMO run across Sites A, B, and C)
a–c: the RISM and FMO sources are registered, deployed, and edited through the PSE; 1: the workflow is submitted from the Workflow Tool (WFT) to the Super Scheduler; 3: the Super Scheduler negotiates an agreement with the GridVMs and local schedulers; 4: resources are reserved; 5–6: sub-jobs are co-allocated and submitted to Site B (SMP machine, RISM, 64 CPUs) and Site C (PC cluster, FMO, 128 CPUs); the two sub-jobs communicate over GridMPI through an IMPI server; 10: accounting and monitoring. Resource queries go to the Distributed Information Service; input and output files are staged, and results are displayed with Grid Visualization. Each site operates its own CA.
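GridMPI presents the co-allocated RISM and FMO sub-jobs as a single MPI world, so the density exchange in this scenario is ordinary MPI communication between the two groups. A minimal sketch with mpi4py (the rank split and array size are illustrative; the real codes are compiled applications run under GridMPI/IMPI, not Python):

    # mpirun -np 4 python density_exchange.py   (illustrative; real runs use GridMPI/IMPI)
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    RISM_RANKS = 2                        # first ranks play the RISM (SMP) side
    group = "RISM" if rank < RISM_RANKS else "FMO"
    subcomm = comm.Split(color=0 if group == "RISM" else 1, key=rank)

    density = np.zeros(8, dtype="d")
    if group == "RISM" and subcomm.Get_rank() == 0:
        density[:] = 1.0                             # pretend solvent distribution
        comm.Send(density, dest=RISM_RANKS, tag=7)   # leader-to-leader exchange
    elif group == "FMO" and subcomm.Get_rank() == 0:
        comm.Recv(density, source=0, tag=7)          # FMO side receives the density
        print("FMO leader received density:", density)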
-
Adaptation of Nano-science Applications to the Grid Environment
(figure: RISM–FMO coupling over SINET3)
RISM (Reference Interaction Site Model; solvent distribution analysis, run at IMS) and FMO (Fragment Molecular Orbital method; electronic structure analysis, run at NII) are coupled through the grid middleware with GridMPI (and MPICH-G2/Globus), using data transformation between their different meshes to obtain the electronic structure in solution.
-
Collaboration in the Data Grid Area
• High Energy Physics (GIN): KEK, EGEE
• Astronomy: National Astronomical Observatory (Virtual Observatory)
• Bio-informatics: BioGrid Project
-
NAREGI Data Grid Environment
(figure: data grid components around the Grid Workflow)
• Import data into a workflow; place and register data on the Grid
• Metadata construction: assign metadata to each data set
• Data resource management: store data on distributed file nodes (Gfarm grid-wide file system)
• Data access management: grid-wide DB querying, so that Jobs 1…n can access Data 1…n
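As a rough sketch of the "place and register data on the Grid" step, the snippet below wraps the standard Gfarm client commands (gfmkdir, gfreg, gfls); the directory layout is illustrative, and metadata assignment is left to the catalog rather than shown here:

    import subprocess

    def register_dataset(local_files, gfarm_dir="/renkei/fmo/input"):
        """Copy local files into the Gfarm grid-wide file system and list the result."""
        subprocess.run(["gfmkdir", gfarm_dir], check=False)   # ignore "already exists"
        for path in local_files:
            # gfreg registers a local file under a Gfarm path (stored on some file node)
            subprocess.run(["gfreg", path, f"{gfarm_dir}/{path.split('/')[-1]}"],
                           check=True)
        return subprocess.run(["gfls", "-l", gfarm_dir],
                              capture_output=True, text=True).stdout

    if __name__ == "__main__":
        print(register_dataset(["data1.dat", "data2.dat"]))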
-
VO Service
(figure: decoupling VOs and resource providers)
Research organizations RO1, RO2, and RO3 act as resource providers; their grid centers run VOMS, and each resource (GridVM + IS) enforces a local policy listing the VOs it accepts (e.g., VO-RO1, VO-APL1, VO-APL2). Each VO (VO-RO1, VO-RO2, VO-APL1, VO-APL2) has its own Information Service and Super Scheduler used by its clients, so VOs and users are decoupled from the resource providers.
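On the user side, this decoupling amounts to requesting a proxy for the VO one wants to work in; the resources then apply their local policy to that VO attribute. A sketch using the standard VOMS client commands (the VO name is taken from the figure and is purely illustrative):

    import subprocess

    def vo_login(vo: str = "vo-apl1") -> str:
        """Create a VOMS proxy carrying the VO membership, then show its attributes."""
        subprocess.run(["voms-proxy-init", "--voms", vo], check=True)
        info = subprocess.run(["voms-proxy-info", "--all"],
                              capture_output=True, text=True, check=True)
        return info.stdout

    if __name__ == "__main__":
        # Jobs submitted with this proxy are accepted only by resources whose
        # policy lists the VO (e.g., VO-APL1 in the figure above).
        print(vo_login("vo-apl1"))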
-
NAREGI Version 1
• Developed in FY2007
• More flexible scheduling methods
  - Reservation-based scheduling
  - Coexistence with locally scheduled jobs
  - Support of non-reservation-based scheduling
  - Support of "bulk submission" for parameter-sweep type jobs
• Improvement in maintainability
  - More systematic logging using the Information Service (IS)
• Easier installation procedure
  - apt-rpm
  - VM
• Now V1.1.5 (downloadable since March 2010)
Focus: operability, robustness, maintainability
-
SINET3 Network Topology (FY2007– )
(figure: national backbone with 10 Gbps and 40 Gbps links)
Upgrade to SINET4 planned for FY2010.
-
Large-scale Evaluation of Grid Middleware (March 2008; ~50 Tflops connected)
(figure: evaluation testbed configuration)
Portals, Super Schedulers, information services (IS-CDAS, IS-NAS), certificate authorities (NAREGI CA and Osaka Univ. Grid CA, with RA services), UMS/VOMS servers, and GridVM schedulers and engines were deployed across Osaka Univ., TiTech, NII, the Institute for Molecular Science, Kyushu Univ., and Nagoya Univ., covering Linux, Solaris, and SX hosts, together with the RENKEI-Osaka and RENKEI-NAREGI nodes.
-
Federation Test between NAOJ and KEK
• Astronomy libraries (1 GB) set up at the KEK site
• Jobs submitted to KEK with the Workflow Tool (WFT) from the NAOJ portal
• Input data (2.7 GB: 10-CCD mosaic images from the SUBARU Telescope in Hawaii, 160 MB x 17) transferred from NAOJ; output data staged out to the NAOJ portal
• Processing (about 10 hours): sensor calibration, deformation adjustment, positioning, mosaicing, summing of the 17 frames
• Output data processed with visualization software; about 50,000 objects identified in the resulting frame
-
Deployment of NAREGI Grid Middleware to 9 Supercomputer Centers
(in progress)
-
Grid Operation & Coordination (FY2008– )
(figure: operation structure)
The Grid Operation Center (GOC) at NII links NAREGI developers, the SINET3 NOC administrators, the UPKI certification authority (CA training program, CP/CPS), VO administrators, and university/laboratory administrators and users, and runs a help desk.
-
Sharing Vector Supercomputer Resources (Tohoku Univ. and Osaka Univ.)
-
Sharing Vector Supercomputer Resources - Tohoku Univ. and Osaka Univ. -
(figure: the vector machines at Tohoku Univ. and Osaka Univ. shared as Sites C and D through the NAREGI infrastructure)
-
Real-time Collaboration in Astrophysics (with Princeton)
Exploration and discovery in astrophysics using NII OpenScienceSim
• Computation: galaxy formation (offline, Cray XT4) and an N-body problem (online)
• Online processing request: commands issued through a chat interface execute the N-body problem on NAREGI
• Flexible simulation operation: request a simulation from NAREGI, play, rewind, stop, zoom in/out
• Visualization: convert raw data to visualization; data preprocessing; interactive selection and display of features
• Analysis: real-time display of kinetic/potential energy (dynamic value display)
-
Real-time Collaboration in Bio-Molecular Science
Molecular modeling and dynamics in NII OpenScienceSim
• MD computation: commands issued through a chat interface run molecular dynamics on GROMACS via NAREGI; operations include showing full and backbone structure, playing/rewinding/stopping the animation, and zooming in/out
• Single-user modeling: commands of the single-user BALLView interface (change structure, add hydrogen, run minimization, run an MD simulation)
• Collaborative modeling: bi-directional communication keeps changes in molecular structure synchronized between BALLView and the virtual world
-
Collaborations with International Grid Projects
• UK e-Science: technical exchange; use of GridSAM
• UNICORE (Germany): use of UNICORE in the α version
• EGEE (EU): interoperation demo at GIN-WG (OGF)
• DEISA (EU): technical exchange on the Super Scheduler
• TeraGrid (USA): technical exchange
• Globus (USA): use of GT4 in the β version and V1.0
• Condor (USA): technical collaboration in the α version
• Grid5000 (France): network measurement using NAREGI software
-
Cyber-Science Infrastructure (CSI) for R&D
(figure: CSI concept)
• SINET3 and beyond: lambda-based academic networking backbone linking Hokkaido-U, Tohoku-U, Tokyo-U, NII, Nagoya-U, Kyoto-U, Osaka-U, and Kyushu-U (plus Titech, Waseda-U, KEK, etc.)
• Deployment of NAREGI middleware (NAREGI outputs); virtual labs and live collaborations
• UPKI: national research PKI infrastructure
• Restructuring of university IT research resources and extensive on-line publication of results: GeNii (Global Environment for Networked Intellectual Information), NII-REO (Repository of Electronic Journals and Online Publications)
• Industry/societal feedback and international infrastructural collaboration
-
Expansion Plan of NAREGI Grid
(figure: three-tier hierarchy, labeled NLS / NIS / LLS)
• National Supercomputer Grid (Tokyo, Kyoto, Nagoya…) and the Petascale Computing Environment
• Domain-specific Research Organizations (IMS, KEK, NAOJ…) and their research communities
• Departmental Computing Resources and Laboratory-level PC Clusters
NAREGI Grid Middleware spans the tiers, with interoperability (GIN, EGEE, TeraGrid, etc.)
-
RENKEI Project: Resource Collaboration Technologies for e-Science Communities (FY2008–2011)
-
Description of the RENKEI Project
The RENKEI Project is a new R&D project that started in September 2008 under the auspices of MEXT*. In this project, new lightweight grid middleware and software tools will be developed to connect the NAREGI grid environment with wider research communities.
In particular, the emphasis is on technology for flexible, seamless access between national-computing-center-level and departmental/laboratory-level resources such as computers, storage, and databases. The newly developed grid environment will also be made interoperable with the major international grids.
*MEXT: the Ministry of Education, Culture, Sports, Science and Technology
-
Overview of the RENKEI Project
(figure: laboratories, computer centers, and grid middleware, e.g. NAREGI)
• Seamless job execution between laboratory resources and computer-center resources (for computation/data-intensive application users)
• File sharing among laboratory resources, computer-center resources, and multiple grid middleware environments
• Federation among different types of databases, and management of user identity information (for database users)
• An application interface for multiple grid middleware environments, including international inter-operation (for application developers)
• Operation of the testbed and collaboration with computing centers and end-user communities for the experiments
-
Organization of RENKEI
Project Leader: Ken Miura (NII)
(1) Computation Linkage: NII (K. Aida), Tamagawa U. (H. Usami), Fujitsu (H. Kanazawa)
(2) Data Sharing: Tsukuba U. (O. Tatebe), Osaka U. (H. Matsuda)
(3) Database Linkage: AIST (S. Sekiguchi)
(4) API for Multi-grid Environments: KEK (T. Sasaki)
(5) Evaluation & User Interface: TiTech (S. Matsuoka)
The test-bed infrastructure (RENKEI PoP) is run with the supercomputing centers and the NII GOC; requirements and feedback flow between the application scientists layer, the development groups, and the centers (technical support and research collaboration).
-
System Concept
(figure: scope of the current project and its connections)
• Clients (end users) work through the NAREGI Portal and an LLS portal; application developers' tools (KEK) include the PSE, AHS, and an extended WFT
• NAREGI resources: NAREGI SS/IS and GridVM with a BES interface, plus globus, condor, a BES client, and a schema translator
• LLS resources: lightweight grid middleware at the laboratory level (e.g., GridSAM, SGE), covering the NIS and LLS layers
• Data sharing and database sharing over a globally distributed file system (storage and file servers)
• Interoperation with other grids (EGEE/gLite, OSG/VDT, UK e-Science AHE, etc.) via SAGA/JSAGA and standard interfaces
-
Seamless Connection between NIS and LLS
(figure: one workflow spanning laboratory and computer-center resources)
An end user at Laboratory A composes a workflow of pre-processing, simulations 1–3, and post-processing; application developers deploy and register applications in an application catalogue. Parts of the workflow run on the laboratory LAN and parts run, via interoperation, on computer centers over the WAN, with files shared through a virtual distributed file system (Gfarm2); Laboratory B imports the same applications.
NIS: National Infrastructure System; LLS: Laboratory-Level System
-
Extended Workflow System
(figure: one workflow spanning the LLS portal and NAREGI-VO 1, a computer center)
A workflow (pre-process, simulations 1–3, post-process) is edited in the WFT on the LLS portal. The AHS workflow engine dispatches laboratory-side activities through GridSAM (globus → PBS/SGE, described in JSDL) or WS-GRAM (PBS/SGE, described in RSL), and hands the center-side activities in WFML to the SS Client, where the NAREGI workflow engine and Super Scheduler run them on GridVMs through the NAREGI portal (MyProxy, VOMS, UMS, IS).
Job description languages: RSL, JSDL, and WFML (NAREGI WorkFlow Markup Language with embedded JSDL).
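Since both the GridSAM route and WFML ultimately carry JSDL, the unit of exchange between the portals is a small XML job description. A minimal sketch that emits a JSDL JobDefinition for one simulation step (the executable name and output file are illustrative placeholders):

    import xml.etree.ElementTree as ET

    JSDL  = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
    POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

    def make_jsdl(executable="sim1", stdout="sim1.out") -> str:
        """Build a minimal JSDL JobDefinition for a single workflow activity."""
        ET.register_namespace("jsdl", JSDL)
        ET.register_namespace("jsdl-posix", POSIX)
        jobdef = ET.Element(f"{{{JSDL}}}JobDefinition")
        desc = ET.SubElement(jobdef, f"{{{JSDL}}}JobDescription")
        ident = ET.SubElement(desc, f"{{{JSDL}}}JobIdentification")
        ET.SubElement(ident, f"{{{JSDL}}}JobName").text = executable
        app = ET.SubElement(desc, f"{{{JSDL}}}Application")
        posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
        ET.SubElement(posix, f"{{{POSIX}}}Executable").text = executable
        ET.SubElement(posix, f"{{{POSIX}}}Output").text = stdout
        return ET.tostring(jobdef, encoding="unicode")

    if __name__ == "__main__":
        print(make_jsdl())   # hand this to GridSAM, a BES endpoint, or embed it in WFML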
-
Interoperation between Two Different Grid Middleware Systems (NAREGI and gLite)
Objective: mutual job execution between NAREGI and gLite.
FY2008
• Scheduler: NAREGI (SS) → gLite (lcg-CE); gLite (WMS) → WS-GRAM
• Information Service: collection of WS-GRAM resource/usage information and storage in the IS
(figure: an EGEE user submits through gLite-UI, WMS, and Condor-G/GT4-GAHP to PreWS-GRAM/WS-GRAM and the GridVM scheduler and engines on the NAREGI side; a NAREGI user submits through the NAREGI portal, NAREGI-SS/SC, and an Interop-SC to the lcg-CE and its worker nodes; gLite-BDII, GIN-BDII, and NAREGI-IS exchange resource information)
-
Interoperation between Two Different Grid Middleware Systems (continued)
FY2009
• Scheduler: BES-compliant and OGSA WSRF Basic Profile compliant; job submission via BES (NAREGI SS → BES resources)
• Information Service: GLUE 2.0 compliant; GLUE 2.0 → NAREGI schema, and NAREGI schema → GLUE 2.0 as far as possible
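The schema translator in this picture is essentially a mapping between attribute vocabularies. A toy sketch of the GLUE 2.0 → local-schema direction (the GLUE 2.0 attribute names follow the OGF spec; the NAREGI-side names are placeholders, since the real schema is internal to the IS):

    # Toy GLUE 2.0 -> local (NAREGI-style) attribute translation.
    # GLUE2 names follow the OGF spec; the target names are placeholders.
    GLUE2_TO_LOCAL = {
        "ComputingService.Name":               "ResourceName",
        "ExecutionEnvironment.LogicalCPUs":    "CpuCount",
        "ExecutionEnvironment.MainMemorySize": "MemoryMB",     # main memory size (MB)
        "ExecutionEnvironment.Platform":       "Architecture",
    }

    def translate(glue2_record: dict) -> dict:
        """Keep only attributes we can map; drop the rest, 'as much as possible'."""
        return {local: glue2_record[g2]
                for g2, local in GLUE2_TO_LOCAL.items() if g2 in glue2_record}

    if __name__ == "__main__":
        sample = {"ComputingService.Name": "siteA-cluster",
                  "ExecutionEnvironment.LogicalCPUs": 128,
                  "ExecutionEnvironment.MainMemorySize": 2048,
                  "ExecutionEnvironment.Platform": "x86_64"}
        print(translate(sample))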
-
Interoperation between two different grid middleware systems (continued)
(figure: the same NAREGI–gLite configuration as above)
-
Demonstration of Interoperability (conducted at the 5th IEEE e-Science Conference, Oxford, UK)
• Application program: Minem (plasma charge minimization) — minimization of the total electric charge over the surface of a sphere
• Local environment for pre/post-processing; grid environments for the computation
• Job submission to multiple grids realized via HPCBP (BES, JSDL, etc.), with file transfer by FTP or GridFTP
• Pre-processing: input data preparation (Perl); main processing: data staging and job submission from a BES client to each grid (CUI); post-processing 1: selection of the optimal result (Perl); post-processing 2: upload to a web server
• Grids running Minem: UNICORE (DEISA), GridSAM (UK-NGS/OMII-UK), ARC (NorduGrid), Genesis II (U. of Virginia), RENKEI/NAREGI (NII)
(figure: the Oxford e-Research Centre local environment connected to the grids over the Internet; image copyright OeRC)
Source: D. Wallom, NGS
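Because every grid in the demo exposes the same BES/JSDL interface, the control logic reduces to submitting one job description everywhere and keeping the best result. A sketch of that orchestration only; the endpoint URLs and the submit/fetch helpers are hypothetical stand-ins for a real BES client (CLI or library), not an actual API:

    # Hypothetical orchestration for the multi-grid Minem demo.
    # submit_bes()/fetch_result() stand in for a real BES client (CLI or library).
    BES_ENDPOINTS = {                 # illustrative endpoint URLs, one per grid
        "UNICORE/DEISA":  "https://deisa.example.org/bes",
        "GridSAM/UK-NGS": "https://ngs.example.ac.uk/bes",
        "ARC/NorduGrid":  "https://nordugrid.example.org/bes",
        "GenesisII/UVa":  "https://genesis.example.edu/bes",
        "RENKEI-NAREGI":  "https://naregi.example.ac.jp/bes",
    }

    def submit_bes(endpoint: str, jsdl: str) -> str:
        raise NotImplementedError("call a BES client here (CreateActivity with JSDL)")

    def fetch_result(endpoint: str, activity_id: str) -> float:
        raise NotImplementedError("stage out the Minem result via FTP/GridFTP")

    def run_everywhere(jsdl: str):
        """Fan the same JSDL job out to all grids and keep the lowest total charge."""
        activities = {name: submit_bes(url, jsdl) for name, url in BES_ENDPOINTS.items()}
        energies = {name: fetch_result(BES_ENDPOINTS[name], aid)
                    for name, aid in activities.items()}
        best = min(energies, key=energies.get)    # post-processing 1: optimal result
        return best, energies[best]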
-
Applications and Knowledge Sharing with the Applications Hosting Service (AHS)
(figure: application developers at universities A–D publish applications; application users and the international research community access them; VO management by groups)
-
Nation-wide Distributed File System
• Goal: development of distributed file system technology spread nation-wide with performance comparable to a local file server
• Research topics:
  - Optimal automatic placement of file replicas, based on Gfarm 2.0
  - Fault tolerance through file replicas
(figure: clients access a virtual distributed file system backed by file servers 1–3, each holding storage and file replicas, with optimal replica placement)
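The placement problem itself is simple to state: given the file servers' free capacity (and, in the real system, network distance and load), choose where each new replica goes. A toy greedy sketch for illustration only, not Gfarm's actual policy:

    def place_replicas(file_size: int, servers: dict, n_replicas: int = 2) -> list:
        """Greedy placement: put each replica on the server with the most free space.

        `servers` maps server name -> free bytes; the real system would also weigh
        network distance to the client and current server load.
        """
        free = dict(servers)
        chosen = []
        for _ in range(n_replicas):
            best = max((s for s in free if s not in chosen and free[s] >= file_size),
                       key=lambda s: free[s], default=None)
            if best is None:
                break                    # not enough independent servers left
            chosen.append(best)
            free[best] -= file_size
        return chosen

    if __name__ == "__main__":
        print(place_replicas(10**9, {"fs1": 5e9, "fs2": 8e9, "fs3": 2e9}))
        # -> ['fs2', 'fs1']: two fault-tolerant replicas on distinct servers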
-
Overview of the Data-Sharing Project
(figure: file catalog federation)
Laboratories A, B, and C register file locations (EPRs) in RNS File Catalogs #1 and #2, and catalog information (EPRs) can be migrated between catalogs for load balancing by federation. Clients at Computer Centers X and Y resolve the catalogs for data access and file transfer, including access to heterogeneous grid environments (e.g., LFC/SRM on gLite, SRB on TeraGrid). The nation-wide distributed file system work is led by Tsukuba Univ.
-
File Catalog Service
Goal: development of an interoperable file catalog service across heterogeneous grid environments.
• Current file catalog systems (LFC in EGEE gLite, MCAT in SRB, etc.) are not interoperable with one another.
• Development of a standardized file catalog based on the RNS (Resource Namespace Service) specification (OGF).
(figure: a client (1) presents a logical file name, (2) obtains the physical file location (EPR), and (3) accesses the file with GridFTP on an EGEE gLite file server, an SRB or iRODS file server, or the Japan e-Science distributed file system)
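The client-side flow in the figure reduces to: look up a logical name in the RNS catalog, get back an endpoint reference for a physical copy, and fetch it with GridFTP. A sketch with an in-memory stand-in for the catalog (logical names and EPR URLs are illustrative; globus-url-copy is the standard GridFTP client):

    import subprocess

    # In-memory stand-in for an RNS file catalog: logical name -> physical EPR/URL.
    CATALOG = {
        "/renkei/astro/frame001.fits":
            "gsiftp://fs1.example.ac.jp/data/frame001.fits",   # illustrative EPR
    }

    def resolve(logical_name: str) -> str:
        """(1)-(2): logical file name in, physical file location (EPR/URL) out."""
        return CATALOG[logical_name]

    def fetch(logical_name: str, local_path: str) -> None:
        """(3): transfer the physical copy with GridFTP."""
        url = resolve(logical_name)
        subprocess.run(["globus-url-copy", url, f"file://{local_path}"], check=True)

    if __name__ == "__main__":
        fetch("/renkei/astro/frame001.fits", "/tmp/frame001.fits")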
-
Database Federation
(figure: a large-scale landslide simulation built on federated databases — a land altitude model from satellite data (ASTER), AMeDAS rainfall data, geological data (GEOMET), and a hazard map)
-
KEK and Collaborating Organizations
• gLite: CERN, Academia Sinica (Taiwan), Tohoku Univ., Tsukuba Univ., Nagoya Univ., Kobe Univ., Hiroshima Inst. of Tech., etc.
• NAREGI/RENKEI: NII, CC-IN2P3 (Lyon, France), NAOJ
• Interoperability: GIN, JSAGA
-
RENKEI PoP: Planned Sites (10 Gbps connections)
8 sites; 200 TB (raw), 100 TB (stable, with applications); storage not included
Sites (FY2008–FY2009): NII, TiTech, KEK, AIST, Tsukuba U., Tohoku U. (material science), Nagoya U. (solar-earth environment), and Osaka U. (laser energy, nuclear physics, HVEM)
-
Expansion Plan of NAREGI Grid
(figure: the same three-tier NLS/NIS/LLS hierarchy shown earlier, with the Petascale Computing Environment at the national tier)
-
The Next Generation Supercomputer Project
Policy: development, installation, and application of an advanced high-performance supercomputer system, as one of Japan's "Key Technologies of National Importance"
Total budget: about 115 billion yen (~1.15 billion US dollars)
Period of project: FY2006–FY2012
Courtesy of RIKEN
-
Goals of the Next Generation Supercomputer Project
1. Development and installation of the most advanced high-performance supercomputer system
2. Development and wide use of application software to utilize the supercomputer to the maximum extent
3. Provision of a flexible computing environment by sharing the next-generation supercomputer through connection with other supercomputers located at universities and research institutes
4. Establishment of the "Institute for Computational Science"
Courtesy of RIKEN
-
Schedule of the Project (FY2006–FY2012)
(figure: Gantt chart, with "present" marked)
• System: conceptual design → basic/detailed design → prototype and evaluation; processing unit and front-end unit (total system software): production, installation, and adjustment; shared file system: basic design → detailed design → production and evaluation
• Applications: Next-Generation Integrated Nanoscience Simulation and Next-Generation Integrated Life Simulation: development, production, and evaluation → verification → tuning and improvement
• Buildings: computer building and research building: design → construction
-
Organization of the Project
(figure: R&D and evaluation scheme)
• MEXT: policy and funding (Office for Supercomputer Development Planning); Project Committee; evaluation committees (MEXT and CSTP); advisory board
• RIKEN: project HQ, Next-Generation Supercomputer R&D Center (Ryoji Noyori); Project Leader: Tadashi Watanabe; R&D Group: Ryutaro Himeno; visiting researchers from universities and national laboratories; computer companies
• NII: grid middleware and infrastructure
• IMS: nano-science simulation
• RIKEN Wako Institute: life-science simulation
• Industries: Industrial Forum for Promotion of Supercomputing
(Note) NII: National Institute of Informatics; IMS: Institute for Molecular Science
-
System Configuration
(figure: system block diagram)
Compute nodes on a multi-dimensional mesh/torus network; a global file system and local file systems attached through global I/O networks; front-end servers, control servers, and management servers (system configuration, job and user management) reached by users over the Internet; separate networks for control and management.
Courtesy of RIKEN
© Fujitsu Limited
-
Characteristics of the System (massively parallel, distributed-memory architecture)
Next-Generation Supercomputer: logical 3-dimensional torus network (Tofu)
• Ultra-high-speed, highly reliable CPU: advanced 45 nm process technology; 8 cores/CPU, 128 GFLOPS; error recovery (ECC, instruction retry, etc.)
• High-performance, highly reliable network: direct interconnection by a multi-dimensional mesh/torus network; expandability and reliability
• System software: Linux OS; compilers for Fortran and C; MPI and mathematical libraries; distributed parallel file system
© Fujitsu Limited
-
Major Applications of the Next Generation Supercomputer
Targeted as grand challenges
-
Nano-Science Simulation Software
Goal: to create next-generation nano-materials (new semiconductor materials, etc.) by integrating theories (quantum chemistry, statistical dynamics, solid-state electron theory) with simulation techniques, across domains ranging from electrons and molecules to condensed matter, molecular assemblies, semi-macroscopic structures, and integrated systems.
(figure: target application areas)
• Next-generation energy: solar energy fixation, fuel alcohol, fuel cells, electric energy storage (e.g., the mesoscale structure of Nafion membranes with water, doping of fullerenes and carbon nanotubes)
• Next-generation nano biomolecules: medicines, new drugs and DDS, protein folding (RMSD 4.8 Å, all Cα) and protein control, viruses (e.g., polio virus), water molecules inside the lysozyme cavity, self-assembly and capsulation (liposomes), anticancer drugs, nano processes for DDS
• Next-generation information function materials: nonlinear optical devices, nano quantum devices, spin electronics, ultra-high-density storage devices, integrated electronic devices and electronic conduction in integrated systems (e.g., optical switches, orbitons (orbital waves), ferromagnetic half-metals, self-organized magnetic nanodots, one-dimensional silicon crystals)
-
Next-Generation Integrated Life Simulation
To provide new tools for breakthroughs on various problems in life science by means of petaflops-class simulation technology, leading to a comprehensive understanding of biological phenomena and the development of new drugs, medical devices, and diagnostic/therapeutic methods.
(figure: scales from micro to macro — MD/first-principles/quantum-chemistry simulations at the cell level, continuum simulations for tissues, organs, the cardiovascular system, and the whole body)
-
Kobe Facility (Kobe Port Island; as of Nov. 26, 2009)
Computer wing: total floor area 17,500 m²; one floor 4,325 m²; 2 floors for computer and SE rooms; 2 floors for heat exchangers and blowers; completion date: May 2010
-
ICS: Institute for Computational Science
• Brings together computer science and computational science
• Researchers from both fields will gather and are expected to develop new research fields and methodologies
• The center and the operation policy of the supercomputer are currently being designed; users will be selected by a new committee, independent of RIKEN, that chooses worthwhile subjects
-
Future Computational Research Environment based on the Cyber Science Infrastructure Concept
Source: http://www.nii.ac.jp/en
-
Summary
• NAREGI Version 1.1.5 was released in March 2010.
• NAREGI grid middleware enables seamless federation of heterogeneous computational resources.
• NAREGI grid middleware is being deployed to the national supercomputer centers as an important component of the Japanese Cyber Science Infrastructure framework.
• A new project (RENKEI) started in FY2008 to provide seamless access between NAREGI and third-tier resources.
• NAREGI is planned to provide the access and computational infrastructure for the Next Generation Supercomputer System (discussion underway).