LOFAR project


Page 1: LOFAR project

LOFAR project

Astroparticle Physics workshop

26 April 2004

Page 2: LOFAR project

LOFAR concept

• Combine advances in enabling IT:
  • inexpensive environmental sensors: 10,000s of sensors
  • wide-area optical broadband networks: custom + GigaPort/Géant
  • high-performance computing: IBM BlueGene/L

to make a ‘shared aperture multi-telescope’

but also:

to sense and interpret the environment in innovative ways

System spec driver

Page 3: LOFAR project

LOFAR Sensors

Sensor type: applications
• HF antenna: astrophysics, astro-particle physics
• VHF antenna: cosmology and the early Universe, solar effects on Earth, space weather
• Geophones: ground subsidence, gas/oil extraction
• Weather: micro-climate prediction, precision agriculture, wind energy
• Water: precision agriculture, habitat management, public safety
• Infra-sound: atmospheric turbulence, meteors, explosions, sonic booms

Page 4: LOFAR project

LOFAR Phase 1

- Radio telescope
- Seismic imager
- Precision weather for agriculture, wind energy

[Diagram: sensor fields connected by fibre data transport to the central processor]

Integrate the LOFAR network into the regional fibre network, sharing costs with schools, health centres, etc.

Page 5: LOFAR project

Radio Telescope Specifications

• Frequency range: 20–80 MHz, 120–240 MHz

• Angular resolution: a few to 10 arcsec (a quick resolution check follows below)

• Sensitivity: 100× previous instruments at these frequencies

• Shared-aperture multi-telescope: up to 8 independent telescopes, plus geophone, weather, etc. arrays

• Operated from remote Science Operations Centers, similar to LHC 'tier-1' centers
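As a rough, back-of-envelope check of the quoted resolution (my own sketch, not from the slides; the 100 km baseline is an assumption, and only the upper band lands in the few-to-10 arcsec range):

```python
import math

# Diffraction-limited resolution: theta ~ lambda / D for the longest baseline D.
# Illustrative assumption: D = 100 km (the funded configuration quotes 150 km max).
C = 299_792_458.0  # speed of light, m/s

def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Approximate synthesised-beam width in arcseconds."""
    return math.degrees(C / freq_hz / baseline_m) * 3600

for freq in (240e6, 120e6, 80e6, 20e6):
    print(f"{freq / 1e6:5.0f} MHz: {resolution_arcsec(freq, 100e3):5.1f} arcsec")
# 240 MHz -> ~2.6", 120 MHz -> ~5.2", 80 MHz -> ~7.7", 20 MHz -> ~31"
```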

Page 6: LOFAR project

One day in the life of LOFAR, the radio telescope

[Figure: one-day observing schedule, plotted by telescope number]

Page 7: LOFAR project

Challenges

• Data rate
  • ~15 Tbit/s total data generated (increasing later)
  • ~330 Gbit/s input data rate to the central processor
  • ~1 Gbit/s to distributed Science Operations Centres (a budget sketch follows at the end of this page)

• Computational resources
  • ~34 TFLOP/s in a custom co-processor (IBM BG/L)
  • ~500 TByte on-line temporary storage

• Calibration
  • adaptive multi-patch all-sky phase correction (see the sketch after this list)
  • 10 s duty cycle
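The phase correction is at heart a self-calibration loop: solve for per-station complex gains against a sky model, apply them, and repeat every cycle. The sketch below is a minimal, hypothetical single-direction version (LOFAR's scheme solves many sky patches with time-dependent parameters); all names and the alternating least-squares update are my own illustration, not LOFAR code.

```python
import numpy as np

# Minimal single-direction self-calibration sketch (illustrative only).
# Model: V_obs[p, q] ~ g[p] * conj(g[q]) * M[p, q], with M the sky-model visibilities.

def solve_gains(V_obs, M, n_iter=100):
    """Estimate per-station complex gains by relaxed alternating least squares."""
    n = V_obs.shape[0]
    g = np.ones(n, dtype=complex)
    for _ in range(n_iter):
        g_new = np.empty_like(g)
        for p in range(n):
            z = M[p, :] * np.conj(g)          # model row as seen with current gains
            g_new[p] = np.vdot(z, V_obs[p, :]) / np.vdot(z, z)
        g = 0.5 * (g + g_new)                 # relaxation keeps the iteration stable
    return g

# Toy test: 10 stations, random phase-only true gains, noiseless data.
rng = np.random.default_rng(42)
n = 10
g_true = np.exp(1j * rng.uniform(-np.pi, np.pi, n))
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
M = A + A.conj().T                            # Hermitian model visibility matrix
V_obs = np.outer(g_true, g_true.conj()) * M

g_est = solve_gains(V_obs, M)
# Gains are only defined up to a global phase, so reference both sets to station 0.
err = np.max(np.abs(g_est * np.conj(g_est[0]) - g_true * np.conj(g_true[0])))
print(f"max gain error: {err:.2e}")
```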

[Diagram: central processor data flow. Remote-station and virtual-core sub-station beams (full bandwidth) enter through synchronisation & routing at an input rate of >300 Gbit/s. DataFlow Processing comprises central RFI mitigation, a VC beam former producing calibration beams, station-station and station-core correlators (~20 Tbit/s inside the correlator), SC-Selfcal yielding time-dependent phase/gain parameters, and a ~300 Gbit/s transpose; quoted compute loads are 15 T-ops, 3 T-ops, 5 T-flops and 2 T-ops. Storage: corrected station-station and station-core visibilities are written to a temporary data store at ~25 Gbit/s each, >500 TB in total. DataSet Analysis runs pre-process, fringe correction, calibration and imaging against instrument, environment and sky models; products (~1 Gbit/s) go to the data archive and QuickView.]
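A back-of-envelope cross-check of the rates quoted on this page; the reduction factors follow directly, and treating the two 25 Gbit/s visibility streams as simultaneous writes to the store is my own assumption.

```python
# Back-of-envelope check of the data-rate budget quoted above.
total_generated_bps    = 15e12   # ~15 Tbit/s generated at the sensor fields
to_central_bps         = 330e9   # ~330 Gbit/s into the central processor
to_science_centres_bps = 1e9     # ~1 Gbit/s of products to Science Operations Centres

print(f"reduction before the central processor: {total_generated_bps / to_central_bps:.0f}x")    # ~45x
print(f"reduction inside the central processor: {to_central_bps / to_science_centres_bps:.0f}x")  # ~330x

# How long the >500 TB temporary store lasts, assuming both quoted 25 Gbit/s
# visibility streams write to it simultaneously (my assumption).
store_bytes    = 500e12
write_rate_Bps = 2 * 25e9 / 8
print(f"temporary store fills in ~{store_bytes / write_rate_Bps / 3600:.0f} hours")  # ~22 h, about one day
```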

Page 8: LOFAR project

IBM BlueGene/L

• IBM
  – 1st research machine on the road to multi-peta-FLOP/s
  – 3 BG/L machines under construction: LLNL, LOFAR, IBM Research
  – numbers 1–10 of the Top-500 supercomputers in one machine (LLNL)
  – SoC technology, standard components for reliability: dual PowerPC 440 chips per node with a 700 MHz clock
  – scalability: to many times 100,000 nodes
  – low power, air cooled: ~20 W per node

Page 9: LOFAR project

IBM BlueGene/L

• LOFAR
  – BG/L is our 1st non-custom central processor
    • total CPU power is 'interesting' (34 TFLOP/s) and scalable
    • component failure rate: one every 3 months, DRAM dominated
  – BG/L is an embedded co-processor in a Linux cluster
    • stripped-down Linux kernel on-chip
    • general-purpose capability allows complex on-line, real-time modelling
  – efficient for complex arithmetic and streaming applications
    • 330 Gbit/s input data rate initially; 768 Gbit/s max
  – low power
    • 150 kW for LOFAR (6k nodes); see the consistency check below
  – scalable beyond LOFAR to SKA requirements
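Two quick consistency checks on the figures quoted here and on the previous slide (my own arithmetic; the split of the 150 kW and the per-core floating-point rate are assumptions, not stated on the slides):

```python
# Consistency checks on the quoted BlueGene/L figures.

# Power: ~20 W per compute node (previous slide) against 150 kW for ~6k nodes.
nodes = 6000
compute_kw = nodes * 20 / 1000
print(f"compute nodes alone: {compute_kw:.0f} kW")                  # 120 kW
print(f"remainder of the quoted 150 kW: {150 - compute_kw:.0f} kW")
# ~30 kW left over, presumably I/O nodes, links and service hardware (assumption).

# Throughput: 34 TFLOP/s spread over ~6k nodes.
per_node_gflops = 34e12 / nodes / 1e9
print(f"per node: {per_node_gflops:.1f} GFLOP/s")
# ~5.7 GFLOP/s per node, consistent with two 700 MHz PowerPC 440 cores if each core
# sustains ~4 floating-point operations per cycle (an assumption about the FPU).
```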

Page 10: LOFAR project

Tier-0 computing: LHC and LOFAR in 2006

                         LHC / exp't × 4 exp'ts (Tier-0)   LOFAR (EOC)
CPU (SPECint95)          2.8 × 10^6                         3.4 × 10^6
No. of processors        5600 / 11200 (?)                   6144 / 12288
Disk storage (TB)        2160                               ~500
Tape storage (PB)        12                                 ??
LAN throughput (Gb/s)    368                                >330

Page 11: LOFAR project

LOFAR with Bsik financing

Central core, plus 45 stations; 150 km maximum baseline

Page 12: LOFAR project

Mid-LOFAR would extend into Lower Saxony, Schleswig-Holstein and North Rhine-Westphalia.

Max-LOFAR would have stations from Cambridge (UK) to Potsdam (DE), and from Nançay (FR) to Växjö (SE).

Page 13: LOFAR project

[Map: international fibre connections to China, the USA, South Africa and Russia, labelled 1-10 Gbit/s; a further label reads 30 Gbit/s to 2 Tbit/s]

Post-2005: JIVE + LOFAR data processing centre

LOFAR, the Sensor Network, is under consideration as an FP7 'Technology Platform'.

Page 14: LOFAR project

LOFAR project timeline

• PDR in June/Oct 2003: M€ 14 expended

• Dutch funding end 2003: M€ 52 for 'infrastructure'
  – funding must be matched by 'partners': 18-member consortium, additional partners possible
  – formal goal is economic positioning w.r.t. 'adaptive sensor networks': RF, seismic, infra-sound, wind-energy sensors

• Prototyping of a full station is in progress
  – 100 low-frequency antennas in the field, now producing all-sky videos
  – end 2004: a two-beam web-based system expected on-line (to gain experience)
  – issues: calibration, RFI, adaptive re-allocation of resources

• BlueGene/L delivery in Q1 2005

• FDR starts mid-2004, complete mid-2005; procurement starts mid-2004, ends mid-2006

• Initial operational status: end 2006 (solar minimum); full operational status: mid-2008

Page 15: LOFAR project

Remaining tasks for which partners are being sought

• Array configuration size: new stations!
  – extension of the array to 400+ km is highly desirable
  – cost is ~€500k per station
  – fibre connections through Géant and national academic networks

• Definition and designation of operations centers
  – Science Operations Centers are remote, on-line
    • basic data taking and archiving of observations
    • financing mostly local, plus a contribution to common services
  – Engineering Operations Center in Dwingeloo
    • monitor the system, perform maintenance
    • integrated operations team (with WSRT, possibly JIVE)

• Operational modelling and user interface
  – use of (quasi-real-time) GRID technologies foreseen
  – work packages not funded / manned yet


Page 16: LOFAR project

User involvement

• Test User Group
  – Heino Falcke, leader
  – Lars Bähren, Michiel Breintjens, Stefan Wijholds, etc.

• 'Open', 'remote' access to the developing system
  – step-wise functionality improvements until 2006

• 1st user workshop: Dwingeloo, May 24-25, 2004
  – ASTRON is ready to host a (limited) number of young researchers to test and help develop the system

• Formal operations from 2007
  – scheduling will be an 'interesting' problem

Page 17: LOFAR project

LOFAR Research Consortium

Universities: Univ. of Amsterdam, TU Delft, TU Eindhoven, Univ. of Groningen, Leiden Univ., Nijmegen Univ., Uppsala Univ.

Research Institutes: ASTRON (management org.), CWI, IMAG, KNMI, TNO-NITG, LOPES Consortium, MPIfR-Bonn

Commercial: Ordina Technical Automation bv, Dutch Space bv, Twente Institute for Wireless and Mobile Communications bv, Science[&]Technology bv