The Data Flow System for the AAO2 Controllers

K. Shortridge*, T.J. Farrell, J.A. Bailey & L.G. Waller (Anglo-Australian Observatory)

[Figure: The system running in simulation mode under UNIX]

[Figure: Overview of the AAO2 controller hardware. The CCD controller, with its power supply and shutter, sits at the instrument focus beside the cryostat housing the CCD, and is connected by a fibre optic command and data link to the detector control and data acquisition computer in the telescope control room.]

case DCT_FOLHelper::DataCode : {

    // This is probably the most important of all signals. We've
    // got new data. All we have to do is collect together all the
    // information we need to 'kick' the 'PROCESS' action in the
    // IRT. The read thread will have read the data into one of
    // the shared memory sections maintained by the shared memory
    // list object in the process information structure, the
    // object called ProcessInfoPtr->MemList. The read thread
    // passes us - through the argument it uses for the signal -
    // the id obtained for the shared memory section it used, and
    // from that we can use ProcessInfoPtr->MemList to provide the
    // rest of what we need.

    BUTL_SharedMemId SharedMemId =
                  (BUTL_SharedMemId) ArgumentPtr->GetSharedMemId();
    DitsSharedMemInfoType* SharedMemInfoPtr =
                  ProcessInfoPtr->MemList.GetDitsMemInfo(SharedMemId);

    // Now kick the IRT with the new data.

    DCT_LogEvent ("Main thread kicked with data at %p\n",SharedMemInfoPtr);
    DCT_LogEvent ("InfoStructureAddr: %p\n",
                  ProcessInfoPtr->MemList.InfoStructureAddr(SharedMemId));
    DCT_LogEvent ("Name: '%s'\n",SharedMemInfoPtr->Name);
    I_IRT_Task.Kick ("PROCESS",SharedMemInfoPtr,true,0,Status,
        IRTKickCompleteWrapper,IRTKickErrorWrapper,IRTBulkHandledWrapper,
        IRTBulkProcessedWrapper,
        ProcessInfoPtr->MemList.InfoStructureAddr(SharedMemId));
    DCT_LogEvent ("Kick sent to IRT, status %ld\n",(long)*Status);
    if (*Status != STATUS__OK) {
        DCT_LogError ("Unable to kick PROCESS action in IRT\n");
    } else {
        ProcessInfoPtr->CurrentBulkTransfers++;
        ProcessInfoPtr->CurrentKicks++;
    }
    break;
}

THE HARDWARE

The AAO2 controller [1] sits close to the detector cryostat, and is connected via a high-speed (250 Megabits/second) full-duplex fibre optic link to a custom, in-house designed interface in the control and data acquisition computer in the telescope control room. Commands are sent to the controller over this link, and responses to the commands and data from the detector are sent back down the link. The fibre optic link interface is a DMA controller with FIFO buffering and is managed by a 68040 Single Board Computer (SBC) running a real-time operating system (VxWorks). The interface transfers data from the controller to a shared 1 Gbyte VMEbus memory board. An UltraSPARC VMEbus SBC running Solaris accesses the data for processing, analysis, display, and storage.

THE DATA FLOW

The diagram to the left shows the detailed data flow in the VME system. Under the control of the Detector Control Task (DCT), running on the VxWorks system, the raw image data from the controller is directed, via DMA, into a buffer in the shared 1 Gbyte VME memory board. A DRAMA message is then sent from the DCT to the Image Reconstructor Task (IRT), which runs on the Solaris system. The DRAMA call that sends this message includes the details of the VME shared memory section containing the data, so when the IRT receives this message it knows both that the raw image data is available and where to find it. (By the time the IRT gets this message, the DRAMA bulk data sub-system will have transferred the data by DMA from the VME shared memory board into a new shared memory section in the on-board memory of the Solaris system.) The IRT reconstructs this image into yet another on-board shared memory section, correcting for any windowing effects and the interleaving of the data from the various corners of the detector. It then sends another message to the next step in the data processing chain, the Data Processing Task (DPT), which combines the images sent to it, producing new shared images suitable for real-time display as the exposure proceeds and a final image for recording. These are passed on to the Real-Time Display task (RTD), which is based on the ESO Real-Time Display [7], and the Data Recording Task (DRT). The DRT collects ancillary information from all the various components of the system and writes this, together with the final processed image, as a FITS file.

The Detector Control Task (DCT) code that sends a new image as bulk data to the Image Reconstructor Task (IRT). Taken directly from the code with only some error checking edited out for clarity.
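The corresponding receiving code is not shown on the poster. Purely as an outline of the pattern described above, an IRT 'PROCESS' action might look like the sketch below; apart from StatusType, DitsSharedMemInfoType, and STATUS__OK, which appear in the excerpt above, every name here is a hypothetical stand-in rather than an actual DRAMA call.

// Outline only: the rough shape of the IRT action that the DCT's kick
// drives. GetKickedMemInfo, MapSharedSection, MapOutputSection,
// ReconstructFrame and KickDPT are invented names standing in for the
// task's own DRAMA wrappers.
static void IRT_ProcessAction (StatusType* Status)
{
    if (*Status != STATUS__OK) return;

    // The kick argument carries the shared memory details; by now the
    // bulk data sub-system has staged the raw frame into on-board memory.
    DitsSharedMemInfoType* InfoPtr = GetKickedMemInfo (Status);
    const unsigned short* Raw = MapSharedSection (InfoPtr, Status);

    // Reconstruct into another on-board shared memory section and pass
    // the result along the chain to the DPT.
    unsigned short* Image = MapOutputSection (Status);
    ReconstructFrame (Raw, Image, Status);
    KickDPT ("PROCESS", Image, Status);
}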

THE DIFFICULTIES

Access to the shared VME memory from the SPARC board is slower than access to the SPARC board on-board memory, and this slowed the Image Reconstructor task more than had been anticipated. A behind-the-scenes DMA transfer from VME memory into SPARC on-board memory was added to the DRAMA bulk data sub-system to overcome this.

The VME interface board for the fibre optic link has only a small FIFO to buffer the data flow from the controller. Together with the rather complex control requirements of the DMA controller used, this introduced some quite severe constraints on the design of the VxWorks thread dedicated to reading the data from the controller. Supporting an ‘abort’ cleanly required particularly careful coding in the driver for the FOL.

Mapping the large shared VME memory slowed down the booting of the VxWorks system considerably. In the end, only a subset of the available 1 Gbyte was used and mapped.

The bi-directional fibre optic link is used for both commands and data. However, the data rate made it difficult to use a proper packaging scheme to separate image data from command responses, and this limited the interaction that was possible with the controller during readout.

REFERENCES

1. Waller, L., Barton, J., Griesbach, J., & Mayfield, D., 2004. AAO2: a general purpose CCD controller for the AAT. Proc. SPIE, 5499-51. In preparation.
2. Bailey, J.A., Farrell, T., & Shortridge, K., 1995. DRAMA: an environment for distributed instrumentation software. Proc. SPIE, 2479, 62.
3. Tinney, C. G., et al., 2004. IRIS2: A Working Infra-red Multi-object Spectrograph and Imager. Proc. SPIE, 5492-35. In preparation.
4. Shortridge, K., 1997. Interprocess Message Passing (IMP) System (AAO/IMP_MANUAL_8, Drama Software Report 8) (Sydney: Anglo-Australian Observatory).
5. Bailey, J., 1993. A Self-Defining Hierarchical Data System, in ASP Conf. Ser., Vol. 52, Astronomical Data Analysis Software and Systems II, eds. R. J. Hanisch, R. J. V. Brissenden, & J. Barnes (San Francisco: ASP), 553.
6. Hill, N., Gaudet, S., Dunn, J., Jaeger, S., & Cockayne, S., 1999. The Client Server Design of the Gemini Data Handling System, in ASP Conf. Ser., Vol. 172, Astronomical Data Analysis Software and Systems VIII, eds. D. M. Mehringer, R. L. Plante, & D. A. Roberts (San Francisco: ASP), 155.
7. Herlin, T., Brighton, A., & Biereichel, P., 1996. The VLT Real Time Display, in ASP Conf. Ser., Vol. 101, Astronomical Data Analysis Software and Systems V, eds. G. J. Jacoby & J. Barnes (San Francisco: ASP), 396.

[Figure: The data flow in the AAO2 control computer system]

THE PROBLEM

Infra-red detectors, such as the 1024 x 1024 Rockwell HAWAII-1 HgCdTe device used by the AAO’s new IRIS2 [3] infra-red imager and spectrograph, are usually operated in a continuous readout mode. The normal readout time for IRIS2, for example, is just 0.25 seconds a frame, meaning a continuous data rate of around 8 Mbytes/sec must be maintained by the system. The control system has to get this amount of data out of the detector, and into a computing system that can process it, without missing a single beat, and has to do so continuously.
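That 8 Mbytes/sec figure follows directly from the frame geometry. As a minimal check, assuming 16-bit (2-byte) samples, which the poster does not state but which reproduce its numbers:

#include <cstdio>

int main (void)
{
    // Assumed: 2 bytes/pixel. The poster does not give the sample
    // size, but 16-bit samples reproduce its 8 Mbytes/sec figure.
    const double BytesPerPixel = 2.0;
    const double Pixels = 1024.0 * 1024.0;   // HAWAII-1 format
    const double FrameTime = 0.25;           // seconds per frame

    double Rate = Pixels * BytesPerPixel / FrameTime;
    std::printf ("Sustained rate: %.1f Mbytes/sec\n",
                 Rate / (1024.0 * 1024.0));  // prints 8.0

    // For comparison, the 250 Megabits/second FOL described under
    // THE HARDWARE carries roughly 250e6 / 8 bytes/sec (~30 Mbytes/sec),
    // so the link itself has comfortable headroom.
    std::printf ("FOL capacity  : %.1f Mbytes/sec\n",
                 250.0e6 / 8.0 / (1024.0 * 1024.0));
    return 0;
}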

For AAO2, the data processing has a number of steps:

1) The raw data from the detector is read out by the controller and sent down the fibre optic link to the control computer.

2) The individual image frames are extracted from this raw data stream as it is read from the fibre optic link.

3) Each such frame has to be ‘reconstructed’. Because the data is read from up to four corners of the detector at once, the data stream from the detector is not in a simple row, column order. This jumbling of the data can be made even more complex if readout ‘windows’ are used and the data is coming from selected areas of the chip rather than the whole chip. The reconstruction step corrects for these effects and produces what is recognisable as a single image – something that can be displayed in a meaningful way. (A sketch of this de-interleaving follows this list.)

4) Individual frames are then combined to produce a final image. A number of different algorithms may be employed, but the one most commonly used for IRIS2 data is to take each image pixel separately and fit a line through all the data values for that pixel in the various frames (also sketched after this list). The individual frames are also made available for display as they arrive.

5) The combined data is displayed as it is processed, and at the end of the exposure the final image, together with additional information collected from all parts of the system, is recorded in FITS format.
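Neither the reconstruction nor the combination code appears on the poster, so the following is only an illustrative sketch: it de-interleaves a four-quadrant readout into row/column order and fits a straight line through each pixel’s samples by ordinary least squares. The quadrant ordering, the handling of windows, and the exact fit used by the real DPT are assumptions.

#include <cstddef>
#include <vector>

typedef unsigned short Pixel;

// Step 3 (sketch): de-interleave a raw frame read simultaneously from
// the four corners of an N x N detector. Assumed sample order: one
// pixel from each quadrant in turn, each quadrant scanned from its own
// corner inwards. The real ordering (and any windowing) depends on the
// controller setup and is not given on the poster.
void ReconstructFrame (const Pixel* Raw, Pixel* Image, std::size_t N)
{
    const std::size_t Half = N / 2;
    std::size_t Index = 0;
    for (std::size_t Row = 0; Row < Half; Row++) {
        for (std::size_t Col = 0; Col < Half; Col++) {
            Image[Row * N + Col] = Raw[Index++];                     // top left
            Image[Row * N + (N - 1 - Col)] = Raw[Index++];           // top right
            Image[(N - 1 - Row) * N + Col] = Raw[Index++];           // bottom left
            Image[(N - 1 - Row) * N + (N - 1 - Col)] = Raw[Index++]; // bottom right
        }
    }
}

// Step 4 (sketch): for one pixel, fit a straight line through its value
// in each successive frame (least squares against frame number) and
// return the slope - the signal rate for that pixel.
double FitPixelSlope (const std::vector<double>& Values)
{
    const std::size_t M = Values.size();
    double SumX = 0.0, SumY = 0.0, SumXY = 0.0, SumXX = 0.0;
    for (std::size_t Frame = 0; Frame < M; Frame++) {
        const double X = (double) Frame;
        SumX += X;
        SumY += Values[Frame];
        SumXY += X * Values[Frame];
        SumXX += X * X;
    }
    const double Denom = M * SumXX - SumX * SumX;
    return (Denom == 0.0) ? 0.0 : (M * SumXY - SumX * SumY) / Denom;
}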

The initial reading of the data from the detector and the sending of the data down the fibre optic link is performed by the controller itself. The reading of the raw data from the controller and its packaging into image frames is a critical real-time problem, requiring a computer system with a guaranteed real-time response, while the subsequent steps – the data processing, display, and recording – are best handled using a UNIX-type workstation. There is also the problem of how best to move the data between the various stages.

*E-mail: [email protected]

[Figure: The VME control computer system]

SUMMARY

The AAO’s new detector controllers handle both infra-red and optical detectors. IR detectors place considerable demands on a data handling system. The AAO2 detector controller has a control computer system based on a VME chassis that contains both a real-time processor running VxWorks and a SPARC processor running UNIX (Solaris). These processors share access to a common 1 Gbyte of VME memory, the VxWorks system reading image data from the controller into this shared memory and the UNIX system reading these images from the shared memory and processing them. The AAO’s data acquisition environment, DRAMA, was modified to integrate this use of shared VME memory into its standard bulk data sub-system. This allowed the system to be tested in simulation mode under UNIX, and then deployed almost unchanged on the VxWorks/UNIX VME system. We discuss aspects of this system, including some of the problems that were encountered.

THE SOFTWARE

For a long time now, the AAO’s data acquisition environment, DRAMA [2], has included facilities to simplify the transfer of large amounts of data. The routines provided for this comprise the ‘Bulk data’ sub-system for DRAMA. It is possible to associate a region of shared memory with a standard DRAMA message. The system will then transmit this bulk data along with the message in the most efficient way possible. When the message is passed from one task to another on the same machine, all that needs to be passed in practice is enough information to enable the receiving task to locate the shared memory in question. If the sending and receiving tasks are on different machines, the DRAMA networking tasks have to transfer the data over the network, sending it directly from the mapped memory of the sending task into an equivalent area of memory mapped by the receiving task. There are mechanisms provided to keep the sending task aware of the progress of the transfer, so it knows when the shared memory in question can be released or re-used. The underlying sub-systems that handle messaging and the machine-independent hierarchical data structures used by DRAMA (respectively IMP [4] and SDS [5]) have been used by the Gemini data handling system (DHS [6]).

To support the multi-processor shared memory used by the AAO2 system, the DRAMA bulk data sub-system was extended to handle such memory. When the bulk data sub-system is told that the shared memory in question is VME memory that can be accessed by both the sending and receiving machines, it can take advantage of this to avoid the network data transfer usually required when sending and target tasks are on different machines. This involved a relatively simple change to the IMP layer of DRAMA, and allowed the VxWorks code for the Detector Control Task (the DCT, the part of the system that runs on the VxWorks processor) to be tested under UNIX and then deployed, essentially unchanged, under VxWorks. In the main code for the DCT, the bulk data section has one line that differs between the UNIX and VxWorks cases: the code for the type of shared memory used is different. The code for the receiving task has no changes at all, since the message that delivers the shared memory contains the shared memory type code and DRAMA handles this automatically.
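The poster does not show that one line, but the pattern it describes would look something like the sketch below; the macro and type-code names here are invented for illustration, not actual DRAMA identifiers.

// Hypothetical sketch of the single line that differs between the two
// builds of the DCT. VXWORKS_BUILD, MEM_TYPE_VME and MEM_TYPE_UNIX are
// stand-ins; the real DRAMA type codes are not given on the poster.
#ifdef VXWORKS_BUILD
    const int SharedMemType = MEM_TYPE_VME;   // dual-ported VME memory
#else
    const int SharedMemType = MEM_TYPE_UNIX;  // ordinary UNIX shared memory
#endif
// The receiving task needs no change at all: the message that delivers
// the bulk data carries this type code, and DRAMA acts on it.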

There are no critical real-time aspects to running in simulation mode under UNIX. The fibre-optic link is simulated by a named pipe, and the DCT code is linked with an alternative version of the FOL interface routines that uses this named pipe instead of the actual fibre-optic link. At the other end of the named pipe the controller is simulated under UNIX by a program that mimics its responses to commands and generates simulated data which it writes to the pipe.
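As a rough illustration of that arrangement, assuming only standard POSIX calls (the AAO’s actual pipe-based FOL routines are not shown on the poster):

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Sketch of the named-pipe stand-in for the fibre optic link. The
// simulated controller opens one end and writes frames and command
// responses; the DCT, linked with the pipe version of the FOL
// interface routines, reads them from the other end.
int OpenSimulatedFOL (const char* PipeName)
{
    // Create the FIFO if it does not already exist (EEXIST is harmless).
    mkfifo (PipeName, 0666);

    // The DCT side then reads data and responses from the pipe exactly
    // as it would read them from the real FOL interface.
    return open (PipeName, O_RDONLY);
}

// The simulated controller program would open the same name O_WRONLY
// and write its generated image data and responses into it.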


[Figure: The Orion Nebula as imaged by IRIS2 in the H2 v=1-0 molecular line]

[Figure: The data flow in the AAO2 control computer system. Image data and responses arrive over the fibre optic link (FOL) between the VME system and the controller at the FOL interface board, which DMAs the image data, under the Detector Control Task (DCT) on the VxWorks MC68040 board, into a set of image buffers in the 1 Gbyte shared VME memory board; commands travel back up the same link. A second DMA moves each image into shared memory sections in the on-board memory of the SPARC board running Solaris, where the image reconstruction task (IRT) produces reconstructed images, the data processing task (DPT) maintains the current image buffer and the final processed images, the real-time display (RTD) shows a copy of the current image on screen, and the data recording task (DRT) writes the results to disk.]