
A Tutorial on Multisensor Integration and Fusion

REN C. LUO AND MICHAEL G. KAY

Robotics and Intelligent Systems Laboratory, Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27695-7906

Abstract: This paper presents a tutorial introduction to the subject of multisensor integration and fusion. The role of multisensor integration and fusion in the operation of intelligent systems is defined in terms of the unique type of information multiple sensors can provide. Multisensor integration is discussed in terms of basic integration functions and multisensor fusion in terms of the different levels at which fusion can take place. Numerical examples are given to illustrate a variety of different fusion methods. The paper concludes with speculations concerning possible future directions and a guide to survey and review papers in the area of multisensor integration and fusion.

I. INTRODUCTION

THE SYNERGISTIC use of multiple sensors by machines and systems is a major factor in enabling some measure of intelligence to be incorporated into their overall operation so that they can interact with and operate in an unstructured environment without the complete control of a human operator. The use of sensors in an intelligent system is an acknowledgement of the fact that it may not be possible or feasible for a system to know a priori the state of the outside world to a degree sufficient for its autonomous operation. The reasons a system may lack sufficient knowledge concerning the state of the outside world may be due either to the fact that the system is operating in a totally unknown environment, or, while partial knowledge is available and is stored in some form of a world model, it may not be feasible to store large amounts of this knowledge and it may not be possible in principle to know the state of the world a priori if it is

dynamically changing and unforeseen events can occur. Sensors allow a system to learn the state of the world as needed and to continuously update its own model of the world. The motivation for using multiple sensors in a system is a response to the simple question: If a single sensor can increase the capability of a system, would the use of more sensors increase it even further? Over the past decade a number of researchers have been exploring this question from both a theoretical perspective and by actually building multisensor machines and systems for use in a variety of areas of application. Typical of the applications that can benefit from the use of multiple sensors are automatic target recognition, mobile robot navigation, industrial tasks like assembly, military command and control for battlefield management, target tracking, and aircraft navigation. There are a number of different means of integrating the information provided by multiple sensors into the operation of a system. The most straightforward approach to multisensor integration is to let the information from each sensor serve as a separate input to the system controller. This approach may be the most appropriate if each sensor is providing information concerning completely different aspects of the environment. The

major benefit gained through this approach is the increase in the extent of the environment able to be sensed. The only interaction between the sensors is indirect and based on the individual effect each sensor has on the controller. If there is some degree of overlap between the sensors concerning some aspect of the environment that they are able to sense, it may be possible for a sensor to directly influence the operation of another sensor so that the value of the combined information that the sensors provide is greater than the sum of the value of the information provided by each sensor separately. This synergistic effect from the multisensor integration can be achieved either by using the information from one sensor to provide cues or guide the operation of other sensors, or by actually combining or fusing the information from multiple sensors. The information from the sensors can be fused at a variety of levels of representation depending upon the needs of the system and the degree of similarity between the sensors. The major benefit gained through multisensor fusion is that the system can be provided with information of higher quality concerning, possibly, certain aspects of the environment that cannot be directly sensed by any individual sensor operating independently.

II. THE ROLE OF MULTISENSOR INTEGRATION AND FUSION IN INTELLIGENT SYSTEMS

    This section describes the role of multisensor integration andfusion in the operation of intelligent machines and systems. Therole of multisensor integration and fusion can best be understoodwith reference to the type of information that the integratedmultiple sensors can uniquely provide the system. The potentialadvantages gained through the synergistic use of thismultisensory information can be decomposed into a combinationof four fundamental aspects: the redundancy, complementarity,timeliness, and cost of the information. Multisensor integrationand the related notion of multisensor fusion are defined anddistinguished. The different functional aspects of multisensorintegration and fusion in the overall operation of a system arepresented and serve to highlight the distinction between thedifferent types of integration and the different types of fusion.The potential advantages in integrating multiple sensors are thendiscussed in terms of four fundamental aspects of the informationprovided by the sensors, and the problems associated withcreating a general methodology for multisensor integration andfusion are discussed in terms of the methods used for handlingthe different sources of possible error or uncertainty.

    Multisensor integration, as defined in this paper, refers to thesynergistic use of the information provided by multiple sensory


devices to assist in the accomplishment of a task by a system. An additional distinction is made between multisensor integration and the more restricted notion of multisensor fusion. Multisensor fusion, as defined in this paper, refers to any stage in the integration process where there is an actual combination (or fusion) of different sources of sensory information into one representational format. The information to be fused may come from multiple sensory devices during a single period of time or from a single sensory device over an extended time period. Although the distinction of fusion from integration is not standard in the literature, it serves to separate the general system-level issues involved in the integration of multiple sensory devices at the architecture and control level from the more specific mathematical and statistical issues involved in the actual fusion of sensory information.

A. Potential Advantages in Integrating Multiple Sensors

The purpose of external sensors is to provide a system with useful information concerning some features of interest in the system's environment. The potential advantages in integrating and/or fusing information from multiple sensors are that the information can be obtained more accurately, concerning features that are impossible to perceive with individual sensors, in less time, and at a lesser cost. These advantages correspond, respectively, to the notions of the redundancy, complementarity, timeliness, and cost of the information provided to the system. Redundant information is provided from a group of sensors (or a single sensor over time) when each sensor is perceiving, possibly with a different fidelity, the same features in the environment. The integration or fusion of redundant information can reduce overall uncertainty and thus serve to increase the accuracy with which the features are perceived by the system. Multiple sensors providing redundant information can also serve to increase reliability in the case of sensor error or failure. Complementary information from multiple sensors allows features in the environment to be perceived that are impossible to perceive using just the information from each individual sensor operating separately. If the features to be perceived are considered dimensions in a space of features, then complementary information is provided when each sensor is only able to provide information concerning a subset of features that form a subspace in the feature space, i.e., each sensor can be said to perceive features that are independent of the features perceived by the other sensors; conversely, the dependent features perceived by sensors providing redundant information would form a basis in the feature space. More timely information, as compared to the speed at which it could be provided by a single sensor, may be provided by multiple sensors due to either the actual speed of operation of each sensor, or the processing parallelism that may be possible to achieve as part of the integration process. Less costly information, in the context of a system with multiple sensors, is information obtained at a lesser cost when compared to the equivalent information that could be obtained from a single sensor. Unless the information provided by the single sensor is being used for additional functions in the system, the total cost of the single sensor should be compared to the total cost of the integrated multisensor system.

The role of multisensor integration and fusion in the overall operation of a system can be defined as the degree to which each of these four aspects is present in the information provided by the sensors to the system. Redundant information can usually be fused at a lower level of representation compared to complementary information because it can more easily be made commensurate. Complementary information is usually either fused at a symbolic level of representation, or provided directly to different parts of the system without being fused. While in most cases the advantages gained through the use of redundant, complementary, or more timely information in a system can be directly related to possible economic benefits, in multisensor target tracking fused information is sometimes used in a distributed network of target tracking sensors just to reduce the bandwidth required for communication between groups of sensors in the network.

B. Possible Problems

Many of the possible problems associated with creating a general methodology for multisensor integration and fusion, as well as developing the actual systems that use multiple sensors, center around the methods used for modeling the error or uncertainty in the integration and fusion process, the sensory information, and the operation of the overall system including the sensors. For the potential advantages in integrating multiple sensors to be realized, solutions to these problems will have to be found that are both practical and theoretically sound.
1) Error in the Integration and Fusion Process: The major problem in integrating and fusing redundant information from multiple sensors is that of registration: the determination that the information from each sensor is referring to the same features in the environment. The registration problem is termed the correspondence and data association problem in stereo vision and multitarget tracking research, respectively. Barniv and Casasent [5] have used the correlation coefficient between pixels in the grey levels of images as a measure of the degree of registration of objects in the images from multiple sensors. Hsiao [26] has detailed the different geometric transformations needed for

registration. Lee and Van Vleet [31] and Holm [25] have studied the registration errors between radar and infrared sensors. Lee and Van Vleet have presented an approach that is able to both estimate and minimize the registration error, and Holm has developed a method that is able to autonomously compensate for registration errors in both the total scene as perceived by each sensor (macroregistration) and the individual objects in the scene (microregistration).
2) Error in Sensory Information: The error in sensory information is usually assumed to be caused by a random noise process that can be adequately modeled as a probability distribution. The noise is usually assumed not to be correlated in space or time (i.e., white), Gaussian, and independent. The major reason these assumptions are made is that they enable a variety of fusion techniques to be used that have tractable mathematics and yield useful results in many applications. If the noise is correlated in time (e.g., gyroscope error) it is still sometimes possible to retain the whiteness assumption through the use of a shaping filter [37]. The Gaussian assumption can only be justified if the noise is caused by a number of small


independent sources. In many fusion techniques the consistency of the sensor measurements is increased by first eliminating spurious sensor measurements so that they are not included in the fusion process. Many of the techniques of robust statistics (e.g., ε-contamination) can be used to eliminate spurious measurements. The independence assumption is usually reasonable so long as the noise sources do not originate from within the system.
3) Error in System Operation: When error occurs during operation due to possible coupling effects between components

of a system, it may still be possible to make the assumption that the sensor measurements are independent if the error, after calibration, is incorporated into the system model through the addition of an extra state variable [37]. In well-known environments the calibration of multiple sensors will usually not be a difficult problem, but when multisensor systems are used in unknown environments, it may not be possible to calibrate the sensors. Possible solutions to this problem may require the creation of detailed knowledge bases for each type of sensor so that a system can autonomously calibrate itself. One other important feature required of any intelligent multisensor system is the ability to recognize and recover from sensor failure (cf. [8] and [27]).
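As an illustration of the spurious-measurement screening mentioned above, the following minimal Python sketch uses a simple median-based test to discard outlying readings before they reach the fusion step. The paper only refers to robust statistics (e.g., ε-contamination) in general terms; the particular test and its threshold are illustrative assumptions, not the authors' method.

```python
import statistics

def screen_spurious(readings, k=3.0):
    """Discard readings that deviate from the median by more than k
    median-absolute-deviations; the survivors are passed on to fusion.
    A simple stand-in for the robust screening step described above."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings) or 1e-9
    return [r for r in readings if abs(r - med) <= k * mad]

# Five sensors observe the same range; one has failed and reads far off.
print(screen_spurious([10.1, 9.9, 10.2, 10.0, 57.3]))  # -> [10.1, 9.9, 10.2, 10.0]
```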

MULTISENSOR INTEGRATION

The means by which multiple sensors are integrated into the operation of an intelligent machine or system are usually a major factor in the overall design of the system. The specific

capabilities of the individual sensors and the particular form of the information they provide will have a major influence on the design of the overall architecture of the system. These factors, together with the requirements of the particular tasks the system is meant to perform, make it difficult to define any specific general-purpose methods and techniques that encompass all of the different aspects of multisensor integration. Instead, what has emerged from the work of many researchers is a number of different paradigms, frameworks, and control structures for integration that have proved to be particularly useful in the design of multisensor systems (see [33] for a review). Many of the paradigms, frameworks, and control structures used for multisensor integration have been adapted with little or no modification from similar high-level constructs used in systems analysis, computer science, control theory, and artificial intelligence (AI). In fact, much of multisensor integration research can be viewed as the particular application of a wide range of fundamental systems design principles. Common themes among these constructs that have particular importance for multisensor integration are the notions of modularity, hierarchical structures, and adaptability. In a manner similar to structured programming, modularity in the design of the functions needed for integration can reduce the complexity of the overall integration process and can increase its flexibility by allowing many of the integration functions to be designed to be independent of the particular sensors being used. Modularity in the operation of the integration functions enables much of the processing to be distributed across the system. The object-oriented programming paradigm and the distributed blackboard

control structure are two constructs that are especially useful in promoting modularity for multisensor integration. Hierarchical structures are useful in allowing for the efficient representation of the different forms, levels, and resolutions of the information used for sensory processing and control; e.g., the NBS Sensory and Control Hierarchy [47] and logical sensor networks [22]. Adaptability in the integration process can be an efficient means of handling the error and uncertainty inherent in the integration of multiple sensors. The use of the artificial neural network formalism allows adaptability to be directly incorporated into the integration process.

A. The Basic Integration Functions

Although the process of multisensor integration can take many different forms depending on the particular needs and design of the overall system, certain basic functions are common to most implementations. The diagram shown in Fig. 1 represents multisensor integration as being a composite of these basic functions. A group of n sensors provide input to the integration process. In order for the data from each sensor to be used for integration it must first be effectively modeled. A sensor model represents the uncertainty and error in the data from each sensor and provides a measure of its quality that can be used by the subsequent integration functions. A common assumption is that the uncertainty in the sensory data can be adequately modeled as a Gaussian distribution. After the data from each sensor has been modeled it can be integrated into the operation of the system in accord with three different types of sensory processing: fusion, separate operation, and guiding or cueing. The data from Sensors 1 and 2 are shown in the figure as being fused. Prior to its fusion, the data from each sensor must be made commensurate. Sensor registration refers to any of the means (e.g., geometrical transformations) used to make the data from each sensor commensurate in both its spatial and temporal dimensions, i.e., that the data refer to the same location in the environment over the same time period. The different types of possible sensor data fusion are described in Section III. If the data provided by a sensor is significantly different from that provided by any other sensors in the system, its influence on the operation of the other sensors may be indirect. The separate operation of such a sensor will influence the other sensors indirectly through the effects the sensor has on the system controller and the world model. A guiding or cueing type of sensory processing refers to the situation where the data from one sensor is used to guide or cue the operation of other sensors. A typical example of this type of multisensor integration is found in many robotics applications where visual information is used to guide the operation of a tactile array mounted on the end of a manipulator. The results of the sensory processing function serve as inputs to the world model. A world model is used to store information concerning any possible state of the environment the system is expected to be operating in. A world model can include both a priori information and recently acquired sensory information. High-level reasoning processes can use the world model to make inferences that can be used to direct the subsequent processing of the sensory information and the operation of the system controller. Depending on the needs of a particular application,


information stored in the world model can take many different forms: In object recognition tasks the world model might contain just the representations of the objects the system is able to recognize, while in mobile robot navigation tasks the world model might contain the complete representation of the robot's local environment, e.g., the objects in the environment as well as local terrain features. The majority of the research related to the development of multisensor world models has been within the context of the development of suitable high-level representations for multisensor mobile robot navigation and control. Luo and Kay [33] describe a number of examples of world models used in mobile robots. The last multisensor integration function, sensor selection, refers to any means used to select the most appropriate configuration of sensors (or sensing strategy) from among the sensors available to the system. In order for selection to take place, some type of sensor performance criteria need to be established. In many cases the criteria require that the operation of the sensors be modeled adequately enough so that a cost value can be assigned to measure their performance. Two different approaches to the selection of the type, number, and configuration of sensors to be used in the system can be distinguished: preselection during design or initialization, and real-time selection in response to changing environmental or system conditions, e.g., sensor failure.
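The following is a minimal, schematic Python sketch of how the basic integration functions just described (a sensor model carrying a quality measure, registration to a common frame, and the dispatch of data into fusion or separate operation) might be organized. All names, fields, and the simple variance-weighted fuse are illustrative assumptions, not an architecture prescribed by the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    sensor_id: str
    value: float        # measurement after local processing and registration
    variance: float     # sensor-model quality measure (Gaussian assumption)
    frame: str          # coordinate frame the value is expressed in
    timestamp: float

def integrate(readings: List[Reading], world_model: dict) -> dict:
    """Toy dispatcher: readings already registered to the common 'world'
    frame are fused by a variance-weighted average; the rest contribute to
    the world model separately (the 'separate operation' path of Fig. 1)."""
    fusible = [r for r in readings if r.frame == "world"]
    if len(fusible) >= 2:
        weights = [1.0 / r.variance for r in fusible]
        world_model["fused_estimate"] = (
            sum(w * r.value for w, r in zip(weights, fusible)) / sum(weights))
    for r in readings:
        if r.frame != "world":
            world_model[r.sensor_id] = r.value
    return world_model

print(integrate([Reading("radar", 10.2, 0.04, "world", 0.0),
                 Reading("lidar", 9.9, 0.09, "world", 0.0),
                 Reading("tactile", 1.0, 0.01, "gripper", 0.0)], {}))
```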

Fig. 1. Functional diagram of multisensor integration and fusion in the operation of a system.

B. Networks and Rule-Based Systems

    Networks and rule-based systems are the most common formsof control structures used for multisensor integration. They canbe used either individually or combined together as part of anoverall control structure. They are especially useful when thesensors in a system are very dissimilar and the data they provideneeds to be fused at multiple levels of representation, i.e., fromsignal- through symbol-level fusion. Because of the particular

advantages of each structure, a rule-based system is most effective when it is used for top-level control and groups of network structures (e.g., Bayesian and neural networks) are used for lower-level control functions. Mitiche, Henderson, and Laganiere have advocated the use of decision networks for multisensor integration. In a decision network, Bayesian and neural networks can be used as evaluating mechanisms at the nodes of a tree-structured production rule-based control network. The use of networks enables hierarchical structures to be efficiently represented (e.g., the networks of logical sensors described in [22]) and allows the same formalism to be used to encode both the representational structure as well as the control structure; e.g., a hierarchical network can be used both to model an object and to control the decision process in multiple-hypothesis object recognition. The use of rule-based systems enables the implementation of many AI-based control schemes that offer extreme flexibility for integrating multiple sensors in complex systems because knowledge, in the form of production rules, can be added to the control structure in a modular and incremental fashion, and the production rules used in many systems can themselves be used for symbol-level fusion (see Section III-E-4). The major problem with rule-based systems that limits their application to all levels of the control structure needed for multisensor integration is that, unless each rule in the system represents an inference that is independent of all the other rules in the system (i.e., the rule base forms a tree structure), improper conclusions can be drawn during the reasoning process; e.g., bidirectional inferences are not correctly handled, conclusions cannot be easily retracted, and correlated sources of evidence are improperly treated [41]. A means of overcoming these difficulties, in portions of the control structure where individual rules cannot be isolated from the effects of other related nodes, is through the use of the Bayesian formalism in which conditional probability expressions are used to represent factual or empirical information. A problem with the straightforward use of conditional probability expressions is that in order to assert a fact one must know its conditional dependencies with all of the other known facts. Bayesian networks [41] can be used to encode these dependencies as directed arcs between neighboring nodes in an acyclic graph so that they can be quickly identified. The network offers a complete inference mechanism that can identify, in polynomial time, every conditional dependence relationship.
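As a small illustration of how production rules with attached certainty factors combine evidence (and why correlated evidence is a problem), the Python sketch below uses the classic MYCIN-style combination function. This is one of the rule-based calculi later compared in Section III-E; it is shown as an assumed example, not a construct taken from the paper.

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """MYCIN-style combination of two certainty factors for the same
    conclusion, assuming the two rules fire on independent evidence."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules independently support the same hypothesis with CF 0.6 and 0.5:
print(combine_cf(0.6, 0.5))   # -> 0.8
# If the two rules actually share a correlated source of evidence, this
# overstates the support; avoiding such errors is what motivates the
# Bayesian network formalism discussed above.
```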

III. MULTISENSOR FUSION

The fusion of the data or information from multiple sensors or a single sensor over time can take place at different levels of representation (sensory information can be considered data from a sensor that has been given a semantic content through processing and/or the particular context in which it was

acquired). As shown in Fig. 1, a useful categorization is to consider multisensor fusion as taking place at the signal, pixel, feature, and symbol levels of representation. Most of the sensors typically used in practice provide data that can be fused at one or more of these levels. Although the multisensor integration functions of sensor registration and sensor modeling are shown in Fig. 1 as being separate from multisensor fusion, most of the


TABLE I
COMPARISON OF FUSION LEVELS

Signal level:
  Type of sensory information: single- or multidimensional signals
  Representation level of information: low
  Model of sensory information: random variable corrupted by uncorrelated noise
  Degree of registration: spatial high, temporal high
  Means of registration: sensor coalignment (spatial); synchronization or estimation (temporal)
  Fusion method: signal estimation
  Improvement due to fusion: reduction in expected variance

Pixel level:
  Type of sensory information: multiple images
  Representation level of information: low to medium
  Model of sensory information: stochastic process on image or pixels with multidimensional attributes
  Degree of registration: spatial high, temporal medium
  Means of registration: sensor coalignment or shared optics (spatial); synchronization (temporal)
  Fusion method: image estimation or pixel attribute combination
  Improvement due to fusion: increase in performance of image processing tasks

Feature level:
  Type of sensory information: features extracted from signals and images
  Representation level of information: medium
  Model of sensory information: non-invariant geometrical form, orientation, position, and temporal extent of features
  Degree of registration: spatial medium, temporal medium
  Means of registration: geometrical transformations (spatial); synchronization (temporal)
  Fusion method: geometrical and temporal correspondence, and feature attribute combination
  Improvement due to fusion: reduced processing, increased feature measurement accuracy, and value of additional features

Symbol level:
  Type of sensory information: symbol representing decision
  Representation level of information: high
  Model of sensory information: symbol with associated uncertainty measure
  Degree of registration: spatial low, temporal low
  Means of registration: spatial attributes of symbol, if necessary (spatial); temporal attributes of symbol, if necessary (temporal)
  Fusion method: logical and statistical inference
  Improvement due to fusion: increase in truth or probability values

methods and techniques used for fusion make very strong assumptions, either explicit or implied, concerning how the data from the different sensors is modeled and to what degree the data is in registration. A fusion method that may be very sound in theory can be difficult to apply in practice if the assumed sensor model does not adequately describe the data from a real sensor, e.g., the presence of outliers due to sensor failure in an assumed normal distribution of the sensory data can render the fused data useless, or the degree of assumed sensor registration may be impossible to achieve, e.g., due to the limited resolution or accuracy of the motors used to control the sensors. The different levels of multisensor fusion can be used to provide information to a system that can be used for a variety of purposes; e.g., signal-level fusion can be used in real-time applications and can be considered as just an additional step in the overall processing of the signals, pixel-level fusion can be used to improve the performance of many image processing tasks like segmentation, and feature- and symbol-level fusion can be used to provide an object recognition system with additional features that can be used to increase its recognition capabilities. The different levels can be distinguished by the type of information they provide the system, how the sensory information is modeled, the degree of sensor registration required for fusion, the methods used for fusion, and the means by which the fusion process improves the quality of the information provided to the system. A comparison of the different levels of fusion is given below and summarized in Table I.

Most methods of multisensor fusion make explicit assumptions concerning the nature of the sensory information. The most common assumptions include the use of a measurement model for each sensor that includes a statistically independent additive Gaussian error or noise term (i.e., location data), and an assumption of statistical independence between the error terms for each sensor. Many of the differences in the fusion methods included below center on their particular techniques (e.g., calibration, thresholding) used for transforming raw sensory data into a form so that the above assumptions become reasonable and a mathematically tractable fusion method can result. An excellent introduction to the conceptual problems inherent in any fusion method based on these common assumptions has been provided by [42]. Their paper provides a proof that the inclusion of additional redundant sensory information almost always improves the performance of any fusion method that is based on optimal estimation. Clark and Yuille [11] have used a Bayesian formulation of sensory processing to provide a mathematical foundation upon which data fusion algorithms can be created and analyzed. The fusion algorithms are classified into two general types, distinguished by the manner in which information in the form of solution constraints is combined to obtain an overall solution to the multisensory processing problem. In weakly coupled data fusion algorithms the operation of the different sensory processing modules is not affected by the fusion process; in strongly coupled algorithms the output of a module does


interact with the other modules and affects their operation. Examples are given of data fusion applied to feature-level stereo, binocular and monocular depth cue, and shape from shading algorithms.

A. Fusion Levels in Automatic Target Recognition

Fig. 2 provides an example of how the different levels of multisensor fusion can be used in the task of automatic target recognition. In the figure, five sensors are being used by the system to recognize a tank: two millimeter-wave radars that could be operating at different frequencies, an infrared sensor (e.g., a forward-looking infrared sensor), a camera providing visual information, and a radio signal detector that can identify characteristic emissions originating from the tank. The complementary characteristics of the information provided by this suite of sensors can enable the system to detect and recognize targets under a variety of different operating conditions, e.g., the radars provide range information and their signals are less affected by atmospheric attenuation as compared to the infrared image, while the infrared sensor provides information of greater resolution than the radars and, unlike the camera, is able to operate at night. The two radars are assumed to be synchronized and coaligned on a platform so that their data is in registration and can be fused at the signal level. The fused signal is shown in the figure as being sent both to the system, where it can be immediately used for the improved detection of targets, and as input to generate a range image of the target. The range image from the radars can then be fused at the pixel level with the intensity image provided by the infrared sensor located on the same platform. In most cases, an element from the range image can only be registered with a neighborhood of pixels from the infrared image because of the differences in resolution between the millimeter-wave radars and the infrared sensor. The fused image is sent both to the system, where it can be immediately used to improve target segmentation, and as input so that useful target features can be extracted from the image. The features from the pixel-level fusion can then be fused at the feature level with similar features extracted from the visual image provided by the camera. The camera may be located on a different platform because the sensor registration requirements for feature-level fusion are less stringent than those for signal- and pixel-level fusion. The fused features are then sent both to the system, where they can be used to improve the accuracy in the measurement of the orientation or pose of the target, and as input features to an object recognition program. The output of the program is a symbol, with an associated measure of its quality (0.7), indicating the presence of the tank. The symbol can then be fused at the symbol level with a similar symbol derived from the radio signal detector that also indicates the presence of the tank. The fused symbol is then sent to the system for the final recognition of the tank. As shown in the figure, the measure of quality of the fused symbol (0.94) is greater than the measures of quality of either of the component symbols and represents the increase in the quality associated with the symbol as a result of the fusion, i.e., the increase in the likelihood that the target is a tank.
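As a purely illustrative calculation (the text does not state how the two symbol measures are combined, nor the measure attached to the radio-detector symbol), a noisy-OR style combination of independent evidence would reproduce the fused value shown in Fig. 2 if the radio-detector symbol carried a measure of 0.8:

1 - (1 - 0.7)(1 - 0.8) = 1 - 0.06 = 0.94.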

Fig. 2. Possible uses of signal-, pixel-, feature-, and symbol-level fusion in the automatic recognition of a tank.

The transformation from lower to higher levels of representation as the information moves up through the target recognition structure shown in Fig. 2 is common in most multisensor integration processes. At the lowest level, raw


sensory data are transformed into information in the form of a signal. As a result of a series of fusion steps, the signal is transformed into progressively more abstract numeric and symbolic representations. This signals-to-symbols phenomenon is also common in computational vision and AI.

B. Signal-Level Fusion

Signal-level fusion refers to the combination of the signals of a group of sensors to provide a signal that is usually of the same form as the original signals but of greater quality. The signals from the sensors can be modeled as random variables corrupted by uncorrelated noise, with the fusion process considered as an estimation procedure. As compared to the other types of fusion, signal-level fusion requires the greatest degree of registration between the sensory information. If multiple sensors are used their signals must be in temporal as well as spatial registration. If the signals from the sensors are not synchronized they can be put into temporal registration by estimating their values at common points of time. The signals can be registered spatially by having the sensors coaligned on the same platform. Signal-level fusion is usually not feasible if the sensors are distributed on different platforms due to registration difficulties and bandwidth limitations involved in communicating the signals between the platforms. The most common means of measuring the improvement in quality is the reduction in the expected variance of the fused signal (see, e.g., Fig. 3(d)). One means of implementing signal-level fusion is by taking a weighted average of the composite signals, where the weights are based on the estimated variances of the signals. If the signals are multidimensional the Kalman filter, for example, can be used for fusion.

1) Weighted Average: One of the simplest and most intuitive general methods of fusion is to first threshold redundant sensory information provided by a group of sensors to eliminate spurious measurements, and then take a weighted average of the information and use this as the fused value. While this method allows for the real-time processing of dynamic low-level data, in most cases the Kalman filter is preferred because it provides a method that is nearly equal in processing requirements and, in contrast to a weighted average, results in estimates for the fused data that are optimal in a statistical sense (a Kalman filter for a one-dimensional signal provides optimum weighting factors).
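A minimal Python sketch of this variance-weighted averaging, assuming each sensor is modeled as z_i = x + v_i with independent zero-mean Gaussian noise v_i (the common measurement model described earlier). For a scalar, static quantity these weights coincide with those a one-dimensional Kalman measurement update would assign; the function name and the numbers in the example are illustrative.

```python
def fuse_weighted(measurements, variances):
    """Variance-weighted average of redundant scalar measurements.
    Each sensor i is modeled as z_i = x + v_i with v_i ~ N(0, variances[i]),
    independent across sensors.  Returns the fused estimate and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    x_hat = sum(w * z for w, z in zip(weights, measurements)) / total
    var_hat = 1.0 / total        # always <= min(variances): the reduction in
    return x_hat, var_hat        # expected variance noted in Table I

# Two redundant range sensors, the second noisier than the first:
print(fuse_weighted([10.2, 9.8], [0.04, 0.16]))   # -> (10.12, 0.032)
```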

2) Kalman Filter: The Kalman filter (see [37] for a general introduction) is used in a number of multisensor systems when it is necessary to fuse dynamic low-level redundant data in real time. The filter uses the statistical characteristics of the measurement model to recursively determine estimates for the fused data that are optimal in a statistical sense. If the system can be described with a linear model and both the system and sensor error can be modeled as white Gaussian noise, the Kalman filter will provide unique statistically optimal estimates for the fused data. The recursive nature of the filter makes it appropriate for use in systems without large data storage capabilities. Examples of the use of the filter for multisensor fusion include: object recognition using sequences of images, robot navigation, multitarget tracking, inertial navigation, and remote sensing. In some of these applications the U-D (unit upper triangular and diagonal matrix) covariance factorization filter or the extended

Kalman filter is used in place of the conventional Kalman filter if, respectively, numerical instability or the assumption of approximate linearity for the system model presents potential problems. Durrant-Whyte, Rao, and Hu [15] have used the extended Kalman filter in a decentralized fusion architecture, and Ayache and Faugeras [3] have used it for building and updating the three-dimensional world model of a mobile robot.
3) Consensus Sensors: Luo, Lin, and Scherp [34] have developed a method for the fusion of redundant information from multiple sensors that can be used within a hierarchical phase-template paradigm for multisensor integration. The central idea behind the method is to first eliminate from consideration the sensor information that is likely to be in error and then use the information from the remaining consensus sensors to calculate a fused value. The information from each sensor is represented as a probability density function and the optimal fusion of the information is determined by finding the Bayesian estimator that maximizes the likelihood function of the consensus sensors.

C. Pixel-Level Fusion

Pixel-level fusion can be used to increase the information content associated with each pixel in an image formed through a combination of multiple images, e.g., the fusion of a range image with a two-dimensional intensity image adds depth information to each pixel in the intensity image that can be useful in the subsequent processing of the image. The different images to be fused can come from a single imaging sensor (e.g., a multispectral camera) or a group of sensors (e.g., stereo cameras). The fused image can be created either through pixel-by-pixel fusion or through the fusion of associated local neighborhoods of pixels in each of the component images. The images to be fused can be modeled as a realization of a stochastic process defined across the image (e.g., a Markov random field), with the fusion process considered as an estimation procedure, or the information associated with each pixel in a component image can be considered as an additional dimension of the information associated with its corresponding pixel in the fused image (e.g., the two dimensions of depth and intensity associated with each pixel in a fused range-intensity image). Sensor registration is not a problem if either a single sensor is used or multiple sensors are used that provide images of the same resolution and share the same optics and mechanics (e.g., a laser radar operating at the same frequency as an infrared sensor and sharing the same optics and scanning mechanism). If the images to be fused are of different resolution, then a mapping needs to be specified between corresponding regions in the images. The sensors used for pixel-level fusion need to be accurately aligned so that their images will be in spatial registration. This is usually achieved through locating the sensors on the same platform. The disparity between the locations of the sensors on the platform can be used as an important source of information in the fusion process, e.g., to determine a depth value for each pixel in binocular fusion. The improvement in quality associated with pixel-level fusion can most easily be assessed through the improvements noted in the performance of image processing tasks (e.g., segmentation, feature extraction, and restoration) when the fused image is being used as compared to the use of the individual component images.
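A minimal Python sketch (using NumPy) of the two views of pixel-level fusion just described: treating a registered range image as an extra per-pixel attribute of an intensity image, and forming a single composite image by a pixel-wise combination. The images, weights, and normalization are synthetic and illustrative; real use assumes the images are already registered and of equal resolution.

```python
import numpy as np

# Two registered, equal-resolution images of the same scene (synthetic here):
intensity = np.random.rand(64, 64)          # e.g., from an infrared sensor
range_img = np.random.rand(64, 64) * 50.0   # e.g., from a laser radar, in meters

# (1) Treat range as an extra per-pixel attribute: each pixel of the fused
# image carries a (depth, intensity) pair for later feature extraction.
fused_attributes = np.stack([range_img, intensity], axis=-1)   # shape (64, 64, 2)

# (2) Or form a single composite image by a pixel-wise combination, here a
# fixed-weight average of the normalized images (the weights are arbitrary).
composite = 0.5 * intensity + 0.5 * (range_img / range_img.max())

print(fused_attributes.shape, composite.shape)
```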

    713

  • 8/6/2019 A Tutorial on Multi Sensor Integration and Fusion

    8/16

The fusion of multisensor data at the pixel level can serve to increase the useful information content of an image so that more reliable segmentation can take place and more discriminating features can be extracted for further processing. Pixel-level fusion can take place at various levels of representation: the fusion of the raw signals from multiple sensors prior to their association with a specific pixel, the fusion of corresponding pixels in multiple registered images to form a composite or fused image, and the use of corresponding pixels or local groups of pixels in multiple registered images for segmentation and pixel-level feature extraction (e.g., an edge image). Fusion at the pixel level is useful in terms of total system processing requirements because use is made of the multisensor data prior to processing-intensive functions like feature matching, and can serve to increase overall performance in tasks like object recognition because the presence of certain substructures like edges in an image from one sensor usually indicates their presence in an image from another sufficiently similar sensor. Duane [14] has reported better object classification performance using features derived from the pixel-level fusion of TV and forward-looking infrared images as compared to the combined use of features derived independently from each separate image. In order for pixel-level fusion to be feasible, the data provided

by each sensor must be able to be registered at the pixel level and, in most cases, must be sufficiently similar in terms of its resolution and information content. The most obvious candidates for pixel-level fusion include sequences of images from a single sensor and images from a group of identical sensors (e.g., stereo vision). Other sensor combinations that make extensive use of pixel-level fusion include a coaligned laser radar and forward-looking infrared sensor. Although it is possible to use many of the general multisensor fusion methods for pixel-level fusion, e.g., Bayesian estimation, four methods are discussed in this section that are particularly useful for fusion at the pixel level: logical filters, mathematical morphology, image algebra, and simulated annealing. What makes these four methods useful for pixel-level fusion is that 1) each method facilitates highly parallel processing because, at most, only a local group of pixels is used to process each pixel, and 2) each method can easily be used to process a wide variety of images from different types of sensors because no problem- or sensor-specific probability distributions for pixel values are required, thus alleviating the need for either assuming a particular distribution or estimating a distribution through supervised training (only very general assumptions concerning pixel statistics are needed in simulated annealing to characterize a Markov random field for an image).
1) Logical Filters: One of the most intuitive methods of fusing the data from two pixels is to apply logical operators, e.g., if the values of both pixels are above particular thresholds the resulting AND filter is assumed to be true. Features derived from an image to which the AND filter was applied could then be expected to correspond to significant aspects of the environment.

In a similar manner, an OR filter could be used to very reliably segment an image because all of the available information would be used. Ajjimarangsee and Huntsberger [2] have made use of some of the results concerning the means by which rattlesnakes fuse visual and infrared information to develop a set of logical filters that can be used for the unsupervised clustering

of visual and infrared remote sensing information. Six logical filters are applied to the remote sensing information that correspond to the six types of bimodal neurons found in the optic tectum of the rattlesnake: AND, OR, visible-enhanced infrared, infrared-enhanced visible, visible-inhibited infrared, and infrared-inhibited visible filters. The two inhibitory filters in effect implement an exclusive-OR filter.
2) Mathematical Morphology: Mathematical morphology [20] is a method of image analysis that transforms each pixel of an image through the use of a set of morphological operators derived from the basic operations of set union, intersection, difference, and their conditional combinations; e.g., dilation and erosion operators are used to expand and shrink an image, respectively. Lee [20] has used binary morphological processing for the fusion of registered images from a pair of millimeter-wave radars operating at different frequencies. The fused image was found to improve road and terrain boundary detection. The binary morphological processing starts with two pixel-level feature sets extracted from the images of both radars. A high-confidence core feature set is derived from both feature sets through set intersection if both sets support each other, and through set difference if both sets are competing. A potential feature set is derived from both feature sets through set union if the sets are

supporting, and through the union of one set with the complement of the other if both sets are competing. The morphological operations of conditional dilation and conditional erosion are used to fuse the core and potential feature sets. Conditional dilation extracts only those connected components of the potential feature set that have a non-empty intersection with the core feature set and is especially useful in rejecting clutter when the potential feature set includes both good feature and clutter components. Conditional erosion is useful for filling in missing segments of component boundaries in the core feature set.
3) Image Algebra: Image algebra [43] is a high-level algebraic language for describing image processing algorithms that is sufficiently complex to provide an environment for multisensor

pixel-level fusion. The four basic types of image algebra operands are coordinate sets, value sets, images, and templates. Coordinate sets can be defined as rectangular, hexagonal, or toroidal discrete arrays, or layers of rectangular arrays, and provide a coherent approach to the representation of sensor images that may have different tessellations or resolutions. If the images from the multiple sensors used for fusion have identical underlying coordinate systems the coordinate set is called homogeneous; otherwise it is termed heterogeneous. Value sets usually correspond to the set of integers, real or complex numbers, or binary numbers of a fixed length, and have the usual arithmetical and logical operations defined on them. A value set is called homogeneous if all of its values come from the same set of numbers; otherwise it is termed heterogeneous. Images are the most fundamental of image algebra's operands and are defined as the graph of a function from a coordinate set to a value set. Templates and template operations are the most powerful tool of image algebra and serve to unify and generalize into one mathematical entity the concepts of templates, masks, windows, the structuring elements of mathematical morphology, and other functions defined on pixel neighborhoods. Template operations


are used to transform images, and can be used to define a particular image processing algorithm in an implementation- and machine-independent way. The three basic template operations used to transform real-valued images are generalized convolution, multiplicative maximum, and additive maximum. Template operations can be used for local and global convolutions and for changing the dimensionality, or size and shape, of images.
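As an illustration of what the generalized-convolution template operation amounts to for a real-valued image, the following Python sketch applies a small template to every pixel neighborhood. It is a plain neighborhood convolution written for clarity, not an implementation of the full image algebra formalism; the template and test image are arbitrary.

```python
import numpy as np

def generalized_convolution(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Apply a small template to every pixel: each output value is the sum of
    the template weights times the corresponding neighborhood values.  This is
    the 'generalized convolution' template operation in its simplest form."""
    th, tw = template.shape
    ph, pw = th // 2, tw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + th, j:j + tw] * template)
    return out

# A 3x3 averaging template applied to a synthetic image:
img = np.arange(25, dtype=float).reshape(5, 5)
print(generalized_convolution(img, np.full((3, 3), 1.0 / 9.0)))
```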

Ritter, Wilson, and Davidson have discussed the use of image algebra for multisensor pixel-level fusion. They have defined an image to be a multisensor image if its coordinate set is heterogeneous, and multivalued if its value set is homogeneous or multidata if it is heterogeneous. A data fusion function is any function that maps a value set of higher dimension to one of lower dimension. The most common data fusion operation is the reduce operation where, e.g., a vector-valued image is reduced to a real-valued image. A multilevel template can be thought of as a stack of templates that can operate differently on the different levels of a multivalued or multisensor image. Chen [9] has extended the basic image algebra formalism to include incomplete and uncertain information. A stochastic image algebra is defined for image analysis through the use of a three-dimensional Markov random field as a model of an image. The use of a Markov random field to model images allows relaxation techniques like simulated annealing to be used for processing.

4) Simulated Annealing: Simulated annealing is a relaxation-based optimization technique that, when used in image processing applications [19], [50], amounts to viewing pixel values and the neighborhood in which they reside as states of atoms or molecules in a physical system. An energy function is assigned to the physical system and determines its Gibbs distribution. Due to the equivalence of the Gibbs distribution to a Markov random field, the energy function also determines an image model if the image can be represented as a Markov random field. Gradual temperature reductions in the energy function are used to relax or anneal the physical system towards a global minimum energy state which corresponds to the maximum a posteriori estimate of the true image given an initial image that is in some way corrupted. Local minimum energy states are avoided during the relaxation process because changes in the system toward lower energy states are only favored and not strictly imposed. The use of simulated annealing for pixel-level fusion reduces to the problem of finding energy functions that can adequately describe appropriate constraints on the final fused image. Wright [51] has generalized the basic simulated annealing technique for image processing by creating a probability measure on a Markov random field which, in addition to modeling the information content of a single image, takes into consideration the information content of other registered images of the same view. Images from dissimilar sensors can be fused at the pixel level because the measure does not directly use the absolute values of the pixels in each image. Landa and Scheff [28] and Clifford and Nasrabadi [12] have used simulated annealing for the pixel-level binocular fusion of the images from two cameras in order to estimate depth. Clifford and Nasrabadi's fusion method uses intensity- and edge-based images together with optical flow data to compensate for partially occluded regions in images.
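A toy Python sketch of simulated-annealing pixel-level fusion under the kind of Markov-random-field model described above: the energy of a candidate fused image penalizes disagreement with each of two registered sensor images and differences between neighboring pixels, and Metropolis updates with a decreasing temperature drive the image toward a low-energy estimate. The energy function, noise levels, and schedule are arbitrary illustrative choices, not those of Wright [51] or the other cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_fuse(img_a, img_b, smooth=0.5, t0=1.0, cooling=0.95, sweeps=30):
    """Fuse two registered images by minimizing a simple MRF-style energy:
    data terms tying each pixel to both sensor images plus a smoothness prior.
    Metropolis updates accept uphill moves with probability exp(-dE/T)."""
    f = 0.5 * (img_a + img_b)                  # initial estimate
    h, w = f.shape
    t = t0
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                old, new = f[i, j], f[i, j] + rng.normal(0.0, 0.1)
                def local_energy(v):
                    e = (v - img_a[i, j]) ** 2 + (v - img_b[i, j]) ** 2
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += smooth * (v - f[ni, nj]) ** 2
                    return e
                d_e = local_energy(new) - local_energy(old)
                if d_e < 0 or rng.random() < np.exp(-d_e / t):
                    f[i, j] = new
        t *= cooling                           # gradual temperature reduction
    return f

# Two noisy views of the same simple scene:
truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1.0
a = truth + rng.normal(0, 0.3, truth.shape)
b = truth + rng.normal(0, 0.3, truth.shape)
fused = anneal_fuse(a, b)
print(float(np.mean((a - truth) ** 2)), float(np.mean((fused - truth) ** 2)))
```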

D. Feature-Level Fusion

Feature-level fusion can be used to both increase the likelihood

that a feature extracted from the information provided by a sensor actually corresponds to an important aspect of the environment and as a means of creating additional composite features for use by the system. A feature provides for data abstraction and is created either through the attachment of some type of semantic meaning to the results of the processing of some spatial and/or temporal segment of the sensory data or, in the case of fusion, through a combination of existing features. Typical features extracted from an image and used for fusion include edges and regions of similar intensity or depth. When multiple sensors report similar features at the same location in the environment, the likelihood that the features are actually present can be increased and the accuracy with which they are measured can be improved; features that do not receive such support can be dismissed as spurious artifacts and eliminated. An additional feature, created as a result of the fusion process, may be either a composite of the component features (e.g., an edge that is composed of segments of edges detected by different sensors) or an entirely new type of feature that is composed of the attributes of its component features (e.g., a three-dimensional edge formed through the fusion of corresponding edges in the images provided by stereo cameras). The geometrical form, orientation, and position of a feature, together with its temporal extent, are the most important aspects of the feature that need to be represented so that it can be registered and fused with other features. In some cases, a feature can be made invariant to certain geometrical transformations (e.g., translation and rotation in an image plane) so that all of these aspects do not have to be explicitly represented. The sensor registration requirements for feature-level fusion are less stringent than those for signal- and pixel-level fusion, with the result that the sensors can be distributed across different platforms. The geometric transformation of a feature can be used to bring it into registration with other features or with a world model. The improvement in quality associated with feature-level fusion can be measured through the reduction in processing requirements resulting from the elimination of spurious features, the increased accuracy in the measurement of a feature (used, e.g., to determine the pose of an object), and the increase in performance associated with the use of additional features created through fusion (e.g., increased object recognition capabilities).

E. Symbol-Level Fusion

Symbol-level fusion allows the information from multiple sensors to be effectively used together at the highest level of abstraction. Symbol-level fusion may be the only means by which sensory information can be fused if the information provided by the sensors is very dissimilar or refers to different regions in the environment. The symbols used for fusion can originate either from the processing of the information provided by the sensors in the system, or through symbolic reasoning processes that may make use of a priori information from a world model or sources external to the system (e.g., intelligence reports indicating the likely presence of certain targets in the environment). A symbol derived from sensory information


Fig. 3. The discrimination of four different objects using redundant and complementary information from three sensors. (a) Four objects (A, B, C, and D) distinguished by the features shape (square vs. round) and temperature (hot vs. cold). (b) 2-D distributions from Sensor 1 (shape). (c) Sensor 2 (shape). (d) 2-D distributions resulting from fusion of redundant shape information from Sensors 1 and 2. (e) 3-D distributions resulting from fusion of complementary information from Sensors 1 and 2 (shape) and Sensor 3 (temperature).

represents a decision that has been made concerning some aspect of the environment (symbol-level fusion is sometimes termed decision-level fusion). The decision is usually made by matching features derived from the sensory information to a model. The symbols used for fusion typically have associated with them a measure of the degree to which the sensory information matches the model. A single uncertainty measure is used to represent both the degree of mismatch and any of the inherent uncertainty in the sensory information provided by the sensors. The measure can be used to indicate the relative weight that a particular symbol should be given in the fusion process. Sensor registration is usually not explicitly considered in symbol-

level fusion because the spatial and temporal extent of the sensory information upon which a symbol is based has already been explicitly considered in the generation of the symbol, e.g., the underlying features upon which a group of symbols are based are already in registration. If the symbols to be fused are not in registration, spatial and temporal attributes can be associated with the symbols and used for their registration. Different forms of logical and statistical inference are used for symbol-level fusion. In logical inference the individual symbols to be fused represent terms in logical expressions and the uncertainty measures represent the truth values of the terms. If a logical expression represents a production rule, the uncertainty measures can be used to create certainty factors for the terms. In statistical inference the individual symbols to be fused are represented as conditional probability expressions and their uncertainty measures correspond to the probability measures associated with the expressions. The improvement in quality associated with symbol-level fusion is represented by the increase in the truth or probability values of the symbols created as a result of the inference process. Henkind and Harrison [24] have analyzed and compared four of the uncertainty calculi used in many symbol-level fusion techniques: Bayesian estimation, Dempster-Shafer evidential reasoning, fuzzy set theory, and the confidence factors used in production rule-based systems. The computational complexity of these calculi is compared and their underlying assumptions are made explicit. Cheng and Kashyap [10] have compared the use of Bayesian estimation and Dempster-Shafer evidential reasoning for evidence combination.
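To make one of these calculi concrete, the Python sketch below implements Dempster's rule of combination for two simple basic probability assignments over a two-element frame of discernment (say, a tank/truck decision of the kind sketched in Fig. 2). The mass values are invented for illustration; the paper does not give a numerical Dempster-Shafer example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.
    Focal elements are frozensets over the frame of discernment; mass falling
    on the empty intersection (conflict) is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

TANK, TRUCK = frozenset({"tank"}), frozenset({"truck"})
EITHER = TANK | TRUCK
m_radar = {TANK: 0.6, EITHER: 0.4}                # radar-derived symbol
m_radio = {TANK: 0.5, TRUCK: 0.2, EITHER: 0.3}    # radio-detector symbol
print(dempster_combine(m_radar, m_radio))         # belief in TANK rises to ~0.77
```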

1) An Object Recognition Example: Fig. 3 illustrates the distinction between complementary and redundant information in the task of object recognition. Four objects are shown in Fig. 3(a). They are distinguished by the two independent features shape and temperature. Sensors 1 and 2 provide redundant information concerning the shape of an object, and Sensor 3 provides information concerning its temperature. Fig. 3(b) and (c) show hypothetical frequency distributions for both square and round objects, representing each sensor's historical (i.e., tested) responses to such objects. The bottom axes of both figures represent the range of possible sensor readings. The output values x1 and x2 correspond to some numerical degree of squareness or roundness of the object as determined by each sensor, respectively. Because Sensors 1 and 2 are not able to detect the temperature of an object, objects A and C (as well as B and D) cannot be distinguished. The dark portion of the axis in each figure corresponds to the range of output values where there is uncertainty as to the shape of the object being detected. The dashed line in each figure corresponds to the point at which, depending on the output value, objects can be distinguished in terms of a feature. Fig. 3(d) is the frequency distribution resulting from the fusion of x1 and x2. Without specifying a particular method of fusion, it is usually true that the distribution corresponding to the fusion of redundant information would have less dispersion than its component distributions. Under very general assumptions, a plausibility argument can be made that the relative probability of the fusion process not reducing the uncertainty is zero [42]. The uncertainty in Fig. 3(d) is shown as approximately half that of Fig. 3(b) and (c). In Fig. 3(e),




complementary information from Sensor 3 concerning the independent feature temperature is fused with the shape information from Sensors 1 and 2 shown in Fig. 3(d). As a result of the fusion of this additional feature, it is now possible to discriminate between all four objects. This increase in discrimination ability is one of the advantages resulting from the fusion of complementary information. As mentioned above, the information resulting from this second fusion could be at a higher representational level (e.g., the result of the first fusion, x1,2, may still be a numerical value, while the result of the second, x1,2,3, could be a symbol representing one of the four possible objects).
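To make the dispersion-reduction claim concrete, the following Python sketch fuses two redundant, independent Gaussian shape estimates by inverse-variance weighting. It is an illustration added here rather than a method from the paper, and the readings x1, x2 and their noise variances are invented for the example. When the two sensors have equal variance, the fused variance is half that of either sensor, matching the roughly halved uncertainty depicted in Fig. 3(d).

def fuse_gaussian(mean1, var1, mean2, var2):
    """Fuse two independent Gaussian estimates of the same quantity by
    inverse-variance weighting (the minimum-variance linear combination)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused_var = 1.0 / (w1 + w2)
    fused_mean = fused_var * (w1 * mean1 + w2 * mean2)
    return fused_mean, fused_var

# Hypothetical "squareness" readings x1 and x2 from Sensors 1 and 2,
# with assumed (equal) noise variances.
x1, var1 = 0.80, 0.04
x2, var2 = 0.74, 0.04
x12, var12 = fuse_gaussian(x1, var1, x2, var2)
print(x12, var12)   # 0.77, 0.02 -- the fused variance is half of either sensor's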

2) Bayesian Estimation: Bayesian estimation provides a formalism for multisensor fusion that allows sensory information to be combined according to the rules of probability theory. Uncertainty is represented in terms of conditional probabilities P(Y | X), where P(Y) = P(Y | X) if X remains constant. Each P(Y | X) takes a value between 0 and 1, where 1 represents absolute belief in proposition Y given the information represented by proposition X and 0 represents absolute disbelief. Bayesian estimation is based on the theorem from basic probability theory known as Bayes' rule:

P(Y | X) = P(X | Y) P(Y) / P(X),

where P(Y | X), the posterior probability, represents the belief accorded to the hypothesis Y given the information represented by X, which is calculated by multiplying the prior probability associated with Y, P(Y), by the likelihood P(X | Y) of receiving X given that Y is true. The denominator P(X) is a normalizing constant. The redundant information from a group of n sensors, S1 through Sn, can be fused together using the odds and likelihood ratio formulation of Bayes' rule. The information represented by

Xi concerning Y from Si is characterized by the likelihood P(Xi | Y) and the likelihood P(Xi | ¬Y) given the negation of Y, or by the likelihood ratio

λ(Xi | Y) = P(Xi | Y) / P(Xi | ¬Y).

Defining the prior odds on Y as

O(Y) = P(Y) / P(¬Y) = P(Y) / (1 − P(Y)),

and assuming that the operation of each sensor is independent of the operation of the other sensors in the system, the posterior odds on Y given the information X1, ..., Xn from the n sensors are given by the product

O(Y | X1, ..., Xn) = O(Y) ∏i λ(Xi | Y).

The posterior odds are related to the posterior probability by

P(Y | X1, ..., Xn) = O(Y | X1, ..., Xn) / (1 + O(Y | X1, ..., Xn)).

The above formulation can also be used to fuse together a sequence of information from a single sensor provided that the uncertainty of the information can be assumed to be independent over time.

The application of Bayesian estimation for multisensor fusion can be illustrated using the object recognition example given above. Sensors 1 and 2, S1 and S2, provide redundant

information relative to each other concerning the shape of the objects to be recognized. Let the propositions S and R represent the hypotheses that the object being sensed is square or round, respectively, and let S1, R1, S2, and R2 represent the shape indicated in the information provided by S1 and S2. Given the information P(S1 | S) = 0.82 and P(S2 | S) = 0.71 from S1 and S2 concerning the hypothesis S, and assuming that square or round objects are equally likely to be encountered, i.e., P(S) = P(R) = 0.5, the posterior odds on S given the fusion of the information from both sensors are

O(S | S1, S2) = [0.5 / (1 − 0.5)] · [0.82 / (1 − 0.82)] · [0.71 / (1 − 0.71)] ≈ 11.2,

which corresponds to a posterior probability of

P(S | S1, S2) ≈ 11.2 / (1 + 11.2) ≈ 0.92.

In a similar manner, given the information P(R1 | R) = 0.12 and P(R2 | R) = 0.14, the posterior probability accorded the hypothesis R can be determined to be 0.02. The posterior probabilities of both hypotheses do not sum to unity in this example due to an assumed inherent uncertainty in the operation of S1 and S2 of 6 and 15 percent, respectively. If, for example, it is known a priori that only a third of the objects likely to be encountered are square, the posterior odds on S would be reduced by half and the odds on R would double.
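As a cross-check of these figures, the following Python sketch (added here for illustration, not taken from the paper) implements the odds and likelihood ratio fusion described above. It assumes, as the worked example appears to, that 1 − P(Xi | Y) can be used as the likelihood of the evidence under the negated hypothesis; with that assumption it reproduces the 0.92 and 0.02 posteriors quoted in the text.

def fuse_posterior(prior, likelihoods):
    """Fuse independent sensor likelihoods P(Xi | Y) into a posterior P(Y | X1..Xn)
    using the odds / likelihood-ratio form of Bayes' rule."""
    odds = prior / (1.0 - prior)         # prior odds O(Y)
    for p in likelihoods:
        odds *= p / (1.0 - p)            # likelihood ratio contributed by each sensor
    return odds / (1.0 + odds)           # convert posterior odds back to a probability

print(fuse_posterior(0.5, [0.82, 0.71]))   # hypothesis S (square): about 0.92
print(fuse_posterior(0.5, [0.12, 0.14]))   # hypothesis R (round):  about 0.02
print(fuse_posterior(1/3, [0.82, 0.71]))   # a prior of 1/3 square halves the odds on S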

3) Dempster-Shafer Evidential Reasoning: Garvey, Lowrance, and Fischler [18] introduced the possibility of using Dempster-Shafer evidential reasoning for multisensor fusion. The use of evidential reasoning for fusion allows each sensor to contribute information at its own level of detail, e.g., one sensor may be able to provide information that can be used to distinguish individual objects, while the information from another sensor may only be able to distinguish classes of objects; the Bayesian approach, in contrast, would not be able to fuse the information from both sensors. Dempster-Shafer evidential reasoning [46] is an extension to the Bayesian approach that makes explicit any lack of information concerning a proposition's probability by separating firm belief for the proposition from just its plausibility. In the Bayesian approach all propositions (e.g., objects in the environment) for which there is no information are assigned an equal a priori probability. When additional information from a sensor becomes available and the number of unknown propositions is large relative to the number of known propositions, an intuitively unsatisfying result of the Bayesian approach is that the probabilities of known propositions become unstable. In the Dempster-Shafer approach this is avoided by not assigning unknown propositions an a priori probability (unknown propositions are assigned instead to ignorance). Ignorance is reduced (i.e., probabilities are assigned to these propositions) only when supporting information becomes available.

In Dempster-Shafer evidential reasoning the set Θ, termed the frame of discernment, is composed of mutually exclusive and exhaustive propositions termed singletons. The level of detail represented by a singleton corresponds to the lowest level of information that is able to be discerned through the fusion of information from a group of sensors or other information sources, e.g., a knowledge base. Given n singletons, the power set of Θ, denoted by 2^Θ, contains 2^n elements and is composed of all the subsets of Θ, including Θ itself, the empty set ∅, and each of the singletons. The elements of 2^Θ are termed propositions and each subset is composed of a disjunction of singletons. The set of propositions {Aj | Aj ∈ 2^Θ} for which a sensor is able to provide direct information are termed its focal elements. For each sensor Si, the function

mi : {Aj | Aj ∈ 2^Θ} → [0, 1],

termed a basic probability assignment, maps a unit of probability mass or belief across the focal elements of Si subject to the conditions

mi(∅) = 0

and

Σ_{Aj ∈ 2^Θ} mi(Aj) = 1.

Any probability mass not assigned to a proper subset of Θ is included in mi(Θ) and is assumed to represent the residual uncertainty of Si that is distributed in some unknown manner among its focal elements.

A belief or support function, defined for Si as

beli(A) = Σ_{Aj ⊆ A} mi(Aj),

is used to determine the lower probability or minimum likelihood of each proposition A. In a similar manner, doubt, plausibility, and uncertainty functions are defined as

dbti(A) = beli(A^c),

plsi(A) = 1 − dbti(A),

and

ui(A) = plsi(A) − beli(A).

The degree of doubt in A is the degree of belief in the complement of A. The plausibility function determines the upper probability or maximum likelihood of A and represents the mass that is free to move to the belief of A as additional information becomes available. The uncertainty of A represents the mass that has not been assigned for or against belief in A. The Bayesian approach would correspond to the situation where ui(A) = 0 for all A ∈ 2^Θ. The Dempster-Shafer formalism allows the representation of total ignorance for a proposition A since bel(A) = 0 does not imply dbt(A) > 0, even though dbt(A) = 1 does imply bel(A) = 0. The interval [bel(A), pls(A)] is termed a belief interval and represents, by its magnitude, how conclusive the information is for proposition A, e.g., total ignorance concerning A is represented as [0, 1], while [0, 0] and [1, 1] represent A as being false and true, respectively. Dempster's rule of combination is used to fuse together the propositions X and Y from the two sensors Si and Sj:

mi,j(A) = [ Σ_{X ∩ Y = A} mi(X) mj(Y) ] / [ 1 − Σ_{X ∩ Y = ∅} mi(X) mj(Y) ]

whenever A ≠ ∅, and where mi,j is the orthogonal sum mi ⊕ mj and X, Y ∈ 2^Θ. The denominator is a normalization factor that forces the new masses to sum to unity, and may be viewed as a measure of the degree of conflict or inconsistency in the information provided by Si and Sj. If the factor is equal to 0 the sensors are completely inconsistent and the orthogonal sum operation is undefined. The combination rule narrows the set of propositions by distributing the total probability mass into smaller and smaller subsets, and can be used to find positive belief for singleton propositions that may be embedded in the complementary information (i.e., focal elements composed of disjunctions of singleton propositions) provided by a group of sensors.

The application of Dempster-Shafer evidential reasoning for multisensor fusion can be illustrated using the object recognition example given above. Θ is composed of the four singleton propositions A, B, C, and D, corresponding to the four objects to be recognized. Each of the three sensors used to recognize the objects is only able to provide information to distinguish a particular class of objects, e.g., square versus round objects. Sensors 1 and 2, S1 and S2, provide redundant information relative to each other concerning the shape of the objects, represented as the focal elements A∨C (square) and B∨D (round). The information from S1 and S2 is the same as that used to illustrate Bayesian estimation. Sensor 3, S3, provides complementary information relative to S1 and S2 concerning the temperature of the objects, represented as the focal elements A∨B (cold) and C∨D (hot).

The mass assignments resulting from the fusion of the information from S1 and S2 using Dempster's rule are shown in Table II. The probability mass assigned to each of the focal elements of the sensors reflects the difference in the sensors' accuracy indicated by the frequency distributions shown in Fig. 3(b) and (c); e.g., given that the object being sensed is most likely square, the greater mass attributed to m1(A∨C) as compared to m2(A∨C) reflects S1's greater accuracy as compared to S2. The difference in mass attributed to the object possibly being round reflects the amount of overlap in the distributions for each shape class. The mass attributed to m(Θ) for each sensor reflects the amount by which the focal element masses have been reduced to account for the inherent uncertainty in the information provided by each sensor. The normalization factor is calculated as 1 minus the sum of the two k's in the table, or 1 − 0.2 = 0.8. As a result of the fusion, the belief attributed to the object being square has increased from bel1(A∨C) = 0.82 and bel2(A∨C) = 0.71 to bel1,2(A∨C) = 0.93475 (the sum of the m1,2(A∨C) in the table). This increase is also indicated by the narrower distribution shown for the fused information in Fig. 3(d).

Table III shows the mass assignments resulting from the fusion of the combined information from S1 and S2, S1,2, with the focal elements of S3. As a result of the fusion, positive belief can be attributed to the individual objects. The most likely object is A, as indicated by

bel1,2,3(A) = m1,2,3(A) = 0.85997,

dbt1,2,3(A) = m1,2,3(B) + m1,2,3(C) + m1,2,3(D) + m1,2,3(B∨D) + m1,2,3(C∨D) = 0.11076.


TABLE II
FUSION USING SENSORS 1 AND 2.

                      S2: m2(A∨C) = 0.71    m2(B∨D) = 0.14       m2(Θ) = 0.15
S1: m1(A∨C) = 0.82    m1,2(A∨C) = 0.72775   k = 0.1148           m1,2(A∨C) = 0.15375
    m1(B∨D) = 0.12    k = 0.0852            m1,2(B∨D) = 0.021    m1,2(B∨D) = 0.0225
    m1(Θ)   = 0.06    m1,2(A∨C) = 0.05325   m1,2(B∨D) = 0.0105   m1,2(Θ) = 0.01125

TABLE III
FUSION USING SENSORS 1, 2, AND 3.

                            S3: m3(A∨B) = 0.92     m3(C∨D) = 0.06          m3(Θ) = 0.02
S1,2: m1,2(A∨C) = 0.93475   m1,2,3(A) = 0.85997    m1,2,3(C) = 0.056085    m1,2,3(A∨C) = 0.018695
      m1,2(B∨D) = 0.054     m1,2,3(B) = 0.04968    m1,2,3(D) = 0.00324     m1,2,3(B∨D) = 0.00108
      m1,2(Θ)   = 0.01125   m1,2,3(A∨B) = 0.01035  m1,2,3(C∨D) = 0.000675  m1,2,3(Θ) = 0.000225

The evidence for this conclusion is quite conclusive, as indicated by a small uncertainty and a narrow belief interval for A:

u1,2,3(A) = pls1,2,3(A) − bel1,2,3(A) = 0.02927,

[bel1,2,3(A), pls1,2,3(A)] = [0.85997, 0.88924],

where the plausibility of A is

pls1,2,3(A) = 1 − dbt1,2,3(A) = 0.88924.

The least likely object is also quite conclusively D. If additional information becomes available, e.g., that the object was stored inside a refrigerated room, it can easily be combined with the previous evidence to possibly increase the conclusiveness of the conclusions.
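The numbers in Tables II and III can be reproduced mechanically. The Python sketch below is an added illustration, not code from the paper: it implements Dempster's rule of combination with focal elements represented as frozensets of singletons, uses the S1 and S2 masses from the example, and assumes masses of 0.92, 0.06, and 0.02 for S3's focal elements (values chosen to be consistent with the beliefs quoted in the text).

from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping
    frozensets of singletons to mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                  # mass that would go to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

T = frozenset("ABCD")                          # frame of discernment
AC, BD, AB, CD = map(frozenset, ("AC", "BD", "AB", "CD"))

m1 = {AC: 0.82, BD: 0.12, T: 0.06}             # Sensor 1 (shape)
m2 = {AC: 0.71, BD: 0.14, T: 0.15}             # Sensor 2 (shape)
m3 = {AB: 0.92, CD: 0.06, T: 0.02}             # Sensor 3 (temperature), assumed masses

m12 = dempster_combine(m1, m2)                 # Table II: m12[AC] = 0.93475
m123 = dempster_combine(m12, m3)               # Table III: m123[{A}] = 0.85997

bel_A = m123[frozenset("A")]
dbt_A = sum(v for k, v in m123.items() if not (k & frozenset("A")))
print(bel_A, 1.0 - dbt_A)                      # belief 0.85997, plausibility 0.88924

Representing focal elements as sets makes the intersection in Dempster's rule a one-line operation and generalizes directly to larger frames of discernment.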

4) Production Rules with Confidence Factors: Production rules can be used to symbolically represent the relation between sensory information and an attribute that can be inferred from the information. Production rules that are not directly based on sensory information can be easily combined with sensory information-based rules as part of an overall high-level reasoning system, e.g., expert systems. The use of production rules promotes modularity in the multisensor integration process because additional sensors can be added to the system without requiring the modification of existing rules.

The production rules used for multisensor fusion can be represented as the logical implication of a conclusion Y given a premise X, denoted as if X then Y or X → Y. The premise X may be composed of a single proposition or the conjunction, disjunction, or negation of a group of propositions. The inference process can proceed in either a forward- or backward-chaining manner: in forward-chaining inference, a premise is given and its implied conclusions are derived; in backward-chaining inference, a proposition is given as a goal to be proven given the known information. In forward-chaining inference, the fusion of sensory information takes place both through the implication of the conclusion of a single rule whose premise is composed of a conjunction or disjunction of information from

different sensors and through the assertion of a conclusion that is common to a group of rules.

Uncertainty is represented in a system using production rules through the association of a certainty factor (CF) with each proposition and rule. Each CF is a measure of belief or disbelief and takes a value −1 ≤ CF ≤ 1, where CF = 1 corresponds to absolute belief, CF = −1 to absolute disbelief, and, for a proposition, CF = 0 corresponds to either a lack of information or an equal balance of belief and disbelief concerning the proposition. Uncertainty is propagated through the system using a certainty factor calculus, e.g., the EMYCIN calculus [7]. Each proposition X and its associated CF is denoted as

X cf (CF[X]),

where CF[X] is initially either known or assumed to be equal to 0. Given the set ℛ of rules in a system, each rule ri ∈ ℛ and its associated CF is denoted as

ri : X → Y cf (CFi[X, Y]).

The CF of the premise X in ri can be defined as

CFi[X] = CF[X]                       if X = X1,
         min(CF[X1], ..., CF[Xn])    if X = X1 ∧ ... ∧ Xn,
         max(CF[X1], ..., CF[Xn])    if X = X1 ∨ ... ∨ Xn,
         −CF[¬X]                     otherwise,

where each Xi is a proposition in X and ¬X is the negation of X. The CF of the conclusion Y in ri can be determined using

CFi[Y] = CFi[X] · CFi[X, Y].


Let RY ⊆ ℛ be the set of rules with known premises and Y as their conclusion. Given N = |RY| such rules, CF[Y] = CF[Y]N,

where, for every ri ∈ RY, CF[Y]0 = 0 and

CF[Y]j = CF[Y]j−1 + CFi[Y] · (1 − CF[Y]j−1)    if both CF > 0,
         CF[Y]j−1 + CFi[Y] · (1 + CF[Y]j−1)    if both CF < 0,

for j = 1 to N.

The application of production rules with certainty factors for multisensor fusion can be illustrated using the object recognition example given above. The information from the three sensors S1, S2, and S3 is the same as that used in the illustrations of Bayesian estimation and Dempster-Shafer evidential reasoning. Let S1 cf (0.87) and R1 cf (−0.87) be the known propositions provided by S1 concerning whether the objects being sensed are either square (S) or round (R), respectively. The two rules

r1 : S1 → S cf (0.94) and
r2 : R1 → R cf (0.94)

account for an inherent uncertainty of 6 percent in the information provided by S1. Using only S1, the certainty that the object being sensed is square is S cf (0.82) and that it is round is R cf (−0.82). The information S2 cf (0.84) and R2 cf (−0.84) from S2, together with the additional rules

r3 : S2 → S cf (0.85) and
r4 : R2 → R cf (0.85)

can be fused with the redundant information from S1 to increase the belief that the object is square to S cf (0.9478) and to increase the disbelief that it is round to R cf (−0.9478), where

CF[S] = 0.82 + 0.71 (1 − 0.82) and CF[R] = −0.82 − 0.71 (1 − 0.82),

corresponding to RS = {r1, r3} and RR = {r2, r4}, respectively. Let C3 cf (0.94) and H3 cf (−0.94) be the known propositions provided by S3 concerning whether the objects are either cold (C) or hot (H), respectively. The two rules

r5 : C3 → C cf (0.98) and
r6 : H3 → H cf (0.98)

to account for the inherent uncertainty in S3, together with the additional rules

r7 : S ∧ C → A cf (1.0),
r8 : R ∧ C → B cf (1.0),
r9 : S ∧ H → C cf (1.0), and
r10 : R ∧ H → D cf (1.0),

enable the information from S3 to be fused with the complementary information from S1 and S2 to determine the CF associated with the propositions A, B, C, and D, corresponding to the four possible types of objects. Having determined that C cf (0.92) and H cf (−0.92),

CF[A] = CF7[S ∧ C] · CF7[S ∧ C, A]
      = min(CF[S], CF[C]) · 1.0
      = min(0.9478, 0.92) = 0.92.

In a similar manner, CF[B], CF[C], and CF[D] can be determined to be −0.9478, −0.92, and −0.9478, respectively. The definition of a certainty factor calculus to use with production rules for multisensor fusion is ad hoc and will depend upon the particular application for which the system is being used. For example, the results of the object recognition example would more closely resemble the results found using Dempster-Shafer evidential reasoning if the definition of the CF of a conjunction of propositions in the premise of a rule was changed to correspond to the creation of a separate rule for each proposition, e.g., S → A and C → A instead of S ∧ C → A in r7. Using this definition, the resulting CFs for A, B, C, and D would be 0.99, −0.014, 0.014, and −0.99, respectively (where a CF of 0 corresponds to a probability mass of 0.5).
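The certainty factor arithmetic in this example can also be checked in a few lines. The Python sketch below is an added illustration, not the paper's code, and it only implements the cases the example actually exercises (rule propagation by multiplication, min over a conjunctive premise, and same-sign combination); the mixed-sign combination is deliberately left out because, as noted above, the choice of calculus for such cases is application dependent.

def rule_cf(premise_cf, rule_strength):
    """CF contributed to a rule's conclusion: premise CF times the rule's CF."""
    return premise_cf * rule_strength

def combine_same_sign(cf1, cf2):
    """Combine two CFs for the same conclusion when they share a sign."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    raise ValueError("mixed-sign combination is calculus-specific")

# Intermediate CFs are rounded to two decimals, as in the worked example.
cf_s1 = round(rule_cf(0.87, 0.94), 2)      # r1: 0.82
cf_s2 = round(rule_cf(0.84, 0.85), 2)      # r3: 0.71
cf_s = combine_same_sign(cf_s1, cf_s2)     # CF[S] = 0.9478
cf_r = combine_same_sign(-cf_s1, -cf_s2)   # CF[R] = -0.9478
cf_c = round(rule_cf(0.94, 0.98), 2)       # r5: CF[C] = 0.92
cf_a = min(cf_s, cf_c) * 1.0               # r7: S AND C -> A cf (1.0) gives 0.92
print(round(cf_s, 4), round(cf_r, 4), round(cf_a, 2))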

IV. CONCLUSION

A. Future Research Directions

In addition to multisensor integration and fusion research directed at finding solutions to the problems already mentioned, research in the near future will likely be aimed at developing integration and fusion techniques that will allow multisensory systems to operate in unknown and dynamic environments. As currently envisioned, multisensor integration and fusion techniques will play an important part in the Strategic Defense Initiative in enabling enemy warheads to be distinguished from decoys [1]. Many integration and fusion techniques will be implemented on recently developed highly parallel computer architectures to take full advantage of the parallelism inherent in the techniques. The development of sensor modeling and interface standards would accelerate the design of practical multisensor systems [23]. Lyons and Arbib [35] have initiated the construction of a formal model of computation for sensory-based robotics that they term robot schemas. Future extensions to their model will make it possible to reason about sensory interactions in a consistent and well-defined manner, and should facilitate the creation of the complex control programs required for multisensor robots. Continued research in the areas of artificial intelligence and neural networks will continue to provide both theoretical and practical insights. AI-based research may prove especially useful in areas like sensor selection, automatic task error detection and recovery, and the development of high-level representations; research based on neural networks may have a large impact in areas like object recognition through the development of distributed representations suitable for the associative recall of multisensory information, and in the development of robust multisensor systems that are able to self-organize and adapt to changing conditions (e.g., sensor failure).

The development of integrated solid-state chips containing multiple sensors has been the focus of much recent research [45]. As current progress in VLSI technology continues, it is likely that so-called smart sensors [38] will be developed that contain many of their low-level signal and fusion processing algorithms in circuits on the chip. In addition to a lower cost, a smart sensor


might provide a better signal-to-noise ratio, and abilities for self-testing and calibration. Currently, it is common to supply a multisensor system with just enough sensors for it to complete its assigned tasks; the availability of cheap integrated multisensors may enable some recent ideas concerning highly redundant sensing [49] to be incorporated into the design of intelligent multisensor systems. In some cases, high redundancy may imply the use of up to ten times the number of minimally necessary sensors to provide the system with a greater flexibility and insensitivity to sensor failure. In the more distant future, the development of micro or gnat [15] robots will necessarily entail the advancement of the state of the art in multisensor integration and fusion.

B. Guide to Survey and Review Papers

A number of recent papers have surveyed and reviewed different aspects of multisensor integration and fusion. An article on multisensor integration in the Encyclopedia of Artificial Intelligence has focused on the issues involved in object recognition [4]. Mitiche and Aggarwal [39] discuss some of the advantages and problems involved with the integration of different image processing sensors, and review recent work in that area. Garvey [17] has surveyed some of the different artificial intelligence approaches to the integration and fusion of information, emphasizing the fundamental role in artificial intelligence of the inference process for combining information. A number of the different knowledge representations, inference methods, and control strategies used in the inference process are discussed in his paper. Mann [36] provides a concise literature review as part of his paper concerning methods for integration and fusion that are based on the maintenance of consistent labels across different sensor domains. Luo and Kay [32], [33] and Blackman [6] have surveyed some of the issues of and different approaches to multisensor integration and fusion, with Blackman providing an especially detailed discussion of the data association problem, and Hackett and Shah [21] have surveyed a number of multisensor fusion papers and have classified them into the following six categories: scene segmentation, representation, three-dimensional shape, sensor modeling, autonomous robots, and object recognition. Recent research workshops have focused on the multisensor integration and fusion issues involved in manufacturing automation [23] and spatial reasoning [28]. Techniques for multisensor integration and fusion have been included in recent textbooks on artificial intelligence [13] and pattern recognition [48].

REFERENCES

[1] Adam, J.A. (1989). Star Wars in transition. IEEE Spectrum, 26(3), 32-38.
[2] Ajjimarangsee, P., and Huntsberger, T.L. (1988). Neural network model for fusion of visible and infrared sensor outputs. In P.S. Schenker (Ed.), Proc. SPIE, vol. 1003, Sensor Fusion: Spatial Reasoning and Scene Interpretation (pp. 153-160). Cambridge, MA.
[3] Ayache, N., and Faugeras, O. (1989). Maintaining representations of the environment of a mobile robot. IEEE Trans. Robot. Automat., RA-5(6), 804-819.
[4] Bajcsy, R., and Allen, P. (1986). Multisensor integration. Encyclopedia of Artificial Intelligence (pp. 632-638). New York: Wiley.
[5] Barniv, Y., and Casasent, D. (1981). Multisensor image registration: Experimental verification. In W.H. Carter (Ed.), Proc. SPIE, vol. 292, Process. Images and Data from Optical Sensors (pp. 160-171). San Diego, CA.
[6] Blackman, S.S. (1988). Theoretical approaches to data association and fusion. In C.W. Weaver (Ed.), Proc. SPIE, vol. 931, Sensor Fusion (pp. 50-55). Orlando, FL.
[7] Buchanan, B.G., and Shortliffe, E.H. (Eds.). (1984). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley.
[8] Bullock, T.E., Sangsuk-iam, S., Pietsch, R., and Boudreau, E.J. (1988). Sensor fusion applied to system performance under sensor failures. In C.W. Weaver (Ed.), Proc. SPIE, vol. 931, Sensor Fusion (pp. 131-138). Orlando, FL.
[9] Chen, S.S. (1989). Stochastic image algebra for multisensor fusion and spatial reasoning: A neural approach. In M.R. Weathersby (Ed.), Proc. SPIE, vol. 1098, Aerospace Pattern Recog. (pp. 146-154). Orlando, FL.
[10] Cheng, Y., and Kashyap, R.L. (1988). Comparison of Bayesian and Dempster's rules in evidence combination. In G.J. Erickson and C.R. Smith (Eds.), Maximum-Entropy and Bayesian Methods in Science and Engineering (vol. 2, pp. 427-433). Dordrecht, The Netherlands: Kluwer.
[11] Clark, J.J., and Yuille, A.L. (1990). Data Fusion for Sensory Information Processing Systems. Norwell, MA: Kluwer.
[12] Clifford, S.P., and Nasrabadi, N.M. (1988). Integration of stereo vision and optical flow using Markov random fields. Proc. IEEE Int. Conf. Neural Networks (pp. I-577-I-584). San Diego, CA.
[13] Dougherty, E.R., and Giardina, C.R. (1988). Mathematical Methods for Artificial Intelligence and Autonomous Systems. Englewood Cliffs, NJ: Prentice-Hall, pp. 273-277.
[14] Duane, G. (1988). Pixel-level sensor fusion for improved object recognition. In C.W. Weaver (Ed.), Proc. SPIE, vol. 931, Sensor Fusion (pp. 180-185). Orlando, FL.
[15] Durrant-Whyte, H.F., Rao, B.Y.S., and Hu, H. (1990). Toward a fully decentralized architecture for multi-sensor data fusion. Proc. IEEE Int. Conf. Robotics and Automat. (pp. 1331-1336). Cincinnati, OH.
[16] Flynn, A.M., and Brooks, R.A. (1988). MIT mobile robots-What's next? Proc. IEEE Int. Conf. Robotics and Automat. (pp. 611-617). Philadelphia, PA.
[17] Garvey, T.D. (1987). A survey of AI approaches to the integration of information. In R.G. Buser and F.B. Warren (Eds.), Proc. SPIE, vol. 782, Infrared Sensors and Sensor Fusion (pp. 68-82). Orlando, FL.
[18] Garvey, T.D., Lowrance, J.D., and Fischler, M.A. (1981). An inference technique for integrating knowledge from disparate sources. Proc. 7th Int. Joint Conf. Artificial Intell. (pp. 319-325). Vancouver, BC, Canada.
[19] Geman, S., and Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intell., PAMI-6(6), 721-741.
[20] Giardina, C.R., and Dougherty, E.R. (1988). Morphological Methods in Image and Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
[21] Hackett, J.K., and Shah, M. (1990). Multi-sensor fusion: A perspective. Proc. IEEE Int. Conf. Robotics and Automat. (pp. 1324-1330). Cincinnati, OH.
[22] Henderson, T., and Shilcrat, E. (1984). Logical sensor systems. J. Robot. Syst., 1(2), 169-193.
[23] Henderson, T.C., Allen, P.K., Mitiche, A., Durrant-Whyte, H., and Snyder, W. (Eds.). (1987). Workshop on multisensor integration in manufacturing automation (Tech. Rep. UUCS-87-006). Univ. of Utah, Snowbird: Dept. of Comput. Sci.
[24] Henkind, S.J., and Harrison, M.C. (1988). An analysis of four uncertainty calculi. IEEE Trans. Syst., Man Cybern., SMC-18(5), 700-714.
[25] Holm, W.A. (1987). Air-to-ground dual-mode MMW/IR sensor scene registration. In R.G. Buser and F.B. Warren (Eds.), Proc. SPIE, vol. 782, Infrared Sensors and Sensor Fusion (pp. 20-27). Orlando, FL.
[26] Hsiao, M. (1988). Geometric registration method for sensor fusion. In P.S. Schenker (Ed.), Proc. SPIE, vol. 1003, Sensor Fusion: Spatial Reasoning and Scene Interpretation (pp. 214-221). Cambridge, MA.
[27] Jakubowicz, O.G. (1988). Autonomous reconfiguration of sensor systems using neural nets. In C.W. Weaver (Ed.), Proc. SPIE, vol. 931, Sensor Fusion (pp. 197-203). Orlando, FL.
[28] Kak, A., and Chen, S. (Eds.). (1987). Spatial Reasoning and Multi-Sensor Fusion: Proc. 1987 Workshop. Los Altos, CA: Morgan Kaufmann.
[29] Landa, J., and Scheff, K. (1987). Binocular fusion using simulated annealing. Proc. IEEE 1st Int. Conf. Neural Networks (pp. IV-327-IV-334). San Diego, CA.


[30] Lee, J.S.J. (1988). Multiple sensor fusion based on morphological processing. In P.S. Schenker (Ed.), Proc. SPIE, vol. 1003, Sensor Fusion: Spatial Reasoning and Scene Interpretation (pp. 94-100). Cambridge, MA.
[31] Lee, R.H., and Van Vleet, W.B. (1988). Registration error analysis between dissimilar sensors. In C.W. Weaver (Ed.), Proc. SPIE, vol. 931, Sensor Fusion (pp. 109-114). Orlando, FL.
[32] Luo, R.C., and Kay, M.G. (1988). Multisensor integration and fusion: Issues and approaches. In C.W. Weaver (Ed.), Proc. SPIE, vol. 931, Sensor Fusion (pp. 42-49). Orlando, FL.
[33] Luo, R.C., and Kay, M.G. (1989). Multisensor integration and fusion in intelligent systems. IEEE Trans. Syst., Man Cybern., vol. SMC-
[34] Luo, R.C., Lin, M., and Scherp, R.S. (1988). Dynamic multi-sensor data fusion system for intelligent robots. IEEE J. Robot. Automat.,
[35] Lyons, D.M., and Arbib, M.A. (1989). A formal model of computation for sensory-based robotics. IEEE Trans. Robot.
[36] Mann, R.C. (1987). Multi-sensor integration using concurrent computing. In R.G. Buser and F.B. Warren (Eds.), Proc. SPIE, vol. 782, Infrared Sensors and Sensor Fusion (pp. 83-90). Orlando, FL.
[37] Maybeck, P.S. (1979, 1982). Stochastic Models, Estimation, and Control (Vols. 1 and 2). New York: Academic.
[38] Middelhoek, S., and Hoogerwerf, A.C. (1985). Smart sensors: When and where? Sensors and Actuators, 8, 39-48.
[39] Mitiche, A., and Aggarwal, J.K. (1986).