


  • S. F. EL-HAKIM, National Research Council of Canada

    Ottawa, Ontario K1A 0R6, Canada

    A Photogrammetric Vision System for Robots*

    * Based on a paper presented at the XV ISPRS Congress held in Rio de Janeiro, Brazil, in June 1984 and published in the International Archives of Photogrammetry and Remote Sensing, Vol. XXV, Part A5, pp. 223-231.

    The skills required of an intelligent robot include the ability to recognize shapes and motion and to make accurate measurements of the position and orientation of objects in the scene.

    ABSTRACT: Most existing robots can follow only specific sets of instructions. The skills required of an intelligent robot include the ability to recognize shapes and motion and to make accurate measurements of the position and orientation of objects in the scene. This can be achieved if the robot has a vision system capable of measuring, in three dimensions, the coordinates of surface points of the objects. Photogrammetric techniques, although suitable for surface measurements, have not been considered in the past for vision systems, mainly because the data must be acquired and processed in real time. By acquiring the data directly in digital form, real-time processing can be feasible. A vision system which combines the efficiency of photogrammetric algorithms and a practical, simple, and fast light-sensing technique is outlined.

    ROBOTS CAN BE CLASSIFIED into two main groups, depending on their ability to interact with the external environment. The first group includes robots that can perform sequences of programmed operations but can neither detect nor adapt to changes in their working environment. The second group includes advanced, or intelligent, robots which have their own visual system to acquire scene information and to recognize shapes and motion. An intelligent robot can, for example, select the objects it needs from a group of randomly placed different objects. Most of the existing industrial robots are of the first type. The intelligent robot is still in an experimental stage and capable of dealing only with very limited arrangements of objects and scenes. In spite of this, intelligent systems are becoming increasingly useful in industry, particularly in applications such as automobile assembly lines and inspection for quality control (Callahan, 1982; Taboada, 1981). By a reduction of their present limitations, a multitude of industrial applications can be expected to benefit.

    In the design of a vision system, the following requirements must be satisfied:

  • Practical speed. Because it is not practical to store more than one digital image (or video frame) in the computer memory, each image should be processed in real time. This means that the time required to process one frame must be less than the data acquisition time of the next frame. The latter is 1/30 of a second for a CCD (charge-coupled device) camera with a resolution of 488 by 380 pixels. Therefore, until the speed of present general-purpose computers increases drastically, most image-processing operations will have to be performed through hardware or, at least, by plug-in modules. The total time required to process all the images and to perform other vision operations, such as object recognition, depends on the requirements of each application. As a result, some operations will be performed by hardware, and others by software. A hardware module is 10 to 100 times faster than regular software, while special-purpose hardware can be 100 to 1000 times faster (Hale and Saraga, 1974; Lockton, 1982). Needless to say, the cost also increases proportionally.

  • Performance. The main reason for using robots in industry is to increase efficiency and to improve productivity. Errors made by the robots and failures requiring human intervention waste time and add to the cost of the operation.
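    To put the Practical speed requirement above in concrete terms, the short sketch below works out the pixel throughput implied by the quoted 488 by 380 pixel frame and 1/30 s frame time. The per-pixel instruction count at the end is a purely hypothetical figure used only to illustrate the scale of the problem; the sketch is not from the paper.

```python
# Rough real-time budget for the 488 by 380 pixel CCD camera quoted above.
ROWS, COLS = 488, 380            # sensor resolution from the text
FRAME_TIME_S = 1.0 / 30.0        # acquisition time of one video frame

pixels_per_frame = ROWS * COLS
pixels_per_second = pixels_per_frame / FRAME_TIME_S
print(f"pixels per frame : {pixels_per_frame:,}")       # 185,440
print(f"pixels per second: {pixels_per_second:,.0f}")   # about 5.6 million

# A hypothetical cost of 20 instructions per pixel already implies an
# instruction rate of roughly 111 million per second:
print(f"instructions/s   : {20 * pixels_per_second:,.0f}")
```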

    PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, Vol. 51, No. 5, May 1985, pp. 545-552. © 1985 American Society for Photogrammetry and Remote Sensing

  • Noise tolerance. Due to the nature of the working environment, image noise will always exist. Therefore, the system should be able to distinguish noise from actual data. Noise reduction techniques, such as smoothing or ensemble averaging, should be applied (Rosenfeld and Kak, 1976).

    Other desirable features, such as flexibility, simplicity, and adaptability to different tasks, should also be taken into consideration when designing a vision system.

    Satisfying all the above requirements is not an easy task due to problems that exist in the algorithms and in the hardware. These problems are discussed in the following section.

    A computer vision system typically consists of an all-digital data acquisition unit that records image point locations (x,y) and a grey-level value for each point, a digital image processing unit to classify image points and to extract features, a surface measurement unit that computes three-dimensional coordinates of scene points from image points, and an object recognition facility which analyzes these scene coordinates. This results in "seeing" some objects in the field of view and paves the way for subsequent manipulation of any particular object in the scene. However, the seeing process is very complicated due to the following problems:

    (1) Classification and extraction of features from images is dependent on the grey values of the image components (pixels). There are many factors affecting the grey value, particularly surface material, intensity and angle of illumination, camera location and orientation, and atmospheric conditions. It is very difficult to determine in advance how each of these factors affects the pixel grey value. Thus, computer distinction between object and background could be difficult if neither is a priori known to be brighter than the other.

    (2) Determining three-dimensional object coordinates from two-dimensional image coordinates using, say, two overlapping photographs is a simple operation if the image points in these photographs are well defined and matched by a human operator. However, automatic matching by computer, using correlation or convolution techniques, is very difficult in close-range, large height-change imagery, which is the case in a robot working environment. This is mainly because (a) some patches may appear with different brightness in the two images; (b) some surfaces may appear in one photograph only; and (c) for some surfaces with certain shape or texture, where the light intensity is the same for all surface points or follows a periodic pattern, there will be no unique solution.

    These problems could be a great obstacle even to a human operator. An alternative to the two-camera system is the use of one camera and a projected pattern. But even then, a time-consuming analysis of the line pattern is necessary in order to automatically determine absolute line identity, which is required to convert the image coordinates into object coordinates (Frobin and Hierholzer, 1983).

    (3) Conventional optical systems have a limited depth-of-field. Thus, only objects falling within this limit can be properly dealt with.

    (4) Extracting information from images requires an enormous amount of data. Managing these data through all the different processing stages and trying to satisfy the above-mentioned requirements of speed, performance, and noise tolerance could be impossible without advanced electronics and algorithms.

    (5) Vision systems can only recognize objects which have been a priori "learned" by the computer. Objects that cannot be matched to the stored object data will not be seen by the system. Failure to match could easily occur if an object, a priori known to the system, is partly obstructed by another object or, for some objects, is oriented at certain angles. Because of this problem, most of the existing vision systems can only recognize simple objects in a very limited number of arrangements.

    The vision system presented in this paper is based on the light-strip sensing technique (Shirai and Suwa, 1971; Agin and Binford, 1973; Vanderburg et al., 1979); however, photogrammetric algorithms have been used. The system is designed, as will be discussed later, to solve the problems of image matching (item 2 above) and to drastically reduce the effect of the problems of image classification (item 1 above) and large data handling (item 4 above).

    The object recognition problem (item 5 above) is also expected to be reduced by dealing with three-dimensional coordinates in object space, where shape and size are not affected by object orientation.

    Figure 1 shows the elements of the light-strip sensing vision system. There are five main elements:

    • a data acquisition system, to capture the scene data directly in digital form;
    • a digital image processor, to produce classified, or identifiable, image coordinates;
    • a three-dimensional convertor, to obtain object coordinates from image coordinates;
    • a shape and position recognition facility, to determine which of the measured objects is the one of interest; and
    • a robot command unit, to manipulate the recognized object, from the previous step, knowing its location with respect to the robot arm.

    Some of these elements will be discussed in detail.


    FIG. 1. Elements of the robot-vision system.

    For easier automatic image processing, a single camera and a projector with a raster diapositive are preferred to two cameras (Frobin and Hierholzer, 1983). This system has the advantage of extracting all the information needed for three-dimensional measurement of a surface from a single photograph (Figure 2). This results in a reduction of the time needed for processing and eliminates the need for correlation. The process can be further simplified by using a line raster instead of a grid. The three-dimensional object coordinates of any point on the raster line can be determined if each line is uniquely defined so that its coordinate, or identity number, on the raster image plane is known. An automatic procedure for unique line definition is given by Frobin and Hierholzer (1983) but, being time consuming, it is not suitable for real-time processing.

    FIG. 2. A photograph of the projected grid.

    Any other procedure using this type of raster would probably not be as fast as wanted. Therefore, in the data acquisition system chosen here, the projected pattern is that of a single straight line (Figure 3). This line scans the scene using a scanning device (an optical beam deflection assembly) that can provide an accurate location (coordinate) of the line on the projector plane at a high speed (Figure 4). A control unit to synchronize each line location with the camera frame is also needed. This arrangement solves the problem of the automatic determination of three-dimensional object coordinates from image coordinates (problem 2 above). All the correlation problems are avoided because only one image is needed. Also, the problem of identifying, or coding, each line in a grid or multi-line projection is eliminated because only one line, with a known location, is projected. The light source is an argon ion laser which emits up to 4 watts of green light at a wavelength of 514.5 nm. This laser has many advantages: it has stable beam pointing and power, low optical noise, and almost negligible beam divergence. It thus provides a very bright line of light which has the same width regardless of the object distance from the light source.

    FIG. 3. Single line projection.

    FIG. 4. Data acquisition system.

    FIG. 5. Video signal at and near projected line.

    The laser deflection assembly has been designed and built in-house (Pekelsky, 1984). Its main purpose is to direct the line of light to scan the scene and to provide a very precise location of the line on the projector plane (the y″ coordinate defined below).

    The other component in the data acquisition system is the camera. The output of a charge-coupled device (CCD) camera with a 488 by 380 pixel resolution, operating at a 7.16 MHz clock rate, is input to an 8-bit, 20 MHz high-speed analog-to-digital (A/D) converter (Real, 1983). The digitized image points are stored in a frame-grab memory ready for processing. In this approach, it is not necessary to store and process all the frame points but only the points in the range where the projected line might appear. This range can be computed in advance from the known line location on the projector plane, the camera location and orientation with respect to the projector, and the possible variation in depth in the field of view of the camera. By limiting the amount of data to this range, the storage needed and the computation time for image processing are reduced significantly.
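    The pre-computed search range described above can be illustrated with a minimal sketch. It assumes a deliberately simplified geometry (projector at the origin, plane of light tilted about the X axis, pinhole camera offset across the line direction); the function name, focal length, baseline, and depth limits are assumptions of this sketch, not parameters of the actual system.

```python
import math

def line_row_window(alpha_deg, z_min, z_max,
                    f_pix=600.0, cy=244.0, baseline=0.30,
                    n_rows=488, margin=3):
    """Predict the band of image rows in which the projected line can appear.

    Simplified side-view geometry (all values are illustrative assumptions):
      - projector at the origin, plane of light tilted by alpha_deg about the
        X axis, so a point at depth Z lies at Y = Z*tan(alpha);
      - camera displaced by `baseline` metres along Y, optical axis along Z,
        principal row `cy`, focal length `f_pix` in pixel units.
    Because the image row is monotonic in depth, evaluating it at the nearest
    and farthest expected depths bounds the search window cheaply.
    """
    def row_at_depth(z):
        y_obj = z * math.tan(math.radians(alpha_deg))   # intersection with the light plane
        return cy + f_pix * (y_obj - baseline) / z      # simple pinhole projection

    r1, r2 = row_at_depth(z_min), row_at_depth(z_max)
    lo = max(0, int(min(r1, r2)) - margin)
    hi = min(n_rows - 1, int(max(r1, r2)) + margin)
    return lo, hi

# Example: line deflected 5 degrees, objects expected between 0.8 m and 1.5 m.
print(line_row_window(5.0, 0.8, 1.5))
```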

    All the components of the data acquisition system, with the exception of the camera, are being built in-house at a low cost.

    The digitized image, obtained by the above-mentioned data acquisition system and stored in the frame grabber, consists of n × m pixels, each with a known location (x,y) and a grey-level value. The image processor is then needed to identify each pixel or to search for pixels of interest such as those belonging to edges, objects, and targets. It is obvious that the success of this step depends largely on factors such as the illumination of the work area and on the algorithms used for the identification. There are many algorithms available for precisely locating edges, lines, and targets from the different grey levels in the image with an error much smaller than the pixel size (Mikhail, 1983). We are interested here in locating points along the line of light projected on the scene. As shown in Figure 5, the projected line can be located by analyzing the video signal-to-pixel-location relationship. For a line in the x-direction, which is the case in this system, the analyzed pixel location in the camera scan line is along the y-direction. Because the line is usually the most brightly illuminated target, the peak in this direction represents the line location.

    The exact pixel, or subpixel, that represents the precise location has to be determined by some thinning techniques (see Kreis et al., 1982) because the detected line usually spans a few pixels. However, before locating the line, the noise has to be removed. There are many well known methods for removing noise, such as ensemble averaging, local averaging, and low-emphasis filtering (Rosenfeld and Kak, 1976). The main objective of these techniques is to remove noise from an image without blurring of details. The method adopted here is based on the local averaging approach. In this method, the grey-scale intensity of every point is compared to the weighted average of the image intensity within a window around the point, with lower weights given to points further away from the window center. The grey level of a point is replaced by the weighted average if the difference from this average exceeds a certain threshold.
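    A minimal sketch of the two grey-level operations just described: the threshold-based local weighted averaging used for noise removal, and a peak search with an intensity-weighted centroid standing in for the thinning step. Window size, weights, thresholds, and the synthetic test image are illustrative assumptions, not the values used in the system; NumPy is assumed available.

```python
import numpy as np

def denoise_weighted_average(img, half=2, sigma=1.0, threshold=15.0):
    """Replace a pixel by the weighted mean of its window when it deviates
    from that mean by more than `threshold` grey levels (outlier suppression
    without blurring pixels that already agree with their neighbourhood)."""
    img = img.astype(float)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    w = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))   # lower weight away from centre
    w /= w.sum()
    out = img.copy()
    rows, cols = img.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            mean = np.sum(w * img[r - half:r + half + 1, c - half:c + half + 1])
            if abs(img[r, c] - mean) > threshold:
                out[r, c] = mean
    return out

def line_row_subpixel(column, lo=0, hi=None, span=2):
    """Locate the bright projected line in one image column: find the peak row,
    then refine it with an intensity-weighted centroid over +/- `span` rows."""
    col = column.astype(float)
    hi = len(col) if hi is None else hi
    peak = lo + int(np.argmax(col[lo:hi]))
    a, b = max(0, peak - span), min(len(col), peak + span + 1)
    rows = np.arange(a, b)
    weights = col[a:b] - col[a:b].min() + 1e-9        # avoid an all-zero window
    return float(np.sum(rows * weights) / np.sum(weights))

# Tiny demonstration on a synthetic 32 x 8 image with a noisy bright line.
rng = np.random.default_rng(0)
img = rng.normal(20, 3, size=(32, 8))
img[18:20, :] += 120.0                                # the projected line
clean = denoise_weighted_average(img)
print([round(line_row_subpixel(clean[:, c]), 2) for c in range(8)])
```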

    After the line points have been identified, the next step is to identify which points belong to each object. This means that the edges between different objects, object and background, or surfaces of the same object must be located. Edges between different objects or object and background can be defined as points where a sharp discontinuity of the line is detected (Figure 6). At certain line locations on the object surface, and depending on the camera angle and the distance between the object and background, this discontinuity might be too small to be detected.


    FIG. 6. Edge detection.


    However, this may happen only at a small number of line locations on the object, and because the line is scanning the object at a large number of locations there will be a sufficient number of lines where the edges are clearly detectable, and from which the whole object can be recognized.

    Edges between different surfaces of the same object can be defined as points with a sudden change in slope without a discontinuity in the line. Identifying edge points is thus possible by analyzing the image coordinates of line points in succession from one end of the line to the other. The result is the classification of line points into groups belonging to certain objects, different surfaces of the same object, or background. The image coordinates of object points are transferred to the next processing step.

    FIG. 7. Configuration of camera and light source.
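    The classification of line points described above reduces to a single pass along the located profile: a new group is started wherever the row coordinate jumps (an edge between objects, or between object and background) or the local slope changes sharply without a jump (an edge between surfaces of the same object). The thresholds and data layout in this sketch are assumptions for illustration only.

```python
def classify_line_points(rows, jump_thresh=8.0, slope_thresh=1.0):
    """Split an ordered profile of line-point row coordinates into segments.

    rows[i] is the (possibly subpixel) image row of the projected line at
    column i.  A discontinuity larger than jump_thresh marks an edge between
    different objects or between object and background; a change of local
    slope larger than slope_thresh without a discontinuity marks an edge
    between two surfaces of the same object.  Returns (start, end) column
    index pairs, one per segment.
    """
    segments, start = [], 0
    for i in range(1, len(rows)):
        d1 = rows[i] - rows[i - 1]
        if abs(d1) > jump_thresh:                      # sharp discontinuity
            segments.append((start, i - 1))
            start = i
            continue
        if i - start >= 2:                             # need two steps inside the segment
            d0 = rows[i - 1] - rows[i - 2]
            if abs(d1 - d0) > slope_thresh:            # sudden change of slope
                segments.append((start, i - 1))
                start = i
    segments.append((start, len(rows) - 1))
    return segments

# Flat background, then a tilted surface, then a second surface at another slope.
profile = [10.0] * 10 + [40 + 0.5 * i for i in range(10)] + [45 - 1.5 * i for i in range(10)]
print(classify_line_points(profile))   # three groups: background and two surfaces
```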

    In the above description of the image processing operations, the only steps where the grey level is critical are noise elimination and the location of the line of light. No other feature extraction depends on the grey level, because edges are detected by analyzing image coordinates of line points. Therefore, grey-level problems (problem 1) are less critical in this technique.

    THREE-DIMENSIONAL SURFACE MEASUREMENT

    The object coordinates of any point on the projected line can be determined from the image coordinates (x′,y′) obtained from the digital image processor and from the known location of the plane of the projected line, provided the camera and the projector relative positions and orientation are known. This requires an a priori calibration of the system, using control points. In both the calibration procedure and the object coordinate determination, the mathematical models for the camera are the collinearity equations (see Figure 7):

    xp - xo = -f [m11(Xp - Xo) + m12(Yp - Yo) + m13(Zp - Zo)] / [m31(Xp - Xo) + m32(Yp - Yo) + m33(Zp - Zo)]     (1)

    and

    yp - yo = -f [m21(Xp - Xo) + m22(Yp - Yo) + m23(Zp - Zo)] / [m31(Xp - Xo) + m32(Yp - Yo) + m33(Zp - Zo)]     (2)

    where mij (i, j = 1 to 3) are elements of the rotation matrix Rᵀ; Xp, Yp, Zp are the object coordinates of point p; Xo, Yo, Zo are the projection center coordinates; xp, yp are the image coordinates of point p; xo, yo are the principal point coordinates; and f is the principal distance.

    For the plane of light, Equation 2 is applied to any point on the line (all of which have the same y″ coordinate). This makes the total number of equations, for any object point, equal to three.

    The objective of this procedure is to determine the orientation parameters of the camera and of a reference system for the plane of light. For the camera, these parameters are the three rotation angles for the matrix R, and Xo, Yo, Zo, xo, yo, and f. For the plane of light, the parameters are the three rotation angles for a reference plane, the three coordinates of origin O1, and the location of the reference plane, f″ (Figure 7). The distance f″ is known for the assembly unit.

    A specially designed test object is used for calibration (the largest of the three objects in Figure 2). Figure 8 shows a frontal view of this object with 67 control points on its surface. The X, Y, and Z coordinates of the control points are accurately determined by direct measurement, and they have been selected to have a wide range of variation in depth. In addition to the test object, a second camera is employed. The image coordinates are neither required to be automatically measured nor processed in real time in the calibration process. An operator viewing the acquired images on a monitor performs the image matching and point location. The calibration is applied in the following steps:

    The test object is placed in front of the two cameras and the line projector in such a way that the line of light strikes the object. One frame is digitized for each camera. The images of the control points and other well defined points, such as edge points on the line of light, are the points to be digitized. Using the object coordinates of the control points and their image coordinates, and applying Equations 1 and 2, the unknown orientation parameters are determined for each camera.


    Using the orientation parameters and the image coordinates of the selected line points, the object coordinates are computed, also by Equations 1 and 2. These object coordinates are associated with one line location in the deflection assembly unit, say location y1″. For two other line locations, y2″ and y3″, the object coordinates of some selected line points are determined as in the previous step. Applying Equation 2 with the line object points and their corresponding locations in the deflection assembly unit, y1″, y2″, and y3″, the orientation parameters of this unit are determined. Now that the orientation parameters for the camera and the projecting unit are determined with respect to an arbitrary coordinate system located in the object space, the system can be shifted and rotated by a change of parameters so that the coordinates of point O1 and the rotation angles of the assembly unit all become zero. From then on, all the coordinates will be determined with respect to a coordinate system located at the origin O1, as shown in Figure 7.
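    As an illustration of the camera resection performed in the calibration steps above, the sketch below recovers one camera's orientation parameters from control points by least squares on the collinearity equations. The rotation parameterization, the initial values, the simulated control points, and the use of scipy are choices made for this sketch, not a description of the software actually used.

```python
import numpy as np
from scipy.optimize import least_squares

def rot_matrix(omega, phi, kappa):
    """Rotation matrix from three angles (one common photogrammetric convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity_residuals(params, obj_pts, img_pts, x0=0.0, y0=0.0, f=600.0):
    """Image-space residuals of Equations 1 and 2 for all control points."""
    omega, phi, kappa, Xo, Yo, Zo = params
    R = rot_matrix(omega, phi, kappa)
    d = (obj_pts - np.array([Xo, Yo, Zo])) @ R          # rotate into the camera frame
    x_pred = x0 - f * d[:, 0] / d[:, 2]
    y_pred = y0 - f * d[:, 1] / d[:, 2]
    return np.concatenate([img_pts[:, 0] - x_pred, img_pts[:, 1] - y_pred])

def resect(obj_pts, img_pts, initial=(0, 0, 0, 0, 0, -2.0)):
    """Recover omega, phi, kappa, Xo, Yo, Zo for one camera from control points."""
    sol = least_squares(collinearity_residuals, initial, args=(obj_pts, img_pts))
    return sol.x

# Hypothetical example: simulate a camera, project a few control points, resect.
true = np.array([0.05, -0.03, 0.10, 0.2, -0.1, -2.0])
obj = np.array([[0, 0, 0], [0.5, 0, 0.1], [0, 0.5, 0.2], [0.5, 0.5, 0],
                [0.25, 0.1, 0.3], [0.1, 0.4, 0.05]], float)
img = -collinearity_residuals(true, obj, np.zeros((len(obj), 2))).reshape(2, -1).T
print(np.round(resect(obj, img), 4))   # should recover the simulated parameters
```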

    The frequency of calibration depends mainly on the stability of the camera. Because, in this vision system, the camera and the projecting unit are located in the same housing, their relative location and orientation with respect to each other remain stable for a long period of time and can be calibrated, for example, once a year. However, a more frequent calibration is needed for the determination of the principal point and the principal distance of the camera.

    REAL-TIME THREE-DIMENSIONAL OBJECT COORDINATES

    With all the orientation parameters known from the calibration process, the same mathematical model is used with the X, Y, and Z of surface object points as the only unknowns.

    FIG. 8. Front view of test object with control points.

    This means that for each point p we have three equations (one each for x′, y′, and y″) and three unknowns, and thus only a simple direct solution without adjustment. Although not a rigorous solution, as in an operator-performed measurement, it can be accepted in an automated measurement on account of the extremely high precision and the much smaller chance of a blunder. The above procedure, applied to each object point on the projected line, gives a profile of the objects in the scene. For each picture frame, we have a new profile at a different interval. All the coordinates are in a coordinate system located at the projector and are now ready for the next phase of processing.
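    The direct, no-adjustment solution described above can be written as a small linear solve: with the orientation parameters known, each collinearity equation rearranges into a form that is linear in X, Y, and Z, so the two camera equations (x′, y′) and the plane-of-light equation (y″) give three linear equations in three unknowns. The geometry and numbers below are hypothetical but consistent with the model quoted earlier; note that the camera is offset across the projected line (along Y here), since a baseline along the line direction would make the three planes nearly coincident and the solve degenerate.

```python
import numpy as np

def _row(coeff_u, f, m_u, m_3):
    """Coefficient row of one collinearity equation, rearranged to be linear
    in the object coordinates:  (u - u0)*D + f*N = 0  becomes  a . (P - C) = 0."""
    return coeff_u * m_3 + f * m_u

def intersect_point(x_img, y_img, cam, y_proj, proj):
    """Solve the three equations (x', y' from the camera, y'' from the plane
    of light) for the object point P = (X, Y, Z).  `cam` and `proj` hold a
    rotation matrix R (object to image), centre C, principal point, and
    principal distance; all values here are hypothetical."""
    rows, rhs = [], []
    Rc, Cc = cam["R"], cam["C"]
    for u, u0, m_u in [(x_img, cam["x0"], Rc[0]), (y_img, cam["y0"], Rc[1])]:
        a = _row(u - u0, cam["f"], m_u, Rc[2])
        rows.append(a); rhs.append(a @ Cc)
    Rp, Cp = proj["R"], proj["C"]
    a = _row(y_proj - proj["y0"], proj["f"], Rp[1], Rp[2])
    rows.append(a); rhs.append(a @ Cp)
    return np.linalg.solve(np.array(rows), np.array(rhs))

# Hypothetical calibrated setup: projector at the origin, camera 0.3 m away along Y.
proj = {"R": np.eye(3), "C": np.zeros(3), "y0": 0.0, "f": 500.0}
cam = {"R": np.eye(3), "C": np.array([0.0, 0.3, 0.0]), "x0": 0.0, "y0": 0.0, "f": 600.0}

# Forward-project a known point to get consistent measurements, then recover it.
P_true = np.array([0.10, 0.05, 1.2])
dc = cam["R"] @ (P_true - cam["C"])
x_img, y_img = -cam["f"] * dc[0] / dc[2], -cam["f"] * dc[1] / dc[2]
dp = proj["R"] @ (P_true - proj["C"])
y_proj = -proj["f"] * dp[1] / dp[2]
print(np.round(intersect_point(x_img, y_img, cam, y_proj, proj), 6))   # ~ P_true
```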

    OBJECT SHAPE AND POSITION RECOGNITION

    Before being able to recognize different objects, the computer has to learn and memorize them. This can be achieved by placing each object into the view of the data acquisition system and applying all the previously explained steps. The computed coordinates, or cross sections, are then used to extract all the necessary information about the object. This includes fitting a function, S(X,Y,Z,a) = 0, usually a polynomial or a simpler function, to the surface, or each surface, of the object. The parameters a are determined and stored. Other useful information, such as spatial relations between surfaces, the perimeter area, and the coordinates of the center of gravity in the object coordinate system, is also determined and stored. For example, if we have a cylindrical object (Figure 9(a)) with two surfaces shown in the camera view, the cylindrical surface can be described by the equation

    X² + Y² = r²     (3)

    where r is the radius of the circular cylinder and X and Y are surface coordinates with respect to center c. But because in our computation the object coordinates are given in another coordinate system located at projector point O1, additional equations are needed for transformation; i.e.,

    [X  Y  Z]ᵀ = R [Xo − Xs   Yo − Ys   Zo − Zs]ᵀ     (4)

    where R is the rotation matrix between the two systems; Xs, Ys, and Zs are the shift coordinates; and Xo, Yo, and Zo are the computed coordinates in the projector system. The above equations are solved simultaneously for all points to determine X, Y, Z, Xs, Ys, Zs, and R. From these parameters and coordinates, other useful information about the surface can be extracted. The second surface can be determined in a similar way, and the angle between the two surfaces can also be computed. The transformation Equations 4 are valid for any surface, while the constraint Equation 3 is different for each type of surface.
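    The learning and matching computation built on Equations 3 and 4 can be illustrated by fitting a circular cylinder of known radius to measured projector-system points. The rotation and shift of Equations 4 enter only through the position and direction of the cylinder axis, so this sketch parameterizes the axis directly; a match is then declared when the RMS of the residuals is within a preset tolerance. The parameterization, scipy usage, tolerance, and simulated data are assumptions of the sketch, not the system's actual code.

```python
import numpy as np
from scipy.optimize import least_squares

def cylinder_residuals(params, pts, radius):
    """Distance of each point from the cylinder surface.  The pose unknowns of
    Equations 3 and 4 enter only through the axis, parameterized here by a
    point (x0, y0, z0) on the axis and two direction angles (theta, phi)."""
    x0, y0, z0, theta, phi = params
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
    d = pts - np.array([x0, y0, z0])
    # distance from the axis = length of the component of d orthogonal to `axis`
    radial = np.linalg.norm(d - np.outer(d @ axis, axis), axis=1)
    return radial - radius

def match_cylinder(pts, radius, tol_rms=0.002):
    """Fit the stored cylinder model to measured points; declare a match when
    the RMS of the residuals is within the preset tolerance."""
    guess = np.append(pts.mean(axis=0), [0.1, 0.0])     # axis roughly along Z
    sol = least_squares(cylinder_residuals, guess, args=(pts, radius))
    rms = np.sqrt(np.mean(sol.fun ** 2))
    return rms <= tol_rms, rms, sol.x

# Hypothetical test: points on a 40 mm radius cylinder, axis along Z, plus noise.
rng = np.random.default_rng(1)
ang = rng.uniform(0, np.pi, 200)                        # the half a camera can see
z = rng.uniform(0, 0.2, 200)
pts = np.column_stack([0.04 * np.cos(ang) + 0.5, 0.04 * np.sin(ang) - 0.1, z])
pts += rng.normal(0, 0.0003, pts.shape)
ok, rms, pose = match_cylinder(pts, radius=0.04)
print(ok, round(rms, 5))
```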

    FIG. 9. The different object coordinate systems.

    When the actual real-time operation starts, the objects in the field of view are processed as mentioned earlier to determine their three-dimensional coordinates. It is now required to identify which of these objects match a certain stored, or memorized, object. This requires fitting the X, Y, and Z coordinates to the function S(X,Y,Z,a) = 0 using the previously determined values of a for the object of interest. In addition, because the object will usually differ in orientation and location from the reference object (Figure 9(b)), the transformation Equations 4 must be applied with the surface function, and the parameters Xs, Ys, Zs, and R are treated as unknowns in addition to the coordinates X, Y, and Z. A match is declared if the RMS errors of the residuals are within a preset tolerance; otherwise, another surface or object is processed.

    The efficiency and reliability of this approach depend on the complexity of the objects and their visibility to the data acquisition system. A significant improvement can be achieved if several data acquisition systems cover the objects from different angles. These systems will have a common recognition facility into which the information from all the units is entered for processing. This also alleviates the problem of the limited depth of field of the camera, which remains unsolved in a single data acquisition system.

    The vision system outlined above is able to recognize, in real time, shapes and positions of objects for robot manipulation. At the present time, many of the elements are in a feasibility study stage. The elements tested so far are the three-dimensional surface measurements and the recognition of simple objects. The main purpose was to test the data acquisition system and the efficiency and workability of the algorithms. Although all the computations were carried out by software on a general-purpose computer, the surface measurements of one scene took only about one second, while object recognition took about five seconds on the VAX 11/780 computer. This time will increase for more complex scenes and object arrangements and thus, depending on the time limits set by the system's eventual user, some of the software must be converted into modular hardware. A study of the metric quality and the error sources of the CCD camera is now under way. This will result in the assessment of the accuracy of this system and in the determination of the frequency of calibration.

    ACKNOWLEDGMENTS

    The author wishes to thank Manfred Paulun and Al Way-Nee of his affiliation for their help in implementing some of the ideas presented in this paper at the National Research Council of Canada (NRCC) Laboratories.

    REFERENCES

    Agin, G. J., and T. O. Binford, 1973. Computer Description of Curved Objects, Proceedings, Third International Joint Conference on Artificial Intelligence, pp. 629-640.

    Callahan, J. M., 1982. The State of Industrial Robotics, BYTE, pp. 128-142.

    Cohen, P. R., and E. A. Feigenbaum (Editors), 1982. The Handbook of Artificial Intelligence, Vol. III, William Kaufmann Inc., pp. 127-321.

    El-Hakim, S. F., 1984. A Photogrammetric Vision System for Robots, International Archives of Photogrammetry and Remote Sensing, Vol. XXV, Part A5, pp. 223-231.

    Frobin, W., and E. Hierholzer, 1983a. Automatic Measurement of Body Surfaces Using Rasterstereography-Part I: Image Scan and Control Point Measurement, Photogrammetric Engineering and Remote Sensing, Vol. 49, No. 3, pp. 377-384.

    Frobin, W., and E. Hierholzer, 1983b. Automatic Measurement of Body Surfaces Using Rasterstereography-Part II: Analysis of the Rasterstereographic Line Pattern and Three-Dimensional Surface Recognition, Photogrammetric Engineering and Remote Sensing, Vol. 49, No. 10, pp. 1463-1452.

    Geisler, W., 1982. A Vision System for Shape and Position Recognition of Industrial Parts, Proceedings, Second International Conference on Robot Vision and Sensory Control, SPIE, Vol. 392, pp. 253-262.

    Hale, J. A. G., and P. Saraga, 1974. Digital Image Processing, Opto-Electronics, Vol. 6, pp. 333-348.

    Hall, E., J. B. K. Tio, C. A. McPherson, and F. A. Sadjadi, 1982. Measuring Curved Surfaces for Robot Vision, Journal of IEEE Computer Society, Vol. 15, No. 12, pp. 42-54.

    Joel, H. C., 1974. The Corpograph-A Simple Photographic Approach to Three-Dimensional Measurements, Biostereometrics 74, Proceedings of the Symposium of ISP Commission V, Washington, D.C.

    Kreis, Th., H. Kreitlow, and W. Juptner, 1982. Detection of Edges by Video Systems, Proceedings, Second International Conference on Robot Vision and Sensory Control, SPIE, Vol. 392, pp. 9-19.

    Lockton, J. W., 1982. Intellect-A Building Block for Industrial Machine Vision, Proceedings, Second International Conference on Robot Vision and Sensory Controls, SPIE, Vol. 392, pp. 285-292.

    Marr, D., 1982. Vision, W. H. Freeman and Company, San Francisco.


    Mikhail, E. M., 1983. Photogrammetric Target Location to Subpixel Accuracy in Digital Images, Proceedings of the 39th Photogrammetric Week, pp. 217-230.

    Pekelsky, J. R., 1984. Unpublished Notes, Division of Physics, National Research Council of Canada.

    Pugh, A. (Editor), 1983. Robot Vision, IFS Publications Ltd., U.K.

    Real, R. R., 1983. Matrix Camera with Digital Image Processing in Photogrammetric Applications, Proceedings of the Annual Meeting of ASP, pp. 255-266.

    Rosenfeld, A., and A. C. Kak, 1976. Digital Picture Processing, New York: Academic Press.

    Shirai, Y., and M. Suwa, 1971. Recognition of Polyhedrons with a Rangefinder, Proceedings, Second International Joint Conference on Artificial Intelligence, pp. 80-87.

    Suciu, R. E., and A. P. Reeves, 1982. A Comparison of Differential and Moment Based Edge Detectors, IEEE, 82CH1761-6, pp. 97-106.

    Taboada, J., 1981. Coherent-Optical Methods for Applications in Robot Visual Sensing, SPIE, Vol. 283, pp. 25-29.

    Turner-Smith, A. R., and J. D. Harris, 1984. ISIS-An Automated Shape Measurement and Analysis System, Moire Fringe Topography and Spinal Deformity, Proceedings of the 3rd International Symposium, Oxford, England.

    Vanderburg, G. J., J. S. Albus, and E. Barkmeyer, 1979. A Vision System for Real-Time Control of Robots, Proceedings, Ninth International Symposium and Exposition on Industrial Robots, pp. 213-231.

    Wood, G. A., 1983. Realities of Automatic Correlation Problems, Photogrammetric Engineering and Remote Sensing, Vol. 49, No. 4, pp. 537-538.

    (Received 3 April 1984; revised and accepted 25 January 1985)

    New Sustaining Member

    Satellite Hydrology Associates

    6606 Midhill Place, Falls Church, VA 22043; (703) 534-6607

    SATELLITE HYDROLOGY ASSOCIATES is a rapidly growing organization of independent consultants with internationally recognized expertise in applying space-age technology to the solution of water-resource, coastal-zone, and environmental problems, and to operational and management decisions.

    SHA serves as managers or consultants on all phases of water or water-related projects that can benefit from applied satellite technology. SHA works with leading high technology firms in the Washington area or worldwide. This arrangement permits acquisition of the optimum satellite or aircraft data, selection of the most suitable processing equipment, and the best support expertise for each application, thus insuring efficient and cost-effective project design.

    Satellites, computers, and the constantly evolving technology of remote sensing have provided powerful new tools for observing the hydrosphere and man's impact upon it. Further, satellite technology now permits many traditional types of water-resource and environmental investigations and monitoring to be performed more rapidly, accurately, and effectively than ever before, with great confidence and at less cost.

    The beneficial applications of satellite and computer technology to water exploration, development, monitoring, planning, and management have been frequently and convincingly demonstrated during the past decade. Administrators, resource managers, and decisionmakers faced with the myriad of problems related to uncertain water supplies, floods and droughts, agricultural crises, and environmental degradation cannot afford to ignore these new tools of space technology. SHA believes, very strongly, that this tech- nology should now be used operationally to help solve the serious water-supply, water-hazard, and envi- ronmental problems that confront mankind.

    SHA has provided advisors, consultants, and instructors to universities, commercial firms, government and international organizations, responding to the specific remote-sensing needs and data-processing or space-relay needs of these organizations relating to hydrometeorological data networks, water-resource appraisals, ground-water exploration, civil engineering projects, flood mapping, pollution detection and tracking, hydrologic information systems, hydrologic hazards, flood-plain, estuarine and coastal-zone dy- namics, drought and desertification assessments, and dam-site analysis.