  • WHEELS IDENTIFICATION USING MACHINE VISION TECHNOLOGY

    Behrouz N. Shabestari, John W. V. Miller and Victoria Wedding

    Edison Industrial Systems Center 1700 N. Westwood, Suite 2286

    Toledo, Ohio 43607

    ABSTRACT

    An application of an on-line vision system using statistical pattern recognition algorithms for identifying various polycast wheels is described. The recognition is independent of part orientation and position in the camera field of view. Simplicity, efficiency, low cost and easy training for new designs were important criteria for the system. Software and algorithms were developed to locate the wheel, extract the windows of the wheel and compute features which are used for classification. The results indicate a constraint-free system with a real-time recognition rate and a considerable increase in recognition accuracy.

    I. INTRODUCTION

    In a certain production process, automotive wheels of a wide variety of styles and sizes are placed on a conveyor in random order and moved through the process. Prior to a subsequent operation, each wheel must be identified for the process according to its type. An operator previously identified each wheel using a control panel which contained a series of push-button switches. An additional switch was used to distinguish a painted from an unpainted wheel. A programmable logic controller (PLC) read the selected switch and initiated the proper manufacturing sequence for the identified wheel.

    Style varieties and similarities, combined with conveyor speed, made reliable manual recognition difficult. Errors were inevitable, and the result of misidentification was unnecessary scrap and rework. Automated wheel identification using machine vision technology resulted in higher accuracy and an immediate economic payback [1,2]. Simplicity, efficiency and low cost were important criteria for the vision system.

    This paper describes how a low-cost PC-based vision system was used effectively in real time to recognize images of different types of wheels [1]. Each wheel was analyzed based on the shapes of its windows using a small set of features. A sample set of wheels was used to create a cluster for each wheel type, and the Mahalanobis minimum-distance classifier was then used for identification [3,4].

    II. DATA ACQUISITION AND PROCESSING

    Images of the wheels were acquired, digitized and processed. Useful features were extracted from the wheel windows.

    An image of each wheel was acquired using a standard CCD matrix camera. A frame grabber digitized the camera signal to a 320-by-240-pixel image with 256 gray levels, as shown in Figure 1.

    An effective and correct recognition of such an image, for the wide variety of wheel styles with location and orientation variation, would involve a huge amount of data processing unless the number of reference patterns could be reduced. Hence, the following procedure was used to allow redescription of the images in terms of significant information, regardless of location and orientation.

    To speed the recognition process, the gray-scale image was run-length encoded very rapidly by recording the locations of transition pixels on a given scan, i.e., pixels at which the value crossed the estimated threshold.
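The encoding step described above can be sketched as follows. This is an illustrative modern sketch, not the authors' implementation; the scan-line representation (a list of gray values) and the fixed-threshold convention are assumptions.

```python
def run_length_encode(scan, threshold):
    """Record where a scan line crosses a fixed threshold.

    Returns a list of (start, end) column pairs, one per run of
    pixels at or above the threshold (end is exclusive).
    """
    runs = []
    start = None
    for x, value in enumerate(scan):
        above = value >= threshold
        if above and start is None:
            start = x                # transition: dark -> bright
        elif not above and start is not None:
            runs.append((start, x))  # transition: bright -> dark
            start = None
    if start is not None:            # run extends to the end of the scan
        runs.append((start, len(scan)))
    return runs
```

Only the transition positions are stored, so each scan line collapses to a handful of pairs instead of hundreds of pixels.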

    Simultaneously, the histogram of the image was generated by using the gray-scale values as indices into an array of long integers and incrementing the selected elements. This histogram was used to distinguish a painted wheel from an unpainted one before classification.

    Following image encoding, connectivity analysis was performed by linking strings on adjacent scans that overlap until all strings have been assigned to blobs (connected regions) [5]. The resulting blobs are shown in Figure 2.
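The string-linking step can be implemented with a union-find structure over the run-length pairs; this is a standard realization of the connectivity analysis described, and the exact bookkeeping the authors used is not specified.

```python
def label_runs(runs_per_scan):
    """Group overlapping runs on adjacent scan lines into blobs.

    runs_per_scan: list over scan lines, each a list of (start, end)
    column intervals (end exclusive). Returns a list of blobs, each a
    list of (scan_index, start, end) tuples.
    """
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    ids = []       # run ids, parallel to runs_per_scan
    next_id = 0
    for y, runs in enumerate(runs_per_scan):
        row_ids = []
        for s, e in runs:
            parent[next_id] = next_id
            # link to any overlapping run on the previous scan line
            if y > 0:
                for (ps, pe), pid in zip(runs_per_scan[y - 1], ids[y - 1]):
                    if s < pe and ps < e:   # column intervals overlap
                        union(next_id, pid)
            row_ids.append(next_id)
            next_id += 1
        ids.append(row_ids)

    blobs = {}
    for y, runs in enumerate(runs_per_scan):
        for (s, e), rid in zip(runs, ids[y]):
            blobs.setdefault(find(rid), []).append((y, s, e))
    return list(blobs.values())
```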

    Blob analysis was performed, and the attributes of each blob were obtained from the stored information. The first-order moments were calculated to provide the x and y centers of mass for the calculation of the second-order central moments [6,7]. The second-order moments were fitted to an ellipse to provide orientation-invariant information. The blob major and minor axes were obtained by fitting the ellipse. The area of each blob in pixels was determined, and the blob aspect ratio was calculated as the ratio of the minor to the major axis. These calculations are relatively straightforward and are described briefly in Appendix A.
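The moment calculations summarized here (and detailed in Appendix A) can be sketched as follows. The eigenvalues of the second-moment matrix give the axis variances; the conversion of those variances to axis lengths (a factor of 2·sqrt) is a conventional choice, not taken from the paper.

```python
import math

def blob_shape(pixels):
    """Centroid, axis lengths and orientation of a blob from its pixel
    coordinates, via second-order central moments."""
    n = len(pixels)                       # M00, the area in pixels
    cx = sum(x for x, _ in pixels) / n    # M10 / M00
    cy = sum(y for _, y in pixels) / n    # M01 / M00
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # eigenvalues of [[mu20, mu11], [mu11, mu02]] in closed form
    half_diff = math.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam1 = (mu20 + mu02) / 2 + half_diff  # variance along the major axis
    lam2 = (mu20 + mu02) / 2 - half_diff  # variance along the minor axis
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)  # major-axis direction
    major = 2 * math.sqrt(lam1)
    minor = 2 * math.sqrt(max(lam2, 0.0))
    return {"area": n, "center": (cx, cy), "major": major, "minor": minor,
            "aspect": minor / major if major else 1.0, "orientation": theta}
```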

    Blobs not associated with windows were eliminated based on specific geometric properties. Blob area was used to eliminate blobs that were either too big or too small to be windows. The geometrical centers of the remaining blobs were then used to estimate the wheel center. Next, the distance of each blob's geometrical center from the wheel center was determined to eliminate blobs that were too near to or too far from the wheel center. The remaining blobs represent the wheel's windows, which were used for classification, as shown in Figure 3.

    273 CH3051-0/91/0000-0273 $1.00 © 1991 IEEE
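The two elimination passes can be sketched as below; the threshold parameters and the blob dictionary layout are illustrative assumptions, not values from the paper.

```python
def window_blobs(blobs, min_area, max_area, min_r, max_r):
    """Keep only blobs whose area and distance from the estimated
    wheel center are plausible for a window."""
    # pass 1: reject blobs too big or too small to be windows
    sized = [b for b in blobs if min_area <= b["area"] <= max_area]
    if not sized:
        return []
    # estimate the wheel center from the surviving blob centers
    wx = sum(b["center"][0] for b in sized) / len(sized)
    wy = sum(b["center"][1] for b in sized) / len(sized)
    # pass 2: reject blobs too near to or too far from that center
    kept = []
    for b in sized:
        dx, dy = b["center"][0] - wx, b["center"][1] - wy
        r = (dx * dx + dy * dy) ** 0.5
        if min_r <= r <= max_r:
            kept.append(dict(b, radius=r))
    return kept
```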

    III. FEATURE EXTRACTION AND CLASSIFICATION

    The remaining blobs' features were median filtered to reduce error and to select the most typical window. The following attributes of that window were used for classification:

    1. Blob major axis

    2. Blob minor axis

    3. Blob aspect ratio

    4. Blob area in pixels

    5. Blob radius (the distance from the center of the wheel to the geometrical center of the blob)
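A sketch of the median filtering over these five attributes follows. Picking the window closest to the per-feature median vector as "most typical" is an assumption about how the selection was made; the paper states only that median filtering was used.

```python
def typical_window(windows):
    """Select the most typical window: take the per-feature median over
    all candidate windows, then return the window nearest that vector."""
    feats = ["major", "minor", "aspect", "area", "radius"]

    def median(vals):
        vals = sorted(vals)
        mid = len(vals) // 2
        return vals[mid] if len(vals) % 2 else (vals[mid - 1] + vals[mid]) / 2

    med = {f: median([w[f] for w in windows]) for f in feats}
    # the window nearest the median feature vector stands in for the wheel
    return min(windows, key=lambda w: sum((w[f] - med[f]) ** 2 for f in feats))
```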

    To measure the similarity that assigns a pattern to the domain of a particular cluster, it was necessary to establish mean and covariance statistics for each wheel type on the production line.

    Class statistics were determined with a set of samples. The mean vector m_i and covariance matrix C_i of each population i were determined. The Mahalanobis distance from a feature vector X to m_i is then calculated by

    D_i = (X - m_i)^T C_i^(-1) (X - m_i)

    where C_i^(-1) is the inverse of the sample covariance matrix C_i. The distance D_i is a measure of similarity; the smaller the distance, the greater the similarity.

    The Mahalanobis distance was calculated to each class mean in the five-dimensional feature space. Features with small variances were thereby weighted more heavily than features with large variances. The wheel was assigned to the class whose mean is closest to the features of the most typical blob in the five-dimensional space after normalization.
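The minimum-distance classifier can be sketched directly from the formula above; the dictionary layout of the class statistics is an assumption.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance D_i = (x - m_i)^T C_i^(-1) (x - m_i)."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(d @ np.linalg.inv(cov) @ d)

def classify(x, classes):
    """Assign x to the class with the smallest Mahalanobis distance.
    `classes` maps a label to its (mean, covariance) sample statistics."""
    return min(classes, key=lambda k: mahalanobis_sq(x, *classes[k]))
```

Because each class covariance appears in its own distance, features with small within-class variance automatically receive large weight, matching the behavior described above.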

    IV. DISCUSSION

    The initial approach used a back-lighting scheme on a quarter of the wheel. A single high-resolution window was extracted and analyzed. However, due to the excessive location variation and an additional request to distinguish painted from unpainted wheels, a different approach was taken. A front-lighting scheme was used to determine the presence or absence of paint based on reflectivity. A larger field of view was used to cover the whole wheel and its location variations. A ring light was used for uniform lighting to minimize shadowing.

    Since the system is sensitive to lighting variations, performance could be improved by compensating for illumination changes. This can be accomplished by daily calibration using a golden part, or by implementing auto-thresholding algorithms based on scene properties.
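The paper does not specify an auto-thresholding algorithm; Otsu's method, which picks the gray level maximizing between-class variance of the image histogram, is one plausible scene-based choice and is sketched here.

```python
def otsu_threshold(hist):
    """Return the gray level t that maximizes the between-class variance
    of the histogram (pixels <= t form the dark class). Otsu's method."""
    total = sum(hist)
    total_sum = sum(g * h for g, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = s0 = 0                       # dark-class pixel count and gray sum
    for t, h in enumerate(hist):
        w0 += h
        if w0 == 0 or w0 == total:    # one class empty: skip
            continue
        s0 += t * h
        m0 = s0 / w0                            # mean of the dark class
        m1 = (total_sum - s0) / (total - w0)    # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```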

    Figure 1. Gray-Level Image

    Figure 2. Binary Image after Connectivity Analysis

    Figure 3. Final Blobs


  • V. CONCLUSION

    This project has successfully demonstrated a low-cost PC-based vision system for identifying automotive wheels. The results indicate that statistical pattern recognition algorithms were well suited for such applications. The system recognizes unusual patterns such as an empty conveyor, and could easily be trained to include new designs for identification. The time required for identification on a 25 MHz 386 PC-compatible machine was approximately 2 seconds. The identification accuracy was over 99 percent.

    APPENDIX A

    The two-dimensional moments of a digital image are usually approximated by double summations as

    M_mn = Σ_x Σ_y x^m y^n p(x, y)  (1)

    where M_mn is the (m,n)th joint moment and p(x, y) is the intensity matrix in which x and y are discrete locations of image pixels. The central moments μ_mn of p(x, y) are defined as

    μ_mn = Σ_x Σ_y (x - M_10/M_00)^m (y - M_01/M_00)^n p(x, y)  (2)

    where M_10/M_00 is the x center of mass and M_01/M_00 is the y center of mass.

    Second-order central moments are used for fitting an ellipse to the data. The matrix A of second moments and first cross moments is defined as

    A = | μ_20  μ_11 |
        | μ_11  μ_02 |  (3)

    The eigenvalues and eigenvectors are found such that

    X^T A X = Φ  (4)

    where Φ is a diagonal matrix containing the eigenvalues of A, X is the matrix of the associated eigenvectors, and X^T is the transpose of X.

    The directions of the orthogonal eigenvectors correspond to the orientation of the axes of an ellipse with the moments given in equation (3). The eigenvector associated with the dominant eigenvalue points in the direction of the major axis and may be used as the orientation of the object.

    VI. ACKNOWLEDGEMENTS

    This work was funded by Motor Wheel Corporation, Luckey, Ohio. The authors wish to thank Mr. Richard J. Ashman, Engineering Manager, and his group at Motor Wheel for their assistance in the implementation of the system.

    REFERENCES

    [1] B. N. Shabestari, J. W. V. Miller and V. Wedding, "Wheel Identification System," Technical Report, Edison Industrial Systems Center, July 1990.

    [2] R. E. Tjia, K. J. Cios and B. N. Shabestari, "Neural Networks in Identification of Car Wheels from Gray Level Images," Fourth Conference on Neural Networks and Parallel Distributed Processing, Indiana University-Purdue University, April 1991.

    [3] J. T. Tou and R. C. Gonzalez, Pattern Recognition Principles. Massachusetts: Addison-Wesley Publishing Company, 1974.

    [4] J. Sklansky and G. N. Wassel, Pattern Classifiers and Trainable Machines. New York: Springer-Verlag.

    [5] R. Cunningham, "Segmenting Binary Images," Robotics Age, Vol. 3, No. 4, July/August 1981.

    [6] C. H. Teh and R. D. Chin, "On Digital Approximation of Moment Invariants," Computer Vision, Graphics, and Image Processing, Vol. 33, No. 3, March 1986.

    [7] N. Zuech, Applying Machine Vision. New York: John Wiley and Sons, 1988.
