[IEEE 2009 Third Asia International Conference on Modelling & Simulation - Bandung, Bali, Indonesia (2009.05.25-2009.05.29)]

Multi-Sensor Autonomous Robot based Manipulation of Valves for Process Control

†Prabhakar Mishra, ††H. N. Shankar, ††Jayesh Sudhir Bhat, ††Sumanth R. Kubair, ††Sameera Bharadwaja H, ††Anudhan S, ††Divya Kamath Hundi, *Goutham Kamath

†Author for correspondence, PES Centre for Intelligent Systems, Department of Telecommunication Engineering, Email: [email protected], ††Department of

Telecommunication Engineering, *Department of Information Science and Engineering, PES Institute of Technology, Bangalore, India

Abstract

Autonomous robots find application in process industries for control operations such as valve manipulation, especially in hazardous environments. Such tasks require localization and mapping capabilities for efficient navigation, coupled with a dexterous arm for manipulating the valves. Dynamic path planning and obstacle avoidance are necessary for autonomous navigation in the work area of a process industry. This paper presents a way to perform these tasks using an autonomous robot with multi-sensory inputs and vision-based inference.

1. Introduction

1.1. Overview of the problem

In many process industries, certain routine tasks such as manipulation of valves are demanding in terms of human fatigue and the safety of the environment in which they must be performed. Such tasks are repetitive, and the use of robots to perform them is attractive. Automating the control of valves is of special interest when the environment is hazardous for humans. In this paper, we propose the use of robots to autonomously navigate through the dynamic environment of a process industry, avoiding obstacles and identifying valves that need manipulation.

The term ‘obstacle’ refers to anything in the robot’s path other than the target; this may be industry material (stationary or mobile) or human workers. When the robot reaches the target valve, it has to turn it fully clockwise or anticlockwise through a known number of turns. The process of using an on-board mechanical arm to turn the valve with vision-based algorithms is explained, assuming that the robot knows the height of the valve above ground level. The problem may be seen as threefold: 1) autonomous navigation through a dynamically changing environment with obstacle avoidance, 2) detection of the valve and finding its initial orientation, and 3) manipulation of the valve.

The autonomous navigation of a robot can be divided into three sub-tasks, namely 1) goal seeking, 2) self-localization and 3) obstacle avoidance. Navigation is done by dead reckoning, which is elaborated in section 2.1. Self-localization is an error-correcting process that compensates for the drift introduced by dead reckoning. It is accomplished using the Scale Invariant Feature Transform (SIFT) proposed by David G. Lowe [7], and is elaborated in section 2.2.

Real-time static obstacle avoidance for the autonomous navigation of a robot may be achieved by many techniques, such as the Virtual Force Field (VFF) and Vector Field Histogram (VFH) methods proposed by Borenstein and Koren in [1] and [2]. Retroactive position correction and a model-based vision mechanism for non-stop navigation and self-localization, respectively, were proposed by Akihisa Ohya et al. [3]. In this paper, we adopt a grid-based fuzzy-controlled approach integrating the VFF, VFH and Retroactive Probabilistic Algorithm (RPA) methods, as proposed by Prabhakar Mishra et al. [4], since it is found to be most suitable for detecting static obstacles in an industry. As regards dynamic

2009 Third Asia International Conference on Modelling & Simulation

978-0-7695-3648-4/09 $25.00 © 2009 IEEE

DOI 10.1109/AMS.2009.92


obstacles, we use an extension of the method proposed for static obstacles in [4]. These obstacle detection and avoidance techniques are elaborated in section 2.3.

When the robot navigates to the valve in question and locates it in the frame of a snapshot taken by the on-board camera (valve detection is elaborated in section 3), it must use its mechanical arm to turn the valve. Manipulation of an object by a robot arm can be achieved by several methods, one of which is Sensor Dependent Task Definition as proposed by Koh Hosoda [5]. The merit of this idea is that one control module can be derived for each sensor without dealing with all degrees of freedom and all sensors at once. However, owing to the complexity of that design, which uses three fingers and hence more degrees of freedom, we have developed a method using concepts from [6] to control a simpler mechanical structure with fewer degrees of freedom, which is sufficient for the task at hand. Valve manipulation using the robotic arm is elaborated in section 4.

1.2. Architecture of the robot

Our robot ‘Freelancer-2’ is a multi-sensor robot custom-built for the problem described in section 1.1. It has 1) a four-wheel differential drive system; 2) a single camera used as the input sensor for the self-localization and valve detection algorithms; 3) a digital compass for precise orientation; 4) four sonar sensors, one at each corner, for detection and ranging of obstacles up to 3 m; 5) 15 infrared LED sensor pairs to detect obstacles in close proximity, to facilitate guiding the robot through a clutter of closely spaced obstacles; 6) a speed-distance encoder calibrated to provide the distance traveled by the robot in a specific time interval; 7) a distributed control architecture with a MOSFET-based full-bridge chopper drive in Class-E configuration driven by a microcontroller (powered by two sources, one for the drive and one for control); and 8) a robotic arm comprising extendable twin sliding joints (for horizontal and vertical movement) and one rotary joint supporting a two-fingered fork.

2. Autonomous navigation

2.1. Goal Seeking

The robot, when commissioned, is fed with a map consisting of the co-ordinates of all the valves it is required to operate, with its present position taken as (0, 0) in a 2-D spatial co-ordinate system and 0 as its camera elevation in the vertical plane. The map also holds the circumference of the robot’s navigable area and the permanent obstacles in the surroundings (e.g. a wall). When the robot is required to operate a particular valve, that valve’s 2-D spatial co-ordinates are specified by the operator.

The initial path approximation is a set of interconnected straight-line segments between the robot’s present co-ordinates and the valve’s co-ordinates, taking into account the outer periphery and the permanent obstacles from the map. In an obstacle-free environment, the robot follows this path. The robot’s co-ordinates are updated by dead reckoning, using the speed-distance encoder, which is calibrated to give the distance traversed in a given time period, and the electronic compass, which provides the robot’s orientation. The errors introduced by this method are nullified as far as possible by the self-localization algorithm. When an obstacle is encountered, control jumps to the obstacle avoidance algorithm.
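Dead reckoning as used here reduces to a simple pose update from the encoder distance and the compass heading. A minimal sketch, assuming headings are measured in degrees counter-clockwise from the x-axis (function and parameter names are illustrative, not from the original implementation):

```python
import math

def dead_reckon(x, y, distance, heading_deg):
    """Update the robot's 2-D position estimate over one sampling interval.

    distance    -- path length reported by the speed-distance encoder
    heading_deg -- absolute heading from the digital compass (degrees,
                   measured counter-clockwise from the +x axis)
    """
    theta = math.radians(heading_deg)
    return x + distance * math.cos(theta), y + distance * math.sin(theta)

# Starting from the commissioning origin (0, 0), travel one unit at 90 degrees:
x, y = dead_reckon(0.0, 0.0, 1.0, 90.0)   # ≈ (0.0, 1.0)
```

The drift this accumulates through wheel slip and compass error is what the self-localization step in section 2.2 corrects.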

2.2. Self-localization

The aim is to nullify translational errors due to wheel slip or speed-distance encoder errors, and rotational errors due to the electronic compass itself. The SIFT approach transforms an image into a large collection of local feature vectors, each of which is invariant to image translation, scaling and rotation, and partially invariant to illumination changes and affine or 3-D projection. The method consists of three basic steps, namely 1) key localization, 2) local description and 3) mapping. The smoothing functions used to achieve scale invariance (for scale-space analysis) are the 2-D Gaussian kernel and its derivatives with σ = √2. To achieve rotation invariance and a high level of efficiency, key locations are selected at the maxima and minima of a difference-of-Gaussian function applied in scale space. This can be computed very efficiently by building an image pyramid with re-sampling between levels. Furthermore, it locates key points at regions and scales of high variation, making these locations particularly stable for characterizing the image. The detailed procedure is given in [7].
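The difference-of-Gaussian extrema detection can be sketched in a few lines. This is a simplified single-octave version with no resampled pyramid and no local description stage; the threshold and scale step are illustrative values, not Lowe's tuned parameters:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing (numpy only, 'same' size, zero-padded)."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog_extrema(img, sigma=np.sqrt(2), levels=4, threshold=0.02):
    """Candidate key locations: 3x3 maxima/minima of difference-of-Gaussian
    images across a small scale stack."""
    blurred = [gaussian_blur(img.astype(float), sigma * 2 ** (i / 2))
               for i in range(levels)]
    dogs = [b1 - b0 for b0, b1 in zip(blurred, blurred[1:])]
    keys = []
    for d in dogs:
        for y in range(1, d.shape[0] - 1):
            for x in range(1, d.shape[1] - 1):
                patch = d[y - 1:y + 2, x - 1:x + 2]
                v = d[y, x]
                if (v == patch.max() and v > threshold) or \
                   (v == patch.min() and v < -threshold):
                    keys.append((x, y))
    return keys
```

A single bright point produces a center-surround DoG response whose extremum sits at that point, which is the behaviour the self-localization stage relies on.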

At each of a set of co-ordinates chosen by the operator (called ‘locale co-ordinates’), the local feature vectors of one or more images corresponding to particular pose(s), called the expectation map, are stored in the robot’s database when it is commissioned. This process is often referred to as the training/learning phase. When the robot is at, or in the vicinity of, any of these locale points during navigation, it takes an image of the surroundings with the camera’s pose


matching that used during the training phase at that point, and local features are extracted from this image. These are compared with those of the database image (the invariant vectors/map). If an error is detected, it is mapped onto an offset in the robot’s position, its orientation, or both, which is then corrected, thereby nullifying the errors introduced by dead reckoning alone.

2.3. Obstacle avoidance

As mentioned earlier, we use a grid-based fuzzy-controlled approach to obtain information about the surroundings, from which obstacle information is extracted and used as the input to the obstacle avoidance algorithm. Priorities for this information are assigned by the fuzzy controller according to the state of the surroundings (e.g. obstacle density), since the response (efficiency) of the algorithm changes with the dynamically changing surroundings.

The fuzzy controller consists of three basic sub-units as given in [4]. The input signals to the controller are 1) the heading of the nearest obstacle relative to the robot and 2) its range as measured by the sonar sensors. A positive/negative heading angle is assigned according to whether the obstacle is located to the right or left of the robot. The outputs of the controller are 1) the instantaneous speed and 2) the heading, which together yield the four-wheel drive command. Gaussian membership functions with appropriate means and variances for fuzzification, and a min-max inference engine in conjunction with the centroid defuzzification method, determine the control action. The detailed procedure for this method is given in [4].
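The fuzzification-inference-defuzzification chain can be sketched as follows. The rule base and the membership means and variances here are illustrative stand-ins, not the tuned values of the controller in [4]:

```python
import numpy as np

def gauss(x, mean, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def steer_correction(obstacle_heading_deg, obstacle_range_m):
    """One output of the fuzzy chain: fuzzify with Gaussian memberships,
    AND each rule with min, max-aggregate, then defuzzify by centroid.
    Positive obstacle heading = obstacle to the robot's right."""
    u = np.linspace(-90.0, 90.0, 361)               # candidate heading changes (deg)
    left = gauss(obstacle_heading_deg, -45.0, 25.0)  # obstacle LEFT membership
    right = gauss(obstacle_heading_deg, 45.0, 25.0)  # obstacle RIGHT membership
    near = gauss(obstacle_range_m, 0.5, 0.5)
    far = gauss(obstacle_range_m, 3.0, 1.0)
    aggregated = np.maximum.reduce([
        np.minimum(min(left, near), gauss(u, 45.0, 20.0)),    # LEFT & NEAR -> steer right
        np.minimum(min(right, near), gauss(u, -45.0, 20.0)),  # RIGHT & NEAR -> steer left
        np.minimum(far, gauss(u, 0.0, 20.0)),                 # FAR -> keep heading
    ])
    return float(np.sum(u * aggregated) / np.sum(aggregated))  # centroid
```

A nearby obstacle to the right produces a negative (leftward) heading correction, and vice versa; a distant obstacle leaves the heading essentially unchanged.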

The input to the obstacle avoidance algorithm is processed to determine the changes in the instantaneous speed and heading of the robot needed to avoid collision with the obstacle. When this method is extended to dynamic obstacles, it is better to have at least an estimate of the obstacle’s trajectory relative to the robot, owing to the obstacle’s stochastic motion. From this estimate, the collision path, if any, can be found from the point of intersection of the two trajectories, and the instantaneous speed and heading of the robot can be altered to drive it away and avoid the collision. Once the collision is avoided, the robot self-localizes, returns to the original goal-path (or plans a new goal-path from its present location) and proceeds towards the goal as before.
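Under a short-horizon constant-velocity estimate of the obstacle's relative trajectory, the collision check reduces to a closest-point-of-approach computation. A sketch (the safety radius is an assumed value):

```python
import numpy as np

def collision_risk(p_rel, v_rel, safe_radius=0.5):
    """Closest-point-of-approach test for a dynamic obstacle.

    p_rel -- obstacle position relative to the robot
    v_rel -- obstacle velocity relative to the robot (constant-velocity estimate)
    Returns (time_to_closest_approach, miss_distance, on_collision_course).
    """
    p = np.asarray(p_rel, dtype=float)
    v = np.asarray(v_rel, dtype=float)
    vv = v.dot(v)
    if vv < 1e-12:                 # obstacle is stationary relative to the robot
        miss = float(np.linalg.norm(p))
        return 0.0, miss, miss < safe_radius
    t = max(0.0, -p.dot(v) / vv)   # time at which the separation is smallest
    miss = float(np.linalg.norm(p + t * v))
    return t, miss, miss < safe_radius

# Obstacle 2 m ahead, closing head-on at 1 m/s: collision in about 2 s
t, miss, risky = collision_risk([2.0, 0.0], [-1.0, 0.0])
```

When `risky` is true, the speed and heading corrections described above are applied until the predicted miss distance exceeds the safety radius.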

3. Valve detection

For processing purposes, the valve must have a groove or marking at the outer edge near the circumference. A reference image of the valve is captured in advance, using the robot’s own on-board camera, at a known distance from the valve. Thus, for a particular reference distance, the diameter (in pixels) is calculated. Both the distance and the diameter are stored in the robot’s memory as the reference; the reference image is shown in fig. 2. After the robot successfully navigates to the valve and positions itself one unit in front of it, it must manipulate the valve according to the operator’s instructions. A snapshot of the valve is taken with the camera, and the robot’s distance from the valve is measured by magnification calculations against the reference image stored in the robot’s memory.
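The magnification calculation follows from the pin-hole model: the apparent diameter scales inversely with range, so the unknown distance is d = d_ref · D_ref / D_obs. A sketch using the values from Tables 1 and 2:

```python
def distance_from_valve(ref_distance_cm, ref_diameter_px, observed_diameter_px):
    """Pin-hole magnification: apparent diameter is inversely proportional
    to range, so d = d_ref * D_ref / D_obs."""
    return ref_distance_cm * ref_diameter_px / observed_diameter_px

# Reference: 403 px at 16 cm (Table 1); test image in good lighting: 231 px
d = distance_from_valve(16.0, 403.0, 231.0)   # ≈ 27.9 cm, as in Table 2
```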

A valve looks circular when viewed along the normal to its plane: its horizontal span equals its vertical span, both equal to the diameter. Using this fact, the robot can align itself so that its arm is perpendicular to the valve. If the valve appears as an ellipse in the captured image (the vertical span equal to the valve diameter, the horizontal span smaller), the robot moves in an arc in the horizontal plane until the valve appears circular, at which point it is properly oriented. The experimental results are tabulated in section 6. Once the robot is positioned one unit in front of the valve, the arm must be aligned with the valve so that the valve can be manipulated as required.
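The alignment test above can be sketched as an aspect-ratio check: a circle viewed off-axis by an angle φ foreshortens its horizontal span to D·cos φ. The tolerance below is an assumed value:

```python
import math

def off_axis_angle(horizontal_span_px, vertical_span_px, tolerance=0.02):
    """Estimate the viewing angle from the spans of the imaged valve.

    Returns 0.0 when the spans agree to within `tolerance` (head-on view);
    otherwise the off-axis angle in radians, indicating how far the robot
    should arc sideways before re-checking.
    """
    ratio = min(1.0, horizontal_span_px / vertical_span_px)
    if 1.0 - ratio < tolerance:
        return 0.0
    return math.acos(ratio)

# Equal spans -> aligned; a half-width horizontal span -> 60 degrees off-axis
angle = off_axis_angle(50, 100)
```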

4. Orientation of mechanical arm

4.1. The arm structure

The robot’s configuration has two degrees of freedom associated with the arm: radial traverse (a prismatic joint) and rotational traverse (a rotary joint), as shown in fig. 1. The prismatic joint extends or retracts the arm from the vertical centre of the robot; the rotary joint rotates the arm about the vertical axis. The small ‘U’-shaped structure attached to the arm locks onto the valve wheel. The specifications of the arm are as follows: the sliding arm is 6” long, the fingers protrude 1” from the joint, and the ends of the fingers are 4” apart. The arm moves parallel to the ground along the breadth of the robot,


providing a two-foot range centred on the frame. The web camera is placed to the side of the arm, and it is ensured that the arm does not obstruct the camera’s view while rotating.

Fig. 1. The robot’s mechanical arm

4.2. Alignment of arm

The arm needs to be positioned in front of the valve, and LED flashers are then used to align it. The valve and the LED flashers are placed such that the actuator is aligned with the valve when the LED flasher is at the horizontal centre of the captured frame. Using the IR sensors, the current position of the valve with respect to the horizontal centre of the frame is found. If the position is within a 10-pixel error, correction stops and the arm moves forward to the valve; otherwise the arm is moved right or left as appropriate. Once the arm has been aligned, it moves forward to lock into the valve.
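The 10-pixel alignment rule is a simple bang-bang decision. A sketch, assuming a 320-pixel-wide frame (the camera resolution is not stated in the text) and that the arm moves in the same direction as the flasher's offset:

```python
def arm_command(flasher_x_px, frame_width_px=320, tol_px=10):
    """Decide the next arm move from the LED flasher's horizontal position.

    Returns 'forward' once the flasher is within tol_px of the frame centre,
    else 'right'/'left' to shift the arm toward the centre.
    """
    error = flasher_x_px - frame_width_px // 2
    if abs(error) <= tol_px:
        return "forward"
    return "right" if error > 0 else "left"
```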

4.3. The transformation matrix

A body-attached co-ordinate frame is established along the joint axis of each link. Using the transformation matrix, the body-attached co-ordinate frame is related to the reference frame, which is the centre of the robot. A point at rest in link i can be represented in the base co-ordinate system using the transformation matrix T: if p is the position vector, then

p_{i-1} = T p_i.

The differential translation along the principal axes, (dx dy dz)^T, and the differential rotation about the principal axes, (δx δy δz)^T, can be found using

(dx dy dz δx δy δz)^T = J (dd_i dθ_i)^T,

where J is the manipulator Jacobian, ‘d1’ is the length of the base link and ‘d2’ is the range on either side of the prismatic joint.

For example, if the end effector is to be rotated by 30 degrees from its present orientation, joint i = 2 offers no translational motion, only rotational, so (dd_i dθ_i)^T = (0 30°)^T. Multiplying this by the Jacobian, we obtain (dx dy dz δx δy δz)^T = (0 −3.14 0 0.52 0 0)^T.
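The worked example can be reproduced numerically. A sketch assuming the 6” sliding link acts as the effective lever arm and the rotary joint acts about the x-axis (the full Jacobian of the original arm is not reproduced in the text):

```python
import math

ARM_LENGTH_IN = 6.0   # the 6-inch sliding link, taken as the effective lever arm

def end_effector_twist(dd_in, dtheta_deg):
    """(dx dy dz δx δy δz)^T = J (dd dθ)^T for the prismatic + rotary pair.

    The prismatic joint translates along x; the rotary joint (taken to act
    about the x-axis) sweeps the link tip, moving it by -L*dθ along y.
    """
    dtheta = math.radians(dtheta_deg)
    return (dd_in, -ARM_LENGTH_IN * dtheta, 0.0, dtheta, 0.0, 0.0)

# Rotating the end effector by 30 degrees with no sliding motion:
twist = end_effector_twist(0.0, 30.0)   # ≈ (0, -3.14, 0, 0.52, 0, 0)
```

This reproduces the (0 −3.14 0 0.52 0 0)^T twist quoted in the text: 6 × 30° in radians gives 3.14 inches of tip motion along −y.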

4.4. Motor control

The robot also houses a DC servomotor acting as an actuator, which provides the motive force for the robot’s joints. Depending on whether the back-emf produced is positive or negative, the link attached to the prismatic joint slides towards or away from the valve. The DC servomotor also rotates the rotary joint clockwise or anticlockwise as required. The translation and rotation with respect to the principal axes, calculated using the manipulator Jacobian, serve as inputs to the joint controllers, which provide the signals to move the arm into the required position.

5. Valve manipulation

The fingers are initially placed in a default position, say parallel to the base. Using the pictures obtained from the camera, the angle between the horizontal base plane and the line joining the (green) mark on the valve to the centre of the valve is obtained; this is the angle of rotation for the arm’s rotary joint. The controller processes this angle, and the necessary signals actuate the rotary joint. A sensor on the fingers gives the angle through which they have rotated, which the controller monitors. Once the required angle is reached, the fingers are ready to be placed in the valve.
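The rotary-joint angle is the angle of the centre-to-mark line above the horizontal, directly computable with atan2. A sketch, assuming image co-ordinates with y growing downward:

```python
import math

def rotary_joint_angle_deg(mark_xy, centre_xy):
    """Angle of the centre-to-mark line above the horizontal, in degrees.

    Works in image co-ordinates, where y grows downward, so the y
    difference is flipped to make 'above the centre' positive.
    """
    dx = mark_xy[0] - centre_xy[0]
    dy = centre_xy[1] - mark_xy[1]
    return math.degrees(math.atan2(dy, dx))

# Mark 50 px directly above the valve centre -> rotate the joint by 90 degrees
angle = rotary_joint_angle_deg((100, 40), (100, 90))
```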

The arm now needs to slide so as to fix the fingers in the loops of the valve. The IR sensors give the distance of the valve from the robot, from which the distance from the base of the mechanical arm is


calculated. The Jacobian then gives the distance by which the arm should slide to place the fingers in the valve. Binary sensors placed at the centre of the inner edges of the fingers detect when the fingers have been placed in the valve.

Once the arm is locked into the valve, it performs the operation of opening or closing it; the operation and the number of turns are provided by the controller. Automobile windshield-wiper motors, which drive both the robot and the arm, provide the required torque. Once the operation has been performed, the arm is retracted and the default settings are restored.

6. Results and observations

The initial orientation of the valve, in terms of the angle which the reference point on the circumference makes with the horizontal, is found using image-based calculations as follows.

Assuming the valve is red, it is identified against the background and other objects in the picture using colour-specific filtering, and the edge of the filtered image is found.

The centre of the valve is calculated from the edge, and the centre of the reference mark near the circumference is found similarly. The radius, and hence the diameter, of the valve is calculated. Using the reference image, magnification calculations give the distance of the robot from the valve.
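A minimal sketch of the centre/diameter computation from edge pixels, taking the centroid of the edge as the centre and the mean edge-to-centre distance as the radius (the original MATLAB implementation may differ):

```python
import numpy as np

def valve_centre_and_diameter(edge_points):
    """Centre and diameter from the valve's edge pixels.

    edge_points -- N x 2 array of (x, y) co-ordinates produced by the
    colour filtering + edge detection steps.
    """
    pts = np.asarray(edge_points, dtype=float)
    centre = pts.mean(axis=0)                               # centroid of the edge
    radius = np.linalg.norm(pts - centre, axis=1).mean()    # mean distance to it
    return centre, 2.0 * radius

# A synthetic circular edge of radius 5 centred at (10, 10):
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
centre, diameter = valve_centre_and_diameter(
    np.stack([10 + 5 * np.cos(t), 10 + 5 * np.sin(t)], axis=1))
```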

Using the same reference mark on the valve, the initial orientation of the valve is found, which enables the arm to be inserted into the valve’s gaps.

We simulated and tested our algorithm in MATLAB. It was found to work satisfactorily in good as well as extremely low lighting conditions. The reference images are shown in section 6.1.

6.1. Reference image taken in ambient lighting.

Fig. 2. Reference valve image. Fig. 3. Red component. Fig. 4. Edge of red component.

The results of image processing on test images taken by the robot’s on-board camera are shown in sections 6.2 and 6.3. Note the change in the robot’s distance to the valve: the valve appears smaller than in the reference images. The magnification calculation from the diameter thus obtained gives a direct estimate of the robot’s distance from the valve.

6.2. Results of test image taken in good lighting

Fig. 5. Test image. Fig. 6. Red component. Fig. 7. Green component (mark on valve). Fig. 8. Edge of red component.

6.3. Results of test image in bad lighting.

Fig. 9. Test image. Fig. 10. Red component.


Fig. 11. Green component (mark on valve). Fig. 12. Edge of red component.

The experimental results for distance calculation using the ‘magnification coefficient’ are tabulated below. It is assumed that, during the robot’s training phase, a sufficiently large number of distance readings and corresponding magnification coefficients were taken to make the present distance estimation (by interpolation or extrapolation) satisfactory.


7. Conclusions

We address the problem of autonomously navigating through the dynamic environment of an industry, locating a particular valve and manipulating it. An arm with more degrees of freedom, possibly with fingers, could perform more complex tasks. Functionality can be diversified in the area of target identification with the use of task-specific image processing. Navigation with obstacle avoidance can be enhanced by motion-vector analysis of a moving body; we are presently working on this aspect of autonomous navigation.

8. Acknowledgments

The work was undertaken at the PES Centre for Intelligent Systems and supported by PES Institute of Technology, Bangalore. We thank the management of PES Institute of Technology for their support and encouragement.

9. References

[1] J. Borenstein and Y. Koren, “Real-time Obstacle Avoidance for Fast Mobile Robots,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, Sept./Oct. 1989, pp. 1179-1187.

[2] J. Borenstein and Y. Koren, “The Vector Field Histogram - Fast Obstacle Avoidance for Mobile Robots,” IEEE Transactions on Robotics and Automation, vol. 7, no. 3, June 1991, pp. 278-288.

[3] A. Ohya, A. Kosaka and A. Kak, “Vision-Based Navigation by a Mobile Robot with Obstacle Avoidance Using Single-Camera Vision and Ultrasonic Sensing,” IEEE Transactions on Robotics and Automation, vol. 14, no. 6, December 1998, pp. 969-978.

[4] P. Mishra, H. N. Shankar, D. Priya, P. Chaturvedi, A. Anwar and J. Gupta, “A Fuzzy Controller for a Multisensor-Based Autonomous Robot Navigating in an Unknown Environment.”

[5] K. Hosoda, T. Hisano and M. Asada, “Sensor Dependent Task Definition: Object Manipulation by Fingers with Uncalibrated Vision,” in Intelligent Autonomous Systems 6, E. Pagello, F. Groen, T. Arai, R. Dillmann and A. Stentz (eds.), IOS Press, 2000, pp. 843-851.

[6] K. S. Fu, R. C. Gonzalez and C. S. G. Lee, Robotics: Control, Sensing, Vision and Intelligence, International edition, McGraw-Hill, 1987, pp. 12-144 and 544-555.

[7] D. G. Lowe, “Object Recognition from Local Scale-Invariant Features,” Proceedings of the International Conference on Computer Vision, vol. 2, September 20-25, 1999, pp. 1150-1157.

Table 1. Reference images

Sl. No. | Lighting | Diameter (pixels) | Distance from valve (cm)
1       | Ambient  | 403               | 16

Table 2. Test images

Sl. No. | Lighting | Diameter (pixels) | Valve rotation (degrees) | Distance from valve (cm)
1       | Good     | 231               | 49.66                    | 27.91
2       | Bad      | 232.5             | 50.07                    | 27.73
