Design of Image Segmentation Algorithm for Autonomous Vehicle Navigation using Raspberry Pi


Int. Journal of Electrical & Electronics Engg. Vol. 2, Spl. Issue 1 (2015) e-ISSN: 1694-2310 | p-ISSN: 1694-2426

NITTTR, Chandigarh EDIT-2015

1 Ankur S. Tandale, 2 Kapil K. Jajulwar

1 M.Tech Student, 2 Research Scholar
1,2 Department of Communication Engineering, G.H. Raisoni College of Engineering, Nagpur

1 [email protected], 2 [email protected]

Abstract—In the past few years, autonomous vehicles have gained importance due to their widespread civilian and military applications. The on-board camera of an autonomous vehicle captures images that need to be processed in real time using an image segmentation algorithm. On-board processing of video frames in real time is a challenging task, as it involves extracting information and performing the operations required for navigation. This paper proposes an approach for vision-based autonomous vehicle navigation in an indoor environment using the designed image segmentation algorithm. The vision-based navigation is applied to an autonomous vehicle and implemented using the Raspberry Pi camera module on a Raspberry Pi Model B+ together with the designed image segmentation algorithm. The image segmentation algorithm is built from smoothing, thresholding, morphological operations, and edge detection. Reference direction images along the path are detected by the vehicle, and accordingly it moves right or left or stops at the destination. The vehicle finds the path from source to destination using these reference directions: it captures video, segments it frame by frame, finds the edges in each segmented frame, and moves accordingly. The Raspberry Pi also transmits the captured video and segmented results over Wi-Fi to a remote system for monitoring. The autonomous vehicle is also capable of detecting obstacles in its path using ultrasonic sensors.

Index Terms—Autonomous Vehicle, Graphical User Interface (GUI), Raspberry Pi, Segmentation, Ultrasonic Sensor

I. INTRODUCTION

In recent years, autonomous vehicles have gained importance due to their widespread applications in fields such as the military, civilian, and industrial domains. An autonomous vehicle must be able to determine its own position and find a path from source to destination; navigation mainly comprises self-localisation and finding the path to the destination. Vehicle navigation has long been a fundamental goal in both robotics and computer vision research. While the problem is largely solved for robots equipped with active range-finding devices, for a variety of reasons the task still remains challenging for vehicles equipped only with vision sensors. On-board computing using computer vision is one of the most demanding areas of robotics. The need for autonomy in indoor vehicle navigation systems demands high computational power in the form of image processing capability. The simultaneous localisation and mapping (SLAM) algorithm performs self-localisation and maps the environment using a predefined indoor area and vision-based sensing; it involves complex computations and geometry to find the path and the obstacles in it and to map the environment.

Fig. 1. Prototype of Autonomous Vehicle Moving in Right Direction

In vision-based autonomous vehicle navigation, segmentation of the captured frame is the fundamental image processing step. Segmentation is the process of grouping the pixels of an image according to the information needed for further processing. Various segmentation techniques exist, based on regions, edges, textures, and intensities. As a vehicle proceeds with navigation using on-board processing, this poses a problem for the use of powerful computational units; secondly, the cost of the system hardware, though it has dropped in recent years, is still a limitation in robotics [1]. Therefore, robots require powerful and fast processors to perform on-board processing of images. In the last few years, the growing demand for autonomous vehicles and robots has brought a range of ARM-architecture computational devices such as the Raspberry Pi, or the even more powerful quad-core ODROID-U2, which can perform on-board real-time image segmentation. The proposed work uses a Raspberry Pi for real-time processing, with a camera connected to it to provide vision. The prototype of the autonomous vehicle is implemented as shown in figure 1.

It carries on board a Raspberry Pi, a Microsoft LifeCam, an ultrasonic sensor, a power supply, and DC motors. The captured real-time video is processed such that it is first segmented and then the edges are found; depending on these, the vehicle moves right, left, or at certain angles.


The complete segmentation task is performed in real time by the Raspberry Pi on board the vehicle. The video captured by the on-board camera is also transmitted over Wi-Fi to a remote computer.

II. RELATED WORK

Navigation can be achieved by designing a proper image segmentation algorithm. In [2], stereo vision applied to small water vehicles using low-cost computers is developed, driving autonomous vehicles capable of following other vehicles or boats on the water. The system uses two stereo-vision cameras connected to a Raspberry Pi for real-time image processing with the open-source computer vision library (OpenCV). This autonomous vehicle performs yaw and speed control, line tracking, and obstacle detection, and it is capable of identifying and following targets at a distance of over 5 meters. In [3], an image segmentation algorithm is used for the real-time image processing demanded by a micro air vehicle (MAV) for navigation. There, the image segmentation is implemented on an FPGA for fast on-board processing. Such systems find wide application in the military and in the surveillance of structures such as roads and rivers [4]. A real-time autonomous visual navigation system is presented in [5], using approaches such as region segmentation to find the road appearance and road detection to compute the road shape. Monocular cameras along with proximity sensors are used to detect roads. Two algorithms are designed, and their outputs are combined using a Kalman filter to produce a robust estimate of the road, which is used as a control policy for autonomous navigation in indoor and outdoor environments. Image matching is another approach to navigation and is often used for unmanned aerial vehicles (UAVs), as in [6]. Images captured in the infrared range with CCD sensors can also be used for navigation during both day and night [7].

III. BLOCK DIAGRAM OF PROPOSED SYSTEM

The proposed work aims to design a segmentation algorithm for autonomous vehicles on the Raspberry Pi to help find obstacles and navigate the vehicle in an unknown environment. The block diagram in figure 2 shows the proposed system for camera-feedback-based mobile robot navigation. Navigation is provided by the designed segmentation algorithm applied to images captured through the camera on board the vehicle. The components of the system are as follows (a minimal capture-and-filter sketch is given after this list):
• Camera: The camera is connected to the Raspberry Pi and acquires video at 24 fps, from which frames are taken as input for further processing.
• Filter: The filter removes noise from the acquired image so that the necessary information in the image is not lost.

TABLE I
FEATURES OF RASPBERRY PI MODEL B+

Feature               Raspberry Pi Model B+
CPU                   700 MHz ARM11 core
Memory                512 MB RAM (shared with GPU)
On-board Ethernet     10/100
Memory Storage        microSD card slot (8/16 GB)
Power Rating          700 mA to 1.8 A, 5 V DC
USB Ports             4
Video Output          HDMI
Operating Systems     Raspbian OS, Debian OS

• Processing Unit: The processing unit is where the image segmentation is performed, including gradient computation and edge tracking. From the edges it is possible to recognise the reference image, and the vehicle moves accordingly. The ultrasonic sensor also feeds its readings to this unit, so the distance between an obstacle and the autonomous vehicle is known; if an obstacle is near, the vehicle stops and moves in another direction to avoid it. All of this processing is performed on the minicomputer called the Raspberry Pi, which runs the image segmentation algorithm.
• Display Unit (GUI): The display unit is where the captured video and the segmented output are displayed on a remote screen using a Wi-Fi adapter.
• Feedback: The segmented output is continuously monitored to find the gradients and edges, and it is fed back to the Raspberry Pi along with the sensor output to check for obstacles continuously.
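As an illustration of the Camera and Filter stages above, the following is a minimal sketch in C++ with OpenCV, assuming the on-board camera is visible as video device 0; the device index, blur kernel size, and display window are assumptions for illustration and are not details taken from the paper.

// capture_filter.cpp - sketch of the camera and filter stages (assumed parameters).
// Possible build command: g++ capture_filter.cpp -o capture_filter `pkg-config --cflags --libs opencv`
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // device index 0 is an assumption
    if (!cap.isOpened()) return 1;           // camera not found

    cv::Mat frame, gray, smooth;
    while (cap.read(frame)) {                // acquire video frame by frame (about 24 fps)
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);       // reduce to one channel
        cv::GaussianBlur(gray, smooth, cv::Size(5, 5), 1.5); // Filter stage: suppress noise
        cv::imshow("smoothed", smooth);      // preview; the real system feeds this to segmentation
        if (cv::waitKey(1) == 27) break;     // press Esc to stop
    }
    return 0;
}

Each smoothed frame would then be passed to the processing unit for segmentation, as described in Section V.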

IV. INTRODUCTION TO RASPBERRY PI

The image segmentation algorithm is implemented in C++ on the Raspberry Pi board using the Open Source Computer Vision library (OpenCV) [8]. The Raspberry Pi is a handheld computer whose board carries an ARM processor, making it well suited to real-time operation. It runs the Raspbian operating system, which provides a Linux environment. Officially launched in February 2012, the Raspberry Pi personal computer took the world by storm, selling out the 10,000 available units immediately. It is an inexpensive, credit-card-sized exposed circuit board: a fully programmable PC running the free open-source Linux operating system. The Raspberry Pi can connect to the Internet, can be plugged into a TV, and costs very little. Originally created to spark schoolchildren's interest in computers, it offers the features listed in Table I.

Fig. 2. Block Diagram of Proposed System



Fig. 3. Raspberry Pi Model B+ Setup

The Raspberry Pi has caught the attention of home hobbyists, entrepreneurs, and educators worldwide; estimates put sales at around 1 million units as of February 2013. Figure 3 shows the Raspberry Pi Model B+ setup with a monitor and Ethernet connected to it. Qt Creator, a cross-platform C++/JavaScript integrated development environment, is used as the GUI application development framework. The program is built in Qt Creator and compiled from the Linux terminal.

V. PROPOSED METHODOLOGY FOR AUTONOMOUS VEHICLE NAVIGATION

Fig. 4. Example of Autonomous Vehicle Navigation in Indoor Room Environment

Fig. 5. Reference Direction Symbols

The ability to navigate in its environment is important for a fully autonomous vehicle (AV) system, and one critical task in navigation is to recognise and stay on the path. The Raspberry Pi is connected to a Microsoft LifeCam capturing 24 frames per second, and the image processing algorithm makes use of the OpenCV libraries. The captured video is processed frame by frame using the designed segmentation algorithm.

A. Image Segmentation Algorithm

The image segmentation algorithm is designed using smoothing, thresholding, morphological operations, edge detection, and tracking. The vehicle continuously tracks the reference direction marks in order to move from source to destination. Once a reference direction arrow is detected, the Raspberry Pi processes the captured frame using the algorithm. The indoor room environment with the designed algorithm for autonomous vehicle navigation is shown in figure 4. The arrows are reference direction marks stuck on the wall at ground level. The wall itself is detected as an obstacle, so the autonomous vehicle moves backwards and checks for a reference direction. The reference directions shown in figure 5, namely GO, left arrow, right arrow, and STOP, are used as mentioned above. When the camera detects a reference image, the following steps are performed (a code sketch of this pipeline is given after the list):
1) The captured video is processed frame by frame.
2) The frame is converted to grayscale to limit the computational requirements.
3) Smoothing: the image is blurred to remove noise.
4) Finding gradients: only local maxima are marked as edges, and they are marked where the image has large gradient magnitudes.
5) Double thresholding: potential edges are then determined by thresholding.
6) Edge tracking by hysteresis: final edges are determined by suppressing all edges that are not connected to strong edges.
The detected edges should be as close as possible to the real edges so that the reference direction can be determined and the vehicle moved accordingly. The GO reference image denotes start; for the right arrow the vehicle moves in the right direction, and so on.
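Steps 3-6 correspond closely to the stages of Canny edge detection; a minimal sketch of the per-frame pipeline in C++ with OpenCV is given below. The threshold values, kernel sizes, and the use of cv::Canny to cover steps 4-6 are assumptions for illustration, not parameters reported in the paper.

// segment_frame.cpp - sketch of per-frame segmentation (steps 1-6), assumed parameters.
#include <opencv2/opencv.hpp>

// Returns a binary edge map for one captured frame.
cv::Mat segmentFrame(const cv::Mat& frame) {
    cv::Mat gray, smooth, binary, clean, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);               // 2) grayscale conversion
    cv::GaussianBlur(gray, smooth, cv::Size(5, 5), 1.5);         // 3) smoothing to remove noise
    cv::threshold(smooth, binary, 128, 255, cv::THRESH_BINARY);  // thresholding (value assumed)
    cv::morphologyEx(binary, clean, cv::MORPH_OPEN,              // morphological opening to drop
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3))); // small specks
    cv::Canny(clean, edges, 50, 150);                            // 4)-6) gradients, double threshold,
                                                                 //       edge tracking by hysteresis
    return edges;
}

In use, segmentFrame would be called on every frame read from the camera, and the resulting edge map compared against the stored reference direction images.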

B. Vision-Based Navigation

Vision-based navigation can be performed by recognising the colour of a reference object or, as implemented here, by using reference direction images. Figure 6 shows the flow chart of the proposed methodology for autonomous vehicle navigation. When the system starts, all the libraries used for processing are first initialised. When the Raspberry Pi on board the vehicle starts, the camera captures the GO image frame and the vehicle starts moving. The captured GO reference image is processed by the image segmentation algorithm; the output, a thresholded Canny image, is then matched against the edges and defined action of the GO image stored in the database. Once the edges match, the vehicle moves forward until it finds the next reference image in the same direction, and it keeps moving and searching for the next reference image on the wall to obtain the next action, in order to reach the final destination.


Fig. 6. Flow Chart of Proposed System

When the vehicle finally captures the STOP (destination) reference image, it keeps processing the frames, and when the edges of a frame match the edges of the STOP image stored in the database, the vehicle stops at the final destination. In this way the autonomous vehicle navigates to its destination.
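The paper does not detail how the segmented edges are compared with the database images; one plausible sketch is to compare the dominant contour of the edge map with that of a stored reference using cv::matchShapes, with the dissimilarity threshold below chosen purely for illustration.

// match_reference.cpp - hypothetical sketch of matching a segmented frame
// against a stored reference direction image (GO, LEFT, RIGHT, STOP).
#include <opencv2/opencv.hpp>
#include <vector>

// Extract the largest external contour from a binary edge map.
static std::vector<cv::Point> largestContour(const cv::Mat& edges) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Point> best;
    for (const std::vector<cv::Point>& c : contours)
        if (best.empty() || cv::contourArea(c) > cv::contourArea(best))
            best = c;
    return best;
}

// Returns true when the frame's dominant shape is close to the reference's.
bool matchesReference(const cv::Mat& frameEdges, const cv::Mat& referenceEdges) {
    std::vector<cv::Point> a = largestContour(frameEdges);
    std::vector<cv::Point> b = largestContour(referenceEdges);
    if (a.empty() || b.empty()) return false;
    double dissimilarity = cv::matchShapes(a, b, cv::CONTOURS_MATCH_I1, 0.0);
    return dissimilarity < 0.1;              // threshold is an assumption
}

A match would then be mapped to the corresponding motor action: start, turn left, turn right, or stop.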

C. Obstacle Detection

Ultrasonic sensors are used with the Raspberry Pi to detect obstacles in the vehicle's path. The sensors are mounted on the vehicle and interfaced with the Raspberry Pi. They measure the distance between an obstacle and the vehicle and pass this reading to the Raspberry Pi, which processes it; the vehicle then moves backward and turns left to avoid colliding with the obstacle. The detection range of the ultrasonic sensors is 2 cm to 450 cm.
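The paper does not give the sensor interface code; the 2 cm to 450 cm range matches HC-SR04-class sensors, so the following is a minimal sketch assuming such a sensor read through the wiringPi library. The GPIO pin numbers and the 20 cm stop threshold are placeholders, not values from the paper.

// ultrasonic.cpp - hypothetical sketch of reading an HC-SR04-style sensor via wiringPi.
#include <wiringPi.h>

const int TRIG_PIN = 4;   // assumed wiringPi pin for the trigger line
const int ECHO_PIN = 5;   // assumed wiringPi pin for the echo line

// Returns the distance to the nearest obstacle in centimetres.
// A production version would add timeouts to the busy-wait loops.
double readDistanceCm() {
    digitalWrite(TRIG_PIN, HIGH);              // 10 us trigger pulse starts a measurement
    delayMicroseconds(10);
    digitalWrite(TRIG_PIN, LOW);

    while (digitalRead(ECHO_PIN) == LOW) { }   // wait for the echo pulse to start
    unsigned int start = micros();
    while (digitalRead(ECHO_PIN) == HIGH) { }  // wait for the echo pulse to end
    unsigned int travelTime = micros() - start;

    return travelTime * 0.0343 / 2.0;          // sound travels ~0.0343 cm/us; halve for round trip
}

int main() {
    wiringPiSetup();                           // initialise wiringPi pin numbering
    pinMode(TRIG_PIN, OUTPUT);
    pinMode(ECHO_PIN, INPUT);
    while (true) {
        if (readDistanceCm() < 20.0) {
            // stop the motors, reverse, and turn left to avoid the obstacle
        }
        delay(100);                            // poll roughly ten times per second
    }
    return 0;
}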

VI. RESULTS

The image segmentation results for the Right and STOP reference images captured in real time are shown in figure 7 and figure 8. The segmented outputs determine the movement of the vehicle in the corresponding directions. The autonomous vehicle is capable of moving in an indoor environment and detecting obstacles. The vehicle moves slowly because low-RPM DC motors (10 RPM) are used, owing to the high computation time required by the image segmentation algorithm. The Raspberry Pi also displays the captured video and the segmented output on a remote desktop over the Wi-Fi network.

Fig. 7. Segmented output of Right Reference Image

Fig. 8. Segmented output of STOP Reference Image

VII. CONCLUSION

Autonomous vehicle navigation using reference direction images placed on the wall at ground level has been implemented with the designed image segmentation algorithm. In the implementation, the vehicle is affected by rough indoor surfaces; on smooth surfaces it moves correctly in the desired direction and reaches the final destination using the reference images. The speed of the vehicle could be increased with higher-RPM motors, but the per-frame computation of the segmentation algorithm makes it difficult to obtain the desired results with high-RPM motors. In the future, autonomous vehicle navigation could be performed using colour object recognition in the indoor environment: different coloured objects would be recognised by computing their HSV values and segmenting the coloured object, with each colour assigned a defined movement in the system (a rough sketch of this idea follows). The mapping of the indoor environment could also be done by using the video frames on the remote desktop to build a map in MATLAB.
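As an illustration of the proposed colour-based extension, the detection step could be sketched with an HSV range check in OpenCV; the hue range below (for a red marker) and the pixel-count threshold are assumptions, not values from the paper.

// color_cue.cpp - hypothetical sketch of the colour-object idea proposed as future work:
// each colour range would map to a predefined movement command.
#include <opencv2/opencv.hpp>

// Returns true if enough pixels of the frame fall inside the given HSV range.
bool colorPresent(const cv::Mat& frame, const cv::Scalar& low, const cv::Scalar& high) {
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);   // convert to HSV for colour thresholding
    cv::inRange(hsv, low, high, mask);             // keep only pixels inside the range
    return cv::countNonZero(mask) > 2000;          // pixel-count threshold is an assumption
}

// Example: a red marker (hue near 0) might mean "turn right".
// bool turnRight = colorPresent(frame, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255));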

REFERENCES

[1] C. K. Chang, C. Siagian, and L. Itti, "Mobile robot monocular vision navigation based on road region and boundary estimation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, October 2012, pp. 1043–1050.

[2] R. Neves and A. C. Matos, "Raspberry Pi based stereo vision for small size ASVs," in IEEE International Conference.

[3] Shankardas, D. Bharat, A. I. Rasheed, and V. K. Reddy, "Design and ASIC implementation of image segmentation algorithm for autonomous MAV navigation," in Proceedings of the 2013 IEEE Second International Conference on Image Information Processing (ICIIP-2013), 2013, pp. 352–357.

[4] S. Rathinam, P. Almeida, Z. Kim, and S. Jackson, "Autonomous searching and tracking of a river using an UAV," in Proceedings of the American Control Conference, New York City, USA, July 2007, pp. 359–364.

[5] L. F. Posada, K. K. Narayanan, F. Hoffmann, and T. Bertram, "Floor segmentation of omnidirectional images for mobile robot visual navigation," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 2010, pp. 804–809.

[6] Z. Zhang, B. Sun, K. Sun, and W. Tang, "A new image matching algorithm based on multi-scale segmentation applied for UAV navigation," IEEE, 2010.

[7] A. Lenskiy and J.-S. Lee, "Terrain images segmentation in infra-red spectrum for autonomous robot navigation," in IFOST 2010 Proceedings, IEEE, 2010.

[8] G. Bradski and A. Kaehler, Learning OpenCV. O'Reilly Media, Inc., 2008.