
Immersive environment for robotic tele-operation

Nishant Bugalia, Indian Institute of Technology, Delhi ([email protected])

Arunasish Sen, Xerox Research Centre, India ([email protected])

Prem Kalra, Indian Institute of Technology, Delhi ([email protected])

Subodh Kumar, Indian Institute of Technology, Delhi ([email protected])

ABSTRACT

In various modern day situations, controlling a robot from a remote location is essential due to hazardous environmental conditions (for human operators) near the robot. Thus arises the need for an intuitive user interface for tele-operation, which must be efficient as well as easy to use. In this paper we present an innovative user interface and overall framework for robotic tele-operation and demonstrate its application to simple bin-picking and hole-packing tasks. We have adopted technologies from Virtual Reality (VR) systems for environment mapping and used modern interface devices to provide haptic feedback.

The user interface of our framework renders a virtual replica of the remote site in which the virtual objects are animated based on the tracking information received from cameras and the robot placed at the remote location. A haptic device is used by the human operator to control the remote robotic arm, aided simultaneously by the haptic feedback received from the robotic arm. A tele-operation system using our framework has been developed in a laboratory environment, and the usability of our system is verified by a user survey.

Keywords

Immersive environment, teleoperation, virtual reality

1. INTRODUCTION

A tele-operation system allows a user to remotely control a slave robotic arm by using a master robotic arm. Recent technological developments have increased the role of tele-operation systems in applications ranging from space, mine and underwater exploration to remote surgery. Such systems are also used for handling hazardous material and for working in dangerous environments such as nuclear reactors [10]. The major advantage of tele-operation is that it provides safety and ease of use to the operator by allowing him to control the robot from a remote location.


AIR '15, July 02-04, 2015, Goa, India. © 2015 ACM. ISBN 978-1-4503-3356-6/15/07. DOI: http://dx.doi.org/10.1145/2783449.2783498

However, this advantage comes at the cost of increased interaction complexity and additional physical and mental strain. In a dynamic environment, the interaction complexity increases further, making tele-operation more challenging. Some of the proposed tele-operation systems [14], [4] have used Augmented Reality (AR) based techniques to counter the time delay. In one of these systems [14], a robotic arm is augmented over the remote workplace video, while in the other system [4], visual cues and annotations are superimposed on the live view video. In these systems AR is used as a predictive tool to mask the time delay and to guide the operation. The problem with these systems is that the operator is restricted to a fixed view of the remote scene captured by a camera. Pan-tilt cameras at the remote location give the operator some flexibility [4], but the view is still limited to a certain camera position, and using multiple pan-tilt cameras to improve the captured area increases the overall system complexity and network bandwidth.

In this paper, we propose a virtual reality based tele-operation framework for manipulating a remote slave robotic arm using a local, human-controlled master arm. The target task is to guide the robot arm to pick pellets from a pile and then pack them into one or more pipes on a workbench. Our framework combines several techniques, such as single view reconstruction, fiducial marker [2] based localization, stereoscopic display, object detection using background subtraction [1], view dependent robot control and intuitive virtual camera control, to provide ease of use, better accuracy and fine control to the operator. The contribution of this work is the framework as a whole: an intuitive, user friendly and reliable platform for robotic tele-operation. The usability of our framework is demonstrated through a user survey that compares it with a state-of-the-art video-based interface and with direct control of the real robot.

The design and development of our system followed several experimental iterations. Our design is, to a large extent, guided by feedback received from experts working in bin-picking environments. The experiments done during the development of our framework helped us from two perspectives. First, they helped us visualize the trade-offs between different design choices, and thereby guided our creative design process. Second, they aided us in finding the best visualization, control and communication techniques while developing our system.


2. TELE-OPERATION CHALLENGES

We encountered various challenges while developing this system. Before detailing our framework, we first discuss these challenges.

• Network delay: This delay, also known as propagation delay [12], occurs due to the communication distance between the operator and the robot.

• Remote feedback: The operator controlling the robotic arm is deprived of direct visual and sensory feedback because the operator is physically removed from the remote environment.

• Environment modeling: The remote environment is composed of many different types of moving objects, and modeling them is a challenging task. High accuracy requirements make this problem more difficult to solve.

• Object localization and tracking: The location and orientation of the modeled objects are required to create the virtual view of the remote environment. This information must be precise and accurate to make tele-operation possible.

• User experience design: This is the process of enhancing user satisfaction by improving the usability, simplicity and engagement of the interaction between the user and the control interface. For the success of such a system, this is one of the critical elements.

All the above issues are addressed by our framework and are discussed in the coming sections.

3. PROPOSED FRAMEWORK

Our framework is generic in nature and can be used in a variety of tele-operation tasks. The framework is composed of different modules, each of which is responsible for a predefined task.

As shown in fig. 1, our framework can be divided into two major components: the control block and the remote block. The human operator and the master interface (haptic device) reside at the control block, whereas the slave robotic arm (an industrial robotic arm) resides at the remote block. A third block, the network block, between these two is used for communication.

The control block activities are handled by the control application and the remote block activities are handled by the remote application. These two applications communicate with each other through the network block. The applications and their modules are discussed below:

3.1 Control application

The control application displays a three dimensional reconstructed view of the remote location. The operator uses a mouse to control the virtual view and a haptic device to control the robotic arm. The various modules of the control application are discussed below:

3.1.1 3D scene manager

This module renders the reconstructed model of the remote environment using OpenGL [13]. The 3D models and their locations are obtained at the beginning of the system setup, as discussed in section 4.

Figure 1: Block diagram of the framework. The control block runs the control application (3D scene manager, feedback manager, control manager, error handler, application state manager and communication manager) driving the virtual 3D environment and haptic device; the remote block runs the remote application (video manager, control manager, error handler, application state manager and communication manager) connected to the cameras, the robotic arm and the actual environment; a communication block links the two.

The robotic arm is animated by rendering the arm joints as per the angles received from the actual robot.

The virtual camera of the scene is controlled with the mouse to set the desired view point. The virtual scene can also be rendered in stereo mode to give the operator better depth perception. A minimal sketch of how the virtual arm can be animated from the streamed joint angles follows.
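The sketch below, in the legacy OpenGL style the prototype's era suggests, chains one transform per joint and draws each link; the offset/axis tables and drawLink() are assumed placeholders, not the paper's actual code.

```cpp
// Sketch: animate a 6-DOF arm by chaining joint transforms (legacy OpenGL).
#include <GL/gl.h>

const int NUM_JOINTS = 6;

// Hypothetical per-link data for the KUKA KR5; values come from the DH
// parameters discussed in section 4.1.2.
extern float linkOffset[NUM_JOINTS][3];  // (x, y, z) offset to each joint
extern float jointAxis[NUM_JOINTS][3];   // rotation axis of each joint
extern float jointAngle[NUM_JOINTS];     // degrees, streamed from the robot
void drawLink(int i);                    // renders the i-th link mesh

void renderArm() {
    glPushMatrix();
    for (int i = 0; i < NUM_JOINTS; ++i) {
        // Move to the joint, rotate by the reported angle, draw the link.
        glTranslatef(linkOffset[i][0], linkOffset[i][1], linkOffset[i][2]);
        glRotatef(jointAngle[i], jointAxis[i][0], jointAxis[i][1], jointAxis[i][2]);
        drawLink(i);
    }
    glPopMatrix();
}
```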

3.1.2 Feedback manager

Our framework provides two different types of feedback to the operator: visual and haptic. This feedback is managed by this module, as discussed below:

• Visual feedback: The visual feedback consists of visual clues, hints and warnings shown to the operator on the screen. These hints include a 2D positional map of the workbench objects and robotic arm (fig. 6, top left area), the distances between various objects and the robotic arm, highlighting of the object nearest to the gripper, etc. Critical warnings (e.g. collision) are also displayed by flashing the screen red.

• Haptic feedback: The haptic feedback consists of the actual force felt by the robotic arm gripper at the remote location. This force is measured by a force-torque sensor fixed between the gripper and the robotic arm and is sent to the control application as a vector: the direction of the vector is the direction of the force and its magnitude is the amount of force felt by the robotic arm. The vector is scaled down to prevent the haptic device from being damaged; the scale factor is the ratio of the maximum force supported by the haptic device to the maximum force returned by the force-torque sensor, as sketched after this list.
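The scaling amounts to one multiplication per axis. A minimal sketch, assuming illustrative force limits (the paper gives no numeric ranges; the haptic figure used here is the Phantom Omni's vendor-quoted peak force):

```cpp
// Sketch: scale the remote force vector before replaying it on the haptic
// device (section 3.1.2). Both limits below are assumptions.
struct Vec3 { float x, y, z; };

const float MAX_SENSOR_FORCE = 100.0f; // N, force-torque sensor range (assumed)
const float MAX_HAPTIC_FORCE = 3.3f;   // N, e.g. Phantom Omni peak force

Vec3 scaleForce(const Vec3& sensed) {
    // Scale factor = max haptic force / max sensor force.
    const float k = MAX_HAPTIC_FORCE / MAX_SENSOR_FORCE;
    return { sensed.x * k, sensed.y * k, sensed.z * k };
}
```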

3.1.3 Control manager

A misalignment between the axes of the 3D view and the actual robotic arm makes it very hard to control the arm intuitively. For natural control, the coordinate systems of the view and the haptic device should be aligned, so that a left movement of the haptic device causes the robot to move left, a right movement makes it move right, and so on. With a fixed view, the user can train himself to learn the relation between the view and the haptic device, but in a 3D environment where the


virtual camera can show the view from any angle, no amount of training helps. The view-guided control manager solves this problem by aligning the coordinate frame of the haptic device with the virtual view before applying the control input to the robotic arm.

In a tele-operation system, it can sometimes be hard for the operator to move the robotic arm in a very specific or restricted manner. For example, moving the robot arm in a straight line or in a plane is a tough task that normally requires a lot of practice, but it can be achieved easily with some assistance. This module therefore also works as a control assistant and provides predefined keys to constrain the robotic arm to an axis, a plane, a straight line, etc. It also provides keys to increase or decrease the robot operation speed. A sketch of both the view alignment and the motion constraints follows.
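A minimal sketch of the two mechanisms, assuming the virtual camera's rotation matrix is available from the 3D scene manager; Matrix3, Vec3 and the helper names are illustrative, not the paper's API:

```cpp
// Sketch: view-guided control (section 3.1.3). Rotating the haptic
// displacement by the camera's orientation yields a world-frame motion that
// matches what the operator sees on screen.
#include <array>

using Matrix3 = std::array<std::array<double, 3>, 3>;
struct Vec3 { double x, y, z; };

Vec3 toWorld(const Matrix3& camRot, const Vec3& d) {
    // World displacement = R_camera * device displacement, so "left" on the
    // device is always "left" on screen, whatever the current viewpoint.
    return {
        camRot[0][0]*d.x + camRot[0][1]*d.y + camRot[0][2]*d.z,
        camRot[1][0]*d.x + camRot[1][1]*d.y + camRot[1][2]*d.z,
        camRot[2][0]*d.x + camRot[2][1]*d.y + camRot[2][2]*d.z,
    };
}

// Assisted motion: project the displacement onto a single axis or a plane
// while the corresponding key is held.
Vec3 restrictToAxisX(const Vec3& d)  { return { d.x, 0.0, 0.0 }; }
Vec3 restrictToPlaneXY(const Vec3& d){ return { d.x, d.y, 0.0 }; }
```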

3.1.4 Error handler

Accidents are eventually inevitable in a tele-operation system. Errors occur either due to operator negligence or due to system failure. The error handler module prevents errors caused by operator negligence by applying the following checks during tele-operation (a sketch follows the list):

• Speed check: Control data captured while the haptic device is moving faster than a threshold is not used to manipulate the robotic arm.

• Collision check: If the robotic arm is moving in a direction that can cause a collision, this check slows down the robotic arm movement.
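A minimal sketch of the two checks, with an assumed speed threshold and an assumed collision predictor (the paper does not give either):

```cpp
// Sketch of the operator-negligence checks in section 3.1.4.
#include <cmath>

struct Vec3 { double x, y, z; };

const double MAX_DEVICE_SPEED = 0.25;  // m/s, assumed threshold
bool wouldCollide(const Vec3& target); // assumed collision predictor

// Returns false if the command should be dropped; may slow the step down.
bool filterCommand(Vec3& target, const Vec3& current, double dt) {
    double dx = target.x - current.x, dy = target.y - current.y,
           dz = target.z - current.z;
    double speed = std::sqrt(dx*dx + dy*dy + dz*dz) / dt;

    // Speed check: ignore commands captured during too-fast device motion.
    if (speed > MAX_DEVICE_SPEED) return false;

    // Collision check: halve the step if it heads toward an obstacle.
    if (wouldCollide(target)) {
        target = { current.x + dx * 0.5, current.y + dy * 0.5,
                   current.z + dz * 0.5 };
    }
    return true;
}
```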

3.1.5 Communication manager

In a tele-operation environment, a lot of information is communicated between the control application and the remote application. The communication module therefore has two parts, one for sending and one for receiving data.

The data sent from the control application contains the information needed to position the robotic arm and set its gripper state (open/close). The data received by the control application contains the robotic arm joint angles, object locations and the force vector. This data is transferred using the UDP protocol; one possible packet layout is sketched below.
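The paper does not specify the wire format; the following fixed-size structs are one plausible sketch of the command and state datagrams:

```cpp
// Sketch of the UDP payloads exchanged in section 3.1.5 (assumed layout).
#include <cstdint>

#pragma pack(push, 1)
struct CommandPacket {           // control application -> remote application
    uint32_t sequence;           // to detect reordered/lost UDP datagrams
    float    x, y, z;            // target gripper position (haptic coordinates)
    uint8_t  gripper;            // 0 = open, 1 = close
};

struct StatePacket {             // remote application -> control application
    float jointAngles[6];        // current robot joint angles
    float objectPos[3];          // tracked object location
    float force[3];              // force-torque sensor vector
};
#pragma pack(pop)
```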

3.1.6 Application state manager

This module manages application specific information that is required during execution. For the control application, this information includes the location of the 3D models, the communication port number, key mapping information, the haptic device id, etc.

This information is stored in an XML file, which can be modified by the user as required. The data is loaded by the application before execution begins; an illustrative file is shown below.
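The paper does not list the file's schema; this example is an assumption throughout, with every element name and value invented for illustration:

```xml
<!-- Illustrative config for the control application (section 3.1.6). -->
<controlConfig>
  <models path="models/kuka_kr5/" />
  <network port="9000" remoteHost="192.168.1.20" />
  <haptic deviceId="0" />
  <keys axisX="x" axisY="y" planeXY="p" speedUp="+" speedDown="-" />
</controlConfig>
```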

3.2 Remote application

This application runs at the remote block. Its primary task is to track the objects and manipulate the robotic arm based on the information received from the control application. It consists of the following modules:

3.2.1 Video manager

This module captures video at the remote location using three different cameras. The video captured from the First camera is used for localization of the robotic arm in the remote environment. The Second camera is fixed on the robotic arm to find the location of the workbench with respect to the robotic arm. These two cameras are used only once, during the initial setup process. For the localization of pellets, a Third camera is used, suspended over the workbench to capture the workbench top. The localization is done with the help of fiducial markers, and the technique is discussed in detail in section 4.

3.2.2 Control manager

This module manipulates the robotic arm based on the coordinates received from the control application. These coordinates are received in the haptic device's coordinate system and are converted to the robotic arm's coordinate system before being used to control the actual movement.

3.2.3 Error handler

This module checks for possible errors that may occur during the tele-operation process. Error checking is done by comparing the actual values against the boundary conditions of the environment. If the actual values cross the limit values, the ongoing operation is stopped. The limits checked by this module include workspace limits, speed limits and robot joint angle limits.

3.2.4 Application state manager

This module is similar to the control application's state manager module. For the remote application, it manages information such as the camera id, the communication port, fiducial marker ids, etc. Again, this information is stored in an XML file that can be modified by the user as per the system requirements. The application reads this file before execution and loads the required data.

3.2.5 Communication manager

This module acts as the transmitter and receiver for the remote application. It receives the gripper status and new robotic arm coordinates from the control application, and sends back the robot joint angles, object positions and force feedback to the control application.

4. REMOTE ENVIRONMENT MODELING

Our framework uses a modeled replica of the remote location. This provides better visualization and control without overloading the network. The 3D model of the remote environment is built during the initial system setup process; once created, the operator can navigate inside this environment as per his viewing requirements. The modeling involves reconstruction and localization of objects, as discussed below.

4.1 Reconstruction

To simplify the modeling process, the remote environment is divided into different components that are reconstructed individually. Primarily these components are: (1) the static environment, (2) the robotic arm, (3) the workbench and (4) the pellets. After reconstruction, these components are placed in the virtual environment as per their real world locations. The reconstruction of these components is discussed below.

4.1.1 Static environment reconstruction

The static environment consists of the stationary part (physical structure and texture) of the remote environment.


Modeling of such an environment can be done by different techniques (e.g. laser scanning, depth cameras, structured light) depending on the amount of reconstruction detail required. In our case, modeling is done using single view reconstruction [8], an image based 3D reconstruction method. This is a simple method that does not require a calibrated camera and is best suited for structured scenes where planes are easy to identify. As shown in fig. 2, this method takes manually registered images of different planes and generates a cube shaped 3D model of the structure with plane and texture information.

Figure 2: Environment modeling using the single view reconstruction method (images from the real environment and the resulting synthetic 3D view).

4.1.2 Robotic arm reconstruction

The 3D models of the robotic arm are easily obtained from the manufacturer. In our setup, the robotic arm is a KUKA KR5, for which we obtained the 3D models from KUKA [7]. The DH parameters for attaching reference frames to the robotic arm links are calculated using inverse kinematics [9]. Once these parameters are known, the complete arm can be assembled and manipulated by changing the arm joint angles. By using the joint angles from the actual robot, the virtual robot is animated to copy the actions of the real robot; a sketch of the underlying link transform follows fig. 3.

Figure 3: Robotic arm reconstruction (3D models of the KUKA parts and the complete assembled KUKA model).
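For reference, each link with DH parameters (a, alpha, d, theta) contributes the standard homogeneous transform T_i = RotZ(theta_i) TransZ(d_i) TransX(a_i) RotX(alpha_i), and the end-effector pose is the product T_1 T_2 ... T_6. A minimal sketch of that matrix (the calibrated KR5 parameter values are not reproduced here):

```cpp
// Sketch: the classic Denavit-Hartenberg link matrix used to assemble the
// virtual arm (section 4.1.2).
#include <cmath>

struct DH { double a, alpha, d, theta; };

// Fills T with the 4x4 homogeneous transform of one DH link.
void dhMatrix(const DH& p, double T[4][4]) {
    double ct = std::cos(p.theta), st = std::sin(p.theta);
    double ca = std::cos(p.alpha), sa = std::sin(p.alpha);
    double M[4][4] = {
        { ct, -st * ca,  st * sa, p.a * ct },
        { st,  ct * ca, -ct * sa, p.a * st },
        {  0,       sa,       ca,      p.d },
        {  0,        0,        0,        1 }
    };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) T[r][c] = M[r][c];
}
```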

4.1.3 Workbench reconstruction

Generally the workbench is planar, horizontal and placed near the robotic arm. A planar workbench is modeled in the virtual environment with a 3D plane, and its area is defined automatically using fiducial markers. A more complex workbench can be modeled via laser scanning or by using CAD tools.

4.1.4 Pellet reconstruction

The last components to model are the pellets, each a textureless, black colored small cylinder. The pellets are of roughly uniform height and diameter and are assumed to be placed upright on the workbench. We use a cylinder to model the shape of the pellets in the virtual environment.

4.2 Localization

Once the environment components are modeled, the next challenge is to find their locations in the real world so that they can be placed at the corresponding positions in the virtual world. The static environment is set at the origin (0, 0, 0) and the other objects are placed relative to it: the robotic arm is placed relative to the static environment, the workbench relative to the robotic arm, and the pellets relative to the workbench. Different methods are used for the localization of these objects, as discussed below.

4.2.1 Robotic arm localization

The location of the robotic arm is computed using fiducial markers. With this technique, the rotation and translation of a marker are calculated with respect to the viewing camera. For robot localization, our framework uses two markers: one is placed on the environment wall and the other on the robotic arm base. The image captured from the First camera contains both markers. The rotation and translation between the two markers is computed, and the robotic arm is placed with reference to the static environment wall.
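Since both markers are seen in the same image, the wall-to-robot transform follows by chaining the two camera-relative poses. A minimal sketch using Eigen (an assumed choice; the paper does not name its math library):

```cpp
// Sketch: relative pose between two fiducial markers (section 4.2.1).
#include <Eigen/Dense>

// camToWall and camToRobot are the 4x4 camera-to-marker poses returned by
// the marker detector for the wall marker and the robot-base marker.
Eigen::Matrix4d robotInWallFrame(const Eigen::Matrix4d& camToWall,
                                 const Eigen::Matrix4d& camToRobot) {
    // Both poses share the same camera, so inverting one and chaining the
    // other cancels the camera and leaves the robot base expressed in the
    // wall marker's (i.e. static environment's) frame.
    return camToWall.inverse() * camToRobot;
}
```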

4.2.2 Workbench localization

For workbench localization, we use four fiducial markers placed on the workbench with their corners aligned to the workbench corners. The area inside the corners is the active area where the pellets lie. The 3D positions of these markers are calculated from the video captured by the Second camera. These four corner points are used to find the equation of the workbench plane using the least squares method [5], discussed below. Once the workbench plane is identified, the four corner points computed earlier are used to define the workbench area.

Least squares method: This method finds a best fit plane for 3D points in space by minimizing the sum of squared vertical (z) distances between the points and the plane.

The equation of the plane is given by z = Ax + By + C, and its coefficients are determined from the 3D marker points \((x_i, y_i, z_i)\), \(i = 1, \ldots, m\), by the following steps:

• Compute a 3x3 symmetric matrix M whose entries are:

\[
M = \begin{bmatrix}
\sum_{i=1}^{m} x_i^2 & \sum_{i=1}^{m} x_i y_i & \sum_{i=1}^{m} x_i \\
\sum_{i=1}^{m} x_i y_i & \sum_{i=1}^{m} y_i^2 & \sum_{i=1}^{m} y_i \\
\sum_{i=1}^{m} x_i & \sum_{i=1}^{m} y_i & m
\end{bmatrix}
\]

• Compute a 3-element vector b whose entries are:

\[
b = \begin{bmatrix}
\sum_{i=1}^{m} x_i z_i \\
\sum_{i=1}^{m} y_i z_i \\
\sum_{i=1}^{m} z_i
\end{bmatrix}
\]

• Solve \(M \, [A \; B \; C]^T = b\) for the plane coefficients A, B and C.
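A minimal sketch of this fit, accumulating the normal equations above and solving the 3x3 system with Eigen (an assumed solver choice; the paper does not name one):

```cpp
// Sketch: least-squares plane fit z = Ax + By + C (section 4.2.2).
#include <Eigen/Dense>
#include <vector>

// Returns (A, B, C) for the best-fit plane through the given 3D points.
Eigen::Vector3d fitPlane(const std::vector<Eigen::Vector3d>& pts) {
    Eigen::Matrix3d M = Eigen::Matrix3d::Zero();
    Eigen::Vector3d b = Eigen::Vector3d::Zero();
    for (const auto& p : pts) {
        double x = p.x(), y = p.y(), z = p.z();
        M(0,0) += x*x; M(0,1) += x*y; M(0,2) += x;
        M(1,1) += y*y; M(1,2) += y;
        b(0) += x*z;   b(1) += y*z;   b(2) += z;
    }
    // Fill in the symmetric entries and the point count.
    M(1,0) = M(0,1); M(2,0) = M(0,2); M(2,1) = M(1,2);
    M(2,2) = static_cast<double>(pts.size());
    // Solve M * [A B C]^T = b.
    return M.colPivHouseholderQr().solve(b);
}
```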

4.2.3 Pellet localization

The positions of the pellets are calculated using background subtraction on the video captured by the Third camera. As shown in fig. 4, pellet localization has four steps. First, the workbench image is extracted from the video and the positions of the four marker corners are determined. Next, the image inside the four marker points is cropped and is affine corrected by calculating a homography matrix H [6]. A homography is a mapping between two images of a planar scene, defined by the relation I' = H I, where I' is the new image and I is the original image. The matrix H is calculated from the 4 corner points using the DLT method [6] and is applied to every pixel of the cropped image to obtain a corrected, top-down version of it. Finally, a threshold is applied to this image to detect the pellet blobs, and the centers of these blobs are taken as the pellet centers. A sketch of this pipeline is given after fig. 4.

Figure 4: Pellet localization steps: workbench top image, affine corrected image, thresholded image, and localized pellets.
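A minimal sketch of the pipeline with OpenCV, taking the four marker corners as given input; the output image size and threshold value are assumptions:

```cpp
// Sketch: pellet localization (section 4.2.3) - rectify the workbench area
// via a 4-point homography, threshold the dark pellets, take blob centroids.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point2f> locatePellets(const cv::Mat& frame,
                                       const std::vector<cv::Point2f>& corners) {
    // Homography from the 4 marker corners to a canonical rectangle (DLT).
    std::vector<cv::Point2f> rect = { {0,0}, {640,0}, {640,480}, {0,480} };
    cv::Mat H = cv::getPerspectiveTransform(corners, rect);

    cv::Mat top, gray, bw;
    cv::warpPerspective(frame, top, H, cv::Size(640, 480));
    cv::cvtColor(top, gray, cv::COLOR_BGR2GRAY);
    // Pellets are textureless and dark, so an inverted threshold isolates them.
    cv::threshold(gray, bw, 60, 255, cv::THRESH_BINARY_INV);

    std::vector<std::vector<cv::Point>> blobs;
    cv::findContours(bw, blobs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> centers;
    for (const auto& c : blobs) {
        cv::Moments m = cv::moments(c);
        if (m.m00 > 1e-3)  // skip degenerate blobs
            centers.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
    }
    return centers;
}
```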

5. EXPERIMENTAL SETUP

To verify the feasibility, performance and usability of our framework, we built a prototype system. The complete experimental setup of the prototype is shown in fig. 5, exhibiting both the control block and the remote block. The reconstructed virtual environment displayed to the operator is shown in fig. 6.

In our setup, we used the KUKA robot as the remote manipulator, two ordinary web cameras for video capture, a Samsung 3D monitor for display and a Phantom Omni [3] as the haptic feedback based controller. The software was developed on the Windows platform in C++ with Visual Studio 2012 as the IDE. We used the OpenGL library for 3D rendering and the ArUco library [11] for fiducial marker detection.

Figure 5: System setup. The control block contains the operator, the haptic device and the virtual environment display; the remote block contains the robotic arm and the workbench with pellets and markers.

Figure 6: Virtual environment, showing the 3D robotic arm, the 3D pellets, the workbench, and the 2D workbench top view with the pellet and robotic arm positions.

6. USER SURVEY

To evaluate the practicability of our system, we conducted an initial user study and report here the lessons learned from it. The primary aim of this survey was to validate the ease of use provided by our framework. In the study, a fixed task was performed in three different scenarios and the results were measured.

6.1 Participants

Fifteen participants were chosen randomly from the university campus as subjects. Most of them had no significant prior experience in controlling a robotic arm or using a haptic device.

6.2 Method

For the survey, a predefined task was performed under three different conditions: (1) local robot control, (2) video based tele-operation and (3) immersive environment based tele-operation. In condition #1, the user manipulated the robotic arm while sitting next to it and looking at it directly. In condition #2, the subject was provided with front and side view videos of the remote workbench, captured by two cameras. In condition #3, the tele-operation was performed using our system. In the first two conditions, only the haptic device was used to control the robotic arm; in the third condition, the keyboard was also used to move the robot along a fixed axis or plane.

For all conditions, the task was to pick a pellet and place it over another one.


In the beginning, every participant received an introduction to the system, followed by 5 minutes of practice. After the practice session, the participant performed the task under all three conditions, presented in a random order.

6.3 Results

For each experiment, the captured data included: (1) the time taken to complete the task, (2) the number of failures and (3) an overall user experience rating on a scale of 1 to 10, where 10 represents the best experience.

The mean data over all users is summarized in fig. 7: the mean time taken to perform the activity, the percentage of failure cases encountered during the experiment, and the mean user rating, for each of the three conditions.

Figure 7: User survey results (means over all 15 participants).

Condition                 Activity time (s)   Failures (%)   User rating (1-10)
Local control                  28.7               17.9              6.5
Video based control            43.4               21.4              4.9
Immersive environment          33.6                3.6              8.2

From these results it is evident that the time taken to perform a task using our system is close to that of the local control system, which may be considered the baseline. The failure percentage is the lowest with our system, and users rated it highly for its ease of use and operating flexibility.

7. CONCLUSION AND FUTURE WORK

The tele-operation framework presented in this paper uses various techniques that allow the operator to perform a bin picking task easily without compromising accuracy. These techniques include a modeled replica of the remote environment, which allows complete view control; on-screen clues, which enhance visual feedback; a view dependent control mechanism, which provides intuitive haptic control; and an error prediction system, which prevents possible accidents. The methods used in our framework are simple in nature and do not impose significant overhead on system resources.

The current framework lacks some features, such as continuous tracking of pellets and integration of a physics engine into the virtual environment. These improvements will be added in the future to make our framework more useful both for real-time control and for offline training purposes.

8. ACKNOWLEDGMENTS

The financial support of this work by BRNS/BARC Mumbai under the "Programme for setting Autonomous Robotics Lab" at IIT Delhi is sincerely acknowledged. The authors also wish to thank Suvam Parta and Arun Dayal Uday for their helpful suggestions.

9. REFERENCES

[1] Background subtraction. http://en.wikipedia.org/wiki/Background_subtraction.
[2] Fiducial marker. http://en.wikipedia.org/wiki/Fiducial_marker.
[3] Phantom Omni haptic device. http://www.dentsable.com/haptic-phantom-omni.htm.
[4] L. Basanez Villaluenga, J. Rosell Gratacos, L. Palomo Avellaneda, E. Nuno Ortega, H. Portilla Rodríguez, et al. A framework for robotized teleoperated tasks. 2011.
[5] D. Eberly. Least squares fitting of data.
[6] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
[7] KUKA. KR 5 arc - industrial manipulator. http://www.kuka-robotics.com/.
[8] A. Kushal, V. Bansal, and S. Banerjee. A simple method for interactive 3D reconstruction, camera calibration from a single view. In ICVGIP, 2002.
[9] F. L. Lewis, C. T. Abdallah, and D. M. Dawson. Control of Robot Manipulators, volume 236. Macmillan, New York, 1993.
[10] S. Lichiardopol. A survey on teleoperation. University of Eindhoven, Department of Mechanical Engineering, Dynamics and Control Group, 2007.
[11] R. Munoz-Salinas. ArUco: a minimal library for augmented reality applications based on OpenCV, 2012.
[12] O. Networks. What is network latency and why does it matter?
[13] OpenGL. The premier environment for developing portable, interactive 2D and 3D graphics applications. https://www.opengl.org/.
[14] T. Xie, L. Xie, L. He, and Y. Zheng. A general framework of augmented reality aided teleoperation guidance. Journal of Information and Computational Science, 10(5):1325-1335, 2013.