
MICCAI 2006 Workshop Proceedings

Workshop on Medical Robotics:

Systems and Technology towards Open Architecture

5 October 2006, Copenhagen, Denmark

Editors

Kevin Cleary, Georgetown University, Washington, DC, USA
Nobuhiko Hata, Brigham and Women’s Hospital, Boston, USA
Peter Kazanzides, Johns Hopkins University, Baltimore, USA

Workshop Committee

Brian Davies, Imperial College, UK
Ichiro Sakuma, University of Tokyo, Japan
Jocelyne Troccaz, Laboratoire TIMC/IMAG, France
Vance Watson, Georgetown University Medical Center, USA

http://www.caimr.georgetown.edu/workshops/miccai2006.htm


Program

9:00 Welcome and Introductions: Kevin Cleary, USA

9:10 Invited Speaker: Jocelyne Troccaz, France
Systems Architecture and Research Issues for Medical Robotics

9:40 Morning Talks I

Moderator: Brian Davies, UK

• A modular architecture for autonomous and teleoperated medical robots, Gilles Mourioux, Cyril Novales, Pierre Vieyres and Gérard Poisson.

• Requirements of a telesurgery system for hepatic metastases of colorectal cancer, Jose M. Azorin, Antonio F. Compan, Jose M. Sabater, Nicolas M. Garcia, and Carlos Perez.

• Telemanipulation of Snake-Like Robots for Minimally Invasive Surgery of the Upper Airway, Ankur Kapoor, Kai Xu, Wei Wei, Nabil Simaan, and Russell H. Taylor.

• Automatic Registration of a Needle Guide Robot for Minimally Invasive Interventional Procedures Using Computed Tomography, R. Stenzel, G. Kronreif, M. Kornfeld, R. Lin, P. Cheng, and K. Cleary.

10:40 – 11:00 Break

11:00 Morning Talks II

Moderator: Ichiro Sakuma, Japan

• Motion Compensated Surgical Robot for MRI-guided Cryotherapy of Liver Cancer, Nobuhiko Hata, Jan Lesniak, Kemal Tuncali.

• Robotic MRI-guided Prostate Needle Placement, G.S. Fischer, S.P. DiMaio, I. Iordachita, and G. Fichtinger.

• Needle Insertion Point and Heading Optimization with Application to Brachytherapy, Ehsan Dehghan and Septimiu E. Salcudean.

• Composite Visual Tracking of the Moving Heart Using Texture Characterization, Aurélien Noce, Jean Triboulet, and Philippe Poignet.

• Steady-Hand Manipulator for Retinal Surgery, Iulian Iordachita, Ankur Kapoor, Ben Mitchell, Peter Kazanzides, Gregory Hager, James Handa, and Russell Taylor.

12:15 – 13:30 Lunch break

13:30 Invited Speaker: Vance Watson, USA

Clinical perspective on robotics


14:00 Afternoon Talks I

Moderator: Jocelyne Troccaz, France

• Robot-assisted image-guided targeting for minimally invasive neurosurgery: intraoperative robot positioning and targeting experiment, R. Shamir, M. Freiman, L. Joskowicz, M. Shoham, E. Zehavi, and Y. Shoshan.

• Automatic Positioning of a Laparoscope by Preoperative Workspace Planning and Intraoperative 3D Instrument Tracking, Atsushi Nishikawa, Kanako Ito, Hiroaki Nakagoe, Kazuhiro Taniguchi, Mitsugu Sekimoto, Shuji Takiguchi, Yosuke Seki, Masayoshi Yasui, Kazuyuki Okada, Morito Monden, and Fumio Miyazaki.

• Port Placement Based on Robot Performance Optimization, Ana Luisa Trejos, Rajni Patel, Bob Kiaii, and Ian Ross.

• A Novel Method for Robotic Knot Tying, Shuxin Wang, Longwang Yue, and Huijuan Wang.

• A Robotic Neurosurgery System with Autofocusing Motion Control for Mid-infrared Laser Ablation, Ryoichi Nakamura, Shigeru Omori, Yoshihiro Muragaki, Katsuhiro Miura, Masao Doi, Ichiro Sakuma, and Hiroshi Iseki.

15:15 – 15:30 Afternoon Break

15:30 Invited Speaker: Brian Davies, UK

Hands-on (synergistic) robotics

16:00 – 17:00 Panel Discussion

Open architecture and open source for medical robotics.

Moderators: Peter Kazanzides, USA; Nobuhiko Hata, USA
Participants: Kevin Cleary, USA; Brian Davies, UK; John Haller, USA; Ichiro Sakuma, Japan; Jocelyne Troccaz, France


Table of Contents

A modular architecture for autonomous and teleoperated medical robots
Gilles Mourioux, Cyril Novales, Pierre Vieyres, Gérard Poisson . . . 1

Requirements of a telesurgery system for hepatic metastases of colorectal cancer

Jose M. Azorin, Antonio F. Compan, Jose M. Sabater, Nicolas M. Garcia, Carlos Perez . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Telemanipulation of Snake-Like Robots for Minimally Invasive Surgery of the Upper Airway

Ankur Kapoor, Kai Xu, Wei Wei, Nabil Simaan, Russell H. Taylor . . . 17

Automatic Registration of a Needle Guide Robot for Minimally Invasive Interventional Procedures Using Computed Tomography
R. Stenzel, G. Kronreif, M. Kornfeld, R. Lin, P. Cheng, K. Cleary . . . 26

Motion Compensated Surgical Robot for MRI-guided Cryotherapy of Liver Cancer
Nobuhiko Hata, Jan Lesniak, Kemal Tuncali . . . 33

Robotic MRI-guided Prostate Needle Placement
G.S. Fischer, S.P. DiMaio, I. Iordachita, G. Fichtinger . . . 37

Needle Insertion Point and Heading Optimization with Application to Brachytherapy
Ehsan Dehghan, Septimiu E. Salcudean . . . 47

Composite Visual Tracking of the Moving Heart Using Texture Characterization
Aurélien Noce, Jean Triboulet, Philippe Poignet . . . 54

Steady-Hand Manipulator for Retinal Surgery
Iulian Iordachita, Ankur Kapoor, Ben Mitchell, Peter Kazanzides, Gregory Hager, James Handa, Russell Taylor . . . 66

Robot-assisted image-guided targeting for minimally invasive neurosurgery: intraoperative robot positioning and targeting experiment

R. Shamir, M. Freiman, L. Joskowicz, M. Shoham, E. Zehavi, Y. Shoshan . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

Automatic Positioning of a Laparoscope by Preoperative Workspace Planning and Intraoperative 3D Instrument Tracking

Atsushi Nishikawa, Kanako Ito, Hiroaki Nakagoe, Kazuhiro Taniguchi, Mitsugu Sekimoto, Shuji Takiguchi, Yosuke Seki, Masayoshi Yasui, Kazuyuki Okada, Morito Monden, and Fumio Miyazaki . . . . . . . . . 82


Port Placement Based on Robot Performance Optimization

Ana Luisa Trejos, Rajni Patel, Bob Kiaii, Ian Ross . . . . . . . . . . . . 92

A Novel Method for Robotic Knot Tying
Shuxin Wang, Longwang Yue, Huijuan Wang . . . 100

A Robotic Neurosurgery System with Autofocusing Motion Control for Mid-infrared Laser Ablation

Ryoichi Nakamura, Shigeru Omori, Yoshihiro Muragaki, Katsuhiro Miura, Masao Doi, Ichiro Sakuma, and Hiroshi Iseki . . . . . . . . . . . . . . 108


A modular architecture for autonomous and teleoperated medical robots

Gilles Mourioux, Cyril Novales, Pierre Vieyres and Gérard Poisson

LVR, Orleans University, IUT de Bourges, 18020 Bourges, France

{firstname.lastname}@bourges.univ-orleans.fr

Abstract. This paper presents a specific architecture based on a multilevel formalism to control either autonomous or teleoperated robots for the medical field. This formalism precisely separates the robot's functionalities (hardware and software) and provides a global scheme to position them and to model the data exchanges among them. Our architecture was originally built from the classical control loop. Two parts can thus be defined: the Perception part, which manages the processing and the model construction of incoming data (the sensor measurements), and the Decision part, which manages the processing of the controlled outputs. These two parts are divided into several levels and, depending on the robot, the control loops to be performed are located at different levels. This general scheme makes it possible to integrate different modules issued from various robot control theories. The architecture has been designed to model and control autonomous robots. It also integrates a third part, called the “teleoperation part”, needed to perform remote and accurate displacements of medical tools. The architecture thus merges two antagonistic concepts of robotics, teleoperation and autonomy, and allows a sharp distribution of the functionalities of these two fields. Results validating the proposed formalism in the case of medical robotics are given in this paper.

1 Introduction

When robots are teleoperated for any remote task such as tele-echography [13] or laparoscopy [14], any failure may be hazardous for the distant patient; switching from a teleoperated mode to an autonomous one may bring a strong level of safety to the functioning of the overall system. The designer has to choose how to provide autonomy to the robot. There are mainly two orientations: “reactive” capacities and “deliberative” capacities. These two families of capacities are complementary in letting a robot perform a task autonomously. The robot designer must build a coherent assembly of the various functions achieving these capacities. This is particularly true when robots have to interact with a human environment, as is the case in medical robotics, whether for minimally invasive or non-invasive applications. Manufacturing an autonomous and teleoperated robot therefore implies the design of a control architecture with its elements, its definitions and its rules. The proposed architecture, thanks to its modularity and openness, will offer strong safety of functioning for new robotic systems. This aspect is of prime importance for medical applications.


One of the first authors to express the need for a control architecture was R.A. Brooks [1]. In 1986, he presented an architecture for autonomous robots called the “subsumption architecture”. It was made up of various levels which separately fulfil precise functions, processing data from the sensors in order to control the actuators with a notion of priority. It is a reactive architecture in the sense that there is a direct link between the sensors and the actuators. This architecture has the advantage of being simple and thus easy to implement; nevertheless, the priorities given to the different actions to perform are fixed in time and offer limited flexibility. Various other architectures were then developed based on different approaches, generally conditioned by the specificity of the robotic application that the architecture had to control:

- the 4-D/RCS architecture developed by the Army Research Laboratory [2], whose main characteristic is to be made up of multiple computational nodes;

- CLARAty [3], proposed by the Jet Propulsion Laboratory in collaboration with NASA, where one of the interests of the representation is that the decisional level works only on one model emanating from the functional level;

- the LAAS architecture (LAAS Architecture for Autonomous Systems) [4], made up of 3 levels: decisional, executive and functional;

- the NASREM architecture proposed by NIST (formerly NBS) [5];

- AuRA (Autonomous Robot Architecture) [7], a hybrid control architecture described and used by R.C. Arkin, including a deliberative part and a reactive part;

- the hybrid control architecture presented by A. Dalgalarrondo [8] from DGA/CTA, including four modules: perception, action, attention manager and behavior selector;

- the DAMN architecture [6], resulting from work undertaken at Carnegie Mellon University in response to navigation problems, in which multiple modules simultaneously share the robot control by sending votes that are combined according to a weighting system;

- the five-level hierarchical architecture patented by Hong-Ryeol Kim et al. in 2005 [9].

All these architectures show the diversity of the approaches, mainly due to the robotic applications or interests of their designers. Our proposed architecture keeps the advantages of these methods (robustness, flexibility, ease of development…) and adds a rigorous framework and a stronger modularity, adapted to all robotic applications with medical specifications.

2 Principle of the proposed architecture

The architecture proposed here is based on the same architectural principles that have been suggested since the nineties. It relies on the concept of levels initially developed by R. Brooks, which also appears in architectures such as AuRA or LAAS. Similarly to the latter, we have an upstream flow of information from the robot, corresponding to its perception, and a downstream flow going to the robot, corresponding to the control part. The specificity is to structure this robot control architecture in levels. Each level can communicate with higher or lower levels by data transfers which follow a predefined format and processing according to the given robot application. Data do not follow a unique or predefined path; they can be routed via multiple paths to perform various control loops. Even when affecting different levels, all these control loops have a common path through the physical part of the robot (i.e. the articulated mechanical system, or AMS).

Control loops are interwoven and pass through a more or less great number of levels. Loops of a low level are processed faster than loops of a higher level, as their data are processed by fewer levels. We therefore speak of “deliberative” levels for the higher levels and “reactive” levels for the lower levels.

Figure 1 – Robot = AMS part + Perception part + Decision part

The AMS is surmounted by two parts: an upstream part corresponding to the perception of the environment of this AMS, and a downstream part corresponding to the decisions to transmit to this AMS. These two parts are divided into levels along which the various control loops are closed (Figure 1). Each level of each part must be clearly specified to allow the designer of the robot to place the respective control loops (articular control, Cartesian control, visual control...). This architecture, embedded in the robot, defines the autonomous mode of the robot.

The following section describes the terms and functioning rules of the proposed architecture.
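This leveled scheme can be sketched in a few lines of illustrative Python; the level names and the toy processing functions below are placeholders for the real clusters, not part of the paper. A control loop closed at depth n flows up through n Perception levels and back down through n Decision levels, so lower loops traverse fewer levels and run faster:

```python
# Minimal sketch (hypothetical names) of the multi-level scheme: an
# upstream Perception flow, a downstream Decision flow, and control
# loops closed between the two parts at a chosen depth.

class Level:
    """One level of the Perception or Decision part."""
    def __init__(self, name, process):
        self.name = name
        self.process = process  # function: input data -> output data

def run_loop(perception_levels, decision_levels, sensor_data, depth):
    """Close a control loop at `depth`: data flows up the Perception
    part, then back down the Decision part. Lower depths give faster
    loops (fewer levels traversed): the reactive/deliberative split."""
    data = sensor_data
    for level in perception_levels[:depth]:          # upstream flow
        data = level.process(data)
    for level in reversed(decision_levels[:depth]):  # downstream flow
        data = level.process(data)
    return data  # command sent to the AMS (level 0)

# Toy modules standing in for real processing.
perception = [Level("Sensors", lambda d: d + 0.1),
              Level("Proximity Model", lambda d: d * 1.0)]
decision = [Level("Servoings", lambda d: d * 0.5),
            Level("Pilot", lambda d: d - 0.05)]

command = run_loop(perception, decision, sensor_data=1.0, depth=1)
```

With depth=1 only the Sensors/Servoings pair is traversed, which mirrors the fast level-1 articular loop described below.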

3 The autonomous parts

The whole architecture is made up of 2 parts divided into 5 levels (Servoings, Pilot, Navigator, Path-planner and Mission generator) representing the autonomous mode of the robot. These autonomous parts of the architecture are embedded in the robot: the two parts – ‘Decision’ and ‘Perception’ – and the Articulated Mechanical Structure (level 0) constitute what is usually defined as the ‘Robot’.

3.1 Basic levels

The AMS is referred to as level 0. Servoings and sensors are located at level 1, immediately above the AMS. This level corresponds to the shortest – and fastest – loop of the robot, which includes the sensors and their control (in the Perception part) and the PID articular control (in the Decision part). At level 1, various loops function in parallel, for example the articular servoings of each robot articulation. We thus define one module for each servoing/sensor system; modules are clustered within each level of each part.


Figure 2 – Full architecture including tele-operated and autonomous mode (e.g. Tele-operation level 3)

The setting points of the servoings are the inputs of the level 1 cluster of the Decision part. The outputs of the level 1 cluster of the Perception part are the filtered sensor measurements. These measurements can also be transmitted to the level 1 module of the Decision part to carry out the servoings.
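A level-1 articular servoing of this kind can be sketched as a discrete PID loop; the gains and the pure-integrator joint model below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of one level-1 articular servoing: a discrete PID loop
# driving a single joint toward its setting point.

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One PID update; `state` carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    command = kp * error + ki * integral + kd * derivative
    return command, (integral, error)

# Simulate the joint as a pure integrator: position += command * dt.
setpoint, position, dt = 1.0, 0.0, 0.01
state = (0.0, 0.0)
for _ in range(3000):
    command, state = pid_step(setpoint - position, state, dt=dt)
    position += command * dt
# position ends close to the 1.0 setting point
```

In the OTELO implementation described later these loops run on PMAC motion-controller boards; the pure-Python loop above only illustrates their structure.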

3.2 Level 2: the pilot

The ‘Pilot’ cluster generates the setting points (e.g. articular) needed by level 1, based on a trajectory provided as input. This trajectory is expressed in a different frame (e.g. a Cartesian frame) from that of the setting points. It describes, in time, the position, kinematic and/or dynamic parameters of the robot in its workspace. The pilot's function is to convert these trajectories into setting points to be performed by the servoings. The ‘Pilot’ cluster contains an IKM (Inverse Kinematics Model) module and other modules which give the robot the possibility to take information on its environment into account. This information comes from the same level 2, from the Perception part (i.e. the ‘Proximity Model’ of the robot environment). The ‘Proximity Model’ cluster contains various modules which transform filtered measurements (coming from the ‘Sensor’ cluster of level 1 of the Perception part) into a model in the same frame as that of the trajectory (e.g. a Cartesian frame). This transformation is performed online using the sensor measurements; no temporal memorizing is carried out. The ‘Proximity Model’ obtained is thus limited to the horizon of the sensor measurements. Typically, this cluster contains modules which apply the Direct Geometrical/Kinematic/Dynamic Models of the robot to the proprioceptive data, and which express the exteroceptive data in a local frame centred on the robot. Depending on the robot application, the ‘Pilot’ cluster uses data resulting from the ‘Proximity Model’ cluster to carry out – or not – corrections on the reference trajectory provided to it as input.
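The role of the IKM module can be illustrated on a hypothetical two-link planar arm (the actual robot kinematics are not given here): each Cartesian trajectory point is converted into articular setting points, and the direct geometrical model can be used to check the result:

```python
# Hedged sketch of a 'Pilot' IKM module for a hypothetical 2R planar
# arm: Cartesian trajectory points -> articular setting points.

import math

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics (elbow-down) for a 2R planar arm."""
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_q2) > 1:
        raise ValueError("target outside workspace")
    q2 = math.acos(cos_q2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def fk_two_link(q1, q2, l1=1.0, l2=1.0):
    """Direct geometrical model, used here to verify the IKM output."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

# A short Cartesian trajectory converted into articular setting points.
trajectory = [(1.5, 0.2), (1.4, 0.4), (1.2, 0.6)]
setting_points = [ik_two_link(x, y) for x, y in trajectory]
```

The workspace check plays the part of the correction step: a trajectory point the mechanism cannot reach is rejected rather than passed down to the servoings.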


3.3 Level 3: the navigator

The ‘Pilot’ cluster receives its trajectory from the upper level of the Decision part, called the ‘Navigator’ cluster, which must generate the trajectories for the ‘Pilot’ based on data received from the level above. These input data are of a geometrical type, still in a Cartesian frame, but not necessarily in the robot frame. Moreover, they do not integrate dynamic or kinematic aspects; contrary to the trajectory, there is no strict definition of the velocity, acceleration or force versus time (for the AMS). These input data are called a path – continuous or discontinuous – in a Cartesian frame. The ‘Navigator’ cluster must translate a path into a trajectory. The navigator sits at the interface between the “request” and the “executable”: it is the most noticeable level of the proposed architecture. According to the robot application, the modules gathered in this cluster are based on various theoretical methods. On top of the mechanical constraints of the AMS, the ‘Navigator’ cluster must generate a trajectory in agreement with the robot environment. It therefore needs information modelling this environment, coming from the same level, i.e. the ‘Local Model’ cluster. This cluster models the local environment of the robot beyond the range of its sensors, so that the ‘Navigator’ can test the validity of the trajectories before delivering them to the ‘Pilot’. It uses the displacement of the robot to locally model the environment around the robot.
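A minimal ‘Navigator’ behaviour, under the simplifying assumption of straight segments travelled at constant speed, could time-parameterize a path into a trajectory like this:

```python
# Hedged sketch of a 'Navigator' module: turning a geometric path
# (ordered Cartesian waypoints, no timing) into a trajectory by
# time-parameterizing it at constant speed. The speed is illustrative.

import math

def path_to_trajectory(path, speed=0.5):
    """Return (t, x, y) tuples: each waypoint gets the arrival time
    implied by moving at `speed` along straight path segments."""
    trajectory = [(0.0, *path[0])]
    t = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        t += math.hypot(x1 - x0, y1 - y0) / speed
        trajectory.append((t, x1, y1))
    return trajectory

path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
traj = path_to_trajectory(path, speed=0.5)
# each segment has length 1.0, so arrival times are 0.0, 2.0, 4.0
```

A real navigator would in addition check each timed segment against the ‘Local Model’ before releasing it to the ‘Pilot’.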

3.4 Level 4: the path-planner

The ‘Navigator’ receives as input a path produced by the ‘Path-planner’ cluster of level 4 of the Decision part. This cluster generates the path using as input a goal, a task or a mission. This functionality is performed over a long run, in order to project the path onto a large temporal horizon. This high control loop corresponds to the “deliberative” levels of comparable architectures. To be valid, this path must imperatively be placed in a known environment. The path-planner uses a priori information from the global map of the Perception part, and can hence validate its path on this pre-established map. This map does not need to be metrically accurate (absence of obstacles, errors of dimension...) but must be topologically correct (closed areas, connexity...). It can be either built online, using data resulting from the lower level (the ‘Local Model’ cluster), or pre-established and updated by the lower level.
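As an illustration, a ‘Path-planner’ working on such a topologically correct map could be sketched as a breadth-first search over a coarse grid; the map and names below are hypothetical:

```python
# Hedged sketch of a 'Path-planner' module: breadth-first search over a
# coarse a priori map. The map only needs correct connectivity, not
# metric accuracy, as the section notes.

from collections import deque

def plan_path(grid, start, goal):
    """Return a list of grid cells from start to goal, or None.
    `grid[r][c] == 1` marks an obstacle in the a priori map."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                  # rebuild the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

The returned cell sequence is exactly the kind of untimed geometric path the ‘Navigator’ then turns into a trajectory.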

3.5 Level 5: the mission generator

The ‘Mission Generator’ is the highest-level cluster of the Decision part. It must generate a succession of goals, tasks or missions for the ‘Path-planner’, according to the general mission of the robot. It embodies the “ultimate” robot autonomy concept: the robot generates its own attitudes by using its own decisions. The ‘Mission Generator’ cluster does not really have any input; it only needs general information on its environment, its states and its possible attitudes. This is provided by the ‘General Model’ cluster of the Perception part at the same level, which can also use a database. It corresponds to the “smart” attitude of the robot.

This architecture makes it possible to model any autonomous robot, whatever its application or degree of autonomy. Its functioning is ensured by modules that are clustered in different levels. Depending on the application, not all modules are required in all clusters, and data flows may vary. Conversely, to produce a specific robot for a dedicated application, this architecture can be used to specify each module in each cluster before carrying out the programming and the implementation.

4 The teleoperated part

To optimize the safety of functioning and the remote control of a tele-operated medical robot, we have leveled the tele-operation mode similarly to the autonomous one. This leveled tele-operation complements the levels of autonomy of a robot, substituting itself for the accidentally or voluntarily missing degrees of autonomy. Hence, a third part, distant from the robot and called ‘Tele-operation’, is defined. Each level of the ‘Tele-operation’ part receives data from the corresponding level of the ‘Perception’ part and can replace the corresponding level of the ‘Decision’ part by generating in its place the data necessary to the lower level of the ‘Decision’ part.

Thus, several possible levels of tele-operation are identified:

- Level 1 of tele-operation allows a human operator to actuate the Articulated Mechanical Structure directly. This is remote control in open loop, where the robot does not have any autonomy.

- Level 2 of tele-operation uses the information of the ‘Proximity Model’ to allow a human operator to replace level 2 of the ‘Decision’ part (i.e. the ‘Pilot’). It delivers the setting points necessary for level 1 to control the robot. This level of tele-operation can preserve the level 1 autonomous loop of the robot. Note that there is no flow of information between level 1 and level 2 of the tele-operation; the reason is that they exclude each other: when the robot is tele-operated, it is done either from level 1 or from level 2.

- Level 3 of tele-operation allows an operator to generate a trajectory for the ‘Pilot’, hence taking the role of the ‘Navigator’. The human operator who carries out this task uses data from the ‘Local Model’ of the ‘Perception’ part. He thus acts in place of the level 3 loop of the autonomous parts, tele-operating the robot while still using the lowest levels of autonomy (2 and 1).

- Level 4 of tele-operation makes it possible for the human operator to send a path to the ‘Navigator’ using the ‘Global Model’ of the ‘Perception’ part. He thus replaces the level 4 loop of the autonomous parts and is assisted in the control by the lower levels.

- Finally, level 5 gives a human operator the possibility to choose a mission that the robot carries out autonomously, thanks to its autonomy levels (from 4 to 1).
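This substitution rule can be sketched as a small dispatch table (cluster names from the paper, logic illustrative): engaging tele-operation at level n bypasses the Decision levels above n, replaces level n with the operator, and leaves the lower levels autonomous:

```python
# Hedged sketch of how a tele-operation level substitutes for the
# matching 'Decision' level.

DECISION_LEVELS = {5: "Mission generator", 4: "Path-planner",
                   3: "Navigator", 2: "Pilot", 1: "Servoings"}

def command_sources(teleop_level):
    """Map each decision level to its source of commands. Levels above
    the tele-operation level are bypassed; levels below keep running
    their autonomous loops."""
    sources = {}
    for level in sorted(DECISION_LEVELS, reverse=True):
        if level > teleop_level:
            sources[level] = "bypassed"
        elif level == teleop_level:
            sources[level] = "human operator"
        else:
            sources[level] = "autonomous (" + DECISION_LEVELS[level] + ")"
    return sources

# Tele-operation at level 3: the operator replaces the Navigator, while
# the Pilot (2) and Servoings (1) loops keep running autonomously.
sources = command_sources(3)
```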

This dynamic evolution of the tele-operation makes it possible to keep human control over all the levels of an autonomous robot and provides operational security for the autonomous robot: at any time, a human operator can take over the robot when it cannot solve material or software failures. Of course, the man-machine interface differs according to the tele-operation level where the human operator acts: it can be a simple keyboard/joystick for the lower levels, a graphic environment for the median levels, or a textual analyzer for the higher levels. Finally, nothing prevents the tele-operation from being ensured by computers instead of a human operator.

5 The tele-echography robot: “OTELO”

OTELO is a European project (IST-2001-32516), coordinated by the LVR laboratory, during which 3 dedicated robots (Otelo 1, 2 & 3) were developed to perform end-to-end tele-echography. These probe-holder robots can follow the medical gesture performed by an ultrasound expert at a distance using an input device. The expert receives and analyzes ultrasound images and modifies the probe-holder orientation accordingly to make a diagnosis [8]. OTELO 3, the latest generation of probe-holder robot, has been developed based on the proposed open architecture, with some specificities for the tele-echography application.

Figure 3 – Control Architecture used for the OTELO tele-echography robot

The ‘A1_Servo’ module performs the position servoings of the 6 axes of the OTELO robot. This dedicated robot is technologically complex: it has 3 DC motors, 3 step motors, incremental encoders, analog absolute encoders, an LVDT, a force sensor and various digital I/Os such as optical switches. The ‘P1_Proprio’ module provides all the inputs. In fact, the software of these two modules is distributed over many Visual C++ functions/objects, using the functionalities of advanced PMAC boards (advanced motion controller boards made by Delta Tau Data Systems). Level 1 ensures the articular servoings of the 6 joints of the robot. An ultrasound probe is held by the robot, which is itself maintained by a paramedic on the patient’s skin. However, companies manufacturing ultrasound devices with advanced functionalities restrict access to the transducer signal; they only deliver dynamic ultrasound images. The ultrasound device is thus considered as an isolated device covering 3 levels of the ‘Perception’ part. The 2D ultrasound image is considered as a local model of the environment and is compressed by the ‘P3_Cprs’ module. The compressed ultrasound images are transmitted to the ‘T3_GUI’ module of level 3 of the ‘Tele-operation’ part. This module displays the dynamic image (on a monitor) for the medical expert, who uses a pseudo-haptic input device, the ‘T3_InDev’ module, which transmits trajectories to the ‘Pilot’ cluster. The ‘Pilot’ cluster contains the ‘A2_IGM’ module, which translates the trajectories into articular setting points for level 1.

OTELO has been successfully tested since 2002 during several trans-continental experiments (Cyprus–Spain, Morocco and France) using various communication links (ISDN, satellite, Internet), and has proved the validity of the tele-echography concept in the medical environment.

6 Conclusion

As medical robotics addresses the development of dedicated robots for niche markets and applications, manufacturing companies face the challenge of cost-effective production. The proposed architecture, supported by an open and modular approach, will provide companies with an efficient tool to develop robotic systems for a large variety of applications, independently of specific theories and methods.

REFERENCES

[1] Brooks R.A., “A robust layered control system for a mobile robot”, IEEE Journal of Robotics and Automation, vol. 2, no. 1, pp. 14–23, 1986.

[2] Albus J.S., “4D/RCS: A Reference Model Architecture for Intelligent Unmanned Ground Vehicles”, Proceedings of the SPIE 16th Annual International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, April 1–5, 2002.

[3] Volpe R., et al., “CLARAty: Coupled Layer Architecture for Robotic Autonomy”, JPL Technical Report D-19975, Dec. 2000.

[4] Alami R., Chatila R., Fleury S., Ghallab M., and Ingrand F., “An architecture for autonomy”, The International Journal of Robotics Research, Special Issue on Integrated Architectures for Robot Control and Programming, vol. 17, no. 4, pp. 315–337, 1998.

[5] Lumia R., Fiala J., Wavering A., “The NASREM Robot Control System Standard”, Robotics & Computer-Integrated Manufacturing, vol. 6, no. 4, pp. 303–308, 1989.

[6] Rosenblatt J., “DAMN: A Distributed Architecture for Mobile Navigation”, in Proceedings of the 1995 AAAI Spring Symposium on Lessons Learned from Implemented Software Architectures for Physical Agents, AAAI Press, Menlo Park, CA, 1995.

[7] Arkin R.C., “Behavior-based Robotics”, MIT Press, 1998.

[8] Dalgalarrondo A., “Intégration de la fonction perception dans une architecture de contrôle de robot mobile autonome” (Integrating the perception function into a control architecture for an autonomous mobile robot), doctoral thesis, Université Paris-Sud, Orsay, 2001.

[9] Hong-Ryeol K., Dae-Won K., Hong-Seong P., Hong-Seok K., and Hogil L., “Robot control software framework in open distributed process architecture”, patent WO2005KR01391 20050512, May 2005.

[10] Zapata R., “Quelques aspects topologiques de la planification de mouvements et des actions réflexes en robotique mobile” (Some topological aspects of motion planning and reflex actions in mobile robotics), thèse d'état, Université Montpellier II, July 1991.

[11] Canou J., Mourioux G., Novales C., and Poisson G., “A local map building process for a reactive navigation of a mobile robot”, ICRA 2004, IEEE International Conference on Robotics and Automation, April–May 2004, New Orleans.

[12] Mourioux G., Novales C., and Poisson G., “A hierarchical architecture to control autonomous robots in an unknown environment”, ICIT 2004, IEEE International Conference on Industrial Technology, December 2004, Hammamet.

[13] European OTELO project: “mObile Tele-Echography using an ultra Light rObot”, IST no. 2001-32516, led by the LVR (2001–2004); consortium: LVR (F), ITI-CERTH (G), Kingston University (UK), CHU of Tours (F), CSC of Barcelona (E), Ebit (I), Sinters (F), Elsacom (I) and Kell (I).

[14] Marescaux J., et al., “Transatlantic Robot-Assisted Telesurgery”, Nature, 2001;413:379–380.

8

Page 14: MICCAI 2006 Workshop Proceedings - 大阪大学robotics.me.es.osaka-u.ac.jp/.../eng/publications/pdf/proceedings.pdf · MICCAI 2006 Workshop Proceedings Workshop on Medical Robotics:

Requirements of a telesurgery system for hepatic metastases of colorectal cancer

Jose M. Azorin1, Antonio F. Compan2, Jose M. Sabater1, Nicolas M. Garcia1, and Carlos Perez1

1 Virtual Reality and Robotics Lab, Universidad Miguel Hernandez de Elche, Spain?, [email protected]

2 Departamento de Patologia y Cirugia, Universidad Miguel Hernandez de Elche, Spain

Abstract. This paper describes the requirements a telesurgery system must satisfy in order to be applied to minimally invasive surgery (MIS) of hepatic metastases of colorectal cancer. First, the paper explains the hepatic metastases of colorectal cancer operation through manual MIS. Then, the paper presents the concept architecture of a telesurgery system based on master and slave robots that is being developed to perform this operation remotely.

1 Introduction

Minimally invasive surgery (MIS) is an operation technique established in the 1980s. It differs from open surgery in that the operation is performed with instruments and viewing equipment inserted into the human body through small incisions. Usually, four small incisions are necessary: two for the surgical instruments, one for the laparoscope, and one for insufflating CO2; see Figure 1. The main advantage of this technique is the limited trauma to healthy tissue, reducing the post-operative hospital stay of the patient. However, this technique has some disadvantages for the surgeon, such as reduced sight, loss of haptic feedback, and loss of direct hand-eye coordination [1]. Robot-assisted surgery can be used to avoid the drawbacks of manual

MIS. Robot-assisted surgery has become a new research focus of the robotics community in recent years. The objective is to develop "a partnership between man (the surgeon) and machine (the robot) that seeks to exploit the capabilities of both to do a task better than either can do alone" [2]. Telesurgery and surgical simulators belong to this new robotics area. Telesurgery allows surgeons to perform remote surgical operations using slave robots and haptic master interfaces. On the other hand, surgical simulators are based on virtual environments where the surgeon uses haptic interfaces to control virtual medical robots. These virtual environments incorporate accurate and reliable mathematical models of the human body part that is going to be operated on and

? This work has been supported by the Ministerio de Educacion y Ciencia of the Spanish Government through project DPI2005-08203-C02-02.


Fig. 1. Description of a minimally invasive surgery setup.

of the rigid bodies of the medical apparatus involved. Surgical simulators can be applied to surgical training [3] and to biomedical research. In addition, these surgical simulators become an indispensable tool in telesurgery, since any action performed by the surgeon on the patient can be verified in the simulator before it is executed by the remote robot. Currently, several international research groups are working on telesurgery

systems [1]. In addition, the daVinci system from Intuitive Surgical Inc. is commercially available [4]. In this paper, a new telesurgery system is presented. The novelty of this

telesurgery system is its application to MIS of hepatic metastases of colorectal cancer; to the authors' knowledge, no similar system exists for this operation. The paper describes in detail the manual MIS of hepatic metastases of colorectal cancer. From this knowledge, the architecture of a telesurgery system based on master and slave robots is explained. This architecture will allow this operation to be performed remotely and will provide more benefits than the manual procedure. The paper is organized as follows. Section 2 explains the manual MIS of

hepatic metastases of colorectal cancer. The requirements that a telesurgery system must satisfy in order to perform this operation remotely are described in Section 3. Finally, the conclusions of the paper are summarized in Section 4.

2 Minimally invasive surgery of hepatic metastases of colorectal cancer

Colorectal cancer is one of the most common malignancies in the western world. The most common reason these patients die is that the cancer spreads to


the liver (hepatic metastases). Surgical resection is the treatment of choice for most patients whose cancer has metastasized to the liver.

The prognosis for a patient with hepatic metastases is most likely to be dependent on the presence of tumours found outside the liver (extrahepatic metastases), the number of liver tumours, and the proportion of liver replaced by tumour.

Several studies have shown that liver resection is the most effective treatment currently available for patients with hepatic metastases [5]. In order to maximize the benefits of surgical resection, it is critical for a surgeon to rule out the presence of disease outside the liver. With few exceptions, patients who have extrahepatic disease do not benefit from surgical resection. The preoperative evaluation then becomes a critical factor in maximizing the benefit of liver resection in these patients. A careful algorithm for the follow-up and evaluation of patients with colorectal cancer will identify those patients with hepatic metastases who will benefit from liver resection.

The best screening method for patients with colorectal cancer is not clear. Some physicians favour frequent, aggressive monitoring such as checking levels of an antigen specific to colorectal cancer (CEA) and using CT scans to identify treatable local recurrence in the colon or distant metastases outside the colon [6].

All patients should have a careful preoperative evaluation including a good quality chest x-ray or a chest CT scan, because patients with systemic metastases should not undergo liver resection. However, a single lung metastasis in the presence of liver metastases is not a contraindication to liver resection when lung resection is part of the treatment plan. A surgeon will administer an abdominopelvic CT scan to identify local recurrence or any new disease in the abdominal region. A CT scan or even magnetic resonance imaging (MRI) is essential to surgical planning. Both tests have similar specificity and sensitivity in identifying hepatic metastases and can detect tumours of less than one centimetre. Surgeons need to know the number and size of tumours and their relationship to major vascular and biliary structures. These studies must be supplemented by intraoperative evaluation of the liver using ultrasound in order to identify otherwise undetected disease. The goals of liver resection are complete removal of all cancerous tissue and a minimum risk of morbidity and mortality. Patients were considered to be candidates for liver resection if they had three or fewer tumours confined to one hepatic lobe, but newer data indicate that the number of tumours may be less important than the completeness of resection.

The treatment of choice for patients with liver metastases, or gold standard, has been the complete resection of all the hepatic tumours by open laparotomy, leaving sufficient tissue to avoid hepatic failure [7]. This intervention provides good results but, in certain cases, postoperative complications can appear, such as bile leaks, haemorrhages, pain, intraabdominal abscess, fluid in the lungs, etcetera. Logically, this method requires a number of conditions: a trained team of nurses, anaesthesiologists and, above all, surgeons specialized in major hepatic surgery, technologically advanced instruments and devices, an intensive care unit, and so on.

The most recent advances in the field of minimally invasive surgery have focused on solid organ surgery, including laparoscopic liver resection. Challenges of liver manipulation, parenchymal transection and haemostasis each presented hurdles to the progress of laparoscopic hepatic surgery. Advantages of laparoscopic liver resection are similar to those of other minimally invasive surgical procedures.

The technique of laparoscopic hepatectomy has been evolving over the past

five years, and the procedure can be roughly broken down into the following steps: 1. Port selection and placement; 2. Liver mobilisation; 3. Laparoscopic intraoperative ultrasound; 4. Hilar vascular dissection; 5. Inferior vena cava control; 6. Parenchymal transection; 7. Specimen removal; 8. Haemostasis.

After placing the ports, the liver is mobilised by taking down the round and

falciform ligaments. Laparoscopic intraoperative ultrasound is performed, and its utility cannot be overstated. It is very sensitive in identifying lesions that are less than 1 cm in size and is also valuable for identifying vascular landmarks and tumour margins to permit safe ligation and obtain clear margins.

Hilar vascular dissection is performed. For formal right hepatic lobectomy, the

liver is elevated off the inferior vena cava with control of the short hepatic veins and the right hepatic vein. The intended liver parenchymal transection is routinely marked on the liver surface using cautery. The liver capsule and parenchyma are divided. Once 2-3 cm of depth has been reached, or whenever major blood vessels are encountered, vascular staplers can be used to crush the parenchyma and divide the vessels. Other devices that surgeons may need are: harmonic scalpel, saline-cooled radiofrequency coagulation device (TissueLink), Ligasure device, Argon beam coagulation, and so on. Of course, this is a very complex procedure, and several years of training are needed to perform these operations.

3 Requirements of a telesurgery system

A possibility for providing quality care to remote regions, or for performing hepatic metastases of colorectal cancer operations in small hospitals with general surgeons not specialized in hepatic surgery, would be the use of telesurgery and robotic systems. In addition, these systems would solve some surgical problems such as fatigue or tremor, and they would allow computer-controlled exclusion zones to be defined. This way, the chance of accidental damage to nearby tissue when working near sensitive areas (e.g. major hepatic vessels) is eliminated. In these systems, a 3D model of the patient's organs must be obtained, generated from MRI or CT images; this model will be used for surgical planning or surgical simulation.

In a manual MIS of hepatic metastases, the following people are habitually involved: three surgeons (the expert surgeon and two assistant surgeons), two nurses (the instrumentalist nurse and the auxiliary nurse), and one anesthesiologist. In a telesurgery system the clinical requirements are very similar. In the remote area where the patient is placed there are two surgeons (the main surgeon and the assistant), two nurses and the anesthesiologist. However, the expert surgeon could be physically in a remote place.

Taking into account the previous considerations, a telesurgery system for

hepatic metastases of colorectal cancer is being developed. Next, the hardware and software architecture of this system is described.

3.1 Hardware architecture

Figure 2 shows the hardware architecture of the telesurgery system. Three different zones can be identified in the figure. The different robotic systems are placed in zone I, while the control systems of these devices are located in zone II. Both zones are placed inside the operating room. In zone III (the remote area), the Remote Expert Surgeon (RES) performs the operation using an assistant system. Between zones I and II, the communications are established satisfying the real-time requirement in order to guarantee the reliability of the system. The main devices shown in this scheme are described below.

Fig. 2. Hardware architecture of the telesurgery system.

END Endoscope. The endoscope is controlled by the assistant surgeon in the operating room (OR Assistant) through an orientation device (MEND).

PW1 and PW2 Parallel Wrists. There are two robot arms, each composed of a serial-parallel hybrid system. The position of these arms is controlled by a Cartesian robot, while the orientation is controlled using parallel wrists with a spherical configuration. This design avoids kinematic singularities without using redundant mechanisms. The possibility of modifying the location of the center of rotation of the spherical wrist will also allow a better workspace for laparoscopic operations. The wrists PW1 and PW2 make it possible to sense the interaction forces between the surgical tools and the tissues. These


arms are commanded by two 6-d.o.f. haptic masters (MPW1 and MPW2). Currently, the Omni masters from Sensable are used. These masters can reflect 3-d.o.f. forces. In this way the forces at the end of the tool are fed back to the surgeons.

Simulator The main computer allows the connection between the different zones. This computer contains the dynamic telesurgery simulator [8]. This simulator has a virtual model of the remote environment (the organs and tissues of the patient, and the slave robots). This model is updated with the information fed back from the cameras and sensors of zone I. The simulator has two main functions:
– It is a planning tool for the surgeons. The surgeons can verify some actions in the simulator beforehand. In order to simulate the operation, the simulator generates virtual forces based on the deformable model of the tissues and organs.

– It receives the instructions from the RES surgeon (explained below).

A reduced version of the simulator is available in zone III. This simulator allows the RES surgeon to plan the operation and to aid the OR main surgeon. This way, the RES surgeon performs the operation in the simulator, and the OR surgeon visualizes the procedure. The RES surgeon can send the needed instructions to the operating room staff by oral communication. The haptic masters used by the RES are the same as the masters used by the OR surgeon. The only difference is that, in this case, these masters control the virtual devices of the simulator and do not have any control over the devices of zone I. Finally, in zone III a display shows all the views of the remote operation zone: a 3D view from two stereoscopic cameras that observe the scene, a view from the laparoscope, and a view of the virtual scene modeled in the simulator. The delay in the communication between zone III and zone II does not affect the operation because the RES surgeon guides the OR surgeon in an off-line mode.

3.2 Software architecture

Different software libraries are being used to develop the architecture presented previously. The software architecture of the telesurgical system is shown in Figure 3. The main software modules of this architecture are explained below.

– In order to get the 3D model of the patient's organs, a C++ application that generates and visualizes a 3D solid model from CT/MRI images has been developed (see figure 4). The main characteristic of this application is the ability to create a soft organ (like a liver) from relatively low-resolution images (like CT images). This software has been developed using the VTK open source library [9].

– For the rendering of the forces in the haptic devices, the libraries provided by Sensable have been used. The HDAPI library is used to control some


Fig. 3. Software architecture of the telesurgery system.

low-level characteristics of the devices, while the HLAPI has been used to integrate the devices in the simulator.

– The dynamic simulator is developed in C++ using the OpenTissue open source library [10]. Currently, the algorithms that calculate the forces in the deformable models are being modified in order to provide more reliability. The current software uses a mass-damper algorithm to solve the mesh imported from the 3D-images software (see figure 5). The virtual environment has been represented using the OpenGL graphics libraries, and the user interface has been programmed with the Qt libraries.

– Finally, the RCLib libraries have been developed for the remote control of the robotic devices located in zone I. These libraries contain the kinematics, trajectory planning, and dynamic control algorithms of the Cartesian robot and the parallel wrists. With the help of these libraries, the singularity problem of the wrists is easily overcome, and the robotic devices are integrated with the simulator system.

Fig. 4. 3D software to generate soft organs.

4 Conclusions

In this paper, the requirements that a telesurgery system for MIS of hepatic metastases of colorectal cancer must satisfy have been described. The development


Fig. 5. 3D software to generate soft organs.

of a telesurgery system for this operation will provide, besides performing the operation remotely, different clinical benefits, such as solving the fatigue and tremor problems, or defining computer-controlled exclusion zones.

References

1. Ortmaier, T.: Motion compensation in minimally invasive robotic surgery. PhD thesis, Lehrstuhl für Realzeit-Computersysteme, Technische Universität München (2002)

2. Taylor, R., Paul, H., Kazanzides, P., Mittelstadt, B., Hanson, W., Zuhars, J., Williamson, B., Musits, B., Glassman, E., Bargar, W.: Taming the bull: Safety in a precise surgical robot. In: Proc. IEEE Int. Conf. on Advanced Robotics (ICAR). Number 1 (1991) 865-870

3. Kühnapfel, U., Çakmak, H., Maaß, H.: Endoscopic surgery training using virtual reality and deformable tissue simulation. Computers & Graphics 24 (2000) 671-682

4. Guthart, G., Salisbury, J.: The Intuitive telesurgery system: Overview and application. In: 2000 IEEE International Conference on Robotics and Automation (2000)

5. Adam, R.: Current surgical strategies for the treatment of colorectal cancer liver metastases. European Journal of Cancer Supplements 2 (7) (2004) 21-26

6. Faria, S., Tamm, E., Varavithya, V., Phongkitkarum, S., Daur, H., Szkalruk, J., DuBrow, R., Charnsangavej, C.: Systematic approach to the analysis of cross-sectional imaging for surveillance of recurrent colorectal cancer. European Journal of Radiology 53 (3) (2005) 387-396

7. Blumgart, L., Fong, Y.: Surgical options in the treatment of hepatic metastasis from colorectal cancer. Curr Probl Surg 32 (5) (1995) 333-421

8. Sabater, J., Azorin, J., Garcia, N., Perez, C.: Open architecture haptics simulator for robot-assisted surgery. In: Computer Assisted Orthopaedic Surgery (5th Annual Meeting of CAOS-Int.) Proceedings, Helsinki, Finland. Volume 1 (2005) 392-393

9. Schroeder, W., Martin, K., Lorensen, B.: The Visualization Toolkit: An object-oriented approach to 3D graphics. Kitware (2004)

10. Erleben, K.: Stable, robust and versatile multibody dynamics animation. PhD thesis, DIKU, Computer Science, Copenhagen (2004)


Telemanipulation of Snake-Like Robots for Minimally Invasive Surgery of the Upper Airway

Ankur Kapoor1, Kai Xu2, Wei Wei2, Nabil Simaan2, and Russell H. Taylor1

1 Johns Hopkins University, kapoor, [email protected]

2 Columbia University, kx2102, ww2161, [email protected]

Abstract. This research focuses on developing and testing the high-level control of a novel eight-DOF hybrid robot using a DaVinci master manipulator. The teleoperation control is formulated as weighted, multi-objective constrained least square (LS) optimization problems: one for the master controller and the other for the slave controller. This allows us to incorporate various virtual fixtures in our control algorithm as constraints of the LS problem based on the robot environment. Experimental validation to ensure position tracking and sufficient dexterity to perform suturing in confined spaces such as the throat is presented.

Minimally invasive surgery (MIS) of the chest and abdomen provides multiple ports to access the anatomy and allows large motions at the proximal joints of telesurgical slave robots. Compared to MIS of the chest and abdomen, it is vital to minimize the motions of the proximal joints in single-entry-port MIS due to strict space limitations. In addition, the slave robots must have high distal dexterity and a large number of DOF. This paper presents a general high-level control method for telemanipulation using a linear least square optimization framework. This framework allows easy incorporation of virtual fixtures in high-DOF telesurgical systems for single-entry-port MIS. The validation of our framework is performed on a novel system for MIS of the throat [1] using a DaVinci master manipulator in our laboratory.

MIS of the throat is characterized by a single entry port (the patient's mouth) through which surgical tools operate. Current manual tools are hard to manipulate precisely and lack sufficient dexterity to permit common surgical tasks such as suturing. This clinical problem motivated the development of a novel system for MIS of the upper airway. Our proposed solution is a telesurgical robot with a hybrid slave manipulator that has a snake-like unit (SLU) at its distal end, which provides high tool-tip dexterity. In previous works [2-5], novel telesurgical slave robots implementing snake-like units for distal dexterity enhancement were presented, but few address the issue of human interfaces to these complex mechanisms. Csencsits et al. [6] evaluated interfaces for snake-like devices using joysticks. Our method allows the addition of virtual fixtures to enable intuitive control of snake-like devices by a human user. Moreover, virtual fixtures can be used to augment MIS tasks that require better-than-human levels of precision.

Virtual fixtures (VF), which have been discussed previously in the literature for cooperative robots [7, 8], are algorithms which provide anisotropic behavior in response to the surgeon's motion commands, besides filtering out tremor to provide safety and precision. Virtual fixtures have been implemented on impedance-type teleoperators in various forms in [9-11]. It has been shown that implementing forbidden-region virtual fixtures using impedance control techniques can lead to instability [12]. Moreover, these works are based either on a specific robot type (admittance or impedance) or on a specific task. We present a method that covers the implementation of both guidance and forbidden regions and is suitable for both types of robots. In doing this we extend the work of Funda et al. [13] and Li and Taylor [14, 15] to teleoperated systems with impedance-type master robots. This will enable us to implement stable virtual fixtures on these types of robots.

1 Methods

1.1 Control Algorithm Overview

Fig. 1. Block diagram of current implementation of master-slave controller

In this section we outline a new method to address telemanipulation of an admittance-type snake-like robot using an impedance-type robot. To achieve this


we mimic an admittance-type behavior on the master manipulator. The overall structure of the control algorithm is shown in Figure 1. There are separate controllers for the master and the slave, connected through a communication network. In the following sections, subscripts m and s are used for master and slave respectively, and the double subscript m, s is used for variables that are related to both master and slave. The overall method works as follows:

1. Individual joints of the manipulator are servoed with low-level controllers (PD or PID) to position set points.

2. A desired Cartesian velocity is calculated for each manipulator. Sections 1.2 and 1.3 discuss the methods to compute the desired Cartesian velocities for the master and slave respectively.

3. A constrained least square problem is solved for the joint velocities by the high-level controller. The least square problem has an objective function describing the desired outcome. It may also include constraints that capture any motion constraints due to VF, joint limits, and velocity limits. This problem has the general form

arg min_{∆q_r/∆t} ‖A_r(q_r, q̇_r) · ∆x_r/∆t − ∆x^d_r/∆t‖,

s.t. H_r(q_r, q̇_r) · ∆x_r/∆t ≥ h_r, and ∆x_r/∆t = J_r ∆q_r/∆t    (1)

where r ∈ {m, s} for master or slave. ∆q_r is the desired incremental motion of the joint variables. The desired incremental motion in Cartesian space, ∆x^d_r = g(f_user, q_r), is a function of the user's input, f_user, and the joint variables, q_r. Matrix J_r is the Jacobian of the manipulator. ∆t is the small time interval of the high-level control loop. Matrices H_r and A_r, along with vector h_r, define the behavior of the robot for a given input. Without any constraints, the above constrained LS problem, which is implemented in the high-level block shown in Figure 1, is equivalent to resolved rate control.

4. Numerically integrate the joint velocities to arrive at a new set of joint positions. We assume that for each iteration loop, the incremental motions are sufficiently small and ∆x_r/∆t = J_r ∆q_r/∆t represents a good approximation to the relationship between ∆x_r/∆t and ∆q_r/∆t.

1.2 Implementation on Master Manipulator

In this section, we discuss the desired behavior of the master manipulator in response to a user input and formulate a specific constrained least square problem based on the general form presented in (1). We model the manipulator as a kinematic device having a position p_m ∈ ℜ^3 and an orientation given by a rotation matrix R_m ∈ ℜ^{3×3}. The frame F_m = (R_m, p_m) is computed using the actual encoder joint positions q_m and the forward kinematics. The frame F^c_m = (R^c_m, p^c_m) is the commanded frame, specified by the commanded joint positions q^c_m (reference set points for the servo controller) and the forward kinematics.

Desired Cartesian Velocity In admittance-type devices, a force sensor measures the user input, f_user. The desired Cartesian velocity is computed by


multiplying the user input by the user-supplied admittance gain matrix K_a. It has been shown in [16] that for 3-DOF impedance-type robots, under quasi-static conditions, the applied user force f_user can be measured approximately by the position error. We extend this to the 6-DOF case by defining the position and orientation errors as p^e_m = p^c_m − p_m and R^e_m = R^c_m · R_m^{−1}, respectively. Further, we make use of the small angle approximation to the Rodrigues formula for the orientation error,

R^e_m = I + (sin θ_m) ω̂_m + (1 − cos θ_m)(ω̂_m)^2 ≈ I + θ_m ω̂_m;  ‖ω_m‖_2 = 1    (2)

where ω̂_m is the skew-symmetric matrix corresponding to vector ω_m. Scalar θ_m is the rotation angle about unit vector ω_m. We can replace the force sensor measurement in the admittance control law by a wrench, a six-vector obtained by concatenating p^e_m and θ_m ω_m, that is, ∆x^d_m/∆t = K_a [p^e_m ; θ_m ω_m]. A small dead-band is used on this error to avoid motion that might arise due to small errors in the servo control.

Objective function In [13], Funda et al. presented a constrained least square framework for robot position control. In this work we extend it to teleoperation control of a master-slave system. We identify three objective criteria for the tip frame that are required to achieve the desired motion of the tip. First, we require that an incremental motion of the master be as close as possible to the desired Cartesian velocity, ∆x^d_m/∆t, which is a function of the user input. We express this as: min_{∆q_m/∆t} ‖∆x_m/∆t − ∆x^d_m/∆t‖.

The second objective criterion concerns the teleoperator that connects the master and slave tips. The slave follows the master positions, but the master must also provide resistance to the user based on the position of the slave and/or the force at the slave tip. For example, if the slave is lagging behind, the master can haptically display this to the user by opposing the user's motion that increases the master-slave tracking error. We model this virtual coupling as a spring, which results in: min_{∆q_m/∆t} ‖∆x_m/∆t − ∆x^e_{m,s}/∆t‖, where ∆x^e_{m,s} is a function of both the master and slave positions. We defer the discussion of the computation of ∆x^e_{m,s} until Section 1.3.

Finally, we would like to minimize the extraneous motion of the joints and avoid large incremental joint motions that could occur near singularities, that is, min_{∆q_m/∆t} ‖∆q_m/∆t‖.

We can combine the three objective criteria by using diagonal matrices of weighting factors W_{m,t}, W_{m,s} and W_{m,j} associated with each of the objectives. The diagonal elements of W_{m,t} specify the relative importance of each component of the desired Cartesian motion ∆x^d_m. The ratio of the diagonal elements of W_{m,s} to the elements of W_{m,t} specifies the "stiffness" of the virtual spring connecting the master and slave tips. A factor close to zero implies a loose connection or no connection. The ratios between the elements of W_{m,j} themselves can be used to favor motion of some joints over others. We can project the Cartesian tip motion


to the joint motion via the Jacobian Jm, the final objective function is

min∆qm

Wm,t 0 0

0 Wm,s 0

0 0 Wm,j

Jm

Jm

I

∆qm −

∆xdm

∆xem,s

0

(3)
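As a concrete illustration, the stacked form of Eq. (3) can be assembled and, ignoring the inequality constraints, solved with ordinary least squares. This is a minimal sketch: the dimensions, Jacobians, and weights below are made-up placeholders, not values from the authors' system.

```python
import numpy as np

# Minimal sketch of the stacked weighted least-squares objective in Eq. (3).
# All dimensions, Jacobians, and weights are illustrative placeholders.
rng = np.random.default_rng(0)
n = 7                                # number of master joints (assumed)
Jm = rng.standard_normal((6, n))     # tip Jacobian (placeholder values)
dx_d = rng.standard_normal(6)        # desired Cartesian motion (placeholder)
dx_e = rng.standard_normal(6)        # virtual-spring error term (placeholder)

W_t = np.diag(np.full(6, 1.0))       # weight on tracking the desired motion
W_s = np.diag(np.full(6, 0.5))       # "stiffness" of the master-slave spring
W_j = np.diag(np.full(n, 0.01))      # penalty on extraneous joint motion

# Stack [Jm; Jm; I] against [dx_d; dx_e; 0], apply the block-diagonal
# weights, and solve the unconstrained problem with ordinary least squares.
A = np.vstack([W_t @ Jm, W_s @ Jm, W_j])
b = np.concatenate([W_t @ dx_d, W_s @ dx_e, np.zeros(n)])
dq, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In the actual system this objective is minimized subject to the linear inequality constraints of Eq. (4), e.g. with a quadratic-programming solver, rather than with an unconstrained solve.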

Optimization constraints By defining instantaneous motion relationships between different task frames {i} and the incremental joint motions, we can implement virtual fixtures (VF) for those task frames. The relationship has the form

    H_{m,i} J_{m,i}(q_m) ∆q_m ≥ h_{m,i}    (4)

where J_{m,i}(q_m) is the Jacobian relating the Cartesian task frame vector ∆x_{m,i} to the incremental joint motion. In [17] we proposed a library of five primitives that can be used to create VF for different tasks by appropriately selecting the matrix H_{m,i} and the vector h_{m,i}. Our approach allows us to apply the basic primitives developed in [17] to a master-slave teleoperator system. Currently we have implemented two sets of constraints, for joint limits and joint velocities. The two sets of limit constraints can be combined to give

    H_{m,j} ∆q_m ≥ h_{m,j}, where H_{m,j} = [I; −I; I; −I] and h_{m,j} = [q_{m,L} − q_m; q_m − q_{m,U}; −q̇_{m,U}·∆t; −q̇_{m,U}·∆t]    (5)

where q_{m,L} and q_{m,U} are the lower and upper bounds of the joint ranges and q̇_{m,U} is the upper bound of the joint velocities. In the future we plan to analyze bimanual teleoperation tasks such as knot tying and to add additional constraints using this framework.
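The joint-limit and joint-velocity constraints of Eq. (5) are straightforward to build explicitly. The sketch below uses placeholder bounds and a 100 Hz control period; it only checks the feasibility of a candidate ∆q_m, it does not solve the optimization.

```python
import numpy as np

# Hedged sketch of the joint-limit / joint-velocity constraints of Eq. (5):
# H @ dq >= h. Bounds and the current pose are illustrative placeholders.
n = 7
q    = np.zeros(n)        # current joint positions (placeholder)
q_L  = np.full(n, -1.0)   # lower joint limits (placeholder)
q_U  = np.full(n,  1.0)   # upper joint limits (placeholder)
qd_U = np.full(n,  0.5)   # joint velocity limits (placeholder)
dt   = 0.01               # 100 Hz high-level control period

I = np.eye(n)
H = np.vstack([I, -I, I, -I])
h = np.concatenate([q_L - q, q - q_U, -qd_U * dt, -qd_U * dt])

def feasible(dq):
    """Check whether an incremental joint motion satisfies H @ dq >= h."""
    return bool(np.all(H @ dq >= h - 1e-12))
```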

1.3 Implementation on Slave Manipulator

In this section we discuss the objective function and constraints for the slave device optimization problem.

Desired Cartesian Motion We define two frames, called the neutral frames (denoted by the letter N), that are specific tip frames of the master and slave chosen by the user. They are chosen such that the user perceives, through the head-mounted display (HMD), that the slave gripper is aligned with her hand orientation. We denote the tip frame of the master with respect to its neutral frame as ^N F_m = (^N R_m, ^N p_m) and the neutral frame of the slave with respect to its base frame as N_s. For the user to always perceive the slave gripper as aligned with her hand, we require that the slave tip motion with respect to its neutral position be the same as the master tip motion with respect to its neutral position. Thus we can write the slave tip frame with respect to its base frame as

    F^d_s = (R^d_s, p^d_s) = N_s · ^N F_m.

A six-vector ∆x^d_{s,m} can be computed by taking the difference between the desired frame and the current slave tip frame. The matrix R^e_s = R^d_s · R_s^{−1} can be converted to a three-vector by using the Rodrigues formula in (2), or its small-angle approximation if applicable. Thus, ∆x^d_{s,m} = [p^d_s − p_s; θ_s ω_s]. The computation of ∆x^e_{m,s} required in (3) can be accomplished by exchanging the roles of master and slave in the above discussion.
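The conversion of a frame error into such a six-vector can be sketched as follows: the rotation error is turned into θω via the axis-angle (Rodrigues) form. The frames used in the test are illustrative, not the authors' calibrated values.

```python
import numpy as np

# Hedged sketch: compute the six-vector motion between two frames, using the
# axis-angle (Rodrigues) form of the rotation error. Inputs are placeholders.
def rotation_error_vector(R_des, R_cur):
    """Return theta * omega for Re = R_des @ R_cur.T (axis-angle form)."""
    Re = R_des @ R_cur.T
    cos_theta = np.clip((np.trace(Re) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):
        return np.zeros(3)  # no rotation error
    # Axis from the skew-symmetric part of Re (valid for theta != 0, pi).
    omega = np.array([Re[2, 1] - Re[1, 2],
                      Re[0, 2] - Re[2, 0],
                      Re[1, 0] - Re[0, 1]]) / (2.0 * np.sin(theta))
    return theta * omega

def desired_motion(p_des, R_des, p_cur, R_cur):
    """Six-vector [position error; theta * omega], as in the text."""
    return np.concatenate([p_des - p_cur,
                           rotation_error_vector(R_des, R_cur)])
```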


Objective function The objective function for the slave has two criteria: one for following the desired motion and the other for restricting extraneous motion of the joints. The final equation is

    min_{∆q_s} ∥ diag(W_{s,t}, W_{s,j}) · ( [J_s; I] ∆q_s − [∆x^d_s; 0] ) ∥    (6)

2 Experimental Setup

Fig. 2. (a) Master manipulator (DaVinci master) with HMD (b) 8-DOF hybrid slave manipulator, showing the five-bar stage, Z-Θ stage, and actuation unit (c) Close-up of the distal end, showing the gripper and the first and second SLU stages (5 mm scale) (d) The two SLUs in a bent configuration.

The experimental setup consists of a DaVinci master, which is a 7-DOF haptic device, a custom 8-DOF hybrid slave manipulator, and a stereo vision system. The master is a commercially available system with its controller replaced by custom hardware and software, allowing us greater flexibility in control. The servo loops have a sampling rate of 1 kHz. The devices communicate over a TCP network with a sampling rate of 100 Hz, which is the same as the sampling rate of the local high-level control loop. The gains of the PD servo controllers are chosen such that the settling time is less than the sampling period of the master high-level loop. This ensures that the quasi-static approximation required in Section 1.2 is met. The vision system consists of a stereo laparoscopic camera mounted at the end of a passive arm. The video streams are displayed on the HMD.


The 8-DOF hybrid slave manipulator is unique; its design was motivated by MIS of the upper airway, including the throat and larynx. The slave manipulator consists of two long and slender robotic arms, each comprising a distal end and a proximal end. The distal end consists of two snake-like units (SLUs) [1]. Each SLU has four NiTi tubes that exploit the super-elastic properties of NiTi to allow the creation of a slender 2-DOF mechanism. One of these tubes (the primary backbone) is located at the center and is connected to all the discs. The other three tubes (the secondary backbones) are connected only to the end disc and can slide freely through the other discs. The 2 DOF are achieved by pushing or pulling the secondary backbones with three actuators located at the proximal end. The second SLU is located at the tip of the first SLU. Inside the backbones of the first SLU there are three more super-elastic NiTi tubes that actuate the second SLU. An additional wire passes through the central backbone to actuate a detachable two-jaw gripper fixed to the end disc of the second SLU. The use of flexible backbones eliminates the need for miniature joints and pulleys. Moreover, the redundant push-pull actuation enhances the payload-carrying capability. These factors result in a mechanism that is scalable to smaller sizes, with our current prototype being 4 mm in diameter.

Both robotic arms pass through a laryngoscope that provides access to the patient's upper airway. In order to avoid obstructing the surgeon's access, the actuation units for the SLUs are placed at the end of a 1 m steel tube. Each actuation unit is mounted on a compact 2-DOF "Z-Θ stage" capable of rotating and translating the actuation unit about and along its own axis. The long shaft is supported by a passive universal joint mounted on a 2-DOF five-bar mechanism. This provides "X-Y" Cartesian motion at the base of the first SLU and eliminates lateral deflections of the thin, long shafts.

Though remote actuation of the Distal Dexterity Unit (DDU) has advantages, it introduces modeling problems due to the extension of the actuation lines, friction, and backlash. To achieve faithful control in master-slave mode it is necessary to implement actuation compensation. In previous work [18] we used a model-based approach combined with recursive linear estimation to obtain the correct actuation compensation. In this approach the required compensation ε_l = ε(τ, f_s, λ, η) is a non-invertible function that depends on the required actuation forces τ of the flexible DDU, the static friction f_s in the actuation lines, a vector λ, and a scalar η. The vector λ accounts for backlash parameters in all actuation lines, while the scaling factor η accounts for modeling inaccuracies due to variations in the material properties of the backbones and in the bending shape of the snake-like unit. The actuation forces τ are determined through static modeling of the snake-like unit [19] and the friction parameters f_s are determined empirically. Both λ and η are recursively estimated using an external measurement of the orientation of the snake tip, such as vision. Using this approach in [18], we demonstrated a reduction of the tracking errors from more than 45 deg to less than 1 deg. Since ε_l is a non-invertible function, only an estimate of the actual slave frame R_s can be obtained, by applying forward kinematics to the estimated backbone lengths l_est = l_enc + ε_l, where l_enc is the length measured by the encoders in the actuation unit.

3 Results and Discussion

Fig. 3. Series of four pictures showing the roll motion of the gripper; x and y represent the X and Y axes of the gripper frame.

Suturing with a curved needle in a confined workspace requires high dexterity at the distal end and sufficient roll about the gripper axis. Figure 3 shows a series of pictures taken during one of these roll motions on one of our earlier prototypes, using the control strategy presented in [15]. This has a potential benefit in laryngeal surgery, as multiple instruments need to be used through the narrow opening of the laryngoscope: little or no motion of the joints at the proximal end minimizes tool collisions and gives the surgeon sufficient access to the surgical site. We are currently evaluating this setup with a simple phantom for ex vivo suturing.

4 Conclusions and Future Work

We have presented the high-level control of a telesurgical system designed for the special requirements of MIS of the throat. The high-level control is based on a multi-objective least-squares optimization problem with linearized constraints, which is easily extendable to include additional constraints such as collision avoidance, anatomy-based constraints [14], and joint limits.

The dexterity of the SLU was effectively used to provide roll movement of thegripper, without any motions of the proximal joints. This is a crucial requirementfor suturing in confined spaces, such as the throat.

The current results serve as a validation of our control to be used in thenovel telerobotic system for MIS of the throat. The current work also providesa basis for implementing virtual fixtures in impedance-type robots for complextasks such as suturing.


Acknowledgements

This work was partially funded by the National Science Foundation (NSF) under Engineering Research Center grant #EEC9731478, NSF grant #IIS9801684, and National Institutes of Health (NIH) grant #R21 B004457-01, and by Columbia University and Johns Hopkins University internal funds. The authors also thank Dr. Paul Flint for his guidance and support.

References

1. Simaan, N., et al.: High dexterity snake-like robotic slaves for minimally invasivetelesurgery of the upper airway. In: MICCAI. (2004) 17–24

2. Peirs, J., et al.: Design of an advanced tool guiding system for robotic surgery. In:ICRA. (2003) 2651–2656

3. Takahashi, H., et al.: Development of high dexterity minimally invasive surgicalsystem with augmented force feedback capability. In: BioRob. (2006)

4. Ikuta, K., et al.: Development of remote microsurgery robot and new surgicalprocedure for deep and narrow space. In: ICRA. (2003) 1103 – 1108

5. Charles, S., et al.: Dexterity-enhanced telerobotic microsurgery. In: ICAR. (1997)5–10

6. Csencsits, M., et al.: User interfaces for continuum robot arms. In: IROS. (2005)3123 – 3130

7. Davies, B.L., et al.: Active compliance in robotic surgery: the use of force control as a dynamic constraint. Proc Inst Mech Eng [H] 211(4) (1997) 285–292

8. Marayong, P., et al.: Spatial motion constraints: Theory and demonstrations forrobot guidance using virtual fixtures. In: ICRA. (2003) 1954–1959

9. Rosenberg, L.B.: Virtual fixtures: Perceptual tools for telerobotic manipulation.In: IEEE Virtual Reality Annual International Symposium. (1993) 76–82

10. Park, S., et al.: Virtual fixtures for robotic cardiac surgery. In: MICCAI. (2001)1419 – 1420

11. Turro, N., Khatib, O.: Haptically augmented teleoperation. In: Intl. Symposiumon Experimental Robotics. (2000) 1–10

12. Abbott, J.J., Okamura, A.M.: Analysis of virtual fixture contact stability fortelemanipulation. In: IROS. (2003) 2699 – 2706

13. Funda, J., et al.: Constrained cartesian motion control for teleoperated surgicalrobots. IEEE Trans. Robot. Automat. 12(3) (1996) 453–465

14. Li, M., Taylor, R.H.: Spatial motion constraints in medical robot using virtualfixtures generated by anatomy. In: ICRA. (2004) 1270–1275

15. Kapoor, A., et al.: Suturing in confined spaces: Constrained motion control of ahybrid 8-dof robot. In: ICAR. (2005) 452 – 459

16. Abbott, J.J., et al.: Steady-hand teleoperation with virtual fixtures. In: 12th IEEEWorkshop on Robot and Human Interactive Communication. (2003) 145–151

17. Li, M., et al.: A constrained optimization approach to virtual fixtures. In: IROS,Edmonton, Canada (2005) 1408 – 1413

18. Xu, K., Simaan, N.: Actuation compensation for flexible surgical snake-like robotswith redundant remote actuation. In: ICRA. (2006) 4148–4154

19. Simaan, N.: Snake-like units using flexible backbones and actuation redundancyfor enhanced miniaturization. In: ICRA. (2005) 3020–3028


Automatic Registration of a Needle Guide Robot for Minimally Invasive Interventional Procedures Using

Computed Tomography

Roland Stenzel 1, Gernot Kronreif 2, Martin Kornfeld 2, Ralph Lin 1, Peng Cheng 1, Kevin Cleary 1

1 Computer Aided Interventions and Medical Robotics (CAIMR) Laboratory, Imaging Science & Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC, USA

{stenzel, lin, cheng}@isis.georgetown.edu, [email protected]
2 ARC Seibersdorf Research GmbH, Austria

{gernot.kronreif, martin.kornfeld}@arcs.ac.at

Abstract. Minimally invasive procedures are increasingly popular because they can reduce operative trauma, recovery times, and overall costs. However, during these procedures, the physician has a limited overview of the interventional field and of the exact position of surgical instruments. We address the issue of precise instrument placement by developing a system that automatically registers a small robot (the B-RobII from ARC Seibersdorf Research GmbH) to a CT scan showing the patient and positions an instrument for percutaneous access. We use a custom needle guide with a built-in spiral fiducial pattern as the robot's end-effector and intra-operative computed tomography (CT) to register the robot to the patient directly before the intervention. The robot then aligns an interventional instrument with the path between the skin entry point and the target inside the patient's body. The path is chosen by the physician before the intervention using a Graphical User Interface (GUI). The physician can then manually advance the instrument into the body. This paper introduces the workflow, software, and algorithms used. Results are presented from a study validating the algorithmic accuracy and the suitability of the custom needle guide.

Keywords: needle guide; robot; minimally invasive; intervention; registration; spiral fiducial pattern; CT; path planning

1 Introduction

Minimally invasive procedures are increasingly attractive to patients and medical personnel because they can reduce operative trauma, recovery times, and overall costs. However, during these procedures, the physician has a very limited overview of the interventional field and of the exact position of surgical instruments. Moreover, the intervention has to be performed with limited degrees of freedom. Consequently, very few physicians can perform minimally invasive surgeries. For instance, only two percent of all physicians trained to perform conventional, invasive prostate cancer surgery are also trained to perform the minimally invasive counterpart [1]. To address these drawbacks and enable precise instrument placement, we have developed a system that automatically registers a small high-precision robot [3] to the intra-operative position of the patient, then positions a surgical instrument for percutaneous access along a path towards the target inside the patient's body and places the instrument at the desired skin entry point [2]. The system is highly portable, can be set up quickly and easily, and has an easily adaptable Graphical User Interface (Figure 1).

Fig. 1. Graphical User Interface showing axial, coronal, sagittal, and 3D rendered views of the data. All views include an overlay of the needle guide (yellow), instrument (green), and working area (violet). The robot consists of two 2-degree-of-freedom modules which can be shifted within the working area (x and y). A "finger" is attached to each module; together with a needle holder, the two fingers establish a parallelogram kinematic structure, which results in a rotation of the needle holder about x and y. We have integrated a custom needle guide with the robot to enable the precise registration of robot and patient. The custom needle guide contains a set of fiducials placed in a spiral pattern.

2 Methods

The proposed scenario for use of the system is as follows. First, the robot control box and intervention planning computer are set up prior to the patient's arrival. Next, the patient is placed on the CT table. Then, the robot, including the needle guide, is mounted on the frame and positioned so that the robot's working area is close to the assumed skin entry point. Once the robot is positioned, a small set of axial CT images (5 cm along the table) is obtained. The CT images are then used to automatically register the robot and the patient. First, an intensity threshold is calculated from the CT histogram; this threshold is defined by a local minimum close to the upper end of the histogram. Then, all voxels above the threshold are segmented and the fiducials are defined by labeling the segmentation result. The labeling algorithm assigns every voxel in the segmentation result to a fiducial in a single pass through the data. Next, Principal Component Analysis is used to sort the fiducials based on their positions in the needle guide's model, resulting in a table that represents the one-to-one correspondence between the fiducials in the needle guide's model and the fiducials appearing in the CT. Then, the transformation between CT (patient) space and robot space is calculated using a closed-form paired-point registration. Once the registration is performed, the physician can examine the CT using the graphical user interface; the visualization now includes a virtual representation of the instrument, the needle guide, and the robot's working space at the current position. The physician then selects the skin entry and target points, the robot aligns the needle guide along the chosen path, and the physician performs the intervention by advancing the instrument into the body along the path. The physician intervenes only to select the skin entry and target points within the CT scan, so the interaction between the physician and the application is minimized.
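The final step of this pipeline, the paired-point transform between the needle-guide model and the segmented CT fiducials, can be sketched with the SVD-based (Kabsch) closed-form solution; Horn's quaternion method (reference [2]) yields the same rigid transform. The fiducial coordinates in the test are synthetic, not actual needle-guide geometry.

```python
import numpy as np

# Hedged sketch of closed-form paired-point (absolute-orientation)
# registration: given matched fiducial points in model and CT space,
# find the rigid transform R, t minimizing ||R @ model + t - ct||.
def paired_point_register(model_pts, ct_pts):
    """Return (R, t) mapping model points onto CT points (Kabsch/SVD)."""
    mc, cc = model_pts.mean(axis=0), ct_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mc).T @ (ct_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cc - R @ mc
    return R, t
```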

Figure 2 shows the setup from a phantom validation study; Figure 3 shows a TeraRecon (TeraRecon Inc.) software reconstruction of a CT scan of the robot, phantom, and donut fiducials; and Figure 4 shows the entire workflow.

Fig. 2. Robot set up in the CT suite


Fig. 3. TeraRecon software reconstruction of CT scan of the robot, phantom, and donut fiducials

Fig. 4. Workflow of the intervention: set up the robot control box and intervention planning computer; place the patient on the CT table, place the frame and positioning arm, and attach the robot to the arm; position the needle guide near the assumed skin entry point and home the robot; retrieve a CT volume including the needle guide and target point; calculate the segmentation threshold based on the histogram; segment the fiducials using the threshold; label the fiducials; register robot space and patient space; plan the interventional path on the intervention planning computer; examine the CT with an overlay of the needle guide; calculate the theoretical robot movement to align the path and needle guide; if the path is reachable by the robot, the robot aligns to the path and the physician advances the interventional instrument into the body.

3 Experimental Setup

We developed a novel accuracy test which uses only the robot itself to evaluate the registration algorithm. The error calculation excludes external influences and external error sources such as measurements made with rulers or with optical trackers (OPTOTRAK / Polaris / Polaris Vicra, Northern Digital Inc.), by using only a 15G biopsy needle as an end-effector and several CT scans taken from different robot positions within the working area. The experiment therefore followed the workflow shown in Figure 4, but without a patient or a phantom. As shown in Figure 5, the process is divided into data gathering and data analysis steps.

Data gathering. We placed the robot in a random location using the positioning arm, homed the robot, and took several CT data sets as follows. The first CT (CT_H) was taken immediately after homing the robot. Then we created a list NGP_G of three random needle guide positions (translation and rotation in the x and y directions). The intervention planning computer then commanded the needle guide (holding a 15G needle) to position n from NGP_G, and a CT (CT_n) was taken. This step was repeated for all positions on the list. After CT scans had been taken for all positions from NGP_G, the entire process was repeated twice, without any changes, to obtain statistically more significant results.

Fig. 5. Workflow of the experimental setup, divided into data gathering (home the robot, take CT_H without the needle, create the list NGP_G of robot translations/rotations in x/y, move the robot through the positions in NGP_G, and take a CT_n with the needle at each) and data analysis (select two points P on the needle in each CT_n, register the robot using CT_H to obtain transformation T, calculate the robot translation/rotation NGP_C using T and P, and calculate the errors T_max, T_mean, T_stdev, R_max, R_mean, and R_stdev between NGP_G and NGP_C).

Data analysis. The error was calculated as follows. For every CT_n, the primary author (RS) selected two points P, one near each end of the needle. CT_H was then registered to the robot as described in Section 2 to obtain the transformation T between robot space and CT (patient) space. The two points P were used as the skin entry and target points. The intervention planning computer then calculated the desired needle guide position to align with the points P. This needle guide position became position n in the list of calculated needle guide positions NGP_C. The difference between NGP_G and NGP_C is the error, because NGP_G (ground truth) was used to generate CT_n, and NGP_C is in turn based on the points P from CT_n.
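The comparison between a ground-truth pose from NGP_G and the corresponding calculated pose from NGP_C reduces to a translation distance and an angle between the guide axes. This is a minimal sketch of such a metric with toy values; the authors' exact pose parameterization (translation and rotation in x and y) is not reproduced here.

```python
import numpy as np

# Hedged sketch of a needle-guide pose error metric: translation error as
# Euclidean distance, rotation error as the angle between guide axes.
def pose_error(p_g, a_g, p_c, a_c):
    """Return (translation error, rotation error in degrees) between a
    ground-truth pose (p_g, a_g) and a calculated pose (p_c, a_c)."""
    t_err = float(np.linalg.norm(np.asarray(p_g) - np.asarray(p_c)))
    cosang = np.clip(np.dot(a_g, a_c) /
                     (np.linalg.norm(a_g) * np.linalg.norm(a_c)), -1.0, 1.0)
    return t_err, float(np.degrees(np.arccos(cosang)))
```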

4 Results

The results are shown in Table 1 (errors by series and needle guide position) and Table 2 (total error). The mean error was 0.69 mm for translation and 1.17 degrees for rotation. These errors include the inaccuracy of selecting the points on the needle.

Table 1. Errors by series and needle guide position (T in mm, R in degrees).

Series             Needle guide position
                   1       2       3
1     T_max        2.02    1.74    0.40
      T_mean       1.45    1.38    0.28
      T_stdev      0.61    0.40    0.17
      R_max        2.12    3.08    0.85
      R_mean       1.28    1.60    0.68
      R_stdev      0.75    1.28    0.20
2     T_max        0.67    0.85    0.51
      T_mean       0.61    0.67    0.40
      T_stdev      0.07    0.26    0.14
      R_max        2.18    2.10    2.18
      R_mean       1.05    1.83    1.27
      R_stdev      0.99    0.24    0.89
3     T_max        1.17    0.60    0.59
      T_mean       0.68    0.42    0.32
      T_stdev      0.43    0.22    0.24
      R_max        2.18    1.46    0.83
      R_mean       1.29    0.88    0.60
      R_stdev      0.79    0.55    0.30

Table 2. Total error (T in mm, R in degrees).

T_max      2.02
T_mean     0.69
T_stdev    0.50
R_max      3.08
R_mean     1.17
R_stdev    0.74

5 Conclusion

We have developed a system that can automatically register a small high-precision robot to a CT scan (patient) and position a surgical instrument for percutaneous access. We used a custom needle guide with a built-in spiral fiducial pattern as the robot's end-effector. To evaluate the algorithm's accuracy, we obtained CTs from different robot positions and calculated the error based only on these CTs. The mean error of the overall accuracy evaluation was 0.69 mm for translation and 1.17 degrees for rotation.


We conclude that our algorithmic approach is suitable for precise instrument placement in minimally invasive interventions and that the custom needle guide with a built-in spiral fiducial pattern enables a stable and accurate registration. The Graphical User Interface (GUI) and the overall workflow of examining the CT and defining the path are very user friendly and have already been used for several tests and demonstrations. The hardware system is also easy to set up and extremely portable. Although our software system and robot have been designed for needle placement, they may also be applicable to other procedures that require precise instrument placement, such as radio frequency ablations, needle biopsies, and gene seed placements. We will continue to build strong bridges with our clinical partners to explore such further applications.

Acknowledgments. This work was funded by U.S. Army grant W81XWH-04-1-0078 and administered by the Telemedicine and Advanced Technology Research Center (TATRC), Fort Detrick, Maryland. The content of this manuscript does not necessarily reflect the position or policy of the U.S. Government.

References

1. M. Marohn: Presentation at Surgery for Engineers 2006. Johns Hopkins University, Baltimore, 2006.

2. B. K. P. Horn. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, Vol. 4, p. 629, 1987.

3. K. Cleary, A. Melzer, V. Watson, G. Kronreif and D. Stoianovici. Interventional Robotic Systems: Applications and Technology State-of-the-Art. Minimally Invasive Therapy. 2006; 15:2; 101-113.


Motion Compensated Surgical Robot for MRI-guided Cryotherapy of Liver Cancer

Nobuhiko Hata, Jan Lesniak, Kemal Tuncali

National Center for Image-guided Therapy, Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School

75 Francis St., Boston, MA 02115 [email protected]

Background

Cryotherapy under Magnetic Resonance Imaging (MRI) guidance is a promising treatment for liver cancer, a disease responsible for over 600,000 deaths annually worldwide. However, cryoneedle placement during cryotherapy is a complex and time-consuming task. This difficulty is due in large measure to the lack of a method to accurately guide the cryoneedle while taking into account the motion of the liver, in order to cause minimal damage to surrounding organs. Our objective was to develop and evaluate a surgical robot equipped with an organ-motion compensation mechanism for use in MRI-guided cryotherapy of liver cancer. The robot is MRI safe and compatible, and assists in the accurate placement of the cryoneedle by restricting the needle guide to a pivoting motion centered at a target point within the liver, while maintaining the clinician's ability to interactively select the optimal needle trajectory. We included liver motion tracking in the control loop of the robot to synchronize the cryoneedle motion to the respiratory motion of the liver, so that the clinician can focus on correct placement within the stabilized coordinate frame of the liver. The successful application of this robotic cryoneedle guide should lead to more accurate placement and hence improved tumor ablation, and ultimately help to reduce the recurrence of cancer after cryotherapy.

Material and Methods

Clinical Setup

We used a 0.5-tesla (T) open-configuration MRI scanner (Signa SP/i, GE Healthcare Technologies, Waukesha, WI), which has two donut-shaped superconducting magnets, a 58-cm vertical gap between the magnets for the physician's access, and a 60-cm-diameter patient bore. We adapted the robot to the clinical protocol of MRI-guided cryotherapy of liver cancer described in detail in [2]. The cryotherapy delivery system is an FDA-approved device (Cryohit; Galil Medical, Yokneam, Israel) which achieves temperatures as low as -185°C at the tip of the needle probes. The gas is delivered using biocompatible and MR-compatible cryoneedles (diameter range, 2.1–2.4 mm; length range, 16–20 cm), which have an outer diameter of approximately 13–14 gauge.

MRI-compatible robot

The kinematics and design of the robot are described in detail in [1]. The base stage of the robot was placed beneath the patient's bed, to the side, and the L-shaped arm (with end-effector) was placed on this base stage with its top arm reaching above the patient's abdomen. The base has the active unit with a three degrees-of-freedom (DOF) motorized stage, and a two-DOF cryoneedle-holding device is at the end of the top arm. Using these active three-DOF and passive two-DOF motions, the robot controls the placement of the tip of the cryoneedle on a predefined remote center of motion (RCM) while allowing the physician to control the orientation of the cryoneedle and to select the optimal cryoneedle insertion path.

Robot Control

The principal control methods used in this study are a combination of RCM control and synergistic human-machine control. We also added motion compensation on top of this principal control to cancel the offset introduced by the motion of the liver.

The synergistic control method used in the robot is described in Fig. 1. Two of the three inputs given to the controller are the predefined location of the tumor and the continuously monitored orientation of the cryoneedle. The third input is the location of the liver tracked by the MRI navigator echo (described in detail in the next section). The tumor is defined by imaging the liver with volumetric MRI and manually selecting the tumor focus with the mouse at the MRI console. Given that the robot's coordinate system is calibrated to the MRI scanner's coordinate system, the predefined tumor site can be set as the RCM in the robot's coordinate system, pRCM. Note that the predefined location of the RCM in the liver is static throughout the insertion of a cryoneedle toward the target, while the RCM position with respect to the moving liver is continuously updated by MRI-based motion tracking (indicated as the gray box in Fig. 1). The location of the cryoneedle tip is dynamically updated during the physician's search for the optimal cryoneedle trajectory (white boxes in Fig. 1).
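As a small illustration of the calibration assumption above, a tumor point selected in scanner coordinates maps to the robot frame (pRCM) through a single homogeneous transform. The transform values below are placeholders, not the actual scanner-to-robot calibration.

```python
import numpy as np

# Hedged sketch: map an MRI-space tumor point into the robot frame via a
# calibrated homogeneous transform. The offset here is purely illustrative.
T_robot_from_mri = np.eye(4)
T_robot_from_mri[:3, 3] = [100.0, -50.0, 30.0]  # placeholder offset (mm)

def to_robot_frame(p_mri, T=T_robot_from_mri):
    """Map a 3D point from MRI coordinates to robot coordinates (pRCM)."""
    ph = np.append(p_mri, 1.0)  # homogeneous coordinates
    return (T @ ph)[:3]
```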


Fig. 1. Block diagram of the control loop. The remote-center-of-motion (RCM) position is predefined in the volumetric MRI. The manipulator controller computes the position command xcmd from the predefined RCM position and the current position of the needle tip p. The passive needle holder is continuously manipulated by the operator, and a set of encoder readings q from the needle holder is continuously measured. These encoder readings q, combined with the encoder readings x from the active motion stage, are processed by the kinematics computing engine to calculate the current needle tip position p. The MR navigator echo continuously monitors the location of the liver (gray box) and feeds its output into the loop.
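One cycle of this loop can be sketched in code. The following Python fragment is a hypothetical, minimal illustration only: the kinematic model, the 150 mm needle length, the function names, and the proportional gain are assumptions, not the authors' implementation.

```python
import numpy as np

def forward_kinematics(x, q):
    """Hypothetical forward kinematics: needle tip position p from
    active-stage encoder readings x (3-DOF translation) and passive
    needle-holder encoder readings q = (azimuth, elevation)."""
    azimuth, elevation = q
    needle_len = 150.0  # mm, assumed cryoneedle length from holder to tip
    direction = np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    return np.asarray(x) + needle_len * direction

def control_step(p_rcm, liver_offset, x, q, gain=0.5):
    """One cycle of the RCM loop: drive the needle tip onto the
    motion-compensated RCM while the physician freely sets q."""
    p = forward_kinematics(x, q)
    target = np.asarray(p_rcm) + np.asarray(liver_offset)  # adder in Fig. 1
    x_cmd = np.asarray(x) + gain * (target - p)            # proportional correction
    return x_cmd, p
```

Because the operator changes q between cycles, the correction is recomputed from fresh forward kinematics every iteration, exactly as the block diagram indicates.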

Motion compensation

Navigator echo [2] is a quick MR prepulse sequence that measures the position of, for example, the diaphragm before collecting imaging data. Hence, similar respiratory states of the patient can be identified and used to gate image data acquisition, so that either respiratory gating or respiratory-ordered phase encoding minimizes respiration-induced image blurring.

We have recently reported the application of navigator echo to detect the motion of the liver in the interventional MRI setting [3]. The method can localize the moving liver on the order of 10 to 100 ms by embedding the navigator echo in the MRI imaging sequence. It matches the projection profiles taken before and after the motion, along the phase- and frequency-encoding directions, by optimizing their cross-correlation. We call this process projection profile matching. The mode of registration we used to measure liver motion in this article was three-dimensional rigid translation.
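The matching step can be illustrated with a short sketch: estimate the shift between two 1-D projection profiles by maximizing their cross-correlation. This is illustrative only; the function name and the mean-subtraction normalization are assumptions, not the authors' code.

```python
import numpy as np

def profile_shift(ref, cur):
    """Estimate the 1-D shift (in samples) between two projection
    profiles by maximizing their cross-correlation. A positive result
    means `cur` is shifted toward higher indices relative to `ref`."""
    ref = np.asarray(ref, dtype=float)
    cur = np.asarray(cur, dtype=float)
    # Remove the mean so the correlation peak reflects structure, not offset
    corr = np.correlate(cur - cur.mean(), ref - ref.mean(), mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)
```

With a 100 mm FOV sampled at 256 points, one sample corresponds to roughly 0.39 mm, so the sample shift converts directly to a displacement estimate.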

Extending our preliminary study in [3], we developed a modified gradient echo sequence (TR/TE: 40 ms/7 ms, field of view (FOV) 100 mm, matrix 256 x 128), which is close to the semi-real-time imaging sequence used in intraoperative MRI. In this pulse sequence, a non-phase-encoded pulse excitation and echo acquisition process is inserted after every other echo acquisition for imaging. A total of 64 projection profiles


per axis will be obtained during one imaging pass (~10 s). In this way, we can update the location of the liver every 80 ms.

The positions of the moving liver are integrated into the synergistic motion control loop: the measured location of the liver (gray box in Fig. 1) is fed to the adder in the control logic.

Results and Discussion

As of the submission of this article, we had performed two sets of studies to assess the robot. The first was an MRI safety and compatibility test following the guideline issued by the U.S. Food and Drug Administration's Center for Devices and Radiological Health. We passed the test protocol [4] written by the vendor of the scanner, verifying the safety of the device in the scanner. Our final goal is to meet the safety and compatibility standards issued by ASTM and IEC and recommended by the FDA [9, 10]; whether we meet this goal is part of the success measure of this project.

The second set of experiments assessed the accuracy and repeatability of RCM control using a moving target phantom. The radius of the virtual remote center of motion was set to 150 mm beyond the passive two-DOF needle holder; this 150 mm needle length is typical of cryotherapy for liver tumors. An MRI quality assurance phantom was moved continuously perpendicular to the axis of the bore at a speed of 50 mm/s. The optical tracking sensor integrated in the MRI scanner measured the location of the needle tip, and this optical measurement was used as the gold standard. Our goal of achieving 3.0 mm accuracy, comparable to previously published non-MRI-compatible devices, was met.
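An accuracy assessment of this kind reduces to per-trial Euclidean tip errors against the gold standard. The sketch below shows one way such a summary could be computed; it is not the authors' analysis code, and the data in the usage example are made up.

```python
import math

def placement_errors(measured, commanded):
    """Per-trial Euclidean tip errors between gold-standard (optical)
    measurements and commanded positions, plus their RMS, the kind of
    summary used to check an accuracy goal such as 3.0 mm."""
    errs = [math.dist(m, c) for m, c in zip(measured, commanded)]
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    return errs, rms
```

For example, two trials with 1 mm and 2 mm errors give an RMS of about 1.58 mm, comfortably inside a 3.0 mm goal.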

Acknowledgement

This study was supported by NIH 1U41RR019703.

References

1. Morikawa, S., et al.: Motorized remote-center-of-motion constraint robot to assist MR-guided microwave thermocoagulation of liver tumors. In: 14th Scientific Meeting of the International Society for Magnetic Resonance in Medicine.

2. Wang, Y., et al., Algorithms for extracting motion information from navigator echoes. Magnetic Resonance in Medicine, 1996. 36(1): p. 117-123.

3. Tokuda, J., et al., Motion tracking in MR-guided liver therapy by using navigator echoes and projection profile matching. Academic Radiology, 2004. 11(1), p. 111-120.

4. MR Safety & Compatibility. GE Medical Systems (http://www.gehealthcare.com/inen/rad/mri/products/spi/safety.html), 1994.


Robotic MRI-guided Prostate Needle Placement

G.S. Fischer1, S.P. DiMaio2, I. Iordachita1, G. Fichtinger1

1 Center for Computer Integrated Surgery, Johns Hopkins University, [gfisch, iordachita, gaborf]@jhu.edu

2 Surgical Planning Lab, Brigham and Women's Hospital, [email protected]

Abstract. The efficacy of image-guided needle-based therapy and biopsy in the management of prostate cancer has been demonstrated in numerous studies. Magnetic Resonance Imaging (MRI) is an ideal modality for guiding and monitoring prostatic interventions, due to its excellent visualization of the prostate, its sub-structure and surrounding tissues. Despite these advantages, closed high-field MRI scanners (1.5T or greater) have not typically been used in prostate interventions. Limitations on the use of conventional mechatronics and the confined physical space make patient access challenging. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intra-prostatic needle placement. Prostate biopsy and brachytherapy procedures are to be performed entirely inside a closed 3T MRI scanner. The paper explains the design process, workspace analysis, component selection and the state of the system currently being prototyped.

1 Introduction

One out of every 6 men in the United States will be diagnosed with prostate cancer at some point in his life [1]. The definitive method of diagnosis is core needle biopsy, and each year approximately 1.5M core needle biopsies are performed, yielding about 220,000 new prostate cancer cases [1]. If the cancer is found to be confined to the prostate, then low-dose-rate permanent brachytherapy, performed by implanting a large number (50-150) of radioactive pellets/seeds into the prostate using thin needles (typically 18G), is a common treatment option. A complex seed distribution pattern must be achieved with great accuracy in order to eradicate the cancer, while minimizing radiation toxicity to adjacent healthy tissues. Transrectal Ultrasound (TRUS) is the current "gold standard" for guiding both biopsy and brachytherapy; however, current TRUS-guided biopsy has a detection rate of 20-30% [2], primarily due to the low sensitivity (60%) and poor positive predictive value (25%) of ultrasound [3]. Furthermore, implanted seeds cannot be reliably identified in the TRUS image [4]. MRI possesses many capabilities that TRUS lacks: high sensitivity for detecting prostate tumors, high spatial resolution, excellent soft tissue contrast, and multiplanar volumetric imaging.

Robotic assistance has been investigated for guiding instrument placement in MRI, beginning with neurosurgery [5] and later percutaneous interventions [6, 7]. Chinzei et al. developed a general-purpose robotic assistant for open MRI [8]


that was subsequently adapted for transperineal intra-prostatic needle placement [9]. Krieger et al. presented a 2-degree-of-freedom (DOF) passive, un-encoded and manually manipulated mechanical linkage to aim a needle guide for transrectal prostate biopsy with MRI guidance [10]. Other recent developments in MRI-compatible mechanisms include haptic interfaces for fMRI [11] and multi-modality actuators and robotics [12].

This work introduces the design of a novel computer-integrated robotic mechanism for transperineal prostate needle placement in up to 3T closed-bore MRI. The mechanism is capable of orienting and driving the needle, as well as ejecting radioactive seeds or harvesting tissue samples inside the magnet bore, under remote control of the physician without moving the patient out of the imaging space. A description of how this robot fits into the broader interventional system is given in [13].

2 System Layout and Architecture

A comprehensive computer-integrated needle placement system has been designed in order to accurately target planned tissue sites by minimizing the misplacement effects described above. The complete system is created by combining two major modules with a standard commercial high-field diagnostic MRI scanner: 1) a visualization, planning and navigation engine and 2) a needle placement robot. The architecture of this system is outlined in Figure 1.

[Figure: system block diagram. Planning & Navigation System (pre-op MR images, clinical planning, kinematic verification, navigation display, plan execution & status, image acquisition interface, 3D Slicer interface); Needle Placement System (manual and automatic servo control, placement device, device & needle tracking); MRI System (intra-op MR images). Signals include IMRI, IMRI_RT, IMRI_CONTROL, qd, q, XT, XA; blocks labeled a-h.]

Fig. 1. System Architecture.

Blocks a and b represent target planning by visual inspection of multi-parametric fused image datasets and by applying statistical atlases. Kinematics of the needle trajectories are evaluated here, subject to anatomical constraints, as well as constraints of the needle placement mechanism. Device and needle navigation are shown in blocks c, d and e, which are enclosed in a loop that represents device/needle positioning and sensing/localization that iterate until the needle trajectory leads to placement at the desired target. Device and needle tracking may be image-based, as illustrated by block c and its connection with


block h. Blocks d and e facilitate control of the needle driver, based on this image-based servo loop, which will provide the ability to compensate for needle and tissue deflection effects. Physical positioning and insertion of the needle occur in f and g, by a robotic device that provides remote operation of the needle while the patient is positioned within the magnet bore. This paper describes the design process of the robotic element of the broader system; the elements of the system as a whole are described in [13].
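The position/sense iteration of blocks c-e can be sketched as a simple loop that localizes the needle tip in the intra-operative image, commands a correction, and repeats until the tip is within tolerance. All callables below are hypothetical placeholders, not the system's actual interfaces.

```python
def place_needle(target, estimate_tip_from_image, move_toward,
                 tol=1.0, max_iter=20):
    """Iterative image-based placement loop: sense the needle tip in
    the intra-op image, command a correction, repeat until the tip is
    within `tol` (mm) of the target or the iteration budget runs out.
    `estimate_tip_from_image` and `move_toward` are placeholder hooks."""
    for _ in range(max_iter):
        tip = estimate_tip_from_image()
        error = [t - p for t, p in zip(target, tip)]
        if max(abs(e) for e in error) < tol:
            return tip  # converged: trajectory reached the desired target
        move_toward(error)  # correction compensates deflection effects
    return estimate_tip_from_image()
```

Because the correction is recomputed from each new image, needle and tissue deflection between iterations are absorbed by the loop rather than modeled explicitly.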

Fig. 2. Configuration of the robot with the patient: patient positioning (top-left), workspace available inside the leg support/access tunnel (top-right), robot prototype shown in a mock scanner (bottom-left), and workspace available inside the leg rest/access tunnel as shown in a mock scanner (bottom-right).

3 Design Requirements

3.1 Workspace Analysis

The system's purpose is to accurately place needles in the prostate for biopsy and brachytherapy seed placement. The patient is positioned in the supine position with the legs spread and raised as shown in Fig. 2, similar to TRUS-guided procedures, but with the legs lower due to bore constraints. The robot operates in the confined space between the patient's legs. A tunnel-like leg rest, shown in Fig. 2 (left), was designed to fixate the patient in a convenient position and to isolate the robotic manipulator from the patient's body. Fig. 2 (right) shows the robot in a mock scanner with a patient positioned in the appropriate configuration.


The average prostate size is 50mm (lateral) × 35mm (anterior-posterior) × 40mm (length) [14]. This will be verified by analyzing 50 MRI datasets obtained with the patient in the appropriate pose. The standard 60mm × 60mm perineal window of TRUS-guided brachytherapy was increased to 80mm × 100mm in order to accommodate patient variability and lateral asymmetries in patient setup. In depth, the workspace extends to 120mm superior of the perineal surface. Direct access to all clinically relevant locations in the prostate is not always possible with a needle inserted purely along the apex-base direction, due to pubic arch interference. If more than 25% of the prostate diameter is blocked (typically in prostates larger than 55cc), the patient is not a suitable candidate for implantation [14]. Many of these patients become treatable by introducing two rotational DOF (degrees of freedom) in the sagittal and coronal planes. The resulting workspace overlaid on the leg rest/access tunnel is shown in Fig. 2 (right).

3.2 System Requirements

The primary motions of the robot's base include two prismatic motions and two rotational motions upon a passive linear slide. In addition to these base motions, application-specific motions are also required; these include needle insertion, cannula retraction, needle rotation and actuation of the biopsy gun. The accuracy of the individual servo-controlled joints is targeted to be 0.1mm, with needle placement accuracy better than 1.0mm in free space. The overall system accuracy will be reduced by effects such as imaging resolution, needle deflection, and tissue deformation. The target accuracy is 2mm, which approximates the technical accuracy of TRUS-guided procedures and is sufficient to target the minimal clinically significant focus size of 1/2cc [15]. Maximum needle insertion forces were determined to be approximately 15N; the robot is designed to accommodate greater than twice this force (40N). The specifications for each motion are shown in Table 1. For a proof-of-concept and Phase-1 clinical trials, the two rotational DOF are not required.

Table 1. Kinematic Specifications

Degree of Freedom           Motion       Requirements

1) Axial Robot Placement    1m           Manual with repeatable stop
2) Vertical Motion          0-100mm      Precise servo control
3) Elevation Angle          +15°, −0°    Precise servo control
4) Horizontal Motion        ±40mm        Precise servo control
5) Azimuth Angle            ±15°         Precise servo control
6) Needle Insertion         120mm        Cooperative or Automated
7) Cannula Retraction       60mm         Cooperative or Automated
8) Needle Rotation          360°         Manual or Automated

3.3 MRI Compatibility Requirements

The design of the manipulator is complicated by the limited choice of materials and actuators imposed by the high-field (1.5-3T) MRI environment. The following section details material and component selection, with consideration of MR compatibility issues.


4 System and Component Design

4.1 Overview

Development will take place in several phases, through an evolution of prototypes. The first embodiment will have the vertical and horizontal motions actuated; this yields a high-resolution needle guide functionally similar to template-based conventional brachytherapy. The robot is mounted on a slide that allows manual placement of the robot inside the leg rest/tunnel, with a locking mechanism that ensures repeatable placement with respect to the scanner's coordinate system. The robot can be manually translated to the foot of the bed to reload brachytherapy needles, to remove the biopsy sample, or for rapid removal of the entire robot in case of emergency. The simplified 2-DOF design is to validate that the robot: 1) has the desired workspace, 2) functions properly in a 3T field, 3) does not cause prohibitive imaging artifacts and distortion and 4) yields sufficient accuracy in joint motion and overall needle placement.

The next design iteration will produce a 4-DOF robot base with the links made out of high-strength, dimensionally stable, highly electrically insulating and sterilizable plastic (e.g. Ultem or PEEK). The 4-DOF base will have a modular platform that allows for different end effectors to be mounted on it. The two initial end effectors will accommodate biopsy guns and brachytherapy needles. Both require an insertion phase, and the latter requires an additional controlled linear motion to accommodate cannula retraction to release the brachytherapy seeds. Detailed design of the end effectors is not presented here, but includes considerations for sterilization and draping of the mechanism, as per clinical requirements.

4.2 Mechanism Design

The following additional design requirements have been adopted: 1) linear motion should be decoupled from the rotations, since the majority of procedures will not require the rotational DOF, 2) actuator motion should be in the axial direction (aligned with the B0 field) to maintain a compact profile, and 3) extension in both the vertical and horizontal planes should be telescopic to avoid linear guides that may prevent the robot from fitting in the constrained workspace.

The four primary base DOF (Motions 2-5 in Table 1) are broken into two decoupled 2-DOF planar motions. In order to maintain high rigidity, planar bar mechanisms are used. Motion in the vertical plane includes 100mm of vertical travel and up to 15° of positive elevation angle. This is achieved using a modified version of a scissor lift mechanism. By coupling two such mechanisms as shown in Fig. 3 (left), 2-DOF motion can be achieved. For prismatic motion alone, both slides are moved together; to tilt the surface, the front slide is fixed and the rear is moved relative to it.
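The coupling described above can be illustrated with a toy kinematic model: each scissor column converts a slide position into a support height, and the difference between the front and rear heights produces the tilt. The link length and support separation below are arbitrary assumptions, not the robot's actual dimensions.

```python
import math

LINK = 60.0   # mm, scissor link length (assumed)
SEP = 100.0   # mm, front-rear support separation (assumed)

def column_height(slide):
    """Height of one scissor column as its slide moves inward."""
    return math.sqrt(LINK**2 - slide**2)

def platform_pose(front_slide, rear_slide):
    """Height at the front support and elevation angle (degrees) for
    the coupled mechanism of Fig. 3 (left): moving both slides together
    gives pure vertical travel; moving only the rear slide gives tilt."""
    h_front = column_height(front_slide)
    h_rear = column_height(rear_slide)
    angle = math.degrees(math.atan2(h_rear - h_front, SEP))
    return h_front, angle
```

The model makes the decoupling concrete: equal slide positions always yield zero elevation angle regardless of height, which is why prismatic and tilt motions can share one pair of actuators.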

Motion in the horizontal plane is achieved by coupling two straight-line-motion planar bar mechanisms as shown in Fig. 3 (right). By combining two straight-line motions, both prismatic and rotational motions can be realized in


the horizontal plane. Actuation is provided by compact pneumatic cylinders oriented in the axial direction. Fig. 3 (right) shows the mechanism in the 1-DOF configuration; it is straightforward to add the rotational motion in future designs.

Fig. 3. Mechanism for motion in the vertical plane (top). Mechanism design for motion in the horizontal plane (bottom).

4.3 Actuator Design

In order to avoid electric motors and the complex mechanical transmissions required to situate them outside of the magnet bore [11], we have elected to use pneumatic actuators.

To keep the actuators in close proximity to the robot, the alternatives to electric motors are hydraulic or pneumatic actuators. Hydraulic actuators offer the advantages of high stiffness and near-incompressible flow, at the expense of speed/bandwidth, inconvenient fluid connections when a permanently closed system is not possible, and the potential for leaks. Pneumatic actuators offer relatively high speed, power density, availability and cost effectiveness, at the expense of decreased stiffness and less straightforward control due to nonlinearities including the compressibility of air and relatively large friction forces. We chose pneumatic actuators made out of MR-compatible materials, with servo valves operated by piezoelectric actuators.

4.4 Position Sensing

Standard optical encoders have been tested in a 3T MRI scanner for functionality and in a 1.5T scanner for induced effects in the form of imaging artifacts. From


the experiment's results, it appears that 50mm beyond the encoder (positioned at the isocenter on top of a cylindrical phantom), there is little or no induced artifact when imaged in a GE Signa Excite 1.5T scanner. This is very promising because the design can be made such that the sensing elements are sufficiently far from the scanner isocenter. Practical methods of robot tracking are discussed in [10].

4.5 Robot Control

At present, our first choice for valves is a pair of proportional pressure regulators per axis, providing differential pressure control of each cylinder. Piezoelectrically actuated proportional air pressure valves that appear to be well suited to the task are available off the shelf. The valves will be in an enclosure situated near the foot of the scanner bed; this is a compromise between proximity of the controls to the MRI scanner and the length of tubing between the valves and the actuators.
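The differential-pressure scheme can be sketched as follows: a desired axial force is split into two regulator setpoints around a mid-range bias, so force of either sign is available from a single-supply system. The piston area and pressures below are assumed values for illustration, not the actual hardware specification.

```python
PISTON_AREA = 5.0e-4   # m^2, assumed effective cylinder area
P_SUPPLY = 6.0e5       # Pa, assumed regulated supply pressure
P_BIAS = 3.0e5         # Pa, mid-range bias applied to both sides

def pressure_commands(force_desired):
    """Split a desired axial force (N) into two proportional-regulator
    setpoints, one per cylinder side, clamped to the supply range."""
    dp = force_desired / PISTON_AREA  # required differential pressure
    p_a = min(max(P_BIAS + dp / 2.0, 0.0), P_SUPPLY)
    p_b = min(max(P_BIAS - dp / 2.0, 0.0), P_SUPPLY)
    return p_a, p_b

def net_force(p_a, p_b):
    """Net axial force produced by the differential pressure."""
    return (p_a - p_b) * PISTON_AREA
```

With these assumed values, the 40N design force requires a differential pressure of 0.8 bar, well inside the bias-to-supply range on both sides.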

Fig. 4. System interfaces, shown for a single representative actuator. The low-level joint controller and valves are placed in the scanner room at the foot of the bed; actuators and encoders are in the scanner bore.

4.6 Controller Interface

Low-level control software will be implemented on an embedded PC-104 computer using the Real Time Application Interface (RTAI) Linux kernel extension to provide the accurate clock necessary for PC-based servo control. A PC-104 analog output module will be used to control the valves (two outputs per axis) and an analog input module will be used to monitor pressure sensors on each valve output. An FPGA module will be used for monitoring the linear optical encoders on each joint, and will also provide digital I/O for limit switches and


brakes. The PC-104 computer stack, DC-DC power regulator, valves, and pressure sensors will be located in an RF-shielded enclosure in the scanner room near the foot of the scanner bed (about 2m from the bore). Connections into and out of the scanner room will include only a regulated compressed air supply and a fiber optic ethernet connection.
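The per-joint servo cycle (encoder reading in, valve command out) can be outlined as a simple PI loop; this skeleton is illustrative only and does not represent the actual RTAI control code or hardware driver API.

```python
class JointServo:
    """Skeleton of one joint's servo cycle: compare the encoder
    position against the setpoint and produce a force command that a
    pressure-split stage would convert into two valve setpoints."""

    def __init__(self, kp, ki, dt):
        self.kp = kp            # proportional gain (assumed units N/mm)
        self.ki = ki            # integral gain
        self.dt = dt            # servo period, e.g. 0.001 s at 1 kHz
        self.integral = 0.0

    def step(self, setpoint, position):
        """One servo period: returns the axial force command."""
        error = setpoint - position
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral
```

In the real system this loop would run under the RTAI timer, read the FPGA encoder counters, and write the two analog valve outputs each period; all gains above are placeholders.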

A 3D Slicer-based (www.slicer.org) GUI, running on a workstation in the scanner's console room, will coordinate imaging, targeting and intra-operative guidance. The physician will specify needle targets and trajectories, which will be positioned by the robot. In the interest of safety, the needles will be inserted under user control, based on feedback from real-time interventional imaging. This system will interface with the low-level controller, as illustrated in Fig. 4.

Fig. 5. Robot mechanism design, shown on the manual gross positioning slide with the brachytherapy end effector.

5 Results

Design and analysis are complete, material selection has been finalized and the controller selected. The robot is in the process of being constructed. The controller has been configured and is being programmed. Initial trials of pneumatic cylinder control with proportional valves have begun and have been successful thus far; a rigorous comparison of control algorithms will be undertaken in the near future. Initial trials with some materials (including aluminum and plastics) and sensors (including optical encoders) in the MRI scanner have proved successful as well. The final design of the robot is shown in Fig. 5. The immediate target is to implement a 2-DOF prismatic base in the context of an MRI scanner and leg rest/tunnel, in the actual workspace. Thus far, a 1-DOF version has been constructed, as shown in Fig. 6. It is being used for initial accuracy and MR compatibility validation experiments.

6 Conclusions

MRI-guided percutaneous interventions are expected to make an important contribution to the management of prostate cancer. The ability of MRI to image


Fig. 6. 1-DOF robot prototype for validation experiments.

many different physiological parameters within tissues is of interest in the planning and staging of clinical interventions; however, the accuracy of interventional guidance and navigation of instruments in soft tissues has fallen behind the fidelity and resolution of new planning and targeting methods, thus limiting the utility of these methods, as well as our ability to validate them. We have described a novel computer-assisted planning and placement system for percutaneous prostate intervention. We aspire to significantly reduce the misplacement and placement uncertainty that is now on the order of 5-10mm due to TRUS beamwidth. This work is also of relevance to other organ systems and diseases that require targeted needle placement inside an MRI scanner.

Acknowledgment

This work was supported by NIH 1R01EB002963 and NSF EEC-97-31478.

References

1. Jemal, A.: Cancer statistics, 2004. In: CA Cancer J Clin. Volume 54(8). (2004)

2. Terris, M.K., Wallen, E.M., Stamey, T.A.: Comparison of mid-lobe versus lateral systematic sextant biopsies in detection of prostate cancer. In: Urol Int. Volume 59. (1997) 239–242

3. Keetch, D.W., McMurtry, J.M., Smith, D.S., Andriole, G.L., Catalona, W.J.: Prostate specific antigen density versus prostate specific antigen slope as predictors of prostate cancer in men with initially negative prostatic biopsies. In: J Urol. Volume 156(2 Pt 1). (1996) 428–31

4. Han, B., Wallner, K., Merrick, G., Butler, W., Sutlief, S., Sylvester, J.: Prostate brachytherapy seed identification on post-implant TRUS images. In: Med. Phys. Volume 30(5). (2003) 898–900

5. Masamune, K., Kobayashi, E., Masutani, Y., Suzuki, M., Dohi, T., Iseki, H., Takakura, K.: Development of an MRI-compatible needle insertion manipulator for stereotactic neurosurgery. In: J Image Guid Surg. Volume 1(4). (1995) 242–8

6. Felden, A., Vagner, J., Hinz, A., Fischer, H., Pfleiderer, S.O., Reichenbach, J.R., Kaiser, W.A.: ROBITOM - robot for biopsy and therapy of the mamma. In: Biomed Tech (Berl). Volume 47 Suppl 1 Pt 1. (2002) 2–5


7. Hempel, E., Fischer, H., Gumb, L., Hohn, T., Krause, H., Voges, U., Breitwieser, H., Gutmann, B., Durke, J., Bock, M., Melzer, A.: An MRI-compatible surgical robot for precise radiological interventions. In: Computer Aided Surgery. (2003) 180–191

8. Chinzei, K., Hata, N., Jolesz, F.A., Kikinis, R.: MR compatible surgical assist robot: system integration and preliminary feasibility study. In: Medical Image Computing and Computer Assisted Intervention. Volume 1935. (2000) 921–933

9. DiMaio, S.P., Pieper, S., Chinzei, K., Fichtinger, G., Tempany, C., Kikinis, R.: Robot assisted percutaneous intervention in open-MRI. In: 5th Interventional MRI Symposium. (2004) 155

10. Krieger, A., Susil, R.C., Menard, C., Coleman, J.A., Fichtinger, G., Atalar, E., Whitcomb, L.L.: Design of a novel MRI compatible manipulator for image guided prostate interventions. In: IEEE Trans. on Biomedical Engineering. Volume 52. (2005) 306–313

11. Ganesh, G., Gassert, R., Burdet, E., Bleule, H.: Dynamics and control of an MRI compatible master-slave system with hydrostatic transmission. In: International Conference on Robotics and Automation. (2004) 1288–1294

12. Stoianovici, D.: Multi-imager compatible actuation principles in surgical robotics. In: International Journal of Medical Robotics and Computer Assisted Surgery. Volume 1. (2005) 86–100

13. DiMaio, S., Fischer, G., Haker, S., Hata, N., Iordachita, I., Tempany, C., Fichtinger, G.: System for MRI-guided prostate interventions. In: IEEE International Conference on Biomedical Robotics and Biomechatronics. (2006)

14. Wallner, K., Blasko, J., Dattoli, M.: Prostate Brachytherapy Made Complicated,2nd Ed. SmartMedicine Press (2001)

15. Bak, J., Landas, S., Haas, G.: Characterization of prostate cancer missed by sextant biopsy. In: Clin Prostate Cancer. Volume 2(2). (2003) 115–118


Needle Insertion Point and Heading

Optimization with Application to Brachytherapy

Ehsan Dehghan and Septimiu E. Salcudean

Electrical and Computer Engineering Department, University of British Columbia, Vancouver, Canada,

{ehsand, tims}@ece.ubc.ca

Abstract. This paper presents a method of finding the optimal needle insertion point, orientation and depth for needle insertion in 3D tissue models. The goal is to minimize the distance between predefined targets and a needle inserted into tissue. The proposed iterative method uses the best-fit 3D line to the displaced targets at each iteration as a candidate for the optimal insertion line. This approach is then applied to a brachytherapy simulator to minimize seed misplacement error. The targets are designed to lie on a line in the undeformed configuration inside the prostate mesh. The optimization algorithm converges quickly and decreases the error effectively.

1 Introduction

Low dose rate brachytherapy is a widely used treatment for prostate cancer. This procedure involves the permanent placement of radioactive capsules or "seeds" of 125I or 103Pd into the prostate and peri-prostatic tissue using a needle. During the pre-operative procedure, trans-rectal ultrasound (TRUS) is used to acquire transverse images of the prostate tissue. These images are segmented manually and used to create a 3D image of the tissue. Seed positions are planned using this 3D image to deliver sufficient radiation to kill cancerous tissue while maintaining a tolerable dose to the urethra and rectum [1]. Radioactive capsules and spacers are loaded into a needle to match the plan and delivered to the designated positions. The target locations are usually along a straight line parallel to the y axis. The needles are inserted by the physicians through a grid with parallel holes in the x-z plane, using real-time TRUS imaging and X-ray fluoroscopy (see Fig. 1).

Due to the forces applied by the needle, the prostate tissue undergoes deformations and rotations of up to 20 degrees [2]. Limited visual feedback, tissue deformation and target displacements demand highly skilled physicians. Seed placement error is still a common problem in brachytherapy [3], causing complications such as impotence or urinary incontinence. In order to train physicians and provide pre-surgery plans that decrease dose errors, brachytherapy simulators and planners are in demand.

The guiding grid typically used in brachytherapy allows the needle to move parallel to the y axis. However, due to prostate deformation, target positions


[Figure: schematic showing the prostate, bladder, pubic bone, ultrasound probe, brachytherapy needle, guiding grid, and the x, y, z axes.]

Fig. 1. Insertion of the needle during prostate brachytherapy

diverge from this line during insertion. In this case, insertion of the needle at a different orientation can minimize the error between the seed positions and the predefined targets. This paper focuses on the optimization of the needle entry point and heading to decrease the error in seed placement in the presence of tissue deformation. We assume that a needle guide such as a robotic system [4] is available to place and orient the needle. This optimization is performed for rigid needles. Although brachytherapy needles are slightly flexible, this optimized insertion point and orientation can still be used as a starting point for further refinement, if necessary.
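The core step of the iterative method, fitting a 3-D line to the displaced targets, can be sketched as a least-squares fit using the centroid and the first right-singular vector of the centered points. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares best-fit 3-D line through a set of displaced
    target points: returns a point on the line (the centroid) and a
    unit direction (first right-singular vector of the centered data).
    At each iteration this line is the candidate insertion line."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered points; vt[0] minimizes the sum of squared
    # perpendicular distances from the points to the line
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]
```

In the iteration, the simulator would be re-run with the needle inserted along this candidate line, the newly displaced target positions re-fit, and the process repeated until the line stops changing.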

In the next section, a review of previous work on needle insertion simulation and planning is presented. The needle insertion simulation procedure is described in the third section, followed by the optimization method. Simulation results are presented next, and conclusions are drawn.

2 Related work

The Finite Element Method (FEM) is a powerful method used extensively to simulate tissue deformation in surgery and needle insertion.

2D and 3D tissue-needle interaction models have been presented before [5–7]. In [6], a 3D mesh of the prostate tissue is generated and then used to simulate brachytherapy needle insertion using linear FEM and condensation [8]. Using a 2D model for tissue, needle path planning has been proposed in [9] to steer the needle by manipulating the needle base in order for the tip to reach a target while avoiding obstacles.

Alterovitz et al. [10] introduced a search-based sensorless planning algorithm to optimize the insertion height and depth in a 2D model of prostate tissue. They also optimized the insertion point, depth and angle for a highly flexible bevel-tip needle, proposed by Webster et al. [11], using a gradient descent method in 2D [12]. In another work, they steered a highly flexible bevel-tip needle under Markov motion uncertainty inside the tissue, using needle axial rotation as input, and reached a target while avoiding obstacles [13].

Glozman and Shoham [14] used a spring mesh to simulate the tissue and a linear beam model to simulate the needle. They used this approach to steer a



Fig. 2. The simplified mesh of prostate and surrounding tissue.

needle inside a 2D tissue model using the inverse kinematics of the needle. They did not optimize the needle insertion point.

Okamura et al. [15] modeled the force applied to tissue by a needle using three components: 1) capsule stiffness; 2) friction; and 3) cutting. Kataoka et al.

[16] presented the tip and friction forces applied to a needle during penetration into a canine prostate. A needle-tissue force interaction model was introduced by DiMaio and Salcudean [5]. They identified this force by insertion of a rigid needle into a PVC phantom. Heverly et al. [17] used an energy-based fracture mechanics approach to show that the velocity dependence of tissue properties can reduce tissue motion at increased needle velocities.

3 Simulation Method

Since the optimization method in this work is simulation based, the FEM is used to simulate needle insertion. A rigid needle is inserted into a much simplified prostate deformation simulator: a cube of material containing the prostate and the surrounding tissue (see Fig. 2). A linear elastic material model is utilized and the simulation is performed in quasi-static mode, in which the tissue is assumed to be in its equilibrium at each time sample. The linear FEM leads to a set of linear algebraic equations. The stiffness matrix in this case can be inverted and saved in memory offline, allowing the fast needle insertion simulation method described in [5] to be used. In this method, unknown forces and displacements are computed only for the nodes on the needle. The needle force profile and the stick-slip friction model of [5] are used as the force applied by the needle to the tissue.
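The offline-inversion idea above can be sketched as follows. This is a minimal illustration, not the authors' Matlab code; `needle_dofs` is a hypothetical index list of the degrees of freedom touched by the needle, and the stiffness matrix is assumed dense and already reduced by the boundary conditions:

```python
import numpy as np

def offline_inverse(K):
    """Offline stage: invert the reduced linear stiffness matrix once
    and keep it in memory (quasi-static, linear-elastic assumption)."""
    return np.linalg.inv(K)

def needle_node_displacements(K_inv, needle_dofs, f_needle):
    """Online stage: with forces applied only at the needle nodes, the
    displacements at those same nodes need only the corresponding block
    of K^-1 (the condensation idea of [5, 8])."""
    block = K_inv[np.ix_(needle_dofs, needle_dofs)]
    return block @ f_needle
```

Because only a small block of the precomputed inverse is applied per time step, the per-step cost scales with the number of needle nodes rather than with the whole mesh.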

The needle may enter an element from any point on the surface of the element. However, the needle-tissue interaction system applies force and displacement boundary conditions only to nodes. To fulfill this requirement, every time the needle tip comes close to a new element surface, the closest node to the needle tip on that surface moves to the tip location. A computationally efficient method based on the matrix inversion lemma is used to update the stiffness matrix after mesh adaptation [5, 6].


This FEM simulator works as a function in the optimization program as:

u_i = f_i(p_s, v, d),   i ∈ {1, 2, · · · , N}   (1)

In this function the insertion point p_s, the insertion heading v and the insertion depth d are the inputs, and the displaced target positions u_i are the outputs. N is the number of targets and u_N is the position of the distal target. Tissue elasticity parameters, boundary conditions, target initial positions and the needle-tissue force profile are tissue- and needle-dependent parameters in this function. We assume that these parameters have been obtained by the techniques surveyed in Section 2.

4 Optimization Method

The goal of the optimization method is to find an optimal insertion point, depth and orientation to minimize the distance between the displaced targets and the rigid needle in the deformed configuration.

A global coordinate system Oxyz and a needle-attached coordinate system OxNyNzN are defined as shown in Fig. 2. Since the needle is axially symmetric, its coordinate system orientation can be defined by just two angles: pitch and yaw. The insertion point in the x−z plane and the insertion depth are the other parameters that need to be optimized.

Consider a deformed body after insertion of the needle from a point on the surface along a vector (see e.g. Fig. 3(b)). In response to the needle forces, the targets are displaced. The main idea of the algorithm is to fit a line to the displaced targets. This line is then used to define the new insertion point, heading and depth. Therefore, the iterative algorithm below is used to find the optimal values for the insertion point, heading and depth:

1. To initialize the algorithm, align the needle with the line passing through the targets in the undeformed configuration. This line can be defined with an origin p and a vector v. The insertion depth is the distance of the distal target from the surface of the tissue (see Fig. 3(a)).

2. Insert the needle along this line to the desired depth. Compute the target locations u_i in the deformed tissue using (1).

3. Fit a 3D line to the target locations in the deformed tissue (see Figs. 3(b)-3(d)). The line parameters can be calculated as:

(p^{k+1}, v^{k+1}) = arg min_{p,v} Σ_{i=1}^{N} min_α ‖p + αv − u_i^k‖²,   (2)

where k is the iteration number. It can be shown that p^{k+1} is the centroid (mean) of the displaced target locations and v^{k+1} is the right singular vector corresponding to the largest singular value of the matrix below:

A = [u_1^k  u_2^k  · · ·  u_N^k]^T   (3)

Matlab® is used to compute the singular values and vectors.


Table 1. Simulation parameters

1st target location (mm): (35, 10, 55)
2nd target location (mm): (35, 20, 55)
3rd target location (mm): (35, 30, 55)
Optimized needle insertion point (mm): (30.8, 70, 61.6)
Optimized insertion depth (mm): 70.4
Optimized needle orientation (yaw, pitch): (6.05°, 10.37°)
Error in 1st iteration (mm): 14.9
Error in 3rd iteration (mm): 0.8

4. Find the new insertion point p_s^{k+1} as the intersection of the fitted line and the front surface of the tissue.

5. Find the new insertion depth as:

d^{k+1} = arg min_α ‖p_s^{k+1} + αv^{k+1} − u_N^k‖²   (4)

6. Go to step 2 unless the convergence criterion is met.

Note that, due to tissue deformation, it is usually impossible to pass a straight line through more than two nodes that were originally on a straight line. Therefore, the error never converges to zero. The criterion for convergence of the algorithm is the proximity of the computed parameters in two consecutive iterations (see Fig. 3(d)). The Euclidean norm of the difference between the optimization parameters at consecutive iterations is used for this purpose. Alternatively, the distance between the targets and the needle can be used as another convergence criterion.
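The six steps above can be sketched as a short loop. This is an illustrative reimplementation (the paper used Matlab), in which `simulate` stands in for the FEM simulator of Eq. (1), the entry surface is simplified to the plane y = 0, and the target positions are centred before the SVD, as a least-squares line fit requires:

```python
import numpy as np

def optimize_insertion(simulate, p0, v0, d0, tol=1e-2, max_iter=20):
    """Iterative insertion-parameter optimization of Section 4.
    `simulate(p, v, d)` returns the N x 3 array of displaced targets."""
    p, v, d = np.asarray(p0, float), np.asarray(v0, float), float(d0)
    for _ in range(max_iter):
        u = simulate(p, v, d)            # step 2: deformed target positions
        c = u.mean(axis=0)               # step 3: centroid of the targets
        _, _, vt = np.linalg.svd(u - c)  # principal direction of the fit
        v_new = vt[0] if vt[0][1] >= 0 else -vt[0]  # keep heading along +y
        alpha = -c[1] / v_new[1]         # step 4: intersect entry plane y = 0
        p_new = c + alpha * v_new
        d_new = float((u[-1] - p_new) @ v_new)      # step 5: distal depth
        delta = np.linalg.norm(np.r_[p_new - p, v_new - v, d_new - d])
        p, v, d = p_new, v_new, d_new
        if delta < tol:                  # step 6: parameters barely changed
            break
    return p, v, d
```

With a deformation-free stand-in simulator (targets fixed on a line), the fitted line passes exactly through the targets and the loop converges immediately.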

5 Simulation Results

Three targets were defined in the prostate mesh along a line parallel to the y axis, as shown in Fig. 3(a). The needle insertion point, depth and orientation were optimized using the above-mentioned algorithm. The simulation assumptions and results are shown in Table 1. The undeformed mesh and the deformed meshes for three iterations are shown in Fig. 3. The optimization method converged in 3 iterations. The parameter vector changed by just 1% over the two final iterations. It can be seen in Fig. 3(d) that all the targets lie very close to the needle. In this condition the distance from each target to the needle was less than the diameter of the brachytherapy needle (see Table 1).

Several cases with different target positions were simulated. The proposed algorithm always showed fast convergence, in 3 to 5 iterations. The simulation was programmed in Matlab®. Although the computation time needed for each iteration is a function of the insertion depth, in the simulations reported here the mean computation time was 50 s per iteration. With optimization of the code in C++, as in [6], much faster solutions are possible.


[Fig. 3 plots, axes X (mm), Y (mm), Z (mm): (a) The undeformed configuration, showing the insertion line and insertion depth; (b) After the first iteration, showing the new insertion line and depth; (c) After the second iteration, showing the new insertion line and depth; (d) After the third iteration.]

Fig. 3. Simulation iterations for three targets. (b), (c) and (d) show the positions of the targets in the deformed tissue after insertion of the needle with the insertion parameters calculated in (a), (b) and (c), respectively. The 3D fitted line is shown as a dotted line. The front surface of the prostate mesh is removed to show the positions of the targets and the needle inside.

6 Conclusion and Future Work

An optimization method was introduced to find the optimal insertion point, orientation and depth for a rigid needle to reach targets in deformable tissue. This algorithm was used to implement a brachytherapy simulation program. In our simulations we observed fast convergence and decreased errors in reaching the targets. This method can be used with any deformable model of the tissue, since the assumptions in the simulation part place no restriction on the optimization method.

In the future, validation studies will be carried out using a 5-degree-of-freedom robot to orient and insert the needle into tissue phantoms. In addition, needle flexibility will be integrated into the optimization method. The proposed algorithm first finds the optimal insertion point and heading to minimize the distance of the targets from the rigid needle. Then the rigid needle location can


be used as a starting point for the flexible needle steering method introduced in [9] to pass the needle through the targets.

References

1. Pouliot, J., et al.: Optimization of permanent 125I prostate implants using fast simulated annealing. Int. Journal of Radiation Oncology, Biology, Physics 36 (1996) 711–720
2. Lagerburg, V., et al.: Measurement of prostate rotation during insertion of needles for brachytherapy. Radiotherapy and Oncology 77 (2005) 318–323
3. Taschereau, R., et al.: Seed misplacement and stabilizing needles in transperineal permanent prostate implants. Radiotherapy and Oncology 55 (2000) 59–63
4. Wei, Z., et al.: Robot-assisted 3D-TRUS guided prostate brachytherapy: System integration and validation. Medical Physics 31 (2004) 539–548
5. DiMaio, S.P., Salcudean, S.E.: Needle insertion modeling and simulation. IEEE Trans. Robotics and Automation 19 (2003) 864–875
6. Goksel, O., et al.: 3D needle-tissue interaction simulation for prostate brachytherapy. In: Proc. MICCAI. (2005) 827–834
7. Alterovitz, R., et al.: Simulating needle insertion and radioactive seed implantation for prostate brachytherapy. In: Medicine Meets Virtual Reality 11. (2003) 19–25
8. Bro-Nielsen, M., Cotin, S.: Real-time volumetric deformable models for surgery simulation using finite elements and condensation. In: Proc. Computer Graphics Forum, Eurographics'96. Volume 15. (1996) 57–66
9. DiMaio, S.P., Salcudean, S.E.: Needle steering and motion planning in soft tissue. IEEE Trans. Biomedical Engineering 52 (2005) 965–974
10. Alterovitz, R., et al.: Sensorless planning for medical needle insertion procedures. In: Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems. Volume 3. (2003) 3337–3343
11. Webster III, R.J., et al.: Nonholonomic modeling of needle steering. In: Proc. 9th Int. Symp. Experimental Robotics. (2004)
12. Alterovitz, R., et al.: Planning for steerable bevel-tip needle insertion through 2D soft tissue with obstacles. In: IEEE Int. Conf. on Robotics and Automation. (2005) 1640–1645
13. Alterovitz, R., et al.: Steering flexible needles under Markov motion uncertainty. In: Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems. (2005) 1570–1575
14. Glozman, D., Shoham, M.: Flexible needle steering and optimal trajectory planning for percutaneous therapies. In: Proc. MICCAI. (2004) 137–144
15. Okamura, A., et al.: Force modeling for needle insertion into soft tissue. IEEE Trans. Biomedical Engineering 51 (2004) 1707–1716
16. Kataoka, H., et al.
17. Heverly, M., et al.: Trajectory optimization for dynamic needle insertion. In: IEEE Int. Conf. on Robotics and Automation. (2005) 1646–1651


Composite Visual Tracking of the Moving Heart Using Texture Characterization

Aurélien Noce, Jean Triboulet, Philippe Poignet

LIRMM - Robotics Department, 161 rue Ada, Montpellier, France{noce,triboule,poignet}@lirmm.fr

Abstract. Among the issues raised by Computer Vision, the tracking of one or several regions of interest is a major consideration. Achieving accuracy and robustness is critical in Visual Servoing, and more generally in vision-based applications, because of the sensitivity of the algorithms. To address these issues in the specific case of MIS heart surgery, we developed a texture-based approach that provides good results on experimental beating heart images. Our approach consists in integrating a textural characterization of the region of interest to reinforce a tracking procedure based on classical approaches such as correlation. We show that an appropriate combination of image descriptors can improve the efficiency of the tracking, especially on low-contrast medical images.

1 Introduction

During the past few decades, Computer Vision has shown its potential in an ever-growing number of applications, from robot manipulation to medical assistance. This growing interest in vision is driven by the increase in computing power, which makes it possible to exploit the information available from images in real time. This, coupled with recent advances in visual servoing schemes [3, 2] and pattern tracking techniques [4, 1], settles the basis of upcoming vision-centered applications.

Nevertheless, extracting and interpreting structured information from pixel data is a very complex challenge that brings in several research areas: signal processing, visual tracking, visual servoing, etc. Moreover, the complexity of real experimental images cannot be precisely modeled, which is a real issue when considering both the precision of the tracking and the overall robustness of the system.

1.1 Visual servoing of the beating heart

This work is part of a medical robotics project in the Laboratory aiming at developing a robotic platform for motion compensation in beating heart surgery. The need for efficient tracking appeared during experiments on the beating heart, which pointed out the complexity of using artificial landmarks to evaluate the position of the heart. Since our primary goal is to rely almost entirely on visual information, we decided to investigate the field of markerless visual tracking. But visual tracking on a beating heart is an extremely complex problem.


Fig. 1. Sample heart surface

This kind of application raises many questions in terms of image processing: the movements of the organ are fast and complex, and the surface is deformable, reflective (see Fig. 1) and poorly contrasted. In this context, classical tracking algorithms are not reliable for tracking specific points, even on more contrasted regions like the coronary arteries.

Several works have investigated the issue of beating heart servoing, starting with the work of Nakamura [14], which led to further developments [17, 6, 16] focused on motion compensation for surgical applications. These works have pointed out several issues, such as the complexity of the heart motion and the difficulty of tracking this motion visually.

1.2 Our approach

Considering the problem of motion compensation of the heart, we first investigated techniques for 3D reconstruction [13] using artificial markers on the heart surface. But as our goal is to avoid the use of those markers, we also studied texture analysis tools [15] and their performance on experimental images. In this paper, we propose a composite tracking algorithm based on a textural characterization of the region of interest that overcomes the limitations of former approaches. Texture tools (Tab. 2) are mainly used in image registration and analysis, to identify specific regions without using a precise model or pattern. In medical image processing, texture is used in expert systems to find tumors [11] and other diseases. Those approaches are computationally expensive, so we selected an optimal subset of texture tools in order to reduce the global computation and perform real-time tracking.

This paper is organized as follows. In Section 4, we give an overview of the textural features we considered and our method to select the most accurate ones, as already pointed out in [15]. In Section 2, we introduce the proposed tracking algorithm. In Section 3, we present tracking results on artificial and experimental images, and compare the performance with respect to other commonly used tracking methods. Finally, Section 4.2 sums up those results.


[Fig. 2 block diagram: the tracked pattern and the region of interest feed a block-matching stage (correlation/SSD) and a texture-processing stage (texture feature computation for the pattern and the ROI, then texture distance processing); the two scores are combined with the weights λ, γ, and minimisation of the composite criterion yields the new position of the pattern.]

Fig. 2. Outline of the tracking procedure

2 Visual Tracking

Several approaches have been developed to perform visual tracking in image sequences: some of them are based on finding similarities between consecutive images [9], while others deduce the movement of the target from the optical flow [19], or use pattern variations during motion [10] or shape descriptors [12]. For simplicity, and considering the desired high sampling frequency, we consider that the tracked patch has limited motion between two consecutive frames. We also perform the tracking using only consecutive frames and do not use, for the moment, temporal information.

2.1 Composite tracking method fundamentals

Initial tests performed on experimental beating heart sequences showed that optical flow methods give limited results on low-contrast parts of the image, and we have no precise model of the tracked regions. That is the reason why we started from a similarity-based tracking procedure to implement our method. The proposed tracking algorithm is based on block matching, using cross correlation or the SSD (Sum of Squared Differences) to evaluate image similarities. These approaches rely on the ability to compare the region of interest with a reference pattern. They are computationally expensive, but performance can be dramatically increased by reducing the search region, and the precision of the tracking competes with much more sophisticated approaches.

While the correlation is computed, we also evaluate the texture-based similarity and merge the two results to compute a composite criterion, which has to be minimized, as illustrated in Fig. 2.

2.2 Pattern comparison

As mentioned previously, we use the SSD and cross correlation to score the region of interest. Cross correlation, or more precisely normalized cross correlation, is a


Fig. 3. Search area around the previous position

very effective criterion for pattern detection. It is defined as follows¹:

ρ_XY(u, v) = Σ_{i,j} (X(i,j) − µ_X)(Y(i+u, j+v) − µ_Y) / ( √(Σ_{i,j} (X(i,j) − µ_X)²) √(Σ_{i,j} (Y(i,j) − µ_Y)²) )   (1)

A good cross correlation implies a high degree of similarity between the template and the candidate region, the value of 1 (the maximum possible value) representing a perfect match between the two patterns.

On the other hand, the Sum of Squared Differences (SSD) quantifies the error between two patterns:

d_XY(u, v) = Σ_{i,j} ((X(i,j) − µ_X) − (Y(i+u, j+v) − µ_Y))² / ( √(Σ_{i,j} (X(i,j) − µ_X)²) √(Σ_{i,j} (Y(i,j) − µ_Y)²) )   (2)

The value of the SSD is 0 when two patterns are closely correlated. This approach is useful for region tracking as it provides a metric to easily compare images. But the SSD is often more sensitive to illumination variations than correlation.

To find the reference pattern in the image, we explore a local surrounding of the previously known location of the region of interest, implicitly assuming that the motion between two frames is bounded, which is realistic considering our targeted sampling frequency of 200 frames per second. The moving window covers the search area to find the position corresponding to the highest possible correlation (or the lowest SSD), which becomes the new position of the target. To improve the robustness to deformations, the initial search pattern can be replaced by the mean of the initial pattern and the last identified region of interest.

¹ In the equations, µ_X and µ_Y are the mean luminance values of the images.
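The search described above can be sketched as a brute-force block matcher. This is a minimal illustration (not the authors' implementation), using the normalized cross correlation of Eq. (1):

```python
import numpy as np

def ncc(x, y):
    """Normalized cross correlation of two equal-sized patches, Eq. (1)."""
    xc, yc = x - x.mean(), y - y.mean()
    den = np.sqrt((xc**2).sum() * (yc**2).sum())
    return 0.0 if den == 0 else float((xc * yc).sum() / den)

def block_match(frame, template, prev_pos, radius):
    """Slide the template over a (2*radius+1)^2 window around the
    previous position; return the best position and its score."""
    h, w = template.shape
    best_score, best_pos = -2.0, prev_pos
    for r in range(prev_pos[0] - radius, prev_pos[0] + radius + 1):
        for c in range(prev_pos[1] - radius, prev_pos[1] + radius + 1):
            if r < 0 or c < 0:
                continue                  # window fell off the frame
            cand = frame[r:r + h, c:c + w]
            if cand.shape != template.shape:
                continue
            s = ncc(template, cand)
            if s > best_score:
                best_score, best_pos = s, (r, c)
    return best_pos, best_score
```

An SSD-based matcher has the same structure, with the minimum of the distance taken instead of the maximum of the correlation.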


2.3 Texture features comparison

Texture features [8, 20] associate information, such as contrast, rugosity or homogeneity, with each region of interest. These characteristics form a vector, unique for each tracked region, called the feature vector. This characterization is relatively robust to changes in shape or illumination, and appears to be a good option to complement the more sensitive criteria used in tracking algorithms, such as cross correlation or SSD. The distance between texture vectors is computed using an approach based on the Euclidean distance:

V_X = (x_1, x_2, . . . , x_8)^T
V_Y = (y_1, y_2, . . . , y_8)^T

t_{V_X V_Y} = √( Σ_{i=1}^{8} (x_i − y_i)² )   (3)

V_X and V_Y being the feature vectors associated respectively with the original template X and the current region of interest Y. The result is then normalized to ease integration with the SSD or correlation:

t_XY = t_{V_X V_Y} / max_{X,Y} ( t_{V_X V_Y} )   (4)

With this distance between the images, it is possible to perform tracking using the principles illustrated in Fig. 3. But texture-only tracking is not adequate to track small parts of a homogeneously textured region of the heart.
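Equations (3) and (4) can be sketched as follows; a minimal illustration in which the normalization maximum is taken over the candidate feature vectors of the current search area:

```python
import numpy as np

def texture_distance(v_x, v_y):
    """Euclidean distance between two 8-element feature vectors, Eq. (3)."""
    return float(np.linalg.norm(np.asarray(v_x, float) - np.asarray(v_y, float)))

def normalized_texture_distances(v_ref, candidates):
    """Eq. (4): divide by the largest distance so the results lie in
    [0, 1] and can be fused with the SSD or correlation score."""
    d = np.array([texture_distance(v_ref, v) for v in candidates])
    m = d.max()
    return d / m if m > 0 else d
```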

2.4 Matrices Fusion

In order to combine the advantages of the two previously described methods, we use as the distance between regions a combination of the SSD (or cross correlation) distance d_XY and the texture distance t_XY, as illustrated in Fig. 2, which is implemented following this scheme:

σ_XY = λ·d_XY + γ·t_XY   (5)
λ + γ = 1

The λ and γ parameters set the weighting between the two methods.

Discussing λ and γ. As a first attempt, we used fixed values of λ and γ. But this solution was not satisfying: although it performed better than the usual SSD on some problematic regions of the heart, its performance was sometimes worse than the usual algorithm. This is due to the fact that when the contrast between foreground and background is high, the usual approaches are perfectly adequate.

To optimize the algorithm, we use dynamic values depending on the computed distance d_XY (or 1 − ρ_XY) to adapt the result. There are many possibilities for the expressions of λ and γ, which should be designed considering the following rules:


– λ = 1 when correlation is maximum: σ_XY = d_XY = 0
– γ = 1 when correlation is minimum: σ_XY = t_XY

We considered two simple but efficient approaches: a linear one and an exponential one.

γ = d_XY   (6)

or

γ = e^{−d_XY / (1 − d_XY)}   (7)

The linear approach has a lower decrease rate and favors correlation, while the exponential one preserves the good results of the correlation tracker while giving priority to texture on problematic regions.
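As an illustration, the fusion of Eq. (5) with the linear weighting of Eq. (6) can be written as below; a sketch, assuming d_XY and t_XY are already normalized to [0, 1]:

```python
def composite_distance(d_xy, t_xy):
    """Composite criterion of Eq. (5) with the linear weighting of
    Eq. (6): gamma = d_xy, lambda = 1 - gamma, so the texture term
    dominates exactly where the block-matching score is unreliable."""
    gamma = d_xy          # Eq. (6)
    lam = 1.0 - gamma     # enforces lambda + gamma = 1
    return lam * d_xy + gamma * t_xy
```

At d_XY = 0 the criterion reduces to the pure block-matching distance; at d_XY = 1 it reduces to the pure texture distance, matching the two design rules above.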

Tracking. The tracking itself can be summed up as follows:

min_{Y ∈ search area} (σ_XY)   (8)

where X is the reference template and Y is the candidate position for the tracked region in the search area, which is defined as a local surrounding of the last known position of the region of interest, as represented in Fig. 3.

3 Results

We evaluated our method on several examples. We also evaluated the performance of other algorithms in order to demonstrate the gains obtained through texture analysis. The algorithms considered here are of different kinds:

– correlation tracking.
– SSD tracking.
– optical flow based tracking.

Two examples are given: first, synthetic images are used to evaluate the efficiency and accuracy of the different approaches, and then natural image sequences obtained via experiments on a beating heart.

3.1 Artificial images

In order to evaluate the precision of our tracking algorithm, we generated a set of image sequences by superimposing a small texture patch on a larger texture and applying well-known artificial movements to the small patch. These sequences use two types of textures: high-contrast black-and-white images and natural beating heart images.


Fig. 4. Sample artificial sequences: translation

Fig. 5. Sample artificial sequences: circular motion and rotation of the pattern

Translation. These sequences use simple translations as trajectories. While the black-and-white sample (Fig. 4, left) did not cause tracking problems, it was a different issue with the heart textures (Fig. 4, right), as shown in Fig.



Fig. 6. Sample tracking on a beating heart sequence

Fig. 7. Tracking errors with Optical Flow Method

7. This last figure shows simultaneously the trajectories of the different methods for one of our test sequences.

This illustrates a problem with the optical flow method, which performs the tracking by computing the motion between the background and the foreground template. Even if theoretically our examples should fit perfectly with this approach (the template is literally moving on the background), in practice the global inhomogeneity of the heart texture prevents the algorithm from giving optimal results. The problem we observe in Fig. 8 on the trajectory of the optical flow tracking can be explained by the presence of lens flares. The other methods perform very well on this example, as seen in Tab. 1.

The correlation method also has a slight deviation on samples that use the heart texture. The use of the composite approach manages to reduce those flaws effectively.

Rotation. We also studied other motions to complete our evaluation of the algorithms. Rotation is very demanding, and optical flow tracking quickly diverges from the path after only a few iterations. Fig. 5 shows an example of circular


Method: mean error in pixels
Correlation: 0
SSD: 0.12
Optical Flow: 1.11
Composite: 0

Table 1. Error in translation tracking

Fig. 8. Sample trajectory for vertical translation

translation with small rotations of the pattern. It is a very interesting case because the pattern passes through all the problematic parts of the heart.

Surprisingly, the correlation and SSD methods have difficulties tracking the motion (Fig. 9) on some parts of the heart with lower contrast or specularities.

On the other hand, composite tracking (both correlation-based and SSD-based) performs well on those images, as seen in Fig. 9.

3.2 Beating heart sequences

These sequences were captured on a real heart with a high-speed camera. The observable motion is limited in terms of amplitude, which justifies our initial hypothesis.

Interpreting natural sequences is more delicate because the motion is not precisely known and is much more complex: deformation of the surface and changes in pose and illumination make the tracking less robust. Nevertheless, it is necessary to compare the algorithms in the target conditions, and these sequences are adequate for a qualitative evaluation of our method. As expected with beating heart images, the results of tracking with optical flow are not satisfactory. Tracking using correlation and SSD gives good results, but on some specific regions their accuracy was not as good as desired.

Finally, with the composite approach, the stability is improved, as seen in the tracking sequence (Fig. 6), where the deformable motion is well tracked among consecutive frames.


Fig. 9. Sample trajectory for circular translation and rotation of the pattern

4 Discussion

4.1 Texture characterization

Texture analysis is a very active research field in computer vision. Most approaches to characterizing a texture fall into one of three categories: Markov model based analysis [7], filtering techniques [18] and local characterization of textures [8]. Considering the nature of the problem (region tracking) and the real-time constraint, we decided to investigate the texture feature approaches. The main limitation in the use of texture tools in the context of beating heart surface tracking is the real-time constraint. Classically, accuracy in texture analysis is obtained through the use of a large feature vector, which is acceptable for diagnosis systems but too demanding in this context.

In order to choose the most appropriate texture features, we explored thefield’s literature and found a large number of approaches:

– Statistical approaches: first-, second- and higher-order statistics.
– Filtering-based features (wavelet and Fourier).
– Morphology-based features.
– Image-difference-based features.
– Pyramidal decomposition.

Some solutions combine different approaches, for example wavelet-domain hidden Markov models [7], in order to improve texture classification accuracy.

We compared a set of 117 texture attributes using statistical tools – mainly Principal Component Analysis – to rate those features in terms of extracted information and redundancy. We finally selected a set of 8 admissible features – Tab. 2 – which present both low inter-correlation and high variance among heart images. Those results [15] are specific to beating-heart image analysis, and the selection is itself a form of a priori knowledge.
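The variance/inter-correlation criterion described above can be sketched as a simple greedy filter. This is an illustrative reconstruction with hypothetical function and feature names, not the authors' actual PCA-based pipeline:

```python
import numpy as np

def select_features(samples, names, max_corr=0.5, n_keep=8):
    """Greedily keep high-variance features whose absolute correlation
    with every already-kept feature stays below max_corr.

    samples: (n_observations, n_features) array of texture-feature
    values measured on image patches.  Illustrative only.
    """
    var = samples.var(axis=0)
    corr = np.abs(np.corrcoef(samples, rowvar=False))
    kept = []
    for i in np.argsort(-var):          # most variable (informative) first
        if all(corr[i, j] < max_corr for j in kept):
            kept.append(i)
        if len(kept) == n_keep:
            break
    return [names[i] for i in kept]
```

A feature that is nearly a copy of an already-kept one is rejected even if its variance is high, which matches the low inter-correlation requirement stated above.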


Approach                  Feature
Co-occurrence Matrix      Energy
Co-occurrence Matrix      Contrast
Co-occurrence Matrix      Cluster Shade
Co-occurrence Matrix      Cluster Prominence
Run-length Matrix         Non-Uniformity
Run-length Matrix         Short Low Grey Level Run Emphasis
First Order Statistics    Skewness
First Order Statistics    Kurtosis

Table 2. Selected texture features
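For reference, the co-occurrence features of Table 2 (energy and contrast) can be computed from a normalized grey-level co-occurrence matrix. A minimal NumPy sketch, assuming the image is already quantized to a small number of integer grey levels (function names are ours, not the authors'):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized grey-level co-occurrence matrix for displacement (dx, dy).
    img must already be quantized to integer values in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def energy(p):
    """Haralick energy (angular second moment)."""
    return float((p ** 2).sum())

def contrast(p):
    """Haralick contrast: expected squared grey-level difference."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

A flat image gives energy 1 and contrast 0; a two-level checkerboard with a horizontal displacement gives energy 0.5 and contrast 1.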

4.2 Potential of the method

As shown in Section 3, the use of a composite criterion integrating texture characterization can improve the tracking of natural textures. Our example focuses on tracking patches on the heart surface, but the technique may be used in other contexts, provided appropriate texture features are selected beforehand.

The main advantage of the proposed method is its good results on problematic images, without having to perform costly pre-processing. Moreover, the general principle of texture-feature integration can be applied to a wide range of algorithms: virtually any algorithm using correlation or SSD may integrate our composite distance to improve its performance.
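The composite distance itself is not spelled out in this section, but the idea of blending a photometric SSD term with a texture-feature term can be sketched as follows; the weighting scheme and all names are hypothetical:

```python
import numpy as np

def composite_distance(patch, template, tex_patch, tex_ref, alpha=0.5):
    """Blend of a photometric SSD term and a texture-feature term.

    tex_patch / tex_ref: vectors of the selected texture attributes
    computed on the candidate patch and on the reference region.
    alpha is a hypothetical weighting parameter.
    """
    ssd = ((np.asarray(patch, float) - np.asarray(template, float)) ** 2).mean()
    tex = ((np.asarray(tex_patch, float) - np.asarray(tex_ref, float)) ** 2).mean()
    return alpha * ssd + (1.0 - alpha) * tex
```

Any SSD-based tracker can substitute this score for its plain SSD: candidates that match photometrically but whose texture statistics drift away from the reference are penalized.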

4.3 Perspectives

The next step is to perform online tracking on a beating heart – not only on recorded sequences – to validate the method under experimental conditions, and to test visual servoing based on this region tracking. Another objective is to integrate temporal information into the tracking – as in [5] – using the specificities of heart motion.

References

1. S. Benhimane and E. Malis. Real-time image-based tracking of planes using efficient second-order minimization. In IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 1, pages 943–948, September 2004.

2. S. Benhimane and E. Malis. A unified approach to visual tracking and servoing. Robotics and Autonomous Systems, 52:39–52, July 2005.

3. F. Chaumette and E. Malis. 2 1/2 D visual servoing: a possible solution to improve image-based and position-based visual servoings. In IEEE International Conference on Robotics and Automation, volume 1, pages 630–635, San Francisco, USA, April 2000.


4. A. I. Comport, E. Marchand, M. Pressigout, and F. Chaumette. Real-time markerless tracking for augmented reality: The virtual visual servoing framework. IEEE Transactions on Visualization and Computer Graphics, 12:289–298, July 2006.

5. R. Ginhoux, J. Gangloff, M. de Mathelin, L. Soler, M. M. Arenas Sanchez, and J. Marescaux. Active filtering of physiological motion in robotized surgery using predictive control. IEEE Trans. Robotics, 2003.

6. R. Ginhoux, J. A. Gangloff, M. F. de Mathelin, L. Soler, M. M. A. Sanchez, and J. Marescaux. Beating heart tracking in robotic surgery using 500 Hz visual servoing, model predictive control and an adaptive observer. In Proc. IEEE Int. Conf. Robotics and Automation (ICRA), pages 274–279, 2004.

7. G. Fan and X.-G. Xia. Wavelet-based texture analysis and synthesis using hidden Markov models. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 50:106–120, January 2003.

8. R. M. Haralick, K. Shanmugam, and I. Dinstein. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 3:610–621, November 1973.

9. S. A. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Trans. Robotics and Automation, 12(5):651–670, October 1996.

10. F. Jurie and M. Dhome. Hyperplane approximation for template matching. IEEE Transactions on Pattern Analysis & Machine Intelligence, 24(7):996–1000, 2002.

11. S. A. Karkanis, D. K. Iakovidis, D. E. Maroulis, D. A. Karras, and M. Tzivras. Computer-aided tumor detection in endoscopic video using color wavelet features. IEEE Transactions on Information Technology in Biomedicine, 7:141–152, September 2003.

12. P. Li, O. Tahri, and F. Chaumette. A shape tracking algorithm for visual servoing. In IEEE Int. Conf. on Robotics and Automation, pages 2858–2863, 2005.

13. M. Sauvée, P. Poignet, J. Triboulet, E. Dombre, E. Malis, and R. Demaria. 3D heart motion estimation using endoscopic monocular vision system. In MCBMS'06: IFAC Symposium on Modeling and Control in Biomedical Systems, September 2006.

14. Y. Nakamura, K. Kishi, and H. Kawakami. Heartbeat synchronization for robotic cardiac surgery. In ICRA, pages 2014–2019, 2001.

15. A. Noce, J. Triboulet, P. Poignet, and E. Dombre. Texture features selection for visual servoing of the beating heart, paper 183. February 2006.

16. T. Ortmaier, M. Groger, D. H. Boehm, V. Falk, and G. Hirzinger. Motion estimation in beating heart surgery. IEEE Transactions on Biomedical Engineering, 52:1729–1740, October 2005.

17. T. Ortmaier, M. Groger, and G. Hirzinger. Robust motion estimation in robotic surgery on the beating heart. In Proc. Computer Assisted Radiology and Surgery (CARS), pages 206–211, 2002.

18. P. P. Raghu and B. Yegnanarayana. Segmentation of Gabor-filtered textures using deterministic relaxation. IEEE Transactions on Image Processing, 5(12):1625–1636, December 2002.

19. T. Suzuki and T. Kanade. Measurement of vehicle motion and orientation using optical flow. In IEEE International Conference on Intelligent Transportation Systems, pages 25–30, 1999.

20. J. S. Weszka, C. R. Dyer, and A. Rosenfeld. A comparative study of texture measures for terrain classification. IEEE Transactions on Systems, Man, and Cybernetics, 6:269–285, 1976.


Steady-Hand Manipulator for Retinal Surgery

Iulian Iordachita, Ankur Kapoor, Ben Mitchell, Peter Kazanzides, Gregory Hager, James Handa, and Russell Taylor

The Johns Hopkins University, Baltimore, Maryland 21218 USA

[email protected]

Abstract. This paper describes the ongoing development of a robotic assistant for microsurgery and other precise manipulation tasks. It reports a new, optimized version of a steady-hand manipulator for retinal surgery. The surgeon and the robot share control of a tool attached to the robot through a force sensor. The robot’s controller senses forces exerted by the operator on the tool and uses this information in various control modes to provide smooth, tremor-free, precise positional control and force scaling. The result is a system with improved efficacy, flexibility, and ergonomics that meets the accuracy and safety requirements of microsurgery.

1 Introduction

Many areas of clinical practice involve the manipulation of extremely small, delicate structures. Such structures occur in several organ systems, but are prevalent in the eye, ear, nervous system, and elements of the circulatory system. Within the eye, the manipulation of vitreoretinal structures is particularly difficult given their delicacy, their inability to regenerate if injured, their surgical inaccessibility, and the suboptimal instrumentation available to visualize them.

1.1 Retinal Microsurgery: Limitations of Current Practice

During vitreoretinal surgery, the surgeon must visualize the pathology on a micron scale and manually correct the pathology using direct contact, free hand techniques. The procedure occurs within the confines of a very small space that is surrounded on all sides by vital structures.

At present, the conventional vitreoretinal system uses an operating microscope to visualize surgical instruments that are placed in three sclerotomy incisions 20-25 gauge in diameter. A prototypical surgical maneuver is the dissection and separation of fibrous scar tissue from the retinal surface (membrane peeling). This delicate maneuver is physically not possible for many ophthalmology specialists due to visualization limitations, excessive tremor, or insufficient fine motor control. Physiological tremor, which contributes to long operative times and which is exacerbated by fatigue, is a severe limiting factor in microsurgery [1]. Manual dexterity, precision and perception are particularly important during tasks where the


ability to position instruments with great accuracy often correlates directly with the results of the procedure [1, 2]. In a recent study, the root mean square (RMS) amplitude of the tremor of an ophthalmic surgeon under surgical conditions was measured to be 108 µm [3]. While it may be possible to briefly position an instrument at a specified target with great accuracy, maintaining the position for extended periods of time becomes increasingly difficult due to physical, visual and mental fatigue [4].

From the surgical tool manipulation point of view, we have identified three major problems: 1) micron scale manual dexterity and precision are required for retinal surgery, 2) stability of instruments with respect to the retina for extended periods of time becomes increasingly difficult due to physical, visual, and mental fatigue and 3) tremor and motion accuracy affect the duration, quality, and consistency of the procedure which in turn affect the quality of the surgical outcome. To overcome these problems, we are developing a robotic assistance system for retinal procedures such as vein cannulation and retinal sheathotomy. The proposed system will operate both with and without image guidance from the operating microscope.

There is extensive literature reporting robotic systems for surgery (e.g., [5]), including commercially deployed systems (e.g., [6]). A number of researchers have proposed master-slave microsurgical systems (e.g., [7]), including some systems for the eye ([8]). With the exception of exploratory work by Hunter et al. [9] most of this work has focused on direct improvement of a surgeon’s ability to manipulate tissue remotely or at a very fine scale, rather than exploiting the ability of the computer to assist the surgeon more broadly.

In contrast, the JHU Steady-Hand Robot (SHR) [10, 11] was designed to cooperatively share control of a surgical tool with the surgeon while meeting the performance, accuracy, and safety requirements of microsurgery. The absolute operational positioning precision is approximately 5 microns. However, this first prototype had serious limitations that prevented it from becoming a clinically useful system. In particular, the parts of the mechanism nearest the patient were bulky and ergonomically inconvenient for the surgeon. This paper describes our second prototype, which is designed to overcome these limitations.

2 Mechanical System Design

The design of our second prototype began with an analysis of the necessary degrees of freedom (DOF), options for obtaining a remote center of motion (RCM), and establishment of specifications for mechanical parameters such as range of motion, precision, and maximum velocity. These are discussed in the following sections.

2.1 Degrees of Freedom (DOF) Analysis

We critically analyzed the necessary DOF in tool positioning for eye surgery. There are three phases in surgical tool motion: approach phase (A), insertion phase (I), and retinal surgery phase (R). In the approach phase, the surgeon requires at least 3 DOF (X, Y, and Z) to bring the tool to the entry point on the eye surface (sclerotomy incision). Although these 3 DOF could be realized by many combinations of rotary and translational axes, we chose a Cartesian design (XYZ stage). In the insertion phase, the surgeon requires 3 DOF (one translation plus two rotations). For the retinal surgery phase, four DOF are required: three rotations and one translation (Fig. 1).

Fig. 1. Setup in retinal surgery phase: general view (Left) and magnified local view (Right).

The three rotations are local DOF and are necessary for tool orientation. In our evaluation of manual retinal surgery procedures, we learned that tool tip positioning accuracy is not very sensitive to the tool spin. We therefore decided to drive only the tool tilt and roll motions, leaving the spin motion for manual manipulation. The insertion could be a local DOF or generated by combining the general DOF (first three DOF). We chose the latter solution. The advantage is that we eliminate a DOF, which allows a more compact design; the disadvantage is that we require coordinated motion of three axes to produce the insertion motion. This makes it more challenging to obtain high accuracy and, as discussed in the next section, is not consistent with the philosophy of a remote center of motion (RCM) kinematic design. Thus, the new robot has only 5 DOF: three translations (general DOF) and two rotations (local DOF). By eliminating two local DOF (tool insertion and spin), we can create a thin tool holder and reduce the interaction between the robot and the microscope work space.

As for the range of motion, taking into account the eye size, its location on the face, and the insertion point position on the eye, we estimated that for tool motion close to and inside the eye we need a work space of around 50x50x50 mm, and for tool orientation around ±30º about each axis of rotation. Taking into account the necessary space in the approach phase, we set the final range of the translation motions at ±50 mm. Because of variability in the configuration of the human face, it could be necessary to increase the rotation angles and/or to set different relative positions of the robot with respect to the patient.

2.2 Real RCM Point versus Virtual RCM Point

The retinal surgery phase requires tool motions to be constrained by the insertion point (i.e., the sclerotomy). As shown in Fig. 1, the allowable motions are the three rotations about the insertion point and the translation of the tool through the insertion point. This implies a remote center of motion (RCM), where the three rotation axes intersect at the insertion point. An RCM robot achieves this by mechanical design [13]. Furthermore, many RCM designs include a final actuator to provide the tool insertion (this can also be considered as a way to translate the RCM point along the tool axis). A real (mechanical) RCM design provides several advantages for surgical applications, such as increased safety due to the minimal number of actuators that must be powered to achieve each task motion. It is also possible to achieve an RCM point by using software to coordinate the robot joints (i.e., a virtual RCM), but this can reduce the accuracy and safety of the task motions.
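A virtual RCM of this kind amounts to commanding the translation axes in coordination so that the tool shaft always passes through the insertion point. A geometric sketch under an assumed tilt/roll angle convention (the function name and convention are illustrative, not this robot's controller):

```python
import numpy as np

def virtual_rcm_tip(insertion_point, tilt_deg, roll_deg, depth):
    """XYZ target for the tool tip so that the shaft stays on the RCM
    (insertion) point.  Assumed convention: tilt is measured from the
    vertical, roll about the vertical axis; depth is the insertion
    distance along the tool axis."""
    t, r = np.radians(tilt_deg), np.radians(roll_deg)
    # Unit vector of the tool axis, pointing into the eye.
    d = np.array([np.sin(t) * np.cos(r), np.sin(t) * np.sin(r), -np.cos(t)])
    return np.asarray(insertion_point, float) + depth * d
```

Because the tip target depends on all three Cartesian coordinates simultaneously, any orientation or insertion change requires coordinated motion of the XYZ axes, which is exactly the accuracy burden the text attributes to a software (virtual) RCM.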

This discussion of a real (mechanically constrained) versus virtual RCM point is relevant to the design of the tilt mechanism. This mechanism must be precise, assure the necessary range of motion, be compact, and have a remote center of motion that coincides with the insertion point. We analyzed many solutions for the robot wrist by analogy with welding robots. Finally, we considered three mechanisms: a parallel six-bar mechanism with a geometrically imposed RCM [12, 13], a parallel six-bar mechanism with offset (also with RCM) [14], and a slider-crank mechanism (not an RCM). Though a real RCM has certain advantages such as those cited above, for this system we value a compact design with high stiffness and accuracy. Therefore we chose to implement the slider-crank mechanism, with a virtual RCM.

2.3 Mechanical System Specifications

In establishing the specifications for the robot mechanical system, we considered its interaction with the patient's anatomical structures, the surgeon's workspace, and the imaging system. Another important factor was patient safety in correlation with surgical accuracy. The preliminary system specifications are given in Table 1.

Table 1. Robot performance specifications for approach phase (A), insertion phase (I), and retinal surgery phase (R) motions.

Robot Specification             Units      Value
Roll/tilt motion                degrees    ±30
XYZ motion                      mm         ±50
Roll/tilt precision             radians    ~0.00005
XYZ precision                   µm         ~2
Net precision at retina         µm         ~5
Cartesian tip speed
  - phase A                     mm/s       10
  - phase I                     mm/s       5
  - phase R                     mm/s       <1
Deviation of the tool shaft
from the center of the
sclerotomy point
  - phase A                     mm         <1
  - phase I                     mm         <0.2
  - phase R                     mm         <0.2


2.4 Mechanical System Components

The robot mechanical system consists of three major parts (Fig. 2): the XYZ system, the roll mechanism, and the tilt mechanism. The XYZ system assures the global motions of the surgical tool. The roll mechanism, consisting of a rotating table, was tilted at -15º from the horizontal direction to assure better access of the surgical tool to the eye depression of the patient's face. This roll mechanism configuration is appropriate for the current tilt mechanism type and for a robot located on the same side of the face as the targeted eye. If the robot is located on the other side of the face, it is necessary to avoid collision with the patient's nose, which could be accomplished by increasing the tilt angle or by tilting the robot using a passive arm. For the current prototype, the roll mechanism assures a rotation of 360º for the tool. We chose this motion range so that we could simulate many surgical procedures.

The tilt mechanism (slider-crank) is attached to the roll mechanism through a long tubular arm. In this way, nearly the entire robot is away from the surgical area. This configuration also makes it easier to separate the non-sterilized robot from the sterilized surgical area. The translating joint of the tilt mechanism is realized by a rotary motor and a micrometer screw without backlash. To eliminate the translating joint backlash, the slider was made from two parts that make contact on an oblique surface. The two parts are pushed against each other by a nut through a wave spring.

A 6-DOF force sensor is rigidly attached to the crank (the last element of the tilt mechanism). A tool holder is located between the force sensor and the surgical tool. This is a very important part of the robot: it must be sterilizable, it must be attached to the force sensor through an emergency release mechanism, it must assure the spinning rotation of the tool, and it must assure a precise and easy attachment for the tool. For the current prototype, we implemented only the last two functions. Because of the variability in size and shape of the surgical tools used in retinal surgery, it could be necessary to develop custom-made adapters for each tool type. At that point it will be possible to make a decision regarding the emergency release mechanism.

Fig. 2. Robot mechanical system (rendering of CAD model): general view (left) and tilt mechanism (right).


3 Mechanical System Implementation

The manipulator itself consists of four modular subassemblies: 1) An off-the-shelf XYZ translation assembly; 2) A roll mechanism; 3) A tilt mechanism; 4) Specialized instruments held in the tool holder.

The XYZ translation assembly is formed by mounting a single axis Z-stage orthogonal to a dual axis X-Y table (NEAT: LM-400 and NEAT: XYR-6060, respectively, from New England Affiliated Technologies of Lawrence, MA). Each axis consists of a crossed-roller way mounted table actuated by an encoded DC servo motor driven leadscrew. The travel along each axis is 100 mm, and the positioning resolution is <2.5µm (1 µm encoder resolution).

For the roll mechanism, we employed a rotary table model B5990TS from Velmex, Inc. Bloomfield, NY, motorized with a DC motor RE 25, 10 Watt connected through a planetary gearhead GP 26 B (14:1 reduction), and encoded with a Digital MR Encoder (512 counts per turn) from Maxon Motor AG. The range of motion is ±180º with a repeatability of 1 arc-second.

The tilt mechanism (Fig. 3) consists of a custom-made slider-crank mechanism attached to the rotary table through a carbon fiber tube. The slider mechanism, included in the tube, utilizes a high precision lead screw (80 TPI, OD ¼”, sensitivity 1µm/inch) from Newport Corporation, Irvine CA, motorized with a DC Maxon motor RE 16, 4.5 Watt connected through a planetary gearhead GP 16 A (19:1 reduction), and encoded with a Digital MR Encoder (512 counts per turn). The crank motion range is ±30º relative to the vertical tool position. Attached to the crank is a small commercially available force/torque sensor (Model: NANO-17 SI 12/0.12, ATI Industrial Automation, NC), which has force resolutions of 0.0125N along the X,Y axes and 0.025N in the Z direction, and torque resolutions of 0.0625N-mm about the X, Y and Z axes. Force ranges of ±22.5N in the Z-axis and ±12.5N in the X-Y axes can be measured.

Fig. 3. Robot tilt mechanism.

The tool holder facilitates the attachment of a variety of surgical instruments, such as forceps, needle holder and scissors, that are required during microsurgical procedures. The current prototype assures the tool attachment with a manually actuated rigid coupling with a tapered sleeve mounted inside a tubular shaft. To reduce the friction force during the manual tool spinning, the shaft is supported with two radial ball bearings.


The prototype of our new steady-hand robot is complete (Fig. 4). The control system has been implemented and the whole system has been functionally tested. In addition, 3D visualization software was added to the system.

Fig. 4. The new steady-hand manipulator for retinal surgery.

4 Conclusion and Future Work

We have designed and fabricated an advanced, optimized version of our steady-hand manipulator for retinal surgery. Our approach extends earlier work on cooperative manipulation in microsurgery and focuses on performance augmentation.

Our immediate goal is a rigorous evaluation of the completed system as a microsurgery augmentation aid in terms of efficacy, flexibility, and ergonomics. This will be done using test environments developed by our colleagues at JHU’s Wilmer Eye Institute. The first of these experiments is vein cannulation, another challenging vitreoretinal surgical technique, using the chick embryo chorioallantoic membrane (Fig. 5).

Fig. 5. The set-up for vein cannulation experiment.


In the long term, we expect to further improve the rigidity and accuracy of the system. Our final goal is to develop a two-handed retinal surgery workstation with high precision and sensitivity, but with the manipulative transparency of hand-held tools. Although our first focus is retinal microsurgery, we believe that our approach is generalizable to other microsurgical applications.

Acknowledgments. This work was partially funded by the National Science Foundation (NSF) under Engineering Research Center grant EEC9731748, and by Johns Hopkins University internal funds.

References

1. Patkin, M.: Ergonomics applied to the practice of microsurgery. Aust NZ J Surg, Vol.47, (1977) 320-329

2. Weinrib, H.P., Cook, J.Q.: Rotational technique and microsurgery. Microsurgery, Vol.5, (1984) 207-212

3. Singh, S., Riviere, C.: Physiological tremor amplitude during retinal microsurgery. Proc. 28th IEEE Northeast Bioeng. Conf., Philadelphia, (2002) 171-172

4. Boff, K.R., Lincoln, J.E.: Engineering Data Compendium: Human Perception and Performance. H.G. Armstrong Aerospace Medical Research Laboratory, Ohio, (1988)

5. Taylor, R.H., Stoianovici, D.: Medical Robotics in Computer-Integrated Surgery. IEEE Transactions on Robotics and Automation, Vol. 19, (2003) 765-781

6. Chui, C.K., Nguyen, H.T., Wang, Y., Mullick, R., Raghavan, R., Anderson, J.: Potential field and anatomy vasculature for real time computation in daVinci. First Visible Human Conference, Bethesda, USA, (1996)

7. Ikuta, K., Yamamoto, K., Sasaki, K.: Development of remote microsurgery robot and new surgical procedure for deep and narrow space. IEEE Conference on Robotics and Automation, Taiwan, (2003) 1103-1108

8. Charles, S., Williams, R.E., Hamel, B.: Design of a Surgeon-Machine Interface for Teleoperated Microsurgery. Proc. of the Annual Int’l Conf. of the IEEE Engineering in Medicine and Biology Society, (1989) 883-884

9. Hunter, I.W., Doukoglou, D., Lafontaine, S.R., Charette, G., Jones, L.A., Sagar, M.A., Mallison, G.D., Hunter, P.J.: A teleoperated microsurgical robot and associated virtual environment for eye surgery. Presence, Vol. 2, (1993) 265-280

10. Kumar, R., Hager, G.D., Barnes, A., Jensen, P., Whitcomb, L.L., Taylor, R.H.: An augmentation system for fine manipulation. In Proceeding of Medical Image Computing and Computer Assisted Intervention. Lecture Notes in Computer Science, Vol. 1935. Springer-Verlag, (2000) 956-965

11. Taylor, R.H., Jensen, P., Whitcomb, L.L., Barnes, A., Kumar, R., Stoianovici, D., Gupta, P., Wang, Z., de Juan, E., Kavoussi, L.: Steady-hand robotic system for microsurgical augmentation. The International Journal of Robotics Research, 18(12), (1999) 1201-1210

12. Rosheim, M.: Robot Wrist Actuators. New York: Wiley, (1989)

13. Taylor, R.H., Funda, J., Grossman, D.D., Karidis, J.P., LaRose, D.A.: Remote Center-of-Motion Robot for Surgery. US Patent 5,397,323, Mar. 14 (1995)

14. Hamlin, G.J., Sanderson, A.C.: Tetrobot: a modular approach to reconfigurable parallel robotics. Boston, Mass.; London, Kluwer Academic. (1998)


Robot-assisted image-guided targeting for minimally invasive neurosurgery: intraoperative robot positioning and targeting experiment

R. Shamir1, M. Freiman1, L. Joskowicz1, M. Shoham2,3, E. Zehavi3, Y. Shoshan4

1 School of Eng. and Computer Science, The Hebrew Univ. of Jerusalem, Israel.
2 Dept. of Mechanical Engineering, Technion, Haifa, Israel.
3 Mazor Surgical Technologies, Caesarea, Israel.
4 Dept. of Neurosurgery, School of Medicine, Hadassah University Hospital, Israel.

Email: [email protected]

Abstract. This paper is part of an ongoing effort to develop a novel image-guided system for precise automatic targeting in keyhole minimally invasive neurosurgery. The system consists of a miniature robot fitted with a mechanical guide for needle/probe insertion. Intraoperatively, the robot is directly affixed to a head clamp or to the patient skull. It automatically positions itself with respect to predefined targets in a preoperative CT/MRI image following an anatomical registration with an intraoperative 3D surface scan of the patient facial features. In this paper, we describe the intraoperative robot positioning module and an in-vitro targeting experiment, which yields an error of 1.6mm (std=1.7mm).

1 Introduction

Precise targeting of tumors, lesions, and anatomical structures with a probe or a needle inside the brain based on preoperative CT/MRI images is the standard of care in many keyhole neurosurgical procedures. The procedures include tumor biopsies, catheter insertion, deep brain stimulation, aspiration and evacuation of deep brain hematomas, and minimal access craniotomies. Additional procedures, such as tissue and tumor DNA analysis and functional data acquisition, are rapidly gaining acceptance and also require precise targeting. These minimally invasive procedures are difficult to perform without the help of support systems that enhance the accuracy and steadiness of the surgical gestures.

Four types of support systems for keyhole neurosurgery are currently in use: 1. stereotactic frames; 2. interventional imaging systems; 3. navigation systems; and 4. robotic systems. Stereotactic frames provide precise positioning with a manually adjustable frame rigidly attached to the patient skull. These extensively used frames provide rigid support for needle insertion, and are relatively accurate and inexpensive (< 1mm, USD 50K). However, they require preoperative implantation of frame screws, head immobilization, and manual adjustment during surgery. They cause patient discomfort and do not provide real-time validation.

Interventional imaging systems produce images showing the actual needle position with respect to the predefined target [1–3]. Their key advantage is that they account for brain shift. A few experimental systems incorporate optical real-time tracking or robotic positioning devices, or augment the reality view with the imaging device output [15–17]. However, their nominal and operational costs are high and their availability is very limited. Furthermore, brain shift is a secondary issue in keyhole neurosurgeries.

Navigation systems (e.g., Medtronic, USA and BrainLab, Germany) show in real time the location of hand-held tools on the preoperative image onto which targets have been defined [4–6]. Augmented with a manually positioned tracked passive arm (e.g., Phillips EasyTaxisTM), they also provide mechanical guidance for targeting. While these systems are now in routine clinical use, they are costly (USD 250K), require head immobilization and maintenance of line-of-sight for tracking, and additional time for registration and manual arm positioning.

Robotic systems provide frameless stereotaxy with a robotic arm that automatically positions itself with respect to a target defined in the preoperative image [7–10]. Registration between the image and the intraoperative situation is done by direct contact or with video images. Two floor-standing commercial robots are the NeuroMateTM (Integrated Surgical Systems, USA) and the PathFinderTM (Armstrong HealthCare, UK). Their advantages are that they are rigid, accurate, and provide a frameless integrated solution. However, since they are bulky, cumbersome, and costly (USD 300K), they are not commonly used.

2 System overview and protocol

We are developing a novel image-guided system for precise automatic targeting of structures inside the brain that aims at overcoming the limitations of existing solutions [11]. The system automatically positions a mechanical guide to support keyhole drilling and insertion of a needle or probe based on predefined entry point and target locations in a preoperative CT/MRI image. It incorporates the miniature MARS robot (Mazor Surgical Technologies) [12–14], originally developed for orthopaedics, mounted on the head immobilization clamp or directly on the patient skull via pins. Our goal is a robust system for keyhole neurosurgical procedures that require a clinical accuracy of 1–1.5 mm.

The key idea is to establish a common reference frame between the preoperative CT/MRI image and the intraoperative patient head and robot locations with an intraoperative 3D surface scan of the patient's facial features. Once this registration has been performed, the transformation that aligns the planned and actual robot targeting guide locations is computed. The robot is then automatically positioned and locked in place so that its targeting guide axis coincides with the entry point/target axis.

The system hardware consists of: 1) the MARS robot and its controller; 2) a custom robot mounting base, targeting guide, and registration jig; 3) an off-the-shelf 3D surface scanner; and 4) a standard PC. MARS is a 5 × 8 cm cylindrical, 250-gram, six-degree-of-freedom parallel manipulator with a work volume of about 10 cm³ and an accuracy of 0.1 mm. It operates in semi-active mode; when locked, it is rigid and can withstand lateral forces of up to 10 N [13]. The adjustable robot mounting jig attaches the robot base to either the head immobilization frame or to skull-implanted pins. The system software modules are: 1) preoperative planning; 2) intraoperative execution; 3) surface scan processing; and 4) three-way registration. The first and last modules are described and evaluated in [19]. In this paper we describe the intraoperative robot positioning module, which is a major component of the second module, and an in-vitro targeting experiment.

Fig. 1. Intraoperative robot positioning augmented reality images: (a) starting position, (b) middle position, (c) final position (showing the patient, the robot, the actual base and positioning jig, and the virtual positioning jig).

3 Intraoperative robot positioning

The intraoperative robot positioning module helps the surgeon place the robot base close (within 5 mm) to its planned position, both for skull-mounted and frame-mounted cases. Given the small robot work volume and the lack of anatomical landmarks on the skull, this coarse positioning is necessary to avoid deviations of 10 mm or more from the planned position; the preoperative planning module indicates that such deviations can severely restrict, or altogether invalidate, the preoperative plan.

The module shows the surgeon a real-time augmented reality image consisting of a video image of the actual patient skull and a positioning jig and, superimposed on it, a virtual image of the same jig indicating the robot base in its desired location (Figure 1). The surgeon can then adjust the position and orientation of the positioning jig until it matches the planned location. The inputs are the preoperative plan, the geometric models of the robot base and the patient face, the real-time video images, and a face/ear scan.

The goal is to compute the planned robot base position with respect to the video camera image so that the robot base model can be projected onto the video image at its planned position (Figure 2). The video camera is directly mounted on the 3D surface scanner and is pre-calibrated, so that the transformation between the two coordinate systems, T^video_scanner, is known in advance. A 3D surface scan of the face is acquired and matched to the geometric face model with the same method used for three-way registration, as described in [19]. This establishes the transformation between the preoperative plan and the scanner, T^scanner_plan. By composing the two transformations, we obtain the transformation between the preoperative plan and the video camera, T^video_plan = T^video_scanner · T^scanner_plan.

Fig. 2. Intraoperative robot positioning computation: the transformations T^video_scanner and T^scanner_plan relate the CT/MRI preoperative plan, the surface scanner, the video camera, the positioning jig, the robot base, and the patient (planned vs. actual).
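This chaining of coordinate transformations can be illustrated with 4×4 homogeneous matrices. The sketch below is not from the paper; the matrix values are illustrative placeholders, and only the composition T^video_plan = T^video_scanner · T^scanner_plan mirrors the text:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_video_scanner: fixed camera-to-scanner calibration, known in advance.
# Values are illustrative only.
T_video_scanner = make_transform(np.eye(3), np.array([0.0, 20.0, 5.0]))

# T_scanner_plan: result of matching the surface scan to the face model.
theta = np.deg2rad(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
T_scanner_plan = make_transform(Rz, np.array([1.0, -2.0, 30.0]))

# Composition: maps points defined in the preoperative plan frame
# into the video camera frame.
T_video_plan = T_video_scanner @ T_scanner_plan

# A planned robot-base point (homogeneous coordinates) in the plan frame:
p_plan = np.array([10.0, 0.0, 0.0, 1.0])
p_video = T_video_plan @ p_plan  # same point in the video camera frame
```

Once a point is in the camera frame, it can be projected onto the video image with the camera's intrinsic calibration, which is how the virtual jig overlay is drawn.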

4 In-vitro experiments of the entire system

This experiment aims at testing the in-vitro registration accuracy of the entire system. For this purpose, we manufactured the registration jig, a precise stereolithographic phantom replica of the outer head surface of the second student author (M. Freiman) from an MRI dataset, and a positionable robot mounting base [19]. Both the phantom and the registration jig include fiducials at known locations for contact-based registration. In addition, the phantom includes fiducials inside the skull that simulate targets inside the brain. The phantom is attached to a base with a rail onto which slides a manually adjustable robot mounting base. The goal is to measure the Fiducial and Target Registration Errors (FRE and TRE, respectively) [18].

We used an optical tracking system (Polaris, Northern Digital, Canada; 0.3 mm accuracy) as a precise coordinate measuring machine to obtain the ground-truth relative locations of the phantom and the registration jig. Their spatial locations are determined by touching the phantom and registration jig fiducials with a calibrated tracked pointer. The positional error of the tracked pointer at the tip is estimated at 0.5 mm. The phantom and the registration jig were scanned with a video scanning system (Optigo200, CogniTens; 0.03 mm accuracy). The phantom manufacturing error with respect to the MRI model is 0.15 mm, as measured with the Optigo200 and our surface registration method [19].
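Contact-based registration from paired fiducials of this kind is commonly solved with a least-squares rigid fit. The following sketch uses the standard SVD (Arun/Kabsch) method with made-up fiducial coordinates; it is our illustration of the technique, not the implementation from [19]:

```python
import numpy as np

def rigid_register(model_pts, measured_pts):
    """Least-squares rigid transform (R, t) mapping model_pts onto measured_pts.

    Standard SVD method for paired 3D point sets (Arun/Kabsch).
    """
    cm = model_pts.mean(axis=0)
    cs = measured_pts.mean(axis=0)
    H = (model_pts - cm).T @ (measured_pts - cs)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cs - R @ cm
    return R, t

# Known fiducial coordinates in the phantom model (illustrative values, mm).
model = np.array([[0., 0., 0.], [50., 0., 0.], [0., 40., 0.], [0., 0., 30.]])

# The same fiducials as touched with the tracked pointer: here simulated
# by a known rotation and translation (noise-free for clarity).
angle = np.deg2rad(15.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.],
                   [np.sin(angle),  np.cos(angle), 0.],
                   [0., 0., 1.]])
measured = model @ R_true.T + np.array([5., -3., 12.])

R, t = rigid_register(model, measured)

# Fiducial Registration Error (FRE): RMS residual after the fit.
fre = np.sqrt(np.mean(np.sum((model @ R.T + t - measured) ** 2, axis=1)))
```

With real pointer measurements the residual FRE is nonzero and, per [18], only loosely predicts the TRE at targets away from the fiducials.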

The experiment quantifies the accuracy of the three-way registration algorithm for several targets inside the skull. In each trial, we performed three-way registration with the registration jig and, for each of the targets, moved the robot so that the needle guide axis coincided with the planned target axis. We inserted the optically tracked needle into the needle guide and recorded the points on its trajectory to the target (Figure 3). We then computed the best-fit least-squares line through these points and computed the shortest Euclidean distance between the planned and actual entry and target points, and the relative angle between the axes. Table 1 shows the results for four runs. The TRE is 1.74 mm (std = 0.97 mm) at the entry point, 1.57 mm (std = 1.68 mm) at the target point, and 1.60° (std = 0.58°) for the axis orientation.

Fig. 3. In-vitro experimental setup: (a) frontal view, (b) details, showing the phantom head, the tracked needle, and the MARS robot.
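The error computation described above (a least-squares line through the recorded needle points, then the shortest distances to the planned entry and target points, and the inter-axis angle) can be sketched as follows; the trajectory points below are illustrative, not experimental data:

```python
import numpy as np

def fit_line(points):
    """Least-squares 3D line fit: returns (centroid, unit direction).

    The direction is the first principal component of the centered points.
    """
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    return c, Vt[0]

def point_to_line_distance(p, c, d):
    """Shortest Euclidean distance from point p to the line c + s*d."""
    v = p - c
    return np.linalg.norm(v - np.dot(v, d) * d)

def axis_angle_deg(d1, d2):
    """Relative angle between two axis directions, in degrees (sign-independent)."""
    cosang = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Illustrative recorded needle trajectory points (mm), roughly along +z.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 20.0],
                [0.2, 0.1, 40.0], [0.3, 0.1, 60.0]])
c, d = fit_line(pts)

planned_entry = np.array([0.0, 0.0, 0.0])
planned_target = np.array([0.0, 0.0, 60.0])
planned_axis = planned_target - planned_entry

entry_err = point_to_line_distance(planned_entry, c, d)    # mm
target_err = point_to_line_distance(planned_target, c, d)  # mm
angle_err = axis_angle_deg(d, planned_axis)                # degrees
```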

5 Conclusion

We have described a system for automatic precise targeting in minimally invasive keyhole neurosurgery that aims at overcoming the limitations of the existing solutions. The system, which incorporates the miniature parallel robot MARS, will eliminate the morbidity and head immobilization requirements associated with stereotactic frames, eliminate the line-of-sight and tracking requirements of navigation systems, and provide steady and rigid mechanical guidance without the bulk and cost of large robots. This paper presents the intraoperative robot positioning module and an in-vitro targeting experiment. It establishes the viability of the surface scan concept and bounds the location error of phantom targets with respect to the robot base to 1.6 mm, which is close to the required 1–1.5 mm clinical accuracy of many keyhole neurosurgical procedures.

Acknowledgments: This research is supported in part by a Magneton grant from the Israel Ministry of Industry and Trade. We thank Dr. Tamir Shalom and CogniTens for their generous assistance in acquiring the scans, and Haim Yeffet for manufacturing the experimental setup platform.


Run  Phantom/scan   Target  Entry  Target  Trajectory
     RMS (std)      name    error  error   angular error
1    0.40 (0.13)    A       1.10   1.45    0.89
                    B       1.59   0.99    0.88
                    C       2.43   0.07    2.07
                    D       1.41   1.97    1.48
                    E       1.65   2.92    0.96
                    F       2.38   2.83    0.80
2    0.39 (0.12)    A       2.10   1.61    1.10
                    B       0.55   1.19    1.65
                    C       1.42   0.11    1.96
                    D       4.53   7.87    2.65
                    E       2.78   3.17    1.34
                    F       2.57   2.63    2.27
3    0.42 (0.13)    A       3.18   0.45    1.98
                    B       0.99   0.38    2.42
                    C       1.13   0.99    2.13
                    D       0.65   1.25    2.37
                    E       1.30   0.99    1.53
4    0.40 (0.12)    A       1.64   0.96    1.98
                    B       1.36   0.37    1.82
                    C       0.78   0.22    1.11
                    D       2.37   1.45    0.80
                    E       0.55   0.81    1.20

Avg (std)  0.40 (0.12)      1.74 (0.97)  1.57 (1.68)  1.60 (0.58)

Table 1. In-vitro registration results of four trial experiments (errors in mm, angles in degrees). The first column is the run number; the second is the surface scanner registration RMS error (std) with respect to the phantom; the third is the target name inside the brain; the fourth, fifth, and sixth columns are the needle error at the entry point, the needle error at the target, and the trajectory angular error. The last row is the average and standard deviation over all 22 trials and targets.


References

1. Tseng, C-S., Chen, H-H., Wang, S-S., et al., "Image guided robotic navigation system for neurosurgery". Journal of Robotic Systems 17(8), 2000, pp 439-447.

2. Chinzei, K., Miller, K., "MRI guided surgical robot". Australian Conf. on Robotics and Automation, Sydney, 2001.

3. Kansy, K., Wißkirchen, P., Behrens, U., et al., "LOCALITE - a frameless neuronavigation system for interventional magnetic resonance imaging". Proc. of Medical Image Computing and Computer Assisted Intervention, 2003, pp 832-841.

4. Kosugi, Y., Watanabe, E., Goto, J., et al., "An articulated neurosurgical navigation system using MRI and CT images". IEEE Trans. on Biomedical Eng. 35(2), 1998.

5. Akatsuka, Y., Kawamata, T., Fujii, M., et al., "AR navigation system for neurosurgery". Proc. of Medical Image Computing and Computer Assisted Intervention, 2000.

6. Grimson, E., Leventon, M., Ettinger, G., et al., "Clinical experience with a high precision image-guided neurosurgery system". Proc. of Medical Image Computing and Computer Assisted Intervention, 1998, pp 63-72.

7. Chen, M.D., Wang, T., Zhang, Q.X., et al., "A robotics system for stereotactic neurosurgery and its clinical application". Proc. Int. Conf. on Robotics and Automation, 1998.

8. Masamune, K., Ji, L.H., Suzuki, M., Dohi, T., Iseki, H., et al., "A newly developed stereotactic robot with detachable drive for neurosurgery". Proc. of Medical Image Computing and Computer Assisted Intervention, 1998, pp 215-222.

9. Davies, B., Starkie, B., Harris, S., et al., "Neurobot: a special-purpose robot for neurosurgery". Proc. Int. Conf. on Robotics and Automation, 2000, pp 410-414.

10. Hang, Q., Zamorano, L., Pandya, A., et al., "The application of the NeuroMate robot: a quantitative comparison with frameless and frame-based surgical localization systems". Computer Aided Surgery 7(2), 2002, pp 90-98.

11. Joskowicz, L., Shoham, M., Shamir, R., Freiman, M., Zehavi, E., Shoshan, Y., "Miniature robot-based precise targeting system for keyhole neurosurgery: concept and preliminary results". 19th Int. Conf. on Computer Assisted Radiology and Surgery, CARS 2005, H.U. Lemke et al., editors, Elsevier, 2005.

12. Shoham, M., Burman, M., Zehavi, E., et al., "Bone-mounted miniature robot for surgical procedures: concept and clinical applications". IEEE Trans. on Robotics and Automation 19(5), 2003, pp 893-901.

13. Wolf, A., Shoham, M., Schinder, M., Roffman, M., "Feasibility study of a mini robotic system for spinal operations: analysis and experiments". European Spine Journal, 2003.

14. Yaniv, Z., Joskowicz, L., "Registration for robot-assisted distal locking of long bone intramedullary nails". IEEE Trans. on Medical Imaging, 2005.

15. Fichtinger, G., Deguet, A., Masamune, K., Balogh, E., Fischer, G.S., Mathieu, H., Taylor, R.H., Zinreich, S.J., Fayad, L.M., "Image overlay guidance for needle insertion in CT scanner". IEEE Trans. on Biomedical Engineering 52(8), pp 1415-1424, 2005.

16. Knoop, H., Peters, H., Raczkowsky, J., Eggers, G., Rotermund, F., Wörn, H., "Integration of a surgical robot and intraoperative imaging". Proc. of Computer Assisted Radiology and Surgery, CARS 2005, pp 595-599, 2005.

17. Vahala, E., Ylihautala, M., Tuominen, J., Schiffbauer, H., Katisko, J., Yrjana, S., Vaara, T., Ehnholm, G., Koivukangas, J., "Registration in interventional procedures with optical tracking". Journal of Magnetic Resonance Imaging 13, pp 93-98, 2001.

18. Fitzpatrick, J.M., West, J.B., Maurer, C.R., Jr., "Predicting error in rigid-body point-based registration". IEEE Trans. on Medical Imaging 17(5), pp 694-702, 1998.

19. Shamir, R., Freiman, M., Joskowicz, L., Shoham, M., Zehavi, E., Shoshan, Y., "Robot-assisted image-guided targeting for minimally invasive neurosurgery: planning, registration, and in-vitro experiment". Proc. of Medical Image Computing and Computer Assisted Intervention, MICCAI 2005, LNCS 3750, pp 131-138, 2005.


Automatic Positioning of a Laparoscope by Preoperative Workspace Planning and Intraoperative 3D Instrument Tracking

Atsushi Nishikawa1, Kanako Ito1, Hiroaki Nakagoe1, Kazuhiro Taniguchi1, Mitsugu Sekimoto2, Shuji Takiguchi2, Yosuke Seki2, Masayoshi Yasui2, Kazuyuki Okada2, Morito Monden2, and Fumio Miyazaki1

1 Department of Mechanical Science and Bioengineering, Graduate School of Engineering Science, Osaka University, Japan
2 Department of Gastroenterological Surgery, Graduate School of Medicine, Osaka University, Japan

Abstract. This paper presents a robotic assistant system that provides precise positioning of the laparoscope and a stable image without the support of a human camera assistant. Using an optical 3D tracking system, we realize automatic positioning of the laparoscope by preoperative workspace planning and intraoperative 3D instrument tracking. In the preoperative planning stage, the surgeon can define several workspaces and, for each workspace, the most favorable zooming ratio. During the operation, if the tip of the surgical instrument lies within a workspace defined in the preoperative planning stage, the system automatically manipulates the laparoscope so that the workspace is both centered in the laparoscope image and magnified with the predefined zooming ratio. A laparoscopic cholecystectomy simulation on a pig organ by a single surgeon shows the effectiveness of the proposed system.

1 Introduction

In current laparoscopic surgery, the vision of the operating surgeon usually depends on the camera assistant responsible for guiding the laparoscope. The assistant holds the laparoscope for the surgeon and positions the scope according to the surgeon's instructions. This method of operation is frustrating and inefficient for the surgeon, because commands are often interpreted and executed erroneously by the assistant. Also, the views may be suboptimal and unstable because the scope is sometimes aimed incorrectly and vibrates due to the assistant's hand tremors. To solve these problems, several robotic laparoscope positioning systems have been developed in the last ten years (see [1] for a comprehensive review of passive and active (robotic) laparoscope holders). Although these camera positioners may provide more precise positioning of the laparoscope and stable images, they usually must be controlled by the operating surgeon himself/herself using a human-machine interface such as instrument-mounted joysticks, foot pedals, voice controllers, or head/face motion-activated systems. This is an additional task, distracting the surgeon's attention from the main region of interest, and it may result in frustration and longer surgery time.

To release the surgeon from the task of controlling the view and to automatically offer an optimal and stable view during laparoscopic surgery, several automatic camera positioning systems have been devised [2–6]. Basically, these systems visually extract the shape and/or position of the surgical instrument from laparoscopic images in real time and automatically manipulate the laparoscope so that the tip of the instrument is always centered in the displayed image, based on the simple idea that the projected position of the distal end of the surgical tool corresponds to the surgeon's region of interest in the laparoscopic images. Besides the centering of the most interesting area, there is an additional important factor that defines a good image of the surgical scene: the zooming ratio, which corresponds to the amount of insertion of the laparoscope along its longitudinal axis. A few fully automatic positioning systems defined the zooming ratio uniformly as a function of the estimated distance between the tip of the tool and the laparoscope [2][3] or of the area ratio between the visible tool and the whole image [4]. These approaches may completely remove the surgeon's camera control burden but may not offer the view that the surgeon wants, because the most adequate zooming ratio varies widely during surgery, depending on both the surgical procedure/phase and the habits/preferences of the operating surgeon. For this reason, most of the previous instrument tracking systems [5][6] gave up the idea of systematic control of the zooming parameter and set it manually through conventional human-machine interfaces, which placed an extra control burden on the surgeon.

Ko and Kwon [7] first tackled this problem: they associated the types of surgical instruments (such as dissector, grasper, clip applier, and scissors) with surgical procedures and defined the zooming ratio adaptively according to the tool ID. In order to identify the tool type during surgery, they attached a colored ID marker to the tip of each surgical tool and extracted the marker from the laparoscopic images using a real-time color image processing technique. The tool variations, however, do not fully reflect the zooming variations during surgery. They therefore added a recognizer of the surgeon's voice commands, such as "zoom in" and "zoom out", to their instrument tracking system in order to allow adjustment of the zooming ratio.

In this paper, we realize automatic positioning of a laparoscope by preoperative workspace planning and intraoperative 3D instrument tracking. Unlike the previous automatic camera positioning systems, the proposed method defines the zooming ratio adaptively according to where the surgical instrument is located. To accomplish this, we use an optical 3D tracking system (Polaris Accedo, NDI Corporation). In the preoperative planning stage, the surgeon can define several workspaces and, for each workspace, the most favorable zooming ratio. During the operation, if the tip of the surgical instrument lies within a workspace defined in the preoperative planning stage, the system automatically manipulates the laparoscope so that the workspace is both centered in the laparoscope image and magnified with the predefined zooming ratio. A laparoscopic cholecystectomy simulation on a pig organ by a single surgeon shows the effectiveness of the proposed system.

Fig. 1. System configuration.

2 Method

2.1 System Overview

The system configuration is shown in Figure 1. Our laparoscope positioning system consists primarily of an optical 3D measurement system (Polaris Accedo, NDI Corporation), an all-purpose PC (CPU: AMD Athlon 1.4 GHz, memory: 256 MB, OS: Vine Linux 2.5), a three-DOF robot manipulator that holds the laparoscope (for details, see [8]), a scan converter (DSC06d-HR, Digital Arts Corporation) for superimposing graphics on the scope image, and a 21-inch TV monitor. The following three processes run on the main computer: the "system core processor", which includes the proposed algorithm; the "overlapped graphics generator", which outputs feedback information for the surgeon graphically on the PC display; and the "robot manipulator controller", which converts the surgical view control commands to motor control commands. In the next sections, we explain the system core process in more detail.


2.2 3D Instrument Tracking and the Projection of the Instrument Tip onto the Laparoscopic Images

The proposed system uses the commercial optical tracking system, Polaris Accedo, to measure the 3D position/pose of both the instrument and the laparoscope in real time, and "calculates" the projected position of the tip of the instrument on the laparoscopic images without any image processing techniques. (Notice that, in our system, the projection of the tool is not extracted from the real images but indirectly calculated from the 3D measurement values.) As a substitute for image processing, precise camera calibration (calculation of the 3D-2D projection matrix) is required in advance. In this paper, we performed this calibration using the image overlay technique described in [9].
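This "calculated" projection can be sketched as follows: the tracker-measured tip position is mapped through a 3×4 pinhole projection matrix obtained by calibration. The matrix values below are illustrative placeholders, not the calibration from [9]:

```python
import numpy as np

# Pre-calibrated pinhole projection matrix P = K [R | t].
# Illustrative values: 800 px focal length, principal point (320, 240),
# and the camera frame taken equal to the tracker frame for simplicity.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])
P = K @ Rt  # 3x4 projection matrix

def project_tip(p_3d):
    """Project a 3D tip position (camera coordinates, mm) to pixel coordinates."""
    ph = P @ np.append(p_3d, 1.0)   # homogeneous image point
    return ph[:2] / ph[2]           # perspective divide

# Tool tip 100 mm in front of the camera, slightly off-axis.
u, v = project_tip(np.array([10.0, -5.0, 100.0]))  # -> u = 400.0, v = 200.0
```

This is exactly the substitution the text describes: no image processing is needed, because the tip's 2D image location follows directly from the 3D measurements and the calibration.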

2.3 Preoperative Workspace Planning

In the preoperative planning stage, the laparoscopic camera is controlled either by a camera assistant using a remote controller or by the surgeon using a conventional human-machine interface, while the system measures the 3D position/pose of both the laparoscope and the surgical instrument manipulated by the surgeon in real time (an implementation example is shown in section 3).

Assume that the whole surgical space is composed of several workspaces, denoted by Wi, i = 1, 2, ..., N. For simplicity, each workspace Wi is represented by a sphere with a fixed radius r. The number of workspaces, N, is decided by the operating surgeon according to the surgical procedures, the surgery type, or his/her preferences. The preoperative workspace planning steps are summarized as follows.

1. i ← 1.
2. Control the position of the laparoscopic camera such that the workspace Wi is both centered in the image and magnified with the most favorable zooming ratio.
3. Point at the center of the workspace with the tip of the surgical instrument and simultaneously record the following two values to the PC:
   - the 3D position of the center of the workspace Wi, which equals the 3D position of the tip of the instrument at that moment;
   - the distance between the center of the workspace and the tip of the laparoscope, denoted by di, which corresponds to the zooming ratio.
4. If i = N, stop. Otherwise, i ← i + 1 and go to Step 2.
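The planning steps can be sketched as a simple recording loop; `read_tip_position`, `read_scope_tip_position`, and `wait_for_click` are hypothetical stand-ins for the tracker and mouse interfaces, not names from the actual system:

```python
import numpy as np

def plan_workspaces(n_workspaces, read_tip_position, read_scope_tip_position,
                    wait_for_click):
    """Record workspace centers and zooming ratios (steps 1-4 in the text).

    For each workspace the surgeon first centers and zooms the view, then
    clicks while pointing at the workspace center with the instrument tip.
    """
    centers, zoom_ratios = [], []
    for i in range(n_workspaces):                 # steps 1 and 4: iterate i = 1..N
        wait_for_click()                          # surgeon confirms view (steps 2-3)
        tip = np.asarray(read_tip_position())     # center of workspace W_i
        scope = np.asarray(read_scope_tip_position())
        centers.append(tip)
        # d_i: tip-to-scope distance, i.e. the zooming ratio for W_i.
        zoom_ratios.append(np.linalg.norm(tip - scope))
    return np.array(centers), np.array(zoom_ratios)

# Stub interfaces for illustration: two workspaces at fixed positions (mm).
tips = iter([[0.0, 0.0, 50.0], [20.0, 0.0, 60.0]])
scopes = iter([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
centers, d = plan_workspaces(2, lambda: next(tips), lambda: next(scopes),
                             lambda: None)
```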

2.4 Automatic Positioning of the Laparoscope

During surgery, the proposed system automatically manipulates the laparoscope such that the workspace is both centered in the laparoscopic image and magnified with the predefined zooming ratio, while measuring the 3D position/pose of both the laparoscope and the surgical instrument in real time.

Let U be the inclusive workspace that covers all the workspaces {Wi} (i = 1, 2, ..., N), and let d0 be the zooming ratio for observing the whole workspace U.


Fig. 2. Tool position: case 1(a). Fig. 3. Tool position: case 1(b).
Fig. 4. Tool position: case 1(c). Fig. 5. Tool position: case 2.

Also, let xt be the 3D position of the tip of the surgical instrument at time t (t = 1, 2, ...). The proposed automatic laparoscope positioning algorithm is summarized as follows.

Case 1: The tip of the instrument xt lies within the workspace U (xt ∈ U). Case 1 is broken down into the following three subcases: (a), (b), (c).

(a) If the tip of the surgical instrument xt lies within exactly one workspace Wi, that is, ∃i: xt ∈ Wi and ∀j ≠ i: xt ∉ Wj (i, j = 1, 2, ..., N), then insert/retract the laparoscope to reach the desired zooming ratio di while centering the tip of the instrument in the image by "virtual" visual servoing. Figure 2 shows three examples of Case 1(a), for i = 1, 2, and 3. (The term "virtual" means that the system uses 2D image information indirectly calculated from the 3D measurement values and the 3D-2D projection matrix; see also section 2.2.)

(b) If the tip of the instrument xt lies within multiple workspaces Wα and Wβ, that is, ∃α: xt ∈ Wα and ∃β ≠ α: xt ∈ Wβ (α, β = 1, 2, ..., N), then calculate the new zooming ratio d* by linear interpolation between dα and dβ, and insert/retract the laparoscope to reach the zooming ratio d* while centering the tip of the instrument in the image by "virtual" visual servoing. Figure 3 shows an example of Case 1(b), with α = 1 and β = 2. (If the tip of the instrument lies within three or more workspaces, the same interpolation algorithm is executed.)

(c) If the tip of the surgical instrument xt lies outside all the workspaces {Wi}, that is, ∀i: xt ∉ Wi (i = 1, 2, ..., N) (see Figure 4), then keep the current zooming ratio while centering the tip of the instrument in the image by "virtual" visual servoing.

Case 2: The tip of the instrument xt lies outside the workspace U (xt ∉ U) (see Figure 5). In this case, the laparoscope is retracted to reach the predefined zooming ratio d0 (zooming out) while centering the tip of the instrument in the image by "virtual" visual servoing.

Fig. 6. Preoperative workspace planning system.
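The case analysis can be sketched as a zoom-selection function. The inverse-distance weighting used for the overlapping-workspace case below is our own illustrative choice; the paper states only that linear interpolation between dα and dβ is used:

```python
import numpy as np

def select_zoom(x_t, centers, r, d, d0, d_current, in_U):
    """Choose the target zooming ratio for tip position x_t (Cases 1-2).

    centers: (N, 3) workspace centers W_i; r: workspace radius;
    d: (N,) per-workspace zooming ratios d_i; d0: whole-workspace ratio;
    d_current: ratio to keep in Case 1(c); in_U: whether x_t lies inside U.
    """
    if not in_U:
        return d0                                   # Case 2: zoom out to see U
    dist = np.linalg.norm(centers - x_t, axis=1)
    inside = dist <= r
    k = int(inside.sum())
    if k == 1:
        return float(d[inside][0])                  # Case 1(a): single workspace
    if k >= 2:
        # Case 1(b): interpolate between the ratios of the overlapping
        # workspaces, weighted by closeness to each center (illustrative).
        w = 1.0 / (dist[inside] + 1e-9)
        return float(np.dot(w, d[inside]) / w.sum())
    return d_current                                # Case 1(c): keep current ratio

# Two overlapping workspaces of radius 12 mm, as in the experiment.
W = np.array([[0.0, 0.0, 50.0], [15.0, 0.0, 50.0]])
d = np.array([56.8, 53.7])
z = select_zoom(np.array([7.5, 0.0, 50.0]), W, 12.0, d, d0=80.0,
                d_current=60.0, in_U=True)
```

A tip midway between the two centers gets the mean of their ratios; this frequent re-interpolation in overlap regions is exactly the source of the small view motions discussed in section 3.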

3 Results and Discussion

To evaluate the validity of the proposed system, a laparoscopic cholecystectomy simulation with a pig organ was performed by a single surgeon.

At first, in the preoperative planning stage, an endoscopic surgeon controlled the position of the laparoscope manipulator using a human-machine interface and defined and registered several workspaces and the most favorable zooming ratio for each workspace. Figure 6 illustrates the 2D image returned to the surgeon in our preoperative workspace planning system. This system utilizes the optical tracker, Polaris Accedo, to measure the 3D position/pose of both the instrument and the laparoscope in real time, and calculates the projected position of the tip of the instrument on the laparoscopic images (see section 2.2). As shown in Figure 6, if the resulting tip projection lies neither in the elliptic area A nor in the top-left/top-right boxes, the system controls the laparoscopic camera by virtual visual servoing such that the tip of the instrument moves into the elliptic area B (⊂ A). If the projection of the instrument tip is within the top-left (ZOOM IN) or top-right (ZOOM OUT) area, the system inserts or retracts the laparoscope along its longitudinal axis. That is, our preoperative planning system normally centers the tip of the instrument in the laparoscopic image; only while the user (surgeon) intentionally puts the tool tip into the ZOOM IN/ZOOM OUT boxes does the system magnify or reduce the viewing field. In the current version, a hand switch (mouse) for memorizing a set of workspaces and the corresponding zooming ratios is also provided: both the 3D position of the tip of the instrument, which corresponds to the workspace position, and the corresponding zooming ratio (the distance between the tool tip and the distal end of the laparoscope) are recorded by the system when the mouse button is clicked by the user.

Fig. 7. Preoperative planning results: the configuration of workspaces.

In this experiment, the radius of the workspaces, r, was set to 12 mm, and the number of workspaces, N, was 13. Figure 7 shows the configuration of the selected workspaces, and Table 1 summarizes the resulting zooming ratios. As shown in Table 1, the most adequate zooming ratio varies depending on where the surgical instrument is located. Notice that the previous automatic camera positioning systems cannot cope with such zooming variations.

Table 1. Preoperative planning results: zooming ratio [unit: mm]

Workspace No.  1     2     3     4     5     6     7     8     9     10    11    12    13
Zooming ratio  56.8  53.7  63.8  60.5  49.9  42.2  40.1  49.8  54.8  53.7  59.2  72.9  74.8

Fig. 8. Automatic positioning of the laparoscope.

The surgeon then used the proposed system to perform a laparoscopic cholecystectomy simulation with the pig organ. Based on the preoperative planning results, the system automatically manipulated the laparoscope until the removal of the gallbladder (see Figure 8). As a result, the surgeon successfully and safely completed the whole operative procedure without the support of a human camera assistant. The operation time was 1070 seconds (about 18 minutes).

We received many positive comments on the proposed system from the operating surgeon, such as a comfortable zooming ratio, effective tool tracking, and no fatigue (no physical/mental stress). However, he did make the negative comment that a small view motion sometimes occurred when he would have preferred to maintain the current view. In the current camera positioning algorithm described in section 2.4, the zooming ratio may change frequently when the tip of the surgical instrument lies within multiple workspaces. This is caused by the use of a simple linear interpolation technique. In this experiment, most of the selected workspaces partially overlapped (see Figure 7); that is why the operating surgeon felt the slight image movements.

Figure 9 shows a comparison of zooming motions between the proposed system (a) and a human camera assistant (b). Although the overall motion tendencies are very similar, the slight movements are absent in the case of the human assistant. The proposed zooming algorithm can stand further improvement, which is one of our future works.


(a) the proposed system  (b) a human camera assistant

Fig. 9. Comparison of zooming control between the proposed system and a human camera assistant (horizontal axis: time [h:m:s]; vertical axis: distance between the tip of the laparoscope and the incision point [mm]).

4 Conclusion

We proposed a new approach to the automatic positioning of a laparoscope. It relies on a preoperative planning phase in which the surgeon identifies multiple workspaces, defining the desired image center and zooming ratio in each one. A 3D tracking system localizes both the laparoscope and the surgical instrument, and the laparoscope is automatically moved according to the preoperatively defined zooming ratios and the real-time 3D position of the surgical instrument. In a laparoscopic cholecystectomy simulation with a pig organ by a single surgeon, our system succeeded in completely relieving the surgeon from manipulating the laparoscope during the whole operative procedure, while achieving safety, effective tool tracking, and an appropriate (but frequently changed) zooming ratio. We are now studying an improved method to reduce unnecessary changes of the zooming ratio. In order to validate the advantage of the proposed system, a more exhaustive comparative study between our system and other existing robotic/human camera assistants is of paramount importance and is on-going. An in vivo test is also important for evaluating the applicability of our system to clinical use and is one of our future works.

Acknowledgments

This work was supported in part by a Grant-in-Aid for Scientific Research (C) (No. 16591253) of the Japan Society for the Promotion of Science, a Grant-in-Aid for Young Scientists (B) (No. 17760352) of the Ministry of Education, Culture, Sports, Science and Technology, Japan, and the Seeds Encouragement Program of the Japan Science and Technology Agency.

References

1. Jaspers, J.E.N., Breedveld, P., Herder, J.L., Grimbergen, C.A.: Camera and instrument holders and their clinical value in minimally invasive surgery. Surg Laparosc Endosc Percutan Tech 14(3) (2004) 145–152

2. Wei, G.Q., Arbter, K., Hirzinger, G.: Real-time visual servoing for laparoscopic surgery. IEEE Engineering in Medicine and Biology Magazine 16 (1997) 40–45

3. Zhang, X., Payandeh, S.: Application of visual tracking for robot-assisted laparoscopic surgery. Journal of Robotic Systems 19(7) (2002) 315–328

4. Casals, A., Amat, J., Laporte, E.: Automatic guidance of an assistant robot in laparoscopic surgery. In: Proceedings of the 1996 IEEE International Conference on Robotics and Automation. (1996) 895–900

5. Wang, Y.F., Uecker, D.R., Wang, Y.: A new framework for vision-enabled and robotically assisted minimally invasive surgery. Computerized Medical Imaging and Graphics 22 (1998) 429–437

6. Nishikawa, A., Asano, S., Fujita, R., Yamaguchi, S., Yohda, T., Miyazaki, F., Sekimoto, M., Yasui, M., Takiguchi, S., Miyake, Y., Monden, M.: Selective use of face gesture interface and instrument tracking system for control of a robotic laparoscope positioner. In: Proceedings of the 6th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2003). (2003) 973–974

7. Ko, S.Y., Kwon, D.S.: A surgical knowledge based interaction method for a laparoscopic assistant robot. In: Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (Ro-Man 2004). (2004) 313–318

8. Nishikawa, A., Hosoi, T., Koara, K., Negoro, D., Hikita, A., Asano, S., Kakutani, H., Miyazaki, F., Sekimoto, M., Yasui, M., Miyake, Y., Takiguchi, S., Monden, M.: FAce MOUSe: A novel human-machine interface for controlling the position of a laparoscope. IEEE Trans on Robotics and Automation 19(5) (2003) 825–841

9. Yamaguchi, S., Nishikawa, A., Shimada, J., Itoh, K., Miyazaki, F.: Real-time image overlay system for endoscopic surgery using direct calibration of endoscopic camera. In: Proceedings of the 19th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS2005). (2005) 756–761


Port Placement Based on Robot Performance Optimization

Ana Luisa Trejos1,2, Rajni Patel1,2, Bob Kiaii1,3, Ian Ross4,

1 Canadian Surgical Technologies & Advanced Robotics, London Health Sciences Centre, 339 Windermere Road, London, ON, Canada N6A 5A5

2 Department of Electrical and Computer Engineering, The University of Western Ontario, London, ON, Canada N6A 5B9

3 Department of Surgery, Division of Cardiac Surgery, The University of Western Ontario, London Health Sciences Centre, 339 Windermere Road, London, ON, Canada N6A 5A5

4 Department of Diagnostic Radiology and Nuclear Medicine, The University of Western Ontario, London Health Sciences Centre, London, ON, Canada N6A 5A5

{analuisa.trejos, rajni.patel}@c-star.ca, {bob.kiaii, ian.ross}@lhsc.on.ca

Abstract. Determining the proper location of ports during robot-assisted minimally-invasive cardiac surgery is a critical task that can affect the outcome of the procedure. Improperly placing the ports can cause robot collisions, inability to reach the surgical site, inability to manipulate the tools properly, or collisions between the tools inside the patient's body. This paper proposes a new port placement selection method based on maximizing robot performance while ensuring that the tools reach the entire surgical workspace. A modified performance measure is proposed and used to optimize port placement when using the da Vinci™ surgical manipulator for artery dissection during coronary bypass surgery. Theoretical results are compared with the actual ports used during a robotic surgical procedure as selected by an expert surgeon.

1 Introduction

In recent years, scientists have been studying ways of reducing the invasiveness of cardiac surgical procedures, in order to reduce surgery time and cost, patient recovery time, and morbidity rates. One way to minimize the damage to the patient's body is to perform the procedure through small incisions in the chest wall. Through these incisions, surgical tools and a camera (or endoscope) are inserted. This procedure, called endoscopic surgery, requires the placement of ports at the point of incision to provide support for each instrument and ensure its smooth motion.

In the last few years, advances in robotics and telepresence systems have led to the development of remotely controlled robotic manipulators, which have been able to reduce or eliminate most of the drawbacks of endoscopic surgery, namely, reduced dexterity, hand motion reversal, and increased manipulation forces. The da Vinci™ Surgical System [1] created by Intuitive Surgical is the only commercially available system that is approved for surgery in North America. The use of this type of system has made endoscopic heart surgery possible, minimizing the size of the surgical incisions while maintaining operative dexterity and surgical accuracy. Although the technology of remotely manipulated surgical instruments continues to progress, there are still some difficulties with the planning process that require further investigation.

Recent work in surgical planning has focused on the development of procedures and guidelines that help plan an efficient robotically-assisted intervention. One of the most critical issues when planning robotic surgeries is the placement of the entry ports in the patient's body. Port placement guidelines for robotically-assisted endoscopic surgery were developed in [2]. Methods based on these guidelines have been used to date; however, the ability to use them properly requires a considerable amount of experience on the surgeon's part.

Several groups have worked on methods to select ideal port locations [3], [4], [5]. The most significant contributions are the development of an optimization algorithm for port placement [6] and an optimization strategy for port and robot positioning [7]. Both of these methods require a surgeon to define the "ideal" instrument position and orientation with respect to the surgical targets, which have been determined through clinical studies and therefore vary depending on the surgeon's preference. There has not been a proper study of what the ideal orientation of the tools should be, based on dexterity, interference of the tools, and workspace requirements. Furthermore, their optimization approaches are independent of robot positioning. Although this guarantees that the port location chosen is based only on the requirements of the surgery, it completely ignores the requirements of the robot and may cause difficulties in terms of reduced dexterity and inability to reach the surgical site.

This paper proposes to utilize performance optimization techniques, currently used for robot manipulation and design, to complement port placement and robot configuration optimization for minimally invasive robotics-assisted cardiac surgery. The kinematic structure of the manipulator used in this study is analyzed in the following section.

1.1 Kinematic Structure of the Surgical Robot

The surgical robotic system used in this study is the da Vinci™ Surgical System by Intuitive Surgical [1]. A kinematic schematic of one of the tool arms is shown in Fig. 1. All of the tool arms have the same kinematic structure: six non-actuated joints, six active joints, and several passive joints. The passive joints form a double parallelogram that creates a remote centre of motion (RCM). This ensures that no translational motion occurs at the entry point. Since the performance of the manipulator during surgery is only affected by the movement of the active joints, the kinematic analyses of the active and non-actuated sections have been considered separately. For the active section, the link between joints 6 and 7 becomes the base, and the joints forming the double parallelogram can be represented by a single joint at the entry point (β). A more detailed kinematic analysis is presented in [8].


Fig. 1. Kinematic configuration of the tool arms of the da Vinci manipulator with gripper attachment

1.2 Performance Measures

Once a kinematic model of the manipulator is defined, its performance can be measured in many ways depending on the application. The definition of indices that characterize the performance of a robotic manipulator allows the optimization of robot parameters during the design stages and the optimization of manipulator poses in the presence of redundant degrees of mobility. Most performance measures are based on the manipulator Jacobian matrix, and measure such properties as dexterity, manipulability, distance from joint singularities, and isotropy.

The isotropy of the manipulator indicates the level of uniformity in its behavior, i.e., its ability to effect accurate and consistent speeds and forces. Different methods of measuring isotropy have been proposed in the literature [9], [10]. One particular measure, termed the Global Conditioning Index (GCI), is based on the distribution of the conditioning index (k) over the entire manipulator workspace. The GCI has been selected for port placement optimization, and is defined as follows [11]:

GCI = ( ∫_W (1/k) dW ) / ( ∫_W dW )   (1)

where k refers to the condition number of the Jacobian over the entire workspace W. Details of the port placement optimization method are outlined in the section below.
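As a minimal numeric sketch of Eq. (1), with two loud assumptions: a planar two-link arm stands in for the real manipulator, and a coarse joint-space grid replaces the workspace integral. The GCI is then approximated by averaging 1/k over the sampled configurations.

```python
# Sketch: approximate GCI = (integral of 1/k dW) / (integral of dW) by sampling.
# A planar 2R arm with unit link lengths stands in for the da Vinci kinematics.
import math

def jacobian(q1, q2):
    """2x2 Jacobian of a planar 2R arm with unit link lengths."""
    return [[-math.sin(q1) - math.sin(q1 + q2), -math.sin(q1 + q2)],
            [ math.cos(q1) + math.cos(q1 + q2),  math.cos(q1 + q2)]]

def condition_number(J):
    """Condition number k = sigma_max / sigma_min of a 2x2 matrix."""
    a, b = J[0]; c, d = J[1]
    # singular values from the eigenvalues of J^T J (closed form for 2x2)
    s = a*a + b*b + c*c + d*d
    det = abs(a*d - b*c)
    disc = math.sqrt(max(s*s - 4*det*det, 0.0))
    smax = math.sqrt((s + disc) / 2)
    smin = math.sqrt(max((s - disc) / 2, 1e-300))
    return smax / smin

def gci(samples=50):
    """Average of 1/k over a grid of joint configurations (uniform weight)."""
    total, n = 0.0, 0
    for i in range(samples):
        for j in range(samples):
            q1 = -math.pi + 2*math.pi*i/samples
            q2 = 0.2 + (math.pi - 0.4)*j/samples   # stay away from singular q2 = 0
            total += 1.0 / condition_number(jacobian(q1, q2))
            n += 1
    return total / n

value = gci()
# GCI lies in (0, 1]; larger means more uniform (isotropic) behaviour.
```

Since k ≥ 1 everywhere, 1/k lies in (0, 1], and so does the averaged GCI; a value near 1 indicates near-isotropic behaviour over the sampled workspace.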

2 Optimization Methodology

Optimization algorithms were developed to obtain the port location and robot position that maximize the GCI (Fig. 2). The algorithm starts by defining the set of possible port locations for each arm. For each port location it then defines the set of possible active base locations, which, due to the presence of the remote centre of motion, forms a section of a sphere around the entry point (also shown in Fig. 2). This set is then reduced to include only those locations from which the tools can reach the surgical workspace. An optimization algorithm then selects the base location that maximizes the performance. Finally, the port location corresponding to the active base location that produced the maximum performance is selected. The global conditioning index (GCI) was implemented in MATLAB using the triplequad function to integrate over the workspace and the fmincon function to search for the best location [12].
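The loop structure described above can be sketched as follows. This is a schematic in Python rather than the paper's MATLAB implementation, and the candidate sets, reachability test, and scoring function below are all illustrative placeholders, not the actual geometry.

```python
# Schematic of the port/base optimization loop (placeholder geometry and scoring).

def optimize_ports(port_candidates, base_candidates_for, reaches_workspace, gci):
    """Return (best_port, best_base, best_score) maximizing GCI over feasible bases."""
    best = (None, None, float("-inf"))
    for port in port_candidates:
        # bases lie on a sphere section around the entry point (RCM constraint);
        # keep only those from which the tools can reach the surgical workspace
        feasible = [b for b in base_candidates_for(port) if reaches_workspace(port, b)]
        for base in feasible:
            score = gci(port, base)
            if score > best[2]:
                best = (port, base, score)
    return best

# Toy stand-ins (hypothetical): ports indexed on a line, bases as angles,
# and a score that peaks at port 2 with the base angle at its maximum.
ports = [0, 1, 2, 3]
best = optimize_ports(
    ports,
    base_candidates_for=lambda p: [0.0, 0.5, 1.0],
    reaches_workspace=lambda p, b: True,
    gci=lambda p, b: 1.0 - (p - 2) ** 2 * 0.1 + 0.05 * b,
)
```

In the paper's pipeline the inner maximization is performed with fmincon over a continuous base parameterization; the exhaustive double loop above is only the simplest way to show the same structure.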

Fig. 2. Flow diagram of the optimization procedure (left) and representation of the active base parameter set and workspace set (right)

2.1 Weighted Performance

Although the dexterity optimization analysis provides the best port and active base location, positioning the robot at this ideal base location is not an easy task. In a surgical setting, there is no feedback regarding the robot joint angles or end-effector position that could be used to ensure that the ideal base pose is being achieved. Furthermore, the robot is positioned by unlocking all of the joints of the passive section and manually moving all of the links at once, until the ideal position is found. This makes it very difficult to achieve an ideal configuration, even if position feedback from the joints were available. It is therefore important to consider port locations from which several base positions provide good performance, such that if the ideal position is missed by a few degrees, the overall performance of the robot is not significantly affected.

A different way of analyzing the optimality of one port location over another is to look at what percentage of the set of possible base locations can reach the entire workspace. This reachability percentage (r) can be combined with the GCI to define a new performance index, as follows:


WGCI = r * GCI (2)

This weighted performance measure, determined for each possible port location, calculates the maximum performance of the set of possible base locations and multiplies it by the percentage of the parameter set from which the end effector can reach the entire workspace.
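A minimal sketch of Eq. (2) with hypothetical numbers: for each candidate port, the fraction r of base locations that can reach the whole workspace scales the best GCI achieved from that port, so a port that tolerates imprecise base placement can beat one with a higher but fragile peak.

```python
# Sketch of the weighted performance index WGCI = r * GCI for one candidate port.

def weighted_index(base_results):
    """base_results: list of (reaches_all: bool, gci: float) for one port's base set."""
    if not base_results:
        return 0.0
    reachable = [g for ok, g in base_results if ok]
    if not reachable:
        return 0.0                           # no base can reach the workspace
    r = len(reachable) / len(base_results)   # reachability percentage
    return r * max(reachable)                # weight the best GCI by r

# Port A (hypothetical): high peak GCI, but only 1 of 4 bases reaches the workspace.
port_a = [(True, 0.10), (False, 0.09), (False, 0.08), (False, 0.07)]
# Port B (hypothetical): lower peak GCI, but every base reaches the workspace.
port_b = [(True, 0.08), (True, 0.07), (True, 0.07), (True, 0.06)]
wa = weighted_index(port_a)   # 0.25 * 0.10 = 0.025
wb = weighted_index(port_b)   # 1.00 * 0.08 = 0.080
```

Port B wins under the weighted measure even though Port A has the higher raw GCI, which is exactly the robustness-to-base-placement behaviour the weighting is meant to capture.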

2.2 Experimental Evaluation

A patient was scanned using computed tomography (CT) a few days prior to the surgical procedure, using a four-slice multidetector CT scanner with a 0.8-second rotation time (Light Speed; GE Medical Systems, Milwaukee, Wis.). The scan technique obeyed the following protocol: unenhanced, helical 5 mm collimation with a 3 mm reconstruction interval, 0.75:1 pitch, and a 36 cm field of view. The position of the patient during the scanning mimicked that for the actual surgical procedure.

Surgery was then performed on the patient using the da Vinci surgical manipulator to harvest the internal mammary artery. During the procedure, digital pictures of the patient were taken to show the location of the actual ports that were used for each of the arms. The left arm of the robot encountered some difficulties during the procedure due to the arm interfering with the shoulder of the patient. Halfway through the procedure, the location of the port was changed from the third intercostal space to the fourth space, to allow the instrument to reach the entire length of the artery. The right arm of the robot was always able to reach the entire workspace and maneuver well.

After the procedure, the CT images were used to manually select the set of possible port locations for each of the arms, based on the anatomy of the intercostal spaces and the distance to the internal features of the patient. The information from these images was correlated with the information from the digital pictures to determine where the ports were located during surgery. From the set of possible port locations, the optimization algorithms were used to select the best port location.

3 Results and Analysis

An initial analysis was performed similar to that presented in [8], but using true patient dimensions for the workspace and the possible port locations. Fig. 3 shows the results obtained for the tool arms. For the left arm, since the port location was switched from the third to the fourth intercostal space, possible port locations at both spaces were evaluated. For the right arm, the entire surgical area could not be reached from seven of the port locations (not shown in the graphs below). The performance indices shown in the graphs correspond to the maximum performance possible for each port location. The location of the ports is represented as the distance from the port to the center of the surgical workspace.


Fig. 3. GCI for each possible port location for the tool arms of the da Vinci manipulator: left arm at the third (top left) and fourth (bottom left) intercostal spaces, and right arm (right). The square data point represents the port location that was used during surgery.

3.1 Weighted Performance

The results of the optimization using the weighted performance measure are presented in Fig. 4. The results show that the right arm was quite well positioned: only a 2% increase in performance could have been achieved by selecting a different port.

Fig. 4. Weighted GCI for each possible port location for the tool arms of the da Vinci manipulator: left arm at the third (top left) and fourth (bottom left) intercostal spaces, and right arm (right). The square data point represents the port used during surgery.

For the left arm at the third intercostal space, the new performance measure shows that a 10% increase in performance could have been achieved by selecting a port that was only 15 mm away. At the fourth intercostal space, the port selected was at the lower end of the spectrum, making it possible to select a port with a performance measure almost 70% higher. It should be noted that even the worst performance achieved from the fourth intercostal space is better than the best one achieved from the third space. This result agrees with the port change that occurred during surgery.


Although during the surgical procedure the port location initially chosen for the left arm did not allow the end effector to reach the entire workspace, the dexterity optimization program still considered that the port was a feasible option. It is obvious from these experiments that collisions between the arms and other elements in the room must be considered when selecting the ideal ports. There is still a lot of work to be done in order to develop the reachability evaluation to the point where difficulties like these are identified and an optimal port location is truly determined. Future versions of this optimization will try to address these issues.

It is difficult to determine if a difference in dexterity can be perceived by the clinician when using the robotic manipulator in different configurations. Performance measures are only a means of comparing the different manipulator configurations, and do not provide an absolute measure of performance. In order to determine the impact of the optimization, the percentage increase in performance must be considered. Our analysis indicates that increases in performance of up to 70% can be achieved when comparing the worst port to the best.

4 Conclusions and Future Work

Proper positioning of ports has a large impact on the ability to perform a robotic surgical procedure successfully. Selecting an adequate port location can improve the surgeon's ability to manipulate the instrument, and can ensure that the surgical robot will be able to reach the surgical site without the need for port repositioning. The results presented above show that robot performance can be optimized by selecting adequate port locations.

Initial evaluations using performance measures to optimize robot dexterity show that it is possible to obtain large increases in performance from the least ideal to the optimal port location. When these locations are compared to those used in surgery as chosen by an expert surgeon, the increase in performance is not as significant. However, dexterity optimization measures provide additional input to the surgeon when placing the ports, and can be especially useful for less experienced surgeons.

A significant limitation arises when trying to position the base of the active section at the "ideal" position and orientation. The controls of the da Vinci manipulator provide no joint position feedback, which, when combined with the inability to control the position of the joints independently, makes it completely impractical to transfer the procedure to the operating room. To address this issue, a new performance measure has been proposed. This measure combines the isotropy measure with the ability to reach the surgical workspace from the different base locations, such that, once the port location has been determined on the patient, there is a much better chance that an adequate dexterity will be achieved from whatever base location is chosen.

Apart from the robot dexterity and the ability to reach the entire surgical site, there are other factors that must be considered when selecting port locations. The port placement method proposed in this paper must be complemented by other selection criteria, including: (1) collision avoidance between the arms of the robot, other obstacles in the operating room, and the patient; (2) collision avoidance between the surgical tools; (3) interference avoidance between the tools and the camera field of view; and (4) preservation of the surgeon's intuition compared to open surgery by maintaining the relative orientation between the surgeon's hands and eyes.

Currently, experiments are underway to use three-dimensional models of the patient to model the workspace and to transfer the optimization results to the operating room.

Acknowledgments

This research was supported by the Ontario Research and Development Challenge Fund under grant 00-May-0709, the Natural Sciences and Engineering Research Council (NSERC) of Canada under grant RGPIN-1345, and an infrastructure grant from the Canada Foundation for Innovation awarded to the London Health Sciences Centre (Canadian Surgical Technologies & Advanced Robotics). The authors would like to thank David Browning and David Harrison for their technical support.

References

1. Guthart, G.S., Salisbury, J.K. Jr.: The Intuitive™ telesurgery system: overview and application. IEEE International Conference on Robotics and Automation, Apr 24–28, 2000, San Francisco, CA, pp. 618–621.

2. Tabaie, H.A., Reinbolt, J.A., Graper, W.P., Kelly, T.F., Connor, M.A.: Endoscopic coronary artery bypass graft (ECABG) procedure with robotic assistance. Heart Surgery Forum, vol. 2, no. 4, pp. 310–317, 1999.

3. Marmurek, J., Wedlake, C., Pardasani, U., Eagleson, R., Peters, T.: Image-guided laser projection for port placement in minimally invasive surgery. Medicine Meets Virtual Reality 14, Studies in Health Technologies and Informatics, vol. 119, pp. 367–372, 2006.

4. Austad, A., Elle, O.J., Røtnes, O.J.: Computer-aided planning of trocar placement and robot settings in robot-assisted surgery. International Congress and Exhibition on Computer Assisted Radiology and Surgery CARS 2001, June 27–30, 2001, Berlin, pp. 1020–1026, 2001.

5. Lehmann, G., Chiu, A., Gobbi, D., Starreveld, Y., Boyd, D., Drangova, M., Peters, T.: Towards dynamic planning and guidance of minimally invasive robotic cardiac bypass surgical procedures. International Conference on Medical Image Computing and Computer Assisted Intervention, Oct 14–17, 2001, Utrecht, The Netherlands, pp. 368–375.

6. Cannon, J.W., Stoll, J.A., Selha, S.D., Dupont, P.E., Howe, R.D., Torchiana, D.F.: Port placement planning in robot-assisted coronary artery bypass. IEEE Transactions on Robotics and Automation, vol. 19, no. 5, pp. 912–917, 2003.

7. Adhami, L., Coste-Manière, È.: Optimal planning for minimally invasive surgical robots. IEEE Transactions on Robotics and Automation, vol. 19, no. 5, pp. 912–917, 2003.

8. Trejos, A.L., Patel, R.V.: Port placement for endoscopic cardiac surgery based on robot dexterity optimization. IEEE International Conference on Robotics and Automation, April 18–21, 2005, Barcelona, Spain, pp. 912–917.

9. Kim, J.-O., Khosla, P.K.: Dexterity measures for design and control of manipulators. IEEE/RSJ International Workshop on Intelligent Robots and Systems, Nov 3–5, 1991, Osaka, Japan, pp. 758–763.

10. Stocco, L., Salcudean, S.E., Sassani, F.: Fast constrained global minimax optimization of robot parameters. Robotica, vol. 16, pp. 595–605, 1998.

11. Gosselin, C., Angeles, J.: A global performance index for the kinematic optimization of robotic manipulators. ASME Journal of Mechanical Design, vol. 113, pp. 220–226, 1991.

12. The MathWorks, Inc.; c1994–2006, June 2006. [Online]. Available: http://www.mathworks.com


A Novel Method for Robotic Knot Tying

Shuxin Wang1, Longwang Yue1, 2, Huijuan Wang1

1. School of Mechanical Engineering, Tianjin University, Tianjin 300072, China
2. School of Mechanical and Electrical Engineering, Henan University of Technology, Zhengzhou 450052, China

Email: [email protected]

Abstract: Suturing and knot tying, an important part of surgery, is the most universal wound closure method. As surgical robots come into wider clinical use, more and more attention has been paid to the study of robot-assisted knot tying. Traditional robot-assisted knot tying is completed with one tool leading the suture to circle around the other tool to form a suture loop. This approach can be called twining knot tying. In twining knot tying, the non-linear deformation of the suture may result in an abrupt slippage of the suture loop from the tool, which makes the knot tying difficult and time-consuming. This paper presents a new knot tying method, twisting knot tying. Through an experimental study of twisting knot tying, the rules of this method are obtained. In order to verify robotic twisting knot tying, a verification experiment was performed with the MicroHand system developed in our lab. The verification experiments show that the twisting knot tying method can overcome the inherent shortcoming of traditional twining knot tying, improve the efficiency and success rate of knot tying, and shorten the suturing time in surgery.

Index Terms: knot tying; robot-assisted surgery; twisting knot tying

1. Introduction

A typical surgery includes three essential steps: incision and dissection, cutting, and suturing. Suturing and knot tying is the most universal wound closure method, and a necessary skill that a surgeon has to possess [1]. With more and more clinical use of surgical robots, suturing and knot tying have become a necessary function of surgical robots. The da Vinci is a typical surgical robot that can accomplish the task of suturing and knot tying.

Similar to knot tying by a surgeon, knot tying by a surgical robot consists of the following steps: (1) piercing the blood vessel; (2) twining the suture; (3) clamping the suture; (4) drawing the suture. This kind of suture knot tying is called "twining knot tying".

Generally, most surgical robots adopt a master-slave mode [2,3], which makes it easy for a surgeon to complete complicated surgeries, such as microsurgery and minimally invasive surgery (MIS). However, in order to complete a surgery, the surgeon has to depend on image guidance and intuition to manipulate the master-slave surgical robotic system. The surgeon loses force and tactile feedback under such circumstances. Therefore, robotic suture knot tying may be very time-consuming and of poor quality in master-slave mode [4]. Different from other operations in a surgery, the knot tying process is to manipulate the suture to form a loop and achieve a knot. Because of the flexible character of the suture, it is especially hard for a robot to complete the knot tying process, which has been verified in suturing and knot tying experiments on a rabbit's 1 mm blood vessel with the "MicroHand" system. The animal experiments show that the suture loop may slip off the end tool in the twining process, and that the drawing force is difficult to control in the drawing process [5]. The slippage of the suture loop makes the knot tying time-consuming, and the poorly controlled drawing force results in blood leakage from the closed wound.

Existing research on knot tying by surgical robots has mainly focused on three fields [6-18]: (1) development of new surgical robotic systems; (2) force analysis of suturing and knot tying; (3) simulation of surgical sutures in virtual surgical training systems. All of these studies are based on the twining knot tying method. Because of the non-linear deformation of the suture, the suture loop may slip off the end tool during the twining process. Such slippages cannot be avoided completely and result in poor-quality knots and wasted time in the surgery. In this paper, we present a new knot tying method, "twisting knot tying", and try to find the relationship between the suture and the effect of knot tying by robots, in order to improve the suturing quality and shorten the suturing time.

The remainder of the paper is organized as follows. Section 2 presents the new twisting knot tying method and analyzes it with knot theory. Section 3 presents an experimental study of twisting knot tying, including a twisting knot tying experiment on an artificial blood vessel. The conclusions are drawn in Section 4.

2. A New Robotic Knot Tying Method

The suture's character is a key factor that affects knot tying by a surgical robot. In the knot tying process, the suture undergoes not only bending deformation but also twisting deformation. In twisting knot tying, local torsion acting on the ends of the suture leads to a global deformation and forms a suture loop. We employ the belt model of knot theory [19] to analyze the deformation behavior of the suture in the twisting knot tying process.

The suture can be viewed as a flexible cylinder with a central line K and a surface guide line K'. The line segments between K and K' on each cross section of the suture compose a flat belt, with K and K' as its edges, as shown in Fig.1. Tw is the twist number, denoting the torsional quantity of K' about K; Wr is the writhing number, denoting the snarling quantity of K. Both Tw and Wr can be worked out with the projection method. According to the celebrated White formula

lk(K, K') = Tw(K, K') + Wr(K),

the linking number lk(K, K') of K and K' is an isotopy invariant. This means that the sum of the twist number Tw and the writhing number Wr of the suture loop is a constant under an unchangeable constraint condition.

Fig.1. Belt model of the suture [19]
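The conservation expressed by the White formula can be checked numerically. The sketch below (our illustrative code, not part of the original system) approximates the writhing number Wr of a closed polygonal curve with the discrete Gauss double sum; for a planar curve every triple product in the sum vanishes, so Wr is zero, and all of the linking number resides in twist.

```python
import math

def writhe(points):
    """Approximate the writhing number Wr of a closed polygonal curve
    via the discrete Gauss double sum (numerical sketch, not the
    authors' computation)."""
    n = len(points)
    mids, tans = [], []
    for i in range(n):
        p, q = points[i], points[(i + 1) % n]
        mids.append(tuple((a + b) / 2 for a, b in zip(p, q)))
        tans.append(tuple(b - a for a, b in zip(p, q)))
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = tuple(a - b for a, b in zip(mids[i], mids[j]))
            d = math.sqrt(sum(c * c for c in r))
            ti, tj = tans[i], tans[j]
            # triple product (t_i x t_j) . r
            cx = ti[1] * tj[2] - ti[2] * tj[1]
            cy = ti[2] * tj[0] - ti[0] * tj[2]
            cz = ti[0] * tj[1] - ti[1] * tj[0]
            total += (cx * r[0] + cy * r[1] + cz * r[2]) / d ** 3
    return total / (4 * math.pi)

# A planar circle has writhe 0: every triple product vanishes exactly.
circle = [(math.cos(2 * math.pi * k / 60), math.sin(2 * math.pi * k / 60), 0.0)
          for k in range(60)]
print(abs(writhe(circle)) < 1e-9)  # True
```

A curve that leaves the plane (e.g. a figure-eight crossing over itself) picks up nonzero writhe, which under the White formula must be compensated by an equal and opposite change in twist.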

2.1 Twisting Knot Tying Method

Under an unchangeable constraint condition, the suture loop can be regarded as a closed belt. Because the linking number of a closed belt is an isotopy invariant, the linking number lk(K, K') of the suture loop is a constant. Therefore, revolution of the suture pulled by the tool generates not only the writhing number Wr but also the twist number Tw. This makes the traditional knot tying difficult and time-consuming.

We can instead make the suture generate a twist number Tw by rotating one clamping tool. Because of the reciprocal transformability of the two quantities, the twist number Tw will transform into the writhing number Wr, which forms the suture loop for the knot tying. This method is called twisting knot tying, as shown in Fig.2. There are four steps in the robotic twisting knot tying.

According to the reciprocal transformability of the writhing number Wr and the twist number Tw, a suture loop can be achieved through rotation of the suture. However, the reciprocal transformation is conditional. With a decreasing distance between the two ends of the suture, the twist number Tw can be transformed into the writhing number Wr; conversely, the writhing number Wr can be transformed into the twist number Tw. Surgical robots are often used in surgeries with a limited workspace, such as MIS, microsurgery and ENT surgery. The limited workspace enforces a short distance between the suture ends, so the suture can only transform from the twist number Tw into the writhing number Wr. During the rotation of the suture there is only rotation, without any change of position or orientation of the end tool, so the limited workspace and the short suture length do not hinder this transformation.

After this qualitative analysis of the robot-assisted knot tying, a quantitative analysis of the twisting knot tying is needed. Several parameters can be determined through experimental study of the twisting knot tying, such as the position and orientation of the end tools, the rotation angle and direction of the clamping tool, and the clamping point and azimuth of the suture.


Fig.2. Twisting knot tying process: (a) End tool 1 clamps the opposite suture end; (b) The suture is twisted and forms a suture loop with the rotation of end tool 2; (c) End tool 2 passes through the suture loop and clamps the other suture end; (d) The suture loop is tightened through relative movement of the two tools.

2.2 Parameters of the Suture

The suture length is a key parameter affecting the formation of the suture loop. On the one hand, if the suture is too long, it forms an irregular loop with a hollow part in the middle, which makes knot tying difficult; on the other hand, if the suture is too short, it is difficult to execute the knot tying in the limited workspace, and the suture may buckle abruptly due to the large bending moment. Therefore, a suitable suture length is the basis of the study of the twisting knot tying method.

Because of its small axial stretching deformation, large lateral bending deformation and large axial torsional deformation, the suture can be modeled as a series of rigid segments connected by flexible joints. Based on a force analysis of this suture model, we can build up the equilibrium equations of the suture loop, from which the suture length L can be calculated iteratively. This length is called the appropriate length of the suture for knot tying.
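The rigid-segment idealization can be sketched as follows. The segment count, segment length and joint angles below are illustrative placeholders; the real model is three-dimensional and couples the joints through the equilibrium equations rather than prescribing the angles directly.

```python
import math

def chain_shape(seg_len, bend_angles):
    """Planar forward kinematics of a suture modeled as rigid segments
    joined by flexible joints: each joint adds a relative bend angle.
    (Illustrative sketch; the paper's model is 3-D and force-balanced.)"""
    x = y = heading = 0.0
    pts = [(x, y)]
    for a in bend_angles:
        heading += a                      # relative joint rotation
        x += seg_len * math.cos(heading)
        y += seg_len * math.sin(heading)
        pts.append((x, y))
    return pts

# 20 segments of 2.5 mm -> a 5 cm suture; uniform bending of 2*pi/20
# per joint closes the chain into a regular loop.
n, L = 20, 50.0
pts = chain_shape(L / n, [2 * math.pi / n] * n)
end_gap = math.hypot(*pts[-1])
print(end_gap < 1e-9)  # True: the chain returns to its start point
```

An iterative length search of the kind described above would wrap such a shape evaluation in a loop, adjusting L until the computed loop satisfies the equilibrium and closure conditions.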

3. Experimental Study

Experiments on twisting knot tying have been performed with the MicroHand system, as shown in Fig.3. The suture is clamped by the two end tools of the MicroHand and twisted through the rotation of the end tools. Images of the suture configuration are captured by the image collection system of the MicroHand.

3.1 Experiment Design

In the experiment, a 7/0 polyester suture is used as the experimental object. Let the suture length be L, the distance between the two end tools be D, the relative rotation angle be R, and the separation angle of the two end tools be V. The experiment consists of two parts: the first is a horizontal twisting knot tying experiment; the second is a sloping twisting knot tying experiment with the same suture.


Fig.3. Twisting knot tying experiment with MicroHand

In the horizontal twisting knot tying experiment, the suture deformation is considered in two cases: fixed D with variable R, and fixed R with variable D. In the fixed-D, variable-R experiment, L equals 5 cm; D equals 0.5 cm, 1.5 cm, 2.5 cm, 3.5 cm and 4 cm respectively; and R equals 0, π, 2π, 3π, 4π, 5π and 6π respectively, as shown in Fig.4. In the fixed-R, variable-D experiment, L equals 5 cm; R equals 0, π, 2π, 3π, 4π, 5π and 6π respectively; and D equals 0, 1 cm, 2 cm, 3 cm and 4 cm respectively, as shown in Fig.5.


Fig.4. Horizontal twisting knot tying with fixed D and variable R (D-R); L equals 5 cm, D equals 0.5 cm, and R equals 0, 2π, 4π and 6π respectively.


Fig.5. Horizontal twisting knot tying with fixed R and variable D (R-D); L equals 5 cm, R equals 2π, and D equals 0, 2 cm and 4 cm respectively.


Fig.6. Sloping twisting knot tying with fixed V and variable R (V-R); L equals 5 cm, V equals π/2, and R equals 0, 2π, 4π and 6π respectively.

In order to match practical surgical conditions [4], a study of sloping twisting knot tying is made with the separation angle of the two end tools equal to V. The suture configurations


with different V and R are shown in Fig.6. In this study, D equals 0.5 cm; V equals π/6, π/4, π/3 and π/2 respectively; and R equals 0, π, 2π, 3π, 4π, 5π and 6π respectively. The label under each panel denotes V-R.

3.2 Data Analysis of the Experiment

3.2.1 Data Analysis of the Horizontal Twisting Knot Tying Experiment

Based on a comparative analysis of the suture configurations in the horizontal twisting knot tying experiment, the deformation rules of the horizontal twisting knot tying can be summarized as:

(a) For a given D, as R increases the suture changes from a drooping curve to a spatial loop at a certain critical R. As D increases, this critical R increases, too.

(b) Since the diameter of the loop varies inversely with D, a minimal D should be chosen for a suture of fixed length to obtain the maximal loop diameter and make the knot tying easy to achieve.

(c) By increasing R, the loop normal can be adjusted to a desired direction to meet the requirements of knot tying.

(d) When R is fixed, the difficulty of forming a loop increases with D.

(e) When D is fixed, the loop rotates around itself as R increases.

(f) When R is fixed, the suture snarling loosens as D increases.

3.2.2 Data Analysis of the Sloping Twisting Knot Tying Experiment

Based on a comparative analysis of the suture configurations in the sloping twisting knot tying experiment, the deformation rules of the sloping twisting knot tying with fixed D and V and variable R can be expressed as:

(a) When D is fixed, the suture forms a loop at a certain R as R increases. When V equals π/6 or π/2, this R is about 2π and the loop is an inscribed circle of the extension lines of the tools' axes. When V equals π/4 or π/3, this R is about 3π and the loop is an inscribed circle of the tools' axes.

(b) After a loop forms, the loop develops a snarling as R increases.

(c) The center of the loop lies on the bisector of the angle between the tools' axes. As R increases, the loop rotates around this bisector. This rule can be used to adjust the loop to meet the requirements of knot tying.

(d) The loop diameter is not related to the value of V.

If the suture length L equals 10 cm, the suture cannot form a loop but instead wraps around the tools, which makes it impossible to achieve a knot tying.

3.3 Verification Experiment of the Twisting Knot Tying

In order to verify the practicability of the twisting knot tying method, the twisting knot tying experiment of an artificial blood vessel was performed using the MicroHand system, as shown in Fig.7. Ten square knots were completed. Each knot


took one minute, which is much less than the four minutes required by the twining knot tying method. More importantly, the twisting knot tying is simple and efficient.


Fig.7. Verification experiment of twisting knot tying

4. Conclusions

A surgical robot has to perform the same knot tying process as a surgeon. To overcome the shortcomings of the traditional twining knot tying method, a novel method for robotic knot tying, twisting knot tying of the suture, was developed. The computational and experimental results support the following conclusions:

(a) The suture length L is a key factor influencing the suture's deformation. When L is less than a certain value, the suture can form a knot by the twisting knot tying method;

(b) In both horizontal and sloping twisting knot tying, the smaller the distance D between the two end tools, the smaller the rotation R needed to form a suture loop, and the larger the diameter of the loop;

(c) The smaller the separation angle V between the two constraint points of the suture, the easier the formation of a suture loop;

(d) After a loop forms, the loop normal can be adjusted to a desired direction by increasing R, which meets the requirements of knot tying. However, an excessive R will make the suture form a snarling, which makes the loop irregular and reduces its diameter;

(e) Different sutures have different twisting knot tying parameters (L, D, R and V). The suture length L can be worked out from a simulation model; the other parameters can be obtained through experimental study.

The knot tying experiments on the artificial blood vessel show that twisting knot tying overcomes the inherent shortcomings of the traditional twining knot tying and greatly improves the efficiency and success rate of knot tying, which reduces the difficulty and shortens the time of the surgery.

Acknowledgments. This research is supported by NSFC (Grant No. 50575162) and NCET (Program for New Century Excellent Talents in University). In particular, the authors would like to thank Professor S. Jack Hu from the University of Michigan for his comments and suggestions in improving the paper.


References

1. Ota, D.; Loftin, B.; Saito, T.; Lea, R.; Keller, J., Virtual reality in surgical education, Computers in Biology and Medicine, v 25, n 2, March 1995, p 127-137

2. Mitsuishi, M.; Warisawa, S.; Tsuda, T.; et al., Remote ultrasound diagnostic system, Proceedings 2001 IEEE International Conference on Robotics and Automation (ICRA), 2001, pt. 2, p 1567-1574

3. Ikuta, K.; Yamamoto, K.; Sasaki, K., Development of remote microsurgery robot and new surgical procedure for deep and narrow space, Proceedings IEEE International Conference on Robotics and Automation, v 1, 2003, p 1103-1108

4. Degueldre, M., Robotically assisted laparoscopic microsurgical tubal reanastomosis: a feasibility study, Fertility and Sterility, 2000, 74(5): p 1020-1023

5. Wang, S.; Ding, J.; Yun, J.; Li, Q., A robotic system with force feedback for micro-surgery, Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 2005, p 200-206

7. Brown, J.; Montgomery, K.; Latombe, J-C.; Stephanides, M., A microsurgery simulation system, Medical Image Computing and Computer-Assisted Intervention (MICCAI), Utrecht, The Netherlands, 2001

8. Kang, H.; Wen, J.T., Robotic assistants aid surgeons during minimally invasive procedures, IEEE Engineering in Medicine and Biology Magazine, v 20, n 1, Jan.-Feb. 2001, p 94-104

9. Kang, H.; Wen, J.T., Robotic knot tying for minimally invasive surgeries, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Lausanne, Sept. 2002, p 1421-1426

10. Kitagawa, M.; Okamura, A.M.; Bethea, B.T.; Gott, V.L.; Baumgartner, W.A., Analysis of suture manipulation forces for teleoperation with force feedback, MICCAI 2002, 5th International Conference, Proceedings, Part I (Lecture Notes in Computer Science Vol. 2488), 2002, p 155-162

11. Brown, J.; Latombe, J-C.; Montgomery, K., Real-time knot-tying simulation, Visual Computer, v 20, n 2-3, May 2004, p 165-179

12. Phillips, J.; Ladd, A., Simulated knot tying, Proceedings IEEE International Conference on Robotics and Automation, v 1, 2002, p 841-846

13. Terzopoulos, D.; Platt, J.; Barr, A.; Fleischer, K., Elastically deformable models, Computer Graphics (SIGGRAPH '87)

14. Terzopoulos, D.; Witkin, A., Physically based models with rigid and deformable components, IEEE Computer Graphics and Applications, v 8, n 6, Nov. 1988, p 41-51

15. James, D.L.; Pai, D.K., ArtDefo: accurate real time deformable objects, Computer Graphics Proceedings, SIGGRAPH 99, 1999, p 65-72

16. Picinbono, G.; Delingette, H.; Ayache, N., Non-linear and anisotropic elastic soft tissue models for medical simulation, Proceedings IEEE International Conference on Robotics and Automation, v 2, 2001, p 1370-1375

17. Pai, D.K., STRANDS: interactive simulation of thin solids using Cosserat models, Computer Graphics Forum, v 21, n 3, September 2002, p 347-352

18. Moll, M.; Kavraki, L.E., Path planning for minimal energy curves of constant length, Proceedings 2004 IEEE International Conference on Robotics and Automation, 2004, pt. 3, p 2826-2831

19. Moll, M.; Kavraki, L.E., Path planning for variable resolution minimal energy curves of constant length, IEEE International Conference on Robotics and Automation, Barcelona, Spain, 2005

20. Jiang, B., Mathematics of Knots, Hunan Education Press, 1998, p 99-123 (in Chinese)


A Robotic Neurosurgery System with Autofocusing Motion Control for Mid-infrared Laser Ablation

Ryoichi Nakamura1, Shigeru Omori3, Yoshihiro Muragaki1, 2, Katsuhiro Miura4, Masao Doi4, Ichiro Sakuma5, and Hiroshi Iseki1, 2

1 Institute of Advanced Biomedical Engineering and Science, Tokyo Women’s Medical University, Tokyo, Japan

2 Dept. of Neurosurgery, Neurological Institute, Tokyo Women’s Medical University, Tokyo, Japan

3 R&D Center, Terumo Corporation, Kanagawa, Japan 4 Mitaka Kohki Co., Ltd., Tokyo, Japan

5 Dept. of Environmental Studies, Graduate School of Frontier Science, The University of Tokyo, Tokyo, Japan

[email protected]

Abstract. There are certain limitations to the procedures for perfect tumor extraction using conventional manual surgery. This is because the area close to the boundary between a tumor and the surrounding normal brain tissue is usually preserved in order to prevent functional disorders. For the purpose of treating such boundary areas, a computer-controlled robotic laser surgery system was developed. This system is characterized by a mid-infrared laser device that can perform less invasive, precise surgery with a low output power of less than 1.0 W. Further, the computer-controlled system can realize the ablation of designated areas on the brain surface with a 0.5-mm spatial resolution. For in vivo applications, we have also developed an autofocusing system for laser irradiation. The results of animal studies demonstrate that the system enables the focal point of the laser head to be maintained on the brain surface, thus facilitating constant ablation in the designated areas.

1 Introduction

Laser surgery has been performed in the field of neurosurgery for more than 20 years; almost all of the commercialized laser devices have been applied clinically in this field [1]. Nevertheless, laser devices are rarely used in neurosurgery today. Because conventional laser irradiation is performed manually, operated by the surgeon's hand, precise and safe treatment is difficult to achieve. In addition, these devices are usually expensive.

In order to establish a system for precise and maximal tumor resection, we developed a robotic laser surgery system to facilitate the removal of the tumor tissue that remains after manual resection. The residual tumor tissue is usually present in a thin layer, visualized by intraoperative magnetic resonance imaging (MRI) or navigation systems [2-4]. To realize this laser ablation system, it is necessary to fulfill the following three conditions: (1) the use of a laser device that does not cause collateral damage, through non-contact irradiation; (2) the development of a laser manipulation system with a positioning precision of less than 0.5 mm, the resolution of intraoperative magnetic resonance (iMR) images; and (3) the development of an autofocusing mechanism that, during irradiation, maintains the focal point on the brain surface irrespective of its shape and motion. With respect to conditions (1)-(3), we previously developed a robotic surgery system and reported the results of in vitro experiments [5].

This paper describes the concept and prototype of a newly developed robotic laser surgery system with a confocal optical sensing system for laser autofocusing in brain surface ablation. The accuracy and performance of the laser ablation treatment using this system is evaluated both in vitro and in vivo.

2 Methods

2.1 Laser Device

In the early history of laser application, the maintenance of hemostasis was the major objective in the resection of brain tumors. However, with the introduction and increasingly wide use of electric bipolar cautery forceps, laser devices have lost their prominent position in hemostasis treatment. In recent years, trials of precise ablation using pulsed lasers have been reported [6-8]. However, it has been pointed out that with a pulsed laser it is difficult to treat lesions close to delicate areas of the brain, owing to mechanical damage by shock waves rather than thermal damage [7]. We have therefore developed a laser device that can be used for the ablation of tumors adjacent to functional areas.

Figure 1 shows the structure of our mid-infrared laser. This laser device is characterized by an Er-doped YSGG laser crystal chip positioned at the distal end of a silica optical fiber. The laser crystal chip outputs a 2.8-μm wavelength laser beam, pumped by a near-infrared continuous wave (CW) laser diode (LD: λ = 970 nm). Since an absorption peak of water exists at a wavelength of approximately 3 μm [9],

Fig. 1. Schematic diagram of the microlaser head.

Fig. 2. Absorption spectrum of water [9].


irradiated laser beams are absorbed by the water of the brain tissue before the beams can penetrate deep into the brain; consequently, heat generation occurs only in the local target area.

2.2 Laser Positioning System Configuration

Figure 3 shows a schematic diagram of the robotic laser surgery system. Once a surgeon determines the area of irradiation, the system automatically performs the laser scanning inside the area. The system consists of the mid-infrared laser device, a 2-axis moving stage for laser scanning, a charge-coupled device (CCD) camera, and a personal computer (PC). The image data of the brain surface is acquired by the CCD camera, transferred to the PC, and displayed on a monitor screen. A surgeon designates a target area to be irradiated by tracing the outline of the target area on the monitor screen. A positioning dislocation of the ablation of 0.5 mm or less was achieved in an in vitro study using porcine brain [5].

Fig. 3. Schematic diagram of the Neuro-Laser Surgical Robot System.
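The designate-then-scan step amounts to rasterizing the traced outline at the stage resolution. A minimal sketch follows; the 0.5 mm pitch comes from the stated precision, while the polygon outline and serpentine (boustrophedon) ordering are our illustrative assumptions, not the system's actual planner.

```python
def raster_scan(outline, pitch=0.5):
    """Generate a serpentine scan path of grid points (pitch in mm)
    inside a traced polygon outline, using even-odd ray casting."""
    def inside(x, y):
        c = False
        for i in range(len(outline)):
            x1, y1 = outline[i]
            x2, y2 = outline[(i + 1) % len(outline)]
            if (y1 > y) != (y2 > y):  # edge crosses the scan line
                xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < xi:
                    c = not c
        return c

    xs = [p[0] for p in outline]
    ys = [p[1] for p in outline]
    path, flip = [], False
    y = min(ys) + pitch / 2
    while y < max(ys):
        row = []
        x = min(xs) + pitch / 2
        while x < max(xs):
            if inside(x, y):
                row.append((x, y))
            x += pitch
        path.extend(reversed(row) if flip else row)  # serpentine order
        flip = not flip
        y += pitch
    return path

square = [(0, 0), (10, 0), (10, 10), (0, 10)]  # 10 mm x 10 mm target
path = raster_scan(square)
print(len(path))  # 400 grid points at 0.5 mm pitch
```

For an arbitrary traced contour the same routine applies; only the `outline` vertex list changes.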

2.3 Autofocusing System

In this study, we designed an autofocus system to satisfy the following requirements.

a) a non-contact position sensing function that is uninfluenced by surface water;
b) a mechanism that enables the sensing point and the laser ablation point to be located at the same position;
c) a driving mechanism for the unit of the laser head and the CCD camera.

In order to satisfy these requirements, we used an optical sensing system with another laser beam (λ = 675 nm) to measure and maintain a constant distance between the laser head and the brain surface. Figure 4 shows the configuration of the sensing system, which has a confocal optical path. In this system, the location of the sensing laser


beam spot on the brain surface is projected onto a laser spot positioning sensor made of silicon photo sensors. The beam spot on the brain surface can be detected in spite of the weak reflection because there is a conjugate point on the sensor. The optical sensing path was built into the image capture optics of the CCD camera, sharing the same optical axis. The computer controls the movement of a linear motorized stage to which the two laser heads are attached (Fig. 5); one of the laser heads serves as a backup.
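The closed loop formed by the confocal sensor and the motorized stage can be sketched as a simple proportional controller. The gain, focal distance and surface motion below are illustrative assumptions, not the system's actual control parameters.

```python
def autofocus_step(stage_z, surface_z, focal_len, gain=0.8):
    """One control update: move the stage so the laser head stays
    `focal_len` above the surface (proportional control; the gain is
    an illustrative assumption, not the real controller's value)."""
    error = (stage_z - surface_z) - focal_len  # focus error in mm
    return stage_z - gain * error

stage = 30.0                                # initial head height, mm
for surface in [0.0, 1.0, 2.0, 2.0, 1.0]:   # brain surface moving, mm
    for _ in range(10):                     # settle between moves
        stage = autofocus_step(stage, surface, focal_len=25.0)
print(round(stage, 3))  # 26.0: 25 mm above the final 1.0 mm surface
```

With each update the residual focus error shrinks by the factor (1 - gain); the real system additionally contends with sensing rate and actuation lag, which Section 3.3 examines.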

3 Results

3.1 Effects of Focusing Status on the Ablation Lesion: Ablation Depth Accuracy without the Autofocus Function

Since we used a relatively flat brain surface when performing the in vitro study, we were able to focus the laser beams on any point in a target area. However, the brain surfaces that are treated in clinical cases are usually uneven, or the surface itself is inclined at an angle. Therefore, it was necessary to evaluate the ablation characteristics under defocused beam conditions.

Figure 6 shows the ablation of the porcine brain under several defocused laser beam conditions. The focused beam spot diameter was set to 0.16 mm (1/e²), the laser output power to 0.47 W, and the scanning speed of the laser beams to 2 mm/s. The sample is a cross section of the dissected porcine brain, fixed in formalin after laser irradiation at dislocations of 1 mm to 4 mm from the just-focused position, in 1 mm intervals. In order to measure the ablation depth, the brain was fixed in formalin and cut to expose all the incised lesions in one surface; red ink was then injected into each lesion. Figure 7 displays the relationship between the dislocation and the ablation depth. As the focal point


Fig. 4. Configuration of the optical sensing system for autofocusing.

Fig. 5. Autofocus system with a mid-infrared laser head.


receded from the brain surface, the ablation lesion became gradually shallower (Fig. 6, right to left). With a dislocation of 2 mm, the ablation depth was reduced by more than 40% (third vertical line from the right), which is not acceptable for clinical use. From the results of this experiment, we concluded that an autofocusing system was necessary in order to maintain a constant ablation depth, because the convoluted surface of the brain usually has peaks and valleys exceeding 5 mm.

3.2 Accuracy of the Autofocusing Motion in the Static Condition

We used the autofocus function to evaluate the accuracy of the displacement of the laser head over different tissues of the brain surface. The dissected porcine brain was placed on a motor-driven stage and moved down in 1 mm steps, and we measured the stroke of the laser head module controlled by the autofocus function (Fig. 8). The focal point of the laser head module was placed on the whitish brain surface and on a blood vessel to determine whether there were major differences in motion accuracy depending on the type of target tissue.

Figure 9 shows the results of the autofocusing motion. The autofocusing function performed very well and accurately followed the vertical motion of the brain. Further, there were no major differences in the accuracy of the autofocus function across the different target tissues. From these results, we determined that the autofocusing module is suitable for brain laser surgery, a technique that requires highly accurate laser head motion control over heterogeneous tissue.

3.3 Dynamic Response in Scanning Conditions

In order to control the mechatronic system based on real-time motion sensing, the sensing rate and lag between sensing and actuation are important considerations. If the sensing rate is low and the calculation time for deciding the control command is

Fig. 6. Ablated lesion in a dissected porcine brain under various focus conditions.

Fig. 7. Laser ablation depth with dislocation from the just-focused position.


long, accurate actuation is difficult. In this laser surgery system, the faster the scanning motion of the x-y axes, the lower the autofocusing accuracy. We therefore evaluated the change of accuracy as a function of the scanning speed and determined the maximum speed that fulfills the required accuracy (0.5 mm).

The dissected porcine brain was placed on the motor-driven x-y stage and moved along the horizontal axis at several scanning speeds, and we measured the stroke of the laser head module controlled by the autofocusing function. The vertical span of the brain surface between the start and end points of scanning was 5 mm, and the horizontal distance between the start and end points was 9.5 mm (Figs. 10 and 11). Figure 11 shows that the stroke of the laser head gradually decreases as the scanning speed increases. The maximum scanning speed that fulfills the requirement is 4 mm/s.
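A back-of-envelope reading of this experiment (our illustrative calculation, not the authors' analysis): with the 5 mm rise over the 9.5 mm run, the focal distance drifts at v · (5/9.5) mm/s for scanning speed v, so the 4 mm/s limit corresponds to the autofocus stage having to absorb roughly 2.1 mm/s of vertical surface motion.

```python
# Vertical drift rate the autofocus must cancel at each scanning speed,
# assuming the test surface's average slope of 5 mm per 9.5 mm.
slope = 5.0 / 9.5
for v in (1, 2, 4, 8):                 # scanning speeds, mm/s
    print(v, round(v * slope, 2))      # 4 mm/s -> 2.11 mm/s of drift
```

Whether a given drift rate keeps the focus error under 0.5 mm depends on the sensing rate and actuation lag discussed above, which is precisely what the experiment measures.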

Fig. 8. Experimental setup for testing the accuracy of displacement of the laser head with the autofocusing function.

Fig. 9. Accuracy of the laser head motion controlled by the autofocusing system.

Fig. 10. Experimental setup for the dynamic response in scanning conditions.

Fig. 11. Relation between the scanning speed of the tissue surface and the accuracy of the autofocusing function.


3.4 Evaluation of the Autofocusing System in In Vivo Brain Ablation

An animal experiment was performed using pigs in order to demonstrate the function of the autofocusing system. First, on the computer monitor screen, we drew an outline of approximately 5 mm² on the area of brain to be irradiated, using a computer mouse. Second, the ablation was performed by laser irradiation with raster scanning. Physiological saline was poured over the brain before laser scanning in order to prevent carbonization. The focused beam spot diameter was set to 0.14 mm (1/e²), the laser output power to 0.45 W, and the scanning speed of the laser beams to 4 mm/s. Figure 12 shows the images displayed on the PC screen during the laser ablation treatment. The autofocus system functioned even when the brain surface was covered by a film of water, and surface ablation of the designated area was successfully performed.

Fig. 12. Photographs of the porcine brain surface displayed on the monitor screen. Left: before irradiation. Right: laser ablation was performed inside the dashed line. The ablation depth is approximately 0.5 mm.

Fig. 13. Prototype of the Robotic Neurosurgery System with autofocus motion control for mid-infrared laser ablation.


4 Conclusions

We removed residual tumor tissue during the resection of malignant brain tumors on a trial basis and found that the laser autofocus system enhanced our robotic laser surgery system. The detection method, based on a confocal optical design, maintained the distance between the laser head and a brain surface covered by a film of water. In addition, precise focus of the irradiating laser beam was achieved during the ablation of designated areas of porcine brain.

We are planning to evaluate the response of the autofocus function against the laser scanning speed in order to apply the system in a clinical study. Since we consider countermeasures against unexpected bleeding during the ablation process to be most important, we intend to study surgical protocols to prevent such bleeding.

Acknowledgements. This research was supported by a Health Labour Science Research Grant for Medical Devices for Analyzing, Supporting, and Substituting the Function of Human Body from the Ministry of Health, Labour and Welfare. This study was also supported by an Industrial Technology Research Grant Program (00A45003a) from the New Energy and Industrial Technology Development Organization of Japan.

References

1. Satish S. et al: Lasers in Neurosurgery. Lasers in Surgery and Medicine 15: 126, 1994.
2. Iseki H. et al: Intraoperative Examinations for Tumors Required in the Neurosurgical Operating Theater of the 21st Century. Japanese Journal of Neurosurgery 11 (8): 508, 2002.
3. Muragaki Y. et al: Intraoperative Brain Mapping and Intraoperative MRI for Glioma Surgery. BRAIN MEDICAL 13 (3): 255, 2001.
4. Iseki H. et al: New Possibilities for Stereotaxis: Information-Guided Stereotaxis. Stereotactic and Functional Neurosurgery 76: 159, 2001.
5. Omori S. et al: Robotic Laser Surgery with 2.8 μm Microlaser in Neurosurgery. Journal of Robotics and Mechatronics 16 (2): 122, 2004.
6. Andras C. et al: Intracranial Pressure Waves Generated by High-Energy Short Laser Pulses Can Cause Morphological Damage in Remote Areas. Lasers in Surgery and Medicine 21: 444, 1997.
7. Ludwig H.C. et al: Optimized Evaluation of a Pulsed 2.09 μm Holmium:YAG Laser Impact on the Rat Brain and 3D Histomorphometry of the Collateral Damage. Minimally Invasive Neurosurgery 41: 217, 1998.
8. Janice O. et al: Brain Ablation in the Rat Cerebral Cortex Using a Tunable Free-Electron Laser. Lasers in Surgery and Medicine 33: 81, 2003.
9. Bayly J.G., Kartha V.B., Stevens W.H.: The Absorption Spectra of Liquid Phase H2O, HDO and D2O from 0.7 μm to 10 μm. Infrared Physics 3: 211, 1963.
