
PAPER

An Overview of Autonomous Underwater Vehicle Research and Testbed at PeRL

AUTHORS
Hunter C. Brown
Ayoung Kim
Ryan M. Eustice
University of Michigan

ABSTRACT

This article provides a general overview of the autonomous underwater vehicle (AUV) research thrusts being pursued within the Perceptual Robotics Laboratory (PeRL) at the University of Michigan. Founded in 2007, PeRL's research centers on improving AUV autonomy via algorithmic advancements in environmentally based perceptual feedback for real-time mapping, navigation, and control. Our three major research areas are (1) real-time visual simultaneous localization and mapping (SLAM), (2) cooperative multi-vehicle navigation, and (3) perception-driven control. Pursuant to these research objectives, PeRL has developed a new multi-AUV SLAM testbed based upon a modified Ocean-Server Iver2 AUV platform. PeRL upgraded the vehicles with additional navigation and perceptual sensors for underwater SLAM research. In this article, we detail our testbed development, provide an overview of our major research thrusts, and put into context how our modified AUV testbed enables experimental real-world validation of these algorithms.

Keywords: AUVs, SLAM, navigation, mapping, testbed

Introduction

The Perceptual Robotics Laboratory (PeRL) at the University of Michigan (UMich) is actively involved in three major research efforts: real-time vision-based simultaneous localization and mapping (SLAM), heterogeneous multi-vehicle cooperative navigation, and perception-driven control. To test and experimentally validate these algorithms, we have developed a new multi-autonomous underwater vehicle (AUV) testbed based upon a modified Ocean-Server Iver2 commercial AUV platform. This new AUV testbed provides a simple man-portable platform for real-world experimental validation and serves as a dedicated engineering testbed for proof-of-concept algorithmic implementations. In this article, we report on our developments in this area and provide an overview of this new experimental facility (Figure 1).

Overview of Underwater Navigation

One of the major limitations in the field of underwater robotics is the lack of radio-frequency transmission modes. The opacity of water to electromagnetic waves precludes the use of the global positioning system (GPS) as well as high-speed underwater radio communication (Stewart, 1991). Hence, communication and navigation underwater must rely upon other means. Kinsey et al. (2006) provided an overview of the current state of the art in underwater vehicle navigation, which we briefly summarize here.

Conventional Underwater Navigation Systems

Two broad categories of underwater navigation methods exist for localizing vehicles and instruments: absolute positioning and relative dead-reckoning. The traditional long-baseline (LBL) method of underwater

FIGURE 1

PeRL's multi-AUV testbed is based upon a modified Ocean-Server Iver2 AUV platform. Shown in the foreground is a modified vehicle displaying a new nose cone designed and fabricated by PeRL. For comparison, a stock vehicle displaying the original nose cone is shown in the background.

Spring 2009 Volume 43, Number 2 33


positioning estimates absolute position by measuring time-of-flight ranges to fixed beacons (Hunt et al., 1974; Milne, 1983). The precision of this estimate is bounded, and the accuracy is determined by system biases. The range of this solution is limited to a few kilometers in the best acoustic conditions, and the positioning resolution is on the order of 1 m. The slow update rate of LBL is constrained by the acoustic travel times, typically updating every few seconds. In contrast to slow, coarse, but absolute LBL positioning, a Doppler velocity log (DVL) or inertial navigation system (INS) instead estimates the distance traveled to infer position. Dead-reckoning is fast (~10 Hz) and delivers fine resolution (~1 cm), but the precision of this relative measurement is unbounded, growing monotonically with time. This makes it difficult to return to a known location or to relate measurements globally to one another.
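To make the contrast concrete, the following toy simulation (our illustration, with made-up noise values rather than real Iver2 sensor figures) integrates zero-mean velocity noise and shows the random-walk growth of dead-reckoned position error:

```python
import random

random.seed(0)

def dead_reckon_error(n_steps, sigma_v=0.01, dt=0.1):
    """Integrate zero-mean velocity noise; return final position error (m)."""
    err = 0.0
    for _ in range(n_steps):
        err += random.gauss(0.0, sigma_v) * dt
    return err

def rms(values):
    return (sum(v * v for v in values) / len(values)) ** 0.5

# RMS error across 200 simulated runs grows roughly as sqrt(mission length),
# so a 100x longer mission yields roughly 10x the positional spread:
rms_short = rms([dead_reckon_error(100) for _ in range(200)])
rms_long = rms([dead_reckon_error(10000) for _ in range(200)])
print(rms_long / rms_short)
```

An absolute fix (LBL or GPS at the surface) caps this error at the fix accuracy, which is why the two modalities are complementary.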

Underwater SLAM

Over the past decade, a significant research effort within the terrestrial mobile robotics community has been to develop environmentally based navigation algorithms that eliminate the need for additional infrastructure and bound position error growth to the size of the environment, a key prerequisite for truly autonomous navigation. The goal of this work has been to exploit the perceptual sensing capabilities of robots to correct for accumulated odometric error by localizing the robot with respect to landmarks in the environment (Bailey and Durrant-Whyte, 2006; Durrant-Whyte and Bailey, 2006).

One of the major challenges of the SLAM problem is (1) defining fixed features from raw sensor data and (2) establishing measurement-to-feature correspondence [i.e., the problem of data association (Neira and Tardos, 2001)]. Both of these tasks can be nontrivial, especially in an unstructured underwater environment. In man-made environments, typically composed of plane, line, and corner primitives, point features can be more easily defined; however, complex underwater environments pose a more challenging task for feature extraction and matching.

One approach to curbing the challenges of defining fixed features from raw sensor data is to seed the environment with artificial landmarks that are easily recognizable. For example, one practical application of range-only underwater SLAM has been to localize an AUV in an acoustic-beacon network that has a priori unknown geometry (Newman and Leonard, 2003; Olson et al., 2006). In this scenario, the beacon geometry is learned online by the SLAM algorithm, eliminating the need to conduct an initial survey calibration of the acoustic-beacon network by a surface ship. Moreover, this paradigm can easily be extended to a scenario where the AUV self-deploys the acoustic beacons in situ over the survey site.

More recently, progress has been made in applying SLAM in an a priori unknown underwater environment without the aid of artificial landmarks. In particular, one SLAM methodology that has seen recent success in the benthic realm is to apply a pose-graph scan-matching approach (Fleischer, 2000; Eustice et al., 2006a). Pose-graph SLAM approaches do not require an explicit representation of features and instead use a data-driven approach based upon extracting relative-pose constraints from raw sensor data. The main idea behind this methodology is that registering overlapping perceptual data, for example, optical imagery as reported by Eustice et al. (2006a, 2006b) or sonar bathymetry as reported by Roman and Singh (2005), introduces spatial drift-free edge constraints into the pose-graph. These spatial constraints effectively allow the robot to close the loop when revisiting a previously visited place, thereby resetting any accumulated dead-reckoning error.
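The following minimal sketch illustrates the pose-graph idea in one dimension: biased odometry edges accumulate drift, and a single drift-free loop-closure edge (standing in for a camera or sonar registration) pulls the estimate back. The numbers, weights, and gradient-descent solver are our illustrative assumptions, not the implementation used by PeRL:

```python
def optimize_pose_graph(edges, n_poses, n_iters=5000, lr=0.02):
    """Least-squares relaxation of a 1-D pose-graph by gradient descent.

    edges: list of (i, j, meas, weight) constraints meaning x[j] - x[i] = meas.
    Pose 0 is held fixed as the global anchor.
    """
    x = [0.0] * n_poses
    for _ in range(n_iters):
        grad = [0.0] * n_poses
        for i, j, meas, w in edges:
            r = (x[j] - x[i]) - meas  # residual of this constraint
            grad[j] += 2.0 * w * r
            grad[i] -= 2.0 * w * r
        for k in range(1, n_poses):  # pose 0 stays anchored
            x[k] -= lr * grad[k]
    return x

# Dead-reckoning reports each 1.0 m step as 1.1 m (a 10% bias), so the raw
# trajectory ends at 3.3 m. A loop-closure edge between poses 0 and 3
# measures the true 3.0 m offset with 10x the confidence of odometry.
edges = [
    (0, 1, 1.1, 1.0),   # odometry
    (1, 2, 1.1, 1.0),   # odometry
    (2, 3, 1.1, 1.0),   # odometry
    (0, 3, 3.0, 10.0),  # drift-free loop closure (e.g., camera registration)
]
x = optimize_pose_graph(edges, n_poses=4)
print(round(x[3], 2))  # -> 3.01: accumulated drift is largely reset
```

Real systems solve the same least-squares problem over 6-DOF poses with sparse solvers, but the drift-resetting effect of a single loop closure is exactly this.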

Overview of PeRL's Research Thrusts

PeRL's three major research areas are (1) real-time visual SLAM, (2) cooperative multi-vehicle navigation, and (3) perception-driven control. In this section, we provide an overview of our laboratory's research thrusts as they pertain to AUV algorithms.

Real-Time Visual SLAM

The first of the three PeRL research domains, real-time vision-based SLAM algorithms, has direct application to areas such as autonomous underwater ship-hull inspection (Eustice, 2008; Kim and Eustice, 2009) and deep-sea archaeological missions (Ballard, 2008; Foley et al., 2009).

Present-day means for ship-hull and port security inspection require either putting divers in the water or piloting a remotely operated vehicle (ROV) over the area of interest; both of these methods are manpower intensive and generally cannot quantitatively guarantee 100% survey coverage. Automating this task, however, is challenging and compounded by the fact that areas around ships in berth are severely confined, cluttered, and complex sensing environments (e.g., acoustically, optically, magnetically). Current tethered robotic inspection systems present issues of snagging, maneuver


degradation, and tether management, all of which make maneuvering around the ship at the pier difficult. Moreover, current robotic inspection methods require human-in-the-loop intervention for both sensory interpretation and control (e.g., ROV piloting). Navigation feedback in these scenarios is typically performed using acoustic transponder time-of-flight ranging (Smith and Kronen, 1997). This necessitates the setup and calibration of the acoustic-beacon infrastructure and therefore vitiates our ability to rapidly and repeatedly inspect multiple underwater structures.

Similarly, deep-sea archaeology also requires high-performance navigation (Ballard, 2008; Foley et al., 2009). Optical imagery, bathymetric sonar, and environmental measurements are all products of interest to the archeologist. The data may be collected over multiple missions or even multiple field seasons, and it is the precision of the navigation that makes it possible to transform these measurements into co-registered maps.

To combat the aforementioned navigation limitations (i.e., infrastructure-based and unbounded error growth), PeRL has been developing a camera-based navigation system that uses vehicle-collected imagery of the hull (port/hull inspection) or seafloor (underwater archeology) to extract measurements of vehicle motion. These camera-derived spatial measurements are fused with the onboard dead-reckoned data to produce a bounded-error navigation estimate (Figure 2).

In essence, the AUV builds a digital map of the seafloor or hull by registering overlapping digital-still images (both along-track and cross-track imagery). Images that are successfully registered produce a relative measurement of the vehicle's attitude (heading, pitch, and roll) and translational (x, y, z) displacement. When fused with the onboard navigation data from a bottom-lock DVL, the result is a navigation system whose error is commensurate with or much better than LBL but which is infrastructure free. The significant advantage of this navigation paradigm is that it is in situ. For archeological surveys, this means that the AUV can be more easily deployed for exploratory surveys to investigate target shipwreck sites without having to invest significant ship time in deploying a beacon network to obtain precision navigation; and for hull inspection surveys, no additional acoustic-beacon infrastructure is required for precise global localization along the hull. In layman's terms, these algorithms allow the AUV to navigate much like a human does, by visually navigating with respect to the seafloor environment.

A useful and important by-product of this navigation methodology is that the overlapping registered imagery can be used to construct an optically derived bathymetry map (Figure 3). This map can be used to construct a quantitatively accurate three-dimensional (3-D) photomosaic by back-projecting the imagery over the optical bathymetry map. Figure 3 displays the result of applying this technology to the fourth-century B.C. Chios classical ancient shipwreck site (Foley et al., 2009). In particular, Figure 3a shows the optically derived bathymetry map for a 15 m by 45 m swath centered overtop

FIGURE 2

The foundation of visually augmented navigation (Eustice et al., 2008) is the fusion of "zero-drift" camera measurements with dead-reckoned vehicle navigation data to produce a bounded-error position estimate. These constraints are fused with onboard navigation sensor data in a view-based stochastic map framework; the model is composed of a pose-graph where the nodes correspond to historical robot poses and the edges represent either navigation or camera constraints.


the wreck site. The optical bathymetry is generated from a 3-D triangulated point cloud derived from pixel correspondences. In Figure 3b, we display a quantitatively accurate 3-D photomosaic obtained by back-projecting the imagery onto the gridded surface. It should be emphasized that this result is fully automatic and metrically quantitative; in other words, measurements of object size and geometric relationships can be derived. Although this technology is still very much in the active research stage, its current and future value for in situ, rapid, quantitative documentation of marine archeological wreck sites or ship-hull inspection cannot be overstated.

Cooperative Navigation

In addition to real-time visual SLAM, PeRL is working toward cooperative multi-vehicle missions for large-area survey. Multi-vehicle cooperative navigation offers the promise of efficient exploration by groups of mobile robots working together to pool their mapping capability. Most prior research in the SLAM community has focused on the case of single-agent mapping and exploration. Although these techniques can often be extended to a centralized multi-agent framework (Walter and Leonard, 2004) (provided that there are no communication bandwidth restrictions), the extension of single-agent techniques to a decentralized multi-vehicle SLAM framework is often neither obvious nor appropriate. Much of the previous research in the area of distributed multi-vehicle SLAM has focused primarily on terrestrial (i.e., land and aerial) applications (Williams et al., 2002; Ridley et al., 2002; Rekleitis et al., 2003; Bourgault et al., 2004; Ong et al., 2006; Partan et al., 2006). There, high-bandwidth radio commu-

FIGURE 3

A by-product of camera-based AUV navigation is the ability to produce an optically derived bathymetry. VAN-derived bathymetric maps from the Chios 2005 shipwreck survey are depicted (Foley et al., 2009). (a) Triangulated 3-D point cloud from registered imagery. The point cloud is gridded to 5 cm to produce an optically derived bathymetry map. (b) A quantitatively correct 3-D mosaic is generated by back-projecting the imagery onto the optical bathymetry map. The gray areas correspond to regions of no image overlap. (c) A zoomed overhead view shows detail of the amphorae pile in the center mound of the 3-D photomosaic.


nication is possible; however, underwater communication bandwidth is distinctly limited compared with that on land (Partan et al., 2006).

It requires on the order of 100 times more power to transmit than it does to receive underwater, which makes acoustic transmission and reception asymmetrical for medium access schemes (Partan et al., 2006). Half-duplex time division multiple access networks are usual, with typical acoustic-modem data rates ranging from 5 kbits/s at a range of 2 km (considered a high rate) to as little as 80 bits/s (a low rate). The low acoustic data rates are not simply a limitation of current technology: the theoretical performance limit for underwater acoustic communications is 40 km·kbps (e.g., a maximum theoretical data rate of 20 kbps at a range of 2 km) (Partan et al., 2006). Therefore, any type of multi-vehicle SLAM framework must adhere to the physical bandwidth limitations of the underwater domain.
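The quoted range-rate product makes the trade-off easy to sketch as a rule of thumb (the helper below is our illustration; the bound itself is from Partan et al., 2006):

```python
RANGE_RATE_PRODUCT_KM_KBPS = 40.0  # theoretical limit (Partan et al., 2006)

def max_acoustic_rate_kbps(range_km):
    """Approximate upper bound on acoustic data rate at a given range:
    halving the communication range roughly doubles the achievable rate."""
    return RANGE_RATE_PRODUCT_KM_KBPS / range_km

print(max_acoustic_rate_kbps(2.0))   # -> 20.0 kbps, the example in the text
print(max_acoustic_rate_kbps(10.0))  # -> 4.0 kbps at a 10 km range
```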

In previous work, Eustice et al. (2006c, 2007) developed a synchronous-clock acoustic-modem-based navigation system capable of supporting multi-vehicle ranging. The system consisted of a Woods Hole Oceanographic Institution (WHOI) Micro-Modem (Freitag et al., 2005a, 2005b), an underwater acoustic modem developed by WHOI, and a low-power stable clock board. This system can be used as a synchronous-transmission communication/navigation system wherein data packets encode time-of-origin information as well as local ephemeris data (e.g., x, y, z positional data and error metric). This allows for the direct measurement of inter-vehicle one-way travel time (OWTT) time-of-flight ranging. The advantage of an OWTT ranging methodology is that all passively receiving nodes within listening range are able to decode and measure the inter-vehicle range to the broadcasting node.
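The ranging computation itself is simple; what the synchronized clocks buy is that a single timestamped broadcast serves every listener. A hedged sketch (nominal sound speed; the 125 µs timing quantum below is inferred by us from the 18.75 cm resolution listed in Table 1, not stated in the text):

```python
SOUND_SPEED_M_S = 1500.0  # nominal; varies with temperature, salinity, depth

def owtt_slant_range(t_transmit, t_receive, c=SOUND_SPEED_M_S):
    """One-way travel-time range: every synchronized listener that decodes
    the packet's time of origin can compute its own range to the sender."""
    return c * (t_receive - t_transmit)

# A packet time-stamped at t = 0.000 s arrives at a listener at t = 1.000 s:
print(owtt_slant_range(0.0, 1.0))  # -> 1500.0 m slant range

# The 18.75 cm range resolution in Table 1 corresponds to a 125 microsecond
# timing quantum at the nominal sound speed:
print(SOUND_SPEED_M_S * 125e-6)  # -> 0.1875 m
```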

PeRL is currently investigating probabilistic fusion methods for an OWTT cooperative navigation multi-vehicle framework that scales across a distributed network of nodes in a non-fully connected network topology. The proposed acoustic-modem navigation framework will exploit inter-vehicle OWTT ranging to supplement perceptual SLAM localization, thereby reducing the need for state communication. The goal is to distribute state estimation between the vehicles in a coordinated fashion, allowing navigation-impoverished vehicles (e.g., no INS or DVL) to share in the positional accuracies of better-equipped vehicles (e.g., those with DVL bottom-lock, or VAN-navigated vehicles near the seafloor).

Figure 4 depicts preliminary results reported by Eustice et al. (2007), demonstrating the OWTT proof of concept. Here, a GPS-equipped surface ship navigationally aided a submerged AUV by broadcasting the ship's GPS position to the network while the AUV measured its range to the ship via the OWTTs.

Perception-Driven Control

Another research focus is in the domain of perception-driven control. Algorithms are under development to enable a vehicle to respond to the environment by autonomously selecting alternative search patterns based upon perceived feature distributions in the environment. This improves survey efficiency by limiting the time spent in benign, feature-poor areas and instead spending more bottom-time over actual targets. A seafloor survey vehicle, for example, may drive into an area devoid of features during a mission. Instead of continuing to search the featureless space, where there is little return on investment from a visual navigation system, the vehicle would return to a previously known feature-rich area and begin searching in another direction. PeRL is currently working on algorithms to

FIGURE 4

Preliminary OWTT results as reported by Eustice et al. (2007) for a two-node network consisting of an AUV and a surface ship. (left) The raw dead-reckoned AUV trajectory is shown in blue, GPS-derived ship position in cyan, OWTT-fused AUV trajectory in red, and the LBL-measured AUV position in green, which serves as an independent ground-truth. (right) Zoomed view of the AUV trajectory. (Color versions of figures available online at: http://www.ingentaconnect.com/content/mts/mtsj/2009/00000043/00000002.)


assist in the decision-making process of when to revisit known landmarks versus continuing new exploration. For example, Figure 5 depicts imagery from a hull inspection survey of a decommissioned aircraft carrier (Kim and Eustice, 2009). At one extreme, we see that some areas of the hull are very texture-rich (heavily biofouled), whereas other areas are nearly featureless (with no distinguishing features such as weld seams, rivets, port openings, etc.). In this type of environment, it makes no sense to incorporate imagery from the featureless region into the visual SLAM map, because the image registration algorithm will fail to localize the vehicle for lack of visual features. Instead, by coupling the visual navigation feedback into the trajectory planning and control, the AUV can more intelligently adapt its survey and map-building strategy so as to only return to feature-rich areas of the hull when it accrues too much pose uncertainty. By jointly considering along-hull pose uncertainty, map feature content, and energy expenditure accrued during AUV transit, we can frame the online path planning problem as a multi-objective optimization that seeks to find an optimal trajectory with respect to the possibly divergent criteria of exploration versus localization.
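One simple way to frame such a trade-off is as a scalar utility over candidate waypoints. The weights, arguments, and numbers below are hypothetical illustrations of the idea, not PeRL's planner:

```python
def waypoint_utility(pose_uncertainty, feature_richness, novelty,
                     transit_energy, w_loc=1.0, w_exp=1.0, w_energy=0.5):
    """Score a candidate waypoint: revisiting feature-rich areas is worth
    more when pose uncertainty is high; novel areas reward exploration;
    transit energy penalizes both."""
    localization = w_loc * pose_uncertainty * feature_richness
    exploration = w_exp * novelty
    return localization + exploration - w_energy * transit_energy

# With low uncertainty, the novel (unexplored) waypoint scores higher ...
explore_low = waypoint_utility(0.1, 0.2, novelty=1.0, transit_energy=0.3)
revisit_low = waypoint_utility(0.1, 0.9, novelty=0.0, transit_energy=0.2)
print(explore_low > revisit_low)   # -> True: keep exploring

# ... but once uncertainty has accrued, the feature-rich revisit dominates.
explore_high = waypoint_utility(5.0, 0.2, novelty=1.0, transit_energy=0.3)
revisit_high = waypoint_utility(5.0, 0.9, novelty=0.0, transit_energy=0.2)
print(explore_high < revisit_high)  # -> True: go close the loop
```

A real planner would optimize whole trajectories rather than single waypoints, but the switching behavior between exploration and localization is the same.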

Overview of PeRL's AUV Testbed

To pursue PeRL's three research thrust areas, two commercial Ocean-Server Iver2 AUV systems were purchased and modified to serve as a real-world testbed platform for SLAM research at UMich. Although several other vehicle platforms currently include stereo-vision systems and DVL sensors, the Iver2 (Figure 6) was

FIGURE 5

Depiction of the multi-objective constraints at play in perception-driven control.

FIGURE 6

PeRL's modified Ocean-Server Iver2: external and internal view.


selected as a testbed development platform because of its ability to be transported in a personal vehicle and launched by a single user. The vehicles, as shipped, are rated to a depth of 100 m, have a maximum survey speed of approximately 2 m/s (4 knots), and weigh approximately 30 kg, allowing for transport by two people (Anderson and Crowell, 2005).

Because the commercial off-the-shelf Iver2 vehicle does not come equipped with camera or DVL sensing, sensor upgrades were required to enable the stock vehicle to perform SLAM and coordinated multi-AUV missions. PeRL upgraded the vehicles with additional navigation and perception sensors (detailed in Figure 6b and Table 1), including 12-bit stereo down-looking Prosilica cameras, a Teledyne RD Instruments (RDI) 600 kHz Explorer DVL, a KVH Industries, Inc. (KVH) single-axis fiber-optic gyroscope (FOG), a Microstrain 3DM-GX1 attitude-heading-reference sensor, a Desert Star SSP-1 digital pressure sensor, and a WHOI Micro-modem for inter-vehicle communication. For illumination, we contracted the custom design and fabrication of an LED array by Farr and Hammar Engineering Services, LLC. To accommodate the additional sensor payload, a new Delrin nose cone was designed and fabricated. Additional 32-bit embedded PC104 CPU hardware was added for data logging, real-time control, and in situ real-time SLAM algorithm testing and validation. Details of the design modifications are discussed herein.

Mechanical/Electrical Design and Integration

The design goals during the integration phase of vehicle development consisted of minimizing hydrodynamic drag, maintaining neutral buoyancy, and maximizing the sensor payload capacity within the pressure hull. These requirements were achieved

TABLE 1

Integrated sensors on the PeRL vehicles.

Iver2 Instruments | Variable | Update Rate | Precision | Range | Drift
Ocean-Server OS5000 Compass | Attitude | 0.01-40 Hz | 1-3° (Hdg), 2° (Roll/Pitch) | 360° | –
Measurement Specialties Pressure Sensor MSP-340 | Depth | 0.01-40 Hz | <1% of FS† | 0-20.4 atm | –
Imagenex Sidescan Sonar (Dual Freq.) | Sonar image | 330 or 800 kHz | – | 15-120 m | –
USGlobalSat EM-406a GPS | XYZ position | 1 Hz | 5-10 m | – | –

New Instruments | | | | |
Prosilica GC1380H(C) Camera (down-looking stereo-pair) | Gray/color image | 0.1-20 fps | 1360 × 1024 pixels, 12-bit depth | – | –
Strobed LED Array‡ | Illumination | 0.1-4 fps | – | 2-4 m altitude | –
Teledyne RDI 600 kHz Explorer DVL | Body velocity | 7 Hz | 1.2-6 cm/s (at 1 m/s) | 0.7-65 m | –
KVH DSP-3000 1-Axis FOG | Yaw rate | 100 Hz | 1-6°/h | ±375°/s | 4°/h/√Hz
Desert-Star SSP-1 300PSIG Digital Pressure Transducer | Depth | 0.0625-4 Hz | 0.2% of FS† | 0-20.4 atm | –
Applied Acoustics USBL | XYZ position | 1-10 Hz | ±0.1 m (Slant Range) | 1000 m | –
OWTT* Nav (Modem + PPS) | Slant range | Varies | 18.75 cm (at 1500 m/s) | Varies | <1.5 m/14 h
  • WHOI Micro-modem | Communication | Varies | – | Varies | –
  • Seascan SISMTB v4 PPS Clock | Time | 1 Hz | 1 µs | – | <1 ms/14 h
Microstrain 3DM-GX1 AHRS | Attitude | 1-100 Hz | 2° (Hdg), 2° (Roll/Pitch) | 360° | –
 | Angular rate | 1-100 Hz | 3.5°/s | ±300°/s | 210°/h/√Hz

†Full scale. ‡In development. *One-way travel time.


through the use of lightweight materials such as acrylonitrile butadiene styrene (ABS), Delrin, and aluminum, and through careful center-of-buoyancy and center-of-mass computations. The entire vehicle was modeled using SolidWorks solid modeling software, and the extensive use of these computer-aided design (CAD) models provided optimal arrangements of internal components prior to actual installation (Figure 7).

The addition of a redesigned SLAM nose cone and sensor payload shifted both the original center of buoyancy and center of gravity. New positions were estimated using the CAD models and refined during ballast tests at the UMich Marine Hydrodynamics Laboratory (MHL). The vehicle is ballasted to achieve approximately 0.11 kg (0.25 lbs) of reserve buoyancy for emergency situations when the vehicle must surface without power. Vehicle trim is set neutral to achieve passive stability and to optimize both diving and surfacing operations.

The interior component arrangement within the main body took into account mass, geometry, heat dissipation, and electrical interference considerations when determining the spatial layout and arrangement of sensors, computing, and electronics in the main tube. Because of the high density of sensors and other devices in the pressure housing, the components with the highest heat radiation, such as computers and DC-DC converters, were placed in direct contact with the aluminum chassis to allow better heat dissipation. Also, sensors that are prone to electrical noise from surrounding electronics were spatially separated in the layout (e.g., the Micro-Electro-Mechanical Systems (MEMS) Microstrain 3DM-GX1 is located in the nose cone tip, the farthest point from the motor and battery pack influence).

Electrically, the vehicle is powered by a 665 Wh Li-ion battery pack made up of seven 95 Wh laptop batteries. This battery pack is managed by an Ocean-Server Intelligent Battery and Power System module. The additional sensors and PC104 computing added by PeRL draw less than 40 W total. This load is in addition to the original 12 W nominal vehicle hotel load and 110 W propulsion load of the stock Iver2, which results in a combined maximum total power draw of approximately 162 W (this assumes full hotel load and the motor at full power). The estimated run time at full load is 4.1 h; however, taking into account a more realistic assessment of propulsion load for photo-transect survey conditions, the continuous run time will be closer to 5.2+ h at a 2-knot (i.e., 75 W propulsion) survey speed.
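These endurance figures follow directly from the stated power budget; a quick arithmetic cross-check (all numbers from the text):

```python
# Power/endurance budget for the modified Iver2, as quoted above.
BATTERY_WH = 7 * 95.0       # seven 95 Wh laptop batteries = 665 Wh
PERL_PAYLOAD_W = 40.0       # added sensors + PC104 (< 40 W total)
HOTEL_W = 12.0              # stock Iver2 nominal hotel load
PROP_FULL_W = 110.0         # propulsion at full power
PROP_SURVEY_W = 75.0        # propulsion at a 2-knot photo-transect speed

full_load = PERL_PAYLOAD_W + HOTEL_W + PROP_FULL_W      # 162 W
survey_load = PERL_PAYLOAD_W + HOTEL_W + PROP_SURVEY_W  # 127 W

print(round(BATTERY_WH / full_load, 1))    # -> 4.1 h at full load
print(round(BATTERY_WH / survey_load, 1))  # -> 5.2 h at survey speed
```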

SLAM Nose Cone

In order to support the real-time visual SLAM objectives of PeRL, a down-looking stereo-vision system was added to the Iver2 vehicles. A dual-camera arrangement was chosen because it provides greater flexibility for visual SLAM research. For example, using a stereo rig allows for the measurement of metrical scale information during pairwise image registration and can thereby be used to improve odometry estimation by observing velocity scale error in DVL bottom-track measurements. Additionally, the dual-camera arrangement can be used to run one of the cameras in a low-resolution mode suitable for real-time image processing (e.g., VGA or lower), whereas the second camera can be run at its native 1.6-megapixel resolution for offline processing. At depth, illumination will come from an LED array mounted at the aft of the vehicle. The array will provide optimal illumination over a 2-4 m imaging altitude at up to a 4 fps strobe rate, with a camera-to-light separation distance of approximately 1.25 m.

To accommodate the two-camera vision system and the DVL transducer, a new nose cone was designed and fabricated. The UMich custom-designed nose cone (Figure 8) was fabricated from Acetron GP (Delrin) because of the material's high tensile strength, scratch resistance, fatigue endurance, low friction, and low water absorption. Threaded inserts are installed in the nose cone to prevent stripped threads, and stainless fasteners with a polytetrafluoroethene paste (to prevent corrosion issues) are used in all locations.

FIGURE 7

Mechanical layout.


high-level control interface where thehost speci es the reference heading,speed, and depth set points, and (2) a low-level control interface where thehost speci es servo commands for thethruster and control surfaces.

The modified PeRL vehicle uses a Digital-Logic ADL945 1.2 GHz Core-Duo PC104 for backseat control and data logging. The ADL945 consumes less than 10 W of power, has Gigabit Ethernet for interfacing with the Prosilica GigE cameras, and runs 32-bit Ubuntu Linux. The host stack records data from the cameras and navigation sensors, performs online state estimation, and directs the real-time control of the Iver2 via the backseat driver API. For ease of software development and modularity, we have adopted a multi-process software paradigm that uses the open-source LCM inter-process communication library developed by the Massachusetts Institute of Technology (MIT) Defense Advanced Research Projects Agency (DARPA) Urban Grand Challenge team (Leonard et al., 2008; LCM, 2009).
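LCM generates marshalling code for user-defined message types from a type specification, and processes exchange those messages over named channels. As a rough standard-library illustration of the fixed-layout encode/decode idea (this is not the actual LCM API; the message layout here is invented for the example), a navigation set-point message might be packed as:

```python
import struct

# Hypothetical fixed-layout navigation message: timestamp (microseconds),
# heading (rad), speed (m/s), depth (m). LCM generates equivalent
# marshalling code automatically from a .lcm type definition.
NAV_FMT = "<qddd"  # little-endian int64 followed by three float64s

def encode_nav(t_usec, heading, speed, depth):
    """Serialize a navigation set point to a byte buffer."""
    return struct.pack(NAV_FMT, t_usec, heading, speed, depth)

def decode_nav(buf):
    """Recover the (t_usec, heading, speed, depth) tuple."""
    return struct.unpack(NAV_FMT, buf)
```

A publisher process would hand the encoded buffer to the transport layer under a channel name, and any subscribing process (logger, controller, visualizer) would decode it independently, which is what makes the multi-process paradigm modular.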

Missions and Testing

Current missions and testing conducted by PeRL include testing at the UMich MHL tow tank, automated visual ship-hull inspection (conducted at AUV Fest 2008), field testing at the UMich Biological Station, and archaeological surveys of shipwrecks in the Thunder Bay National Marine Sanctuary.

UMich Marine Hydrodynamics Laboratory

The UMich MHL provides a controlled experimental environment for testing real-time underwater visual SLAM algorithms. The freshwater physical model basin measures 110 m × 6.7 m × 3.0 m (Figure 9) and can achieve prescribed vehicle motions via the electronically controlled tank carriage. Trajectory ground-truth is obtained from the encoder-measured carriage position and will be used to validate real-time visual SLAM pose estimates derived from registering the imagery of the tank floor. For ease of real-time algorithm development, we can attach a wet-mateable wired Ethernet connection to a vehicle bulkhead so that imagery and sensor data can be streamed live to a topside desktop computer. This allows for greater flexibility in real-time software development, visualization, and debugging. The use of the test facility has been beneficial for early development and testing of our Iver2 hardware and real-time underwater visual SLAM algorithms.
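Validation against the carriage encoders amounts to comparing time-aligned trajectories. A minimal error metric might look like the following sketch (the helper name and the assumption that both trajectories are sampled at the same instants are ours, not part of the MHL setup):

```python
import math

def trajectory_rmse(slam_xy, carriage_xy):
    """Root-mean-square position error between time-aligned SLAM pose
    estimates and carriage-encoder ground truth, both given as lists
    of (x, y) tuples sampled at the same instants."""
    assert len(slam_xy) == len(carriage_xy) and slam_xy
    se = sum((xs - xc) ** 2 + (ys - yc) ** 2
             for (xs, ys), (xc, yc) in zip(slam_xy, carriage_xy))
    return math.sqrt(se / len(slam_xy))
```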

AUV Fest 2008

The Iver2 serves as a testbed for real-time visual autonomous port and hull inspection algorithm research at UMich. We use the Iver2 as a proxy for visual hull inspection by developing and testing our algorithms to navigate over the seafloor (because the underlying visual navigation algorithm is fundamentally the same in the two scenarios). This allows us to use the Iver2 to validate the accuracy of our visually augmented navigation method, and to test and implement our real-time algorithms on the type of embedded system hardware typically found on AUVs.

Meanwhile, to test our VAN algorithms in a real hull-inspection scenario, PeRL collaborated with MIT and Bluefin Robotics at AUV Fest 2008 to put one of our camera systems on the Hovering Autonomous Underwater Vehicle (HAUV) (Vaganay et al., 2005). In this experiment, we collected imagery of the hull of the USS Saratoga, a decommissioned U.S. aircraft carrier stationed at Newport, Rhode Island (Figure 10a). PeRL packaged and mounted a calibrated Prosilica GC1380HC camera (the same camera system used in the Iver2 SLAM nose cone) and a flash strobe light system on the HAUV hull inspection vehicle. Boustrophedon survey imagery of the hull of the USS Saratoga was collected by the HAUV. The

FIGURE 9

Vehicle testing at MHL tow tank.


HAUV is equipped with a 1200 kHz DVL, a FOG, and a depth sensor, all of which are comparable to the sensor suite integrated into PeRL's Iver2 testbed.

Preliminary results for visual hull-relative navigation are shown in Figure 10 (Kim and Eustice, 2009). Here, we see a pose-graph of the camera constraints generated through pairwise registration of overlapping imagery. These constraints are fused with navigation data in an extended information filter framework to provide a bounded-error precision navigation estimate anywhere along the hull. Each node in the network corresponds to a digital still image taken along the hull (over 1300 images in all). Note the discrete dropout of imagery along the second leg in the region of 5 m to 20 m along the hull axis. Because of a logging error, we did not record any imagery during this time; however, this gap in the visual data record actually highlights the utility of our hull-referenced visual navigation approach. Because we are able to pairwise register views of the hull taken from different times and locations, the camera-based navigation algorithm is able to "close-the-loop" and register itself to earlier imagery from the first leg of the survey, thereby resetting any incurred DVL navigation error during the data dropout period. It is precisely this hull-referenced navigation capability that allows the AUV to navigate in situ along the hull without the need for deploying any external aiding (e.g., acoustic-beacon transponders).
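The loop-closing behavior described above can be illustrated with a toy one-dimensional pose-graph: biased odometry accumulates drift along a chain of poses until a single loop-closure constraint pulls the whole solution back toward the direct measurement. This sketch solves the graph by batch linear least squares, which only illustrates the principle; PeRL's implementation uses an extended information filter on a much larger graph:

```python
import numpy as np

# Toy 1-D pose-graph with poses x0..x3. Odometry is biased 10% long
# (truth is 1.0 per step); one loop closure measures x3 - x0 directly.
odo = [1.1, 1.1, 1.1]   # chain constraints: x_{i+1} - x_i = odo[i]
loop = 3.0              # loop-closure constraint: x3 - x0 = 3.0

A, b = [], []
for i, z in enumerate(odo):                 # odometry rows
    row = np.zeros(4); row[i] = -1.0; row[i + 1] = 1.0
    A.append(row); b.append(z)
row = np.zeros(4); row[0] = -1.0; row[3] = 1.0   # loop-closure row
A.append(row); b.append(loop)
row = np.zeros(4); row[0] = 1.0                  # anchor: x0 = 0
A.append(row); b.append(0.0)

x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
# Dead reckoning alone would place x3 at 3.3; the loop closure pulls
# the estimate back toward the directly measured 3.0.
```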

UMich Biological Station

Field trials were held on Douglas Lake at the UMich Biological Station (UMBS) in Pellston, Michigan, during July 2008. Four days of on-water testing demonstrated maneuverability, vehicle speed, dead-reckoning navigation, wireless Ethernet communication, sidescan sonar functionality, the digital compass, and manual surface joystick operation modes, as summarized in Table 2.

Launch and recovery were conducted from shore, from a dock, and from a pontoon boat. A full sidescan sonar survey of the southeastern bay at Douglas Lake was run from the UMBS docks (Figure 11). After completion of the mission, the vehicle was manually driven under joystick con-

FIGURE 10

Hull inspection results from AUV Fest 2008. (a) Depiction of the AUV Fest 2008 experimental setup. (b) The camera-derived pose constraints are shown as red and green links. Each vertex represents a node in the pose-graph enclosed by its 3σ covariance uncertainty ellipsoid. Because of the change in visual feature richness along the hull, the uncertainty ellipsoid inflates when the vehicle is not able to build enough pose constraints but then deflates once VAN creates camera constraints with previous tracklines. Three small figure insets depict the typical feature richness for different regions of the hull. (c) Triangulated 3-D points are fitted to obtain a smooth surface reconstruction and then texture mapped to create a 3-D photomosaic. The six white dots are targets that were manually placed on the hull and used to verify the utility of the visual hull inspection algorithm.

Spring 2009 Volume 43, Number 2 43


trol from a portable wireless station back to the dock for recovery. This on-shore launch and recovery capability facilitates the ease of experimental field testing with the Iver2.

Thunder Bay National Marine Sanctuary

In August 2008, PeRL collaborated with the National Oceanic and Atmospheric Administration (NOAA) Thunder Bay National Marine Sanctuary (TBNMS) researchers to map unexplored areas outside the Sanctuary's current boundaries (Figure 12a). Established in 2000, the TBNMS protects

TABLE 2

Results of testing the stock Iver2 at UMBS.

Attribute | Performance
Maneuverability | 1.5 m altitude bottom following; 25 cm depth band control; 4.5 m turning radius
Speed | 0.5-1.5 m/s
DR navigation accuracy (DR = prop counts + compass u/w; GPS when on surface) | 10% of distance traveled
802.11g Wi-Fi | 100 m range (on surface)
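The "10% of distance traveled" dead-reckoning figure in Table 2 implies a position error that grows roughly linearly with path length between GPS fixes. A quick back-of-the-envelope check (the mission numbers below are hypothetical, chosen only to match the speed range in Table 2):

```python
def dr_error(distance_m, error_rate=0.10):
    """Expected dead-reckoned position error (m) after traveling
    distance_m meters, given a fractional error-growth rate."""
    return error_rate * distance_m

# A 1 km submerged trackline at 1.5 m/s (roughly 11 minutes between
# surface GPS fixes) could accumulate on the order of 100 m of
# position error, which motivates the DVL/FOG upgrades described
# earlier in the article.
```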

FIGURE 11

VectorMap mission-planning interface for the Iver2. The depicted tracklines are for a sidescan sonar survey of a portion of Douglas Lake. Launch and recovery of the Iver2 were performed from a dock on shore.
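Boustrophedon ("lawnmower") tracklines like those in the VectorMap plan can be generated programmatically from a survey rectangle and a track spacing. The sketch below is a generic illustration (the helper name and waypoint convention are ours, and this is not VectorMap's mission file format):

```python
def lawnmower(x0, y0, width, length, spacing):
    """Generate boustrophedon survey waypoints covering a width x length
    rectangle anchored at (x0, y0), with parallel tracks 'spacing' apart.
    Returns a list of (x, y) waypoints with alternating track direction."""
    wps, flip = [], False
    n_tracks = int(width / spacing) + 1
    for i in range(n_tracks):
        x = x0 + i * spacing
        ys = (y0 + length, y0) if flip else (y0, y0 + length)
        wps.append((x, ys[0]))
        wps.append((x, ys[1]))
        flip = not flip          # reverse direction on the next leg
    return wps
```

Track spacing is typically chosen so that adjacent sonar swaths overlap, which is what permits the mosaicked coverage shown in the survey figures.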


one of the nation's most historically significant collections of shipwrecks. Located in the northeast corner of Michigan's Lower Peninsula, the 448 mi2 sanctuary contains 40 known historic shipwrecks. Archival research indicates that over 100 sites await discovery within and just beyond the sanctuary's current boundaries. This fact, coupled with strong public support and the occurrence of dozens of known shipwrecks, provides the rationale for the sanctuary's desire to expand from 448 mi2 to 3,662 mi2 (an eightfold increase). To date, however, a comprehensive remote sensing survey has not been conducted in the potential expansion area. Moreover, significant portions of the existing sanctuary have not been explored.

PeRL is engaged in a 5-year collaborative effort with TBNMS to use the Sanctuary as a real-world engineering testbed for our AUV algorithms research. TBNMS provides in-kind support of ship time and facilities and in return receives SLAM-derived archaeological data products ranging from 3-D photomosaics of shipwrecks to sidescan sonar maps of the Sanctuary seafloor. In addition, PeRL is engaged in public outreach efforts in collaboration with TBNMS to educate the general public in the use and technology of underwater robotics. In development is an AUV technology display in their state-of-the-art Great Lakes Maritime Heritage Center (a 20,000 ft2 building featuring a 100-seat theater, 9,000 ft2 of exhibit space, and distance learning capabilities), which will consist of an Iver2 AUV hull, a multimedia kiosk, and maps and data products derived from PeRL's field testing in the Sanctuary.

This past August, as part of an NOAA Ocean Exploration grant, PeRL fielded one of its two Iver2 AUVs to collect sidescan sonar imagery in unmapped regions of the Sanctuary seafloor. Figure 12 shows survey tracklines and sonar imagery collected of a newly found wreck outside of the Sanctuary's boundaries in approximately 50 m of water depth.

FIGURE 12

Sidescan sonar mapping results from the 2008 summer field season in TBNMS. (a) Thunder Bay National Marine Sanctuary facility. (b) A large-area search was conducted first to locate the target wreck indicated in the white box (survey tracklines are shown in green). A second finer-scale survey was then conducted to map the target at higher resolution (tracklines overlaid in gray). (c) Target imagery found using 330 kHz sonar. (d) Detailed target imagery using 800 kHz sonar.


Conclusion

This paper provided an overview of PeRL's AUV algorithms research and testbed development at UMich. To summarize, PeRL's main research thrusts are in the areas of (1) real-time visual SLAM, (2) cooperative multi-vehicle navigation, and (3) perception-driven control. Toward that goal, we reported the modifications involved in preparing two commercial Ocean-Server AUV systems for SLAM research at UMich. PeRL upgraded the vehicles with additional navigation and perceptual sensors, including 12-bit stereo down-looking Prosilica cameras, a Teledyne RDI 600 kHz Explorer DVL for 3-axis bottom-lock velocity measurements, a KVH single-axis FOG for yaw rate, and a WHOI Micro-modem for communication, along with other sensor packages. To accommodate the additional sensor payload, a new Delrin nose cone was designed and fabricated. Additional 32-bit embedded CPU hardware was added for data logging, real-time control, and in situ real-time SLAM algorithm testing and validation. Our field testing and collaboration with the TBNMS will provide a validation of the proposed navigation methodologies in a real-world engineering setting, whereas a new museum exhibit on underwater robotics at their visitor center will disseminate the findings and results of this research to the general public.

Acknowledgments

This work is supported in part through grants from the National Science Foundation (Award #IIS0746455), the Office of Naval Research (Award #N00014-07-1-0791), and an NOAA Ocean Exploration grant (Award #WC133C08SE4089).

References

Anderson, B., Crowell, J. 2005. Workhorse AUV - A cost-sensible new autonomous underwater vehicle for surveys/soundings, search & rescue, and research. In Proc. of the IEEE/MTS OCEANS Conference and Exhibition. pp. 1228-1233. Washington, D.C.: IEEE.

Bailey, T., Durrant-Whyte, H. 2006. Simultaneous localization and mapping (SLAM): Part II. IEEE Robot. Autom. Mag. 13(3):108-117.

Ballard, R.D. (ed). 2008. Archaeological Oceanography. Princeton, New Jersey: Princeton University Press. 296 pp.

Bourgault, F., Furukawa, T., Durrant-Whyte, H. 2004. Decentralized Bayesian negotiation for cooperative search. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 2681-2686. Sendai, Japan: IEEE.

Durrant-Whyte, H., Bailey, T. 2006. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 13(2):99-110.

Eustice, R.M. 2008. Toward real-time visually augmented navigation for autonomous search and inspection of ship hulls and port facilities. In: International Symposium on Technology and the Mine Problem. Monterey, CA: Mine Warfare Association (MINWARA).

Eustice, R.M., Singh, H., Leonard, J.J. 2006a. Exactly sparse delayed-state filters for view-based SLAM. IEEE Trans. Robot. 22(6):1100-1114.

Eustice, R.M., Singh, H., Leonard, J.J., Walter, M.R. 2006b. Visually mapping the RMS Titanic: Conservative covariance estimates for SLAM information filters. Int. J. Rob. Res. 25(12):1223-1242.

Eustice, R.M., Whitcomb, L.L., Singh, H., Grund, M. 2006c. Recent advances in synchronous-clock one-way-travel-time acoustic navigation. In Proc. of the IEEE/MTS OCEANS Conference and Exhibition. pp. 1-6. Boston, MA: IEEE.

Eustice, R.M., Whitcomb, L.L., Singh, H., Grund, M. 2007. Experimental results in synchronous-clock one-way-travel-time acoustic navigation for autonomous underwater vehicles. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 4257-4264. Rome, Italy: IEEE.

Eustice, R.M., Pizarro, O., Singh, H. 2008. Visually augmented navigation for autonomous underwater vehicles. IEEE J. Oceanic Eng. 33(2):103-122.

Fleischer, S. 2000. Bounded-error vision-based navigation of autonomous underwater vehicles. PhD thesis, Stanford University.

Foley, B., DellaPorta, K., Sakellariou, D., Bingham, B., Camilli, R., Eustice, R., Evagelistis, D., Ferrini, V., Hansson, M., Katsaros, K., Kourkoumelis, D., Mallios, A., Micha, P., Mindell, D., Roman, C., Singh, H., Switzer, D., Theodoulou, T. 2009. New methods for underwater archaeology: The 2005 Chios ancient shipwreck survey. J. Hesperia. In press.

Freitag, L., Grund, M., Singh, S., Partan, J., Koski, P., Ball, K. 2005a. The WHOI micro-modem: an acoustic communications and navigation system for multiple platforms. In Proc. of the IEEE/MTS OCEANS Conference and Exhibition. pp. 1086-1092. Washington, D.C.: IEEE.

Freitag, L., Grund, M., Partan, J., Ball, K., Singh, S., Koski, P. 2005b. Multi-band acoustic modem for the communications and navigation aid AUV. In Proc. of the IEEE/MTS OCEANS Conference and Exhibition. pp. 1080-1085. Washington, D.C.: IEEE.

Hunt, M., Marquet, W., Moller, D., Peal, K., Smith, W., Spindel, R. 1974. An acoustic navigation system. Woods Hole Oceanographic Institution Technical Report WHOI-74-6.

Kim, A., Eustice, R.M. 2009. Pose-graph visual SLAM with geometric model selection for autonomous underwater ship hull inspection. In: IEEE/RSJ International Conference on Intelligent Robots and Systems. Submitted for publication.
