Automated, Long-Range, Night/Day, Active-SWIR Face Recognition System
Brian E. Lemoff, Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, Andrew Dolby, Robert Ice WVHTC Foundation, 1000 Technology Drive, Suite 1000, Fairmont, WV, USA 26554
ABSTRACT
Covert, long-range, night/day identification of stationary human subjects using face recognition has been previously demonstrated using the active-SWIR Tactical Imager for Night/Day Extended-Range Surveillance (TINDERS) system. TINDERS uses an invisible, eye-safe, SWIR laser illuminator to produce high-quality facial imagery under conditions ranging from bright sunlight to total darkness. The recent addition of automation software to TINDERS has enabled the autonomous identification of moving subjects at distances greater than 100 m. Unlike typical cooperative, short-range face recognition scenarios, where positive identification requires only a single face image, the SWIR wavelength, long distance, and uncontrolled conditions mean that positive identification requires fusing the face matching results from multiple captured images of a single subject. Automation software is required to initially detect a person, lock on and track the person as they move, and select video frames containing high-quality frontal face images for processing. Fusion algorithms are required to combine the matching results from multiple frames to produce a high-confidence match. These automation functions will be described, and results showing automated identification of moving subjects, night and day, at multiple distances will be presented.
Keywords: Face Recognition, SWIR, Night Vision, Surveillance, Biometrics, Active Imaging
1. INTRODUCTION
The capability to covertly detect and identify people at long distances would be of great value to the military, law enforcement, and private security communities. Of most interest would be a capability that works night or day, under conditions ranging from bright sunlight to total darkness. Such a capability does not currently exist. While there are several biometric modalities commonly used to identify individuals, including DNA, fingerprint, iris, and face recognition, only face recognition has the potential to be of use at long distances. In an effort to develop this capability, the West Virginia High Technology Consortium Foundation (WVHTCF), under a research contract from the Office of Naval Research (ONR) and oversight from the Office of the Secretary of Defense Deployable Force Protection Program (DFP), is developing the Tactical Imager for Night/Day Extended Range Surveillance (TINDERS), an active short-wave infrared (SWIR) imaging system that illuminates targets with an invisible and eye-safe SWIR laser beam and matches SWIR facial images against mug-shots enrolled in a visible-spectrum database for identification.1,2,3 TINDERS nighttime face recognition results for stationary targets have been published at distances of 100 m, 200 m, and 350 m. A practical system, however, must be able to identify people in the distance as they move around naturally, since a covert identification system cannot expect subjects to stand still and look directly at the camera. Thus, a system must have an automated capability to detect people, track them as they move, capture good facial images from video, and process them for face recognition. These automated capabilities have recently been developed for the TINDERS system and are described in this paper.
1.1 Background
Detailed motivations for and descriptions of the TINDERS system were previously published,1,2,3 and are summarized in this section. Active-SWIR imaging has a number of advantages over other imaging modalities that make it uniquely suitable for long-range night/day human identification. Traditional visible-spectrum imagery produces the most recognizable facial images, but at night, there is not enough light to make a long-range close-up facial image. A powerful spotlight could be used to illuminate the face, but this would not be covert, and the required intensity would pose an eye-safety hazard. Thermal infrared imagery is commonly used for long-range nighttime surveillance, but the imagery produced does not correlate well to visible-spectrum mug shots. Active near infrared (NIR) imagery is an excellent modality for shorter-range nighttime face recognition, as the facial imagery correlates well to visible-spectrum mug-shots; however, the NIR illumination power required for close-up face images at distances beyond 100 m poses a
severe eye-safety hazard near the illuminator, where the beam is smaller and more intense. At SWIR wavelengths longer than 1400 nm, all light is absorbed in the eye before reaching the retina, resulting in a maximum permissible exposure that is 65 times higher than at 800 nm. Thus, a SWIR illuminator of wavelength > 1400 nm can safely emit 65 times more light than a NIR illuminator of the same size. For these reasons, active-SWIR imaging, with an illumination wavelength > 1400 nm, was chosen as the TINDERS sensor modality.
Facial imagery in this wavelength region has no thermal component, and only shows light that is reflectively scattered off of the face, revealing most of the same features as visible-spectrum or NIR imagery. Facial imagery in this wavelength range differs from visible-spectrum imagery primarily in the lower reflectivity of skin and higher reflectivity of hair, i.e. skin tone appears dark and hair appears white. Despite these differences in skin and hair reflectivity, off-the-shelf face recognition software available in 2010 (FaceExaminer Workstation 2.1 from Identix) was able to correctly match 40 out of 56 (71%) high-quality SWIR images with corresponding visible-spectrum face images from a database of 1156 different faces. Note that success rate as measured this way depends on database size, decreasing with increasing database size. Figure 1 shows examples of SWIR images and the corresponding visible-spectrum images used in that experiment. These SWIR images were made using the same illumination wavelength and focal plane array used in TINDERS, but at close range, thus representing a best case for TINDERS SWIR image quality. The current TINDERS face recognition software is based on a customized version of the latest FaceExaminer Workstation software from MorphoTrust USA.4
Figure 1. Example high-quality active-SWIR images used in a 2010 face recognition experiment, along with matching visible-spectrum images. The SWIR images were acquired at close range, in the dark, using the same illumination wavelength, > 1400 nm, and SWIR focal plane array used in the TINDERS system. Off-the-shelf face recognition software correctly matched 40 out of 56 SWIR images with the correct visible image in a database containing 1156 different faces.
Figure 2. (left) Conceptual illustration of TINDERS system. (right) Current version of TINDERS prototype system.
Figure 2 includes both a conceptual illustration and a photograph of the TINDERS prototype hardware. The TINDERS system consists of three physical units: an optical head that sits on a pan-tilt (PT) stage; an electronics box that provides power, light (through an optical fiber), and communications to the optical head; and a computer that runs the user interface, low-level camera control functions, system automation, and face recognition software. The TINDERS optical head includes both the SWIR illuminator optics and the imager. In the current version of the hardware, the optical head weighs roughly 30 pounds and sits in an environmentally-controlled enclosure atop a commercial pan-tilt stage. The imager and illuminator pan, tilt, and zoom together so that the illuminator beam is always just filling the imager field of view. This serves to maximize the image signal level and avoid wasted light. The illuminator light source, located in the electronics box, delivers a maximum power of 5 W to the optical head through an optical fiber in the umbilical. Because the illumination beam is expanded to 5 inches in diameter prior to exiting the optical enclosure, the TINDERS illuminator is safe to the unaided eye at point-blank range.
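The eye-safety argument above rests on spreading the 5 W source over a large exit aperture. A rough back-of-envelope check of the exit irradiance, assuming a uniform ("top-hat") beam profile (a simplification; the actual safety determination depends on the wavelength-specific maximum permissible exposure, which is not computed here):

```python
import math

# Back-of-envelope exit irradiance for the TINDERS illuminator,
# assuming a uniform (top-hat) beam profile -- a simplification.
power_w = 5.0                  # max power delivered to the optical head (from the paper)
beam_diameter_cm = 5 * 2.54    # 5-inch exit beam diameter, in cm

beam_area_cm2 = math.pi * (beam_diameter_cm / 2) ** 2
irradiance = power_w / beam_area_cm2   # W/cm^2 at the exit aperture

print(f"exit beam area: {beam_area_cm2:.1f} cm^2")      # ~126.7 cm^2
print(f"exit irradiance: {irradiance:.4f} W/cm^2")      # ~0.0395 W/cm^2
```

Spreading the beam by a factor of ~100 in area compared with a typical fiber-collimator output is what lets the full 5 W remain eye-safe at point-blank range.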
1.2 Automation Strategy
As discussed above, even with high-quality, short-range SWIR facial imagery from a stationary, frontal face, the single-image success rate is on the order of 70%. Thus, high-confidence identification of noncooperative, moving targets, at long distances will require the fusion of face matching results from multiple SWIR facial images known to be of the same person. Ideally this process would be fully-automated, to allow for unattended operation. The individual automation processes required for a fully automated system include:
Detection of an individual and designation of that individual to be tracked;
Tracking of the individual as they move;
Zooming in on the face of the moving target;
Capturing video frames containing frontal facial images of sufficient quality for identification;
Submission of captured facial images to face recognition software;
Fusion of the matching results from multiple video frames;
Thresholding to determine whether fused matching result has enough confidence to report as a match;
Reporting the positive identification result.
As previously reported2, a cascade pattern matching algorithm was developed to detect upper bodies. This algorithm is capable of automatically detecting people at distances up to 3 km as long as the field of view is wide enough to include the full upper body. To implement full automation, a rule would need to be applied to determine when a detected person should or should not be tracked. In lieu of this, TINDERS displays a box around all detected people, and an operator can click on the box in order to designate the person to be tracked. For tracking, TINDERS currently uses an SLA-2000 video processing board5, a commercial product primarily used for tracking ground objects in aerial surveillance video. The TINDERS video is processed by this board, and when a detected person is designated for tracking, the tracking box coordinates are sent to the board, which updates the target position after each frame. The updated tracking coordinates are then used to calculate a velocity vector that is sent to the pan-tilt stage to keep the tracked target as close to the center of the imager field of view as possible.
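The closed tracking loop described above (updated tracking coordinates turned into a velocity vector for the pan-tilt stage) can be sketched as a simple proportional controller. The gains, frame size, and field-of-view values below are illustrative assumptions, not the actual TINDERS/SLA-2000 interface:

```python
# Illustrative sketch of the tracking loop described above: the tracker's
# updated box center is turned into a pan/tilt rate that drives the target
# back toward the image center. Gains and FOV values are made-up examples,
# not the real TINDERS/SLA-2000 parameters.

def pan_tilt_rate(box_center, frame_size, fov_deg, gain=1.5):
    """Return (pan, tilt) rates in deg/s from the tracked box center.

    box_center: (x, y) pixel coordinates of the tracking box center
    frame_size: (width, height) of the video frame in pixels
    fov_deg:    (horizontal, vertical) field of view in degrees
    gain:       proportional gain (1/s) -- a tuning parameter
    """
    rates = []
    for c, size, fov in zip(box_center, frame_size, fov_deg):
        err_px = c - size / 2            # pixel offset from frame center
        err_deg = err_px * fov / size    # convert to angular error
        rates.append(gain * err_deg)     # proportional velocity command
    return tuple(rates)

# Target sits right of center in a 640x512 frame with a 2.0 x 1.6 deg FOV:
pan, tilt = pan_tilt_rate((400, 256), (640, 512), (2.0, 1.6))
print(pan, tilt)  # pan > 0 (slew right), tilt == 0 (already centered)
```

A pure proportional law like this is the minimal form; a fielded loop would typically also limit slew rate and filter the tracker jitter.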
Once a person is being tracked with a wide field of view, a fully-automated solution would automatically zoom in on the target while continuing to track. This has not yet been implemented in TINDERS, so zoom is still controlled by an operator. At narrower fields of view, another cascade algorithm, also previously reported2, detects faces. As with the upper-body detection algorithm, a box is displayed around the detected face, and an operator can click on the box to initiate tracking on the face. Once the face box coordinates have been sent to the SLA-2000 for tracking, the head can still be tracked even when the person turns so that the face is no longer visible.
Once the field of view and distance are small enough for face recognition to be possible, TINDERS begins to evaluate detected faces for face recognition suitability. It was previously reported2 that eye-detection and nose-detection algorithms have been developed to determine whether a frontal face with clear features is present in the video frame. At 30 frames per second, the algorithms typically run fast enough to search one third of the frames for good faces. When a “good” face is detected, it is placed on a queue of images to be processed for face recognition. As new “good” faces are detected, they are placed at the front of the queue so that the face recognition software will always be processing the most recently detected “good” face image. The face recognition software processes SWIR face probe images from the queue one at a time, matching them against a visible-spectrum database of facial images. After each probe image is matched, the results are fused with the previous results, and the fused results are displayed.
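The queue behavior described above, where newly detected "good" faces go to the front so the matcher always works on the most recent capture, can be sketched with a deque. The class and method names are illustrative, not TINDERS code:

```python
from collections import deque

# Sketch of the capture queue described above: newly detected "good" faces
# are pushed to the FRONT, so the face recognition software always takes
# the most recently captured face first, while older captures remain queued.

class FaceQueue:
    def __init__(self):
        self._q = deque()

    def add_capture(self, frame_id):
        self._q.appendleft(frame_id)   # newest capture goes to the front

    def next_probe(self):
        return self._q.popleft() if self._q else None  # most recent first

    def pending(self):
        return len(self._q)

q = FaceQueue()
for frame in [101, 102, 103]:   # frames captured in time order
    q.add_capture(frame)

print(q.next_probe())  # 103 -- the most recently captured face is matched first
print(q.pending())     # 2 older captures remain for idle periods
```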
In a fully-automated solution, TINDERS would need to determine whether the faces on the queue all belong to the same person, prior to fusing the face matching results. For this purpose, a very fast SWIR-to-SWIR face matching algorithm is currently under development, but it has not yet been integrated. When a head is being tracked, only the part of the image in and near the tracking box is searched for faces, largely preventing the capture of faces belonging to other people in the image. Otherwise, the entire image is searched, leaving it up to the operator to ensure that all captured faces belong to the same person. Controls on the GUI allow the operator to clear the queue and the face recognition results when a new person is to be identified.
In a fully automated solution, a confidence level would be calculated for the fused face recognition results, and a threshold would be applied to determine when to report a result as a positive identification. Work is currently underway to do this, but it has not been implemented. Currently, the operator must look at the top candidates, ranked in order of fused matching score, and determine whether a positive match has been made. The operator can then click a button to report the result.
2. RESULTS
To experimentally illustrate the automation functions that have been implemented into the TINDERS system, daytime and nighttime scenarios were run with a single subject walking at a distance > 100 m and a distance > 200 m, to exercise the live body detection, face detection, tracking, and automated face recognition functions. For ease of analysis, TINDERS records video containing telemetry that allows the video to be played back later on the TINDERS GUI, displaying exactly as it did when live, with all detection algorithms and face recognition software functioning as if the data were live. All images included below were taken as screen shots during this "playback mode", but they are representative of the live display. In addition to the single-subject walking scenarios, previously-collected TINDERS nighttime video data of test subjects rotating in place by 360 degrees at 100-m and 350-m range was run in "playback mode" to illustrate the automated face capture and face recognition functions.
2.1 Detection and Tracking
A daytime scenario was run in which a subject was asked to walk around in a roughly rectangular pattern in an area roughly 138 m from the TINDERS system. Figure 4 illustrates the TINDERS upper-body detection and tracking functions. In the image on the left, the dashed blue box shows upper-body detection in the live video. The operator then clicked on this box, which initiated tracking. The image on the right shows the subject some time later, after he has turned and walked away from the building. The solid orange (larger) box shows the tracking box, while the smaller dashed blue box shows upper-body detection. It should be noted that the subject is no longer centered in the orange box, and in time, the SLA-2000 will lose track of the subject. This typically occurs when there is a feature-rich background, as in this image. One approach to mitigate this in the future is to use the upper-body detection information, such as the dashed blue box in the right image, to periodically adjust the tracking box to keep it centered on the subject.
Figure 4. Daytime example at 138-m range. (left) Upper body is detected and a dashed blue box is displayed. Tracking was initiated by clicking on this box. (right) TINDERS pans to track subject as he walks. Orange (large) tracking box and dashed blue (smaller) upper-body detection box are overlaid on the live image as the subject walks.
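TINDERS does not yet apply an automatic confidence threshold to the fused results; the operator inspects the ranked candidates and decides. One minimal form such a not-yet-implemented reporting rule could take, with an invented score scale, threshold, and margin:

```python
# Minimal sketch of an automatic reporting rule of the kind discussed above:
# report a positive identification only when the fused rank-1 score clears a
# threshold AND leads the rank-2 score by a margin. The score scale, the
# threshold, and the margin are invented for illustration.

def report_decision(fused_scores, threshold=0.80, margin=0.10):
    """fused_scores: dict of candidate_id -> fused matching score (0..1)."""
    ranked = sorted(fused_scores.items(), key=lambda kv: kv[1], reverse=True)
    (top_id, top), (_, runner_up) = ranked[0], ranked[1]
    if top >= threshold and (top - runner_up) >= margin:
        return top_id          # confident enough to auto-report
    return None                # leave the decision to the operator

print(report_decision({"A": 0.91, "B": 0.55, "C": 0.40}))  # A
print(report_decision({"A": 0.91, "B": 0.88, "C": 0.40}))  # None (scores too close)
```

Requiring a margin over the runner-up, not just an absolute score, guards against the impostor-heavy early probes seen in the experiments below.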
A nighttime scenario was run with the same subject at the same location. Figure 5 illustrates the same detection and tracking functions exercised at night. In the left image, the subject is facing the camera, and an upper-body detection box is displayed. The operator clicked on this box, initiating tracking. In the right image, the subject is tracked as he approaches the building, with the orange tracking box overlaid on the image. In this case, there is no upper-body detection shown in the right image.
Figure 5. Nighttime example at 137-m range. (left) Upper body is detected and a dashed blue box is displayed. Tracking was initiated by clicking on this box. (right) TINDERS pans to track subject as he walks. Orange tracking box is overlaid on the live image as the subject walks.
Similar day and night scenarios were run at another location at a distance of roughly 220-m. Figure 6 shows an example of nighttime upper-body detection and tracking at a distance of 224 m. The ground appears dark in this image because it is covered with snow, which appears black due to high absorption of the SWIR illumination by water3.
Figure 6. Nighttime example at 224-m range. TINDERS pans to track subject as he walks. Orange tracking box and dashed blue upper-body detection box are overlaid on the live image as the subject walks.
The images shown in figures 4 through 6 have a large field of view, in which the full body is visible. TINDERS face detection is not enabled when the field of view (FOV) exceeds an upper limit. Figure 7 shows a progression of nighttime images at 215-m range as TINDERS zooms in to a smaller field of view where face detection is enabled. In image (a) the FOV is 2.2 m, and only an upper-body detection box is shown. In image (b) the FOV has been reduced to 1.83 m, and now both an upper-body detection box and a face detection box are visible. The operator then clicked on the face box, initiating tracking of the face. In image (c), an orange tracking box is now visible around the face, indicating that the face is being tracked. In image (d), the subject has turned and is now walking away from the camera, but TINDERS is still tracking his head, even though the face is no longer visible.
Figure 7. Nighttime example at 215-m range. (a) TINDERS detects upper body at FOV=2.2 m. (b) TINDERS detects both upper body and face at FOV=1.83 m. (c) TINDERS tracks face (orange box) while detecting both face and upper body. (d) TINDERS continues to track head after subject turns and walks away from camera.
2.2 Automated Face Recognition
During the same nighttime scenario in which full-body tracking was illustrated in Figure 5, face detection, tracking, and automated face recognition were also performed. Figure 8 shows a screen shot of the full TINDERS GUI, illustrating all three of these functions working simultaneously. In the figure, TINDERS is zoomed into its minimum field of view, 64 cm. At this time, the subject was walking back and forth towards the camera and away from the camera, while TINDERS tracked the subject's head. The orange box overlaid on the face indicates the tracking box from the SLA-2000 that is used to control the pan-tilt stage. The green box overlaid on the face indicates face detection. Simultaneously, TINDERS is detecting and capturing frontal facial images, and processing them for face recognition.
As each new SWIR probe image is processed by the face recognition software (searching the database for the best matches), a thumbnail of the probe image appears on the left of the face recognition window, located along the bottom of the screen. Under the probe image, the number of probes searched and the number waiting in the queue are indicated. For each probe searched, the top 10 candidates from the database are returned with matching scores. These scores are then fused with the results from previous probes using the "maximum score" fusion method, which assigns a fused score to each candidate equal to that candidate's highest matching score over all of the probes. The top 8 candidates are displayed in the face recognition window, along with their fused rank and score. As each new probe image is processed, the matching results are automatically updated. Figure 8 shows the face recognition results after 4 probe images have been processed. Note that some faces in the results have been obscured for privacy reasons. For all face recognition results shown in this paper, a visible-spectrum database containing facial images of 114 people was used.
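The "maximum score" fusion rule, in which each candidate's fused score is its best per-probe matching score seen so far, can be sketched directly. The candidate IDs and scores below are invented for illustration:

```python
# Sketch of the "maximum score" fusion rule described above: as each probe
# is searched, every returned candidate's fused score becomes the maximum of
# its per-probe matching scores seen so far. Candidates and scores invented.

def fuse_max(per_probe_results):
    """per_probe_results: list of dicts (one per probe), candidate -> score."""
    fused = {}
    for result in per_probe_results:           # probes in processing order
        for cand, score in result.items():
            fused[cand] = max(fused.get(cand, 0.0), score)
    # Ranked list of (candidate, fused score), best first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

probes = [
    {"A": 0.42, "B": 0.38},   # probe 1: low scores all around
    {"B": 0.20, "C": 0.15},   # probe 2: e.g. bad eye locations, all low
    {"A": 0.77, "C": 0.35},   # probe 3: A's fused score rises to 0.77
]
print(fuse_max(probes))  # [('A', 0.77), ('B', 0.38), ('C', 0.35)]
```

A property of this rule worth noting: a low-scoring probe can add new candidates but can never lower an existing candidate's fused score, which matches the behavior seen in the probe-by-probe results below.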
Figure 8. Nighttime example at 135-m range. In this screenshot of the full TINDERS GUI, the subject is walking while being tracked. TINDERS is tracking his head (orange box). At the same time, the face is detected (green box). While the subject is being tracked, TINDERS is detecting and capturing frontal face shots, automatically processing them for face recognition and displaying the cumulative identification results in the bottom window.
Figure 9. Evolution of face recognition results as 4 captured probe images are sequentially processed for face recognition. Results are for nighttime, 135-m range, while the subject was walking and being tracked by TINDERS.
Figure 9 gives more detail of how the face recognition results evolve as the four probe images are captured and processed. The left side of the figure shows a close-up of the left side of the face recognition window as it appeared following the processing of each probe image, showing the top two fused search results. The right side of the figure shows the captured images along with the automatic eye placement. Notice that of the four probe images, only the second one has accurate eye locations. Following the first probe search, the top two results are impostors, but both have low scores. After the second probe, the one with correct eye locations, is searched, the correct candidate is ranked first. After the third and fourth probe searches, the top two results are unchanged. The TINDERS GUI also has a "manual" mode in which an operator or analyst can review all of the captured probe images, manually adjust eye locations, delete poor-quality probe images, change the watch list, reprocess the face recognition search, and review more detailed matching score information. While this mode can be very useful, and can improve identification accuracy, it is difficult to work with while an active target is engaged, and it is normally used after the engagement has ended.
When tracking non-cooperative subjects at long range, there may be long periods of time between face captures, when a subject has turned away from the camera or is otherwise not presenting a suitable face image. During this time, probe images that have accumulated on the queue will be processed and matching results fused until the queue is empty or until new face images are captured. Figure 10 shows an example from 219-m range during bright daylight, where the subject was tracked as he walked back and forth toward the camera and away from the camera while automated face recognition was performed.
Figure 10. Daytime 215-m long-term tracking with automated face recognition example. (clockwise from top left) First screen shot shows subject walking and talking on cell phone just after tracking was initiated on the face. The second screen shot shows the subject still being tracked as he walks away from the camera. Captured face images from the queue continue to be searched. In the third image, the subject is again walking toward the camera, and new face images are captured, processed, and the fused matching results are updated.
To better illustrate how the automated face recognition works, two examples are presented here in detail of subjects rotating 360 degrees in place, once around and back again. Recorded video was run in "playback mode" while the TINDERS automated face capture detected video frames with near-frontal face images and submitted them for face recognition. The first example was recorded at a distance of 100 m in dark nighttime conditions. Figure 11, left, shows a TINDERS screen shot of the rotating subject after 4 probe images have been processed for face recognition, and the subject has been successfully identified. The right side of the figure shows the 4 images that were captured and processed, along with the automatically-generated eye locations. Notice that the eyes are correctly located in probes 1, 3, and 4, but incorrectly located in probe 2. Also, notice that probe 1 has a slightly angled pose and is slightly out of focus, most likely due to motion blur.
Figure 11. Nighttime 100-m rotating test subject. (left) Screen shot shows video of rotating subject while successful automated face recognition is displayed below as it appears following the processing and fusion of 4 captured face images. (right) The four captured probe images and eye locations used in the face recognition.
Figure 12. Close-up view of face recognition results following each of the 4 probe searches, along with single-probe matching scores for the rank 1 (genuine) candidate.
Figure 12 shows close-ups of the face recognition window as it appeared following the processing of each probe, with the most recent probe image and top two matching candidates. The detailed single-probe scores for the rank 1 candidate are shown at the right (this detail can be accessed on the TINDERS GUI by hovering the mouse over one of the candidate images in the face recognition window). After probe 1 was searched, the correct candidate was already rank 1, but with a low score. The second probe search yielded only low matching scores (due to incorrect eye location) and thus did not affect the fused scores of the top two candidates. The third probe search resulted in a new rank 2 candidate and increased the score of the rank 1 candidate, and the fourth probe search yielded a new rank 2 candidate and further increased the score of the rank 1 candidate, increasing the confidence level of the positive identification. A second example of automated face recognition of a rotating subject, this time recorded at night at a distance of 350-m, is shown in Figure 13.
Figure 13. Nighttime 350-m rotating test subject. (left) Screen shot shows video of rotating subject while successful automated face recognition is displayed below as it appears following the processing and fusion of 9 captured face images. (right) The nine captured probe images and eye locations used in the face recognition.
3. DISCUSSION
In this paper we reviewed the development of the TINDERS active-SWIR imaging system for covert, night/day, long-range face recognition, described the automation capabilities that would be required for fully-automated operation, and provided experimental examples that illustrate the automated capabilities that have been implemented to date. Specific examples of upper-body detection, face detection, and tracking were provided for both daytime and nighttime operation at multiple distances. Detailed examples of automated face recognition, both daytime and nighttime, at distances ranging from 100 m to 350 m were also provided, including examples where automated face recognition was performed while the TINDERS system was tracking the face of a walking test subject. The work described here represents an initial implementation of basic automation functions. Significant additional work would be required before fully-automated operation of TINDERS would be possible.
4. ACKNOWLEDGMENTS
This research was performed under contract N00014-09-C-0064 from the Office of Naval Research, with funding and oversight from the Deployable Force Protection Science and Technology Program. The authors would like to acknowledge important technical contributions from Jason Stanley, William McCormick, Ken Witt, and MorphoTrust USA, and the cooperation of the WVU Center for Identification Technology Research in some of the data collection.
REFERENCES
[1] Brian E. Lemoff, Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, William B. McCormick, and Robert V. Ice, "Long-range night/day human identification using active-SWIR imaging", Proc. SPIE 8704, Infrared Technology and Applications XXXIX, 87042J (June 18, 2013).
[2] Brian E. Lemoff, Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, William B. McCormick, and Robert V. Ice, "Automated night/day standoff detection, tracking, and identification of personnel for installation protection", Proc. SPIE 8711, Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense XII, 87110N (June 6, 2013).
[3] Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, Robert V. Ice, and Brian E. Lemoff, "Active-SWIR signatures for long-range night/day human detection and identification", Proc. SPIE 8734, Active and Passive Signatures IV, 87340J (May 23, 2013).
[4] MorphoTrust USA Face Examiner Workstation web page. http://www.morphotrust.com/IdentitySolutions/ForFederalAgencies/Officer360/Investigator360/ABIS/FaceExaminerWorkstation.aspx
[5] SightLine Applications web page. http://www.sightlineapplications.com/index.html