An Efficient Visibility Enhancement Algorithm for Road Scenes Captured by Intelligent Transportation...


IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 15, NO. 5, OCTOBER 2014 2321

    An Efficient Visibility Enhancement Algorithm for Road Scenes Captured by Intelligent Transportation Systems

    Shih-Chia Huang, Bo-Hao Chen, and Yi-Jui Cheng

    Abstract—The visibility of images of outdoor road scenes will generally become degraded when captured during inclement weather conditions. Drivers often turn on the headlights of their vehicles and streetlights are often activated, resulting in localized light sources in images capturing road scenes in these conditions. Additionally, sandstorms are also weather events that are commonly encountered when driving in some regions. In sandstorms, atmospheric sand has a propensity to irregularly absorb specific portions of a spectrum, thereby causing color-shift problems in the captured image. Traditional state-of-the-art restoration techniques are unable to effectively cope with these hazy road images that feature localized light sources or color-shift problems. In response, we present a novel and effective haze removal approach to remedy problems caused by localized light sources and color shifts, which thereby achieves superior restoration results for single hazy images. The performance of the proposed method has been proven through quantitative and qualitative evaluations. Experimental results demonstrate that the proposed haze removal technique can more effectively recover scene radiance while demanding fewer computational costs than traditional state-of-the-art haze removal techniques.

    Index Terms—Color shift, dark channel prior, localized light.

    I. INTRODUCTION

    VISIBILITY in road images can be degraded due to natural atmospheric phenomena such as haze, fog, and sandstorms. This visibility degradation is due to the absorption and scattering of light by atmospheric particles. Road image degradation can cause problems for intelligent transportation systems such as traveling vehicle data recorders and traffic surveillance systems, which must operate under a wide range of weather conditions [1]–[13]. The amount of absorption and scattering depends on the depth of the scene between a traffic camera and a scene point; therefore, scene depth information is important for recovering scene radiance in images of hazy environments.

    Manuscript received November 25, 2013; revised March 11, 2014; accepted March 20, 2014. Date of publication May 16, 2014; date of current version September 26, 2014. The work of author S.-C. Huang was supported by the National Science Council (NSC) of Taiwan under Grants NSC 100-2628-E-027-012-MY3 and NSC 102-2221-E-027-065. The Associate Editor for this paper was S. S. Nedevschi.

    The authors are with the Department of Electronic Engineering, College of Electric Engineering and Computer Science, National Taipei University of Technology, Taipei 106, Taiwan (e-mail: [email protected]).

    Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

    Digital Object Identifier 10.1109/TITS.2014.2314696

    In order to improve visibility in hazy images, haze removal techniques have been recently proposed. These can be divided into two principal classifications, i.e., the given depth [14]–[16] and unknown depth [17]–[24] approaches. Given depth approaches use additional information [14]–[16]. They rely on the assumption that the depth is given; they then use the depth information to restore hazy images. Tan and Oakley [14], Narasimhan and Nayar [15], and Kopf et al. [16] developed haze removal approaches based on the given depth information. This information is acquired from additional operations or interactions, such as applying information pertaining to the altitude, tilt, and position of the camera [14], or through manual approximation of the distance distribution of the sky area and vanishing point in a captured image [15], or through an approximate 3-D geometrical model of the captured scene [16]. However, these approaches are not suitable for haze removal in real-world applications because the depth information needs to be provided by the user, yet it is scarcely given.

    Therefore, haze removal techniques have shifted from the given-depth setting to the unknown-depth setting. Many studies have proposed the estimation of an unknown depth to recover scene radiance in hazy images. These can be divided into two major categories, i.e., multiple images [17]–[19] or a single image [20]–[24]. Schechner et al. [17] and Narasimhan and Nayar [18], [19] proposed haze removal techniques that estimate the unknown depth by using multiple images to restore a hazy image. Specifically, the method proposed by Schechner et al. [17] uses two or more images of the same scene with different polarization degrees by rotating a polarizing filter to estimate the depth of a scene and then remove haze. Narasimhan and Nayar [18], [19] presented methods that compute the scene depth from two or more images in different weather conditions, in which the scene radiance of a hazy image can be restored. However, these methods usually require either a complex computation or the use of additional hardware devices. This leads to high restoration expense.

    Because of this expense, recent research has focused on single-image restoration. Recent investigations [20]–[24] have examined the use of single images to estimate the unknown depth without using any additional information to recover scene radiance in hazy images. Tan [20] proposed a single-image haze removal approach that removes haze by maximizing the local contrast of the recovered scene radiance based on an observation that captured hazy images have lower contrast than restored images. This approach can produce a satisfactory result for haze removal in single images, but the restored results

    1524-9050 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


    feature artifact effects along depth edges. Fattal [21] proposed a haze removal technique for single images that estimates the albedo of the scene and deduces the transmission map based on an assumption that the transmission shading and the surface shading are locally uncorrelated. This technique can generate impressive results when the captured image is not heavily obscured by fog. In other words, this technique cannot contend with images featuring dense fog. Li et al. [22] describe a characteristic property in which the smaller transmission intensity values possess larger coefficients in the gradient domain, whereas the larger transmission intensity values possess smaller coefficients. Based on this property, the method of Li et al. can restore the visibility of hazy images by employing a multiscale technique in the regions containing small transmission values. However, this method usually results in excessive restoration with regard to the sky regions of the resultant image.

    He et al. [23] proposed a haze removal algorithm via the dark channel prior technique, which uses the observation that, in outdoor haze-free images, at least one color channel is composed of pixels that have low intensities within local patches, to directly estimate the amount of haze and subsequently recover scene radiance efficiently. Until now, the approach of He et al. [23] has attracted the most attention due to its ability to effectively remove haze formation while only using single images. Inspired by the dark channel prior technique [23], Xie et al. proposed an improved haze removal algorithm by employing a scheme consisting of the dark channel prior and the multiscale Retinex technique to quickly restore hazy images [24]. However, the scene radiance recovered via the dark-channel-prior-based techniques [23], [24] is usually accompanied by the generation of serious artifacts when the captured hazy road image contains localized light sources or color-shift problems due to sandstorm conditions. This can be problematic for many common road scenarios. For example, in inclement weather conditions, drivers generally turn on headlights when they are driving in order to improve visual perception, and streetlamps are lit for similar reasons. The techniques based on the dark channel prior [23], [24] cannot produce satisfactory restoration results when presented with these situations.

    Therefore, we propose a novel haze removal approach by which to avoid the generation of serious artifacts by the conjunctive utilization of the proposed hybrid dark channel prior (HDCP) module, the proposed color analysis (CA) module, and the proposed visibility recovery (VR) module. These modules are further discussed in Section III. The proposed technique can effectively conceal localized light sources and restrain the formation of color shifts when the captured road image contains localized light sources or color-shift problems. Experimental results and subsequent quantitative and qualitative evaluations demonstrate that the proposed technique can more effectively remove haze from single images captured in real-world conditions than other state-of-the-art techniques [22]–[24].

    The remainder of this paper is organized as follows. In Section II, we briefly describe the dark channel prior technique. Section III presents a detailed description of the proposed technique and its applicability for single-image haze removal and the circumvention of the previously mentioned problems. Section IV presents and contrasts the experimental results

    Fig. 1. Pictorial description of hazy image acquisition via the optical model.

    produced via the four methods for images representing a wide range of weather conditions. Section V discusses and demonstrates the improvement of traffic surveillance applications via the use of the proposed method. Finally, the conclusion is presented in Section VI.

    II. BACKGROUND

    A. Optical Model

    In computer vision and pattern analysis, the optical model is widely used to describe the digital camera information of a hazy image under realistic atmospheric conditions in the RGB color space as

    Ic(x, y) = Jc(x, y)t(x, y) + Ac(1 − t(x, y)) (1)

    where c ∈ {r, g, b}, Ic(x, y) represents the captured image, Jc(x, y) represents the scene radiance that is the ideal haze-free image, Ac represents the atmospheric light, and t(x, y) represents the transmission map describing the portion of the light that arrives at a digital camera without scattering. The first term of (1), i.e., Jc(x, y)t(x, y), represents the direct attenuation describing the decayed scene radiance in the medium. The second term of (1), i.e., Ac(1 − t(x, y)), represents the airlight that results from the scattered light and leads to the color shifting in the scene.
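
To make the optical model concrete, (1) can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the paper; the function name and array conventions are our own assumptions.

```python
import numpy as np

def apply_optical_model(J, t, A):
    """Synthesize a hazy image via (1): I = J*t + A*(1 - t).

    J: H x W x 3 scene radiance (haze-free image) in [0, 1]
    t: H x W transmission map in [0, 1]
    A: length-3 atmospheric light, one value per RGB channel
    (Illustrative sketch; names and conventions are assumptions.)
    """
    t3 = t[..., np.newaxis]  # broadcast t over the three color channels
    return J * t3 + np.asarray(A, dtype=float) * (1.0 - t3)
```

With t = 1 everywhere the camera sees the scene radiance itself; with t = 0 it sees only the airlight A.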

    Generally, the atmosphere can be assumed to be homogeneous, and the scene radiance is exponentially attenuated according to the depth of the scene. The transmission map can be expressed as

    t(x, y) = e^(−βd(x,y)) (2)

    where β is the atmospheric attenuation coefficient, and d(x, y) is the scene depth that represents the distance between an observed object and the digital camera. Fig. 1 shows the optical model that describes the hazy image information obtained by a traveling vehicle data recorder under atmospheric conditions.
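
The exponential attenuation in (2) is a one-liner; as above, this is an illustrative sketch with assumed names.

```python
import numpy as np

def transmission_from_depth(depth, beta):
    """Transmission map via (2): t = exp(-beta * d).

    depth: H x W scene depth; beta: atmospheric attenuation coefficient.
    (Illustrative sketch; names are assumptions.)
    """
    return np.exp(-beta * np.asarray(depth, dtype=float))
```

Transmission is 1 at zero depth and decays monotonically with distance, which is why distant objects are the most haze-obscured.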

    B. Haze Removal Using Dark Channel Prior

    Dark Channel Prior: The dark channel prior is a state-of-the-art image restoration technique by which to remove haze


    from a single image [23]. In order to estimate the amount of haze in an image, dark channel Jdark can be expressed as

    Jdark(x, y) = min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Jc(i, j) ) (3)

    where c ∈ {r, g, b}, J represents an arbitrary color image, Jc represents a channel of color image J, Ω represents a local patch centered at (x, y), minc∈{r,g,b} Jc(i, j) is performed as the minimum operation on Jc, and min(i,j)∈Ω(x,y) is performed as a minimum filter on the local patch centered at (x, y).

    As described in [23], dark channel Jdark has a low intensity when the outdoor image lacks haze, with the exception of the sky region. The dark channel value of a haze-free image is close to zero and can be represented by

    Jdark → 0. (4)

    In other words, if the dark channel value is larger than zero, it means that the regions exhibit haze. As such, we can estimate the amount of haze via (3).
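
The two nested minima of (3) map directly onto a channel-wise minimum followed by a spatial minimum filter. A minimal sketch, assuming SciPy's minimum_filter for the patch operation:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(J, patch_size=15):
    """Dark channel of (3): per-pixel minimum over {r, g, b}, then a
    minimum filter over a patch_size x patch_size local patch.
    (Illustrative sketch; the paper specifies only the operators.)"""
    min_rgb = J.min(axis=2)                      # min over c in {r, g, b}
    return minimum_filter(min_rgb, size=patch_size)
```

For a haze-free outdoor image this output is close to zero almost everywhere, which is exactly the prior in (4); larger values flag hazy regions.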

    Estimating the Transmission Map: In a single hazy image, these dark channel values can provide a direct and accurate estimation of haze transmission. First, the optical model in (1) is independently normalized by atmospheric light Ac in the RGB color space as

    Ic(x, y)/Ac = (Jc(x, y)/Ac) t(x, y) + 1 − t(x, y). (5)

    Then, the dark channel operation is calculated on both sides of (5) as

    min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Ic(i, j)/Ac ) = t(x, y) min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Jc(i, j)/Ac ) + 1 − t(x, y). (6)

    According to the work in [23], as J is a haze-free image, dark channel Jdark can be obtained by

    min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Jc(i, j)/Ac ) = 0. (7)

    Equation (7) can be incorporated into (6), resulting in the estimation of the transmission map as

    t(x, y) = 1 − min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Ic(i, j)/Ac ). (8)

    As described in [23], the image may appear somewhat unnatural if the haze is removed thoroughly. Therefore, a constant parameter ω (set to 0.95) is added into (8) in order to retain a portion of the haze for distant objects as

    t(x, y) = 1 − ω min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Ic(i, j)/Ac ). (9)

    Moreover, He et al. suggested an optimal patch size of the dark channel prior as 15 × 15 [23].
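
Equations (5)-(9) reduce to normalizing by the atmospheric light, taking the dark channel, and inverting. A minimal sketch of the coarse transmission estimate in (9); the function and variable names are our own.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(I, A, omega=0.95, patch_size=15):
    """Coarse transmission of (9): 1 - omega * dark_channel(I / A).
    (Illustrative sketch; names are assumptions.)"""
    normalized = I / np.asarray(A, dtype=float)   # the normalization of (5)
    dark = minimum_filter(normalized.min(axis=2), size=patch_size)
    return 1.0 - omega * dark
```

A pixel whose normalized dark channel equals 1 (pure haze) gets t = 1 − ω = 0.05, i.e., a small residual of haze is deliberately kept for a natural appearance.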

    Soft Matting: The recovered images produced by (9) may contain some block effects in a hazy image. In order to reduce these artifacts, He et al. adopted a soft matting [25] technique to refine the transmission map in (9). The matting Laplacian matrix L is given as

    L(i, j) = Σk|(i,j)∈wk ( δij − (1/|wk|) (1 + (Ii − μk)ᵀ (Σk + (ε/|wk|) U3)⁻¹ (Ij − μk)) ) (10)

    where δij is the Kronecker delta, μk is the mean matrix of the colors in window wk, Σk is the covariance matrix of the colors in window wk, U3 is an identity matrix of size 3 × 3, ε is a regularizing parameter, and |wk| is the total number of pixels in window wk.

    The refined transmission map t̄ can be obtained by the following sparse linear system:

    (L + λU)t̄ = λt (11)

    where L is the matting Laplacian matrix, U is an identity matrix of the same size as L, λ is set to 10⁻⁴, and t is the transmission map estimated in (9).
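
The matting Laplacian of (10) and the linear system of (11) can be sketched with SciPy sparse matrices as follows. This is a compact, unoptimized reconstruction of the construction in [25]; the window radius, epsilon default, and function names are our own choices, and production implementations vectorize the window loop.

```python
import numpy as np
from scipy.sparse import coo_matrix, identity
from scipy.sparse.linalg import spsolve

def matting_laplacian(I, eps=1e-7, r=1):
    """Matting Laplacian of (10), built from all full (2r+1)x(2r+1)
    windows of an H x W x 3 image I with values in [0, 1].
    (Illustrative sketch; defaults are assumptions.)"""
    h, w, _ = I.shape
    n = (2 * r + 1) ** 2
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = I[y - r:y + r + 1, x - r:x + r + 1].reshape(n, 3)
            wi = idx[y - r:y + r + 1, x - r:x + r + 1].ravel()
            mu = win.mean(axis=0)                 # mean color of the window
            d = win - mu
            cov = d.T @ d / n                     # covariance of the window
            inv = np.linalg.inv(cov + (eps / n) * np.eye(3))
            G = (1.0 + d @ inv @ d.T) / n         # n x n affinity block
            rows.append(np.repeat(wi, n))
            cols.append(np.tile(wi, n))
            vals.append((np.eye(n) - G).ravel())  # delta_ij - (...) of (10)
    L = coo_matrix((np.concatenate(vals),
                    (np.concatenate(rows), np.concatenate(cols))),
                   shape=(h * w, h * w))
    return L.tocsr()                              # duplicates are summed

def refine_transmission(I, t_coarse, lam=1e-4):
    """Solve the sparse system (L + lam*U) t = lam * t_coarse of (11)."""
    h, w = t_coarse.shape
    A = matting_laplacian(I) + lam * identity(h * w, format='csr')
    return spsolve(A, lam * t_coarse.ravel()).reshape(h, w)
```

A useful sanity check is that every row of L sums to zero, so a constant coarse transmission map passes through the refinement unchanged.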

    Recovering the Scene Radiance: Finally, a single hazy image I can be recovered as scene radiance J as

    Jc(x, y) = (Ic(x, y) − Ac) / max(t̄(x, y), t0) + Ac (12)

    where c ∈ {r, g, b}, t̄ is the refined transmission map, the value of t0 is assumed to be 0.1, and the value of atmospheric light A is the highest intensity pixel in the original input image according to its correspondence to the brightest 0.1% of pixels in the dark channel.

    III. PROPOSED METHOD

    In this section, we present an effective approach for the haze removal of single images captured during different environmental conditions that not only avoids the generation of artifact effects but also recovers true color. Our approach involves three proposed modules, i.e., an HDCP module, a CA module, and a VR module.

    Initially, the proposed HDCP module designs an effective transmission map to circumvent halo effects in the recovered image and estimates the location of the atmospheric light to avoid underexposure. In order to recover the true color of scenes featuring a wide range of weather conditions, we propose the CA module. This CA module determines the intensity statistics for the RGB color space of a captured image in order to acquire the color information. As the final step of our process, the proposed VR module recovers a high-quality haze-free image.

    A. HDCP Module

    As mentioned in the previous section, the dark channel prior technique [23] can work well for haze removal in single images that lack localized light sources. However, haze removal by the


    Fig. 2. Framework of the HDCP module.

    dark channel prior technique [23] usually results in a seriously underexposed image when the captured scene features localized light sources. The proposed HDCP module can produce a restored image that is not underexposed by using a procedure based on the dark channel prior technique [23]. The dark channel prior technique in [23] can employ a large patch size operation on the captured image in order to acquire the correct atmospheric light. However, the use of a large local patch will result in invariable transmission and thereby leads to the generation of halo effects in the recovered image.

    In contrast, when the dark channel prior technique [23] uses a small patch size, the recovered image will not exhibit halo effects. However, localized light will be misjudged as atmospheric light. Hence, we present the HDCP module that ensures correct atmospheric light estimation and the subsequent avoidance of halo effects during the haze removal of single images based on the hybrid dark channel prior technique. This technique will be introduced in the following.

    To effectively estimate the density of the haze featured by an image, we combine the advantages of small and large patch sizes via different weights. In addition, we use the large patch size to acquire the correct atmospheric light during the implementation of the hybrid dark channel prior technique. Equation (3) can be rewritten via the HDCP as

    Jdark(x, y) = (γ/(γ + φ)) min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Jc(i, j) )
                  + (φ/(γ + φ)) min(i,j)∈Ψ(x,y) ( minc∈{r,g,b} Jc(i, j) ) (13)

    where J represents an arbitrary image under a wide range of weather conditions, Jc represents a channel of color image J, Ω and Ψ represent the small and large local patches centered at (x, y), minc∈{r,g,b} Jc(i, j) performs the minimum operation on Jc, min(i,j)∈Ω(x,y) performs a minimum filter on the local patch centered at (x, y) using the small patch size, and min(i,j)∈Ψ(x,y) performs a minimum filter on the local patch centered at (x, y) using the large patch size. After calculating the haze density,

    the transmission map can be directly and accurately estimated by rewriting (8) as

    th(x, y) = 1 − [ (γ/(γ + φ)) min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Ic(i, j)/Ac )
               + (φ/(γ + φ)) min(i,j)∈Ψ(x,y) ( minc∈{r,g,b} Ic(i, j)/Ac ) ]. (14)

    In order to retain a small amount of haze for a natural appearance, a constant parameter ω is added to (14). Thus, the transmission map can be expressed as

    th(x, y) = 1 − ω [ (γ/(γ + φ)) min(i,j)∈Ω(x,y) ( minc∈{r,g,b} Ic(i, j)/Ac )
               + (φ/(γ + φ)) min(i,j)∈Ψ(x,y) ( minc∈{r,g,b} Ic(i, j)/Ac ) ] (15)

    where ω can be set to 0.95 experimentally, the optimal small patch size Ω can be set to 3 × 3 experimentally, and the optimal large patch size Ψ can be set to 45 × 45 experimentally. Moreover, γ and φ are the constant factors for the small patch size and the large patch size, respectively, by which the optimum results for single-image haze removal can be acquired. Note that the values of atmospheric light Ac are, respectively, the highest intensity pixels in each RGB channel of the original input image according to its correspondence to the brightest 0.1% of pixels in the dark channel image, as described in [23]. The general framework of the HDCP module is shown in Fig. 2.
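
A minimal sketch of the hybrid estimate in (15), blending the small-patch and large-patch dark channels with constant weights. The weight names gamma/phi are illustrative labels (the paper's Greek symbols were lost in extraction) and the default weight values are placeholders; only the patch sizes and ω = 0.95 are stated in the text.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def hybrid_transmission(I, A, gamma=1.0, phi=1.0, omega=0.95,
                        small=3, large=45):
    """Hybrid transmission sketch of (15): weighted blend of the dark
    channels computed with a small (3x3) and a large (45x45) patch.
    (Weight names/values are assumptions, not from the paper.)"""
    norm = (I / np.asarray(A, dtype=float)).min(axis=2)
    dark_small = minimum_filter(norm, size=small)
    dark_large = minimum_filter(norm, size=large)
    wsum = gamma + phi
    hybrid = (gamma / wsum) * dark_small + (phi / wsum) * dark_large
    return 1.0 - omega * hybrid
```

The small patch keeps the transmission spatially adaptive (suppressing halos), while the large patch contributes the robust haze estimate associated with correct atmospheric-light estimation.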

    B. CA Module

    The particles of sand in the atmosphere caused by sandstorms absorb specific portions of the color spectrum. This phenomenon leads to color shifts in images captured during such conditions, resulting in different color channel distributions. The dark channel prior method [23] uses the same formula for each color channel when recovering scene radiance, thereby causing serious color shifts in restored images. In order to solve


    Fig. 3. Proposed algorithm for the visibility enhancement of single images via the HDCP technique.

    this problem, we propose the CA module that is based on the gray world assumption [26]. The gray world assumption relies on the notion that average intensities should be equal in each RGB color channel for a typical image, which is described as

    Ravg = (1/MN) Σx=1..M Σy=1..N Ir(x, y)
    Gavg = (1/MN) Σx=1..M Σy=1..N Ig(x, y)
    Bavg = (1/MN) Σx=1..M Σy=1..N Ib(x, y) (16)

    where Ir(x, y), Ig(x, y), and Ib(x, y) represent the captured image in the RGB color channels, respectively, and MN represents the total number of pixels in the captured image. Based on this assumption, a color spectrum adjustment parameter ρc can be produced for each RGB color channel in order to avoid color shifts in the restored image. This can be measured as

    ρc = [(1/MN) Σx=1..M Σy=1..N Ir(x, y)] / [(1/MN) Σx=1..M Σy=1..N Ic(x, y)]
       = [Σx=1..M Σy=1..N Ir(x, y)] / [Σx=1..M Σy=1..N Ic(x, y)]. (17)

    C. VR Module

    In order to produce a high-quality haze-free image captured in different environments, we combine the information provided via the HDCP and CA modules to effectively recover the scene radiance. Equation (12) can be rewritten as

    Jc(x, y) = ρc (Ic(x, y) − Ac) / max(th(x, y), t0) + Ac + ηc (ρc − 1) (18)

    where c ∈ {r, g, b}, Jc(x, y) represents the scene radiance, Ic(x, y) represents the image captured under different conditions, Ac represents the atmospheric light, th(x, y) represents the transmission map using the HDCP module, t0 is assumed to have a typical value of 0.1, and ρ and η represent the adjustment parameters. Moreover, specific portions of the color spectrum can be irregularly absorbed by atmospheric particles under different weather conditions. Thus, we employ parameter η to adjust the atmospheric variables. First, the intensity statistics of the RGB color channel of the captured image can be calculated for the acquisition of color information via the probability mass function (PMF), which is described as

    PMF(Ick) = nck / MN, for k = 0, 1, ..., L (19)

    where c ∈ {r, g, b}, nck denotes the total number of pixels that have intensity Ick, MN denotes the total number of pixels for the captured image, and a constant factor L is set equal to the maximum intensity value of a pixel. Ultimately, parameter ηc can be produced by using this color information, as expressed in (20).

    As mentioned, the framework of our proposed algorithm is shown in Fig. 3.
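
A sketch of the recovery step in (18). The two adjustment-parameter symbols in the text are reconstructions (the extraction dropped the paper's Greek letters), and the argument names rho and eta here mirror that reconstruction rather than the paper's exact notation.

```python
import numpy as np

def visibility_recovery(I, t_h, A, rho, eta, t0=0.1):
    """VR-module recovery sketch of (18), per channel c:
    J_c = rho_c*(I_c - A_c)/max(t_h, t0) + A_c + eta_c*(rho_c - 1).
    (Illustrative sketch; parameter names are assumptions.)"""
    t_clamped = np.maximum(t_h, t0)[..., np.newaxis]
    rho = np.asarray(rho, dtype=float)
    eta = np.asarray(eta, dtype=float)
    A = np.asarray(A, dtype=float)
    return rho * (I - A) / t_clamped + A + eta * (rho - 1.0)
```

With all rho set to 1 (no color shift detected) the last term vanishes and (18) degenerates to the plain recovery of (12).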

    IV. EXPERIMENTAL RESULTS

    The objective of this section is to demonstrate via qualitative and quantitative evaluations the advantage of our HDCP method in comparison with other state-of-the-art methods, including the method of Li et al. [22], the method of He et al. [23], and the method of Xie et al. [24], for haze removal in single images captured by traveling vehicle data recorders in realistic conditions.

    Here, we supply two video sequences, which are called Street and Highway, to test the efficacy of each method. Fig. 4 shows the Street video sequence that features a road


    Fig. 4. Restoration results using the method of He et al. [23], the method of Xie et al. [24], and the proposed method for the Street sequence.

    along which the headlights of vehicles and streetlights are turned on due to foggy conditions. Fig. 5 shows the Highway video sequence, which shows an area where many vehicles are passing along a highway during a sandstorm.

    In order to prove that the proposed method is an effective image restoration technique for images captured in a wide range of weather conditions, its efficacy will be analyzed according to the following four situations: (a) visual assessment for localized light sources; (b) visual assessment for color-shift problems; (c) quantitative evaluation; and (d) performance results.

    Part (a) describes the haze removal and recovery of the scene radiance for the captured Street video sequence containing localized light sources. Part (b) discusses the haze removal for the Highway video sequence, which was captured during sandstorm conditions and features subsequent color-shift problems. Part (c) provides the quantitative evaluation

    \[
    \alpha_r = \arg\max_{0 \le k \le L-1} \operatorname{PMF}\left(I_r^k\right)
    \]
    \[
    \alpha_g = \frac{\arg\max_{0 \le k \le L-1} \operatorname{PMF}\left(I_r^k\right) + \arg\max_{0 \le k \le L-1} \operatorname{PMF}\left(I_g^k\right)}{2}
    \]
    \[
    \alpha_b = \frac{\arg\max_{0 \le k \le L-1} \operatorname{PMF}\left(I_g^k\right) + \arg\max_{0 \le k \le L-1} \operatorname{PMF}\left(I_b^k\right)}{2} \tag{20}
    \]
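The computation in (20) can be sketched in a few lines. This is an illustrative implementation only, assuming the reconstructed notation α_r, α_g, α_b, an 8-bit image, and a PMF obtained from a per-channel histogram; the function name and defaults are not from the paper:

```python
import numpy as np

def color_adjustment_params(img, L=256):
    """Sketch of (20): per-channel adjustment parameters from the modes
    of each channel's probability mass function (PMF).
    `img` is an H x W x 3 uint8 array in R, G, B order."""
    modes = []
    for c in range(3):
        hist = np.bincount(img[..., c].ravel(), minlength=L)
        pmf = hist / img[..., c].size       # PMF(I_c^k) = n_c^k / (M*N)
        modes.append(int(np.argmax(pmf)))   # arg max over 0 <= k <= L-1
    m_r, m_g, m_b = modes
    alpha_r = m_r                           # red channel: its own mode
    alpha_g = (m_r + m_g) / 2               # green: average with red mode
    alpha_b = (m_g + m_b) / 2               # blue: average with green mode
    return alpha_r, alpha_g, alpha_b
```

On a frame whose channel histograms peak at 100, 50, and 200, this yields (100, 75.0, 125.0), mirroring the averaging structure of (20).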


    Fig. 5. Restoration results using the method of He et al. [23], the method of Xie et al. [24], the method of Li et al. [22], and the proposed method for the Highway sequence.

    of haze removal by using these two video sequences and the representative Foggy Road Image DAtabase (FRIDA) [27]. Part (d) details the processing speeds of the proposed method and the dark-channel-prior-based methods.

    A. Visual Assessment for Localized Light Sources

    In this section, we compare our HDCP technique with the dark-channel-prior-based techniques [23], [24] and demonstrate that the proposed method can work well for single hazy images with localized light sources.

    As can be observed in Fig. 4, the localized light sources are brighter than the atmospheric light in the captured frames, and the haze removal performed via each method was measured by visual evaluation. It is obvious from the results that the previous dark-channel-prior-based techniques [23], [24] produce underexposed recovered scene radiance in frames 218, 3543, and 7710. This is because the localized light sources in those frames are misjudged as atmospheric light.

    In contrast, our HDCP technique can effectively conceal the localized light sources and thus can accurately estimate the position of the atmospheric light, as shown in frames 218, 3543, and 7710. Hence, the proposed approach can avoid the generation of artifact effects.

    B. Visual Assessment for the Color-Shift Problems

    The experimental results in this section confirm that the proposed approach can more effectively recover the scene radiance of single images captured during sandstorm conditions


    than the other state-of-the-art methods [22]–[24]. As can be seen in frames 260, 503, and 729 in Fig. 5, a yellowing hue pervades, and all frames exhibit color-shift problems. This is because the atmospheric particles caused by sandstorm conditions absorb certain portions of the color spectrum, thereby causing discrepancies in the distributions of each RGB color channel.

    As can also be observed in Fig. 5, the haze removal performed via each method is measured by visual evaluation. It is apparent that the recovered sandstorm frames produced by the dark-channel-prior-based techniques [23], [24] still feature serious color-shift problems, as shown in frames 260, 503, and 729. This is because the dark-channel-prior-based techniques [23], [24] assume that the color spectrum channels are all equally absorbed by the atmospheric particles. In addition, the method of Li et al. [22] not only contaminates the brightness of the sandstorm images but also generates serious artifacts in the recovered frames, as shown in their restoration results in Fig. 5. This is because the method proposed by Li et al. [22] is based on a multiscale technique that only considers restoring the contrast of the sandstorm frames and thus cannot effectively recover vivid color.

    In contrast, the proposed approach can effectively recover a haze-free frame captured under sandstorm conditions. This is due to its ability to solve color-shift problems by adjusting the value of each color channel based on the proposed CA module, as shown in frames 260, 503, and 729 in Fig. 5.

    C. Quantitative Evaluation

    Quantifying the results of image restoration is not an easy task because a standard real-world reference image for quantifying restored perception has not been validated. In general, the major categories of objective metrics are nonreference and reference methods [28].

    Due to the unavailability of a real-world haze-free reference image by which to compare the efficacy of the proposed method with that of the other state-of-the-art methods, this paper first employs three well-known quantitative metrics [29], which are e, r̄, and σ, for the nonreference method. In addition, FRIDA supplies the standard synthetic reference images for the reference methods. This study also employed the ground-truth images of the synthetic images for assessment via the peak signal-to-noise ratio (PSNR) and mean difference metrics [5] between the ground-truth images and the restored images.
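The reference metrics can be reproduced in a few lines. The PSNR formula below is the standard one; the exact mean-difference definition used in [5] is not restated here, so the mean absolute difference is assumed as an illustrative stand-in:

```python
import numpy as np

def psnr(ref, restored, peak=255.0):
    """Peak signal-to-noise ratio between a ground-truth image and a
    restored image; higher values indicate better restoration."""
    mse = np.mean((ref.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mean_difference(ref, restored):
    """Mean absolute pixel difference (assumed definition); lower values
    indicate better restoration."""
    return float(np.mean(np.abs(ref.astype(np.float64)
                                - restored.astype(np.float64))))
```

For example, a restored image that is uniformly off by 10 gray levels from its ground truth has a mean difference of 10.0 and a PSNR of about 28.13 dB.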

    For the nonreference method, the e metric detects the rate of newly visible edges in the restored image. This can be expressed as

    \[
    e = \frac{n_r - n_o}{n_o} \tag{21}
    \]

    TABLE I
    AVERAGE RESTORATION VALUES OF THE COMPARED METHODS ATTAINED BY e, r̄, AND σ IN A WIDE RANGE OF WEATHER CONDITIONS

    where n_r and n_o are the numbers of visible edges in the restored haze-free image and the original hazy image, respectively. Next, the r̄ metric supplies the geometric mean of the ratios of the gradient norms after and before restoration in relation to the total number of visible edges within the restored haze-free image. This can be represented as

    \[
    \bar{r} = \exp\left[\frac{1}{n_r} \sum_{P_i \in \wp_r} \log r_i\right] \tag{22}
    \]

    where P_i is the i-th element of the corresponding set ℘_r, and ℘_r represents the set of visible edges in the restored haze-free image. Note that r_i is the i-th ratio of the gradients between the restored haze-free image and the original hazy image. Moreover, the σ metric estimates the number of pixels that might be either saturated as white or smeared as black in the restored haze-free image. This can be expressed as

    \[
    \sigma = \frac{n_s}{\dim_x \times \dim_y} \tag{23}
    \]

    where n_s represents the total number of both the saturated pixels and the smeared pixels, and dim_x × dim_y represents the hazy image resolution.
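The three non-reference metrics can be sketched as follows. Note that [29] defines visible edges through a dedicated visibility-level segmentation; a simple gradient-magnitude threshold is substituted here as an assumption, so the numbers will differ from a faithful implementation of [29]:

```python
import numpy as np

def descriptor_metrics(hazy, restored, sat_lo=0, sat_hi=255):
    """Hedged sketch of the e, r-bar, and sigma metrics of [29], using a
    crude gradient threshold in place of the paper's visible-edge
    segmentation. Inputs are 2-D grayscale arrays."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(np.float64))
        return np.hypot(gx, gy)
    g_o, g_r = grad_mag(hazy), grad_mag(restored)
    thresh = g_o.mean() + g_o.std()                 # assumed visibility cut
    vis_o, vis_r = g_o > thresh, g_r > thresh
    n_o, n_r = int(vis_o.sum()), int(vis_r.sum())
    e = (n_r - n_o) / max(n_o, 1)                   # (21): new visible edges
    if n_r:
        ratios = g_r[vis_r] / np.maximum(g_o[vis_r], 1e-6)
        r_bar = np.exp(np.mean(np.log(np.maximum(ratios, 1e-6))))  # (22)
    else:
        r_bar = 0.0
    n_s = int(np.sum((restored <= sat_lo) | (restored >= sat_hi)))
    sigma = n_s / restored.size                     # (23): saturated/smeared
    return e, r_bar, sigma
```

As a sanity check, comparing any image against itself gives e = 0, r̄ = 1, and (for an image with no saturated pixels) σ = 0.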

    Table I offers the average restoration rates produced by using the e, r̄, and σ metrics in each video sequence for the proposed method and the compared methods. It should be noted that higher values produced by the e and r̄ metrics indicate superior restoration rates, whereas a higher value produced by the σ metric indicates inferior restoration rates. As shown in Table I, the results of the comparison of the average restoration rates clearly indicate that the visibility restoration performance of the proposed method was significantly superior to the performance of previous state-of-the-art methods [22]–[24] under a wide range of weather conditions.

    For the reference method, we supplied a comparison via the PSNR and mean difference metrics between the ground-truth images and the images restored by the method of He et al., the method of Xie et al., and the method of Li et al. Higher values of the PSNR metric indicate better restoration, whereas lower values of the mean difference metric indicate better restoration. Fig. 6 shows three images of FRIDA that consist of scenes captured along a road. It is apparent from the quantitative results that the proposed method is capable of not only effectively restoring synthetic road images but also attaining a substantially higher degree of efficacy in comparison with the other state-of-the-art methods [22]–[24].


    Fig. 6. Comparison of the restoration efficacy of each compared approach via the reference method using FRIDA.

    D. Performance Results

    Table II details the overall processing speeds achieved by the dark-channel-prior-based methods [23], [24] and the proposed method for each image resolution, where we implemented each compared method for x86-64 with 128-bit SSE2/SSE3 extensions. The sources are written in C/C++ with single instruction multiple data (SIMD) assembly, compiled with GCC 4.2.4, and run on an Intel Xeon E5520 processor with 32 GB of main memory under the Windows Server 2008 operating system.

    According to Table II, which presents the overall processing speeds, the proposed method was up to 3.6654 times faster than the method of He et al. [23]. This is because the method of He et al. employs the soft matting technique [25] to refine the large patch size in the transmission map, which inevitably causes an enormous computational burden. Moreover, the proposed method was up to 1.1717 times faster than the method of Xie et al. [24]. This is due to that method's utilization of the multiscale Retinex technique with a Gaussian surround function, which also results in a huge computational burden when restoring hazy images.

    TABLE II
    PROCESSING SPEEDS (IN FRAMES PER SECOND) OF THE COMPARED METHODS


    Fig. 7. Improvement of traffic surveillance applications for a video sequence captured in a haze-filled environment.

    In contrast, the proposed method uses a small patch size instead of the soft matting and multiscale Retinex techniques to restore a hazy image without the generation of halo effects. The result is dramatically superior performance by the proposed method in comparison with the other dark-channel-prior-based methods.
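Throughput figures like those in Table II can be gathered with a minimal timing harness. This is a generic sketch, not the authors' benchmark code; `restore_fn` stands for any per-frame restoration routine:

```python
import time

def measure_fps(restore_fn, frames, warmup=3):
    """Average frames-per-second of `restore_fn` over a list of frames.
    A few warmup calls are made first so caches and lazy initialization
    do not distort the measurement."""
    for f in frames[:warmup]:
        restore_fn(f)
    start = time.perf_counter()
    for f in frames:
        restore_fn(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

Running `measure_fps` once per method and per resolution yields a table of the same shape as Table II; the speedup ratio between two methods is simply the ratio of their FPS values.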

    V. APPLICATIONS

    This section demonstrates the improvement of traffic surveillance applications via the use of the proposed method through quantitative and qualitative evaluations of intelligent transportation systems. Currently, one factor of intelligent transportation systems that is critical in supporting traffic management tasks is the ability to extract information about moving objects within scenes captured by traffic surveillance systems. As such, traffic surveillance systems are important components of intelligent transportation systems [1]–[5]. Moreover, automated motion detection is the first essential process in the development of traffic surveillance systems and is also crucial in accomplishing tasks such as vehicle classification, vehicle recognition, vehicle tracking, collision avoidance, and so on [6]–[11].

    A state-of-the-art approach for motion detection was proposed in [30]. However, the use of this approach often results in the incomplete detection of the extracted shapes of moving objects when the traffic surveillance camera is contaminated by atmospheric particles during inclement weather conditions, as shown in Fig. 7. This is because the background information gathered by the cerebellar model articulation controller network is insufficient for moving object detection in a haze-filled environment.

    Due to the incorporation of the proposed method into this state-of-the-art approach for traffic surveillance, the accuracy rates obtained via Similarity and F1 for the frames restored by using the proposed method are up to 41.13% and 44.92% higher, respectively, than those produced using the original frames. Therefore, our findings prove that the proposed method can be effectively applied to traffic surveillance systems, which are commonly operated under challenging weather conditions.
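Assuming the usual definitions from the motion-detection literature (Similarity as the Jaccard index and F1 as the harmonic mean of precision and recall — the paper does not restate them), the two scores can be computed from binary masks as:

```python
import numpy as np

def detection_scores(detected, ground_truth):
    """Similarity (Jaccard index) and F1 score for binary motion masks,
    using the definitions commonly assumed in motion-detection work."""
    d = detected.astype(bool)
    g = ground_truth.astype(bool)
    tp = int(np.sum(d & g))    # correctly detected foreground pixels
    fp = int(np.sum(d & ~g))   # false detections
    fn = int(np.sum(~d & g))   # missed foreground pixels
    similarity = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return similarity, f1
```

Comparing the scores of masks extracted from restored frames against those from the original hazy frames gives relative improvements of the kind reported above.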

    VI. CONCLUSION

    In this paper, we have proposed a novel approach based on our HDCP technique for haze removal in single images


    captured under a wide range of weather conditions. First, the proposed HDCP module efficiently conceals localized light sources and, consequently, accurately estimates the position of the atmospheric light. In addition, our HDCP module can provide effective transmission map estimation and thereby avoids the production of artifact effects in the restored image. In the second stage, the proposed CA module uses the gray world assumption to effectively obtain the color information of the captured image and thereby circumvents the color-shift problems in the restored image. In the final stage, the VR module combines the information obtained by the HDCP and CA modules to avoid the generation of serious artifact effects and thus obtains a high-quality haze-free image regardless of weather conditions. The experimental results demonstrate that the proposed technique produces a satisfactory restored image, as measured by the quantitative and qualitative evaluations of realistic scenes, while demanding less computational cost. Moreover, the proposed technique is significantly superior to other state-of-the-art methods. To the best of our knowledge, this is the first study that presents an effective haze removal approach that is applicable in a wide range of weather conditions.

    REFERENCES

    [1] N. K. Kanhere and S. T. Birchfield, "A taxonomy and analysis of camera calibration methods for traffic monitoring applications," IEEE Trans. Intell. Transp. Syst., vol. 11, no. 2, pp. 441–452, Jun. 2010.
    [2] J.-P. Tarel, N. Hautière, L. Caraffa, A. Cord, H. Halmaoui, and D. Gruyer, "Vision enhancement in homogeneous and heterogeneous fog," IEEE Intell. Transp. Syst. Mag., vol. 4, no. 2, pp. 6–20, Summer 2012.
    [3] N. Buch, S. A. Velastin, and J. Orwell, "A review of computer vision techniques for the analysis of urban traffic," IEEE Trans. Intell. Transp. Syst., vol. 12, no. 3, pp. 920–939, Sep. 2011.
    [4] L. Caraffa and J.-P. Tarel, "Markov random field model for single image defogging," in Proc. IEEE Intell. Veh. Symp., Jun. 2013, pp. 994–999.
    [5] J.-P. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image," in Proc. IEEE Int. Conf. Comput. Vis., Sep. 2009, pp. 2201–2208.
    [6] S. C. Huang, "An advanced motion detection algorithm with video quality analysis for video surveillance systems," IEEE Trans. Circuits Syst. Video Technol., vol. 21, no. 1, pp. 1–14, Jan. 2011.
    [7] F. C. Cheng, S. C. Huang, and S. J. Ruan, "Scene analysis for object detection in advanced surveillance systems using Laplacian distribution model," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 41, no. 5, pp. 589–598, Sep. 2011.
    [8] S. C. Huang and B. H. Chen, "Automatic moving object extraction through a real world variable-bandwidth network for traffic monitoring systems," IEEE Trans. Ind. Electron., vol. 61, no. 4, pp. 2099–2112, Apr. 2014.
    [9] F. C. Cheng and S. J. Ruan, "Accurate motion detection using a self-adaptive background matching framework," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 2, pp. 671–679, Jun. 2011.
    [10] P. Foucher, P. Charbonnier, and H. Kebbous, "Evaluation of a road sign pre-detection system by image analysis," in Proc. Int. Conf. Comput. Vis. Theory Appl., 2009, pp. 362–367.
    [11] D. J. Dailey, F. W. Cathey, and S. Pumrin, "An algorithm to estimate mean traffic speed using uncalibrated cameras," IEEE Trans. Intell. Transp. Syst., vol. 1, no. 2, pp. 98–107, Jun. 2000.
    [12] W. J. Wang, B. H. Chen, and S. C. Huang, "A novel visibility restoration algorithm for single hazy images," in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 2013, pp. 847–851.
    [13] Y. J. Cheng, B. H. Chen, S. C. Huang, S. Y. Kuo, A. Kopylov, O. Seredin, Y. Vizilter, L. Mestetskiy, B. Vishnyakov, O. Vygolov, C. R. Lian, and C. T. Wu, "Visibility enhancement of single hazy images using hybrid dark channel prior," in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 2013, pp. 3627–3632.
    [14] K. Tan and J. P. Oakley, "Enhancement of color images in poor visibility conditions," in Proc. ICIP, Sep. 2000, vol. 2, pp. 788–791.
    [15] S. G. Narasimhan and S. K. Nayar, "Interactive (De)weathering of an image using physical models," in Proc. ICCV Workshop Color Photometr. Methods Comput. Vis., Oct. 2003, pp. 1387–1394.
    [16] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, "Deep photo: Model-based photograph enhancement and viewing," ACM Trans. Graphics, vol. 27, no. 5, pp. 116:1–116:10, Dec. 2008.
    [17] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Polarization based vision through haze," Appl. Opt., vol. 42, no. 3, pp. 511–525, Jan. 2003.
    [18] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713–724, Jun. 2003.
    [19] S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proc. 7th IEEE Int. Conf. Comput. Vis., Jun. 1999, vol. 2, pp. 820–827.
    [20] R. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1–8.
    [21] R. Fattal, "Single image dehazing," in Proc. ACM SIGGRAPH, 2008, p. 72.
    [22] W. J. Li, B. Gu, J. T. Huang, S. Y. Wang, and M. H. Wang, "Single image visibility enhancement in gradient domain," IET Image Process., vol. 6, no. 5, pp. 589–595, Jul. 2012.
    [23] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, Dec. 2011.
    [24] B. Xie, F. Guo, and Z. Cai, "Improved single image dehazing using dark channel prior and multi-scale Retinex," in Proc. Int. Conf. Intell. Syst. Des. Eng. Appl., Oct. 2010, pp. 848–851.
    [25] A. Levin, D. Lischinski, and Y. Weiss, "A closed form solution to natural image matting," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2006, vol. 1, pp. 61–68.
    [26] A. C. Hurlbert, "Formal connections between lightness algorithms," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 3, no. 10, pp. 1684–1693, Oct. 1986.
    [27] Foggy Road Image DAtabase (FRIDA). [Online]. Available: http://www.lcpc.fr/english/products/image-databases/article/frida-foggy-road-image-database
    [28] S. C. Huang, F. C. Cheng, and Y. S. Chiu, "Efficient contrast enhancement using adaptive gamma correction with weighting distribution," IEEE Trans. Image Process., vol. 22, no. 3, pp. 1032–1041, Mar. 2013.
    [29] N. Hautière, J.-P. Tarel, D. Aubert, and E. Dumont, "Blind contrast enhancement assessment by gradient ratioing at visible edges," Image Anal. Stereol., vol. 27, no. 2, pp. 87–95, 2008.
    [30] S. C. Huang and B. H. Chen, "Highly accurate moving object detection in variable-bit-rate video-based traffic monitoring systems," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 12, pp. 1920–1931, Dec. 2013.

    Shih-Chia Huang received the Ph.D. degree in electrical engineering from National Taiwan University, Taipei, Taiwan, in 2009.

    He is an Associate Professor with the Department of Electronic Engineering, College of Electrical Engineering and Computer Science, National Taipei University of Technology, and an International Adjunct Professor with the Faculty of Business and Information Technology, University of Ontario Institute of Technology, Oshawa, ON, Canada. He is the author or coauthor of more than 40 journal and conference papers, and he holds more than 30 patents in the United States, Europe, Taiwan, and China. His research interests include image and video coding, wireless video transmission, video surveillance, error resilience and concealment techniques, digital signal processing, cloud computing, mobile applications and systems, embedded processor design, and embedded software and hardware codesign.

    Dr. Huang received the Kwoh-Ting Li Young Researcher Award in 2011from the Taipei Chapter of the Association for Computing Machinery and theDr. Shechtman Young Researcher Award in 2012 from the National TaipeiUniversity of Technology. He is an Associate Editor of Journal of ArtificialIntelligence.


    Bo-Hao Chen received the B.S. degree from National Taipei University of Technology, Taipei, Taiwan, in 2011. He is currently working toward the Ph.D. degree in the Department of Electronic Engineering, National Taipei University of Technology.

    His research interests include digital image processing and video coding, particularly moving object detection, contrast enhancement, and haze removal.

    Yi-Jui Cheng received the B.S. degree from National Chin-Yi University of Technology, Taichung, Taiwan, in 2011 and the M.S. degree from National Taipei University of Technology, Taipei, Taiwan, in 2013.

    He is with the Department of Electronic Engineering, College of Electrical Engineering and Computer Science, National Taipei University of Technology, Taipei, Taiwan. His research interests include digital image processing, particularly contrast enhancement, depth generation, and haze removal.
