Recognition of road markings from street-level panoramic images for automated map generation.




Disclaimer

The Department of Electrical Engineering of the Eindhoven University of Technology accepts no responsibility for the contents of M.Sc. theses or practical training reports.

Department of Electrical Engineering

Den Dolech 2, 5612 AZ Eindhoven P.O. Box 513, 5600 MB Eindhoven The Netherlands http://w3.ele.tue.nl/nl/

Series title: Master graduation paper, Electrical Engineering

Commissioned by Professor: Prof. dr. ir. P.H.N. de With

Group / Chair: SPS

Date of final presentation: May 22, 2015

Report number:

Author: T. Woudsma

Internal supervisors: Prof. dr. ir. P.H.N. de With, Ir. L. Hazelhoff

Title: Recognition of road markings from street-level panoramic images for automated map generation


Recognition of road markings from street-level panoramic images for automated map generation

Thomas Woudsma

Department of Electrical Engineering

Eindhoven University of Technology, The Netherlands

Cyclomedia Technology B.V., The Netherlands

Email: [email protected]

Abstract—Road-marking maps created from the automated recognition of road markings in images can be used for the automated inspection of markings, by autonomous vehicles, or in navigation systems. This paper presents a road-marking recognition pipeline operating on street-level panoramic images. First, all individual images in a geographical region of interest are processed. The single-image marking recognition stage consists of Inverse Perspective Mapping, segmentation, contour classification, context inference and marking model evaluation. Second, single-image detections are merged in the multi-view positioning stage, which uses connectivity-based clustering. The single-image stage detects 88%-97% of the pedestrian crossings, block, give-way and stripe markings in a city environment, with ground-truth deviations below 0.5 m. Context inference significantly improves both the detection performance and the positioning accuracy. On a large dataset of 84,387 images, the full processing pipeline achieves detection rates of 85%, 92% and 80% for crosswalks, block and give-way markings, respectively, with a positioning error smaller than 0.6 m. This shows that the presented system performs sufficiently well for generating road-marking maps. Closer analysis of missed detections reveals that the most common causes are marking damage and a large capture distance.

I. INTRODUCTION

Road markings provide extensive information on traffic situations and are therefore vital for traffic safety. Amongst others, these include lane markings, give-way triangles, pedestrian crossings, stop lines and arrows. Databases with the position and type of the road markings can be used for various applications. For instance, this data can be supplied to a navigation system to alert the driver of upcoming traffic hazards, or to allow for more detailed route generation, e.g. truck drivers can set up a route that avoids pedestrian crossings. The databases can also be used for automatic quality monitoring, thereby strongly reducing the need for manual quality supervision. Additionally, marking situations can be checked for safety analysis, such as markings at priority situations.

Currently, marking inspection is performed by manually inspecting the markings on the road. Typically, such inspections are performed reactively, e.g. after complaints of road users or accidents. Road-marking recognition systems can help to automate this inspection. These systems often use images (street-level or aerial) to detect and position road markings, applying dedicated image analysis algorithms to recognize the markings in these images. In this paper, the focus is on marking recognition from street-level images, specifically the street-level panoramic images created by Cyclomedia B.V. in the Netherlands, which are captured annually on all public roads.

Fig. 1. Three examples of road markings that are difficult to detect: (a) abrasion, (b) occlusion, (c) shadows.


Detection and recognition of road markings from images involves several challenges. There are numerous factors making the recognition difficult, such as occlusions (e.g. by other vehicles), shadows cast by surrounding objects, varying weather conditions (affecting lighting) and marking deterioration due to abrasion by vehicles. Figure 1 shows examples of three common difficult detection situations (abrasion, occlusion and shadows on markings). Even if the detection succeeds in overcoming these challenges, recognition of the specific marking type can still be very complicated, because most recognition algorithms rely heavily on accurate shape extraction.

Further analysis of the most common road markings shows that these markings occur in specific periodic patterns (e.g. dashed lane markings occur at regular intervals) and in groups (e.g. block markings and arrows in an exit lane). Where individual recognition of (damaged) markings may be complicated, modeling high-level context information and placement patterns can help to improve the detection rate and to recognize road markings that are, for instance, partially damaged. In addition, images are often taken at regular intervals, providing redundant marking information: the same markings are captured from different viewpoints, which can improve detection rates when a specific marking is occluded in some images (but not in all).

For roads in the Netherlands, which are solely considered here, the marking design standards are managed by the CROW [1]. They divide road markings into three categories:


(1) parallel markings (parallel with the driving direction), (2) perpendicular markings (perpendicular to the driving direction) and (3) symbol markings. Markings in the first category, which are the most common, include e.g. lane markings (continuous and dashed) and block markings. The second category includes e.g. markings of pedestrian and bike crossings, give-way triangles ("shark teeth") and stop lines. The last category consists of a wide variety of symbols, such as arrows, speed numbers, words and bike symbols, which do not occur in periodic patterns.

This research focuses on the automated generation of road-marking databases (type, position and orientation) in a geographical region of interest. This implies the automated detection, recognition and positioning of road markings. Our research builds upon previous work by Li et al. [2], which has mainly concentrated on the recognition of road markings on highways. This work extends that prior research to road markings in city and rural environments in several ways. Next to adding support for different road types, this work also contains significant algorithmic alterations and improvements, of which the most important contributions are (1) context inference using probabilistic modeling, (2) evaluation of marking placement models to identify marking clusters, and (3) multi-view positioning of markings to find real-world positions and generate marking maps.

This work resulted in a generic road-marking recognition pipeline, which can be applied to the recognition of a wide variety of markings (e.g. crosswalks, give-way triangles and block markings). Furthermore, this system complements an existing traffic-sign recognition system [3], by providing both redundant (i.e. several situations consist of both road markings and traffic signs) and complementary information (i.e. some situations are indicated by only signs or only markings). Overall, this results in a more complete overview of high-quality driver signaling and traffic situations. Before we present our approach, some related work is discussed.

II. RELATED WORK

Commonly, road-marking recognition is developed for Advanced Driver Assistance Systems (ADAS) or for autonomous driving vehicles [4], using car-mounted cameras, which are sometimes combined with LIDAR systems. As the main goal of these ADAS systems is to help drivers stay in their lane, a significant portion of the related work focuses on lane detection [5][6]. As described by a survey paper [7], these systems commonly follow three principal steps: (1) pre-processing to remove noise and other unwanted image data, (2) feature extraction to find relevant parts such as edges, and (3) model fitting to verify the detected markings and to remove false positives.

Recognition of several marking types is e.g. performed by Foucher et al. [8], who present a system for the recognition of pedestrian crossings and arrows. After segmentation, the authors identify crossings based on the mutual relations between the connected components from the segmentation mask, where crosswalks are identified if the segments meet certain conditions. To recognize arrows, the connected components are compared to 63 models of arrows. Experimental results on a dataset containing 165 crosswalks and 151 arrows show a true positive rate of 90% and 78%, respectively.

Li et al. [9] follow a similar approach for the recognition of crosswalks, stop lines and lane markings. After filtering the segmentation result with directed morphological operations, all connected components are analyzed, and the target markings are identified based on angular orientation and blob dimensions. Although no quantitative results are available, qualitative recognition results on urban images are presented.

Qin et al. [10] present a general framework for road-marking detection and analysis. After Inverse Perspective Mapping (IPM), segmentation and contour extraction, the framework is split into different modules for specific markings (lanes, arrows, crosswalks and words). Every module includes a Support Vector Machine (SVM), which is trained for the classification of each marking, using geometric features such as Hu moments. Experiments show precision rates above 90%, though problems occur with the recognition of worn and shadow-covered markings.

Previous work in [2] describes a recognition pipeline that can accurately detect and recognize lane, stripe, block and arrow markings. From the IPM image, a marking segmentation is obtained with a local threshold. Then each connected component is translated to its centroid, scaled to fit within a unity interval and rotated to align its primary axis, making it invariant to these three transformations. Next, the four different marking types are classified by SVMs trained on the specific types, using shape features: the distances from the shape centroid to its contour at regular angular intervals. Lanes are then modeled using RANSAC for straight lanes and Catmull-Rom splines for curved lanes. On a dataset of 910 highway panoramic street-level images, the algorithm achieved precision and recall metrics of over 90%. This work is used as a starting point for our contributions, for which the approach is discussed in the next section.

III. APPROACH

Most related research is focused on in-car use, where images are captured with regular (non-panoramic) video cameras. These systems aim at the recognition of specific markings that are important for driver assistance and autonomous vehicles, such as lane markings and stop lines, and do not generate databases of the recognized markings with type, global position and orientation.

In this research, we use the same basic processing steps (IPM, segmentation, contour classification, model evaluation) as commonly applied in literature. Compared to related work, this pipeline is extended with two novel major processing stages. First, contextual inference is added to incorporate information about the neighboring marking elements, thereby clearly improving the detection performance. Second, an accurate positioning stage is added, which uses recognized markings from several images to determine the real-world coordinates. The proposed system should satisfy the following requirements:



(a) Street-level panoramic image from Cyclomedia (Cyclorama) as used in the experiments. The horizontal axis corresponds to the horizontal angle (azimuth) around the camera. The vertical axis corresponds to the vertical angle (altitude), where φ_vertical = 0 indicates the horizon.

(b) IPM of street-level panoramic image.

Fig. 2. Example street-level panoramic image with its IPM image.

1) follow a semi-supervised, generic, learning-based approach to recognize a variety of markings,

2) apply context inference to exploit contextual relations between neighboring elements,

3) apply road-marking models in a generic framework to retrieve marking clusters,

4) extract global marking positions of the identified clusters, allowing for the generation of road-marking maps.

These requirements result in a system capable of recognizing multiple marking types: pedestrian crossings, give-way and block lines, and stripes. We have selected these types as they are the most common at intersections and convey very important information for road safety. It should be noted that recognition of lane markings is covered in [2].

This system will be evaluated at two different levels: (1) single-image marking recognition, and (2) multi-view marking positioning. The first experiment assesses the performance at the different stages in the system. In a second experiment, we evaluate the quality of the generated marking maps. Additionally, this experiment involves a combination with a traffic-sign recognition system [3]; this combination has been published in [11].

The proposed road-marking recognition system following this approach is explained in Section V. First, however, we elaborate on the characteristics of the source data (street-level panoramic images) used as input by the recognition pipeline.

IV. SOURCE DATA

The presented system for the recognition of road markings operates on street-level panoramic images, which provide a recent and accurate overview of the road infrastructure. These images are acquired at a large scale and are recorded on all public roads within the target area, using a capturing interval of 5 m. The recording vehicles drive along with regular traffic at normal speeds. The cars are utilized efficiently by capturing during daytime under all kinds of weather conditions, including sunny, cloudy and foggy weather, and directly after (but not during) rain or snow.

The panoramic images have a resolution of 2,400 × 4,800 pixels and are stored as equi-rectangular images. The capturing location is accurately known for each image, based on a high-quality positioning system featuring both GPS and IMU devices. Figure 2a displays an example equi-rectangular panoramic image.

The employed capturing systems are calibrated precisely, resulting in panoramic images that are mapped onto a sphere, on which angular distances can be measured. The resulting images are stored as equi-rectangular images, which have a linear relationship between the pixel coordinates within the image and the viewing directions in the horizontal and vertical dimensions. This allows for the precise calculation of real-world 3D positions based on triangulation: the position of an object can be retrieved when multiple points (≥ 2) corresponding with the considered object are found in multiple images, using straightforward geometric computations.
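As an illustration of such a geometric computation, the minimal sketch below intersects two ground-plane viewing rays in a least-squares sense to estimate an object position from two capture locations. It reflects our own simplifications (a 2D ground plane, and bearings already derived from the horizontal pixel coordinates via the linear pixel-to-angle mapping); it is not the capturing system's actual implementation.

```python
import numpy as np

def triangulate_ground_point(cam_a, bearing_a, cam_b, bearing_b):
    """Estimate a ground-plane position from two viewing rays.

    cam_a, cam_b:         (x, y) capture locations in a metric world frame.
    bearing_a, bearing_b: absolute viewing angles (radians) towards the object,
                          derived from horizontal panorama pixel coordinates.
    """
    d_a = np.array([np.cos(bearing_a), np.sin(bearing_a)])  # ray direction A
    d_b = np.array([np.cos(bearing_b), np.sin(bearing_b)])  # ray direction B
    # Solve cam_a + t * d_a = cam_b + s * d_b for (t, s), least squares.
    A = np.column_stack([d_a, -d_b])
    rhs = np.asarray(cam_b, dtype=float) - np.asarray(cam_a, dtype=float)
    (t, s), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.asarray(cam_a, dtype=float) + t * d_a

# Two captures 5 m apart, both observing the same marking:
# triangulate_ground_point((0, 0), np.deg2rad(45), (5, 0), np.deg2rad(135))
# -> array([2.5, 2.5])
```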

V. ROAD-MARKING RECOGNITION SYSTEM

This section presents a learning-based system for road-marking recognition, using street-level panoramic images captured from a vehicle. This system is split into two major processing blocks: single-image marking recognition and multi-view positioning. The first block independently processes all images in a specific geographical region of interest. To recognize markings in images, this block consists of five consecutive processing steps, which are described in Section V-A. This results in a pixel location, orientation and marking type for each recognition.

The detection results from the single-image marking recognition (marking types, positions and orientations in images) are passed to the multi-view positioning block, which merges these results to obtain global marking positions. Section V-B elaborates further on this block. The complete recognition and positioning pipeline is shown in Figure 3.


[Diagram: single-image marking recognition (Pre-Processing → Image Segmentation → Contour Classification with marking SVMs → Context Inference with context models → Model Evaluation with marking models), followed by Multi-view Positioning.]

Fig. 3. Road-marking recognition pipeline. First, markings are recognized in each image separately by performing IPM, segmentation, contour classification, context inference and model evaluation. Then the results are combined with multi-view positioning. The five images below the diagram give the intermediate results of the single-image marking recognition. Note that the contour classification, which generates probability maps, is performed for each marking type (here shown for give-way markings). These maps are used by the context inference and model evaluation, together with context and marking models, respectively.

A. Single-Image Marking Recognition

The recognition pipeline first processes all panoramic images individually and recognizes markings in each image in five sequential processing steps. Because street-level panoramic images (Cycloramas) are used, the first step is to (1) perform the aforementioned IPM. This results in a top-down view of the scene, centered around the vehicle. Then, (2) image segmentation is used to find the relevant regions in the top-down image (i.e. road markings). This is followed by (3) the classification of each relevant region (or connected component) in the segmentation result: features are extracted from these connected components and classified by trained SVMs, resulting in a probability map for each marking type. To enhance the performance of the SVMs, context information (i.e. neighboring elements in the spatial placement patterns) is exploited in the next step by modeling the probability maps as Markov Random Fields (MRFs). Accordingly, (4) context inference, which selects the most likely marking type with respect to the context, is performed using Loopy Belief Propagation (LBP) on the MRFs. Finally, (5) the classified contours are evaluated with marking models, which merge single elements into multi-element markings (e.g. pedestrian crossings or give-way lines). The next five subsections elaborate on these steps.

1) Image Pre-Processing: Direct recognition of road markings from the spherical, equi-rectangular panoramic images is challenging, due to the inherent perspective deformations. This can be observed from e.g. Fig. 2a, which illustrates that parallel lines on the ground plane are not parallel in the image plane, but instead converge to a single vanishing point. Therefore, such perspective-distorted images are commonly transformed to a top-down view, using an Inverse Perspective Mapping (IPM) (similar to [12][13]). This transformation remaps the image such that the image plane equals a pre-defined (ground) plane (e.g. the road). It should be noted that since most roads are not perfectly flat (but slightly curved for drainage), small deformations may be visible. Nevertheless, the resulting images allow for easier detection, recognition and positioning of road markings, as e.g. illustrated by Fig. 2b.

These top-down images are calculated by:

x = \left( x_{car} + \frac{\arctan(y_{IPM}/x_{IPM})}{2\pi}\, n \right) \bmod n, \quad (1)

y = \left( m - \frac{\arctan(d/h)}{2\pi}\, n \right) \bmod m. \quad (2)

In these equations, (x, y) and (x_{IPM}, y_{IPM}) denote the horizontal and vertical image coordinates in the equi-rectangular panoramic image and the computed IPM image, respectively. The parameter x_{car} represents the horizontal coordinate of the front of the car within the panoramic image, while h and d denote the camera height above the ground plane and the distance from pixel coordinate (x_{IPM}, y_{IPM}) to the center of the IPM image, respectively. Finally, m and n denote the vertical and horizontal resolution of the panoramic image.
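As a minimal sketch of how these equations may be evaluated, the Python lookup below maps one IPM pixel to panorama coordinates; a full IPM image is then obtained by evaluating the lookup for every IPM pixel and sampling the panorama, e.g. bilinearly. The metres_per_pixel parameter is an assumption of ours; the paper does not state the ground resolution of the IPM image.

```python
import numpy as np

def ipm_to_panorama(x_ipm, y_ipm, x_car, h, m, n, metres_per_pixel):
    """Panorama lookup for one IPM pixel, following Eqs. (1) and (2).

    x_ipm, y_ipm:     coordinates relative to the IPM image centre (pixels)
    x_car:            horizontal panorama coordinate of the front of the car
    h:                camera height above the ground plane (metres)
    m, n:             vertical/horizontal panorama resolution (2400, 4800)
    metres_per_pixel: assumed ground-plane resolution of the IPM image
    """
    d = np.hypot(x_ipm, y_ipm) * metres_per_pixel   # ground distance to centre
    x = (x_car + np.arctan2(y_ipm, x_ipm) / (2 * np.pi) * n) % n   # Eq. (1)
    y = (m - np.arctan2(d, h) / (2 * np.pi) * n) % m               # Eq. (2)
    return x, y
```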

2) Road Marking Segmentation: The retrieved IPM image is segmented into two categories: road-marking and non-road-marking pixels. Road markings are typically brighter than the road and have a low saturation, as they are usually close to white. Therefore, image regions with a high local intensity and a low saturation are extracted in a two-step process. Using this metric for the segmentation of road markings gives good results [14].

The first step involves the calculation of the intensity difference between the grayscale pixel values and the average grayscale intensity value in a rectangular window around each considered pixel.



Fig. 4. Illustration of the segmentation steps: (a) input top-down image, (b) segmentation result based on the local intensity measure, (c) segmentation based on both the local intensity and saturation measures.


Fig. 5. Example of shape features for two different shapes (block and triangle). Clearly, the vectors have different magnitudes for equal angles.

With g_p the grayscale value of pixel p and v, w the size of the local neighborhood around pixel p, this calculation can be expressed as:

g'_p = g_p - \frac{1}{vw} \sum_{i=-v/2}^{v/2} \sum_{j=-w/2}^{w/2} g_{ij}. \quad (3)

The size of the window is determined by the marking types of interest. A binary segmentation is then obtained by applying Otsu's threshold method to the found differences.

The second step involves filtering based on the saturation value, where a thresholding operation removes the highly-saturated pixels from the previously obtained mask. After morphological closing and hole filling, all connected components (groups of neighboring pixels) are extracted from the retrieved segmentation mask. Figure 4 illustrates this procedure.
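A compact sketch of this two-step segmentation is given below, assuming an OpenCV/NumPy environment. The window size and saturation threshold are illustrative values of our own, not parameters from the paper, and hole filling is omitted for brevity.

```python
import cv2
import numpy as np

def segment_markings(ipm_bgr, win=(15, 15), sat_max=60):
    """Two-step marking segmentation sketch; win and sat_max are assumed
    illustrative parameters, not values taken from the paper."""
    gray = cv2.cvtColor(ipm_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Step 1 (Eq. 3): pixel intensity minus the mean of a local window.
    diff = gray - cv2.blur(gray, win)
    diff8 = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's method picks the binarization threshold on the difference image.
    _, mask = cv2.threshold(diff8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 2: drop highly saturated pixels (markings are close to white).
    sat = cv2.cvtColor(ipm_bgr, cv2.COLOR_BGR2HSV)[:, :, 1]
    mask[sat > sat_max] = 0
    # Morphological closing; hole filling is omitted in this sketch.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Extract connected components (groups of neighbouring marking pixels).
    n_labels, labels = cv2.connectedComponents(mask)
    return mask, n_labels, labels
```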

3) Contour Classification: The next step is to classify each of the connected components in the segmentation result. First, the contour is extracted from each connected component, representing its outline in pixel positions. Then all contours are translated to the origin and rotated to align their primary axis, making them translation- and rotation-invariant. Often scale invariance is applied as well, but in this case the scale is relevant to the marking type, e.g. small stripe markings and crosswalk elements may have the same shape but a different scale; hence scale invariance is omitted. Next, the distance between the contour centroid and the contour edge is determined at set angular intervals, as shown in Figure 5. As road markings have highly regular (and mostly convex) shapes, these values can be used as a shape descriptor and concatenated to form a feature vector.
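The sketch below illustrates this descriptor. Binning each contour point by angle and keeping the largest radius per bin is an assumption of ours; the exact angular sampling in the paper may differ.

```python
import numpy as np

def shape_feature(contour, n_angles=32):
    """Centroid-to-contour distances at regular angular intervals (Fig. 5).

    contour: (K, 2) array of contour points, already translated so that the
    centroid is at the origin and rotated to align the primary axis.
    """
    angles = np.arctan2(contour[:, 1], contour[:, 0])   # angle of each point
    radii = np.hypot(contour[:, 0], contour[:, 1])      # distance to centroid
    bins = np.linspace(-np.pi, np.pi, n_angles + 1)
    feature = np.zeros(n_angles)
    for k in range(n_angles):
        in_bin = (angles >= bins[k]) & (angles < bins[k + 1])
        feature[k] = radii[in_bin].max() if in_bin.any() else 0.0
    # Deliberately no scale normalization: scale separates marking types.
    return feature
```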

To classify between the different marking types, the feature vectors are transformed to zero mean and unit standard deviation, by subtracting the mean feature vector and dividing by the standard deviation obtained from the training sets. This results in a set of N feature vectors for each marking, where each vector corresponds to a specific marking type. These vectors are then classified by SVM classifiers, each operating on the feature vector extracted for the marking type it should recognize. Each SVM outputs the distance to its decision boundary, which is then converted to a probability measure using Platt scaling [15][16]. After evaluation of the SVMs, each segmented object thus has a vector of N probability measures, which all lie in the same unity interval and therefore allow for direct comparison between marking types. Figures 6a and 6c show the obtained probability maps for two different marking categories.
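A minimal scikit-learn sketch of this per-type classification is shown below. The RBF kernel is an assumed choice; the paper does not name the kernel. StandardScaler reproduces the zero-mean, unit-variance normalization, and probability=True makes scikit-learn fit a Platt-style sigmoid on the decision values, matching the distance-to-probability conversion of [15][16].

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_marking_svm(features, is_target_type):
    """One binary SVM per marking type with Platt-scaled outputs."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(features, is_target_type)  # is_target_type: 1 = this type, 0 = other
    return clf

# For a new contour feature vector x, the probability for this marking type:
# p = clf.predict_proba([x])[0, 1]
```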

4) Context Inference: Individual markings can be occluded (e.g. by other vehicles), may have a lowered visibility, or can be damaged. This results in non-ideal shapes in the segmentation mask, which are recognized with a lowered probability, or in shapes with high probabilities for other marking types. Therefore, we exploit the periodic spatial placement patterns in which road markings are typically placed, to improve the recognition performance. We use this contextual information to update the recognition scores of each detected marking, based on the scores of markings located at the expected locations for the respective marking types.

To exploit this novel type of contextual information, we employ a Markov Random Field (MRF), which allows for updating the recognition probabilities based on contextual information, i.e. based on the probabilities of neighboring markings. Within the MRF, all detected road markings are modeled as nodes, where the initial probabilities of the nodes are set to the probabilities found by the SVMs in the classification step. All neighboring nodes, having an inter-node distance smaller than a pre-defined threshold, are connected with edges.



Fig. 6. Illustration of the MRF processing. Left column: input probabilities for (a) give-way and (c) block markings, where red-green denotes low-high values. Right column: output probabilities for (b) shark teeth and (d) block markings. Note that the overlap between triangle and block markings disappears.

Fig. 7. Construction of context weighting functions using the marking distance and orientation difference from a training set for perpendicular markings. The heat map at the top shows the weights in the x-y plane for neighboring nodes in the MRF, where red indicates a high and blue a low weight. For instance, for perpendicular markings, the weight to another node in the MRF is high if that node lies at the periodic distance (e.g. 1 m on the minor axis and < 0.5 m on the major axis). The second figure shows the weighting function in terms of the orientation difference (between markings/nodes of the MRF).

In contrast to the conventional MRF, where all edges are unweighted, we assign weights to the edges. These weights represent the contextual influence of nodes on each other, based on how well their inter-node distance and relative orientations fit the expected marking placement pattern. This relationship is modeled by fitting a Gaussian Mixture Model (GMM) on the inter-node distances and orientations of a training set. Figure 7 shows an example context weighting function for perpendicular markings. Because different marking types have varying contextual relations, the weights are determined for each marking type. Figure 8 provides an illustration of an example MRF for road markings; the details of this figure are further explained below.


Fig. 8. Example MRF network including edge weights, where each node has a probability of belonging to one out of three classes. Note that the weights are also specific for each marking type and are thus vectors.

Since finding the exact probabilities within an MRF is computationally expensive, the solution is typically approximated with Loopy Belief Propagation (LBP) [17][18], which updates the probabilities by passing messages along the edges. After all messages have been sent, new probability values are calculated from the probabilities of the previous iteration and the weighted messages. Below, we explain this process in detail.

With p_i being the vector of probabilities that node i belongs to each of the marking types, the message that will be sent from node i to node j in the next iteration, m'_{i→j}, equals:

m'_{i\to j} = \frac{\hat{m}_{i\to j}}{\lVert \hat{m}_{i\to j} \rVert}, \quad \text{where} \quad (4)

\hat{m}_{i\to j} = p_i + \sum_{k \in N,\, k \neq j} w_{k\to i}\, m_{k\to i}. \quad (5)

In these equations, m denotes the vector containing the messages for each marking type, N is the set of nodes connected to node i, and w_{k→i} denotes the vector of edge weights for each marking type on the edge from node k to node i.

Next, the probabilities are updated. First, a belief value b is calculated by incrementing the current probabilities with the weighted sum of all incoming messages. The new probabilities that node j belongs to each of the marking types, p'_j, are then found by normalizing the obtained belief values:

b_j = p_j + \sum_{i \in N} w_{i\to j}\, m_{i\to j} \quad \text{and} \quad p'_j = \frac{b_j}{\lVert b_j \rVert}. \quad (6)

This process is repeated until all probabilities converge, or the maximum number of iterations is reached. Based on the newly acquired probabilities, all segmented markings are assigned to a single marking class, by selecting the class with the highest probability. Figure 6 displays the input and output of this processing stage. Because the orientation of (partially damaged) markings is subject to noise, each marking element is assigned an orientation co-determined by its context. In particular, we fit a line using least squares to neighboring elements of the same class and use the dominant orientation of this line.
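The sketch below shows one possible implementation of this weighted message-passing scheme, following Eqs. (4)-(6) under our own simplifications (plain Python dictionaries, a fixed iteration cap, and a simple convergence test).

```python
import numpy as np

def loopy_bp(p, edges, weights, n_iter=20, tol=1e-4):
    """Weighted loopy belief propagation sketch following Eqs. (4)-(6).

    p       : (N, C) array of initial SVM probabilities per node and class
    edges   : list of directed node pairs (i, j); both directions present
    weights : dict (i, j) -> length-C weight vector w_{i->j}
    """
    msgs = {e: np.zeros(p.shape[1]) for e in edges}
    for _ in range(n_iter):
        new_msgs = {}
        for (i, j) in edges:
            # Eq. (5): node probabilities plus weighted incoming messages,
            # excluding the message that came from the receiving node j.
            m = p[i] + sum(weights[(k, i)] * msgs[(k, i)]
                           for (k, t) in edges if t == i and k != j)
            new_msgs[(i, j)] = m / np.linalg.norm(m)        # Eq. (4): normalize
        msgs = new_msgs
        # Eq. (6): belief = previous probabilities + weighted incoming messages.
        b = np.stack([p[j] + sum(weights[(i, t)] * msgs[(i, t)]
                                 for (i, t) in edges if t == j)
                      for j in range(len(p))])
        new_p = b / np.linalg.norm(b, axis=1, keepdims=True)
        if np.abs(new_p - p).max() < tol:                   # converged
            return new_p.argmax(axis=1), new_p
        p = new_p
    return p.argmax(axis=1), p   # hard class per node, final probabilities
```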

5) Model Evaluation: To recognize high-level marking elements (e.g. lines of stripes, crosswalks, give-way situations) and to remove falsely detected markings, we apply a marking model that exploits the periodic placement patterns in which most road markings occur. For example, pedestrian crossings are constructed from a number of equally-sized rectangles at regular intervals with equal orientations, and give-way situations are constructed from multiple shark teeth located along a line, where each triangle is oriented parallel to the driving direction.

This placement model is evaluated as follows. First, we calculate the distances between all markings and their close-by neighbors on their major and minor axes, where the major axis is aligned with the orientation of the contour as extracted in the previous steps. Second, we connect all markings that adhere to the adjacency and orientation rules for the specific marking type, and ignore marking pairs deviating from those rules. In this stage, elements are connected when they are located at the expected locations within the pattern, i.e. at once or twice the period of the specific marking pattern. For example, two lane markings are connected if they are positioned at a predefined distance interval on their major and minor axes, within a set deviation. Last, markings belonging to the same high-level element (such as a pedestrian crossing) are grouped together using connectivity-based clustering, which creates clusters from markings that are pair-wise connected. Groups of recognized markings that contain too few elements can be removed to reduce the number of false positives.

This model evaluation results in a single position, orientation and width for each found marking element, thereby allowing for the recognition of high-level elements (i.e. crosswalks, give-way situations and lane divisions). The evaluation of the placement model is illustrated by Fig. 9.
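A minimal sketch of this pairwise connection and clustering step is given below. The tolerance values are illustrative assumptions of ours, and the adjacency test is simplified to a single period along the major axis; the paper evaluates type-specific rules on both axes.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_markings(positions, orientations, period,
                     major_tol=0.3, minor_tol=0.5, angle_tol=0.2):
    """Connectivity-based clustering sketch for the placement model.

    positions    : (N, 2) marking centroids in metres (IPM ground plane)
    orientations : (N,) marking orientations in radians
    period       : expected spacing of the marking pattern (metres)
    """
    n = len(positions)
    rows, cols = [], []
    for i in range(n):
        u = np.array([np.cos(orientations[i]), np.sin(orientations[i])])
        v = np.array([-u[1], u[0]])               # minor (perpendicular) axis
        for j in range(i + 1, n):
            if abs(orientations[i] - orientations[j]) > angle_tol:
                continue                           # orientation rule violated
            d = positions[j] - positions[i]
            major, minor = abs(d @ u), abs(d @ v)  # distances along both axes
            # Connect neighbours found at once or twice the pattern period.
            if minor < minor_tol and any(abs(major - k * period) < major_tol
                                         for k in (1, 2)):
                rows += [i, j]; cols += [j, i]
    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, labels = connected_components(graph, directed=False)
    return labels   # equal labels -> same high-level marking element
```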

B. Multi-View Positioning

When all images in a certain region of interest have been processed (e.g. in a city), the detected markings are processed by the multi-view positioning block. Because the images have a global position (in GPS coordinates, logged at capture time), the position of each detected marking can be calculated from its relative position in the image, due to the IPM transformation, which maps the image plane to the horizontal plane at ground level. This results in a marking map on which multiple detections of the same marking are present, where all detections originate from different images.

Fig. 9. Illustration and evaluation of the placement model. Left: crop from the input IPM image. Middle: found markings within the cropped region (pedestrian crossing element = red, block marking element = green, shark teeth = black, other = blue). Right: output. The pedestrian marking segments are coupled together, as are the block markings located on the same line. The two erroneously found block markings are ignored, as they do not fit the model.


Fig. 10. Distance weighting function constructed by fitting a Gaussian function to the AUC. The weighting function is maximal around 6 m, i.e. this is the range at which the fewest markings are missed.


To obtain a result where each marking is detected only once, connectivity-based clustering is used again to merge detections. The connectivity criterion is based on the size and orientation of the marking. Specifically, marking A is connected to marking B if the centroid of A lies within the shape of B. Each shape originates from the single-image detection stage and is determined by its cluster size (number of elements times period) and its orientation; for each cluster size, the shape dimensions follow from the design rules in [1].

After this procedure, clusters with only one element (markings detected in only one image) are discarded, assuming that actual markings are detected in at least two images. For each cluster, the final marking position is calculated as the weighted mean of all detections in the cluster. This weight is computed by fitting a function to a performance metric as a function of the detection distance (i.e. pixel distance from the image center). In this case, we have determined the AUC at different detection ranges and fitted a Gaussian function to the resulting values.


From the fitted curve in Figure 10, it can be observed that the performance is optimal at 6 m. For distances smaller than the optimum, markings are occluded by nearby objects or by the car itself. At distances larger than the optimum, markings can be occluded as well, but also suffer from perspective distortions.
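The sketch below illustrates the resulting weighted-mean fusion. The 6 m peak follows Fig. 10, but the Gaussian width is an assumed value; the paper does not report the fitted parameters.

```python
import numpy as np

def fuse_cluster_positions(detections, capture_positions, mu=6.0, sigma=4.0):
    """Weighted-mean fusion of multi-view detections of one marking cluster.

    detections:        (K, 2) ground-plane positions of the same marking
    capture_positions: (K, 2) corresponding camera locations
    mu, sigma:         Gaussian distance-weighting parameters (sigma assumed)
    """
    d = np.linalg.norm(detections - capture_positions, axis=1)  # capture range
    w = np.exp(-0.5 * ((d - mu) / sigma) ** 2)                  # Fig. 10 weights
    return (w[:, None] * detections).sum(axis=0) / w.sum()      # weighted mean
```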

VI. EXPERIMENTAL SETUP

The system consists of two major stages: the single-image marking recognition and the multi-view positioning. We evaluate the system at both stages, using different datasets and configurations. The datasets consist of the aforementioned street-level panoramic images with corresponding metadata (i.e. car orientation, global position, time stamp).

As general performance metrics, we use the true positives (TPs), representing the correctly detected markings, false positives (FPs), indicating the falsely found markings, and false negatives (FNs), referring to the missed markings. Additionally, we apply recall-precision curves to show the performance for the specified marking types. The recall denotes the fraction of annotated markings that is found, and the precision the fraction of detections that is correct. In an ideal system, both the recall and precision are unity (all markings found and no false positives).
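In terms of these counts, the two metrics follow the standard definitions:

```latex
\mathrm{recall} = \frac{TP}{TP + FN},
\qquad
\mathrm{precision} = \frac{TP}{TP + FP}
```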

The first dataset contains 263 images of a large city in the Netherlands, in which the ground-truth positions of pedestrian crossing, block-shaped, give-way and stripe marking elements have been annotated. The set consists of 834, 1,573, 805 and 771 single-marking elements for these four marking types, respectively. This set is used to test the single-image marking recognition performance, where we specifically (1) assess the impact of using context information to enhance recognition rates, (2) evaluate the influence of the detection distance from the car/capture position, and (3) analyze the contribution of the marking model evaluation.

Next to testing the recognition performance on single-marking elements, we also specifically evaluate the detection rate and positioning accuracy of clusters of markings, such as crosswalks or give-way lines. We calculate the average positioning error for each marking type and also create a curve of the detected percentage as a function of the distance from the ground truth. The single-image marking recognition is executed on the first dataset both with and without the context inference step. The dataset contains 25 pedestrian crossings, 60 block lines, 39 give-way lines and 34 dashed lines.

To test the performance of the complete marking recognition pipeline, we have created a large dataset in the municipality of Lingewaard, with 84,387 images corresponding to 400 km of road. To test both the recognition and positioning performance on marking clusters, global GPS positions along with the size and orientation of the markings are used as ground truth for this set. We focus on the recognition and positioning accuracy of pedestrian crossings, give-way markings and block markings, which occur 105, 729 and 141 times, respectively.

After the evaluation of the complete road-marking recognition pipeline, we perform an additional experiment to relate our detected markings to traffic signs. Road markings and traffic signs often coexist, such that combined databases can be used for the evaluation of redundant and complementary information. In this case, we create such a database for the consistency checking of road markings and traffic signs, concentrating on crosswalks and give-way markings. Markings and signs are consistent if they are within a set distance of each other and have matching orientations. For the traffic-sign recognition, we use the system described in [3]. The consistency evaluation is performed on the full dataset of 84,387 images.

VII. DETECTION AND POSITIONING RESULTS

This section presents the results of (1) single road-marking recognition, (2) marking cluster detection, (3) 3D positioning (full pipeline) and (4) consistency checking.

A. Individual Road-Marking Recognition Performance

Figure 11 shows the recall-precision curves for pedestrian crossings, block-shaped markings, give-way triangles and stripe markings. The performance of each marking type has been evaluated for three pipeline stages (SVM, context inference and model evaluation) and for two detection ranges (within 10 m and within 20 m of the car). We first analyze the recognition performance of individual markings for the SVM classification and then evaluate the performance impact of the added context inference and model evaluation.

SVM performance: Within a range of 10 m from the car, over 90% of all markings are detected, with the exception of give-way markings (> 80%). However, when a distance of 20 m is considered, the performance drops significantly for crosswalks, give-way and stripe markings. First, this is because markings farther away often tend to be occluded by other objects, such as vehicles. Second, perspective distortions occur at larger distances from the capturing location, as the IPM assumes a flat ground plane (whereas most roads are slightly curved).

Influence of context inference: When context inference is used, the performance is equal to or, in most cases, better than the SVM performance. For pedestrian crossings, block markings and give-way triangles, the recognition performance increases only slightly. For stripe markings, however, the impact of context inference on the recognition performance is considerable. This is due to the fact that stripe contours are easily distorted, resulting in a low probability output of the SVM. By exploiting the contextual placement patterns, stripe markings with low probabilities can be boosted. Within 10 m, the same recall of above 90% can be achieved at a much higher precision (> 90%). For a distance of 20 m, the precision can be improved as well, albeit at a lower recall.

Result of model evaluation: As mentioned before, the model evaluation is mainly used for clustering and for the removal of false positives. The output probabilities after context inference are set to zero if markings do not adhere to the model. This improves the final precision in most cases, but also lowers the recall slightly.



Fig. 11. Recall-precision curves for single-image marking recognition for (a) pedestrian crossings, (b) block markings, (c) give-way markings and (d) stripe markings. For each marking type, the performance is shown after SVM classification (blue), context inference (red) and model evaluation (green), for both detection ranges: within 10 m (solid line) and within 20 m (dashed line).

Since marking clusters should contain at least two marking elements, single isolated markings are discarded, even though they might be correct. Furthermore, due to perspective distortions, distances between markings are altered. The IPM assumes a flat ground plane, but roads are often curved (e.g. for drainage). Markings far away from the car are particularly affected by these distortions.

B. Road-Marking Cluster Performance

The model evaluation produces road-marking clusters with a position, orientation and size. Table I shows the recognition results for road-marking clusters within 10 m of the car. Considering the detection results, we observe that without context inference, between 79% and 90% of the clusters are found, except for block markings, of which only 40% of the clusters are found. With context inference, the detection performance is significantly improved: 88% of the crosswalks and over 90% of the other markings are found. The positioning accuracy, expressed as the mean distance from the ground-truth cluster positions, is also considerably improved by context inference, except for give-way markings, where it is marginally worse (2 cm). However, this difference is within the significance of the measurement.

TABLE I
MARKING GROUP PERFORMANCE FOR DETECTIONS WITHIN 10 M

Situation            Found #  Found %  False det.  Pos. error
Pedestrian Crossing    22       88%        1         0.59 m
  with context         22       88%        3         0.53 m
Block Markings         24       40%        3         0.77 m
  with context         58       97%        5         0.16 m
Give-way Markings      31       79%        1         0.20 m
  with context         35       90%        6         0.22 m
Stripe Markings        30       88%        4         0.73 m
  with context         32       94%        5         0.15 m

TABLE II
MARKING GROUP PERFORMANCE FOR DETECTIONS WITHIN 20 M

Situation            Found #  Found %  False det.  Pos. error
Pedestrian Crossing    51       56%        3         0.60 m
  with context         51       56%        5         0.55 m
Block Markings         63       38%       39         1.14 m
  with context        154       89%       32         0.40 m
Give-way Markings      90       58%        3         0.46 m
  with context        112       72%       12         0.34 m
Stripe Markings        54       49%        7         0.99 m
  with context         59       53%        9         0.63 m

Table II shows the same detection and accuracy metrics for a range of 20 m from the car. Overall, context inference still improves both the number of found markings and the positioning accuracy, but we observe that the results are not improved for pedestrian crossings and stripe markings (56% and 53% found, respectively).



Fig. 12. Two situations where marking clusters are missed. In (a) the pedestrian crossing is occluded; in (b) the stripe markings are distorted by the IPM.


Fig. 13. Detected percentage as a function of distance from ground truth.

For these two marking types, we now investigate the causes of the lower detection performance.

Figure 12 shows two cases where crosswalks and stripe markings are not detected. Because crosswalk markings are relatively large, they tend to be occluded by other objects when they are farther away from the car. Stripe markings are relatively small and thus more susceptible to the distortions of the IPM. This implies that even without context inference, all undistorted markings have already been detected, such that the added context information does not improve the detection rates. Because most markings are detected within 10 m and the results from individual images are merged in the next stage, almost all markings can still be detected, as the same marking occurs in multiple images.

Regarding positioning accuracy, Figure 13 gives the percentage of detected markings that lie within a certain range of the ground truth. From these curves, it is clear that context inference improves the positioning accuracy of marking clusters in an image. When the pipeline uses only the SVM classification, roughly 60% is detected within 1 m and above 80% within 2 m. Using context information, this improves to above 80% within 1 m and above 90% within 2 m. Positioning accuracy is particularly important for the multi-view positioning, which is the next step in the pipeline and is evaluated in the next section.

TABLE III
MULTI-VIEW POSITIONING RESULTS FOR CROSSWALKS, BLOCK MARKINGS AND GIVE-WAY TRIANGLES

Situation            Found #  Found %  False det.  Accuracy
Pedestrian Crossing    89       85%        3         0.60 m
Block Markings        130       92%       41         0.50 m
Give-way Markings     581       80%       23         0.38 m

C. 3D Positioning Results

Table III shows the detection and positioning accuracy results for the multi-view positioning of crosswalks, block markings and give-way triangles. Overall, the recognition rate is equal to or above 80%. For each marking type, we evaluate the most common causes of false negatives (misses) and false positives (erroneously found markings), provided that they are significant. It should be noted that the impact of missed markings (FNs) is larger than that of falsely found markings (FPs): with little manual effort, all detections of the pipeline can be inspected and accepted or discarded accordingly.

For all classes, marking abrasion is the most common cause of missed detections. Furthermore, markings are missed if only a few images capture them, which often occurs when they are far away. Below, we specifically address each marking type.

The main cause of missed detections for crosswalks is the merging of clusters. In most cases, a smaller crosswalk of only two elements is located close to a larger crosswalk. Due to positioning errors, caused by perspective distortions and GPS position errors, these clusters are 'connected' and merged, but marked as missed because both markings are annotated. Figure 14a shows an example of two smaller two-stripe crosswalks that are merged with the larger crosswalk in the middle.

The detection rate of block markings is already high (92%). However, there are many false positive detections. Closer inspection of the false positives reveals that the pipeline recognizes tile patterns in gardens and on driveways and sidewalks, as shown in Figure 14b.

Give-way triangles have the lowest detection rate compared to the other markings and also have the highest number of elements in the dataset. As there are around 150 missed markings, we create a breakdown of the causes of this high number. Over 50% of all missed give-way markings were heavily damaged, making detection by the presented system impossible. The second cause, at 23%, is give-way markings being present on bike roads/lanes. These lanes are often located far from the road/capture location, resulting in distorted and occluded markings. At 13%, cluster merging is the third most probable cause of FNs and is comparable to the merging aspects described for crosswalks. Other minor causes include occlusions and IPM distortions (e.g. when markings are on slopes). Figure 14c and Figure 14d show cases of damaged markings and far-away capture locations, respectively.


(a) FNs (red) due to the merging of close-by pedestrian crossings (blue).

(b) FP due to similar shapes and patterns.

(c) FN due to severely damaged markings.

(d) FN due to far away capture locations (blue dots).

Fig. 14. False and missed detections of 3D-positioned markings.

D. Road-Marking and Traffic-Sign Co-Occurrence Validation

The presented road-marking recognition system is applied in conjunction with the existing traffic-sign recognition system [3] to check the correct co-occurrence of signs and markings, in particular for pedestrian crossings and give-way situations. The goal of this experiment is to (1) evaluate the recognition of traffic situations where both markings and signs occur, and (2) explore consistency checking on databases containing the positions of signs and markings. The latter aspect aims at the identification of situations where expected signs or markings are missing, potentially leading to dangerous traffic behavior. For instance, a pedestrian crossing should be indicated by both signs visible from all driving directions and a sufficiently large crosswalk marking.

TABLE IV
OVERVIEW OF THE COMPLETE COMBINED RESULTS AND FOR INDIVIDUAL SIGN AND MARKING RECOGNITION ONLY

Situation             Approach                   Correctly det.   False det.
Pedestrian crossings  Combined recognition       53   (100%)          8
                      Marking recognition only   51   (96.2%)         4
                      Sign recognition only      49   (92.5%)         4
Give-way situations   Combined recognition       694  (96.5%)        28
                      Marking recognition only   500  (69.5%)        23
                      Sign recognition only      598  (83.1%)         5

TABLE V
OVERVIEW OF THE CONSISTENCY CHECKING RESULTS

Situation             Consistent
Pedestrian crossings  34 / 53   (64.2%)
Give-way situations   349 / 719 (48.5%)


Combined recognition: Table IV shows the number of recognized traffic situations for marking-only, sign-only and combined recognition. It should be noted that this table considers traffic situations and not single marking clusters or signs, such that multiple signs and markings denoting the same situation are merged. The results show that marking and sign recognition complement each other in the identification of situations, especially considering that a significant part of the situations is denoted exclusively by either a road marking or a sign.

Consistency checking: Considering Table V, two-thirds of the pedestrian crossings and about half of the give-way situations are marked as consistent, i.e. the expected markings and signs are both detected. Compared to manual safety inspection, this approach reduces the number of situations that has to be verified by about a factor of two (or better).

VIII. CONCLUSIONS AND FUTURE WORK

In this paper, we have presented a road-marking recognition system to create road-marking maps from street-level panoramic images. Next to the general components of a marking recognition system, such as segmentation and contour classification, the proposed system contributes to the performance in several aspects. This system is able to (1) recognize a variety of markings, (2) exploit context relations between individual marking elements, (3) retrieve marking clusters, and (4) find the global positions of the recognized markings. These contributions have resulted in a system design with two consecutive processing stages. First, each image is processed individually to identify the present markings. This stage subsequently applies Inverse Perspective Mapping (IPM), segmentation, learning-based contour classification with SVMs, context inference and model evaluation to each individual image. Context inference is realized by modeling the SVM results in a Markov Random Field with weighted edges and performing inference with Loopy Belief Propagation.


From the context inference results, marking clusters are constructed by applying marking placement models, which exploit the marking design rules provided by traffic legislation. In the second processing stage, the recognition results from the separate images are combined with connectivity-based clustering to find the 3D positions of the markings.

First, the single-image processing stage has been evaluated for crosswalks, block, stripe and give-way markings. The base performance (SVM classification) within 10 m of the car has been found sufficient for crosswalks, give-way and stripe markings (≥ 79%), but not for block markings (40%). The use of context inference strongly improves both the detection performance and the positioning accuracy for all marking types, to above 88% (above 90% for most marking types), and locates them within a few decimeters of the ground-truth locations. The performance within 20 m of the car is lower than for close-by detections, even with context inference for some types. However, because actual markings are captured in multiple images, this is not an issue, as discussed below for the multi-view positioning results.

Applying the full processing pipeline to a dataset of a complete municipality in the Netherlands, consisting of 84,387 images (corresponding to more than 400 km of road), yields promising results. In this set, pedestrian crossings, block and give-way markings have been recognized at rates of 85%, 92% and 80%, respectively. Closer inspection of the missed detections reveals that most undetected markings are severely damaged (give-way markings in particular), or are located far from the capture location, which results in more occlusions and perspective distortions. Provided that all detections are verified manually to remove false positives, the system performs sufficiently well for creating road-marking maps for traffic-safety inspection or navigation applications.

By exploiting high-level context information from other sources, such as traffic signs, a significant number of the missed detections can be identified through traffic-situation analysis. Combined databases of markings and signs (1) supply a larger number of traffic situations than a single source, and (2) allow for consistency checking of sign and marking co-occurrences, which directs manual traffic-safety inspection to aberrant cases. On our dataset, we have found 64.2% of the crosswalk situations and 48.5% of the give-way situations to be consistent, thereby reducing the manual verification effort by a factor of two or more.

Considering these results, future work should be geared towards two objectives: (1) specific processing of damaged markings and (2) support for other marking types, such as lines, arrows and speed numbers. Improving the segmentation performance and exploiting high-level context information from other markings and signs can help to increase the recognition performance and give an additional indication of damaged markings. Shapes of alternative marking types can be learned by the system, and the corresponding context relations and marking models can be set up.

Besides the development and validation of a road-marking recognition system in an industrial environment, this work has initiated and contributed to several international publications:

• L. Hazelhoff, I. Creusen, T. Woudsma, and P.H.N. de With, Combined generation of road marking and road sign databases applied to consistency checking of pedestrian crossings, accepted for: IAPR Int. Conf. on Machine Vision and Applications, 2015.

• T. Woudsma, L. Hazelhoff, I. Creusen, and P.H.N. de With, Automated generation of road marking maps from street-level panoramic images, submitted to: Int. Conf. on Intelligent Transportation Systems, 2015.

• L. Hazelhoff, I. Creusen, T. Woudsma, and P.H.N. de With, Exploiting automatically generated databases of traffic signs and road markings for contextual co-occurrence analysis, submitted to: Int. Journal on Electronic Imaging, 2015.

REFERENCES

[1] CROW, "Richtlijnen voor de bebakening en markering van wegen," 2005.

[2] C. Li, I. Creusen, L. Hazelhoff, and P.H.N. de With, "Detection and recognition of road markings in panoramic images," in ACCV Workshop on My Car Has Eyes - Intelligent Vehicles with Vision Technology, 2014.

[3] L. Hazelhoff, I. Creusen, and P.H.N. de With, "Exploiting street-level panoramic images for large-scale automated surveying of traffic signs," Machine Vision and Applications, vol. 25, no. 7, pp. 1893-1911, 2014.

[4] S. Vacek, C. Schimmel, and R. Dillmann, "Road-marking analysis for autonomous vehicle guidance," in EMCR.

[5] M. Fu, X. Wang, H. Ma, Y. Yang, and M. Wang, "Multi-lanes detection based on panoramic camera," in Control Automation (ICCA), 11th IEEE International Conference on, June 2014, pp. 655-660.

[6] J. Huang, H. Liang, Z. Wang, T. Mei, and Y. Song, "Robust lane marking detection under different road conditions," in Robotics and Biomimetics, 2013 IEEE International Conference on, Dec 2013, pp. 1753-1758.

[7] S. Yenikaya, G. Yenikaya, and E. Duven, "Keeping the vehicle on the road: A survey on on-road lane detection systems," ACM Comput. Surv., vol. 46, no. 1, pp. 2:1-2:43, Jul. 2013.

[8] P. Foucher, Y. Sebsadji, J.-P. Tarel et al., "Detection and recognition of urban road markings using images," in Intelligent Transportation Systems, International IEEE Conference on, 2011, pp. 1747-1752.

[9] H. Li, M. Feng, and X. Wang, "Inverse perspective mapping based urban road markings detection," in Cloud Computing and Intelligent Systems (CCIS), International Conference on, vol. 03, 2012, pp. 1178-1182.

[10] B. Qin, W. Liu, X. Shen et al., "A general framework for road marking detection and analysis," in Intelligent Transportation Systems, 2013 16th International IEEE Conference on, 2013, pp. 619-625.

[11] L. Hazelhoff, I. Creusen, T. Woudsma, and P.H.N. de With, "Combined generation of road marking and road sign databases applied to consistency checking of pedestrian crossings," in 14th IAPR International Conference on Machine Vision Applications (submitted to), 2015.

[12] J. Rebut, A. Bensrhair, and G. Toulminet, "Image segmentation and pattern recognition for road marking analysis," in Industrial Electronics, IEEE Int. Symp. on, vol. 1, May 2004, pp. 727-732.

[13] T. Wu and A. Ranganathan, "A practical system for road marking detection and recognition," in Intelligent Vehicles Symp. (IV), IEEE, June 2012, pp. 25-30.

[14] T. Veit, J.-P. Tarel, P. Nicolle, and P. Charbonnier, "Evaluation of road marking feature extraction," in Intelligent Transportation Systems, 2008 11th International IEEE Conference on, Oct 2008, pp. 174-181.

[15] J. C. Platt, "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods," in Advances in Large Margin Classifiers. MIT Press, 1999, pp. 61-74.

[16] H.-T. Lin, C.-J. Lin, and R. Weng, "A note on Platt's probabilistic outputs for support vector machines," Machine Learning, vol. 68, no. 3, pp. 267-276, 2007.

[17] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1988.

[18] K. P. Murphy, Y. Weiss, and M. I. Jordan, "Loopy belief propagation for approximate inference: An empirical study," in Proceedings of the 15th Conf. on Uncertainty in Artificial Intelligence, 1999, pp. 467-475. [Online]. Available: http://dl.acm.org/citation.cfm?id=2073796.2073849