
Learning Image Processing with OpenCV

Table of Contents

Learning Image Processing with OpenCV

Credits

About the Authors

About the Reviewers

www.PacktPub.com

Support files, eBooks, discount offers, and more

Why subscribe?

Free access for Packt account holders

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Downloading the color images of this book

Errata

Piracy

Questions

1. Handling Image and Video Files

An introduction to OpenCV

Downloading and installing OpenCV

Getting a compiler and setting CMake

Configuring OpenCV with CMake

Compiling and installing the library

The structure of OpenCV

Creating user projects with OpenCV

General usage of the library

Tools to develop new projects

Creating an OpenCV C++ program with Qt Creator

Reading and writing image files

The basic API concepts

Image file-supported formats

The example code

Reading image files

Event handling into the intrinsic loop

Writing image files

Reading and writing video files

The example code

User-interaction tools

Trackbars

Mouse interaction

Buttons

Drawing and displaying text

Summary

2. Establishing Image Processing Tools

Basic data types

Pixel-level access

Measuring the time

Common operations with images

Arithmetic operations

Data persistence

Histograms

The example code

The example code

Summary

3. Correcting and Enhancing Images

Image filtering

Smoothing

The example code

Sharpening

The example code

Working with image pyramids

Gaussian pyramids

Laplacian pyramids

The example code

Morphological operations

The example code

LUTs

The example code

Geometrical transformations

Affine transformation

Scaling

The example code

Translation

The example code

Image rotation

The example code

Skewing

The example code

Reflection

The example code

Perspective transformation

The example code

Inpainting

The example code

Denoising

The example code

Summary

4. Processing Color

Color spaces

Conversion between color spaces (cvtColor)

RGB

The example code

Grayscale

Example code

CIE XYZ

The example code

YCrCb

The example code

HSV

The example code

HLS

The example code

CIE L*a*b*

The example code

CIE L*u*v*

The example code

Bayer

The example code

Color-space-based segmentation

HSV segmentation

YCrCb segmentation

Color transfer

The example code

Summary

5. Image Processing for Video

Video stabilization

Superresolution

Stitching

Summary

6. Computational Photography

High-dynamic-range images

Creating HDR images

Example

Tone mapping

Alignment

Exposure fusion

Seamless cloning

Decolorization

Non-photorealistic rendering

Summary

7. Accelerating Image Processing

OpenCV with the OpenCL installation

A quick recipe to install OpenCV with OpenCL

Check the GPU usage

Accelerating your own functions

Checking your OpenCL

The code explanation

Your first GPU-based program

The code explanation

Going real time

The code explanation

The performance

Summary

Index

Learning Image Processing with OpenCV

Copyright © 2015 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: March 2015

Production reference: 1230315

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78328-765-9

www.packtpub.com

Credits

Authors

Gloria Bueno García

Oscar Deniz Suarez

José Luis Espinosa Aranda

Jesus Salido Tercero

Ismael Serrano Gracia

Noelia Vállez Enano

Reviewers

Walter Lucetti

André de Souza Moreira

Marvin Smith

Commissioning Editor

Julian Ursell

Acquisition Editor

Sam Wood

Content Development Editor

Kirti Patil

Technical Editor

Faisal Siddiqui

Copy Editor

Stuti Srivastava

Project Coordinator

Nidhi Joshi

Proofreaders

Martin Diver

Maria Gould

Samantha Lyon

Indexer

Tejal Soni

Graphics

Abhinash Sahu

Production Coordinator

Conidon Miranda

Cover Work

Conidon Miranda

About the Authors

Gloria Bueno García holds a PhD in machine vision from Coventry University, UK. She has experience working as the principal researcher in several research centers, such as UMR 7005 research unit CNRS/Louis Pasteur Univ. Strasbourg (France), Gilbert Gilkes & Gordon Technology (UK), and CEIT San Sebastian (Spain). She is the author of two patents, one registered type of software, and more than 100 refereed papers. Her interests are in 2D/3D multimodality image processing and artificial intelligence. She leads the VISILAB research group at the University of Castilla-La Mancha. She has coauthored a book on OpenCV programming for mobile devices: OpenCV Essentials, Packt Publishing.

This is dedicated to our sons for the time we have not been able to play with them and our parents for their unconditional support during our lifetime. Thanks from Gloria and Oscar.

Oscar Deniz Suarez's research interests are mainly focused on computer vision and pattern recognition. He is the author of more than 50 refereed papers in journals and conferences. He received the runner-up award for the best PhD work on computer vision and pattern recognition by AERFAI and the Image File and Reformatting Software Challenge Award by Innocentive Inc. He has been a national finalist for the 2009 Cor Baayen award. His work is used by cutting-edge companies, such as Existor, Gliif, Tapmedia, E-Twenty, and others, and has also been added to OpenCV. Currently, he works as an associate professor at the University of Castilla-La Mancha and contributes to VISILAB. He is a senior member of IEEE and is affiliated with AAAI, SIANI, CEA-IFAC, AEPIA, and AERFAI-IAPR. He serves as an academic editor of the PLoS ONE journal. He has been a visiting researcher at Carnegie Mellon University, Imperial College London, and Leica Biosystems. He has coauthored two books on OpenCV previously.

José Luis Espinosa Aranda holds a PhD in computer science from the University of Castilla-La Mancha. He has been a finalist for Certamen Universitario Arquímedes de Introducción a la Investigación científica in 2009 for his final degree project in Spain. His research interests involve computer vision, heuristic algorithms, and operational research. He is currently working at the VISILAB group as an assistant researcher and developer in computer vision topics.

This is dedicated to my parents and my brothers.

Jesus Salido Tercero gained his electrical engineering degree and PhD (1996) from Universidad Politécnica de Madrid (Spain). He then spent 2 years (1997 and 1998) as a visiting scholar at the Robotics Institute (Carnegie Mellon University, Pittsburgh, USA), working on cooperative multirobot systems. Since his return to the Spanish University of Castilla-La Mancha, he spends his time teaching courses on robotics and industrial informatics, along with research on vision and intelligent systems. Over the last 3 years, his efforts have been directed to develop vision applications on mobile devices. He has coauthored a book on OpenCV programming for mobile devices.

This is dedicated to those to whom I owe all I am: my parents, Sagrario and Maria.

Ismael Serrano Gracia received his degree in computer science in 2012 from the University of Castilla-La Mancha. He got the highest marks for his final degree project on person detection. This application uses depth cameras with OpenCV libraries. Currently, he is a PhD candidate at the same university, holding a research grant from the Spanish Ministry of Science and Research. He is also working at the VISILAB group as an assistant researcher and developer on different computer vision topics.

This is dedicated to my parents, who have given me the opportunity of education and have supported me throughout my life. It is also dedicated to my supervisor, Dr. Oscar Deniz, who has been a friend, guide, and helper. Finally, it is dedicated to my friends and my girlfriend, who have always helped me and believed that I could do this.

Noelia Vállez Enano has liked computers since her childhood, though she didn't have one before her mid-teens. In 2009, she finished her studies in computer science at the University of Castilla-La Mancha, where she graduated with top honors. She started working at the VISILAB group through a project on mammography CAD systems and electronic health records. Since then, she has obtained a master's degree in physics and mathematics and has enrolled for a PhD degree. Her work involves using image processing and pattern recognition methods. She also likes teaching and working in other areas of artificial intelligence.

About the Reviewers

Walter Lucetti, known on the Internet as Myzhar, is an Italian computer engineer with a specialization in robotics and robotics perception. He received the laurea degree in 2005 while studying at Research Center "E. Piaggio" in Pisa (Italy), where he wrote a thesis about 3D mapping of the real world using a 2D laser tilted with a servo motor, fusing 3D with RGB data. While writing the thesis, he encountered OpenCV for the first time; it was early 2004 and OpenCV was at its larval stage.

After the laurea, he started working as a software developer for low-level embedded systems and high-level desktop systems. He greatly improved his knowledge of computer vision and machine learning as a researcher at Gustavo Stefanini Advanced Robotics Center in La Spezia (Italy), a spinoff of PERCRO Laboratory of Scuola Superiore Sant'Anna of Pisa (Italy).

Currently, he is working in the software industry, writing firmware for embedded ARM systems, software for desktop systems based on the Qt framework, and intelligent algorithms for video surveillance systems based on OpenCV and CUDA.

He is also working on a personal robotic project: MyzharBot. MyzharBot is a tracked ground mobile robot that uses computer vision to detect obstacles and to analyze and explore the environment. The robot is guided by algorithms based on ROS, CUDA, and OpenCV. You can follow the project on this website: http://myzharbot.robot-home.it.

André de Souza Moreira has a master's degree in computer science, with emphasis on computer graphics, from the Pontifical Catholic University of Rio de Janeiro (Brazil).

He graduated with a bachelor of computer science degree from Universidade Federal do Maranhão (UFMA) in Brazil. During his undergraduate degree, he was a member of Labmint's research team and worked with medical imaging, specifically, breast cancer detection and diagnosis using image processing.

Currently, he works as a researcher and system analyst at Instituto Tecgraf, one of the major research and development labs in computer graphics in Brazil. He has been working extensively with PHP, HTML, and CSS since 2007, and nowadays, he develops projects in C++11/C++14, along with SQLite, Qt, Boost, and OpenGL. More information about him can be acquired on his personal website at www.andredsm.com.

Marvin Smith is currently a software engineer in the defense industry, specializing in photogrammetry and remote sensing. He received his BS degree in computer science from the University of Nevada Reno. His technical interests include high performance computing, distributed image processing, and multispectral imagery exploitation. Prior to working in defense, Marvin held internships with the Intelligent Robotics Group at the NASA Ames Research Center and the Nevada Automotive Test Center.

www.PacktPub.com

Support files, eBooks, discount offers, and more

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <service@packtpub.com> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser

Free access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.

Preface

OpenCV, arguably the most widely used computer vision library, includes hundreds of ready-to-use imaging and vision functions and is used in both academia and industry. As cameras get cheaper and imaging features grow in demand, the range of applications using OpenCV increases significantly, both for desktop and mobile platforms.

This book provides an example-based tour of OpenCV's main image processing algorithms. While other OpenCV books try to explain the underlying theory or provide large examples of nearly complete applications, this book is aimed at people who want to have an easy-to-understand working example as soon as possible, and possibly develop additional features on top of that.

The book starts with an introductory chapter in which the library installation is explained, the structure of the library is described, and basic image and video reading and writing examples are given. From this, the following functionalities are covered: handling of images and videos, basic image processing tools, correcting and enhancing images, color, video processing, and computational photography. Last but not least, advanced features such as GPU-based accelerations are also considered in the final chapter. New functions and techniques in the latest major release, OpenCV 3, are explained throughout.

What this book covers

Chapter 1, Handling Image and Video Files, shows you how to read image and video files. It also shows basic user-interaction tools, which are very useful in image processing to change a parameter value, select regions of interest, and so on.

Chapter 2, Establishing Image Processing Tools, describes the main data structures and basic procedures needed in subsequent chapters.

Chapter 3, Correcting and Enhancing Images, deals with transformations typically used to correct image defects. This chapter covers filtering, point transformations using look-up tables, geometrical transformations, and algorithms for inpainting and denoising images.

Chapter 4, Processing Color, deals with color topics in image processing. This chapter explains how to use different color spaces and perform color transfers between two images.

Chapter 5, Image Processing for Video, covers techniques that use a video or a sequence of images. This chapter is focused on the implementation of algorithms for video stabilization, superresolution, and stitching.

Chapter 6, Computational Photography, explains how to read HDR images and perform tone mapping on them.

Chapter 7, Accelerating Image Processing, covers an important topic in image processing: speed. Modern GPUs are the best available technology to accelerate time-consuming image processing tasks.

What you need for this book

The purpose of this book is to teach you OpenCV image processing by taking you through a number of practical image processing projects. The latest version, Version 3.0 of OpenCV, will be used.

Each chapter provides several ready-to-use examples to illustrate the concepts covered in it. The book is, therefore, focused on providing you with a working example as soon as possible so that you can develop additional features on top of that.

To use this book, only free software is needed. All the examples have been developed and tested with the freely available Qt Creator IDE and GNU/GCC compiler. The CMake tool is also used to configure the build process of the OpenCV library on the target platform. Moreover, the freely available OpenCL SDK is required for the GPU acceleration examples shown in Chapter 7, Accelerating Image Processing.

Who this book is for

This book is intended for readers who already know C++ programming and want to learn how to do image processing using OpenCV. You are expected to have a minimal background in the theory of image processing. The book does not cover topics that are more related to computer vision, such as feature and object detection, tracking, or machine learning.

Conventions

In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.

Code words in text, folder names, filenames, file extensions, pathnames, system variables, URLs, and user input are shown as follows: "Each module has an associated header file (for example, core.hpp)."

A block of code is set as follows:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace std;
using namespace cv;

int main(int argc, char *argv[])
{
    Mat frame; // Container for each frame

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char *argv[])
{

Any command-line input or output is written as follows:

C:\opencv-buildQt\install

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Clicking the Next button moves you to the next screen."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book: what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail <feedback@packtpub.com>, and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/ImageProcessingwithOpenCV_Graphics.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books, maybe a mistake in the text or the code, we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <copyright@packtpub.com> with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at <questions@packtpub.com>, and we will do our best to address the problem.

Chapter 1. Handling Image and Video Files

This chapter is intended as a first contact with OpenCV, its installation, and first basic programs. We will cover the following topics:

A brief introduction to OpenCV for the novice, followed by an easy step-by-step guide to the installation of the library
A quick tour of OpenCV's structure after the installation in the user's local disk
Quick recipes to create projects using the library with some common programming frameworks
How to use the functions to read and write images and videos
Finally, we describe the library functions to add rich user interfaces to the software projects, including mouse interaction, drawing primitives, and Qt support

An introduction to OpenCV

Initially developed by Intel, OpenCV (Open Source Computer Vision) is a free cross-platform library for real-time image processing that has become a de facto standard tool for all things related to computer vision. The first version was released in 2000 under a BSD license and, since then, its functionality has been greatly enriched by the scientific community. In 2012, the nonprofit foundation OpenCV.org took on the task of maintaining a support site for developers and users.

Note

At the time of writing this book, a new major version of OpenCV (Version 3.0) is available, still in beta status. Throughout the book, we will present the most relevant changes brought by this new version.

OpenCV is available for the most popular operating systems, such as GNU/Linux, OS X, Windows, Android, and iOS, among others. The first implementation was in the C programming language; however, its popularity grew with its C++ implementation as of Version 2.0, and new functions are programmed in C++. Nowadays, the library has a full interface for other programming languages, such as Java, Python, and MATLAB/Octave. Also, wrappers for other languages (such as C#, Ruby, and Perl) have been developed to encourage adoption by programmers.

In an attempt to maximize the performance of computing-intensive vision tasks, OpenCV includes support for the following:

Multithreading on multicore computers using Threading Building Blocks (TBB), a template library developed by Intel.
A subset of Integrated Performance Primitives (IPP) on Intel processors to boost performance. Thanks to Intel, these primitives are freely available as of Version 3.0 beta.
Interfaces for processing on a Graphic Processing Unit (GPU) using Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL).

The applications for OpenCV cover areas such as segmentation and recognition, 2D and 3D feature toolkits, object identification, facial recognition, motion tracking, gesture recognition, image stitching, high dynamic range (HDR) imaging, augmented reality, and so on. Moreover, to support some of the previous application areas, a module with statistical machine learning functions is included.

Downloading and installing OpenCV

OpenCV is freely available for download at http://opencv.org. This site provides the last version for distribution (currently, 3.0 beta) and older versions.

Note

Special care should be taken with possible errors when the downloaded version is a nonstable release, for example, the current 3.0 beta version.

On http://opencv.org/downloads.html, suitable versions of OpenCV for each platform can be found. The code and information of the library can be obtained from different repositories depending on the final purpose:

The main repository (at http://sourceforge.net/projects/opencvlibrary), devoted to final users. It contains binary versions of the library and ready-to-compile sources for the target platform.
The test data repository (at https://github.com/itseez/opencv_extra) with sets of data used to test some library modules.
The contributions repository (at http://github.com/itseez/opencv_contrib) with the source code for extra and cutting-edge features supplied by contributors. This code is more error-prone and less tested than the main trunk.

Tip

With the last version, OpenCV 3.0 beta, the extra contributed modules are not included in the main package. They should be downloaded separately and explicitly included in the compilation process through the proper options. Be cautious if you include some of those contributed modules, because some of them have dependencies on third-party software not included with OpenCV.

The documentation site (at http://docs.opencv.org/master/) for each of the modules, including the contributed ones.
The development repository (at https://github.com/Itseez/opencv) with the current development version of the library. It is intended for developers of the main features of the library and the "impatient" user who wishes to use the last update even before it is released.

Unlike GNU/Linux and OS X, where OpenCV is distributed as source code only, the Windows distribution includes precompiled versions of the library (built with Microsoft Visual C++ v10, v11, and v12). Each precompiled version is ready to be used with the corresponding Microsoft compiler. However, if the primary intention is to develop projects with a different compiler framework, we need to compile the library for that specific compiler (for example, GNU GCC).

Tip

The fastest route to working with OpenCV is to use one of the precompiled versions included with the distribution. Then, a better choice is to build a fine-tuned version of the library with the best settings for the local platform used for software development. This chapter provides the information to build and install OpenCV on Windows. Further information on setting up the library on Linux can be found at http://docs.opencv.org/doc/tutorials/introduction/linux_install and https://help.ubuntu.com/community/OpenCV.

Getting a compiler and setting CMake

A good choice for cross-platform development with OpenCV is to use the GNU toolkit (including gmake, g++, and gdb). The GNU toolkit can be easily obtained for the most popular operating systems. Our preferred choice for a development environment consists of the GNU toolkit and the cross-platform Qt framework, which includes the Qt library and the Qt Creator Integrated Development Environment (IDE). The Qt framework is freely available at http://qt-project.org/.

Note

After installing the compiler on Windows, remember to properly set the Path environment variable, adding the path for the compiler's executable, for example, C:\Qt\Qt5.2.1\5.2.1\mingw48_32\bin for the GNU compilers included with the Qt framework. On Windows, the free Rapid Environment Editor tool (available at http://www.rapidee.com) provides a convenient way to change Path and other environment variables.

To manage the build process for the OpenCV library in a compiler-independent way, CMake is the recommended tool. CMake is a free and open source cross-platform tool available at http://www.cmake.org/.

Configuring OpenCV with CMake

Once the sources of the library have been downloaded onto the local disk, you have to configure the makefiles for the compilation process of the library. CMake is the key tool for an easy configuration of OpenCV's installation process. It can be used from the command line or in a more user-friendly way with its Graphical User Interface (GUI) version.

The steps to configure OpenCV with CMake can be summarized as follows:

1. Choose the source (let's call it OPENCV_SRC in what follows) and target (OPENCV_BUILD) directories. The target directory is where the compiled binaries will be located.
2. Mark the Grouped and Advanced checkboxes and click on the Configure button.
3. Choose the desired compiler (for example, GNU default compilers, MSVC, and so on).
4. Set the preferred options and unset those not desired.
5. Click on the Configure button and repeat steps 4 and 5 until no errors are obtained.
6. Click on the Generate button and close CMake.

The following screenshot shows you the main window of CMake with the source and target directories and the checkboxes to group all the available options:

The main window of CMake after the preconfiguration step

Note

For brevity, we use OPENCV_BUILD and OPENCV_SRC in this text to denote the target and source directories of the OpenCV local setup, respectively. Keep in mind that all directories should match your current local configuration.

During the preconfiguration process, CMake detects the compilers present and many other local properties to set the build process of OpenCV. The previous screenshot displays the main CMake window after the preconfiguration process, showing the grouped options in red.

It is possible to leave the default options unchanged and continue the configuration process. However, some convenient options can be set:

BUILD_EXAMPLES: This is set to build some examples using OpenCV.
BUILD_opencv_<module_name>: This is set to include the module (module_name) in the build process.
OPENCV_EXTRA_MODULES_PATH: This is used when you need some extra contributed modules; set the path for the source code of the extra modules here (for example, C:/opencv_contrib-master/modules).
WITH_QT: This is turned on to include the Qt functionality in the library.
WITH_IPP: This option is turned on by default. The current OpenCV 3.0 version includes a subset of the Intel Integrated Performance Primitives (IPP) that speed up the execution time of the library.

Tip

If you compile the new OpenCV 3.0 (beta), be cautious because some unexpected errors have been reported related to the IPP inclusion (that is, with the default value of this option). We recommend that you unset the WITH_IPP option.

If the configuration steps with CMake (loop through steps 4 and 5) don't produce any further errors, it is possible to generate the final makefiles for the build process. The following screenshot shows you the main window of CMake after a generation step without errors:
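The same configuration can also be performed non-interactively with the command-line cmake tool. The following invocation is only an illustrative sketch; the generator name and the option values shown are assumptions that depend on your compiler and on the features you want:

```shell
# Run from the target directory; OPENCV_SRC stands for the source
# directory chosen in step 1 (adjust both paths to your local setup).
OPENCV_BUILD> cmake -G "MinGW Makefiles" -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_IPP=OFF OPENCV_SRC
```

The -G option selects the makefile generator, and each -D option sets one of the configuration variables discussed above, exactly as the checkboxes do in the GUI version.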

Compiling and installing the library

The next step after the generation of makefiles with CMake is the compilation with the proper make tool. This tool is usually executed on the command line (the console) from the target directory (the one set at the CMake configuration step). For example, in Windows, the compilation should be launched from the command line as follows:

OPENCV_BUILD> mingw32-make

This command launches a build process using the makefiles generated by CMake. The whole compilation typically takes several minutes. If the compilation ends without errors, the installation continues with the execution of the following command:

OPENCV_BUILD> mingw32-make install

This command copies the OpenCV binaries to the OPENCV_BUILD\install directory.

If something went wrong during the compilation, we should run CMake again to change the options selected during the configuration. Then, we should regenerate the makefiles.

The installation ends by adding the location of the library binaries (for example, in Windows, the resulting DLL files are located at OPENCV_BUILD\install\x64\mingw\bin) to the Path environment variable. Without this directory in the Path field, the execution of every OpenCV executable will give an error as the library binaries won't be found.

To check the success of the installation process, it is possible to run some of the examples compiled along with the library (if the BUILD_EXAMPLES option was set using CMake). The code samples (written in C++) can be found at OPENCV_BUILD\install\x64\mingw\samples\cpp.

Note

The short instructions given to install OpenCV apply to Windows. A detailed description with the prerequisites for Linux can be read at http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html. Although the tutorial applies to OpenCV 2.0, almost all the information is still valid for Version 3.0.

The structure of OpenCV

Once OpenCV is installed, the OPENCV_BUILD\install directory will be populated with three types of files:

Header files: These are located in the OPENCV_BUILD\install\include subdirectory and are used to develop new projects with OpenCV.
Library binaries: These are static or dynamic libraries (depending on the option selected with CMake) with the functionality of each of the OpenCV modules. They are located in the bin subdirectory (for example, x64\mingw\bin when the GNU compiler is used).
Sample binaries: These are executables with examples that use the libraries. The sources for these samples can be found in the source package (for example, OPENCV_SRC\sources\samples).

OpenCV has a modular structure, which means that the package includes a static or dynamic (DLL) library for each module. The official documentation for each module can be found at http://docs.opencv.org/master/. The main modules included in the package are:

core: This defines the basic functions used by all the other modules and the fundamental data structures, including the important multidimensional array Mat.
highgui: This provides simple user interface (UI) capabilities. Building the library with Qt support (the WITH_QT CMake option) allows UI compatibility with that framework.
imgproc: These are image processing functions that include filtering (linear and nonlinear), geometric transformations, color space conversion, histograms, and so on.
imgcodecs: This is an easy-to-use interface to read and write images.

Note

Pay attention to the changes in modules since OpenCV 3.0, as some functionality has been moved to a new module (for example, the reading and writing images functions were moved from highgui to imgcodecs).

photo: This includes computational photography, including inpainting, denoising, High Dynamic Range (HDR) imaging, and some others.
stitching: This is used for image stitching.
videoio: This is an easy-to-use interface for video capture and video codecs.
video: This supplies the functionality of video analysis (motion estimation, background extraction, and object tracking).
features2d: These are functions for feature detection (corners and planar objects), feature description, feature matching, and so on.
objdetect: These are functions for object detection and instances of predefined detectors (such as faces, eyes, smiles, people, cars, and so on).

Some other modules are calib3d (camera calibration), flann (clustering and search), ml (machine learning), shape (shape distance and matching), superres (superresolution), video (video analysis), and videostab (video stabilization).

Note

As of Version 3.0 beta, the new contributed modules are distributed in a separate package (opencv_contrib-master.zip) that can be downloaded from https://github.com/itseez/opencv_contrib. These modules provide extra features that should be fully understood before using them. For a quick overview of the new functionality in the new release of OpenCV (Version 3.0), refer to the document at http://opencv.org/opencv-3-0-beta.html.

Creating user projects with OpenCV

In this book, we assume that C++ is the main language for programming image processing applications, although interfaces and wrappers for other programming languages are actually provided (for instance, Python, Java, MATLAB/Octave, and some more).

In this section, we explain how to develop applications with OpenCV's C++ API using an easy-to-use cross-platform framework.

General usage of the library

To develop an OpenCV application with C++, we require our code to:

- Include the OpenCV header files with definitions
- Link the OpenCV libraries (binaries) to get the final executable

The OpenCV header files are located in the OPENCV_BUILD\install\include\opencv2 directory, where there is a file (*.hpp) for each of the modules. The inclusion of a header file is done with the #include directive, as shown here:

#include <opencv2/<module_name>/<module_name>.hpp>
// Including the header file for each module used in the code

With this directive, it is possible to include every header file needed by the user program. On the other hand, if the opencv.hpp header file is included, all the header files will be automatically included as follows:

#include <opencv2/opencv.hpp>
// Including all the OpenCV's header files in the code

Note: Remember that all the modules installed locally are defined in the OPENCV_BUILD\install\include\opencv2\opencv_modules.hpp header file, which is generated automatically during the building process of OpenCV.

The use of the #include directive is not always a guarantee of the correct inclusion of the header files, because it is necessary to tell the compiler where to find the include files. This is achieved by passing a special argument with the location of the files (such as -I<location> for GNU compilers).

The linking process requires you to provide the linker with the libraries (dynamic or static) where the required OpenCV functionality can be found. This is usually done with two types of arguments for the linker: the location of the library (such as -L<location> for GNU compilers) and the name of the library (such as -l<module_name>).

Note: You can find the complete available online documentation for GNU GCC and Make at https://gcc.gnu.org/onlinedocs/ and https://www.gnu.org/software/make/manual/.
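Putting the include and link flags together, a typical GNU GCC command line might look like the following sketch. The install path shown here is just a placeholder; adjust it to wherever your OpenCV build was installed, and note that on Windows/MinGW the library names carry a version suffix (as in the .pro file shown later):

```shell
# Compile and link a single-file OpenCV program (paths are examples)
g++ showImage.cpp -o showImage \
    -I/opt/opencv/install/include \
    -L/opt/opencv/install/lib \
    -lopencv_core -lopencv_imgcodecs -lopencv_highgui -lopencv_imgproc
```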

Tools to develop new projects

The main prerequisites to develop our own OpenCV C++ applications are:

- OpenCV header files and library binaries: Of course, we need to compile OpenCV, and the auxiliary libraries are prerequisites for such a compilation. The package should be compiled with the same compiler used to generate the user application.
- A C++ compiler: Some associated tools are convenient, such as the code editor, debugger, project manager, build process manager (for instance, CMake), revision control system (such as Git, Mercurial, SVN, and so on), and class inspector, among others. Usually, these tools are deployed together in a so-called Integrated Development Environment (IDE).
- Any other auxiliary libraries: Optionally, any other auxiliary libraries needed to program the final application, such as graphical, statistical, and so on, will be required.

The most popular available compiler kits to program OpenCV C++ applications are:

- Microsoft Visual C (MSVC): This is only supported on Windows and it is very well integrated with the Visual Studio IDE, although it can also be integrated with other cross-platform IDEs, such as Qt Creator or Eclipse. The versions of MSVC currently compatible with the latest OpenCV release are VC10, VC11, and VC12 (Visual Studio 2010, 2012, and 2013).
- GNU Compiler Collection (GNU GCC): This is a cross-platform compiler system developed by the GNU project. For Windows, this kit is known as MinGW (Minimalist GNU for Windows). The version compatible with the current OpenCV release is GNU GCC 4.8. This kit may be used with several IDEs, such as Qt Creator, Code::Blocks, and Eclipse, among others.

For the examples presented in this book, we used the MinGW 4.8 compiler kit for Windows plus the Qt 5.2.1 library and the Qt Creator IDE (3.0.1). The cross-platform Qt library is required to compile OpenCV with the new UI capabilities provided by such a library.

Note: For Windows, it is possible to download a Qt bundle (including the Qt library, Qt Creator, and the MinGW kit) from http://qt-project.org/. The bundle is approximately 700 MB.

Qt Creator is a cross-platform IDE for C++ that integrates the tools we need to code applications. In Windows, it may be used with MinGW or MSVC. The following screenshot shows you the Qt Creator main window with the different panels and views for an OpenCV C++ project:

The main window of Qt Creator with some views from an OpenCV C++ project

Creating an OpenCV C++ program with Qt Creator

Next, we explain how to create a code project with the Qt Creator IDE. In particular, we apply this description to an OpenCV example.

We can create a project for any OpenCV application using Qt Creator by navigating to File | New File or File | Project… and then navigating to Non-Qt Project | Plain C++ Project. Then, we have to choose a project name and the location at which it will be stored. The next step is to pick a kit (that is, the compiler) for the project (in our case, Desktop Qt 5.2.1 MinGW 32 bit) and the location for the binaries generated. Usually, two possible build configurations (profiles) are used: debug and release. These profiles set the appropriate flags to build and run the binaries.

When a project is created using Qt Creator, two special files (with .pro and .pro.user extensions) are generated to configure the build and run processes. The build process is determined by the kit chosen during the creation of the project. With the Desktop Qt 5.2.1 MinGW 32 bit kit, this process relies on the qmake and mingw32-make tools. Using the *.pro file as the input, qmake generates the makefile that drives the build process for each profile (that is, release and debug). The qmake tool is used from the Qt Creator IDE as an alternative to CMake to simplify the build process of software projects. It automates the generation of makefiles from a few lines of information.

The following lines represent an example of a *.pro file (for example, showImage.pro):

TARGET = showImage
TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle
CONFIG -= qt
SOURCES += \
    showImage.cpp
INCLUDEPATH += C:/opencv300-buildQt/install/include
LIBS += -LC:/opencv300-buildQt/install/x64/mingw/lib \
    -lopencv_core300.dll \
    -lopencv_imgcodecs300.dll \
    -lopencv_highgui300.dll \
    -lopencv_imgproc300.dll

The preceding file illustrates the options that qmake needs to generate the appropriate makefiles to build the binaries for our project. Each line starts with a tag indicating an option (TARGET, CONFIG, SOURCES, INCLUDEPATH, and LIBS), followed by a mark to add (+=) or remove (-=) the value of the option. In this sample project, we use the non-Qt console application. The executable file is showImage.exe (TARGET) and the source file is showImage.cpp (SOURCES). As this project is an OpenCV-based application, the two last tags indicate the location of the header files (INCLUDEPATH) and the OpenCV libraries (LIBS) used by this particular project (core, imgcodecs, highgui, and imgproc). Note that a backslash at the end of a line denotes continuation in the next line.

Note: For a detailed description of the tools (including Qt Creator and qmake) developed within the Qt project, visit http://doc.qt.io/.

Reading and writing image files

Image processing relies on getting an image (for instance, a photograph or a video frame) and "playing" with it by applying signal processing techniques on it to get the desired results. In this section, we show you how to read images from files using the functions supplied by OpenCV.

The basic API concepts

The Mat class is the main data structure that stores and manipulates images in OpenCV. This class is defined in the core module. OpenCV has implemented mechanisms to allocate and release memory automatically for these data structures. However, the programmer should still take special care when data structures share the same buffer memory. For instance, the assignment operator does not copy the memory content from one object (Mat A) to another (Mat B); it only copies the reference (the memory address of the content). Then, a change in one object (A or B) affects both objects. To duplicate the memory content of a Mat object, the Mat::clone() member function should be used.

Note: Many functions in OpenCV process dense single or multichannel arrays, usually using the Mat class. However, in some cases, a different data type may be convenient, such as std::vector<>, Matx<>, Vec<>, or Scalar. For this purpose, OpenCV provides the proxy classes InputArray and OutputArray, which allow any of the previous types to be used as parameters for functions.

The Mat class is used for dense n-dimensional single or multichannel arrays. It can actually store real or complex-valued vectors and matrices, colored or grayscale images, histograms, point clouds, and so on.

There are many different ways to create a Mat object, the most popular being the constructor where the size and type of the array are specified as follows:

Mat(nrows, ncols, type, fillValue)

The initial value for the array elements might be set by the Scalar class as a typical four-element vector (for each RGB and transparency component of the image stored in the array). Next, we show you a usage example of Mat as follows:

Mat img_A(4, 4, CV_8U, Scalar(255));
// White image:
// 4 x 4 single-channel array with 8 bits of unsigned integers
// (up to 255 values, valid for a grayscale image, for example,
// 255=white)

The DataType class defines the primitive data types for OpenCV. The primitive data types can be bool, unsigned char, signed char, unsigned short, signed short, int, float, double, or a tuple of values of one of these primitive types. Any primitive type can be defined by an identifier in the following form:

CV_<bit_depth>{U|S|F}C(<number_of_channels>)

In the preceding code, U, S, and F stand for unsigned, signed, and float, respectively. For the single channel arrays, the following enumeration is applied, describing the data types:

enum {CV_8U=0, CV_8S=1, CV_16U=2, CV_16S=3, CV_32S=4, CV_32F=5, CV_64F=6};

Note: Here, it should be noted that these three declarations are equivalent: CV_8U, CV_8UC1, and CV_8UC(1). The single-channel declaration fits well for integer arrays devoted to grayscale images, whereas the three-channel declaration of an array is more appropriate for images with three components (for example, RGB, BRG, HSV, and so on). For linear algebra operations, the arrays of type float (F) might be used.

We can define all of the preceding data types for multichannel arrays (up to 512 channels). The following screenshots illustrate an image's internal representation with one single channel (CV_8U, grayscale) and the same image represented with three channels (CV_8UC3, RGB). These screenshots are taken by zooming in on an image displayed in the window of an OpenCV executable (the showImage example):

An 8-bit representation of an image in RGB color and grayscale

Note: It is important to notice that to properly save an RGB image with OpenCV functions, the image must be stored in memory with its channels ordered as BGR. In the same way, when an RGB image is read from a file, it is stored in memory with its channels in a BGR order. Moreover, a supplementary fourth channel (alpha) is needed to manipulate images with three channels, RGB, plus a transparency. For RGB images, a larger integer value means a brighter pixel, or more transparency for the alpha channel.

All OpenCV classes and functions are in the cv namespace, and consequently, we will have the following two options in our source code:

- Add the using namespace cv declaration after including the header files (this is the option used in all the code examples in this book).
- Append the cv:: prefix to all the OpenCV classes, functions, and data structures that we use. This option is recommended if the external names provided by OpenCV conflict with the often-used Standard Template Library (STL) or other libraries.

Image file-supported formats

OpenCV supports the most common image formats. However, some of them need (freely available) third-party libraries. The main formats supported by OpenCV are:

- Windows bitmaps (*.bmp, *.dib)
- Portable image formats (*.pbm, *.pgm, *.ppm)
- Sun rasters (*.sr, *.ras)

The formats that need auxiliary libraries are:

- JPEG (*.jpeg, *.jpg, *.jpe)
- JPEG 2000 (*.jp2)
- Portable Network Graphics (*.png)
- TIFF (*.tiff, *.tif)
- WebP (*.webp)

In addition to the preceding listed formats, the OpenCV 3.0 version includes a driver for the formats (NITF, DTED, SRTM, and others) supported by the Geographic Data Abstraction Library (GDAL), set with the CMake option WITH_GDAL. Notice that the GDAL support has not been extensively tested on Windows OSes yet. In Windows and OS X, the codecs shipped with OpenCV are used by default (libjpeg, libjasper, libpng, and libtiff). Then, in these OSes, it is possible to read the JPEG, PNG, and TIFF formats. Linux (and other Unix-like open source OSes) looks for codecs installed in the system. The codecs can be installed before OpenCV, or else the libraries can be built from the OpenCV package by setting the proper options in CMake (for example, BUILD_JASPER, BUILD_JPEG, BUILD_PNG, and BUILD_TIFF).

The example code

To illustrate how to read and write image files with OpenCV, we will now describe the showImage example. The example is executed from the command line with the corresponding output windows as follows:

<bin_dir>\showImage.exe fruits.jpg fruits_bw.jpg

The output window for the showImage example

In this example, two file names are given as arguments. The first one is the input image file to be read. The second one is the image file to be written with a grayscale copy of the input image. Next, we show you the source code and its explanation:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    Mat in_image, out_image;

    // Usage: <cmd> <file_in> <file_out>
    if (argc != 3) {
        cout << "Usage: <cmd> <file_in> <file_out>\n";
        return -1;
    }
    // Read original image
    in_image = imread(argv[1], IMREAD_UNCHANGED);
    if (in_image.empty()) {
        // Check whether the image is read or not
        cout << "Error! Input image cannot be read...\n";
        return -1;
    }
    // Creates two windows with the names of the images
    namedWindow(argv[1], WINDOW_AUTOSIZE);
    namedWindow(argv[2], WINDOW_AUTOSIZE);
    // Shows the image into the previously created window
    imshow(argv[1], in_image);
    cvtColor(in_image, out_image, COLOR_BGR2GRAY);
    imshow(argv[2], out_image);
    cout << "Press any key to exit...\n";
    waitKey(); // Wait for key press
    // Writing image
    imwrite(argv[2], out_image);
    return 0;
}

Here, we use the #include directive with the opencv.hpp header file that, in fact, includes all the OpenCV header files. By including this single file, no more files need to be included. After declaring the use of the cv namespace, all the variables and functions inside this namespace don't need the cv:: prefix. The first thing to do in the main function is to check the number of arguments passed in the command line. Then, a help message is displayed if an error occurs.

Reading image files

If the number of arguments is correct, the image file is read into the Mat in_image object with the imread(argv[1], IMREAD_UNCHANGED) function, where the first parameter is the first argument (argv[1]) passed in the command line and the second parameter is a flag (IMREAD_UNCHANGED), which means that the image stored into the memory object should be unchanged. The imread function determines the type of image (codec) from the file content rather than from the file extension.

The prototype for the imread function is as follows:

Mat imread(const String& filename, int flags=IMREAD_COLOR)

The flag specifies the color of the image read, and the flags are defined and explained by the following enumeration in the imgcodecs.hpp header file:

enum { IMREAD_UNCHANGED = -1, // 8 bit, color or not
       IMREAD_GRAYSCALE = 0,  // 8 bit, gray
       IMREAD_COLOR     = 1,  // unchanged depth, color
       IMREAD_ANYDEPTH  = 2,  // any depth, unchanged color
       IMREAD_ANYCOLOR  = 4,  // unchanged depth, any color
       IMREAD_LOAD_GDAL = 8   // Use gdal driver
};

Note: As of Version 3.0 of OpenCV, the imread function is in the imgcodecs module and not in highgui as in OpenCV 2.x.

Tip: As several functions and declarations are moved into OpenCV 3.0, it is possible to get some compilation errors, as one or more declarations (symbols and/or functions) are not found by the linker. To figure out where (*.hpp) a symbol is defined and which library to link, we recommend the following trick using the Qt Creator IDE:

- Add the #include <opencv2/opencv.hpp> declaration to the code.
- Press the F2 function key with the mouse cursor over the symbol or function; this opens the *.hpp file where the symbol or function is declared.

After the input image file is read, check to see whether the operation succeeded. This check is achieved with the in_image.empty() member function. If the image file is read without errors, two windows are created to display the input and output images, respectively. The creation of windows is carried out with the following function:

void namedWindow(const String& winname, int flags=WINDOW_AUTOSIZE)

OpenCV windows are identified by a univocal name in the program. The flags' definition and their explanation are given by the following enumeration in the highgui.hpp header file:

enum { WINDOW_NORMAL = 0x00000000,
       // the user can resize the window (no constraint)
       // also used to switch a fullscreen window to a normal size
       WINDOW_AUTOSIZE = 0x00000001,
       // the user cannot resize the window,
       // the size is constrained by the image displayed
       WINDOW_OPENGL = 0x00001000, // window with opengl support
       WINDOW_FULLSCREEN = 1,
       WINDOW_FREERATIO = 0x00000100,
       // the image expands as much as it can (no ratio constraint)
       WINDOW_KEEPRATIO = 0x00000000
       // the ratio of the image is respected
};

The creation of a window does not show anything on screen. The function (belonging to the highgui module) to display an image in a window is:

void imshow(const String& winname, InputArray mat)

The image (mat) is shown with its original size if the window (winname) was created with the WINDOW_AUTOSIZE flag.

In the showImage example, the second window shows a grayscale copy of the input image. To convert a color image to grayscale, the cvtColor function from the imgproc module is used. This function can actually be used to change the image color space.

Any window created in a program can be resized and moved from its default settings. When any window is no longer required, it should be destroyed in order to release its resources. This resource liberation is done implicitly at the end of a program, as in the example.

Event handling into the intrinsic loop

If we do nothing more after showing an image on a window, surprisingly, the image will not be shown at all. After showing an image on a window, we should start a loop to fetch and handle events related to user interaction with the window. Such a task is carried out by the following function (from the highgui module):

int waitKey(int delay=0)

This function waits for a key pressed during a number of milliseconds (delay > 0), returning the code of the key, or -1 if the delay ends without a key pressed. If delay is 0 or negative, the function waits forever until a key is pressed.

Note: Remember that the waitKey function only works if there is at least one created and active window.

Writing image files

Another important function in the imgcodecs module is:

bool imwrite(const String& filename, InputArray img, const vector<int>& params=vector<int>())

This function saves the image (img) into a file (filename), the third optional argument being a vector of property-value pairs specifying the parameters of the codec (leave it empty to use the default values). The codec is determined by the extension of the file.

Note: For a detailed list of codec properties, take a look at the imgcodecs.hpp header file and the OpenCV API reference at http://docs.opencv.org/master/modules/refman.html.

Reading and writing video files

Rather than still images, a video deals with moving images. The sources of video can be a dedicated camera, a webcam, a video file, or a sequence of image files. In OpenCV, the VideoCapture and VideoWriter classes provide an easy-to-use C++ API for the capturing and recording tasks involved in video processing.

The example code

The recVideo example is a short snippet of code where you can see how to use a default camera as a capture device to grab frames, convert them to grayscale, and save each new converted frame to a file. Also, two windows are created to simultaneously show you the original frame and the processed one. The example code is:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int, char**)
{
    Mat in_frame, out_frame;
    const char win1[]="Grabbing...", win2[]="Recording...";
    double fps=30; // Frames per second
    char file_out[]="recorded.avi";

    VideoCapture inVid(0); // Open default camera
    if (!inVid.isOpened()) { // Check error
        cout << "Error! Camera not ready...\n";
        return -1;
    }
    // Gets the width and height of the input video
    int width  = (int)inVid.get(CAP_PROP_FRAME_WIDTH);
    int height = (int)inVid.get(CAP_PROP_FRAME_HEIGHT);
    VideoWriter recVid(file_out,
        VideoWriter::fourcc('M','S','V','C'),
        fps, Size(width, height),
        false); // false: we record single-channel (grayscale) frames
    if (!recVid.isOpened()) {
        cout << "Error! Video file not opened...\n";
        return -1;
    }
    // Create two windows for orig. and final video
    namedWindow(win1);
    namedWindow(win2);
    while (true) {
        // Read frame from camera (grabbing and decoding)
        inVid >> in_frame;
        // Convert the frame to grayscale
        cvtColor(in_frame, out_frame, COLOR_BGR2GRAY);
        // Write frame to video file (encoding and saving)
        recVid << out_frame;
        imshow(win1, in_frame);  // Show frame in window
        imshow(win2, out_frame); // Show frame in window
        if (waitKey(1000/fps) >= 0)
            break;
    }
    inVid.release(); // Close camera
    return 0;
}

In this example, the following functions deserve a quick review:

- double VideoCapture::get(int propId): This returns the value of the specified property for a VideoCapture object. A complete list of properties based on DC1394 (IEEE 1394 Digital Camera Specifications) is included with the videoio.hpp header file.
- static int VideoWriter::fourcc(char c1, char c2, char c3, char c4): This concatenates four characters to a fourcc code. In the example, MSVC stands for Microsoft Video (only available for Windows). The list of valid fourcc codes is published at http://www.fourcc.org/codecs.php.
- bool VideoWriter::isOpened(): This returns true if the object for writing the video was successfully initialized. For instance, using an improper codec produces an error.

Tip: Be cautious; the valid fourcc codes in a system depend on the locally installed codecs. To know the installed fourcc codecs available in the local system, we recommend the open source tool MediaInfo, available for many platforms at http://mediaarea.net/en/MediaInfo.

- VideoCapture& VideoCapture::operator>>(Mat& image): This grabs, decodes, and returns the next frame. This method has the equivalent bool VideoCapture::read(OutputArray image) function. It can be used rather than using the VideoCapture::grab() function followed by VideoCapture::retrieve().
- VideoWriter& VideoWriter::operator<<(const Mat& image): This writes the next frame. This method has the equivalent void VideoWriter::write(const Mat& image) function.

In this example, there is a reading/writing loop where the window events are fetched and handled as well. The waitKey(1000/fps) function call is in charge of that; however, in this case, 1000/fps indicates the number of milliseconds to wait before returning to the external loop. Although not exact, an approximate measure of frames per second is obtained for the recorded video.

- void VideoCapture::release(): This releases the video file or capturing device. Although not explicitly necessary in this example, we include it to illustrate its use.

User-interactions tools

In the previous sections, we explained how to create (namedWindow) a window to display (imshow) an image and fetch/handle events (waitKey). The examples we provide show you a very easy method for user interaction with OpenCV applications through the keyboard. The waitKey function returns the code of a key pressed before a timeout expires.

Fortunately, OpenCV provides more flexible ways for user interaction, such as trackbars and mouse interaction, which can be combined with some drawing functions to provide a richer user experience. Moreover, if OpenCV is locally compiled with Qt support (the WITH_QT option of CMake), a set of new functions are available to program an even better UI.

In this section, we provide a quick review of the available functionality to program user interfaces in an OpenCV project with Qt support. We illustrate this review of OpenCV UI support with the next example, named showUI.

The example shows you a color image in a window, illustrating how to use some basic elements to enrich the user interaction. The following screenshot displays the UI elements created in the example:

The output window for the showUI example

The source code of the showUI example (without the callback functions) is as follows:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace std;
using namespace cv;

// Callback functions declarations
void cbMouse(int event, int x, int y, int flags, void*);
void tb1_Callback(int value, void*);
void tb2_Callback(int value, void*);
void checkboxCallBack(int state, void*);
void radioboxCallBack(int state, void* id);
void pushbuttonCallBack(int, void* font);

// Global definitions and variables
Mat orig_img, tmp_img;
const char main_win[]="main_win";
char msg[50];

int main(int, char* argv[]) {
    const char track1[]="TrackBar 1";
    const char track2[]="TrackBar 2";
    const char checkbox[]="CheckBox";
    const char radiobox1[]="RadioBox1";
    const char radiobox2[]="RadioBox2";
    const char pushbutton[]="PushButton";
    int tb1_value = 50; // Initial value of trackbar 1
    int tb2_value = 25; // Initial value of trackbar 2

    orig_img = imread(argv[1]); // Open and read the image
    if (orig_img.empty()) {
        cout << "Error!!! Image cannot be loaded..." << endl;
        return -1;
    }
    namedWindow(main_win); // Creates main window
    // Creates a font for adding text to the image
    QtFont font = fontQt("Arial", 20, Scalar(255,0,0,0),
                         QT_FONT_BLACK, QT_STYLE_NORMAL);
    // Creation of CallBack functions
    setMouseCallback(main_win, cbMouse, NULL);
    createTrackbar(track1, main_win, &tb1_value,
                   100, tb1_Callback);
    createButton(checkbox, checkboxCallBack, 0,
                 QT_CHECKBOX);
    // Passing values (font) to the CallBack
    createButton(pushbutton, pushbuttonCallBack,
                 (void*)&font, QT_PUSH_BUTTON);
    createTrackbar(track2, NULL, &tb2_value,
                   50, tb2_Callback);
    // Passing values to the CallBack
    createButton(radiobox1, radioboxCallBack,
                 (void*)radiobox1, QT_RADIOBOX);
    createButton(radiobox2, radioboxCallBack,
                 (void*)radiobox2, QT_RADIOBOX);
    imshow(main_win, orig_img); // Shows original image
    cout << "Press any key to exit..." << endl;
    waitKey(); // Infinite loop with handle for events
    return 0;
}

When OpenCV is built with Qt support, every window created through the highgui module shows a default toolbar (see the preceding figure) with options (from left to right) for panning, zooming, saving, and opening the properties window.

Additional to the mentioned toolbar (only available with Qt), in the next subsections, we comment on the different UI elements created in the example and the code to implement them.

Trackbars

Trackbars are created with the createTrackbar(const String& trackbarname, const String& winname, int* value, int count, TrackbarCallback onChange=0, void* userdata=0) function in the specified window (winname), with a linked integer value (value), a maximum value (count), an optional callback function (onChange) to be called on changes of the slider, and an argument (userdata) to the callback function. The callback function itself gets two arguments: value (selected by the slider) and a pointer to userdata (optional). With Qt support, if no window is specified, the trackbar is created in the properties window. In the showUI example, we create two trackbars: the first in the main window and the second one in the properties window. The code for the trackbar callbacks is:

void tb1_Callback(int value, void*) {
    sprintf(msg, "Trackbar 1 changed. New value=%d", value);
    displayOverlay(main_win, msg);
    return;
}

void tb2_Callback(int value, void*) {
    sprintf(msg, "Trackbar 2 changed. New value=%d", value);
    displayStatusBar(main_win, msg, 1000);
    return;
}

Mouse interaction

Mouse events are generated whenever the user interacts with the mouse (moving and clicking). By setting the proper handler or callback functions, it is possible to implement actions such as select, drag and drop, and so on. The callback function (onMouse) is enabled with the setMouseCallback(const String& winname, MouseCallback onMouse, void* userdata=0) function in the specified window (winname), with an optional argument (userdata).

The source code for the callback function that handles the mouse events is:

void cbMouse(int event, int x, int y, int flags, void*) {
    // Static vars hold values between calls
    static Point p1, p2;
    static bool p2set = false;

    // Left mouse button pressed
    if (event == EVENT_LBUTTONDOWN) {
        p1 = Point(x, y); // Set orig. point
        p2set = false;
    } else if (event == EVENT_MOUSEMOVE &&
               flags == EVENT_FLAG_LBUTTON) {
        // Check moving mouse and left button down
        // Check out of bounds
        if (x > orig_img.size().width)
            x = orig_img.size().width;
        else if (x < 0)
            x = 0;
        // Check out of bounds
        if (y > orig_img.size().height)
            y = orig_img.size().height;
        else if (y < 0)
            y = 0;
        p2 = Point(x, y); // Set final point
        p2set = true;
        // Copy orig. to temp. image
        orig_img.copyTo(tmp_img);
        // Draws rectangle
        rectangle(tmp_img, p1, p2, Scalar(0, 0, 255));
        // Draw temporal image with rect.
        imshow(main_win, tmp_img);
    } else if (event == EVENT_LBUTTONUP
               && p2set) {
        // Check if left button is released
        // and selected an area
        // Set subarray on orig. image
        // with selected rectangle
        Mat submat = orig_img(Rect(p1, p2));
        // Here some processing for the submatrix
        //...
        // Mark the boundaries of selected rectangle
        rectangle(orig_img, p1, p2, Scalar(0, 0, 255), 2);
        imshow(main_win, orig_img);
    }
    return;
}

In the showUI example, the mouse events are used to control, through a callback function (cbMouse), the selection of a rectangular region by drawing a rectangle around it. In the example, this function is declared as void cbMouse(int event, int x, int y, int flags, void*), the arguments being the position of the pointer (x, y) where the event occurs, the condition when the event occurs (flags), and optionally, user data.

Note: The available events, flags, and their corresponding definition symbols can be found in the highgui.hpp header file.

Buttons

OpenCV (only with Qt support) allows you to create three types of buttons: checkbox (QT_CHECKBOX), radiobox (QT_RADIOBOX), and push button (QT_PUSH_BUTTON). These types of button can be used, respectively, to set options, set exclusive options, and take actions on push. The three are created with the createButton(const String& button_name, ButtonCallback on_change, void* userdata=0, int type=QT_PUSH_BUTTON, bool init_state=false) function in the properties window, arranged in a button bar after the last trackbar created in this window. The arguments for the button are its name (button_name), the callback function called on the status change (on_change), and optionally, an argument (userdata) to the callback, the type of button (type), and the initial state of the button (init_state).

Next, we show you the source code for the callback functions corresponding to the buttons in the example:

void checkboxCallBack(int state, void*) {
    sprintf(msg, "Checkbox changed. New state=%d", state);
    displayStatusBar(main_win, msg);
    return;
}

void radioboxCallBack(int state, void* id) {
    // Id of the radiobox passed to the callBack
    sprintf(msg, "%s changed. New state=%d",
            (char*)id, state);
    displayStatusBar(main_win, msg);
    return;
}

void pushbuttonCallBack(int, void* font) {
    // Add text to the image
    addText(orig_img, "Push button clicked",
            Point(50, 50), *((QtFont*)font));
    imshow(main_win, orig_img); // Shows original image
    return;
}

The callback function for a button gets two arguments: its status and, optionally, a pointer to user data. In the showUI example, we show you how to pass an identifier (radioboxCallBack(int state, void* id)) to identify the button and a more complex object (pushbuttonCallBack(int, void* font)).

Drawing and displaying text

A very efficient way to communicate the results of some image processing to the user is by drawing shapes and/or displaying text over the figure being processed. Through the imgproc module, OpenCV provides some convenient functions to achieve such tasks as putting text, drawing lines, circles, ellipses, rectangles, polygons, and so on. The showUI example illustrates how to select a rectangular region over an image and draw a rectangle to mark the selected area. The following function draws a rectangle, defined by two points (pt1, pt2), over an image (img) with the specified color and other optional parameters, such as the thickness (negative for a filled shape) and the type of lines:

void rectangle(InputOutputArray img, Point pt1, Point pt2, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0)

Additional to shapes' drawing support, the imgproc module provides a function to put text over an image:

void putText(InputOutputArray img, const String& text, Point org, int fontFace, double fontScale, Scalar color, int thickness=1, int lineType=LINE_8, bool bottomLeftOrigin=false)

Note: The available font faces for the text can be inspected in the core.hpp header file.

Qt support, in the highgui module, adds some additional ways to show text on the main window of an OpenCV application:

- Text over the image: We get this result using the addText(const Mat& img, const String& text, Point org, const QtFont& font) function. This function allows you to select the origin point for the displayed text, with a font previously created with the fontQt(const String& nameFont, int pointSize=-1, Scalar color=Scalar::all(0), int weight=QT_FONT_NORMAL, int style=QT_STYLE_NORMAL, int spacing=0) function. In the showUI example, this function is used to put text over the image when the push button is clicked on, calling the addText function inside the callback function.
- Text on the status bar: Using the displayStatusBar(const String& winname, const String& text, int delayms=0) function, we display text in the status bar for a number of milliseconds given by the last argument (delayms). In the showUI example, this function is used (in the callback functions) to display an informative text when the buttons and trackbar of the properties window change their state.
- Text overlaid on the image: Using the displayOverlay(const String& winname, const String& text, int delayms=0) function, we display text overlaid on the image for a number of milliseconds given by the last argument. In the showUI example, this function is used (in the callback function) to display informative text when the main window trackbar changes its value.

Summary

In this chapter, you got a quick review of the main purpose of the OpenCV library and its modules. You learned the foundations of how to compile, install, and use the library in your local system to develop C++ OpenCV applications with Qt support. To develop your own software, we explained how to start with the free Qt Creator IDE and the GNU compiler kit.

To start with, full code examples were provided in the chapter. These examples showed you how to read and write images and video. Finally, the chapter gave you an example of displaying some easy-to-implement user interface capabilities in OpenCV programs, such as trackbars, buttons, putting text on images, drawing shapes, and so on.

The next chapter will be devoted to establishing the main image processing tools and tasks that will set the basis for the remaining chapters.

Chapter 2. Establishing Image Processing Tools

This chapter describes the main data structures and basic procedures that will be used in subsequent chapters:

- Image types
- Pixel access
- Basic operations with images
- Histograms

These are some of the most frequent operations that we will have to perform on images. Most of the functionality covered here is in the core module of the library.

Basic data types

The fundamental data type in OpenCV is Mat, as it is used to store images. Basically, an image is stored as a header plus a memory zone containing the pixel data. Images have a number of channels. Grayscale images have a single channel, while color images typically have three for the red, green, and blue components (although OpenCV stores them in a reverse order, that is, blue, green, and red). A fourth transparency (alpha) channel can also be used. The number of channels for an img image can be retrieved with img.channels().

Each pixel in an image is stored using a number of bits. This is called the image depth. For grayscale images, pixels are commonly stored in 8 bits, thus allowing 256 gray levels (integer values 0 to 255). For color images, each pixel is stored in three bytes, one per color channel. In some operations, it will be necessary to store pixels in a floating-point format. The image depth can be obtained with img.depth(), and the values returned are:

- CV_8U, 8-bit unsigned integers (0..255)
- CV_8S, 8-bit signed integers (-128..127)
- CV_16U, 16-bit unsigned integers (0..65,535)
- CV_16S, 16-bit signed integers (-32,768..32,767)
- CV_32S, 32-bit signed integers (-2,147,483,648..2,147,483,647)
- CV_32F, 32-bit floating-point numbers
- CV_64F, 64-bit floating-point numbers

Note that the most common image depth will be CV_8U for both grayscale and color images. It is possible to convert from one depth to another using the convertTo method:

Mat img = imread("lena.png", IMREAD_GRAYSCALE);
Mat fp;
img.convertTo(fp, CV_32F);

It is common to perform an operation on floating-point images (that is, pixel values are the result of a mathematical operation). If we use imshow() to display this image, we will not see anything meaningful. In this case, we have to convert pixels to the integer range 0..255. The convertTo function implements a linear transformation and has two additional parameters, alpha and beta, which represent a scale factor and a delta value to add, respectively. This means that each pixel p is converted with:

newp = alpha * p + beta

This can be used to display floating-point images properly. Assuming that the img image has m and M minimum and maximum values (refer to the following code to see how to obtain these values), we would use this:

Mat m1 = Mat(100, 100, CV_32FC1);
randu(m1, 0, 1e6); // random values between 0 and 1e6
imshow("original", m1);
double minRange, maxRange;
Point mLoc, MLoc;
minMaxLoc(m1, &minRange, &maxRange, &mLoc, &MLoc);
Mat img1;
m1.convertTo(img1, CV_8U, 255.0/(maxRange - minRange),
             -minRange*255.0/(maxRange - minRange));
imshow("result", img1);

This code maps the range of the result image values to the range 0-255. The following image shows you the result of running the code:

The result of convertTo (note that the image on the left-hand side is displayed as white)
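The alpha and beta coefficients above can be checked outside OpenCV. The following sketch (plain C++, no OpenCV dependency; the helper name normalize_to_u8 is ours, made up for illustration) applies newp = alpha*p + beta with alpha = 255/(M-m) and beta = -m*alpha:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Map float pixels in [m, M] linearly onto [0, 255], as convertTo does
// with alpha = 255/(M - m) and beta = -m*alpha. Hypothetical helper.
std::vector<unsigned char> normalize_to_u8(const std::vector<float>& pix) {
    float m = *std::min_element(pix.begin(), pix.end());
    float M = *std::max_element(pix.begin(), pix.end());
    double alpha = 255.0 / (M - m);
    double beta  = -m * alpha;
    std::vector<unsigned char> out;
    for (float p : pix)
        out.push_back(static_cast<unsigned char>(std::lround(alpha * p + beta)));
    return out;
}
```

With these coefficients, the minimum value always maps to 0 and the maximum to 255, which is what makes a floating-point image displayable.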

The image size can be obtained with the rows and cols attributes. There is also a size attribute that retrieves both:

MatSize s = img.size;
int r = s[0];
int c = s[1];

Apart from the image itself, other data types are very common; refer to the following table:

(Small) vector — type keyword VecAB, where A can be 2, 3, 4, 5, or 6, and B can be b, s, i, f, or d. Example:

Vec3b rgb;
rgb[0] = 255;

(Up to 4) scalars — type keyword Scalar. Example:

Scalar a;
a[0] = 0;
a[1] = 0;

Point — type keyword PointAB, where A can be 2 or 3 and B can be i, f, or d. Example:

Point3d p;
p.x = 0;
p.y = 0;
p.z = 0;

Size — type keyword Size. Example:

Size s;
s.width = 30;
s.height = 40;

Rectangle — type keyword Rect. Example:

Rect r;
r.x = r.y = 0;
r.width = r.height = 100;

Some of these types have additional operations. For example, we can check whether a point lies inside a rectangle:

p.inside(r)

The p and r arguments are a (two-dimensional) point and a rectangle, respectively. Note that in any case, the preceding table is not exhaustive; OpenCV provides many more support structures with associated methods.

Pixel-level access

To process images, we have to know how to access each pixel independently. OpenCV provides a number of ways to do this. In this section, we cover two methods; the first one is easy for the programmer, while the second one is more efficient.

The first method uses the at<> template function. In order to use it, we have to specify the type of matrix cells, such as in this short example:

Mat src1 = imread("lena.jpg", IMREAD_GRAYSCALE);
uchar pixel1 = src1.at<uchar>(0, 0);
cout << "Value of pixel (0,0): " << (unsigned int)pixel1 << endl;
Mat src2 = imread("lena.jpg", IMREAD_COLOR);
Vec3b pixel2 = src2.at<Vec3b>(0, 0);
cout << "B component of pixel (0,0): " << (unsigned int)pixel2[0] << endl;

The example reads an image in both grayscale and color and accesses the first pixel at (0, 0). In the first case, the pixel type is unsigned char (that is, uchar). In the second case, when the image is read in full color, we have to use the Vec3b type, which refers to a triplet of unsigned chars. Of course, the at<> function can also appear on the left-hand side of an assignment, that is, to change the value of a pixel.

The following is another short example in which a floating-point matrix is initialized to the Pi value using this method:

Mat M(200, 200, CV_64F);
for (int i = 0; i < M.rows; i++)
    for (int j = 0; j < M.cols; j++)
        M.at<double>(i, j) = CV_PI;

Note that the at<> method is not very efficient as it has to calculate the exact memory position from the pixel row and column. This can be very time consuming when we process the whole image pixel by pixel. The second method uses the ptr function, which returns a pointer to a specific image row. The following snippet obtains the pixel value of each pixel in a color image:

uchar R, G, B;
for (int i = 0; i < src2.rows; i++)
{
    Vec3b* pixrow = src2.ptr<Vec3b>(i);
    for (int j = 0; j < src2.cols; j++)
    {
        B = pixrow[j][0];
        G = pixrow[j][1];
        R = pixrow[j][2];
    }
}

In the example above, ptr is used to get a pointer to the first pixel in each row. Using this pointer, we can now access each column in the innermost loop.

Measuring the time

Processing images takes time (comparably much more than the time it takes to process 1D data). Often, processing time is the crucial factor that decides whether a solution is practical or not. OpenCV provides two functions to measure the elapsed time: getTickCount() and getTickFrequency(). You'll use them like this:

double t0 = (double)getTickCount();
// your stuff here ...
double elapsed = ((double)getTickCount() - t0) / getTickFrequency();

Here, elapsed is in seconds.
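Outside OpenCV, the same measurement can be done with the standard library; this is just an illustrative alternative, not what the book's examples use:

```cpp
#include <chrono>

// Measure elapsed wall-clock time around a chunk of work, in seconds,
// using std::chrono instead of getTickCount()/getTickFrequency().
double time_work() {
    auto t0 = std::chrono::steady_clock::now();
    volatile long acc = 0;                     // your stuff here (throwaway work)
    for (long i = 0; i < 1000000; i++) acc += i;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();  // seconds
}
```

steady_clock is preferred over system_clock for intervals because it is never adjusted backwards while the program runs.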

Common operations with images

The following table summarizes the most typical operations with images:

Set matrix values:
img.setTo(0);               // for 1-channel img
img.setTo(Scalar(B, G, R)); // 3-channel img

MATLAB-style matrix initialization:
Mat m1 = Mat::eye(100, 100, CV_64F);
Mat m3 = Mat::zeros(100, 100, CV_8UC1);
Mat m2 = Mat::ones(100, 100, CV_8UC1)*255;

Random initialization:
Mat m1 = Mat(100, 100, CV_8UC1);
randu(m1, 0, 255);

Create a copy of the matrix:
Mat img1 = img.clone();

Create a copy of the matrix (with the mask):
img.copyTo(img1, mask);

Reference a submatrix (the data is not copied):
Mat img1 = img(Range(r1, r2), Range(c1, c2));

Image crop:
Rect roi(x, y, width, height);
Mat img1 = img(roi).clone(); // data copied

Resize image:
resize(img, img1, Size(), 0.5, 0.5); // decimate by a factor of 2

Flip image:
flip(imgsrc, imgdst, code);
// code = 0 => vertical flipping
// code > 0 => horizontal flipping
// code < 0 => vertical & horizontal flipping

Split channels:
Mat channel[3];
split(img, channel);
imshow("B", channel[0]); // show blue

Merge channels:
merge(channel, 3, img);

Count nonzero pixels:
int nz = countNonZero(img);

Minimum and maximum:
double m, M;
Point mLoc, MLoc;
minMaxLoc(img, &m, &M, &mLoc, &MLoc);

The mean pixel value:
Scalar m, stdd;
meanStdDev(img, m, stdd);
uint mean_pxl = m.val[0];

Check whether the image data is null:
if (img.empty())
    cout << "couldn't load image";

Arithmetic operations

Arithmetic operators are overloaded. This means that we can operate on Mat images like we can in this example:

imgblend = 0.2*img1 + 0.8*img2;

In OpenCV, the result value of an operation is subject to the so-called saturation arithmetic. This means that the final value is clipped to the valid range; for 8-bit images, results below 0 become 0 and results above 255 become 255.
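Saturation can be illustrated without OpenCV (in the library, cv::saturate_cast<uchar> performs this clamping); the helper below is a hand-rolled stand-in for illustration only:

```cpp
#include <cassert>

// Clamp an integer result into the 0..255 range, mimicking OpenCV's
// saturation arithmetic for 8-bit images.
unsigned char saturate_u8(int v) {
    if (v < 0)   return 0;    // negative results saturate to 0
    if (v > 255) return 255;  // overflow saturates to 255
    return static_cast<unsigned char>(v);
}
```

This is why, for example, subtracting a bright image from a dark one yields 0 rather than wrapping around to a large value.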

Bitwise operations bitwise_and(), bitwise_or(), bitwise_xor(), and bitwise_not() can be very useful when working with masks. Masks are binary images that indicate the pixels in which an operation is to be performed (instead of the whole image). The following bitwise_and example shows you how to use the AND operation to crop part of an image:

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat img1 = imread("lena.png", IMREAD_GRAYSCALE);
    if (img1.empty())
    {
        cout << "Cannot load image!" << endl;
        return -1;
    }
    imshow("Original", img1); // Original

    // Create mask image
    Mat mask(img1.rows, img1.cols, CV_8UC1, Scalar(0, 0, 0));
    circle(mask, Point(img1.cols/2, img1.rows/2), 150, 255, -1);
    imshow("Mask", mask);

    // perform AND
    Mat r;
    bitwise_and(img1, mask, r);

    // fill outside with white
    const uchar white = 255;
    for (int i = 0; i < r.rows; i++)
        for (int j = 0; j < r.cols; j++)
            if (!mask.at<uchar>(i, j))
                r.at<uchar>(i, j) = white;
    imshow("Result", r);
    waitKey(0);
    return 0;
}

After reading and displaying the input image, we create a mask by drawing a filled white circle. This mask is used in the AND operation. The logical operation is only applied in those pixels in which the mask value is not zero; other pixels are not affected. Finally, in this example, we fill the outer part of the result image (that is, outside the circle) with white. This is done using one of the pixel access methods explained previously. See the resulting images in the following screenshot:

The result of the bitwise_and example

Next, another cool example is shown in which we estimate the value of Pi. Let's consider a square and its enclosed circle:

For a square of side 2r and its enclosed circle of radius r, their areas are given by:

A_square = (2r)^2 = 4r^2
A_circle = Pi * r^2

From this, we have:

Pi = 4 * A_circle / A_square

Let's assume that we have a square image of unknown side length and an enclosed circle. We can estimate the area of the enclosed circle by painting many pixels in random positions within the image and counting those that fall inside the enclosed circle. On the other hand, the area of the square is estimated as the total number of pixels painted. This would allow you to estimate the value of Pi using the previous equation.

The following algorithm simulates this:

1. On a black square image, paint a solid white enclosed circle.
2. On another black square image (same dimensions), paint a large number of pixels at random positions.
3. Perform an AND operation between the two images and count nonzero pixels in the resulting image.
4. Estimate Pi using the equation.

The following is the code for the estimatePi example:

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    const int side = 100;
    const int npixels = 8000;
    int i, j;
    Mat s1 = Mat::zeros(side, side, CV_8UC1);
    Mat s2 = s1.clone();
    circle(s1, Point(side/2, side/2), side/2, 255, -1);
    imshow("s1", s1);
    for (int k = 0; k < npixels; k++)
    {
        i = rand() % side;
        j = rand() % side;
        s2.at<uchar>(i, j) = 255;
    }
    Mat r;
    bitwise_and(s1, s2, r);
    imshow("s2", s2);
    imshow("r", r);
    int Acircle = countNonZero(r);
    int Asquare = countNonZero(s2);
    float Pi = 4 * (float)Acircle / Asquare;
    cout << "Estimated value of Pi: " << Pi << endl;
    waitKey();
    return 0;
}

The program follows the preceding algorithm exactly. Note that we use the countNonZero function to count nonzero (white, in this case) pixels. For npixels = 8000, the estimate was 3.125. The larger the value of npixels, the better the estimation.

The output of the estimatePi example
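The same estimate can be reproduced without OpenCV. The sketch below (plain C++ with a fixed seed so the result is repeatable; the function name estimate_pi is ours) draws random points and counts those inside the inscribed circle, exactly as the algorithm above describes:

```cpp
#include <random>

// Monte Carlo estimate of Pi: the fraction of random points in a unit
// square that fall inside the inscribed circle, multiplied by 4.
double estimate_pi(int npoints, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> coord(0.0, 1.0);
    int inside = 0;
    for (int k = 0; k < npoints; k++) {
        double x = coord(gen) - 0.5;   // center the square at the origin
        double y = coord(gen) - 0.5;
        if (x*x + y*y <= 0.25)         // radius = side/2 = 0.5
            inside++;
    }
    return 4.0 * static_cast<double>(inside) / npoints;
}
```

With a million points the statistical error is on the order of 0.002, which matches the observation above that more painted pixels give a better estimate.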

Data persistence

Apart from the specific functions to read and write images and video, in OpenCV there is a more generic way to save/load the data. This is referred to as data persistence: the value of objects and variables in the program can be recorded (serialized) on the disk. This can be very useful to save results and load configuration data. The main class is the aptly named FileStorage, which represents a file on a disk. Data is actually stored in XML or YAML formats.

These are the steps involved when writing data:

1. Call the FileStorage constructor, passing a filename and a flag with the FileStorage::WRITE value. The data format is defined by the file extension (that is, .xml, .yml, or .yaml).
2. Use the << operator to write data to the file. Data is typically written as string-value pairs.
3. Close the file using the release method.

Reading data requires that you follow these steps:

1. Call the FileStorage constructor, passing a filename and a flag with the FileStorage::READ value.
2. Use the [] or >> operator to read data from the file.
3. Close the file using the release method.

The following example uses data persistence to save and load trackbar values.

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

Mat img1;

void tb1_Callback(int value, void*)
{
    Mat temp = img1 + value;
    imshow("main_win", temp);
}

int main()
{
    img1 = imread("lena.png", IMREAD_GRAYSCALE);
    if (img1.empty())
    {
        cout << "Cannot load image!" << endl;
        return -1;
    }
    int tb1_value = 0;

    // load trackbar value
    FileStorage fs1("config.xml", FileStorage::READ);
    tb1_value = fs1["tb1_value"];  // method 1
    fs1["tb1_value"] >> tb1_value; // method 2
    fs1.release();

    // create trackbar
    namedWindow("main_win");
    createTrackbar("brightness", "main_win", &tb1_value,
                   255, tb1_Callback);
    tb1_Callback(tb1_value, NULL);
    waitKey();

    // save trackbar value upon exiting
    FileStorage fs2("config.xml", FileStorage::WRITE);
    fs2 << "tb1_value" << tb1_value;
    fs2.release();
    return 0;
}

Tip

When OpenCV has been compiled with Qt support, window properties, including trackbar values, can be saved with the saveWindowParameters() function.

The trackbar controls an integer value that is simply added to the original image, making it brighter. This value is read when the program starts (the value will be 0 the first time) and saved when the program exits normally. Note that two equivalent methods are shown to read the value of the tb1_value variable. The contents of the config.xml file are:

<?xml version="1.0"?>
<opencv_storage>
<tb1_value>112</tb1_value>
</opencv_storage>

Histograms

Once the image has been defined with a data type and we are able to access its gray-level values, that is, the pixels, we may want to obtain a probability density function of the different gray levels, which is called the histogram. The image histogram represents the frequency of occurrence of the various gray levels in the image. The histogram can be modeled so that the image may change its contrast levels. This is known as histogram equalization. Histogram modeling is a powerful technique for image enhancement by means of contrast variation. The equalization allows image areas of lower contrast to gain a higher contrast. The following image shows you an example of an equalized image and its histogram:

An example of an equalized image histogram

In OpenCV, the image histogram can be calculated with the calcHist function and histogram equalization is performed with the equalizeHist function.
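Under the hood, equalization maps each gray level through the cumulative distribution of the histogram. The following plain C++ sketch (no OpenCV; a simplified stand-in for what equalizeHist computes, which differs in small details) shows the idea:

```cpp
#include <cmath>
#include <vector>

// Simplified histogram equalization for 8-bit pixels: level g is
// remapped to round(255 * cdf(g)), where cdf is the cumulative
// histogram normalized by the number of pixels.
std::vector<unsigned char> equalize(const std::vector<unsigned char>& pix) {
    int hist[256] = {0};
    for (unsigned char p : pix) hist[p]++;       // histogram
    double cdf = 0.0;
    unsigned char map[256];
    for (int g = 0; g < 256; g++) {              // cumulative distribution
        cdf += static_cast<double>(hist[g]) / pix.size();
        map[g] = static_cast<unsigned char>(std::lround(255.0 * cdf));
    }
    std::vector<unsigned char> out;
    for (unsigned char p : pix) out.push_back(map[p]);
    return out;
}
```

Because the mapping follows the cumulative distribution, densely populated gray levels get spread apart, which is exactly the contrast gain described above.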

The image histogram calculation is defined with ten parameters: void calcHist(const Mat* images, int nimages, const int* channels, InputArray mask, OutputArray hist, int dims, const int* histSize, const float** ranges, bool uniform = true, bool accumulate = false):

const Mat* images: The first parameter is the address of the first image from a collection. This can be used to process a batch of images.
int nimages: The second parameter is the number of source images.
const int* channels: The third input parameter is the list of the channels used to compute the histogram. The channel indices are numbered from 0.
InputArray mask: This is an optional mask to indicate the image pixels counted in the histogram.
OutputArray hist: The fifth parameter is the output histogram.
int dims: This parameter allows you to indicate the dimension of the histogram.
const int* histSize: This parameter is the array of histogram sizes in each dimension.
const float** ranges: This parameter is the array of the dims arrays of the histogram bin boundaries in each dimension.
bool uniform = true: By default, the Boolean value is true. It indicates that the histogram is uniform.
bool accumulate = false: By default, the Boolean value is false. It indicates that the histogram is nonaccumulative.

The histogram equalization requires only two parameters, void equalizeHist(InputArray src, OutputArray dst). The first parameter is the input image and the second one is the output image with the histogram equalized.

It is possible to calculate the histogram of more than one input image. This allows you to compare image histograms and calculate the joint histogram of several images. The comparison of two image histograms, histImage1 and histImage2, can be performed with the compareHist(InputArray histImage1, InputArray histImage2, method) function. The method argument is the metric used to compute the matching between both histograms. There are four metrics implemented in OpenCV, that is, correlation (CV_COMP_CORREL), chi-square (CV_COMP_CHISQR), intersection or minimum distance (CV_COMP_INTERSECT), and Bhattacharyya distance (CV_COMP_BHATTACHARYYA).

It is possible to calculate the histogram of more than one channel of the same color image. This is possible thanks to the third parameter.

The following sections show you two example codes for color histogram calculation (ColourImageEqualizeHist) and comparison (ColourImageComparison). In ColourImageEqualizeHist, it is also shown how to calculate the histogram equalization, as well as the 2D histogram for two channels, that is, hue (H) and saturation (S), in the ColourImageComparison example.
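Before moving to the full examples, the four metrics can be sketched in plain C++ for two normalized histograms. The formulas below follow OpenCV's documented definitions, but the code is an illustration, not the library implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Correlation: +1 means identical shape.
double hist_correlation(const std::vector<double>& a, const std::vector<double>& b) {
    double ma = 0, mb = 0;
    for (size_t i = 0; i < a.size(); i++) { ma += a[i]; mb += b[i]; }
    ma /= a.size(); mb /= b.size();
    double num = 0, da = 0, db = 0;
    for (size_t i = 0; i < a.size(); i++) {
        num += (a[i] - ma) * (b[i] - mb);
        da  += (a[i] - ma) * (a[i] - ma);
        db  += (b[i] - mb) * (b[i] - mb);
    }
    return num / std::sqrt(da * db);
}

// Chi-square: 0 means a perfect match.
double hist_chi_square(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0;
    for (size_t i = 0; i < a.size(); i++)
        if (a[i] > 0) d += (a[i] - b[i]) * (a[i] - b[i]) / a[i];
    return d;
}

// Intersection: sum of bin-wise minima; larger means a better match.
double hist_intersection(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0;
    for (size_t i = 0; i < a.size(); i++) d += std::min(a[i], b[i]);
    return d;
}

// Bhattacharyya distance: 0 means a perfect match.
double hist_bhattacharyya(const std::vector<double>& a, const std::vector<double>& b) {
    double sa = 0, sb = 0, dot = 0;
    for (size_t i = 0; i < a.size(); i++) {
        sa += a[i]; sb += b[i];
        dot += std::sqrt(a[i] * b[i]);
    }
    return std::sqrt(std::max(0.0, 1.0 - dot / std::sqrt(sa * sb)));
}
```

Comparing a histogram with itself should yield correlation 1, chi-square 0, intersection equal to the histogram's total mass, and Bhattacharyya distance 0, which is what the comparison loop in the second example prints for the Original-Original column.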

The example code

The following ColourImageEqualizeHist example shows you how to equalize a color image and display the histogram of each channel at the same time. The histogram calculation of each color channel in the RGB image is done with the histogramcalculation(InputArray Imagesrc, OutputArray histoImage) function. To this end, the color image is split into the channels: R, G, and B. The histogram equalization is also applied to each channel, which is then merged to form the equalized color image:

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>

using namespace cv;
using namespace std;

void histogramcalculation(const Mat& Image, Mat& histoImage)
{
    int histSize = 255;

    // Set the ranges (for B,G,R)
    float range[] = {0, 256};
    const float* histRange = {range};
    bool uniform = true; bool accumulate = false;
    Mat b_hist, g_hist, r_hist;
    vector<Mat> bgr_planes;
    split(Image, bgr_planes);

    // Compute the histograms:
    calcHist(&bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate);
    calcHist(&bgr_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRange, uniform, accumulate);
    calcHist(&bgr_planes[2], 1, 0, Mat(), r_hist, 1, &histSize, &histRange, uniform, accumulate);

    // Draw the histograms for B, G and R
    int hist_w = 512; int hist_h = 400;
    int bin_w = cvRound((double)hist_w / histSize);
    Mat histImage(hist_h, hist_w, CV_8UC3, Scalar(0, 0, 0));

    // Normalize the result to [0, histImage.rows]
    normalize(b_hist, b_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat());
    normalize(g_hist, g_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat());
    normalize(r_hist, r_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat());

    // Draw for each channel
    for (int i = 1; i < histSize; i++) {
        line(histImage, Point(bin_w*(i-1), hist_h - cvRound(b_hist.at<float>(i-1))),
             Point(bin_w*i, hist_h - cvRound(b_hist.at<float>(i))),
             Scalar(255, 0, 0), 2, 8, 0);
        line(histImage, Point(bin_w*(i-1), hist_h - cvRound(g_hist.at<float>(i-1))),
             Point(bin_w*i, hist_h - cvRound(g_hist.at<float>(i))),
             Scalar(0, 255, 0), 2, 8, 0);
        line(histImage, Point(bin_w*(i-1), hist_h - cvRound(r_hist.at<float>(i-1))),
             Point(bin_w*i, hist_h - cvRound(r_hist.at<float>(i))),
             Scalar(0, 0, 255), 2, 8, 0);
    }
    histoImage = histImage;
}

int main(int, char* argv[])
{
    Mat src, imageq;
    Mat histImage;

    // Read original image
    src = imread("fruits.jpg");
    if (!src.data)
    { printf("Error imagen\n"); exit(1); }

    // Separate the image in 3 planes (B, G and R)
    vector<Mat> bgr_planes;
    split(src, bgr_planes);

    // Display results
    imshow("Source image", src);

    // Calculate the histogram of each channel of the source image
    histogramcalculation(src, histImage);

    // Display the histogram for each colour channel
    imshow("Colour Image Histogram", histImage);

    // Equalized Image
    // Apply Histogram Equalization to each channel
    equalizeHist(bgr_planes[0], bgr_planes[0]);
    equalizeHist(bgr_planes[1], bgr_planes[1]);
    equalizeHist(bgr_planes[2], bgr_planes[2]);

    // Merge the equalized image channels into the equalized image
    merge(bgr_planes, imageq);

    // Display Equalized Image
    imshow("Equalized Image", imageq);

    // Calculate the histogram of each channel of the equalized image
    histogramcalculation(imageq, histImage);

    // Display the Histogram of the Equalized Image
    imshow("Equalized Colour Image Histogram", histImage);

    // Wait until user exits the program
    waitKey();
    return 0;
}

The example creates four windows with:

The source image: This is shown in the following figure in the upper-left corner.
The equalized color image: This is shown in the following figure in the upper-right corner.
The histogram of three channels: Here, R = Red, G = Green, and B = Blue, for the source image. This is shown in the following figure in the lower-left corner.
The histogram of RGB channels for the equalized image: This is shown in the next figure in the lower-right corner. The figure shows you how the most frequent intensity values for R, G, and B have been stretched out due to the equalization process.

The following figure shows you the results of the algorithm:

The example code

The following ColourImageComparison example shows you how to calculate a 2D histogram composed of two channels from the same color image. The example code also performs a comparison between the original image and the equalized image by means of histogram matching. The metrics used for the matching are the four metrics that have been mentioned previously, that is, Correlation, Chi-Square, Minimum distance, and Bhattacharyya distance. The 2D histogram calculation of the H and S color channels is done with the histogram2Dcalculation(InputArray Imagesrc, OutputArray histo2D) function. To perform the histogram comparison, the normalized 1D histogram has been calculated for the RGB image. In order to compare the histograms, they have been normalized. This is done in histogramRGcalculation(InputArray Imagesrc, OutputArray histo):

void histogram2Dcalculation(const Mat& src, Mat& histo2D)
{
    Mat hsv;
    cvtColor(src, hsv, CV_BGR2HSV);

    // Quantize the hue and the saturation to 255 levels each
    int hbins = 255, sbins = 255;
    int histSize[] = {hbins, sbins};
    // hue varies from 0 to 179, see cvtColor
    float hranges[] = {0, 180};
    // saturation varies from 0 (black-gray-white) to
    // 255 (pure spectrum color)
    float sranges[] = {0, 256};
    const float* ranges[] = {hranges, sranges};
    MatND hist;
    // we compute the histogram from the 0-th and 1-st channels
    int channels[] = {0, 1};
    calcHist(&hsv, 1, channels, Mat(), hist, 2, histSize, ranges, true, false);

    double maxVal = 0;
    minMaxLoc(hist, 0, &maxVal, 0, 0);
    int scale = 1;
    Mat histImg = Mat::zeros(sbins*scale, hbins*scale, CV_8UC3);
    for (int h = 0; h < hbins; h++)
        for (int s = 0; s < sbins; s++)
        {
            float binVal = hist.at<float>(h, s);
            int intensity = cvRound(binVal*255/maxVal);
            rectangle(histImg, Point(h*scale, s*scale),
                      Point((h+1)*scale - 1, (s+1)*scale - 1),
                      Scalar::all(intensity),
                      CV_FILLED);
        }
    histo2D = histImg;
}

void histogramRGcalculation(const Mat& src, Mat& histoRG)
{
    // Using 50 bins for red and 60 for green
    int r_bins = 50; int g_bins = 60;
    int histSize[] = {r_bins, g_bins};
    // red varies from 0 to 255, green from 0 to 255
    float r_ranges[] = {0, 255};
    float g_ranges[] = {0, 255};
    const float* ranges[] = {r_ranges, g_ranges};
    // Use the 0-th and 1-st channels
    int channels[] = {0, 1};
    // Histograms
    MatND hist_base;
    // Calculate the histograms for the images
    calcHist(&src, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false);
    normalize(hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat());
    histoRG = hist_base;
}

int main(int argc, char* argv[])
{
    Mat src, imageq;
    Mat histImg, histImgeq;
    Mat histHSorg, histHSeq;

    // Read original image
    src = imread("fruits.jpg");
    if (!src.data)
    { printf("Error imagen\n"); exit(1); }

    // Separate the image in 3 planes (B, G and R)
    vector<Mat> bgr_planes;
    split(src, bgr_planes);

    // Display results
    namedWindow("Source image", 0);
    imshow("Source image", src);

    // Calculate the 2D histogram of the source image
    histogram2Dcalculation(src, histImg);

    // Display the 2D histogram
    imshow("H-S Histogram", histImg);

    // Equalized Image
    // Apply Histogram Equalization to each channel
    equalizeHist(bgr_planes[0], bgr_planes[0]);
    equalizeHist(bgr_planes[1], bgr_planes[1]);
    equalizeHist(bgr_planes[2], bgr_planes[2]);

    // Merge the equalized image channels into the equalized image
    merge(bgr_planes, imageq);

    // Display Equalized Image
    namedWindow("Equalized Image", 0);
    imshow("Equalized Image", imageq);

    // Calculate the 2D histogram for H and S channels
    histogram2Dcalculation(imageq, histImgeq);

    // Display the 2D Histogram
    imshow("H-S Histogram Equalized", histImgeq);

    histogramRGcalculation(src, histHSorg);
    histogramRGcalculation(imageq, histHSeq);

    // Apply the histogram comparison methods
    for (int i = 0; i < 4; i++)
    {
        int compare_method = i;
        double orig_orig = compareHist(histHSorg, histHSorg, compare_method);
        double orig_equ = compareHist(histHSorg, histHSeq, compare_method);
        printf("Method [%d] Original-Original, Original-Equalized : %f, %f \n", i, orig_orig, orig_equ);
    }
    printf("Done\n");
    waitKey();
    return 0;
}

The example creates four windows with the source image, the equalized color image, and the 2D histogram for the H and S channels of both images, the original and the equalized one. The algorithm also displays the four numerical matching parameters obtained from the comparison of the original RGB image histogram with itself and with the equalized RGB image. For the correlation and intersection methods, the higher the metric, the more accurate the match. For the chi-square and Bhattacharyya distances, the lower the result, the better the match. The following figure shows you the output of the ColourImageComparison algorithm:

Finally, you can refer to Chapter 3, Correcting and Enhancing Images, as well as the examples within it, to cover essential aspects of this broad topic, such as image enhancement by means of histogram modeling.

Note

For more information, refer to OpenCV Essentials, Deniz O., Fernández M.M., Vállez N., Bueno G., Serrano I., Patón A., Salido J. by Packt Publishing, https://www.packtpub.com/application-development/opencv-essentials.

Summary

This chapter covered and established the basis of applying image processing methods used in computer vision. Image processing is often the first step to further computer vision applications, and therefore, it has been covered here: basic data types, pixel-level access, common operations with images, arithmetic operations, data persistence, and histograms.

You can also refer to Chapter 3, Correcting and Enhancing Images, of OpenCV Essentials by Packt Publishing to cover further essential aspects of this broad topic, such as image enhancement, image restoration by means of filtering, and geometrical correction.

The next chapter will cover further aspects of image processing to correct and enhance images by means of smoothing, sharpening, image resolution analysis, morphological and geometrical transforms, inpainting, and denoising.

Chapter 3. Correcting and Enhancing Images

This chapter presents methods for image enhancement and correction. Sometimes, it is necessary to reduce the noise in an image or emphasize or suppress certain details in it. These procedures are usually carried out by modifying pixel values, performing some operations on them or on their local neighborhood as well. By definition, image-enhancement operations are used to improve important image details. Enhancement operations include noise reduction, smoothing, and edge enhancement. On the other hand, image correction attempts to restore a damaged image. In OpenCV, the imgproc module contains functions for image processing.

In this chapter, we will cover:

Image filtering. This includes image smoothing, image sharpening, and working with image pyramids.
Applying morphological operations, such as dilation, erosion, opening, or closing.
Geometrical transformations (affine and perspective transformations).
Inpainting, which is used to reconstruct damaged parts of images.
Denoising, which is necessary to reduce the image noise produced by the image-capture device.

Image filtering

Image filtering is a process to modify or enhance images. Emphasizing certain features or removing others in an image are examples of image filtering. Filtering is a neighborhood operation. The neighborhood is a set of pixels around a selected one. Image filtering determines the output value of a certain pixel located at a position (x, y) by performing some operations with the values of the pixels in its neighborhood.

OpenCV provides several filtering functions for common image-processing operations, such as smoothing or sharpening.

Smoothing

Smoothing, also called blurring, is an image-processing operation that is frequently used to reduce noise, among other purposes. A smoothing operation is performed by applying linear filters to the image. Then, the pixel values of the output at positions (xi, yj) are computed as a weighted sum of the input pixel values at positions (xi, yj) and their neighborhoods. The weights for the pixels in the linear operation are usually stored in a matrix called a kernel. Therefore, a filter could be represented as a sliding window of coefficients.

The representation of the pixel neighborhood

Let K be the kernel and I and O the input and output images, respectively. Then, each output pixel value at (i, j) is calculated as a weighted sum over the kernel neighborhood:

O(i, j) = sum over (k, l) of I(i + k, j + l) * K(k, l)

Median, Gaussian, and bilateral are the most used OpenCV smoothing filters. Median filtering is very good to get rid of salt-and-pepper or speckle noise, while Gaussian is a much better preprocessing step for edge detection. On the other hand, bilateral filtering is a good technique to smooth an image while respecting strong edges.
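The weighted-sum formula can be sketched for a 3x3 normalized box kernel in plain C++ (no OpenCV; border pixels are simply left untouched here for brevity, whereas real filters extrapolate them):

```cpp
#include <vector>

// Apply a 3x3 normalized box filter (all kernel weights 1/9) to a
// grayscale image stored row-major. Border pixels are copied through
// unchanged for brevity.
std::vector<double> box_filter_3x3(const std::vector<double>& img, int rows, int cols) {
    std::vector<double> out = img;
    for (int i = 1; i < rows - 1; i++)
        for (int j = 1; j < cols - 1; j++) {
            double sum = 0.0;
            for (int k = -1; k <= 1; k++)      // weighted sum over the
                for (int l = -1; l <= 1; l++)  // 3x3 neighborhood
                    sum += img[(i + k) * cols + (j + l)];
            out[i * cols + j] = sum / 9.0;
        }
    return out;
}
```

A flat region passes through unchanged, while an isolated bright pixel gets spread over its neighbors, which is the noise-reducing effect described above.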

The functions included in OpenCV for this purpose are:

void boxFilter(InputArray src, OutputArray dst, int ddepth, Size ksize, Point anchor = Point(-1,-1), bool normalize = true, int borderType = BORDER_DEFAULT): This is a box filter whose kernel coefficients are equal. With normalize=true, each output pixel value is the mean of its kernel neighbors, with all coefficients equal to 1/n, where n is the number of elements. With normalize=false, all coefficients are equal to 1. The src argument is the input image, while the filtered image is stored in dst. The ddepth parameter indicates the output image depth, that is, -1 to use the same depth as the input image. The kernel size is indicated in ksize. The anchor point indicates the position of the so-called anchor pixel. The (-1,-1) default value means that the anchor is at the center of the kernel. Finally, the border-type treatment is indicated in the borderType parameter.

void GaussianBlur(InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY = 0, int borderType = BORDER_DEFAULT): This is done by convolving each point in the src input array with a Gaussian kernel to produce the dst output. The sigmaX and sigmaY parameters indicate the Gaussian kernel standard deviation in the X and Y directions. If sigmaY is zero, it is set to be equal to sigmaX, and if both are equal to zero, they are computed using the width and height given in ksize.

Note

Convolution is defined as the integral of the product of two functions in which one of them is previously reversed and shifted.

void medianBlur(InputArray src, OutputArray dst, int ksize): This runs through each element of the image and replaces each pixel with the median of its neighboring pixels.

void bilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType = BORDER_DEFAULT): This is analogous to the Gaussian filter considering the neighboring pixels with weights assigned to each of them, but having two components on each weight: one is the same used by the Gaussian filter, and another one takes into account the difference in intensity between the neighboring and evaluated pixels. This function needs the diameter of the pixel neighborhood as parameter d, and the sigmaColor and sigmaSpace values. A larger value of the sigmaColor parameter means that farther colors within the pixel neighborhood will be mixed together, generating larger areas of semi-equal colors, whereas a larger value of the sigmaSpace parameter means that farther pixels will influence each other as long as their colors are close enough.

void blur(InputArray src, OutputArray dst, Size ksize, Point anchor = Point(-1,-1), int borderType = BORDER_DEFAULT): This blurs an image using the normalized box filter. It is equivalent to using boxFilter with normalize=true. The kernel used in this function is a ksize.width x ksize.height matrix with all coefficients equal to 1/(ksize.width*ksize.height).

Note

The getGaussianKernel and getGaborKernel functions can be used in OpenCV to generate custom kernels, which can then be passed on to filter2D.

In all cases, it is necessary to extrapolate the values of the non-existent pixels outside the image boundary. OpenCV enables the specification of the extrapolation method in most of the filter functions. These methods are:

BORDER_REPLICATE: This repeats the last known pixel value: aaaaaa|abcdefgh|hhhhhhh
BORDER_REFLECT: This reflects the image border: fedcba|abcdefgh|hgfedcb
BORDER_REFLECT_101: This reflects the image border without duplicating the last pixel of the border: gfedcb|abcdefgh|gfedcba
BORDER_WRAP: This appends the value of the opposite border: cdefgh|abcdefgh|abcdefg
BORDER_CONSTANT: This establishes a constant over the new border: kkkkkk|abcdefgh|kkkkkk
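The extrapolation schemes above amount to remapping an out-of-range index back into [0, len). This plain C++ helper (the names are ours for illustration; in OpenCV itself, borderInterpolate plays this role) reproduces the patterns for indices just outside an 8-pixel row "abcdefgh":

```cpp
#include <string>

enum Border { REPLICATE, REFLECT, REFLECT_101, WRAP };

// Map a possibly out-of-range index p into [0, len) following the
// extrapolation rules above. Handles indices at most len outside the
// range, which is enough for typical kernel sizes.
int map_index(int p, int len, Border b) {
    if (p >= 0 && p < len) return p;
    switch (b) {
    case REPLICATE:   return p < 0 ? 0 : len - 1;          // clamp to edge
    case REFLECT:     return p < 0 ? -p - 1 : 2*len - p - 1; // mirror, edge doubled
    case REFLECT_101: return p < 0 ? -p : 2*len - p - 2;     // mirror, edge not doubled
    case WRAP:        return p < 0 ? p + len : p - len;      // wrap around
    }
    return 0;
}
```

BORDER_CONSTANT is omitted because it returns a fixed value k rather than remapping an index.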

The example code

The following Smooth example shows you how to load an image and apply Gaussian and median blurring to it through the GaussianBlur and medianBlur functions:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply the filters
    Mat dst, dst2;
    GaussianBlur(src, dst, Size(9, 9), 0, 0);
    medianBlur(src, dst2, 9);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("GAUSSIAN BLUR", WINDOW_AUTOSIZE);
    imshow("GAUSSIAN BLUR", dst);
    namedWindow("MEDIAN BLUR", WINDOW_AUTOSIZE);
    imshow("MEDIAN BLUR", dst2);
    waitKey();
    return 0;
}

The following figure shows you the output of the code:

Original and blurred images from Gaussian and median blurring transformations

Sharpening

Sharpening filters are used to highlight borders and other fine details within images. They are based on first- and second-order derivatives. The first derivative of an image computes an approximation of the image intensity gradient, whereas the second derivative is defined as the divergence of this gradient. Since digital image processing deals with discrete quantities (pixel values), the discrete versions of the first and second derivatives are used for sharpening.

First-order derivatives produce thicker image edges and are widely used for edge-extraction purposes. However, second-order derivatives are used for image enhancement due to their better response to fine details. Two popular operators used to obtain derivatives are the Sobel and the Laplacian.

The Sobel operator computes the first image derivative of an image, I, through convolution with the kernels:

Gx = [ -1 0 +1 ; -2 0 +2 ; -1 0 +1 ] * I
Gy = [ -1 -2 -1 ; 0 0 0 ; +1 +2 +1 ] * I

The Sobel gradient magnitude can be obtained by combining the gradient approximations in the two directions, as follows:

G = sqrt(Gx^2 + Gy^2)

On the other hand, the discrete Laplacian of an image can be given as a convolution with the following kernel:

[ 0 1 0 ; 1 -4 1 ; 0 1 0 ]
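As a quick check of the Laplacian kernel, here is a plain C++ sketch applying it at a single interior pixel (illustrative only; OpenCV's Laplacian function does this over the whole image with border handling):

```cpp
#include <vector>

// Apply the 3x3 Laplacian kernel (0 1 0; 1 -4 1; 0 1 0) at pixel (i, j)
// of a row-major grayscale image. (i, j) must be an interior pixel.
double laplacian_at(const std::vector<double>& img, int cols, int i, int j) {
    return img[(i - 1) * cols + j] + img[(i + 1) * cols + j]
         + img[i * cols + (j - 1)] + img[i * cols + (j + 1)]
         - 4.0 * img[i * cols + j];
}
```

A flat region gives 0, while an isolated detail responds strongly, which is why the Laplacian is well suited to enhancing fine details.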

ThefunctionsincludedinOpenCVforthispurposeare:

voidSobel(InputArraysrc,OutputArraydst,intddepth,intdx,intdy,

intksize=3,doublescale=1,doubledelta=0,intborderType=

BORDER_DEFAULT):Thiscalculatesthefirst,second,third,ormixed-imagederivativeswiththeSobeloperatorfromanimageinsrc.Theddepthparameterindicatestheoutputimagedepth,thatis,-1tousethesamedepthastheinputimage.Thekernelsizeisindicatedinksizeandthedesiredderivativeordersindxanddy.Ascalefactorforthecomputedderivativevalescanbeestablishedwithscale.Finally,theborder-typetreatmentisindicatedintheborderTypeparameterandadeltavaluecanbeaddedtotheresultsbeforestoringthemindst.voidScharr(InputArraysrc,OutputArraydst,intddepth,intdx,int

dy,doublescale=1,doubledelta=0,intborderType=BORDER_DEFAULT

):Thiscalculatesamoreaccuratederivativeforakernelofsize3x3.Scharr(src,dst,ddepth,dx,dy,scale,delta,borderType)isequivalenttoSobel(src,dst,ddepth,dx,dy,CV_SCHARR,scale,delta,borderType).voidLaplacian(InputArraysrc,OutputArraydst,intddepth,intksize=

1,doublescale=1,doubledelta=0,intborderType=BORDER_DEFAULT):ThiscalculatestheLaplacianofanimage.AlltheparametersareequivalenttotheonesfromtheSobelandScharrfunctionsexceptforksize.Whenksize>1,itcalculatestheLaplacianoftheimageinsrcbyaddingupthesecondxandyderivativescalculatedusingSobel.Whenksize=1,theLaplacianiscalculatedbyfilteringtheimagewitha3x3kernelthatcontains-4forthecenter,0forthecorners,and1fortherestofthecoefficients.

Note
getDerivKernels can be used in OpenCV to generate custom derivative kernels, which can then be passed on to sepFilter2D.

The example code

The following Sharpen example shows you how to compute Sobel and Laplacian derivatives from an image through the Sobel and Laplacian functions. The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply Sobel and Laplacian
    Mat dst, dst2;
    Sobel(src, dst, -1, 1, 1);
    Laplacian(src, dst2, -1);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("SOBEL", WINDOW_AUTOSIZE);
    imshow("SOBEL", dst);
    namedWindow("LAPLACIAN", WINDOW_AUTOSIZE);
    imshow("LAPLACIAN", dst2);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

Contours obtained by Sobel and Laplacian derivatives

Working with image pyramids

On some occasions, working with a fixed image size is not possible, and we will need the original image at different resolutions. For example, in object-detection problems, examining the whole image trying to find the object takes too much time. In this case, searching for objects by starting at smaller resolutions is more efficient. This type of image set is called a pyramid or mipmap due to the similarity with the pyramid structure if the images are organized from the largest to the smallest, from the bottom to the top.

The Gaussian pyramid

There are two kinds of image pyramids: Gaussian pyramids and Laplacian pyramids.

Gaussian pyramids

Gaussian pyramids are created by alternately removing rows and columns in the lower level and then obtaining the value of each higher-level pixel by applying a Gaussian filter to its neighborhood in the underlying level. After each pyramid step, the image halves its width and height, so its area is a quarter of the previous level's image area. In OpenCV, Gaussian pyramids can be computed using the pyrDown, pyrUp, and buildPyramid functions:

void pyrDown(InputArray src, OutputArray dst, const Size& dstsize=Size(), int borderType=BORDER_DEFAULT): This subsamples and blurs an src image, saving the result in dst. The size of the output image is computed as Size((src.cols+1)/2, (src.rows+1)/2) when it is not set with the dstsize parameter.

void pyrUp(InputArray src, OutputArray dst, const Size& dstsize=Size(), int borderType=BORDER_DEFAULT): This computes the opposite process of pyrDown.

void buildPyramid(InputArray src, OutputArrayOfArrays dst, int maxlevel, int borderType=BORDER_DEFAULT): This builds a Gaussian pyramid for an image stored in src, obtaining maxlevel new images and storing them in the dst array following the original image, which is stored in dst[0]. Thus, dst stores maxlevel + 1 images as a result.

Pyramids are also used for segmentation. OpenCV provides a function to compute mean-shift pyramids based on the first step of the mean-shift segmentation algorithm:

void pyrMeanShiftFiltering(InputArray src, OutputArray dst, double sp, double sr, int maxLevel=1, TermCriteria termcrit=TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 5, 1)): This implements the filtering stage of the mean-shift segmentation, obtaining an image, dst, with the color gradients and fine-grain texture flattened. The sp and sr parameters indicate the spatial window and the color window radii.

Note
More information about mean-shift segmentation can be found at http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_meanshift/py_meanshift.html?highlight=meanshift.

Laplacian pyramids

Laplacian pyramids do not have a specific function implementation in OpenCV, but they are formed from the Gaussian pyramids. Laplacian pyramids can be seen as border images where most of their elements are zeros. The ith level in the Laplacian pyramid is the difference between the ith level in the Gaussian pyramid and the expanded version of the (i+1)th level in the Gaussian pyramid.

The example code

The following Pyramids example shows you how to obtain two levels from a Gaussian pyramid through the pyrDown function and the opposite operation through pyrUp. Notice that the original image cannot be recovered after using pyrUp:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply pyrDown twice
    Mat dst, dst2;
    pyrDown(src, dst);
    pyrDown(dst, dst2);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("1st PYRDOWN", WINDOW_AUTOSIZE);
    imshow("1st PYRDOWN", dst);
    namedWindow("2nd PYRDOWN", WINDOW_AUTOSIZE);
    imshow("2nd PYRDOWN", dst2);

    // Apply pyrUp twice
    pyrUp(dst2, dst);
    pyrUp(dst, src);

    // Show the results
    namedWindow("1st PYRUP", WINDOW_AUTOSIZE);
    imshow("1st PYRUP", dst);
    namedWindow("NEW ORIGINAL", WINDOW_AUTOSIZE);
    imshow("NEW ORIGINAL", src);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The original image and two levels of the Gaussian pyramid

Morphological operations

Morphological operations process images according to shapes. They apply a defined “structuring element” to an image, obtaining a new image where the pixel at position (xi, yj) is computed by comparing the input pixel values at position (xi, yj) and its neighborhood. Depending on the structuring element selected, a morphological operation is more sensitive to one specific shape or another.

The two basic morphological operations are dilation and erosion. Dilation adds pixels from the background to the boundaries of the objects in an image, while erosion removes pixels. Here is where the structuring element is taken into account to select the pixels that are to be added or deleted. In dilation, the value of the output pixel is the maximum of all the pixels in the neighborhood. Using erosion, the value of the output pixel is the minimum value of all the pixels in the neighborhood.

An example of dilation and erosion

Other image-processing operations can be defined by combining dilation and erosion, such as the opening and closing operations, and the morphological gradient. The opening operation is defined as erosion followed by dilation, while closing is the reverse operation: dilation followed by erosion. Therefore, opening removes small objects from an image while preserving the larger ones, and closing is used to remove small holes while preserving the larger ones, in a manner similar to opening. The morphological gradient is defined as the difference between the dilation and the erosion of an image. Furthermore, two more operations are defined using opening and closing: the top-hat and black-hat operations. They are defined as the difference between the source image and its opening, in the case of top hat, and the difference between the closing of an image and the source image, in the case of black hat. All the operations are applied with the same structuring element.

In OpenCV, it is possible to apply dilation, erosion, opening, and closing through the following functions:

void dilate(InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue()): This dilates an image stored in src using a specific structuring element, saving the result in dst. The kernel parameter is the structuring element used. The anchor point indicates the position of the anchor pixel. The (-1, -1) value means that the anchor is at the center. The operation can be applied several times using iterations. The border-type treatment is indicated in the borderType parameter and is the same as in other filters from previous sections. Finally, a constant is indicated in borderValue if the BORDER_CONSTANT border type is used.

void erode(InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue()): This erodes an image using a specific structuring element. Its parameters are the same as those in dilate.

void morphologyEx(InputArray src, OutputArray dst, int op, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue()): This performs advanced morphological operations defined using the op parameter. Possible op values are MORPH_OPEN, MORPH_CLOSE, MORPH_GRADIENT, MORPH_TOPHAT, and MORPH_BLACKHAT.

Mat getStructuringElement(int shape, Size ksize, Point anchor=Point(-1,-1)): This returns a structuring element of the specified size and shape for morphological operations. Supported types are MORPH_RECT, MORPH_ELLIPSE, and MORPH_CROSS.

The example code

The following Morphological example shows you how to segment the red checkers on a checkerboard, applying a binary threshold (the inRange function) and then refining the results with dilation and erosion operations (through the dilate and erode functions). The structuring element used is a circle of 15 x 15 pixels. The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply the filters
    Mat dst, dst2, dst3;
    inRange(src, Scalar(0, 0, 100), Scalar(40, 30, 255), dst);
    Mat element = getStructuringElement(MORPH_ELLIPSE, Size(15, 15));
    dilate(dst, dst2, element);
    erode(dst2, dst3, element);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("SEGMENTED", WINDOW_AUTOSIZE);
    imshow("SEGMENTED", dst);
    namedWindow("DILATION", WINDOW_AUTOSIZE);
    imshow("DILATION", dst2);
    namedWindow("EROSION", WINDOW_AUTOSIZE);
    imshow("EROSION", dst3);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

Original, red color segmentation, dilation, and erosion

LUTs

Look-up tables (LUTs) are very common in custom filters in which two pixels with the same value in the input also share the same value in the output. An LUT transformation assigns a new pixel value to each pixel in the input image according to the values given by a table. In this table, the index represents the input intensity value and the content of the cell given by the index represents the corresponding output value. As the transformation is actually computed for each possible intensity value, this results in a reduction in the time needed to apply the transformation over an image (images typically have more pixels than the number of intensity values).

The LUT(InputArray src, InputArray lut, OutputArray dst, int interpolation=0) OpenCV function applies a look-up table transformation over an 8-bit signed or unsigned src image. Thus, the table given in the lut parameter contains 256 elements. The number of channels in lut is either 1 or src.channels(). If src has more than one channel but lut has a single one, the same lut channel is applied to all the image channels.

The example code

The following LUT example shows you how to divide (by two) the intensity of the pixels from an image using a look-up table. The LUT needs to be initialized before using it with this code:

uchar* M = (uchar*)malloc(256 * sizeof(uchar));
for (int i = 0; i < 256; i++) {
    M[i] = i * 0.5; // The result is truncated to an integer value
}
Mat lut(1, 256, CV_8UC1, M);

A Mat object is created where each cell contains the new value. The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Create the LUT
    uchar* M = (uchar*)malloc(256 * sizeof(uchar));
    for (int i = 0; i < 256; i++) {
        M[i] = i * 0.5;
    }
    Mat lut(1, 256, CV_8UC1, M);

    // Apply the LUT
    Mat dst;
    LUT(src, lut, dst);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("LUT", WINDOW_AUTOSIZE);
    imshow("LUT", dst);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The original and LUT-transformed images

Geometrical transformations

Geometrical transformations do not change the image content but instead deform the image by deforming its grid. In this case, output image pixel values are computed by first obtaining the coordinates of the appropriate input pixels by applying the corresponding mapping functions and copying the original pixel values from the obtained positions to the new ones:

dst(x, y) = src(fx(x, y), fy(x, y))

This type of operation has two problems:

Extrapolation: fx(x, y) and fy(x, y) could obtain values that indicate a pixel outside the image boundary. The extrapolation methods used in geometrical transformations are the same as the ones used in image filtering, plus another one called BORDER_TRANSPARENT.

Interpolation: fx(x, y) and fy(x, y) are usually floating-point numbers. In OpenCV, it is possible to select between nearest-neighbor and polynomial interpolation methods. Nearest-neighbor interpolation consists of rounding the floating-point coordinate to the nearest integer. The supported interpolation methods are:

INTER_NEAREST: This is the nearest-neighbor interpolation explained previously.
INTER_LINEAR: This is a bilinear interpolation method. It is used by default.
INTER_AREA: This resamples using the pixel area relation.
INTER_CUBIC: This is a bicubic interpolation method over a 4 x 4 pixel neighborhood.
INTER_LANCZOS4: This is the Lanczos interpolation method over an 8 x 8 pixel neighborhood.

The geometrical transformations supported in OpenCV include affine (scaling, translation, rotation, and so on) and perspective transformations.

Affine transformation

An affine transformation is a geometric transformation that, after being applied, preserves all the points from an initial line on a line. Furthermore, the distance ratios from each of these points to the ends of the lines are also preserved. On the other hand, affine transformations don't necessarily preserve angles and lengths.

Geometric transformations such as scaling, translation, rotation, skewing, and reflection are all affine transformations.

Scaling

Scaling an image is resizing it by shrinking or zooming. The function in OpenCV for this purpose is void resize(InputArray src, OutputArray dst, Size dsize, double fx=0, double fy=0, int interpolation=INTER_LINEAR). Apart from src and dst, the input and output images, it has some parameters to specify the size to which the image is to be rescaled. If the new image size is specified by setting dsize to a value different from 0, the scale factor parameters, fx and fy, can be left as 0 and are calculated from dsize and the original size of the input image. If fx and fy are different from 0 and dsize equals 0, dsize is calculated from the other parameters. A scale operation can be represented by its transformation matrix:

[sx 0 0; 0 sy 0]

Here, sx and sy are the scale factors in the x and y axes.

The example code

The following Scale example shows you how to scale an image through the resize function. The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply the scale
    Mat dst;
    resize(src, dst, Size(0, 0), 0.5, 0.5);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("SCALED", WINDOW_AUTOSIZE);
    imshow("SCALED", dst);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The original and scaled images; fx and fy are both 0.5

Translation

Translation is simply moving the image along a specific direction and distance. Thus, a translation can be represented by means of a vector, (tx, ty), or its transformation matrix:

[1 0 tx; 0 1 ty]

In OpenCV, it is possible to apply translations using the void warpAffine(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags=INTER_LINEAR, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar()) function. The M parameter is the transformation matrix that converts src into dst. The interpolation method is specified using the flags parameter, which also supports the WARP_INVERSE_MAP value, meaning that M is the inverse transformation. The borderMode parameter is the extrapolation method, and borderValue is used when borderMode is BORDER_CONSTANT.

The example code

The Translation example shows you how to use the warpAffine function to translate an image. The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply the translation
    Mat dst;
    Mat M = (Mat_<double>(2,3) << 1, 0, 200, 0, 1, 150);
    warpAffine(src, dst, M, src.size());

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("TRANSLATED", WINDOW_AUTOSIZE);
    imshow("TRANSLATED", dst);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The original and displaced images. The horizontal displacement is 200 and the vertical displacement is 150.

Image rotation

Image rotation involves a specific angle, θ. OpenCV supports scaled rotations around a specific location using a transformation matrix defined as follows:

[α β (1 - α)·x - β·y; -β α β·x + (1 - α)·y], where α = sf·cos θ and β = sf·sin θ

Here, x and y are the coordinates of the rotation point and sf is the scale factor.

Rotations are applied, like translations, by means of the warpAffine function, but using the Mat getRotationMatrix2D(Point2f center, double angle, double scale) function to create the rotation transformation matrix. As the names of the parameters indicate, center is the center point of the rotation, angle is the rotation angle (in a counter-clockwise direction), and scale is the scale factor.

The example code

The following Rotate example shows you how to use the warpAffine function to rotate an image. A 45-degree centered rotation matrix is first obtained with getRotationMatrix2D(Point2f(src.cols/2, src.rows/2), 45, 1). The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply the rotation
    Mat dst;
    Mat M = getRotationMatrix2D(Point2f(src.cols/2, src.rows/2), 45, 1);
    warpAffine(src, dst, M, src.size());

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("ROTATED", WINDOW_AUTOSIZE);
    imshow("ROTATED", dst);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The original image and the image after a 45-degree centered rotation is applied

Skewing

A skewing transformation displaces each point in a fixed direction by an amount proportional to its signed distance from a line that is parallel to that direction. Therefore, it will usually distort the shape of a geometric figure; for example, it turns squares into non-square parallelograms and circles into ellipses. However, a skewing preserves the area of geometric figures and the alignment and relative distances of collinear points. A skewing mapping is the main difference between the upright and slanted (or italic) styles of letters.

Skewing can also be defined by its angle, θ.


Using the skewing angle, the transformation matrices for horizontal and vertical skewing are:

[1 m 0; 0 1 0] (horizontal)    [1 0 0; m 1 0] (vertical)

Here, m is the shear factor computed from the skewing angle, θ.

Due to the similarities with previous transformations, the function used to apply skewing is warpAffine.

Tip
On most occasions, it will be necessary to add some size to the output image and/or apply a translation (changing the last column of the shear transformation matrix) in order to display the output image completely and in a centered manner.

The example code

The following Skew example shows you how to use the warpAffine function to skew an image horizontally with θ = π/3. The example code is:

#include "opencv2/opencv.hpp"
#include <math.h>

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply the skew
    Mat dst;
    double m = 1 / tan(M_PI / 3);
    Mat M = (Mat_<double>(2,3) << 1, m, 0, 0, 1, 0);
    warpAffine(src, dst, M, Size(src.cols + 0.5 * src.cols, src.rows));

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("SKEWED", WINDOW_AUTOSIZE);
    imshow("SKEWED", dst);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The original image and the image skewed horizontally

Reflection

As reflection over the x and y axes would, by default, map the image outside its visible area, it is necessary to apply a translation (the last column of the transformation matrix). The reflection matrix is then:

[-1 0 tx; 0 -1 ty]

Here, tx is the number of image columns and ty is the number of image rows.

As with previous transformations, the function used to apply reflection is warpAffine.

Note
Other affine transformations can be applied using the warpAffine function with their corresponding transformation matrices.

The example code

The following Reflect example shows you an example of the horizontal, vertical, and combined reflection of an image using the warpAffine function. The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Apply the reflections
    Mat dsth, dstv, dst;
    Mat Mh = (Mat_<double>(2,3) << -1, 0, src.cols, 0, 1, 0);
    Mat Mv = (Mat_<double>(2,3) << 1, 0, 0, 0, -1, src.rows);
    Mat M = (Mat_<double>(2,3) << -1, 0, src.cols, 0, -1, src.rows);
    warpAffine(src, dsth, Mh, src.size());
    warpAffine(src, dstv, Mv, src.size());
    warpAffine(src, dst, M, src.size());

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("H-REFLECTION", WINDOW_AUTOSIZE);
    imshow("H-REFLECTION", dsth);
    namedWindow("V-REFLECTION", WINDOW_AUTOSIZE);
    imshow("V-REFLECTION", dstv);
    namedWindow("REFLECTION", WINDOW_AUTOSIZE);
    imshow("REFLECTION", dst);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The original image and the images reflected over the X axis, the Y axis, and both axes

Perspective transformation

For a perspective transformation, a 3 x 3 transformation matrix is needed, although the work is performed over two-dimensional images. Straight lines remain straight in the output image, but in this case, proportions change. Finding the transformation matrix is more complex than with affine transformations. When working with perspective, the coordinates of four points of the input image matrix and their corresponding coordinates on the output image matrix are used to perform this operation.

With these points and the getPerspectiveTransform OpenCV function, it is possible to find the perspective transformation matrix. After obtaining the matrix, warpPerspective is applied to obtain the output of the perspective transformation. The two functions are explained in detail here:

Mat getPerspectiveTransform(InputArray src, InputArray dst) and Mat getPerspectiveTransform(const Point2f src[], const Point2f dst[]): These return the perspective transformation matrix calculated from src and dst.

void warpPerspective(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags=INTER_LINEAR, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar()): This applies the M perspective transformation to an src image, obtaining the new dst image. The rest of the parameters are the same as in the other geometrical transformations discussed.

The example code

The following Perspective example shows you how to change the perspective of an image using the warpPerspective function. In this case, it is necessary to indicate the coordinates of four points from the first image and another four from the output to calculate the perspective transformation matrix through getPerspectiveTransform. The selected points are:

Point2f src_verts[4];
src_verts[2] = Point(195, 140);
src_verts[3] = Point(410, 120);
src_verts[1] = Point(220, 750);
src_verts[0] = Point(400, 750);
Point2f dst_verts[4];
dst_verts[2] = Point(160, 100);
dst_verts[3] = Point(530, 120);
dst_verts[1] = Point(220, 750);
dst_verts[0] = Point(400, 750);

The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    Mat dst;

    Point2f src_verts[4];
    src_verts[2] = Point(195, 140);
    src_verts[3] = Point(410, 120);
    src_verts[1] = Point(220, 750);
    src_verts[0] = Point(400, 750);
    Point2f dst_verts[4];
    dst_verts[2] = Point(160, 100);
    dst_verts[3] = Point(530, 120);
    dst_verts[1] = Point(220, 750);
    dst_verts[0] = Point(400, 750);

    // Obtain and apply the perspective transformation
    Mat M = getPerspectiveTransform(src_verts, dst_verts);
    warpPerspective(src, dst, M, src.size());

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("PERSPECTIVE", WINDOW_AUTOSIZE);
    imshow("PERSPECTIVE", dst);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The perspective results, with the points marked in the original image

Inpainting

Inpainting is the process of reconstructing damaged parts of images and videos. This process is also known as image or video interpolation. The basic idea is to simulate the process done by restorers with antiques. Nowadays, with the wide use of digital cameras, inpainting has become an automatic process that is used not only for image restoration by deleting scratches, but also for other tasks, such as object or text removal.

OpenCV supports an inpainting algorithm as of Version 2.4. The function for this purpose is:

void inpaint(InputArray src, InputArray inpaintMask, OutputArray dst, double inpaintRadius, int flags): This restores the areas indicated with non-zero values by the inpaintMask parameter in the source (src) image. The inpaintRadius parameter indicates the neighborhood to be used by the algorithm specified by flags. Two methods can be used in OpenCV:

INPAINT_NS: This is the Navier-Stokes-based method
INPAINT_TELEA: This is the method proposed by Alexandru Telea

Finally, the restored image is stored in dst.

Note
More details about the inpainting algorithms used in OpenCV can be found at http://www.ifp.illinois.edu/~yuhuang/inpainting.html.

Tip
For video inpainting, consider the video as a sequence of images and apply the algorithm over all of them.

The example code

The following inpainting example shows you how to use the inpaint function to inpaint the areas of an image specified in an image mask. The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Read the mask file
    Mat mask;
    mask = imread(argv[2]);
    cvtColor(mask, mask, COLOR_RGB2GRAY);

    // Apply the inpainting algorithms
    Mat dst, dst2;
    inpaint(src, mask, dst, 10, INPAINT_TELEA);
    inpaint(src, mask, dst2, 10, INPAINT_NS);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("MASK", WINDOW_AUTOSIZE);
    imshow("MASK", mask);
    namedWindow("INPAINT_TELEA", WINDOW_AUTOSIZE);
    imshow("INPAINT_TELEA", dst);
    namedWindow("INPAINT_NS", WINDOW_AUTOSIZE);
    imshow("INPAINT_NS", dst2);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The results of applying inpainting

Note
The first row contains the original image and the mask used. The second row contains the results from the inpainting proposed by Telea on the left-hand side and the Navier-Stokes-based method on the right-hand side.

Obtaining the inpainting mask is not an easy task. The inpainting2 example code shows you an example of how we can obtain the mask from the source image using binary thresholding through threshold(mask, mask, 235, 255, THRESH_BINARY):

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Create the mask
    Mat mask;
    cvtColor(src, mask, COLOR_RGB2GRAY);
    threshold(mask, mask, 235, 255, THRESH_BINARY);

    // Apply the inpainting algorithms
    Mat dst, dst2;
    inpaint(src, mask, dst, 10, INPAINT_TELEA);
    inpaint(src, mask, dst2, 10, INPAINT_NS);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("MASK", WINDOW_AUTOSIZE);
    imshow("MASK", mask);
    namedWindow("INPAINT_TELEA", WINDOW_AUTOSIZE);
    imshow("INPAINT_TELEA", dst);
    namedWindow("INPAINT_NS", WINDOW_AUTOSIZE);
    imshow("INPAINT_NS", dst2);

    waitKey();
    return 0;
}

The following figure shows you the output of the code:

The results of applying the inpainting algorithms without knowing the mask

Note
The first row contains the original image and the extracted mask. The second row contains the results from the inpainting proposed by Telea on the left-hand side and the Navier-Stokes-based method on the right-hand side.

The results from this example show you that obtaining a perfect mask is not always possible. Other parts of the image, such as the background or noise, are sometimes included. However, the inpainting results remain acceptable, as the resulting images are close to the ones obtained in the other case.

Denoising

Denoising or noise reduction is the process of removing noise from signals obtained from analog or digital devices. This section focuses its attention on reducing noise from digital images and videos.

Although smoothing and median filtering are good options to denoise an image, OpenCV provides other algorithms to perform this task. These are the nonlocal means and the TVL1 (Total Variation L1) algorithms. The basic idea of the nonlocal means algorithm is to replace the color of a pixel with an average of the colors from several image sub-windows that are similar to the one that comprises the pixel neighborhood. On the other hand, the TVL1 variational denoising model, which is implemented with the primal-dual optimization algorithm, considers the image-denoising process a variational problem.

Note
More information about the nonlocal means and the TVL1 denoising algorithms can be found at http://www.ipol.im/pub/art/2011/bcm_nlm and http://znah.net/rof-and-tv-l1-denoising-with-primal-dual-algorithm.html, respectively.

OpenCV provides four functions to denoise color and grayscale images following the nonlocal means approach. For the TVL1 model, one function is provided. These functions are:

void fastNlMeansDenoising(InputArray src, OutputArray dst, float h=3, int templateWindowSize=7, int searchWindowSize=21): This denoises a single grayscale image loaded in src. The templateWindowSize and searchWindowSize parameters are the sizes in pixels of the template patch that is used to compute weights and the window that is used to compute the weighted average for the given pixel. These should be odd and their recommended values are 7 and 21 pixels, respectively. The h parameter regulates the effect of the algorithm. Larger h values remove more noise defects, but with the drawback of removing more image details. The output is stored in dst.

void fastNlMeansDenoisingColored(InputArray src, OutputArray dst, float h=3, float hForColorComponents=3, int templateWindowSize=7, int searchWindowSize=21): This is a modification of the previous function for colored images. It converts the src image to the CIELAB color space and then separately denoises the L and AB components with the fastNlMeansDenoising function.

void fastNlMeansDenoisingMulti(InputArrayOfArrays srcImgs, OutputArray dst, int imgToDenoiseIndex, int temporalWindowSize, float h=3, int templateWindowSize=7, int searchWindowSize=21): This uses an image sequence to obtain a denoised image. Two more parameters are needed in this case: imgToDenoiseIndex and temporalWindowSize. The value of imgToDenoiseIndex is the index of the target image in srcImgs to be denoised. Finally, temporalWindowSize is used to establish the number of surrounding images to be used for denoising. This should be odd.

void fastNlMeansDenoisingColoredMulti(InputArrayOfArrays srcImgs, OutputArray dst, int imgToDenoiseIndex, int temporalWindowSize, float h=3, float hForColorComponents=3, int templateWindowSize=7, int searchWindowSize=21): This is based on the fastNlMeansDenoisingColored and fastNlMeansDenoisingMulti functions. The parameters are explained with the rest of the functions.

void denoise_TVL1(const std::vector<Mat>& observations, Mat& result, double lambda, int niters): This obtains a denoised image in result from one or more noisy images stored in observations. The lambda and niters parameters control the strength and the number of iterations of the algorithm.

The example code

The following denoising example shows you how to use one of the denoising functions for noise reduction over a colored image (fastNlMeansDenoisingColored). As the example uses an image without noise, some noise needs to be added. For this purpose, the following lines of code are used:

Mat noisy = src.clone();
Mat noise(src.size(), src.type());
randn(noise, 0, 50);
noisy += noise;

A Mat element is created with the same size and type as the original image to store the noise generated on it by the randn function. Finally, the noise is added to the cloned image to obtain the noisy image.

The example code is:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // Read the source file
    Mat src;
    src = imread(argv[1]);

    // Add some noise
    Mat noisy = src.clone();
    Mat noise(src.size(), src.type());
    randn(noise, 0, 50);
    noisy += noise;

    // Apply the denoising algorithm
    Mat dst;
    fastNlMeansDenoisingColored(noisy, dst, 30, 30, 7, 21);

    // Show the results
    namedWindow("ORIGINAL", WINDOW_AUTOSIZE);
    imshow("ORIGINAL", src);
    namedWindow("ORIGINAL WITH NOISE", WINDOW_AUTOSIZE);
    imshow("ORIGINAL WITH NOISE", noisy);
    namedWindow("DENOISED", WINDOW_AUTOSIZE);
    imshow("DENOISED", dst);

    waitKey();
    return 0;
}

The following figure shows you the noisy and denoised images obtained by executing the previous code:

The results of applying denoising

Summary

In this chapter, we explained methods for image enhancement and correction, including noise reduction, edge enhancement, morphological operations, geometrical transformations, and the restoration of damaged images. Different options have been presented in each case to provide the reader with all the options that can be used in OpenCV.

The next chapter will cover color spaces and how to convert between them. In addition, color-space-based segmentation and color-transfer methods will be explained.

Chapter 4. Processing Color

Color is a perceptual result created in response to the excitation of our visual system by light incident upon the retina in the visible region of the spectrum. The color of an image may contain a great deal of information, which can be used for simplifying image analysis, object identification, and extraction based on color. These procedures are usually carried out considering the pixel values in the color space in which they are defined. In this chapter, the following topics will be covered:

The color spaces used in OpenCV and how to convert an image from one color model to another
An example of how to segment a picture considering the color space in which it is defined
How to transfer the appearance of an image to another using the color transfer method

Color spaces

The human visual system is able to distinguish hundreds of thousands of colors. To obtain this information, the human retina has three types of color photoreceptor cone cells, which respond to incident radiation. Because of this, most human color perceptions can be generated with three numerical components called primaries.

To specify a color in terms of three or more particular characteristics, there are a number of methods called color spaces or color models. Selecting between them to represent an image depends on the operations to be performed, because some are more appropriate according to the required application. For example, in some color spaces such as RGB, the brightness affects the three channels, a fact that could be unfavorable for some image-processing operations. The next section explains the color spaces used in OpenCV and how to convert a picture from one color model to another.

Conversion between color spaces (cvtColor)

There are more than 150 color-space conversion methods available in OpenCV. The function provided by OpenCV in the imgproc module is void cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0). The arguments of this function are:

src: This is an input image (8-bit unsigned, 16-bit unsigned (CV_16U), or single-precision floating-point).
dst: This is the output image of the same size and depth as src.
code: This is the color space conversion code. The structure of this parameter is COLOR_SPACEsrc2SPACEdst. Some example values are COLOR_BGR2GRAY and COLOR_YCrCb2BGR.
dstCn: This is the number of channels in the destination image. If this parameter is 0 or omitted, the number of channels is derived automatically from src and code.

Examplesofthisfunctionwillbedescribedintheupcomingsections.

TipThecvtColorfunctioncanonlyconvertfromRGBtoanothercolorspaceorfromanothercolorspacetoRGB,soifthereaderwantstoconvertbetweentwocolorspacesotherthanRGB,afirstconversiontoRGBmustbedone.

VariouscolorspacesinOpenCVarediscussedintheupcomingsections.

RGB

RGB is an additive model in which an image consists of three independent image planes or channels: red, green, and blue (and, optionally, a fourth channel for transparency, sometimes called the alpha channel). To specify a particular color, each value indicates the amount of the corresponding component present in each pixel, with higher values corresponding to brighter pixels. This color space is widely used because it corresponds to the three photoreceptors of the human eye.

Note

The default color format in OpenCV is often referred to as RGB, but it is actually stored as BGR (the channels are reversed).

The example code

The following BGRsplit example shows you how to load an RGB image, splitting and showing each particular channel in gray and in color. The first part of the code is used to load and show the picture:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

vector<Mat> showSeparatedChannels(vector<Mat> channels);

int main(int argc, const char** argv)
{
    // Load the image
    Mat image = imread("BGR.png");
    imshow("Picture", image);

The next part of the code splits the picture into each channel and shows it:

    vector<Mat> channels;
    split(image, channels);

    // show channels in grayscale
    namedWindow("Blue channel (gray)", WINDOW_AUTOSIZE);
    imshow("Blue channel (gray)", channels[0]);
    namedWindow("Green channel (gray)", WINDOW_AUTOSIZE);
    imshow("Green channel (gray)", channels[1]);
    namedWindow("Red channel (gray)", WINDOW_AUTOSIZE);
    imshow("Red channel (gray)", channels[2]);

    // show channels in BGR
    vector<Mat> separatedChannels = showSeparatedChannels(channels);
    namedWindow("Blue channel", WINDOW_AUTOSIZE);
    imshow("Blue channel", separatedChannels[0]);
    namedWindow("Green channel", WINDOW_AUTOSIZE);
    imshow("Green channel", separatedChannels[1]);
    namedWindow("Red channel", WINDOW_AUTOSIZE);
    imshow("Red channel", separatedChannels[2]);

    waitKey(0);
    return 0;
}

It is worth noting the use of the void split(InputArray m, OutputArrayOfArrays mv) OpenCV function to split the image m into its three channels, saving them in a vector of Mat called mv. Conversely, the void merge(InputArrayOfArrays mv, OutputArray dst) function is used to merge all the mv channels into one dst image. Furthermore, a function named showSeparatedChannels is used to create three color images representing each of the channels. For each channel, the function generates a vector<Mat> aux composed of the channel itself and two auxiliary channels with all their values set to 0, which represent the other two channels of the color model. Finally, the aux picture is merged, generating an image with only one channel filled. This function code, which will also be used in other examples of this chapter, is as follows:

vector<Mat> showSeparatedChannels(vector<Mat> channels){
    vector<Mat> separatedChannels;
    // create an image for each channel
    for (int i = 0; i < 3; i++){
        Mat zer = Mat::zeros(channels[0].rows, channels[0].cols,
                             channels[0].type());
        vector<Mat> aux;
        for (int j = 0; j < 3; j++){
            if (j == i)
                aux.push_back(channels[i]);
            else
                aux.push_back(zer);
        }
        Mat chann;
        merge(aux, chann);
        separatedChannels.push_back(chann);
    }
    return separatedChannels;
}

The following figure shows you the output of the example:

The original RGB image and channel splitting

Grayscale

In grayscale, the value of each pixel is represented as a single value carrying only the intensity information, composing an image exclusively formed from different shades of gray. The color space conversion codes to convert between RGB and grayscale (Y) in OpenCV using cvtColor are COLOR_BGR2GRAY, COLOR_RGB2GRAY, COLOR_GRAY2BGR, and COLOR_GRAY2RGB. These transformations are internally computed as follows:

Y = 0.299 * R + 0.587 * G + 0.114 * B

and, for the inverse direction, R = G = B = Y.

Note

Note from the preceding formula that it is not possible to retrieve colors directly from a grayscale image.

The example code

The following Gray example shows you how to convert an RGB image to grayscale, showing the two pictures. The example code is:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;

int main(int argc, const char** argv)
{
    // Load the image
    Mat image = imread("Lovebird.jpg");
    namedWindow("Picture", WINDOW_AUTOSIZE);
    imshow("Picture", image);

    Mat imageGray;
    cvtColor(image, imageGray, COLOR_BGR2GRAY);
    namedWindow("Gray picture", WINDOW_AUTOSIZE);
    imshow("Gray picture", imageGray);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

The original RGB image and the grayscale conversion

Note

This method of converting from RGB to grayscale has the disadvantage of losing the contrast of the original image. Chapter 6, Computational Photography, of this book describes the decolorization process, which makes this same conversion while overcoming this issue.

CIE XYZ

The CIE XYZ system describes color with a luminance component Y, which is related to the brightness sensitivity of human vision, and two additional channels, X and Z, standardized by the Commission Internationale de L'Éclairage (CIE) using statistics from experiments with several human observers. This color space is used to report color from measuring instruments, such as a colorimeter or a spectrophotometer, and it is useful when a consistent color representation across different devices is needed. The main problem with this color space is that the colors are scaled in a non-uniform manner. This fact caused the CIE to adopt the CIE L*a*b* and CIE L*u*v* color models.

The color space conversion codes to convert between RGB and CIE XYZ in OpenCV using cvtColor are COLOR_BGR2XYZ, COLOR_RGB2XYZ, COLOR_XYZ2BGR, and COLOR_XYZ2RGB. The forward transformation is computed as follows:

X = 0.412453 * R + 0.357580 * G + 0.180423 * B
Y = 0.212671 * R + 0.715160 * G + 0.072169 * B
Z = 0.019334 * R + 0.119193 * G + 0.950227 * B

The example code

The following CIExyz example shows you how to convert an RGB image to the CIE XYZ color space, splitting and showing each particular channel in gray and in color. The first part of the code is used to load and convert the picture:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

vector<Mat> showSeparatedChannels(vector<Mat> channels);

int main(int argc, const char** argv)
{
    // Load the image
    Mat image = imread("Lovebird.jpg");
    imshow("Picture", image);

    // transform to CIE XYZ
    cvtColor(image, image, COLOR_BGR2XYZ);

The next part of the code splits the picture into each of the CIE XYZ channels and shows them:

    vector<Mat> channels;
    split(image, channels);

    // show channels in grayscale
    namedWindow("X channel (gray)", WINDOW_AUTOSIZE);
    imshow("X channel (gray)", channels[0]);
    namedWindow("Y channel (gray)", WINDOW_AUTOSIZE);
    imshow("Y channel (gray)", channels[1]);
    namedWindow("Z channel (gray)", WINDOW_AUTOSIZE);
    imshow("Z channel (gray)", channels[2]);

    // show channels in BGR
    vector<Mat> separatedChannels = showSeparatedChannels(channels);
    for (int i = 0; i < 3; i++){
        cvtColor(separatedChannels[i], separatedChannels[i], COLOR_XYZ2BGR);
    }
    namedWindow("X channel", WINDOW_AUTOSIZE);
    imshow("X channel", separatedChannels[0]);
    namedWindow("Y channel", WINDOW_AUTOSIZE);
    imshow("Y channel", separatedChannels[1]);
    namedWindow("Z channel", WINDOW_AUTOSIZE);
    imshow("Z channel", separatedChannels[2]);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

The original RGB image and the CIE XYZ channel splitting

YCrCb

This color space is widely used in video- and image-compression schemes, such as MPEG and JPEG. It is not an absolute color space, because it is a way to encode the RGB color space. The Y channel represents luminance, while Cr and Cb represent the red-difference (the difference between the R channel in the RGB color space and Y) and blue-difference (the difference between the B channel in the RGB color space and Y) chroma components, respectively.

The color space conversion codes to convert between RGB and YCrCb in OpenCV using cvtColor are COLOR_BGR2YCrCb, COLOR_RGB2YCrCb, COLOR_YCrCb2BGR, and COLOR_YCrCb2RGB. The forward transformation is computed as follows:

Y = 0.299 * R + 0.587 * G + 0.114 * B
Cr = (R - Y) * 0.713 + delta
Cb = (B - Y) * 0.564 + delta

where delta is 128 for 8-bit images, 32768 for 16-bit images, and 0.5 for floating-point images. The inverse transformation is then:

R = Y + 1.403 * (Cr - delta)
G = Y - 0.714 * (Cr - delta) - 0.344 * (Cb - delta)
B = Y + 1.773 * (Cb - delta)

The example code

The following YCrCbcolor example shows you how to convert an RGB image to the YCrCb color space, splitting and showing each particular channel in gray and in color. The first part of the code is used to load and convert the picture:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

vector<Mat> showSeparatedChannels(vector<Mat> channels);

int main(int argc, const char** argv)
{
    // Load the image
    Mat image = imread("Lovebird.jpg");
    imshow("Picture", image);

    // transform to YCrCb
    cvtColor(image, image, COLOR_BGR2YCrCb);

The next part of the code splits the picture into each of the YCrCb channels and shows them:

    vector<Mat> channels;
    split(image, channels);

    // show channels in grayscale
    namedWindow("Y channel (gray)", WINDOW_AUTOSIZE);
    imshow("Y channel (gray)", channels[0]);
    namedWindow("Cr channel (gray)", WINDOW_AUTOSIZE);
    imshow("Cr channel (gray)", channels[1]);
    namedWindow("Cb channel (gray)", WINDOW_AUTOSIZE);
    imshow("Cb channel (gray)", channels[2]);

    // show channels in BGR
    vector<Mat> separatedChannels = showSeparatedChannels(channels);
    for (int i = 0; i < 3; i++){
        cvtColor(separatedChannels[i], separatedChannels[i], COLOR_YCrCb2BGR);
    }
    namedWindow("Y channel", WINDOW_AUTOSIZE);
    imshow("Y channel", separatedChannels[0]);
    namedWindow("Cr channel", WINDOW_AUTOSIZE);
    imshow("Cr channel", separatedChannels[1]);
    namedWindow("Cb channel", WINDOW_AUTOSIZE);
    imshow("Cb channel", separatedChannels[2]);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

The original RGB image and the YCrCb channel splitting

HSV

The HSV color space belongs to the group of so-called hue-oriented color-coordinate systems. This type of color model closely emulates models of human color perception. While in other color models, such as RGB, an image is treated as an additive result of three base colors, the three channels of HSV represent hue (H gives a measure of the spectral composition of a color), saturation (S gives the proportion of pure light of the dominant wavelength, which indicates how far a color is from a gray of equal brightness), and value (V gives the brightness relative to the brightness of a similarly illuminated white color), corresponding to the intuitive appeal of tint, shade, and tone. HSV is widely used to make comparisons of colors because H is almost independent of light variations. The following figure shows you this color model, representing each of the channels as a part of a cylinder:

The color space conversion codes to convert between RGB and HSV in OpenCV using cvtColor are COLOR_BGR2HSV, COLOR_RGB2HSV, COLOR_HSV2BGR, and COLOR_HSV2RGB. In this case, it is worth noting that if the src image format is 8-bit or 16-bit, cvtColor first converts it to a floating-point format, scaling the values between 0 and 1. After that, the transformations are computed as follows:

V = max(R, G, B)
S = (V - min(R, G, B)) / V if V is not 0, and 0 otherwise
H = 60 * (G - B) / (V - min(R, G, B)) if V = R
H = 120 + 60 * (B - R) / (V - min(R, G, B)) if V = G
H = 240 + 60 * (R - G) / (V - min(R, G, B)) if V = B

If H < 0, then H = H + 360. Finally, the values are reconverted to the destination data type.

The example code

The following HSVcolor example shows you how to convert an RGB image to the HSV color space, splitting and showing each particular channel in grayscale and the HSV image. The example code is:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

int main(int argc, const char** argv)
{
    // Load the image
    Mat image = imread("Lovebird.jpg");
    imshow("Picture", image);

    // transform to HSV
    cvtColor(image, image, COLOR_BGR2HSV);

    vector<Mat> channels;
    split(image, channels);

    // show channels in grayscale
    namedWindow("H channel (gray)", WINDOW_AUTOSIZE);
    imshow("H channel (gray)", channels[0]);
    namedWindow("S channel (gray)", WINDOW_AUTOSIZE);
    imshow("S channel (gray)", channels[1]);
    namedWindow("V channel (gray)", WINDOW_AUTOSIZE);
    imshow("V channel (gray)", channels[2]);

    namedWindow("HSV image (all channels)", WINDOW_AUTOSIZE);
    imshow("HSV image (all channels)", image);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

The original RGB image, HSV conversion, and channel splitting

Note

The imshow function of OpenCV assumes that the image to be shown is stored in BGR order, so it displays an image in any other color space incorrectly. If you have an image in another color space and you want to display it correctly, you first have to convert it back to BGR.

HLS

The HLS color space belongs to the group of hue-oriented color-coordinate systems, such as the HSV color model explained previously. This model was developed to specify the values of hue, lightness, and saturation of a color in each channel. The difference with respect to the HSV color model is that the lightness of a pure color defined by HLS is equal to the lightness of a medium gray, while the brightness of a pure color defined by HSV is equal to the brightness of white.

The color space conversion codes to convert between RGB and HLS in OpenCV using cvtColor are COLOR_BGR2HLS, COLOR_RGB2HLS, COLOR_HLS2BGR, and COLOR_HLS2RGB. In this case, as with HSV, if the src image format is 8-bit or 16-bit, cvtColor first converts it to a floating-point format, scaling the values between 0 and 1. After that, the transformations are computed as follows:

Vmax = max(R, G, B)
Vmin = min(R, G, B)
L = (Vmax + Vmin) / 2
S = (Vmax - Vmin) / (Vmax + Vmin) if L < 0.5
S = (Vmax - Vmin) / (2 - (Vmax + Vmin)) if L >= 0.5

H is computed as in the HSV model. If H < 0, then H = H + 360. Finally, the values are reconverted to the destination data type.

The example code

The following HLScolor example shows you how to convert an RGB image to the HLS color space, splitting and showing each particular channel in grayscale and the HLS image. The example code is:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

int main(int argc, const char** argv)
{
    // Load the image
    Mat image = imread("Lovebird.jpg");
    imshow("Picture", image);

    // transform to HLS
    cvtColor(image, image, COLOR_BGR2HLS);

    vector<Mat> channels;
    split(image, channels);

    // show channels in grayscale
    namedWindow("H channel (gray)", WINDOW_AUTOSIZE);
    imshow("H channel (gray)", channels[0]);
    namedWindow("L channel (gray)", WINDOW_AUTOSIZE);
    imshow("L channel (gray)", channels[1]);
    namedWindow("S channel (gray)", WINDOW_AUTOSIZE);
    imshow("S channel (gray)", channels[2]);

    namedWindow("HLS image (all channels)", WINDOW_AUTOSIZE);
    imshow("HLS image (all channels)", image);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

The original RGB image, HLS conversion, and channel splitting

CIE L*a*b*

The CIE L*a*b* color space is the second uniform color space standardized by CIE after CIE L*u*v*, and it is derived from the CIE XYZ space and a white reference point. It is actually the most complete color space specified by CIE; it was created to be device-independent, like the CIE XYZ model, and to be used as a reference. It is able to describe the colors visible to the human eye. The three channels represent the lightness of the color (L*), its position between magenta and green (a*), and its position between yellow and blue (b*).

The color space conversion codes to convert between RGB and CIE L*a*b* in OpenCV using cvtColor are COLOR_BGR2Lab, COLOR_RGB2Lab, COLOR_Lab2BGR, and COLOR_Lab2RGB. The procedure used to compute these transformations is explained at http://docs-hoffmann.de/cielab03022003.pdf.

The example code

The following CIElab example shows you how to convert an RGB image to the CIE L*a*b* color space, splitting and showing each particular channel in grayscale and the CIE L*a*b* image. The example code is:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

int main(int argc, const char** argv)
{
    // Load the image
    Mat image = imread("Lovebird.jpg");
    imshow("Picture", image);

    // transform to CIE Lab
    cvtColor(image, image, COLOR_BGR2Lab);

    vector<Mat> channels;
    split(image, channels);

    // show channels in grayscale
    namedWindow("L channel (gray)", WINDOW_AUTOSIZE);
    imshow("L channel (gray)", channels[0]);
    namedWindow("a channel (gray)", WINDOW_AUTOSIZE);
    imshow("a channel (gray)", channels[1]);
    namedWindow("b channel (gray)", WINDOW_AUTOSIZE);
    imshow("b channel (gray)", channels[2]);

    namedWindow("CIE Lab image (all channels)", WINDOW_AUTOSIZE);
    imshow("CIE Lab image (all channels)", image);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

The original RGB image, CIE L*a*b* conversion, and channel splitting

CIE L*u*v*

The CIE L*u*v* color space is the first uniform color space standardized by CIE. It is a simple-to-compute transformation of the CIE XYZ space and a white reference point, which attempts perceptual uniformity. Like the CIE L*a*b* color space, it was created to be device-independent. The three channels represent the lightness of the color (L*), its position between green and red (u*), and, for the last one, mostly blue and purple type colors (v*). This color model is useful for additive mixtures of lights due to its linear addition properties.

The color space conversion codes to convert between RGB and CIE L*u*v* in OpenCV using cvtColor are COLOR_BGR2Luv, COLOR_RGB2Luv, COLOR_Luv2BGR, and COLOR_Luv2RGB. The procedure used to compute these transformations can be seen at http://docs.opencv.org/trunk/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor

The example code

The following CIELuvcolor example shows you how to convert an RGB image to the CIE L*u*v* color space, splitting and showing each particular channel in grayscale and the CIE L*u*v* image. The example code is:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

int main(int argc, const char** argv)
{
    // Load the image
    Mat image = imread("Lovebird.jpg");
    imshow("Picture", image);

    // transform to CIE Luv
    cvtColor(image, image, COLOR_BGR2Luv);

    vector<Mat> channels;
    split(image, channels);

    // show channels in grayscale
    namedWindow("L channel (gray)", WINDOW_AUTOSIZE);
    imshow("L channel (gray)", channels[0]);
    namedWindow("u channel (gray)", WINDOW_AUTOSIZE);
    imshow("u channel (gray)", channels[1]);
    namedWindow("v channel (gray)", WINDOW_AUTOSIZE);
    imshow("v channel (gray)", channels[2]);

    namedWindow("CIE Luv image (all channels)", WINDOW_AUTOSIZE);
    imshow("CIE Luv image (all channels)", image);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

The original RGB image, CIE L*u*v* conversion, and channel splitting

Bayer

The Bayer pixel-space composition is widely used in digital cameras with only one image sensor. Unlike cameras with three sensors (one per RGB channel, each able to obtain all the information for a particular component), in one-sensor cameras every pixel is covered by a different color filter, so each pixel is only measured in that color. The missing color information is extrapolated from its neighbors using the Bayer method. It allows you to get complete color pictures from a single plane where the pixels are interleaved as follows:

A Bayer pattern example

Note

Note that the Bayer pattern is represented by more G pixels than R and B because the human eye is more sensitive to green frequencies.

There are several modifications of the shown pattern, obtained by shifting the pattern by one pixel in any direction. The color space conversion code to convert from Bayer to RGB in OpenCV is defined considering the components of the second and third columns of the second row (X and Y, respectively) as COLOR_BayerXY2BGR. For example, the pattern of the previous picture has a "BG" type, so its conversion code is COLOR_BayerBG2BGR.

The example code

The following Bayer example shows you how to convert a picture defined by an RG Bayer pattern, obtained from an image sensor, to an RGB image. The example code is:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;

int main(int argc, const char** argv)
{
    // Show the Bayer image in color
    Mat bayer_color = imread("Lovebird_bayer_color.jpg");
    namedWindow("Bayer picture in color", WINDOW_AUTOSIZE);
    imshow("Bayer picture in color", bayer_color);

    // Load the Bayer image as a single-channel picture
    Mat bayer = imread("Lovebird_bayer.jpg", IMREAD_GRAYSCALE);
    namedWindow("Bayer picture", WINDOW_AUTOSIZE);
    imshow("Bayer picture", bayer);

    Mat imageColor;
    cvtColor(bayer, imageColor, COLOR_BayerRG2BGR);
    namedWindow("Color picture", WINDOW_AUTOSIZE);
    imshow("Color picture", imageColor);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

The Bayer pattern image and RGB conversion

Color-space-based segmentation

Each color space represents an image indicating the numeric value of the specific characteristic measured by each channel on each pixel. Considering these characteristics, it is possible to partition the color space using linear boundaries (for example, planes in three-dimensional spaces, one per channel), allowing you to classify each pixel according to the partition it lies in and therefore to select the set of pixels with predefined characteristics. This idea can be used to segment the objects of an image we are interested in.

OpenCV provides the void inRange(InputArray src, InputArray lowerb, InputArray upperb, OutputArray dst) function to check whether the elements of an array lie between the elements of two other arrays. With respect to color-space-based segmentation, this function allows you to obtain the set of pixels of the src image whose channel values lie between the lowerb lower boundaries and the upperb upper boundaries, producing the dst image.

Note

The lowerb and upperb boundaries are usually defined as Scalar(x, y, z), where x, y, and z are the numerical values of each channel defined as lower or upper boundaries.

The following examples show you how to detect pixels that can be considered to be skin. It has been observed that skin color differs more in intensity than in chrominance, so normally the luminance component is not considered for skin detection. This fact makes it difficult to detect skin in a picture represented in RGB because of the dependence of this color space on luminance, so the HSV and YCrCb color models are used instead. It is worth noting that, for this type of segmentation, it is necessary to know or obtain the boundary values for each channel.

HSV segmentation

As stated previously, HSV is widely used to make comparisons of colors because H is almost independent of light variations, so it is useful in skin detection. In this example, the lower boundaries (0, 10, 60) and the upper boundaries (20, 150, 255) are selected to detect the skin in each pixel. The example code is:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

int main()
{
    // Load the image
    Mat image = imread("hand.jpg");
    namedWindow("Picture", WINDOW_AUTOSIZE);
    imshow("Picture", image);

    Mat hsv;
    cvtColor(image, hsv, COLOR_BGR2HSV);

    // select pixels
    Mat bw;
    inRange(hsv, Scalar(0, 10, 60), Scalar(20, 150, 255), bw);
    namedWindow("Selected pixels", WINDOW_AUTOSIZE);
    imshow("Selected pixels", bw);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

Skin detection using the HSV color space

YCrCb segmentation

The YCrCb color space reduces the redundancy of the RGB color channels and represents color with independent components. Considering that the luminance and chrominance components are separated, this space is a good choice for skin detection.

The following example uses the YCrCb color space for skin detection, using the lower boundaries (0, 133, 77) and the upper boundaries (255, 173, 177) in each pixel. The example code is:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

int main()
{
    // Load the image
    Mat image = imread("hand.jpg");
    namedWindow("Picture", WINDOW_AUTOSIZE);
    imshow("Picture", image);

    Mat ycrcb;
    cvtColor(image, ycrcb, COLOR_BGR2YCrCb);

    // select pixels
    Mat bw;
    inRange(ycrcb, Scalar(0, 133, 77), Scalar(255, 173, 177), bw);
    namedWindow("Selected pixels", WINDOW_AUTOSIZE);
    imshow("Selected pixels", bw);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

Skin detection using the YCrCb color space

Note

For more image-segmentation methods, refer to Chapter 4 of OpenCV Essentials by Packt Publishing.

Color transfer

Another task commonly carried out in image processing is modifying the color of an image, specifically in cases where it is necessary to remove a dominant or undesirable color cast. One of these methods is called color transfer, which carries out a set of color corrections that borrow the color characteristics of a source image and transfer the appearance of the source image to the target image.

The example code

The following colorTransfer example shows you how to transfer color from a source image to a target image. This method first converts the image color space to CIE L*a*b*. Next, it splits the channels of the source and target images. After that, it fits the channel distribution of one image to the other using the mean and the standard deviation. Finally, the channels are merged back together and converted to RGB.

Note

For the full theoretical details of the transformation used in the example, refer to Color Transfer between Images at http://www.cs.tau.ac.il/~turkel/imagepapers/ColorTransfer.pdf.

The first part of the code converts the images to the CIE L*a*b* color space, while also changing the type of the images to CV_32FC1:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

int main(int argc, const char** argv)
{
    // Load the images
    Mat src = imread("clock_tower.jpg");
    Mat tar = imread("big_ben.jpg");

    // Convert to Lab space and CV_32FC1
    Mat src_lab, tar_lab;
    cvtColor(src, src_lab, COLOR_BGR2Lab);
    cvtColor(tar, tar_lab, COLOR_BGR2Lab);
    src_lab.convertTo(src_lab, CV_32FC1);
    tar_lab.convertTo(tar_lab, CV_32FC1);

The next part of the code performs the color transfer as stated previously:

    // Find the mean and std of each channel for each image
    Mat mean_src, mean_tar, stdd_src, stdd_tar;
    meanStdDev(src_lab, mean_src, stdd_src);
    meanStdDev(tar_lab, mean_tar, stdd_tar);

    // Split into individual channels
    vector<Mat> src_chan, tar_chan;
    split(src_lab, src_chan);
    split(tar_lab, tar_chan);

    // For each channel, calculate the color distribution
    for (int i = 0; i < 3; i++){
        tar_chan[i] -= mean_tar.at<double>(i);
        tar_chan[i] *= (stdd_src.at<double>(i) / stdd_tar.at<double>(i));
        tar_chan[i] += mean_src.at<double>(i);
    }

    // Merge the channels, convert each channel to CV_8UC1, and convert to BGR
    Mat output;
    merge(tar_chan, output);
    output.convertTo(output, CV_8UC1);
    cvtColor(output, output, COLOR_Lab2BGR);

    // show pictures
    namedWindow("Source image", WINDOW_AUTOSIZE);
    imshow("Source image", src);
    namedWindow("Target image", WINDOW_AUTOSIZE);
    imshow("Target image", tar);
    namedWindow("Result image", WINDOW_AUTOSIZE);
    imshow("Result image", output);

    waitKey(0);
    return 0;
}

The following figure shows you the output of the code:

A night appearance color-transfer example

Summary

In this chapter, we provided a deeper view of the color spaces used in OpenCV and showed you how to convert between them using the cvtColor function. Furthermore, the possibilities of image processing using different color models and the importance of selecting the correct color space for the operations we need to perform were highlighted. To this end, color-space-based segmentation and color-transfer methods were implemented.

The next chapter will cover image-processing techniques used for video or a sequence of images. We will see how to implement video stabilization, superresolution, and stitching algorithms with OpenCV.

Chapter 5. Image Processing for Video

This chapter shows you different techniques related to image processing for video. While most classical image processing deals with static images, video-based processing is becoming popular and affordable.

This chapter covers the following topics:

Video stabilization
The video superresolution process
Image stitching

In this chapter, we will work with a video sequence or a live camera directly. The output of image processing may be either a set of modified images or useful high-level information. Most image-processing techniques consider images as a two-dimensional digital signal and apply different techniques to it. In this chapter, a sequence of images from a video or live camera will be used to make or improve a new enhanced sequence with different high-level techniques. Thus, more useful information is obtained; that is, a third, time dimension is incorporated.

Video stabilization

Video stabilization refers to a family of methods used to reduce the blurring associated with the motion of the camera. In other words, it compensates for any angular movement, equivalent to yaw, pitch, and roll, and for the x and y translations of the camera. The first image stabilizers appeared in the early 60s. These systems were able to slightly compensate for camera shakes and involuntary movements. They were controlled by gyroscopes and accelerometers, based on mechanisms that could cancel or reduce unwanted movement by changing the position of a lens. Currently, these methods are widely used in binoculars, video cameras, and telescopes.

There are various methods for image or video stabilization, and this chapter focuses on the most extended families of methods:

Mechanical stabilization systems: These systems use a mechanical system on the camera lens so that, when the camera is moved, motion is detected by accelerometers and gyroscopes, and the system generates a compensating movement of the lens. These systems will not be considered here.
Digital stabilization systems: These are normally used in video, and they act directly on the image obtained from the camera. In these systems, the surface of the stabilized image is slightly smaller than the source image's surface. When the camera is moved, the captured image is shifted to compensate for this movement. Although these techniques effectively cancel movement, the usable area of the sensor is reduced, sacrificing resolution and image clarity.

Video-stabilization algorithms usually encompass the following steps:

General steps of video-stabilization algorithms

This chapter focuses on the videostab module in OpenCV 3.0 Alpha, which contains a set of functions and classes that can be used to solve the video-stabilization problem.

Let's explore the general process in more detail. Video stabilization is achieved by a first estimation of the inter-frame motion between consecutive frames using the RANSAC method. At the end of this step, an array of 3x3 matrices is obtained, each of which describes the motion between a pair of consecutive frames. Global motion estimation is very important in this step, and it affects the accuracy of the final stabilized sequence.

Note

You can find more detailed information about the RANSAC method at http://en.wikipedia.org/wiki/RANSAC.

The second step generates a new sequence of frames based on the estimated motion. Additional processing is performed, such as smoothing, deblurring, border extrapolation, and so on, to improve the quality of stabilization.

The third step removes the annoying irregular perturbations; refer to the following figure. There are approaches that assume a camera-motion model, which work well when some assumptions can be made about the actual camera motion.

Removing the irregular perturbations

In the OpenCV examples ([opencv_source_code]/samples/cpp/videostab.cpp), a video-stabilization program example can be found. For the following videoStabilizer example, the videoStabilizer.pro project needs these libraries: -lopencv_core300, -lopencv_highgui300, -lopencv_features2d300, -lopencv_videoio300, and -lopencv_videostab300.

The following videoStabilizer example has been created using the videostab module of OpenCV 3.0 Alpha:

#include <string>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/videostab.hpp>

using namespace std;
using namespace cv;
using namespace cv::videostab;

void processing(Ptr<IFrameSource> stabilizedFrames, string outputPath);

int main(int argc, const char **argv)
{
    Ptr<IFrameSource> stabilizedFrames;
    try
    {
        // 1 - Prepare the input video and check it
        string inputPath;
        string outputPath;
        if (argc > 1)
            inputPath = argv[1];
        else
            inputPath = ".\\cube4.avi";
        if (argc > 2)
            outputPath = argv[2];
        else
            outputPath = ".\\cube4_stabilized.avi";

        Ptr<VideoFileSource> source = makePtr<VideoFileSource>(inputPath);
        cout << "frame count (rough): " << source->count() << endl;

        // 2 - Prepare the motion estimator
        // first, prepare the motion estimation builder, RANSAC L2
        double min_inlier_ratio = 0.1;
        Ptr<MotionEstimatorRansacL2> est =
            makePtr<MotionEstimatorRansacL2>(MM_AFFINE);
        RansacParams ransac = est->ransacParams();
        ransac.size = 3;
        ransac.thresh = 5;
        ransac.eps = 0.5;
        est->setRansacParams(ransac);
        est->setMinInlierRatio(min_inlier_ratio);

        // second, create a feature detector
        int nkps = 1000;
        Ptr<GoodFeaturesToTrackDetector> feature_detector =
            makePtr<GoodFeaturesToTrackDetector>(nkps);

        // third, create the motion estimator
        Ptr<KeypointBasedMotionEstimator> motionEstBuilder =
            makePtr<KeypointBasedMotionEstimator>(est);
        motionEstBuilder->setDetector(feature_detector);
        Ptr<IOutlierRejector> outlierRejector = makePtr<NullOutlierRejector>();
        motionEstBuilder->setOutlierRejector(outlierRejector);

        // 3 - Prepare the stabilizer
        StabilizerBase *stabilizer = 0;
        // first, prepare the one or two pass stabilizer
        bool isTwoPass = 1;
        int radius_pass = 15;
        if (isTwoPass)
        {
            // with a two pass stabilizer
            bool est_trim = true;
            TwoPassStabilizer *twoPassStabilizer = new TwoPassStabilizer();
            twoPassStabilizer->setEstimateTrimRatio(est_trim);
            twoPassStabilizer->setMotionStabilizer(
                makePtr<GaussianMotionFilter>(radius_pass));
            stabilizer = twoPassStabilizer;
        }
        else
        {
            // with a one pass stabilizer
            OnePassStabilizer *onePassStabilizer = new OnePassStabilizer();
            onePassStabilizer->setMotionFilter(
                makePtr<GaussianMotionFilter>(radius_pass));
            stabilizer = onePassStabilizer;
        }

        // second, set up the parameters
        int radius = 15;
        double trim_ratio = 0.1;
        bool incl_constr = false;
        stabilizer->setFrameSource(source);
        stabilizer->setMotionEstimator(motionEstBuilder);
        stabilizer->setRadius(radius);
        stabilizer->setTrimRatio(trim_ratio);
        stabilizer->setCorrectionForInclusion(incl_constr);
        stabilizer->setBorderMode(BORDER_REPLICATE);

        // cast stabilizer to simple frame source interface to read
        // stabilized frames
        stabilizedFrames.reset(dynamic_cast<IFrameSource*>(stabilizer));

        // 4 - Processing the stabilized frames. The results are shown and saved.
        processing(stabilizedFrames, outputPath);
    }
    catch (const exception &e)
    {
        cout << "error: " << e.what() << endl;
        stabilizedFrames.release();
        return -1;
    }
    stabilizedFrames.release();
    return 0;
}

void processing(Ptr<IFrameSource> stabilizedFrames, string outputPath)
{
    VideoWriter writer;
    Mat stabilizedFrame;
    int nframes = 0;
    double outputFps = 25;

    // for each stabilized frame
    while (!(stabilizedFrame = stabilizedFrames->nextFrame()).empty())
    {
        nframes++;
        // init writer (once) and save stabilized frame
        if (!outputPath.empty())
        {
            if (!writer.isOpened())
                writer.open(outputPath, VideoWriter::fourcc('X','V','I','D'),
                            outputFps, stabilizedFrame.size());
            writer << stabilizedFrame;
        }
        imshow("stabilizedFrame", stabilizedFrame);
        char key = static_cast<char>(waitKey(3));
        if (key == 27) { cout << endl; break; }
    }
    cout << "processed frames: " << nframes << endl;
    cout << "finished" << endl;
}

This example accepts the name of an input video file or falls back to a default video file (.\cube4.avi). The resulting video is displayed and then saved as .\cube4_stabilized.avi. Note how the videostab.hpp header is included and the cv::videostab namespace is used. The example takes four important steps. The first step prepares the input video path; this example uses the standard command-line input arguments (inputPath = argv[1]) to select the video file. If no input video file is given, it uses the default video file (.\cube4.avi).

The second step builds a motion estimator. A robust RANSAC-based global 2D method is created for the motion estimator using an OpenCV smart pointer (Ptr<MotionEstimatorRansacL2> est = makePtr<MotionEstimatorRansacL2>(MM_AFFINE)). There are different motion models to stabilize the video:

- MM_TRANSLATION = 0
- MM_TRANSLATION_AND_SCALE = 1
- MM_ROTATION = 2
- MM_RIGID = 3
- MM_SIMILARITY = 4
- MM_AFFINE = 5
- MM_HOMOGRAPHY = 6
- MM_UNKNOWN = 7

There is a trade-off between stabilization accuracy and computational time: the simpler motion models are faster but less accurate, while the more complex models are more accurate but slower.

The RANSAC object is now created (RansacParams ransac = est->ransacParams()) and its parameters are set (ransac.size, ransac.thresh, and ransac.eps). A feature detector is also needed to estimate the movement between consecutive frames, which will be used by the stabilizer. This example uses the GoodFeaturesToTrackDetector method to detect (nkps = 1000) salient features in each frame. Then, it uses the robust RANSAC and feature-detector methods to create the motion estimator with the Ptr<KeypointBasedMotionEstimator> motionEstBuilder = makePtr<KeypointBasedMotionEstimator>(est) class, setting the feature detector with motionEstBuilder->setDetector(feature_detector).

RANSAC parameters:

- size: the subset size
- thresh: the maximum error to classify as inliers
- eps: the maximum outliers ratio
- prob: the probability of success
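The roles of these four parameters can be seen in a toy RANSAC that estimates a 1D translation between two point sets. This is only an illustrative, self-contained sketch of the general idea; it is not OpenCV's MotionEstimatorRansacL2 implementation, and the function name is hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

// Toy RANSAC: estimate a 1D translation dst[i] = src[i] + d from noisy
// correspondences, illustrating the size/thresh/eps/prob parameters.
double ransacTranslation(const std::vector<double>& src,
                         const std::vector<double>& dst,
                         double thresh,   // max error to count as inlier
                         double eps,      // assumed max outlier ratio
                         double prob)     // desired success probability
{
    const int size = 1; // subset size: one pair fixes a translation
    // Iterations needed to draw an all-inlier sample with probability
    // `prob`: log(1-prob) / log(1-(1-eps)^size)
    int iters = (int)std::ceil(std::log(1.0 - prob) /
                               std::log(1.0 - std::pow(1.0 - eps, size)));
    double bestD = 0.0;
    int bestInliers = -1;
    for (int it = 0; it < iters; ++it)
    {
        int idx = std::rand() % (int)src.size(); // random minimal subset
        double d = dst[idx] - src[idx];          // candidate model
        int inliers = 0;
        for (size_t i = 0; i < src.size(); ++i)  // score by inlier count
            if (std::fabs(dst[i] - (src[i] + d)) <= thresh)
                ++inliers;
        if (inliers > bestInliers) { bestInliers = inliers; bestD = d; }
    }
    return bestD;
}
```

With a higher assumed outlier ratio (eps) or a higher success probability (prob), the iteration count grows; a tighter thresh makes inlier classification stricter, exactly as in ransac.size, ransac.thresh, and ransac.eps above.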

The third step creates a stabilizer, which needs the previous motion estimator. You can select (isTwoPass = 1) a one- or two-pass stabilizer. If you use the two-pass stabilizer (TwoPassStabilizer* twoPassStabilizer = new TwoPassStabilizer()), the results are usually better; however, it is computationally slower. If you use the other option, the one-pass stabilizer (OnePassStabilizer* onePassStabilizer = new OnePassStabilizer()), the results are worse but the response is faster. The stabilizer needs other options set to work correctly, such as the source video file (stabilizer->setFrameSource(source)) and the motion estimator (stabilizer->setMotionEstimator(motionEstBuilder)). It also needs to be cast to a simple frame source to read the stabilized frames (stabilizedFrames.reset(dynamic_cast<IFrameSource*>(stabilizer))).

The last step stabilizes the video using the created stabilizer. The processing(Ptr<IFrameSource> stabilizedFrames) function is created to process and stabilize each frame. This function takes a path to save the resulting video (string outputPath = ".//stabilizedVideo.avi") and sets the playback speed (double outputFps = 25). Afterwards, this function reads each stabilized frame until there are no more frames ((stabilizedFrame = stabilizedFrames->nextFrame()).empty()). Internally, the stabilizer first estimates the motion of every frame. This function creates a video writer (writer.open(outputPath, VideoWriter::fourcc('X','V','I','D'), outputFps, stabilizedFrame.size())) to store each frame in the XVID format. Finally, it saves and shows each stabilized frame until the user presses the Esc key.
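The core idea behind a Gaussian motion filter such as the one configured with makePtr<GaussianMotionFilter>(radius_pass) is to smooth the estimated camera trajectory over a temporal window. The following self-contained sketch (a toy 1D model under assumed kernel choices, not OpenCV's actual implementation) shows that idea:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Smooth a 1D camera trajectory with a normalized Gaussian window of the
// given radius; indices outside the sequence are clamped to the border,
// mimicking a replicate-border policy.
std::vector<double> smoothTrajectory(const std::vector<double>& traj, int radius)
{
    double sigma = radius / 2.0 + 1e-9; // assumed sigma-radius relation
    std::vector<double> out(traj.size());
    for (int i = 0; i < (int)traj.size(); ++i)
    {
        double sum = 0.0, wsum = 0.0;
        for (int k = -radius; k <= radius; ++k)
        {
            int j = i + k;
            if (j < 0) j = 0;                           // replicate border
            if (j >= (int)traj.size()) j = (int)traj.size() - 1;
            double w = std::exp(-(k * k) / (2.0 * sigma * sigma));
            sum += w * traj[j];
            wsum += w;
        }
        out[i] = sum / wsum; // normalized weighted average
    }
    return out;
}
```

The stabilizing warp applied to each frame then corresponds to the difference between the smoothed and the original trajectory; a larger radius averages over more frames and removes lower-frequency shake.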

To demonstrate how to stabilize a video with OpenCV, the previous videoStabilizer example is used. The example is executed from the command line as follows:

<bin_dir>\videoStabilizer.exe .\cube4.avi .\cube4_stabilized.avi

Note
This cube4.avi video can be found in the OpenCV samples folder. It also has a great deal of camera movement, which is perfect for this example.

To show the stabilization results, first see four frames of cube4.avi in the following figure. The figure after these frames shows the first 10 frames of cube4.avi and cube4_stabilized.avi superimposed without (left-hand side of the figure) and with (right-hand side of the figure) stabilization.

Four consecutive frames of the cube4.avi video showing the camera movement

10 superimposed frames of the cube4.avi and cube4_stabilized.avi videos without and with stabilization

Looking at the preceding figure, on the right-hand side, you can see that the vibrations produced by the camera movement have been reduced by the stabilization.

Superresolution

Superresolution refers to the techniques or algorithms designed to increase the spatial resolution of an image or video, usually from a sequence of images of lower resolution. It differs from the traditional techniques of image scaling, which use a single image to increase the resolution while keeping the sharp edges. In contrast, superresolution merges information from multiple images taken from the same scene to represent details that were not initially captured in the original images.

The process of capturing an image or video from a real-life scene involves the following steps:

- Sampling: This is the transformation of the continuous scene into an ideal discrete system without aliasing
- Geometric transformation: This refers to applying a set of transformations, such as translation or rotation, due to the camera position and lens system, to infer, ideally, the details of the scene that arrive at each sensor
- Blur: This happens due to the lens system or the existing motion in the scene during the integration time
- Subsampling: With this, the sensor only integrates the number of pixels at its disposal (photosites)

You can see this process of image capturing in the following figure:

The process of capturing an image from a real scene

During this capture process, the details of the scene are integrated by different sensors so that each pixel in each capture includes different information. Therefore, superresolution is based on trying to find the relationship between different captures that have obtained different details of the scene in order to create a new image with more information. Superresolution is, therefore, used to regenerate a discretized scene with a higher resolution.

Superresolution can be obtained by various techniques, ranging from the most intuitive in the spatial domain to techniques based on analyzing the frequency spectrum. Techniques are basically divided into optical (using lenses, zoom, and so on) or image-processing-based techniques. This chapter focuses on image-processing-based superresolution. These methods use other parts of the lower-resolution images, or other unrelated images, to infer what the high-resolution image should look like. These algorithms can also be divided into the frequency or spatial domain. Originally, superresolution methods only worked well on grayscale images, but new methods have been developed to adapt them to color images.

In general, superresolution is computationally demanding, both spatially and temporally, because the sizes of the low-resolution and high-resolution images are large and hundreds of seconds may be needed to generate an image. To reduce the computational time, preconditioners are currently used with the optimizers that are responsible for minimizing these cost functions. Another alternative is to use GPU processing to improve the computational time, because the superresolution process is inherently parallelizable.

This chapter focuses on the superres module in OpenCV 3.0 Alpha, which contains a set of functions and classes that can be used to solve the problem of resolution enhancement. The module implements a number of methods based on image-processing superresolution. This chapter specifically focuses on the implemented Bilateral TV-L1 (BTVL1) superresolution method. A major difficulty of the superresolution process is estimating the warping function used to build the superresolution image. Bilateral TV-L1 uses optical flow to estimate this warping function.

Note
You can find more detailed information about the Bilateral TV-L1 method at http://www.ipol.im/pub/art/2013/26/ and about optical flow at http://en.wikipedia.org/wiki/Optical_flow.

A basic example of superresolution can be found in the OpenCV examples ([opencv_source_code]/samples/gpu/super_resolution.cpp).

Note
You can also download this example from the OpenCV GitHub repository at https://github.com/Itseez/opencv/blob/master/samples/gpu/super_resolution.cpp.

For the following superresolution example project, the superresolution.pro project file must include these libraries to work correctly: -lopencv_core300, -lopencv_imgproc300, -lopencv_highgui300, -lopencv_features2d300, -lopencv_videoio300, and -lopencv_superres300:

#include <iostream>
#include <iomanip>
#include <string>
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/superres.hpp>
#include <opencv2/superres/optical_flow.hpp>
#include <opencv2/opencv_modules.hpp>

using namespace std;
using namespace cv;
using namespace cv::superres;

static Ptr<DenseOpticalFlowExt> createOptFlow(string name);

int main(int argc, char *argv[])
{
    // 1-Initialize the initial parameters
    // Input and output video
    string inputVideoName;
    string outputVideoName;
    if (argc > 1)
        inputVideoName = argv[1];
    else
        inputVideoName = ".\\tree.avi";
    if (argc > 2)
        outputVideoName = argv[2];
    else
        outputVideoName = ".\\tree_superresolution.avi";

    const int scale = 4;              // Scale factor
    const int iterations = 180;       // Iterations count
    const int temporalAreaRadius = 4; // Radius of the temporal search area
    string optFlow = "farneback";     // Optical flow algorithm
    // optFlow = "farneback";
    // optFlow = "tvl1";
    // optFlow = "brox";
    // optFlow = "pyrlk";
    double outputFps = 25.0;          // Playback speed output

    // 2-Create an optical flow method
    Ptr<DenseOpticalFlowExt> optical_flow = createOptFlow(optFlow);
    if (optical_flow.empty()) return -1;

    // 3-Create the superresolution method and set its parameters
    Ptr<SuperResolution> superRes;
    superRes = createSuperResolution_BTVL1();
    superRes->set("opticalFlow", optical_flow);
    superRes->set("scale", scale);
    superRes->set("iterations", iterations);
    superRes->set("temporalAreaRadius", temporalAreaRadius);

    Ptr<FrameSource> frameSource;
    frameSource = createFrameSource_Video(inputVideoName);
    superRes->setInput(frameSource);

    // Do not use the first frame
    Mat frame;
    frameSource->nextFrame(frame);

    // 4-Process the input video with the superresolution
    // Show the initial options
    cout << "Input: " << inputVideoName << " " << frame.size() << endl;
    cout << "Output: " << outputVideoName << endl;
    cout << "Playback speed output: " << outputFps << endl;
    cout << "Scale factor: " << scale << endl;
    cout << "Iterations: " << iterations << endl;
    cout << "Temporal radius: " << temporalAreaRadius << endl;
    cout << "Optical Flow: " << optFlow << endl;
    cout << endl;

    VideoWriter writer;
    double start_time, finish_time;
    for (int i = 0;; ++i)
    {
        cout << '[' << setw(3) << i << "]: ";
        Mat result;

        // Calculate the processing time
        start_time = getTickCount();
        superRes->nextFrame(result);
        finish_time = getTickCount();
        cout << (finish_time - start_time) / getTickFrequency()
             << " secs, Size: " << result.size() << endl;

        if (result.empty()) break;

        // Show the result
        imshow("Super Resolution", result);
        if (waitKey(1000) > 0) break;

        // Save the result on output file
        if (!outputVideoName.empty())
        {
            if (!writer.isOpened())
                writer.open(outputVideoName, VideoWriter::fourcc('X','V','I','D'),
                            outputFps, result.size());
            writer << result;
        }
    }
    writer.release();
    return 0;
}

static Ptr<DenseOpticalFlowExt> createOptFlow(string name)
{
    if (name == "farneback")
        return createOptFlow_Farneback();
    else if (name == "tvl1")
        return createOptFlow_DualTVL1();
    else if (name == "brox")
        return createOptFlow_Brox_CUDA();
    else if (name == "pyrlk")
        return createOptFlow_PyrLK_CUDA();
    else
        cerr << "Incorrect Optical Flow algorithm - " << name << endl;
    return Ptr<DenseOpticalFlowExt>();
}

This example creates a program (superresolution) to obtain videos with superresolution. It takes the path of an input video or uses a default video path (.\tree.avi). The resulting video is displayed and saved as .\tree_superresolution.avi. In the first place, the superres.hpp and superres/optical_flow.hpp headers are included and the cv::superres namespace is used. The example follows four important steps.

The first step sets the initial parameters. The input video path uses the standard command-line input (inputVideoName = argv[1]) to select the video file; if no input video file is given, the default video file is used. The output video path also uses the standard input (outputVideoName = argv[2]) to select the output video file; if no output video file is given, the default output video file (.\tree_superresolution.avi) is used. The output playback speed is also set (double outputFps = 25.0). Other important parameters of the superresolution method are the scale factor (const int scale = 4), the iteration count (const int iterations = 180), the radius of the temporal search area (const int temporalAreaRadius = 4), and the optical-flow algorithm (string optFlow = "farneback").

The second step creates an optical-flow method to detect salient features and track them across the video frames. A new method (static Ptr<DenseOpticalFlowExt> createOptFlow(string name)) is written to select between the different optical-flow methods: farneback, tvl1, brox, and pyrlk. The two most important methods are Farneback (createOptFlow_Farneback()) and TV-L1 (createOptFlow_DualTVL1()). The first method is based on Gunnar Farneback's algorithm, which computes the optical flow for all points in the frame. The second method calculates the optical flow between two image frames based on the dual formulation of the TV energy and employs an efficient point-wise thresholding step; this second method is computationally more efficient.

Comparison between the different optical flow methods (complexity, parallelizable):

- Farneback: quadratic, not parallelizable
- TV-L1: linear, parallelizable
- Brox: linear, parallelizable
- PyrLK: linear, not parallelizable

Note
You can also learn more about the Farneback optical-flow method at http://www.diva-portal.org/smash/get/diva2:273847/FULLTEXT01.pdf.

The third step creates and sets the superresolution method. An instance of this method is created (Ptr<SuperResolution> superRes) that uses the Bilateral TV-L1 algorithm (superRes = createSuperResolution_BTVL1()). The method has the following parameters:

- scale: This is the scale factor
- iterations: This is the iteration count
- tau: This is an asymptotic value of the steepest descent method
- lambda: This is the weight parameter to balance the data term and the smoothness term
- alpha: This is a parameter of spatial distribution in Bilateral-TV
- btvKernelSize: This is the kernel size of the Bilateral-TV filter
- blurKernelSize: This is the Gaussian blur kernel size
- blurSigma: This is the Gaussian blur sigma
- temporalAreaRadius: This is the radius of the temporal search area
- opticalFlow: This is a dense optical-flow algorithm
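The roles of lambda, alpha, and btvKernelSize can be illustrated with a toy 1D version of a Bilateral-TV-style cost: a data-fidelity term balanced against a smoothness term that sums absolute neighbor differences over a window with spatial decay alpha. This sketch only shows the shape of such a cost function under assumed simplifications; it is not the module's actual optimizer:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Toy 1D Bilateral-TV-style cost: data fidelity |x - observed| balanced by
// a smoothness term that penalizes differences to neighbors within the
// kernel window, weighted by alpha^k (spatial decay).
double btvCost(const std::vector<double>& x,
               const std::vector<double>& observed,
               double lambda,      // balances data vs. smoothness term
               double alpha,       // spatial decay of the BTV weights
               int btvKernelSize)  // window of neighbor differences
{
    int radius = btvKernelSize / 2;
    double data = 0.0, smooth = 0.0;
    for (int i = 0; i < (int)x.size(); ++i)
    {
        data += std::fabs(x[i] - observed[i]);
        for (int k = 1; k <= radius; ++k)
            if (i + k < (int)x.size())
                smooth += std::pow(alpha, k) * std::fabs(x[i] - x[i + k]);
    }
    return data + lambda * smooth;
}
```

With a large lambda, smooth reconstructions are preferred over ones that reproduce noisy observations exactly; with lambda near zero, data fidelity dominates.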

These parameters are set as follows:

superRes->set("parameter", value);

Only the following parameters are set; the other parameters use their default values:

superRes->set("opticalFlow", optical_flow);
superRes->set("scale", scale);
superRes->set("iterations", iterations);
superRes->set("temporalAreaRadius", temporalAreaRadius);

Afterwards, the input video frame source is selected (superRes->setInput(frameSource)).

The last step processes the input video to compute the superresolution. For each video frame, the superresolution is calculated (superRes->nextFrame(result)); this calculation is computationally very slow, so the processing time is estimated to show progress. Finally, each result frame is shown (imshow("Super Resolution", result)) and saved (writer << result).

To show the superresolution results, a small part of the first frame of the tree.avi and tree_superresolution.avi videos is compared with and without superresolution:

Part of the first frame of the tree.avi and tree_superresolution.avi videos without and with the superresolution process

In the right-hand-side section of the preceding figure, you can observe more details in the leaves and branches of the tree due to the superresolution process.

Stitching

Image stitching, or photo stitching, can discover the correspondence relationship between images with some degree of overlap. This process combines a set of images with overlapping fields of view to produce a panorama or a higher-resolution image. Most of the techniques for image stitching need nearly exact overlaps between the images to produce seamless results. Some digital cameras can internally stitch a set of images to build a panorama image. An example is shown in the following figure:

A panorama image created with stitching

Note
The preceding image example and more information about image stitching can be found at http://en.wikipedia.org/wiki/Image_stitching.

Stitching can normally be divided into three important steps:

- Registration (of images) implies matching features in a set of images to search for a displacement that minimizes the sum of the absolute values of the differences between overlapping pixels. Direct-alignment methods could be used to get better results. The user could also add a rough model of the panorama to help the feature-matching stage, in which case the results are typically more accurate and computationally faster.
- Calibration (of images) focuses on minimizing the differences between an ideal model and the camera-lens system: different camera positions and optical defects such as distortions, exposure, chromatic aberrations, and so on.
- Compositing (of images) uses the results of the previous step, calibration, combined with the remapping of the images to an output projection. Colors are also adjusted between images to compensate for exposure differences. Images are blended together, and seam-line adjustment is done to minimize the visibility of the seams between images.
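The registration idea above — searching for the displacement that minimizes the sum of absolute differences between overlapping pixels — can be sketched in one dimension. This is a self-contained toy illustration of direct alignment, not the feature-based pipeline the example below actually uses, and the function name is hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Direct 1D registration: try each candidate displacement of `right`
// against `left` and return the one whose overlapping samples have the
// smallest mean sum of absolute differences (SAD).
int registerBySAD(const std::vector<double>& left,
                  const std::vector<double>& right,
                  int maxShift)
{
    int bestShift = 0;
    double bestSad = 1e300;
    for (int s = 1; s <= maxShift; ++s) // candidate overlap start in `left`
    {
        int overlap = (int)left.size() - s;
        if (overlap <= 0) break;
        int n = std::min(overlap, (int)right.size());
        double sad = 0.0;
        for (int i = 0; i < n; ++i)     // compare the overlapping region
            sad += std::fabs(left[s + i] - right[i]);
        sad /= n;                       // normalize by overlap length
        if (sad < bestSad) { bestSad = sad; bestShift = s; }
    }
    return bestShift;
}
```

Real stitchers replace this exhaustive search with feature matching and a 2D (or projective) motion model, but the objective — minimal difference over the overlap — is the same.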

When there are image segments that have been taken from the same point in space, stitching can be performed using one of various map projections. The most important map projections are as follows:

- Rectilinear projection: Here, the stitched image is viewed on a two-dimensional plane intersecting the panorama sphere in a single point. Lines that are straight in reality are shown as straight regardless of their direction in the image. When there are wide views (around 120 degrees), the images are distorted near the borders.
- Cylindrical projection: Here, the stitched image shows a 360-degree horizontal field of view and a limited vertical field of view. This projection is meant to be viewed as though the image were wrapped into a cylinder and viewed from within. When viewed on a 2D plane, horizontal lines appear curved, while vertical lines remain straight.
- Spherical projection: Here, the stitched image shows a 360-degree horizontal field of view and a 180-degree vertical field of view, that is, the whole sphere. Panorama images with this projection are meant to be viewed as though the image were wrapped into a sphere and viewed from within. When viewed on a 2D plane, horizontal lines appear curved, as in a cylindrical projection, while vertical lines remain vertical.
- Stereographic projection or fisheye projection: This can be used to form a little-planet panorama by pointing the virtual camera straight down and setting the field of view large enough to show the whole ground and some of the areas above it; pointing the virtual camera upwards creates a tunnel effect.
- Panini projection: This has specialized projections that may have more aesthetically pleasing advantages over normal cartography projections. This projection combines different projections in the same image to fine-tune the final look of the output panorama image.

This chapter focuses on the stitching module and the detail submodule in OpenCV 3.0 Alpha, which contain a set of functions and classes that implement a stitcher. Using these modules, it is possible to configure or skip some steps. The implemented stitching example has the following general diagram:

In the OpenCV examples, there are two basic examples of stitching, which can be found at [opencv_source_code]/samples/cpp/stitching.cpp and [opencv_source_code]/samples/cpp/stitching_detailed.cpp.

For the following, more advanced stitchingAdvanced example, the stitchingAdvanced.pro project file must include the following libraries to work correctly: -lopencv_core300, -lopencv_imgproc300, -lopencv_highgui300, -lopencv_features2d300, -lopencv_videoio300, -lopencv_imgcodecs300, and -lopencv_stitching300:
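Assuming a Qt Creator setup like the one described in Chapter 1, the library list might appear in the stitchingAdvanced.pro file as follows; the include and library paths shown here are placeholders for your own OpenCV 3.0 build location:

```qmake
# Hypothetical paths - adjust to your OpenCV build
INCLUDEPATH += C:/opencv/build/include

LIBS += -LC:/opencv/build/lib \
    -lopencv_core300 \
    -lopencv_imgproc300 \
    -lopencv_highgui300 \
    -lopencv_features2d300 \
    -lopencv_videoio300 \
    -lopencv_imgcodecs300 \
    -lopencv_stitching300
```

The full example source is listed next.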

#include <iostream>
#include <string>
#include <opencv2/opencv_modules.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/stitching/detail/blenders.hpp>
#include <opencv2/stitching/detail/camera.hpp>
#include <opencv2/stitching/detail/exposure_compensate.hpp>
#include <opencv2/stitching/detail/matchers.hpp>
#include <opencv2/stitching/detail/motion_estimators.hpp>
#include <opencv2/stitching/detail/seam_finders.hpp>
#include <opencv2/stitching/detail/util.hpp>
#include <opencv2/stitching/detail/warpers.hpp>
#include <opencv2/stitching/warpers.hpp>

using namespace std;
using namespace cv;
using namespace cv::detail;

int main(int argc, char* argv[])
{
    // Default parameters
    vector<String> img_names;
    double scale = 1;
    string features_type = "orb";    // "surf" or "orb" features type
    float match_conf = 0.3f;
    float conf_thresh = 1.f;
    string adjuster_method = "ray";  // "reproj" or "ray" adjuster method
    bool do_wave_correct = true;
    WaveCorrectKind wave_correct_type = WAVE_CORRECT_HORIZ;
    string warp_type = "spherical";
    int expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
    string seam_find_type = "gc_color";
    float blend_strength = 5;
    int blend_type = Blender::MULTI_BAND;
    string result_name = "panorama_result.jpg";
    double start_time = getTickCount();

    // 1-Input images
    if (argc > 1)
    {
        for (int i = 1; i < argc; i++)
            img_names.push_back(argv[i]);
    }
    else
    {
        img_names.push_back("./panorama_image1.jpg");
        img_names.push_back("./panorama_image2.jpg");
    }
    // Check if we have enough images
    int num_images = static_cast<int>(img_names.size());
    if (num_images < 2) { cout << "Need more images" << endl; return -1; }

    // 2-Resize images and find features steps
    cout << "Finding features..." << endl;
    double t = getTickCount();
    Ptr<FeaturesFinder> finder;
    if (features_type == "surf")
        finder = makePtr<SurfFeaturesFinder>();
    else if (features_type == "orb")
        finder = makePtr<OrbFeaturesFinder>();
    else { cout << "Unknown 2D features type: '" << features_type << endl;
           return -1; }

    Mat full_img, img;
    vector<ImageFeatures> features(num_images);
    vector<Mat> images(num_images);
    vector<Size> full_img_sizes(num_images);
    for (int i = 0; i < num_images; ++i)
    {
        full_img = imread(img_names[i]);
        full_img_sizes[i] = full_img.size();
        if (full_img.empty()) { cout << "Can't open image " << img_names[i]
                                     << endl; return -1; }
        resize(full_img, img, Size(), scale, scale);
        images[i] = img.clone();
        (*finder)(img, features[i]);
        features[i].img_idx = i;
        cout << "Features in image #" << i+1 << " are: "
             << features[i].keypoints.size() << endl;
    }
    finder->collectGarbage();
    full_img.release();
    img.release();
    cout << "Finding features, time: " << ((getTickCount() - t) /
        getTickFrequency()) << " sec" << endl;

    // 3-Match features
    cout << "Pairwise matching" << endl;
    t = getTickCount();
    vector<MatchesInfo> pairwise_matches;
    BestOf2NearestMatcher matcher(false, match_conf);
    matcher(features, pairwise_matches);
    matcher.collectGarbage();
    cout << "Pairwise matching, time: " << ((getTickCount() - t) /
        getTickFrequency()) << " sec" << endl;

    // 4-Select images and matches subset to build panorama
    vector<int> indices = leaveBiggestComponent(features, pairwise_matches,
        conf_thresh);
    vector<Mat> img_subset;
    vector<String> img_names_subset;
    vector<Size> full_img_sizes_subset;
    for (size_t i = 0; i < indices.size(); ++i)
    {
        img_names_subset.push_back(img_names[indices[i]]);
        img_subset.push_back(images[indices[i]]);
        full_img_sizes_subset.push_back(full_img_sizes[indices[i]]);
    }
    images = img_subset;
    img_names = img_names_subset;
    full_img_sizes = full_img_sizes_subset;

    // Estimate camera parameters rough
    HomographyBasedEstimator estimator;
    vector<CameraParams> cameras;
    if (!estimator(features, pairwise_matches, cameras)) { cout <<
        "Homography estimation failed." << endl; return -1; }
    for (size_t i = 0; i < cameras.size(); ++i)
    {
        Mat R;
        cameras[i].R.convertTo(R, CV_32F);
        cameras[i].R = R;
        cout << "Initial intrinsic #" << indices[i]+1 << ":\n" <<
            cameras[i].K() << endl;
    }

    // 5-Refine camera parameters globally
    Ptr<BundleAdjusterBase> adjuster;
    if (adjuster_method == "reproj")
        // "reproj" method
        adjuster = makePtr<BundleAdjusterReproj>();
    else // "ray" method
        adjuster = makePtr<BundleAdjusterRay>();
    adjuster->setConfThresh(conf_thresh);
    if (!(*adjuster)(features, pairwise_matches, cameras)) { cout << "Camera
        parameters adjusting failed." << endl; return -1; }

    // Find median focal length
    vector<double> focals;
    for (size_t i = 0; i < cameras.size(); ++i)
    {
        cout << "Camera #" << indices[i]+1 << ":\n" << cameras[i].K() <<
            endl;
        focals.push_back(cameras[i].focal);
    }
    sort(focals.begin(), focals.end());
    float warped_image_scale;
    if (focals.size() % 2 == 1)
        warped_image_scale = static_cast<float>(focals[focals.size() / 2]);
    else
        warped_image_scale = static_cast<float>(focals[focals.size() / 2 -
            1] + focals[focals.size() / 2]) * 0.5f;

    // 6-Wave correction (optional)
    if (do_wave_correct)
    {
        vector<Mat> rmats;
        for (size_t i = 0; i < cameras.size(); ++i)
            rmats.push_back(cameras[i].R.clone());
        waveCorrect(rmats, wave_correct_type);
        for (size_t i = 0; i < cameras.size(); ++i)
            cameras[i].R = rmats[i];
    }

    // 7-Warp images
    cout << "Warping images (auxiliary)..." << endl;
    t = getTickCount();
    vector<Point> corners(num_images);
    vector<UMat> masks_warped(num_images);
    vector<UMat> images_warped(num_images);
    vector<Size> sizes(num_images);
    vector<UMat> masks(num_images);

    // Prepare images masks
    for (int i = 0; i < num_images; ++i)
    {
        masks[i].create(images[i].size(), CV_8U);
        masks[i].setTo(Scalar::all(255));
    }

    // Map projections
    Ptr<WarperCreator> warper_creator;
    if (warp_type == "rectilinear")
        warper_creator = makePtr<cv::CompressedRectilinearWarper>(2.0f, 1.0f);
    else if (warp_type == "cylindrical")
        warper_creator = makePtr<cv::CylindricalWarper>();
    else if (warp_type == "spherical")
        warper_creator = makePtr<cv::SphericalWarper>();
    else if (warp_type == "stereographic")
        warper_creator = makePtr<cv::StereographicWarper>();
    else if (warp_type == "panini")
        warper_creator = makePtr<cv::PaniniWarper>(2.0f, 1.0f);
    if (!warper_creator) { cout << "Can't create the following warper '" <<
        warp_type << endl; return 1; }

    Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>
        (warped_image_scale * scale));
    for (int i = 0; i < num_images; ++i)
    {
        Mat_<float> K;
        cameras[i].K().convertTo(K, CV_32F);
        float swa = (float)scale;
        K(0,0) *= swa; K(0,2) *= swa;
        K(1,1) *= swa; K(1,2) *= swa;
        corners[i] = warper->warp(images[i], K, cameras[i].R, INTER_LINEAR,
            BORDER_REFLECT, images_warped[i]);
        sizes[i] = images_warped[i].size();
        warper->warp(masks[i], K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT,
            masks_warped[i]);
    }
    vector<UMat> images_warped_f(num_images);
    for (int i = 0; i < num_images; ++i)
        images_warped[i].convertTo(images_warped_f[i], CV_32F);
    cout << "Warping images, time: " << ((getTickCount() - t) /
        getTickFrequency()) << " sec" << endl;

    // 8-Compensate exposure errors
    Ptr<ExposureCompensator> compensator =
        ExposureCompensator::createDefault(expos_comp_type);
    compensator->feed(corners, images_warped, masks_warped);

    // 9-Find seam masks
    Ptr<SeamFinder> seam_finder;
    if (seam_find_type == "no")
        seam_finder = makePtr<NoSeamFinder>();
    else if (seam_find_type == "voronoi")
        seam_finder = makePtr<VoronoiSeamFinder>();
    else if (seam_find_type == "gc_color")
        seam_finder = makePtr<GraphCutSeamFinder>
            (GraphCutSeamFinderBase::COST_COLOR);
    else if (seam_find_type == "gc_colorgrad")
        seam_finder = makePtr<GraphCutSeamFinder>
            (GraphCutSeamFinderBase::COST_COLOR_GRAD);
    else if (seam_find_type == "dp_color")
        seam_finder = makePtr<DpSeamFinder>(DpSeamFinder::COLOR);
    else if (seam_find_type == "dp_colorgrad")
        seam_finder = makePtr<DpSeamFinder>(DpSeamFinder::COLOR_GRAD);
    if (!seam_finder) { cout << "Can't create the following seam finder '" <<
        seam_find_type << endl; return 1; }
    seam_finder->find(images_warped_f, corners, masks_warped);

    // Release unused memory
    images.clear();
    images_warped.clear();
    images_warped_f.clear();
    masks.clear();

    // 10-Create a blender
    Ptr<Blender> blender = Blender::createDefault(blend_type, false);
    Size dst_sz = resultRoi(corners, sizes).size();
    float blend_width = sqrt(static_cast<float>(dst_sz.area())) *
        blend_strength / 100.f;
    if (blend_width < 1.f)
        blender = Blender::createDefault(Blender::NO, false);
    else if (blend_type == Blender::MULTI_BAND)
    {
        MultiBandBlender* mb = dynamic_cast<MultiBandBlender*>(blender.get());
        mb->setNumBands(static_cast<int>(ceil(log(blend_width)/log(2.)) -
            1.));
        cout << "Multi-band blender, number of bands: " << mb->numBands()
             << endl;
    }
    else if (blend_type == Blender::FEATHER)
    {
        FeatherBlender* fb = dynamic_cast<FeatherBlender*>(blender.get());
        fb->setSharpness(1.f / blend_width);
        cout << "Feather blender, sharpness: " << fb->sharpness() << endl;
    }
    blender->prepare(corners, sizes);

    // 11-Compositing step
    cout << "Compositing..." << endl;
    t = getTickCount();
    Mat img_warped, img_warped_s;
    Mat dilated_mask, seam_mask, mask, mask_warped;
    for (int img_idx = 0; img_idx < num_images; ++img_idx)
    {
        cout << "Compositing image #" << indices[img_idx]+1 << endl;

        // 11.1-Read image and resize it if necessary
        full_img = imread(img_names[img_idx]);
        if (abs(scale - 1) > 1e-1)
            resize(full_img, img, Size(), scale, scale);
        else
            img = full_img;
        full_img.release();
        Size img_size = img.size();
        Mat K;
        cameras[img_idx].K().convertTo(K, CV_32F);

        // 11.2-Warp the current image
        warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT,
            img_warped);

        // Warp the current image mask
        mask.create(img_size, CV_8U);
        mask.setTo(Scalar::all(255));
        warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST,
            BORDER_CONSTANT, mask_warped);

        // 11.3-Compensate exposure error step
        compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);
        img_warped.convertTo(img_warped_s, CV_16S);
        img_warped.release();
        img.release();
        mask.release();
        dilate(masks_warped[img_idx], dilated_mask, Mat());
        resize(dilated_mask, seam_mask, mask_warped.size());
        mask_warped = seam_mask & mask_warped;

        // 11.4-Blending images step
        blender->feed(img_warped_s, mask_warped, corners[img_idx]);
    }
    Mat result, result_mask;
    blender->blend(result, result_mask);
    cout << "Compositing, time: " << ((getTickCount() - t) /
        getTickFrequency()) << " sec" << endl;
    imwrite(result_name, result);
    cout << "Finished, total time: " << ((getTickCount() - start_time) /
        getTickFrequency()) << " sec" << endl;
    return 0;
}

This example creates a program to stitch images using the OpenCV stitching steps. It takes input paths to select the different input images or uses the default input images (.\panorama_image1.jpg and .\panorama_image2.jpg), which are shown later. Finally, the resulting image is shown and saved as .\panorama_result.jpg. In the first place, the stitching.hpp and detail headers are included and the cv::detail namespace is used. The most important parameters are also set, and you can configure the stitching process with these parameters. If you need a custom configuration, it is very useful to understand the general diagram of the stitching process (the previous figure). This advanced example has 11 important steps. The first step reads and checks the input images. This example needs two or more images to work.

The second step resizes the input images using the double scale = 1 parameter and finds the features in each image; you can select between the Surf (finder = makePtr<SurfFeaturesFinder>()) or Orb (finder = makePtr<OrbFeaturesFinder>()) feature finders using the string features_type = "orb" parameter. Afterwards, this step resizes the input images (resize(full_img, img, Size(), scale, scale)) and finds the features ((*finder)(img, features[i])).

Note
For more information about SURF and ORB descriptors, refer to Chapter 5 of OpenCV Essentials by Packt Publishing.

The third step matches the features that have been found previously. A matcher is created (BestOf2NearestMatcher matcher(false, match_conf)) with the float match_conf = 0.3f parameter.

The fourth step selects a subset of images and matches to build the panorama. The best features are selected and matched using the vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh) function. With these features, a new subset is created to be used.

The fifth step refines the camera parameters globally using bundle adjustment, building an adjuster (Ptr<BundleAdjusterBase> adjuster). Given a set of images depicting a number of 2D or 3D points from different viewpoints, bundle adjustment can be defined as the problem of simultaneously refining the 2D or 3D coordinates describing the scene geometry, as well as the parameters of the relative motion and the optical characteristics of the cameras employed to acquire the images, according to an optimality criterion involving the corresponding image projections of all points. There are two methods to calculate this bundle adjustment, reproj (adjuster = makePtr<BundleAdjusterReproj>()) or ray (adjuster = makePtr<BundleAdjusterRay>()), which are selected with the string adjuster_method = "ray" parameter. Finally, this bundle adjustment is applied as (*adjuster)(features, pairwise_matches, cameras).
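After bundle adjustment, the example also computes the median focal length to fix the scale of the warped panorama (the "Find median focal length" block in step 5 of the code). The median logic — the middle element for an odd count, the average of the two middle elements for an even count — can be isolated and checked on its own:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Median focal length as used to scale the warper: sort the per-camera
// focal lengths and take the middle value (or the mean of the two middle
// values when the count is even).
float medianFocal(std::vector<double> focals)
{
    std::sort(focals.begin(), focals.end());
    if (focals.size() % 2 == 1)
        return static_cast<float>(focals[focals.size() / 2]);
    return static_cast<float>(focals[focals.size() / 2 - 1] +
                              focals[focals.size() / 2]) * 0.5f;
}
```

Using the median rather than the mean makes the warper scale robust to a single badly estimated camera focal length.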

The sixth step is an optional step (bool do_wave_correct = true) that calculates the wave correction to improve the camera setting. The type of wave correction is selected with the WaveCorrectKind wave_correct_type = WAVE_CORRECT_HORIZ parameter and calculated with waveCorrect(rmats, wave_correct_type).

The seventh step creates a warper for the images, which needs a map projection. The map projections have been described previously, and they can be rectilinear, cylindrical, spherical, stereographic, or panini. There are actually more map projections implemented in OpenCV. The map projection can be selected with the string warp_type = "spherical" parameter. Afterwards, a warper is created (Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * scale))) and each image is warped (warper->warp(masks[i], K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT, masks_warped[i])).

The eighth step compensates exposure errors by creating a compensator (Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type)), which is applied to each warped image (compensator->feed(corners, images_warped, masks_warped)).

The ninth step finds seam masks. This process searches for the best areas of attachment for each panorama image. There are several methods implemented in OpenCV to perform this task, and this example uses the string seam_find_type = "gc_color" parameter to select one. The available methods are NoSeamFinder (no seam finding is performed), VoronoiSeamFinder, GraphCutSeamFinderBase::COST_COLOR, GraphCutSeamFinderBase::COST_COLOR_GRAD, DpSeamFinder::COLOR, and DpSeamFinder::COLOR_GRAD.

The tenth step creates a blender to combine the images into the panorama. There are two types of blenders implemented in OpenCV, MultiBandBlender* mb = dynamic_cast<MultiBandBlender*>(blender.get()) and FeatherBlender* fb = dynamic_cast<FeatherBlender*>(blender.get()), which can be selected with the int blend_type = Blender::MULTI_BAND parameter. Finally, the blender is prepared (blender->prepare(corners, sizes)).

The last step composites the final panorama. This step needs everything the previous steps have done to configure the stitching. Four sub-steps are performed to calculate the final panorama. First, each input image is read (full_img = imread(img_names[img_idx])) and, if necessary, resized (resize(full_img, img, Size(), scale, scale)). Second, these images are warped with the created warper (warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped)). Third, these images are compensated for exposure errors with the created compensator (compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped)). Finally, these images are blended using the created blender. The resulting panorama is saved in the string result_name = "panorama_result.jpg" file.

To show you the stitchingAdvanced results, two input images are stitched and the resulting panorama is shown as follows:

Summary

In this chapter, you learned how to use three important modules of OpenCV that handle image processing in video. These modules are video stabilization, superresolution, and stitching. Some of the theoretical underpinnings have also been explained for each module.

In each section of this chapter, a complete example developed in C++ is explained. An image result was also shown for each module, illustrating its main effect.

The next chapter introduces high-dynamic-range images and shows you how to handle them with OpenCV. High-dynamic-range imaging is typically considered within what is now called computational photography. Roughly speaking, computational photography refers to techniques that allow you to extend the typical capabilities of digital photography. This may include hardware add-ons or modifications, but it mostly refers to software-based techniques. These techniques may produce output images that cannot be obtained with a "traditional" digital camera.

Chapter 6. Computational Photography

Computational photography refers to techniques that allow you to extend the typical capabilities of digital photography. This may include hardware add-ons or modifications, but it mostly refers to software-based techniques. These techniques may produce output images that cannot be obtained with a "traditional" digital camera. This chapter introduces some of the lesser-known techniques available in OpenCV for computational photography: high-dynamic-range imaging, seamless cloning, decolorization, and non-photorealistic rendering. All four are inside the photo module of the library. Note that other techniques inside this module (inpainting and denoising) have already been considered in previous chapters.

High-dynamic-range images

The typical images we process have 8 bits per pixel (bpp). Color images also use 8 bits to represent the value of each channel, that is, red, green, and blue. This means that only 256 different intensity values are used. This 8 bpp limit has prevailed throughout the history of digital imaging. However, it is obvious that light in nature does not have only 256 different levels. We should, therefore, consider whether this discretization is desirable or even sufficient. The human eye, for example, is known to capture a much higher dynamic range (the number of light levels between the dimmest and brightest levels), estimated at between 1 and 100 million light levels. With only 256 light levels, there are cases where bright lights appear overexposed or saturated, while dark scenes are simply captured as black.

There are cameras that can capture more than 8 bpp. However, the most common way to create high-dynamic-range images is to use an 8 bpp camera and take images with different exposure values. When we do this, the problems of a limited dynamic range are evident. Consider, for example, the following figure:

A scene captured with six different exposure values

Note

The top-left image is mostly black, but window details are visible. Conversely, the bottom-right image shows details of the room, but the window details are barely visible.

We can take pictures with different exposure levels using modern smartphone cameras. With iPhones and iPads, for example, as of iOS 8, it is very easy to change the exposure with the native camera app. By touching the screen, a yellow box appears with a small sun on its side. Swiping up or down can then change the exposure (see the following screenshot).

Note

The range of exposure levels is quite large, so we may have to repeat the swiping gesture a number of times.

If you use previous versions of iOS, you can download camera apps such as Camera+ that allow you to focus on a specific point and change the exposure.

For Android, tons of camera apps are available on Google Play that can adjust the exposure. One example is Camera FV-5, which has both free and paid versions.

Tip

If you use a handheld device to capture the images, make sure the device is static. In fact, you may well use a tripod. Otherwise, images with different exposures will not be aligned. Also, moving subjects will inevitably produce ghost artifacts. Three images are sufficient for most cases, with low, medium, and high exposure levels.

The exposure control using the native camera app in an iPhone 5S

Smartphones and tablets are handy for capturing a number of images with different exposures. To create HDR images, we need to know the exposure (or shutter) time for each captured image (see the following section for the reason). Not all apps allow you to control (or even see) this manually (the iOS 8 native app doesn't). At the time of writing this, at least two free apps allow this for iOS: Manually and ManualShot! On Android, the free Camera FV-5 allows you to control and see exposure times. Note that F-stop and ISO are two other parameters that control the exposure.

The images that are captured can be transferred to the development computer and used to create the HDR image.

Note

As of iOS 7, the native camera app has an HDR mode that automatically captures three images in a rapid sequence, each with a different exposure. These images are also automatically combined into a single (sometimes better) image.

Creating HDR images

How do we combine multiple (three, for example) exposure images into an HDR image? If we consider only one of the channels and a given pixel, the three pixel values (one for each exposure level) must be mapped to a single value in the larger output range (say, 16 bpp). This mapping is not easy. First of all, we have to consider that pixel intensities are a (rough) measure of sensor irradiance (the amount of light incident on the camera sensor). Digital cameras measure irradiance but in a nonlinear way. Cameras have a nonlinear response function that translates irradiance to pixel intensity values in the range of 0 to 255. In order to map these values to a larger set of discrete values, we must estimate the camera response function (that is, the response within the 0 to 255 range).

How do we estimate the camera response function? We do that from the pixels themselves! The response function is an S-shaped curve for each color channel, and it can be estimated from the pixels (with three exposures of a pixel, we have three points on the curve for each color channel). As this is very time consuming, usually a set of random pixels is chosen.

There’sonlyonethingleft.Wepreviouslytalkedaboutestimatingtherelationshipbetweenirradianceandpixelintensity.Howdoweknowirradiance?Sensorirradianceisdirectlyproportionaltotheexposuretime(orequivalently,theshutterspeed).Thisisthereasonwhyweneedexposuretime!

Finally, the HDR image is computed as a weighted sum of the recovered irradiance values from the pixels of each exposure. Note that this image cannot be displayed on conventional screens, which also have a limited range.

Note

A good book on high-dynamic-range imaging is High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting by Reinhard et al., Morgan Kaufmann Pub. The book is accompanied by a DVD containing images in different HDR formats.

Example

OpenCV (as of 3.0 only) provides functions to create HDR images from a set of images taken with different exposures. There's even a tutorial example called hdr_imaging, which reads a list of image files and exposure times (from a text file) and creates the HDR image.

Note

In order to run the hdr_imaging tutorial, you will need to download the required image files and the text file with the list. You can download them from https://github.com/Itseez/opencv_extra/tree/master/testdata/cv/hdr.

The CalibrateDebevec and MergeDebevec classes implement Debevec's method to estimate the camera response function and merge the exposures into an HDR image, respectively. The following createHDR example shows you how to use both classes:

#include <opencv2/photo.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int, char** argv)
{
    vector<Mat> images;
    vector<float> times;

    // Load images and exposures...
    Mat img1 = imread("1div66.jpg");
    if (img1.empty())
    {
        cout << "Error! Input image cannot be read...\n";
        return -1;
    }
    Mat img2 = imread("1div32.jpg");
    Mat img3 = imread("1div12.jpg");
    images.push_back(img1);
    images.push_back(img2);
    images.push_back(img3);
    times.push_back((float)1/66);
    times.push_back((float)1/32);
    times.push_back((float)1/12);

    // Estimate camera response...
    Mat response;
    Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    calibrate->process(images, response, times);

    // Show the estimated camera response function...
    cout << response;

    // Create and write the HDR image...
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);
    imwrite("hdr.hdr", hdr);

    cout << "\nDone. Press any key to exit...\n";
    waitKey(); // Wait for key press
    return 0;
}

The example uses three images of a cup (the images are available along with the code accompanying this book). The images were taken with the ManualShot! app mentioned previously, using exposures of 1/66, 1/32, and 1/12 seconds; refer to the following figure:

The three images used in the example as inputs

Note that the createCalibrateDebevec method expects the images and exposure times in an STL vector (STL is a kind of library of useful common functions and data structures available in standard C++). The camera response function is given as a 256 real-valued vector. This represents the mapping between the pixel values and irradiance. Actually, it is a 256 x 3 matrix (one column for each of the three color channels). The following figure shows you the response given by the example:

The estimated RGB camera response functions

Tip

The cout part of the code displays the matrix in the format used by MATLAB and Octave, two widely used packages for numerical computation. It is straightforward to copy the matrix in the output and paste it in MATLAB/Octave in order to display it.

The resulting HDR image is stored in the lossless RGBE format. This image format uses one byte per color channel plus one byte as a shared exponent. The format uses the same principle as the one used in the floating-point number representation: the shared exponent allows you to represent a much wider range of values. RGBE images use the .hdr extension. Note that as it is a lossless image format, .hdr files are relatively large. In this example, the RGB input images are 1224 x 1632 each (100 to 200 KB each), while the output .hdr file occupies 5.9 MB.

The example uses Debevec and Malik's method, but OpenCV also provides another calibration function based on Robertson's method. Both calibration and merge functions are available, that is, createCalibrateRobertson and MergeRobertson.

Note

For more information on the other functions and the theory behind them, refer to http://docs.opencv.org/trunk/modules/photo/doc/hdr_imaging.html.

Finally, note that the example does not display the resulting image. The HDR image cannot be displayed on conventional screens, so we need to perform another step called tone mapping.

Tone mapping

When high-dynamic-range images are to be displayed, information can be lost. This is due to the fact that computer screens also have a limited contrast ratio, and printed material is also typically limited to 256 tones. When we have a high-dynamic-range image, it is necessary to map the intensities to a limited set of values. This is called tone mapping.

Simply scaling the HDR image values to the reduced range of the display device is not sufficient in order to provide a realistic output. Scaling typically produces images that appear to lack detail (contrast), eliminating the original scene content. Ultimately, tone-mapping algorithms aim at providing outputs that appear visually similar to the original scene (that is, similar to what a human would see when viewing the scene). Various tone-mapping algorithms have been proposed, and it is still a matter of extensive research. The following lines of code can apply tone mapping to the HDR image obtained in the previous example:

Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
tonemap->process(hdr, ldr); // ldr is a floating point image with
ldr = ldr * 255;            // values in interval [0..1]
imshow("LDR", ldr);

The method was proposed by Durand and Dorsey in 2002. The constructor actually accepts a number of parameters that affect the output. The following figure shows you the output. Note how this image is not necessarily better than any of the three original images:

The tone-mapped output

Three other tone-mapping algorithms are available in OpenCV: createTonemapDrago, createTonemapReinhard, and createTonemapMantiuk.

An HDR image (in the RGBE format, that is, files with the .hdr extension) can be displayed using MATLAB. All it takes is three lines of code:

hdr = hdrread('hdr.hdr');
rgb = tonemap(hdr);
imshow(rgb);

Note

pfstools is an open source suite of command-line tools to read, write, and render HDR images. The suite, which can read .hdr and other formats, includes a number of camera calibration and tone-mapping algorithms. Luminance HDR is free GUI software based on pfstools.

Alignment

The scene that will be captured with multiple exposure images must be static. The camera must also be static. Even if the two conditions are met, it is advisable to perform an alignment procedure.

OpenCV provides an algorithm for image alignment proposed by G. Ward in 2003. The main function, createAlignMTB, takes an input parameter that defines the maximum shift (actually, a logarithm to the base two of the maximum shift in each dimension). The following lines should be inserted right before estimating the camera response function in the previous example:

vector<Mat> images_(images);
Ptr<AlignMTB> align = createAlignMTB(4); // 4 = max 16 pixel shift
align->process(images_, images);

Exposure fusion

We can also combine images with multiple exposures with neither camera response calibration (that is, exposure times) nor an intermediate HDR image. This is called exposure fusion. The method was proposed by Mertens et al. in 2007. The following lines perform exposure fusion (images is the STL vector of input images; refer to the previous example):

Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
merge_mertens->process(images, fusion); // fusion is a
fusion = fusion * 255;                  // float. point image w. values in [0..1]
imwrite("fusion.png", fusion);

The following figure shows you the result:

Exposure fusion

Seamless cloning

In photomontages, we typically want to cut an object/person in a source image and insert it into a target image. Of course, this can be done in a straightforward way by simply pasting the object. However, this would not produce a realistic effect. See, for example, the following figure, in which we wanted to insert the boat in the top half of the image into the sea at the bottom half of the image:

Cloning

As of OpenCV 3, there are seamless cloning functions available in which the result is more realistic. This function is called seamlessClone and it uses a method proposed by Perez and Gangnet in 2003. The following seamlessCloning example shows you how it can be used:

#include <opencv2/photo.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int, char** argv)
{
    // Load and show images...
    Mat source = imread("source1.png", IMREAD_COLOR);
    Mat destination = imread("destination1.png", IMREAD_COLOR);
    Mat mask = imread("mask.png", IMREAD_COLOR);
    imshow("source", source);
    imshow("mask", mask);
    imshow("destination", destination);

    Mat result;
    Point p; // p will be near the top-right corner
    p.x = 2 * destination.size().width / 3;
    p.y = destination.size().height / 4;
    seamlessClone(source, destination, mask, p, result, NORMAL_CLONE);
    imshow("result", result);

    cout << "\nDone. Press any key to exit...\n";
    waitKey(); // Wait for key press
    return 0;
}

The example is straightforward. The seamlessClone function takes the source, destination, and mask images and a point in the destination image at which the cropped object will be inserted (these three images can be downloaded from https://github.com/Itseez/opencv_extra/tree/master/testdata/cv/cloning/Normal_Cloning). See the result in the following figure:

Seamless cloning

The last parameter of seamlessClone represents the exact method to be used (there are three methods available that produce a different final effect). On the other hand, the library provides the following related functions:

Function and effect:

colorChange: Multiplies each of the three color channels of the source image by a factor, applying the multiplication only in the region given by the mask

illuminationChange: Changes the illumination of the source image, only in the region given by the mask

textureFlattening: Washes out textures in the source image, only in the region given by the mask

As opposed to seamlessClone, these three functions only accept source and mask images.

Decolorization

Decolorization is the process of converting a color image to grayscale. Given this definition, the reader may well ask, don't we already have grayscale conversion? Yes, grayscale conversion is a basic routine in OpenCV and any image-processing library. The standard conversion is based on a linear combination of the R, G, and B channels. The problem is that such a conversion may produce images in which contrast in the original image is lost. The reason is that two different colors (which are perceived as contrasting in the original image) may end up being mapped to the same grayscale value. Consider the conversion of two colors, A and B, to grayscale. Let's suppose that B is a variation of A in the R and G channels:

A = (R, G, B) => G = (R + G + B) / 3

B = (R - x, G + x, B) => G = (R - x + G + x + B) / 3 = (R + G + B) / 3

Even though they are perceived as distinct, the two colors A and B are mapped to the same grayscale value! The images from the following decolorization example show this:

#include <opencv2/photo.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int, char** argv)
{
    // Load and show images...
    Mat source = imread("color_image_3.png", IMREAD_COLOR);
    imshow("source", source);

    // First compute and show the standard grayscale conversion...
    Mat grayscale = Mat(source.size(), CV_8UC1);
    cvtColor(source, grayscale, COLOR_BGR2GRAY);
    imshow("grayscale", grayscale);

    // Now compute and show the decolorization...
    Mat decolorized = Mat(source.size(), CV_8UC1);
    Mat dummy = Mat(source.size(), CV_8UC3);
    decolor(source, decolorized, dummy);
    imshow("decolorized", decolorized);

    cout << "\nDone. Press any key to exit...\n";
    waitKey(); // Wait for key press
    return 0;
}

Decolorization example output

The example is straightforward. After reading the image and showing the result of a standard grayscale conversion, it uses the decolor function to perform the decolorization. The image used (the color_image_3.png file) is included in the opencv_extra repository at https://github.com/Itseez/opencv_extra/tree/master/testdata/cv/decolor.

Note

The image used in the example is actually an extreme case. Its colors have been chosen so that the standard grayscale output is fairly homogeneous.

Non-photorealistic rendering

As part of the photo module, four functions are available that transform an input image in a way that produces a non-realistic but still artistic output. The functions are very easy to use and a nice example is included with OpenCV (npr_demo). For illustrative purposes, here we show you a table that allows you to grasp the effect of each function. Take a look at the following fruits.jpg input image, included with OpenCV:

The input reference image

The effects are:

Function and effect:

edgePreservingFilter: Smoothing is a handy and frequently used filter. This function performs smoothing while preserving object edge details.

detailEnhance: Enhances the details in the image

pencilSketch: A pencil-like line drawing version of the input image

stylization: A watercolor effect

Summary

In this chapter, you learned what computational photography is and the related functions available in OpenCV 3. We explained the most important functions within the photo module, but note that other functions of this module (inpainting and noise reduction) were also considered in previous chapters. Computational photography is a rapidly expanding field, with strong ties to computer graphics. Therefore, this module of OpenCV is expected to grow in future versions.

The next chapter will be devoted to an important aspect that we have not yet considered: time. Many of the functions explained take a significant time to compute their results. The next chapter will show you how to deal with that using modern hardware.

Chapter 7. Accelerating Image Processing

This chapter deals with the acceleration of image processing tasks using General Purpose Graphics Processing Units (GPGPUs) or, in short, GPUs with parallel processing. A GPU is essentially a coprocessor dedicated to graphics processing or floating point operations, aimed at improving performance on applications such as video games and interactive 3D graphics. While the graphics processing is executed in the GPU, the CPU can be dedicated to other calculations (such as the artificial intelligence part in games). Every GPU is equipped with hundreds of simple processing cores that allow the massively parallel execution of hundreds of "simple" mathematical operations on (normally) floating point numbers.

CPUs seem to have reached their speed and thermal power limits. Building a computer with several CPUs has become a complex problem. This is where GPUs come into play. GPU processing is a new computing paradigm that uses the GPU to improve computational performance. GPUs initially implemented certain parallel operations called graphics primitives that were optimized for graphics processing. One of the most common primitives for 3D graphics processing is antialiasing, which makes the edges of figures have a more realistic appearance. Other primitives are drawings of rectangles, triangles, circles, and arcs. GPUs currently include hundreds of general-purpose processing functions that can do much more than render graphics. In particular, they are very valuable in tasks that can be parallelized, which is the case for many computer vision algorithms.

The OpenCV libraries include support for the OpenCL and CUDA GPU architectures. CUDA implements many algorithms; however, it only works with NVIDIA graphic cards. CUDA is a parallel computing platform and programming model created by NVIDIA and implemented by the GPUs that they produce. This chapter focuses on the OpenCL architecture, as it is supported by more devices and is even included in some NVIDIA graphic cards.

The Open Computing Language (OpenCL) is a framework for writing programs that can be executed on CPUs or GPUs attached to a host processor (a CPU). It defines a C-like language to write functions, called kernels, which are executed on the computing devices. Using OpenCL, kernels can be run on all or many of the individual processing elements (PEs) in parallel on the CPUs or GPUs.

In addition, OpenCL defines an Application Programming Interface (API) that allows programs running on the host (the CPU) to launch kernels on the computing devices and manage their device memories, which are (at least conceptually) separated from the host memory. OpenCL programs are intended to be compiled at runtime so that applications that use OpenCL are portable between implementations of various host computing devices. OpenCL is also an open standard maintained by the nonprofit technology consortium Khronos Group (https://www.khronos.org/).

OpenCV contains a set of classes and functions that implement and accelerate the OpenCV functionality using OpenCL. OpenCV currently provides a transparent API that enables the unification of its original API with OpenCL-accelerated programming. Therefore, you only need to write your code once. There is a new unified data structure (UMat) that handles data transfers to the GPU when it is needed and possible.

Support for OpenCL in OpenCV has been designed for ease of use and does not require any knowledge of OpenCL. At a minimum level, it can be viewed as a set of accelerations that can take advantage of the high computing power available when using modern CPU and GPU devices.

To correctly run OpenCL programs, the OpenCL runtime should be provided by the device vendor, typically in the form of a device driver. Also, to use OpenCV with OpenCL, a compatible SDK is needed. Currently, there are five available OpenCL SDKs:

AMD APP SDK: This SDK supports OpenCL on CPUs and GPUs, such as X86+SSE2 (or higher) CPUs and AMD Fusion, AMD Radeon, AMD Mobility, and ATI FirePro GPUs.

Intel SDK: This SDK supports OpenCL on Intel Core processors and Intel HD GPUs, such as Intel+SSE4.1, SSE4.2 or AVX, Intel Core i7, i5 and i3 (1st, 2nd, and 3rd Generation), Intel HD Graphics, Intel Core 2 Solo (Duo, Quad, and Extreme), and Intel Xeon CPUs.

IBM OpenCL Development Kit: This SDK supports OpenCL on AMD servers such as IBM Power, IBM PERCS, and IBM BladeCenter.

IBM OpenCL Common Runtime: This SDK supports OpenCV on CPUs and GPUs, such as X86+SSE2 (or higher) CPUs and AMD Fusion and Radeon, NVIDIA Ion, NVIDIA GeForce, and NVIDIA Quadro GPUs.

Nvidia OpenCL Driver and Tools: This SDK supports OpenCL on some Nvidia graphic devices such as NVIDIA Tesla, NVIDIA GeForce, NVIDIA Ion, and NVIDIA Quadro GPUs.

OpenCV with the OpenCL installation

The installation steps already presented in Chapter 1, Handling Image and Video Files, need some additional steps to include OpenCL. The newly required software is explained in the following section.

There are new requirements to compile and install OpenCV with OpenCL on Windows:

OpenCL-capable GPU or CPU: This is the most important requirement. Note that OpenCL supports many computing devices but not all. You can check whether your graphic cards or processors are compatible with OpenCL. This chapter uses the AMD APP SDK for an AMD FirePro W5000 GPU to execute the examples.

Note

There is a list of the supported computing devices for this SDK at http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-app-sdk/system-requirements-driver-compatibility/. There, you can also consult the minimum SDK version that you need.

Compilers: OpenCV with OpenCL is compatible with Microsoft and MinGW compilers. It is possible to install the free Visual Studio Express edition. However, if you choose Microsoft to compile OpenCV, at least Visual Studio 2012 is recommended. However, the MinGW compiler is used in this chapter.

AMD APP SDK: This SDK is a set of advanced software technologies that enable us to use compatible computing devices to execute and accelerate many applications beyond just graphics. This SDK is available at http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-app-sdk/. This chapter uses Version 2.9 of the SDK (for 64-bit Windows); you can see the installation progress in the following screenshot.

Note

If this step fails, you might need to update the driver of your graphic card. The AMD drivers are available at http://www.amd.com/en-us/innovations/software-technologies.

Installing the AMD APP SDK

OpenCL BLAS: Basic Linear Algebra Subroutines (BLAS) is a set of open source math libraries for parallel processing on AMD devices. It can be downloaded from http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-math-libraries/. This chapter uses the 1.1 BLAS version for Windows 32/64 bits, and you can see the installation progress in the following screenshot (the left-hand side).

OpenCL FFT: Fast Fourier Transform (FFT) is a very useful function that many image processing algorithms need. Therefore, this function is implemented for parallel processing on AMD devices. It can be downloaded from the same URL as given previously. This chapter uses the 1.1 FFT version for Windows 32/64 bits, and you can see the installation progress in the following screenshot (the right-hand side):

Installing BLAS and FFT for OpenCL

Qt libraries for the C++ compiler: In this chapter, the MinGW binaries of the Qt libraries are used to compile OpenCV with OpenCL. The other alternative is to install the latest version of Qt and use the Visual C++ compiler. You can choose the Qt version and the compiler to use. The package manager, by means of the MaintenanceTool.exe application located at C:\Qt\Qt5.3.1, can be used to download other Qt versions. This chapter uses Qt (5.3.1) and MinGW (4.8.2) 32 bits to compile OpenCV with OpenCL.

When the previous requirements are met, you can generate a new build configuration with CMake. This process differs at some points from the typical installation that was explained in the first chapter. The differences are explained in this list:

When selecting the generator for the project, you can choose the compiler version corresponding to the environment installed on your machine. This chapter uses MinGW to compile OpenCV with OpenCL, and then the MinGW Makefiles option is selected, specifying the native compilers. The following screenshot shows this selection:

CMake selecting the generator project

The options shown in the following screenshot are needed to build the OpenCV with OpenCL project. The WITH_OPENCL, WITH_OPENCLAMDBLAS, and WITH_OPENCLAMDFFT options must be enabled. The BLAS and FFT paths must be introduced in CLAMDBLAS_INCLUDE_DIR, CLAMDBLAS_ROOT_DIR, CLAMDFFT_INCLUDE_DIR, and CLAMDFFT_ROOT_DIR. In addition, as shown in Chapter 1, Handling Image and Video Files, you will need to enable WITH_QT and disable the WITH_IPP option as well. It is also advisable to enable BUILD_EXAMPLES. The following screenshot shows you the main options selected in the build configuration:

CMake selecting the main options

Finally, to build the OpenCV with OpenCL project, the CMake project that was previously generated must be compiled. The project was generated for MinGW, and therefore, the MinGW compiler is needed to build this project. First, the [opencv_build]/ folder is selected with the Windows console, and we execute this:

./mingw32-make.exe -j4 install

The -j4 parameter is the number of core CPUs of the system that we want to use for the parallelization of the compilation.

Now the OpenCV with OpenCL project is ready to be used. The path of the new binary files must be added to the system path, in this case, [opencv_build]/install/x64/mingw/bin.

Note

Do not forget to remove the old binary files of OpenCV from the path environment variable.

A quick recipe to install OpenCV with OpenCL

The installation process can be summarized in the following steps:

1. Download and install the AMD APP SDK, which is available at http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-app-sdk.

2. Download and install the BLAS and FFT AMD libraries, which are available at http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-math-libraries.

3. Configure the OpenCV build with CMake. Enable the WITH_OPENCL, WITH_OPENCLAMDBLAS, WITH_OPENCLAMDFFT, WITH_QT, and BUILD_EXAMPLES options. Disable the WITH_IPP option. Finally, introduce the BLAS and FFT paths in CLAMDBLAS_INCLUDE_DIR, CLAMDBLAS_ROOT_DIR, CLAMDFFT_INCLUDE_DIR, and CLAMDFFT_ROOT_DIR.

4. Compile the OpenCV project with mingw32-make.exe.

5. Finally, modify the path environment variable to update the OpenCV bin directory (for example, [opencv_build]/install/x64/mingw/bin).

Check the GPU usage

When the GPU is being used on a Windows platform, there is no built-in application to measure its usage. Measuring the GPU usage is useful for two reasons:

It is possible to know whether you are using the GPU correctly

You can monitor the GPU usage percentage

There are some applications on the market for this purpose. This chapter uses AMD System Monitor to check the GPU usage. This application monitors the CPU, RAM memory, and GPU usage. Refer to the following screenshot:

AMD System Monitor monitoring the CPU, GPU, and RAM memory usage

Note

The AMD System Monitor can be downloaded from http://support.amd.com/es-xl/kb-articles/Pages/AMDSystemMonitor.aspx for Microsoft Windows (32 or 64 bits).

Accelerating your own functions

In this section, there are three examples of using OpenCV with OpenCL. The first example allows you to check whether the installed SDK is available and obtain useful information about the computing devices that support OpenCL. The second example shows you two versions of the same program using CPU and GPU programming, respectively. The last example is a complete program to detect and mark faces. In addition, a computational comparison is performed.

Checking your OpenCL

The following simple program checks your SDK and the available computing devices. This example is called checkOpenCL. It allows you to display the computing devices using the OCL module of OpenCV:

#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>

using namespace std;
using namespace cv;
using namespace cv::ocl;

int main()
{
    vector<ocl::PlatformInfo> info;
    getPlatfomsInfo(info);
    PlatformInfo sdk = info.at(0);

    if (sdk.deviceNumber() < 1)
        return -1;

    cout << "****** SDK *******" << endl;
    cout << "Name: " << sdk.name() << endl;
    cout << "Vendor: " << sdk.vendor() << endl;
    cout << "Version: " << sdk.version() << endl;
    cout << "Number of devices: " << sdk.deviceNumber() << endl;

    for (int i = 0; i < sdk.deviceNumber(); i++) {
        Device device;
        sdk.getDevice(device, i);
        cout << "\n\n*********************\nDevice " << i + 1 << endl;
        cout << "Vendor ID: " << device.vendorID() << endl;
        cout << "Vendor name: " << device.vendorName() << endl;
        cout << "Name: " << device.name() << endl;
        cout << "Driver version: " << device.driverVersion() << endl;
        if (device.isAMD()) cout << "Is an AMD device" << endl;
        if (device.isIntel()) cout << "Is an Intel device" << endl;
        cout << "Global Memory size: " << device.globalMemSize() << endl;
        cout << "Memory cache size: " << device.globalMemCacheSize() << endl;
        cout << "Memory cache type: " << device.globalMemCacheType() << endl;
        cout << "Local Memory size: " << device.localMemSize() << endl;
        cout << "Local Memory type: " << device.localMemType() << endl;
        cout << "Max Clock frequency: " << device.maxClockFrequency() << endl;
    }
    return 0;
}

The code explanation

This example displays the installed SDK and the available computing devices that are compatible with OpenCL. Firstly, the core/ocl.hpp header is included and the cv::ocl namespace is declared.

The information about the SDK available on your computer is obtained using the getPlatfomsInfo(info) method. This information is stored in the vector<ocl::PlatformInfo> info vector and selected with PlatformInfo sdk = info.at(0). Afterwards, the main information about your SDK is shown, such as the name, vendor, SDK version, and the number of computing devices compatible with OpenCL.

Finally,foreachcompatibledevice,itsinformationisobtainedwiththesdk.getDevice(device,i)method.Nowdifferentinformationabouteachcomputingdevicecanbeshown,suchasthevendorID,vendorname,driverversion,globalmemorysize,memorycachesize,andsoon.

The following screenshot shows the results of this example for the computer used:

Information about the SDK used and compatible computing devices

Your first GPU-based program

In the following code, two versions of the same program are shown: one uses only the CPU (native) to perform the computations, and the other uses the GPU (with OpenCL). These two examples are called calculateEdgesCPU and calculateEdgesGPU, respectively, and allow you to observe the differences between the CPU and GPU versions.

The calculateEdgesCPU example is shown first:

#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    if (argc < 2)
    {
        cout << "./calculateEdgesCPU <image>" << endl;
        return -1;
    }
    Mat cpuFrame = imread(argv[1]);
    Mat cpuBW, cpuBlur, cpuEdges;

    namedWindow("Canny Edges CPU", 1);

    cvtColor(cpuFrame, cpuBW, COLOR_BGR2GRAY);
    GaussianBlur(cpuBW, cpuBlur, Size(1, 1), 1.5, 1.5);
    Canny(cpuBlur, cpuEdges, 50, 100, 3);

    imshow("Canny Edges CPU", cpuEdges);
    waitKey();
    return 0;
}

Now, the calculateEdgesGPU example is shown:

#include "opencv2/opencv.hpp"
#include "opencv2/core/ocl.hpp"

using namespace std;
using namespace cv;
using namespace cv::ocl;

int main(int argc, char* argv[])
{
    if (argc < 2)
    {
        cout << "./calculateEdgesGPU <image>" << endl;
        return -1;
    }
    setUseOpenCL(true);

    Mat cpuFrame = imread(argv[1]);
    UMat gpuFrame, gpuBW, gpuBlur, gpuEdges;
    cpuFrame.copyTo(gpuFrame);

    namedWindow("Canny Edges GPU", 1);

    cvtColor(gpuFrame, gpuBW, COLOR_BGR2GRAY);
    GaussianBlur(gpuBW, gpuBlur, Size(1, 1), 1.5, 1.5);
    Canny(gpuBlur, gpuEdges, 50, 100, 3);

    imshow("Canny Edges GPU", gpuEdges);
    waitKey();
    return 0;
}

The code explanation

These two examples obtain the same result, as shown in the following screenshot. They read an image from the standard command-line input arguments. Afterwards, the image is converted to grayscale and the GaussianBlur and Canny filter functions are applied.

In the second example, there are some differences that are required to work with the GPU. First, OpenCL must be activated with the setUseOpenCL(true) method. Second, Unified Mats (UMat) are used to allocate memory on the GPU (UMat gpuFrame, gpuBW, gpuBlur, gpuEdges). Third, the input image is copied from RAM to GPU memory with the cpuFrame.copyTo(gpuFrame) method. Now, when the functions are called, if they have an OpenCL implementation, they are executed on the GPU; any function without an OpenCL implementation falls back to its normal CPU version. In this example, the elapsed time using GPU programming (the second example) is 10 times better:

Results of the preceding two examples

Going real time

One of the main advantages of GPU processing is the ability to perform computations much faster. This increase in speed allows you to execute heavy computational algorithms in real-time applications, such as stereo vision, pedestrian detection, optical flow, or face detection. The following detectFaces example shows you an application that detects faces from a video camera. This example also allows you to select between CPU and GPU processing in order to compare the computational time.

In the OpenCV examples ([opencv_source_code]/samples/cpp/facedetect.cpp), a related face detector example can be found. For the following detectFaces example, the detectFace.pro project needs these libraries: -lopencv_core300, -lopencv_imgproc300, -lopencv_highgui300, -lopencv_videoio300, and -lopencv_objdetect300.

The detectFaces example uses the ocl module of OpenCV:

#include <opencv2/core/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <stdio.h>

using namespace std;
using namespace cv;
using namespace cv::ocl;

int main(int argc, char* argv[])
{
    // 1 - Set the initial parameters
    // Vector to store the faces
    vector<Rect> faces;
    CascadeClassifier face_cascade;
    // Read the classifier path only if it was actually given
    String face_cascade_name = (argc > 2) ? argv[2] : "";
    int face_size = 30;
    double scale_factor = 1.1;
    int min_neighbours = 2;
    VideoCapture cap(0);
    UMat frame, frameGray;
    bool finish = false;

    // 2 - Load the xml file to use the classifier
    if (!face_cascade.load(face_cascade_name))
    {
        cout << "Cannot load the face xml!" << endl;
        return -1;
    }
    namedWindow("Video Capture");

    // 3 - Select between CPU or GPU processing
    if (argc < 2)
    {
        cout << "./detectFaces [CPU/GPU | C/G]" << endl;
        cout << "Trying to use GPU..." << endl;
        setUseOpenCL(true);
    }
    else
    {
        cout << "./detectFaces trying to use " << argv[1] << endl;
        if (argv[1][0] == 'C')
            // Trying to use the CPU processing
            setUseOpenCL(false);
        else
            // Trying to use the GPU processing
            setUseOpenCL(true);
    }

    Rect r;
    double start_time, finish_time, start_total_time, finish_total_time;
    int counter = 0;

    // 4 - Detect the faces for each image capture
    start_total_time = getTickCount();
    while (!finish)
    {
        start_time = getTickCount();
        cap >> frame;
        if (frame.empty())
        {
            cout << "No capture frame --> finish" << endl;
            break;
        }
        cvtColor(frame, frameGray, COLOR_BGR2GRAY);
        equalizeHist(frameGray, frameGray);
        // Detect the faces
        face_cascade.detectMultiScale(frameGray, faces, scale_factor,
            min_neighbours, 0 | CASCADE_SCALE_IMAGE, Size(face_size, face_size));
        // For each detected face
        for (size_t f = 0; f < faces.size(); f++)
        {
            r = faces[f];
            // Draw a rectangle over the face
            rectangle(frame, Point(r.x, r.y), Point(r.x + r.width, r.y + r.height),
                Scalar(0, 255, 0), 3);
        }
        // Show the results
        imshow("Video Capture", frame);
        // Calculate the processing time
        finish_time = getTickCount();
        cout << "Time per frame: " << (finish_time - start_time) / getTickFrequency()
             << " secs" << endl;
        counter++;
        // Press Esc key to finish
        if (waitKey(1) == 27) finish = true;
    }
    finish_total_time = getTickCount();
    cout << "Average time per frame: " << ((finish_total_time -
        start_total_time) / getTickFrequency()) / counter << " secs" << endl;
    return 0;
}

The code explanation

The first step sets the initial parameters, such as the xml file (String face_cascade_name = argv[2]) that the classifier uses to detect faces, the minimum size of each detected face (face_size = 30), the scale factor (scale_factor = 1.1), and the minimum number of neighbors (min_neighbours = 2) to find a trade-off between true positive and false positive detections. You can also see the most important difference from the CPU source code: you only need to use the Unified Mat (UMat frame, frameGray).

Note
There are other xml files available in the [opencv_source_code]/data/haarcascades/ folder to detect different body parts such as eyes, lower bodies, smiles, and so on.

The second step creates a detector using the preceding xml file to detect faces. This detector is based on a Haar feature-based classifier, an effective object-detection method proposed by Paul Viola and Michael Jones. This classifier achieves a high accuracy at detecting faces. This step loads the xml file with the face_cascade.load(face_cascade_name) method.

Note
You can find more detailed information about Paul Viola and Michael Jones' method at http://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework.

The third step allows you to select between CPU or GPU processing (setUseOpenCL(false) or setUseOpenCL(true), respectively). This example uses the standard command-line input arguments (argv[1]) for this selection. The user can execute the following from the Windows console to select CPU or GPU processing, respectively, and the classifier path:

<bin_dir>/detectFaces CPU pathClassifier

<bin_dir>/detectFaces GPU pathClassifier

If the user does not provide an input argument, then GPU processing is used.

The fourth step detects faces in each image captured from the video camera. Before that, each captured image is converted to grayscale (cvtColor(frame, frameGray, COLOR_BGR2GRAY)) and its histogram is equalized (equalizeHist(frameGray, frameGray)). Afterwards, using the created face detector, faces are searched for in the current frame with the face_cascade.detectMultiScale(frameGray, faces, scale_factor, min_neighbours, 0 | CASCADE_SCALE_IMAGE, Size(face_size, face_size)) multiscale detection method. Finally, a green rectangle is drawn over each detected face, and the frame is displayed. A screenshot of this example running is shown here:

The preceding example detecting a face

The performance

In the preceding example, the computational time is measured to compare CPU and GPU processing, obtaining the average processing time per frame.

A big advantage of choosing GPU programming is performance. Therefore, the previous example takes time measurements to compute the speedup obtained with respect to the CPU version. The time is stored at the beginning of the program using the getTickCount() method. Afterwards, at the end of the program, the same function is used again to estimate the time. A counter also tracks the number of iterations. Finally, the average processing time per frame is calculated. The preceding example has an average processing time per frame of 0.057 seconds (or 17.5 FPS) using the GPU, whereas the same example using the CPU has an average processing time per frame of 0.335 seconds (or 2.9 FPS). In conclusion, there is a speed increase of 6x. This increase is significant, especially when you only need to change a few lines of code. However, it is possible to achieve much higher speed increases, which depends on the problem and even on how the kernels are designed.

Summary

In this chapter, you learned how to install OpenCV with OpenCL on your computer and how to develop applications that use your OpenCL-compatible computing devices with the latest version of OpenCV.

The first section explained what OpenCL is and which SDKs are available. Remember that, depending on your computing devices, you will need a specific SDK to work correctly with OpenCL. In the second section, the process to install OpenCV with OpenCL was explained, using the AMD APP SDK. In the last section, there were three examples using GPU programming (the second example also has a CPU version in order to compare them). In addition, the last section included a computational comparison between CPU and GPU processing, where the GPU was shown to be six times faster than the CPU version.

Index

A

affine transformation: about / Affine transformation, scaling / Scaling, translation / Translation, image rotation / Image rotation, skewing / Skewing, reflection / Reflection

alignment, HDR images / Alignment

AMD APP SDK: URL / OpenCV with the OpenCL installation, A quick recipe to install OpenCV with OpenCL

AMD controllers: URL / OpenCV with the OpenCL installation

AMD System Monitor: URL, for downloading / Check the GPU usage

arithmetic operations: about / Arithmetic operations

B

basic API concepts: about / The basic API concepts

basic data types: about / Basic data types

Bayer: about / Bayer, example code / The example code

Bilateral TV-L1 algorithm: about / Superresolution, parameters / Superresolution

Bilateral TV-L method: about / Superresolution, URL / Superresolution

bits per pixel (bpp) / High-dynamic-range images

BLAS: URL / OpenCV with the OpenCL installation, A quick recipe to install OpenCV with OpenCL

BSD license: about / An introduction to OpenCV

bundle adjustment / Stitching

buttons: about / Buttons

C

C++: OpenCV application, creating with / General usage of the library

calculateEdgesCPU example: about / Your first GPU-based program, code / The code explanation

calculateEdgesGPU example: about / Your first GPU-based program, code / The code explanation

checkOpenCL example: about / Checking your OpenCL, code / The code explanation

CIE L*a*b*: about / CIE L*a*b*, example code / The example code

CIE L*u*v*: about / CIE L*u*v*, example code / The example code

CIE XYZ: about / CIE XYZ, example code / The example code

CMake: setting / Getting a compiler and setting CMake, about / Getting a compiler and setting CMake, URL / Getting a compiler and setting CMake, OpenCV, configuring with / Configuring OpenCV with CMake, library, installing / Compiling and installing the library, library, compiling / Compiling and installing the library

color-space-based segmentation: about / Color-space-based segmentation, HSV segmentation / HSV segmentation, YCrCb segmentation / YCrCb segmentation

color spaces: about / Color spaces, conversion, with cvtColor method / Conversion between color spaces (cvtColor), RGB / RGB, grayscale / Grayscale, CIE XYZ / CIE XYZ, YCrCb / YCrCb, HSV / HSV, HLS / HLS, CIE L*a*b* / CIE L*a*b*, CIE L*u*v* / CIE L*u*v*, Bayer / Bayer

color transfer: about / Color transfer, example code / The example code

ColourImageComparison example, histograms: about / The example code

ColourImageEqualizeHist example, histograms: about / The example code, source image window / The example code, equalized color image window / The example code, histogram of three channels window / The example code, histogram of RGB channel for the equalized image window / The example code

Commission Internationale de L'Éclairage (CIE): about / CIE XYZ

compiler kits, OpenCV C++ applications: Microsoft Visual C (MSVC) / Tools to develop new projects, GNU GCC (GNU Compiler Collection) / Tools to develop new projects

Compute Unified Device Architecture (CUDA): about / An introduction to OpenCV

cvtColor method: used, for color spaces conversion / Conversion between color spaces (cvtColor), src argument / Conversion between color spaces (cvtColor), dst argument / Conversion between color spaces (cvtColor), code argument / Conversion between color spaces (cvtColor), dstCn argument / Conversion between color spaces (cvtColor)

cylindrical projection / Stitching

D

data persistence: about / Data persistence

decolorization: about / Decolorization, example / Decolorization

denoising: about / Denoising, reference link / Denoising, functions / Denoising, example code / The example code

detectFaces example: about / Going real time, code / Going real time, The code explanation

digital stabilization systems: about / Video stabilization

E

estimatePi example: about / Arithmetic operations

exposure fusion, HDR images / Exposure fusion

extrapolation methods: about / Geometrical transformations

F

Farneback optical flow method: URL / Superresolution

FFT AMD: URL / A quick recipe to install OpenCV with OpenCL

file structure, OpenCV: header files / The structure of OpenCV, library binaries / The structure of OpenCV, sample binaries / The structure of OpenCV

fourcc code: URL / The example code

functions, accelerating: about / Accelerating your own functions, checkOpenCL example / Checking your OpenCL, calculateEdgesCPU example / Your first GPU-based program, calculateEdgesGPU example / Your first GPU-based program, detectFaces example / Going real time, GPU programming, performance / The performance

functions, seamlessCloning example: colorChange / Seamless cloning, illuminationChange / Seamless cloning, textureFlattening / Seamless cloning

G

Gaussian pyramids: about / Gaussian pyramids, functions / Gaussian pyramids

GDAL (Geographic Data Abstraction Library): about / Image file-supported formats

geometrical transformations: about / Geometrical transformations, extrapolation methods / Geometrical transformations, interpolation methods / Geometrical transformations, affine transformation / Affine transformation, perspective transformation / Perspective transformation

GNU GCC: online documentation, URL / General usage of the library

GNU toolkit / Getting a compiler and setting CMake

GPU usage: checking / Check the GPU usage

Graphic Processing Unit (GPU): about / An introduction to OpenCV

grayscale: about / Grayscale, example code / Example code

GUI (Graphical User Interface) / Configuring OpenCV with CMake

H

Haar feature-based classifier / The code explanation

HDR images: about / High-dynamic-range images, creating / Creating HDR images, createHDR example / Example, tone mapping / Tone mapping, alignment / Alignment, exposure fusion / Exposure fusion

hdr_imaging tutorial: URL, for file prerequisites / Example

High Dynamic Range (HDR) / The structure of OpenCV

high dynamic range (HDR): about / An introduction to OpenCV

histogram equalization: about / Histograms

histograms: about / Histograms, ColourImageEqualizeHist example / The example code, ColourImageComparison example / The example code

HLS: about / HLS, example code / The example code

HSV: about / HSV, hue / HSV, saturation / HSV, value / HSV, example code / The example code

HSV segmentation: about / HSV segmentation

I

IDE (Integrated Development Environment) / Getting a compiler and setting CMake

image capturing process: about / Superresolution, sampling / Superresolution, geometric transformation / Superresolution, blur / Superresolution, subsampling / Superresolution

image file-supported formats: about / Image file-supported formats

image files: reading / Reading and writing image files, Reading image files, writing / Reading and writing image files, Writing image files, example code / The example code, event handling, into intrinsic loop / Event handling into the intrinsic loop

image filtering: about / Image filtering, smoothing / Smoothing, sharpening / Sharpening, image pyramids / Working with image pyramids

image processing time: measuring / Measuring the time

image pyramids: about / Working with image pyramids, Laplacian pyramids / Working with image pyramids, Laplacian pyramids, Gaussian pyramids / Gaussian pyramids, example code / The example code

image rotation: about / Image rotation, example code / The example code

image stitching: about / Stitching, URL / Stitching, registration / Stitching, calibration / Stitching, compositing / Stitching, stitchingAdvanced example / Stitching

imshow function: about / The example code

inpainting: about / Inpainting, functions / Inpainting, reference link / Inpainting, example code / The example code

Integrated Development Environment (IDE) / Tools to develop new projects

Integrated Performance Primitives (IPP): about / An introduction to OpenCV

Intel Integrated Performance Primitives (IPP) / Configuring OpenCV with CMake

interpolation methods: about / Geometrical transformations

K

kernel: about / Smoothing

L

Laplacian pyramids: about / Working with image pyramids, Laplacian pyramids

Linux / Compiling and installing the library

Luminance HDR / Tone mapping

LUTs: about / LUTs, example code / The example code

M

Make: online documentation, URL / General usage of the library

map projections: about / Stitching, rectilinear / Stitching, cylindrical / Stitching, spherical / Stitching, stereographic / Stitching, panini / Stitching

mean-sift segmentation: reference link / Gaussian pyramids

mechanical stabilization systems: about / Video stabilization

MinGW (Minimal GNU GCC) / Tools to develop new projects

mipmap: about / Working with image pyramids

modules, OpenCV: core / The structure of OpenCV, highgui / The structure of OpenCV, imgproc / The structure of OpenCV, imgcodecs / The structure of OpenCV, photo / The structure of OpenCV, stitching / The structure of OpenCV, videoio / The structure of OpenCV, video / The structure of OpenCV, features2d / The structure of OpenCV, objdetect / The structure of OpenCV

morphological operations: about / Morphological operations, functions / Morphological operations, example code / The example code

mouse interaction: about / Mouse interaction

N

non-photorealistic rendering: about / Non-photorealistic rendering, edgePreservingFilter effect / Non-photorealistic rendering, detailEnhance effect / Non-photorealistic rendering, stylization effect / Non-photorealistic rendering

O

Open Computing Language (OpenCL): about / An introduction to OpenCV

OpenCV: about / An introduction to OpenCV, URL, for downloading / Downloading and installing OpenCV, downloading / Downloading and installing OpenCV, installing / Downloading and installing OpenCV, URL, for main repository / Downloading and installing OpenCV, URL, for test data repository / Downloading and installing OpenCV, URL, for contributions repository / Downloading and installing OpenCV, URL, for documentation site / Downloading and installing OpenCV, URL, for development repository / Downloading and installing OpenCV, URL, for tutorials / Downloading and installing OpenCV, compiler, obtaining / Getting a compiler and setting CMake, configuring, with CMake / Configuring OpenCV with CMake, structure / The structure of OpenCV, reference link, for modules / The structure of OpenCV, user projects, creating with / Creating user projects with OpenCV

OpenCV, with OpenCL: installation process / A quick recipe to install OpenCV with OpenCL

OpenCV API: URL / Writing image files

OpenCV application: developing, with C++ / General usage of the library

OpenCV C++ applications: prerequisites / Tools to develop new projects

OpenCV C++ program: creating, with Qt Creator / Creating an OpenCV C++ program with Qt Creator

operations, with images: about / Common operations with images

optical flow methods: brox / Superresolution, pyrlk / Superresolution, farneback / Superresolution, tvl1 / Superresolution, comparing / Superresolution

P

panini projection / Stitching

parameters, Bilateral TV-L1 algorithm: scale / Superresolution, iterations / Superresolution, tau / Superresolution, lamba / Superresolution, alpha / Superresolution, btvKernelSize / Superresolution, blurKernelSize / Superresolution, blurSigma / Superresolution, temporalAreaRadius / Superresolution, opticalFlow / Superresolution

perspective transformation: about / Perspective transformation, functions / Perspective transformation, example code / The example code

pfstools / Tone mapping

pixel-level access: about / Pixel-level access

prerequisites, OpenCV C++ applications: OpenCV header files and library binaries / Tools to develop new projects, C++ compiler / Tools to develop new projects, auxiliary libraries / Tools to develop new projects

pyramid: about / Working with image pyramids

Q

Qt bundle: URL, for downloading / Tools to develop new projects

Qt Creator: about / Tools to develop new projects, OpenCV C++ program, creating with / Creating an OpenCV C++ program with Qt Creator

Qt framework: about / Getting a compiler and setting CMake, URL / Getting a compiler and setting CMake

Qt project: URL / Creating an OpenCV C++ program with Qt Creator

R

RANSAC method: URL / Video stabilization

Rapid Environment Editor tool: URL / Getting a compiler and setting CMake

rectilinear projection / Stitching

reflection: about / Reflection, example code / The example code

requisites, for installing OpenCV with OpenCL: about / OpenCV with the OpenCL installation, OpenCL-capable GPU or CPU / OpenCV with the OpenCL installation, compilers / OpenCV with the OpenCL installation, AMD APP SDK / OpenCV with the OpenCL installation, OpenCL BLAS (Basic Linear Algebra Subroutines) / OpenCV with the OpenCL installation, OpenCL FFT (Fast Fourier Transform) / OpenCV with the OpenCL installation, Qt libraries, for C++ compiler / OpenCV with the OpenCL installation

RGB: about / RGB, example code / The example code

S

scaling: about / Scaling, example code / The example code

seamless cloning: about / Seamless cloning

seamlessCloning example: about / Seamless cloning, functions / Seamless cloning

sharpening: about / Sharpening, functions / Sharpening, example code / The example code

skewing: about / Skewing, example code / The example code

smoothing: about / Smoothing, functions / Smoothing, example code / The example code

spherical projection / Stitching

standard template library (STL): about / The basic API concepts

stereographic projection / Stitching

sticher / Stitching

superres module / Superresolution

superresolution: about / Superresolution, URL, for example / Superresolution, example / Superresolution

T

Threading Building Blocks (TBB): about / An introduction to OpenCV

tone mapping, HDR images / Tone mapping

trackbars: about / Trackbars

translation: about / Translation, example code / The example code

TVL1 (Total Variation L1): about / Denoising

U

Unified Mats (UMat) / The code explanation

user-interactions tools: about / User-interactions tools, trackbars / Trackbars, mouse interaction / Mouse interaction, buttons / Buttons, text, drawing / Drawing and displaying text, text, displaying / Drawing and displaying text

user interface (UI) / The structure of OpenCV

user projects: creating, with OpenCV / Creating user projects with OpenCV

V

video files: writing / Reading and writing video files, reading / Reading and writing video files, recVideo example / The example code

video stabilization: about / Video stabilization, mechanical stabilization systems / Video stabilization, digital stabilization systems / Video stabilization, videoStabilizer example / Video stabilization

video stabilization algorithms: steps / Video stabilization

W

Windows / Compiling and installing the library

Y

YCrCb: about / YCrCb, example code / The example code

YCrCb segmentation: about / YCrCb segmentation