
Investigation Into Optical Flow Problem in the Presence of Spatially-varying Motion Blur

Mohammad Hossein Daraei

June 2014

[email protected]

University of California, Santa Cruz
Multi-dimensional Signal Processing Lab

Optical Flow Computation

The problem of optical flow computation for two frames fi and fi+1 can be posed as finding a displacement field that matches the frames with minimum error.

Given pi as a pixel in the first image and w(pi) as its flow vector, it will project onto pi + w(pi) in the next frame. Then, the brightness constancy assumption states that the next frame should satisfy fi+1(pi + w(pi)) = fi(pi).
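As an illustrative sketch (not the thesis implementation; the function name and the toy frames below are made up for this example), the brightness-constancy data cost can be evaluated by warping the second frame with the flow using bilinear interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def data_cost(f1, f2, u, v):
    """E_D: sum of squared residuals f2(p + w(p)) - f1(p), with
    bilinear interpolation at the (generally subpixel) targets."""
    H, W = f1.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    warped = map_coordinates(f2, [ys + v, xs + u], order=1, mode='nearest')
    return float(np.sum((warped - f1) ** 2))

# Toy example: everything shifts one pixel to the right.
rng = np.random.default_rng(0)
f1 = rng.random((16, 16))
f2 = np.roll(f1, 1, axis=1)

# The true flow (u, v) = (1, 0) explains the second frame far better
# than the zero flow (up to border effects).
cost_true = data_cost(f1, f2, 1.0, 0.0)
cost_zero = data_cost(f1, f2, 0.0, 0.0)
```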

Objective Function

Let’s consider two consecutive frames fi and fj. Then, ED(u,v) must be minimized over (u,v), the horizontal and vertical components of wij.

An interpolation method has to be employed in order to generate the warped frame, so direct optimization is not straightforward. A first-order Taylor approximation leads to a quadratic objective function in terms of u and v, which benefits from painless optimization, but is not sufficient to uniquely determine both u and v (the aperture problem).
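A minimal numeric illustration of the aperture problem under the linearized data term (the gradient values are arbitrary, chosen only for this example):

```python
import numpy as np

# One pixel gives a single linear constraint  fx*u + fy*v + ft = 0.
# The quadratic objective (fx*u + fy*v + ft)^2 has the normal-equation
# matrix [[fx^2, fx*fy], [fx*fy, fy^2]], which is rank 1: the flow
# component perpendicular to the image gradient is unconstrained.
fx, fy, ft = 0.8, 0.6, -0.5
M = np.array([[fx * fx, fx * fy],
              [fx * fy, fy * fy]])
rank = np.linalg.matrix_rank(M)   # 1, not 2: u and v are not both determined
```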

[Figure: two points (x1,y1) and (x2,y2) in fi matched to fj by the flow vectors wij(x1,y1) and wij(x2,y2)]

Traditional Optical Flow Methods: Local and Global Techniques

Local Methods

Local methods, e.g. Lucas-Kanade, do not provide a dense flow field over the frame; they cannot reliably estimate the flow at locations with little image detail. However, they are more robust against noise compared to global methods.

In order to cope with the aperture problem, they smooth the data term by convolving it with a Gaussian kernel Kρ, where ρ is the Gaussian parameter.

Minimization of ELK could be addressed as a mean squared error minimization of the form Kρ ∗ [fx² fxfy; fxfy fy²] (u, v)ᵀ = −Kρ ∗ (fxft, fyft)ᵀ, which could turn into an ill-conditioned problem if not much detail is present in the neighborhood.
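A sketch of the local least-squares solve for one neighborhood (a uniform window stands in for the Gaussian Kρ here, and the gradients are synthetic, constructed to be consistent with a known flow):

```python
import numpy as np

def lucas_kanade_pixel(Ix, Iy, It):
    """Solve the windowed least-squares system of Lucas-Kanade for one
    neighborhood: (sum grad f grad f^T) w = -(sum ft grad f).
    Returns None when the system is ill-conditioned (low texture)."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # n x 2
    b = -It.ravel()
    ATA = A.T @ A
    if np.linalg.cond(ATA) > 1e6:   # ill-conditioned: aperture problem
        return None
    return np.linalg.solve(ATA, A.T @ b)

# Synthetic 5x5 neighborhood whose temporal derivative exactly matches
# the flow (u, v) = (0.5, -0.25) under the linearized constancy model.
rng = np.random.default_rng(1)
Ix = rng.standard_normal((5, 5))
Iy = rng.standard_normal((5, 5))
u_true, v_true = 0.5, -0.25
It = -(Ix * u_true + Iy * v_true)
w = lucas_kanade_pixel(Ix, Iy, It)
```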

Global Methods

Global methods, e.g. Horn-Schunck, generate a densely computed global flow field over the image. They are not as robust as local methods against noise.

Based on the intuition that displacement fields are in general smooth, the Horn-Schunck method employs a functional that, in addition to the data fidelity term, incorporates a smoothness term on the first-order flow field gradients: E(u,v) = ∫ (fx u + fy v + ft)² dx dy (data term) + α ∫ (|∇u|² + |∇v|²) dx dy (smoothness term).

Optimization is performed by SOR or CG iterations, based on the associated Euler-Lagrange equations.
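As a sketch of what such an iteration looks like, here is the classic Jacobi-style update derived from the Euler-Lagrange equations (plain Jacobi instead of the faster SOR/CG mentioned above; the sinusoid test pattern is made up for this example):

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(f1, f2, alpha=0.012, n_iter=200):
    """Jacobi iterations on the Horn-Schunck Euler-Lagrange equations:
    u <- u_bar - fx * (fx*u_bar + fy*v_bar + ft) / (alpha^2 + fx^2 + fy^2)."""
    fx = np.gradient(f1, axis=1)
    fy = np.gradient(f1, axis=0)
    ft = f2 - f1
    avg = np.array([[0, 0.25, 0], [0.25, 0, 0.25], [0, 0.25, 0]])
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        common = (fx * u_bar + fy * v_bar + ft) / (alpha**2 + fx**2 + fy**2)
        u = u_bar - fx * common
        v = v_bar - fy * common
    return u, v

# Horizontal sinusoid shifted right by 0.5 px: true flow is about (0.5, 0).
x = np.arange(32, dtype=float)
f1 = np.tile(np.sin(x / 3.0), (32, 1))
f2 = np.tile(np.sin((x - 0.5) / 3.0), (32, 1))
u, v = horn_schunck(f1, f2)
```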

Traditional Optical Flow Methods: Combined Local-Global (CLG)

Charbonnier penalizer: ψ(s²) = √(s² + β²), with β = 0.001. It allows outliers in the flow field not to be penalized quadratically, as they might be due to object deformation, occlusion, changes in illumination, or other dissimilarities.

The functional again consists of a data term and a smoothness term, with the smoothness parameter set to 0.012 as a constant.

The important characteristic of the CLG method is the simultaneous use of a smoothness term and a Gaussian-smoothed data fidelity term: the former allows for a densely computed flow, and the latter makes the estimates robust against noise.
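A quick numerical sketch of the Charbonnier penalizer above (assuming the common form ψ(s²) = √(s² + β²); the exact form used in the thesis may differ in constants):

```python
import numpy as np

def charbonnier(s2, beta=0.001):
    """psi(s^2) = sqrt(s^2 + beta^2): grows linearly, not quadratically,
    for large residuals, so outliers (occlusion, deformation,
    illumination change) are penalized far less than under an L2 loss."""
    return np.sqrt(s2 + beta**2)

# A 10x residual costs only ~10x more (a quadratic loss would cost 100x):
small = charbonnier(1.0)
large = charbonnier(100.0)
```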

Evaluation of Estimated Flow Fields

[Figure: estimated flow compared against the ground truth]
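Flow fields are typically evaluated with the average endpoint error (AEE) and average angular error (AAE); a sketch of the standard definitions, assuming these are the metrics used (the toy fields below are illustrative):

```python
import numpy as np

def endpoint_error(u, v, u_gt, v_gt):
    """AEE: mean Euclidean distance between estimated and
    ground-truth flow vectors."""
    return float(np.mean(np.hypot(u - u_gt, v - v_gt)))

def angular_error(u, v, u_gt, v_gt):
    """AAE in degrees, using the standard spatio-temporal
    vectors (u, v, 1)."""
    num = u * u_gt + v * v_gt + 1.0
    den = np.sqrt(u**2 + v**2 + 1.0) * np.sqrt(u_gt**2 + v_gt**2 + 1.0)
    return float(np.mean(np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))))

u_gt = np.full((8, 8), 1.0)
v_gt = np.zeros((8, 8))
aee_perfect = endpoint_error(u_gt, v_gt, u_gt, v_gt)    # exact estimate
aee_off = endpoint_error(u_gt + 0.5, v_gt, u_gt, v_gt)  # 0.5 px bias
aae_perfect = angular_error(u_gt, v_gt, u_gt, v_gt)
```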

Let’s consider the scenario of a camera recording a video sequence fi in a non-stationary environment, where an arbitrary point in the scene (on an object) moves along a path. The brightness constancy assumption states that the object will appear similarly in adjacent frames. However, shutter time is non-zero in practical cameras, and if the object moves during the acquisition time, it will be corrupted by motion blur. So the assumption no longer holds, and traditional methods result in artifacts.

[Figure: deformation artifacts of traditional methods (CLG, BA) versus blur-aware methods (BlurFlow, MB-CLG), compared against the ground truth]

Motion Blur Model

Let’s assume the scene is projected onto the CCD as a frame fi at time ti. In practical cameras, the shutter is kept open for a non-zero time, i.e., the acquisition interval. Thus, the integrated blurred image gi is the aggregation of all fi’s in the interval: gi = Bwi ∗ fi, where fi is the unblurred frame, Bwi is a spatially-varying motion blur kernel based on wi, and ∗ denotes spatially-varying convolution.
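A toy sketch of such a spatially-varying blur (assuming a linear trajectory and uniform exposure per pixel; the sampling scheme and frame sizes here are illustrative, not the thesis formulation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def spatially_varying_blur(f, u, v, n_samples=9):
    """Approximate g(p) as the average of f along the per-pixel motion
    segment from p - w(p)/2 to p + w(p)/2. Each pixel has its own
    kernel, so this is a spatially-varying convolution, not one
    global PSF."""
    H, W = f.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    g = np.zeros_like(f)
    for t in np.linspace(-0.5, 0.5, n_samples):
        g += map_coordinates(f, [ys + t * v, xs + t * u],
                             order=1, mode='nearest')
    return g / n_samples

rng = np.random.default_rng(2)
f = rng.random((16, 16))
u = np.zeros((16, 16))
v = np.zeros((16, 16))
u[:, 8:] = 4.0                       # only the right half moves
g = spatially_varying_blur(f, u, v)  # left half stays sharp
```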

Motion Blur Model

In order to express Bwi in terms of {wi} = {wi,i-1, wi,i+1}, we take linear approximations for the moving object trajectory.

[Figure: the moving object trajectory of an arbitrary point d in fi, its coordinates in fi-1 and fi+1, and the approximated (linearized) paths for d from fi-1 to fi and from fi to fi+1]

With the linearized trajectories, each point d in the unblurred frame fi integrates as two line segments, i.e., gi can be expressed as the sum of two terms.

The two terms are the projections of the linearized path of d from ti−τ to ti and from ti to ti+τ.
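The two-segment integration can be sketched as follows (a simplified model assuming τ is half the frame interval, so each segment spans half a flow vector; names and sampling density are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def two_segment_blur(f, u_bwd, v_bwd, u_fwd, v_fwd, n=5):
    """g(p) as the sum of two line-segment integrals along the
    linearized trajectories: one toward the previous frame (backward
    flow, covering [ti - tau, ti]) and one toward the next frame
    (forward flow, covering [ti, ti + tau])."""
    H, W = f.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    g = np.zeros_like(f)
    for t in np.linspace(0.0, 0.5, n):
        g += map_coordinates(f, [ys + t * v_bwd, xs + t * u_bwd],
                             order=1, mode='nearest')
        g += map_coordinates(f, [ys + t * v_fwd, xs + t * u_fwd],
                             order=1, mode='nearest')
    return g / (2 * n)

rng = np.random.default_rng(5)
f = rng.random((12, 12))
z = np.zeros((12, 12))
g_static = two_segment_blur(f, z, z, z, z)   # no motion, so no blur
```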

The key observation is that if we take the blur functions of each frame, translate them into the coordinates of the other frame, and apply them accordingly, the brightness constancy assumption will be valid for the new set of frames.

Step by step: we take the flow fields that match fi with the next and the previous frames; we warp these flows according to the flow wi,i+1 in order to transform them onto the coordinates of fi+1; we apply the corresponding blur functions on the blurred frame gi+1; and we repeat the same procedure for the other frame in a similar manner.
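The warp step can be sketched as pulling a per-pixel field back through a flow (a minimal bilinear version; the function name is made up here, and the thesis may handle occluded targets differently):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_field(field, u, v):
    """'Warp-the-flow': pull a per-pixel field (e.g. one component of a
    flow or blur parameter) through the flow (u, v), so the value
    defined at p in one frame is read off at p + w(p) in the other."""
    H, W = field.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    return map_coordinates(field, [ys + v, xs + u], order=1, mode='nearest')

rng = np.random.default_rng(4)
field = rng.random((8, 8))
zero = np.zeros((8, 8))
same = warp_field(field, zero, zero)                 # identity warp
shifted = warp_field(field, np.ones((8, 8)), zero)   # read 1 px to the right
```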

Motion Blur Aware Combined Local-Global (MB-CLG) Optical Flow

We start by generating a Gaussian pyramid of L levels for each frame gi in the sequence. Starting from the coarsest level, we apply MB-CLG to estimate all of the forward and backward flows over the sequence. Then, we upscale these estimates and apply MB-CLG at the next level. We repeat this step until we reach the finest level, and we refine the flows over the sequence once more to get the final estimates.
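The coarse-to-fine loop above can be sketched as a skeleton (the `estimate_flow` callback stands in for one MB-CLG refinement pass, which is not reproduced here; pyramid scale 0.5 and bilinear resampling are assumptions of this sketch):

```python
import numpy as np
from scipy.ndimage import zoom

def gaussian_pyramid(frame, levels, scale=0.5):
    """pyr[0] is the finest (original) level, pyr[-1] the coarsest."""
    pyr = [frame]
    for _ in range(levels - 1):
        pyr.append(zoom(pyr[-1], scale, order=1))
    return pyr

def coarse_to_fine(f1, f2, levels, estimate_flow):
    """Estimate at the coarsest level, then repeatedly upscale the flow
    (both its grid and its values) and refine at the next finer level."""
    p1 = gaussian_pyramid(f1, levels)
    p2 = gaussian_pyramid(f2, levels)
    u = np.zeros_like(p1[-1])
    v = np.zeros_like(p1[-1])
    for l in range(levels - 1, -1, -1):
        if l < levels - 1:
            # Resample to the finer grid; flow values scale by the
            # same factor, since displacements are measured in pixels.
            factor = p1[l].shape[0] / u.shape[0]
            u = zoom(u, factor, order=1) * factor
            v = zoom(v, factor, order=1) * factor
        u, v = estimate_flow(p1[l], p2[l], u, v)
    return u, v

# Identity "estimator" just to exercise the plumbing:
f = np.random.default_rng(3).random((16, 16))
u, v = coarse_to_fine(f, f, 3, lambda a, b, u0, v0: (u0, v0))
```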

The Proposed Method:

Initialization:

Then, given the estimated flows from level l-1, we use this algorithm to refine them for the next level l: the estimated flows of fi and of fi+1 for level l-1 are upscaled by the pyramid’s scale parameter and brought to the coordinates of fi and fi+1, respectively. As previously mentioned, we apply the corresponding blur functions to get ki, as well as ki+1 in a similar manner. By estimating the flows matching ki and ki+1, we then get updated forward/backward flows for the next level.

Handling Moving Objects and Occlusion

Smoothing parameter α = 0.012; K = 10; σd = 0.4. The constant α is replaced by a smoothing matrix A(x,y).

Results: Homography Sequences

Error maps for matching latent frames fi and fi+1 with estimated flows

Results: Homography Sequences

Results: Varying Homography Parameters

Results: Effect of Adding Noise (white Gaussian i.i.d. noise with standard deviation σ)

Results: Wrinkle Artifacts

Moving Object Results: The Astronaut Sequence

Moving Object Results: The Astronaut Sequence

Moving Object Results: The Bird Sequence

Moving Object Results: The Bird Sequence

Summary

The proposed method, MB-CLG,

Is aimed at solving optical flow in the presence of motion blur

Employs a coarse-to-fine approach by constructing a Gaussian pyramid

Estimates blur functions of both the target and the source images

Projects the blur functions onto different coordinates using “warp-the-flow”

Applies exchanged blur functions on both frames

Accounts for moving objects and occluded regions by replacing α with A(x,y)

Is proved to keep the brightness constancy assumption valid for the new pair of frames

Achieves superior results compared to BlurFlow and traditional methods

Main contributions