Computational Photography

Tampere 8.2012

Digital photography

Photography has been rapidly changing in recent years, with the transition from analog to digital.


Digital photography

Computing can now be involved in every step of the imaging process. This is having a huge impact on the art and practice of photography.

We don't just take a picture – we sample the light field.



The light field

• The light field describes the amount of light (radiance) traveling in every direction through every point in space
– Possibly also as a function of wavelength and time
– The plenoptic function
• One goal of image based rendering is to construct the light field from a set of images, so that new images can be synthesized


Computational photography

• Traditional (analog and digital) cameras have limitations, e.g.:
– Cannot capture scenes with significant differences between bright and dark areas
– Limited field of view
– Can only focus at a single plane
• Computational photography is an emerging research field that attempts to extend or enhance the capabilities of digital photography by adding computational elements to the imaging process
– Optics + sensors + illumination + processing + interaction
– Convergence of computer vision, digital imaging, and graphics


Computational photography

• Typical process: take several images, combine them (perhaps with user interaction), create "better" image(s)
• May modify
– Field of view, depth of field/focus, dynamic range, spatial resolution, wavelength resolution, temporal resolution
– Aperture, focus, sensor design, exposures, illumination
• Computational photography can provide
– Improved dynamic range
– Variable focus, resolution, and depth of field
– Aesthetic image framing
– Content-based image editing
– Hints about shape, reflectance, and lighting
– New interactive forms of photography


Traditional Photography

[Pipeline: Lens → Sensor array (pixels) → Image]

Computational Photography

[Pipeline: Generalized optics → Generalized sensor → Samples → Computation → Image]


E.g., a computational lens

Varioptic Liquid Lens: Electrowetting

Applied voltage modifies the curvature of the liquid-liquid interface, leading to a variable focal length lens.
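As a rough idealization (a single thin refracting interface; the indices below are illustrative, not Varioptic's specification), the power of the liquid-liquid interface scales with its curvature:

```latex
% Optical power of a spherical interface between liquids with
% refractive indices n_1 and n_2 and radius of curvature R:
P = \frac{n_2 - n_1}{R}
```

For example, with n_2 - n_1 ≈ 0.15, shrinking R from 3 mm to 1.5 mm doubles the power from 50 to 100 diopters; the applied voltage sweeps R, and hence the focal length, with no moving mechanical parts.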


The Plenoptic Function (a.k.a. the light field)

Q: What is the set of all things that we can ever see?
A: The Plenoptic Function

Let's start with a stationary person and try to parameterize everything that he can see…


A grayscale image

…is intensity of light
Seen from a single viewpoint
At a single time
Averaged over the wavelengths of the visible spectrum

(can also do P(x,y), but spherical coordinates are nicer)

P(θ, φ)


A color image

…is intensity of light
Seen from a single viewpoint
At a single time
As a function of wavelength

P(θ, φ, λ)


A color movie

…is intensity of light
Seen from a single viewpoint
Over time
As a function of wavelength

P(θ, φ, λ, t)


A holographic movie

…is intensity of light
Seen from ANY viewpoint
Over time
As a function of wavelength

P(θ, φ, λ, t, V_X, V_Y, V_Z)


The Plenoptic Function

P(θ, φ, λ, t, V_X, V_Y, V_Z)

Can reconstruct every possible view, at every moment, from every position, at every wavelength.

Contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality.


Sampling the Plenoptic Function (top view)


Synthesizing novel views


Image based rendering

We don't need a (3D) model of the object.
Instead, model the object by storing its light field.

[Figure: rays leaving the object ("stuff") are captured and stored as a light field]

Some examples of computational photography

Research and products


Flash/no-flash image pairs

[Figure panels: Flash, No-flash, Result]

Eisemann and Durand, SIGGRAPH 2004; Petschnigg et al., SIGGRAPH 2004
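Both papers denoise the dark, noisy no-flash (ambient) image while taking edge information from the sharp flash image, using a joint (also called cross) bilateral filter. A minimal grayscale sketch; the function name, brute-force loops, and σ defaults are illustrative:

```python
import numpy as np

def joint_bilateral(ambient, flash, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Denoise `ambient` with range weights computed on `flash`.

    ambient, flash: 2D float arrays in [0, 1] of equal shape.
    A brute-force O(N * radius^2) sketch, not an optimized implementation.
    """
    h, w = ambient.shape
    pad = radius
    amb = np.pad(ambient, pad, mode='reflect')
    fla = np.pad(flash, pad, mode='reflect')
    # Precompute the spatial Gaussian kernel.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.zeros_like(ambient)
    for y in range(h):
        for x in range(w):
            a = amb[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            f = fla[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights come from the low-noise flash image, so
            # flash-visible edges are preserved in the denoised result.
            rng = np.exp(-(f - fla[y + pad, x + pad])**2 / (2 * sigma_r**2))
            w_tot = spatial * rng
            out[y, x] = (w_tot * a).sum() / w_tot.sum()
    return out
```

Because the range weights come from the low-noise flash image, flat regions of the ambient image are smoothed aggressively without blurring across real edges.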


Flash/no-flash image pairs (Agrawal et al., SIGGRAPH 2005)

[Figure panels: Flash, Result, Ambient, Reflection Layer]


High dynamic range imaging

[Figure panels: Ambient, Flash (Raskar et al.)]

HDR imaging: varying exposure

[Figure: a series of images of increasing brightness captured at varying exposure times, plus a flash image, combined into a single result]
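One standard way to combine a varying-exposure stack (not necessarily the method on the slide): assuming a linear sensor response, each image divided by its exposure time estimates scene radiance, and a per-pixel weighted average favors well-exposed pixels. A minimal sketch:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge linear images (float arrays in [0, 1]) into a radiance map.

    Each image divided by its exposure time estimates scene radiance;
    a hat weighting discounts under- and over-exposed pixels.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at mid-gray
        acc += weight * (img / t)
        wsum += weight
    return acc / np.maximum(wsum, 1e-8)
```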


Image retargeting (Setlur et al., SIGGRAPH 2004)

Rearranging the image content to optimize the information of interest, given the constraints (e.g., reduced aspect ratio).


The Lytro camera

A practical light field camera

[Figure: two-plane light field parameterization, uv-plane and st-plane]
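In this two-plane parameterization, writing the light field as L(u, v, s, t) with (u, v) on the lens plane and (s, t) on the sensor/microlens plane, a photograph integrates over the aperture, and refocusing integrates a sheared light field (this follows the form of Ng's refocusing equation; normalization constants omitted):

```latex
% Photograph focused on the st-plane:
E(s,t) \propto \iint L(u, v, s, t)\, du\, dv

% Refocused at relative depth \alpha (sheared light field):
E_\alpha(s,t) \propto \iint L\!\left(u, v,\; u + \frac{s-u}{\alpha},\; v + \frac{t-v}{\alpha}\right) du\, dv
```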


The prototype camera

Adaptive Optics microlens array, 125μ square-sided microlenses
4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens


Digital refocusing

• Refocusing = summing windows extracted from several microlenses

[Figure: pixel windows shifted and summed (Σ) across microlenses, one sum per refocus depth]
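A discrete shift-and-add sketch of this, with the light field stored as a 4D array (the array layout and shift parameterization are assumptions for illustration):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lightfield: array of shape (U, V, S, T), angular samples (u, v)
    by spatial samples (s, t), as from a microlens array.
    alpha: relative depth of the synthetic focal plane (1.0 = no shift).
    Returns a 2D image of shape (S, T).
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each angular sample is a sub-aperture image; shift it in
            # proportion to its offset from the aperture center, then sum.
            du = (u - U // 2) * (1.0 - 1.0 / alpha)
            dv = (v - V // 2) * (1.0 - 1.0 / alpha)
            out += np.roll(np.roll(lightfield[u, v], int(round(du)), axis=0),
                           int(round(dv)), axis=1)
    return out / (U * V)
```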


Example of digital refocusing


Cinemagraph (www.cinemagraph.com)

• Still photographs in which a minor (and repeated) movement occurs
– Typically in Animated GIF format
• "A Cinemagraph is an image that contains within itself a living moment that allows a glimpse of time to be experienced and preserved endlessly."


Coded aperture photography

• Regular camera with "patterned aperture" (made from cardboard!) in place of the regular aperture
• From this, can get scene depth and do refocusing
• Uses deconvolution with the known patterns

Levin et al., SIGGRAPH 2007

[Figure: single input image; Output #1: depth map; Output #2: all-focused image]
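The "deconvolution with the known patterns" step can be illustrated with a simple Wiener filter (the paper itself uses deconvolution with a sparse derivatives prior, shown on a later slide; Wiener is a simpler stand-in):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=0.01):
    """Deconvolve `blurred` given the point spread function `psf`.

    blurred: 2D image; psf: 2D kernel (the scaled aperture pattern).
    snr: noise-to-signal regularizer; larger values give smoother results.
    """
    H = np.fft.fft2(psf, s=blurred.shape)  # PSF spectrum, zero-padded
    B = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + snr)
    X = np.conj(H) * B / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(X))
```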

Lens and defocus

[Figure sequence: a lens images a point light source on the focal plane to a sharp point on the camera sensor. As the object moves off the focal plane, the image of the point light source becomes defocused: the point spread function takes the shape of the lens' aperture, and its size grows with the object's distance from the focal plane.]

Idea 2: Coded Aperture

• Mask (code) in aperture plane
– Make defocus patterns different from natural images and easier to discriminate

[Figure: conventional aperture vs. our coded aperture]

Solution: lens with occluder

[Figure sequence: with a coded occluder in the lens aperture, a defocused point light source images to a scaled copy of the aperture pattern on the camera sensor. The point spread function's scale varies with the object's distance from the focal plane, so the observed blur pattern encodes depth.]

Regularizing depth estimation

\hat{x} = \arg\min_x \underbrace{\| f \otimes x - y \|^2}_{\text{convolution error}} + \underbrace{\lambda \sum_i \rho(\nabla x_i)}_{\text{derivatives prior}}

• Try deblurring with 10 different aperture scales
• Keep the minimal-error scale in each local window + regularization

[Figure panels: Input, Local depth estimation, Regularized depth]
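A sketch of that scale search: deconvolve with the PSF at each hypothesized aperture scale, reconvolve, and keep the scale with the smallest residual in each local window (the window size, error measure, and Wiener deconvolution here are illustrative; the paper also regularizes the resulting depth map):

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def local_depth(blurred, psf, n_scales=10, win=15, snr=0.01):
    """Per-pixel choice of the PSF scale that best explains the local blur."""
    errors = []
    for k in range(1, n_scales + 1):
        psf_k = zoom(psf, k / n_scales)              # PSF at hypothesized scale
        psf_k = psf_k / psf_k.sum()
        H = np.fft.fft2(psf_k, s=blurred.shape)
        B = np.fft.fft2(blurred)
        X = np.conj(H) * B / (np.abs(H) ** 2 + snr)   # Wiener deconvolution
        resid = blurred - np.real(np.fft.ifft2(X * H))  # reconvolution error
        errors.append(uniform_filter(resid ** 2, size=win))
    return np.argmin(np.stack(errors), axis=0)        # coarse depth proxy
```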

All-focus image

[Figure panels: original image (input) vs. all-focused (deconvolved) result, with close-ups]

Application: Digital refocusing from a single image

[Figure sequence: the same input photograph digitally refocused at several different depths]

Coded aperture: pros and cons

+ Image AND depth in a single shot
+ No loss of image resolution
+ Simple modification to lens
- Depth is coarse (but depth is a pure bonus)
- Loses some light (but deconvolution increases depth of field)
- Unable to get depth at untextured areas; might need manual corrections

Some projects in computational photography

Composition Context Photography

w/Daniel Vaquero


Composition context photography

How can the act of framing an image help to produce better images? A wider array of images to choose from?
– Higher resolution, higher SNR, wider FOV, optimal subject pose, etc.


Composition context photography

The composition context of a picture: the viewfinder frames, their capture parameters, and inertial sensor data collected while the user is framing the photograph.


The Frankencamera

• An open source experimental camera architecture and API for computational photography
• Fully programmable: permits control and synchronization of the sensor and image processing pipeline at the microsecond time scale, as well as the ability to incorporate and synchronize external hardware like lenses and flashes

Stanford, Ulm, Nokia RC, UCSB, Disney Research (SIGGRAPH 2010)
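The central abstraction in this kind of architecture is a per-frame capture request whose parameters the pipeline honors deterministically. The real FCam API is C++; the following is a loose, hypothetical Python analogue (Shot, sensor.capture, and all field names are illustrative, not the actual API):

```python
from dataclasses import dataclass

@dataclass
class Shot:
    """One capture request: every parameter is set per frame (hypothetical)."""
    exposure_us: int        # exposure time in microseconds
    gain: float             # analog sensor gain
    fire_flash: bool = False

def burst_for_hdr(sensor):
    """Stream alternating short/long exposures, e.g. for HDR capture."""
    shots = [Shot(exposure_us=1000, gain=1.0),
             Shot(exposure_us=32000, gain=1.0)]
    frames = []
    for shot in shots:
        # A Frankencamera-style pipeline returns each frame tagged with
        # the exact settings it was captured with.
        frames.append(sensor.capture(shot))
    return frames
```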


Generalized Autofocus

w/Daniel Vaquero


Generalized autofocus

• Choices have to be made in focusing images
• Large depth of field → small aperture → noisy image
• Can we provide alternatives? New focusing options?
– Large depth of field with wide aperture?
– Selective focus?


All-in-focus imaging

Images focused at different distances (focal stack)
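Fusing a focal stack into an all-in-focus composite can be sketched as picking, per pixel, the stack image that is locally sharpest (a simple Laplacian-energy criterion; practical fusion methods blend more carefully):

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def all_in_focus(stack):
    """Fuse a focal stack (list of 2D images) into one composite.

    Per pixel, copy the value from the image with the highest local
    Laplacian energy, i.e. the one in sharpest focus there.
    """
    sharp = np.stack([uniform_filter(laplace(img) ** 2, size=9)
                      for img in stack])
    best = np.argmax(sharp, axis=0)           # index of sharpest image
    stack = np.stack(stack)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```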


What is our work about?

• The capture process for focal stacks
– For fusion, we use existing techniques, but we do it on-camera
• Our scenario, typical of camera phones and low-end consumer cameras:
– Fixed lens aperture
– Handheld camera
– Limited memory and processing power


Contrast-based passive autofocus

[Figure: the autofocused result is sharp, but has low SNR]
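Contrast-based autofocus sweeps the lens and keeps the position that maximizes a focus measure on the image. A minimal sketch; the gradient-energy measure and the capture_at interface are illustrative:

```python
import numpy as np

def focus_measure(img):
    """Gradient-energy contrast measure: larger means sharper."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float((gx ** 2 + gy ** 2).sum())

def autofocus(capture_at, positions):
    """Return the lens position whose image maximizes contrast.

    capture_at: hypothetical callable mapping a lens position to a 2D image.
    """
    return max(positions, key=lambda p: focus_measure(capture_at(p)))
```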


Our method: Generalized autofocus

• Contribution: autofocus for focal stacks
• Find the minimal set of required images to generate an all-in-focus composite
– The choice of this set depends on the scene
• Why is minimizing the number of images important?
– Faster capture
– Less sensitive to motion
– Requires less memory and processing power


Method overview

1. Capture a focal stack of low-resolution images while continuously changing the focal distance ("lens sweep")
2. Analyze sharpness of the low-resolution focal stack
3. Determine the minimal set of required images
4. Recapture the images in the minimal set in high resolution
5. Perform fusion of the images on-camera

Desired: a good user interface to specify what should be in focus

(Prototype: Nokia N900)
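Step 3 can be sketched as a greedy, set-cover-style selection: each stack image "covers" the pixels where it is (nearly) sharpest, and images are added until everything is covered (the greedy strategy and threshold are assumptions for illustration; the slide does not specify the algorithm):

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def minimal_focus_set(stack, tau=0.5):
    """Greedy choice of a minimal subset of a focal stack.

    A pixel is 'covered' by an image if that image's local sharpness
    there is within a factor tau of the best across the whole stack.
    Repeatedly add the image covering the most uncovered pixels.
    """
    sharp = np.stack([uniform_filter(laplace(im) ** 2, size=9)
                      for im in stack])
    covered_by = sharp >= tau * sharp.max(axis=0)   # (N, H, W) booleans
    uncovered = np.ones(sharp.shape[1:], dtype=bool)
    chosen = []
    while uncovered.any():
        gains = (covered_by & uncovered).sum(axis=(1, 2))
        k = int(np.argmax(gains))
        if gains[k] == 0:
            break                                   # nothing more to gain
        chosen.append(k)
        uncovered &= ~covered_by[k]
    return chosen
```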


Standard autofocus

All-in-focus result (24 images in stack, 3 used)

Stack images (4)

All-in-focus result (24 images in stack, 4 used)

Generalized Autofocus

Ongoing work

• Improve image fusion
• User-selectable focus
– Location and range
– Between x and y
– In focus here and here and here…
• Provide focusing opportunities not possible with lens-only solutions