Cover Image

Final Project:

Lightfield Camera Playground + Poisson Blending

Project 6.1: Lightfield Camera Playground

  • Here we play with the beautiful focus and realistic camera effects that can be achieved from an array of images.
  • The operations are as simple as shifting and averaging, yet they produce authentic depth refocusing and aperture simulation.
  • We use data from The (New) Stanford Light Field Archive.

Part 1: Depth Refocusing

  • When the camera moves, objects far away have smaller pixel displacements than objects close to the camera.
  • Naively averaging all such images therefore gives a result that is sharp only where displacements are small (the distant parts) and blurry elsewhere.
  • As shown in the right image below:

Original Image

Naive Averaging Over All Images

  • This observation suggests that by shifting each image in the array appropriately,
  • with each image nudged by a slightly different amount that is a function of depth,
  • we can obtain a refocused image that is sharp at any chosen depth.
  • We shift each image by the offset between its grid coordinates and the center image location (8, 8),
  • scaled by a depth weight that regulates the shift amount. We use simple nearest-edge padding and bilinear interpolation.
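The shift-and-average procedure above can be sketched as follows. This is a minimal sketch, not the exact project code: the array layout, the `refocus` name, and the use of `scipy.ndimage.shift` for bilinear interpolation with nearest-edge padding are my assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, grid_coords, depth, center=(8, 8)):
    """Shift-and-average refocusing over a light field grid.

    images:      (N, H, W, C) float array of sub-aperture views
    grid_coords: length-N list of (row, col) grid positions of each view
    depth:       scalar weight that selects the refocus plane
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (u, v) in zip(images, grid_coords):
        # Offset of this view from the central camera, scaled by depth.
        dy = depth * (center[0] - u)
        dx = depth * (center[1] - v)
        # order=1 gives bilinear interpolation; mode='nearest' pads edges.
        acc += nd_shift(img, (dy, dx, 0), order=1, mode='nearest')
    return acc / len(images)
```

With `depth = 0` this degenerates to the naive average; sweeping `depth` over a range produces the focal stacks shown below.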

Amethyst Refocused

Depth=[-2, 2], Step Size=0.2

Chessboard Refocused

Depth=[-1, 3.4], Step Size=0.2

Truck Refocused

Depth=[-2, 2], Step Size=0.2

  • Additionally, on a pure CPU it took over 218 minutes to render just one sweep, so I use a GPU to accelerate the computation.
  • Both runs below perform a refocusing sweep over the depth range [-3, 3] with step size 0.2 on 1400x800 images.

Pure CPU: 218 minutes 42.3 s

GPU Accelerated: 2 minutes 22.9 s

Part 2: Aperture Adjustment

  • Changing the aperture of a real camera leads to a different depth of field.
  • A larger aperture corresponds to a shallower depth of field,
  • which blurs the out-of-focus pixels more strongly,
  • naturally emphasizing the objects that are truly in focus.
  • We implement this effect by simply controlling the number of images we average!
  • Once an aperture value is set, images far from the center image are excluded.
  • The relation goes as:
  • fewer images averaged, less blur, mimicking a smaller aperture
  • more images averaged, more blur, mimicking a larger aperture
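The selection rule above can be sketched as follows. This is a hedged sketch: the `adjust_aperture` name and the use of Euclidean distance on the camera grid are assumptions, not necessarily the project's exact choices.

```python
import numpy as np

def adjust_aperture(images, grid_coords, radius, center=(8, 8)):
    """Average only the views within `radius` of the central camera.

    Smaller radius -> fewer images averaged -> deeper depth of field
    (mimics a smaller aperture); a larger radius does the opposite.
    """
    selected = [img for img, (u, v) in zip(images, grid_coords)
                if np.hypot(u - center[0], v - center[1]) <= radius]
    return np.mean(selected, axis=0)
```

At `radius = 0` only the center view survives (a pinhole-like image); sweeping the radius from 0 to 10 produces the aperture series shown below.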

Treasure Aperture Adjustment

Aperture size: [0, 10]

Project 6.2: Gradient Domain Fusion

  • Blending is always a fun topic to explore in computational photography.
  • How do we blend an object from one image into another?
  • There are plenty of naive ways, but they typically leave obvious seams, which are jarring to human eyes.
  • One interesting fact is that human eyes are more sensitive to the gradients in an image than to its overall intensity.
  • This project leverages this fact to try out Poisson blending as a gradient-domain fusion technique.

Part 1: Toy Example

  • This part constructs the necessary matrix operations and a least-squares solver to reconstruct an image from its gradients.
  • We set up constraints that minimize the gradient differences in x and y for every pixel, plus one intensity constraint to pin down the overall brightness.
  • Results are shown below. The left is the original and the right is the gradient-domain reconstruction.
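The toy reconstruction can be sketched as a sparse least-squares problem. This is a minimal sketch under my own assumptions: the `reconstruct` name, pinning the top-left pixel as the single intensity constraint, and `scipy.sparse.linalg.lsqr` as the solver.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def reconstruct(s):
    """Rebuild image s from its x/y gradients plus one intensity pin."""
    h, w = s.shape
    idx = np.arange(h * w).reshape(h, w)       # pixel -> unknown id
    rows, cols, vals, b = [], [], [], []
    eq = 0
    # x-gradient constraints: v[y, x+1] - v[y, x] = s[y, x+1] - s[y, x]
    for y in range(h):
        for x in range(w - 1):
            rows += [eq, eq]; cols += [idx[y, x + 1], idx[y, x]]
            vals += [1.0, -1.0]; b.append(s[y, x + 1] - s[y, x]); eq += 1
    # y-gradient constraints: v[y+1, x] - v[y, x] = s[y+1, x] - s[y, x]
    for y in range(h - 1):
        for x in range(w):
            rows += [eq, eq]; cols += [idx[y + 1, x], idx[y, x]]
            vals += [1.0, -1.0]; b.append(s[y + 1, x] - s[y, x]); eq += 1
    # one intensity constraint: pin the top-left pixel so the solution
    # is unique (gradients alone determine the image only up to a shift)
    rows.append(eq); cols.append(idx[0, 0]); vals.append(1.0)
    b.append(s[0, 0]); eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, h * w))
    return lsqr(A, np.array(b))[0].reshape(h, w)
```

Because the constraints come from a real image, the system is consistent and the least-squares solution recovers the original exactly (up to solver tolerance).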

Original Toy Image

Reconstructed Toy Image

Part 2: Poisson Blending

  • We aim to blend objects seamlessly into another image by focusing on the gradients inside the masked region.
  • The process starts by cropping the source image and defining a mask region in the target image.
  • I manually draw a mask for each object source image.
  • The blending involves solving constraints that minimize two terms:
    • For each pixel i in the source region, and each of its neighbors j also inside the region,
    • minimize the squared difference between the solution gradient (v_i - v_j) and the source gradient (s_i - s_j).
    • For each pixel i in the source region, and each of its neighbors j outside the region,
    • minimize the squared difference between (v_i - t_j) and the source gradient (s_i - s_j), where t_j is the known target intensity.
  • The system is large, so we use sparse matrices to keep the solve time reasonable.
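The two sets of constraints above assemble into one sparse linear system. Below is a hedged single-channel sketch, assuming the mask does not touch the image border and using `scipy.sparse.linalg.spsolve`; the `poisson_blend` name and data layout are mine, not the project's exact code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def poisson_blend(source, target, mask):
    """Poisson blending of one channel.

    source, target: (H, W) float arrays (source pre-aligned to target)
    mask:           (H, W) bool array, True inside the blend region;
                    assumed not to touch the image border
    """
    pix = np.argwhere(mask)                # coordinates of the unknowns
    index = -np.ones(mask.shape, dtype=int)
    index[mask] = np.arange(len(pix))      # pixel -> unknown id
    A = sp.lil_matrix((len(pix), len(pix)))
    b = np.zeros(len(pix))
    for k, (y, x) in enumerate(pix):
        A[k, k] = 4.0                      # one row per pixel, 4 neighbors
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] += source[y, x] - source[ny, nx]   # source gradient s_i - s_j
            if mask[ny, nx]:
                A[k, index[ny, nx]] = -1.0          # neighbor is also unknown
            else:
                b[k] += target[ny, nx]              # known boundary value t_j
    out = target.copy()
    out[mask] = spsolve(A.tocsr(), b)
    return out
```

For RGB images the same solve is simply repeated per channel with the same mask.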

Penguin Chick on Snow Hiking v1

Source Object Image

Background Image

Poisson Blended

Penguin Chick on Snow Hiking v2

Source Object Image

Background Image

Poisson Blended

Penguin on Snow Hiking v2

Source Object Image

Background Image

Poisson Blended

Cat on Snow Hiking v1

  • Though not very obvious, you can actually spot a blurry "aura" surrounding our kitten.
  • This is a typical issue with Poisson blending:
  • the solution inside the source region is forced to match the background along the boundary, so boundary colors bleed inward.
  • This defect becomes very obvious and intolerable in the next task.

Source Object Image

Background Image

Poisson Blended

FAILURE CASE: Cal logo on a brick wall.

  • The blurry part has become too obvious to ignore.
  • We need a method that respects the background more and picks out the high-frequency
  • gradient details so the blending preserves the wall's texture!

Source Object Image

Background Image

Poisson Blended: FAILURE CASE

Bells & Whistles: Mixed Gradients

  • For mixed gradients, we modify the Poisson blending process to keep only the stronger gradients.
  • Instead of always using the source gradients, we take whichever of the source and target gradients has the larger absolute value at each pixel pair.
  • This way we can expect the textures of the background image to be preserved, and the source object can occasionally become somewhat transparent.
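The selection rule above only changes the guidance gradient on the right-hand side of the Poisson system. A hedged sketch of that rule (the `guide_gradient` name and (y, x)-tuple indexing are mine):

```python
import numpy as np

def guide_gradient(source, target, p, q):
    """Guidance gradient between pixel p and neighbor q for mixed blending.

    Keeps whichever gradient (source or target) has the larger magnitude,
    so strong background texture such as brick survives under the logo.
    p, q are (y, x) index tuples into both arrays.
    """
    ds = source[p] - source[q]   # source gradient
    dt = target[p] - target[q]   # target gradient
    return ds if abs(ds) > abs(dt) else dt
```

Plugging this in wherever plain Poisson blending uses the source gradient gives the fixed Cal-logo result below.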

Cal Logo on a Brick Wall (Fixed!)

Source Object Image

Background Image

Poisson Blended

Whale Over Berkeley Campus

Source Object Image

Background Image

Poisson Blended

Jellyfish In the Mountains

Source Object Image

Background Image

Poisson Blended