Refocusing images captured from a stereoscopic camera

Chia Lun Ku, Yu-Shuen Wang, Chia Sheng Chang, Hung Kuo Chu, Chih Yuan Yao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Traditional photography projects a 3D scene onto a 2D image without recording the depth of each local region, which prevents users from changing the focus plane of a photograph once it has been taken. To tackle this problem, Ng et al. [2005] presented light-field cameras that record all focus planes of a scene and synthesize refocused images using ray tracing. Nevertheless, the captured photographs are of low resolution because the image sensor is divided into subcells. Levin et al. [2007] embedded a coded aperture in the camera lens and recovered depth information from blur patterns in a single image. However, the coded aperture blocks around 50% of the light, so their system requires a longer exposure time when taking pictures. Liang et al. [2008] also embedded a coded aperture in the camera lens but captured the scene with multiple exposures. Their approach produces high-quality depth maps yet is not suitable for hand-held devices. Recently, the Microsoft Kinect has estimated depth directly using infrared light, but it works only in indoor environments.
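As a point of reference, the sketch below illustrates the general idea of depth-guided refocusing that this line of work targets; it is not the authors' method. It assumes an RGB image and a per-pixel depth map (for instance, one derived from the disparity of a stereoscopic pair) and blurs each pixel in proportion to its distance from a chosen focus plane; the function name, parameters, and layered Gaussian approximation are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focus_depth, max_sigma=8.0, n_layers=8):
    """Synthetically refocus `image` (H x W x 3, float) using `depth` (H x W).

    `focus_depth` is in the same units as `depth`; pixels far from the focus
    plane receive stronger Gaussian blur, mimicking a circle of confusion.
    """
    depth_range = max(float(depth.max() - depth.min()), 1e-6)
    sigma = max_sigma * np.abs(depth - focus_depth) / depth_range

    # Quantize blur strengths into a few layers so only a handful of Gaussian
    # filters are run instead of one per pixel.
    layer = np.clip((sigma / max_sigma * n_layers).astype(int), 0, n_layers - 1)
    result = np.empty_like(image)
    for i in range(n_layers):
        mask = layer == i
        if not mask.any():
            continue
        s = i * max_sigma / n_layers  # blur amount for this layer; layer 0 stays sharp
        # Blur spatial dimensions only; keep color channels independent.
        blurred = image if s == 0 else gaussian_filter(image, sigma=(s, s, 0))
        result[mask] = blurred[mask]
    return result

# Example usage with synthetic data (real input would come from a stereo camera
# plus a disparity-to-depth conversion):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((240, 320, 3))
    dep = np.tile(np.linspace(1.0, 5.0, 320), (240, 1))  # depth ramp in meters
    refocused = refocus(img, dep, focus_depth=2.0)
```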

Original language: English
Title of host publication: ACM SIGGRAPH 2013 Posters, SIGGRAPH 2013
DOIs
State: Published - 2013
Event: ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2013 - Anaheim, CA, United States
Duration: 21 Jul 2013 - 25 Jul 2013

Publication series

Name: ACM SIGGRAPH 2013 Posters, SIGGRAPH 2013

Conference

Conference: ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2013
Country/Territory: United States
City: Anaheim, CA
Period: 21/07/13 - 25/07/13
