Recently, plenoptic cameras have gained much attention, as they capture the 4D light field of a scene, which is useful for numerous computer vision and graphics applications. Similar to traditional digital cameras, plenoptic cameras use a color filter array placed over the image sensor so that each pixel samples only one of three primary color values. A color demosaicing algorithm is then used to generate a full-color plenoptic image, which often introduces color aliasing artifacts. In this paper, we propose a dictionary learning based demosaicing algorithm that recovers a full-color light field from a captured plenoptic image using sparse optimization. Traditional methods consider only spatial correlations between neighboring pixels in a captured plenoptic image. Our method takes advantage of both spatial and angular correlations inherent in naturally occurring light fields. Experiments on a wide variety of scenes demonstrate that our method outperforms traditional color demosaicing methods.
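The sparse-optimization recovery described above can be sketched as sparse coding against a dictionary D: assuming a light-field block x is approximately D·a with a sparse, solve for a using only the Bayer-subsampled rows S·D, then estimate x̂ = D·a. The dictionary below is random and orthogonal matching pursuit is one standard sparse solver; the dimensions, solver, and D are illustrative stand-ins, not the paper's learned dictionary or exact optimization.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: find a k-sparse a with A @ a close to y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    a = np.zeros(A.shape[1])
    a[support] = coef
    return a

rng = np.random.default_rng(0)
n, m, k = 48, 96, 3                     # signal dim, dictionary atoms, sparsity (illustrative)
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms (stand-in for a learned dictionary)

a_true = np.zeros(m)
a_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
x = D @ a_true                          # a signal that is exactly k-sparse in D

keep = rng.choice(n, size=n // 3, replace=False)  # rays surviving the color filter
S = np.eye(n)[keep]                     # binary subsampling matrix

a_hat = omp(S @ D, S @ x, k)            # sparse code from subsampled measurements only
x_hat = D @ a_hat                       # full-color estimate of the block
```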
A database of light field images used in this project is available here.
"Dictionary Learning based Color Demosaicing for Plenoptic Cameras"
X. Huang, O. Cossairt
IEEE Workshop on Computational Cameras and Displays (CCD), June 2014.
The Bayer Filter:
A Bayer color filter is used to multiplex color information onto a 2D sensor. The filter consists of a repeating two-by-two grid of Blue-Green-Green-Red patterns. Bayer filters are used in both conventional 2D cameras and plenoptic cameras.
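The two-by-two tiling can be sketched as a binary mask that records which color channel each sensor pixel samples; the BGGR layout below matches the pattern described above, and the helper name is our own:

```python
import numpy as np

def bayer_mask(h, w, pattern="BGGR"):
    """Return an (h, w, 3) binary mask: mask[i, j, c] == 1 iff sensor
    pixel (i, j) samples color channel c (0 = R, 1 = G, 2 = B)."""
    channel = {"R": 0, "G": 1, "B": 2}
    mask = np.zeros((h, w, 3), dtype=np.uint8)
    # The pattern string gives the 2x2 tile in row-major order.
    for i in range(2):
        for j in range(2):
            mask[i::2, j::2, channel[pattern[2 * i + j]]] = 1
    return mask

mask = bayer_mask(4, 4)
# Each pixel samples exactly one of the three channels.
assert (mask.sum(axis=2) == 1).all()
```

In a 2x2 tile, green is sampled twice and red and blue once each, so green occupies half the pixels.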
The Bayer filter used in a plenoptic camera:
Only a subset of the light rays is measured in each color channel. The area behind a single lenslet is zoomed in to show the effect of the Bayer filter on the captured light field. The Bayer filter effectively applies a subsampling matrix S to the full-color light field X, producing the sensed light field Y = SX. The sensed light field contains gaps: some of the rays in each color channel are not measured.
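The measurement model Y = SX can be sketched by vectorizing a full-color signal and building S as an explicit binary selection matrix with one row per measured ray; the 4x4 size and BGGR layout here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4
X = rng.random((H, W, 3))           # full-color signal (stand-in for the light field)
x = X.reshape(-1)                   # vectorize: H * W * 3 = 48 entries

# Index of the one channel measured at each pixel under a BGGR Bayer layout.
keep = []
for i in range(H):
    for j in range(W):
        c = 2 if (i % 2 == 0 and j % 2 == 0) else \
            0 if (i % 2 == 1 and j % 2 == 1) else 1   # B / R / G
        keep.append((i * W + j) * 3 + c)

# S is a binary subsampling matrix: one row per measured ray.
S = np.zeros((len(keep), x.size))
S[np.arange(len(keep)), keep] = 1

y = S @ x                           # sensed data y = S x: only 16 of 48 entries survive
```

The unmeasured two thirds of the entries are the "gaps" that demosaicing must fill in.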
Block sampling of a canonical plenoptic image and lexicographic reordering into a vector. For illustration purposes, here we show sampling from a small block with B_u × B_v = 4 × 4 spatial samples and B_p × B_q = 3 × 3 angular samples. With color included, the entire signal contains 3 × 3 × 4 × 4 × 3 samples and can be represented as a 432-dimensional vector x.
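The block extraction and lexicographic reordering above can be sketched as a slice-and-flatten on a 5D light-field array; the array sizes and the helper name are illustrative:

```python
import numpy as np

# A full-color light field L[p, q, u, v, c]: angular (p, q), spatial (u, v), color c.
rng = np.random.default_rng(0)
L = rng.random((3, 3, 64, 64, 3))        # sizes here are illustrative

def block_vector(L, u0, v0, Bu=4, Bv=4):
    """Extract the Bp x Bq x Bu x Bv x 3 block anchored at spatial (u0, v0)
    and reorder it lexicographically into a 1-D vector."""
    block = L[:, :, u0:u0 + Bu, v0:v0 + Bv, :]
    return block.reshape(-1)             # C-order flattening = lexicographic order

x = block_vector(L, 10, 20)
assert x.shape == (3 * 3 * 4 * 4 * 3,)   # 432 samples, as in the figure
```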
Experimental results for light field demosaicing:
Comparison of demosaicing performance between our dictionary learning based algorithm (using a block size of 5 × 5 × 3 × 3 × 3) and the gradient-corrected interpolation proposed by Malvar et al. in 2004. The images shown are a single view from the reconstructed light field (i.e. the set of (u,v) spatial samples for a fixed (p,q) = (3,3) angular sample). The gradient-corrected interpolation produces periodic artifacts caused by the Bayer filter. By taking into account spatial, angular, and color correlations, our method reduces artifacts significantly, increasing PSNR by more than 5 dB.
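The PSNR figure of merit used in the comparison above is the standard peak signal-to-noise ratio; a minimal implementation, assuming images normalized to a peak value of 1:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and an estimate."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, i.e. about 20 dB.
ref = np.zeros((4, 4))
est = ref + 0.1
print(psnr(ref, est))   # ~ 20 dB
```

Under this measure, a gain of more than 5 dB corresponds to reducing the mean squared error by a factor of more than three.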
Light Field Demosaicing Database:
Our dataset of 30 light fields captured using a Lytro camera. 20 samples are used for training (i.e. learning a dictionary) and 10 samples for testing (i.e. demosaicing captured light fields).
Xiang Huang and Oliver Cossairt acknowledge support through a Samsung GRO grant.