Recovering 4-dimensional (3D position and time) information from a single image!


Cover story: recovering the subtle motions of crafted hairs blown by wind from an air conditioner.

Project Description

Compressed sensing has traditionally been discussed separately in the spatial and temporal domains. Compressive holography allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high-speed video acquisition. In this work, we combine compressive holography and coded exposure, extending the discussion to 4D reconstruction in space and time from a single coded image. In our prototype, digital in-line holography is used to image macroscopic, fast-moving objects, and the pixel-wise temporal modulation is implemented with a digital micromirror device (DMD).

Our work exploits both spatial and temporal redundancy in natural scenes and generalizes to a 4D (3D position plus time) system model. We show that by combining digital holography and coded exposure techniques in a CS framework, it is feasible to reconstruct a 4D moving scene from a single 2D hologram. We demonstrate a temporal super resolution of 10×. Note that this increase in frame rate can be achieved for any sensor, regardless of its native frame rate, as long as the spatio-temporal modulator operates at a higher frame rate. We anticipate a depth resolution of approximately 1 cm with optical sectioning. As a test case, we focus on macroscopic scenes exhibiting fast motion of small objects (vibrating bars, small particles, etc.).


"Compressive holographic video"
Zihao Wang, Leonidas Spinoulas, Kuan He, Lei Tian, Oliver Cossairt, Aggelos K. Katsaggelos, and Huaijin Chen
Optics Express 25(1), 250-262 (2017)



Forward model

Our general system model is based on coherent light propagation (in-line holography) with a space-time modulator M before sensing.
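As a rough illustration of this forward model, the sketch below simulates one coded sensor image: each time frame's depth slices are numerically propagated to the sensor plane (angular spectrum method), interfered with a unit-amplitude plane-wave reference as in in-line holography, masked pixel-wise by that frame's modulation pattern M_t, and summed over the exposure. The wavelength, pixel pitch, and square-grid assumption are illustrative defaults, not the paper's actual parameters.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength=532e-9, dx=10e-6):
    """Propagate a 2D field over distance z via the angular spectrum method."""
    n = field.shape[0]                     # assumes a square n x n grid
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)    # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def coded_hologram(volumes, depths, masks):
    """One coded image: sum over frames t of M_t * |u_ref + sum_z P_z x_{t,z}|^2."""
    y = 0.0
    for x_t, M_t in zip(volumes, masks):
        obj = sum(angular_spectrum_propagate(x_t[k], z)
                  for k, z in enumerate(depths))
        y = y + M_t * np.abs(1.0 + obj) ** 2   # unit plane-wave reference
    return y
```

Reconstruction then inverts this measurement operator under sparsity priors; the sketch only captures the measurement side.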


Experimental setup

PG: Pulse Generator; DL: Diode Laser; CL: Collimating Lens; DMD: Digital Micromirror Device; OL: Objective Lens. A trigger signal generated from the DMD is sent to the camera for exposure. The minimum time between successive DMD mask patterns is P_T = 500 μs with a pattern exposure P_d = 250 μs. The camera is triggered every N patterns.
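The timing above fixes the frame-rate budget. A short arithmetic sketch, with N = 10 assumed here to match the 10× temporal super resolution reported above:

```python
# Illustrative timing arithmetic for the setup above.
P_T = 500e-6                      # time between successive DMD patterns (s)
N = 10                            # patterns per camera exposure (assumed)

modulator_rate = 1.0 / P_T        # 2000 patterns per second
camera_rate = modulator_rate / N  # camera triggered every N patterns: 200 fps
```

The recovered video rate is set by the modulator (2000 fps here), not by the camera's 200 fps trigger rate.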


Subsampling holograms

(a) Hologram of two static furs 7.1 cm and 10.1 cm away from the sensor. (b) DMD mask, 10%, uniformly random (background divided). (c) Subsampled hologram. (d) Comparison of reconstructions from the back-propagation (BP) method and the compressed sensing (CS) method using the full hologram. (e) The same comparison using the 10% subsampled hologram. (f) Normalized variance vs. distance along the z direction. Blue: BP; red: CS; solid curves: 100% hologram; dashed curves: 10% hologram. See Visualization 1 and Visualization 2.
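The 10% uniformly random mask in panel (b) can be generated with a simple Bernoulli draw per pixel; the hologram here is a random stand-in, not real data:

```python
import numpy as np

rng = np.random.default_rng(1)
hologram = rng.random((512, 512))         # stand-in for the captured hologram

keep = 0.10                               # fraction of pixels kept, as in (b)
mask = (rng.random(hologram.shape) < keep).astype(float)
subsampled = mask * hologram              # panel (c): subsampled hologram
```

Each reconstruction frame then sees only the pixels its mask kept, which is what makes the temporal multiplexing possible.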


Performance simulations

(a) Scenario: two Peranema of different sizes moving at different planes (separated by dz); a single image is simulated at the sensor plane. (b) Space-time performance; the horizontal axis indicates the spacing between the two objects. "100%": full resolution; "50%": 50% of the pixels are randomly sampled at each time frame, corresponding to a temporal increase of 2×; "20%": temporal increase of 5×; "10%": temporal increase of 10×. Solid lines: CS results; dashed lines: BP results. PSNR in dB. (c) Reconstruction results at depth d1, marked by the red circle in (b).
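The PSNR values in panel (b) follow the standard definition; a minimal sketch, assuming intensities normalized to a peak of 1:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```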


Reconstruction results from a single image (moving hairs).

10 frames of video at two depth planes are reconstructed from a single captured hologram. Due to space constraints, 3 video frames (3rd, 6th, 9th) at two depths (d1 = 73 mm, d2 = 111 mm) are presented (see Visualization 3 and Visualization 4).


Reconstruction results from a single image (dropping glitters).

(a) Glitter flakes; (b) captured image; (c) normalized image; (d) reconstruction map: 2 depths and 4 out of 10 frames are shown; (e) normalized variance plot for 2 particles at d1 and d2; (f) 4D particle tracking; (g) velocity plot over the time range 500 μs to 4000 μs.
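The normalized variance curves in panel (e) (and in the subsampling figure above) come from a standard autofocus-style sharpness metric evaluated on the reconstruction at each depth; a particle is localized in z where the metric peaks. A minimal sketch of the metric:

```python
import numpy as np

def normalized_variance(img):
    """Sharpness metric: intensity variance normalized by the squared mean."""
    m = img.mean()
    return img.var() / (m * m)
```

Sweeping the reconstruction distance z and plotting this metric produces curves like those shown; an in-focus (sharp) slice scores higher than a defocused (smooth) one.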


Visualization 1:

Visualization 2:

Visualization 3:

Visualization 4:


We have demonstrated two illustrative cases where 4D spatio-temporal data is recovered from a single 2D image. In the case of vibrating hairs, two depth layers and 10 video frames were recovered, a spatio-temporal compression of 20×. In the case of falling glitter flakes, a 4D volume was reconstructed to track the motion of small particles, a spatio-temporal compression of 120×. We call our technique “compressive holographic video” to emphasize the compressive sampling approach to acquiring spatio-temporal information. Our technique affords a significant reduction in space-time sampling, enabling 4D events to be acquired from only a single captured image.

In our prototype implementation we use a DMD as a coded aperture that is imaged directly onto the sensor. While non-trivial to implement, it is in principle possible to fabricate a CMOS sensor with pixel-wise coded exposure control. The prototype shows that it is possible to simultaneously exceed the native capture rate of the imager and recover multiple depths with reasonable depth resolution. In this paper we presented, as an example, a temporal increase factor of 10×; a factor of up to 24× is possible with the DMD we used. With a spatio-temporal modulator, the recoverable frame rate is set by the modulator rather than the sensor, so the coded-exposure technique enables high-speed imaging with a low-frame-rate camera, while digital in-line holography adds 3D tomographic imaging capability with a simple experimental setup.

Our compressive holographic video technique is also closely related to the phase retrieval problems commonly faced in holographic microscopy. Our space-time subsampling can be viewed as a sequence of coded apertures applied to a spatio-temporally varying optical field. In our general model the coded aperture is placed between the scene and the sensor; in the prototype, the DMD plane is imaged directly onto the sensor. While not explored in this paper, we believe that adding defocus between the coded aperture plane and the sensor may be beneficial for phase retrieval tasks, as in [33], and in the future we plan to explore the connections between our CS reconstruction approach and the methods introduced there. In this work we focused on a proof-of-principle demonstration of compressive holographic video; we hope to explore a diverse set of mask designs, as well as techniques for mask optimization.
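For intuition on the reconstruction side, CS recovery of this kind solves a regularized linear inverse problem. The sketch below is a generic proximal-gradient (ISTA) loop with an ℓ1 prior and an abstract linear operator A; the paper's actual solver and regularizer (and the holographic measurement operator) differ, so this is only a minimal stand-in to show the structure.

```python
import numpy as np

def ista(A, At, y, lam=0.1, step=1.0, iters=100):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient (ISTA).

    A / At: forward operator and its adjoint, as callables.
    """
    x = At(y)                                  # adjoint back-projection start
    for _ in range(iters):
        grad = At(A(x) - y)                    # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```

With A taken to be pixel-wise masking (its own adjoint), the loop recovers the observed pixels up to the threshold and leaves unobserved pixels at zero; the real system couples pixels through free-space propagation, which is what lets the prior fill in the missing samples.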


This work was supported in part by National Science Foundation (NSF) CAREER grant IIS-1453192, Office of Naval Research (ONR) grant 1(GG010550)//N00014-14-1-0741, and ONR grant #N00014-15-1-2735.

The authors are grateful for constructive discussions with Dr. Roarke Horstmeyer and Donghun Ryu, and for the suggestions from the reviewers.
