Project Description
Phase-retrieval-based wavefront sensors have been shown to reconstruct the complex field from an object with high spatial resolution. Although the reconstructed complex field encodes the depth information of the object, it is impractical as a depth sensor for macroscopic objects, since the unambiguous depth imaging range is limited by the optical wavelength. To improve the depth range and handle depth discontinuities, we propose a novel three-dimensional sensor that leverages wavelength diversity and wavefront sensing. Complex fields at two optical wavelengths are recorded, and a synthetic wavelength is generated by correlating those wavefronts. The proposed system achieves high lateral and depth resolutions. Our experimental prototype shows an unambiguous range more than 1,000× larger than the optical wavelengths, with a depth precision of up to 9 µm for smooth objects and up to 69 µm for rough objects. We experimentally demonstrate 3D reconstructions of transparent, translucent, and opaque objects with smooth and rough surfaces.
Publications
“WISHED: Wavefront imaging sensor with high resolution and depth ranging”
Yicheng Wu*, Fengqiang Li*, Florian Willomitzer, Ashok Veeraraghavan, Oliver Cossairt (* co-first author)
ICCP, 2020
[paper]
Images
Scope of the proposed method (WISHED)
We introduce a wavefront imaging sensor with high resolution and depth ranging (WISHED), which enables depth ranging on macroscopic and rough objects.
Proposed WISHED setup
A laser beam with a tunable wavelength is first scattered by the object and then collected by the lens. A wavefront sensor containing an SLM and a CMOS imager records the complex field.
Key idea I: Wavelength diversity
By using the tunable laser, we exploit wavelength diversity, which has long been used in optical interferometry. During imaging, we measure two wavefronts at two different wavelengths λ1 and λ2. When we combine these two wavefronts coherently, we obtain a beat whose synthetic wavelength is Λ = λ1λ2 / |λ1 − λ2|, as shown. We choose λ1 and λ2 to be very close so that the synthetic wavelength is large. For example, two wavelengths separated by 0.03 nm yield a synthetic wavelength of 24.4 mm. The unambiguous measurement range is thus increased by more than ten thousand times, so we are able to measure objects with large height variations.
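As a minimal sketch of the relation above, the snippet below evaluates the synthetic wavelength Λ = λ1λ2 / |λ1 − λ2|; the center wavelength of 854.985 nm is our assumption (not stated in the text), chosen so that a 0.03 nm separation reproduces the quoted 24.4 mm:

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic (beat) wavelength, in the same units as the inputs."""
    return lam1 * lam2 / abs(lam1 - lam2)

lam1 = 854.985e-9          # assumed center wavelength (m), not given in the text
lam2 = lam1 + 0.03e-9      # 0.03 nm away, as in the example above
Lam = synthetic_wavelength(lam1, lam2)
print(f"synthetic wavelength: {Lam * 1e3:.1f} mm")   # prints ~24.4 mm
```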
Key idea II: Wavefront reconstruction
To recover the wavefront for each wavelength, we adopt a computational-imaging-based wavefront sensor consisting of a spatial light modulator (SLM) and a CMOS sensor. Since the phase SLM operates in reflective mode, a beam splitter is inserted. Multiple uncorrelated random phase patterns are displayed on the SLM to modulate the incident wavefront, and the corresponding intensity image is captured by the sensor for each pattern. The Gerchberg-Saxton algorithm is then applied to iteratively update the field on the SLM and sensor planes. After several iterations, both the amplitude and the phase are recovered. Since the surface of the simulated bunny is rough, a speckle pattern and random phase are observed.
Depth estimation with WISHED
We recover the phase at both wavelengths. The depth can then be estimated from the pointwise multiplication of the field at λ1 with the conjugate field at λ2. The estimated depth map is shown on the right; the RMSE is only 85 µm, compared with the 24.4 mm synthetic wavelength.
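A minimal sketch of this depth-estimation step is below, assuming a reflective (double-pass) geometry so that the synthetic phase is 4πd/Λ, and assuming the height variation stays within one synthetic wavelength; the ramp target and the specific wavelengths are invented for the toy check:

```python
import numpy as np

def depth_from_wavefronts(u1, u2, lam1, lam2):
    """Depth map from two recovered fields (reflective geometry, double pass).
    Assumes height variation within one unambiguous synthetic range."""
    Lam = lam1 * lam2 / abs(lam1 - lam2)           # synthetic wavelength
    synth = u1 * np.conj(u2)                       # pointwise product of the fields
    phase = np.mod(np.angle(synth), 2 * np.pi)     # synthetic phase in [0, 2*pi)
    return Lam * phase / (4 * np.pi)               # double-pass factor of 2

# Toy check with a known height ramp (assumed wavelengths, 0.03 nm apart)
lam1, lam2 = 854.985e-9, 855.015e-9
d = np.linspace(0, 5e-3, 256)                      # 0-5 mm ramp
u1 = np.exp(1j * 4 * np.pi * d / lam1)             # ideal reflected fields
u2 = np.exp(1j * 4 * np.pi * d / lam2)
d_hat = depth_from_wavefronts(u1, u2, lam1, lam2)
print(f"max depth error: {np.max(np.abs(d_hat - d)):.2e} m")
```

The product u1·conj(u2) cancels the fine optical phase shared by the two fields and leaves only the slowly varying synthetic phase, which is what makes depth ranging on optically rough surfaces possible.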
Example 1 of experimental results
We place a glass plate between the adjacent letters, introducing an optical path difference (OPD) of 0.5 mm. With the prototype, we clearly separate these two depth steps.
Example 2 of experimental results
We scan a coin surface, and the prototype resolves fine details of the surface.
Acknowledgments
This work is supported by DARPA Reveal (HR0011-16-C-0028), NSF CAREER (IIS-1453192, IIS-1652633), NSF Expeditions (CCF-1730574), and ERC PATHS-UP (EEC-1648451). We also thank Prasanna Rangarajan and Muralidhar Madabhushi for assistance in capturing the depth map of the coin with the lock-in sensor.