Imaging systems with a miniaturized device footprint, real-time processing speed, and high-resolution three-dimensional (3D) visualization are critical to broad microscopy applications such as biomedical endoscopy, as well as photography applications such as automated machine vision. However, single-shot 3D imaging that combines a compact device footprint, high imaging quality, and fast processing remains challenging in computational imaging. Due to mechanical constraints, most existing imaging systems rely on bulky lenses and mechanical refocusing to perform 3D imaging. Lensless imagers replace bulky optical lenses with a thin optical element that modulates the light, enabling compact 3D imaging from a single exposure. Existing lensless imagers, however, typically require extensive calibration of the point spread function and heavy computational resources to reconstruct the object. Mask-based integrated microscopes (miniscopes) are analogous to such macroscopic cameras: they form images through a thin optical mask placed near the sensor, enabling snapshot 3D imaging and a scalable field of view (FOV) without increasing device thickness. Integrated microscopy relies on computational algorithms to solve the inverse problem of object reconstruction, yet efficient reconstruction algorithms for large-scale data have been lacking. Furthermore, the entire FOV is typically reconstructed as a whole, which demands substantial computational resources and limits the scalability of the FOV.
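To make the inverse problem concrete, the sketch below simulates a mask-based measurement as a sum of per-depth convolutions of the object with depth-dependent point spread functions, and reconstructs the volume by plain gradient descent. This is a minimal illustration under a shift-invariant convolutional forward model, a common simplification rather than the exact model of the devices described here; the step size, iteration count, and non-negativity constraint are placeholders.

```python
# Minimal sketch of a mask-based lensless forward model and a naive
# least-squares reconstruction (assumed shift-invariant, per-depth-plane
# convolutional model; object and PSF stacks are user-supplied arrays).
import numpy as np
from numpy.fft import fft2, ifft2

def forward(obj_stack, psf_stack):
    """Sensor image = sum over depth planes of (object slice (*) PSF slice)."""
    meas = np.zeros(obj_stack.shape[1:])
    for obj, psf in zip(obj_stack, psf_stack):
        meas += np.real(ifft2(fft2(obj) * fft2(np.fft.ifftshift(psf))))
    return meas

def adjoint(meas, psf_stack):
    """Back-project the measurement with flipped PSFs (transpose of forward)."""
    return np.stack([
        np.real(ifft2(fft2(meas) * np.conj(fft2(np.fft.ifftshift(psf)))))
        for psf in psf_stack
    ])

def reconstruct(meas, psf_stack, n_iter=100, step=1e-3):
    """Plain gradient descent on ||forward(x) - meas||^2."""
    x = np.zeros((len(psf_stack),) + meas.shape)
    for _ in range(n_iter):
        resid = forward(x, psf_stack) - meas
        x -= step * adjoint(resid, psf_stack)
        x = np.clip(x, 0, None)  # enforce non-negativity (e.g., fluorescence)
    return x
```

Even this toy example makes the computational burden apparent: every iteration touches the full FOV at every depth plane, which motivates the efficient, localized reconstruction strategies developed in this work.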
In this study, we demonstrated compact, integrated, single-shot, mask-based 3D imaging devices, innovating in both the optical hardware and the reconstruction algorithms. We developed three generations of devices for imaging microscopic, mesoscopic, and macroscopic objects.
For mesoscopic-scale imaging, we propose GEOMScope, a compact, lensless, single-shot 3D imager for computational imaging applications built around a custom-designed and fabricated microlens array. We developed a hybrid reconstruction model that resolves objects through an innovative algorithm combining geometrical-optics-based pixel back projection with background suppression. We verified the effectiveness of GEOMScope on resolution targets, fluorescent particles, and volumetric objects. Compared to other widefield lensless imaging devices, the required computational resources are reduced by orders of magnitude, and system calibration and initialization are no longer required. This enables the imaging and recovery of large-volume 3D objects at high resolution with real-time processing speed. Such low computational complexity is attributed to the joint design of the imaging optics and the reconstruction algorithm, and to the joint application of geometrical optics and machine learning in the 3D reconstruction. More broadly, the strong performance of GEOMScope in imaging resolution, volume, and reconstruction speed suggests that geometrical optics can greatly benefit and play an important role in computational imaging. The pixel back-projection principle serves as our baseline design, upon which we developed our second and third generations of imagers with improved reconstruction performance for photography and microscopy applications, respectively.
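The sketch below illustrates the geometrical-optics pixel back-projection idea underlying GEOMScope's baseline: each sensor pixel is traced back through its nearest microlens to a chosen depth plane and its intensity is accumulated there. The geometry, array layout, and parameter names here are illustrative assumptions, not the fabricated design.

```python
# Hedged sketch of pixel back-projection through a microlens array.
# lens_centers_mm: (L, 2) array of microlens center (x, y) positions in mm;
# all distances, pitches, and grid sizes are placeholder assumptions.
import numpy as np

def back_project(sensor_img, lens_centers_mm, pixel_pitch_mm,
                 lens_to_sensor_mm, depth_mm, out_shape, out_pitch_mm):
    """Trace each sensor pixel back through its nearest microlens to a
    chosen object depth plane and accumulate its intensity there."""
    recon = np.zeros(out_shape)
    H, W = sensor_img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # sensor-plane coordinates in mm, centered on the optical axis
    sx = (xs - W / 2) * pixel_pitch_mm
    sy = (ys - H / 2) * pixel_pitch_mm
    # assign every pixel to its nearest microlens center (brute force)
    d2 = (sx[..., None] - lens_centers_mm[:, 0])**2 + \
         (sy[..., None] - lens_centers_mm[:, 1])**2
    lens_idx = np.argmin(d2, axis=-1)
    cx = lens_centers_mm[lens_idx, 0]
    cy = lens_centers_mm[lens_idx, 1]
    # similar triangles: extend the pixel-to-lens chief ray to the depth plane
    mag = depth_mm / lens_to_sensor_mm
    ox = cx - (sx - cx) * mag
    oy = cy - (sy - cy) * mag
    # map object-plane coordinates onto the reconstruction grid
    oi = np.round(oy / out_pitch_mm + out_shape[0] / 2).astype(int)
    oj = np.round(ox / out_pitch_mm + out_shape[1] / 2).astype(int)
    valid = (oi >= 0) & (oi < out_shape[0]) & (oj >= 0) & (oj < out_shape[1])
    np.add.at(recon, (oi[valid], oj[valid]), sensor_img[valid])
    return recon
```

Summing the back-projections from many microlenses reinforces in-focus features at the chosen depth, while out-of-focus contributions spread out and can then be attenuated by a background-suppression step.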
To address challenges in lensless macroscopic imaging, we demonstrate a compact and learnable lensless 3D camera for real-time photorealistic imaging. We custom designed and fabricated an optical phase mask with optimized spatial-frequency support and axial resolving ability. We developed a simple and robust physics-aware deep learning model with an adversarial learning module for real-time, depth-resolved, photorealistic reconstruction. Our lensless imager does not require calibrating the point spread function, and it can resolve depth and "see through" opaque obstacles to image occluded features, enabling broad applications in computational imaging for machine vision.
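A minimal sketch of a physics-aware reconstruction network trained with an adversarial module is given below. It assumes a user-supplied differentiable forward operator (forward_op) that maps a reconstructed depth stack back to a simulated measurement; the layer sizes, loss weights, and PatchGAN-style discriminator are placeholders rather than the exact architecture of our camera.

```python
# Hedged sketch: reconstruction network + adversarial critic, trained with
# pixel fidelity, physics consistency, and adversarial realism terms.
import torch
import torch.nn as nn

class Reconstructor(nn.Module):
    """Maps a raw 1-channel lensless measurement to a depth-resolved RGB stack."""
    def __init__(self, depths=8):
        super().__init__()
        self.depths = depths
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * depths, 3, padding=1),
        )

    def forward(self, meas):
        out = self.net(meas)
        b, _, h, w = out.shape
        return out.view(b, self.depths, 3, h, w)

class Discriminator(nn.Module):
    """PatchGAN-style critic applied to individual depth slices."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )

    def forward(self, img):
        return self.net(img)

def training_step(G, D, opt_G, opt_D, meas, target, forward_op, adv_w=0.01):
    """One step: pixel fidelity + physics consistency + adversarial realism."""
    bce = nn.BCEWithLogitsLoss()
    fake = G(meas)
    mid = fake.shape[1] // 2  # critic looks at the middle depth slice

    # discriminator update
    d_real = D(target[:, mid])
    d_fake = D(fake[:, mid].detach())
    loss_D = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # generator update
    d_fake = D(fake[:, mid])
    loss_G = (nn.functional.l1_loss(fake, target)               # fidelity
              + nn.functional.mse_loss(forward_op(fake), meas)  # physics
              + adv_w * bce(d_fake, torch.ones_like(d_fake)))   # realism
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_G.item(), loss_D.item()
```

The physics-consistency term is what makes the model "physics-aware" in this sketch: the network is penalized whenever its reconstruction, pushed through the forward operator, fails to explain the raw measurement.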
To address the challenge of efficient reconstruction of large-scale data in lensless microscopic imaging, we developed DeepInMiniscope, a lensless microscope with a custom-designed optical mask and a multi-stage, physics-informed deep learning model. This not only enables the reconstruction of localized FOVs, but also significantly reduces computational resource demands and facilitates real-time reconstruction. Our deep learning algorithm can reconstruct object volumes over 4×6×0.6 mm³, achieving lateral and axial resolutions of ~10 µm and ~50 µm, respectively. We demonstrated significant improvements in both reconstruction quality and speed compared to traditional methods, across various fluorescent samples with dense structures. Notably, we achieved high-quality reconstruction of the 3D motion of Hydra and of neuronal activity in awake mouse cortex at near-cellular resolution. DeepInMiniscope holds great promise for scalable, large-FOV, real-time 3D imaging applications with a compact device footprint.
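A sketch of the localized-FOV idea is shown below: the large measurement is tiled into overlapping patches, each patch is reconstructed independently by a per-patch model (here a placeholder patch_model), and the outputs are blended with a simple linear window. Patch size and overlap are illustrative assumptions; the point is that memory and compute scale with the patch rather than with the full FOV.

```python
# Hedged sketch of patch-wise ("localized FOV") reconstruction with
# overlap blending to keep memory use bounded on large measurements.
import numpy as np

def reconstruct_tiled(meas, patch_model, patch=256, overlap=32):
    """Run a per-patch reconstruction model over a large 2D measurement
    and blend the overlapping outputs with a linear taper window."""
    H, W = meas.shape
    step = patch - overlap
    out = np.zeros_like(meas, dtype=float)
    weight = np.zeros_like(meas, dtype=float)
    # 1D linear taper expanded to a 2D blending window
    taper = np.minimum(np.linspace(0, 1, patch), np.linspace(1, 0, patch))
    win = np.outer(taper, taper) + 1e-6
    for i in range(0, H - overlap, step):
        for j in range(0, W - overlap, step):
            i0, j0 = min(i, H - patch), min(j, W - patch)
            tile = meas[i0:i0 + patch, j0:j0 + patch]
            rec = patch_model(tile)  # localized reconstruction of one patch
            out[i0:i0 + patch, j0:j0 + patch] += rec * win
            weight[i0:i0 + patch, j0:j0 + patch] += win
    return out / weight
```

Because each tile can be processed independently, patches can also be batched or distributed, which is what allows the FOV to scale without a corresponding growth in per-reconstruction memory.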
In future work, we will explore optical inverse design and end-to-end optimization of the hardware and algorithms to further improve the performance of our imagers. We aim to apply this work to recording and visualizing neuronal activity in freely behaving animals, as well as to guiding the behavior of intelligent machines.