Dual-Camera Joint Deblurring-Denoising


Shayan Shekarforoush1,4       Amanpreet Walia2      
Marcus Brubaker2,3,4       Alex Levinshtein2      

1University of Toronto     2Samsung AI Center Toronto     3York University     4Vector Institute


Paper · Code (Coming Soon!)






Abstract

Recent image enhancement methods have shown the advantages of using a pair of long and short-exposure images for low-light photography. These image modalities offer complementary strengths and weaknesses. The former yields an image that is clean but blurry due to camera or object motion, whereas the latter is sharp but noisy due to low photon count. Motivated by the fact that modern smartphones come equipped with multiple rear-facing camera sensors, we propose a novel dual-camera method for obtaining a high-quality image. Our method uses a synchronized burst of short exposure images captured by one camera and a long exposure image simultaneously captured by another. Having a synchronized short exposure burst alongside the long exposure image enables us to (i) obtain better denoising by using a burst instead of a single image, (ii) recover motion from the burst and use it for motion-aware deblurring of the long exposure image, and (iii) fuse the two results to further enhance quality. Our method is able to achieve state-of-the-art results on synthetic dual-camera images from the GoPro dataset with five times fewer training parameters compared to the next best method. We also show that our method qualitatively outperforms competing approaches on real synchronized dual-camera captures.
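To make the three-stage structure described above concrete, the following is a minimal, purely illustrative Python sketch. Every stage here is a trivial stand-in (temporal averaging, a no-op, and a fixed blend); it is not our learned architecture, and the function and variable names are hypothetical.

```python
import numpy as np

def dual_camera_enhance(short_burst: np.ndarray, long_exposure: np.ndarray) -> np.ndarray:
    """Illustrative three-stage outline; each stage is a toy stand-in.

    short_burst:   (T, H, W, 3) synchronized noisy short-exposure frames
    long_exposure: (H, W, 3) clean but blurry long-exposure image
    """
    # (i) Burst denoising: averaging an (assumed pre-aligned) burst reduces
    #     noise roughly by sqrt(T); a learned burst denoiser replaces this.
    denoised = short_burst.mean(axis=0)

    # (ii) Motion-aware deblurring: in the real pipeline, motion estimated
    #      from the burst guides non-blind deblurring of the long-exposure
    #      image; here the image is simply passed through unchanged.
    deblurred = long_exposure

    # (iii) Fusion: combine the sharp-but-noisier and clean-but-blurrier
    #       estimates (here a fixed 50/50 blend).
    return 0.5 * denoised + 0.5 * deblurred
```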


Joint Deblurring-Denoising

We first visualize qualitative, interactive examples comparing our method with previous work on the joint deblurring-denoising task. On the left-hand side, you can select among the blurry long exposure image, the middle frame of the noisy short exposure burst, and the ground truth. On the right-hand side, you can choose a method of interest and visualize its output. Some regions are also outlined with a red box. PSNR values computed over the entire image are reported (higher is better). Note that D2HNet, which has 5x more parameters, is first pre-trained on a larger dataset and then fine-tuned on our synthetic data.
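For reference, the PSNR reported here is computed over the full image; a minimal sketch following the standard definition (assuming 8-bit RGB arrays, not code from our implementation) is:

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB over the entire image (higher is better)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```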

Synthetic Data. On the synthetic test data, our method achieves qualitative results competitive with D2HNet and outperforms LSD2 and LSF. You can switch between examples with the provided selector; differences are most apparent in the highlighted regions.

Real Data. We also show qualitative examples on real data captured by our dual-camera imaging system. On the left-hand side, you can select between the blurry long exposure image and the middle frame of the noisy short exposure burst. As we do not have ground truth for the real data, we report the Natural Image Quality Evaluator (NIQE), a no-reference metric (lower is better). Some regions are highlighted to better show qualitative differences. On real data, the D2HNet output is noisier than that of the other methods, resulting in a poor NIQE score.
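NIQE fits a multivariate Gaussian model to natural-scene statistics of clean images and scores a test image by its distance to that model, so no reference image is needed. One possible way to compute it (an assumption about tooling, not necessarily what produced the numbers on this page) is via the pyiqa / IQA-PyTorch package:

```python
# Assumes the pyiqa (IQA-PyTorch) package is installed: pip install pyiqa
import torch
import pyiqa

niqe = pyiqa.create_metric('niqe')   # no-reference metric, lower is better
img = torch.rand(1, 3, 256, 256)     # stand-in for an RGB image with values in [0, 1]
print(float(niqe(img)))
```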


Single-Task Methods vs. Ours

Here, we compare our method, which addresses the joint task, with baselines that tackle the single tasks of deblurring or denoising separately. Motion-ETR and Flow-Guided Deblurring address the deblurring task in blind and non-blind fashion, respectively. Burst Denoiser uses the burst of short exposure images to recover a clean version of the reference frame.

Synthetic Data. Both qualitatively and quantitatively (in terms of PSNR), our method significantly outperforms these baselines. Note that the Burst Denoiser output has slightly incorrect colors compared to the deblurring outputs. Our method alleviates this by extracting color information from the long exposure image.

Real Data. Due to the lack of ground truth, we report the NIQE metric. Our Flow-Guided Deblurring outperforms Motion-ETR both qualitatively and quantitatively, illustrating that the optical flow computed from the burst can be effectively used to remove blur from the long exposure image.
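To give an intuition for flow-guided (non-blind) deblurring, the sketch below assumes a spatially uniform, linear motion blur: the Farneback flow between the first and last burst frames is averaged into a single motion vector, rasterized into a blur kernel, and the long exposure image is deconvolved with a simple Wiener filter. This is a heavily simplified illustration using standard OpenCV/NumPy routines, not our actual model, which handles spatially varying motion.

```python
import cv2
import numpy as np

def flow_guided_wiener_deblur(long_exp_gray, burst_gray, snr=0.01):
    """Toy non-blind deblurring with a single linear blur kernel from burst flow.

    long_exp_gray: (H, W) float32 blurry long-exposure image in [0, 1]
    burst_gray:    list of (H, W) float32 short-exposure frames in [0, 1]
    """
    # Average flow between the first and last burst frames approximates the
    # motion accumulated over the long exposure (assumed global here).
    flow = cv2.calcOpticalFlowFarneback(
        (burst_gray[0] * 255).astype(np.uint8),
        (burst_gray[-1] * 255).astype(np.uint8),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()

    # Rasterize the motion vector into a linear blur kernel (PSF).
    length = max(int(round(np.hypot(dx, dy))), 1)
    ksize = 2 * length + 1
    kernel = np.zeros((ksize, ksize), np.float32)
    ts = np.linspace(0.0, 1.0, 2 * length + 1)
    xs = np.clip(np.round(length + ts * dx).astype(int), 0, ksize - 1)
    ys = np.clip(np.round(length + ts * dy).astype(int), 0, ksize - 1)
    kernel[ys, xs] = 1.0
    kernel /= kernel.sum()

    # Non-blind Wiener deconvolution in the Fourier domain.
    H, W = long_exp_gray.shape
    psf = np.zeros((H, W), np.float32)
    psf[:ksize, :ksize] = kernel
    psf = np.roll(psf, (-length, -length), axis=(0, 1))  # center PSF at (0, 0)
    K = np.fft.fft2(psf)
    B = np.fft.fft2(long_exp_gray)
    wiener = np.conj(K) / (np.abs(K) ** 2 + snr)
    return np.clip(np.real(np.fft.ifft2(wiener * B)), 0.0, 1.0)
```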

Acknowledgements

This work was done during an internship at the Samsung AI Center Toronto, funded by a Mitacs Accelerate internship. We would like to thank Konstantinos Derpanis for his support throughout the entire project, and Abhijith Punnappurath and Michael Brown for discussions on data generation.