
Title:

 Research on a multi-scale universal three-dimensional high-resolution light field reconstruction algorithm

Author:

 林炳志

Student ID:

 SX2202078

Confidentiality:

 Public

Language:

 Chinese

Discipline code:

 080700

Discipline:

 Engineering - Power Engineering and Engineering Thermophysics

Student type:

 Master's

Degree:

 Master of Engineering

Year of enrollment:

 2022

University:

 南京航空航天大学 (Nanjing University of Aeronautics and Astronautics)

College:

 College of Energy and Power Engineering

Major:

 Power Engineering and Engineering Thermophysics

Research area:

 Experimental fluid mechanics and image processing

Supervisor:

 张悦

Supervisor's affiliation:

 College of Energy and Power Engineering

Second supervisor:

 王德鹏

Completion date:

 2025-03-10

Defense date:

 2025-03-11

English title:

 Multi-scale universal three-dimensional high-resolution light field reconstruction algorithm research

Keywords:

 Light field imaging ; 3D reconstruction ; deep learning ; microscopic reconstruction ; mesoscopic reconstruction ; macroscopic reconstruction ; particle image velocimetry

English keywords:

 Light Field Imaging ; 3D Reconstruction ; Deep Learning ; Micro-Reconstruction ; Mesoscopic Reconstruction ; Macro-Reconstruction ; Particle Image Velocimetry

Abstract:

Light field imaging is an emerging single-frame three-dimensional imaging technique that has attracted wide attention and found applications in many fields in recent years. However, mainstream light field imaging methods generally suffer from low imaging resolution, which restricts the development and application of light field imaging. Improving the resolution of light field images through effective algorithms is a current research hotspot and challenge. In recent years, deep-learning-based super-resolution light field reconstruction methods have effectively improved reconstruction resolution, yet existing methods cannot fully exploit the complementary information between viewpoints, so the reconstruction results still fall short of expectations. Moreover, current algorithms each target light field images at a single scale; no algorithm yet exists that can reconstruct light field images at different scales with high accuracy. To address these problems, this thesis proposes a multi-scale universal light field reconstruction algorithm with high accuracy, fast reconstruction speed, and strong generalization ability, providing a new technique for high-precision multi-scale light field imaging.
This thesis first reviews the development and research status of light field imaging technology in detail, analyzing the advantages and disadvantages of current light field acquisition techniques and the state of light field reconstruction techniques. It then examines the basic principles of deep-learning-based 3D light field reconstruction and analyzes the 3D light field reconstruction problem. On this basis, single-frame super-resolution image reconstruction and the fundamentals of generative adversarial networks (GANs) are introduced, and a conditional-adversarial multi-scale real-time light field volume reconstruction algorithm is proposed. The model architecture, the design of the loss function, and the training and inference stages are described in depth, and the reconstruction speed of the algorithm is tested: for light field images of different sizes, 3D reconstruction runs at up to about 250 Hz, laying a solid foundation for the subsequent experiments.
Next, the performance of the algorithm on microscopic light field image reconstruction is studied, evaluating the model on microscopic structures of different complexity. For simple structures, the algorithm was used to reconstruct light field images of tubulin structures 1 μm and 10 μm in diameter. In terms of peak signal-to-noise ratio (PSNR), the reconstructed 1 μm- and 10 μm-diameter tubulin images reached average PSNRs of 41.22 dB and 40 dB, respectively (more than 5 dB above other existing methods). In terms of structural similarity (SSIM), the SSIM between the Fourier spectrum of the reconstructed 1 μm tubulin image and that of the ground truth reached 0.8455, and the SSIM between the reconstructed 10 μm tubulin and the ground truth was close to 1. For complex structures, the algorithm was tested on mitochondrial membranes; the results show that its reconstructions greatly reduce artifacts and imaging offset, with an SSIM of 0.8 on small-field-of-view mitochondrial membrane images. These results demonstrate the superior performance of the algorithm on microscopic light field image reconstruction.
Then, the performance of the algorithm on mesoscopic light field image reconstruction is studied, covering both static and dynamic mesoscopic samples. The static sample is a light field image of mouse cerebral cortex structure, reconstructed with both this algorithm and a traditional light field reconstruction algorithm; this algorithm achieves a signal-to-noise ratio (SNR) of 0.136 dB and a cutoff frequency kc of 0.5741, whereas the traditional algorithm reaches an SNR of only 0.062 dB and a kc of only 0.4946. The dynamic sample is a NAOMi-simulated time-varying light field image of visual area 1 of the mouse cerebral cortex; the percentage change of the fluorescence signal obtained from this algorithm's reconstruction is almost identical to the ground-truth signal. These studies demonstrate the superior performance of the algorithm on mesoscopic light field image reconstruction.
Finally, the performance of the algorithm on macroscopic light field PIV image reconstruction is studied. Compared with a traditional light field PIV reconstruction algorithm, this algorithm reduces the particle reconstruction error in the x-y directions to about 1/4, and in the z direction to 1/4-1/7, of the traditional values. The algorithm was then used to reconstruct the 3D particle field from light field PIV particle images of a lid-driven cavity flow, and the reconstruction was combined with an optical flow method for 3D flow field reconstruction. The results show that particle images obtained with this algorithm reconstruct the 3D vortex structure well, with a z-direction velocity vector error of only 0.1934 ± 0.001 mm/s, whereas the traditional refocusing algorithm cannot recover the 3D vortex structure and yields a velocity vector error of 0.3603 ± 0.0041 mm/s, 86.3% higher than this algorithm. These studies demonstrate the superior performance of the algorithm on macroscopic light field PIV reconstruction.

English abstract:

Light field imaging, which captures multidimensional information about light, is an emerging single-frame three-dimensional imaging method that has gained wide attention and application in several fields in recent years. However, current mainstream light field imaging methods generally suffer from low imaging resolution, which restricts the development and application of light field imaging technology. Improving the resolution of light field images through effective algorithms is currently an important way to improve the resolution of light field imaging. In recent years, deep-learning-based super-resolution reconstruction of light field images has effectively improved the resolution of light field reconstruction. However, existing methods still have limitations: they cannot make full use of the complementary information between viewpoints and struggle to model the non-local properties of the four-dimensional light field, so the reconstruction results fall short of expectations. In addition, current algorithms each target light field images at a single scale, and no algorithm yet exists that can reconstruct light field images at different scales with high accuracy. To address these problems, this study proposes a light field reconstruction algorithm with high precision, fast reconstruction speed, strong generalization ability, and multi-scale universality, providing a new technical means for high-precision microscopic 3D sample imaging, mesoscopic 3D neural imaging, and 3D flow field measurement.

This thesis first introduces the development and research status of light field imaging technology in detail and analyzes the advantages and disadvantages of current light field acquisition techniques, including multi-sensor, time-sequential, and multiplexed light field acquisition, as well as the current state of light field reconstruction technology. It then discusses in depth the basic principles of deep-learning-based 3D light field reconstruction and analyzes the 3D light field reconstruction problem. On this basis, single-frame super-resolution image reconstruction and the fundamentals of generative adversarial networks (GANs) are introduced, and a conditional-adversarial multi-scale real-time light field volume reconstruction algorithm is proposed. The model architecture, the design of the loss function, and the training and prediction stages are elaborated, and the reconstruction speed of the algorithm is tested: for light field images of different sizes, the reconstruction speed reaches about 250 Hz, providing a solid foundation for the subsequent experiments.
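The ~250 Hz figure is a reconstruction throughput (volumes per second). As a minimal, hypothetical sketch of how such a rate is typically measured (the `fake_reconstruct` workload below is a placeholder, not the thesis model):

```python
import time

def measure_rate(reconstruct, frames, warmup=2):
    """Reconstruction throughput in Hz (volumes per second)."""
    for f in frames[:warmup]:       # warm-up passes, excluded from timing
        reconstruct(f)
    t0 = time.perf_counter()
    for f in frames:
        reconstruct(f)
    return len(frames) / (time.perf_counter() - t0)

# Placeholder workload standing in for one forward pass of the network;
# the real measurement would call the trained model instead.
def fake_reconstruct(light_field):
    return [v * 2.0 for v in light_field]

rate = measure_rate(fake_reconstruct, [[0.1] * 256 for _ in range(50)])
```

In practice the warm-up runs matter for GPU inference, where the first calls pay one-time initialization costs that would otherwise bias the measured rate downward.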

Then, the performance of the algorithm on microscopic light field image reconstruction is investigated, and the reconstruction performance of the model is evaluated on microscopic structures of different complexity. For simple structures, the algorithm was used to reconstruct light field images of tubulin structures 1 μm and 10 μm in diameter. In terms of peak signal-to-noise ratio (PSNR), the reconstructed 1 μm- and 10 μm-diameter images reached average PSNRs of 41.22 dB and 40 dB, respectively, more than 5 dB higher than other existing methods. In terms of structural similarity (SSIM), the Fourier spectrum of the reconstructed 1 μm tubulin image reaches an SSIM of 0.8455 against the ground truth, and the SSIM of the reconstructed 10 μm tubulin against the ground truth is close to 1. For complex structures, the algorithm was tested on mitochondrial membranes; the results show that its reconstructions greatly reduce artifacts and imaging offset, and the SSIM of the imaging results on small-field-of-view mitochondrial membranes reaches 0.8. These results prove the superior performance of the algorithm on microscopic light field image reconstruction.
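The PSNR and SSIM values quoted above follow the standard definitions of these metrics. A minimal NumPy sketch (global single-window SSIM for illustration; reported SSIM values usually average the windowed variant over local patches):

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((x - y) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Global (single-window) SSIM with the standard stabilizing constants."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)
# MSE = 0.01 with unit data range, so PSNR = 10 * log10(1 / 0.01) = 20 dB.
print(psnr(a, b))
```

An identical image pair gives SSIM = 1 and an unbounded PSNR, which is why reconstruction quality is reported as "close to 1" for SSIM but as a finite decibel value for PSNR.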

After that, the performance of the algorithm on mesoscopic light field image reconstruction is investigated, covering both static and dynamic mesoscopic samples. The static sample is a light field image of mouse cerebral cortex structure, reconstructed with both this algorithm and a traditional light field reconstruction algorithm; this algorithm achieves a signal-to-noise ratio (SNR) of 0.136 dB and a cutoff frequency kc of 0.5741, while the traditional algorithm reaches an SNR of only 0.062 dB and a kc of only 0.4946. The dynamic sample is a NAOMi-simulated time-varying light field image of visual area 1 of the mouse cerebral cortex; the percentage change of the fluorescence signal obtained from this algorithm's reconstruction is almost identical to the ground-truth signal. The above studies prove the superior performance of the algorithm on mesoscopic light field image reconstruction.
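The "percentage change of fluorescence signal" is, by the usual convention in calcium imaging, the ΔF/F₀ measure. A minimal sketch under that assumption (the baseline choice here, the mean of the early frames, is illustrative; the thesis may define F₀ differently):

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=10):
    """Percent fluorescence change dF/F0 relative to an early-frame baseline."""
    f0 = np.mean(trace[:baseline_frames])
    return (trace - f0) / f0 * 100.0

# Toy trace: flat baseline at 1.0, then a transient peaking at 1.5 (+50%).
trace = np.array([1.0] * 10 + [1.5, 1.25, 1.0])
dff = delta_f_over_f(trace)
print(dff[10])  # -> 50.0
```

Comparing the ΔF/F₀ traces of the reconstruction and the ground truth frame by frame is what "almost identical to the ground-truth signal" refers to.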

Finally, the performance of the algorithm on macroscopic light field PIV image reconstruction is investigated. Compared with a traditional light field PIV reconstruction algorithm, this algorithm reduces the particle reconstruction error by a factor of about 4 in the x-y directions and by a factor of 4-7 in the z direction. The algorithm was then used to reconstruct the 3D particle field from light field PIV particle images of a lid-driven cavity flow, and the reconstruction was combined with an optical flow method for 3D flow field reconstruction. The results show that the particle images obtained with this algorithm reconstruct the 3D vortex structure well, with a z-direction velocity vector error of only 0.1934 ± 0.001 mm/s, whereas the traditional refocusing algorithm cannot recover the 3D vortex structure and yields a velocity vector error of 0.3603 ± 0.0041 mm/s, 86.3% higher than this algorithm. The above study demonstrates the superior performance of this algorithm in macroscopic light field PIV reconstruction.
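The 86.3% figure is the relative excess of the refocusing algorithm's error over this algorithm's error; the arithmetic, using the two error magnitudes quoted above:

```python
# Relative excess of the traditional refocusing error over this algorithm's
# error, using the z-direction velocity errors quoted in the abstract (mm/s).
err_algorithm = 0.1934   # this algorithm
err_refocus   = 0.3603   # traditional refocusing

excess_pct = (err_refocus - err_algorithm) / err_algorithm * 100
print(round(excess_pct, 1))  # -> 86.3
```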


CLC number:

 V211.76

Accession number:

 2025-002-0102

Release date:

 2025-09-27
