







CLC number: TP391    Document code: A
Article number: 1001-3695(2022)05-051-1579-07
doi:10.19734/j.issn.1001-3695.2021.09.0384
PNet:multi-level low-illumination image enhancement network based on attention mechanism
Yang Wei1,2a,2b, Zhang Zhiwei1, Cheng Haixiu2a,2c
(1.Dept. of Software Engineering, Software Engineering Institute of Guangzhou, Guangzhou 510990, China; 2.a.School of Computer Science & Engineering, b.Machine Learning & Data Mining Team, c.Guangdong Province Computer Network Key Laboratory,
South China University of Technology, Guangzhou 510641, China)
Abstract: Low-illumination images suffer from degradations such as low brightness, noise artifacts, detail loss, and color distortion, which make low-illumination image enhancement a multi-objective task. As most existing enhancement algorithms fail to deliver comprehensive performance across multiple objectives, this paper proposed PNet, a multi-level low-illumination image enhancement network based on an attention mechanism, which built multi-stage cascaded enhancement-task subnets and designed a multi-channel information fusion module with an attention mechanism for effective feature selection and memory. With it, the network could process the image stream sequentially and cooperatively, progressively completing multiple tasks such as adaptive global brightness enhancement, noise and artifact suppression, detail restoration, and color correction. Quantitative and qualitative comparisons with existing mainstream algorithms show that the proposed method achieves adaptive brightness enhancement and improved detail contrast: the enhanced images have natural overall brightness without obvious halos or artifacts, and their colors are rich and true. Compared with the second-best algorithm, the PSNR, SSIM, and RMSE of images processed by the proposed model improve by 0.229, 0.112, and 0.335 respectively. Experimental results show that the proposed method achieves excellent overall performance on the multi-objective task of low-illumination image enhancement and has practical value.
Key words: low-illumination image enhancement; attention mechanism; LSTM; supervised learning; multi-level subnet
0 Introduction
Low-illumination images are images captured under low-light or otherwise unfavorable illumination. They suffer from low visibility, low signal-to-noise ratio, and damaged color, which bury image content, impair the transfer of image information, and cause high-level computer vision tasks that depend on that information to fail, such as image classification, image recognition, autonomous driving, video surveillance, and medical image processing. Although physical measures such as higher-end camera hardware or longer exposure times can improve imaging quality in low-light environments, they raise acquisition cost, and the captured images may exhibit over-exposure or motion blur. Low-illumination image enhancement, i.e., improving the quality of captured low-light images in software and restoring the expression of their latent content, has therefore become an important topic in low-level computer vision research.
For low-illumination image enhancement, researchers have proposed a variety of algorithms in recent years, which fall into three broad classes: traditional algorithms, physical-model-prior-based algorithms, and learning-driven algorithms. Traditional algorithms include gray-level transforms, spatial filtering, and histogram equalization; typical representatives are the gamma transform and the family of HE variants [1~3], which enhance the contrast of the whole image globally by expanding its dynamic range. Because these algorithms ignore inter-pixel relationships, their results are prone to over- or under-enhancement. Retinex [4], atmospheric scattering [5], and Rybak [6] are physical models commonly used by prior-based algorithms. Among them, the Retinex model, a classical physical model built on human visual theory, regards an image as the pixel-wise product of an illumination map and a reflectance map, and has spawned a series of methods that estimate the illumination map and solve for the reflectance map via path models, PDE models, variational models, and surround models, such as SSR [7], MSR [8], and LIME [9]. However, solving for the reflectance map in the Retinex model is an ill-posed problem: it requires constructing complex prior regularization terms and an unknown number of iterations, and is therefore time-consuming.
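As a concrete reference for the gamma transform mentioned above (a standard formulation, not code from any of the cited works): normalized intensities are raised to a power γ, so γ < 1 brightens dark regions and γ > 1 darkens them. A minimal pure-Python sketch, with the image values and helper name chosen purely for illustration:

```python
def gamma_transform(image, gamma):
    """Pointwise gamma correction on an 8-bit grayscale image
    (nested lists): out = 255 * (in / 255) ** gamma."""
    return [[round(255 * (p / 255) ** gamma) for p in row] for row in image]

# gamma < 1 expands the dark end of the dynamic range, lifting low pixels.
dark = [[10, 40], [80, 160]]
brightened = gamma_transform(dark, 0.5)
```

Because the transform is applied per pixel with no neighborhood information, it illustrates exactly the limitation noted above: every pixel of a given intensity is mapped the same way regardless of context.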
As deep learning has achieved outstanding results in computer vision, learning-driven algorithms have become the mainstream approach to low-illumination enhancement. LLNet [10] was the first to apply deep learning, building a deep autoencoder that performs brightness enhancement and denoising simultaneously. LLCNN [11] extracts multi-scale image features with a convolutional neural network and, driven by an SSIM loss, learns the mapping between low-light images and ground-truth references in a supervised manner. MSR-net [12], inspired by the principle of MSR [8], converts the multi-scale weighted Gaussian surround into an equivalent difference-of-Gaussians structure and designs a residual convolutional network, MSR-net, that learns the DoG surround convolution kernels under supervision. LightenNet [13] trains a CNN on a low-light dataset to learn the image illumination map, applies gamma correction, and then obtains the reflectance map as the enhancement result based on Retinex theory. RetinexNet [14] combines Retinex theory with deep learning in a three-stage network of decomposition, illumination enhancement, and reconstruction: it decomposes the low-light image into reflectance and illumination layers, enhances the illumination layer, and recombines the enhanced illumination layer with the reflectance layer to obtain the enhanced image. EnlightenGAN [15] is an unsupervised generative adversarial network that trains without low/normal-light image pairs, removing the dependence on paired training data. KinD [16] designs a CNN architecture focused on flexible adjustment of the illumination image and detail enhancement of the reflectance image, training the network with an adjustment factor to restore the illumination and reflectance layers to different degrees and reconstruct the enhanced image. Zero-DCE++ [17] builds an unsupervised training network that does not rely on paired training data: it estimates brightness curves for the input image and enhances it iteratively and progressively, achieving good generalization.
1 Related work
Learning-driven algorithms have improved image quality in low-illumination enhancement, but most existing algorithms trade off among the multiple enhancement objectives, focusing on improving a single aspect of performance, and fail to perform well simultaneously on brightness enhancement, noise suppression, detail restoration, and color correction. As shown in Fig.1, some algorithms raise brightness at the cost of naturalness, failing to recover the correct intensity and light distribution: DRBN [18], MBLLEN [19], and RRDNet [20] in Fig.1(a) under-enhance and remain dark overall. Some over- or under-expose, such as the sky region of DSLR [21] and the red-boxed region of MBLLEN in Fig.1(a) (see the electronic version). Some amplify the noise hidden in the image signal while raising brightness, such as the building region of LLNet in Fig.1(a), destroying texture details and color information. Some smooth away noise but blur edges and lose detail contrast, such as the LLNet result in Fig.1(b). Some lose color information while raising brightness, producing color distortion, such as Retinex-Net and Zero-DCE++ in Fig.1(a). To obtain good overall performance on the multiple objectives, some researchers add extra pre- or post-processing modules for denoising and color restoration before or after enhancement [12~14]: RetinexNet combines the traditional BM3D for denoising, LightenNet [13] adjusts the illumination map with a gamma transform, and MSR-net adds a dedicated color correction module. Studies show, however, that such extra modules do not achieve optimal results.
For the interrelated yet conflicting multi-objective tasks of low-illumination enhancement, the field has further proposed divide-and-conquer strategies. a) The family of Retinex-based enhancement methods [13,14,22,23] realizes enhancement through image decomposition: Ref. [22] successively optimizes the decomposed illumination and reflectance layers to achieve brightness enhancement and denoising, and Ref. [23] decomposes the image into a global illumination layer, a reflectance layer, and a noise layer before brightness enhancement and denoising. b) Based on the idea of decomposing the enhancement task itself, Refs. [19,24] design multiple enhancement subnets: Ref. [19], trained on a synthetic dataset, designs a feature extraction module of ten convolutional layers and several homogeneous parallel enhancement subnets; Ref. [24] goes further by upgrading the homogeneous subnets to heterogeneous ones to strengthen learning capacity, adding two subnets that learn illumination and noise attention under supervision to provide illumination and noise priors of the input image to the subsequent enhancement subnets, enriching the information available to the network.
Fig.1 Examples of enhanced visual effects of different algorithms on sampled data of the MEF test set
To further improve the utilization of useful features and the training efficiency of deep networks, attention mechanisms have been applied to computer vision [25]. Ref. [26] models the correlation among feature channels, forming a channel attention mechanism that strengthens the weights of important features and improves accuracy on the target task. Ref. [27] generates weights along the spatial and feature dimensions from the input feature channels, forming combined channel and spatial attention. Ref. [28] obtains a global illumination prediction through an encoder-decoder network, forming a global illumination attention mechanism for low-light enhancement. Ref. [29] learns channel correlations to form an attention mechanism for key-feature selection. Ref. [30] designs a dual attention unit (DAU) based on channel and spatial attention to extract features from the convolutional stream.
Inspired by task decomposition and attention mechanisms, and targeting the multi-objective challenge of low-illumination enhancement, this paper proposes a multi-level low-illumination image enhancement network with an attention mechanism (PNet for short), shown in Fig.2. Its contributions include:
a) The design of PNet. Unlike Refs. [19,24], PNet consists of four cascaded low-illumination enhancement subnets that process the image stream sequentially and enhance the image cooperatively and progressively, achieving strong overall performance in adaptive global brightness enhancement, noise and artifact suppression, detail contrast improvement, and color correction.
b) The fusion of an attention mechanism into PNet. Inspired by LSTM [31] and attention mechanisms [27,29,30,32], and building on the enriched feature information available to the network, a multi-channel information fusion module with memory is designed to select useful features, retain them, and suppress the propagation of useless information.
c) Loss functions that drive the training of PNet, including a noise loss, a contrast loss, a brightness-adaptive loss, and a color consistency loss, which effectively drive supervised end-to-end training of the network; see Section 2.3.
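The gated select-and-remember idea behind contribution b) can be sketched abstractly. This is a heavily hedged illustration of an LSTM-style sigmoid gate only; the names and scalar form are chosen for exposition, whereas the paper's module is a learned convolutional network:

```python
import math

def sigmoid(x):
    """Standard logistic function, mapping a gate logit into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(features, memory, gate_logits):
    """Blend incoming features into a memory state: a gate near 1 keeps
    (remembers) the new feature, a gate near 0 suppresses it and
    retains the memory instead."""
    return [sigmoid(z) * f + (1.0 - sigmoid(z)) * m
            for z, f, m in zip(gate_logits, features, memory)]
```

In the real module the gate logits would themselves be produced by learned layers conditioned on the features; here they are free parameters so the gating behavior is visible in isolation.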
2 PNet: multi-level low-illumination image enhancement network with attention mechanism
2.1 PNet backbone
As shown in Fig.2(a), the overall PNet framework consists of four cascaded enhancement subnets. Each subnet processes the image stream sequentially, jointly accomplishing coarse-to-fine denoising, preservation of contrast and detail texture, color correction, and brightness enhancement, enhancing the image cooperatively and progressively. Each PNet subnet follows a residual design, adding a skip connection from input to output so that the network learns the residual between the low-light image and the ground-truth reference, which keeps information flowing during training and mitigates vanishing gradients [33].
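The cascade-of-residual-subnets structure can be sketched as follows. The toy per-pixel stage function stands in for the learned subnets and is purely illustrative, not the paper's architecture:

```python
def residual_subnet(image, f):
    """One enhancement stage with an input-to-output skip connection:
    the stage applies a residual, so output = input + f(input)."""
    return [p + f(p) for p in image]

def pnet_cascade(image, subnets):
    """Process the image stream sequentially through cascaded subnets,
    each refining the previous stage's output."""
    for f in subnets:
        image = residual_subnet(image, f)
    return image

# Toy stand-ins for the four learned subnets: each lifts dark intensities
# a little while leaving already-bright pixels (p = 1.0) unchanged.
stages = [lambda p: 0.1 * (1.0 - p)] * 4
enhanced = pnet_cascade([0.0, 0.5, 1.0], stages)
```

The skip connection means an identity residual (f returning 0) passes the image through untouched, which is what eases gradient flow in the real network.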
The cascaded structure between PNet subnets feeds each subnet's output into the next, allowing global feature information to accumulate through the later enhancement stages and encouraging the reuse of useful features during training. To extract rich feature representations from the low-light image, PNet builds a feature extraction module SF from groups of dilated convolutions, which avoids the detail loss incurred by down- and up-sampling while preserving the receptive field. The input to SF is a four-channel tensor formed by concatenating the image with an illumination estimation layer (the per-pixel maximum over the R, G, and B channels). SF consists of three convolution groups, each composed of Conv+BN+ReLU, where Conv is a dilated convolution with a 3×3 kernel [34], dilation rate 2, and 32 channels. The first group outputs a 32-channel feature map F1, and the following groups successively produce the input feature groups F2 and F3 for the enhancement subnets.
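The four-channel SF input described above (RGB plus a max-of-RGB illumination estimate) can be constructed as in the sketch below; the nested-list representation and function name are illustrative choices, not the paper's code:

```python
def build_sf_input(rgb):
    """Concatenate an illumination-estimation channel, taken as the
    per-pixel maximum over R, G, B, onto an H x W x 3 image,
    yielding an H x W x 4 input for the SF module."""
    return [[list(px) + [max(px)] for px in row] for row in rgb]

# A 1 x 2 RGB image with normalized values; the fourth channel of the
# result holds the brightest of the three channels at each pixel.
img = [[[0.1, 0.3, 0.2], [0.0, 0.0, 0.05]]]
four_channel = build_sf_input(img)
```

Using the channel-wise maximum as the illumination estimate is a common Retinex-style heuristic: the brightest channel at a pixel is the least attenuated by the scene's reflectance.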
3.2 Qualitative analysis of experimental results
This paper selects seven low-light images under different lighting conditions from the Fusion test set, the LIME test set [9], the MEF test set [42], the DICM dataset [43], and the NPE dataset [40], and compares PNet with ten popular deep-learning-based low-light enhancement algorithms [10,13~15,17~21,44]. Visual comparisons of the enhancement results are shown in Figs.1 and 5~7. The results of the compared algorithms were obtained through the LLIE online platform (LoLiPlatform) built by Ref. [45].
In Fig.1(a), the red highlighted boxes mark regions with poor enhancement; in Fig.1(b), the red highlighted boxes mark zoomed-in comparison regions and the blue boxes mark observation regions. Fig.1(a) has uneven illumination overall, with detail-rich main buildings and low gray values in local regions, yet PNet enhances it globally, naturally, and adaptively without over-exposure or artifact blur (DRBN, MBLLEN, and RRDNet under-enhance and remain dark overall; DRBN and DSLR over-expose; the results of LLNet, LightenNet, Retinex-Net, EnlightenGAN, DRBN, and KinD++ contain artifacts and blur). PNet also effectively suppresses noise (noise is amplified in the red boxes of LLNet, Retinex-Net, and EnlightenGAN) and corrects distorted colors well while preserving scene information (Retinex-Net shows severe color distortion; Zero-DCE++ is grayish overall and loses scene information). Fig.1(b) is an extremely low-light scene in which details of the water surface, leaves, trunks, pavilion, and people are buried by darkness and colors are severely distorted. On Fig.1(b), PNet achieves the best overall result in brightness, color richness, detail, and suppression of noise and artifacts, with the most realistic branch and leaf colors overall (LLNet, Retinex-Net, DRBN, DSLR, and KinD++ show varying degrees of distortion and artifacts). Zooming into the blue highlighted box, the leaf-region colors are saturated (LLNet shows halo distortion); zooming into the red highlighted box, the contrast of the pavilion stones stands out with no obvious blur or color distortion (MBLLEN, RRDNet, and DSLR fail to enhance; LLNet and Retinex-Net are blurry; Zero-DCE++, DRBN, and KinD++ show color distortion).
In Fig.5(a)(b), the red highlighted boxes mark zoomed-in comparison regions. For Fig.5(a), PNet produces natural overall brightness with no obvious artifacts (MBLLEN, RRDNet, and DSLR are dark overall; LLNet, LightenNet, and EnlightenGAN over-expose the sky region; DSLR shows severe artifacts in the sky region). In the zoomed red boxes, PNet recovers the structure of extremely dark local regions well, suppresses noise and artifacts, and restores distorted colors. For Fig.5(b), PNet's result is natural overall without artifacts (DSLR shows artifacts in the sky region), appropriately enhancing the distant buildings and window edges and recovering abundant detail in those two regions (the results of LLNet, LightenNet, Retinex-Net, and KinD++ exhibit under-enhancement, blur, and color distortion). In the zoomed red boxes, PNet enhances the main building with natural brightness and contrast (DRBN, MBLLEN, RRDNet, and DSLR remain dark overall), without blur (LLNet is blurry), noise amplification (LightenNet, EnlightenGAN, and KinD++ amplify noise), or color distortion (Retinex-Net and KinD++ show color distortion).
In Fig.6(a)(b), the highlighted boxes mark regions for particular comparison. For Fig.6(a), PNet achieves notably strong enhancement, including adaptive brightness improvement, color correction, and suppression of noise and artifacts, yielding the best visual quality. For Fig.6(b), compared with LLNet, Retinex-Net, and RRDNet, PNet's result shows no unnatural halos or artifacts; compared with KinD++, it shows no obvious noise; and in the zoomed head region, the facial contour is natural and clear (KinD++ [44] shows black artifacts). As shown in Fig.7, PNet adaptively raises the brightness of differently lit regions with natural results. In the zoomed red highlighted box, PNet is the only method among those compared that recovers the color of the crane's beak, with fine contrast in the eye and beak regions and realistic, pleasing colors; in contrast, the other methods show varying degrees of black artifacts and color damage, e.g., the results of LightenNet, EnlightenGAN, RRDNet, DSLR, and KinD++ contain artifacts. Fig.8 shows PNet applied to low-light images captured in real scenes, with the original images in the first row and the enhanced results in the second, indicating that on real captures under different lighting conditions PNet likewise performs well across the multiple objectives of adaptive global brightness enhancement, detail contrast improvement, color correction, and suppression of blur and artifacts (see the electronic version).
A comprehensive analysis of the visual results in Figs.1 and 5~8 shows that PNet performs well on the multi-objective enhancement task on both the test data and images captured in real low-light environments: it adaptively raises global brightness while avoiding over-exposure; improves detail contrast while suppressing noise and reducing artifacts; corrects the color distortion of low-light images, producing rich, saturated colors and better visual quality; and generalizes to real-scene low-light images.
3.3 Quantitative analysis of experimental results
The averages of the PSNR, SSIM, and RMSE metrics are computed on the test sets, comparing the ten selected deep-learning-based low-light enhancement algorithms, as shown in Table 2 (bold marks the best result and underline the second best). PNet improves on the second-best algorithm by 0.229, 0.112, and 0.335 in PSNR, SSIM, and RMSE respectively, outperforming the compared algorithms. As shown in Table 3, PNet is better than the other algorithms on NIQE, and its Colorfulness score ranks in the top four among the ten compared algorithms, indicating that PNet's results have comparatively rich colors. Based on the data in Tables 2 and 3, PNet outperforms the ten compared deep-learning-based algorithms on the four quality metrics PSNR, SSIM, RMSE, and NIQE, achieves a competitive Colorfulness score, and thus attains an overall performance advantage.
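For reference, the fidelity metrics reported in Table 2 follow standard definitions (this is not the authors' evaluation script): RMSE is the root mean squared error between enhanced and reference images, and PSNR is 10·log10(MAX²/MSE) with MAX = 255 for 8-bit images. A minimal sketch over flattened pixel lists:

```python
import math

def rmse(pred, ref):
    """Root mean squared error between two equal-length pixel lists."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred)
    return math.sqrt(mse)

def psnr(pred, ref, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
```

Note the directions of the metrics: higher PSNR and SSIM are better, while lower RMSE is better, so the reported RMSE "improvement" of 0.335 corresponds to a reduction relative to the second-best algorithm.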
4 Conclusion
This paper presented PNet, a multi-level low-illumination image enhancement network with an attention mechanism. By designing multi-task enhancement subnets, a feature extraction module, attention and information fusion-and-reuse mechanisms, and loss functions, the network is trained end to end without any extra pre- or post-processing module, and achieves excellent performance on the multiple objectives of adaptive global brightness, contrast and detail improvement, color correction, and suppression of noise and artifacts. Seven groups of images were selected from different test sets, ten state-of-the-art deep-learning-based low-light enhancement algorithms were compared, and five image quality metrics were used for qualitative and quantitative analysis, showing that PNet outperforms the compared algorithms in overall multi-objective performance. Applying PNet to low-light images of real scenes demonstrates its generalization to real environments. In future work, the authors will continue to optimize the network model, improving its generalization on images captured in extremely low-light environments and its application to video enhancement.
References:
[1]Abdullah-Al-Wadud M,Kabir M H,Dewan M A A,et al.A dynamic histogram equalization for image contrast enhancement[J].IEEE Trans on Consumer Electronics,2007,53(2):593-600.
[2]Reza A M.Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement[J].Journal of VLSI Signal Processing Systems for Signal,Image and Video Technology,2004,38(1):35-44.
[3]Lamberti F,Montrucchio B,Sanna A.CMBFHE:a novel contrast enhancement technique based on cascaded multistep binomial filtering histogram equalization[J].IEEE Trans on Consumer Electronics,2006,52(3):966-974.
[4]Xu Jun,Hou Yingkun,Ren Dongwei,et al.Star:a structure and texture aware retinex model[J].IEEE Trans on Image Processing,2020,29:5022-5037.
[5]Hu Yinmeng,Shang Yuanyuan,Fu Xiaoyan,et al.Low-illuminance video enhancement algorithm combining atmospheric model and brightness propagation map[J].Journal of Image and Graphics,2016,21(8):1010-1020.(in Chinese)
[6]Qi Yunliang,Yang Zhen,Lian Jing,et al.A new heterogeneous neural network model and its application in image enhancement[J].Neurocomputing,2021,440:336-350.
[7]Jobson D J,Rahman Z,Woodell G A.Properties and performance of a center/surround retinex[J].IEEE Trans on Image Processing,1997,6(3):451-462.
[8]Jobson D J,Rahman Z,Woodell G A.A multiscale Retinex for bri-dging the gap between color images and the human observation of scenes[J].IEEE Trans on Image Processing,1997,6(7):965-976.
[9]Guo Xiaojie,Li Yu,Ling Haibin.LIME:low-light image enhancement via illumination map estimation[J].IEEE Trans on Image Processing,2016,26(2):982-993.
[10]Lore K G,Akintayo A,Sarkar S.LLNet:a deep autoencoder approach to natural low-light image enhancement[J].Pattern Recognition,2017,61:650-662.
[11]Tao Li,Zhu Chuang,Xiang Guoqing,et al.LLCNN:a convolutional neural network for low-light image enhancement[C]//Proc of IEEE Visual Communications and Image Processing.Piscataway,NJ:IEEE Press,2017:1-4.
[12]Shen Liang,Yue Zihan,F(xiàn)eng Fan,et al.MSR-net:low-light image enhancement using deep convolutional network[EB/OL].(2017-11-07).https://arxiv.org/abs/1711.02488.
[13]Li Chongyi,Guo Jichang,Porikli F,et al.LightenNet:a convolutional neural network for weakly illuminated image enhancement[J].Pattern Recognition Letters,2018,104:15-22.
[14]Wei Chen,Wang Wenjing,Yang Wenhan,et al.Deep retinex decomposition for low-light enhancement[EB/OL].(2018-08-14).https://arxiv.org/abs/1808.04560.
[15]Jiang Yifan,Gong Xinyu,Liu Ding,et al.EnlightenGAN:deep light enhancement without paired supervision[J].IEEE Trans on Image Processing,2021,30:2340-2349.
[16]Zhang Yonghua,Zhang Jiawan,Guo Xiaojie.Kindling the darkness:a practical low-light image enhancer[C]//Proc of the 27th ACM International Conference on Multimedia.New York:ACM Press,2019:1632-1640.
[17]Li Chongyi,Guo Chunle,Loy C C.Learning to enhance low-light image via zero-reference deep curve estimation[EB/OL].(2021-03-01).https://arxiv.org/abs/2103.00860.
[18]Yang Wenhan,Wang Shiqi,F(xiàn)ang Yuming,et al.From fidelity to perceptual quality:a semi-supervised approach for low-light image enhancement[C]//Proc of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE Press,2020:3063-3072.
[19]Lyu Feifan,Lu Feng,Wu Jianhua,et al.MBLLEN:low-light image/video enhancement using CNNs [C]//Proc of British Machine Vision Conference.2018:220.
[20]Zhu Anqi,Zhang Lin,Shen Ying,et al.Zero-shot restoration of underexposed images via robust retinex decomposition[C]//Proc of IEEE International Conference on Multimedia and Expo.Piscataway,NJ:IEEE Press,2020:1-6.
[21]Lim S,Kim W.DSLR:deep stacked Laplacian restorer for low-light image enhancement[J].IEEE Trans on Multimedia,2020,23:4272-4284.
[22]Ren Xutong,Li Mading,Cheng Wenhuang,et al.Joint enhancement and denoising method via sequential decomposition[C]//Proc of IEEE International Symposium on Circuits and Systems.Piscataway,NJ:IEEE Press,2018:1-5.
[23]He Renjie,Guan Mingyang,Wen Changyun.SCENS:simultaneous contrast enhancement and noise suppression for low-light images[J].IEEE Trans on Industrial Electronics,2020,68(9):8687-8697.
[24]Lyu Feifan,Li Yu,Lu Feng.Attention guided low-light image enhancement with a large scale low-light simulation dataset[J].International Journal of Computer Vision,2021,129(7):2175-2193.
[25]Mnih V,Heess N,Graves A.Recurrent models of visual attention[C]//Advances in Neural Information Processing Systems.2014.
[26]Hu Jie,Li Shen,Albanie S,et al.Squeeze-and-excitation networks[J].IEEE Trans on Pattern Analysis and Machine Intelligence,2020,42(8):2011-2023.
[27]Woo S,Park J,Lee J Y,et al.CBAM:convolutional block attention module[C]//Proc of European Conference on Computer Vision.Cham:Springer,2018:3-19.
[28]Wang Wenjing,Chen Wei,Yang Wenhan,et al.GLADNet:low-light enhancement network with global awareness[C]//Proc of the 13th IEEE International Conference on Automatic Face & Gesture Recognition.Piscataway,NJ:IEEE Press,2018:751-755.
[29]Anwar S,Barnes N.Real image denoising with feature attention[C]//Proc of IEEE/CVF International Conference on Computer Vision.Piscataway,NJ:IEEE Press,2019:3155-3164.
[30]Zamir S W,Arora A,Khan S,et al.Learning enriched features for real image restoration and enhancement[C]//Proc of the 16th European Conference on Computer Vision.Cham:Springer,2020:492-511.
[31]Shi Xingjian,Chen Zhourong,Wang Hao,et al.Convolutional LSTM network:a machine learning approach for precipitation nowcasting[C]//Advances in Neural Information Processing Systems.2015:802-810.
[32]Li Xiang,Wang Wenhai,Hu Xiaolin,et al.Selective kernel networks[C]//Proc of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE Press,2019:510-519.
[33]He Kaiming,Zhang X,Ren Shaoqing,et al.Deep residual learning for image recognition[C]//Proc of IEEE Conference on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE Press,2016:770-778.
[34]Simonyan K,Zisserman A.Very deep convolutional networks for large-scale image recognition[EB/OL].(2014-09-04).https://arxiv.org/abs/1409.1556.
[35]Ronneberger O,F(xiàn)ischer P,Brox T.U-Net:convolutional networks for biomedical image segmentation[C]//Proc of International Conference on Medical Image Computing and Computer-Assisted Intervention.Cham:Springer,2015:234-241.
[36]Guo Chunle,Li Chongyi,Guo Jichang,et al.Zero-reference deep curve estimation for low-light image enhancement[C]//Proc of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE Press,2020:1780-1789.
[37]Ignatov A,Kobyshev N,Timofte R,et al.DSLR-quality photos on mobile devices with deep convolutional networks[C]//Proc of IEEE International Conference on Computer Vision.Piscataway,NJ:IEEE Press,2017:3277-3285.
[38]Wang Zhou,Bovik A C,Sheikh H R,et al.Image quality assessment:from error visibility to structural similarity[J].IEEE Trans on Image Processing,2004,13(4):600-612.
[39]Dang-Nguyen D T,Pasquini C,Conotter V,et al.RAISE:a raw images dataset for digital image forensics[C]//Proc of the 6th ACM Multimedia Systems Conference.New York:ACM Press,2015:219-224.
[40]Wang Shuhang,Zheng Jin,Hu Haimiao,et al.Naturalness preserved enhancement algorithm for non-uniform illumination images[J].IEEE Trans on Image Processing,2013,22(9):3538-3548.
[41]Hasler D,Susstrunk S.Measuring colourfulness in natural images[C]//Proc of IS&T/SPIE Electronic Imaging.2003:87-95.
[42]Lee C W,Lee C,Lee Y Y,et al.Power-constrained contrast enhancement for emissive displays based on histogram equalization[J].IEEE Trans on Image Processing,2011,21(1):80-93.
[43]Liu Ding,Wen Bihan,Liu Xianming,et al.When image denoising meets high-level vision tasks:a deep learning approach [C]// Proc of the 27th International Joint Conference on Artificial Intelligence.San Francisco,CA:Morgan Kaufmann,2018:842-848.
[44]Zhang Yonghua,Guo Xiaojie,Ma Jiayi,et al.Beyond brightening low-light images[J].International Journal of Computer Vision,2021,129(4):1013-1037.
[45]Lu Kun,Zhang Lihong.TBEFN:a two-branch exposure-fusion network for low-light image enhancement[J].IEEE Trans on Multimedia,2020.DOI:10.1109/TMM.2020.3037526.
[46]Zhang Lin,Zhang Lijun,Liu Xinyu,et al.Zero-shot restoration of back-lit images using deep internal learning[C]//Proc of the 27th ACM International Conference on Multimedia.New York:ACM Press,2019:1623-1631.