Dual Head Network for No-Reference Quality Assessment Towards Realistic Night-Time Images
Saved in:
Main Authors: , , , ,
Format: article
Language: EN
Published: IEEE, 2020
Subjects:
Online Access: https://doaj.org/article/e7531f9ed5ec4f0d81f9c0c614e0e566
Summary: No-reference image quality assessment (NR-IQA), which aims to predict image quality without relying on a pristine reference counterpart, has developed rapidly in recent years. However, little investigation has been dedicated to the quality assessment of realistic night-time images. Existing NR-IQA algorithms struggle with this night-time scenario because complicated authentic distortions such as low contrast, blurred details, and reduced visibility usually appear in such images. In this paper, we propose an end-to-end NR-IQA model to meet this challenge, based on a multi-stream deep convolutional neural network (DCNN). Two streams, a brightness-aware CNN and a naturalness-aware CNN, are constructed respectively by a brightness-altered image identification task on a self-established dataset and a quality-prediction regression task on an existing authentically distorted IQA dataset, to improve quality-aware initializations. Given the quick convergence and small transformations in the lower layers, a shallow-layer-shared architecture is explored to reduce computational cost. Finally, the features of these two pipelines are collected by an effective pooling method and then concatenated as the image representation for fine-tuning. The effectiveness and efficiency of the proposed method are verified by several experiments on the NNID, CCRIQ and LIVE Challenge databases. Furthermore, its applicability to wider settings such as contrast-distorted and driving scenarios is demonstrated on the CID2013, CCID2014 and BDD-100k databases.
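The summary describes a dual-stream design in which shared shallow layers feed a brightness-aware stream and a naturalness-aware stream, whose pooled features are concatenated into one image representation. A minimal numpy sketch of that data flow is below; all layer shapes, the 1x1 "convolution" stand-in, and the function names are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def conv_block(x, out_channels, rng):
    """Toy 1x1 'convolution': a per-pixel linear map followed by ReLU.
    Stands in for a real convolutional layer purely to show the data flow."""
    in_channels = x.shape[-1]
    w = rng.standard_normal((in_channels, out_channels)) * 0.1
    return np.maximum(x @ w, 0.0)

def dual_head_features(image, rng):
    # Shared shallow layers: computed once, since the abstract notes the
    # lower layers converge quickly and change little between the two tasks.
    shared = conv_block(image, 16, rng)

    # Two task-specific streams built on the shared representation.
    brightness = conv_block(shared, 32, rng)   # brightness-aware stream
    naturalness = conv_block(shared, 32, rng)  # naturalness-aware stream

    # Global-average pooling per stream (a simple stand-in for the paper's
    # pooling method), then concatenation into one feature vector.
    pooled_b = brightness.mean(axis=(0, 1))
    pooled_n = naturalness.mean(axis=(0, 1))
    return np.concatenate([pooled_b, pooled_n])

rng = np.random.default_rng(0)
image = rng.random((8, 8, 3))          # tiny stand-in for an RGB patch
features = dual_head_features(image, rng)
print(features.shape)                  # (64,) — 32 dims per stream
```

In this sketch the concatenated vector would be the input to a final quality-regression head during fine-tuning; sharing the shallow block means its cost is paid once rather than once per stream.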