This is published under the terms of the Creative Commons Attribution licence.
We propose a new paradigm for no-reference image quality assessment (IQA) that exploits neurological and psychophysical properties of the human visual system (HVS). Physiological and psychological evidence suggests that the HVS behaves differently under low and high noise/artifact levels. In this paper, we propose a dual-model approach for blind IQA under near-threshold and suprathreshold noise conditions. The underlying assumption is that for images with low-level, near-threshold noise, the HVS tries to gauge the strength of the noise, so image quality can be well approximated by measuring noise strength. For images whose structures are overwhelmed by high-level suprathreshold noise, on the other hand, perceptual quality assessment relies on a cognitive model: the HVS tries to recover meaningful content from the noisy pixels using past experience and prior knowledge encoded in an internal generative model of the brain, and image quality is therefore closely related to the agreement between the noisy observation and the part of the image that the internal generative model can explain. Specifically, under the near-threshold condition a noise level estimation algorithm based on natural image statistics is used, while under the suprathreshold condition an active inference model based on the free-energy principle is adopted. The near- and suprathreshold models are integrated seamlessly through a mathematical transformation between the estimates of the two models. The proposed dual-model algorithm has been tested on images contaminated by additive Gaussian noise. Experimental results and comparative studies suggest that, although it is a no-reference approach, the proposed algorithm achieves prediction accuracy comparable with some of the best full-reference IQA methods. The dual-model perspective on IQA presented in this paper is expected to facilitate future research in visual perceptual modeling.
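To make the near-threshold branch concrete, the following is a minimal sketch of blind noise level estimation for additive Gaussian noise. It uses Immerkær's fast Laplacian-based estimator as a stand-in; the paper's actual estimator is based on natural image statistics, and the function name and tolerances here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Estimate the standard deviation of additive Gaussian noise in a
    grayscale image using Immerkaer's fast method: convolve with a
    Laplacian-difference kernel that suppresses image structure, then
    recover sigma from the mean absolute response.

    Note: this is an illustrative stand-in, not the estimator proposed
    in the paper.
    """
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    # Response of the 3x3 kernel [[1,-2,1],[-2,4,-2],[1,-2,1]] at every
    # interior pixel, computed with shifted slices instead of an explicit
    # convolution call.
    m = (img[:-2, :-2] + img[:-2, 2:] + img[2:, :-2] + img[2:, 2:]
         - 2.0 * (img[:-2, 1:-1] + img[2:, 1:-1]
                  + img[1:-1, :-2] + img[1:-1, 2:])
         + 4.0 * img[1:-1, 1:-1])
    # For pure Gaussian noise the kernel response has std 6*sigma, and
    # E|m| = 6*sigma*sqrt(2/pi); invert that relation and average.
    return np.sqrt(np.pi / 2.0) * np.abs(m).sum() / (6.0 * (h - 2) * (w - 2))
```

On a flat image corrupted by Gaussian noise of known strength, the estimate recovers the true sigma closely; on structured natural images the Laplacian kernel suppresses (but does not fully remove) edge contributions, which is why statistics-based estimators such as the one used in the paper can be more robust.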