A Novel No-Reference Quality Assessment Metric for Stereoscopic Images with Consideration of Comprehensive 3D Quality Information
(This article belongs to the Section Intelligent Sensors)
Abstract
1. Introduction
2. Previous Work on SIQA
3. The Proposed Algorithm
3.1. Disparity Estimation
3.2. Cyclopean Image
3.3. Feature Extraction
3.3.1. Neighbor Difference-Based 2D Features
3.3.2. Neighbor Product-Based 2D Features
3.3.3. Gradient Magnitude-Based 2D Features
3.3.4. Phase Congruency-Based 2D Features
3.3.5. Log-Gabor Response-Based 2D Features
3.3.6. Binocular Rivalry-Based 3D Features
3.3.7. Binocular Disparity Matching Error-Based 3D Features
3.3.8. Binocular Disparity Consistency-Based 3D Features
3.4. Machine Learning
4. Experimental Results and Analysis
4.1. Databases and Evaluation Criteria
4.2. Implementation Details
4.2.1. PCA for Feature Dimension Reduction
4.2.2. SVR Parameters (C, γ) Selection
4.2.3. Two-Fold Cross-Validation
4.2.4. Features with Noise-Addition
4.3. Performance Comparison
4.3.1. Overall Performance Comparison
4.3.2. Performance Comparison of Each Distortion Type
4.4. Performance Evaluation
4.4.1. Performance Evaluation of Each Feature Type
4.4.2. Cross-Database Performance Evaluation
4.4.3. Cross-Distortion Performance Evaluation
4.4.4. Image Content-Based Performance Evaluation
4.5. Influence of the Disparity Estimation Algorithm
4.6. Complexity Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, H.; Wang, S.; Zhao, Y.; Wei, J.; Piao, M. Large-scale elemental image array generation in integral imaging based on scale invariant feature transform and discrete viewpoint acquisition. Displays 2021, 69, 102025. [Google Scholar] [CrossRef]
- Deng, L.; Pu, Y. Analysis of college martial arts teaching posture based on 3D image reconstruction and wavelet transform. Displays 2021, 69, 102044. [Google Scholar] [CrossRef]
- Qi, S.; Ning, X.; Yang, G.; Zhang, L.; Long, P.; Cai, W.; Li, W. Review of multi-view 3D object recognition methods based on deep learning. Displays 2021, 69, 102053. [Google Scholar] [CrossRef]
- Wang, X.; Wang, C.; Liu, B.; Zhou, X.; Zhang, L.; Zheng, J.; Bai, X. Multi-view stereo in the Deep Learning Era: A comprehensive review. Displays 2021, 70, 102102. [Google Scholar] [CrossRef]
- Winkler, S.; Min, D. Stereoscopic image quality compendium. In Proceedings of the 2011 8th International Conference on Information, Communications & Signal Processing, Singapore, 13–16 December 2011; pp. 1–5. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
- Yang, H.; Li, J. Application of multivariate statistics and 3D visualization analysis in tacit knowledge diffusion map. Displays 2021, 69, 102062. [Google Scholar] [CrossRef]
- Duan, Z.; Chen, Y.; Yu, H.; Hu, B.; Chen, C. RGB-Fusion: Monocular 3D reconstruction with learned depth prediction. Displays 2021, 70, 102100. [Google Scholar] [CrossRef]
- Brandao, T.; Queluz, M.P. No-reference image quality assessment based on DCT-domain statistics. Signal Process. 2008, 88, 822–833. [Google Scholar] [CrossRef]
- Lu, B.; Sun, L.; Yu, L.; Dong, X. An improved graph cut algorithm in stereo matching. Displays 2021, 69, 102052. [Google Scholar] [CrossRef]
- Ye, P.; Wu, X.; Gao, D.; Deng, S.; Xu, N.; Chen, J. DP3 signal as a neuro-indictor for attentional processing of stereoscopic contents in varied depths within the ‘comfort zone’. Displays 2020, 63, 101953. [Google Scholar] [CrossRef]
- Lu, B.; He, Y.; Wang, H. Stereo disparity optimization with depth change constraint based on a continuous video. Displays 2021, 69, 102073. [Google Scholar] [CrossRef]
- Gao, Z.; Zhai, G.; Deng, H.; Yang, X. Extended geometric models for stereoscopic 3D with vertical screen disparity. Displays 2020, 65, 101972. [Google Scholar] [CrossRef]
- Ye, P.; Kumar, J.; Kang, L.; Doermann, D. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1098–1105. [Google Scholar]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
- You, J.; Xing, L.; Perkis, A.; Wang, X. Perceptual quality assessment for stereoscopic images based on 2D image quality metrics and disparity analysis. In Proceedings of the Fifth International Workshop on Video Processing and Quality Metrics for Consumer Electronics, Scottsdale, AZ, USA, 13–15 January 2010; pp. 4033–4036. [Google Scholar]
- Howard, I.P.; Rogers, B.J. Seeing in Depth; Oxford University Press: New York, NY, USA, 2008. [Google Scholar]
- Shen, L.; Fang, R.; Yao, Y.; Geng, X.; Wu, D. No-Reference Stereoscopic Image Quality Assessment Based on Image Distortion and Stereo Perceptual Information. IEEE Trans. Emerg. Top. Comput. Intell. 2019, 3, 59–72. [Google Scholar] [CrossRef]
- Lebreton, P.; Raake, A.; Barkowsky, M.; Callet, P.L. Evaluating depth perception of 3D stereoscopic videos. IEEE J. Sel. Top. Signal Process. 2012, 6, 710–720. [Google Scholar] [CrossRef] [Green Version]
- Mikkola, M.; Jumisko-Pyykko, S.; Strohmeier, D.; Boev, A.; Gotchev, A. Stereoscopic depth cues outperform monocular ones on autostereoscopic display. IEEE J. Sel. Top. Signal Process. 2012, 6, 698–709. [Google Scholar] [CrossRef]
- Tam, W.J.; Speranza, F.; Yano, S.; Shimono, K.; Ono, H. Stereoscopic 3D-TV: Visual comfort. IEEE Trans. Broadcast. 2011, 57, 335–346. [Google Scholar] [CrossRef]
- Jung, C.; Liu, H.; Cui, Y. Visual comfort assessment for stereoscopic 3D images based on salient discomfort regions. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 4047–4051. [Google Scholar]
- Coria, L.; Xu, D.; Nasiopoulos, P. Quality of experience of stereoscopic content on displays of different sizes: A comprehensive subjective evaluation. In Proceedings of the 2011 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 9–12 January 2011; pp. 755–756. [Google Scholar]
- Liu, H.; Heynderickx, I. Visual attention in objective image quality assessment: Based on eye-tracking data. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 971–982. [Google Scholar]
- Kim, H.; Lee, S. Transition of Visual Attention Assessment in Stereoscopic Images with Evaluation of Subjective Visual Quality and Discomfort. IEEE Trans. Multimedia 2015, 17, 2198–2209. [Google Scholar] [CrossRef]
- Gorley, P.; Holliman, N. Stereoscopic image quality metrics and compression. Proc. SPIE 2008, 6803, 45–56. [Google Scholar]
- Benoit, A.; Callet, P.L.; Campisi, P.; Cousseau, R. Quality Assessment of Stereoscopic Images. EURASIP J. Image Video Process. 2008, 2008, 659024. [Google Scholar] [CrossRef] [Green Version]
- Wang, S.; Shao, F.; Li, F.; Yu, M.; Jiang, G. A simple quality assessment index for stereoscopic images based on 3d gradient magnitude. Sci. World J. 2014, 2014, 890562. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Lin, Y.-H.; Wu, J.-L. Quality assessment of stereoscopic 3d image compression by binocular integration behaviors. IEEE Trans. Image Process. 2014, 23, 1527–1542. [Google Scholar] [CrossRef] [PubMed]
- Chen, M.J.; Su, C.C.; Kwon, D.K.; Cormack, L.K.; Bovik, A.C. Full-reference quality assessment of stereo pairs accounting for rivalry. Signal Process. Image Commun. 2013, 28, 1143–1155. [Google Scholar] [CrossRef] [Green Version]
- Bensalma, R.; Larabi, M.-C. A perceptual metric for stereoscopic image quality assessment based on the binocular energy. Multidimens. Syst. Signal Process. 2013, 24, 281–316. [Google Scholar] [CrossRef]
- Zhang, Y.; Chandler, D.M. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception. IEEE Trans. Image Process. 2015, 24, 3810–3825. [Google Scholar] [CrossRef]
- Li, F.; Shen, L.; Wu, D.; Fang, R. Full-reference quality assessment of stereoscopic images using disparity-gradient-phase similarity. In Proceedings of the 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP), Chengdu, China, 12–15 July 2015; pp. 658–662. [Google Scholar]
- Jiang, G.; Xu, H.; Yu, M.; Luo, T.; Zhang, Y. Stereoscopic Image Quality Assessment by Learning Non-negative Matrix Factorization-based Color Visual Characteristics and Considering Binocular Interactions. J. Vis. Commun. Image Represent. 2017, 46, 269–279. [Google Scholar] [CrossRef]
- Geng, X.; Shen, L.; Li, K.; An, P. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property. Signal Process. Image Commun. 2017, 52, 54–63. [Google Scholar] [CrossRef]
- Shao, F.; Li, K.; Lin, W.; Jiang, G.; Dai, Q. Learning blind quality evaluator for stereoscopic images using joint sparse representation. IEEE Trans. Multimedia 2016, 18, 2104–2114. [Google Scholar] [CrossRef]
- Akhter, R.; Baltes, J.; Sazzad, Z.M.P.; Horita, Y. No reference stereoscopic image quality assessment. Proc. SPIE 2010, 7524, 75240T. [Google Scholar]
- Chen, M.J.; Cormack, L.K.; Bovik, A.C. No-reference quality assessment of natural stereopairs. IEEE Trans. Image Process. 2013, 22, 3379–3391. [Google Scholar] [CrossRef] [PubMed]
- Wang, S.; Shao, F.; Jiang, G. Supporting binocular visual quality prediction using machine learning. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Chengdu, China, 14–18 July 2014; pp. 1–6. [Google Scholar]
- Ryu, S.; Sohn, K. No-Reference Quality Assessment for Stereoscopic Images Based on Binocular Quality Perception. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 591–602. [Google Scholar]
- Shao, F.; Lin, W.; Wang, S.; Jiang, G.; Yu, M. Blind Image Quality Assessment for Stereoscopic Images Using Binocular Guided Quality Lookup and Visual Codebook. IEEE Trans. Broadcast. 2015, 61, 154–165. [Google Scholar] [CrossRef]
- Shao, F.; Lin, W.; Wang, S.; Jiang, G.; Yu, M.; Dai, Q. Learning receptive fields and quality lookups for blind quality assessment of stereoscopic images. IEEE Trans. Cybern. 2016, 46, 730–743. [Google Scholar] [CrossRef] [PubMed]
- Su, C.C.; Cormack, L.K.; Bovik, A.C. Oriented Correlation Models of Distorted Natural Images With Application to Natural Stereopair Quality Evaluation. IEEE Trans. Image Process. 2015, 24, 1685–1699. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Zeng, K.; Wang, Z. Quality prediction of asymmetrically distorted stereoscopic images from single views. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Chengdu, China, 14–18 July 2014; pp. 1–6. [Google Scholar]
- Wang, J.; Rehman, A.; Zeng, K.; Wang, S.; Wang, Z. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images. IEEE Trans. Image Process. 2015, 24, 3400–3414. [Google Scholar] [CrossRef]
- Zhao, Y.; Zhang, Y.; Yu, L. Subjective study of binocular rivalry in stereoscopic images with transmission and compression artifacts. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013; pp. 132–135. [Google Scholar]
- Zhang, L.; Zhang, L.; Tao, D.; Huang, X.; Du, B. Compression of hyperspectral remote sensing images by tensor approach. Neurocomputing 2015, 147, 358–363. [Google Scholar] [CrossRef]
- Du, B.; Zhang, M.; Zhang, L.; Hu, R.; Tao, D. PLTD: Patch-Based Low-Rank Tensor Decomposition for Hyperspectral Images. IEEE Trans. Multimed. 2017, 19, 67–79. [Google Scholar] [CrossRef]
- Zhou, W.; Qiu, W.; Wu, M. Utilizing dictionary learning and machine learning for Blind Quality Assessment of 3-D Images. IEEE Trans. Broadcast. 2017, 63, 404–415. [Google Scholar] [CrossRef]
- Samek, W.; Binder, A.; Montavon, G.; Lapuschkin, S.; Müller, K. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2660–2673. [Google Scholar] [CrossRef] [Green Version]
- Zhou, M.; Li, S. Deformable Convolution Based No-Reference Stereoscopic Image Quality Assessment Considering Visual Feedback Mechanism. In Proceedings of the 2021 International Conference on Visual Communications and Image Processing (VCIP), Munich, Germany, 5–8 December 2021; pp. 1–5. [Google Scholar]
- Jinhui, F.; Li, S.; Chang, Y. No-Reference Stereoscopic Image Quality Assessment Considering Binocular Disparity and Fusion Compensation. In Proceedings of the 2021 International Conference on Visual Communications and Image Processing (VCIP), Munich, Germany, 5–8 December 2021; pp. 1–5. [Google Scholar]
- Bourbia, S.; Karine, A.; Chetouani, A.; Hassoun, M.E. A Multi-Task Convolutional Neural Network For Blind Stereoscopic Image Quality Assessment Using Naturalness Analysis. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 1434–1438. [Google Scholar]
- Sandić-Stanković, D.D.; Kukolj, D.D.; Callet, P.L. Quality Assessment of DIBR-Synthesized Views Based on Sparsity of Difference of Closings and Difference of Gaussians. IEEE Trans. Image Process. 2022, 31, 1161–1175. [Google Scholar] [CrossRef] [PubMed]
- Gu, K.; Tao, D.; Qiao, J.; Lin, W. Learning a no-reference quality assessment model of enhanced images with big data. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 1301–1313. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bosse, S.; Maniry, D.; Müller, K.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2018, 27, 206–219. [Google Scholar] [CrossRef] [Green Version]
- Oh, H.; Ahn, S.; Kim, J.; Lee, S. Blind deep S3D image quality evaluation via local to global feature aggregation. IEEE Trans. Image Process. 2017, 26, 4923–4936. [Google Scholar] [CrossRef]
- Hoa, D.K.; Dung, L.; Dzung, N.T. Efficient determination of disparity map from stereo images with modified sum of absolute differences (SAD) algorithm. In Proceedings of the 2013 International Conference on Advanced Technologies for Communications (ATC 2013), Ho Chi Minh City, Vietnam, 16–18 October 2013; pp. 657–660. [Google Scholar]
- Howard, I.P.; Rogers, B.J. Binocular Vision and Stereopsis; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
- Levelt, W.J.M. On Binocular Rivalry. Ph.D. Thesis, Leiden University, Leiden, The Netherlands, 1965. [Google Scholar]
- Hubel, D.H. The visual cortex of the brain. Sci. Am. 1963, 209, 54–63. [Google Scholar] [CrossRef]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
- Fang, R.; Al-Bayaty, R.; Wu, D. BNB Method for No-Reference Image Quality Assessment. IEEE Trans. Circuits Syst. Video Tech. 2017, 27, 1381–1391. [Google Scholar] [CrossRef]
- Sheikh, H.R.; Bovik, A.C.; de Veciana, G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans Image Process. 2005, 14, 2117–2128. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Fang, Y.; Ma, K.; Wang, Z.; Lin, W.; Fang, Z.; Zhai, G. No-Reference Quality Assessment of Contrast-Distorted Images Based on Natural Scene Statistics. IEEE Signal Process. Lett. 2015, 22, 838–842. [Google Scholar] [CrossRef]
- Md, S.K.; Appina, B.; Channappayya, S.S. Full-Reference Stereo Image Quality Assessment Using Natural Stereo Scene Statistics. IEEE Signal Process. Lett. 2015, 22, 1985–1989. [Google Scholar]
- Chang, C.C.; Lin, C.J. LIBSVM: A Library for Support Vector Machines. 2001. Available online: http://www.csie.ntu.edu.tw/cjlin/libsvm/ (accessed on 1 January 2020).
- Moorthy, A.K.; Su, C.-C.; Mittal, A.; Bovik, A.C. Subjective evaluation of stereoscopic image quality. Signal Process. Image Commun. 2013, 28, 870–883. [Google Scholar] [CrossRef]
- Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451. [Google Scholar] [CrossRef] [PubMed]
- Gholipour, A.; Araabi, B.N.; Lucas, C. Predicting chaotic time series using neural and neurofuzzy models: A comparative study. Neural Process. Lett. 2006, 24, 217–239. [Google Scholar] [CrossRef]
Algorithm | CYC | NSS | BD | BF | BR | G/P | RM | SD | TD |
---|---|---|---|---|---|---|---|---|---|
Gorley [22] | X | X | X | X | X | G | - | O | X |
You [35] | X | X | O | X | X | G | - | O | X |
Benoit [36] | X | X | O | X | X | G | - | O | X |
Chen [38] | O | X | O | O | O | G | - | O | X |
Chen [14] | O | O | O | O | O | G | SVR | O | X |
Akhter [41] | X | X | O | X | X | P | - | O | X |
Ryu [42] | X | X | X | O | O | G | - | O | O |
Wang [45] | X | X | O | X | X | P | SVR | X | O |
Shao [27] | X | X | O | O | O | P | - | X | O |
Shao [43] | X | X | O | O | O | P | - | X | O |
Shao [44] | X | X | X | X | X | P | SVR | X | O |
Fang [46] | O | O | O | X | O | G | NN | O | X |
Karimi [47] | O | O | O | O | X | G | NN | O | X |
Proposed | O | O | O | O | O | G | SVR | O | O |
Training/Testing | SROCC (LIVE Phase I) | PLCC (LIVE Phase I) | RMSE (LIVE Phase I) | SROCC (LIVE Phase II) | PLCC (LIVE Phase II) | RMSE (LIVE Phase II)
---|---|---|---|---|---|---
MTrain→Training set | 0.978 | 0.979 | 3.265 | 0.971 | 0.976 | 2.452 |
MTrain→Testing set | 0.948 | 0.958 | 4.807 | 0.933 | 0.939 | 3.929 |
MTest→Training set | 0.948 | 0.956 | 4.826 | 0.934 | 0.940 | 3.881 |
MTest→Testing set | 0.977 | 0.980 | 3.282 | 0.971 | 0.976 | 2.482 |
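For reference, the SROCC, PLCC, and RMSE values reported in these tables compare the objective predictions against the subjective DMOS of each database. Below is a minimal sketch (not the authors' code) of how the three criteria can be computed with SciPy/NumPy; whether a nonlinear logistic mapping is first applied to the predictions before PLCC/RMSE, as in Sheikh et al., is not restated here, so this sketch uses the raw predicted scores:

```python
# Minimal sketch: SROCC, PLCC, and RMSE between predicted quality scores
# and subjective DMOS values (assumes raw predictions, no logistic mapping).
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate(predicted, dmos):
    """Return (SROCC, PLCC, RMSE) for predicted scores vs. subjective DMOS."""
    predicted = np.asarray(predicted, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    srocc, _ = spearmanr(predicted, dmos)            # rank-order consistency
    plcc, _ = pearsonr(predicted, dmos)               # linear correlation
    rmse = np.sqrt(np.mean((predicted - dmos) ** 2))  # prediction error
    return srocc, plcc, rmse
```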
Training/Testing | SROCC (LIVE Phase I) | PLCC (LIVE Phase I) | RMSE (LIVE Phase I) | SROCC (LIVE Phase II) | PLCC (LIVE Phase II) | RMSE (LIVE Phase II)
---|---|---|---|---|---|---
WN→Training set | 0.962 | 0.967 | 4.234 | 0.950 | 0.954 | 3.386 |
WN→Testing set | 0.944 | 0.954 | 4.956 | 0.924 | 0.936 | 4.000 |
WN/F→Training set | 0.976 | 0.978 | 3.401 | 0.969 | 0.973 | 2.259 |
WN/F→Testing set | 0.952 | 0.962 | 4.493 | 0.940 | 0.950 | 3.546 |
Algorithm | SROCC (LIVE Phase I) | PLCC (LIVE Phase I) | RMSE (LIVE Phase I) | SROCC (LIVE Phase II) | PLCC (LIVE Phase II) | RMSE (LIVE Phase II)
---|---|---|---|---|---|---
Akhter [41] | 0.383 | 0.626 | 14.827 | 0.543 | 0.568 | 9.249 |
Chen [14] | 0.891 | 0.895 | 7.247 | 0.880 | 0.895 | 5.102 |
Wang [45] | 0.828 | 0.885 | 7.238 | 0.794 | 0.784 | 7.326 |
Shao [27] | 0.894 | 0.899 | - | - | - | - |
Shao [44] | 0.950 | 0.957 | - | - | - | - |
Fang [46] | 0.932 | 0.936 | - | 0.931 | 0.936 | - |
Proposed | 0.952 | 0.962 | 4.493 | 0.940 | 0.950 | 3.546 |
Algorithm | SROCC (UW/IVC Phase I) | PLCC (UW/IVC Phase I) | SROCC (UW/IVC Phase II) | PLCC (UW/IVC Phase II)
---|---|---|---|---
You [35] | 0.597 | 0.713 | 0.587 | 0.682 |
Chen [38] | 0.682 | 0.734 | 0.578 | 0.613 |
Mittal [58] | 0.845 | 0.869 | 0.794 | 0.849 |
Chen [14] | 0.708 | 0.715 | 0.547 | 0.551 |
Proposed | 0.906 | 0.919 | 0.852 | 0.863 |
SROCC per distortion type — LIVE Phase I (first six columns), LIVE Phase II (last six columns)

Algorithm | JP2K | JPEG | WN | BLUR | FF | ALL | JP2K | JPEG | WN | BLUR | FF | ALL
---|---|---|---|---|---|---|---|---|---|---|---|---
Benoit [36] | 0.910 | 0.603 | 0.930 | 0.931 | 0.699 | 0.899 | 0.751 | 0.867 | 0.923 | 0.455 | 0.773 | 0.728
You [35] | 0.860 | 0.439 | 0.940 | 0.882 | 0.588 | 0.878 | 0.894 | 0.795 | 0.909 | 0.813 | 0.891 | 0.786
Gorley [22] | 0.015 | 0.569 | 0.741 | 0.750 | 0.366 | 0.142 | 0.110 | 0.027 | 0.875 | 0.770 | 0.601 | 0.146
Chen [38] | 0.888 | 0.530 | 0.948 | 0.925 | 0.707 | 0.916 | 0.814 | 0.843 | 0.940 | 0.908 | 0.884 | 0.889
Shao [27] | 0.900 | 0.607 | 0.926 | 0.924 | - | 0.894 | - | - | - | - | - | -
Shao [44] | 0.936 | 0.818 | 0.935 | 0.927 | 0.814 | 0.950 | - | - | - | - | - | -
Akhter [41] | 0.914 | 0.866 | 0.675 | 0.555 | 0.640 | 0.383 | 0.714 | 0.724 | 0.649 | 0.682 | 0.559 | 0.543
Chen [14] | 0.919 | 0.863 | 0.617 | 0.878 | 0.652 | 0.891 | 0.950 | 0.867 | 0.867 | 0.900 | 0.933 | 0.880
Proposed | 0.937 | 0.779 | 0.959 | 0.921 | 0.851 | 0.952 | 0.946 | 0.903 | 0.831 | 0.912 | 0.939 | 0.940

PLCC per distortion type — LIVE Phase I (first six columns), LIVE Phase II (last six columns)

Algorithm | JP2K | JPEG | WN | BLUR | FF | ALL | JP2K | JPEG | WN | BLUR | FF | ALL
---|---|---|---|---|---|---|---|---|---|---|---|---
Benoit [36] | 0.939 | 0.640 | 0.925 | 0.948 | 0.747 | 0.902 | 0.784 | 0.853 | 0.926 | 0.535 | 0.807 | 0.748
You [35] | 0.877 | 0.487 | 0.941 | 0.919 | 0.730 | 0.881 | 0.905 | 0.830 | 0.912 | 0.784 | 0.915 | 0.800
Gorley [22] | 0.485 | 0.312 | 0.796 | 0.852 | 0.364 | 0.451 | 0.372 | 0.322 | 0.874 | 0.934 | 0.706 | 0.515
Chen [38] | 0.912 | 0.603 | 0.942 | 0.942 | 0.776 | 0.917 | 0.834 | 0.862 | 0.957 | 0.963 | 0.901 | 0.900
Shao [27] | 0.872 | 0.597 | 0.916 | 0.923 | - | 0.899 | - | - | - | - | - | -
Shao [44] | 0.949 | 0.796 | 0.938 | 0.986 | 0.837 | 0.957 | - | - | - | - | - | -
Akhter [41] | 0.904 | 0.905 | 0.729 | 0.617 | 0.503 | 0.626 | 0.722 | 0.776 | 0.786 | 0.795 | 0.674 | 0.568
Chen [14] | 0.917 | 0.907 | 0.695 | 0.917 | 0.735 | 0.895 | 0.947 | 0.899 | 0.901 | 0.941 | 0.932 | 0.895
Proposed | 0.958 | 0.801 | 0.971 | 0.965 | 0.883 | 0.962 | 0.974 | 0.922 | 0.858 | 0.977 | 0.949 | 0.950

RMSE per distortion type — LIVE Phase I (first six columns), LIVE Phase II (last six columns)

Algorithm | JP2K | JPEG | WN | BLUR | FF | ALL | JP2K | JPEG | WN | BLUR | FF | ALL
---|---|---|---|---|---|---|---|---|---|---|---|---
Benoit [36] | 4.426 | 5.022 | 6.307 | 4.571 | 8.257 | 7.061 | 6.096 | 3.787 | 4.028 | 11.763 | 6.894 | 7.490
You [35] | 6.206 | 5.709 | 5.621 | 5.679 | 8.492 | 7.746 | 4.186 | 4.086 | 4.396 | 8.649 | 4.649 | 6.772
Gorley [22] | 11.323 | 6.211 | 10.197 | 7.562 | 11.569 | 14.635 | 9.113 | 6.940 | 5.202 | 4.988 | 8.155 | 9.675
Chen [38] | 5.320 | 5.216 | 5.581 | 4.822 | 7.837 | 6.533 | 5.562 | 3.865 | 3.368 | 3.747 | 4.966 | 4.987
Shao [27] | - | - | - | - | - | - | - | - | - | - | - | -
Shao [44] | - | - | - | - | - | - | - | - | - | - | - | -
Akhter [41] | 7.092 | 5.483 | 4.273 | 11.387 | 9.332 | 14.827 | 7.416 | 6.189 | 4.535 | 8.450 | 8.505 | 9.249
Chen [14] | 6.433 | 5.402 | 4.523 | 5.898 | 8.322 | 7.247 | 3.513 | 4.298 | 3.342 | 4.725 | 4.180 | 5.102
Proposed | 3.938 | 3.908 | 3.912 | 4.055 | 5.812 | 4.493 | 2.575 | 3.773 | 2.843 | 3.207 | 3.606 | 3.546
Feature Set | SROCC (LIVE Phase I) | PLCC (LIVE Phase I) | RMSE (LIVE Phase I) | SROCC (LIVE Phase II) | PLCC (LIVE Phase II) | RMSE (LIVE Phase II)
---|---|---|---|---|---|---
PC | 0.868 | 0.879 | 7.315 | 0.736 | 0.775 | 7.057 |
PC+GM | 0.893 | 0.912 | 6.812 | 0.793 | 0.857 | 5.852 |
PC+GM+GR | 0.903 | 0.922 | 6.201 | 0.832 | 0.869 | 5.031 |
PC+GM+GR+ND | 0.931 | 0.943 | 5.815 | 0.874 | 0.891 | 4.764 |
PC+GM+GR+ND+NP | 0.938 | 0.948 | 5.128 | 0.907 | 0.918 | 4.197 |
PC+GM+GR+ND+NP+3DI | 0.948 | 0.957 | 4.914 | 0.932 | 0.939 | 3.867 |
Criteria | JP2K | JPEG | WN | BLUR | FF | ALL |
---|---|---|---|---|---|---|
SROCC | 0.864 | 0.573 | 0.888 | 0.883 | 0.872 | 0.802 |
PLCC | 0.859 | 0.581 | 0.908 | 0.959 | 0.889 | 0.822 |
RMSE | 5.029 | 5.967 | 4.625 | 3.948 | 5.261 | 6.427 |
Criteria | JP2K | JPEG | WN | BLUR | FF | ALL |
---|---|---|---|---|---|---|
SROCC | 0.886 | 0.540 | 0.879 | 0.915 | 0.787 | 0.867 |
PLCC | 0.928 | 0.573 | 0.889 | 0.944 | 0.835 | 0.872 |
RMSE | 4.843 | 5.359 | 7.618 | 4.779 | 6.845 | 8.014 |
Train \ Test | JP2K | JPEG | WN | BLUR | FF | ALL
---|---|---|---|---|---|---
JP2K | - | 0.758 | 0.911 | 0.936 | 0.664 | 0.898 |
JPEG | 0.586 | - | 0.926 | 0.764 | 0.498 | 0.746 |
WN | 0.805 | 0.552 | - | 0.888 | 0.724 | 0.848 |
BLUR | 0.878 | 0.553 | 0.921 | - | 0.737 | 0.889 |
FF | 0.838 | 0.470 | 0.933 | 0.884 | - | 0.874 |
Train \ Test | JP2K | JPEG | WN | BLUR | FF | ALL
---|---|---|---|---|---|---
JP2K | - | 0.757 | 0.945 | 0.754 | 0.906 | 0.793 |
JPEG | 0.943 | - | 0.812 | 0.693 | 0.700 | 0.758 |
WN | 0.863 | 0.588 | - | 0.621 | 0.833 | 0.778 |
BLUR | 0.945 | 0.386 | 0.922 | - | 0.883 | 0.818 |
FF | 0.930 | 0.782 | 0.922 | 0.841 | - | 0.887 |
Partition Method | SROCC (LIVE Phase I) | PLCC (LIVE Phase I) | RMSE (LIVE Phase I) | SROCC (LIVE Phase II) | PLCC (LIVE Phase II) | RMSE (LIVE Phase II)
---|---|---|---|---|---|---
Random-based partition | 0.952 | 0.962 | 4.493 | 0.940 | 0.950 | 3.546 |
Content-based partition | 0.908 | 0.928 | 6.752 | 0.867 | 0.866 | 6.685 |
Algorithm | SROCC (LIVE Phase I) | PLCC (LIVE Phase I) | RMSE (LIVE Phase I) | SROCC (LIVE Phase II) | PLCC (LIVE Phase II) | RMSE (LIVE Phase II)
---|---|---|---|---|---|---
SAD-based | 0.952 | 0.960 | 4.581 | 0.927 | 0.941 | 3.846 |
SSIM-based | 0.946 | 0.956 | 4.841 | 0.937 | 0.945 | 3.727 |
Improved SSIM-based | 0.952 | 0.962 | 4.493 | 0.940 | 0.950 | 3.546 |
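The "SSIM-based" rows above refer to disparity maps obtained by SSIM-driven block matching between the left and right views (Section 3.1). As an illustration of the general idea only, a minimal sketch follows; the window size, search range, data range, and border handling are assumptions made for this sketch, and the paper's improved SSIM-based variant is not reproduced here:

```python
# Illustrative sketch of SSIM-based block-matching disparity estimation.
# Assumes 8-bit grayscale left/right views; win and max_disp are illustrative.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def ssim_disparity(left, right, max_disp=32, win=11):
    """Estimate a left-to-right disparity map by maximizing windowed SSIM."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_s = 0, -1.0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                s = ssim(ref, cand, data_range=255)  # window-to-window similarity
                if s > best_s:
                    best_s, best_d = s, d
            disp[y, x] = best_d
    return disp
```

A sum-of-absolute-differences (SAD) matcher replaces the SSIM score with a negated absolute intensity difference over the same window, which corresponds to the "SAD-based" row.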
Steps | Time Ratios |
---|---|
SSIM-Based Matching and Disparity Generation | 36.37% |
Cyclopean Generation | 2.48% |
Spatial Domain Features | 1.95% |
Transform Domain Features | 7.42% |
3D Perceptual Features | 0.66% |
PCA and SVM | 51.12% |
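The "PCA and SVM" step above corresponds to the regression stage of Sections 4.2.1–4.2.3: PCA reduces the feature dimension and an RBF-kernel SVR maps the reduced features to a quality score, with (C, γ) selected by grid search. The authors use LIBSVM; the sketch below uses scikit-learn instead, and the number of retained components, the grid values, and the use of two folds inside the hyperparameter search are illustrative assumptions (the paper's two-fold cross-validation refers to the train/test split of the databases):

```python
# Sketch of a PCA + RBF-SVR quality regressor with (C, gamma) grid search.
# Component count and grid values are illustrative, not the paper's settings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def train_quality_regressor(features, dmos, n_components=20):
    """features: (n_images, n_features) array; dmos: subjective scores."""
    pipe = make_pipeline(StandardScaler(),
                         PCA(n_components=n_components),
                         SVR(kernel="rbf"))
    grid = {"svr__C": 2.0 ** np.arange(-5, 11, 2),
            "svr__gamma": 2.0 ** np.arange(-9, 4, 2)}
    search = GridSearchCV(pipe, grid, cv=2,
                          scoring="neg_root_mean_squared_error")
    search.fit(features, dmos)
    return search.best_estimator_  # predicts quality for unseen feature vectors
```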
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shen, L.; Yao, Y.; Geng, X.; Fang, R.; Wu, D. A Novel No-Reference Quality Assessment Metric for Stereoscopic Images with Consideration of Comprehensive 3D Quality Information. Sensors 2023, 23, 6230. https://doi.org/10.3390/s23136230