Panoramic Video Quality Assessment Based on Spatial-Temporal Convolutional Neural Networks
Publication Date
1-1-2023
Document Type
Conference Proceeding
Publication Title
Signal and Information Processing, Networking and Computers: Proceedings of the 8th International Conference on Signal and Information Processing, Networking and Computers (ICSINC)
DOI
10.1007/978-981-19-3387-5_161
First Page
1348
Last Page
1356
Abstract
The development of 5G technology and Ultra HD video provides the basis for panoramic video, also known as virtual reality (VR) video. At present, traditional video quality assessment (VQA) methods are not effective on panoramic video, so it is crucial to design objective VQA models for the standardization of the panoramic video industry. With the development of deep learning, effective VQA algorithms based on convolutional neural networks have emerged. In this paper, we propose a full-reference VQA model based on a spatial-temporal 3D convolutional neural network, in which feature extraction combines temporal and spatial information. We verify and optimize the proposed model on the VQA-ODV panoramic video database; its objective scores show a higher correlation with subjective scores than those of traditional VQA methods.
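To illustrate the general idea of a full-reference, spatial-temporal 3D-CNN quality model as described in the abstract, the following is a minimal PyTorch sketch. The layer sizes, the use of the reference-distorted difference as network input, and the class name FR3DQualityNet are illustrative assumptions, not the architecture from the paper.

# Minimal sketch (assumed architecture, not the authors' model):
# a 3D CNN extracts joint spatial-temporal features and regresses a quality score.
import torch
import torch.nn as nn

class FR3DQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions operate jointly over time (frames) and space (H, W),
        # so the extracted features mix temporal and spatial information.
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling over time and space
        )
        self.regressor = nn.Linear(32, 1)  # map features to a single quality score

    def forward(self, reference, distorted):
        # Full-reference setting: the distorted clip is compared against its
        # pristine reference; here a per-pixel difference is one simple choice.
        x = distorted - reference            # shape: (batch, 3, frames, H, W)
        f = self.features(x).flatten(1)      # spatial-temporal feature vector
        return self.regressor(f).squeeze(1)  # predicted objective quality score

# Usage example: score one 16-frame 64x64 clip pair with simulated distortion.
if __name__ == "__main__":
    model = FR3DQualityNet()
    ref = torch.rand(1, 3, 16, 64, 64)
    dist = ref + 0.05 * torch.randn_like(ref)
    print(model(ref, dist))

In practice such a model would be trained so that its predicted scores correlate with subjective quality ratings, e.g. on a panoramic video database such as VQA-ODV.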
Keywords
CNN, FR-VQA, Panoramic video, Video quality assessment
Department
Economics
Recommended Citation
Tingting An, Songlin Sun, and Rui Liu. "Panoramic Video Quality Assessment Based on Spatial-Temporal Convolutional Neural Networks" Signal and Information Processing, Networking and Computers: Proceedings of the 8th International Conference on Signal and Information Processing, Networking and Computers (ICSINC) (2023): 1348-1356. https://doi.org/10.1007/978-981-19-3387-5_161