
Efficient Multi-stage Context Based Entropy Model for Learned Lossy Point Cloud Attribute Compression

Kai Wang, Shenzhen University, China; Pingping Zhang, City University of Hong Kong, Hong Kong; Shengjie Jiao, Shenzhen University, China; Hui Yuan, Shandong University, China; Shiqi Wang, City University of Hong Kong, Hong Kong; Xu Wang, Shenzhen University, China, wangxu@szu.edu.cn
 
Suggested Citation
Kai Wang, Pingping Zhang, Shengjie Jiao, Hui Yuan, Shiqi Wang and Xu Wang (2025), "Efficient Multi-stage Context Based Entropy Model for Learned Lossy Point Cloud Attribute Compression", APSIPA Transactions on Signal and Information Processing: Vol. 14: No. 2, e106. http://dx.doi.org/10.1561/116.20240051

Publication Date: 23 Apr 2025
© 2025 K. Wang, P. Zhang, S. Jiao, H. Yuan, S. Wang and X. Wang
 
Subjects
Data compression, Coding and compression, Speech/audio/image/video compression
 
Keywords
Point cloud compression, point cloud attribute compression, learned data compression
 

Open Access

This article is published under the terms of the Creative Commons Attribution-NonCommercial license (CC BY-NC).

In this article:
Introduction 
Related Works 
Proposed Method 
Experimental Results 
Conclusion 
References 

Abstract

The autoregressive entropy model achieves high compression efficiency by capturing intricate dependencies, but suffers from slow decoding due to its serial context dependencies. To address this, we propose ParaPCAC, a lossy Parallel Point Cloud Attribute Compression scheme designed to improve the efficiency of the autoregressive entropy model. Our approach consists of two main components: a parallel decoding strategy and a multi-stage context-based entropy model. In the parallel decoding strategy, we partition the voxels of the quantized latent features into non-overlapping groups for independent context entropy modeling, enabling parallel processing. The multi-stage context-based entropy model decodes neighboring features concurrently, utilizing the features already decoded at each stage. A global hyperprior is incorporated after the first stage to improve the estimation of attribute probabilities. Through these two techniques, ParaPCAC achieves significant decoding speedups, with an acceleration of up to 160× and a 24.15% BD-Rate reduction compared to serial autoregressive entropy models. Furthermore, experimental results demonstrate that ParaPCAC outperforms existing learning-based methods in both rate-distortion performance and decoding latency.
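The abstract outlines the decoding pipeline: latent voxels are split into non-overlapping groups, each stage decodes one group in parallel conditioned on the groups decoded so far, and a global hyperprior refines the probability estimates after the first stage. The minimal PyTorch-style sketch below illustrates that control flow only; the components context_net, hyper_net and entropy_decode are hypothetical stand-ins for the paper's networks and arithmetic coder, and the simple index-based grouping is an assumption, not the authors' actual partitioning.

import torch

def partition_groups(num_voxels: int, num_stages: int):
    # Split voxel indices into non-overlapping groups, one per decoding stage.
    order = torch.arange(num_voxels)             # fixed traversal order shared by encoder and decoder
    return list(torch.chunk(order, num_stages))  # disjoint index groups

def decode_multistage(bitstream, groups, context_net, hyper_net, entropy_decode,
                      latent_shape):
    # Decode quantized latents group by group; voxels within a group are modeled
    # independently given the available context, so each stage can decode in parallel.
    y_hat = torch.zeros(latent_shape)            # buffer for decoded latents, shape [C, N]
    hyper = None
    for stage, idx in enumerate(groups):
        if stage == 0:
            params = context_net.initial_params(idx)       # no decoded neighbors yet
        else:
            params = context_net(y_hat, idx, hyper)        # decoded context + global hyperprior
        y_hat[:, idx] = entropy_decode(bitstream, params)  # all voxels of this group at once
        if stage == 0:
            hyper = hyper_net(bitstream)                   # hyperprior incorporated after the first stage
    return y_hat

Because every group is decoded only from information that is already available, the per-stage entropy decoding can be batched rather than performed voxel by voxel, which is the source of the reported speedup over strictly serial autoregressive decoding.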

DOI:10.1561/116.20240051

Companion

APSIPA Transactions on Signal and Information Processing Special Issue - Three-dimensional Point Cloud Data Modeling, Processing, and Analysis
See the other articles that are part of this special issue.