Foundations and Trends® in Computer Graphics and Vision > Vol 15 > Issue 2

An Introduction to Neural Data Compression

By Yibo Yang, University of California, Irvine, USA, yibo.yang@uci.edu | Stephan Mandt, University of California, Irvine, USA, mandt@uci.edu | Lucas Theis, Google Research, USA, theis@google.com

 
Suggested Citation
Yibo Yang, Stephan Mandt and Lucas Theis (2023), "An Introduction to Neural Data Compression", Foundations and Trends® in Computer Graphics and Vision: Vol. 15: No. 2, pp 113-200. http://dx.doi.org/10.1561/0600000107

Publication Date: 25 Apr 2023
© 2023 Y. Yang, S. Mandt and L. Theis
 
Subjects
Variational inference,  Deep learning,  Information theory and computer science,  Data compression,  Rate-distortion theory,  Source coding,  Coding and compression,  Speech/audio/image/video compression
 

In this article:
1. Introduction
2. Lossless Compression
3. Lossy Compression
4. Discussion and Open Problems
Acknowledgements
References

Abstract

Neural compression is the application of neural networks and other machine learning methods to data compression. Recent advances in statistical machine learning have opened up new possibilities for data compression, allowing compression algorithms to be learned end-to-end from data using powerful generative models such as normalizing flows, variational autoencoders, diffusion probabilistic models, and generative adversarial networks. This monograph aims to introduce this field of research to a broader machine learning audience by reviewing the necessary background in information theory (e.g., entropy coding, rate-distortion theory) and computer vision (e.g., image quality assessment, perceptual metrics), and providing a curated guide through the essential ideas and methods in the literature thus far.

DOI:10.1561/0600000107
ISBN: 978-1-63828-174-0 (print)
ISBN: 978-1-63828-175-7 (e-book)
100 pp.

An Introduction to Neural Data Compression

The goal of data compression is to reduce the number of bits needed to represent useful information. Neural, or learned, compression is the application of neural networks and related machine learning techniques to this task. This monograph serves as an entry point for machine learning researchers interested in compression by reviewing the prerequisite background and representative methods in neural compression.

Rather than surveying the vast literature, it focuses on the essential concepts and methods in neural compression, written with a reader in mind who is versed in machine learning but not necessarily in data compression.
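To illustrate the link between probabilistic models and compression that the monograph builds on: given a probability model over symbols, the Shannon ideal code length of a symbol x is -log2 p(x) bits, and entropy coders (e.g., arithmetic coding) approach this rate. The sketch below uses a hypothetical toy model; in neural compression, the probabilities would instead be predicted by a learned model.

```python
import math

# Toy probability model over three symbols (hypothetical values).
# A neural compressor would predict these probabilities with a learned model.
model = {"a": 0.5, "b": 0.25, "c": 0.25}

def ideal_code_length(message, p):
    """Total Shannon ideal code length of `message` in bits.

    Each symbol s costs -log2 p(s) bits; an entropy coder such as
    arithmetic coding achieves this rate up to a small overhead.
    """
    return sum(-math.log2(p[s]) for s in message)

bits = ideal_code_length("aab", model)  # 1 + 1 + 2 = 4 bits
```

A better model assigns higher probability to the data actually seen, which directly shortens the code; this is why more expressive generative models translate into better compression rates.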

 
CGV-107