APSIPA Transactions on Signal and Information Processing > Vol 10 > Issue 1

TGHop: an explainable, efficient, and lightweight method for texture generation

Xuejing Lei (xuejing@usc.edu), Ganning Zhao, Kaitai Zhang, and C.-C. Jay Kuo, Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, USA
 
Suggested Citation
Xuejing Lei, Ganning Zhao, Kaitai Zhang and C.-C. Jay Kuo (2021), "TGHop: an explainable, efficient, and lightweight method for texture generation", APSIPA Transactions on Signal and Information Processing: Vol. 10: No. 1, e17. http://dx.doi.org/10.1017/ATSIP.2021.15

Publication Date: 27 Oct 2021
© 2021 Xuejing Lei, Ganning Zhao, Kaitai Zhang and C.-C. Jay Kuo
 
Keywords
Texture generation, Texture synthesis, Generative model, Successive subspace modeling
 

Open Access

This article is published under the terms of the Creative Commons Attribution licence.

In this article:
I. INTRODUCTION 
II. RELATED WORK 
III. SUCCESSIVE SUBSPACE ANALYSIS AND GENERATION 
IV. TGHop METHOD 
V. EXPERIMENTS 
VI. CONCLUSION AND FUTURE WORK 

Abstract

An explainable, efficient, and lightweight method for texture generation, called TGHop (an acronym of Texture Generation PixelHop), is proposed in this work. Although deep neural networks can synthesize visually pleasing textures, the associated models are large in size, difficult to explain theoretically, and computationally expensive to train. In contrast, TGHop has a small model size, is mathematically transparent, is efficient in training and inference, and is able to generate high-quality texture. Given an exemplary texture, TGHop first crops many sample patches out of it to form a collection called the source. Then, it analyzes the pixel statistics of samples from the source and obtains a sequence of fine-to-coarse subspaces for these patches using the PixelHop++ framework. To generate texture patches, TGHop begins with the coarsest subspace, called the core, and generates samples in each subspace by following the distribution of real samples. Finally, the generated patches are stitched to form texture images of a large size. Experimental results demonstrate that TGHop can generate texture images of superior quality with a small model size and at fast speed.
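The pipeline described in the abstract (crop a source of patches, analyze them into a coarse subspace, sample new coefficients following real-sample statistics, and map them back) can be illustrated with a minimal sketch. This is not the authors' method: the actual TGHop analysis uses the multi-stage, channel-wise PixelHop++ (Saab) transform, which is replaced here by a single PCA projection as a stand-in, and the per-dimension Gaussian sampling and function names are assumptions for illustration only.

```python
import numpy as np

def crop_patches(exemplar, patch_size=8, n_patches=500, rng=None):
    # Crop random patches from the exemplar texture; the resulting
    # collection plays the role of "the source" in the abstract.
    rng = rng or np.random.default_rng(0)
    H, W = exemplar.shape
    ys = rng.integers(0, H - patch_size, n_patches)
    xs = rng.integers(0, W - patch_size, n_patches)
    return np.stack([exemplar[y:y + patch_size, x:x + patch_size].ravel()
                     for y, x in zip(ys, xs)])

def fit_core_subspace(patches, core_dim=16):
    # PCA stand-in for the fine-to-coarse subspace analysis:
    # project source patches into a low-dimensional "core".
    mean = patches.mean(axis=0)
    X = patches - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:core_dim]            # coarse subspace basis vectors
    coeffs = X @ basis.T             # core coefficients of real samples
    return mean, basis, coeffs

def generate_patches(mean, basis, coeffs, n_gen=10, rng=None):
    # Sample new core coefficients following the statistics of the real
    # samples (here: an independent Gaussian fit per dimension, an
    # assumed simplification), then map back to pixel space.
    rng = rng or np.random.default_rng(1)
    mu, sigma = coeffs.mean(axis=0), coeffs.std(axis=0)
    z = rng.normal(mu, sigma, size=(n_gen, len(mu)))
    return z @ basis + mean

# Toy usage with a synthetic exemplar (real use would load a texture image).
exemplar = np.random.default_rng(42).random((64, 64))
patches = crop_patches(exemplar)
mean, basis, coeffs = fit_core_subspace(patches)
generated = generate_patches(mean, basis, coeffs)
```

Each generated row is one 8x8 patch in flattened form; a final stitching step (not sketched here) would tile such patches into a larger texture image.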

DOI:10.1017/ATSIP.2021.15

Companion

APSIPA Transactions on Signal and Information Processing Deep Neural Networks: Representation, Interpretation, and Applications: Articles Overview
See the other articles that are part of this special issue.