
Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving Vision Transformer

Teru Nagamori, Sayaka Shiota and Hitoshi Kiya, Tokyo Metropolitan University, Japan (kiya@tmu.ac.jp)
 
Suggested Citation
Teru Nagamori, Sayaka Shiota and Hitoshi Kiya (2024), "Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving Vision Transformer", APSIPA Transactions on Signal and Information Processing: Vol. 13: No. 1, e8. http://dx.doi.org/10.1561/116.00000012

Publication Date: 08 Apr 2024
© 2024 T. Nagamori, S. Shiota and H. Kiya
 
Subjects
Classification and prediction, Deep learning, Privacy-preserving systems
 
Keywords
Privacy-preserving, Domain adaptation, Image classification, Vision transformer
 

Open Access

This article is published under the terms of the CC BY-NC license.

In this article:
Introduction 
Related Work 
Proposed Method 
Experiments 
Conclusion 
References 

Abstract

We propose a novel method for privacy-preserving deep neural networks (DNNs) with the Vision Transformer (ViT). The method allows us not only to train and test models with visually protected images but also to avoid the performance degradation caused by the use of encrypted images, an influence that conventional methods cannot avoid. A domain adaptation method is used to efficiently fine-tune ViT with encrypted images. In experiments, the method is demonstrated to outperform conventional methods in classification accuracy on an image classification task with the CIFAR-10 and ImageNet datasets.
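For intuition, the sketch below illustrates the general setting in PyTorch: a key-based, block-wise scrambling aligned with the ViT patch grid stands in for the visual image protection, and a pre-trained ViT is fine-tuned directly on the encrypted images. The `block_scramble` function, the ViT-B/16 backbone, the replaced classification head, and the key value are all illustrative assumptions, not the paper's exact encryption scheme, and the sketch omits the domain adaptation step that the proposed method adds to avoid the accuracy loss.

```python
# Illustrative sketch only: key-based block scrambling aligned with the ViT
# patch grid, followed by plain fine-tuning on the encrypted images.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

def block_scramble(images: torch.Tensor, key: int, patch: int = 16) -> torch.Tensor:
    """Permute the (patch x patch) blocks of each image with a key-derived order."""
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    gen = torch.Generator().manual_seed(key)          # secret key -> fixed permutation
    perm = torch.randperm(gh * gw, generator=gen)
    # Cut the image into blocks, permute them, and stitch the image back together.
    blocks = images.unfold(2, patch, patch).unfold(3, patch, patch)  # (B, C, gh, gw, p, p)
    blocks = blocks.reshape(b, c, gh * gw, patch, patch)[:, :, perm]
    blocks = blocks.reshape(b, c, gh, gw, patch, patch)
    return blocks.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)

# Pre-trained ViT-B/16; its 16x16 patch embedding matches the block size above.
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = nn.Linear(768, 10)                      # e.g., a 10-class CIFAR-10 head
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.rand(4, 3, 224, 224)                        # toy batch of plain images
y = torch.randint(0, 10, (4,))
x_enc = block_scramble(x, key=42)                     # train and test on encrypted images
loss = criterion(model(x_enc), y)
loss.backward()
optimizer.step()
```

Aligning the encryption block size with the ViT patch size is what makes patch-based models a natural fit for this kind of visual protection in this line of work: each scrambled block still maps onto exactly one patch embedding.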

DOI: 10.1561/116.00000012