
Vision-Language Pre-Training: Basics, Recent Advances, and Future Trends

By Zhe Gan, Microsoft Corporation, USA, pkuganzhe@gmail.com | Linjie Li, Microsoft Corporation, USA | Chunyuan Li, Microsoft Corporation, USA | Lijuan Wang, Microsoft Corporation, USA | Zicheng Liu, Microsoft Corporation, USA | Jianfeng Gao, Microsoft Corporation, USA

 
Suggested Citation
Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu and Jianfeng Gao (2022), "Vision-Language Pre-Training: Basics, Recent Advances, and Future Trends", Foundations and Trends® in Computer Graphics and Vision: Vol. 14: No. 3–4, pp 163-352. http://dx.doi.org/10.1561/0600000105

Publication Date: 05 Dec 2022
© 2022 Z. Gan et al.
 
Subjects
Video analysis and event recognition,  Learning and statistical methods,  Object and scene recognition,  Image and video retrieval
 


Abstract

This monograph surveys vision-language pre-training (VLP) methods for multimodal intelligence that have been developed in the last few years. We group these approaches into three categories: (i) VLP for image-text tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding; (ii) VLP for core computer vision tasks, such as (open-set) image classification, object detection, and segmentation; and (iii) VLP for video-text tasks, such as video captioning, video-text retrieval, and video question answering. For each category, we present a comprehensive review of state-of-the-art methods, and discuss the progress that has been made and challenges still being faced, using specific systems and models as case studies. In addition, for each category, we discuss advanced topics being actively explored in the research community, such as big foundation models, unified modeling, in-context few-shot learning, knowledge, robustness, and computer vision in the wild, to name a few.

DOI:10.1561/0600000105
ISBN (print): 978-1-63828-132-0 | 204 pp. | $99.00
ISBN (e-book, PDF): 978-1-63828-133-7 | 204 pp. | $290.00
Table of contents:
1. Introduction
2. Tasks, Benchmarks, and Early Models
3. VLP for Image-Text Tasks
4. VLP for Core Vision Tasks
5. VLP for Video-Text Tasks
6. VL Systems in Industry
7. Conclusions and Research Trends
Acknowledgments
References

Vision-Language Pre-Training: Basics, Recent Advances, and Future Trends

Humans perceive the world through many channels, such as images viewed by the eyes, or voices heard by the ears. Though any individual channel might be incomplete or noisy, humans can naturally align and fuse information collected from multiple channels in order to grasp the key concepts needed for a better understanding of the world.

One of the core aspirations in Artificial Intelligence (AI) is to develop algorithms that endow computers with the ability to learn effectively from multimodal (or, multi-channel) data. Such data resembles the sights and sounds that humans obtain through vision and hearing to make sense of the world around them. For example, computers could mimic this ability by retrieving the images most relevant to a text query (or vice versa), and by describing the content of an image in natural language. Vision-and-Language (VL), a popular research area that sits at the nexus of Computer Vision and Natural Language Processing (NLP), aims to achieve this goal.
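To make the text-to-image retrieval example above concrete, the minimal sketch below scores a set of candidate images against a natural-language query using a contrastively pre-trained vision-language model. It is not taken from the monograph; it assumes the Hugging Face transformers library and the openly released CLIP checkpoint, and the image file names are placeholders.

# Illustrative sketch (not from the monograph): text-to-image retrieval with a
# contrastively pre-trained vision-language model (CLIP).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image files standing in for a gallery to search over.
image_paths = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]
images = [Image.open(p) for p in image_paths]
query = "a dog catching a frisbee on the beach"

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text has shape (num_queries, num_images); higher score = better match.
scores = outputs.logits_per_text.softmax(dim=-1)
best = scores.argmax(dim=-1).item()
print(f"Most relevant image for the query: {image_paths[best]}")

The same similarity scores can be read in the other direction (logits_per_image) to rank text descriptions for a given image, which is the image-to-text retrieval setting discussed in the monograph.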

This monograph surveys vision-language pre-training (VLP) methods for multimodal intelligence that have been developed in the last few years. Approaches are grouped into three categories: (i) VLP for image-text tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding; (ii) VLP for core computer vision tasks, such as (open-set) image classification, object detection, and segmentation; and (iii) VLP for video-text tasks, such as video captioning, video-text retrieval, and video question answering. For each category, a comprehensive review of state-of-the-art methods is presented, and the progress that has been made and challenges still being faced are discussed, using specific systems and models as case studies. In addition, for each category, advanced topics being actively explored in the research community are presented, such as big foundation models, unified modeling, in-context few-shot learning, knowledge, robustness, and computer vision in the wild, to name a few.

 
CGV-105