Visual Tuning

Bruce X.B. Yu, Jianlong Chang, Haixin Wang, Lingbo Liu, Shijie Wang, Zhiyu Wang, Junfan Lin, Lingxi Xie, Haojie Li, Zhouchen Lin, Qi Tian, Chang Wen Chen

Research output: Journal article (Academic research, peer-reviewed)

25 Citations (Scopus)

Abstract

Fine-tuning visual models has been widely shown to achieve promising performance on many downstream visual tasks. With the rapid development of pre-trained visual foundation models, visual tuning has moved beyond the standard modus operandi of fine-tuning the whole pre-trained model or just the fully connected layer. Instead, recent advances can achieve performance superior to full fine-tuning of all pre-trained parameters while updating far fewer parameters, enabling edge devices and downstream applications to reuse the increasingly large foundation models deployed on the cloud. With the aim of helping researchers get the full picture and future directions of visual tuning, this survey characterizes a large and thoughtful selection of recent works, providing a systematic and comprehensive overview of existing work and models. Specifically, it provides a detailed background of visual tuning and categorizes recent visual tuning techniques into five groups: fine-tuning, prompt tuning, adapter tuning, parameter tuning, and remapping tuning. Meanwhile, it offers some exciting research directions for prospective pre-training and various interactions in visual tuning.
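The abstract's central claim is that methods such as adapter tuning update far fewer parameters than full fine-tuning. A minimal back-of-the-envelope sketch makes the scale concrete; the dimensions below are illustrative assumptions for a ViT-Base-like backbone and a common adapter bottleneck size, not figures taken from the survey:

```python
# Hedged sketch: rough parameter counts for adapter tuning on a
# hypothetical ViT-Base-like backbone. All dimensions are assumptions
# chosen for illustration, not values from the survey.
d_model = 768      # hidden size of the frozen backbone (assumed)
n_layers = 12      # number of transformer blocks (assumed)
bottleneck = 64    # adapter down-projection size (a common choice)

# Approximate per-layer backbone parameters:
# attention projections (~4 * d^2) + MLP (~8 * d^2), ignoring biases/norms.
backbone_params = n_layers * (4 * d_model**2 + 8 * d_model**2)

# Each adapter adds a down-projection (d x r), an up-projection (r x d),
# and their biases; only these are trained while the backbone stays frozen.
adapter_params = n_layers * (2 * d_model * bottleneck + d_model + bottleneck)

ratio = adapter_params / backbone_params
print(f"trainable fraction: {ratio:.2%}")  # roughly 1-2% of backbone size
```

Under these assumptions the adapters amount to only a percent or two of the backbone's parameters, which is the kind of saving that lets many downstream tasks share one frozen cloud-hosted foundation model.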

Original language: English
Article number: 297
Pages (from-to): 1-38
Journal: ACM Computing Surveys
Volume: 56
Issue number: 12
Publication status: Published - 25 Jul 2024

Keywords

  • fine-tuning
  • foundation model
  • parameter-efficient
  • pre-training

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
