Learning to Detect Saliency with Deep Structure

Yu Hu, Zenghai Chen, Zheru Chi, Hong Fu

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

5 Citations (Scopus)


Deep learning has shown great success in solving various computer vision problems. To the best of our knowledge, however, little existing work applies deep learning to saliency modeling. In this paper, a new saliency model based on a convolutional neural network is proposed. The proposed model produces a saliency map directly from an image's pixels. In the model, multi-level output values are adopted to simulate the continuous values of a saliency map. Unlike most neural networks, which use a relatively small number of output nodes, the output layer of our model has a large number of nodes. To make training more efficient, an improved learning algorithm is adopted to train the model. Experimental results show that the proposed model generates acceptable saliency maps after proper training.
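The paper's architecture and training details are not given in this abstract, but the two ideas it highlights — one output node per saliency-map location, and continuous saliency values approximated by a fixed set of discrete levels — can be sketched in a minimal, illustrative forward pass. The kernel size, number of levels, and sigmoid squashing below are assumptions for illustration, not the authors' actual design:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive single-channel 2-D 'valid' convolution."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def saliency_map(img, kernel, levels=8):
    """One-layer stand-in for the network's forward pass: convolve,
    squash to (0, 1), then quantize each output node's activation into
    `levels` discrete values, mimicking the multi-level output scheme."""
    act = 1.0 / (1.0 + np.exp(-conv2d_valid(img, kernel)))   # sigmoid
    return np.round(act * (levels - 1)) / (levels - 1)       # multi-level

rng = np.random.default_rng(0)
img = rng.random((16, 16))                 # toy grayscale "image"
kernel = rng.standard_normal((3, 3)) * 0.1
smap = saliency_map(img, kernel, levels=8)
print(smap.shape)                          # one output node per map pixel
```

With 8 levels, every value in `smap` is a multiple of 1/7 in [0, 1], so a dense output layer (one node per pixel of the map) can represent a near-continuous saliency surface while keeping each node's target a small discrete set.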
Original language: English
Title of host publication: Proceedings - 2015 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2015
Number of pages: 6
ISBN (Electronic): 9781479986965
Publication status: Published - 12 Jan 2016
Event: IEEE International Conference on Systems, Man, and Cybernetics, SMC 2015 - City University of Hong Kong, Kowloon Tong, Hong Kong
Duration: 9 Oct 2015 - 12 Oct 2015


Conference: IEEE International Conference on Systems, Man, and Cybernetics, SMC 2015
Country/Territory: Hong Kong
City: Kowloon Tong


Keywords

  • convolutional neural network
  • deep learning
  • saliency detection
  • saliency map

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Energy Engineering and Power Technology
  • Information Systems and Management
  • Control and Systems Engineering


