Penalty methods for a class of non-Lipschitz optimization problems

Research output: Journal article › Academic research › peer-review

39 Citations (Scopus)

Abstract

We consider a class of constrained optimization problems with a possibly nonconvex non-Lipschitz objective and a convex feasible set being the intersection of a polyhedron and a possibly degenerate ellipsoid. Such problems have a wide range of applications in data science, where the objective is used for inducing sparsity in the solutions while the constraint set models the noise tolerance and incorporates other prior information for data fitting. To solve this class of constrained optimization problems, a common approach is the penalty method. However, there is little theory on exact penalization for problems with nonconvex and non-Lipschitz objective functions. In this paper, we study the existence of exact penalty parameters regarding local minimizers, stationary points, and ϵ-minimizers under suitable assumptions. Moreover, we discuss a penalty method whose subproblems are solved via a nonmonotone proximal gradient method with a suitable update scheme for the penalty parameters and prove the convergence of the algorithm to a KKT point of the constrained problem. Preliminary numerical results demonstrate the efficiency of the penalty method for finding sparse solutions of underdetermined linear systems.
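For concreteness, the sketch below illustrates a penalty scheme of the flavor described in the abstract, for the model problem min ||x||_{1/2}^{1/2} subject to ||Ax − b||_2 ≤ σ. It is not the authors' algorithm: the inner solver is a plain (monotone) proximal gradient loop rather than the paper's nonmonotone scheme, the penalty term is the squared distance of Ax to the constraint ball, and the ℓ_{1/2} proximal step uses the closed-form half-thresholding operator of Xu et al. (2012). All names and parameter choices (penalty_method, lam, rho, the stopping tolerances) are illustrative assumptions, not anything specified in the paper.

import numpy as np

def prox_half(v, tau):
    # Componentwise prox of tau * ||x||_{1/2}^{1/2}, i.e. the minimizer of
    # (1/2)(x - v)^2 + tau*|x|^{1/2}: the "half thresholding" closed form
    # of Xu et al. (2012), rescaled for the 1/2-weighted quadratic.
    out = np.zeros_like(v)
    thresh = 1.5 * tau ** (2.0 / 3.0)  # below this magnitude the prox is 0
    big = np.abs(v) > thresh
    t = v[big]
    phi = np.arccos((tau / 4.0) * (np.abs(t) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * t * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def penalty_method(A, b, sigma, lam=1.0, rho=10.0, tol=1e-6,
                   outer=30, inner=500):
    # Quadratic-penalty sketch for: min ||x||_{1/2}^{1/2} s.t. ||Ax - b||_2 <= sigma.
    # Each subproblem min ||x||_{1/2}^{1/2} + (lam/2)*dist(Ax, B(b, sigma))^2
    # is solved approximately by proximal gradient; lam grows until the
    # iterate is (nearly) feasible, warm-starting from the previous solve.
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2  # ||A||^2 bounds the smooth part's curvature
    for _ in range(outer):
        step = 1.0 / (lam * L)
        for _ in range(inner):
            r = A @ x - b
            nr = np.linalg.norm(r)
            if nr > sigma:
                # gradient of (lam/2)*dist(Ax, {y: ||y - b|| <= sigma})^2
                grad = lam * (A.T @ (r * (1.0 - sigma / nr)))
            else:
                grad = np.zeros(n)
            x_new = prox_half(x - step * grad, step)
            if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                x = x_new
                break
            x = x_new
        if np.linalg.norm(A @ x - b) <= sigma + tol:
            break      # feasible enough: accept the current iterate
        lam *= rho     # otherwise tighten the penalty and re-solve
    return x

# Example use on a synthetic sparse-recovery instance:
#   A = np.random.randn(40, 100); x0 = np.zeros(100); x0[:5] = 1.0
#   b = A @ x0 + 0.01 * np.random.randn(40)
#   x = penalty_method(A, b, sigma=0.05)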
Original language: English
Pages (from-to): 1465-1492
Number of pages: 28
Journal: SIAM Journal on Optimization
Volume: 26
Issue number: 3
DOIs
Publication status: Published - 19 Jul 2016

Keywords

  • Exact penalty
  • Non-Lipschitz optimization
  • Nonconvex optimization
  • Proximal gradient method
  • Sparse solution

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
