Abstract
A neural network based on smoothing approximation is presented for a class of nonsmooth, nonconvex constrained optimization problems in which the objective function is nonsmooth and nonconvex, the equality constraint functions are linear, and the inequality constraint functions are nonsmooth and convex. The approach finds a Clarke stationary point of the optimization problem by following a continuous path defined by the solution of an ordinary differential equation. Global convergence is guaranteed if either the feasible set is bounded or the objective function is level bounded. In particular, the proposed network does not require: 1) the initial point to be feasible; 2) a penalty parameter to be chosen exactly in advance; 3) a differential inclusion to be solved. Numerical experiments and comparisons with existing algorithms illustrate the theoretical results and demonstrate the efficiency of the proposed network.
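The core idea of following an ODE trajectory built from a smooth approximation of a nonsmooth function can be illustrated with a minimal sketch. The code below is an assumption-laden toy example, not the paper's network: it minimizes the nonsmooth function f(x) = ||x||_1 by forward-Euler integration of the gradient flow of the smoothed surrogate f_mu(x) = sum_i sqrt(x_i^2 + mu^2), driving the smoothing parameter mu toward zero along the path. The function names, step size, and decay schedule are all illustrative choices.

```python
import numpy as np

def grad_smoothed_l1(x, mu):
    # Gradient of the smooth surrogate f_mu(x) = sum_i sqrt(x_i^2 + mu^2),
    # which approximates the nonsmooth l1 norm as mu -> 0.
    return x / np.sqrt(x**2 + mu**2)

def smoothing_gradient_flow(x0, mu0=1.0, h=0.05, steps=2000, decay=0.999):
    """Forward-Euler integration of the ODE dx/dt = -grad f_mu(x),
    shrinking the smoothing parameter mu along the trajectory."""
    x = np.asarray(x0, dtype=float)
    mu = mu0
    for _ in range(steps):
        x = x - h * grad_smoothed_l1(x, mu)
        mu *= decay  # smoothing parameter decreases toward zero
    return x

# The trajectory approaches the minimizer x = 0 of ||x||_1.
x_star = smoothing_gradient_flow([2.0, -1.5])
print(np.linalg.norm(x_star))
```

In the paper's setting the right-hand side of the ODE additionally accounts for the linear equality and convex inequality constraints; this sketch only conveys how a smoothing approximation turns a nonsmooth problem into one a gradient-flow ODE can follow.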
| Original language | English |
|---|---|
| Article number | 6627988 |
| Pages (from-to) | 545-556 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 25 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 1 Mar 2014 |
Keywords
- Clarke stationary point
- condition number
- neural network
- nonsmooth nonconvex optimization
- smoothing approximation
- variable selection
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence