TY - JOUR
T1 - An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for linear programming
AU - Li, Xudong
AU - Sun, Defeng
AU - Toh, Kim Chuan
N1 - Funding Information:
Received by the editors March 22, 2019; accepted for publication (in revised form) June 11, 2020; published electronically September 8, 2020. https://doi.org/10.1137/19M1251795 Funding: The first author's research is supported by the National Natural Science Foundation of China (11901107), the Young Elite Scientists Sponsorship Program by CAST (2019QNRC001), and the Shanghai Sailing Program (19YF1402600). The second author's research is partially supported by The Hong Kong Polytechnic University under the 2017 Postdoctoral Fellowships Scheme. The third author's research is partially supported by the Academic Research Fund of the Ministry of Singapore under grant R146-000-257-112.
Publisher Copyright:
© 2020 Society for Industrial and Applied Mathematics
PY - 2020/9/8
Y1 - 2020/9/8
N2 - Powerful commercial solvers based on interior-point methods (IPMs), such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and on advanced matrix factorization techniques. For a large-scale LP problem whose data matrix A is dense (possibly structured), or whose corresponding normal matrix AA^T has a dense Cholesky factor (even with reordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy, i.e., the use of IPM solvers based on iterative methods, although it avoids the explicit computation of the coefficient matrix and its factorization, is often not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. While recent progress has been made in alleviating the ill-conditioning via sophisticated preconditioning techniques, the difficulty remains a challenging one. To provide a better alternative for solving large-scale LPs with dense data, or those requiring an expensive factorization of the normal equation, we propose a semismooth Newton based inexact proximal augmented Lagrangian (Snipal) method. Unlike classical IPMs, in each iteration of Snipal, iterative methods can be used efficiently to solve simpler yet better-conditioned semismooth Newton linear systems. Moreover, Snipal not only enjoys fast asymptotic superlinear convergence but is also proven to possess a finite termination property. Numerical comparisons with Gurobi have demonstrated the encouraging potential of Snipal for handling large-scale LP problems in which the constraint matrix A has a dense representation or AA^T has a dense factorization even with an appropriate reordering.
For a few large LP instances arising from correlation clustering, our algorithm can be up to 20-100 times faster than the barrier method implemented in Gurobi in solving the problems to an accuracy of 10^{-8} in the relative KKT residual. However, when tested on some large sparse LP problems available in the public domain, our algorithm is not yet practically competitive against the barrier method in Gurobi, especially when the latter can cheaply compute the Schur complement matrix and its sparse Cholesky factorization in each iteration.
KW - Augmented Lagrangian method
KW - Linear programming
KW - Semismooth Newton method
UR - http://www.scopus.com/inward/record.url?scp=85092001652&partnerID=8YFLogxK
U2 - 10.1137/19M1251795
DO - 10.1137/19M1251795
M3 - Journal article
AN - SCOPUS:85092001652
SN - 1052-6234
VL - 30
SP - 2410
EP - 2440
JO - SIAM Journal on Optimization
JF - SIAM Journal on Optimization
IS - 3
ER -