Constrained NLP via gradient flow penalty continuation: Towards self-tuning robust penalty schemes
Publication Date
2017-06-09
Journal Title
Computers and Chemical Engineering
ISSN
0098-1354
Publisher
Elsevier
Volume
101
Pages
243-258
Language
English
Type
Article
This Version
AM (Accepted Manuscript)
Citation
Scott, F., Conejeros, R., & Vassiliadis, V. (2017). Constrained NLP via gradient flow penalty continuation: Towards self-tuning robust penalty schemes. Computers and Chemical Engineering, 101, 243-258. https://doi.org/10.1016/j.compchemeng.2017.01.034
Abstract
This work presents a new numerical solution approach to nonlinear constrained optimization problems based on a gradient flow (GF) reformulation. The proposed solution schemes use self-tuning penalty parameters, where the updating of the penalty parameter is embedded directly in the system of ODEs used in the reformulation and its growth rate is linked to the violation of the constraints and variable bounds. The convergence properties of these schemes are analyzed, and it is shown that they converge asymptotically to a local minimum. Numerical experiments on a set of test problems, ranging from a few to several hundred variables, show that the proposed schemes are robust and converge to feasible points and local minima. Moreover, the results suggest that the GF formulations are able to find the optimal solution to problems where conventional NLP solvers fail, and do so in fewer integration steps and less time than a previously reported GF formulation.
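To illustrate the core mechanism described in the abstract, the following is a minimal sketch of a gradient flow with an embedded self-tuning penalty parameter, applied to a small equality-constrained toy problem. The quadratic penalty, the growth law dM/dt = C_GROWTH * |h(x)|, the constant C_GROWTH, and the test problem are illustrative assumptions for this sketch, not the formulation from the paper.

```python
# Minimal sketch of a gradient-flow penalty scheme with a self-tuning
# penalty parameter M, in the spirit of the abstract above. The quadratic
# penalty, the growth law dM/dt = C_GROWTH * |h(x)|, and the toy problem
# are illustrative assumptions, not the paper's exact formulation.
import numpy as np
from scipy.integrate import solve_ivp

# Toy problem: minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
# Analytic solution: x* = (0.5, 0.5).
def f_grad(x):
    return 2.0 * x

def h(x):
    return x[0] + x[1] - 1.0

def h_grad(x):
    return np.array([1.0, 1.0])

C_GROWTH = 10.0  # assumed constant linking the growth of M to the violation

def flow(t, z):
    x, M = z[:2], z[2]
    viol = h(x)
    # Gradient flow on the penalized objective f(x) + (M/2) * h(x)^2 ...
    dx = -(f_grad(x) + M * viol * h_grad(x))
    # ... with M growing as long as the constraint remains violated, so the
    # penalty continuation is embedded in the same ODE system as the flow.
    dM = C_GROWTH * abs(viol)
    return np.concatenate([dx, [dM]])

z0 = np.array([2.0, -1.0, 1.0])  # initial point (2, -1) and initial penalty M = 1
sol = solve_ivp(flow, (0.0, 500.0), z0, rtol=1e-8, atol=1e-10)

x_final, M_final = sol.y[:2, -1], sol.y[2, -1]
print("x   =", x_final)           # approaches (0.5, 0.5) asymptotically
print("M   =", M_final)           # penalty parameter grown during integration
print("|h| =", abs(h(x_final)))   # constraint violation driven toward zero
```

The design point this sketch tries to capture is that the penalty parameter is part of the ODE state itself, so a single numerical integration performs both the descent and the penalty continuation, with no outer loop of manual penalty updates.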
Keywords
gradient flow, nonlinear programming problem, convergence analysis
Identifiers
External DOI: https://doi.org/10.1016/j.compchemeng.2017.01.034
This record's URL: https://www.repository.cam.ac.uk/handle/1810/263379
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International