Access Restriction Open

Author Zavala, Victor M. ♦ Anitescu, Mihai
Source CiteSeerX
Content type Text
File Format PDF
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Iterative Linear Algebra ♦ Nonlinear Programming ♦ Exact Differentiable Penalty Function ♦ Parametric Optimization Problem ♦ Scalable Computation ♦ Derive General Convergence Result ♦ Numerical Study ♦ Latency-limited Environment ♦ Efficient Warm-starting ♦ Trust-region Newton Technique ♦ Mixed-integer Nonlinear Programming ♦ Superlinearly Convergent ♦ Primal-dual Merit Function ♦ Fast Detection ♦ Direct Minimization ♦ Algorithmic Approach ♦ Negative Curvature ♦ Newton Step ♦ Model Predictive Control
Abstract We present an approach to nonlinear programming (NLP) based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques. In contrast to existing algorithmic approaches to NLP, this approach provides all the features required for scalability: it can efficiently detect and exploit directions of negative curvature, it is superlinearly convergent, and it enables scalable computation of the Newton step through iterative linear algebra. Moreover, it has features that are desirable for parametric optimization problems that must be solved in a latency-limited environment, as is the case for model predictive control and mixed-integer nonlinear programming. These features are fast detection of activity, efficient warm-starting, and progress on a primal-dual merit function at every iteration. We derive general convergence results for our approach and demonstrate its behavior through numerical studies.
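To make the general idea concrete, the following is a minimal sketch of minimizing a differentiable penalty function with a trust-region Newton method. Note the assumptions: this uses a simple quadratic penalty on a toy equality-constrained problem, not the exact penalty function of the paper, and SciPy's `trust-ncg` solver stands in for the authors' method; the penalty parameter `rho` is an arbitrary illustrative value. It does, however, share one trait highlighted in the abstract: the Newton step is computed iteratively (by conjugate gradients) rather than by a direct factorization.

```python
import numpy as np
from scipy.optimize import minimize

# Toy NLP: minimize f(x) = x0^2 + x1^2  subject to  c(x) = x0 + x1 - 1 = 0.
# We fold the constraint into a smooth quadratic penalty (an illustrative
# stand-in for the exact differentiable penalty in the paper) and minimize
# it unconstrained with a trust-region Newton-CG method.

rho = 100.0  # penalty parameter (assumed value for illustration)

def penalty(x):
    c = x[0] + x[1] - 1.0
    return x[0]**2 + x[1]**2 + 0.5 * rho * c**2

def grad(x):
    c = x[0] + x[1] - 1.0
    return np.array([2.0 * x[0] + rho * c, 2.0 * x[1] + rho * c])

def hess(x):
    # Constant Hessian for this quadratic penalty.
    return np.array([[2.0 + rho, rho], [rho, 2.0 + rho]])

# 'trust-ncg' solves each trust-region subproblem with conjugate gradients,
# i.e., the Newton step is obtained via iterative linear algebra.
res = minimize(penalty, np.zeros(2), jac=grad, hess=hess, method="trust-ncg")
print(res.x)  # near the constrained minimizer (0.5, 0.5); exact as rho grows
```

For this quadratic penalty the minimizer is `rho / (2 * (1 + rho))` in each coordinate, so larger `rho` drives the iterate toward the constrained solution (0.5, 0.5); an exact penalty, by contrast, recovers the constrained solution at a finite penalty parameter.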
Educational Role Student ♦ Teacher
Age Range above 22 years
Educational Use Research
Education Level UG and PG ♦ Career/Technical Study
Learning Resource Type Article