Abstract: This paper presents an overview of gradient-based methods for the minimization of noisy functions. It is assumed that the objective function is either given with stochastic error terms or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of a stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
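A minimal sketch of the stochastic gradient idea mentioned above, applied to a toy least-squares data-fitting problem. This is an illustration only, not code from the paper; the problem, step size, and sample data are all chosen here for the example.

```python
import random

# Illustrative sketch: stochastic gradient descent for the data-fitting
# problem min_x E[(a*x - b)^2], where each drawn sample (a, b) yields a
# noisy estimate of the true gradient.

def sgd(samples, x0=0.0, step=0.05, iters=2000, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        a, b = rng.choice(samples)       # draw one observation
        grad = 2.0 * a * (a * x - b)     # stochastic gradient of (a*x - b)^2
        x -= step * grad                 # descent step along the estimate
    return x

# Fit x in a*x ≈ b for data generated with true slope 3.
data = [(a, 3.0 * a) for a in [1.0, 2.0, 0.5, 1.5]]
x_hat = sgd(data)
```

With noiseless data and a fixed step size the iterates contract toward the true slope; with genuinely noisy samples one would use a diminishing step-size sequence, as in classical Stochastic Approximation.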

Abstract: This is a short tutorial on complexity studies for differentiable convex optimization. A complexity study is made for a class of problems, an "oracle" that obtains information about the problem at a given point, and a stopping rule for algorithms. These three items compose a scheme, for which we study the performance of algorithms and problem complexity. Our problem classes will be quadratic minimization and convex minimization in ℝ^n. The oracle will always be first order. We study the performance of steepest descent and Krylov space methods for quadratic function minimization, and Nesterov’s approach to the minimization of differentiable convex functions.
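A small sketch of one of the schemes discussed above: steepest descent with exact line search for a quadratic, where the first-order oracle returns the gradient at the current point. This is an illustration written for this summary, not an excerpt from the tutorial; the matrix and right-hand side are example data.

```python
# Illustrative sketch: steepest descent with exact line search for the
# quadratic min_x 0.5*x^T A x - b^T x. The first-order oracle returns
# the gradient g = A x - b at each query point.

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    x = list(x0)
    for _ in range(max_iter):
        g = [gi - bi for gi, bi in zip(matvec(A, x), b)]  # oracle call
        if dot(g, g) < tol ** 2:                          # stopping rule
            break
        Ag = matvec(A, g)
        alpha = dot(g, g) / dot(g, Ag)  # exact line search along -g
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

A = [[3.0, 1.0], [1.0, 2.0]]   # symmetric positive definite example
b = [1.0, 1.0]
x_star = steepest_descent(A, b, [0.0, 0.0])
```

On symmetric positive definite quadratics this method converges linearly at a rate governed by the condition number, which is exactly the kind of performance bound a complexity study quantifies.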

Abstract: Second-order methods for optimization call for the solution of sequences of linear systems. In this survey we will discuss several issues related to the preconditioning of such sequences. Covered topics include both techniques for building updates of factorized preconditioners and quasi-Newton approaches. Sequences of unsymmetric linear systems arising in Newton-Krylov methods will be considered, as well as symmetric positive definite sequences arising in the solution of nonlinear least-squares by Truncated Gauss-Newton methods.

Abstract: The aim of this text is to highlight recent advances of trust-region-based methods for nonlinear programming and to put them into perspective. An algorithmic framework lays the groundwork by presenting the main ideas of these methods and the related notation. Specific approaches concerned with handling the trust-region subproblem are recalled, particularly for the large-scale setting. Recent contributions encompassing the trust-region globalization technique for nonlinear programming are reviewed, including nonmonotone acceptance criteria for unconstrained minimization, the adaptive adjustment of the trust-region radius, the merging of the trust-region step into a line-search scheme, and the usage of trust-region elements within derivative-free optimization algorithms.
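The adjustment of the trust-region radius mentioned above is driven by the ratio between actual and predicted reduction. The following sketch shows a classical textbook version of that rule; the thresholds and factors are common default choices made for this illustration, not values taken from the surveyed methods.

```python
# Illustrative sketch: the classical trust-region radius update based on
# the ratio rho = (actual reduction) / (predicted reduction). Thresholds
# (0.25, 0.75) and factors (0.5, 2.0) are standard textbook choices.

def update_radius(rho, radius, step_norm, radius_max=10.0):
    if rho < 0.25:                                # poor model agreement: shrink
        return 0.5 * radius
    if rho > 0.75 and step_norm >= 0.99 * radius:
        return min(2.0 * radius, radius_max)      # good agreement at the boundary: expand
    return radius                                 # otherwise keep the radius

def accept_step(rho, eta=0.1):
    # Accept the trial point only if the actual reduction is a
    # sufficient fraction of the reduction predicted by the model.
    return rho > eta
```

Nonmonotone acceptance criteria, one of the contributions reviewed in the paper, relax the test in `accept_step` by comparing against a reference value built from several previous iterates rather than the current one.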

Abstract: We review the motivation for, the current state-of-the-art in convergence results for, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.

Abstract: Constraint qualifications (CQs) are assumptions on the algebraic description of the feasible set of an optimization problem that ensure that the KKT conditions hold at any local minimum. In this work we show that constraint qualifications based on the notion of constant rank can be understood as assumptions that ensure that the polar of the linear approximation of the tangent cone, generated by the active gradients, retains its geometric structure locally.

Abstract: This paper provides a short introduction to optimization problems with semidefinite constraints. Basic duality and optimality conditions are presented. For linear semidefinite programming, some advances in dealing with degeneracy and semidefinite facial reduction are discussed. Two relatively recent areas of application are presented. Finally, a short overview of relevant literature on algorithmic approaches for efficiently solving linear and nonlinear semidefinite programming is provided.

Abstract: Generalized Nash equilibrium problems have become an important modeling tool over the last decades. The aim of this survey paper is twofold. It summarizes recent advances in the research on computational methods for generalized Nash equilibrium problems and points out current challenges. The focus of this survey is on algorithms and their convergence properties. To this end, we also present reformulations of the generalized Nash equilibrium problem, results on error bounds, and properties of the solution set of the equilibrium problems.

Abstract: A Mathematical Program with Linear Complementarity Constraints (MPLCC) is an optimization problem where a continuously differentiable function is minimized on a set defined by linear constraints and complementarity conditions on pairs of complementary variables. This problem finds many applications in several areas of science, engineering and economics, and is also an important tool for the solution of some NP-hard structured and nonconvex optimization problems, such as bilevel, bilinear and nonconvex quadratic programs and the eigenvalue complementarity problem. In this paper some of the most relevant applications of the MPLCC and formulations of nonconvex optimization problems as MPLCCs are first presented. Algorithms for computing a feasible solution, a stationary point and a global minimum for the MPLCC are next discussed. The most important nonlinear programming methods, complementarity algorithms, enumerative techniques and 0-1 integer programming approaches for the MPLCC are reviewed. Some comments about the computational performance of these algorithms and a few topics for future research are also included in this survey.

Abstract: We present a rigorous and comprehensive survey on extensions to the multicriteria setting of three well-known scalar optimization algorithms. Multiobjective versions of the steepest descent, the projected gradient and the Newton methods are analyzed in detail. At each iteration, the search directions of these methods are computed by solving real-valued optimization problems and, in order to guarantee an adequate objective value decrease, Armijo-like rules are implemented by means of a backtracking procedure. Under standard assumptions, convergence to Pareto (weak Pareto) optima is established. For the Newton method, superlinear convergence is proved and, assuming Lipschitz continuity of the objectives' second derivatives, it is shown that the rate is quadratic.
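A minimal sketch of the backtracking procedure behind the Armijo-like rules mentioned above, shown here in the scalar case that the multiobjective versions extend. This is an illustration for this summary, not code from the paper; the test function and parameters are example choices.

```python
# Illustrative sketch: scalar Armijo backtracking. Starting from step
# t = 1, the step is halved until the sufficient-decrease condition
# f(x + t*d) <= f(x) + beta * t * <grad f(x), d> holds along the
# descent direction d.

def armijo_backtracking(f, grad_dot_d, x, d, beta=1e-4, t=1.0, shrink=0.5):
    fx = f(x)
    while f(x + t * d) > fx + beta * t * grad_dot_d:
        t *= shrink   # halve the step until sufficient decrease holds
    return t

# Example: f(x) = x^2 at x = 1 with descent direction d = -f'(x) = -2,
# so <grad f(x), d> = 2 * (-2) = -4.
f = lambda x: x * x
t = armijo_backtracking(f, grad_dot_d=-4.0, x=1.0, d=-2.0)
```

In the multiobjective setting the same loop is applied with the decrease condition required for every objective simultaneously, which is what "Armijo-like" refers to in the abstract.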

Abstract: We review the methods and applications of automatic differentiation, a research and development activity which has evolved in various computational fields since the mid-1950s. Starting from very simple basic principles that are familiar from school, one arrives at various theoretical and practical challenges. The resulting activity encompasses mathematical research and software development; it is now often referred to as algorithmic differentiation. From a geometrical and algebraic point of view, differentiation amounts to linearization, a concept that naturally extends to infinite dimensional spaces. In contrast to other surveys, we will emphasize this interpretation as it has become more important recently and also facilitates the treatment of nonsmooth problems by piecewise linearization.
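The basic principle behind algorithmic differentiation can be illustrated in a few lines with forward-mode propagation of (value, derivative) pairs, sometimes called dual numbers. This sketch is written for this summary and supports only the operations needed for the example, not a full AD tool.

```python
# Illustrative sketch: forward-mode algorithmic differentiation via dual
# numbers. Each elementary operation propagates the derivative part
# alongside the value, so exact derivatives emerge from the chain rule.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot      # value and derivative parts

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (u*v)' = u'*v + u*v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f'(x) by seeding the derivative part with 1."""
    return f(Dual(x, 1.0)).dot

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7.
d = derivative(lambda x: x * x + 3 * x, 2.0)
```

This propagation of linearizations through elementary operations is exactly the linearization viewpoint the survey emphasizes.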

Abstract: Bundle methods are often the algorithms of choice for nonsmooth convex optimization, especially if accuracy in the solution and reliability are a concern. We review several algorithms based on the bundle methodology that have been developed recently and that, unlike their forerunner variants, have the ability to provide exact solutions even when the available information is inaccurate most of the time. We adopt an approach that is by no means exhaustive, but covers different proximal and level bundle methods dealing with inexact oracles, for both unconstrained and constrained problems.
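The object at the heart of the bundle methodology is the cutting-plane model built from accumulated oracle information. The following sketch shows that model in one dimension; it is an illustration for this summary, with an example bundle for f(x) = |x|, not code from the reviewed algorithms.

```python
# Illustrative sketch: the cutting-plane model underlying bundle
# methods. Each bundle element (y, f(y), g) with subgradient g defines a
# linearization f(y) + g*(x - y); the model is their pointwise maximum,
# a lower approximation of the convex function.

def cutting_plane_model(bundle, x):
    # bundle: list of (y, fy, g) triples for a one-dimensional function
    return max(fy + g * (x - y) for y, fy, g in bundle)

# Bundle for f(x) = |x| built from oracle calls at y = -1 and y = 1.
bundle = [(-1.0, 1.0, -1.0), (1.0, 1.0, 1.0)]
m0 = cutting_plane_model(bundle, 0.0)   # model value at the kink x = 0
```

With an inexact oracle the triples carry errors, and the methods surveyed in the paper control how those errors propagate into the model while still driving the iterates to an exact solution.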