Authors:Bethany L. Nicholson; Wei Wan; Shivakumar Kameswaran; Lorenz T. Biegler Pages: 321 - 350 Abstract: Dynamic optimization problems are constrained by differential and algebraic equations and are found everywhere in science and engineering. A well-established method to solve these types of problems is direct transcription, where the differential equations are replaced with discretized approximations based on finite-difference or Runge–Kutta schemes. However, for problems with thousands of state variables and discretization points, direct transcription may result in nonlinear optimization problems which are too large for general-purpose optimization solvers to handle. Also, when an interior-point solver is applied, the dominant computational cost is solving the linear systems resulting from the Newton step. For large-scale nonlinear programming problems, these linear systems may become prohibitively expensive to solve. Furthermore, the systems also become too large to formulate and store in the memory of a standard computer. On the other hand, direct transcription can exploit the sparsity and structure of the linear systems in order to overcome these challenges. In this paper we investigate and compare two parallel linear decomposition algorithms, Cyclic Reduction (CR) and Schur complement decomposition, which take advantage of structure and sparsity. We describe the numerical conditioning of the CR algorithm when applied to the linear systems arising from dynamic optimization problems, and then compare CR with Schur complement decomposition on a number of test problems. Finally, we propose conditions under which each should be used, and describe future research directions. PubDate: 2018-06-01 DOI: 10.1007/s10589-018-0001-7 Issue No:Vol. 70, No. 2 (2018)
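The elimination pattern behind CR can be sketched on a scalar tridiagonal system (the linear systems in the paper are block-structured KKT systems, but the idea is the same). A minimal one-level sketch, assuming a diagonally dominant system of odd size; the function names and the dense fallback solve are ours, not the authors' implementation:

```python
import numpy as np

def one_level_cr(a, b, c, d):
    """One elimination level of cyclic reduction for a tridiagonal system
    with sub-, main and super-diagonals a, b, c (a[0] = c[-1] = 0) and
    right-hand side d.  The size is assumed odd so that every odd-indexed
    unknown has two neighbours.  Odd unknowns are eliminated, the half-size
    even system is solved, and the odd unknowns are then recovered
    independently -- the step that parallelises."""
    n = len(b)
    even = np.arange(0, n, 2)
    alpha = -a[even[1:]] / b[even[1:] - 1]    # folds in the left odd row
    beta = -c[even[:-1]] / b[even[:-1] + 1]   # folds in the right odd row
    br, dr = b[even].astype(float), d[even].astype(float)
    ar, cr = np.zeros(len(even)), np.zeros(len(even))
    br[1:] += alpha * c[even[1:] - 1]
    dr[1:] += alpha * d[even[1:] - 1]
    ar[1:] = alpha * a[even[1:] - 1]
    br[:-1] += beta * a[even[:-1] + 1]
    dr[:-1] += beta * d[even[:-1] + 1]
    cr[:-1] = beta * c[even[:-1] + 1]
    # solve the reduced tridiagonal system densely here; a full cyclic
    # reduction would recurse instead, giving logarithmic depth
    R = np.diag(br) + np.diag(ar[1:], -1) + np.diag(cr[:-1], 1)
    x = np.zeros(n)
    x[even] = np.linalg.solve(R, dr)
    odd = np.arange(1, n, 2)
    x[odd] = (d[odd] - a[odd] * x[odd - 1] - c[odd] * x[odd + 1]) / b[odd]
    return x

# demo on a diagonally dominant 7x7 system
a = np.array([0., 1., 1., 1., 1., 1., 1.])
b = np.full(7, 4.0)
c = np.array([1., 1., 1., 1., 1., 1., 0.])
d = np.arange(1.0, 8.0)
x = one_level_cr(a, b, c, d)
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
residual = np.linalg.norm(T @ x - d)
```

The conditioning question studied in the paper arises because the divisions by `b[i-1]` and `b[i+1]` above can amplify rounding errors when the pivots are small.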

Authors:Hao Jiang; Daniel P. Robinson; René Vidal; Chong You Pages: 395 - 418 Abstract: We consider the problem of subspace clustering with data that is potentially corrupted by both dense noise and sparse gross errors. In particular, we study a recently proposed low rank subspace clustering approach based on a nonconvex modeling formulation. This formulation includes a nonconvex spectral function in the objective function that makes the optimization task challenging, e.g., it is unknown whether the alternating direction method of multipliers (ADMM) framework proposed to solve the nonconvex model formulation is provably convergent. In this paper, we establish that the spectral function is differentiable and give a formula for computing the derivative. Moreover, we show that the derivative of the spectral function is Lipschitz continuous and provide an explicit value for the Lipschitz constant. These facts are then used to provide a lower bound for how the penalty parameter in the ADMM method should be chosen. As long as the penalty parameter is chosen according to this bound, we show that the ADMM algorithm computes iterates that have a limit point satisfying first-order optimality conditions. We also present a second strategy for solving the nonconvex problem that is based on proximal gradient calculations. The convergence and performance of the algorithms are verified through experiments on real data from face and digit clustering and motion segmentation. PubDate: 2018-06-01 DOI: 10.1007/s10589-018-0002-6 Issue No:Vol. 70, No. 2 (2018)
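The role of the penalty parameter can be seen in a generic ADMM loop. The sketch below applies ADMM to the convex lasso rather than the paper's nonconvex spectral model (for which the Lipschitz-based lower bound on the penalty parameter is what guarantees convergence); all names are ours:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho, iters=500):
    """ADMM for min 0.5||Ax-b||^2 + lam||z||_1 subject to x = z.
    rho is the penalty parameter; in the paper's nonconvex setting the
    analogous parameter must exceed a threshold tied to the Lipschitz
    constant of the spectral function's derivative."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-update factor
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))              # x-update (least squares)
        z = soft_threshold(x + u, lam / rho)       # z-update (prox of l1)
        u = u + x - z                              # scaled dual update
    return x, z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 15))
b = rng.standard_normal(40)
x, z = admm_lasso(A, b, lam=0.1, rho=1.0)
gap = np.linalg.norm(x - z)                        # primal residual
obj = 0.5 * np.linalg.norm(A @ z - b) ** 2 + 0.1 * np.abs(z).sum()
```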

Authors:Anwa Zhou; Jinyan Fan Pages: 419 - 441 Abstract: In this paper, we introduce the CP-nuclear value of a completely positive (CP) tensor and study its properties. A semidefinite relaxation algorithm is proposed for solving the minimal CP-nuclear-value tensor recovery. If a partial tensor is CP-recoverable, the algorithm can give a CP tensor recovery with the minimal CP-nuclear value, as well as a CP-nuclear decomposition of the recovered CP tensor. If it is not CP-recoverable, the algorithm can always give a certificate of this, provided regularity holds. Some numerical experiments are also presented. PubDate: 2018-06-01 DOI: 10.1007/s10589-018-0003-5 Issue No:Vol. 70, No. 2 (2018)

Authors:Jianqiang Cheng; Richard Li-Yang Chen; Habib N. Najm; Ali Pinar; Cosmin Safta; Jean-Paul Watson Pages: 479 - 502 Abstract: Increasing penetration levels of renewables have transformed how power systems are operated. High levels of uncertainty in production make it increasingly difficult to guarantee operational feasibility; instead, constraints may only be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, we require that wind energy contribute at least a prespecified proportion of the total demand and that the scheduled wind energy is deliverable with high probability. We develop an approximate partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed satisfaction tolerance, and approximately 100 times faster than standard sample average approximation. Finally, the improved efficiency of our PSAA approach enables solution of a larger WECC-240 test system in minutes. PubDate: 2018-06-01 DOI: 10.1007/s10589-018-0006-2 Issue No:Vol. 70, No. 2 (2018)
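PSAA mixes sampling with analytic treatment of part of the uncertainty; for contrast, the plain SAA treatment of a single chance constraint \(P(\xi \le x) \ge 1-\alpha\) reduces to an empirical quantile. A toy sketch in our own notation, not the dispatch model:

```python
import numpy as np

def saa_chance_bound(samples, alpha):
    """Smallest x with empirical P(sample <= x) >= 1 - alpha: the plain
    sample-average approximation of a scalar chance constraint."""
    xs = np.sort(samples)
    k = int(np.ceil((1.0 - alpha) * len(xs))) - 1
    return xs[k]

rng = np.random.default_rng(1)
xi = rng.normal(loc=100.0, scale=10.0, size=2000)   # e.g. uncertain net load
x = saa_chance_bound(xi, alpha=0.05)
coverage = np.mean(xi <= x)                          # empirical satisfaction
```

The accuracy/speed trade-off reported in the abstract comes from replacing part of this brute-force sampling with analytic marginalization.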

Authors:Meijia Yang; Yong Xia; Jiulin Wang; Jiming Peng Pages: 571 - 592 Abstract: The Tikhonov identical regularized total least squares problem (TI) deals with ill-conditioned systems of linear equations in which the data are contaminated by noise. A standard approach for (TI) is to reformulate it as a problem of finding a zero point of some decreasing concave non-smooth univariate function such that the classical bisection search and Dinkelbach’s method can be applied. In this paper, by exploring the hidden convexity of (TI), we reformulate it as a new problem of finding a zero point of a strictly decreasing, smooth and concave univariate function. This allows us to apply the classical Newton’s method to the reformulated problem, which converges globally to the unique root with an asymptotic quadratic convergence rate. Moreover, in every iteration of Newton’s method, no optimization subproblem such as the extended trust-region subproblem is needed to evaluate the new univariate function value as it has an explicit expression. Promising numerical results based on the new algorithm are reported. PubDate: 2018-06-01 DOI: 10.1007/s10589-018-0004-4 Issue No:Vol. 70, No. 2 (2018)
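The algorithmic payoff is classical: on a smooth, strictly decreasing, concave univariate function, Newton's method homes in on the unique root, and each function value here is an explicit expression rather than a trust-region subproblem. A toy stand-in (the function f below is illustrative only, not the one obtained from the hidden-convexity reformulation):

```python
import numpy as np

def newton_root(f, fprime, t0, tol=1e-12, max_iter=50):
    """Plain Newton iteration for a root of f.  For a strictly decreasing
    concave f the iterates decrease monotonically to the unique root once
    one of them lands to its right."""
    t = t0
    for _ in range(max_iter):
        step = f(t) / fprime(t)
        t -= step
        if abs(step) < tol:
            break
    return t

f = lambda t: 2.0 - np.exp(t)      # strictly decreasing and concave
fp = lambda t: -np.exp(t)
root = newton_root(f, fp, t0=0.0)  # exact root is log(2)
```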

Authors:Tasnuva Haque; Yves Lucet Pages: 593 - 613 Abstract: We propose the first algorithm to compute the conjugate of a bivariate Piecewise Linear-Quadratic (PLQ) function in optimal linear worst-case time complexity. The key step is to use a planar graph, called the entity graph, not only to represent the entities (vertex, edge, or face) of the domain of a PLQ function but most importantly to record adjacent entities. We traverse the graph using breadth-first search to compute the conjugate of each entity using graph-matrix calculus, and use the adjacency information to create the output data structure in linear time. PubDate: 2018-06-01 DOI: 10.1007/s10589-018-0007-1 Issue No:Vol. 70, No. 2 (2018)
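For scale, the brute-force alternative to a linear-time conjugate is the discrete Legendre transform, which costs O(nm) over n grid points and m slopes even in one dimension. A sketch of that baseline (ours; the paper's algorithm instead walks the entity graph of a bivariate PLQ function symbolically):

```python
import numpy as np

def discrete_conjugate(xs, fx, slopes):
    """Brute-force conjugate f*(s) = max_x (s*x - f(x)) on a grid:
    O(n) work per slope, versus the overall linear-time graph method."""
    return np.array([np.max(s * xs - fx) for s in slopes])

xs = np.linspace(-5.0, 5.0, 1001)
fx = 0.5 * xs ** 2                 # f(x) = x^2/2 is its own conjugate
ss = np.linspace(-2.0, 2.0, 81)
fstar = discrete_conjugate(xs, fx, ss)
```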

Authors:Gabriel Haeser Pages: 615 - 639 Abstract: We develop a new notion of second-order complementarity with respect to the tangent subspace related to second-order necessary optimality conditions by the introduction of so-called tangent multipliers. We prove that around a local minimizer, a second-order stationarity residual can be driven to zero while controlling the growth of Lagrange multipliers and tangent multipliers; this gives a new second-order optimality condition, free of constraint qualifications, that is stronger than previous conditions associated with the global convergence of algorithms. We prove that second-order variants of augmented Lagrangian (under an additional smoothness assumption based on the Lojasiewicz inequality) and interior point methods generate sequences satisfying our optimality condition. We also present a companion minimal constraint qualification, weaker than the ones known for second-order methods, that ensures usual global convergence results to a classical second-order stationary point. Finally, our optimality condition naturally suggests a definition of second-order stationarity suitable for the computation of iteration complexity bounds and for the definition of stopping criteria. PubDate: 2018-06-01 DOI: 10.1007/s10589-018-0005-3 Issue No:Vol. 70, No. 2 (2018)
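The classical condition the paper strengthens can be checked numerically at a KKT point: project the Hessian of the Lagrangian onto the tangent subspace of the constraints and test positive semidefiniteness. A sketch on min x₁² + x₂² s.t. x₁ + x₂ = 1 (example ours):

```python
import numpy as np

# min f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0
xstar = np.array([0.5, 0.5])     # KKT point, Lagrange multiplier lam = 1
lam = 1.0
hess_L = 2.0 * np.eye(2)         # Hessian of f - lam*h (h is linear, so
                                 # the multiplier adds no curvature here)
grad_h = np.array([[1.0, 1.0]])  # Jacobian of the constraint

# orthonormal basis Z of the tangent subspace {d : grad_h @ d = 0}
_, _, Vt = np.linalg.svd(grad_h)
Z = Vt[1:].T                     # null-space basis from the SVD
reduced_hessian = Z.T @ hess_L @ Z
second_order_ok = np.all(np.linalg.eigvalsh(reduced_hessian) >= -1e-12)
```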

Authors:Guanglin Xu; Samuel Burer Pages: 33 - 59 Abstract: We study two-stage adjustable robust linear programming in which the right-hand sides are uncertain and belong to a convex, compact uncertainty set. This problem is NP-hard, and the affine policy is a popular, tractable approximation. We prove that under standard and simple conditions, the two-stage problem can be reformulated as a copositive optimization problem, which in turn leads to a class of tractable, semidefinite-based approximations that are at least as strong as the affine policy. We investigate several examples from the literature demonstrating that our tractable approximations significantly improve the affine policy. In particular, our approach solves exactly in polynomial time a class of instances of increasing size for which the affine policy admits an arbitrarily large gap. PubDate: 2018-05-01 DOI: 10.1007/s10589-017-9974-x Issue No:Vol. 70, No. 1 (2018)

Authors:Yangyang Xu; Shuzhong Zhang Pages: 91 - 128 Abstract: Block coordinate update (BCU) methods enjoy low per-update computational complexity because at each iteration only one or a few block variables need to be updated among a possibly large number of blocks. They are also easily parallelized and thus have been particularly popular for solving problems involving large-scale datasets and/or variables. In this paper, we propose a primal–dual BCU method for solving linearly constrained convex programs with multi-block variables. The method is an accelerated version of a primal–dual algorithm proposed by the authors, which applies randomization in selecting block variables to update and establishes an \(O(1/t)\) convergence rate under a convexity assumption. We show that the rate can be accelerated to \(O(1/t^2)\) if the objective is strongly convex. In addition, if one block variable is independent of the others in the objective, we then show that the algorithm can be modified to achieve a linear rate of convergence. The numerical experiments show that the accelerated method performs stably with a single set of parameters while the original method needs to tune the parameters for different datasets in order to achieve a comparable level of performance. PubDate: 2018-05-01 DOI: 10.1007/s10589-017-9972-z Issue No:Vol. 70, No. 1 (2018)
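Stripped of the dual update, the randomization analysis, and the acceleration, the block-update idea reads: minimize exactly over one randomly sampled block per iteration. A minimal unconstrained sketch on a strongly convex quadratic (ours, not the authors' primal–dual scheme):

```python
import numpy as np

def randomized_bcd(Q, b, blocks, iters=2000, seed=0):
    """min 0.5 x'Qx - b'x by exact minimization over one randomly chosen
    block per iteration: solve the block's first-order condition
    Q_ii x_i = b_i - (Q x)_i + Q_ii x_i, cheap whenever blocks are small."""
    rng = np.random.default_rng(seed)
    x = np.zeros(Q.shape[0])
    for _ in range(iters):
        i = blocks[rng.integers(len(blocks))]
        Qii = Q[np.ix_(i, i)]
        r = b[i] - Q[i] @ x + Qii @ x[i]   # right-hand side with block i removed
        x[i] = np.linalg.solve(Qii, r)
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((10, 10))
Q = M @ M.T + 50.0 * np.eye(10)            # strongly convex, well conditioned
b = rng.standard_normal(10)
blocks = [list(range(0, 5)), list(range(5, 10))]
x = randomized_bcd(Q, b, blocks)
err = np.linalg.norm(x - np.linalg.solve(Q, b))
```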

Authors:Weizhe Gu; Wei-Po Chen; Chun-Hsu Ko; Yuh-Jye Lee; Jein-Shan Chen Pages: 171 - 199 Abstract: In this paper, we propose two new smooth support vector machines for \(\varepsilon \) -insensitive regression. According to these two smooth support vector machines, we construct two systems of smooth equations based on two novel families of smoothing functions, from which we seek the solution to \(\varepsilon \) -support vector regression ( \(\varepsilon \) -SVR). More specifically, using the proposed smoothing functions, we employ the smoothing Newton method to solve the systems of smooth equations. The algorithm is shown to be globally and quadratically convergent without any additional conditions. Numerical comparisons among different values of the smoothing parameter are also reported. PubDate: 2018-05-01 DOI: 10.1007/s10589-017-9975-9 Issue No:Vol. 70, No. 1 (2018)
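The smoothing idea can be sketched with a generic softplus smoothing of the \(\varepsilon\)-insensitive loss. The paper constructs two dedicated families of smoothing functions and proves quadratic convergence of the resulting smoothing Newton method; the softplus and the Armijo damping below are our stand-ins, not those families:

```python
import numpy as np

def sigmoid(z):
    return np.exp(z - np.logaddexp(0.0, z))        # numerically stable

def smooth_svr(X, y, eps=0.1, lam=0.1, beta=5.0, iters=25):
    """Damped Newton on a smoothed eps-insensitive regression objective
    f(w) = 0.5*lam*||w||^2 + sum_i sp(r_i - eps) + sp(-r_i - eps),
    r = Xw - y, where sp(z) = log(1 + exp(beta*z)) / beta smooths
    max(z, 0); beta controls the smoothing accuracy."""
    w = np.zeros(X.shape[1])

    def fgh(w):
        r = X @ w - y
        f = 0.5 * lam * w @ w + np.sum(
            (np.logaddexp(0.0, beta * (r - eps))
             + np.logaddexp(0.0, beta * (-r - eps))) / beta)
        s1, s2 = sigmoid(beta * (r - eps)), sigmoid(beta * (-r - eps))
        g = lam * w + X.T @ (s1 - s2)
        wgt = beta * (s1 * (1 - s1) + s2 * (1 - s2))
        H = lam * np.eye(len(w)) + (X * wgt[:, None]).T @ X
        return f, g, H

    for _ in range(iters):
        f, g, H = fgh(w)
        d = np.linalg.solve(H, -g)                 # Newton direction
        t = 1.0
        while fgh(w + t * d)[0] > f + 1e-4 * t * (g @ d):  # Armijo backtracking
            t *= 0.5
        w = w + t * d
    return w, np.linalg.norm(fgh(w)[1])

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.05 * rng.standard_normal(50)
w, grad_norm = smooth_svr(X, y)
```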

Authors:Martin Gugat; Günter Leugering; Alexander Martin; Martin Schmidt; Mathias Sirvent; David Wintergerst Pages: 267 - 294 Abstract: We study the transient optimization of gas transport networks including both discrete controls due to switching of controllable elements and nonlinear fluid dynamics described by the system of isothermal Euler equations, which are partial differential equations in time and 1-dimensional space. This combination leads to mixed-integer optimization problems subject to nonlinear hyperbolic partial differential equations on a graph. We propose an instantaneous control approach in which suitable Euler discretizations yield systems of ordinary differential equations on a graph. This networked system of ordinary differential equations is shown to be well-posed and affine-linear solutions of these systems are derived analytically. As a consequence, finite-dimensional mixed-integer linear optimization problems are obtained for every time step that can be solved to global optimality using general-purpose solvers. We illustrate our approach in practice by presenting numerical results on a realistic gas transport network. PubDate: 2018-05-01 DOI: 10.1007/s10589-017-9970-1 Issue No:Vol. 70, No. 1 (2018)
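The instantaneous-control pattern, freezing time and solving a small discrete problem per step, can be caricatured on a scalar ODE with an on/off control, enumerating the two discrete decisions instead of calling a MILP solver (toy example ours, with nothing gas-specific in it):

```python
import numpy as np

def instantaneous_control(x0, target, h=0.1, steps=100):
    """Explicit Euler on x' = -x + u with u in {0, 1}; each step picks the
    discrete control minimizing the one-step tracking cost -- a scalar
    analogue of solving one small mixed-integer problem per time step."""
    x, xs, us = x0, [], []
    for _ in range(steps):
        cand = {uu: x + h * (-x + uu) for uu in (0.0, 1.0)}
        u = min(cand, key=lambda uu: abs(cand[uu] - target))
        x = cand[u]
        xs.append(x)
        us.append(u)
    return np.array(xs), np.array(us)

xs, us = instantaneous_control(x0=0.0, target=0.5)
final_err = abs(xs[-1] - 0.5)       # state hovers near the target
```

The discrete control chatters around the target once the state enters a band of width proportional to the step size, which is why per-step optimality does not imply optimality over the horizon.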

Authors:Nikolaus von Daniels Pages: 295 - 320 Abstract: We consider Tikhonov regularization of control-constrained optimal control problems. We present new a-priori estimates for the regularization error assuming measure and source-measure conditions. In the special case of bang–bang solutions, we introduce another assumption to obtain the same convergence rates. This new condition turns out to be useful in the derivation of error estimates for the discretized problem. The necessity of the just mentioned assumptions to obtain certain convergence rates is analyzed. Finally, a numerical example confirms the analytical findings. PubDate: 2018-05-01 DOI: 10.1007/s10589-017-9976-8 Issue No:Vol. 70, No. 1 (2018)
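The bang–bang mechanism is already visible pointwise: for the toy objective ∫ g u + (α/2)u² with u(t) ∈ [−1, 1], the Tikhonov-regularized control is u_α = proj_{[−1,1]}(−g/α), which converges to −sign(g) as α → 0 wherever the switching function g does not vanish (sketch ours, not the paper's control-constrained setting):

```python
import numpy as np

def tikhonov_control(g, alpha):
    """Pointwise minimizer of g*u + (alpha/2)*u^2 over u in [-1, 1]:
    the Tikhonov-regularized control; alpha -> 0 recovers bang-bang."""
    return np.clip(-g / alpha, -1.0, 1.0)

t = np.linspace(0.0, 2.0 * np.pi, 1001)
g = np.sin(t)                          # switching function
u_coarse = tikhonov_control(g, alpha=1.0)
u_fine = tikhonov_control(g, alpha=1e-3)
mask = np.abs(g) > 0.1                 # stay away from the switching points
bangbang_gap = np.max(np.abs(u_fine[mask] + np.sign(g[mask])))
```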

Authors:Kimon Fountoulakis; Rachael Tappenden Abstract: We present a novel randomized block coordinate descent method for the minimization of a convex composite objective function. The method uses (approximate) partial second-order (curvature) information, so that the algorithm performance is more robust when applied to highly nonseparable or ill-conditioned problems. We call the method Flexible Coordinate Descent (FCD). At each iteration of FCD, a block of coordinates is sampled randomly, a quadratic model is formed about that block and the model is minimized approximately/inexactly to determine the search direction. An inexpensive line search is then employed to ensure a monotonic decrease in the objective function and acceptance of large step sizes. We present several high probability iteration complexity results to show that convergence of FCD is guaranteed theoretically. Finally, we present numerical results on large-scale problems to demonstrate the practical performance of the method. PubDate: 2018-02-24 DOI: 10.1007/s10589-018-9984-3
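A one-coordinate caricature of the FCD loop: sample a coordinate, form the local quadratic model from the partial curvature, take the model minimizer as the search direction, and backtrack to keep the decrease monotone. The sketch uses a smooth objective (logistic loss plus ridge) rather than the paper's composite setting, and all names are ours:

```python
import numpy as np

def coord_newton(X, y, lam=0.1, iters=1000, seed=0):
    """FCD-flavoured coordinate descent on logistic loss + ridge:
    per iteration, one random coordinate j, a 1-D second-order model
    step d = -g_j / h_j, then Armijo backtracking for monotone decrease."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)

    def f(w):
        return np.sum(np.logaddexp(0.0, -y * (X @ w))) + 0.5 * lam * w @ w

    fw = f(w)
    for _ in range(iters):
        j = rng.integers(p)
        m = -y * (X @ w)
        s = np.exp(m - np.logaddexp(0.0, m))            # sigmoid(m)
        gj = -np.sum(y * X[:, j] * s) + lam * w[j]      # partial gradient
        hj = np.sum(X[:, j] ** 2 * s * (1 - s)) + lam   # partial curvature
        d = -gj / hj                                    # model minimizer
        t = 1.0
        while True:
            wt = w.copy()
            wt[j] += t * d
            ft = f(wt)
            if ft <= fw + 1e-4 * t * gj * d or t < 1e-8:  # Armijo test
                break
            t *= 0.5
        w, fw = wt, ft
    return w, fw

rng = np.random.default_rng(4)
X = rng.standard_normal((80, 5))
y = np.sign(X @ np.array([1.0, -1.0, 2.0, 0.0, 0.5])
            + 0.1 * rng.standard_normal(80))
w, fw = coord_newton(X, y)
f0 = 80 * np.log(2.0)        # objective value at w = 0
```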

Authors:Martin Branda; Max Bucher; Michal Červinka; Alexandra Schwartz Abstract: We consider general nonlinear programming problems with cardinality constraints. By relaxing the binary variables which appear in the natural mixed-integer programming formulation, we obtain an almost equivalent nonlinear programming problem, which is thus still difficult to solve. Therefore, we apply a Scholtes-type regularization method to obtain a sequence of easier-to-solve problems and investigate the convergence of the obtained KKT points. We show that such a sequence converges to an S-stationary point, which corresponds to a local minimizer of the original problem under the assumption of convexity. Additionally, we consider portfolio optimization problems where we minimize a risk measure under a cardinality constraint on the portfolio. Various risk measures are considered, in particular Value-at-Risk and Conditional Value-at-Risk under normal distribution of returns and their robust counterparts under moment conditions. For these investment problems formulated as nonlinear programming problems with cardinality constraints we perform a numerical study on a large number of simulated instances taken from the literature and illuminate the computational performance of the Scholtes-type regularization method in comparison to other considered solution approaches: a mixed-integer solver, a direct continuous reformulation solver and the Kanzow–Schwartz regularization method, which has already been applied to Markowitz portfolio problems. PubDate: 2018-02-21 DOI: 10.1007/s10589-018-9985-2

Authors:Christoph Brauer; Dirk A. Lorenz; Andreas M. Tillmann Abstract: In this paper we propose a primal-dual homotopy method for \(\ell _1\) -minimization problems with infinity norm constraints in the context of sparse reconstruction. The natural homotopy parameter is the value of the bound for the constraints and we show that there exists a piecewise linear solution path with finitely many break points for the primal problem and a respective piecewise constant path for the dual problem. We show that by solving a small linear program, one can jump to the next primal break point and then, solving another small linear program, a new optimal dual solution is calculated which enables the next such jump in the subsequent iteration. Using a theorem of the alternative, we show that the method never gets stuck and indeed calculates the whole path in a finite number of steps. Numerical experiments demonstrate the effectiveness of our algorithm. In many cases, our method significantly outperforms commercial LP solvers; this is possible since our approach employs a sequence of considerably simpler auxiliary linear programs that can be solved efficiently with specialized active-set strategies. PubDate: 2018-02-15 DOI: 10.1007/s10589-018-9983-4

Authors:Andreas Rademacher; Korinna Rosin Abstract: In this article, we present a-posteriori error estimations in the context of optimal control of contact problems; in particular of Signorini’s problem. Due to the contact side-condition, the solution operator of the underlying variational inequality is not differentiable, yet we want to apply Newton’s method. Therefore, the non-smooth problem is regularized by penalization and afterwards discretized by finite elements. We derive optimality systems for the regularized formulation in the continuous as well as in the discrete case. This is done explicitly for Signorini’s contact problem, which covers linear elasticity and linearized surface contact conditions. The latter creates the need for treating trace-operations carefully, especially in contrast to obstacle contact conditions, which act in the domain. Based on the dual weighted residual method and these optimality systems, we deduce error representations for the regularization, discretization and numerical errors. Those representations are further developed into error estimators. The resulting error estimator for the regularization error is defined only in the contact area. Therefore, its computational cost is especially low for Signorini’s contact problem. Finally, we utilize the estimators in an adaptive refinement strategy balancing regularization and discretization errors. Numerical results substantiate the theoretical findings. We present different examples concerning Signorini’s problem in two and three dimensions. PubDate: 2018-01-31 DOI: 10.1007/s10589-018-9982-5

Authors:J. Preininger; P. T. Vuong Abstract: We revisit the gradient projection method in the framework of nonlinear optimal control problems with bang–bang solutions. We obtain the strong convergence of the iterative sequence of controls and the corresponding trajectories. Moreover, we establish a convergence rate, depending on a constant appearing in the corresponding switching function and prove that this convergence rate estimate is sharp. Some numerical illustrations are reported confirming the theoretical results. PubDate: 2018-01-29 DOI: 10.1007/s10589-018-9981-6
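The mechanics can be illustrated on a discretized scalar problem with box-constrained control whose optimum is bang–bang: minimize 0.5(x_N − x_target)² for x_{k+1} = x_k + h u_k with u_k ∈ [−1, 1]; the projection is simply a clip onto the box (toy example ours, not the paper's continuous-time setting):

```python
import numpy as np

def projected_gradient(u, h, target, step=1.0, iters=50):
    """Gradient projection for min 0.5*(x_N - target)^2 subject to
    x_{k+1} = x_k + h*u_k, x_0 = 0, u_k in [-1, 1].  The adjoint gives
    dJ/du_k = h*(x_N - target) for every k; the projection is a clip."""
    for _ in range(iters):
        xN = h * np.sum(u)                          # terminal state
        grad = h * (xN - target) * np.ones_like(u)  # adjoint-based gradient
        u = np.clip(u - step * grad, -1.0, 1.0)     # project onto the box
    return u

# target unreachable within the horizon, so the optimal control sits
# on the bound: u = 1 everywhere (a bang control)
u = projected_gradient(np.zeros(10), h=0.1, target=5.0)
```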

Authors:Eduardo Casas; Mariano Mateos; Arnd Rösch Abstract: We discretize a directionally sparse parabolic control problem governed by a linear equation by means of control approximations that are piecewise constant in time and continuous piecewise linear in space. By discretizing the objective functional with the help of appropriate numerical quadrature formulas, we are able to show that the discrete optimal solution exhibits a directionally sparse pattern like the one enjoyed by the continuous solution. Error estimates are obtained and a comparison with the cases of having piecewise approximations of the control or a semilinear state equation are discussed. Numerical experiments that illustrate the theoretical results are included. PubDate: 2018-01-27 DOI: 10.1007/s10589-018-9979-0

Authors:Federico Cabassi; Luca Consolini; Marco Locatelli Abstract: Range reduction techniques often considerably enhance the performance of algorithmic approaches for the solution of nonconvex problems. In this paper we propose a range reduction technique for a class of optimization problems with some special structured constraints. The procedure explores and updates the values associated to the nodes of a suitably defined graph. We prove convergence of the procedure and discuss some efficiency issues, in particular those related to the order in which the nodes of the graph are explored. The proposed technique is applied to solve problems arising from a relevant practical application, namely velocity planning along a given trajectory. The computational experiments show the efficiency of the procedure and its ability to return solutions within times much lower than those of nonlinear solvers and compatible with real-time applications. PubDate: 2018-01-11 DOI: 10.1007/s10589-017-9978-6
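A flavour of node-value updates in a velocity-planning setting: propagate speed limits along the trajectory under an acceleration bound with one forward and one backward pass, each pass tightening a node's value from its neighbour (simplified sketch ours; the paper's procedure handles more general structured constraints on a graph):

```python
import numpy as np

def plan_velocity(v_limit, ds, a_max):
    """Forward-backward propagation of node values: each pass tightens a
    node using v^2 <= v_prev^2 + 2*a_max*ds, the kind of monotone
    update-on-a-graph that range-reduction procedures perform."""
    v = v_limit.astype(float)
    for i in range(1, len(v)):                    # acceleration limit
        v[i] = min(v[i], np.sqrt(v[i - 1] ** 2 + 2.0 * a_max * ds))
    for i in range(len(v) - 2, -1, -1):           # deceleration limit
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2.0 * a_max * ds))
    return v

v = plan_velocity(np.array([0.0, 5.0, 5.0, 5.0, 0.0]), ds=1.0, a_max=1.0)
```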

Authors:Xianchao Xiu; Lingchen Kong; Yan Li; Houduo Qi Abstract: In this paper, we focus on the \(\ell _1-\ell _p\) minimization problem with \(0<p<1\) , which is challenging due to the \(\ell _p\) norm being non-Lipschitzian. In theory, we derive computable lower bounds for nonzero entries of the generalized first-order stationary points of \(\ell _1-\ell _p\) minimization, and hence of its local minimizers. In algorithms, based on three locally Lipschitz continuous \(\epsilon \) -approximations to the \(\ell _p\) norm, we design several iterative reweighted \(\ell _1\) and \(\ell _2\) methods to solve those approximation problems. Furthermore, we show that any accumulation point of the sequence generated by these methods is a generalized first-order stationary point of \(\ell _1-\ell _p\) minimization. This result, in particular, applies to the iterative reweighted \(\ell _1\) methods based on the new Lipschitz continuous \(\epsilon \) -approximation introduced by Lu (Math Program 147(1–2):277–307, 2014), provided that the approximation parameter \(\epsilon \) is below a threshold value. Numerical results are also reported to demonstrate the efficiency of the proposed methods. PubDate: 2018-01-05 DOI: 10.1007/s10589-017-9977-7
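The reweighting loop has a compact generic form: linearize the concave penalty at the current iterate to get weights, then solve the resulting weighted \(\ell_1\) subproblem (here by ISTA). This is a sketch of the general pattern, not the specific \(\epsilon\)-approximations analysed in the paper:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irl1(A, b, lam=0.05, p=0.5, eps=0.1, outer=10, inner=200):
    """Iteratively reweighted l1 for
    min 0.5||Ax-b||^2 + lam * sum_i (|x_i| + eps)^p,  0 < p < 1:
    each outer step linearizes the concave penalty to get weights
    w_i = p*(|x_i| + eps)^(p-1), then solves the weighted-l1 subproblem
    with ISTA steps of size 1/L."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    for _ in range(outer):
        w = p * (np.abs(x) + eps) ** (p - 1.0)      # reweighting step
        for _ in range(inner):
            x = soft(x - A.T @ (A @ x - b) / L, lam * w / L)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
x0 = np.zeros(80)
x0[[5, 20, 60]] = [2.0, -3.0, 1.5]     # sparse ground truth
b = A @ x0
x = irl1(A, b)
F = lambda z: (0.5 * np.linalg.norm(A @ z - b) ** 2
               + 0.05 * np.sum((np.abs(z) + 0.1) ** 0.5))
```

Each outer step is a majorize-minimize update on the \(\epsilon\)-approximated objective, so the surrogate decrease of the inner ISTA loop translates into a monotone decrease of F.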