Abstract: In this paper, we provide a simple convergence analysis of the proximal gradient algorithm with Bregman distance, which yields a tighter bound than existing results. In particular, for the problem of minimizing a class of convex objective functions, we show that the proximal gradient algorithm with Bregman distance can be viewed as a proximal point algorithm that incorporates another Bregman distance. Consequently, the convergence result of the proximal gradient algorithm with Bregman distance follows directly from that of the proximal point algorithm with Bregman distance, leading to a simpler convergence analysis with a tighter convergence bound than existing ones. We further propose and analyze a backtracking line-search variant of the proximal gradient algorithm with Bregman distance. PubDate: 2019-07-01
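To make the mechanism concrete, here is a minimal sketch of one instance of a Bregman proximal gradient step, assuming the negative-entropy kernel on the probability simplex (the classical mirror-descent setting); the kernel, stepsize, and test problem are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def bregman_prox_grad(grad_f, x0, step, iters=500):
    """Bregman proximal gradient with the negative-entropy kernel, so each
    step is a multiplicative update and iterates stay on the simplex.
    This is only one illustrative instance of the general Bregman setting."""
    x = x0.copy()
    for _ in range(iters):
        # argmin_u <grad_f(x), u> + (1/step) * KL(u, x) over the simplex
        x = x * np.exp(-step * grad_f(x))
        x /= x.sum()
    return x

# Minimize f(x) = 0.5 * ||x - y||^2 over the simplex; since y is already
# in the simplex, the minimizer is y itself.
y = np.array([0.2, 0.5, 0.3])
x = bregman_prox_grad(lambda v: v - y, np.ones(3) / 3, step=1.0)
```

The multiplicative form is exactly the Bregman proximal step specialized to the entropy distance; a Euclidean kernel would recover the usual projected gradient step.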

Abstract: We analyze a reliable and efficient max-norm a posteriori error estimator for a control-constrained, linear–quadratic optimal control problem. The estimator yields optimal experimental rates of convergence within an adaptive loop. PubDate: 2019-07-01

Abstract: Based on a reparametrization of the Douglas–Rachford algorithm, we provide a principle for finding the least-norm solution for a sum of two maximally monotone operators. The algorithm allows us to find the least-norm solution to a sum of monotone operators and, more generally, the least-norm primal-dual solution to inclusions with mixtures of composite monotone operators. Three numerical examples illustrate our results. PubDate: 2019-07-01
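For reference, the plain Douglas–Rachford iteration on a sum of two operators looks as follows; this sketch finds some solution via the standard governing-sequence update and does not implement the paper's reparametrization that selects the least-norm one (the example problem is an assumed illustration):

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, iters=300):
    """Plain Douglas-Rachford splitting for min f(x) + g(x), given the
    resolvents (prox operators) of the two terms."""
    z = z0.copy()
    for _ in range(iters):
        x = prox_f(z)            # resolvent of the first operator
        y = prox_g(2 * x - z)    # resolvent of the second, at the reflection
        z = z + y - x            # governing-sequence update
    return prox_f(z)

# Example with gamma = 1: f = indicator of the nonnegative orthant,
# g(x) = 0.5 * ||x - b||^2, so the minimizer is the projection of b onto x >= 0.
b = np.array([1.0, -2.0, 3.0])
sol = douglas_rachford(lambda v: np.maximum(v, 0.0),
                       lambda v: (v + b) / 2.0,
                       np.zeros(3))
```

Here prox_g is the closed-form resolvent (v + b)/2 of the quadratic term; the expected solution is max(b, 0) componentwise.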

Abstract: Recently, a new solution concept for generalized Nash equilibrium problems was published by the author. This concept selects a reasonable equilibrium out of the typically infinitely many. The idea is to model the process of finding a compromise by solving parametrized generalized Nash equilibrium problems. In this paper we propose an algorithmic realization of the concept. The model produces a solution path, which, under suitable assumptions, is unique. The algorithm is a homotopy method that tries to follow this path. We use semismooth Newton steps as corrector steps in our algorithm in order to approximately solve the generalized Nash equilibrium problems for each given parameter. Given a unique solution path, we need three additional theoretical assumptions: a stationarity result for the merit function, a coercivity condition for the constraints, and an extended Mangasarian–Fromovitz constraint qualification. Under these, we can prove convergence of our semismooth tracing algorithm to the unique equilibrium to be selected. We also present convincing numerical results on a test library of problems from the literature. The algorithm also performs well on a number of problems that do not satisfy all the theoretical assumptions. PubDate: 2019-07-01

Abstract: The paper concerns an algorithm for approximating solutions of a variational inequality problem involving a Lipschitz continuous and monotone operator in a Hilbert space. The algorithm uses a new stepsize rule that does not depend on the Lipschitz constant and requires no linesearch procedure. The resulting algorithm requires only one projection onto the feasible set and one evaluation of the operator per iteration. The convergence and the convergence rate of the algorithm are established. Some experiments are performed to show the numerical behavior of the proposed algorithm and to compare its performance with that of other methods. PubDate: 2019-07-01
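A projection method with a stepsize estimated from successive iterates, so that the Lipschitz constant of the operator is never needed, can be sketched as follows; the specific update rule and the box-constrained test problem are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def projected_method(F, proj, x0, mu=0.9, lam=1.0, iters=200):
    """Projection method for VI(F, C): one projection and one operator
    evaluation per iteration, with the stepsize adapted from a local
    Lipschitz estimate instead of a known Lipschitz constant."""
    x = x0.copy()
    Fx = F(x)
    for _ in range(iters):
        x_new = proj(x - lam * Fx)
        Fx_new = F(x_new)
        dF = np.linalg.norm(Fx_new - Fx)
        if dF > 1e-12:  # shrink the step only when the estimate is informative
            lam = min(lam, mu * np.linalg.norm(x_new - x) / dF)
        x, Fx = x_new, Fx_new
    return x

# Strongly monotone example: F(x) = x - b on the box [0, 1]^3;
# the VI solution is the projection of b onto the box.
b = np.array([0.5, 2.0, -1.0])
x = projected_method(lambda v: v - b,
                     lambda v: np.clip(v, 0.0, 1.0),
                     np.zeros(3))
```

Each iteration performs exactly one projection and one operator evaluation, matching the per-iteration cost the abstract highlights.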

Abstract: A Newton-like method for unconstrained minimization is introduced in the present work. While the best-known implementations may require several factorizations per iteration or rather expensive matrix decompositions, the proposed method uses a single cheap factorization per iteration. Convergence and complexity results are presented, even in the case in which the subproblems’ Hessians are far from being Hessians of the objective function. Moreover, when the Hessian is Lipschitz-continuous, the proposed method enjoys \(O(\varepsilon ^{-3/2})\) evaluation complexity for first-order optimality and \(O(\varepsilon ^{-3})\) for second-order optimality, matching other recently introduced Newton methods for unconstrained optimization based on cubic regularization or special trust-region procedures. Fairly successful and fully reproducible numerical experiments are presented, and the corresponding software is freely available. PubDate: 2019-07-01

Abstract: The proximal point algorithm (PPA) is a fundamental method for convex programming. When applying the PPA to solve linearly constrained convex problems, we may prefer to choose an appropriate metric matrix to define the proximal regularization, so that the computational burden of the resulting PPA is reduced and the subproblems sometimes even admit closed-form or efficient solutions. This idea results in the so-called customized PPA (also known as preconditioned PPA), which covers the linearized ALM, the primal-dual hybrid gradient algorithm, and the ADMM as special cases. Since each customized PPA has a special structure and popular applications, it is interesting to ask whether we can design a simple relaxation strategy for these algorithms. In this paper we treat these customized PPA algorithms uniformly via a mixed variational inequality approach and propose a new relaxation strategy for them. Our idea is based on correcting the dual variables individually and does not rely on relaxing the primal variables, which is very different from previous works. From the variational inequality perspective, we prove global convergence and establish a worst-case convergence rate for these relaxed PPA algorithms. Finally, we demonstrate the performance improvements with some numerical results. PubDate: 2019-07-01
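The underlying building block is the plain proximal point iteration x_{k+1} = prox_{γf}(x_k); the customized and relaxed variants discussed above replace the Euclidean metric with a preconditioning matrix and correct the dual variables, which this minimal sketch does not attempt (the test function f(x) = |x| is an assumed example with a closed-form prox):

```python
import numpy as np

def proximal_point(prox, x0, iters=100):
    """Plain proximal point iteration: repeatedly apply the prox operator.
    Customized PPA variants change the metric defining the prox; relaxed
    variants additionally correct the (dual) iterates."""
    x = x0
    for _ in range(iters):
        x = prox(x)
    return x

# For f(x) = |x| and gamma = 0.5, prox_{gamma f} is soft-thresholding at 0.5.
gamma = 0.5
prox = lambda v: np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)
x = proximal_point(prox, np.array([3.0]))
```

Starting from 3.0, each step shrinks the iterate by 0.5 until it reaches the minimizer 0 of |x|.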

Abstract: We present a stabilized sequential quadratic programming method for equality constrained optimization. The proposed method uses the concepts of proximal point methods and primal-dual regularization. A sequence of regularized problems is approximately solved with the regularization parameter approaching zero. At each iteration, a regularized QP subproblem is solved to obtain a primal-dual search direction. A trust-funnel-like line search scheme is used to globalize the algorithm, and global convergence is shown under the weak assumption of the cone-continuity property. To achieve fast local convergence, a specially designed second-order correction (SOC) technique is adopted near a solution. Under the second-order sufficient condition and some weak conditions (among which no constraint qualification is involved), the regularized QP subproblem transitions to a stabilized QP subproblem in the limit. Possibly combined with the SOC step, the full step is accepted in the limit, and hence superlinear local convergence is achieved. Preliminary numerical results are reported, which are encouraging. PubDate: 2019-07-01

Abstract: Uzawa-type algorithms have proven competitive for solving large, sparse saddle point problems. They reduce the original problem to a lower-dimensional linear system and solve the derived equation instead. In this paper, we propose new Uzawa-MBB type and preconditioned Uzawa-MBB type algorithms for nonsymmetric saddle point problems. The main contributions of the paper are that both new algorithms are constructed from optimization algorithms, use a special descent direction based on Xu et al. (A new Uzawa-exact type algorithm for nonsymmetric saddle point problems, 2018. arXiv preprint arXiv:1802.04135 ), and combine the modified Barzilai–Borwein method with a modified GLL line search strategy to solve the derived least squares problem. In addition, we analyze the convergence of the two new algorithms. Applications to finite element discretizations of the Navier–Stokes equation with an unstable pair of approximation spaces, i.e. the Q1–P0 pair, are discussed, and numerical results are reported, demonstrating that the preconditioned Uzawa-MBB type algorithms are competitive with Krylov subspace methods. PubDate: 2019-07-01
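As background, the classical (long) Barzilai–Borwein stepsize that such methods build on can be sketched on a toy quadratic; this is the textbook BB1 rule, not the paper's modified BB method or its GLL line search:

```python
import numpy as np

def bb_gradient(A, b, x0, iters=200):
    """Gradient descent with the BB1 stepsize on 0.5*x'Ax - b'x (A SPD).
    The stepsize s's / s'y is a Rayleigh-quotient estimate of an inverse
    eigenvalue of A, built only from successive iterates and gradients."""
    x = x0.copy()
    g = A @ x - b
    alpha = 1e-2                         # conservative initial step
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-12:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        if s @ y > 1e-16:                # guard against a degenerate denominator
            alpha = (s @ s) / (s @ y)    # BB1 stepsize
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = bb_gradient(A, b, np.zeros(2))
```

The iteration is nonmonotone in the objective, which is why BB methods are typically paired with a nonmonotone (GLL-type) line search in practice.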

Abstract: We consider a regularized version of a Jacobi-type alternating direction method of multipliers (ADMM) for the solution of a class of separable convex optimization problems in a Hilbert space. The analysis shows that this method is equivalent to the standard proximal-point method applied in a Hilbert space with a transformed scalar product. The method therefore inherits the known convergence results from the proximal-point method and allows suitable modifications to get a strongly convergent variant. Some additional properties are also shown by exploiting the particular structure of the ADMM-type solution method. Applications and numerical results are provided for the domain decomposition method and potential (generalized) Nash equilibrium problems in a Hilbert space setting. PubDate: 2019-07-01

Abstract: We consider the problem of discrete arc sizing for tree-shaped potential networks with respect to infinitely many demand scenarios. This means that the arc sizes need to be feasible for an infinite set of scenarios. The problem can be seen as a strictly robust counterpart of a single-scenario network design problem, which is shown to be NP-complete even on trees. In order to obtain a tractable problem, we introduce a method for generating a finite scenario set such that optimality of a sizing for this finite set implies the sizing’s optimality for the originally given infinite set of scenarios. We further prove that the size of the finite scenario set is quadratically bounded above in the number of nodes of the underlying tree and that it can be computed in polynomial time. The resulting problem can then be solved as a standard mixed-integer linear optimization problem. Finally, we show the applicability of our theoretical results by computing globally optimal arc sizes for a realistic hydrogen transport network of Eastern Germany. PubDate: 2019-07-01

Abstract: The proximal bundle method has usually been presented for unconstrained convex optimization problems. In this paper, we develop an infeasible proximal bundle method for nonsmooth nonconvex constrained optimization problems. Using the improvement function, we transform the problem into an unconstrained one and then build a cutting plane model. The resulting algorithm allows effective control of the size of the quadratic programming subproblems via aggregation techniques. The novelty in our approach is that the objective and constraint functions can be arbitrary (regular) locally Lipschitz functions. In addition, global convergence from any starting point is proved, in the sense that every accumulation point of the iterative sequence is stationary for the improvement function. Finally, some encouraging numerical results with a MATLAB implementation are reported. PubDate: 2019-06-15

Abstract: We investigate the NP-hard problem of computing the spark of a matrix (i.e., the smallest number of linearly dependent columns), a key parameter in compressed sensing and sparse signal recovery. To that end, we identify polynomially solvable special cases, gather upper and lower bounding procedures, and propose several exact (mixed-)integer programming models and linear programming heuristics. In particular, we develop a branch-and-cut scheme to determine the girth of a matroid, focusing on the vector matroid case, for which the girth is precisely the spark of the representation matrix. Extensive numerical experiments demonstrate the effectiveness of our specialized algorithms compared to general-purpose black-box solvers applied to several mixed-integer programming models. PubDate: 2019-06-13
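To fix the definition, a brute-force spark computation (exponential in the number of columns, hence viable only for tiny matrices, unlike the MIP and branch-and-cut approaches of the paper) can be written as:

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest size of a linearly dependent column subset of A, found by
    exhaustive enumeration in increasing subset size. Returns inf when A
    has full column rank (no dependent subset exists)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # A subset of k columns is dependent iff its rank is below k.
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return float('inf')
```

For example, a matrix whose third column is the sum of the first two has spark 3, while any matrix containing a zero column has spark 1.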

Abstract: We apply novel inner-iteration preconditioned Krylov subspace methods to the interior-point algorithm for linear programming (LP). Inner-iteration preconditioners recently proposed by Morikuni and Hayami enable us to overcome the severe ill-conditioning of linear equations solved in the final phase of interior-point iterations. The Krylov subspace methods do not suffer from rank-deficiency and therefore no preprocessing is necessary even if rows of the constraint matrix are not linearly independent. By means of these methods, a new interior-point recurrence is proposed in order to omit one matrix-vector product at each step. Extensive numerical experiments are conducted over diverse instances of 140 LP problems including the Netlib, QAPLIB, Mittelmann and Atomizer Basis Pursuit collections. The largest problem has 434,580 unknowns. It turns out that our implementation is more robust than the standard public domain solvers SeDuMi (Self-Dual Minimization), SDPT3 (Semidefinite Programming Toh-Todd-Tütüncü) and the LSMR iterative solver in PDCO (Primal-Dual Barrier Method for Convex Objectives) without increasing CPU time. The proposed interior-point method based on iterative solvers succeeds in solving a fairly large number of LP instances from benchmark libraries under the standard stopping criteria. The work also presents a fairly extensive benchmark test for several renowned solvers including direct and iterative solvers. PubDate: 2019-06-06

Abstract: A realistic generalization of the Markov–Dubins problem, which is concerned with finding the shortest planar curve of constrained curvature joining two points with prescribed tangents, is the requirement that the curve passes through a number of prescribed intermediate points/nodes. We refer to this generalization as the Markov–Dubins interpolation problem. We formulate this interpolation problem as an optimal control problem and obtain results about the structure of its solution using optimal control theory. The Markov–Dubins interpolants consist of a concatenation of circular (C) and straight-line (S) segments. Abnormal interpolating curves are shown to exist and characterized; however, if the interpolating curve contains a straight-line segment then it cannot be abnormal. We derive results about the stationarity, or criticality, of the feasible solutions of certain structure. In particular, any feasible interpolant with arc types of CSC in each stage is proved to be stationary, i.e., critical. We propose a numerical method for computing Markov–Dubins interpolating paths. We illustrate the theory and the numerical approach by four qualitatively different examples. PubDate: 2019-06-01

Abstract: Euclidean norm computations over continuous variables appear naturally in the constraints or the objective of many problems in the optimization literature, possibly defining non-convex feasible regions or cost functions. When some other variables have discrete domains, the problem falls into the challenging Mixed Integer Nonlinear Programming (MINLP) class. For any MINLP where the nonlinearity is present only in the form of inequality constraints involving the Euclidean norm, we propose in this article an efficient methodology for linearizing the optimization problem at the cost of entirely controllable approximations, even for non-convex constraints. They make it possible to rely fully on Mixed Integer Linear Programming and all its strengths. We first empirically compare this linearization approach with a previously proposed linearization approach from the literature on the continuous k-center problem. The methodology is then successfully applied to a critical problem in the telecommunication satellite industry: the optimization of beam layouts in multibeam satellite systems. We provide a proof of the NP-hardness of this problem along with experiments on a realistic reference scenario. PubDate: 2019-06-01
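One standard way to linearize a planar Euclidean norm constraint, shown here only as an illustration of how the approximation error can be controlled (the paper's specific scheme may differ), is to replace the disk ||x||_2 <= r by k tangent half-planes:

```python
import numpy as np

def norm_cut_directions(k):
    """k unit directions a_j in the plane; the half-planes a_j . x <= r
    outer-approximate the disk ||x||_2 <= r, with error governed by k."""
    ang = 2.0 * np.pi * np.arange(k) / k
    return np.stack([np.cos(ang), np.sin(ang)], axis=1)

def poly_norm(x, A):
    # max_j a_j . x underestimates ||x||_2 by at most the factor cos(pi/k),
    # and is exact whenever x is aligned with one of the cut normals.
    return float(np.max(A @ x))

A = norm_cut_directions(64)
val = poly_norm(np.array([3.0, 4.0]), A)   # true Euclidean norm is 5
```

Since each cut a_j . x <= r is linear, the approximated constraint set fits directly into a MILP, and increasing k tightens the approximation at a controlled cost.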

Abstract: PIPS-SBB is a distributed-memory parallel solver with a scalable data distribution paradigm. It is designed to solve mixed integer programs (MIPs) with a dual-block angular structure, which is characteristic of deterministic-equivalent stochastic mixed-integer programs. In this paper, we present two different parallelizations of Branch & Bound (B&B), implementing both as extensions of PIPS-SBB, thus adding an additional layer of parallelism. In the first of the proposed frameworks, PIPS-PSBB, the coordination and load-balancing of the different optimization workers is done in a decentralized fashion. This new framework is designed to ensure that all available cores are processing the most promising parts of the B&B tree. The second, ug[PIPS-SBB,MPI], is a parallel implementation using the Ubiquity Generator, a universal framework for parallelizing B&B tree search that has been successfully applied to other MIP solvers. We show the effects of leveraging multiple levels of parallelism in potentially improving scaling performance beyond thousands of cores. PubDate: 2019-06-01

Abstract: The iteratively reweighted \(\ell _1\) algorithm is a popular method for solving a large class of optimization problems whose objective is the sum of a Lipschitz differentiable loss function and a possibly nonconvex sparsity-inducing regularizer. In this paper, motivated by the success of extrapolation techniques in accelerating first-order methods, we study how widely used extrapolation techniques such as those in Auslender and Teboulle (SIAM J Optim 16:697–725, 2006), Beck and Teboulle (SIAM J Imaging Sci 2:183–202, 2009), Lan et al. (Math Program 126:1–29, 2011) and Nesterov (Math Program 140:125–161, 2013) can be incorporated to possibly accelerate the iteratively reweighted \(\ell _1\) algorithm. We consider three versions of such algorithms. For each version, we exhibit an explicitly checkable condition on the extrapolation parameters so that the sequence generated provably clusters at a stationary point of the optimization problem. We also investigate global convergence under additional Kurdyka–Łojasiewicz assumptions on certain potential functions. Our numerical experiments show that our algorithms usually outperform the general iterative shrinkage and thresholding algorithm in Gong et al. (Proc Int Conf Mach Learn 28:37–45, 2013) and an adaptation of the iteratively reweighted \(\ell _1\) algorithm in Lu (Math Program 147:277–307, 2014, Algorithm 7) with nonmonotone line search for solving random instances of log-penalty regularized least squares problems, in terms of both CPU time and solution quality. PubDate: 2019-06-01
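A bare-bones iteratively reweighted \(\ell _1\) iteration with a fixed extrapolation weight, for the log-penalty least-squares model \(0.5\Vert Ax-b\Vert ^2 + \lambda \sum _i \log (1+|x_i|/\epsilon )\), might look as follows; the fixed weight beta and all parameter values are illustrative assumptions, whereas the paper derives checkable conditions on the extrapolation parameters:

```python
import numpy as np

def soft(v, t):
    """Componentwise soft-thresholding, the prox of a weighted l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irl1_extrapolated(A, b, lam=0.1, eps=0.5, beta=0.3, iters=300):
    """Iteratively reweighted l1 with a fixed extrapolation weight beta.
    Each step: extrapolate, recompute l1 weights from the log penalty,
    then take one proximal gradient step on the weighted l1 model."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x_prev = x = np.zeros(n)
    for _ in range(iters):
        y = x + beta * (x - x_prev)      # extrapolation step
        w = lam / (eps + np.abs(x))      # reweighting from the log penalty
        g = A.T @ (A @ y - b)
        x_prev, x = x, soft(y - g / L, w / L)
    return x

A = np.eye(3)
b = np.array([5.0, -4.0, 3.0])
x = irl1_extrapolated(A, b)
```

On this trivial identity-matrix instance, the iterates settle at a slightly shrunken copy of b, with the shrinkage per component determined by the converged weights.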

Abstract: In this work, we propose a predictor–corrector interior point method for linear programming in a primal–dual context, where the next iterate is chosen by minimizing a polynomial merit function of three variables: the first is the steplength, the second defines the central path, and the third models the weight of a corrector direction. The merit function is minimized subject to constraints defined by a neighborhood of the central path that allows wide steps. In this framework, we combine different directions, such as the predictor, corrector and centering directions, with the aim of producing a better one. The proposed method generalizes most predictor–corrector interior point methods, depending on the choice of the variables described above. A convergence analysis of the method is carried out, considering an initial point that has good practical performance, which results in Q-linear convergence of the iterates with polynomial complexity. Numerical experiments using the Netlib test set are performed, showing that this approach is competitive when compared to well-established solvers such as PCx. PubDate: 2019-06-01