Computational Optimization and Applications
Journal Prestige (SJR): 1.127; Citation Impact (CiteScore): 2; Number of Followers: 9
Hybrid journal (may contain Open Access articles)
ISSN (Print) 0926-6003; ISSN (Online) 1573-2894
Published by Springer-Verlag
• A conjugate direction based simplicial decomposition framework for solving
a specific class of dense convex quadratic programs
• Abstract: Many real-world applications can be modeled as convex quadratic problems. In the present paper, we tackle a specific class of quadratic programs having a dense Hessian matrix and a structured feasible set. We carefully analyze a simplicial-decomposition-like algorithmic framework that handles those problems effectively. We introduce a new master solver, called the Adaptive Conjugate Direction Method, and embed it in our framework. We also analyze the interaction of some techniques for speeding up the solution of the pricing problem. We report extensive numerical experiments on a benchmark of almost 1400 instances from specific and generic quadratic problems, showing the efficiency and robustness of the method compared to a commercial solver (CPLEX).
PubDate: 2020-03-01
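The decomposition pattern described in the abstract can be illustrated on a toy problem. This is only a minimal sketch: the vertex-pricing rule and SciPy's SLSQP stand in for the paper's pricing techniques and its Adaptive Conjugate Direction master solver, and the quadratic over the unit simplex is an invented example.

```python
# Sketch of a simplicial-decomposition loop for a convex QP over the unit
# simplex: min 0.5*||x - p||^2  s.t.  x >= 0, sum(x) = 1.  Pricing picks the
# simplex vertex with the most negative gradient component; the master
# problem re-optimizes over the convex hull of the vertices found so far.
import numpy as np
from scipy.optimize import minimize

def sd_qp_simplex(p, max_iter=10, tol=1e-8):
    n = len(p)
    grad = lambda x: x - p                    # gradient of 0.5*||x - p||^2
    verts = [np.eye(n)[0]]                    # start from vertex e_0
    x = verts[0].copy()
    for _ in range(max_iter):
        g = grad(x)
        j = int(np.argmin(g))                 # pricing over the simplex
        v = np.eye(n)[j]
        if g @ (v - x) >= -tol:               # no improving vertex left
            break
        verts.append(v)
        V = np.array(verts).T                 # master over conv(verts)
        m = V.shape[1]
        obj = lambda lam: 0.5 * np.sum((V @ lam - p) ** 2)
        res = minimize(obj, np.ones(m) / m, method="SLSQP",
                       bounds=[(0.0, 1.0)] * m,
                       constraints={"type": "eq",
                                    "fun": lambda lam: lam.sum() - 1.0})
        x = V @ res.x
    return x

p = np.array([0.5, 0.3, 0.2])   # p lies in the simplex, so x* = p
x = sd_qp_simplex(p)
```

Because the target point already lies in the simplex, the loop terminates after collecting all three vertices and recovering it exactly.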

• A self-adaptive method for pseudomonotone equilibrium problems and
variational inequalities
• Abstract: In this paper, we introduce and analyze a new algorithm for solving equilibrium problems involving pseudomonotone, Lipschitz-type bifunctions in a real Hilbert space. The algorithm requires solving only one strongly convex programming problem per iteration. A weak and a strong convergence theorem are established without knowledge of the Lipschitz-type constants of the bifunction. As a special case of the equilibrium problem, the variational inequality is also considered. Finally, numerical experiments are performed to illustrate the advantage of the proposed algorithm.
PubDate: 2020-03-01
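The idea of adapting the step size from observed quantities, rather than from a known Lipschitz constant, can be sketched for the variational inequality special case. This is an illustrative extragradient-style loop on an unconstrained toy operator, not the paper's algorithm; the matrix M and the parameter mu are chosen here only for the example.

```python
# Extragradient step with a self-adaptive step size for a monotone
# variational inequality on C = R^2 (so projections are the identity).
# The step is shrunk using the observed ratio ||x - y|| / ||F(x) - F(y)||,
# so no Lipschitz constant needs to be known in advance.
import numpy as np

M = np.array([[1.0, 2.0], [-2.0, 1.0]])   # monotone: symmetric part is I
F = lambda x: M @ x                        # unique solution x* = 0

x, lam, mu = np.array([5.0, -3.0]), 1.0, 0.5
for _ in range(200):
    y = x - lam * F(x)                     # prediction step
    x_new = x - lam * F(y)                 # correction step
    num = np.linalg.norm(x - y)
    den = np.linalg.norm(F(x) - F(y))
    if den > 0:
        lam = min(lam, mu * num / den)     # self-adaptive step-size update
    x = x_new
```

After the first iteration the step settles at mu/||M|| and the iterates contract geometrically to the solution at the origin.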

• Empirical study of exact algorithms for the multi-objective spanning tree
• Abstract: The multi-objective spanning tree (MoST) is an extension of the minimum spanning tree problem (MST) that, like its single-objective counterpart, arises in several practical applications. However, unlike the MST, which can be solved in polynomial time, the MoST is NP-hard. Several researchers have proposed techniques to solve the MoST, each with its own strengths and limitations. In this study, we examine those methods and divide them into two categories according to their outcomes: Pareto optimal sets and Pareto optimal fronts. To compare the techniques from the two groups, we investigate their behavior on 2-, 3- and 4-objective instances from different classes. We report the results of a computational experiment on 8100 complete and grid graphs in which we analyze specific features of each algorithm as well as the computational effort required to solve the instances.
PubDate: 2020-03-01
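For very small instances, the "Pareto optimal front" outcome that the study compares can be computed by brute force, which makes the concept concrete (the 4-node graph and its bi-objective edge costs below are invented for the example; real MoST instances require the specialized algorithms surveyed above).

```python
# Enumerate all spanning trees of a tiny bi-objective graph and filter the
# Pareto-optimal cost vectors (minimizing both objectives).
from itertools import combinations

edges = [(0, 1, (1, 3)), (1, 2, (2, 1)), (2, 3, (1, 2)),
         (0, 2, (3, 1)), (1, 3, (2, 2))]
n = 4

def is_spanning_tree(tree):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v, _ in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                       # edge closes a cycle
        parent[ru] = rv
    return len({find(i) for i in range(n)}) == 1

costs = []
for tree in combinations(edges, n - 1):
    if is_spanning_tree(tree):
        costs.append(tuple(sum(c[k] for *_, c in tree) for k in (0, 1)))

dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
front = sorted({c for c in costs if not any(dominates(d, c) for d in costs)})
```

On this toy graph the front consists of the two nondominated cost vectors (4, 6) and (6, 4), each achieved by a different spanning tree.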

• Optimally linearizing the alternating direction method of multipliers for
convex programming
• Abstract: The alternating direction method of multipliers (ADMM) is widely used in a variety of areas, and its variants tailored to different application scenarios have been deeply researched in the literature. Among them, the linearized ADMM has received particularly wide attention because of its efficiency and easy implementation. To theoretically guarantee convergence of the linearized ADMM, the step size for the linearized subproblems, i.e., the reciprocal of the linearization parameter, should be sufficiently small. On the other hand, small step sizes decelerate the convergence numerically. Hence, it is interesting to probe the optimal (largest) value of the step size that guarantees convergence of the linearized ADMM. Such an analysis has been lacking in the literature. In this paper, we provide a rigorous mathematical analysis for finding this optimal step size and accordingly set up the optimal version of the linearized ADMM in the convex programming context. The global convergence and the worst-case convergence rate measured by the iteration complexity of this optimal version are proved as well.
PubDate: 2020-03-01
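The step-size trade-off the abstract describes can be seen in a minimal linearized-ADMM sketch for LASSO, where the exact x-subproblem (a linear solve) is replaced by a single gradient step. This is an illustrative instance under a conservative step-size bound, not the paper's optimal rule; the data A, b and parameters are invented for the example.

```python
# Linearized ADMM sketch for min 0.5*||A x - b||^2 + gamma*||z||_1
# subject to x - z = 0, with a gradient (linearized) x-update using
# tau <= 1/(||A'A|| + rho).
import numpy as np

A = np.eye(3)
b = np.array([3.0, 0.5, -2.0])
gamma, rho, tau = 1.0, 1.0, 0.4

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, u = np.zeros(3), np.zeros(3), np.zeros(3)
for _ in range(1000):
    grad = A.T @ (A @ x - b) + rho * (x - z + u)
    x = x - tau * grad                   # linearized (gradient) x-update
    z = soft(x + u, gamma / rho)         # proximal z-update
    u = u + x - z                        # dual update
```

With A = I the LASSO solution is plain soft-thresholding of b, so the iterates approach [2, 0, -1]; a larger admissible tau, which is what the paper's analysis characterizes, would reach it in fewer iterations.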

• A branch-and-cut algorithm for solving mixed-integer semidefinite
optimization problems
• Abstract: We consider a cutting-plane algorithm for solving mixed-integer semidefinite optimization (MISDO) problems. In this algorithm, the positive semidefinite (psd) constraint is relaxed, and the resultant mixed-integer linear optimization problem is solved repeatedly, imposing at each iteration a valid inequality for the psd constraint. We prove the convergence properties of the algorithm. Moreover, to speed up the computation, we devise a branch-and-cut algorithm, in which valid inequalities are dynamically added during a branch-and-bound procedure. We test the computational performance of our cutting-plane and branch-and-cut algorithms on three types of MISDO problems: random instances, computing restricted isometry constants, and robust truss topology design. Our experimental results demonstrate that, for many problem instances, our branch-and-cut algorithm delivers superior performance compared with general-purpose MISDO solvers in terms of computational efficiency and stability.
PubDate: 2020-03-01
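The cut-generation step in such schemes is easy to sketch: if a candidate matrix X violates the psd constraint, an eigenvector for its smallest eigenvalue yields a valid linear inequality that X violates. The helper below is a generic illustration of this step, not the paper's implementation.

```python
# Separation step for a relaxed psd constraint: for X not psd, the
# eigenvector v of the most negative eigenvalue gives the valid inequality
# v' X v >= 0, which the current X violates.
import numpy as np

def psd_cut(X, tol=1e-9):
    """Return (v, v'Xv) for the smallest eigenvalue, or None if X is psd."""
    w, V = np.linalg.eigh(X)
    if w[0] >= -tol:
        return None                      # X already (numerically) psd
    v = V[:, 0]
    return v, float(v @ X @ v)

X = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues -1 and 3: not psd
cut = psd_cut(X)
```

Since eigh returns a unit eigenvector, the violation v'Xv equals the smallest eigenvalue, here -1.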

• A line search exact penalty method for nonlinear semidefinite programming
• Abstract: In this paper, we present a line search exact penalty method for solving the nonlinear semidefinite programming (SDP) problem. Compared with the traditional sequential semidefinite programming (SSDP) method, which requires the subproblem at every iterate to be compatible, this method is more practical. We first use a robust subproblem, which is always feasible, to obtain a trial step, then compute a search direction either from a traditional SSDP subproblem or from a quadratic optimization subproblem with the penalty term. This two-phase strategy with the $$l_1$$ exact penalty function is employed to promote global convergence, which is analyzed without assuming any constraint qualifications. Some preliminary numerical results are reported.
PubDate: 2020-03-01
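The exactness of the $$l_1$$ penalty, on which such methods rely, can be seen on a one-dimensional toy problem (an illustration of the general principle, not of the paper's SDP setting): once the penalty parameter exceeds the optimal multiplier, the unconstrained penalized minimizer coincides with the constrained solution.

```python
# l_1 exact penalty on:  min x^2  s.t.  x >= 1  (optimal multiplier is 2).
# With sigma = 3 > 2, the penalized function x^2 + sigma*max(1 - x, 0)
# has its unconstrained minimum exactly at the constrained solution x* = 1.
from scipy.optimize import minimize_scalar

sigma = 3.0
penalized = lambda x: x**2 + sigma * max(1.0 - x, 0.0)
res = minimize_scalar(penalized, bounds=(-5.0, 5.0), method="bounded",
                      options={"xatol": 1e-8})
```

The minimizer sits at the kink x = 1: the penalty is exact, not merely asymptotic as in quadratic penalty methods.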

• Fast feasibility check of the multi-material vertical alignment problem in road design
• Abstract: When building a road, it is critical to select a vertical alignment that satisfies design and safety constraints. Finding such a vertical alignment is not necessarily a feasible problem, and the models describing it generally involve a large number of variables and constraints. This paper is dedicated to rapidly proving the feasibility or infeasibility of a mixed-integer linear program (MILP) modeling the vertical alignment problem. To do so, we take advantage of the particular structure of the MILP, and we prove that only a few of the MILP's constraints determine the feasibility of the problem. In addition, we propose a method to build a feasible solution to the MILP that does not involve integer variables. As the numerical results emphasize, this saves time both in proving feasibility and in finding a feasible vertical alignment: it is on average 75 times faster to prove feasibility and 10 times faster to build a feasible solution.
PubDate: 2020-03-01

• Algorithms for stochastic optimization with function or expectation
constraints
• Abstract: This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with a function or expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with a constraint on the decision variables. We show that this algorithm exhibits the optimal $${{{\mathcal {O}}}}(1/\epsilon ^2)$$ rate of convergence, in terms of both optimality gap and constraint violation, when the objective and constraint functions are generally convex, where $$\epsilon$$ denotes the optimality gap and infeasibility. Moreover, we show that this rate of convergence can be improved to $${{{\mathcal {O}}}}(1/\epsilon )$$ if the objective and constraint functions are strongly convex. We then present a variant of CSA, namely the cooperative stochastic parameter approximation (CSPA) algorithm, to deal with the situation in which the constraint is defined over problem parameters, and show that it exhibits an optimal rate of convergence similar to CSA's. It is worth noting that CSA and CSPA are primal methods that require neither iterations in the dual space nor estimates of the size of the dual variables. To the best of our knowledge, this is the first time that such optimal SA methods for solving function or expectation constrained stochastic optimization have been presented in the literature.
PubDate: 2020-02-18
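The cooperative switching idea behind CSA-type methods can be caricatured deterministically: when the (estimated) constraint is violated beyond a tolerance, step along a subgradient of the constraint; otherwise step along the objective gradient. The toy problem, step size, and tolerance below are invented for the sketch; CSA itself works with noisy estimates and carefully scheduled step sizes.

```python
# Deterministic caricature of a switching (sub)gradient scheme on
# min x^2  s.t.  g(x) = 1 - x <= 0, whose solution is x* = 1.
gamma, tol = 0.01, 1e-3
x, history = 3.0, []
for k in range(2000):
    if 1.0 - x > tol:              # constraint violated: reduce infeasibility
        x = x + gamma              # a subgradient of g is -1
    else:                          # feasible enough: descend the objective
        x = x - gamma * 2.0 * x
    history.append(x)

x_avg = sum(history[-600:]) / 600.0   # averaged late iterate, approx. x* = 1
```

With a constant step the iterates oscillate in a band of width about gamma around the solution, which is why SA analyses report guarantees for averaged iterates.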

• Numerically tractable optimistic bilevel problems
• Abstract: We consider a class of optimistic bilevel problems. Specifically, we address bilevel problems in which, at the lower level, the objective function is fully convex and the feasible set does not depend on the upper-level variables. We show that this nontrivial class of mathematical programs is sufficiently broad to encompass significant real-world applications and proves to be numerically tractable. In this respect, we establish that the stationary points for a relaxation of the original problem can be obtained by addressing a suitable generalized Nash equilibrium problem. The latter game is proven to be convex and to have a nonempty solution set. Leveraging this correspondence, we provide a provably convergent, easily implementable scheme to calculate stationary points of the relaxed bilevel program. As witnessed by numerical experiments on an application in economics, this algorithm turns out to be numerically viable even for large-scale problems.
PubDate: 2020-02-15

• Efficient local search procedures for quadratic fractional programming
problems
• Abstract: The problem of minimizing the sum of a convex quadratic function and the ratio of two quadratic functions can be reformulated as a Celis–Dennis–Tapia (CDT) problem and thus, according to some recent results, can be solved in polynomial time. However, the degree of the known polynomial-time approaches for these problems is fairly large, which justifies the search for efficient local search procedures. In this paper the CDT reformulation of the problem is exploited to define a local search algorithm. On the theoretical side, its convergence to a stationary point is proved. On the practical side, it is shown through different numerical experiments that the main cost of the algorithm is a single Schur decomposition performed during the initialization phase. The theoretical and practical results for this algorithm are further strengthened in a special case.
PubDate: 2020-02-08

• A modified proximal point method for DC functions on Hadamard manifolds
• Abstract: We study the convergence of a modified proximal point method for DC functions on Hadamard manifolds. We use the iteration computed by the proximal point method for DC functions, extended to the Riemannian context by Souza and Oliveira (J Glob Optim 63:797–810, 2015), to define a descent direction which improves the convergence of the method. Our method also accelerates the classical proximal point method for convex functions. We illustrate our results with some numerical experiments.
PubDate: 2020-02-08
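In the flat (Euclidean) special case of a Hadamard manifold, the basic DC iteration that such methods build on has a very compact form. The one-dimensional decomposition below is an invented example illustrating the classical DCA step, not the paper's modified Riemannian method.

```python
# DCA-style step for f = g - h with g(x) = x^4, h(x) = x^2 on the real
# line: each iteration solves grad g(x_{k+1}) = grad h(x_k), i.e.
# 4 x^3 = 2 x_k, which has the closed form below.
x = 1.0
for _ in range(60):
    x = (x / 2.0) ** (1.0 / 3.0)   # x_{k+1} = argmin_x g(x) - h'(x_k) * x

# The iterates converge to a critical point of f(x) = x^4 - x^2,
# namely x = 1/sqrt(2), where f'(x) = 4x^3 - 2x = 0.
```

Near the limit the map contracts with factor 1/3, so 60 iterations reach machine precision.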

• A discretization algorithm for nonsmooth convex semi-infinite programming
problems based on bundle methods
• Abstract: We propose a discretization algorithm, based on a bundle method, for solving a class of nonsmooth convex semi-infinite programming problems. Instead of employing inexact calculations to evaluate the lower-level problem, we carry out a discretization scheme. The discretization method generates a number of discretized problems, which are solved by the bundle method. In particular, the subproblem used to generate a new point is independent of the number of constraints of the discretized problem. We apply a refinement step that guarantees the convergence of the bundle method for the discretized problems and also reduces the cost of evaluating the constraint functions during the iterations. In addition, we adopt an aggregation technique to manage the bundle information coming from previous steps. Both a theoretical convergence analysis and preliminary computational results are reported. The results obtained show the good performance of the new algorithm.
PubDate: 2020-02-07
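The plain discretization idea underlying such schemes is simple to demonstrate on a toy semi-infinite program (the problem below is invented for the illustration; the paper's contribution is the bundle machinery and refinement step layered on top of it).

```python
# Discretization of the semi-infinite constraint  x >= sin(y) for all
# y in [0, 2*pi]  in the toy problem "minimize x": on a finite grid the
# discretized problem has the closed-form solution max over grid points.
import numpy as np

grid = np.linspace(0.0, 2.0 * np.pi, 1001)   # discretized index set
x_opt = np.max(np.sin(grid))                  # minimal feasible x on the grid

# Refining the grid drives x_opt to the true optimal value sup sin = 1.
```

The gap between the discretized and true optimum shrinks quadratically with the grid spacing here, which is why refinement steps are coupled with the outer solves.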

• Error estimates for the finite element approximation of bilinear boundary
control problems
• Abstract: In this article a special class of nonlinear optimal control problems involving a bilinear term in the boundary condition is studied. Problems of this kind arise, for instance, in the identification of an unknown space-dependent Robin coefficient from a given measurement of the state, or when the Robin coefficient can be controlled in order to reach a desired state. Necessary and sufficient optimality conditions are derived, and several discretization approaches for the numerical solution of the optimal control problem are investigated. We consider both a full discretization and the postprocessing approach, meaning that we compute an improved control by a pointwise evaluation of the first-order optimality condition. For both approaches finite element error estimates are shown, and the validity of these results is confirmed by numerical experiments.
PubDate: 2020-02-07
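A representative model problem of this type reads as follows (a sketch under assumed notation: $$\varOmega$$ the domain with boundary $$\varGamma$$, $$u_d$$ a desired state, $$\alpha > 0$$ a regularization weight; the paper's exact functional-analytic setting may differ):

```latex
\min_{q}\; \frac{1}{2}\,\|u - u_d\|_{L^2(\Omega)}^2
         + \frac{\alpha}{2}\,\|q\|_{L^2(\Gamma)}^2
\quad\text{subject to}\quad
-\Delta u = f \ \text{in } \Omega, \qquad
\partial_n u + q\,u = g \ \text{on } \Gamma ,
```

where the bilinear term $$q\,u$$ in the Robin boundary condition is what makes the control-to-state map, and hence the control problem, nonlinear.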

• A derivative-free optimization algorithm for the efficient minimization of
functions obtained via statistical averaging
• Abstract: This paper considers the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of design parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in engineering applications. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. The present paper proposes a new optimization algorithm to adjust the amount of sampling associated with each function evaluation, making function evaluations more accurate (and, thus, more expensive), as required, as convergence is approached. The work builds on our algorithm for Delaunay-based Derivative-free Optimization via Global Surrogates ($${\varDelta }$$-DOGS, see JOGO https://doi.org/10.1007/s10898-015-0384-2). The new algorithm, dubbed $$\alpha$$-DOGS, substantially reduces the overall cost of the optimization process for problems of this important class. Further, under certain well-defined conditions, rigorous proof of convergence to the global minimum of the problem considered is established.
PubDate: 2020-02-04

• A comparison of solution approaches for the numerical treatment of
or-constrained optimization problems
• Abstract: Mathematical programs with or-constraints form a new class of disjunctive optimization problems with inherent practical relevance. In this paper, we provide a comparison of three different solution methods for the numerical treatment of this problem class, inspired by classical approaches from disjunctive programming. First, we study the replacement of the or-constraints by nonlinear inequality constraints using suitable NCP-functions. Second, we transfer the or-constrained program into a mathematical program with switching or complementarity constraints, which can be treated with the aid of well-known relaxation methods. Third, a direct Scholtes-type relaxation of the or-constraints is investigated. A numerical comparison of all these approaches, based on three essentially different model programs from or-constrained optimization, closes the paper.
PubDate: 2020-01-31
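The disjunction at the heart of or-constrained programs has a compact nonsmooth reformulation, which is the starting point for NCP-function and relaxation approaches: "g1(x) <= 0 OR g2(x) <= 0" holds exactly when min(g1(x), g2(x)) <= 0. The two constraint functions below are invented for the illustration.

```python
# Feasibility test for a one-dimensional or-constraint whose feasible set
# is the union (-inf, -1] U [1, +inf): either g1 or g2 must hold.
g1 = lambda x: 1.0 - x          # satisfied for x >= 1
g2 = lambda x: 1.0 + x          # satisfied for x <= -1

or_feasible = lambda x: min(g1(x), g2(x)) <= 0.0
```

The min makes the reformulated constraint nonsmooth at points where both branches are tight, which is exactly the difficulty the compared solution methods address.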

• On semi-infinite systems of convex polynomial inequalities and polynomial
optimization problems
• Abstract: We consider the semi-infinite system of polynomial inequalities of the form $$\begin{aligned} {{\mathbf {K}}}:=\{x\in {{\mathbb {R}}}^m\mid p(x,y)\ge 0,\quad \forall y\in S\subseteq {{\mathbb {R}}}^n\}, \end{aligned}$$ where p(x, y) is a real polynomial in the variables x and the parameters y, the index set S is a basic semialgebraic set in $${{\mathbb {R}}}^n$$, and $$-p(x,y)$$ is convex in x for every $$y\in S$$. We propose a procedure to construct approximate semidefinite representations of $${{\mathbf {K}}}$$, indexed by two parameters. As these two indices increase, the semidefinite representation sets expand and contract, respectively, and under some assumptions they can approximate $${{\mathbf {K}}}$$ as closely as desired. In some special cases, we can fix one of the two indices or both. We then consider the optimization problem of minimizing a convex polynomial over $${{\mathbf {K}}}$$. We present an SDP relaxation method for this problem using strategies similar to those used in constructing the approximate semidefinite representations of $${{\mathbf {K}}}$$. Under certain assumptions, some approximate minimizers of the optimization problem can also be obtained from the SDP relaxations. In some special cases, we show that the SDP relaxation for the optimization problem is exact and all minimizers can be extracted.
PubDate: 2020-01-21

• Tensor neural network models for tensor singular value decompositions
• Abstract: Tensor decompositions have become increasingly prevalent in recent years. Traditionally, tensors are represented or decomposed as a sum of rank-one outer products using the CANDECOMP/PARAFAC model, the Tucker model, or some variation thereof. The motivation of these decompositions is to find an approximate representation of a given tensor. The main purpose of this paper is to develop two neural network models for finding an approximation, based on the t-product, of a given third-order tensor. Theoretical analysis shows that each of the neural network models has guaranteed convergence. Computer simulation results further substantiate that the models can effectively find the left and right singular tensor subspaces.
PubDate: 2020-01-20
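The t-product underlying t-SVD models can be sketched directly: it is computed by an FFT along the third mode, facewise matrix products in the transform domain, and an inverse FFT. The code below illustrates that definition and checks it against the t-product identity tensor; the tensor sizes and random data are chosen only for the example.

```python
# t-product of third-order tensors via FFT along the third mode.
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3)."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum("ijk,jlk->ilk", Ah, Bh)   # facewise matrix products
    return np.real(np.fft.ifft(Ch, axis=2))

# The identity tensor (identity matrix in the first frontal slice, zeros
# elsewhere) is the unit element of the t-product.
n, m, p = 3, 2, 4
I = np.zeros((n, n, p)); I[:, :, 0] = np.eye(n)
rng = np.random.default_rng(0)
B = rng.standard_normal((n, m, p))
```

In the transform domain every frontal slice of the identity tensor becomes the identity matrix, so t_product(I, B) returns B exactly.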

• Relative-error inertial-relaxed inexact versions of Douglas-Rachford and ADMM splitting algorithms
• Abstract: This paper derives new inexact variants of the Douglas-Rachford splitting method for maximal monotone operators and of the alternating direction method of multipliers (ADMM) for convex optimization. The analysis is based on a new inexact version of the proximal point algorithm that includes both an inertial step and overrelaxation. We apply our new inexact ADMM method to LASSO and logistic regression problems and obtain somewhat better computational performance than earlier inexact ADMM methods.
PubDate: 2020-01-16
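For orientation, the exact, non-inertial Douglas-Rachford iteration that these inexact variants build on can be written in a few lines; the two-interval feasibility problem below is an invented toy example.

```python
# Douglas-Rachford splitting for finding a point in the intersection of
# the intervals [0, 2] and [1, 3] on the real line: prox operators reduce
# to projections, and the governing sequence z drives x into both sets.
import numpy as np

P1 = lambda x: np.clip(x, 0.0, 2.0)       # projection onto [0, 2]
P2 = lambda x: np.clip(x, 1.0, 3.0)       # projection onto [1, 3]

z = 10.0
for _ in range(100):
    x = P1(z)
    z = z + P2(2.0 * x - z) - x           # Douglas-Rachford update
x = P1(z)                                 # shadow sequence: the solution
```

The inexact variants in the paper allow each resolvent/projection to be evaluated approximately under a relative-error criterion, with an inertial extrapolation added to the z-update.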

• Solution methods for a min–max facility location problem with regional
customers considering closest Euclidean distances
• Abstract: We study a facility location problem where a single facility serves multiple customers, each represented by a (possibly non-convex) region in the plane. The aim is to locate the facility so that the maximum of the closest Euclidean distances between the facility and the customer regions is minimized. Assuming that each customer region is mixed-integer second-order cone representable, we first give a mixed-integer second-order cone programming formulation of the problem. Second, we consider a solution method based on Minkowski sums of sets. Both solution methods are extended to the constrained case, in which the facility is to be located on a (possibly non-convex) subset of the plane. Finally, the two methods are compared in terms of solution quality and time in extensive computational experiments.
PubDate: 2020-01-04
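In the simplest degenerate case where every customer region is a single point, the min-max objective reduces to the classical 1-center problem, which a direct search can handle. This sketch only illustrates the objective; the paper's conic and Minkowski-sum methods are exact approaches for genuine regions. The customer points and solver settings are invented for the example.

```python
# Min-max facility location for point customers: minimize the largest
# Euclidean distance from the facility x to the customer points.
import numpy as np
from scipy.optimize import minimize

customers = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
worst = lambda x: np.max(np.linalg.norm(customers - x, axis=1))
res = minimize(worst, x0=np.array([5.0, 5.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
```

For these three points the smallest enclosing circle has center (1, 0) and radius 1, so the optimal worst-case distance is 1; Nelder-Mead is used because the max-of-norms objective is nonsmooth.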

• Secant update version of quasi-Newton PSB with weighted multisecant
equations
• Abstract: Quasi-Newton methods are often used in nonlinear optimization. In these methods, the quality and cost of the estimate of the Hessian matrix have a major influence on the efficiency of the optimization algorithm, which has a huge impact for computationally costly problems. One strategy to create a more accurate estimate of the Hessian is to maximize the use of available information during its computation, by combining different characteristics. The Powell-Symmetric-Broyden (PSB) method imposes, for example, the satisfaction of the last secant equation, the so-called secant update property, and the symmetry of the Hessian (Powell in Nonlinear Programming 31–65, 1970). Imposing the satisfaction of more secant equations would be the natural next step to include more information in the Hessian; however, Schnabel proved that this is impossible (Schnabel in Quasi-Newton methods using multiple secant equations, 1983). Penalized PSB (pPSB) works around this impossibility by producing a symmetric Hessian while penalizing the violation of the multiple secant equations with weight factors (Gratton et al. in Optim Methods Softw 30(4):748–755, 2015); in doing so, however, it loses the secant update property. In this paper, we combine the properties of PSB and pPSB by adding the secant update property to pPSB, which yields the secant update penalized PSB (SUpPSB). The new formula also avoids matrix inversions, which makes it easier to compute, and SUpPSB performs better overall than pPSB.
PubDate: 2020-01-02
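The classical single-secant PSB update that SUpPSB extends is easy to state and verify numerically: given a step s and gradient difference y, it produces a symmetric matrix satisfying the secant equation. The starting matrix and random vectors below are chosen only for the check; the paper's weighted multisecant formula is a generalization of this update.

```python
# Classical PSB update: symmetric correction of B enforcing B_new @ s = y.
import numpy as np

def psb_update(B, s, y):
    r = y - B @ s
    ss = s @ s
    return (B + (np.outer(r, s) + np.outer(s, r)) / ss
              - (r @ s) * np.outer(s, s) / ss**2)

rng = np.random.default_rng(1)
B = np.diag([1.0, 2.0, 3.0])          # symmetric starting estimate
s = rng.standard_normal(3)
y = rng.standard_normal(3)
B_new = psb_update(B, s, y)
```

A quick check confirms both defining properties: B_new @ s equals y (secant update property) and B_new stays symmetric.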

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762