Authors:M. V. Dolgopolik Pages: 237 - 296 Abstract: In this article we present a general theory of augmented Lagrangian functions for cone constrained optimization problems that allows one to study almost all known augmented Lagrangians for these problems within a unified framework. We develop a new general method for proving the existence of global saddle points of augmented Lagrangian functions, called the localization principle. The localization principle unifies, generalizes and sharpens most of the known results on the existence of global saddle points and, in essence, reduces the problem of the existence of global saddle points to a local analysis of optimality conditions. With the use of the localization principle we obtain the first necessary and sufficient conditions for the existence of a global saddle point of an augmented Lagrangian for cone constrained minimax problems via both second and first order optimality conditions. In the second part of the paper, we present a general approach to the construction of globally exact augmented Lagrangian functions. The general approach developed in this paper allows us not only to sharpen most of the existing results on globally exact augmented Lagrangians, but also to construct the first globally exact augmented Lagrangian functions for equality constrained optimization problems, for nonlinear second order cone programs and for nonlinear semidefinite programs. These globally exact augmented Lagrangians can be utilized to design new superlinearly (or even quadratically) convergent optimization methods for cone constrained optimization problems. PubDate: 2018-06-01 DOI: 10.1007/s10898-017-0603-0 Issue No:Vol. 71, No. 2 (2018)
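As background, the classical augmented Lagrangian for equality constraints, which the cone-constrained theory above generalizes, can be sketched in a few lines of NumPy. The problem, function names, and parameter values below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# Hypothetical problem: minimize f(x) = ||x||^2  s.t.  h(x) = x1 + x2 - 1 = 0.
# Known solution: x* = (0.5, 0.5) with multiplier lambda* = -1.
def f_grad(x):
    return 2.0 * x

def h(x):
    return x[0] + x[1] - 1.0

def method_of_multipliers(c=10.0, outer=20, inner=500, lr=1e-2):
    """Classical augmented Lagrangian (method of multipliers):
    minimize L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2 in x,
    then update lam <- lam + c*h(x)."""
    x, lam = np.zeros(2), 0.0
    h_grad = np.array([1.0, 1.0])       # gradient of the linear constraint
    for _ in range(outer):
        for _ in range(inner):          # inner minimization by gradient descent
            x -= lr * (f_grad(x) + (lam + c * h(x)) * h_grad)
        lam += c * h(x)                 # first-order multiplier update
    return x, lam

x, lam = method_of_multipliers()
```

With exact inner solves the multiplier error contracts by a factor 1/(1+c) per outer iteration on this example, so a moderate penalty c already gives fast convergence.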

Authors:Takahito Kuno Pages: 297 - 311 Abstract: The simplicial algorithm is a popular branch-and-bound approach to the convex maximization problem with multiple local maxima. In this paper, we discuss some difficulties revealed when implementing this algorithm under the \(\omega \) -subdivision rule. To overcome these difficulties, we modify the bounding process and extend the \(\omega \) -subdivision rule. We also report numerical results for the simplicial algorithm under the new subdivision rule. PubDate: 2018-06-01 DOI: 10.1007/s10898-018-0619-0 Issue No:Vol. 71, No. 2 (2018)

Authors:Shinji Yamada; Akiko Takeda Pages: 313 - 339 Abstract: Optimization problems whose objective function and constraints are quadratic polynomials are called quadratically constrained quadratic programs (QCQPs). QCQPs are NP-hard in general and are important in optimization theory and practice. There have been many studies on solving QCQPs approximately. Among them, the semidefinite program (SDP) relaxation is a well-known convex relaxation method. In recent years, many researchers have tried to find better relaxed solutions by adding linear constraints as valid inequalities. On the other hand, the SDP relaxation requires a long computation time and has high space complexity for large-scale problems in practice; therefore, the SDP relaxation may not be useful for such problems. In this paper, we propose a new convex relaxation method that is weaker but faster than SDP relaxation methods. The proposed method transforms a QCQP into a Lagrangian dual optimization problem and successively solves subproblems while updating the Lagrange multipliers. The subproblem in our method is a QCQP with only one constraint, for which we propose an efficient algorithm. Numerical experiments confirm that our method can quickly find a relaxed solution with an appropriate termination condition. PubDate: 2018-06-01 DOI: 10.1007/s10898-018-0617-2 Issue No:Vol. 71, No. 2 (2018)
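The paper's one-constraint subproblem algorithm is not specified in the abstract; purely as an illustration of why a single quadratic constraint admits an efficient algorithm, the following hypothetical sketch solves the ball-constrained convex special case (a trust-region-style subproblem) by bisection on the Lagrange multiplier of the constraint.

```python
import numpy as np

def ball_constrained_qp(Q, c, delta, tol=1e-10):
    """Minimize 0.5*x^T Q x + c^T x subject to ||x||^2 <= delta^2 (Q positive
    definite), via bisection on the multiplier of the ball constraint."""
    x0 = np.linalg.solve(Q, -c)                 # unconstrained minimizer
    if np.linalg.norm(x0) <= delta:
        return x0, 0.0                          # constraint inactive
    n = len(c)
    xnorm = lambda lam: np.linalg.norm(np.linalg.solve(Q + lam * np.eye(n), -c))
    lo, hi = 0.0, 1.0
    while xnorm(hi) > delta:                    # bracket the active multiplier
        hi *= 2.0
    while hi - lo > tol:                        # bisection on ||x(lam)|| = delta
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if xnorm(mid) > delta else (lo, mid)
    lam = 0.5 * (lo + hi)
    return np.linalg.solve(Q + lam * np.eye(n), -c), lam

# Hypothetical data: Q = I, c = (2, 0), delta = 1 gives x* = (-1, 0), lam* = 1.
x, lam = ball_constrained_qp(np.eye(2), np.array([2.0, 0.0]), 1.0)
```

The key property exploited is that ||x(lam)|| is monotonically decreasing in the multiplier lam, so a one-dimensional root search suffices.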

Authors:Q. L. Dong; Y. C. Tang; Y. J. Cho; Th. M. Rassias Pages: 341 - 360 Abstract: In this paper, we first review the projection and contraction methods for solving the split feasibility problem (SFP), and then, by using the inverse strongly monotone property of the underlying operator of the SFP, we improve the “optimal” step length to obtain modified projection and contraction methods. We also consider the corresponding relaxed variants of the modified projection and contraction methods, where the two closed convex sets are both level sets of convex functions. Some convergence theorems for the proposed methods are established under suitable conditions. Finally, we give some numerical examples to illustrate that the modified projection and contraction methods have an advantage over other methods and greatly improve on the original projection and contraction methods. PubDate: 2018-06-01 DOI: 10.1007/s10898-018-0628-z Issue No:Vol. 71, No. 2 (2018)
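For orientation, a minimal sketch of the classical CQ projection algorithm of Byrne for the SFP (find x in C with Ax in Q), which projection and contraction methods of the kind studied above refine; the matrix and sets below are hypothetical, with both sets taken as boxes so the projections are closed-form.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=2000):
    """Byrne's CQ algorithm for the SFP (find x in C with Ax in Q):
    x_{k+1} = P_C(x_k - gamma * A^T (A x_k - P_Q(A x_k))),
    with fixed step length gamma in (0, 2/||A||^2)."""
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Hypothetical instance: C = [0,1]^2, Q = [1,3]^2, A = diag(1, 2); ||A||^2 = 4.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
proj_C = lambda z: np.clip(z, 0.0, 1.0)
proj_Q = lambda z: np.clip(z, 1.0, 3.0)
x = cq_algorithm(A, proj_C, proj_Q, np.zeros(2), gamma=0.4)
```

The step-length restriction gamma < 2/||A||^2 is exactly what the paper's improved "optimal" step length relaxes via the inverse strong monotonicity of the underlying operator.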

Authors:Shengguan Xu; Hongquan Chen Pages: 361 - 381 Abstract: A novel Nash-EGO algorithm is presented to extend the usage of efficient global optimization (EGO) to large-scale optimizations by coupling it with a Nash game strategy. In our Nash-EGO, the large-scale design variables are split into several subsets by adopting Nash variable territory splitting, and each EGO optimizer acts as a player of a specific Nash game. All the EGO players are coupled with each other and assigned to optimize their own subsets synchronously in parallel, producing the corresponding approximate optimal subsets. In this way, the performance of the EGO players can be expected to remain at a high level, since each player now handles only its own small-scale subset instead of facing the large-scale problem directly. A set of typical cases with a small number of variables is first selected to validate the performance of each EGO player. Then, the proposed Nash-EGO is tested on representative functions with up to 30 design variables. Finally, more challenging cases with 90 design variables are constructed and investigated to mimic real large-scale optimizations. The tests show that, compared with conventional EGO, the present algorithm can always find near-optimal solutions closer to the theoretical values, and it does so with far less CPU time, up to hundreds of times faster. All cases with 30 or 90 design variables show similarly efficient performance, which indicates that the present algorithm has the potential to cope with real large-scale optimizations. PubDate: 2018-06-01 DOI: 10.1007/s10898-018-0608-3 Issue No:Vol. 71, No. 2 (2018)
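The variable-territory splitting idea can be illustrated without the EGO surrogates: in the toy sketch below, two hypothetical players share a single objective and each minimizes it over its own variable with the other's frozen, so the synchronous best-response iteration converges to the Nash point. This is only a fixed-point sketch of the splitting concept, not the paper's surrogate-based algorithm.

```python
def nash_best_response(iters=50):
    """Two-player variable-territory split of the single objective
    f(x, y) = (x - 1)^2 + (y - 2)^2 + 0.5*x*y.
    Each player minimizes f over its own variable with the other's variable
    held fixed; the analytic best responses are x = 1 - y/4 and y = 2 - x/4."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        x, y = 1.0 - y / 4.0, 2.0 - x / 4.0   # synchronous best responses
    return x, y

x, y = nash_best_response()   # converges to the Nash point (8/15, 28/15)
```

Because the coupling is weak (the best-response map is a contraction with rate 1/4 here), a handful of synchronous rounds suffices; in Nash-EGO each of these analytic best responses would be replaced by an EGO run over the player's own subset.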

Authors:Shokouh Shahbeyk; Majid Soleimani-damaneh; Refail Kasimbeyli Pages: 383 - 405 Abstract: In this paper, we introduce Hartley properly and super nondominated solutions in vector optimization with a variable ordering structure. We prove the connections between Benson properly nondominated, Hartley properly nondominated, and super nondominated solutions under appropriate assumptions. Moreover, we establish some necessary and sufficient conditions for the newly-defined solutions by invoking an augmented dual cone approach, linear scalarization, and variational analysis tools. In addition to the theoretical results, various clarifying examples are given. PubDate: 2018-06-01 DOI: 10.1007/s10898-018-0614-5 Issue No:Vol. 71, No. 2 (2018)

Authors:Eric Bradford; Artur M. Schweidtmann; Alexei Lapkin Pages: 407 - 438 Abstract: Many engineering problems require the optimization of expensive, black-box functions involving multiple conflicting criteria, for which commonly used methods like multiobjective genetic algorithms are inadequate. To tackle this problem several algorithms have been developed using surrogates. However, these often have disadvantages such as requiring a priori knowledge of the output functions or computational cost that scales exponentially with the number of objectives. In this paper a new algorithm is proposed, TSEMO, which uses Gaussian processes as surrogates. The Gaussian processes are sampled using spectral sampling techniques to make use of Thompson sampling in conjunction with the hypervolume quality indicator and NSGA-II to choose a new evaluation point at each iteration. The reference point required for the hypervolume calculation is estimated within TSEMO. Further, a simple extension is proposed to carry out batch-sequential design. TSEMO was compared to ParEGO, an expected hypervolume implementation, and NSGA-II on nine test problems with a budget of 150 function evaluations. Overall, TSEMO shows promising performance, while giving a simple algorithm without the requirement of a priori knowledge, hypervolume calculations reduced to approach linear scaling with respect to the number of objectives, the capacity to handle noise and, lastly, the ability for batch-sequential usage. PubDate: 2018-06-01 DOI: 10.1007/s10898-018-0609-2 Issue No:Vol. 71, No. 2 (2018)
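The hypervolume quality indicator used inside TSEMO can be computed exactly in two dimensions by a simple sweep; a minimal sketch for a minimization problem, with a hypothetical front and reference point (TSEMO itself estimates the reference point internally).

```python
def hypervolume_2d(points, ref):
    """Exact hypervolume (minimization) of a 2-D front w.r.t. a reference
    point ref: the area dominated by the points and bounded by ref."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:               # sweep in increasing f1
        if f2 < prev_f2:             # skip points dominated within the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Hypothetical front and reference point:
hv = hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], ref=(4.0, 4.0))
```

In higher dimensions exact hypervolume computation becomes expensive, which is why approximations that scale closer to linearly in the number of objectives, as pursued in TSEMO, matter.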

Authors:Eric Bradford; Artur M. Schweidtmann; Alexei Lapkin Pages: 439 - 440 Abstract: This note lists changes to the article by Bradford et al. A number of figures moved to the wrong positions: while the references in the text still refer to the correct figures, several figures appeared in the wrong section, in the wrong position, and with the wrong caption. PubDate: 2018-06-01 DOI: 10.1007/s10898-018-0629-y Issue No:Vol. 71, No. 2 (2018)

Authors:J. J. Moreno; G. Ortega; E. Filatovas; J. A. Martínez; E. M. Garzón Abstract: Non-Dominated Sorting (NDS) is the most time-consuming procedure in the majority of evolutionary multiobjective optimization algorithms based on Pareto dominance ranking, leaving aside the computation time of the objective functions. It can be accelerated by exploiting its parallelism on High Performance Computing systems, which provide heterogeneous processing units such as multicore processors and GPUs. Optimizing the energy efficiency of such systems is a challenge in scientific computation, since it depends on the kind of processing performed. Our interest is to solve NDS efficiently with respect to both runtime and energy consumption. In the literature, performance improvement has been extensively studied. Recently, a sequential Best Order Sort (BOS) algorithm for NDS has been introduced as one of the most efficient in terms of practical performance. This work focuses on the acceleration of NDS on modern architectures. Two efficient parallel NDS algorithms based on Best Order Sort are introduced, MC-BOS and GPU-BOS. Both algorithms start with a fast sorting of the population by objectives. MC-BOS performs the per-objective analysis of the population in parallel on multicore processors. GPU-BOS is based on the principles of Best Order Sort, with a new scheme designed to harness the massive parallelism provided by GPUs. A wide experimental study of both algorithms on several kinds of CPU and GPU platforms has been carried out. Runtime and energy consumption are analysed to identify the best platform/algorithm of the parallel NDS for every particular population size. The analysis of the obtained results defines criteria to help the user select the optimal parallel version/platform for particular dimensions of NDS. 
The experimental results show that the new parallel NDS algorithms outperform the sequential Best Order Sort in both performance and energy efficiency by significant factors. PubDate: 2018-06-05 DOI: 10.1007/s10898-018-0669-3
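For orientation, the basic non-dominated sorting problem that Best Order Sort accelerates can be solved naively in O(M N^2) time as follows; this is a reference sketch of plain NDS on a hypothetical population, not of BOS or its parallel variants.

```python
def dominates(p, q):
    """p Pareto-dominates q (minimization): no worse everywhere, better somewhere."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(pop):
    """Split a population of objective vectors into successive Pareto fronts
    of indices (naive O(M*N^2) reference version)."""
    fronts, remaining = [], list(range(len(pop)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical bi-objective population (minimization):
fronts = non_dominated_sort([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)])
```

Best Order Sort avoids most of these pairwise comparisons by first sorting the population independently per objective, which is also the step both MC-BOS and GPU-BOS parallelize.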

Authors:Styliani Avraamidou; Efstratios N. Pistikopoulos Abstract: In this work, we present a novel algorithm for the global solution of tri-level mixed-integer linear optimization problems containing both integer and continuous variables at all three optimization levels. Based on multi-parametric theory and our earlier results for bi-level programming problems, the main idea of the algorithm is to recast the lower levels of the tri-level optimization problem as multi-parametric programming problems, in which the optimization variables (continuous and integer) of all the upper level problems are considered as parameters at the lower levels. The resulting parametric solutions are then substituted into the corresponding higher-level problems sequentially. The algorithm is illustrated through numerical examples, along with implementation and computational studies. PubDate: 2018-05-31 DOI: 10.1007/s10898-018-0668-4

Authors:T. D. Chuong; V. Jeyakumar Abstract: In this paper, under the existence of a certificate of nonnegativity of the objective function over the given constraint set, we present saddle-point global optimality conditions and a generalized Lagrangian duality theorem for (not necessarily convex) polynomial optimization problems, where the Lagrange multipliers are polynomials. We show that the nonnegativity certificate together with the archimedean condition guarantees that the values of the Lasserre hierarchy of semidefinite programming (SDP) relaxations of the primal polynomial problem converge asymptotically to the common primal–dual value. We then show that the known regularity conditions that guarantee finite convergence of the Lasserre hierarchy also ensure that the nonnegativity certificate holds and the values of the SDP relaxations converge finitely to the common primal–dual value. Finally, we provide classes of nonconvex polynomial optimization problems for which the Slater condition guarantees the required nonnegativity certificate and the common primal–dual value with constant multipliers and the dual problems can be reformulated as semidefinite programs. These classes include some separable polynomial programs and quadratic optimization problems with quadratic constraints that admit certain hidden convexity. We also give several numerical examples that illustrate our results. PubDate: 2018-05-26 DOI: 10.1007/s10898-018-0665-7

Authors:Stanisław Migórski; Shengda Zeng Abstract: In this paper we investigate an abstract system, called a differential hemivariational inequality ((DHVI), for short), which consists of a hemivariational inequality of parabolic type combined with a nonlinear evolution equation in the framework of an evolution triple of spaces. A hybrid iterative system corresponding to the (DHVI) is introduced by using a temporally semi-discrete method based on the backward Euler difference scheme, i.e., the Rothe method, and a feedback iterative technique. We apply a surjectivity result for pseudomonotone operators and properties of the Clarke subgradient operator to establish existence and a priori estimates for solutions to an approximate problem. Finally, through a limiting procedure for solutions of the hybrid iterative system, the solvability of the (DHVI) is proved without imposing any convexity condition on the nonlinear function \(u\mapsto f(t,x,u)\) or compactness of the \(C_0\) -semigroup \(e^{A(t)}\) . PubDate: 2018-05-25 DOI: 10.1007/s10898-018-0667-5

Authors:Firdevs Ulus Abstract: There are different solution concepts for convex vector optimization problems (CVOPs), and a recent one, motivated by a set optimization point of view, consists of finitely many efficient solutions that generate polyhedral inner and outer approximations to the Pareto frontier. A CVOP with a compact feasible region is known to be bounded, and a solution in this sense is known to exist. However, it is not known whether it is possible to generate polyhedral inner and outer approximations to the Pareto frontier of a CVOP when the feasible region is not compact. This study shows that not all CVOPs are tractable in that sense and gives a characterization of tractable problems in terms of the well-known weighted sum scalarization problems. PubDate: 2018-05-24 DOI: 10.1007/s10898-018-0666-6
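The weighted sum scalarization underlying the characterization can be illustrated on a toy biobjective problem whose scalarized minimizer is available in closed form; the objectives and weight vectors below are hypothetical.

```python
def weighted_sum_front(weights):
    """Trace Pareto-optimal points of the biobjective problem
    min (x^2, (x-1)^2) over the reals by solving the weighted sum
    scalarization min w1*x^2 + w2*(x-1)^2, whose unique minimizer
    is x = w2/(w1 + w2)."""
    pts = []
    for w1, w2 in weights:
        x = w2 / (w1 + w2)                 # stationary point of the scalarization
        pts.append((x * x, (x - 1.0) ** 2))
    return pts

# Three hypothetical weight vectors sample three Pareto-optimal points:
pts = weighted_sum_front([(1.0, 1.0), (1.0, 3.0), (3.0, 1.0)])
```

Each positive weight vector yields one efficient point; sweeping the weights generates the inner approximation vertices that the solution concept above collects finitely many of.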

Authors:Anna N. Rettieva Abstract: In this paper a new approach to constructing cooperative behavior in dynamic multicriteria games is presented. To obtain a multicriteria Nash equilibrium, the bargaining construction (Nash product) is adopted. To design a multicriteria cooperative equilibrium, a Nash bargaining scheme is applied with the multicriteria Nash equilibrium payoffs playing the role of the status quo points. A dynamic multicriteria bioresource management problem with random harvesting times is considered. The players’ strategies and the payoffs are obtained under cooperative and noncooperative behavior. PubDate: 2018-05-24 DOI: 10.1007/s10898-018-0658-6
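The Nash bargaining construction (maximizing the Nash product of payoff gains over the status quo point) can be sketched by a simple grid search; the payoff frontier and status quo below are hypothetical stand-ins for the game-specific quantities in the paper.

```python
def nash_bargaining(d, frontier, grid):
    """Maximize the Nash product (u1 - d1)*(u2 - d2) over a payoff frontier
    u2 = frontier(u1), with d the status quo (disagreement) point."""
    best_u, best_p = d, 0.0
    for u1 in grid:
        u2 = frontier(u1)
        if u1 >= d[0] and u2 >= d[1]:          # individual rationality
            p = (u1 - d[0]) * (u2 - d[1])
            if p > best_p:
                best_u, best_p = (u1, u2), p
    return best_u

# Hypothetical linear frontier u1 + u2 = 4 with status quo d = (1, 0);
# the Nash product (u1-1)*(4-u1) is maximized at u = (2.5, 1.5).
grid = [i / 1000.0 for i in range(4001)]       # u1 in [0, 4], step 0.001
u = nash_bargaining((1.0, 0.0), lambda u1: 4.0 - u1, grid)
```

In the paper's scheme the status quo d would itself be the multicriteria Nash equilibrium payoff rather than a fixed constant as here.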

Authors:Vikas Vikram Singh; Abdel Lisser Abstract: We consider an n-player non-cooperative game with random payoffs and a continuous strategy set for each player. The random payoffs of each player are defined using a finite dimensional random vector. We formulate this problem as a chance-constrained game by defining the payoff function of each player using a chance constraint. We first consider the case where the continuous strategy set of each player does not depend on the strategies of other players. If the random vector defining the payoffs of each player follows a multivariate elliptically symmetric distribution, we show that there exists a Nash equilibrium. We characterize the set of Nash equilibria using the solution set of a variational inequality (VI) problem. Next, we consider the case where the continuous strategy set of each player is defined by a shared constraint set. In this case, we show that there exists a generalized Nash equilibrium for elliptically symmetric distributed payoffs. Under certain conditions, we characterize the set of generalized Nash equilibria using the solution set of a VI problem. As an application, random payoff games arising from electricity markets are studied within the chance-constrained game framework. PubDate: 2018-05-23 DOI: 10.1007/s10898-018-0664-8

Authors:J. X. Cruz Neto; F. M. O. Jacinto; P. A. Soares; J. C. O. Souza Abstract: We study some conditions for a monotone bifunction to be maximally monotone by using a corresponding vector field associated with the bifunction, and vice versa. This approach allows us to establish existence of solutions to equilibrium problems in Hadamard manifolds obtained by perturbing the equilibrium bifunction. PubDate: 2018-05-18 DOI: 10.1007/s10898-018-0663-9

Authors:Yixin Zhao; Torbjörn Larsson; Elina Rönnberg; Panos M. Pardalos Abstract: A new and strong convexified formulation of the fixed charge transportation problem is provided. This formulation is obtained by integrating the concepts of Lagrangian decomposition and column generation. The decomposition is made by splitting the shipping variables into supply and demand side copies, while the columns correspond to extreme flow patterns for single sources or sinks. It is shown that the combination of Lagrangian decomposition and column generation provides a formulation whose strength dominates those of three other convexified formulations of the problem. Numerical results illustrate that our integrated approach has the ability to provide strong lower bounds. The Lagrangian decomposition yields a dual problem with an unbounded set of optimal solutions. We propose a regularized column generation scheme which prioritizes an optimal dual solution with a small \(l_1\) -norm. We further demonstrate numerically that information gained from the strong formulation can also be used for constructing a small-sized core problem which yields high-quality upper bounds. PubDate: 2018-05-18 DOI: 10.1007/s10898-018-0661-y

Authors:Kurt M. Anstreicher Abstract: We consider a bound for the maximum-entropy sampling problem (MESP) that is based on solving a max-det problem over a relaxation of the Boolean quadric polytope (BQP). This approach to MESP was first suggested by Christoph Helmberg over 15 years ago, but has apparently never been further elaborated or computationally investigated. We find that the use of a relaxation of BQP that imposes semidefiniteness and a small number of equality constraints gives excellent bounds on many benchmark instances. These bounds can be further tightened by imposing additional inequality constraints that are valid for the BQP. Duality information associated with the BQP-based bounds can be used to fix variables to 0/1 values, and also as the basis for the implementation of a “strong branching” strategy. A branch-and-bound algorithm using the BQP-based bounds solves some benchmark instances of MESP to optimality for the first time. PubDate: 2018-05-17 DOI: 10.1007/s10898-018-0662-x
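For context, the MESP objective is the log-determinant of a principal submatrix of the covariance matrix; the greedy heuristic sketched below produces feasible (lower-bounding) solutions, complementary to the BQP-based upper bounds developed in the paper. The covariance matrix is hypothetical.

```python
import numpy as np

def ldet(C, S):
    """log-determinant of the principal submatrix C[S, S]."""
    return np.linalg.slogdet(C[np.ix_(S, S)])[1]

def greedy_mesp(C, s):
    """Greedy heuristic for maximum-entropy sampling: grow an index set S with
    |S| = s by locally maximizing log det C[S, S] one index at a time."""
    S = []
    for _ in range(s):
        rest = [j for j in range(C.shape[0]) if j not in S]
        S.append(max(rest, key=lambda j: ldet(C, S + [j])))
    return sorted(S)

# Hypothetical 4x4 covariance matrix; select the s = 2 most informative indices.
C = np.array([[4.0, 0.2, 0.1, 0.0],
              [0.2, 3.0, 0.0, 0.1],
              [0.1, 0.0, 2.0, 0.5],
              [0.0, 0.1, 0.5, 1.0]])
S = greedy_mesp(C, 2)
```

In a branch-and-bound scheme such as the one above, the gap between a heuristic solution of this kind and the BQP-based bound certifies (or refutes) optimality of the incumbent.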

Authors:Ke Guo; Deren Han Abstract: Recently, the convergence of the Douglas–Rachford splitting method (DRSM) was established for minimizing the sum of a nonsmooth strongly convex function and a nonsmooth hypoconvex function under the assumption that the strong convexity constant \(\beta \) is larger than the hypoconvexity constant \(\omega \) . Such an assumption, implying the strong convexity of the objective function, precludes many interesting applications. In this paper, we prove the convergence of the DRSM for the case \(\beta =\omega \) , under relatively mild assumptions compared with some existing work in the literature. PubDate: 2018-05-11 DOI: 10.1007/s10898-018-0660-z
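A minimal sketch of the Douglas–Rachford splitting iteration on a one-dimensional example with a strongly convex smooth term and a nonsmooth convex term (the setting above is more general, allowing a hypoconvex second term); all functions and parameters are hypothetical.

```python
def prox_f(v, gamma):
    """prox of the strongly convex term f(x) = 0.5*(x - 3)^2."""
    return (v + 3.0 * gamma) / (1.0 + gamma)

def prox_g(v, gamma):
    """prox of the nonsmooth convex term g(x) = |x| (soft-thresholding)."""
    if v > gamma:
        return v - gamma
    if v < -gamma:
        return v + gamma
    return 0.0

def douglas_rachford(y=0.0, gamma=1.0, iters=200):
    """DRSM iteration for min f(x) + g(x):
    x = prox_{gamma f}(y); z = prox_{gamma g}(2x - y); y <- y + z - x."""
    for _ in range(iters):
        x = prox_f(y, gamma)
        z = prox_g(2.0 * x - y, gamma)
        y += z - x
    return prox_f(y, gamma)

x_star = douglas_rachford()   # minimizer of 0.5*(x-3)^2 + |x| is x* = 2
```

Here the strong convexity constant of f strictly exceeds the (zero) hypoconvexity constant of g; the boundary case treated in the paper is exactly when the two constants coincide.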