Authors:Kamil A. Khan; Harry A. J. Watson; Paul I. Barton Pages: 687 - 729 Abstract: McCormick’s classical relaxation technique constructs closed-form convex and concave relaxations of compositions of simple intrinsic functions. These relaxations have several properties which make them useful for lower bounding problems in global optimization: they can be evaluated automatically, accurately, and computationally inexpensively, and they converge rapidly to the relaxed function as the underlying domain is reduced in size. They may also be adapted to yield relaxations of certain implicit functions and differential equation solutions. However, McCormick’s relaxations may be nonsmooth, and this nonsmoothness can create theoretical and computational obstacles when relaxations are to be deployed. This article presents a continuously differentiable variant of McCormick’s original relaxations in the multivariate McCormick framework of Tsoukalas and Mitsos. Gradients of the new differentiable relaxations may be computed efficiently using the standard forward or reverse modes of automatic differentiation. Extensions to differentiable relaxations of implicit functions and solutions of parametric ordinary differential equations are discussed. A C++ implementation based on the library MC++ is described and applied to a case study in nonsmooth nonconvex optimization. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0440-6 Issue No:Vol. 67, No. 4 (2017)
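As a concrete illustration of the classical (nonsmooth) relaxations this article starts from, the sketch below evaluates McCormick's convex and concave envelope of a single bilinear term w = x*y on a box: the max and min of affine functions, whose kinks are exactly the nonsmoothness the paper's differentiable variant removes. This is the textbook McCormick envelope, not the paper's new construction.

```python
# Classical McCormick envelope of the bilinear term w = x*y over the box
# [xL, xU] x [yL, yU]. The max/min of affine pieces is where nonsmoothness
# arises; the cited article replaces these with differentiable counterparts.
def mccormick_bilinear(x, y, xL, xU, yL, yU):
    """Return (convex underestimator, concave overestimator) of x*y at (x, y)."""
    cv = max(xL * y + x * yL - xL * yL,   # convex envelope: max of two planes
             xU * y + x * yU - xU * yU)
    cc = min(xU * y + x * yL - xU * yL,   # concave envelope: min of two planes
             xL * y + x * yU - xL * yU)
    return cv, cc

# At the midpoint of the unit box, the envelopes bracket the true value 0.25
# and the convex piece sits exactly on a kink of the max.
cv, cc = mccormick_bilinear(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
```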

Authors:Ambros M. Gleixner; Timo Berthold; Benjamin Müller; Stefan Weltge Pages: 731 - 757 Abstract: Optimization-based bound tightening (OBBT) is one of the most effective procedures to reduce variable domains of nonconvex mixed-integer nonlinear programs (MINLPs). At the same time, it is one of the most expensive bound tightening procedures, since it solves auxiliary linear programs (LPs), up to two per problem variable. The main goal of this paper is to discuss algorithmic techniques for an efficient implementation of OBBT. Most state-of-the-art MINLP solvers apply some restricted version of OBBT, and it seems to be common belief that OBBT is beneficial only if one is able to keep its computational cost under control. To this end, we introduce three techniques to increase the efficiency of OBBT: filtering strategies to reduce the number of solved LPs, ordering heuristics to exploit simplex warm starts, and the generation of Lagrangian variable bounds (LVBs). The propagation of LVBs during tree search is a fast approximation to OBBT without the need to solve auxiliary LPs. We conduct extensive computational experiments on MINLPLib2. Our results indicate that OBBT is most beneficial on hard instances, for which we observe a speedup of 17–19% on average. Most importantly, more instances can be solved when using OBBT. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0450-4 Issue No:Vol. 67, No. 4 (2017)
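To make the OBBT idea concrete: each variable's bound is tightened by minimizing and maximizing that variable itself over a relaxation of the feasible set. The toy 2-D sketch below uses brute-force vertex enumeration in place of an LP solver; a real implementation, as discussed in the paper, solves the auxiliary LPs with a simplex code so that warm starts can be exploited.

```python
# Toy OBBT on a 2-D LP relaxation {x : A x <= b, lb <= x <= ub}.
# Instead of solving the auxiliary LPs min/max x_i with a simplex method,
# we enumerate the polytope's vertices by intersecting constraint pairs.
from itertools import combinations

def obbt_2d(A, b, lb, ub):
    """Return tightened (lb, ub) for the box over the LP feasible region."""
    rows = [(a, r) for a, r in zip(A, b)]
    rows += [((1, 0), ub[0]), ((-1, 0), -lb[0]),
             ((0, 1), ub[1]), ((0, -1), -lb[1])]
    verts = []
    for (a1, r1), (a2, r2) in combinations(rows, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                       # parallel constraints
        x = (r1 * a2[1] - r2 * a1[1]) / det
        y = (a1[0] * r2 - a2[0] * r1) / det
        if all(a[0] * x + a[1] * y <= r + 1e-9 for a, r in rows):
            verts.append((x, y))
    # For an LP, each variable's extreme value is attained at a vertex.
    new_lb = tuple(min(v[i] for v in verts) for i in range(2))
    new_ub = tuple(max(v[i] for v in verts) for i in range(2))
    return new_lb, new_ub

# The constraint x + y <= 3 tightens the box [0,4]^2 down to [0,3]^2.
lb, ub = obbt_2d(A=[(1, 1)], b=[3], lb=(0, 0), ub=(4, 4))
```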

Authors:Christoph Buchheim; Claudia D’Ambrosio Pages: 759 - 786 Abstract: We introduce a new method for solving box-constrained mixed-integer polynomial problems to global optimality. The approach, a specialized branch-and-bound algorithm, is based on the computation of lower bounds provided by the minimization of separable underestimators of the polynomial objective function. The underestimators are the novelty of the approach because the standard approaches in global optimization are based on convex relaxations. Thanks to the fact that only simple bound constraints are present, minimizing the separable underestimator can be done very quickly. The underestimators are computed monomial-wise after the original polynomial has been shifted. We show that such valid underestimators exist and that their degree can be bounded whenever the degree of the polynomial objective function is bounded, too. For the quartic case, all optimal monomial underestimators are determined analytically. We implemented and tested the branch-and-bound algorithm in which these underestimators are hardcoded. The comparison against standard global optimization and polynomial optimization solvers clearly shows that our method outperforms the others, the only exception being the binary case, where it is nonetheless still competitive. Moreover, our branch-and-bound approach suffers less on dense polynomial objective functions, i.e., polynomials having a large number of monomials. This paper is an extended and revised version of the preliminary paper [4]. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0443-3 Issue No:Vol. 67, No. 4 (2017)
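A toy illustration of the separable-underestimator idea (not the paper's optimal quartic underestimators): the bilinear monomial x*y admits the separable lower bound x*y >= -(x**2 + y**2)/2, since x*y + (x**2 + y**2)/2 = (x + y)**2 / 2 >= 0. Because such a bound is a sum of univariate terms, it can be minimized coordinate-wise over simple box constraints, which is what makes the bounding step cheap.

```python
# Separable underestimator of the monomial x*y, valid on all of R^2:
#   x*y >= -(x**2 + y**2) / 2,  with equality exactly when y = -x.
import random

def sep_under(x, y):
    return -(x * x + y * y) / 2.0

# Spot-check validity of the bound at random points.
random.seed(0)
ok = all(sep_under(x, y) <= x * y + 1e-12
         for x, y in ((random.uniform(-5, 5), random.uniform(-5, 5))
                      for _ in range(1000)))
```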

Authors:José L. Sainz-Pardo; Javier Alcaraz; Mercedes Landete; Juan F. Monge Pages: 787 - 804 Abstract: The aim of the reliability fixed-charge location problem is to find robust solutions to the fixed-charge location problem when some facilities might fail with probability q. In this paper we analyze for which allocation variables in the reliability fixed-charge location problem formulation the integrality constraint can be relaxed so that the optimal value matches the optimal value of the binary problem. We prove that we can relax the integrality of all the allocation variables associated with non-failable facilities or of all the allocation variables associated with failable facilities, but not of both simultaneously. We also demonstrate that we can relax the integrality of all the allocation variables whenever a family of valid inequalities is added to the set of constraints or whenever the parameters of the problem satisfy certain conditions. Finally, when solving the instances in a data set, we discuss which relaxation or which modification of the problem works better in terms of resolution time, and we illustrate that relaxing the integrality of the allocation variables inappropriately can alter the objective value considerably. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0439-z Issue No:Vol. 67, No. 4 (2017)

Authors:Yuan Yuan; Zukui Li; Biao Huang Pages: 805 - 827 Abstract: Chance constraints are widely used for modeling solution reliability in optimization problems with uncertainty. Due to the difficulties in checking the feasibility of the probabilistic constraint and the non-convexity of the feasible region, chance constrained problems are generally solved through approximations. A joint chance constrained problem enforces that several constraints be satisfied simultaneously, and it is more complicated than an individual chance constrained problem. This work investigates a tractable robust optimization approximation framework for solving the joint chance constrained problem. Various robust counterpart optimization formulations are derived based on different types of uncertainty set. To improve the quality of the robust optimization approximation, a two-layer algorithm is proposed. The inner layer optimizes over the size of the uncertainty set, and the outer layer optimizes over the parameter t used for upper bounding the indicator function. Numerical studies demonstrate that the proposed method can lead to solutions close to the true solution of a joint chance constrained problem. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0438-0 Issue No:Vol. 67, No. 4 (2017)

Authors:Nooshin Movahedian Pages: 829 - 850 Abstract: In this paper, a general optimization problem is considered to investigate the conditions which ensure the existence of Lagrangian vectors with a norm not greater than a fixed positive number. In addition, the nonemptiness and boundedness of the multiplier sets, together with their exact upper bounds, is characterized. Moreover, three new constraint qualifications are suggested, each of which yields a degree of boundedness for the multiplier vectors. Several examples at the end of the paper indicate that the upper bound for Lagrangian vectors is easily computable using each of our constraint qualifications. One innovation is the introduction of the so-called bounded Lagrangian constraint qualification, which is stated based on the nonemptiness and boundedness of all possible Lagrangian sets. An application of the results to a mathematical program with equilibrium constraints is presented. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0442-4 Issue No:Vol. 67, No. 4 (2017)

Authors:Qunfeng Liu; Guang Yang; Zhongzhi Zhang; Jinping Zeng Pages: 851 - 872 Abstract: DIRECT is a derivative-free global-search algorithm that has been found to perform robustly across a wide variety of low-dimensional test problems. The reason for DIRECT’s robustness is its lack of algorithmic parameters that need to be “tuned” to make the algorithm perform well. In particular, there is no parameter that determines the relative emphasis on global versus local search. Unfortunately, the same algorithmic features that enable DIRECT to perform so robustly have a negative side effect. In particular, DIRECT is usually quick to get close to the global minimum, but very slow to refine the solution to high accuracy. This is what we call DIRECT’s “eventually inefficient behavior.” In this paper, we outline two root causes for this undesirable behavior and propose modifications to eliminate it. The paper builds upon our previously published “MrDIRECT” algorithm, which we can now show only addressed the first root cause of the “eventually inefficient behavior.” The key contribution of the current paper is a further enhancement that allows MrDIRECT to address the second root cause as well. To demonstrate the effectiveness of the enhanced MrDIRECT, we have identified a set of test functions that highlight different situations in which DIRECT has convergence issues. Extensive numerical work with this test suite demonstrates that the enhanced version of MrDIRECT does indeed improve the convergence rate of DIRECT. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0447-z Issue No:Vol. 67, No. 4 (2017)

Authors:Dawei Zhan; Jiachang Qian; Yuansheng Cheng Pages: 873 - 892 Abstract: Most parallel efficient global optimization (EGO) algorithms focus only on the parallel architectures for producing multiple updating points, but pay little attention to the balance between the global search (i.e., sampling in different areas of the search space) and the local search (i.e., sampling more intensely in one promising area of the search space) of the updating points. In this study, a novel approach exploiting this balance is proposed to further accelerate the search of parallel EGO algorithms. In each cycle of the proposed algorithm, all local maxima of the expected improvement (EI) function are identified by a multi-modal optimization algorithm. Then the local EI maxima with value greater than a threshold are selected, and candidates are sampled around these selected EI maxima. The results of numerical experiments show that, although the proposed parallel EGO algorithm needs more evaluations to find the optimum compared to the standard EGO algorithm, it is able to reduce the number of optimization cycles. Moreover, the proposed parallel EGO algorithm gains better results in terms of both number of cycles and evaluations compared to a state-of-the-art parallel EGO algorithm over six test problems. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0449-x Issue No:Vol. 67, No. 4 (2017)
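For reference, the expected improvement (EI) acquisition function whose local maxima the algorithm samples around has a closed form: given a Gaussian predictor with mean mu and standard deviation sigma at a candidate point, and the best (smallest) objective value found so far, EI is the expected amount by which sampling there would improve on the incumbent. A minimal sketch, for minimization:

```python
# Closed-form expected improvement for minimization:
#   EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma,
# where phi and Phi are the standard normal pdf and cdf.
import math

def expected_improvement(mu, sigma, f_best):
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)      # deterministic prediction
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf
```

EI is always nonnegative and grows with the predictor's uncertainty sigma, which is what drives the global/local trade-off the abstract discusses.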

Authors:Spyridon Samothrakis; Maria Fasli; Diego Perez; Simon Lucas Pages: 893 - 907 Abstract: Global optimisation of unknown noisy functions is a daunting task that appears in domains ranging from games to control problems to meta-parameter optimisation for machine learning. We show how to incorporate heuristics into Stochastic Simultaneous Optimistic Optimization (STOSOO), a global optimisation algorithm that places very weak requirements on the function. In our case, heuristics come in the form of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The new algorithm, termed Guided STOSOO (STOSOO-G), combines the ability of CMA-ES for fast local convergence (due to the algorithm following the “natural” gradient) and the global optimisation abilities of STOSOO. We compare all three algorithms on the “harder” parts of the Comparing Continuous Optimisers on Black-Box Optimization Benchmarking benchmark suite, which provides a default set of functions for testing. We show that our approach keeps the best of both worlds, i.e. the almost optimal exploration/exploitation of STOSOO with the local optimisation strength of CMA-ES. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0482-9 Issue No:Vol. 67, No. 4 (2017)

Authors:A. Y. Alfakih Pages: 909 - 924 Abstract: A bar framework (G, p) in dimension r is a graph G whose nodes are points \(p^1,\ldots ,p^n\) in \(\mathbb {R}^r\) and whose edges are line segments between pairs of these points. Two frameworks (G, p) and (G, q) are equivalent if each edge of (G, p) has the same (Euclidean) length as the corresponding edge of (G, q). A pair of non-adjacent vertices i and j of (G, p) is universally linked if \(\Vert p^i-p^j\Vert = \Vert q^i-q^j\Vert \) in every framework (G, q) that is equivalent to (G, p). Framework (G, p) is universally rigid iff every pair of non-adjacent vertices of (G, p) is universally linked. In this paper, we present a unified treatment of the universal rigidity problem based on the geometry of spectrahedra. A spectrahedron is the intersection of the positive semidefinite cone with an affine space. This treatment makes it possible to tie together some known, yet scattered, results and to derive new ones. Among the new results presented in this paper are: (1) The first sufficient condition for a given pair of non-adjacent vertices of (G, p) to be universally linked. (2) A new, weaker, sufficient condition for a framework (G, p) to be universally rigid, thus strengthening the existing known condition. An interpretation of this new condition in terms of the Strong Arnold Property is also presented. PubDate: 2017-04-01 DOI: 10.1007/s10898-016-0448-y Issue No:Vol. 67, No. 4 (2017)

Authors:Kaarthik Sundar; Sivakumar Rathinam Pages: 527 - 551 Abstract: The multiple depot ring-star problem (MDRSP) is an important combinatorial optimization problem that arises in optical fiber network design and in applications that collect data using stationary sensing devices and autonomous vehicles. Given the locations of a set of customers and a set of depots, the goal is to (i) find a set of simple cycles such that each cycle (ring) passes through a subset of customers and exactly one depot, (ii) assign each non-visited customer to a visited customer or a depot, and (iii) minimize the sum of the routing costs, i.e., the cost of the cycles and the assignment costs. We present a mixed integer linear programming formulation for the MDRSP and propose valid inequalities to strengthen the linear programming relaxation. Furthermore, we present a polyhedral analysis and derive facet-inducing results for the MDRSP. All these results are then used to develop a branch-and-cut algorithm to obtain optimal solutions to the MDRSP. The performance of the branch-and-cut algorithm is evaluated through extensive computational experiments on several classes of test instances. PubDate: 2017-03-01 DOI: 10.1007/s10898-016-0431-7 Issue No:Vol. 67, No. 3 (2017)

Authors:Akshay Gupte; Shabbir Ahmed; Santanu S. Dey; Myun Seok Cheon Pages: 631 - 669 Abstract: The pooling problem is a folklore NP-hard global optimization problem that finds applications in industries such as petrochemical refining, wastewater treatment and mining. This paper assimilates the vast literature on this problem that is dispersed over different areas and gives new insights on prevalent techniques. We also present new ideas for computing dual bounds on the global optimum by solving high-dimensional linear programs. Finally, we propose discretization methods for inner approximating the feasible region and obtaining good primal bounds. Valid inequalities are derived for the discretized models, which are formulated as mixed integer linear programs. The strength of our relaxations and the usefulness of our discretizations are empirically validated on random test instances. We report best known primal bounds on some of the large-scale instances. PubDate: 2017-03-01 DOI: 10.1007/s10898-016-0434-4 Issue No:Vol. 67, No. 3 (2017)

Authors:Fengmin Xu; Meihua Wang; Yu-Hong Dai; Dachuan Xu Abstract: Enhanced indexation aims to construct a portfolio that tracks and outperforms the performance of a stock market index by employing both passive and active fund management strategies. This paper presents a novel sparse enhanced indexation model with chance and cardinality constraints. Its goal is to maximize the excess return that can be attained with a high probability, while the model allows a fund manager to limit the number of stocks in the portfolio and specify the maximum tolerable relative market risk. In particular, we model the asset returns as random variables and estimate their probability distributions by the Capital Asset Pricing Model or the Fama–French 3-factor model, and measure the relative market risk with the coherent semideviation risk function. We deal with the chance constraint via a distributionally robust approach and present second-order cone programming and semidefinite programming safe approximations for the model under different sets of potential distribution functions. A hybrid genetic algorithm is applied to solve the NP-hard problem. Numerical tests are conducted on real data sets from major international stock markets, including the USA, UK, Germany and China. The results demonstrate that the proposed model and method can efficiently solve the enhanced indexation problem, and that our approach generally achieves sparse tracking portfolios with good out-of-sample excess returns and high robustness. PubDate: 2017-03-25 DOI: 10.1007/s10898-017-0513-1

Authors:Rui Diao; Ya-Feng Liu; Yu-Hong Dai Abstract: The interval subset sum problem (ISSP) is a generalization of the well-known subset sum problem. Given a set of intervals \(\left\{ [a_{i,1},a_{i,2}]\right\} _{i=1}^n\) and a target integer T, the ISSP is to find a set of integers, at most one from each interval, such that their sum best approximates the target T but cannot exceed it. In this paper, we first study the computational complexity of the ISSP. We show that the ISSP is relatively easy to solve compared to the 0–1 knapsack problem. We also identify several subclasses of the ISSP which are polynomial time solvable (with high probability), albeit the problem is generally NP-hard. Then, we propose a new fully polynomial time approximation scheme for solving the general ISSP. The time and space complexities of the proposed scheme are \({{\mathcal {O}}}\left( n \max \left\{ 1 / \epsilon ,\log n\right\} \right) \) and \(\mathcal{O}\left( n+1/\epsilon \right) ,\) respectively, where \(\epsilon \) is the relative approximation error. To the best of our knowledge, the proposed scheme has almost the same time complexity but a significantly lower space complexity compared to the best known scheme. Both the correctness and efficiency of the proposed scheme are validated by numerical simulations. In particular, the proposed scheme successfully solves ISSP instances with \(n=100{,}000\) and \(\epsilon =0.1\%\) within 1 s. PubDate: 2017-03-23 DOI: 10.1007/s10898-017-0514-0
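A small exact sketch helps see why the ISSP is easier than 0–1 knapsack: choosing at most one integer from each contiguous interval, the set of reachable sums is itself a union of integer intervals, which can be maintained and merged directly. This brute-force reachability view is only meant to illustrate the problem on small instances; the paper's contribution is a fully polynomial time approximation scheme, which this sketch is not.

```python
# Exact ISSP on small instances via interval reachability: after processing
# each input interval [a1, a2], the reachable sums form a union of integer
# intervals (the empty selection contributes the sum 0).
def issp_best(intervals, T):
    """Largest achievable sum not exceeding the integer target T."""
    reach = [(0, 0)]                          # empty selection sums to 0
    for a1, a2 in intervals:
        shifted = [(lo + a1, hi + a2) for lo, hi in reach]
        merged = []
        for lo, hi in sorted(reach + shifted):
            if merged and lo <= merged[-1][1] + 1:   # adjacent integers merge
                merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
            else:
                merged.append((lo, hi))
        reach = merged
    best = None
    for lo, hi in reach:
        if lo <= T:
            cand = min(hi, T)
            best = cand if best is None else max(best, cand)
    return best
```

With intervals [2, 4] and [10, 12], the reachable sums are {0}, [2, 4] and [10, 16], so the best sum not exceeding 9 is 4.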

Authors:Kedong Yan; Hong Seo Ryoo Abstract: 0–1 multilinear programming (MP) captures the essence of pattern generation in logical analysis of data (LAD). This paper utilizes graph-theoretic analysis of data to discover useful neighborhood properties among data, which are exploited for data reduction and for multi-term linearization of the common constraint of an MP pattern generation model into a small number of stronger valid inequalities. This means that, with a systematic way of generating Boolean logical patterns more efficiently, LAD can be used for more effective analysis of data in practice. Mathematical properties and the utility of the new valid inequalities are illustrated on small examples and demonstrated through extensive experiments on 12 real-life data mining datasets. PubDate: 2017-03-20 DOI: 10.1007/s10898-017-0512-2

Authors:Rudolf Scitovski Abstract: In this paper, we consider a global optimization problem for a symmetric Lipschitz continuous function \(g:[a,b]^k\rightarrow {\mathbb {R}}\), whose domain \([a,b]^k\subset {\mathbb {R}}^k\) consists of k! hypertetrahedrons of the same size and shape, on which the function g attains equal values. A global minimum can therefore be searched for in one hypertetrahedron only, but then this becomes a global optimization problem with linear constraints. Apart from that, some known global optimization algorithms in standard form cannot be applied to solving the problem. In this paper, it is shown how this global optimization problem with linear constraints can easily be transformed into a global optimization problem on the hypercube \([0,1]^k\), to which the DIRECT algorithm in standard form can be applied. This approach has a somewhat lower efficiency than known global optimization methods for symmetric Lipschitz continuous functions (such as SymDIRECT or DISIMPL), but, on the other hand, it allows for the use of publicly available and well developed computer codes for solving a global optimization problem on the hypercube \([0,1]^k\) (e.g. the DIRECT algorithm). The method is illustrated and tested on standard symmetric functions and very demanding center-based clustering problems for data with a single feature. An application to the image segmentation problem is also shown. PubDate: 2017-03-11 DOI: 10.1007/s10898-017-0510-4
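One possible hypercube-to-hypertetrahedron map, to illustrate the kind of transformation involved (this is an order-statistics-style construction of my own, not necessarily the paper's transform): generate the ordered coordinates a <= x_1 <= ... <= x_k <= b from a point y in [0,1]^k by peeling off the largest coordinate first.

```python
# Map y in [0,1]^k onto the ordered hypertetrahedron a <= x_1 <= ... <= x_k <= b.
# The factors y_j ** (1/j) mimic the inverse-CDF construction of uniform
# order statistics, so t is nonincreasing as j decreases and the output
# coordinates come out sorted in ascending order.
def cube_to_simplex(y, a=0.0, b=1.0):
    k = len(y)
    x = [0.0] * k
    t = 1.0
    for j in range(k, 0, -1):             # build from the largest value down
        t *= y[j - 1] ** (1.0 / j)
        x[j - 1] = a + (b - a) * t
    return x
```

A hypercube method such as DIRECT can then minimize the composition g(cube_to_simplex(y)) over \([0,1]^k\) without linear constraints.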

Authors:Juan F. R. Herrera; José M. G. Salmerón; Eligius M. T. Hendrix; Rafael Asenjo; Leocadio G. Casado Abstract: Branch and Bound (B&B) algorithms are known to exhibit an irregularity of the search tree. Therefore, developing a parallel approach for this kind of algorithm is a challenge. The efficiency of a B&B algorithm depends on the chosen Branching, Bounding, Selection, Rejection, and Termination rules. The question we investigate is how the chosen platform, consisting of programming language, used libraries, or skeletons, influences programming effort and algorithm performance. The selection rule and data management structures are usually hidden from programmers in frameworks with a high level of abstraction, as is the load balancing strategy when the algorithm is run in parallel. We investigate the question by implementing a multidimensional Global Optimization B&B algorithm with the help of three frameworks with different levels of abstraction (from more to less): Bobpp, Threading Building Blocks (TBB), and a customized Pthread implementation. The following has been found. The Bobpp implementation is easy to code, but exhibits the poorest scalability. In contrast, the TBB and Pthread implementations scale almost linearly on the platform used. The TBB approach shows slightly better productivity. PubDate: 2017-03-10 DOI: 10.1007/s10898-017-0508-y

Authors:Radosław Rudek Abstract: In this paper, we analyse the parallel machine makespan minimization problem with general sum-of-processing-time-based learning or aging effects. First, we prove that an optimal solution to the single machine case can be found by priority rules. Next, for the considered parallel machine problem, we construct an exact dynamic programming algorithm that can operate on real-valued job processing times; it is the only exact algorithm for the analysed problem. The computational analysis confirms that it can optimally solve problem instances of moderate size. PubDate: 2017-03-10 DOI: 10.1007/s10898-017-0509-x
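To illustrate what a sum-of-processing-time-based learning effect means (the functional form below is an assumed illustrative model, not necessarily the exact setting analysed in the paper): a job's actual processing time shrinks as the accumulated normal processing time S of earlier jobs grows, e.g. p_j * (1 + S)**alpha with a learning index alpha < 0. On tiny instances the best single-machine sequence can simply be brute-forced.

```python
# Single-machine makespan under an assumed learning-effect model
#   actual_time(j) = p_j * (1 + S)**alpha,   alpha < 0,
# where S is the accumulated *normal* processing time of earlier jobs.
from itertools import permutations

def makespan(seq, alpha=-0.3):
    total, s = 0.0, 0.0
    for p in seq:
        total += p * (1.0 + s) ** alpha   # later jobs run faster (learning)
        s += p                            # accumulate normal processing time
    return total

# Brute-force the best sequence of a 3-job instance.
jobs = [3.0, 1.0, 2.0]
best = min(permutations(jobs), key=makespan)
```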

Authors:Tao Sun; Hao Jiang; Lizhi Cheng Abstract: In this paper, we investigate the convergence of the proximal iteratively reweighted algorithm for a class of nonconvex and nonsmooth problems. Such problems actually include numerous models in the area of signal processing and machine learning research. Two extensions of the algorithm are also studied. We provide a unified scheme for these three algorithms. With the Kurdyka–Łojasiewicz property, we prove that the unified algorithm globally converges to a critical point of the objective function. PubDate: 2017-03-10 DOI: 10.1007/s10898-017-0507-z
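A minimal sketch of the proximal iteratively reweighted idea on a separable denoising instance (an assumed example objective, not the paper's general setting): to handle min_x 0.5*||x - y||^2 + lam * sum_i |x_i|^p with 0 < p < 1, each iteration linearizes the nonconvex term via weights w_i = p*(|x_i| + eps)**(p-1) and then applies the proximal (soft-thresholding) step of the resulting weighted l1 norm.

```python
# Proximal iteratively reweighted l1 on a separable nonconvex denoising
# objective: reweight the |x_i|^p term, then soft-threshold coordinate-wise.
def soft(v, t):
    """Proximal operator of t*|.| (soft thresholding)."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def prox_irl1(y, lam=0.5, p=0.5, eps=1e-6, iters=50):
    x = list(y)
    for _ in range(iters):
        w = [p * (abs(xi) + eps) ** (p - 1.0) for xi in x]   # reweighting
        x = [soft(yi, lam * wi) for yi, wi in zip(y, w)]     # proximal step
    return x

# Small entries are driven exactly to zero; large entries are mildly shrunk.
x = prox_irl1([3.0, 0.05, -2.0])
```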