Authors: Pooriya Beyhaghi; Daniele Cavaglieri; Thomas Bewley
Pages: 331 - 382
Abstract: A new derivative-free optimization algorithm is introduced for nonconvex functions within a feasible domain bounded by linear constraints. Global convergence is guaranteed for twice differentiable functions with bounded Hessian, and the algorithm is found to be remarkably efficient even for many functions which are not differentiable. Like other Response Surface Methods, at each optimization step the algorithm minimizes a metric combining an interpolation of existing function evaluations and a model of the uncertainty of this interpolation. By adjusting the respective weighting of these two terms, the algorithm incorporates a tunable balance between global exploration and local refinement; a rule to adjust this balance automatically is also presented. Unlike other methods, any well-behaved interpolation strategy may be used. The uncertainty model is built upon the framework of a Delaunay triangulation of existing datapoints in parameter space. A quadratic function which goes to zero at each datapoint is formed within each simplex of this triangulation; the union of these quadratics forms the desired uncertainty model. Care is taken to ensure that function evaluations are performed at points that are well situated in parameter space, that is, such that the simplices of the resulting triangulation have circumradii with a known bound. This facilitates well-behaved local refinement as additional function evaluations are performed.
PubDate: 2016-11-01
DOI: 10.1007/s10898-015-0384-2
Issue No: Vol. 66, No. 3 (2016)

Authors: Pooriya Beyhaghi; Thomas R. Bewley
Pages: 383 - 415
Abstract: The derivative-free global optimization algorithms developed in Part I of this study, for linearly constrained problems, are extended to nonconvex n-dimensional problems with convex constraints. The initial \(n+1\) feasible datapoints are chosen in such a way that the volume of the simplex generated by these datapoints is maximized; as the algorithm proceeds, additional feasible datapoints are added in such a way that the convex hull of the available datapoints efficiently increases towards the boundaries of the feasible domain. Similar to the algorithms developed in Part I of this study, at each step of the algorithm, a search function is defined based on an interpolating function which passes through all available datapoints and a synthetic uncertainty function which characterizes the distance to the nearest datapoints. This uncertainty function, in turn, is built on the framework of a Delaunay triangulation, which is itself based on all available datapoints together with the (infeasible) vertices of an exterior simplex which completely contains the feasible domain. The search function is minimized within those simplices of this Delaunay triangulation that do not include the vertices of the exterior simplex. If the outcome of this minimization is contained within the circumsphere of a simplex which includes a vertex of the exterior simplex, this new point is projected out to the boundary of the feasible domain. For problems in which the feasible domain includes edges (due to the intersection of multiple twice-differentiable constraints), a modified search function is considered in the vicinity of these edges to assure convergence.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0433-5
Issue No: Vol. 66, No. 3 (2016)

Authors: Tipaluck Krityakierne; Taimoor Akhtar; Christine A. Shoemaker
Pages: 417 - 437
Abstract: This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with that of two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0407-7
Issue No: Vol. 66, No. 3 (2016)
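The center-selection step above rests on two-objective non-dominated sorting: each evaluated point is scored by (expensive function value, minimum distance to the other evaluated points) and ranked into fronts. A tiny sketch of that sorting step follows; the point coordinates and values are made up for illustration, and this is not the SOP implementation itself.

```python
def min_dist(i, pts):
    """Minimum distance from point i to every other evaluated point (1D here)."""
    return min(abs(pts[i] - pts[j]) for j in range(len(pts)) if j != i)

def dominates(a, b):
    """a dominates b: lower function value AND larger min-distance, one strictly."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def nondominated_fronts(objs):
    """Peel off successive non-dominated fronts, best (front 0) first."""
    remaining = set(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

pts = [0.0, 0.25, 0.5, 1.0]          # previously evaluated points
vals = [3.0, 1.0, 2.0, 2.5]          # their expensive function values
objs = [(vals[i], min_dist(i, pts)) for i in range(len(pts))]
fronts = nondominated_fronts(objs)
```

Centers would then be drawn from the leading fronts, so both low-value points and well-isolated points get a chance to spawn candidates.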

Authors: Nguyen Thi Van Hang; Jen-Chih Yao
Pages: 439 - 456
Abstract: This paper establishes verifiable sufficient conditions for the existence of error bounds for the sub-level set of a difference function over an abstract constraint by applying a technique used by A. D. Ioffe. As a consequence, error bounds for constraint systems defined by d.c. inequalities and their applications in studying the exactness of the associated \(\ell _1\) penalty function and the existence of Lagrange multipliers as necessary optimality conditions are also investigated.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0410-z
Issue No: Vol. 66, No. 3 (2016)

Authors: Emilie Chouzenoux; Jean-Christophe Pesquet; Audrey Repetti
Pages: 457 - 485
Abstract: A number of recent works have emphasized the prominent role played by the Kurdyka-Łojasiewicz inequality for proving the convergence of iterative algorithms solving possibly nonsmooth/nonconvex optimization problems. In this work, we consider the minimization of an objective function satisfying this property, which is a sum of two terms: (i) a differentiable, but not necessarily convex, function and (ii) a function that is not necessarily convex, nor necessarily differentiable. The latter function is expressed as a separable sum of functions of blocks of variables. Such an optimization problem can be addressed with the Forward–Backward algorithm which can be accelerated thanks to the use of variable metrics derived from the Majorize–Minimize principle. We propose to combine the latter acceleration technique with an alternating minimization strategy which relies upon a flexible update rule. We give conditions under which the sequence generated by the resulting Block Coordinate Variable Metric Forward–Backward algorithm converges to a critical point of the objective function. An application example to a nonconvex phase retrieval problem encountered in signal/image processing shows the efficiency of the proposed optimization method.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0405-9
Issue No: Vol. 66, No. 3 (2016)
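The Forward–Backward structure above alternates a gradient (forward) step on the smooth term with a proximity (backward) step on the nonsmooth separable term, block by block. A minimal sketch with a quadratic smooth term and \(g = \lambda \Vert \cdot \Vert_1\) (whose prox is soft thresholding) illustrates the iteration; the objective, step size, and block partition are illustrative assumptions, and the paper's variable-metric/Majorize–Minimize acceleration is replaced by a plain constant step.

```python
def soft_threshold(v, t):
    """Proximity operator of t*|.|: the backward (prox) step."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def grad_f(x):
    # smooth stand-in term f(x) = 0.5*((x0 - 1)^2 + (x1 + 0.25)^2)
    return [x[0] - 1.0, x[1] + 0.25]

def block_fb(x, lam=0.5, step=1.0, iters=50):
    """Block-coordinate Forward-Backward for f(x) + lam*||x||_1."""
    x = list(x)
    for _ in range(iters):
        for b in range(len(x)):                  # alternate over blocks
            g = grad_f(x)[b]                     # forward step on block b
            x[b] = soft_threshold(x[b] - step * g, step * lam)  # backward step
    return x

x_star = block_fb([0.0, 0.0])
```

For this toy objective the fixed point is (0.5, 0): the second coordinate is driven exactly to zero by the prox, the hallmark of the \(\ell_1\) term.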

Authors: Gui-Hua Lin; Mei-Ju Luo; Jin Zhang
Pages: 487 - 510
Abstract: We consider a stochastic non-smooth programming problem with equality, inequality and abstract constraints, which is a generalization of the problem studied by Xu and Zhang (Math Program 119:371–401, 2009), where only an abstract constraint is considered. We employ a smoothing technique to deal with the non-smoothness and use sample average approximation techniques to cope with the mathematical expectations. Then, we investigate the convergence properties of the approximation problems. We further apply the approach to solve stochastic mathematical programs with equilibrium constraints. In addition, we give an illustrative example in economics to show the applicability of the proposed approach.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0413-9
Issue No: Vol. 66, No. 3 (2016)
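The sample average approximation idea mentioned above replaces the expectation \(\mathbb{E}[F(x,\xi)]\) with a sample mean over draws of \(\xi\), then minimizes the resulting deterministic function. A minimal sketch with an illustrative \(F\), distribution, and grid search (none of which come from the paper):

```python
import random

def F(x, xi):
    """Stand-in random objective; E[F(x, xi)] is minimized at x = E[xi]."""
    return (x - xi) ** 2

def saa_objective(x, samples):
    """Sample-average approximation of E[F(x, xi)]."""
    return sum(F(x, xi) for xi in samples) / len(samples)

random.seed(0)
samples = [random.gauss(1.0, 0.5) for _ in range(2000)]

# grid minimization of the sample-average objective
grid = [i / 100 for i in range(-100, 301)]
x_saa = min(grid, key=lambda x: saa_objective(x, samples))
```

As the sample size grows, the SAA minimizer converges to the true minimizer (here \(\mathbb{E}[\xi] = 1\)), which is the convergence behavior the paper analyzes in a far more general, non-smooth constrained setting.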

Authors: Thomas Weise; Yuezhong Wu; Raymond Chiong; Ke Tang; Jörg Lässig
Pages: 511 - 534
Abstract: In the field of Evolutionary Computation, there exists a common myth that “an Evolutionary Algorithm (EA) will outperform a local search algorithm, given enough runtime and a large-enough population”. We believe that this is not necessarily true and challenge the statement with several simple considerations. We then investigate the population size parameter of EAs, as this is the element in the above claim that can be controlled. We conduct a related-work study, which substantiates the assumption that there should be an optimal setting for the population size at which a specific EA would perform best on a given problem instance and computational budget. Subsequently, we carry out a large-scale experimental study on 68 instances of the Traveling Salesman Problem with \((\mu +\lambda )\) EAs whose static population sizes are powers of two, ranging from \((1+2)\) to \((262144+524288)\), as well as with adaptive population sizes. We find that analyzing the performance of the different setups over runtime supports our point of view and the existence of optimal finite population size settings.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0417-5
Issue No: Vol. 66, No. 3 (2016)
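For readers unfamiliar with the \((\mu+\lambda)\) notation used above: \(\mu\) parents produce \(\lambda\) offspring per generation, and the best \(\mu\) of parents plus offspring survive. A minimal sketch on OneMax (a stand-in problem; the paper's study uses TSP instances) shows how population size becomes a tunable parameter under a fixed evaluation budget:

```python
import random

def onemax(bits):
    """Toy fitness: number of one-bits (maximum is len(bits))."""
    return sum(bits)

def mutate(bits, rng):
    """Standard bit-flip mutation with rate 1/n."""
    n = len(bits)
    return [b ^ (rng.random() < 1.0 / n) for b in bits]

def mu_plus_lambda(mu, lam, n, budget, seed=0):
    """(mu+lambda) EA under a fixed budget of fitness evaluations."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    evals = mu
    while evals < budget:
        offspring = [mutate(rng.choice(pop), rng) for _ in range(lam)]
        evals += lam
        pop = sorted(pop + offspring, key=onemax, reverse=True)[:mu]
    return max(onemax(p) for p in pop)

best = mu_plus_lambda(mu=1, lam=2, n=20, budget=3000)
```

Rerunning with much larger \(\mu\) and \(\lambda\) under the same `budget` illustrates the paper's point: more parallelism per generation, but fewer generations, so an optimal finite population size exists for a given budget.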

Authors: Serge L. Shishkin; Alan M. Finn
Pages: 535 - 572
Abstract: The general class of zero-order Global Optimization problems is split into subclasses according to a proposed “Complexity measure” and the computational complexity of each subclass is rigorously estimated. Then, the laboriousness (computational demand) of general Branch-and-Bound (BnB) methods is estimated for each subclass. For conventional “Cubic” BnB based on splitting an n-dimensional cube into \(2^n\) sub-cubes, both upper and lower laboriousness estimates are obtained. The value of the Complexity measure for a problem subclass enters linearly into all complexity and laboriousness estimates for that subclass. A new BnB method based on the lattice \(A_n^*\) is presented with upper laboriousness bound that is, though conservative, smaller by a factor of \(O((4/3)^n)\) than the lower bound of the conventional method. The optimality of the new method is discussed. All results are extended to the class of Adaptive Covering problems—that is, covering of a large n-dimensional set by balls of different size, where the size of each ball is defined by a locally computed criterion.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0416-6
Issue No: Vol. 66, No. 3 (2016)
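A minimal sketch of the conventional “Cubic” BnB scheme described above: each n-cube is split into \(2^n\) half-width subcubes, and each subcube is bounded below by its center value minus a Lipschitz term proportional to its half-diagonal; subcubes whose bound cannot beat the incumbent are pruned. The test function, Lipschitz constant, and tolerances are illustrative assumptions, and the paper's \(A_n^*\)-lattice method is not shown.

```python
import heapq
import itertools
import math

def cubic_bnb(f, lo, hi, n, L, tol=1e-2, max_nodes=20000):
    """Best-first Cubic BnB for an L-Lipschitz f on the cube [lo, hi]^n."""
    half = (hi - lo) / 2.0
    center = tuple(lo + half for _ in range(n))
    best = f(center)
    # heap entries: (lower_bound, center, half_width)
    heap = [(best - L * half * math.sqrt(n), center, half)]
    nodes = 0
    while heap and nodes < max_nodes:
        bound, c, h = heapq.heappop(heap)
        if bound > best - tol:
            continue                                  # cannot improve: prune
        nodes += 1
        for signs in itertools.product((-1, 1), repeat=n):   # 2^n children
            cc = tuple(ci + s * h / 2 for ci, s in zip(c, signs))
            fc = f(cc)
            best = min(best, fc)
            heapq.heappush(heap, (fc - L * (h / 2) * math.sqrt(n), cc, h / 2))
    return best

f = lambda x: sum(xi ** 2 for xi in x)    # minimum 0 at the origin
best = cubic_bnb(f, -1.0, 1.0, n=2, L=4.0)
```

The \(2^n\) fan-out per split is exactly the exponential factor the paper's lattice-based alternative attacks.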

Authors: Hongjin He; Jitamitra Desai; Kai Wang
Pages: 573 - 583
Abstract: In this paper, we introduce a new primal–dual prediction–correction algorithm for solving a saddle point optimization problem, which serves as a bridge between the algorithms proposed in Cai et al. (J Glob Optim 57:1419–1428, 2013) and He and Yuan (SIAM J Imaging Sci 5:119–149, 2012). An interesting byproduct of the proposed method is that we obtain an easily implementable projection-based primal–dual algorithm, when the primal and dual variables belong to simple convex sets. Moreover, we establish the worst-case \({\mathcal {O}}(1/t)\) convergence rate result in an ergodic sense, where t represents the number of iterations.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0437-1
Issue No: Vol. 66, No. 3 (2016)
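To make the "projection-based primal–dual" idea concrete, here is a scalar sketch on the toy saddle point problem \(\min_{x \in [-1,1]} \max_{y \in [-1,1]} xy\), whose solution is (0, 0). The prediction–correction below is an extragradient-style stand-in, not the paper's exact update rule; the problem, step size, and iteration count are illustrative assumptions.

```python
def proj(v, lo=-1.0, hi=1.0):
    """Projection onto the simple convex set [lo, hi]."""
    return min(max(v, lo), hi)

def primal_dual(x, y, tau=0.5, iters=200):
    """Extragradient-style prediction-correction for min_x max_y x*y on boxes."""
    for _ in range(iters):
        xp = proj(x - tau * y)        # prediction: primal descent step
        yp = proj(y + tau * x)        # prediction: dual ascent step
        x = proj(x - tau * yp)        # correction using the predicted dual
        y = proj(y + tau * xp)        # correction using the predicted primal
    return x, y

x, y = primal_dual(1.0, 1.0)
```

Plain simultaneous gradient descent-ascent cycles on this bilinear problem; the prediction step is what makes the corrected iterates contract toward the saddle point, which is why prediction–correction schemes of this family converge.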

Authors: Roman Sznajder
Pages: 585 - 593
Abstract: In this paper, we investigate the structure of Lyapunov-like transformations on the extended second order cone, considered as a multivariate version of the topheavy cone with respect to an arbitrary norm in a Euclidean space. As a by-product, we compute the Lyapunov rank of the extended second order cone. We also show the irreducibility of such a cone.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0445-1
Issue No: Vol. 66, No. 3 (2016)

Authors: Carmo P. Brás; Alfredo N. Iusem; Joaquim J. Júdice
Pages: 153 - 171
Abstract: We introduce several new results on the Quadratic Eigenvalue Complementarity Problem (QEiCP), focusing on the nonsymmetric case, i.e., without making symmetry assumptions on the matrices defining the problem. First we establish a new sufficient condition for existence of solutions of this problem, which is somewhat more manageable than previously existing ones. This condition works through the introduction of auxiliary variables, which leads to the reduction of QEiCP to an Eigenvalue Complementarity Problem in higher dimension. Hence, this reduction suggests a new strategy for solving QEiCP, which is also analyzed in the paper. We also present an upper bound for the number of solutions of QEiCP and exhibit some examples of instances of QEiCP whose solution set has large cardinality, without, however, attaining the aforementioned upper bound. We also investigate the numerical solution of the QEiCP by exploiting nonlinear programming and variational inequality formulations of QEiCP. Some numerical experiments are reported and illustrate the benefits and drawbacks of using these formulations for solving the QEiCP in practice.
PubDate: 2016-10-01
DOI: 10.1007/s10898-014-0260-5
Issue No: Vol. 66, No. 2 (2016)

Authors: Junsang Yuh; Youngho Lee
Pages: 173 - 193
Abstract: In this paper, we consider the class of 0–1 integer problems and develop an effective cutting plane algorithm that gives valid inequalities called surrogate-RLT cuts (SR cuts). Here we implement surrogate constraint analysis along with the reformulation–linearization technique (RLT) to generate strong valid inequalities. In this approach, we construct a tighter linear relaxation by augmenting SR cuts to the problem. The level-\(d\) SR closure of a 0–1 integer program is the polyhedron obtained by intersecting all the SR cuts obtained from the RLT polyhedron formed over each set of \(d\) variables with its initial formulation. We present an algorithm for approximately optimizing over the SR closure. Finally, we present computational results of SR cuts for solving 0–1 integer programming problems on well-known benchmark instances from MIPLIB 3.0.
PubDate: 2016-10-01
DOI: 10.1007/s10898-015-0297-0
Issue No: Vol. 66, No. 2 (2016)

Authors: Yen Tang; Jean-Philippe P. Richard; J. Cole Smith
Pages: 225 - 262
Abstract: In this paper, we introduce a new class of algorithms for solving the mixed-integer bilevel min–max optimization problem. This problem involves two players, a leader and a follower, who play a Stackelberg game. In particular, the leader seeks to minimize over a set of discrete variables the maximum objective that the follower can achieve. The complicating features of our problem are that a subset of the follower’s decisions are restricted to be integer-valued, and that the follower’s decisions are constrained by the leader’s decisions. We first describe several bilevel min–max programs that can be used to obtain lower and upper bounds on the optimal objective value of the problem. We then present algorithms for this problem that finitely terminate with an optimal solution when the leader variables are restricted to take binary values. Finally, we report the results of a computational study aimed at evaluating the quality of our algorithms on two families of randomly generated problems.
PubDate: 2016-10-01
DOI: 10.1007/s10898-015-0274-7
Issue No: Vol. 66, No. 2 (2016)

Authors: Ming Cheng; Subhash C. Sarin; Sanchit Singh
Pages: 263 - 290
Abstract: In this paper, we address a single-lot, lot streaming problem for a two-stage hybrid flow shop, which consists of one machine at Stage 1 and two parallel (identical) machines at Stage 2. The objective is to minimize makespan. The lot is to be split into sublots, each of which is processed first on the machine at Stage 1, and then on one of the machines at Stage 2. A sublot-attached removal time is incurred after processing each sublot at Stage 1. First, we assume the number of sublots for the lot to be known a priori and develop closed-form expressions to obtain optimal, continuous sublot sizes for this case. Then, we consider the determination of an optimal number of sublots in addition to their sizes. We develop an upper bound on the number of sublots, \(N_{\text {pos}}\), and use an algorithm of \(O(N_{\text {pos}})\) complexity in conjunction with the closed-form expressions for sublot sizes to obtain an optimal solution. We also address the problem of determining the number of sublots and integer sublot sizes, and propose a heuristic method for its solution that relies on some key results from the continuous case of the problem. The results of our numerical experimentation reveal the efficacy of the proposed method to obtain near-optimal integer sublot sizes and makespan values that are within 2.35% of the true optimum for the testbed of data used, each obtained within a few seconds of CPU time.
PubDate: 2016-10-01
DOI: 10.1007/s10898-015-0298-z
Issue No: Vol. 66, No. 2 (2016)

Authors: Warren Adams; Pietro Belotti; Ruobing Shen
Pages: 311 - 329
Abstract: Given a p-dimensional nonnegative, integral vector \(\varvec{\alpha },\) this paper characterizes the convex hull of the set S of nonnegative, integral vectors \(\varvec{x}\) that are lexicographically less than or equal to \(\varvec{\alpha }.\) To obtain a finite number of elements in S, the vectors \(\varvec{x}\) are restricted to be component-wise upper-bounded by an integral vector \(\varvec{u}.\) We show that a linear number of facets is sufficient to describe the convex hull. For the special case in which every entry of \(\varvec{u}\) takes the same value \((n-1)\) for some integer \(n \ge 2,\) the convex hull of the set of n-ary vectors results. Our facets generalize the known family of cover inequalities for the \(n=2\) binary case. They allow for advances relative to both the modeling of integer variables using base-n expansions and the solving of knapsack problems having weakly super-decreasing coefficients. Insight is gained by alternately constructing the convex hull representation in a higher-variable space using disjunctive programming arguments.
PubDate: 2016-10-01
DOI: 10.1007/s10898-016-0435-3
Issue No: Vol. 66, No. 2 (2016)
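A small sketch of the set S described above: nonnegative integral vectors, component-wise bounded by u, that are lexicographically less than or equal to alpha. The enumeration is brute force for illustration only; the paper's contribution is describing conv(S) by a linear number of facets instead.

```python
import itertools

def lex_le(x, alpha):
    """Lexicographic comparison: the first differing entry decides."""
    for xi, ai in zip(x, alpha):
        if xi != ai:
            return xi < ai
    return True                            # vectors are equal

def enumerate_S(alpha, u):
    """All x with 0 <= x <= u component-wise and x lexicographically <= alpha."""
    return [x for x in itertools.product(*(range(ui + 1) for ui in u))
            if lex_le(x, alpha)]

# n-ary special case: every entry of u equals n-1 (here n = 3, entries in {0,1,2})
alpha, u = (1, 2, 0), (2, 2, 2)
S = enumerate_S(alpha, u)
```

In this n-ary case, lexicographic order coincides with numeric order of the base-3 expansion, so \(|S|\) equals the base-3 value of alpha plus one (here \(1 \cdot 9 + 2 \cdot 3 + 0 + 1 = 16\)), which connects the set to the base-n modeling of integer variables mentioned in the abstract.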

Authors: J. M. Martínez; M. Raydan
Abstract: In a recent paper, we introduced a trust-region method with variable norms for unconstrained minimization, proved standard asymptotic convergence results, and discussed the impact of this method in global optimization. Here we show that, with a simple modification of the sufficient descent condition and by replacing the trust-region approach with a suitable cubic regularization, the complexity of this method for finding approximate first-order stationary points is \(O(\varepsilon ^{-3/2})\). We also prove a complexity result with respect to second-order stationarity. Some numerical experiments are also presented to illustrate the effect of the modification on practical performance.
PubDate: 2016-10-15
DOI: 10.1007/s10898-016-0475-8
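A one-dimensional sketch of a cubic-regularization step of the kind analyzed above: the trial step minimizes the cubic model \(m(s) = f(x) + f'(x)s + \tfrac{1}{2} f''(x) s^2 + \tfrac{\sigma}{3}|s|^3\), which in 1D has a closed-form stationary point. The test function and regularization weight are illustrative assumptions; this is not the authors' variable-norm method.

```python
import math

def cubic_step(g, h, sigma):
    """Minimizer of g*s + 0.5*h*s^2 + (sigma/3)*|s|^3 over s (1D closed form).

    Stationarity reads g + h*s + sigma*s*|s| = 0; writing s = -sign(g)*t with
    t >= 0 gives sigma*t^2 + h*t - |g| = 0, solved by the quadratic formula.
    """
    if g == 0.0:
        return 0.0
    t = (-h + math.sqrt(h * h + 4.0 * sigma * abs(g))) / (2.0 * sigma)
    return -math.copysign(t, g)

def arc_minimize(f, df, d2f, x0, sigma=1.0, iters=30):
    """Repeated cubic-regularized steps (fixed sigma, no adaptive update)."""
    x = x0
    for _ in range(iters):
        x += cubic_step(df(x), d2f(x), sigma)
    return x

# f(x) = x^4 has its minimizer (and only stationary point) at x = 0
x = arc_minimize(lambda x: x ** 4, lambda x: 4 * x ** 3,
                 lambda x: 12 * x * x, x0=1.0)
```

The cubic term bounds the step length even where the Hessian vanishes or is negative, which is what underlies the \(O(\varepsilon^{-3/2})\) complexity guarantee discussed in the abstract.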

Authors: Jai Rajyaguru; Mario E. Villanueva; Boris Houska; Benoît Chachuat
Abstract: This article presents an arithmetic for the computation of Chebyshev models for factorable functions and an analysis of their convergence properties. Similar to Taylor models, Chebyshev models consist of a pair of a multivariate polynomial approximating the factorable function and an interval remainder term bounding the actual gap with this polynomial approximant. Propagation rules and local convergence bounds are established for the addition, multiplication and composition operations with Chebyshev models. The global convergence of this arithmetic as the polynomial expansion order increases is also discussed. A generic implementation of Chebyshev model arithmetic is available in the library MC++. It is shown through several numerical case studies that Chebyshev models provide tighter bounds than their Taylor model counterparts, but this comes at the price of extra computational burden.
PubDate: 2016-10-12
DOI: 10.1007/s10898-016-0474-9
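A minimal univariate sketch of the polynomial part of a Chebyshev model: degree-k Chebyshev interpolation of f on [-1, 1] at Chebyshev nodes, with an empirical (not rigorous) remainder estimate on a sample grid. A real Chebyshev model, as in MC++, carries a rigorous interval remainder and propagation rules instead; the function and degree here are illustrative assumptions.

```python
import math

def cheb_coeffs(f, k):
    """Coefficients c_j of sum_j c_j T_j(x) interpolating f at k+1 Chebyshev nodes."""
    N = k + 1
    nodes = [math.cos(math.pi * (i + 0.5) / N) for i in range(N)]
    fv = [f(x) for x in nodes]
    c = []
    for j in range(N):
        s = sum(fv[i] * math.cos(math.pi * j * (i + 0.5) / N) for i in range(N))
        c.append(s * (1.0 if j == 0 else 2.0) / N)
    return c

def cheb_eval(c, x):
    """Evaluate sum_j c_j T_j(x) via the Chebyshev recurrence T_{j+1} = 2x T_j - T_{j-1}."""
    t_prev, t = 1.0, x
    total = c[0] + (c[1] * x if len(c) > 1 else 0.0)
    for j in range(2, len(c)):
        t_prev, t = t, 2.0 * x * t - t_prev
        total += c[j] * t
    return total

c = cheb_coeffs(math.exp, 8)
# empirical remainder estimate on a sample grid (illustrative only)
rem = max(abs(math.exp(x) - cheb_eval(c, x))
          for x in [i / 50 - 1.0 for i in range(101)])
```

Chebyshev interpolants distribute their error nearly uniformly over the interval, which is why the resulting remainder terms tend to be tighter than the Taylor-model remainders centered at a single point.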

Authors: T. Bittencourt; O. P. Ferreira
Abstract: Extension of concepts and techniques from linear spaces to the Riemannian setting has been frequently attempted. One reason for the extension of such techniques is the possibility of transforming some Euclidean non-convex or quasi-convex problems into Riemannian convex problems. In this paper, a version of Kantorovich’s theorem on Newton’s method for finding a singularity of differentiable vector fields defined on a complete Riemannian manifold is presented. In the presented analysis, the classical Lipschitz condition is relaxed using a general majorant function, which enables us not only to establish the existence and uniqueness of the solution but also to unify earlier results related to Newton’s method. Moreover, a ball is prescribed around the points satisfying Kantorovich’s assumptions and convergence of the method is ensured for any starting point within this ball. In addition, some bounds for the Q-quadratic convergence of the method, which depend on the majorant function, are obtained.
PubDate: 2016-10-08
DOI: 10.1007/s10898-016-0472-y
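A Euclidean 1D sketch of the Newton iteration analyzed above: the paper works with vector fields on a Riemannian manifold under a majorant condition, but with the manifold taken as \(\mathbb{R}\) and X a scalar field, the iteration and its Q-quadratic convergence are easy to see. The field \(X(x) = x^2 - 2\) is an illustrative choice.

```python
def newton(X, dX, x0, iters=6):
    """Newton's method for a singularity of the field X, tracking residuals."""
    errs, x = [], x0
    for _ in range(iters):
        x = x - X(x) / dX(x)               # Newton step on the field X
        errs.append(abs(X(x)))             # residual after the step
    return x, errs

# the singularity of X(x) = x^2 - 2 on (0, inf) is sqrt(2)
x, errs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=2.0)
```

The residuals roughly square at each step (Q-quadratic convergence); the Kantorovich-type result above guarantees this behavior, with explicit radii and rates, from verifiable conditions at the starting point.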

Abstract: Nonnegative matrix factorization (NMF) is a powerful technique for dimension reduction, extracting latent factors and learning part-based representations. For large datasets, NMF performance depends on some major issues such as fast algorithms, fully parallel distributed feasibility and limited internal memory. This research designs a fast, fully parallel and distributed algorithm using limited internal memory to reach high NMF performance for large datasets. Specifically, we propose a flexible accelerated algorithm for NMF with all its \(L_1\) and \(L_2\) regularized variants based on full decomposition, which is a combination of exact line search, greedy coordinate descent, and accelerated search. The proposed algorithm takes advantage of these techniques to converge linearly at an over-bounded rate \((1-\frac{\mu }{L})(1 - \frac{\mu }{rL})^{2r}\) in optimizing each factor matrix when the other factor matrix is fixed in the sub-space of passive variables, where r is the number of latent components, and \(\mu \) and L are bounded as \(\frac{1}{2} \le \mu \le L \le r\). In addition, the algorithm can exploit data sparseness to run on large datasets with limited internal memory, which is an advance over fast block coordinate descent methods and accelerated methods. Our experimental results are highly competitive with those of seven state-of-the-art methods in three significant aspects: convergence, optimality, and the average number of iterations.
PubDate: 2016-10-04
DOI: 10.1007/s10898-016-0471-z
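To make the alternating structure concrete, here is a tiny dense NMF sketch using the classical multiplicative updates as a simple stand-in for the accelerated greedy coordinate descent proposed above: each factor is updated while the other is held fixed, exactly the alternating pattern the rate bound refers to. The data, rank, and update rule are illustrative; pure-Python lists keep the sketch dependency-free.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def mult_update(V, W, H, iters=200, eps=1e-9):
    """Alternating multiplicative updates for V ~ W H with W, H >= 0."""
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H), elementwise
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps)
              for j in range(len(H[0]))] for i in range(len(H))]
        # W <- W * (V H^T) / (W H H^T), elementwise
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps)
              for j in range(len(W[0]))] for i in range(len(W))]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0]]               # rank-1 nonnegative matrix
W, H = mult_update(V, [[1.0], [1.0]], [[1.0, 1.0]])
R = matmul(W, H)
err = max(abs(V[i][j] - R[i][j]) for i in range(2) for j in range(2))
```

Since V is exactly rank one, the rank-1 factorization reconstructs it to high accuracy; the multiplicative ratios also preserve nonnegativity automatically, which the paper's coordinate descent achieves through its passive-variable sub-space instead.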