 Journal of Global Optimization   [SJR: 0.992]   [H-I: 60]   Hybrid journal (may contain Open Access articles)    ISSN (Print) 0925-5001 - ISSN (Online) 1573-2916    Published by Springer-Verlag
• Convergence analysis of multivariate McCormick relaxations
• Authors: Jaromił Najman; Alexander Mitsos
Pages: 597 - 628
Abstract: The convergence rate is analyzed for McCormick relaxations of compositions of the form $$F \circ f$$, where F is a multivariate function, as established by Tsoukalas and Mitsos (J Glob Optim 59:633–662, 2014). Convergence order in the Hausdorff metric and pointwise convergence order are analyzed. Similar to the convergence-order propagation for McCormick relaxations of univariate composite functions, cf. Bompadre and Mitsos (J Glob Optim 52(1):1–28, 2012), the convergence order of the multivariate composition is determined by the minimum of the orders of the inclusion functions of the inner functions and the convergence order of the multivariate outer function. The convergence order in the Hausdorff metric additionally depends on the enclosure order of the image of the inner functions, introduced in this work. The established result holds for any composition and can be further specialized for specific compositions. In some cases this specialization recovers the bounds established by Bompadre and Mitsos. Examples of important functions, e.g., the binary product and the minimum of functions, show that relaxations based on the multivariate composition theorem achieve a higher convergence rate than univariate McCormick relaxations. Refined bounds on the convergence order of McCormick relaxations of univariate and multivariate composite functions, employing also the range order and similar to those determined by Bompadre et al. (J Glob Optim 57(1):75–114, 2013), are developed.
PubDate: 2016-12-01
DOI: 10.1007/s10898-016-0408-6
Issue No: Vol. 66, No. 4 (2016)
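The convergence orders analyzed here can be probed numerically. The sketch below is not from the paper: the function x(1-x), the center point, and the grid-based range computation are illustrative choices. It estimates the empirical Hausdorff convergence order of a natural interval extension, the kind of inclusion-function baseline that McCormick relaxations improve upon:

```python
import math

def imul(a, b):
    # interval product [a] * [b] via the four endpoint products
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def natural_bounds(c, h):
    # natural interval extension of f(x) = x*(1 - x) on X = [c-h, c+h]
    X = (c - h, c + h)
    return imul(X, (1 - X[1], 1 - X[0]))

def true_range(c, h, n=2001):
    # approximate exact range of f on [c-h, c+h] by dense sampling
    xs = [c - h + 2*h*i/(n - 1) for i in range(n)]
    vals = [x*(1 - x) for x in xs]
    return (min(vals), max(vals))

def overestimate(c, h):
    # Hausdorff distance between the interval enclosure and the true range
    (lo, hi), (tlo, thi) = natural_bounds(c, h), true_range(c, h)
    return max(tlo - lo, hi - thi)

# empirical convergence order: compare errors as the box width halves
c = 0.3
e1, e2 = overestimate(c, 1e-2), overestimate(c, 5e-3)
order = math.log(e1 / e2) / math.log(2)   # ~1 for the natural extension
```

First-order Hausdorff convergence is what this simple extension delivers; the point of the relaxation theory above is to characterize when composite relaxations do better.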

• Polyhedral subdivisions and functional forms for the convex envelopes of
• Authors: Marco Locatelli
Pages: 629 - 668
Abstract: In this paper we show that the convex envelope over polytopes for a class of bivariate functions, including the bilinear and fractional functions as special cases, is characterized by a polyhedral subdivision of the polytopes, and is such that over each member of the subdivision the convex envelope has a given (although possibly only implicitly defined) functional form. For the bilinear and fractional case we show that there are three possible functional forms, which can be explicitly defined.
PubDate: 2016-12-01
DOI: 10.1007/s10898-016-0418-4
Issue No: Vol. 66, No. 4 (2016)
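For the special case of a box domain, the convex envelope of the bilinear term is the classical McCormick result: the pointwise maximum of two planes, i.e. a piecewise-linear function over a subdivision of the box into two pieces. A minimal sketch of that box-domain case only, not the general polytope construction of the paper:

```python
def bilinear_envelope(x, y, xl, xu, yl, yu):
    # convex envelope of x*y over the box [xl, xu] x [yl, yu]:
    # the pointwise maximum of the two McCormick underestimating planes
    return max(yl*x + xl*y - xl*yl, yu*x + xu*y - xu*yu)

xl, xu, yl, yu = 0.0, 1.0, -1.0, 2.0
grid = [(xl + (xu - xl)*i/10, yl + (yu - yl)*j/10)
        for i in range(11) for j in range(11)]

# the envelope underestimates x*y everywhere on the box ...
valid = all(bilinear_envelope(x, y, xl, xu, yl, yu) <= x*y + 1e-12
            for x, y in grid)
# ... and is tight at the four vertices
corners = [(xl, yl), (xl, yu), (xu, yl), (xu, yu)]
tight = all(abs(bilinear_envelope(x, y, xl, xu, yl, yu) - x*y) < 1e-12
            for x, y in corners)
```

Over a general polytope the subdivision has more members and the functional forms are only implicitly defined, which is the situation the paper characterizes.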

• New multi-commodity flow formulations for the pooling problem
• Authors: Natashia Boland; Thomas Kalinowski; Fabian Rigterink
Pages: 669 - 710
Abstract: The pooling problem is a nonconvex nonlinear programming problem with numerous applications. The nonlinearities of the problem arise from bilinear constraints that capture the blending of raw materials. Bilinear constraints are well studied, and significant progress has been made in solving large instances of the pooling problem to global optimality. This is due in no small part to reformulations of the problem. Recently, Alfaki and Haugland proposed a multi-commodity flow formulation of the pooling problem based on input commodities. The authors proved that the new formulation has a stronger linear relaxation than previously known formulations. They also provided computational results which show that the new formulation outperforms previously known formulations when used in a global optimization solver. In this paper, we generalize their ideas and propose new multi-commodity flow formulations based on output, input and output, and (input, output)-commodities. We prove the equivalence of the formulations, and we study the partial order of formulations with respect to the strength of their LP relaxations. In an extensive computational study, we evaluate the performance of the new formulations. We study the trade-off between disaggregating commodities, and thereby increasing the size of the formulations, versus strengthening the relaxed linear programs and improving the computational performance of the nonlinear programs. We provide computational results which show that output commodities often outperform input commodities, and that disaggregating commodities further only marginally strengthens the linear relaxations. In fact, smaller formulations often show significantly better performance when used in a global optimization solver.
PubDate: 2016-12-01
DOI: 10.1007/s10898-016-0404-x
Issue No: Vol. 66, No. 4 (2016)
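The bilinear blending constraints mentioned above can be illustrated on a toy single-pool instance (the flows and attribute values below are hypothetical, not from the paper):

```python
def pool_quality(inflows, specs):
    # attribute value of the pool = flow-weighted average of input attributes
    return sum(f*s for f, s in zip(inflows, specs)) / sum(inflows)

# hypothetical single-pool instance: two inputs feeding one pool
inflows = [60.0, 40.0]   # flow from each input into the pool
specs = [3.0, 1.0]       # attribute (e.g. sulfur content) of each input
q = pool_quality(inflows, specs)

# a product-quality constraint then reads q * y <= limit * y for an outflow y;
# the product of the decision variables q and y is the bilinear term that
# makes the pooling problem nonconvex
```

Multi-commodity flow formulations attack exactly these terms by tracking how much of each commodity flows through each pool.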

• Solving infinite horizon optimization problems through analysis of a
one-dimensional global optimization problem
• Authors: Seksan Kiatsupaibul; Robert L. Smith; Zelda B. Zabinsky
Pages: 711 - 727
Abstract: Infinite horizon optimization (IHO) problems present a number of challenges for their solution, most notably the inclusion of an infinite data set. This hurdle is often circumvented by approximating the solution through increasingly long finite horizon truncations of the original infinite horizon problem. In this paper, we adopt a novel transformation that reduces the infinite-dimensional IHO problem to an equivalent one-dimensional optimization problem, i.e., minimizing a Hölder continuous objective function with known parameters over a closed and bounded interval of the real line. We exploit the characteristics of the transformed problem in one dimension and introduce an algorithm with a graphical implementation for solving the underlying infinite-dimensional optimization problem.
PubDate: 2016-12-01
DOI: 10.1007/s10898-016-0423-7
Issue No: Vol. 66, No. 4 (2016)
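Once the problem is reduced to minimizing a Hölder continuous function over an interval, classical one-dimensional global methods apply. As a hedged illustration (Lipschitz case, i.e. Hölder exponent 1, with an illustrative quadratic test function; this is not the transformation or the algorithm of the paper), a minimal Piyavskii–Shubert scheme:

```python
def shubert_minimize(f, a, b, L, iters=50):
    # Piyavskii-Shubert: lower-bound f by the cones f(x_i) - L*|x - x_i|
    # and repeatedly sample where the saw-tooth lower bound is smallest
    xs, fs = [a, b], [f(a), f(b)]
    for _ in range(iters):
        pts = sorted(zip(xs, fs))
        best_lb, best_x = float("inf"), None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            xm = 0.5*(x1 + x2) + (f1 - f2)/(2*L)   # cone intersection point
            lb = 0.5*(f1 + f2) - 0.5*L*(x2 - x1)   # lower-bound value there
            if lb < best_lb:
                best_lb, best_x = lb, xm
        xs.append(best_x)
        fs.append(f(best_x))
    return min(fs)

# example: f(x) = (x - 0.3)^2 on [0, 1]; L = 1.4 >= max|f'| on the interval
best = shubert_minimize(lambda x: (x - 0.3)**2, 0.0, 1.0, L=1.4)
```

The known Hölder parameters guaranteed by the transformation are what make such lower-bounding constructions valid.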

• MONEDA: scalable multi-objective optimization with a neural network-based
estimation of distribution algorithm
• Authors: Luis Martí; Jesús García; Antonio Berlanga; José M. Molina
Pages: 729 - 768
Abstract: The extension of estimation of distribution algorithms (EDAs) to the multi-objective domain has led to multi-objective optimization EDAs (MOEDAs). Most MOEDAs have limited themselves to porting single-objective EDAs to the multi-objective domain. Although MOEDAs have proved to be a valid approach, this limitation is an obstacle to achieving a significant improvement over “standard” multi-objective optimization evolutionary algorithms. Adapting the model-building algorithm is one way to achieve a substantial advance. Most model-building schemes used so far by EDAs employ off-the-shelf machine learning methods. However, the model-building problem has particular requirements that those methods do not meet. The focus of this paper is on the model-building issue and how it has not been properly understood and addressed by most MOEDAs. We delve into the roots of this matter and hypothesize about its causes. To gain a deeper understanding of the subject, we propose a novel algorithm intended to overcome the drawbacks of current MOEDAs. This new algorithm is the multi-objective neural estimation of distribution algorithm (MONEDA). MONEDA uses a modified growing neural gas network for model building (MB-GNG). MB-GNG is a custom-made clustering algorithm that meets the above requirements. Thanks to its custom-made model-building algorithm, the preservation of elite individuals, and its individual replacement scheme, MONEDA is capable of scalably solving continuous multi-objective optimization problems. It performs better than similar algorithms in terms of a set of quality indicators and computational resource requirements.
PubDate: 2016-12-01
DOI: 10.1007/s10898-016-0415-7
Issue No: Vol. 66, No. 4 (2016)

• Scatter search for the bandpass problem
• Authors: Jesús Sánchez-Oro; Manuel Laguna; Rafael Martí; Abraham Duarte
Pages: 769 - 790
Abstract: We tackle a combinatorial problem that consists of finding the optimal configuration of a binary matrix. The configuration is determined by the ordering of the rows in the matrix, and the objective function value is associated with a value B, the so-called bandpass number. In the basic version of the problem, the objective is to maximize the number of non-overlapping blocks containing B consecutive cells with a value of one in each column of the matrix. We explore variants of this basic problem and use them to test heuristic strategies within the scatter search framework. An existing library of problem instances is used to perform scientific testing of the proposed search procedures to gain insights that may be valuable in other combinatorial optimization settings. We also conduct competitive testing to compare outcomes with methods published in the literature and to improve upon previous results.
PubDate: 2016-12-01
DOI: 10.1007/s10898-016-0446-0
Issue No: Vol. 66, No. 4 (2016)
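The bandpass objective described above can be sketched directly. A minimal evaluator for a given row ordering, assuming the basic variant of the problem (the example matrices are illustrative):

```python
def bandpass_value(rows, B):
    # total number of non-overlapping all-ones blocks of length B,
    # counted column by column for the given row ordering
    total = 0
    for j in range(len(rows[0])):
        run = 0
        for row in rows:
            if row[j] == 1:
                run += 1
                if run == B:   # a complete block: count it and start over
                    total += 1
                    run = 0
            else:
                run = 0
    return total

# the row ordering matters: same rows, different objective values (B = 2)
good_order = [[1, 0], [1, 1], [0, 1]]
bad_order  = [[1, 0], [0, 1], [1, 1]]
```

Here `bandpass_value(good_order, 2)` is 2 while `bandpass_value(bad_order, 2)` is 1, which is the combinatorial structure the row permutation search exploits.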

• Gap functions for quasi-equilibria
• Authors: Giancarlo Bigi; Mauro Passacantando
Pages: 791 - 810
Abstract: An approach for solving quasi-equilibrium problems (QEPs) is proposed relying on gap functions, which allow reformulating QEPs as global optimization problems. The (generalized) smoothness properties of a gap function are analysed and an upper estimate of its Clarke directional derivative is given. Monotonicity assumptions on both the equilibrium and constraining bifunctions are a key tool to guarantee that all the stationary points of a gap function actually solve QEP. A few classes of constraints satisfying such assumptions are identified covering a wide range of situations. Relying on these results, a descent method for solving QEP is devised and its convergence proved. Finally, error bounds are given in order to guarantee the boundedness of the sequence generated by the algorithm.
PubDate: 2016-12-01
DOI: 10.1007/s10898-016-0458-9
Issue No: Vol. 66, No. 4 (2016)

• MSO: a framework for bound-constrained black-box global optimization
algorithms
• Authors: Abdullah Al-Dujaili; S. Suresh; N. Sundararajan
Pages: 811 - 845
Abstract: This paper addresses a class of algorithms for solving bound-constrained black-box global optimization problems. These algorithms partition the objective function domain over multiple scales in search for the global optimum. For such algorithms, we provide a generic procedure, which we refer to as multi-scale optimization (MSO). Furthermore, we propose a theoretical methodology to study the convergence of MSO algorithms based on three basic assumptions: (a) local Hölder continuity of the objective function f, (b) boundedness of the partitions, and (c) sphericity of the partitions. Moreover, the worst-case finite-time performance and convergence rates of several leading MSO algorithms, namely Lipschitzian optimization methods, multi-level coordinate search, dividing rectangles, and optimistic optimization methods, are presented.
PubDate: 2016-12-01
DOI: 10.1007/s10898-016-0441-5
Issue No: Vol. 66, No. 4 (2016)

• Delaunay-based derivative-free optimization via global surrogates, part I:
linear constraints
• Authors: Pooriya Beyhaghi; Daniele Cavaglieri; Thomas Bewley
Pages: 331 - 382
Abstract: A new derivative-free optimization algorithm is introduced for nonconvex functions within a feasible domain bounded by linear constraints. Global convergence is guaranteed for twice differentiable functions with bounded Hessian, and is found to be remarkably efficient even for many functions which are not differentiable. Like other Response Surface Methods, at each optimization step, the algorithm minimizes a metric combining an interpolation of existing function evaluations and a model of the uncertainty of this interpolation. By adjusting the respective weighting of these two terms, the algorithm incorporates a tunable balance between global exploration and local refinement; a rule to adjust this balance automatically is also presented. Unlike other methods, any well-behaved interpolation strategy may be used. The uncertainty model is built upon the framework of a Delaunay triangulation of existing datapoints in parameter space. A quadratic function which goes to zero at each datapoint is formed within each simplex of this triangulation; the union of each of these quadratics forms the desired uncertainty model. Care is taken to ensure that function evaluations are performed at points that are well situated in parameter space; that is, such that the simplices of the resulting triangulation have circumradii with a known bound. This facilitates well-behaved local refinement as additional function evaluations are performed.
PubDate: 2016-11-01
DOI: 10.1007/s10898-015-0384-2
Issue No: Vol. 66, No. 3 (2016)

• Delaunay-based derivative-free optimization via global surrogates, part
II: convex constraints
• Authors: Pooriya Beyhaghi; Thomas R. Bewley
Pages: 383 - 415
Abstract: The derivative-free global optimization algorithms developed in Part I of this study, for linearly constrained problems, are extended to nonconvex n-dimensional problems with convex constraints. The initial $$n+1$$ feasible datapoints are chosen in such a way that the volume of the simplex generated by these datapoints is maximized; as the algorithm proceeds, additional feasible datapoints are added in such a way that the convex hull of the available datapoints efficiently increases towards the boundaries of the feasible domain. Similar to the algorithms developed in Part I of this study, at each step of the algorithm, a search function is defined based on an interpolating function which passes through all available datapoints and a synthetic uncertainty function which characterizes the distance to the nearest datapoints. This uncertainty function, in turn, is built on the framework of a Delaunay triangulation, which is itself based on all available datapoints together with the (infeasible) vertices of an exterior simplex which completely contains the feasible domain. The search function is minimized within those simplices of this Delaunay triangulation that do not include the vertices of the exterior simplex. If the outcome of this minimization is contained within the circumsphere of a simplex which includes a vertex of the exterior simplex, this new point is projected out to the boundary of the feasible domain. For problems in which the feasible domain includes edges (due to the intersection of multiple twice-differentiable constraints), a modified search function is considered in the vicinity of these edges to assure convergence.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0433-5
Issue No: Vol. 66, No. 3 (2016)

• SOP: parallel surrogate global optimization with Pareto center selection
for computationally expensive single objective problems
• Authors: Tipaluck Krityakierne; Taimoor Akhtar; Christine A. Shoemaker
Pages: 417 - 437
Abstract: This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0407-7
Issue No: Vol. 66, No. 3 (2016)
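The non-dominated sorting at the heart of SOP can be sketched as follows. The scores are toy numbers; in the actual method the two minimized objectives are the expensive function value and the negated minimum distance to previously evaluated points:

```python
def dominates(q, p):
    # q dominates p if q is no worse in both objectives and better in one
    return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])

def pareto_front(points):
    # indices of non-dominated points (both objectives minimized)
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for q in points)]

# (function value, -min distance) per evaluated point, illustrative values;
# low function value OR large spacing keeps a point on the front
scores = [(1.0, -0.5), (2.0, -2.0), (3.0, -1.0), (1.5, -0.3)]
front = pareto_front(scores)   # candidate centers for the P processors
```

Points 0 and 1 survive: the first has the best function value, the second is farthest from earlier evaluations, which is the exploration/exploitation balance the front encodes.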

• Sufficient conditions for error bounds of difference functions and
applications
• Authors: Nguyen Thi Van Hang; Jen-Chih Yao
Pages: 439 - 456
Abstract: This paper establishes verifiable sufficient conditions for the existence of error bounds for the sub-level set of a difference function over an abstract constraint by applying a technique used by A. D. Ioffe. As a consequence, error bounds for constraint systems defined by d.c. inequalities are obtained, and their applications in studying the exactness of the associated $$\ell _1$$ penalty function and the existence of Lagrange multipliers as necessary optimality conditions are also investigated.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0410-z
Issue No: Vol. 66, No. 3 (2016)

• Smoothing and SAA method for stochastic programming problems with
non-smooth objective and constraints
• Authors: Gui-Hua Lin; Mei-Ju Luo; Jin Zhang
Pages: 487 - 510
Abstract: We consider a stochastic non-smooth programming problem with equality, inequality and abstract constraints, which is a generalization of the problem studied by Xu and Zhang (Math Program 119:371–401, 2009), where only an abstract constraint is considered. We employ a smoothing technique to deal with the non-smoothness and use sample average approximation techniques to cope with the mathematical expectations. Then, we investigate the convergence properties of the approximation problems. We further apply the approach to solve stochastic mathematical programs with equilibrium constraints. In addition, we give an illustrative example from economics to show the applicability of the proposed approach.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0413-9
Issue No: Vol. 66, No. 3 (2016)

• An optimal algorithm for Global Optimization and adaptive covering
• Authors: Serge L. Shishkin; Alan M. Finn
Pages: 535 - 572
Abstract: The general class of zero-order Global Optimization problems is split into subclasses according to a proposed “Complexity measure” and the computational complexity of each subclass is rigorously estimated. Then, the laboriousness (computational demand) of general Branch-and-Bound (BnB) methods is estimated for each subclass. For conventional “Cubic” BnB based on splitting an n-dimensional cube into $$2^n$$ sub-cubes, both upper and lower laboriousness estimates are obtained. The value of the Complexity measure for a problem subclass enters linearly into all complexity and laboriousness estimates for that subclass. A new BnB method based on the lattice $$A_n^*$$ is presented with upper laboriousness bound that is, though conservative, smaller by a factor of $$O((4/3)^n)$$ than the lower bound of the conventional method. The optimality of the new method is discussed. All results are extended to the class of Adaptive Covering problems—that is, covering of a large n-dimensional set by balls of different size, where the size of each ball is defined by a locally computed criterion.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0416-6
Issue No: Vol. 66, No. 3 (2016)

• A primal–dual prediction–correction algorithm for saddle point
optimization
• Authors: Hongjin He; Jitamitra Desai; Kai Wang
Pages: 573 - 583
Abstract: In this paper, we introduce a new primal–dual prediction–correction algorithm for solving a saddle point optimization problem, which serves as a bridge between the algorithms proposed in Cai et al. (J Glob Optim 57:1419–1428, 2013) and He and Yuan (SIAM J Imaging Sci 5:119–149, 2012). An interesting byproduct of the proposed method is an easily implementable projection-based primal–dual algorithm for the case where the primal and dual variables belong to simple convex sets. Moreover, we establish a worst-case $${\mathcal {O}}(1/t)$$ convergence rate result in an ergodic sense, where t represents the number of iterations.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0437-1
Issue No: Vol. 66, No. 3 (2016)
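A primal–dual iteration with a prediction (extrapolation) step can be sketched on a toy smooth saddle problem. This is a generic gradient-based scheme with hypothetical step sizes, chosen small enough to contract on this particular problem; it is not the specific algorithm of the paper:

```python
# toy saddle problem: min_x max_y L(x, y) with
#   L(x, y) = (x - 1)^2 / 2 + 2*x*y - (y + 1)^2 / 2
# setting both partial derivatives to zero gives the saddle point (0.6, 0.2)
tau, sigma = 0.1, 0.1   # hypothetical primal/dual step sizes
x, y = 0.0, 0.0
for _ in range(300):
    x_new = x - tau * (x - 1 + 2*y)      # primal (prediction) step
    x_bar = 2*x_new - x                  # extrapolated point (correction)
    y = y + sigma * (2*x_bar - (y + 1))  # dual ascent at the predicted point
    x = x_new
```

The extrapolated point `x_bar` is what distinguishes prediction–correction schemes from plain gradient descent–ascent; here the iteration contracts geometrically to (0.6, 0.2).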

• The Lyapunov rank of extended second order cones
• Authors: Roman Sznajder
Pages: 585 - 593
Abstract: In this paper, we investigate the structure of Lyapunov-like transformations on the extended second order cone, considered as a multivariate version of the topheavy cone with respect to an arbitrary norm in a Euclidean space. As a by-product, we compute the Lyapunov rank of the extended second order cone. We also show the irreducibility of such a cone.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0445-1
Issue No: Vol. 66, No. 3 (2016)

• Pseudo expected improvement criterion for parallel EGO algorithm
• Authors: Dawei Zhan; Jiachang Qian; Yuansheng Cheng
Abstract: The efficient global optimization (EGO) algorithm is famous for its high efficiency in solving computationally expensive optimization problems. However, the expected improvement (EI) criterion used for picking candidate points in the EGO process produces only one design point per optimization cycle, which wastes time when parallel computing is available. In this work, a new criterion called pseudo expected improvement (PEI) is proposed for developing parallel EGO algorithms. In each cycle, the first updating point is selected by the initial EI function. After that, the PEI function is built to approximate the real updated EI function by multiplying the initial EI function by an influence function of the updating point. The influence function is designed to simulate the impact that the updating point will have on the EI function, and depends only on the position of the updating point (not on its function value). Therefore, the next updating point can be identified by maximizing the PEI function without evaluating the first updating point. As the sequential process goes on, a desired number of updating points can be selected by the PEI criterion within one optimization cycle. The efficiency of the proposed PEI criterion is validated on six benchmarks with dimensions from 2 to 6. The results show that the proposed PEI algorithm performs significantly better than the standard EGO algorithm, and gains significant improvements on five of the six test problems compared with a state-of-the-art parallel EGO algorithm. Furthermore, additional experiments show that failing to find the global maximum of the PEI function significantly affects the convergence of the proposed algorithm. It is therefore recommended to use as many evaluations as one can afford to find the global maximum of the PEI function.
PubDate: 2016-11-29
DOI: 10.1007/s10898-016-0484-7
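The PEI construction (EI damped by an influence function centered at the chosen update point) can be sketched with a toy surrogate. The mean function, constant standard deviation, best observed value, and influence width below are illustrative assumptions, not the paper's kriging model:

```python
import math

def norm_pdf(z): return math.exp(-0.5*z*z) / math.sqrt(2*math.pi)
def norm_cdf(z): return 0.5*(1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, s, fmin):
    # standard EI for a Gaussian surrogate prediction (mean mu, std s)
    z = (fmin - mu) / s
    return (fmin - mu)*norm_cdf(z) + s*norm_pdf(z)

# toy surrogate: mean mu(x) = x^2, constant std 0.5, best observed value 0.2
grid = [-2 + 4*i/400 for i in range(401)]
ei = {x: expected_improvement(x*x, 0.5, 0.2) for x in grid}
x1 = max(grid, key=ei.get)          # first update point: maximize EI

# influence function: zero at x1, approaching 1 far away (illustrative width)
influence = lambda x: 1.0 - math.exp(-(x - x1)**2 / 0.1)
pei = {x: ei[x] * influence(x) for x in grid}
x2 = max(grid, key=pei.get)         # next update point, no evaluation of x1 needed
```

Because the influence factor vanishes at x1, the PEI maximizer is automatically pushed away from the already-selected point, which is what lets one cycle yield several update points.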

• A hybrid discretization algorithm with guaranteed feasibility for the
global solution of semi-infinite programs
• Authors: Hatim Djelassi; Alexander Mitsos
Abstract: A discretization-based algorithm for the global solution of semi-infinite programs (SIPs) is proposed, which is guaranteed to converge finitely to a feasible, $$\varepsilon$$-optimal solution under mild assumptions. The algorithm is based on the hybridization of two existing algorithms. The first algorithm (Mitsos in Optimization 60(10–11):1291–1308, 2011) is based on a restriction of the right-hand side of the constraints of a discretized SIP. The second algorithm (Tsoukalas and Rustem in Optim Lett 5(4):705–716, 2011) employs a discretized oracle problem and a binary search in the objective space. Hybridization of the approaches yields an algorithm which leverages the strong convergence guarantees and the relatively tight upper bounding problem of the first approach, while employing an oracle problem adapted from the second approach to generate cheap lower bounds and adaptive updates to the restriction of the first approach. These adaptive updates help avoid a dense population of the discretization. The hybrid algorithm is shown to be superior to its predecessors both theoretically and computationally. A proof of finite convergence is provided under weaker assumptions than those in the references. Numerical results from established SIP test cases are presented.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0476-7

• A Lagrangian search method for the P-median problem
• Authors: Joshua Q. Hale; Enlu Zhou; Jiming Peng
Abstract: In this paper, we propose a novel algorithm for solving the classical P-median problem. The essential aim is to identify the optimal extended Lagrangian multipliers corresponding to the optimal solution of the underlying problem. For this, we first explore the structure of the data matrix in the P-median problem to recast it as an equivalent global optimization problem over the space of the extended Lagrangian multipliers. Then we present a stochastic search algorithm to find the extended Lagrangian multipliers corresponding to the optimal solution of the original P-median problem. Numerical experiments illustrate that the proposed algorithm can effectively find a global optimal or very good suboptimal solution to the underlying P-median problem, especially for the computationally challenging subclass of P-median problems with a large gap between the optimal solution of the original problem and that of its Lagrangian relaxation.
PubDate: 2016-11-01
DOI: 10.1007/s10898-016-0481-x

• Journal of Global Optimization Best Paper Award for 2015
• Authors: Sergiy Butenko
PubDate: 2016-10-21
DOI: 10.1007/s10898-016-0479-4

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
