Authors: Vahid Morovati; Hadi Basirzadeh; Latif Pourkarimi Pages: 261 - 294 Abstract: This work develops multiobjective versions of some well-known single-objective quasi-Newton methods, including BFGS, self-scaling BFGS (SS-BFGS), and Huang BFGS (H-BFGS), and presents a comprehensive comparative study of them. The Armijo line search is used for the implementation of these methods. The numerical results show that the Armijo rule does not work the same way in the multiobjective case as in the single-objective case: it imposes a large computational effort and significantly slows convergence. Hence, we consider two variants of each multiobjective quasi-Newton method: with the Armijo line search and without any line search. Moreover, the convergence of these methods without any line search is shown under some mild conditions. Also, by introducing a multiobjective subproblem for finding the quasi-Newton multiobjective search direction, a simple representation of the Karush–Kuhn–Tucker conditions is derived. The H-BFGS quasi-Newton multiobjective optimization method approximates the second-order curvature of the problem functions with higher accuracy than the BFGS and SS-BFGS methods, and thus offers some benefits over them, as the numerical results show. All the methods proposed in this paper are evaluated and compared on well-known test problems using standard performance assessment criteria, with respect to CPU time, the number of iterations, and the number of function evaluations. PubDate: 2018-09-01 DOI: 10.1007/s10288-017-0363-1 Issue No: Vol. 16, No. 3 (2018)
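As a point of reference for the line-search discussion in this abstract, the classical single-objective Armijo backtracking rule can be sketched as follows. This is the standard textbook rule, not the authors' multiobjective implementation; the function and parameter names are illustrative.

```python
def armijo_step(f, grad_f, x, d, beta=0.5, sigma=1e-4):
    """Backtracking Armijo rule: shrink the step t until
    f(x + t*d) <= f(x) + sigma * t * <grad_f(x), d>."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad_f(x), d))
    t = 1.0
    while f([xi + t * di for xi, di in zip(x, d)]) > fx + sigma * t * slope:
        t *= beta
    return t

# Quadratic f(x) = x^2 in one dimension, steepest-descent direction d = -f'(x).
f = lambda x: x[0] ** 2
g = lambda x: [2 * x[0]]
t = armijo_step(f, g, [1.0], [-2.0])
```

The repeated halving of `t` is the computational cost the abstract refers to: in the multiobjective setting the sufficient-decrease test must hold for every objective simultaneously, so many more trial steps may be rejected.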

Authors: Sven Mallach Pages: 295 - 309 Abstract: We introduce and prove new necessary and sufficient conditions for carrying out the compact linearization approach for a general class of binary quadratic problems subject to assignment constraints proposed by Liberti (4OR 5(3):231–245, 2007, https://doi.org/10.1007/s10288-006-0015-3). The new conditions resolve inconsistencies that can occur when the original method is used. We also present a mixed-integer linear program to compute a linearization of minimum size. When all the assignment constraints have non-overlapping variable support, this program is shown to have a totally unimodular constraint matrix. Finally, we give a polynomial-time combinatorial algorithm that is exact in this case and can be used as a heuristic otherwise. PubDate: 2018-09-01 DOI: 10.1007/s10288-017-0364-0 Issue No: Vol. 16, No. 3 (2018)
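For readers unfamiliar with linearizing binary quadratic terms, the standard (non-compact) device replaces each product by an auxiliary variable constrained linearly; the compact approach of the paper exploits assignment constraints to use far fewer such constraints. The sketch below only verifies the classical constraints, as an assumption-free illustration of what any linearization must enforce.

```python
from itertools import product

def linearized_ok(x1, x2, y):
    """Classical linearization of y = x1 * x2 for binary x1, x2:
    y <= x1, y <= x2, y >= x1 + x2 - 1, 0 <= y <= 1."""
    return y <= x1 and y <= x2 and y >= x1 + x2 - 1 and 0 <= y <= 1

# For every binary assignment, the only feasible binary y is the product.
for x1, x2 in product((0, 1), repeat=2):
    feasible = [y for y in (0, 1) if linearized_ok(x1, x2, y)]
    assert feasible == [x1 * x2]
```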

Authors: Do Van Luu; Tran Thi Mai Pages: 311 - 337 Abstract: Fritz John and Karush–Kuhn–Tucker necessary conditions, expressed in terms of convexificators, are established for local LU-optimal solutions of constrained interval-valued optimization problems involving inequality, equality, and set constraints in Banach spaces. Under suitable assumptions on the generalized convexity of the objective and constraint functions, sufficient conditions for LU-optimal solutions are given. Dual problems of Mond–Weir and Wolfe types are studied, together with weak and strong duality theorems for them. PubDate: 2018-09-01 DOI: 10.1007/s10288-017-0369-8 Issue No: Vol. 16, No. 3 (2018)

Authors: Simon Bull; Jesper Larsen; Richard M. Lusby; Natalia J. Rezanova Abstract: The line planning problem that arises in the planning of a passenger railway involves selecting a number of lines from a potential pool to provide sufficient passenger capacity and meet operational requirements, while optimising some measure of line quality. In collaboration with Danish State Railways (DSB), we model and solve the problem of minimising the average passenger system time, including frequency-dependent estimates for switching between lines. We present a multi-commodity flow formulation for the problem of freely routing passengers, coupled to discrete line-frequency decisions that select lines from a predefined pool. The performance of the developed methodology is analysed on instances taken from the suburban commuter network DSB S-tog in Copenhagen, Denmark. We show that the proposed approach yields line plans that are superior, from both an operator and a passenger perspective, to line plans that have been implemented in practice. PubDate: 2018-10-06 DOI: 10.1007/s10288-018-0391-5

Authors: Mariusz Górajski; Dominika Machowska Abstract: This paper examines the long-term impact of loyalty programs on a company’s profit and on its reputation among customers with different durations of product use. We analyze how the launch of a loyalty program may change the profitability of optimal advertising activities. The basis of this study is a modified goodwill model in which the market is segmented according to usage experience. The main novelty is the role of loyalty programs and consumer recommendations in the creation of product goodwill, and their influence on optimal advertising. The dynamics of goodwill are described by a partial differential equation. The firm maximizes the sum of discounted profits by choosing a different advertising campaign for each market segment. For a high-quality product, we observe a trade-off between the loyalty program and the optimal advertising strategies. For a low-quality product, the loyalty program causes more profitable companies to invest heavily in additional advertising efforts. PubDate: 2018-10-05 DOI: 10.1007/s10288-018-0386-2

Authors: Amir Ahmadi-Javid; Nasrin Ramshe Abstract: Automated Guided Vehicles (AGVs) are widely used in material handling systems. In practice, to achieve better space utilization, safety, cost reduction, and flexibility, it may be preferred that only a limited number of manufacturing cells have direct access to the AGV travel paths, while the remaining cells have indirect or no access to them. This paper investigates the problem of determining a single loop in a block layout with two criteria: loop length and loop-adjacency desirability. Unlike the traditional single shortest loop design problem, where all cells must be located next to the loop, the proposed problem makes the more realistic assumption that each cell in the block layout has a different preference with regard to being adjacent to the loop: some cells must be located adjacent to the loop, some must not be, and others may be located next to the loop with different positive or negative priorities. The problem is formulated as a bi-objective integer linear programming model with two exponential-size constraint sets. A cutting-plane algorithm is proposed to solve the model under the main methods commonly used to deal with bi-objective models. The numerical results show the high efficiency of the proposed algorithm on large-scale instances. PubDate: 2018-10-04 DOI: 10.1007/s10288-018-0383-5

Authors: Zohre Aminifard; Saman Babaie-Kafaki Abstract: Based on a singular value analysis of the Dai–Liao conjugate gradient method, it is shown that when the gradient approximately lies in the direction of maximum magnification by the search direction matrix, the method may suffer from computational errors and slow convergence. Hence, we obtain a formula for the Dai–Liao parameter that makes the direction of maximum magnification by the search direction matrix orthogonal to the gradient. We briefly discuss the global convergence of the corresponding Dai–Liao method with and without a convexity assumption on the objective function. Numerical experiments on a set of test problems from the CUTEr collection show the practical effectiveness of the suggested adaptive choice of the Dai–Liao parameter in the sense of the Dolan–Moré performance profile. PubDate: 2018-09-26 DOI: 10.1007/s10288-018-0387-1
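For context, the generic Dai–Liao search direction the abstract builds on is d⁺ = −g⁺ + β d with β = g⁺·(y − t s) / (d·y), where y is the gradient difference, s the step, and t the Dai–Liao parameter. The sketch below shows this generic formula only; the paper's specific adaptive choice of t is not reproduced.

```python
def dai_liao_direction(g_new, g_old, s, d, t=1.0):
    """Generic Dai-Liao conjugate gradient direction:
    d_plus = -g_new + beta * d,
    beta = g_new.(y - t*s) / (d.y), with y = g_new - g_old.
    t is the Dai-Liao parameter (chosen adaptively in the paper)."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    y = [gn - go for gn, go in zip(g_new, g_old)]
    beta = (dot(g_new, y) - t * dot(g_new, s)) / dot(d, y)
    return [-gn + beta * di for gn, di in zip(g_new, d)]
```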

Authors: Salim Haddadi Abstract: We propose a two-phase heuristic for the generalized assignment problem (GAP). The first phase—a generic variable-fixing method—heuristically eliminates up to 98% of the variables without sacrificing solution quality. The second phase takes as input the small reduced GAP obtained in the first phase and applies a very large-scale neighborhood search. The definition of the successive exponential-size neighborhoods is guided by the subgradient method applied to the Lagrangian relaxation of the knapsack constraints, via the reduced costs. Searching the proposed neighborhood is NP-hard and amounts to solving a monotone binary program (BP) with m constraints and p variables, where m and p are respectively the numbers of agents and tasks of the reduced GAP (monotone BPs are BPs with two nonzero coefficients of opposite sign per column). To the best of our knowledge, this is the first time these ideas have been presented. Extensive testing on large-scale GAP instances is reported, and improved best-known values for eight instances are obtained. Comparison with well-established methods shows that this new approach is competitive and constitutes a substantial addition to the arsenal of tools for solving the GAP. PubDate: 2018-09-24 DOI: 10.1007/s10288-018-0389-z

Authors: Ren-Xia Chen; Shi-Sheng Li Abstract: We address cumulative deterioration scheduling, in which two agents compete to perform their respective jobs on a single machine. By cumulative deterioration we mean that the actual processing time of any job of the two agents is a linearly increasing function of the total normal processing time of the jobs already processed. Each agent wishes to optimize some scheduling criterion that depends only on the completion times of its own jobs. We study several scheduling problems arising from different combinations of regular scheduling criteria, including the maximum cost (embracing lateness and makespan as special cases), the total completion time, and the (weighted) number of tardy jobs. The aim is to find an optimal schedule that minimizes the objective value of one agent while keeping the objective value of the other agent within a fixed upper bound. For each problem under study, we design either a polynomial-time or a pseudo-polynomial-time algorithm. PubDate: 2018-09-19 DOI: 10.1007/s10288-018-0388-0
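To make the deterioration effect concrete, here is a sketch of single-machine completion times under one common linear form of cumulative deterioration, p_actual = p · (1 + b·S), where S is the total normal processing time already processed. The exact functional form and parameter b are assumptions for illustration; the paper's model may differ in detail.

```python
def completion_times(normal_times, b=0.1):
    """Completion times on one machine when the actual processing
    time of the next job grows linearly with the total normal
    processing time S already processed: p_actual = p * (1 + b * S).
    (One common linear cumulative-deterioration model, used here
    purely as an illustration.)"""
    t, s, out = 0.0, 0.0, []
    for p in normal_times:
        t += p * (1 + b * s)  # job deteriorates with accumulated work s
        s += p                # accumulate normal processing time
        out.append(t)
    return out
```

Note that with b > 0 the order of the jobs changes the completion times, which is why sequencing decisions interact nontrivially with each agent's criterion.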

Authors: Yukun Cheng; Xiaotie Deng; Dominik Scheder Abstract: Market makers choose and design market rules to serve certain objectives, such as maximizing revenue from sales in the case of a single seller and multiple buyers. Given such rules, market participants play against each other to maximize the utility they derive from the goods acquired, possibly by hiding or misrepresenting the information needed to implement the market rules. Today’s Internet economy has changed the information collection process and may render some of the assumptions underlying the implementation of market rules obsolete. Here we offer a fresh review of work on this challenge on the Internet, where new economic systems operate. PubDate: 2018-08-17 DOI: 10.1007/s10288-018-0385-3

Authors: John Martinovic; Markus Hähnel; Guntram Scheithauer; Waltenegus Dargie; Andreas Fischer Abstract: Motivated by an application in the field of server consolidation, we consider the one-dimensional cutting stock problem with nondeterministic item lengths. After a short introduction to the general topic, we investigate the case of normally distributed item lengths in more detail. Within this framework, we present two lower bounds as well as two heuristics for obtaining upper bounds, where the latter are based either on a related (ordinary) cutting stock problem or on an adaptation of the first fit decreasing heuristic to the given stochastic context. For these approximation techniques, dominance relations are discussed and theoretical performance results are stated. As a main contribution, we develop a characterization of feasible patterns by means of one linear and one quadratic inequality. Based on this, we derive two exact modeling approaches for the nondeterministic cutting stock problem and provide results of numerical simulations. PubDate: 2018-07-18 DOI: 10.1007/s10288-018-0384-4
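The deterministic first fit decreasing heuristic that the paper adapts to the stochastic setting can be sketched as follows. This is the classical bin-packing version with fixed item lengths, not the authors' adaptation to normally distributed lengths.

```python
def first_fit_decreasing(items, capacity):
    """Classical FFD: sort items by non-increasing length, then place
    each item into the first open bin with enough residual capacity,
    opening a new bin when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:  # no open bin fits this item
            bins.append([item])
    return bins
```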

Authors: Johnny C. Ho; Ivar Massabò; Giuseppe Paletta; Alex J. Ruiz-Torres Abstract: This note proposes and analyzes a tight posterior worst-case bound for the longest processing time (LPT) heuristic for scheduling independent jobs on identical parallel machines with the objective of minimizing the makespan. It makes some natural remarks on the well-known posterior worst-case bounds and shows that the proposed bound can complement them to achieve a better posterior worst-case bound for the LPT heuristic. Moreover, it gives some insight into the asymptotic optimality of LPT. PubDate: 2018-06-16 DOI: 10.1007/s10288-018-0381-7
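The LPT heuristic analyzed in this note is the classical list-scheduling rule: process jobs in non-increasing order, always assigning the next job to the currently least-loaded machine. A minimal sketch:

```python
import heapq

def lpt_makespan(jobs, m):
    """LPT heuristic on m identical parallel machines: assign jobs in
    non-increasing processing-time order, each to the least-loaded
    machine; return the resulting makespan."""
    loads = [0] * m
    heapq.heapify(loads)  # min-heap of machine loads
    for p in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)
```

The instance `[3, 3, 2, 2, 2]` on two machines is the classical example where LPT gives makespan 7 while the optimum is 6, matching the well-known 4/3 − 1/(3m) worst-case ratio.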

Authors: Ramón Flores; Elisenda Molina; Juan Tejada Abstract: Following the original interpretation of the Shapley value as an a priori evaluation of the prospects of a player in a multi-person interaction situation, we apply the Shapley generalized value (introduced formally in Marichal et al. in Discrete Appl Math 155:26–43, 2007) as a tool for assessing a group of players that act as a unit in a coalitional game. We propose an alternative axiomatic characterization that does not use a direct formulation of the classical efficiency property. Relying on this valuation, we also analyze the profitability of a group. We motivate this use of the Shapley generalized value by means of two relevant applications in which it serves as the objective function of a decision maker trying to identify an optimal group of agents in a framework in which agents interact and the attained benefit can be modeled by a transferable utility game. PubDate: 2018-06-14 DOI: 10.1007/s10288-018-0380-8
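For background, the classical Shapley value that the generalized value extends is the average marginal contribution of a player over all arrival orders. The brute-force sketch below computes it for a small transferable utility game; it is the classical single-player value, not the generalized group value of Marichal et al.

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Classical Shapley value of a TU game v (a function on frozensets):
    average marginal contribution over all n! arrival orders.
    Exponential-time; intended only for small examples."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    n_fact = factorial(len(players))
    return {p: phi[p] / n_fact for p in phi}
```

For example, in the game where a coalition earns 1 iff it contains both players 1 and 2, symmetry gives players 1 and 2 a value of 1/2 each and player 3 a value of 0.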

Authors: Luciano Porretta; Daniele Catanzaro; Bjarni V. Halldórsson; Bernard Fortz Abstract: A point-interval \((I_v, p_v)\) is a pair consisting of an interval \(I_v\) of \({\mathbb {R}}\) and a point \(p_v \in I_v\). A graph \(G=(V,E)\) is a Max-Point-Tolerance (MPT) graph if each vertex \(v\in V\) can be mapped to a point-interval in such a way that \((u, v)\) is an edge of G iff \(I_u \cap I_v \supseteq \{p_u, p_v\}\). MPT graphs constitute a superclass of interval graphs and naturally arise in genetic analysis as a way to represent specific relationships among DNA fragments extracted from a population of individuals. One of the most important applications of MPT graphs concerns the search for an association between major human diseases and chromosome regions from patients that exhibit loss of heterozygosity events. This task can be formulated as a minimum cost clique cover problem in an MPT graph and gives rise to an \(\mathcal{NP}\)-hard combinatorial optimization problem known in the literature as the Parsimonious Loss of Heterozygosity Problem (PLOHP). In this article, we investigate ways to speed up the best known exact solution algorithm for the PLOHP, as well as techniques to enlarge the size of the instances that can be solved to optimality. In particular, we present a Branch&Price algorithm for the PLOHP and develop a number of preprocessing techniques and decomposition strategies that dramatically reduce the size of its instances. Computational experiments show that the proposed approach is 10–30\(\times\) faster than previous approaches described in the literature, and suggest new directions for the development of future exact solution approaches that may prove of fundamental assistance in practice. PubDate: 2018-06-07 DOI: 10.1007/s10288-018-0377-3
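The MPT edge condition from the abstract's definition translates directly into code: \((u, v)\) is an edge iff the intersection of the two intervals contains both distinguished points. A minimal sketch, with intervals represented as (lo, hi) pairs:

```python
def mpt_edge(iu, pu, iv, pv):
    """Edge test for a Max-Point-Tolerance representation:
    (u, v) is an edge iff I_u ∩ I_v contains both p_u and p_v.
    Intervals are closed and given as (lo, hi) tuples."""
    lo = max(iu[0], iv[0])  # left end of the intersection
    hi = min(iu[1], iv[1])  # right end of the intersection
    return lo <= pu <= hi and lo <= pv <= hi
```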