Abstract: Data envelopment analysis (DEA) is now a well-established non-parametric methodology for performance evaluation and benchmarking, and it has seen widespread use in many application areas since the publication of the seminal paper by Charnes, Cooper and Rhodes in 1978. However, to the best of our knowledge, no published work has formally addressed out-of-sample evaluation in DEA. In this paper, we fill this gap by proposing a framework for the out-of-sample evaluation of decision making units. We tested the performance of the proposed framework on risk assessment and bankruptcy prediction of companies listed on the London Stock Exchange. Numerical results demonstrate that the proposed out-of-sample evaluation framework delivers outstanding performance; it thus opens a new avenue for research and applications in risk modelling and analysis using DEA as a non-parametric frontier-based classifier, and makes DEA a real contender for industry applications in banking and investment. PubDate: 2017-07-01

Abstract: Productive research in the emerging field of disaster management plays an important role in mitigating the impact of disasters on modern society. The planning problem of saving affected areas and normalizing the situation after a disaster is very challenging. To make optimal use of the available road network, the contraflow technique increases the outward road capacity from disaster-struck areas by reversing arcs. A number of efficient algorithms and heuristics handle contraflow reconfiguration on particular networks, but the problem with multiple sources and multiple sinks is NP-hard. This paper concentrates on analytical solutions of the continuous-time contraflow problem. We consider the value-approximation earliest arrival transshipment contraflow problem with arbitrary and with zero transit times on the arcs; these variants are solved with pseudo-polynomial and polynomial time complexity, respectively. We extend the concept of dynamic contraflow to a more general setting in which the given network is replaced by an abstract contraflow with a system of linearly ordered sets, called paths, satisfying the switching property. We introduce the continuous maximum abstract contraflow problem and present polynomial-time algorithms that solve its static and dynamic versions by reversing the direction of paths. The contraflow approach not only increases the flow value but also eliminates crossings at intersections; the flow value can be at most doubled by contraflow reconfiguration. PubDate: 2017-07-01
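As a minimal illustration of the arc-reversal idea (a sketch under assumed conventions, not the paper's algorithm), merging each antiparallel capacity pair and running an ordinary max-flow computation shows how reversal can double the outward flow value:

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp max flow; cap[u][v] is the capacity of arc (u, v). Mutates cap."""
    flow = 0
    while True:
        parent = [-1] * n          # BFS tree for a shortest augmenting path
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:        # no augmenting path left
            return flow
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:              # push flow, update residual capacities
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def contraflow_caps(cap):
    """Lane reversal lets each antiparallel pair point one way; an optimal flow in
    the merged network uses each pair in a single direction, giving the reversals."""
    n = len(cap)
    return [[cap[u][v] + cap[v][u] for v in range(n)] for u in range(n)]

# Toy network: source s=0, junction a=1, sink t=2, antiparallel pair between s and a.
base = [[0, 1, 0],
        [1, 0, 2],
        [0, 0, 0]]
plain = max_flow(3, [row[:] for row in base], 0, 2)
reversed_flow = max_flow(3, contraflow_caps(base), 0, 2)
```

On this toy network, reversing the inward lane doubles the evacuation flow from 1 to 2, matching the abstract's observation that contraflow can at most double the flow value.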

Abstract: This paper analyzes a family of rules for bankruptcy problems that generalizes the so-called reverse Talmud rule and encompasses both the constrained equal-awards rule and the constrained equal-losses rule. The family, introduced by van den Brink et al. (Eur J Oper Res 228:413–417, 2013), is a counterpart to the so-called TAL-family of rules, introduced and studied by Moreno-Ternero and Villar (Soc Choice Welf 27:231–249, 2006a), and it is included within the so-called CIC-family of rules introduced by Thomson (Soc Choice Welf 31:667–692, 2008). We provide a systematic study of the structural properties of the rules within the family, as well as its connections with the existing related literature. PubDate: 2017-07-01
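The two boundary members of the family, the constrained equal-awards (CEA) and constrained equal-losses (CEL) rules, have compact definitions. The sketch below (an illustration under the usual textbook definitions, using a simple bisection on the common parameter lambda; function names are ours) computes both for a claims vector and an estate:

```python
def cea(claims, estate, tol=1e-9):
    """Constrained equal awards: agent i receives min(claim_i, lam),
    with lam chosen so the awards exactly exhaust the estate."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, lo) for c in claims]

def cel(claims, estate, tol=1e-9):
    """Constrained equal losses: agent i receives max(claim_i - lam, 0),
    with lam chosen so the awards exactly exhaust the estate."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(max(c - lam, 0.0) for c in claims) > estate:
            lo = lam
        else:
            hi = lam
    return [max(c - lo, 0.0) for c in claims]
```

For claims (100, 200, 300) and an estate of 300, CEA awards (100, 100, 100) while CEL awards (0, 100, 200), illustrating the opposite distributive principles the family interpolates between.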

Abstract: In this paper we propose a simple yet efficient version of the two-phase Pareto local search (2PPLS) for solving the biobjective traveling salesman problem (bTSP). In the first phase, the powerful Lin–Kernighan heuristic is used to generate high-quality solutions that lie very close to the Pareto front. Pareto local search is then used to generate more potentially Pareto-efficient solutions along the front. Instead of the previously used method of Aneja and Nair, we use uniformly distributed weight vectors in the first phase. We show experimentally that, by properly balancing the computational effort between the two phases, we obtain results better than those of previous versions of 2PPLS for the bTSP and at least comparable to the state-of-the-art results of the more complex MOMAD method. Furthermore, we propose a simple extension of 2PPLS in which additional solutions are generated by the Lin–Kernighan heuristic during the run of PLS. In this way we obtain a method that is more robust with respect to the number of initial solutions generated in the first phase. PubDate: 2017-07-01
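The uniformly distributed weight vectors used in the first phase can be generated and applied with a few lines (a minimal sketch; the function names are illustrative, not from the paper):

```python
def uniform_weight_vectors(n):
    """n weight vectors (w1, w2) with w1 + w2 = 1, evenly spaced over [0, 1]."""
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]

def weighted_sum(tour_lengths, weights):
    """Scalarized cost of a biobjective tour for one weight vector."""
    return weights[0] * tour_lengths[0] + weights[1] * tour_lengths[1]
```

Each weight vector yields one single-objective scalarization that Lin–Kernighan can optimize, so n vectors produce n starting solutions spread along the Pareto front.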

Abstract: When making decisions with the Analytic Network Process, coherency testing is an important step in the decision making process. Once an incoherent priority vector is identified, eliciting new pairwise comparisons can be costly or, in some cases, next to impossible. Remarkably, there is useful information in the linking estimates that one may have already calculated and used in one of the approaches to measure the coherency of the Supermatrix. A dynamic clustering method is used to automatically identify a cluster of coherent linking estimates from which a new coherent priority vector can be calculated and used to replace the most incoherent priority vector. The decision maker can then accept or revise the proposed new coherent priority vector. This process is repeated until the entire Supermatrix is coherent. The method can save decision makers valuable time and effort by using the information and relationships that already exist in a weighted Supermatrix that is sufficiently coherent. The method is first motivated and demonstrated through a simple, straightforward example; a group of conceptual charts and a figure provide a visual motivation and explanation, and a high-level summary is given in a table before the method is presented in detail. Simulations demonstrate both the application and the robustness of the proposed method. Code is provided as supplementary material in the programming language R so that the method can be easily applied by the decision maker. PubDate: 2017-07-01

Abstract: A simulation optimization framework containing three fundamental stages (feasibility check, screening, and selection) is proposed for solving the zero-one optimization via simulation problem in the presence of a single stochastic constraint. We present three rapid screening algorithms that combine these three stages in different ways, applying various sampling mechanisms and therefore yielding different statistical guarantees. An empirical evaluation comparing the efficiency of the proposed algorithms with existing methods is provided. PubDate: 2017-07-01

Abstract: We study the multiproduct price optimization problem under the multilevel nested logit model, which includes the multinomial logit and the two-level nested logit models as special cases. When the price sensitivities are identical within each primary nest, that is, within each nest at level 1, we prove that the profit function is concave with respect to the market share variables. We proceed to show that the markup, defined as price minus cost, is constant across products within each primary nest, and that the adjusted markup, defined as price minus cost minus the reciprocal of the product of the scale parameter of the root nest and the price-sensitivity parameter of the primary nest, is constant across primary nests at optimality. This allows us to reduce the multidimensional pricing problem to an equivalent single-variable maximization problem involving a unimodal function. Based on these findings, we investigate the oligopolistic game and characterize the Nash equilibrium. We also develop a dimension reduction technique which can simplify price optimization problems with flexible price-sensitivity structures. PubDate: 2017-07-01
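In notation suggested by the abstract (the symbols here are illustrative: \(p_i\) and \(c_i\) the price and cost of product \(i\), \(\mu_0\) the scale parameter of the root nest, and \(\beta_l\) the price-sensitivity parameter of primary nest \(l\)), the two optimality conditions read:

```latex
% Constant markup within each primary nest l:
p_i - c_i = m_l \qquad \text{for every product } i \text{ in primary nest } l,
% Constant adjusted markup across primary nests at optimality:
m_l - \frac{1}{\mu_0 \beta_l} = \theta \qquad \text{for every primary nest } l.
```

Searching over the single scalar \(\theta\) is what reduces the multidimensional pricing problem to a one-variable maximization of a unimodal function.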

Abstract: We run a benefit segmentation of 2017 insurance consumers in order to analyze the structure and heterogeneity of the German term life insurance market. The consumers’ preference information has been obtained through a choice-based conjoint (CBC) experiment and a subsequent hierarchical Bayes (HB) estimation routine. Drawing on their part-worth utility profiles, we first construct a diverse cluster ensemble, comprising a total of 1624 hierarchical and k-means solutions based on different linkage criteria and sensibly drawn starting points. Then, final group memberships are determined by means of consensus clustering. Our empirical results indicate that the market divides into three segments characterized by substantially different consumer types with distinct demands and needs. While the first group is clearly driven by the premium, the opposite holds true for the brand-loyal group; the market is completed by a third segment with in-between preference structures. Hence, both brand insurers and companies with a lower reputation face consumer groups that almost perfectly fit their provider profiles. More specifically, by offering segment-oriented products, an efficient resource allocation is fostered and the basis for long-term business relationships is laid. This is becoming increasingly important because ongoing regulatory efforts, low interest rates, and market entrances by InsuranceTech start-ups and tech giants aiming to utilize the market’s enormous hidden potential are changing the competitive environment significantly. Consistently aligning important strategic decisions on product innovation, pricing, and distribution channels with the identified consumer segments enables incumbents to maintain a stable and sustainable market share and profitability. PubDate: 2017-07-01

Abstract: Decision making in the operation and planning of power systems is, in general, economically driven, especially in deregulated markets. To better understand the participants’ behavior in power markets, it is necessary to include concepts of microeconomics and operations research in the analysis of power systems. In particular, game-theoretic equilibrium models have played an important role in modeling participants’ behavior and their interactions. In recent years, bilevel games and their applications to power systems have received growing attention. Bilevel optimization models, Mathematical Programs with Equilibrium Constraints, and Equilibrium Problems with Equilibrium Constraints are examples of bilevel games. This paper provides an overview of the full range of formulations of non-cooperative bilevel games. Our aim is to present, in a unified manner, the theoretical foundations, classification, and main techniques for solving bilevel games, together with their applications to power systems. PubDate: 2017-07-01

Abstract: Incident managers assigning wildfire response vehicles to provide protection to community assets may experience disruptions to their plans arising from factors such as changes in weather, vehicle breakdowns or road closures. We develop an approach to rerouting wildfire response vehicles once a disruption has occurred. The aim is to maximise the total value of assets protected while minimising changes to the original vehicle assignments. A number of functions to measure deviations from the original plans are proposed. The approach is demonstrated using a realistic fire scenario impacting South Hobart, Tasmania, Australia. Computational testing shows that realistic-sized problems can be solved within a reasonable time using a commercial solver. PubDate: 2017-07-01

Abstract: We derive conditions for stochastic, hazard rate, likelihood ratio, reversed hazard rate, increasing convex and mean residual life orderings of Pareto distributions with different shape and scale parameters. A real data application of the conditions is presented. PubDate: 2017-07-01
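As a numerical illustration (a sketch, not the paper's derivation), a standard sufficient condition for the usual stochastic order between two Pareto (Type I) variables, namely a larger shape together with a smaller scale, can be verified by comparing survival functions on a grid:

```python
def pareto_survival(x, shape, scale):
    """Survival function of a Pareto (Type I) distribution with support x >= scale."""
    return 1.0 if x < scale else (scale / x) ** shape

# Candidate condition for X1 <=_st X2: shape1 >= shape2 and scale1 <= scale2.
shape1, scale1 = 3.0, 1.0
shape2, scale2 = 2.0, 1.5

grid = [0.5 + 0.05 * i for i in range(200)]
dominated = all(
    pareto_survival(x, shape1, scale1) <= pareto_survival(x, shape2, scale2)
    for x in grid
)
```

Here `dominated` is True: the survival function of the first variable lies everywhere below that of the second, which is exactly the usual stochastic ordering.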

Abstract: We investigate a two-warehouse inventory model for non-instantaneous deteriorating items with partial backlogging and stock-dependent demand under inflationary conditions. Shortages are allowed. The backlogging rate is variable and depends on the waiting time for the next replenishment. This paper seeks to determine an optimal replenishment policy that minimizes the present value of the total cost per unit time. The necessary and sufficient conditions for the existence and uniqueness of the optimal solution are found. The corresponding problems are formulated and solved with particle swarm optimization. Numerical experimentation and post-optimality analysis are conducted. PubDate: 2017-07-01

Abstract: A 1.5-dimensional (1.5D) terrain is characterized by a piecewise linear curve. Locating the minimum number of guards on the terrain (T) to cover/guard the whole terrain is known as the 1.5D terrain guarding problem. The problem has been shown to be NP-hard, and approximation algorithms and a polynomial-time approximation scheme have been presented for it. In this problem, the set of possible guard locations and the set of points to be guarded are uncountable. To solve the problem to optimality, a finite dominating set (FDS) of size \(\hbox {O}(n^{2})\) and a witness set of size \(\hbox {O}(n^{3})\) have been presented, where n is the number of vertices on T. We show that there exists an even smaller FDS of cardinality \(\hbox {O}(k)\) and a witness set of cardinality \(\hbox {O}(n)\), where k is the number of convex points. Convex points are vertices with the additional property that between any two convex points the piecewise linear curve representing the terrain is convex. Since \(k \le n\) always holds for \(n \ge 2\), and since it is possible to construct terrains with \(n=2^{k}\), the existence of an FDS of cardinality \(\hbox {O}(k)\) and a witness set of cardinality \(\hbox {O}(n)\) reduces the numbers of decision variables and constraints, respectively, in the zero-one integer programming formulation of the problem. PubDate: 2017-07-01
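Schematically, the zero-one integer program behind the discretization is an ordinary set-cover model over the FDS \(G\) and witness set \(W\) (the notation below is illustrative, with \(x_g = 1\) iff a guard is placed at candidate location \(g\)):

```latex
\min \sum_{g \in G} x_g
\quad \text{s.t.} \quad
\sum_{g \in G \,:\, g \text{ sees } w} x_g \ge 1 \quad \forall w \in W,
\qquad x_g \in \{0,1\}.
```

The result thus shrinks the model from \(\hbox {O}(n^{2})\) variables and \(\hbox {O}(n^{3})\) constraints to \(\hbox {O}(k)\) variables and \(\hbox {O}(n)\) constraints.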

Abstract: This paper proposes a fast ant colony system-based solution method to solve realistic instances of the time-dependent orienteering problem with time windows within a few seconds of computation time. Orienteering problems occur in logistics settings where an optimal combination of locations needs to be selected and the routing between the selected locations needs to be optimized. In the time-dependent problem, the travel time between two locations depends on the departure time at the first location. The main contribution of this paper is the design of a fast and effective algorithm for this problem. Numerous experiments on realistic benchmark instances of varying size confirm the state-of-the-art performance and practical relevance of the algorithm. PubDate: 2017-07-01

Abstract: This paper highlights the role of behavioral factors in efficiency measurement in supply networks. To this end, behavioral issues are investigated among the interrelations between decision makers involved in corporate bond service networks. The corporate bond network is considered in three consecutive stages, where each stage represents the relations between two members of the network: issuer–underwriter, underwriter–bank, and bank–investor. Adopting a multi-method approach, we collected behavioral data by conducting semi-structured interviews and applying the critical incident technique. Financial and behavioral data, collected from each stage in 20 corporate bond networks, were analyzed using fuzzy network data envelopment analysis to obtain overall and stage-wise efficiency scores for each network. Sensitivity analyses of the findings revealed inefficiencies in the relations between underwriters and issuers, banks and underwriters, and banks and investors stemming from certain behavioral factors. The results show that incorporating behavioral factors provides a better means of efficiency measurement in supply networks. PubDate: 2017-07-01

Abstract: We consider various lexicographic allocation procedures for coalitional games with transferable utility where the payoffs are computed in an externally given order of the players. The common feature of the methods is that if the allocation is in the core, it is an extreme point of the core. We first investigate the general relationships between these allocations and obtain two hierarchies on the class of balanced games. Secondly, we focus on assignment games and sharpen some of these general relationships. Our main result shows that, similarly to the core and the coalitionally rational payoff set, the dual coalitionally rational payoff set of an assignment game is also determined by the individual and mixed-pair coalitions, and we present an efficient and elementary way to compute these basic dual coalitional values. As a byproduct we obtain the coincidence of the sets of lemarals (vectors of lexicographic maxima over the set of dual coalitionally rational payoff vectors), lemacols (vectors of lexicographic maxima over the core) and extreme core points. This provides a way to compute the AL-value (the average of all lemacols) with no need to obtain the whole coalitional function of the dual assignment game. PubDate: 2017-07-01

Abstract: This paper develops an EOQ inventory model that considers the demand rate as a function of stock and selling price. Shortages are permitted and two cases are studied: (i) complete backordering and (ii) partial backordering. The inventory model is for a deteriorating seasonal product. The product’s deterioration rate is controlled by investing in the preservation technology. The main purpose of the inventory model is to determine the optimum selling price, ordering frequency and preservation technology investment that maximizes the total profit. Additionally, the paper proves that the total profit is a concave function of selling price, ordering frequency and preservation technology investment. Therefore, a simple algorithm is proposed to obtain the optimal values for the decision variables. Several numerical examples are solved and studied along with a sensitivity analysis. PubDate: 2017-07-01

Abstract: A supplier of products and services aims to minimize the capacity investment cost and the operational cost incurred by unwanted byproducts, e.g., carbon dioxide emissions. In this paper, we consider a sustainable supply chain network design problem, where the capacity and the product flow along each link are design variables. We formulate it as a multi-criteria optimization problem, and a bio-inspired algorithm is developed to tackle it. We illustrate how to design a sustainable supply chain network in three steps. First, we develop a generalized model inspired by the foraging behaviour of the slime mould Physarum polycephalum to handle network optimization with multiple sinks. Second, we propose a strategy to update the link cost iteratively, thus making the Physarum model converge to a user equilibrium. Third, we perform an equivalent operation to transform a system optimum problem into a corresponding user equilibrium problem so that it is solvable in the Physarum model. The efficiency of the proposed algorithm is illustrated with numerical examples. PubDate: 2017-07-01

Abstract: Controlling the number of active assets (the cardinality of the portfolio) in a mean-variance portfolio problem is practically important but computationally demanding. Such a task is ordinarily a mixed integer quadratic programming (MIQP) problem. We propose a novel approach to reformulate the problem as a mixed integer linear programming (MILP) problem, for which computer codes are readily available. For numerical tests, we find cardinality-constrained minimum-variance portfolios of stocks in the S&P 500. Our MILP approach shows a significant gain in robustness and computational effort relative to MIQP, and it also competes favorably against cardinality-constrained portfolio optimization with the risk measures CVaR and MASD. For illustration, we depict portfolios in a portfolio map where cardinality provides a third criterion in addition to risk and return. Fast solution allows an interactive search for a desired portfolio. PubDate: 2017-07-01
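The standard device for cardinality control (sketched here in illustrative notation; this is the generic construction, not the paper's specific variance linearization) attaches a binary indicator \(y_i\) to each long-only weight \(w_i\) and caps the number of active positions at \(K\):

```latex
\sum_{i=1}^{n} y_i \le K,
\qquad 0 \le w_i \le y_i \quad (i = 1, \dots, n),
\qquad \sum_{i=1}^{n} w_i = 1,
\qquad y_i \in \{0,1\}.
```

With a linear risk measure such as mean absolute deviation these constraints yield a MILP directly; the point of the abstract is that even the minimum-variance objective can be recast so the whole model remains a MILP.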

Abstract: The equal-risk-contribution, inverse-volatility weighted, maximum-diversification and minimum-variance portfolio weights are all direct functions of the estimated covariance matrix. We perform a Monte Carlo study to assess the impact of covariance matrix misspecification on these risk-based portfolios at the daily, weekly and monthly forecasting horizons. Our results show that the equal-risk-contribution and inverse-volatility weighted portfolio weights are relatively robust to covariance misspecification. In contrast, the minimum-variance portfolio weights are highly sensitive to errors in both the estimated variances and correlations, while errors in the estimated correlations can have a large effect on the weights of the maximum-diversification portfolio. PubDate: 2017-07-01
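The contrast between the two extremes can be seen already in a two-asset example (a minimal sketch with assumed numbers, not the paper's Monte Carlo design): inverse-volatility weights ignore the correlation entirely, while the closed-form minimum-variance weights move substantially when the correlation estimate is perturbed.

```python
def inverse_vol_weights(var1, var2):
    """Inverse-volatility weights: a function of the estimated variances only."""
    iv1, iv2 = var1 ** -0.5, var2 ** -0.5
    return iv1 / (iv1 + iv2), iv2 / (iv1 + iv2)

def min_var_weights(var1, var2, cov):
    """Fully invested two-asset minimum-variance weights (closed form)."""
    w1 = (var2 - cov) / (var1 + var2 - 2.0 * cov)
    return w1, 1.0 - w1

# Volatilities of 20% and 30%; perturb the estimated correlation from 0.2 to 0.5.
var1, var2 = 0.04, 0.09
w_iv = inverse_vol_weights(var1, var2)                 # unaffected by correlation
w_mv_lo = min_var_weights(var1, var2, 0.2 * 0.2 * 0.3)  # correlation 0.2
w_mv_hi = min_var_weights(var1, var2, 0.5 * 0.2 * 0.3)  # correlation 0.5
```

Under this perturbation the first minimum-variance weight shifts by more than ten percentage points, while the inverse-volatility weights are identical in both cases, mirroring the sensitivity pattern the abstract reports.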