Authors:Min Yang; Qingxian An; Tao Ding; Pengzhen Yin; Liang Liang Abstract: Carbon emission allocation in China has been one of the most important research fields in recent years. Existing carbon allocation approaches are primarily based on the efficiency maximization principle or the efficiency invariance principle. These two principles, however, are extreme cases that are difficult to realize in practice. In this study, we propose an alternative approach based on gradual efficiency improvement and emission reduction planning principles. Under these principles, efficiency is planned to improve steadily year by year as carbon emissions are allocated, rather than reaching the maximized (fully efficient) target in a single step or maintaining the original efficiency without any improvement. Meanwhile, total carbon emissions are also planned to be reduced by a certain percentage, in line with China's pledge at the Copenhagen climate conference. Finally, the proposed method is applied to allocate carbon emission quotas among provinces in China, and practical conclusions are drawn to guide policy. PubDate: 2017-10-19 DOI: 10.1007/s10479-017-2682-1

Authors:Charbel Jose Chiappetta Jabbour; Rafael Caliani Janeiro; Ana Beatriz Lopes de Sousa Jabbour; Jose Alcides Gobbo Junior; Manoel Henrique Salgado; Daniel Jugend Abstract: Drawing on theoretical assumptions from equity theory applied to exchange situations between businesses, and on justice concepts applied to supply chains, this work has the original objective of discussing the level of implementation of social aspects of operations management by focusing on inter-organizational justice in Brazilian supply chains. The research presents a quantitative survey study in which managers of Brazilian companies answered a questionnaire on their perceptions of the level of adoption of practices/initiatives for justice in supply chains. The results indicate that: (a) most of the analyzed practices have the potential to be adopted more intensively, revealing actions that managers could take to promote social justice across the supply chain; (b) there are many positive correlations among the analyzed practices, suggesting potential relationships and practices that could be implemented jointly. Other results, implications, and research limitations are also presented, taking into account particularities of Brazilian national culture. PubDate: 2017-10-17 DOI: 10.1007/s10479-017-2660-7

Authors:Sameer Prasad; Jason Woldt; Jasmine Tata; Nezih Altay Abstract: In this paper, we apply project management concepts and frameworks to the context of disaster resilience and examine how groups can increase the disaster resilience of a community. Based on our literature review and case study methodology, we develop a model that draws upon the relevant literatures in project management, operations management, disaster management, and organizational behaviour; we then compare that model with 12 disaster-related cases supported by four Non-Governmental Organizations (NGOs) in India. Our model measures disaster resilience using both an encompassing measure we refer to as Total Cost to Community (TCC), which captures the interrelatedness of level of recovery (deliverables), speed of recovery (time), and loss minimization (cost) at a community group level, and learning (single domain or alternate domain). The model indicates that the external elements of the disaster management process (scale, goal complexity, immediacy, and stakeholder variance) influence the internal characteristics of disaster project management (information demands and uncertainty), which in turn influence disaster resilience. The level of community group processes (group strength, group continuity, and group capacity) also influences learning, both directly and indirectly, through the internal characteristics of project management. In addition, the relationship between the external elements of disaster recovery and the internal characteristics of disaster project management is moderated by the resources available. This model provides interesting new avenues for future theory and research, such as creating operations research models to identify the trigger points for groups becoming effective and exploring the quantification of TCC, a new construct developed in this research. Ultimately, this model can provide a roadmap for NGOs and government entities interested in building disaster resilience among micro-enterprises in vulnerable communities. PubDate: 2017-10-16 DOI: 10.1007/s10479-017-2679-9

Authors:Sahitya Elluru; Hardik Gupta; Harpreet Kaur; Surya Prakash Singh Abstract: Natural or man-made disasters lead to disruptions across the entire supply chain, severely affecting the distribution system. Following a disaster, the supply chain usually takes a long time to recover, which eventually leads to losses in reputation and revenue. Therefore, business organizations constantly focus on making the distribution networks of their supply chains resilient to man-made and natural disasters in order to satisfy customer demand on time. Supply chain distribution network design broadly comprises two major decisions: facility location and vehicle routing. This paper addresses these distribution decisions jointly as a location-routing problem and proposes a location-routing model with time windows using proactive and reactive approaches. In the proactive approach, risk factors are considered as a preventive measure against disaster-caused disruptions. The model is extended to a reactive approach by considering disruptions such as facility breakdowns, route blockages, and delivery delays with cost penalties. A case illustration is discussed for the proactive approach; for disaster-caused disruptions, the reactive approach is illustrated using three disruption scenarios. Using both the proactive and reactive approaches in designing the distribution system can make the overall supply chain resilient to disasters. PubDate: 2017-10-16 DOI: 10.1007/s10479-017-2681-2

Authors:Gülcin Ermis; Can Akkan Abstract: We develop search algorithms based on local search, and a matheuristic that solves a set of mixed integer programming models, to improve the robustness of a set of solutions for an academic timetabling problem. The matheuristic uses the solution pool feature of CPLEX while solving two related MIP models iteratively. The solutions form a network (Akkan et al. in Eur J Oper Res 249(2):560–576, 2016. doi:10.1016/j.ejor.2015.08.047), in which edges are defined by the Hamming distance between pairs of solutions. This network is used to calculate a robustness measure, where disruption of a solution is assumed to occur when the time slot to which a team had been assigned is no longer feasible for that team, and the heuristic response to this disruption is choosing one of the neighbors of the disrupted solution. Considering the objective function of the timetabling problem and this robustness measure results in a bi-criteria optimization problem where the goal is to improve the Pareto front by enlarging the network. We compare the performance of the heuristics on a set of random instances and seven semesters' actual data. These results show that some of the proposed local search algorithms and the matheuristic find high quality approximate Pareto fronts. Besides being one of the few timetabling algorithms in the literature addressing robustness, a key contribution of this research is the demonstration of the effectiveness of the matheuristic approach. By using this matheuristic approach, researchers can develop a robustness improvement algorithm for any discrete optimization model that can be solved optimally or near-optimally in an acceptable time. PubDate: 2017-10-16 DOI: 10.1007/s10479-017-2646-5
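The Hamming-distance network underlying the robustness measure can be sketched in a few lines; the toy timetables and the distance threshold below are illustrative assumptions, not the paper's data or parameters:

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions at which two timetables differ."""
    return sum(x != y for x, y in zip(a, b))

def solution_network(solutions, max_dist):
    """Edges connect solution pairs whose Hamming distance is <= max_dist."""
    return [(i, j) for i, j in combinations(range(len(solutions)), 2)
            if hamming(solutions[i], solutions[j]) <= max_dist]

# Three toy timetables: each entry is the time slot assigned to a team.
sols = [(1, 2, 3, 4), (1, 2, 3, 5), (4, 3, 2, 1)]
edges = solution_network(sols, max_dist=1)
print(edges)  # [(0, 1)] -- only the first two timetables differ in a single slot
```

In the paper's setting, the neighbors reachable along such edges are the candidate repairs when a disruption invalidates a solution's time slot.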

Authors:Mingchih Chen; Xufeng Zhao; Toshio Nakagawa Abstract: It is of interest to formulate general replacement models combining constant and random policies to satisfy both commonly planned and randomly needed replacement times. This paper revisits age and periodic replacement models to formulate their general versions when replacement actions are also conducted at random times \(Y_i~(i=1,2,\ldots ,n)\) . The classic approach of whichever occurs first and the newly proposed approach of whichever occurs last are used for these general models, which are named replacement first, modified replacement first, replacement last, and modified replacement last, respectively. We compare all of the replacement models analytically and numerically to find which policy should be selected from the viewpoint of cost. It is shown that the modified replacement policies, which combine the whichever-occurs-first and whichever-occurs-last approaches, are more economical than the others. In addition, the replacement models are extended to different replacement costs for further study. PubDate: 2017-10-14 DOI: 10.1007/s10479-017-2685-y
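As a baseline for such models, the classic age replacement cost rate (replace at failure at cost cf, or at planned age T at cost cp, whichever occurs first) can be sketched numerically; the Weibull lifetime distribution and all cost values below are illustrative assumptions, not the paper's general model:

```python
import math

def weibull_sf(t, shape, scale):
    """Weibull survival function P(lifetime > t)."""
    return math.exp(-((t / scale) ** shape))

def age_replacement_cost_rate(T, cf, cp, shape, scale, steps=10_000):
    """Long-run expected cost per unit time of classic age replacement:
    replace at failure (cost cf) or at age T (cost cp), whichever occurs first."""
    dt = T / steps
    # Expected cycle length: midpoint-rule integral of the survival function on [0, T].
    mean_cycle = sum(weibull_sf((i + 0.5) * dt, shape, scale) for i in range(steps)) * dt
    failure_prob = 1.0 - weibull_sf(T, shape, scale)
    expected_cost = cf * failure_prob + cp * (1.0 - failure_prob)
    return expected_cost / mean_cycle

# Scan a few candidate ages; with an increasing hazard (shape > 1) and cf >> cp,
# earlier preventive replacement lowers the cost rate.
for T in (0.5, 1.0, 2.0):
    print(T, round(age_replacement_cost_rate(T, cf=10.0, cp=1.0, shape=2.0, scale=1.0), 3))
```

The paper's general models additionally allow replacement at the random times \(Y_i\), which this single-policy baseline omits.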

Authors:Prasenjit Mondal Abstract: We consider semi-Markov decision processes (SMDPs) with finite state and action spaces and a general multichain structure. A form of limiting ratio average (undiscounted) reward is the criterion for comparing different policies. The main result is that the value vector and a pure optimal semi-stationary policy (i.e., a policy which depends only on the initial state and the current state) for such an SMDP can be computed directly from an optimal solution of a finite set (whose cardinality equals the number of states) of linear programming (LP) problems. To be more precise, we prove that the single LP associated with a fixed initial state provides the value and an optimal pure stationary policy of the corresponding SMDP. The relation between the set of feasible solutions of each LP and the set of stationary policies is also analyzed. Examples are worked out to illustrate the algorithm. PubDate: 2017-10-14 DOI: 10.1007/s10479-017-2686-x

Authors:Hela Masri; Saoussen Krichen Abstract: A major problem in communication networks is how to efficiently define routing paths and allocate bandwidth in order to satisfy a given collection of transmission requests. In this paper, we study this routing problem by modeling it as a bi-objective single path multicommodity flow problem (SMCFP). Two conflicting objective functions are simultaneously optimized: the delay and the reliability of the generated paths. To tackle the NP-hardness of this problem, most existing studies have proposed approximate methods and metaheuristics as solution approaches. In this paper, we propose to adapt the augmented \(\epsilon \) -constraint method in order to solve small-sized instances of the bi-SMCFP. For large-scale problems, we develop three metaheuristics: a multiobjective multi-operator genetic algorithm, an adaptive multiobjective variable neighborhood search, and a new hybrid method combining the \(\epsilon \) -constraint method with the evolutionary metaheuristic. The idea of the hybridization scheme is to first use the metaheuristic to generate a good approximation of the Pareto front, and then to enhance the quality of the solutions using the \(\epsilon \) -constraint method to push them toward the exact Pareto front. An intelligent decomposition scheme is used to reduce the size of the search space before applying the exact method. Computational results demonstrate the efficiency of the proposed hybrid algorithm on instances derived from a real network topology and on randomly generated instances. PubDate: 2017-10-13 DOI: 10.1007/s10479-017-2667-0
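The \(\epsilon \) -constraint idea (optimize one objective while bounding the other, sweeping the bound) can be illustrated on a toy path set; the paths and their (delay, reliability) values are invented for illustration and are unrelated to the paper's instances:

```python
# Candidate single paths for one commodity, each with (delay, reliability).
# Goal: minimize delay and maximize reliability.
paths = {
    "P1": (10, 0.90),
    "P2": (14, 0.95),
    "P3": (18, 0.99),
    "P4": (16, 0.93),  # dominated by P2: more delay and less reliability
}

def epsilon_constraint(paths, eps_grid):
    """For each reliability threshold eps, pick the minimum-delay path
    among those with reliability >= eps; collect the distinct optima."""
    front = {}
    for eps in eps_grid:
        feasible = {k: v for k, v in paths.items() if v[1] >= eps}
        if feasible:
            best = min(feasible, key=lambda k: feasible[k][0])
            front[best] = paths[best]
    return front

front = epsilon_constraint(paths, eps_grid=[0.90, 0.95, 0.99])
print(sorted(front))  # ['P1', 'P2', 'P3'] -- the dominated path P4 never appears
```

In the paper's hybrid, this exact sweep is applied only after a metaheuristic has narrowed the search space, which is what makes it tractable at scale.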

Authors:Nader Azizi; Navneet Vidyarthi; Satyaveer S. Chauhan Abstract: Motivated by the strategic importance of congestion management, in this paper we present a model to design hub-and-spoke networks under stochastic demand and congestion. The proposed model determines the location and capacity of the hub nodes and allocates non-hub nodes to these hubs while minimizing the sum of the fixed cost, transportation cost, and congestion cost. In our approach, hubs are modelled as spatially distributed M/G/1 queues and congestion is captured using the expected queue lengths at hub facilities. A simple transformation and a piecewise linear approximation technique are used to linearize the resulting nonlinear model. We present two solution approaches: an exact method that uses a cutting plane approach and a novel genetic-algorithm-based heuristic. The numerical experiments are conducted using the CAB and TR datasets. Analysing the results obtained from a number of problem instances, we illustrate the impact of congestion cost on the network topology and show that a substantial reduction in congestion can be achieved with a small increase in total cost if congestion at hub facilities is considered at the design stage. The computational results further confirm the stability and efficiency of both the exact and heuristic approaches. PubDate: 2017-10-13 DOI: 10.1007/s10479-017-2656-3
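The expected queue length at an M/G/1 hub is given by the standard Pollaczek-Khinchine formula; a minimal sketch (the paper's linearization and the full location model are not reproduced, and the numbers are illustrative):

```python
def mg1_expected_queue_length(lam, es, es2):
    """Pollaczek-Khinchine formula: expected number waiting in an M/G/1 queue.
    lam: Poisson arrival rate, es: E[S], es2: E[S^2] of the service time."""
    rho = lam * es
    assert rho < 1, "queue must be stable (utilization below 1)"
    return (lam ** 2) * es2 / (2.0 * (1.0 - rho))

# Exponential service with rate mu=1 (E[S]=1, E[S^2]=2) recovers the
# familiar M/M/1 value Lq = rho^2 / (1 - rho).
print(mg1_expected_queue_length(lam=0.5, es=1.0, es2=2.0))  # 0.5
```

The design model penalizes exactly this quantity at each open hub, which is why capacity and allocation decisions trade congestion against fixed and transportation costs.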

Authors:Daniel Wei-Chung Miao; Yung-Hsin Lee; Jr-Yan Wang Abstract: This paper extends the forward Monte-Carlo methods, which have been developed for the basic types of American options, to the valuation of American barrier options. The main advantage of these methods is that they do not require backward induction, the most time-consuming and memory-intensive step in the simulation approach to American options pricing. For these methods to work, we need to define the so-called pseudo critical prices which are used to determine whether early exercise should happen. In this study, we define a new and more flexible version of the pseudo critical prices which can be conveniently extended to all fourteen types of American barrier options. These pseudo critical prices are shown to satisfy the criteria of a sufficient indicator which guarantees the effectiveness of the proposed methods. A series of numerical experiments are provided to compare the performance between the forward and backward Monte-Carlo methods and demonstrate the computational advantages of the forward methods. PubDate: 2017-10-11 DOI: 10.1007/s10479-017-2639-4
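The forward method proper requires the paper's pseudo critical prices; as a simpler illustration of forward, path-wise barrier monitoring without backward induction, here is a plain Monte-Carlo sketch for a European down-and-out call under geometric Brownian motion (all parameter values are illustrative assumptions):

```python
import math, random

def down_and_out_call_mc(s0, k, barrier, r, sigma, t, n_steps, n_paths, seed=0):
    """Plain Monte-Carlo price of a European down-and-out call under GBM.
    A path is knocked out if it touches the barrier at any monitoring date."""
    rng = random.Random(seed)
    dt = t / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, alive = s0, True
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s <= barrier:       # knocked out: payoff is zero
                alive = False
                break
        if alive:
            payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = down_and_out_call_mc(s0=100, k=100, barrier=90, r=0.05,
                             sigma=0.2, t=1.0, n_steps=50, n_paths=20_000)
print(round(price, 2))
```

Handling early exercise on top of this, without rolling back through the paths, is exactly what the pseudo critical prices in the paper enable.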

Authors:Amir Hossein Nobil; Amir Hosein Afshar Sedigh; Leopoldo Eduardo Cárdenas-Barrón Abstract: This paper develops a multiproduct economic production quantity inventory model for a vendor–buyer system in which several products are manufactured on a single machine. The vendor delivers the products to the customer in small batches, and the number of orders must be a discrete value. Moreover, benefiting from a just-in-time policy, the buyer decides the size of the delivered batches. Because several products are manufactured on one machine, production capacity must be considered as a constraint. The aim of this study is to determine the optimal cycle length and the number of delivered batches for each product so that the total inventory cost is minimized. The problem under study is modeled as a mixed integer nonlinear programming problem with maximum-number-of-orders, capacity, and budget constraints. Three different methods are developed and employed to solve this problem: an exact method, a heuristic algorithm, and a hybrid genetic algorithm. The results show that the three methods achieve comparable solution quality with different running times: the heuristic algorithm obtains a good solution in a short time, and the hybrid genetic algorithm finds solutions of higher quality in an acceptable time. Finally, a sensitivity analysis is performed to evaluate the effect of changes in the problem parameters. PubDate: 2017-10-11 DOI: 10.1007/s10479-017-2650-9
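The single-product, unconstrained special case reduces to the classic EPQ lot-size formula; a minimal sketch with illustrative numbers (the paper's constrained multiproduct MINLP is not reproduced here):

```python
import math

def epq_batch_size(demand, setup_cost, holding_cost, demand_rate, production_rate):
    """Classic single-product EPQ: lot size minimizing setup plus holding cost,
    with finite production rate (demand_rate < production_rate)."""
    return math.sqrt(2.0 * demand * setup_cost /
                     (holding_cost * (1.0 - demand_rate / production_rate)))

# Illustrative data: annual demand 1000, setup cost 100, holding cost 2 per
# unit-year, and the machine produces twice as fast as demand arrives.
q = epq_batch_size(demand=1000, setup_cost=100, holding_cost=2,
                   demand_rate=50, production_rate=100)
print(round(q, 2))  # 447.21
```

The paper's model departs from this baseline by coupling all products through shared capacity, discrete order counts, and a budget constraint, which is what necessitates the heuristic and genetic-algorithm approaches.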

Authors:Dirk Sierag; Bernard Hanzon Abstract: In this paper a direct generalisation of the recombining binomial tree model by Cox et al. (J Financ Econ 7:229–263, 1979), based on Pascal's simplex, is constructed. This discrete model can be used to approximate the prices of derivatives on multiple assets in a Black–Scholes market environment. The generalisation keeps most aspects of the binomial model intact, the most important being: the direct link to Pascal's simplex (which specialises to Pascal's triangle in the binomial case); the matching of moments of the (log-transformed) process; convergence to the correct option prices, for both European and American options, as the time step length goes to zero; and the completeness of the model, at least for sufficiently small time steps. The goal of this paper is to present the basic theoretical aspects of this approach. However, we also illustrate the approach with a number of example calculations. Further possible developments of this approach are discussed in a final section. PubDate: 2017-10-10 DOI: 10.1007/s10479-017-2655-4
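The binomial special case that the paper generalises is the standard Cox-Ross-Rubinstein tree; a sketch for a European call with illustrative parameters (the multi-asset Pascal's-simplex construction itself is not shown):

```python
import math

def crr_european_call(s0, k, r, sigma, t, n):
    """Cox-Ross-Rubinstein recombining binomial tree price of a European call."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs at the n+1 leaves, then roll back through the tree.
    values = [max(s0 * u ** j * d ** (n - j) - k, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# Converges to the Black-Scholes price (about 10.45 for these parameters).
print(round(crr_european_call(100, 100, 0.05, 0.2, 1.0, 500), 2))
```

In the paper's generalisation, the two branches per step become the faces of Pascal's simplex, so the recombining-tree structure and moment matching carry over to several assets.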

Authors:Marco Viola; Mara Sangiovanni; Gerardo Toraldo; Mario R. Guarracino Abstract: Supervised classification is one of the most powerful techniques to analyze data when a priori information is available on the membership of data samples to classes. Since the labeling process can be both expensive and time-consuming, it is interesting to investigate semi-supervised algorithms that can produce classification models taking advantage of unlabeled samples. In this paper we propose LapReGEC, a novel technique that introduces a Laplacian regularization term in a generalized eigenvalue classifier. As a result, we produce models that are both accurate and parsimonious in terms of needed labeled data. We empirically show that the obtained classifier compares well with other techniques, using as little as 5% of labeled points to compute the models. PubDate: 2017-10-10 DOI: 10.1007/s10479-017-2674-1

Authors:Sebastian Lozano; Belarmino Adenso-Diaz Abstract: This paper deals with planning the product flows along a supply chain (SC) in which there are product losses in the nodes and in the arcs. Given the demand of each retailer, the appropriate quantities to be procured from the different suppliers must be decided and the routing of the product along the SC must be determined. Care must be taken because, due to losses, the amount of product finally available at the retailers is lower than the amount procured. The objective is twofold: minimizing total costs and minimizing product losses. The proposed approach leverages the existence of data on the flows in previous periods. From those observed flows, a Network Data Envelopment Analysis technology is inferred, which allows any feasible operating point to be computed. The resulting biobjective optimization problem can be solved using the weighted Tchebycheff method. PubDate: 2017-10-10 DOI: 10.1007/s10479-017-2653-6

Authors:Liangjie Xia; Tingting Guo; Juanjuan Qin; Xiaohang Yue; Ning Zhu Abstract: The traditional self-interest hypothesis is far from perfect. Social preference has a significant impact on every firm’s decision making. This paper incorporates reciprocal preferences and consumers’ low-carbon awareness (CLA) into the dyadic supply chain in which a single manufacturer plays a Stackelberg-like game with a single retailer. This research intends to investigate how reciprocity and CLA may affect the decisions and performances of the supply chain members and the system’s efficiency. In this study, the following two scenarios are discussed: (1) both the manufacturer and the retailer have no reciprocal preferences and (2) both of them have reciprocal preferences. We derive equilibriums under both scenarios and present a numerical analysis. We demonstrate that reciprocal preferences and CLA significantly affect the equilibrium and firms’ profits and utilities. First, the optimal retail price increases with CLA, while it decreases with the reciprocity of the retailer and the manufacturer; the optimal wholesale price increases with CLA and the retailer’s reciprocity, while it decreases with the manufacturer’s reciprocity. The optimal emission reduction level increases with CLA and the reciprocity of both the manufacturer and the retailer. Second, the optimal profits of the participants and the supply chain increase with CLA, the participants’ optimal profits are concave in their own reciprocity and increase with their co-operators’ reciprocity. Third, the participants’ optimal utilities increase with CLA and their reciprocity. Finally, the supply chain efficiency increases with the participants’ reciprocity, while the efficiency decreases with CLA. PubDate: 2017-10-09 DOI: 10.1007/s10479-017-2657-2

Authors:Miguel A. Ortíz; Leidy E. Betancourt; Kevin Parra Negrete; Fabio De Felice; Antonella Petrillo Abstract: In today's highly competitive and globalized markets, the efficient use of production resources is necessary for manufacturing enterprises. This research addresses the problem of scheduling and sequencing in a manufacturing system, analyzing a flexible job shop sequencing problem in detail. After formulating the problem mathematically, a new model is proposed. The problem is not only theoretically interesting but also practically relevant. An illustrative example demonstrates the applicability of the proposed model. PubDate: 2017-10-07 DOI: 10.1007/s10479-017-2678-x

Authors:Bahram Alidaee; Vijay P. Ramalingam; Haibo Wang; Bryan Kethley Abstract: In this paper, we propose a critical event tabu search meta-heuristic for the general integer multidimensional knapsack problem (GMDKP). Variations of GMDKP have enormous applications and often occur as sub-problems of more general combinatorial problems. For the special case of binary multidimensional knapsack problems (BMDKP) there is a variety of heuristics, mostly sophisticated meta-heuristics, that provide good solutions to the problem. However, to date there is no method that can provide reasonable solutions to realistic-size GMDKP. To the best of our knowledge, only three heuristics have been published in the literature for GMDKP, all three simple greedy heuristics; no meta-heuristic is available that effectively provides good solutions for large-scale GMDKP. One meta-heuristic that has proven highly effective in solving combinatorial optimization problems is a variation of tabu search known as critical event tabu search (CETS). CETS was originally proposed for the BMDKP with considerable success; in CETS, clever use of surrogate programming is embedded as choice rules to obtain high-quality solutions. The main purpose of this paper is to design CETS for the GMDKP using a variety of surrogate choice rules. Extensive computational experiments for large-scale problems are presented. Our procedures open the door for further applications of meta-heuristics to general integer programs. PubDate: 2017-10-07 DOI: 10.1007/s10479-017-2675-0
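A simple greedy choice rule with scarcity-based surrogate weights, in the spirit of (but much simpler than) the surrogate rules embedded in CETS; the instance and the surrogate multipliers are illustrative assumptions:

```python
def greedy_gmdkp(values, weights, capacities, upper_bounds):
    """Greedy heuristic for the general integer multidimensional knapsack:
    repeatedly add one unit of the item with the best value-to-surrogate-weight
    ratio, weighting each resource by its remaining scarcity."""
    n, m = len(values), len(capacities)
    used = [0] * m
    x = [0] * n
    while True:
        best, best_ratio = None, 0.0
        for i in range(n):
            if x[i] >= upper_bounds[i]:
                continue
            if any(used[j] + weights[i][j] > capacities[j] for j in range(m)):
                continue
            # Surrogate weight: consumption scaled by how scarce each resource is.
            surrogate = sum(weights[i][j] / (capacities[j] - used[j] + 1)
                            for j in range(m))
            ratio = values[i] / (surrogate + 1e-12)
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            return x
        x[best] += 1
        for j in range(m):
            used[j] += weights[best][j]

# Two item types, two resources, up to 4 integer copies of each item.
x = greedy_gmdkp(values=[7, 5],
                 weights=[[3, 1], [1, 2]],
                 capacities=[10, 10],
                 upper_bounds=[4, 4])
print(x)  # [2, 4]
```

CETS goes beyond such a one-pass rule by allowing infeasible moves and restarting from critical events, but the surrogate ratio above is the kind of choice rule it embeds.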

Authors:Jonas Harbering; Abhiram Ranade; Marie Schmidt; Oliver Sinnen Abstract: In this work we consider the single track train scheduling problem. The problem consists of scheduling a set of trains from opposite sides along a single track. The track has intermediate stations and the trains are only allowed to pass each other at those stations. Traversal times of the trains on the blocks between the stations only depend on the block lengths but not on the train. This problem is a special case of minimizing the makespan in job shop scheduling with two counter routes and no preemption. We develop a lower bound on the makespan of the train scheduling problem which provides us with an easy solution method in some special cases. Additionally, we prove that for a fixed number of blocks the problem can be solved in pseudo-polynomial time. PubDate: 2017-10-07 DOI: 10.1007/s10479-017-2644-7

Authors:Jiang Wu; Jinn-Tsair Teng; Konstantina Skouri Abstract: In general, the demand rate over a product life cycle can be reasonably depicted by a trapezoidal-type pattern: it initially increases during the introduction and growth phases, then remains roughly constant in the maturity phase, and finally decreases in the decline phase. Perishable products deteriorate continuously over time and cannot be sold after their maximum lifetime; thus, the deterioration rate of a product increases with time and is closely related to its maximum lifetime. Furthermore, since the global financial meltdown in 2008 it has been hard to obtain loans from banks, so over 80% of firms in the United Kingdom and the United States sell their products on various short-term, interest-free loans (i.e., trade credit) to customers. To incorporate these important facts, we develop an inventory model by (1) assuming the demand pattern is trapezoidal, (2) extending the deterioration rate to 100% as the maximum lifetime is approached, (3) using discounted cash-flow analysis to calculate all relevant costs, considering the effects of upstream and downstream trade credits, and (4) including the purchase cost in the total cost, which is omitted in previous studies. The order quantity that maximizes the present value of the profit is then shown to be uniquely determined. Finally, managerial insights are provided through numerical examples. PubDate: 2017-10-06 DOI: 10.1007/s10479-017-2673-2
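The trapezoidal demand pattern can be written as a simple piecewise function; the phase boundaries and peak rate below are illustrative assumptions, not the paper's parameters:

```python
def trapezoidal_demand(t, t1, t2, horizon, peak):
    """Trapezoidal demand rate over a product life cycle:
    linear growth on [0, t1], constant at `peak` on [t1, t2],
    linear decline on [t2, horizon], zero outside the horizon."""
    if t < 0 or t > horizon:
        return 0.0
    if t < t1:                       # introduction/growth phases
        return peak * t / t1
    if t <= t2:                      # maturity phase
        return peak
    return peak * (horizon - t) / (horizon - t2)  # decline phase

# Growth until t=2, maturity until t=5, decline until t=6.
for t in (0.0, 1.0, 3.0, 5.5, 6.0):
    print(t, trapezoidal_demand(t, t1=2.0, t2=5.0, horizon=6.0, peak=100.0))
```

The model's inventory dynamics then integrate this rate (together with the time-dependent deterioration) over the cycle to obtain the profit function.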

Authors:Sylvain Béal; André Casajus; Frank Huettner Abstract: We study values for transferable utility games enriched by a communication graph. The most well-known such values are component-efficient and characterized by some deletion link property. We study efficient extensions of such values: for a given component-efficient value, we look for a value that (i) satisfies efficiency, (ii) satisfies the link-deletion property underlying the original component-efficient value, and (iii) coincides with the original component-efficient value whenever the underlying graph is connected. Béal et al. (Soc Choice Welf 45:819–827, 2015) prove that the Myerson value (Myerson in Math Oper Res 2:225–229, 1977) admits a unique efficient extension, which has been introduced by van den Brink et al. (Econ Lett 117:786–789, 2012). We pursue this line of research by showing that the average tree solution (Herings et al. in Games Econ Behav 62:77–92, 2008) and the compensation solution (Béal et al. in Int J Game Theory 41:157–178, 2012b) admit similar unique efficient extensions, and that there exists no efficient extension of the position value (Meessen in Communication games, 1988; Borm et al. in SIAM J Discrete Math 5:305–320, 1992). As byproducts, we obtain new characterizations of the average tree solution and the compensation solution, and of their efficient extensions. PubDate: 2017-10-06 DOI: 10.1007/s10479-017-2661-6