 Annals of Operations Research   [SJR: 1.186]   [H-I: 78]   Hybrid journal (may contain Open Access articles)    ISSN (Print) 0254-5330 - ISSN (Online) 1572-9338    Published by Springer-Verlag
• Paths, pivots, and practice: the power of optimization
• Authors: Miguel F. Anjos; Antoine Deza
Pages: 1 - 4
PubDate: 2018-06-01
DOI: 10.1007/s10479-018-2853-8
Issue No: Vol. 265, No. 1 (2018)

• Computational study of valid inequalities for the maximum k-cut problem
• Authors: Vilmar Jefté Rodrigues de Sousa; Miguel F. Anjos; Sébastien Le Digabel
Pages: 5 - 27
Abstract: We consider the maximum k-cut problem, which consists of partitioning the vertex set of a graph into k subsets such that the sum of the weights of edges joining vertices in different subsets is maximized. We focus on identifying effective classes of inequalities to tighten the semidefinite programming relaxation. We carry out an experimental study of four classes of inequalities from the literature: clique, general clique, wheel and bicycle wheel. We consider 10 combinations of these classes and test them on both dense and sparse instances for $$k \in \{3,4,5,7\}$$. Our computational results suggest that the bicycle wheel and wheel inequalities are the strongest for $$k=3$$, and that for $$k \in \{4,5,7\}$$ the wheel inequalities are the strongest by far. Furthermore, we observe an improvement in performance for all choices of k when both bicycle wheel and wheel inequalities are used, at the cost of 72% more CPU time on average compared with using only one of them.
PubDate: 2018-06-01
DOI: 10.1007/s10479-017-2448-9
Issue No: Vol. 265, No. 1 (2018)
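The maximum k-cut objective described in the abstract above can be stated concretely in a few lines. A minimal brute-force sketch for tiny graphs (the paper's SDP relaxation and inequality classes are not reproduced here; data structures are illustrative):

```python
import itertools

def cut_value(weights, parts):
    # Sum the weights of edges whose endpoints lie in different subsets.
    # weights: dict {(u, v): w}; parts: dict {vertex: subset index}
    return sum(w for (u, v), w in weights.items() if parts[u] != parts[v])

def max_k_cut(weights, vertices, k):
    # Exhaustive search over all k-labelings -- exponential, reference only.
    best = 0
    for labels in itertools.product(range(k), repeat=len(vertices)):
        best = max(best, cut_value(weights, dict(zip(vertices, labels))))
    return best
```

On a unit-weight triangle, three subsets cut all three edges, while two subsets can cut at most two.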

• On component commonality for periodic review assemble-to-order systems
• Authors: Antoine Deza; Kai Huang; Hongfeng Liang; Xiao Jiao Wang
Pages: 29 - 46
Abstract: Akçay and Xu (Manag Sci 50(1):99–116, 2004) studied a periodic review assemble-to-order (ATO) system with an independent base stock policy and a first-come-first-served allocation rule, where the base stock levels and the component allocation are optimized jointly. The formulation is non-convex and thus theoretically and computationally challenging. In their computational experiments, Akçay and Xu modified the right-hand side of the inventory availability constraints by substituting linear functions for piecewise linear ones. This modification may have a significant impact at low budget levels. The optimal solutions obtained via the original formulation (that is, without the modification) include zero base stock levels for some components and consequently indicate a bias against component commonality. We study the impact of component commonality on ATO systems. We show that lowering component commonality may yield a higher type-II service level. The lower degree of component commonality is achieved by separating inventories of the same component for different products. We substantiate this property via computational and theoretical approaches. We show that for low budget levels the use of separate inventories of the same component for different products can achieve a higher reward than shared inventories. Finally, considering a simple ATO system consisting of one component shared by two products, we characterize the budget ranges in which the use of separate inventories is beneficial, as well as those in which component commonality is beneficial.
PubDate: 2018-06-01
DOI: 10.1007/s10479-017-2507-2
Issue No: Vol. 265, No. 1 (2018)

• Combinatorial redundancy detection
• Authors: Komei Fukuda; Bernd Gärtner; May Szedlák
Pages: 47 - 65
Abstract: The problem of detecting and removing redundant constraints is fundamental in optimization. We focus on the case of linear programs (LPs) in dictionary form, given by n equality constraints in $$n+d$$ variables, where the variables are constrained to be nonnegative. A variable $$x_r$$ is called redundant if, after removing $$x_r \ge 0$$, the LP still has the same feasible region. The time needed to solve such an LP is denoted by $$\textit{LP}(n,d)$$. It is easy to see that solving $$n+d$$ LPs of the above size is sufficient to detect all redundancies. The currently fastest practical method is the one by Clarkson: it solves $$n+d$$ linear programs, but each of them has at most s variables, where s is the number of nonredundant constraints. In the first part we show that knowing all of the finitely many dictionaries of the LP is sufficient for the purpose of redundancy detection. A dictionary is a matrix that can be thought of as an enriched encoding of a vertex in the LP. Moreover—and this is the combinatorial aspect—it is enough to know only the signs of the entries; the actual values do not matter. Concretely, we show that for any variable $$x_r$$ one can find a dictionary such that its sign pattern is either a redundancy or nonredundancy certificate for $$x_r$$. In the second part we show that, considering only the sign patterns of the dictionary, there is an output-sensitive algorithm of running time $$\mathcal {O}(d \cdot (n+d) \cdot s^{d-1} \cdot \textit{LP}(s,d) + d \cdot s^{d} \cdot \textit{LP}(n,d))$$ to detect all redundancies. In the case where all constraints are in general position, the running time is $$\mathcal {O}(s \cdot \textit{LP}(n,d) + (n+d) \cdot \textit{LP}(s,d))$$, which is essentially the running time of the Clarkson method. Our algorithm extends naturally to the more general setting of arrangements of oriented topological hyperplanes.
PubDate: 2018-06-01
DOI: 10.1007/s10479-016-2385-z
Issue No: Vol. 265, No. 1 (2018)
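The redundancy notion above (a constraint whose removal leaves the feasible region unchanged) can be tested directly in small dimensions. A sketch for two-dimensional systems $$Ax \le b$$ that checks redundancy by maximizing the candidate constraint over the vertices of the remaining system; it assumes a bounded, nondegenerate region and does not reproduce the paper's dictionary sign-pattern certificates:

```python
import itertools

def is_redundant(A, b, i, tol=1e-9):
    # Constraint i (a_i . x <= b_i) is redundant iff the maximum of a_i . x
    # over the remaining constraints does not exceed b_i.  In 2D, that
    # maximum is attained at a vertex: an intersection of two constraints.
    rest = [j for j in range(len(A)) if j != i]
    best = -float("inf")
    for j, k in itertools.combinations(rest, 2):
        (a1, a2), (c1, c2) = A[j], A[k]
        det = a1 * c2 - a2 * c1
        if abs(det) < tol:
            continue  # parallel boundary lines: no intersection vertex
        x = (b[j] * c2 - b[k] * a2) / det
        y = (a1 * b[k] - c1 * b[j]) / det
        # keep only vertices feasible for all remaining constraints
        if all(A[m][0] * x + A[m][1] * y <= b[m] + tol for m in rest):
            best = max(best, A[i][0] * x + A[i][1] * y)
    return best <= b[i] + tol
```

For example, in the unit square $$\{x \le 1,\ y \le 1,\ -x \le 0,\ -y \le 0\}$$ the extra constraint $$x + y \le 3$$ is redundant, while $$x \le 1$$ is not.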

• A numerical evaluation of the bounded degree sum-of-squares hierarchy of
Lasserre, Toh, and Yang on the pooling problem
• Authors: Ahmadreza Marandi; Joachim Dahl; Etienne de Klerk
Pages: 67 - 92
Abstract: The bounded degree sum-of-squares (BSOS) hierarchy of Lasserre et al. (EURO J Comput Optim 1–31, 2015) constructs lower bounds for a general polynomial optimization problem with compact feasible set, by solving a sequence of semi-definite programming (SDP) problems. Lasserre, Toh, and Yang prove that these lower bounds converge to the optimal value of the original problem, under some assumptions. In this paper, we analyze the BSOS hierarchy and study its numerical performance on a specific class of bilinear programming problems, called pooling problems, that arise in the refinery and chemical process industries.
PubDate: 2018-06-01
DOI: 10.1007/s10479-017-2407-5
Issue No: Vol. 265, No. 1 (2018)

• Efficient solution of many instances of a simulation-based optimization
problem utilizing a partition of the decision space
• Authors: Zuzana Nedělková; Peter Lindroth; Michael Patriksson; Ann-Brith Strömberg
Pages: 93 - 118
Abstract: This paper concerns the solution of a class of mathematical optimization problems with simulation-based objective functions. The decision variables are partitioned into two groups, referred to as variables and parameters, respectively, such that the objective function value is influenced more by the variables than by the parameters. We aim to solve this optimization problem for a large number of parameter settings in a computationally efficient way. The algorithm developed uses surrogate models of the objective function for a selection of parameter settings, for each of which it computes an approximately optimal solution over the domain of the variables. Then, approximate optimal solutions for other parameter settings are computed through a weighting of the surrogate models without requiring additional expensive function evaluations. We have tested the algorithm’s performance on a set of global optimization problems differing with respect to both mathematical properties and numbers of variables and parameters. Our results show that it outperforms a standard and often applied approach based on a surrogate model of the objective function over the complete space of variables and parameters.
PubDate: 2018-06-01
DOI: 10.1007/s10479-017-2721-y
Issue No: Vol. 265, No. 1 (2018)
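As a one-dimensional caricature of the surrogate idea in the abstract above: fit a cheap quadratic model to a few expensive samples and minimize the model instead of the true objective. This is the classical successive parabolic interpolation step, not the authors' algorithm:

```python
def quadratic_surrogate_min(x, f):
    # Fit the unique quadratic through three samples (x0,f0), (x1,f1), (x2,f2)
    # and return the minimizer of that surrogate model.
    x0, x1, x2 = x
    f0, f1, f2 = f
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den
```

For a truly quadratic objective the surrogate is exact, so a single step recovers the minimizer: sampling $$f(x) = (x-2)^2$$ at 0, 1 and 3 returns 2.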

• How difficult is nonlinear optimization? A practical solver tuning
approach, with illustrative results
• Authors: János D. Pintér
Pages: 119 - 141
Abstract: Nonlinear optimization (NLO) encompasses a vast range of problems, from very simple to theoretically intractable instances. For this reason, it is impossible to offer guaranteed—while practically meaningful—advice to users of NLO software. This issue becomes apparent when facing exceptionally hard and/or previously unexplored NLO challenges. We propose a heuristic, quadratic meta-model based approach, and suggest corresponding key option settings to use with the Lipschitz global optimizer (LGO) solver suite. These LGO option settings are directly related to estimating the computational effort sufficient to handle a broad range of NLO problems. The proposed option settings are evaluated experimentally by solving (numerically) a representative set of NLO test problems based on real-world optimization applications and non-trivial academic challenges. Our tests also include a set of scalable optimization problems which become increasingly difficult to handle as the size of the model instances increases. Based on our computational results, it is possible to offer generally valid, practical advice to LGO users. Arguably (and mutatis mutandis), comparable advice can be given to users of other NLO software products with a mandate similarly broad to LGO's. An additional benefit of such aggregated tests is that their results can effectively assist the rapid evaluation and verification of NLO solver performance during software development phases.
PubDate: 2018-06-01
DOI: 10.1007/s10479-017-2518-z
Issue No: Vol. 265, No. 1 (2018)

• Graph bisection revisited
• Authors: Renata Sotirov
Pages: 143 - 154
Abstract: The graph bisection problem is the problem of partitioning the vertex set of a graph into two sets of given sizes such that the sum of weights of edges joining these two sets is optimized. We present a semidefinite programming relaxation for the graph bisection problem with a matrix variable of order n—the number of vertices of the graph—that is equivalent to the currently strongest semidefinite programming relaxation obtained by using vector lifting. The reduction in the size of the matrix variable enables us to impose additional valid inequalities to the relaxation in order to further strengthen it. The numerical results confirm that our simplified and strengthened semidefinite relaxation provides the currently strongest bound for the graph bisection problem in reasonable time.
PubDate: 2018-06-01
DOI: 10.1007/s10479-017-2575-3
Issue No: Vol. 265, No. 1 (2018)
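The bisection objective above is straightforward to evaluate, and on tiny graphs an exhaustive reference solution is available. A minimal sketch (the paper's semidefinite relaxation is not reproduced):

```python
import itertools

def cut_weight(weights, S):
    # Total weight of edges with exactly one endpoint in the set S.
    return sum(w for (u, v), w in weights.items() if (u in S) != (v in S))

def min_bisection(weights, vertices, size):
    # Exhaustive minimum over all vertex subsets of the given size;
    # exponential, usable only as a reference on tiny instances.
    return min(cut_weight(weights, set(S))
               for S in itertools.combinations(vertices, size))
```

On the unit-weight path a-b-c-d, splitting into {a, b} and {c, d} cuts a single edge, which is optimal.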

• LP-based tractable subcones of the semidefinite plus nonnegative cone
• Authors: Akihiro Tanaka; Akiko Yoshise
Pages: 155 - 182
Abstract: The authors in a previous paper devised certain subcones of the semidefinite plus nonnegative cone and showed that satisfaction of the requirements for membership of those subcones can be detected by solving linear optimization problems (LPs) with O(n) variables and $$O(n^2)$$ constraints. They also devised LP-based algorithms for testing copositivity using the subcones. In this paper, they investigate the properties of the subcones in more detail and explore larger subcones of the positive semidefinite plus nonnegative cone whose satisfaction of the requirements for membership can be detected by solving LPs. They introduce a semidefinite basis (SD basis) that is a basis of the space of $$n \times n$$ symmetric matrices consisting of $$n(n+1)/2$$ symmetric semidefinite matrices. Using the SD basis, they devise two new subcones for which detection can be done by solving LPs with $$O(n^2)$$ variables and $$O(n^2)$$ constraints. The new subcones are larger than the ones in the previous paper and inherit their nice properties. The authors also examine the efficiency of those subcones in numerical experiments. The results show that the subcones are promising for testing copositivity as a useful application.
PubDate: 2018-06-01
DOI: 10.1007/s10479-017-2720-z
Issue No: Vol. 265, No. 1 (2018)
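One concrete semidefinite basis with the stated properties is a standard construction (not necessarily the authors' exact choice): the rank-one matrices $$e_i e_i^T$$ together with $$(e_i + e_j)(e_i + e_j)^T$$ for $$i < j$$. All $$n(n+1)/2$$ of them are positive semidefinite, and every symmetric matrix decomposes over them with explicit coefficients:

```python
def sd_basis(n):
    # n(n+1)/2 PSD matrices spanning the n x n symmetric matrices:
    # E_ii = e_i e_i^T, and B_ij = (e_i + e_j)(e_i + e_j)^T for i < j.
    basis = []
    for i in range(n):
        M = [[0] * n for _ in range(n)]
        M[i][i] = 1
        basis.append(M)
    for i in range(n):
        for j in range(i + 1, n):
            M = [[0] * n for _ in range(n)]
            M[i][i] = M[j][j] = M[i][j] = M[j][i] = 1
            basis.append(M)
    return basis

def decompose(M):
    # Coefficients of symmetric M in the basis above: B_ij carries the
    # off-diagonal entry M_ij but also adds 1 to both diagonal entries,
    # so the E_ii coefficient corrects the diagonal accordingly.
    n = len(M)
    diag = [M[i][i] - sum(M[i][j] for j in range(n) if j != i) for i in range(n)]
    off = [M[i][j] for i in range(n) for j in range(i + 1, n)]
    return diag + off
```

Note that the coefficients may be negative: these matrices form a basis of the symmetric matrices, not a conic generating set of the PSD cone.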

• Efficient extensions of communication values
• Authors: Sylvain Béal; André Casajus; Frank Huettner
Pages: 41 - 56
Abstract: We study values for transferable utility games enriched by a communication graph. The most well-known such values are component-efficient and characterized by some deletion link property. We study efficient extensions of such values: for a given component-efficient value, we look for a value that (i) satisfies efficiency, (ii) satisfies the link-deletion property underlying the original component-efficient value, and (iii) coincides with the original component-efficient value whenever the underlying graph is connected. Béal et al. (Soc Choice Welf 45:819–827, 2015) prove that the Myerson value (Myerson in Math Oper Res 2:225–229, 1977) admits a unique efficient extension, which has been introduced by van den Brink et al. (Econ Lett 117:786–789, 2012). We pursue this line of research by showing that the average tree solution (Herings et al. in Games Econ Behav 62:77–92, 2008) and the compensation solution (Béal et al. in Int J Game Theory 41:157–178, 2012b) admit similar unique efficient extensions, and that there exists no efficient extension of the position value (Meessen in Communication games, 1988; Borm et al. in SIAM J Discrete Math 5:305–320, 1992). As byproducts, we obtain new characterizations of the average tree solution and the compensation solution, and of their efficient extensions.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2661-6
Issue No: Vol. 264, No. 1-2 (2018)

• A variational inequality formulation for designing a multi-echelon,
multi-product supply chain network in a competitive environment
• Authors: Isa Feyzian-Tary; Jafar Razmi; Mohamad Sadegh Sangari
Pages: 89 - 121
Abstract: In a competitive environment, supply chains compete with each other to gain market share, and competition is a critical factor influencing the supply chain network structure. The current paper presents a variational inequality formulation and provides results for a competitive supply chain network design model. The new-entrant supply chain competes against an existing one in a non-cooperative manner. The networks include raw material suppliers, manufacturers, retailers, and the same demand markets. The manufacturers produce multiple products with deterministic, price-dependent demand. The goal is to maximize the future revenue of both chains. The problem is modeled by mathematical programming and the governing Nash equilibrium conditions are derived. Then, a finite-dimensional variational inequality formulation is presented to solve the equilibrium problem. Qualitative properties of the equilibrium pattern are provided to establish existence and uniqueness results under reasonable conditions. The modified projection algorithm is used to solve the variational inequality problem. A numerical example is presented to show the efficiency of the proposed model and to investigate its behavior under different conditions.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2737-3
Issue No: Vol. 264, No. 1-2 (2018)
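The modified projection algorithm mentioned in the abstract is, in its classical form (due to Korpelevich), an extragradient scheme: a predictor projection followed by a corrector projection. A minimal sketch on a box-constrained VI with an illustrative affine map F (the paper's supply chain mapping is not reproduced):

```python
def proj_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi], componentwise.
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def extragradient(F, x, lo, hi, tau=0.2, iters=500):
    # Modified projection method for VI(F, K) with K a box:
    # predictor y = P_K(x - tau*F(x)), corrector x = P_K(x - tau*F(y)).
    for _ in range(iters):
        y = proj_box([xi - tau * fi for xi, fi in zip(x, F(x))], lo, hi)
        x = proj_box([xi - tau * fi for xi, fi in zip(x, F(y))], lo, hi)
    return x
```

With the strongly monotone map $$F(x) = x - (2, -1)$$ on the unit box, the VI solution is the projection of $$(2, -1)$$ onto the box, namely $$(1, 0)$$, and the iterates converge to it.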

• Functional law of the iterated logarithm for multi-server queues with
batch arrivals and customer feedback
• Authors: Yongjiang Guo; Yunan Liu; Renhu Pei
Pages: 157 - 191
Abstract: A functional law of the iterated logarithm (FLIL) and its corresponding law of the iterated logarithm (LIL) are established for a multi-server queue with batch arrivals and customer feedback. The FLIL and LIL, which quantify the magnitude of asymptotic fluctuations of the stochastic processes around their mean values, are developed in three cases: underloaded, critically loaded and overloaded, for five performance measures: queue length, workload, busy time, idle time and departure process. Both FLIL and LIL are proved using an approach based on strong approximations.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2529-9
Issue No: Vol. 264, No. 1-2 (2018)
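For orientation, the classical law of the iterated logarithm for a standard Brownian motion B, on which strong-approximation arguments of this type build:

```latex
\limsup_{t \to \infty} \frac{B(t)}{\sqrt{2\,t \log \log t}} = 1 \quad \text{a.s.}
```

The LILs in the paper concern the analogous scaled deviations of the queueing processes (queue length, workload, busy time, idle time, departures) from their mean values.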

• And/or-convexity: a graph convexity based on processes and deadlock models
• Authors: Carlos V. G. C. Lima; Fábio Protti; Dieter Rautenbach; Uéverton S. Souza; Jayme L. Szwarcfiter
Pages: 267 - 286
Abstract: Deadlock prevention techniques are essential in the design of robust distributed systems. However, despite the large number of algorithmic approaches to detect and resolve deadlock situations, there remains a wide field to be explored in the study of deadlock-related combinatorial properties. In this work we consider a simplified AND-OR model, where the processes and their communication are given as a graph G. Each vertex of G is labelled AND or OR, in such a way that an AND-vertex (resp., OR-vertex) depends on the computation of all (resp., at least one) of its neighbors. We define a graph convexity based on this model, such that a set $$S \subseteq V(G)$$ is convex if and only if every AND-vertex (resp., OR-vertex) $$v \in V(G){\setminus }S$$ has at least one (resp., all) of its neighbors in $$V(G){\setminus }S$$. We relate some classical convexity parameters to blocking sets that cause deadlock. In particular, we show that those parameters in a graph represent the sizes of minimum or maximum blocking sets, as well as the computation time until system stability is reached. Finally, a study on the complexity of combinatorial problems related to this graph convexity is provided.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2666-1
Issue No: Vol. 264, No. 1-2 (2018)
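The convexity condition defined in the abstract is easy to check directly for a given vertex set. A minimal sketch (the adjacency-list and label encodings are illustrative, not from the paper):

```python
def is_convex(adj, labels, S):
    # S is convex iff every AND-vertex outside S keeps at least one neighbor
    # outside S, and every OR-vertex outside S keeps all neighbors outside S.
    outside = set(adj) - set(S)
    for v in outside:
        nbrs_out = [u for u in adj[v] if u in outside]
        if labels[v] == "AND" and not nbrs_out:
            return False
        if labels[v] == "OR" and len(nbrs_out) != len(adj[v]):
            return False
    return True
```

On the path a-b-c, the set {a} is convex when every vertex is an AND-vertex, but not when b is an OR-vertex (b would then need all its neighbors, including a, outside the set).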

• Perfect edge domination: hard and solvable cases
• Authors: Min Chih Lin; Vadim Lozin; Veronica A. Moyano; Jayme L. Szwarcfiter
Pages: 287 - 305
Abstract: Let G be an undirected graph. An edge of G dominates itself and all edges adjacent to it. A subset $$E'$$ of edges of G is an edge dominating set of G if every edge of the graph is dominated by some edge of $$E'$$. We say that $$E'$$ is a perfect edge dominating set of G if every edge not in $$E'$$ is dominated by exactly one edge of $$E'$$. The perfect edge domination problem is to determine a least cardinality perfect edge dominating set of G. For this problem, we describe two NP-completeness proofs: for claw-free graphs of degree at most 3, and for bounded degree graphs of maximum degree at most $$d \ge 3$$ and large girth. In contrast, we prove that the problem admits an O(n) time solution for cubic claw-free graphs. In addition, we prove a complexity dichotomy theorem for the perfect edge domination problem, based on the results described in the paper. Finally, we describe a linear time algorithm for finding a minimum weight perfect edge dominating set of a $$P_5$$-free graph. The algorithm is robust in the sense that, given an arbitrary graph G, it either computes a minimum weight perfect edge dominating set of G or exhibits an induced subgraph of G isomorphic to a $$P_5$$.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2664-3
Issue No: Vol. 264, No. 1-2 (2018)
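The definitions above translate directly into a checker: an edge dominates itself and every edge sharing an endpoint, and a perfect edge dominating set dominates each remaining edge exactly once. A minimal sketch:

```python
def is_perfect_eds(edges, eds):
    # True iff every edge not in eds is dominated by exactly one edge of eds
    # (an edge dominates itself and all edges sharing an endpoint with it).
    chosen = {frozenset(e) for e in eds}
    for e in edges:
        fe = frozenset(e)
        if fe in chosen:
            continue  # edges in the set dominate themselves
        if sum(1 for d in chosen if d & fe) != 1:
            return False
    return True
```

On the path a-b-c-d, the single middle edge {b, c} is a perfect edge dominating set, while {a, b} is not (it leaves edge {c, d} undominated).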

• A novel DEA model based on uncertainty theory
Pages: 367 - 389
Abstract: In deterministic DEA models, precise values are assigned to input and output data, even though these data are intrinsically subject to some degree of uncertainty. Most studies in this area assume that inputs and outputs come with prior knowledge that enables one to use probability theory or fuzzy theory. In the absence of such data, one has to rely on experts' opinions, which can be considered a form of uncertainty. In this situation, the axiomatic approach of uncertainty theory initiated by Liu (Uncertainty theory. Berlin: Springer, 2007) can be an adequate and powerful tool. Applying this theory, Wen et al. (J Appl Math, 2014; Soft Comput 1987–1996, 2015) suggested an uncertain DEA model, which however has the disadvantage of pessimism. In this paper, we introduce another uncertain DEA model with the objective of acquiring the highest belief degree that the evaluated DMU is efficient. We also apply this model to the ranking of the evaluated DMUs. Implementation of the model on different illustrative examples reveals that the ranks of DMUs are almost stable in our model; that is, the rank of a DMU varies only slightly with the variation of minimum belief degrees. Our proposed model also compensates for the rather optimistic point of view in the Wen et al. model, which identifies all DMUs as efficient for higher belief degrees.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2652-7
Issue No: Vol. 264, No. 1-2 (2018)

• Dispatching algorithm for production programming of flexible job-shop
systems in the smart factory industry
• Authors: Miguel A. Ortíz; Leidy E. Betancourt; Kevin Parra Negrete; Fabio De Felice; Antonella Petrillo
Pages: 409 - 433
Abstract: In today's highly competitive and globalized markets, efficient use of production resources is necessary for manufacturing enterprises. In this research, the problem of scheduling and sequencing a manufacturing system is presented. A flexible job-shop sequencing problem is analyzed in detail. After formulating this problem mathematically, a new model is proposed. This problem is not only theoretically interesting but also practically relevant. An illustrative example is conducted to demonstrate the applicability of the proposed model.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2678-x
Issue No: Vol. 264, No. 1-2 (2018)

• Optimal inventory policies for deteriorating items with trapezoidal-type
demand patterns and maximum lifetimes under upstream and downstream trade
credits
• Authors: Jiang Wu; Jinn-Tsair Teng; Konstantina Skouri
Pages: 459 - 476
Abstract: In general, the demand rate over a product life cycle can be reasonably depicted by a trapezoidal-type pattern: it initially increases during the introduction and growth phases, then remains roughly constant in the maturity phase, and finally decreases in the decline phase. It is evident that perishable products deteriorate continuously over time and cannot be sold after their maximum lifetime. Thus, the deterioration rate of a product increases with time and is closely related to its maximum lifetime. Furthermore, it has been hard to obtain loans from banks since the global financial meltdown in 2008. Hence, over 80% of firms in the United Kingdom and the United States sell their products on various short-term, interest-free loans (i.e., trade credit) to customers. To incorporate these important facts, we develop an inventory model by (1) assuming the demand pattern is trapezoidal, (2) extending the deterioration rate to 100% as the maximum lifetime approaches, (3) using discounted cash-flow analysis to calculate all relevant costs, considering the effects of upstream and downstream trade credits, and (4) including the purchase cost in the total cost, which is omitted in previous studies. Then, the order quantity that maximizes the present value of the profit is uniquely determined. Finally, through numerical examples, managerial insights are provided.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2673-2
Issue No: Vol. 264, No. 1-2 (2018)
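The trapezoidal demand pattern described above can be written as a piecewise-linear rate; the parameter names below are illustrative, not the paper's notation:

```python
def trapezoidal_demand(t, t1, t2, T, a, b):
    # Demand rate rising linearly from a to peak b on [0, t1] (introduction
    # and growth), constant at b on [t1, t2] (maturity), then declining
    # linearly to zero on [t2, T] (decline).
    if t < t1:
        return a + (b - a) * t / t1
    if t <= t2:
        return b
    return b * (T - t) / (T - t2)
```

For example, with growth until t = 2, maturity until t = 6, and a cycle of length 10, the rate climbs from 1 to 5, holds at 5, then falls back to 0.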

• Correction to: Fair ticket pricing in public transport as a constrained
cost allocation game
• Authors: Ralf Borndörfer; Nam-Dũng Hoang
Pages: 541 - 541
Abstract: The authors wish to correct the acknowledgement on page 68 from: The work of Nam-Dũng Hoàng is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED).
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2731-9
Issue No: Vol. 264, No. 1-2 (2018)

• Correction to: Short-term manpower planning for MRT carriage maintenance
under mixed deterministic and stochastic demands
• Authors: Chia-Hung Chen; Shangyao Yan; Miawjane Chen
Pages: 543 - 543
Abstract: In the original article (Chen et al. 2010), the authors inadvertently did not reference their previous work to which this article (Chen et al. 2010) is related. The reference list has been updated with the addition of Chen et al. (2008) to reflect this. The authors apologize for any inconvenience they might have caused.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2703-0
Issue No: Vol. 264, No. 1-2 (2018)

• Correction to: Time symmetry of resource constrained project scheduling
with general temporal constraints and take-give resources
• Authors: Zdeněk Hanzálek; Přemysl Šůcha
Pages: 545 - 545
Abstract: Due to some technical issues with article HTML, Algorithm 1 appeared twice and there was no ILP formulation.
PubDate: 2018-05-01
DOI: 10.1007/s10479-017-2704-z
Issue No: Vol. 264, No. 1-2 (2018)

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
