Central European Journal of Operations Research   [SJR: 0.837]   [H-I: 17]   Hybrid journal (it can contain Open Access articles)   ISSN (Print) 1613-9178   ISSN (Online) 1435-246X   Published by Springer-Verlag
• Extending the multi-criteria decision making method DEX with numeric attributes, value distributions and relational models
• Authors: Nejc Trdin; Marko Bohanec
Pages: 1 - 41
Abstract: DEX is a qualitative multi-criteria decision analysis method. The method supports decision makers in making complex decisions based on multiple, possibly conflicting, attributes. The attributes in DEX have qualitative value scales and are structured hierarchically. The hierarchical topology allows the decision problem to be decomposed into simpler sub-problems. In DEX, alternatives are described with qualitative values, taken from the scales of the corresponding input attributes in the hierarchy. The evaluation of alternatives is performed bottom-up, using aggregation functions, which are defined for every aggregated attribute in the form of decision rules. DEX has been used in numerous practical applications, from everyday decision problems to decision problems in the financial and ecological domains. Based on this experience, we identified the need for three major methodological extensions to DEX: numeric attributes, probabilistic and fuzzy aggregation of values, and relational models. These extensions were motivated by users of the existing method and by the demands of complex decision problems, which require advanced decision-making approaches. In this paper, we introduce the three extensions by describing them formally, justifying their contributions to the decision-making process and illustrating them on a didactic example, which is followed throughout the paper.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0468-9
Issue No: Vol. 26, No. 1 (2018)
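The bottom-up, rule-based aggregation described in the abstract, and the proposed extension to value distributions, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the attributes, scales and decision rules (`RULES`, `evaluate`, `evaluate_dist`) are invented for the example.

```python
# A minimal sketch of DEX-style qualitative aggregation. Two input attributes
# with qualitative scales are aggregated by a decision-rule table; the
# probabilistic extension evaluates a value *distribution* over the input
# scales instead of a single crisp value.

from itertools import product

# Decision rules: (price, quality) -> overall evaluation (hypothetical example)
RULES = {
    ("low", "good"): "excellent", ("low", "poor"): "acceptable",
    ("high", "good"): "acceptable", ("high", "poor"): "unacceptable",
}

def evaluate(price, quality):
    """Crisp bottom-up evaluation of one alternative."""
    return RULES[(price, quality)]

def evaluate_dist(price_dist, quality_dist):
    """Probabilistic extension: inputs are distributions over the scales;
    the output is the induced distribution over the aggregate scale."""
    out = {}
    for (p, pp), (q, pq) in product(price_dist.items(), quality_dist.items()):
        v = RULES[(p, q)]
        out[v] = out.get(v, 0.0) + pp * pq
    return out

print(evaluate("low", "good"))
print(evaluate_dist({"low": 0.7, "high": 0.3}, {"good": 0.5, "poor": 0.5}))
```

In the distributional call, each combination of input values contributes its joint probability to the rule's output value, so the aggregate is itself a distribution over the qualitative scale.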

• Measuring inefficiency for specific inputs using data envelopment analysis: evidence from construction industry in Spain and Portugal
• Authors: Magdalena Kapelko
Pages: 43 - 66
Abstract: This article contributes to the efficiency literature by defining, within the data envelopment analysis framework, a directional distance function approach for measuring both technical and scale inefficiencies with regard to the use of individual inputs. The input-specific technical and scale inefficiencies are then aggregated in order to calculate overall inefficiency measures. The empirical application focuses on a large dataset of Spanish and Portuguese construction companies between 2002 and 2010 and accounts for three inputs: materials, labor and fixed assets. The results show, first, that for both Spanish and Portuguese construction companies, fixed assets are the most technically inefficient input. Second, the largest scale inefficiency concerns the utilization of the material input in both samples; the reason for this inefficiency is that firms tend to operate in the increasing-returns-to-scale portion of the technology set. Third, in both samples, large firms have the lowest input-specific technical inefficiencies, but the highest input-specific scale inefficiencies, compared to their small and medium-sized counterparts, and tend to suffer from decreasing returns to scale. Finally, in both samples, input-specific technical inefficiency under constant returns to scale increased during the recent financial crisis, mainly due to the increase in scale inefficiency.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0473-z
Issue No: Vol. 26, No. 1 (2018)

• Supply chain contracts for capacity decisions under symmetric and asymmetric information
• Authors: Onur Kaya; Serra Caner
Pages: 67 - 92
Abstract: The production capacity decision under random demand is an important factor that significantly affects supply chain profits. In decentralized supply chains, suppliers build capacity levels that are less than optimal for the total supply chain, since the supplier incurs all the cost and bears all the risk of the built capacity. To improve supply chain performance, we analyze supply chain contracts involving capacity decisions in a two-party supply chain composed of a single manufacturer and a single supplier. We analyze and compare four well-known contracts, namely the simple wholesale-price-only contract, the linear contract, the cost-sharing contract and the revenue-sharing contract, under symmetric and asymmetric information about the supplier's capacity-building cost. Choosing the contract and determining the optimal contract parameters can be difficult for the manufacturer, especially with incomplete information about the supplier. In the asymmetric-information models, we analyze the manufacturer's screening problem of designing a menu of contracts without exact knowledge of the supplier's capacity cost. We determine the optimal menu of contracts designed for both high- and low-cost suppliers and analyze the results through numerical experiments. Focusing on capacity decisions under random demand, we aim to answer three questions: (i) Which contracts coordinate the supply chain? (ii) Which contracts allow for any division of the supply chain's profit among the firms? (iii) Which contracts are worth adopting? We find the optimal contract parameters, determine the respective profits obtained by the supply chain members, and identify which contracts companies should prefer depending on the system parameters in different settings, by analyzing and comparing the efficiencies of the contracts.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0474-y
Issue No: Vol. 26, No. 1 (2018)
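The underinvestment effect the abstract starts from can be illustrated with a textbook-style capacity sketch. The modelling choices here (uniform demand, the specific prices) are our own assumptions for illustration, not the paper's model.

```python
# Illustrative sketch: capacity choice under demand D ~ U[0, d_max]. A supplier
# paid a wholesale price w per unit sold, who bears the full capacity cost c,
# builds less capacity than is optimal for the whole chain, which earns the
# retail price r per unit sold. All numbers are hypothetical.

def optimal_capacity(margin_per_unit, unit_capacity_cost, d_max):
    """Maximizer of margin*E[min(K, D)] - c*K for D ~ U[0, d_max]:
    the first-order condition is margin*(1 - K/d_max) = c."""
    if unit_capacity_cost >= margin_per_unit:
        return 0.0
    return d_max * (1.0 - unit_capacity_cost / margin_per_unit)

r, w, c, d_max = 10.0, 6.0, 3.0, 100.0       # retail price r > wholesale price w
k_chain = optimal_capacity(r, c, d_max)      # centralized benchmark
k_supplier = optimal_capacity(w, c, d_max)   # wholesale-price-only contract
print(k_chain, k_supplier)   # the supplier under-invests whenever w < r
```

Because the supplier's margin w is smaller than the chain's margin r, the supplier's critical ratio is lower and so is its capacity; contracts that share cost or revenue aim to close exactly this gap.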

• Inventory control in dual sourcing commodity procurement with price correlation
• Authors: Karl Inderfurth; Peter Kelle; Rainer Kleber
Pages: 93 - 119
Abstract: In this paper, we focus on the role of inventory management as a means for operational hedging by dual sourcing of commodities using a multi-period option contract and spot market. We consider a manufacturing company in a make-to-stock environment with uncertain product demand. We replace the common i.i.d. price assumption that is typical in operations management studies by the mean reverting price model, a more realistic spot price model with inter-temporal price–price correlation. Additionally, we address the case where the spot price in one period is correlated with the demand in the previous period (demand–price correlation). The contribution of the paper is threefold. First, we reveal that price–price correlation has a considerable impact on the structural properties of optimal stock-keeping policies. Furthermore, we isolate two main effects of correlation in spot-price dynamics when selecting policy parameters: a variability effect, which increases the benefits from stock-keeping and lessens the usage of the option contract, and a counteracting correlation effect that exploits persistence of low/high spot price incidences. Finally, in a numerical study we show under which circumstances disregarding the correlation can result in large performance losses.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0475-x
Issue No: Vol. 26, No. 1 (2018)
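The mean-reverting price dynamics the abstract contrasts with the common i.i.d. assumption can be sketched as a one-line recursion. Parameter values are made up for illustration.

```python
# Hedged sketch of mean-reverting spot-price dynamics:
#   p_{t+1} = p_bar + phi * (p_t - p_bar) + sigma * eps_t.
# With 0 < phi < 1, a high price today predicts a high price tomorrow, which
# is the inter-temporal price-price correlation the paper exploits.

import random

def simulate_spot(p0, p_bar, phi, sigma, n, rng=None):
    rng = rng or random.Random(0)
    path, p = [p0], p0
    for _ in range(n):
        p = p_bar + phi * (p - p_bar) + sigma * rng.gauss(0.0, 1.0)
        path.append(p)
    return path

# Without noise the price decays geometrically toward the long-run mean p_bar:
# the gap to the mean shrinks by the factor phi each period, which is the
# "persistence of low/high spot price incidences" mentioned in the abstract.
path = simulate_spot(p0=80.0, p_bar=50.0, phi=0.9, sigma=0.0, n=50)
print(path[0], round(path[-1], 2))
```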

• Authors: Tobias F. Rötheli
Pages: 121 - 133
Abstract: We investigate the circumstances in which business cycle forecasting is beneficial for business by addressing both the short-run and the long-run aspects. For an assessment of short-run forecasting we make a distinction between using publicly available information of cycle probabilities and the use of resources to sharpen this outlook. A sharpened forecast can pay off because it helps the firm to optimally select its output mix. For a long-run perspective we show that firms whose optimal level of operation varies with varying selling prices gain from an accurate assessment of the likelihood of the states of expansion and recession. Petroleum refining in the U.S. is econometrically studied as an exemplary industry. The results document cyclical regularities that indicate that forecasting is advantageous for firms in this industry.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0477-8
Issue No: Vol. 26, No. 1 (2018)

• A framework for sensitivity analysis of decision trees
• Authors: Bogumił Kamiński; Michał Jakubczyk; Przemysław Szufel
Pages: 135 - 159
Abstract: In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0479-6
Issue No: Vol. 26, No. 1 (2018)
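The kind of probability sensitivity analysis the framework performs can be illustrated on the smallest possible tree. This is a sketch with hypothetical payoffs, not the authors' tool: `best_action` and `stability_interval` are names we introduce for the example.

```python
# One-stage decision tree with two actions: a "risky" action with an uncertain
# payoff and a "safe" sure amount. We ask how far the success probability can
# be perturbed before the expected-value-maximizing strategy changes.

def expected_value(p_success, payoff_success, payoff_failure):
    return p_success * payoff_success + (1 - p_success) * payoff_failure

def best_action(p, risky=(100.0, -40.0), safe=30.0):
    """Return the EV-maximizing action for success probability p."""
    return "risky" if expected_value(p, *risky) >= safe else "safe"

def stability_interval(p0, step=1e-4):
    """Scan perturbations of p0 and report the interval on which the
    currently optimal action stays optimal (a crude robustness check)."""
    base = best_action(p0)
    lo = hi = p0
    while lo - step >= 0 and best_action(lo - step) == base:
        lo -= step
    while hi + step <= 1 and best_action(hi + step) == base:
        hi += step
    return base, lo, hi

# Here "risky" beats "safe" when 100p - 40(1-p) >= 30, i.e. p >= 0.5
print(stability_interval(0.7))
```

A wide stability interval means the strategy is robust to distributional uncertainty about p; pessimistic or optimistic perturbations, as in the paper, would scan only one direction.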

• Tight upper bounds for semi-online scheduling on two uniform machines with known optimum
• Authors: György Dósa; Armin Fügenschuh; Zhiyi Tan; Zsolt Tuza; Krzysztof Węsek
Pages: 161 - 180
Abstract: We consider a semi-online version of the problem of scheduling a sequence of jobs of different lengths on two uniform machines with given speeds 1 and s. Jobs are revealed one by one (each job has to be assigned before the next job is revealed), and the objective is to minimize the makespan. In the variant considered, the optimal offline makespan is known in advance. The most studied question for this online-type problem is to determine the optimal competitive ratio, that is, the worst-case ratio of the solution produced by an algorithm to the optimal offline solution. In this paper, we take a further step towards completing the answer to this question by determining the optimal competitive ratio for s between $$\frac{5 + \sqrt{241}}{12} \approx 1.7103$$ and $$\sqrt{3} \approx 1.7321$$ , one of the intervals that were still open. Namely, we present and analyze a compound algorithm achieving the previously known lower bounds.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0481-z
Issue No: Vol. 26, No. 1 (2018)
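The semi-online setting itself is easy to simulate. The toy rule below is not the paper's compound algorithm; it is only a sketch of how knowing the offline optimum can be used online, with a lower bound standing in for the true optimum, so the ratio it achieves on this instance carries no general guarantee.

```python
# Jobs arrive one by one, machine speeds are 1 and s, and the offline optimal
# makespan OPT is assumed known. This toy rule keeps the fast machine's
# completion time below a target r*OPT when possible, else uses the slow one.

def semi_online(jobs, s, opt, r):
    """Loads are total processing times; completion time = load / speed."""
    load_slow, load_fast = 0.0, 0.0   # speeds 1 and s (s >= 1)
    for p in jobs:
        if (load_fast + p) / s <= r * opt:
            load_fast += p
        else:
            load_slow += p
    return max(load_slow / 1.0, load_fast / s)

jobs, s = [3.0, 3.0, 2.0], 1.72                  # s inside the interval studied
opt = max(sum(jobs) / (1 + s), max(jobs) / s)    # standard lower bound as OPT
makespan = semi_online(jobs, s, opt, r=1.75)
print(round(makespan / opt, 3))   # empirical ratio achieved on this input
```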

• Investments in supplier-specific economies of scope with two different services and different supplier characters: two specialists
• Authors: Günter Fandel; Jan Trockel
Pages: 181 - 192
Abstract: Firms have to choose their market positions. Suppliers can offer a wide range of services as generalists, or they can act as specialists offering a small range of services. In this paper, based on Chatain/Zemsky (Manag Sci 53:550–565, 2007) and Chatain (Strateg Manag J 32:76–102, 2011), we analyse how supplier-specific economies of scope generated by investments can compensate for the loss caused by a non-optimal organisational structure (resource configuration) of production. These considerations are modelled as a non-cooperative game with one buyer and two suppliers. We show how the buyer can gain from supplier-specific economies of scope. In this case, the buyer will never split the orders between the two suppliers; that is, he should always place both orders with a single supplier if the tasks have similar characteristics and the investment costs of a supplier result in higher specific economies of scope relevant to the choice of the buyer. The size of the specific economies of scope determines with which of the suppliers the buyer will place both orders. However, if the investment costs of the suppliers are very high and/or the gains of the buyer are rather low, the pure strategy combination "no investments" for the two suppliers becomes the unique Nash equilibrium, in which case the buyer places each of the two orders with the supplier who is the specialist for it.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0483-x
Issue No: Vol. 26, No. 1 (2018)
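The equilibrium logic in the abstract's last sentence can be checked mechanically with a small pure-Nash finder. The payoff numbers below are hypothetical, not from the paper; they only encode "investment costs exceed the gains".

```python
# A 2x2 one-shot game: each specialist supplier decides whether to invest in
# supplier-specific economies of scope. With high investment costs relative
# to the gains, ("no", "no") is the unique pure Nash equilibrium, mirroring
# the abstract's finding.

from itertools import product

def pure_nash(payoffs, actions=("invest", "no")):
    """payoffs[(a1, a2)] = (u1, u2); return all pure-strategy equilibria."""
    eqs = []
    for a1, a2 in product(actions, actions):
        u1, u2 = payoffs[(a1, a2)]
        if all(payoffs[(b1, a2)][0] <= u1 for b1 in actions) and \
           all(payoffs[(a1, b2)][1] <= u2 for b2 in actions):
            eqs.append((a1, a2))
    return eqs

high_cost = {   # investing costs more than the extra business it wins
    ("invest", "invest"): (-2.0, -2.0), ("invest", "no"): (-1.0, 1.0),
    ("no", "invest"): (1.0, -1.0),      ("no", "no"):     (0.0, 0.0),
}
print(pure_nash(high_cost))   # [('no', 'no')]
```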

• Time to dispense with the p-value in OR?
• Authors: Marko Hofmann; Silja Meyer-Nieberg
Pages: 193 - 214
Abstract: Null hypothesis significance testing is the standard procedure of statistical decision making, and p-values are the most widespread decision criteria of inferential statistics, both in science in general and in operations research in particular. p-values are of paramount importance in the life and human sciences, and dominate statistical summaries in the natural and technical sciences as well as in operations research, a domain in which the p-value seems to be a common denominator for decision making based on samples. Yet the use of significance testing in the analysis of research data has been criticized by numerous statisticians, continuously, for almost 100 years. This criticism was recently (March 7, 2016) given official status by a statement from the American Statistical Association on p-values. Is it time to dispense with the p-value in OR? The answer depends on many factors, including the research objective, the research domain, and, especially, the amount of information provided in addition to the p-value. Despite this dependence on context, three conclusions can be drawn that should concern the operational analyst. First, p-values can legitimately cast doubt on a null hypothesis or its underlying assumptions, but they are only a first step of analysis which, standing alone, lacks expressive power. Second, the statistical layman almost inescapably misinterprets the evidentiary value of p-values. Third and foremost, p-values are an inadequate choice for a succinct executive summary of statistical evidence for or against a research question. In statistical summaries, confidence intervals of standardized effect sizes provide much more information than p-values without requiring much more space.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0484-9
Issue No: Vol. 26, No. 1 (2018)
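The abstract's closing recommendation, reporting a confidence interval for a standardized effect size rather than a bare p-value, can be sketched directly. The data are made up, and the standard error used is a common normal-approximation formula for Cohen's d, not anything specific to this paper.

```python
# Confidence interval for Cohen's d between two samples, using the pooled
# standard deviation and an approximate standard error for d.

import math
from statistics import mean, stdev

def cohens_d_ci(x, y, z=1.96):
    n1, n2 = len(x), len(y)
    # pooled standard deviation of the two samples
    sp = math.sqrt(((n1 - 1) * stdev(x) ** 2 + (n2 - 1) * stdev(y) ** 2)
                   / (n1 + n2 - 2))
    d = (mean(x) - mean(y)) / sp
    # approximate standard error of d (textbook normal approximation)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

treatment = [5.1, 6.2, 5.8, 6.5, 5.9, 6.1, 5.5, 6.3]   # hypothetical data
control = [4.8, 5.0, 5.4, 4.9, 5.6, 5.2, 5.1, 4.7]
d, (lo, hi) = cohens_d_ci(treatment, control)
print(round(d, 2), (round(lo, 2), round(hi, 2)))
```

Unlike a p-value, the interval conveys both the magnitude of the effect and the precision with which it is estimated, which is the abstract's point.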

• Heuristic algorithms for the minmax regret flow-shop problem with interval processing times
• Authors: Michał Ćwik; Jerzy Józefczyk
Pages: 215 - 238
Abstract: An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as the criterion is considered. The parametric uncertainty investigated is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty; consequently, a minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First, a greedy procedure is used for calculating the criterion's value, as this calculation is itself an NP-hard problem. Moreover, a lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously developed heuristic algorithms based on the evolutionary and middle-interval approaches. The computational experiments conducted showed the advantage of the constructive heuristic algorithm with regard to both the criterion value and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0485-8
Issue No: Vol. 26, No. 1 (2018)
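The minmax regret objective itself can be made concrete by brute force on a toy instance. This sketch evaluates only extreme scenarios (every processing time at a bound), a common restriction in this literature, and enumerates all permutations, which is exactly the blow-up the paper's heuristics are designed to avoid; the instance data are invented.

```python
# Minmax regret on a tiny 2-machine permutation flow-shop with interval
# processing times. Regret of a schedule under a scenario = its makespan
# minus the optimal makespan for that scenario; we minimize the maximum
# regret over the extreme scenarios.

from itertools import permutations, product

def makespan(perm, p):
    """p[(job, machine)] = processing time; classic 2-machine recursion."""
    c1 = c2 = 0.0
    for j in perm:
        c1 += p[(j, 0)]
        c2 = max(c1, c2) + p[(j, 1)]
    return c2

def minmax_regret(intervals, jobs):
    # extreme scenarios: every processing time at its lower or upper bound
    keys = list(intervals)
    scenarios = [dict(zip(keys, choice))
                 for choice in product(*[intervals[k] for k in keys])]
    best = None
    for perm in permutations(jobs):
        regret = max(makespan(perm, s)
                     - min(makespan(q, s) for q in permutations(jobs))
                     for s in scenarios)
        if best is None or regret < best[1]:
            best = (perm, regret)
    return best

intervals = {(0, 0): (2, 4), (0, 1): (1, 3),   # (job, machine): (lo, hi)
             (1, 0): (3, 3), (1, 1): (2, 5)}
print(minmax_regret(intervals, jobs=[0, 1]))
```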

• AHP model for performance evaluation of employees in a Czech management consulting company
• Authors: Lucie Lidinska; Josef Jablonsky
Pages: 239 - 258
Abstract: The article focuses on an application of the analytic hierarchy process (AHP) to the performance evaluation of employees of a management consulting company. Performance evaluation of employees is a complex task that must take into account various aspects and evaluation criteria. Moreover, each employee of the company participates in several projects during the period under consideration, and his or her overall performance is an aggregation of individual performances in the particular projects. This aggregation is based on project weights that usually depend on the man-days the employees spent on the projects or on their financial contributions. AHP is a tool for structuring and analyzing complex decision-making problems and seems to be an ideal tool for this task. The proposed AHP model combines relative and absolute measurement and allows overall performance scores of the employees to be derived easily and quickly through a simple MS Excel tool, without the need for any specialized software.
PubDate: 2018-03-01
DOI: 10.1007/s10100-017-0486-7
Issue No: Vol. 26, No. 1 (2018)
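The priority computation at the heart of any AHP model can be sketched in a few lines. The criteria and judgments below are illustrative inventions, not the company's actual matrix; the row geometric-mean method used here is a standard, spreadsheet-friendly approximation of the principal-eigenvector weights.

```python
# Derive criterion weights from a pairwise-comparison matrix on Saaty's 1-9
# scale using the row geometric-mean method.

import math

def ahp_weights(matrix):
    """matrix[i][j] = how much more important criterion i is than j."""
    n = len(matrix)
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical evaluation criteria: quality of work, teamwork, punctuality
judgments = [
    [1.0, 3.0, 5.0],    # quality vs (quality, teamwork, punctuality)
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
weights = ahp_weights(judgments)
print([round(w, 3) for w in weights])
```

Per-project scores weighted this way, then aggregated by project weights (man-days or financial contribution), give the overall performance score the abstract describes.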

• Quantifying and mitigating inefficiency in information acquisition under competition
• Authors: Jialu Li; Meiying Yang; Xuan Zhao
Abstract: This paper analyzes a horizontally differentiated product market in which firms acquire costly information about the stochastic market. Our results provide guidelines to government agencies on regulating information acquisition. We show that firms overinvest in acquiring information only when information acquisition is particularly cost-effective. Otherwise, underinvestment could occur even under very intense horizontal competition. Moreover, it is underinvestment in information acquisition that is more damaging to firms. Using a linear cost function, we demonstrate that the loss in return on investment caused by horizontal competition can be at least one-third of the first-best return on investment. If the degree of competitive intensity and demand variation decrease, or the marginal cost increases, information acquisition will become increasingly inefficient. We further find that firms benefit from agreeing in advance to exert the same investment level and strategically invest less than the competitive equilibrium level, which can benefit consumers as well. Industry associations are therefore recommended to facilitate effective communication between firms.
PubDate: 2018-02-26
DOI: 10.1007/s10100-018-0529-8

• Pricing decisions in marketing channels in the presence of optional contingent products
• Authors: Peter M. Kort; Sihem Taboubi; Georges Zaccour
Abstract: The technological developments observed in the last two decades contributed to the digitalization of products and the introduction of various mobile devices designed for the consumption of this digital content. Many online retailers launched their own mobile devices, which had a direct effect on their multi-product pricing strategies, but also an effect on the other channel members' pricing decisions (i.e., digital-content providers). In many industries, these developments resulted in switching from traditional wholesale pricing to Revenue-Sharing Contracts (RSC), involving a shift of control over retail prices in the channel, a situation that was not always easily accepted by channel members. We examine a manufacturer-retailer framework where the manufacturer sells a base product in two formats: a tangible product sold directly to consumers and a digital format sold via an online retailer. The latter also sells an optional contingent product, a device used to consume the digital product. We investigate two questions: the first pertains to the contingent product's impact on firms' pricing strategies; the second is whether the manufacturer is interested in implementing an RSC and, if so, whether this pricing model suits the retailer and consumers. Our main results are as follows: (1) The presence of the contingent product leads to a higher retail price for the digital base product and negatively affects the demand for the tangible product format. (2) The manufacturer is interested in an RSC only if it receives a sufficiently large part of the digital-product revenue, but the retailer is almost always interested in this pricing model. (3) The double marginalization effect could benefit the manufacturer.
PubDate: 2018-02-22
DOI: 10.1007/s10100-018-0527-x

• Metaheuristic search techniques for multi-objective and stochastic problems: a history of the inventions of Walter J. Gutjahr in the past 22 years
• Authors: Karl F. Doerner; Vittorio Maniezzo
Abstract: This paper is a survey of the research contributions made by Walter J. Gutjahr during his career so far, and provides a classification of his areas of research, along with a discussion of the results presented in his most significant publications. Although works are divided into theoretical and application-oriented contributions, linkages among these subsets are also identified.
PubDate: 2018-02-08
DOI: 10.1007/s10100-018-0522-2

• Systemic risk and copula models
• Authors: Georg Ch. Pflug; Alois Pichler
Abstract: Systemic risk describes the phenomenon that dependency adds a specific component of risk to a system or network of (financial) institutions as a whole, a component that would not be present if the institutions were independent of each other. This paper introduces the concept of systemic risk measures. We describe and study their behavior as a function of the copula that links the loss variables of the institutions in the network. Further, we define stochastic order relations on copulas and relate them to systemic risk measures.
PubDate: 2018-02-07
DOI: 10.1007/s10100-018-0525-z
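Why the copula matters for systemic risk can be shown with a standard two-extremes example (not taken from the paper): two loss variables with identical marginals have very different joint tails under the independence copula versus the comonotone (perfect-dependence) copula.

```python
# P(both uniforms exceed the q-quantile) is (1-q)^2 under independence but
# 1-q under comonotonicity, so perfect dependence makes joint extreme losses
# two orders of magnitude more likely at q = 0.99.

def joint_tail_prob(q, copula):
    """P(U > q, V > q) for uniform marginals coupled by the given copula."""
    if copula == "independence":
        return (1 - q) ** 2
    if copula == "comonotone":   # U = V almost surely
        return 1 - q
    raise ValueError(copula)

q = 0.99   # a 1-in-100 stress level
print(joint_tail_prob(q, "independence"))   # ~1e-4: joint crises are rare
print(joint_tail_prob(q, "comonotone"))     # ~1e-2: one crisis implies both
```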

• Stochastic contagion models without immunity: their long term behaviour and the optimal level of treatment
• Authors: Raimund M. Kovacevic
Abstract: In this paper we analyze two stochastic versions of one of the simplest classes of contagion models, the so-called SIS models. Several formulations of such models, based on stochastic differential equations, have recently been discussed in the literature, mainly with a focus on the existence and uniqueness of stationary distributions. With applicability in view, the present paper uses the Fokker–Planck equations related to SIS stochastic differential equations not only to derive basic facts, but also to derive explicit expressions for stationary densities and further characteristics related to the asymptotic behaviour. Two types of models are analyzed here: the first is a version of the SIS model with external parameter noise and saturated incidence; the second is based on the Kramers–Moyal approximation of the simple SIS Markov chain model, which leads to a model with scaled additive noise. In both cases we analyze the asymptotic behaviour, which leads to limiting stationary distributions in the first case and limiting quasi-stationary distributions in the second. Finally, we use the derived properties to analyze the decision problem of choosing the cost-optimal level of treatment intensity.
PubDate: 2018-02-07
DOI: 10.1007/s10100-018-0526-y
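The noiseless skeleton underlying the paper's stochastic models is the classical SIS ordinary differential equation, which is easy to integrate and already shows the endemic level that treatment shifts. Parameter values below are illustrative, and treatment can be read as increasing the recovery rate gamma.

```python
# Deterministic Euler sketch of SIS dynamics for the infected fraction i:
#   di/dt = beta * i * (1 - i) - gamma * i.
# Without immunity, the system settles at the endemic level 1 - gamma/beta
# whenever beta > gamma; otherwise the infection dies out.

def simulate_sis(i0, beta, gamma, dt=0.01, steps=20000):
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1.0 - i) - gamma * i)
    return i

beta, gamma = 0.5, 0.2
endemic = 1.0 - gamma / beta          # = 0.6 for these rates
print(round(simulate_sis(0.05, beta, gamma), 4))
```

The stochastic versions in the paper perturb these dynamics with noise, which is why the long-run object of interest becomes a stationary (or quasi-stationary) distribution around this level rather than a single point.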

• Solving routing problems with pairwise synchronization constraints
• Authors: Sophie N. Parragh; Karl F. Doerner
Abstract: Pairwise route synchronization constraints are commonly encountered in the field of service technician routing and scheduling and in the area of mobile care. Pairwise route synchronization refers to constraints that require that two technicians or home care workers visit the same location at exactly the same time. We consider constraints of this type in the context of the well-known vehicle routing problem with time windows and a generic service technician routing and scheduling problem. Different approaches for dealing with the problem of pairwise route synchronization are compared and several ways of integrating a synchronization component into a metaheuristic algorithm tailored to the original problems are analyzed. When applied to benchmark instances from the literature, our algorithm matches almost all available optimal values and it produces several new best results for the remaining instances.
PubDate: 2018-02-07
DOI: 10.1007/s10100-018-0520-4

• Large-step interior-point algorithm for linear optimization based on a new wide neighbourhood
• Authors: Zsolt Darvay; Petra Renáta Takács
Abstract: Interior-point algorithms can be classified in multiple ways; one classification considers the length of the step. In this way, we can speak about large-step and short-step methods, which work in different neighbourhoods of the central path. The large-step algorithms work in a wide neighbourhood, while the short-step ones determine new iterates that lie in a smaller neighbourhood. Although the large-step algorithms are more efficient in practice, the theoretical complexity of the short-step ones is generally better. Ai and Zhang introduced a large-step interior-point method for linear complementarity problems using a wide neighbourhood of the central path, which has the same complexity as the best short-step methods. We present a new wide neighbourhood of the central path. We prove that the obtained large-step primal–dual interior-point method for linear programming has the same complexity as the best short-step algorithms.
PubDate: 2018-02-05
DOI: 10.1007/s10100-018-0524-0

• Finiteness of the quadratic primal simplex method when s-monotone index selection rules are applied
Abstract: This paper considers the primal quadratic simplex method for linearly constrained convex quadratic programming problems. Finiteness of the algorithm is proven when $${\mathbf {s}}$$ -monotone index selection rules are applied. The proof is rather general: it shows that any index selection rule that only relies on the sign structure of the reduced costs/transformed right hand side vector and for which the traditional primal simplex method is finite, is necessarily finite as well for the primal quadratic simplex method for linearly constrained convex quadratic programming problems.
PubDate: 2018-02-05
DOI: 10.1007/s10100-018-0523-1

• An alternative approach to solving cost minimization problem with Cobb–Douglas technology
• Authors: Vedran Kojić; Zrinka Lukač
Abstract: We propose a new method for solving the production cost minimization problem with Cobb–Douglas technology. The method is based on the weighted arithmetic–geometric mean inequality (weighted AM–GM) and does not use calculus. In comparison to methods that use calculus (such as the substitution method or the Lagrange multiplier method), our method derives the global minimum cost and the unique global minimizer for the Cobb–Douglas production function in a new and easier manner. Another benefit of the method is that it yields the exact formula for the optimal level of inputs, as well as the formula for the minimum cost, in the general case of $$n>2$$ inputs. It is also important to emphasize that our method does not require checking the first- and second-order conditions, a necessary step in methods using calculus. The result is first derived for the case of two inputs and then generalized to the problem with $$n>2$$ inputs.
PubDate: 2018-01-22
DOI: 10.1007/s10100-017-0519-2
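For the two-input case, the closed-form solution that such a derivation produces is a standard result and can be stated and checked directly. The notation below (w1, w2, a, b, y) is ours, not necessarily the paper's.

```python
# Closed-form cost minimization with Cobb-Douglas technology:
#   minimize  w1*x1 + w2*x2   subject to   x1**a * x2**b = y.
# The optimal inputs are
#   x1* = (y * (a*w2/(b*w1))**b) ** (1/(a+b)),
#   x2* = (y * (b*w1/(a*w2))**a) ** (1/(a+b)).

def cobb_douglas_cost_min(w1, w2, a, b, y):
    x1 = (y * (a * w2 / (b * w1)) ** b) ** (1.0 / (a + b))
    x2 = (y * (b * w1 / (a * w2)) ** a) ** (1.0 / (a + b))
    return x1, x2, w1 * x1 + w2 * x2

w1, w2, a, b, y = 2.0, 3.0, 0.5, 0.5, 10.0   # illustrative parameter values
x1, x2, cost = cobb_douglas_cost_min(w1, w2, a, b, y)
print(round(x1, 3), round(x2, 3), round(cost, 3))
```

A quick sanity check: the returned inputs satisfy the production constraint exactly, and perturbing x1 while staying feasible can only raise the cost, consistent with a global minimum.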

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK