Islamic Africa
   Full-text available via subscription
   ISSN (Print) 2333-262X - ISSN (Online) 2154-0993
   Published by Project MUSE
  • Multi-stage stochastic optimization: the distance between stochastic scenario processes
    • Abstract: Approximation techniques are challenging, important and very often irreplaceable solution methods for multi-stage stochastic optimization programs. Applications of scenario process approximation include financial and investment planning, inventory control, energy production and trading, electricity generation planning, pension fund management, supply chain management and similar fields. In multi-stage stochastic optimization problems the amount of stage-wise available information is crucial. While some authors deal with filtration distances, in this paper we consider the concepts of nested distributions and their distances, which allow us to keep the setup purely distributional while still introducing information and information constraints. We also introduce the distance between a stochastic process and a tree, and we generalize the concept of nested distance to the case of infinite trees, i.e. the case of two stochastic processes given by their continuous distributions. This is a step towards a new method of distribution quantization that is well suited to multi-stage stochastic optimization programs, as it takes into account both the stochastic process and the stage-wise information.
      PubDate: 2015-01-01
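      The nested distance builds on transportation (Wasserstein) distances between the conditional scenario distributions at each stage. As a rough, self-contained illustration of that single-stage building block only, the following Python sketch computes the Wasserstein distance between two hypothetical discrete scenario sets with scipy; it does not implement the paper's nested distance.

        # Wasserstein distance between two discrete one-stage scenario sets.
        # The values and probabilities below are made-up illustrative data.
        import numpy as np
        from scipy.stats import wasserstein_distance

        values_p = np.array([90.0, 100.0, 110.0])   # scenario values, approximation P
        probs_p  = np.array([0.25,  0.50,  0.25])
        values_q = np.array([95.0, 105.0])           # scenario values, approximation Q
        probs_q  = np.array([0.5,   0.5])

        d = wasserstein_distance(values_p, values_q,
                                 u_weights=probs_p, v_weights=probs_q)
        print(f"single-stage Wasserstein distance: {d:.4f}")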
       
  • Risk and reward of home equity borrowing for investment in Canada, a stochastic analysis
    • Abstract: Under Canadian tax law, one way of transforming non-tax-deductible interest expenses (on a personal mortgage) into tax-deductible ones is to borrow against home equity to make investments. A re-advanceable mortgage is a product in which the mortgagor immediately re-borrows principal as it is paid. We assume the re-borrowed funds are invested into a single risky asset and study the risk associated with this strategy to provide an accurate description of the mortgagor's position. Our model accommodates stochastically changing interest rates and housing prices. We find that immediately borrowing from the increase in housing price decreases the expected mortgage payoff time, but also significantly increases the risk of taking a much longer time to pay off the mortgage. The risk of this strategy is very sensitive to changes in housing price volatility. Additionally, both the expected mortgage payoff time and the risk of taking a long time to pay off the mortgage increase with interest rate volatility. To reduce both the expected mortgage payoff time and the risk of a longer payoff time, the asset chosen should be (i) negatively correlated with housing price; and (ii) positively correlated with interest rates. From the case study using historical data we find that the business cycle has an effect on the strategy's performance and a reasonable decision rule would be to (i) implement this strategy at the beginning or in the middle of an expansionary period; and (ii) delay its implementation otherwise. Results of this study are relevant to homeowners, financial planners and policymakers.
      PubDate: 2015-01-01
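      The distribution of the payoff time under such a borrow-to-invest strategy can be explored with simple Monte Carlo simulation. The sketch below is a deliberately crude stand-in for the paper's model: it omits the re-advanceable mechanics and the housing-price process, uses a mean-reverting short rate and a lognormal risky asset, and every parameter value is hypothetical.

        # Crude Monte Carlo of mortgage payoff time: stochastic borrowing rate
        # versus an investment account following geometric Brownian motion.
        # All parameters are hypothetical illustrations only.
        import numpy as np

        rng = np.random.default_rng(0)
        n_paths, n_months = 2_000, 360
        balance0, payment = 300_000.0, 1_800.0              # initial debt, monthly payment
        r0, kappa, r_bar, sigma_r = 0.04, 0.3, 0.05, 0.01    # mean-reverting short rate
        mu_s, sigma_s = 0.07, 0.20                           # risky-asset drift / volatility
        invest0 = 50_000.0                                   # amount borrowed and invested
        dt = 1.0 / 12.0

        payoff_month = np.full(n_paths, n_months)
        for p in range(n_paths):
            bal, inv, r = balance0, invest0, r0
            for t in range(n_months):
                r += kappa * (r_bar - r) * dt + sigma_r * np.sqrt(dt) * rng.standard_normal()
                bal = bal * (1 + max(r, 0.0) * dt) - payment
                inv *= np.exp((mu_s - 0.5 * sigma_s ** 2) * dt
                              + sigma_s * np.sqrt(dt) * rng.standard_normal())
                if inv >= bal:        # net position non-negative: the debt could be cleared
                    payoff_month[p] = t + 1
                    break

        print("expected payoff time (years):", round(payoff_month.mean() / 12, 2))
        print("95th percentile (years):", round(np.percentile(payoff_month, 95) / 12, 2))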
       
  • The evolution of cooperation with different fitness functions using probabilistic cellular automata
    • Abstract: In this work, we use probabilistic cellular automata to model a population in which the cells represent individuals that interact with their neighbors by playing a game. The games may take the form of either Prisoner's Dilemma or Hawk-Dove (Snow-Drift, Chicken) games, and may be considered a competition for a benefit or resource. The result of each game gives each player a payoff, which is deducted from its amount of life. The advantage of such an approach is that each player plays against different individuals separately, rather than in a multi-player matrix game. The probability of an individual taking a certain action is considered its strategy, and each action returns a payoff to the individual. The purpose of the work is to test different fitness functions for evaluating the generation of new individuals, which inherit the characteristics of the best-adapted individuals in a neighborhood, i.e., those with higher fitness values.
      PubDate: 2015-01-01
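      A toy version of such a model is easy to write down. The Python sketch below runs a probabilistic cellular automaton on a torus in which each cell cooperates with its own probability, plays a Prisoner's Dilemma against its four neighbours, and then copies the strategy of the best-scoring neighbour with some mutation noise; the payoffs, update rule and fitness criterion are illustrative choices, not the paper's exact model.

        # Toy probabilistic cellular automaton playing a Prisoner's Dilemma.
        import numpy as np

        rng = np.random.default_rng(1)
        T, R, P, S = 5.0, 3.0, 1.0, 0.0   # PD payoffs: temptation, reward, punishment, sucker
        size, steps, noise = 30, 50, 0.05
        coop_prob = rng.uniform(0.0, 1.0, (size, size))   # each cell's strategy

        def play_round(coop_prob):
            acts = rng.random(coop_prob.shape) < coop_prob   # True = cooperate
            payoff = np.zeros_like(coop_prob)
            for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = np.roll(acts, shift, axis=(0, 1))
                payoff += np.where(acts, np.where(nb, R, S), np.where(nb, T, P))
            return acts, payoff

        for _ in range(steps):
            acts, payoff = play_round(coop_prob)
            # copy the strategy of the best-scoring von Neumann neighbour (incl. self)
            best, best_pay = coop_prob.copy(), payoff.copy()
            for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb_pay = np.roll(payoff, shift, axis=(0, 1))
                nb_strat = np.roll(coop_prob, shift, axis=(0, 1))
                better = nb_pay > best_pay
                best = np.where(better, nb_strat, best)
                best_pay = np.where(better, nb_pay, best_pay)
            mutate = rng.random(coop_prob.shape) < noise
            coop_prob = np.where(mutate, rng.uniform(0, 1, coop_prob.shape), best)

        print("mean cooperation probability after", steps, "steps:", coop_prob.mean().round(3))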
       
  • An integrated approach based on DEA and AHP
    • Abstract: This research proposes a theoretical framework to assess the performance of Decision Making Units (DMUs) by integrating the Data Envelopment Analysis (DEA) and Analytic Hierarchy Process (AHP) methodologies. To this end, we consider two sets of weights of inputs and outputs under hierarchical structures of data. The first set of weights represents the best attainable level of efficiency for each DMU in comparison to other DMUs. This level of efficiency can be less than or equal to that obtained from a traditional DEA model. The second set of weights reflects the priority weights of inputs and outputs for all DMUs, obtained using AHP, in the DEA framework. We assess the performance of each DMU in terms of its relative closeness to the priority weights of inputs and outputs. For this purpose, we develop a parametric distance model to measure the deviations between the two sets of weights. By increasing the value of a parameter in a defined range of efficiency loss, we explore how much the deviations can be improved to achieve the desired goals of the decision maker. This may result in various ranking positions for each DMU in comparison to the other DMUs. To highlight the usefulness of the proposed approach, a case study assessing the financial performance of eight listed companies in the steel industry of China is carried out.
      PubDate: 2015-01-01
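      For readers unfamiliar with DEA, the first set of weights corresponds to the classical CCR multiplier model, which can be solved as a small linear program for each DMU. The Python sketch below does exactly that with scipy on made-up data; the paper's contribution, tying these multipliers to AHP priority weights through a parametric distance model, is not reproduced here.

        # Input-oriented CCR (DEA) multiplier model for one DMU, solved as an LP.
        import numpy as np
        from scipy.optimize import linprog

        # rows = DMUs; columns = inputs X and outputs Y (hypothetical data)
        X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
        Y = np.array([[1.0, 2.0], [1.0, 4.0], [1.0, 1.0], [1.0, 3.0], [1.0, 2.0]])
        n, m = X.shape          # number of DMUs, number of inputs
        s = Y.shape[1]          # number of outputs

        def ccr_efficiency(k):
            # variables: output weights u (length s), then input weights v (length m)
            c = np.concatenate([-Y[k], np.zeros(m)])            # maximise u'y_k
            A_ub = np.hstack([Y, -X])                            # u'y_j - v'x_j <= 0
            b_ub = np.zeros(n)
            A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]  # v'x_k = 1
            b_eq = np.array([1.0])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (s + m))
            return -res.fun

        for k in range(n):
            print(f"DMU {k}: CCR efficiency = {ccr_efficiency(k):.3f}")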
       
  • The maximum ratio clique problem
    • Abstract: This paper introduces a fractional version of the classical maximum weight clique problem, the maximum ratio clique problem, which is to find a maximal clique that has the largest ratio of the benefit to the cost weights associated with the clique's vertices. NP-completeness of the decision version of the problem is established, and three solution methods are proposed. The results of numerical experiments with standard graph instances, as well as with real-life instances arising in finance and energy systems, are reported.
      PubDate: 2015-01-01
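      On a tiny instance the problem can be illustrated by brute force: enumerate the maximal cliques and score each by its benefit-to-cost ratio. The Python sketch below does this with networkx on a made-up graph and made-up vertex weights; it is exponential in the worst case and is not one of the paper's three solution methods.

        # Brute-force maximum ratio clique on a small, hypothetical graph.
        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5), (2, 4)])
        benefit = {1: 4.0, 2: 3.0, 3: 5.0, 4: 2.0, 5: 6.0}
        cost    = {1: 2.0, 2: 1.0, 3: 4.0, 4: 1.0, 5: 3.0}

        best_clique, best_ratio = None, float("-inf")
        for clique in nx.find_cliques(G):                 # yields maximal cliques only
            ratio = sum(benefit[v] for v in clique) / sum(cost[v] for v in clique)
            if ratio > best_ratio:
                best_clique, best_ratio = sorted(clique), ratio

        print("maximal clique with largest benefit/cost ratio:", best_clique, round(best_ratio, 3))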
       
  • A comparison of Bayesian, Hazard, and Mixed Logit model of bankruptcy prediction
    • Abstract: The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and business cycles on the forecasting accuracy of bankruptcy prediction models. A misclassification can result in an erroneous prediction with prohibitive costs to firms, investors, and the economy. A salient feature of our study is that our analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Bankruptcy Research Database in the U.S. is used to evaluate the relative performance of the three most commonly used bankruptcy prediction models: Bayesian, Hazard, and Mixed Logit. Our results indicate that the choice of the cut-off point and the sampling procedure affect the rankings of the three models. We show that the empirical cut-off point estimated from the training sample results in the lowest misclassification costs for all three models. When tests are conducted using randomly selected samples, and all specifications of type I costs over type II costs are taken into account, the Mixed Logit model performs slightly better than the Bayesian model and much better than the Hazard model. However, when tests are conducted across business-cycle samples, the Bayesian model has the best performance and much better predictive power in recent business cycles. This study extends recent research comparing the performance of bankruptcy prediction models by identifying under what conditions a model performs better. It also addresses the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing corporate failure risk.
      PubDate: 2015-01-01
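      The role of the cut-off point is easy to demonstrate numerically: given model scores and observed outcomes, compare the expected misclassification cost of a naive 0.5 cut-off with the empirical cost-minimising cut-off on a training sample. The Python sketch below does this on simulated scores; the asymmetric type I / type II cost ratio and the score distribution are hypothetical and unrelated to the paper's data.

        # Cut-off choice and misclassification cost on simulated bankruptcy scores.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 2_000
        y = rng.binomial(1, 0.1, n)                       # 1 = firm went bankrupt
        # simulated model scores: bankrupt firms tend to score higher
        p_hat = np.clip(rng.normal(0.2 + 0.4 * y, 0.15), 0.0, 1.0)

        cost_type1, cost_type2 = 10.0, 1.0                # missing a bankruptcy is costlier

        def expected_cost(cutoff):
            pred = p_hat >= cutoff
            type1 = np.sum((y == 1) & ~pred)              # bankrupt classified as healthy
            type2 = np.sum((y == 0) & pred)               # healthy classified as bankrupt
            return (cost_type1 * type1 + cost_type2 * type2) / n

        grid = np.linspace(0.01, 0.99, 99)
        best_cut = grid[np.argmin([expected_cost(c) for c in grid])]
        print("cost at cut-off 0.5:      ", round(expected_cost(0.5), 4))
        print("empirical best cut-off:   ", round(best_cut, 2))
        print("cost at empirical cut-off:", round(expected_cost(best_cut), 4))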
       
  • Special issue on computational techniques and applications
    • PubDate: 2015-01-01
       
  • The impact of customer behavior models on revenue management systems
    • Abstract: Revenue management (RM) can be considered an application of operations research in the transportation industry. For these service companies, adjusting supply to demand is a difficult task. In order to maximize revenue, RM systems model demand behavior using historical data. Usually, parametric methods are applied to estimate the probability of choosing a product at a given time. However, parameter estimation becomes challenging when we need to deal with constrained data. In this research, we evaluate the performance of a revenue management system when a non-parametric method for choice probability estimation is chosen. The outcomes of this method are compared in terms of total expected revenue using synthetic data.
      PubDate: 2015-01-01
       
  • Stochastic model for energy commercialisation of small hydro plants in the Brazilian energy market
    • Abstract: This paper presents a stochastic model for energy commercialisation strategies of small hydro plants (SHPs) in the Brazilian electricity market. The model aims to find the maximum expected revenue of the generation company, considering the main energy market regulations in Brazil, such as the penalty for insufficient energy certificates, the seasonality of energy certificates and the stochastic processes of future energy prices and plant generation. The problem is formulated as a multi-stage linear stochastic programming model, where the stochastic variables are the energy futures prices, the system hydro generation and the SHP generation in the portfolio. Because of the large number of time steps in this model, methods with sampling strategies are necessary to identify a good solution; we therefore apply the Stochastic Dual Dynamic Programming algorithm. A case example is presented to analyse certain results of the model, which considers a generation company with a set of SHPs that can sell energy through contracts with periods of 6–24 months.
      PubDate: 2015-01-01
       
  • Imperfect production process with learning and forgetting effects
    • Abstract: Wright's learning curve (WLC) assumes every unit of production has an acceptable level of quality, which is not the case in many production environments. Many studies have reported that a production process may go out of control, thereby generating defective items that require rework. Jaber and Guiffrida (Int J Prod Econ 127(1):27–38, 2004) modified the WLC to account for rework time. In a later study, Jaber and Guiffrida (Eur J Oper Res 189(1):93–104, 2008) allowed for production interruption to restore the quality of the production process and reduce the number of defective items per lot. Although these works were the first analytical models that linked learning to quality, their results cannot be generalized, as they considered a single (first) production cycle. This assumption ignores the transfer of learning that occurs between cycles in intermittent production environments. This paper addresses this limitation and allows the transferred knowledge to deteriorate because of forgetting. The results indicate that the performance function of the process has a convex form under certain conditions. The performance of the system improves with faster learning in production and rework, frequent process restorations, and transfer of learning between cycles.
      PubDate: 2015-01-01
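      Wright's learning curve itself is simple: the time for the n-th unit is T(n) = T1 * n^b with b = log2(learning rate). The Python sketch below computes lot production times over several cycles and applies a crude forgetting factor to the experience carried between cycles; the 80% learning rate, the forgetting factor and the lot sizes are hypothetical, and rework and process restorations are not modelled.

        # Wright's learning curve with a simple forgetting adjustment between cycles.
        import numpy as np

        t_first = 10.0            # time for the very first unit
        learning_rate = 0.80      # 80% curve: unit time scales by 0.8 when output doubles
        b = np.log(learning_rate) / np.log(2.0)

        def cycle_time(start_experience, lot_size):
            """Total time for a lot, given prior experience measured in units."""
            units = np.arange(start_experience + 1, start_experience + lot_size + 1)
            return np.sum(t_first * units ** b)

        experience, forget = 0.0, 0.3     # fraction of experience lost between cycles
        for cycle, lot in enumerate([50, 50, 50], start=1):
            t = cycle_time(experience, lot)
            print(f"cycle {cycle}: lot of {lot} units takes {t:.1f} time units")
            experience = (experience + lot) * (1.0 - forget)   # partial transfer of learning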
       
  • Game Theory Explorer: software for the applied game theorist
    • Abstract: This paper presents the “Game Theory Explorer” software tool to create and analyze games as models of strategic interaction. A game in extensive or strategic form is created and nicely displayed with a graphical user interface in a web browser. State-of-the-art algorithms then compute all Nash equilibria of the game after a mouse click. In tutorial fashion, we present how the program is used, and the ideas behind its main algorithms. We report on experiences with the architecture of the software and its development as an open-source project.
      PubDate: 2015-01-01
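      Equilibrium computation can be illustrated in a few lines for the simplest case. The Python sketch below finds all pure-strategy Nash equilibria of a small bimatrix game by exhaustive best-response checking; Game Theory Explorer itself handles extensive and strategic forms and computes mixed equilibria as well, and the payoff matrices here (a Prisoner's Dilemma) are just an example.

        # Pure-strategy Nash equilibria of a bimatrix game by best-response checking.
        import numpy as np

        # payoffs: A[i, j] for the row player, B[i, j] for the column player
        A = np.array([[3, 0],
                      [5, 1]])
        B = np.array([[3, 5],
                      [0, 1]])

        equilibria = []
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                row_best = A[i, j] >= A[:, j].max()     # no profitable row deviation
                col_best = B[i, j] >= B[i, :].max()     # no profitable column deviation
                if row_best and col_best:
                    equilibria.append((i, j))

        print("pure-strategy Nash equilibria (row, column):", equilibria)   # [(1, 1)] here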
       
  • A heuristic algorithm to solve the single-facility location routing problem on Riemannian surfaces
    • Abstract: The location routing problem (LRP) in supply chain management is the integration of the vehicle routing problem (VRP) and the facility location problem (FLP). To the best of our knowledge, the known solutions for the LRP in the literature have been obtained only for the Euclidean space. Solving the LRP on a Riemannian manifold surface (RMS) is a more realistic approach than using Euclidean surfaces because of the curved structure of the pathways on Earth, with changing local RMS curvatures. The shortest-path distances on Earth's surface can be determined by calculating geodesic distances in local neighborhoods. The special case of the LRP on an RMS with zero curvature is the traditional LRP in Euclidean space. In this work, we introduce a new LRP to be solved on an RMS and find a heuristic algorithmic solution to this LRP. In particular, we formulate the LRP for a single facility on an RMS, a generalization of the surface and distance assumptions of the traditional single-facility LRP. In addition, a heuristic algorithm is formulated to solve the proposed LRP on an RMS, with the corresponding computational results displayed for a particular scenario. The numerical results corresponding to the theoretical results introduced in this work are not comparable with those known in the literature for the traditional LRP because of the change in the surface and distance assumptions.
      PubDate: 2014-12-23
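      The simplest concrete example of such a geodesic distance is the great-circle distance on a sphere, which already differs from the straight-line Euclidean distance used by the traditional LRP. The Python sketch below computes it with the haversine formula; the coordinates are arbitrary, and a general Riemannian surface would require local geodesic computations rather than a fixed spherical radius.

        # Great-circle (haversine) distance: a geodesic on a sphere of Earth's radius.
        import math

        def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
            return 2 * radius_km * math.asin(math.sqrt(a))

        # distance between two hypothetical customer locations
        print(round(haversine_km(41.0, 29.0, 40.4, 49.9), 1), "km")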
       
  • Constructing optimal sparse portfolios using regularization methods
    • Abstract: Mean-variance portfolios have been criticized for unsatisfying out-of-sample performance and for extreme and unstable asset weights, especially when the number of securities is large. The poor performance is caused by estimation errors in the input parameters, that is, the covariance matrix and the expected return vector. Recent studies show that imposing a penalty on the 1-norm of the asset weight vector (i.e. \(\ell_1\)-regularization) not only regularizes the problem, thereby improving the out-of-sample performance, but also automatically selects a subset of assets to invest in. However, \(\ell_1\)-regularization might lead to the construction of biased solutions. We propose a new, simple type of penalty that explicitly considers financial information, and we then consider several alternative penalties that improve on the \(\ell_1\)-regularization approach. Using U.S. stock market data, we show empirically that the proposed penalties can lead to portfolios with an out-of-sample performance superior to several state-of-the-art benchmarks, especially in high-dimensional problems.
      PubDate: 2014-12-13
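      The baseline the paper improves upon, an \(\ell_1\)-penalised mean-variance problem with a budget constraint, is a small convex program. The Python sketch below sets it up with cvxpy (assumed installed) on simulated data; the penalty strength and the synthetic covariance and return inputs are arbitrary, and the paper's financially motivated penalties are not implemented.

        # l1-regularised mean-variance portfolio on synthetic data (cvxpy assumed).
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(3)
        n = 20
        A = rng.normal(size=(n, n))
        Sigma = A @ A.T / n + 0.05 * np.eye(n)      # synthetic positive-definite covariance
        mu = rng.normal(0.05, 0.03, n)              # synthetic expected returns

        lam = 0.05                                  # strength of the l1 penalty
        w = cp.Variable(n)
        objective = cp.Minimize(cp.quad_form(w, Sigma) - mu @ w + lam * cp.norm1(w))
        problem = cp.Problem(objective, [cp.sum(w) == 1])
        problem.solve()

        weights = w.value
        print("number of (near-)zero weights:", int(np.sum(np.abs(weights) < 1e-4)))
        print("largest five positions:", np.round(np.sort(weights)[-5:], 3))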
       
  • On variance reduction of mean-CVaR Monte Carlo estimators
    • Abstract: We formulate an objective as a convex combination of expectation and risk, measured by the \(\mathrm{CVaR}\) risk measure. The poor performance of standard Monte Carlo estimators applied to functions of this form is discussed, and a variance reduction scheme based on importance sampling is proposed. We provide an analytical solution for normally distributed random variables and outline the approach for other distributions, either by analytical computation or by sampling. Our results are applied in the framework of the stochastic dual dynamic programming algorithm. Computational results that validate the preceding analysis are given.
      PubDate: 2014-11-14
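      For reference, the plain (non-importance-sampled) estimator that the paper improves upon can be written in a few lines: estimate CVaR from the empirical tail beyond the empirical VaR and combine it with the sample mean. The Python sketch below does this for a hypothetical normally distributed loss; the proposed importance sampling scheme itself is not shown.

        # Crude Monte Carlo estimate of a mean-CVaR objective for a normal loss.
        import numpy as np

        rng = np.random.default_rng(5)
        alpha, lam = 0.95, 0.5
        n_samples = 100_000

        loss = rng.normal(loc=10.0, scale=3.0, size=n_samples)   # hypothetical loss distribution

        var_alpha = np.quantile(loss, alpha)                      # empirical Value-at-Risk
        cvar_alpha = loss[loss >= var_alpha].mean()               # tail average beyond VaR
        objective = lam * loss.mean() + (1.0 - lam) * cvar_alpha

        print(f"VaR_{alpha}: {var_alpha:.3f}  CVaR_{alpha}: {cvar_alpha:.3f}  "
              f"mean-CVaR objective: {objective:.3f}")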
       
  • Erratum to: A copula-based heuristic for scenario generation
    • PubDate: 2014-10-25
       
  • The natural hedge of a gas-fired power plant
    • Abstract: Electricity industries worldwide have been restructured in order to introduce competition. As a result, decision makers are exposed to volatile electricity prices, which are positively correlated with those of natural gas in markets with price-setting gas-fired power plants. Consequently, gas-fired plants are said to enjoy a “natural hedge.” We explore the properties of such a built-in hedge for a gas-fired power plant via a stochastic programming approach, which enables characterisation of uncertainty in both electricity and gas prices in deriving optimal hedging and generation decisions. The producer engages in financial hedging by signing forward contracts at the beginning of the month while anticipating uncertainty in spot prices. Using UK energy price data from 2006 to 2011 and daily aggregated dispatch decisions of a typical gas-fired power plant, we find that such a producer does, in fact, enjoy a natural hedge, i.e., it is better off facing uncertain spot prices rather than locking in its generation cost. However, the natural hedge is not a perfect hedge, i.e., even modest risk aversion makes it optimal to use gas forwards partially. Furthermore, greater operational flexibility enhances this natural hedge as generation decisions provide a countervailing response to uncertainty. Conversely, higher energy-conversion efficiency reduces the natural hedge by decreasing the importance of natural gas price volatility and, thus, its correlation with the electricity price.
      PubDate: 2014-10-21
       
  • A leader-followers model of power transmission capacity expansion in a market driven environment
    • Abstract: We introduce a model for analyzing the upgrade of the national transmission grid that explicitly accounts for the responses of power producers in terms of generation unit expansion. The problem is modeled as a bilevel program with a mixed-integer structure in both the upper and lower levels. The upper level is the transmission company's problem of deciding how to upgrade the network. The lower level models the reactions of both the power producers, who decide on new facilities and power output, and the Market Operator, which strikes a new balance between demand and supply, providing new Locational Marginal Prices. We illustrate our methodology by means of an example based on Garver's 6-bus network.
      PubDate: 2014-10-12
       
  • Decision-making from a risk assessment perspective for Corporate Mergers and Acquisitions
    • Abstract: Corporate Mergers and Acquisitions (M&As) are notoriously complex, and risk management is one of the essential aspects of the analysis process for decision-making on M&A deals. Empirically, we see that some M&A transactions are not successful in part because of the increased exposure to correlated sectors, suggesting that the merged entity carries increased risk in the market. This motivates our research on risk evaluation processes for corporate M&A deals. In this paper, a decision-making scheme using a reasonable risk measure is introduced and surveyed. Numerical examples are presented.
      PubDate: 2014-10-09
       
  • Preface: Special issue on learning and robustness
    • PubDate: 2014-10-01
       
  • Calibrating probability distributions with convex-concave-convex functions: application to CDO pricing
    • Abstract: This paper considers a class of functions referred to as convex-concave-convex (CCC) functions to calibrate unimodal or multimodal probability distributions. In the discrete case, this class of functions can be expressed by a system of linear constraints and incorporated into an optimization problem. We use CCC functions for calibrating a risk-neutral probability distribution of obligors' default intensities (hazard rates) in collateralized debt obligations (CDOs). The optimal distribution is calculated by maximizing the entropy function under no-arbitrage constraints given by bid and ask prices of CDO tranches. Such a distribution reflects the views of market participants on future market environments. We explain why CCC functions may be suitable for capturing non-data information about the distribution under consideration. Numerical experiments conducted on market quotes for the iTraxx index with different maturities and starting dates support our ideas and demonstrate that the proposed approach performs stably. Distribution generalizations with multiple humps and their applications in credit risk are also discussed.
      PubDate: 2014-10-01
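      Stripped of the CCC shape constraints, the core calibration step is a maximum-entropy problem with linear pricing constraints. The Python sketch below solves a toy version with scipy's SLSQP, matching a single hypothetical mean-intensity constraint on a small grid; the paper's bid/ask tranche constraints and the CCC structure are not imposed.

        # Toy maximum-entropy calibration of a discrete distribution on a grid.
        import numpy as np
        from scipy.optimize import minimize

        grid = np.linspace(0.0, 0.10, 21)        # candidate default intensities
        target_mean = 0.03                        # stand-in for a market-implied constraint

        def neg_entropy(p):
            p = np.clip(p, 1e-12, None)
            return np.sum(p * np.log(p))

        constraints = [
            {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
            {"type": "eq", "fun": lambda p: p @ grid - target_mean},
        ]
        p0 = np.full(grid.size, 1.0 / grid.size)
        res = minimize(neg_entropy, p0, method="SLSQP",
                       bounds=[(0.0, 1.0)] * grid.size, constraints=constraints)

        print("calibrated mean intensity:", round(res.x @ grid, 4))
        print("entropy of calibrated distribution:", round(-neg_entropy(res.x), 4))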
       
 
 