Abstract: We study different parallelization schemes for the stochastic dual dynamic programming (SDDP) algorithm. We propose a taxonomy for these parallel algorithms, based on the concepts of parallelizing by scenario and parallelizing by node of the underlying stochastic process. We develop synchronous and asynchronous versions of each configuration. The parallelization strategy in the parallel-scenario configuration aims at parallelizing the Monte Carlo sampling procedure in the forward pass of the SDDP algorithm, and thus generates a large number of supporting hyperplanes in parallel. The parallel-node strategy, on the other hand, aims at building a single hyperplane of the dynamic programming value function in parallel. The considered algorithms are implemented using Julia and JuMP on a high performance computing cluster. We study the effectiveness of the methods in terms of achieving tight optimality gaps, as well as the scalability of the algorithms with respect to an increasing number of CPUs. In particular, we study the effects of the different parallelization strategies on performance when increasing the number of Monte Carlo samples in the forward pass, and demonstrate through numerical experiments that such an increase may be harmful. Our results indicate that the parallel-node strategy presents certain benefits compared to the parallel-scenario configuration. PubDate: 2022-06-01
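As a rough, hypothetical illustration of the parallel-scenario idea (not the authors' Julia/JuMP implementation), the sketch below distributes forward-pass Monte Carlo sampling of a toy state process across workers, one task per sample; the dynamics, horizon, and function names are invented for illustration.

```python
import numpy as np
from multiprocessing import Pool

T_STAGES = 12       # hypothetical planning horizon
N_SAMPLES = 64      # Monte Carlo samples in the forward pass

def sample_forward_path(seed):
    """Simulate one forward-pass trajectory of a toy AR(1) state process.

    In SDDP the forward pass would also solve a stage subproblem at each node;
    here we only return the sampled states, which is the part that the
    parallel-scenario strategy distributes across workers.
    """
    rng = np.random.default_rng(seed)
    state, path = 1.0, []
    for _ in range(T_STAGES):
        state = 0.9 * state + rng.normal(scale=0.1)
        path.append(state)
    return path

if __name__ == "__main__":
    with Pool() as pool:                      # one process per available CPU
        paths = pool.map(sample_forward_path, range(N_SAMPLES))
    print(f"sampled {len(paths)} forward paths of length {T_STAGES}")
```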
Abstract: The inclusion of emojis when solving natural language processing problems (e.g., text-based emotion detection, sentiment classification, topic analysis) improves the quality of the results. However, the existing literature focuses only on the general meaning conveyed by emojis and has not examined emojis in the context of investor sentiment classification. This article provides a comprehensive study of the impact that the inclusion of emojis could make in predicting stock investors' sentiment. We found that a classifier that incorporates domain-specific emoji vectors, which capture the syntax and semantics of emojis in the financial context, could improve the accuracy of investor sentiment classification. Also, when domain-specific emoji vectors are considered, daily time series of investor sentiment demonstrated additional marginal explanatory power on returns and volatility. Further, a cluster analysis comparing domain-specific and domain-independent emoji vectors showed different natural groupings of emojis, reflecting domain specificity when the special meaning of emojis is considered. Finally, domain-specific emoji vectors could result in the development of significantly superior emoji sentiment lexicons. Given the importance of domain-specific emojis in investor sentiment classification of social media data, we have developed an emoji lexicon that could be used by other researchers. PubDate: 2022-06-01
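As a hypothetical sketch of how domain-specific emoji vectors might be obtained (the paper's actual corpus, embedding dimensionality, and classifier are not specified in the abstract), one could train word2vec on tokenized finance posts in which emojis are kept as tokens, then feed averaged post vectors to a simple classifier; all data below is toy.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Toy finance-flavoured corpus; real work would use a large StockTwits/Twitter sample.
posts = [
    ["earnings", "beat", "🚀", "🚀"],
    ["guidance", "cut", "📉", "bearish"],
    ["breakout", "confirmed", "📈"],
    ["margin", "call", "📉", "😱"],
]
labels = [1, 0, 1, 0]  # 1 = bullish, 0 = bearish (hypothetical annotations)

# Emojis are ordinary tokens, so they receive domain-specific embeddings.
emb = Word2Vec(sentences=posts, vector_size=16, window=3, min_count=1, workers=1, seed=0)

def post_vector(tokens):
    """Average the embeddings of a post's tokens."""
    return np.mean([emb.wv[t] for t in tokens], axis=0)

X = np.array([post_vector(p) for p in posts])
clf = LogisticRegression().fit(X, labels)
print("in-sample accuracy:", clf.score(X, labels))
print("📉 vs 📈 similarity:", emb.wv.similarity("📉", "📈"))
```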
Abstract: This paper is about the application of optimization methods to the analysis of three pricing schemes adopted by one manufacturer in a two-country model of production and trade. The analysis focuses on pricing schemes—one uniform pricing scheme, and two differential pricing schemes—for which there is no competition coming from so-called parallel trade. This term denotes the practice of buying a patented product such as a medicine in one market at one price, then re-selling it in a second, so-called gray market at a higher price, on a parallel distribution chain where it competes with the official distribution chain. The adoption of pricing schemes under which parallel trade does not arise can prevent the occurrence of its well-documented negative effects. In this work, a comparison of the optimal solutions to the optimization problems modeling the three pricing schemes is performed. More specifically, conditions are found under which the two differential pricing schemes are more desirable from several points of view (e.g., incentive for the manufacturer to do Research and Development, product accessibility, global welfare) than the uniform pricing scheme. In particular, we prove that, compared to the uniform pricing scheme, the two differential pricing schemes increase the incentive for the manufacturer to invest in Research and Development. We also prove that they serve both countries under a larger range of values for the relative market size, making the product more accessible to consumers in the lower-price country. Moreover, we provide a sufficient condition under which price discrimination is more efficient from a global welfare perspective than uniform pricing. The analysis applies in particular to the case of the European Single Market for medicines. Compared to other studies, our work also takes into account the possible presence in all the optimization problems of a positive constant marginal cost of production, showing that it can have non-negligible effects on the results of the analysis. Indeed, as an important contribution, our analysis clarifies the conditions—which have been overlooked in the literature on mechanisms adopted to prevent the occurrence of parallel trade—that allow/do not allow one to neglect the presence of this factor. Such conditions are related, e.g., to the comparison between the positive constant marginal cost of production, the per-unit parallel trade cost, and the maximal price that can be effectively charged to consumers in the lower-price country. PubDate: 2022-06-01
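For context only, a generic textbook-style formulation of differential versus uniform pricing with a constant marginal cost $c$ and linear demands (this is not the authors' two-country model, whose demand functions and parallel-trade constraints are not given in the abstract):

```latex
% Differential pricing: one price per country, linear demands D_i(p_i) = a_i - b_i p_i
\max_{p_1, p_2 \ge 0} \; (p_1 - c)\,(a_1 - b_1 p_1) + (p_2 - c)\,(a_2 - b_2 p_2)
\quad\Longrightarrow\quad p_i^{*} = \frac{a_i + b_i c}{2 b_i}.

% Uniform pricing: the same objective restricted to a single price p_1 = p_2 = p
\max_{p \ge 0} \; (p - c)\,\bigl[(a_1 - b_1 p) + (a_2 - b_2 p)\bigr]
\quad\Longrightarrow\quad p^{*} = \frac{a_1 + a_2 + (b_1 + b_2)\,c}{2\,(b_1 + b_2)}.
```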
Abstract: The nested distance builds on the Wasserstein distance to quantify the difference between stochastic processes, also including the evolution of information modelled by filtrations. The Sinkhorn divergence is a relaxation of the Wasserstein distance which can be computed considerably faster. For this reason we employ the Sinkhorn divergence and take advantage of the related (fixed point) iteration algorithm. Furthermore, we investigate the transition of the entropy throughout the stages of the stochastic process and provide an entropy-regularized nested distance formulation, including a characterization of its dual. Numerical experiments affirm the computational advantage and the superior performance of the approach. PubDate: 2022-06-01
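For reference, the standard Sinkhorn fixed-point iteration for a single entropy-regularized transport problem can be sketched as below (this is the generic building block, not the nested-distance computation across the stages of a scenario tree, and all data is toy):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Entropy-regularized optimal transport via Sinkhorn's fixed-point iteration.

    a, b : source and target probability vectors
    C    : cost matrix between the two supports
    Returns the regularized transport plan.
    """
    K = np.exp(-C / eps)                # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)                 # scale rows to match marginal a
        v = b / (K.T @ u)               # scale columns to match marginal b
    return np.diag(u) @ K @ np.diag(v)

# Two discrete distributions on a small grid.
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2      # squared-distance cost
a = np.full(5, 0.2)
b = np.array([0.1, 0.1, 0.2, 0.3, 0.3])
P = sinkhorn(a, b, C)
print("regularized transport cost:", np.sum(P * C))
```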
Abstract: We study finite-maturity American equity options in a stochastic mean-reverting diffusive interest rate framework. We allow for a non-zero correlation between the innovations driving the equity price and the interest rate. Importantly, we also allow for the interest rate to assume negative values, which is the case for some investment grade government bonds in Europe in recent years. In this setting we focus on American equity call and put options and characterize analytically their two-dimensional free boundary, i.e., the equity and interest rate values that trigger the optimal exercise of the option before maturity. We show that non-standard double continuation regions may appear, extending the findings documented in the literature in a constant interest rate framework. Moreover, we contribute by developing a bivariate discretization of the equity price and interest rate processes that converges in distribution as the time step shrinks. This discretization, described by a recombining quadrinomial tree, allows us to compute American equity options’ prices and to analyze their free boundaries with respect to time and current interest rate. Finally, we document the existence of non-standard optimal exercise policies for American call options on a non-dividend-paying equity. PubDate: 2022-05-12
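As a constant-rate baseline (not the paper's quadrinomial tree, which additionally discretizes a stochastic and possibly negative interest rate), a standard CRR binomial tree for an American put illustrates the backward induction with an early-exercise check; all parameters below are illustrative.

```python
import numpy as np

def american_put_crr(S0, K, r, sigma, T, n=500):
    """Price an American put on a non-dividend-paying stock with a CRR binomial tree."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal stock prices and payoffs.
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)

    # Backward induction with an early-exercise check at every node.
    for step in range(n, 0, -1):
        S = S[:step] * d                         # prices one step earlier (u*d = 1)
        cont = disc * (p * V[:step] + (1 - p) * V[1:step + 1])
        V = np.maximum(cont, K - S)
    return V[0]

print(american_put_crr(S0=100, K=100, r=0.02, sigma=0.25, T=1.0))
```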
Abstract: Lift-and-project (L&P) cuts are well-known general 0–1 programming cuts, typically deployed in branch-and-cut methods to solve MILP problems. In this article, we discuss ways to use these cuts within the framework of Benders’ decomposition algorithms for solving two-stage mixed-binary stochastic problems with binary first-stage variables and continuous recourse. In particular, we show how L&P cuts derived for the master problem can be strengthened with second-stage information. An adapted L-shaped algorithm is presented together with an analysis of its computational efficiency. We show that the strengthened L&P cuts can significantly reduce the number of iterations and the solution time. PubDate: 2022-05-05
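For context, the classical single-cut L-shaped machinery that such an algorithm builds on can be sketched as follows; this is the standard optimality cut obtained from the scenario subproblem duals, not the strengthened L&P cuts introduced in the paper.

```latex
% Master problem with value-function approximation \theta; p_s are scenario
% probabilities and \pi_s the dual solutions of the recourse LPs.
\min_{x \in \{0,1\}^n,\ \theta} \; c^{\top} x + \theta
\quad \text{s.t.} \quad
\theta \;\ge\; \sum_{s} p_s\, \pi_s^{\top} \bigl( h_s - T_s x \bigr)
\qquad \text{(optimality cut)}
```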
Abstract: In the application of machine learning to real-life decision-making systems, e.g., credit scoring and criminal justice, the prediction outcomes might discriminate against people with sensitive attributes, leading to unfairness. The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction loss, which ultimately limits the information given to decision-makers. In this paper, we introduce a new approach that handles fairness by formulating a stochastic multi-objective optimization problem for which the corresponding Pareto fronts uniquely and comprehensively define the accuracy-fairness trade-offs. We then apply a stochastic approximation-type method to efficiently obtain well-spread and accurate Pareto fronts, which also allows us to handle training data arriving in a streaming way. PubDate: 2022-04-21
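A minimal sketch of an accuracy-fairness trade-off on synthetic data: sweeping the weight of a weighted-sum scalarization of logistic loss and a squared between-group score gap traces an approximate Pareto front. This is a simple stand-in, not the authors' stochastic approximation method for the multi-objective problem; data, gradients, and the fairness proxy are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
group = rng.random(n) < 0.5                                  # sensitive attribute
y = (X[:, 0] + 0.8 * group + rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lmbda, steps=400, lr=0.1):
    """Minimize logistic loss + lmbda * squared between-group mean-score gap."""
    diff = X[group].mean(axis=0) - X[~group].mean(axis=0)    # fixed group contrast
    w = np.zeros(d)
    for _ in range(steps):
        grad_loss = X.T @ (sigmoid(X @ w) - y) / n
        gap = diff @ w                                       # demographic score gap
        w -= lr * (grad_loss + lmbda * 2.0 * gap * diff)
    acc = ((sigmoid(X @ w) > 0.5) == y).mean()
    return acc, abs(diff @ w)

# Sweeping the scalarization weight traces an approximate accuracy-fairness front.
for lmbda in [0.0, 0.1, 1.0, 10.0]:
    acc, gap = train(lmbda)
    print(f"lambda={lmbda:5.1f}  accuracy={acc:.3f}  |score gap|={gap:.3f}")
```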
Abstract: In sourcing decisions, a buyer encounters three main issues. The first is demand uncertainty, which leads the buyer to find the optimal inventory level that balances inventory costs and customer satisfaction. The second is deciding which suppliers to buy from and how much to order from each. Suppliers may have limited supply capacity, which causes the buyer to split his/her order among multiple suppliers. To determine the suppliers’ order quantities, the buyer should evaluate them on the basis of factors such as wholesale price and quality. The third issue is the buyer’s purchasing strategy, which can be reflected by assigning various weights to these factors. To help the buyer deal with these three issues, we propose a two-stage solution approach. In Stage 1, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is employed to evaluate and rank the suppliers based on multiple factors. In Stage 2, a newsvendor problem is formulated in which the uncertain demand is described by a fuzzy number having a general membership function. An algorithm is then employed to solve the newsvendor problem and assign order quantities to the sorted suppliers so as to optimize the buyer’s inventory level. We use a numerical analysis to demonstrate the accuracy and effectiveness of the proposed solution approach; the analysis also compares different types of uncertain demand in optimizing the buyer’s inventory level. PubDate: 2022-04-13
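The Stage 1 ranking step uses TOPSIS, a standard procedure that can be sketched as below; the supplier scores, criteria, and weights are hypothetical, and the fuzzy newsvendor of Stage 2 is not reproduced here.

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision : (m alternatives x n criteria) matrix
    weights  : criterion weights summing to one
    benefit  : True for criteria to maximize (e.g. quality),
               False for criteria to minimize (e.g. wholesale price)
    """
    norm = decision / np.sqrt((decision ** 2).sum(axis=0))   # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)                      # closeness coefficient

# Hypothetical suppliers scored on wholesale price (minimize) and quality (maximize).
scores = topsis(
    decision=np.array([[10.0, 0.90], [9.5, 0.80], [11.0, 0.95]]),
    weights=np.array([0.4, 0.6]),
    benefit=np.array([False, True]),
)
print("closeness coefficients:", scores)  # higher = better-ranked supplier
```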
Abstract: From a common point of view, the disciplines of quantum mechanics, psychology, and decision science all try to predict how unruly systems (atomic particles, human behaviors, and decision makers’ choices) might behave in the future. Effectively predicting the outcome of a capacity allocation game under various allocation policies requires a profound understanding of how the strategic reasoning of decision makers contributes to the financial gain of players. A quantum game framework is employed in the current study to investigate how the performance of allocation policies is affected when buyers strategize over order quantities. The results show that allocation mechanisms differ in their degree of manipulability, and that adopting an adaptive quantum method is the most effective approach to secure the highest fill rate and profit when it is practiced under a reasonable range of entanglement levels. PubDate: 2022-03-19
Abstract: Years of globalization, outsourcing and cost cutting have increased supply chain vulnerability, calling for more effective risk mitigation strategies. In our research, we analyze supply chain disruptions in a production setting. Using a bilevel optimization framework, we minimize the total production cost for a manufacturer interested in finding optimal disruption mitigation strategies. The problem constitutes a convex network flow program under a chance constraint bounding the manufacturer’s regrets in disrupted scenarios. Thus, in contrast to standard bilevel optimization schemes with two decision-makers, a leader and a follower, our model searches for the optimal production plan of a manufacturer in view of a reduction of his own sequence of scenario-specific regrets. Defined as the difference in costs between a reactive plan, which treats the disruption as unknown until it occurs, and a benchmark anticipative plan, which predicts the disruption at the beginning of the planning horizon, the regrets measure the impact of scenario-specific production strategies on the manufacturer’s total cost. For an efficient solution of the problem, we employ generalized Benders decomposition and develop customized feasibility cuts. In the managerial section, we discuss the implications for risk-adjusted production and observe that our mitigation strategy reduces the regrets of long disruptions at the cost of shorter disruptions, whose regrets typically stay far below the risk threshold. This decreases the production cost under rare but high-impact disruption scenarios. PubDate: 2022-02-28 DOI: 10.1007/s10287-022-00421-3
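The regret-based chance constraint described above can be written generically as below, where the notation is illustrative and the paper's exact network-flow formulation is not reproduced: R is the scenario regret, ρ a regret threshold, and ε the admissible violation probability.

```latex
% Regret of scenario omega: reactive cost minus anticipative benchmark cost
R(\omega) \;=\; C^{\mathrm{react}}(x,\omega) \;-\; C^{\mathrm{antic}}(\omega),
\qquad
\mathbb{P}\bigl( R(\omega) \le \rho \bigr) \;\ge\; 1 - \varepsilon .
```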
Abstract: Technological innovations often create new markets, and this gives incentives to learn about their associated profitabilities. However, this decision depends not only on the underlying uncertain profitability, but also on attitudes towards risk. We develop a decision-support tool that accounts for the impact of learning for a potentially risk-averse decision maker. The Kalman filter is applied to derive a time-varying estimate of the process, and the option is valued as a function of this estimate. We focus on linear stochastic processes with normally distributed noise. Through a numerical example, we find that the marginal benefit of learning decreases rapidly over time, that the majority of investment times occur early in the option holding period, after the holder has realized the main benefits of learning, and that risk aversion leads to earlier adoption. We also find that risk aversion reduces the value of learning and thus reduces the additional value of waiting and observing noisy signals through time. PubDate: 2022-01-27 DOI: 10.1007/s10287-022-00423-1
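The learning step relies on the standard Kalman filter; a minimal scalar sketch, assuming a random-walk profitability observed through noisy signals (the process and noise parameters are invented, and the option valuation layer is omitted):

```python
import numpy as np

# Hypothetical setup: the unknown profitability theta follows a random walk and
# is observed through noisy signals y_t = theta_t + noise.
rng = np.random.default_rng(1)
T, q, r = 50, 0.01, 0.25          # horizon, process variance, observation variance
theta = np.cumsum(rng.normal(scale=np.sqrt(q), size=T)) + 1.0
y = theta + rng.normal(scale=np.sqrt(r), size=T)

m, P = 0.0, 1.0                   # prior mean and variance of theta
for t in range(T):
    P_pred = P + q                                # predict: uncertainty grows
    K = P_pred / (P_pred + r)                     # Kalman gain
    m = m + K * (y[t] - m)                        # update mean with the new signal
    P = (1.0 - K) * P_pred                        # posterior variance shrinks
print(f"final estimate {m:.3f} vs true value {theta[-1]:.3f}, posterior var {P:.4f}")
```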
Abstract: A measure for portfolio risk management is proposed by extending the Markowitz mean-variance approach to include the left-hand tail effects of asset returns. Two risk dimensions are captured: asset covariance risk, along with risk in left-hand tail similarity and volatility. The key ingredient is an informative set on the left-hand tail distributions of asset returns, obtained by an adaptive clustering procedure. This set allows a left-tail similarity and a left-tail volatility to be defined, thereby providing a definition for the left-tail-covariance-like matrix. The convex combination of the two covariance matrices generates a “two-dimensional” risk that, when applied to portfolio selection, provides a measure of its systemic vulnerability due to asset centrality. This is done by simply associating a suitable node-weighted network with the portfolio. Higher values of this risk indicate an asset allocation that suffers from too much exposure to volatile assets whose return dynamics behave too similarly in left-hand tail distributions and/or co-movements, as well as being too connected to each other. Minimizing these combined risks reduces losses and increases profits, with a low variability in the profit-and-loss distribution. The portfolio selection compares favorably with some competing approaches. An empirical analysis is made using exchange-traded fund prices over the period January 2006–February 2018. PubDate: 2022-01-20 DOI: 10.1007/s10287-022-00422-2
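A minimal sketch of the convex-combination idea: mix the ordinary covariance matrix with a tail-focused matrix and compute minimum-variance weights. The tail matrix below is a crude stand-in (covariance over the worst days), whereas the paper builds it from an adaptive clustering of left-tail distributions; the network/centrality layer is omitted.

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance weights under the budget constraint sum(w) = 1."""
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    return inv @ ones / (ones @ inv @ ones)

rng = np.random.default_rng(2)
returns = rng.normal(scale=0.01, size=(500, 4))          # toy return history

sigma = np.cov(returns, rowvar=False)                    # ordinary covariance
# Stand-in for the left-tail matrix: covariance computed on the worst 20% of days.
worst = returns.sum(axis=1) <= np.quantile(returns.sum(axis=1), 0.2)
sigma_tail = np.cov(returns[worst], rowvar=False)

lam = 0.5                                                # mixing weight of the two risks
combined = lam * sigma + (1.0 - lam) * sigma_tail
print("weights:", np.round(min_variance_weights(combined), 3))
```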
Abstract: Environment-related risks affect assets in various sectors of the global economy, as well as social and governance aspects, giving rise to what are known as ESG investments. Sustainable and responsible finance has become a major aim for asset managers, who regularly deal with the measurement and management of ESG risks. To this purpose, financial institutions and rating agencies have created ESG scores aimed at providing disclosure on environmental, social, and governance (corporate social responsibility) metrics. CSR/ESG ratings are becoming quite popular, even if highly questioned in terms of reliability. Asset managers do not always believe that markets consistently and correctly price climate risks into company valuations; in these cases ESG ratings, when available, provide an important tool in the company’s fundraising process or in assessing share returns. Assuming we can choose a reliable set of CSR/ESG ratings, we aim to assess how structural data (balance-sheet items) may affect the ESG scores assigned to regularly traded stocks. Using a Random Forest algorithm, we investigate how structural data affect the Thomson Reuters Refinitiv ESG scores of the companies that constitute the STOXX 600 Index. We find that balance-sheet data provide a crucial element to explain ESG scores. PubDate: 2021-12-02 DOI: 10.1007/s10287-021-00419-3
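A minimal sketch of the modelling step with scikit-learn's Random Forest, using synthetic stand-ins for balance-sheet features and ESG scores (the paper uses real Refinitiv data for STOXX 600 constituents, not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600
features = np.column_stack([
    rng.lognormal(mean=10, sigma=1, size=n),   # total assets (synthetic)
    rng.normal(0.3, 0.1, size=n),              # leverage ratio (synthetic)
    rng.normal(0.08, 0.05, size=n),            # return on assets (synthetic)
])
esg = 40 + 5 * np.log(features[:, 0]) - 20 * features[:, 1] + rng.normal(0, 5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(features, esg, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out companies:", round(model.score(X_te, y_te), 3))
print("feature importances:", np.round(model.feature_importances_, 3))
```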
Abstract: The increasing penetration of inflexible and fluctuating renewable energy generation is often accompanied by a sequential market setup, including a day-ahead spot market that balances forecasted supply and demand with an hourly time resolution and a balancing market in which flexible generation handles unexpected imbalances closer to real-time and with a higher time resolution. Market characteristics such as time resolution, the time of market offering and the information available at this time, price elasticities of demand and the number of market participants, allow producers to exercise market power to different degrees. To capture this, we study oligopolistic spot and balancing markets with Cournot competition, and formulate two stochastic equilibrium models for the sequential markets. The first is an open-loop model which we formulate and solve as a complementarity problem. The second is a closed-loop model that accounts for the sequence of market clearings, but is computationally more demanding. Via optimality conditions, the result is an equilibrium problem with equilibrium constraints, which we solve by an iterative procedure. When compared to the closed-loop solution, our results show that the open-loop problem overestimates the ability to exercise market power unless the market allows for speculation. In the presence of a speculator, the open-loop formulation forces spot and balancing market prices to be equal in expectation and indicates substantial profit reductions, whereas speculation has less severe impact in the closed-loop problem. We use the closed-loop model to further analyse market power issues with a higher time resolution and limited access to the balancing market. PubDate: 2021-11-20 DOI: 10.1007/s10287-021-00418-4
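For intuition about the Cournot building block, a static one-market best-response iteration with linear inverse demand and constant marginal costs is sketched below; the paper's sequential stochastic two-market equilibrium with a balancing stage is far richer than this toy example, whose parameters are invented.

```python
import numpy as np

a, b = 100.0, 1.0                             # inverse demand P(Q) = a - b*Q
costs = np.array([10.0, 20.0, 30.0])          # marginal cost of each producer
q = np.zeros_like(costs)                      # starting quantities

for _ in range(200):                          # best-response (Gauss-Seidel) iteration
    for i in range(len(costs)):
        others = q.sum() - q[i]
        # Firm i's best response: argmax_q (a - b*(q + others) - c_i) * q
        q[i] = max((a - costs[i] - b * others) / (2 * b), 0.0)

price = a - b * q.sum()
print("equilibrium quantities:", np.round(q, 2), "price:", round(price, 2))
```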
Abstract: In this paper, we consider the problem of tax evasion, which occurs whenever an individual or business ignores tax laws. Fighting tax evasion is the main task of the Economic and Financial Military Police, which annually performs fiscal controls to track down and prosecute evaders at the national level. Due to limited financial resources, the tax inspector is unable to audit the entire population. In this article, we propose a model to assist the Italian tax inspector (Guardia di Finanza, G.d.F.) in allocating its budget among different business clusters, via a controller-controlled Stackelberg game. The G.d.F. is seen as the leader, while potential evaders are segmented into classes according to their business sizes, as set by the Italian regulatory framework. Numerical results for the real Italian case for fiscal year 2015 are provided. Insights on the optimal number of controls the inspector will have to perform among different business clusters are discussed and compared to the strategy implemented by the G.d.F. PubDate: 2021-10-01 DOI: 10.1007/s10287-021-00416-6
Abstract: A new computational approach based on the pointwise regularity exponent of the price time series is proposed to estimate Value at Risk. The forecasts obtained are compared with those of two widely used methodologies: the variance-covariance method and the exponentially weighted moving average method. Our findings show that, in two very turbulent periods of financial markets, the forecasts obtained using our algorithm decidedly outperform the two benchmarks, providing more accurate estimates in terms of unconditional coverage, independence, and magnitude of losses. PubDate: 2021-08-06 DOI: 10.1007/s10287-021-00412-w
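For reference, one of the two benchmarks mentioned above, parametric VaR with a RiskMetrics-style EWMA volatility, can be sketched as follows; this is not the authors' regularity-exponent approach, and the return series is synthetic.

```python
import numpy as np
from scipy.stats import norm

def ewma_var(returns, lam=0.94, alpha=0.99):
    """One-day-ahead parametric VaR with exponentially weighted moving-average volatility."""
    var = returns[0] ** 2                       # initialize the variance estimate
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2  # exponentially weighted update
    return norm.ppf(alpha) * np.sqrt(var)       # loss quantile (reported as a positive number)

rng = np.random.default_rng(4)
rets = rng.normal(scale=0.012, size=750)        # toy daily return history
print("99% one-day VaR:", round(ewma_var(rets), 4))
```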
Abstract: Several emerging applications call for a fusion of statistical learning and stochastic programming (SP). We introduce a new class of models which we refer to as Predictive Stochastic Programming (PSP). Unlike ordinary SP, PSP models work with datasets that represent random covariates, often referred to as predictors (or features), and responses (or labels) in the machine learning literature. As a result, these PSP models call for methodologies that borrow relevant concepts from both learning and optimization. We refer to such a methodology as Learning Enabled Optimization (LEO). This paper sets forth the foundation for such a framework by introducing several novel concepts, such as statistical optimality, hypothesis tests for model fidelity, generalization error of PSP, and, finally, a non-parametric methodology for model selection. These new concepts, collectively referred to as LEO, provide a formal framework for modeling, solving, validating, and reporting solutions for PSP models. We illustrate the LEO framework by applying it to a production-marketing coordination model based on combining a pedagogical production planning model with an advertising dataset intended for sales prediction. PubDate: 2021-07-31 DOI: 10.1007/s10287-021-00400-0
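A minimal predict-then-optimize sketch in the same spirit (not the LEO methodology itself, which adds statistical optimality tests, generalization-error analysis, and model selection): fit a regression of the response on a covariate, turn its residuals into demand scenarios, and solve a toy newsvendor by sample average approximation. All data and prices are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy predictive newsvendor: demand depends on an advertising covariate.
rng = np.random.default_rng(5)
ads = rng.uniform(0, 10, size=300)
demand = 50 + 8 * ads + rng.normal(scale=12, size=300)

# Learning step: regress the response (demand) on the covariate.
reg = LinearRegression().fit(ads.reshape(-1, 1), demand)
residuals = demand - reg.predict(ads.reshape(-1, 1))

# Optimization step: for a new covariate value, build demand scenarios from the
# fitted model plus resampled residuals and pick the profit-maximizing order.
price, cost, ads_new = 5.0, 3.0, 7.0
scenarios = reg.predict(np.array([[ads_new]]))[0] + residuals    # empirical scenarios
orders = np.linspace(scenarios.min(), scenarios.max(), 200)
profit = [(price * np.minimum(q, scenarios) - cost * q).mean() for q in orders]
print("order quantity maximizing expected profit:", round(orders[int(np.argmax(profit))], 1))
```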
Abstract: The structure of networks plays a central role in the behavior of financial systems and their response to policy. Real-world networks, however, are rarely directly observable: banks’ assets and liabilities are typically known, but not who is lending how much to whom. This paper adds to the existing literature in two ways. First, it shows how to simulate realistic networks that are based on balance-sheet information. To do so, we introduce a model where links incur fixed costs, independent of contract size, but the cost per link decreases the more connected a bank is (scale economies). Second, to approach the optimization problem, we develop a new algorithm inspired by the transportation planning literature and research in stochastic search heuristics. Computational experiments find that the resulting networks are not only consistent with the balance sheets, but also resemble real-world financial networks in their density (sparse but not minimally dense) and in their core-periphery and disassortative structure. PubDate: 2021-07-08 DOI: 10.1007/s10287-021-00393-w
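For contrast, the standard maximum-entropy-style benchmark used in this literature spreads each bank's interbank assets over counterparties in proportion to their liabilities and then rebalances the margins; it matches the balance sheets but produces a fully dense network, unlike the sparse, core-periphery structures the paper's fixed-cost model is designed to reproduce. The figures below are invented.

```python
import numpy as np

assets = np.array([30.0, 20.0, 10.0, 40.0])        # interbank assets per bank
liabilities = np.array([25.0, 25.0, 30.0, 20.0])   # interbank liabilities per bank

L = np.outer(assets, liabilities) / liabilities.sum()   # L[i, j]: lending from i to j
np.fill_diagonal(L, 0.0)                                # no self-lending
for _ in range(100):                                    # RAS/IPF rebalancing of the margins
    L *= (assets / L.sum(axis=1))[:, None]              # match row totals (assets)
    L *= (liabilities / L.sum(axis=0))[None, :]         # match column totals (liabilities)

print("row sums:   ", L.sum(axis=1).round(2))
print("column sums:", L.sum(axis=0).round(2))
```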
Abstract: The most common application of Black’s formula is interest rate derivative pricing. Black’s model, a variant of the Black-Scholes option pricing model, was first introduced by Fischer Black in 1976. In recent market conditions, where global interest rates are at very low levels and in some markets are currently zero or negative, Black’s model, in its canonical form, fails to price interest rate options, since positive interest rates are assumed in its formula. In this paper we propose a heuristic method that, without explicit assumptions about the forward rate generating process, extends the domain of the cumulative standard normal distribution to negative interest rates and allows Black’s model to work in the conventional way. Furthermore, we provide derivations of the so-called five Greek letters, which enable finance professionals to evaluate the sensitivity of an option to various parameters. Along with the description of the methodology, we present an extensive simulation study and a comparison with the Normal model, which is widely used for option pricing in negative-rate environments. PubDate: 2021-07-02 DOI: 10.1007/s10287-021-00408-6
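To make the comparison concrete, the canonical Black (1976) formula and the Normal (Bachelier) model mentioned above can be sketched as below; this shows why the former breaks down for non-positive forwards or strikes while the latter does not. It is not the authors' heuristic extension, and the rates and volatilities are illustrative.

```python
import numpy as np
from scipy.stats import norm

def black76_call(F, K, sigma, T, df=1.0):
    """Black (1976) call on a forward rate F; undefined for F <= 0 or K <= 0."""
    d1 = (np.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return df * (F * norm.cdf(d1) - K * norm.cdf(d2))

def bachelier_call(F, K, sigma_n, T, df=1.0):
    """Normal (Bachelier) call: well defined for negative forwards and strikes."""
    d = (F - K) / (sigma_n * np.sqrt(T))
    return df * ((F - K) * norm.cdf(d) + sigma_n * np.sqrt(T) * norm.pdf(d))

# Positive-rate example where both models apply.
print("Black-76:  ", round(black76_call(F=0.02, K=0.02, sigma=0.30, T=1.0), 6))
# Negative forward rate: only the Normal model prices it directly.
print("Bachelier: ", round(bachelier_call(F=-0.001, K=0.0, sigma_n=0.006, T=1.0), 6))
```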