Abstract: Stochastic control problems in which there are no bounds on the rate of control reduce to so-called free-boundary problems in partial differential equations (PDEs). In a free-boundary problem, the solution of the PDE and the domain over which the PDE must be solved need to be determined simultaneously. Examples of such stochastic control problems are singular control, optimal stopping, and impulse control problems. Application areas of these problems are diverse and include finance, economics, queuing, healthcare, and public policy. In most cases, the free-boundary problem needs to be solved numerically.

In this survey, we present a recent computational method that solves these free-boundary problems. The method finds the free boundary by solving a sequence of fixed-boundary problems, which are relatively easy to solve numerically. We summarize and unify recent results on this moving-boundary method, illustrating its application on a set of classical problems in finance of increasing difficulty. This survey is intended for those who are primarily interested in computing numerical solutions to these problems. To this end, we include actual Matlab code for one of the problems studied, namely, American option pricing.

Suggested Citation: Kumar Muthuraman and Sunil Kumar (2008), "Solving Free-boundary Problems with Applications in Finance", Foundations and Trends® in Stochastic Systems: Vol. 1, No. 4, pp. 259-341. http://dx.doi.org/10.1561/0900000006

PubDate: Mon, 25 Aug 2008 00:00:00 +020
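The survey's moving-boundary method and its Matlab code are not reproduced here, but the flavor of the American-option free boundary can be seen with a standard binomial-tree sketch (a simpler, different technique than the authors' method). All parameters below are illustrative assumptions; the boundary estimate is the largest tree node at which early exercise is optimal at each time step.

```python
import numpy as np

def american_put_binomial(S0, K, r, sigma, T, N=200):
    """Price an American put on a CRR binomial tree and crudely estimate
    the early-exercise (free) boundary at each time step."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))       # up factor
    d = 1.0 / u                           # down factor
    p = (np.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    disc = np.exp(-r * dt)

    # payoffs at maturity, prices listed from highest to lowest
    S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
    V = np.maximum(K - S, 0.0)

    boundary = np.full(N, np.nan)         # boundary[n]: critical price at step n
    for n in range(N - 1, -1, -1):
        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
        cont = disc * (p * V[:-1] + (1 - p) * V[1:])   # continuation value
        exercise = np.maximum(K - S, 0.0)
        V = np.maximum(cont, exercise)
        ex = S[(exercise >= cont) & (exercise > 0)]    # stopping optimal here
        if ex.size:
            boundary[n] = ex.max()
    return V[0], boundary

price, boundary = american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0)
```

The recovered boundary lies below the strike and rises toward it as maturity approaches, which is the qualitative shape the free-boundary formulation predicts.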
Abstract: The notion of long range dependence is discussed from a variety of points of view, and a new approach is suggested. A number of related topics are also discussed, including connections with non-stationary processes, ergodic theory, self-similar processes and fractionally differenced processes, heavy tails and light tails, limit theorems, and large deviations.

Suggested Citation: Gennady Samorodnitsky (2007), "Long Range Dependence", Foundations and Trends® in Stochastic Systems: Vol. 1, No. 3, pp. 163-257. http://dx.doi.org/10.1561/0900000004

PubDate: Fri, 28 Dec 2007 00:00:00 +010
Abstract: This manuscript summarizes a line of research that maps certain classical problems of discrete mathematics, such as the Hamiltonian Cycle and Traveling Salesman Problems, into convex domains where continuum analysis can be carried out. Arguably, the inherent difficulty of these now-classical problems stems precisely from the discrete nature of the domains in which they are posed. The convexification of domains underpinning the reported results is achieved by assigning a probabilistic interpretation to key elements of the original deterministic problems.

In particular, the approaches summarized here build on a technique that embeds the Hamiltonian Cycle and Traveling Salesman Problems in a structured, singularly perturbed Markov decision process. The unifying idea is to interpret the subgraphs traced out by deterministic policies (including Hamiltonian cycles, if any) as extreme points of a convex polyhedron in a space filled with randomized policies.

The topic has now evolved to the point where there are many results, both theoretical and algorithmic, that exploit the nexus between graph-theoretic structures and both probabilistic and algebraic entities of related Markov chains. The latter include moments of first return times, limiting frequencies of visits to nodes, and the spectra of certain matrices traditionally associated with the analysis of Markov chains. Numerous open questions and problems are described in the presentation.

Suggested Citation: Jerzy A. Filar (2007), "Controlled Markov Chains, Graphs, and Hamiltonicity", Foundations and Trends® in Stochastic Systems: Vol. 1, No. 2, pp. 77-162. http://dx.doi.org/10.1561/0900000003

PubDate: Thu, 20 Dec 2007 00:00:00 +010
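The central idea can be made concrete with a toy example (entirely illustrative, not code from the monograph): on a small digraph, each deterministic stationary policy selects one successor per node, and the subgraph it traces is a Hamiltonian cycle exactly when following the policy from node 0 visits every node once and returns. Brute-force enumeration of policies shows the "Hamiltonian cycles among deterministic policies" viewpoint directly.

```python
import itertools

# Assumed example digraph: node -> list of successors.
adj = {0: [1, 2], 1: [2, 3], 2: [0, 3], 3: [0]}
n = len(adj)

def is_hamiltonian(policy):
    """True iff following policy[v] from node 0 visits all n nodes
    exactly once and returns to node 0."""
    seen, v = set(), 0
    for _ in range(n):
        if v in seen:
            return False
        seen.add(v)
        v = policy[v]
    return v == 0 and len(seen) == n

# A deterministic policy is one successor choice per node.
ham = [p for p in itertools.product(*(adj[v] for v in range(n)))
       if is_hamiltonian(p)]
```

In the monograph's setting, these deterministic policies are extreme points of a polytope of randomized policies, so the search for Hamiltonian cycles can be carried into a convex domain; the enumeration here is only for intuition on a graph small enough to inspect by hand.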
Abstract: This paper focuses on monotonicity results for dynamic systems that take values in the natural numbers or in multi-dimensional lattices. The results are mostly formulated in terms of controlled queueing systems, but there are also applications to maintenance systems, revenue management, and so forth. We concentrate on results that are obtained by inductively proving properties of the dynamic programming value function. We give a framework for using this method that unifies results obtained for different models. We also give a comprehensive overview of the results that can be obtained through it, discussing not only (partial) characterizations of optimal policies but also applications of monotonicity to optimization problems and to the comparison of systems.

Suggested Citation: Ger Koole (2007), "Monotonicity in Markov Reward and Decision Chains: Theory and Applications", Foundations and Trends® in Stochastic Systems: Vol. 1, No. 1, pp. 1-76. http://dx.doi.org/10.1561/0900000002

PubDate: Thu, 07 Jun 2007 00:00:00 +020
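The inductive approach can be mimicked numerically: run value iteration and observe the structural properties it would prove. The sketch below uses an assumed, illustrative admission-control model in a uniformized single-server queue (all parameters and the cost structure are my own, not taken from the paper): arriving jobs may be admitted for a reward or rejected, holding cost grows with the queue length, and costs are discounted.

```python
import numpy as np

# Illustrative uniformized admission control (all parameters assumed):
# arrivals at rate lam may be admitted for reward R or rejected; service
# at rate mu; holding cost x per period; discount beta; buffer cap N.
lam, mu, R, beta = 0.3, 0.5, 4.0, 0.95
N = 50

V = np.zeros(N + 1)
for _ in range(3000):                      # value iteration to near-fixed point
    Vnew = np.empty_like(V)
    for x in range(N + 1):
        admit = np.inf if x == N else -R + V[x + 1]   # collect R, job joins
        reject = V[x]
        Vnew[x] = x + beta * (lam * min(admit, reject)
                              + mu * V[max(x - 1, 0)]
                              + (1 - lam - mu) * V[x])
    V = Vnew

# admit exactly when the marginal cost V(x+1) - V(x) is below the reward R
admit_opt = np.array([1 if x < N and -R + V[x + 1] < V[x] else 0
                      for x in range(N + 1)])
```

For this model, the computed V is nondecreasing in the queue length and admit_opt has threshold form (admit below a critical queue length, reject above it); monotonicity and convexity properties of exactly this kind are what the monograph establishes by induction on the dynamic programming value function.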