Abstract: In this monograph, we provide an overview of the information relaxation approach for calculating performance bounds in stochastic dynamic programs (DPs). The technique involves (1) relaxing the temporal feasibility (or nonanticipativity) constraints so the decision-maker (DM) has additional information before making decisions, and (2) incorporating a penalty that punishes the DM for violating the temporal feasibility constraints. The goal of this monograph is to provide a self-contained overview of the key theoretical results of the information relaxation approach as well as a review of research that has successfully used these techniques in a broad range of applications. We illustrate the information relaxation approach on applications in inventory management, assortment planning, and portfolio optimization.

Suggested Citation: David B. Brown and James E. Smith (2022), "Information Relaxations and Duality in Stochastic Dynamic Programs: A Review and Tutorial", Foundations and Trends® in Optimization: Vol. 5: No. 3, pp 246-339. http://dx.doi.org/10.1561/2400000027

PubDate: Mon, 21 Mar 2022 00:00:00 +0100
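Step (1) alone, granting the DM perfect hindsight and charging a zero penalty, already yields a valid upper bound on the optimal value of a maximization DP. A minimal sketch on a toy newsvendor problem (all problem data here is illustrative, not taken from the monograph):

```python
import numpy as np

rng = np.random.default_rng(0)
price, cost = 5.0, 3.0
demand = rng.poisson(20, size=10_000)        # simulated demand scenarios
orders = np.arange(0, 61)                    # candidate order quantities

# Profit of every (scenario, order) pair.
profit = price * np.minimum(demand[:, None], orders[None, :]) - cost * orders

# Nonanticipative policy: a single order fixed before demand is revealed.
v_primal = profit.mean(axis=0).max()

# Perfect-information relaxation with zero penalty: order after seeing demand.
v_dual = profit.max(axis=1).mean()

print(v_primal, v_dual)                      # v_dual upper-bounds v_primal
```

With a well-chosen penalty the gap between the two values shrinks; the zero-penalty bound above is deliberately loose.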

Abstract: This monograph covers some recent advances in a range of acceleration techniques frequently used in convex optimization. We first use quadratic optimization problems to introduce two key families of methods, namely momentum and nested optimization schemes. They coincide in the quadratic case to form the Chebyshev method.

We discuss momentum methods in detail, starting with the seminal work of Nesterov [1], and structure convergence proofs using a few master templates, such as that for optimized gradient methods, which provide the key benefit of showing how momentum methods optimize convergence guarantees. We further cover proximal acceleration, at the heart of the Catalyst and Accelerated Hybrid Proximal Extragradient frameworks, using similar algorithmic patterns.

Common acceleration techniques rely directly on the knowledge of some of the regularity parameters in the problem at hand. We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates while adapting to unobserved regularity parameters.

Suggested Citation: Alexandre d’Aspremont, Damien Scieur and Adrien Taylor (2021), "Acceleration Methods", Foundations and Trends® in Optimization: Vol. 5: No. 1-2, pp 1-245. http://dx.doi.org/10.1561/2400000036

PubDate: Wed, 15 Dec 2021 00:00:00 +0100
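As a hedged sketch of the momentum idea (problem data and step sizes are illustrative), here is Nesterov's accelerated gradient method against plain gradient descent on an ill-conditioned quadratic, where the benefit of momentum is easy to see:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50
eigs = np.linspace(1.0, 100.0, d)               # condition number 100
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(eigs) @ Q.T                     # f(x) = 0.5 * x' A x, minimized at 0
L = eigs.max()                                  # Lipschitz constant of the gradient

def gd(x, steps):
    """Plain gradient descent with step 1/L."""
    for _ in range(steps):
        x = x - (A @ x) / L
    return x

def nesterov(x, steps):
    """Nesterov's accelerated gradient with the classical t_k momentum sequence."""
    y, x_prev, t = x.copy(), x.copy(), 1.0
    for _ in range(steps):
        x_next = y - (A @ y) / L                # gradient step from extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x_prev)
        x_prev, t = x_next, t_next
    return x_prev

x0 = rng.standard_normal(d)
err_gd = np.linalg.norm(gd(x0, 200))
err_nag = np.linalg.norm(nesterov(x0, 200))
print(err_gd, err_nag)                          # the accelerated iterate is far closer to 0
```

In the quadratic case the iteration can also be matched against the Chebyshev method mentioned above; the momentum coefficients play the role of the Chebyshev recurrence.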

Abstract: Deep neural networks are widely used for nonlinear function approximation, with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This article surveys methods that have emerged recently for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. We discuss fundamental differences and connections between existing algorithms. In addition, we provide pedagogical implementations of existing methods and compare them on a set of benchmark problems.

Suggested Citation: Changliu Liu, Tomer Arnon, Christopher Lazarus, Christopher Strong, Clark Barrett and Mykel J. Kochenderfer (2020), "Algorithms for Verifying Deep Neural Networks", Foundations and Trends® in Optimization: Vol. 4: No. . http://dx.doi.org/10.1561/2400000035

PubDate: Thu, 31 Dec 2020 00:00:00 +0100
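Among the reachability-style approaches in this family, interval bound propagation is perhaps the simplest sound (but incomplete) method. A minimal sketch, assuming a feedforward ReLU network given as weight/bias lists (the network below is a made-up example, not one of the survey's benchmarks):

```python
import numpy as np

def interval_forward(lo, hi, weights, biases):
    """Soundly propagate an input box through affine + ReLU layers.
    Returns an interval guaranteed to contain every reachable output
    (generally not tight, due to the wrapping effect)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for k, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if k < len(weights) - 1:                 # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Tiny illustrative 2-3-1 ReLU network with random parameters.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [rng.standard_normal(3), rng.standard_normal(1)]

lo, hi = interval_forward([-1.0, -1.0], [1.0, 1.0], weights, biases)

def net(x):                                      # the exact network, for comparison
    h = np.maximum(weights[0] @ x + biases[0], 0)
    return weights[1] @ h + biases[1]

outs = np.array([net(x) for x in rng.uniform(-1, 1, size=(1000, 2))])
print(lo, hi)                                    # sound outer bounds on the outputs
```

Soundness means every sampled output must land inside [lo, hi]; completeness (tightness) is what the optimization- and search-based methods in the survey add on top.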

Abstract: This monograph develops a comprehensive statistical learning framework that is robust to (distributional) perturbations in the data using Distributionally Robust Optimization (DRO) under the Wasserstein metric. Beginning with fundamental properties of the Wasserstein metric and the DRO formulation, we explore duality to arrive at tractable formulations and develop finite-sample, as well as asymptotic, performance guarantees. We consider a series of learning problems, including (i) distributionally robust linear regression; (ii) distributionally robust regression with group structure in the predictors; (iii) distributionally robust multi-output regression and multiclass classification; (iv) optimal decision making that combines distributionally robust regression with nearest-neighbor estimation; (v) distributionally robust semi-supervised learning; and (vi) distributionally robust reinforcement learning. For each problem, we derive a tractable DRO relaxation, establish a connection between robustness and regularization, and obtain bounds on the prediction and estimation errors of the solution. Beyond theory, we include numerical experiments and case studies using synthetic and real data. The real data experiments are all associated with various health informatics problems, an application area which provided the initial impetus for this work.

Suggested Citation: Ruidi Chen and Ioannis Ch. Paschalidis (2020), "Distributionally Robust Learning", Foundations and Trends® in Optimization: Vol. 4: No. 1-2, pp 1-243. http://dx.doi.org/10.1561/2400000026

PubDate: Wed, 23 Dec 2020 00:00:00 +0100
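The robustness-regularization connection can be seen already in a one-sample identity: for the absolute loss and an L2-bounded perturbation of the features, the worst-case loss equals the nominal loss plus a norm penalty on the coefficients. A small numerical check (data is illustrative; this is the single-point version of the kind of equivalence the monograph develops in full):

```python
import numpy as np

rng = np.random.default_rng(2)
beta = rng.standard_normal(5)
x, y = rng.standard_normal(5), rng.standard_normal()
eps = 0.3
a = y - beta @ x                          # nominal residual

# Closed form: sup_{||d||_2 <= eps} |y - beta'(x + d)| = |a| + eps * ||beta||_2
closed = abs(a) + eps * np.linalg.norm(beta)

# The sup is attained at d* = -eps * sign(a) * beta / ||beta||_2.
d_star = -eps * np.sign(a) * beta / np.linalg.norm(beta)
worst = abs(a - beta @ d_star)

# Random feasible perturbations never exceed the closed form.
ds = rng.standard_normal((1000, 5))
ds *= eps / np.linalg.norm(ds, axis=1, keepdims=True)
losses = np.abs(a - ds @ beta)
print(worst, closed)
```

Averaging this identity over samples turns the robust objective into the empirical loss plus a coefficient-norm regularizer, which is the robustness-regularization link in miniature.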

Abstract: Structured optimization uses a prescribed set of atoms to assemble a solution that fits a model to data. Polarity, which extends the familiar notion of orthogonality from linear sets to general convex sets, plays a special role in a simple and geometric form of convex duality. This duality correspondence yields a general notion of alignment that leads to an intuitive and complete description of how atoms participate in the final decomposition of the solution. The resulting geometric perspective leads to variations of existing algorithms effective for large-scale problems. We illustrate these ideas with many examples, including applications in matrix completion and morphological component analysis for the separation of mixtures of signals.

Suggested Citation: Zhenan Fan, Halyun Jeong, Yifan Sun and Michael P. Friedlander (2020), "Atomic Decomposition via Polar Alignment: The Geometry of Structured Optimization", Foundations and Trends® in Optimization: Vol. 3: No. 4, pp 280-366. http://dx.doi.org/10.1561/2400000028

PubDate: Tue, 24 Nov 2020 00:00:00 +0100
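To make the alignment idea concrete for the L1 atomic set (signed coordinate vectors): at a lasso optimum, the gradient of the smooth part attains the regularization level exactly on the active atoms and stays below it elsewhere. A minimal sketch using ISTA, a standard proximal-gradient solver (illustrative data; not necessarily the algorithm variations the monograph develops):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 2.0

# Solve min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1 with ISTA (proximal gradient).
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(10)
for _ in range(5000):
    z = x - step * (A.T @ (A @ x - b))                     # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

g = A.T @ (A @ x - b)                  # gradient of the smooth part at the solution
support = np.abs(x) > 1e-8             # active atoms (signed coordinate vectors)

# Alignment: |g_i| = lam exactly on active atoms, |g_i| <= lam elsewhere.
print(np.abs(g[support]), lam)
```

The atoms in the support are precisely those "aligned" with the negative gradient in the polar sense; inactive atoms sit strictly inside the polar constraint.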

Abstract: Index tracking is a very popular passive investment strategy. Since an index cannot be traded directly, index tracking refers to the process of creating a portfolio that approximates its performance. A straightforward way to do that is to purchase all the assets that compose an index in appropriate quantities. However, to simplify the execution, avoid small and illiquid positions, and reduce transaction costs, it is desirable that the tracking portfolio consist of a small number of assets, i.e., we wish to create a sparse portfolio.

Although index tracking originates in the financial industry, it is in fact a pure signal processing problem: a regression of the financial historical data subject to some portfolio constraints, with some caveats and particularities. Furthermore, the sparse index tracking problem is similar to many sparsity formulations in the signal processing area in the sense that it is a regression problem with some sparsity requirements. In its original form, sparse index tracking can be formulated as a combinatorial optimization problem. A commonly used approach is to use mixed-integer programming (MIP) to solve small-sized problems. Nevertheless, MIP solvers are not applicable to high-dimensional problems since the running time can be prohibitive for practical use.

The goal of this monograph is to provide an in-depth overview of the index tracking problem and analyze all the caveats and practical issues an investor might face, such as the frequent rebalancing of weights, the changes in the index composition, the transaction costs, etc. Furthermore, a unified framework for a large variety of sparse index tracking formulations is provided. The derived algorithms are very attractive for practical use since they provide efficient tracking portfolios orders of magnitude faster than MIP solvers.

Suggested Citation: Konstantinos Benidis, Yiyong Feng and Daniel P. Palomar (2018), "Optimization Methods for Financial Index Tracking: From Theory to Practice", Foundations and Trends® in Optimization: Vol. 3: No. 3, pp 171-279. http://dx.doi.org/10.1561/2400000021

PubDate: Thu, 07 Jun 2018 00:00:00 +0200
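A crude two-step heuristic, select a small support and then refit weights on it, already illustrates the sparse tracking trade-off. Everything below (return data, budget k, the selection rule) is illustrative and much simpler than the monograph's algorithms:

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, k = 250, 40, 8                     # days, assets, sparsity budget
R = 0.01 * rng.standard_normal((T, N))   # simulated asset returns
w_true = rng.dirichlet(np.ones(N))       # hidden index weights
index = R @ w_true                       # index returns to track

# Dense tracker: unconstrained least-squares over all N assets.
w_dense, *_ = np.linalg.lstsq(R, index, rcond=None)

# Two-step sparse heuristic: keep the k largest dense weights, then refit.
keep = np.argsort(-np.abs(w_dense))[:k]
w_sub, *_ = np.linalg.lstsq(R[:, keep], index, rcond=None)
w_sparse = np.zeros(N)
w_sparse[keep] = w_sub

te_dense = np.sqrt(np.mean((R @ w_dense - index) ** 2))   # tracking error
te_sparse = np.sqrt(np.mean((R @ w_sparse - index) ** 2))
print(te_dense, te_sparse)               # sparsity trades accuracy for only k holdings
```

The combinatorial formulation would search over all supports of size k; the MM-style algorithms the monograph derives avoid both that search and the MIP machinery.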