
Abstract: Signal processing traditionally relies on classical statistical modeling techniques. Such model-based methods utilize mathematical formulations that represent the underlying physics, prior information, and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies, and may lead to poor performance when real systems display complex or dynamic behavior. More recently, deep learning approaches that use highly parametric deep neural networks (DNNs) have become increasingly popular. Deep learning systems do not rely on mathematical modeling and learn their mapping from data, which allows them to operate in complex environments. However, they lack the interpretability and reliability of model-based methods, typically require large training sets to obtain good performance, and tend to be computationally complex.

Model-based signal processing methods and data-centric deep learning each have their pros and cons. These paradigms can be characterized as the edges of a continuous spectrum varying in specificity and parameterization. The methodologies that lie in the middle ground of this spectrum, thus integrating model-based signal processing with deep learning, are referred to as model-based deep learning, and are the focus here.

This monograph provides a tutorial-style presentation of model-based deep learning methodologies. These are families of algorithms that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches. Such model-based deep learning methods exploit partial domain knowledge, via mathematical structures designed for specific problems, as well as learning from limited data. We accompany our presentation with running signal processing examples in super-resolution, tracking of dynamic systems, and array processing, showing how each is expressed using the provided characterization and specialized in each of the detailed methodologies. Our aim is to facilitate the design and study of future systems at the intersection of signal processing and machine learning that incorporate the advantages of both domains. The source code of our numerical examples is available and reproducible as Python notebooks.

Suggested Citation: Nir Shlezinger and Yonina C. Eldar (2023), "Model-Based Deep Learning", Foundations and Trends® in Signal Processing: Vol. 17, No. 4, pp. 291-416. http://dx.doi.org/10.1561/2000000113

PubDate: Mon, 21 Aug 2023 00:00:00 +020
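One well-known flavor of model-based deep learning (not necessarily the abstract's specific methodology) is deep unfolding, where each layer of a network mimics one iteration of a classical algorithm while a few of its parameters are learned from data. The sketch below unfolds ISTA for sparse recovery; the per-layer thresholds stand in for learnable parameters. All names and numbers are illustrative assumptions, not taken from the monograph.

```python
# Illustrative sketch of deep unfolding: each "layer" is one ISTA
# iteration x <- shrink(x - step * A^T (A x - y)), with a per-layer
# threshold that training would tune from data. Matrices are plain
# lists of rows to keep the example dependency-free.

def soft_threshold(v, lam):
    """Proximal operator of lam * |x| (the ISTA shrinkage step)."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def unfolded_ista(y, A, n_layers, step, thresholds):
    """Run n_layers unfolded ISTA iterations; thresholds[t] is the
    (hypothetically learned) shrinkage level of layer t."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for t in range(n_layers):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient g = A^T r, then gradient step and per-layer shrinkage
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft_threshold(x[j] - step * g[j], thresholds[t])
             for j in range(n)]
    return x
```

For instance, with A the 2x2 identity and y = [1.0, 0.0], a few layers drive the estimate to the soft-thresholded solution [0.9, 0.0], illustrating how the model (the ISTA recursion) supplies the structure while data would supply the thresholds.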


Abstract: Graph signal processing (GSP) has seen rapid developments in recent years. Since its introduction around ten years ago, numerous new ideas and practical applications related to the field have emerged. In this tutorial, we give an overview of some recent advances in generalizing GSP, with a focus on the extension to high-dimensional spaces, models, and structures. Alongside new frameworks proposed to tackle such problems, many new mathematical tools are being introduced. In the first part of the monograph, we review traditional GSP, highlight the challenges it faces, and motivate efforts to overcome such challenges, which is the theme of the rest of the monograph.

Suggested Citation: Xingchao Jian, Feng Ji and Wee Peng Tay (2023), "Generalizing Graph Signal Processing: High Dimensional Spaces, Models and Structures", Foundations and Trends® in Signal Processing: Vol. 17, No. 3, pp. 209-290. http://dx.doi.org/10.1561/2000000119

PubDate: Mon, 06 Mar 2023 00:00:00 +010
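The traditional GSP that this monograph generalizes rests on the graph Fourier transform (GFT): graph frequencies are eigenvalues of the graph Laplacian, and the GFT projects a signal onto the Laplacian's eigenvectors. As a minimal illustration (the graph and signal are made up, not from the text), the 2-node path graph has a closed-form eigendecomposition, so we can hardcode it:

```python
import math

# Toy sketch of the classical graph Fourier transform underlying GSP.
# For the 2-node path graph, the Laplacian is L = [[1, -1], [-1, 1]]
# with eigenvalues 0 and 2; its eigenvectors are known in closed form,
# so no numerical eigensolver is needed for this illustration.

def gft_2node(x):
    """GFT of a signal x on the 2-node path graph: project onto the
    Laplacian eigenvectors (low frequency first)."""
    s = 1.0 / math.sqrt(2.0)
    u0 = (s, s)    # eigenvector for eigenvalue 0 (the "DC" component)
    u1 = (s, -s)   # eigenvector for eigenvalue 2 (highest frequency)
    return [u0[0] * x[0] + u0[1] * x[1],
            u1[0] * x[0] + u1[1] * x[1]]
```

A constant signal such as [1.0, 1.0] has all of its energy at graph frequency 0, mirroring the role of the DC component in classical Fourier analysis; the high-dimensional extensions surveyed in the monograph generalize exactly this construction.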


Abstract: Deep learning has achieved remarkable success in many machine learning tasks such as image classification, speech recognition, and game playing. However, these breakthroughs are often difficult to translate into real-world engineering systems because deep learning models require a massive number of training samples, which are costly to obtain in practice. To address labeled-data scarcity, few-shot meta-learning optimizes learning algorithms that can adapt efficiently to new tasks. While meta-learning is gaining significant interest in the machine learning literature, its working principles and theoretical foundations are not as well understood in the engineering community.

This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications. After introducing meta-learning in comparison with conventional and joint learning, we describe the main meta-learning algorithms, as well as a general bilevel optimization framework for the definition of meta-learning techniques. Then, we summarize known results on the generalization capabilities of meta-learning from a statistical learning viewpoint. Applications to communication systems, including decoding and power allocation, are discussed next, followed by an introduction to aspects related to the integration of meta-learning with emerging computing technologies, namely neuromorphic and quantum computing. The monograph concludes with an overview of open research challenges.

Suggested Citation: Lisha Chen, Sharu Theresa Jose, Ivana Nikoloska, Sangwoo Park, Tianyi Chen and Osvaldo Simeone (2023), "Learning with Limited Samples: Meta-Learning and Applications to Communication Systems", Foundations and Trends® in Signal Processing: Vol. 17, No. 2, pp. 79-208. http://dx.doi.org/10.1561/2000000115

PubDate: Wed, 25 Jan 2023 00:00:00 +010


Abstract: We consider the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. We describe a simple and general framework in which the components are defined by loss functions (which include constraints), and the signal decomposition is carried out by minimizing the sum of losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation; but it also includes many other interesting cases. Summarizing and clarifying prior results, we give two distributed optimization methods for computing the decomposition, which find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each of the component loss functions, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., they handle each component separately. We derive tractable methods for evaluating the masked proximal operators of some loss functions that, to our knowledge, have not appeared in the literature.

Suggested Citation: Bennet E. Meyers and Stephen P. Boyd (2023), "Signal Decomposition Using Masked Proximal Operators", Foundations and Trends® in Signal Processing: Vol. 17, No. 1, pp. 1-78. http://dx.doi.org/10.1561/2000000122

PubDate: Mon, 16 Jan 2023 00:00:00 +010
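To give a concrete feel for the masked proximal operator idea, the sketch below evaluates it for one simple loss with a closed-form prox, the sum of squares over observed entries; unobserved entries (mask False) are passed through unchanged. The loss choice, function name, and data are illustrative assumptions, not the paper's derivations.

```python
# Sketch of a masked proximal operator for the loss
# ell(x) = 0.5 * sum over observed k of (x_k - a_k)^2.
# The prox with step t solves, per observed entry,
#   argmin_x 0.5*(x - a)^2 + (1/(2t))*(x - v)^2  =>  x = (t*a + v)/(t + 1);
# missing entries are left at their input value v (the "masked" part).

def masked_prox_sum_squares(v, a, mask, t):
    """Masked prox of 0.5*||x - a||^2 restricted to observed entries.

    v    -- input vector (prox argument)
    a    -- target values (only used where mask is True)
    mask -- True where the entry is observed, False where missing
    t    -- prox step size (t > 0)
    """
    out = []
    for v_k, a_k, known in zip(v, a, mask):
        out.append((t * a_k + v_k) / (t + 1.0) if known else v_k)
    return out
```

For example, with v = [0.0, 5.0], a = [2.0, 0.0], mask = [True, False], and t = 1.0, the observed entry is pulled halfway toward its target (giving 1.0) while the missing entry stays at 5.0; a decomposition method needs only such per-component evaluations, which is what makes the framework distributed.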