Foundations and Trends® in Signal Processing
Journal Prestige (SJR): 0.23
Citation Impact (citeScore): 2
Number of Followers: 7  
 
  Full-text available via subscription
ISSN (Print) 1932-8346 - ISSN (Online) 1932-8354
Published by Now Publishers Inc
  • Generalizing Graph Signal Processing: High Dimensional Spaces, Models and Structures

      Abstract: Graph signal processing (GSP) has seen rapid developments in recent years. Since its introduction around ten years ago, we have seen numerous new ideas and practical applications related to the field. In this tutorial, we give an overview of some recent advances in generalizing GSP, with a focus on the extension to high-dimensional spaces, models, and structures. Alongside new frameworks proposed to tackle such problems, many new mathematical tools are being introduced. In the first part of the monograph, we will review traditional GSP, highlight the challenges it faces, and motivate efforts in overcoming such challenges, which will be the theme of the rest of the monograph.
      Suggested Citation: Xingchao Jian, Feng Ji and Wee Peng Tay (2023), "Generalizing Graph Signal Processing: High Dimensional Spaces, Models and Structures", Foundations and Trends® in Signal Processing: Vol. 17: No. 3, pp. 209-290. http://dx.doi.org/10.1561/2000000119
      PubDate: Mon, 06 Mar 2023 00:00:00 +010
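      A point of reference for the traditional GSP reviewed in the first part of the monograph: a graph signal is analyzed through the eigendecomposition of a graph shift operator such as the Laplacian, whose eigenvectors play the role of a Fourier basis. The short sketch below is an illustrative assumption (the toy graph, signal, and variable names are not from the monograph), showing a graph Fourier transform and its inverse in NumPy.

      import numpy as np

      # Illustrative 4-node path graph (an assumption, not from the monograph).
      A = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      D = np.diag(A.sum(axis=1))           # degree matrix
      L = D - A                            # combinatorial graph Laplacian

      # Graph Fourier basis: eigenvectors of L, ordered from smooth to
      # oscillatory graph frequencies (eigh returns ascending eigenvalues).
      freqs, U = np.linalg.eigh(L)

      x = np.array([1.0, 0.8, 0.2, -0.5])  # one signal value per node
      x_hat = U.T @ x                      # graph Fourier transform
      x_rec = U @ x_hat                    # inverse transform recovers x

      print("graph frequencies:", np.round(freqs, 3))
      print("GFT coefficients: ", np.round(x_hat, 3))
      print("reconstruction ok:", np.allclose(x, x_rec))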
       
  • Learning with Limited Samples: Meta-Learning and Applications to Communication Systems

      Abstract: Deep learning has achieved remarkable success in many machine learning tasks such as image classification, speech recognition, and game playing. However, these breakthroughs are often difficult to translate into real-world engineering systems because deep learning models require a massive number of training samples, which are costly to obtain in practice. To address labeled data scarcity, few-shot meta-learning optimizes learning algorithms that can efficiently adapt to new tasks quickly. While meta-learning is gaining significant interest in the machine learning literature, its working principles and theoretic fundamentals are not as well understood in the engineering community.
      This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications. After introducing meta-learning in comparison with conventional and joint learning, we describe the main meta-learning algorithms, as well as a general bilevel optimization framework for the definition of meta-learning techniques. Then, we summarize known results on the generalization capabilities of meta-learning from a statistical learning viewpoint. Applications to communication systems, including decoding and power allocation, are discussed next, followed by an introduction to aspects related to the integration of meta-learning with emerging computing technologies, namely neuromorphic and quantum computing. The monograph is concluded with an overview of open research challenges.
      Suggested Citation: Lisha Chen, Sharu Theresa Jose, Ivana Nikoloska, Sangwoo Park, Tianyi Chen and Osvaldo Simeone (2023), "Learning with Limited Samples: Meta-Learning and Applications to Communication Systems", Foundations and Trends® in Signal Processing: Vol. 17: No. 2, pp. 79-208. http://dx.doi.org/10.1561/2000000115
      PubDate: Wed, 25 Jan 2023 00:00:00 +010
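      The bilevel view described above can be illustrated on a toy problem. The following sketch is an assumption for illustration (not code or an example from the monograph): it meta-learns an initialization for one-dimensional linear regression tasks, with an inner loop that adapts to each task by a few gradient steps and an outer loop that updates the shared initialization using a first-order (MAML-style) approximation.

      import numpy as np

      rng = np.random.default_rng(0)

      def task_batch(w_true, n=10):
          # Sample (x, y) pairs from a hypothetical 1-D linear task y = w_true * x.
          x = rng.normal(size=n)
          return x, w_true * x

      def loss_grad(w, x, y):
          # Gradient of the mean squared error 0.5 * mean((w*x - y)^2) w.r.t. w.
          return np.mean((w * x - y) * x)

      w_meta = 0.0                         # shared initialization (the meta-parameter)
      inner_lr, outer_lr = 0.1, 0.05

      for it in range(2000):
          w_task = rng.uniform(0.5, 1.5)   # draw a new task
          xs, ys = task_batch(w_task)      # support set (inner loop, adaptation)
          xq, yq = task_batch(w_task)      # query set (outer loop, meta-objective)

          # Inner (lower-level) loop: a few gradient steps from the initialization.
          w = w_meta
          for _ in range(3):
              w -= inner_lr * loss_grad(w, xs, ys)

          # Outer (upper-level) update, first-order approximation: move the
          # initialization along the query-set gradient at the adapted parameter.
          w_meta -= outer_lr * loss_grad(w, xq, yq)

      print("meta-learned initialization:", round(w_meta, 3))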
       
  • Signal Decomposition Using Masked Proximal Operators

      Abstract: We consider the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. We describe a simple and general framework in which the components are defined by loss functions (which include constraints), and the signal decomposition is carried out by minimizing the sum of losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation; but it also includes many other interesting cases. Summarizing and clarifying prior results, we give two distributed optimization methods for computing the decomposition, which find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each of the component loss functions, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., handle each component separately. We derive tractable methods for evaluating the masked proximal operators of some loss functions that, to our knowledge, have not appeared in the literature.
      Suggested Citation: Bennet E. Meyers and Stephen P. Boyd (2023), "Signal Decomposition Using Masked Proximal Operators", Foundations and Trends® in Signal Processing: Vol. 17: No. 1, pp. 1-78. http://dx.doi.org/10.1561/2000000122
      PubDate: Mon, 16 Jan 2023 00:00:00 +010
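      The loss-based framework in the abstract can be made concrete with a small convex instance. The sketch below is an illustrative assumption (it is not the monograph's solver or example): a partially observed signal is split into a smooth component and a sparse component by block coordinate descent, where the sparse update is exactly a proximal (soft-thresholding) step applied only on the observed entries, in the spirit of a masked proximal operator.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      t = np.linspace(0, 1, n)
      smooth_true = np.sin(2 * np.pi * t)
      sparse_true = np.zeros(n)
      sparse_true[[40, 120, 170]] = [2.0, -1.5, 1.0]          # a few spikes
      y = smooth_true + sparse_true + 0.05 * rng.normal(size=n)

      mask = rng.random(n) > 0.2            # about 80% of entries observed
      M = np.diag(mask.astype(float))

      # Second-difference operator penalizing roughness of the smooth component.
      D = np.diff(np.eye(n), 2, axis=0)

      lam_smooth, lam_sparse = 50.0, 0.4
      x_smooth = np.zeros(n)
      x_sparse = np.zeros(n)

      for _ in range(100):
          # Smooth component: quadratic subproblem, solved as a linear system.
          x_smooth = np.linalg.solve(M + lam_smooth * D.T @ D, M @ (y - x_sparse))
          # Sparse component: soft-thresholding (proximal step) on observed entries.
          r = y - x_smooth
          shrunk = np.sign(r) * np.maximum(np.abs(r) - lam_sparse / 2, 0.0)
          x_sparse = np.where(mask, shrunk, 0.0)

      print("detected spike locations:", np.nonzero(np.abs(x_sparse) > 0.5)[0])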
       
  • Online Component Analysis, Architectures and Applications

      Abstract: This monograph deals with principal component analysis (PCA), kernel principal component analysis (KPCA), and independent component analysis (ICA), highlighting their applications to streaming-data implementations. The basic concepts related to PCA, KPCA, and ICA are widely available in the literature; however, very few texts deal with their practical implementation in computationally limited resources. The presentation tries to emphasize the current solutions considering possible constraints in power consumption and desirable computational complexity. For instance, there are good examples in biomedical engineering applications where tools like PCA and ICA can sort out the human body's activities. For example, it is possible to remove noise and undesirable artifacts from a target signal such as EEG and ECG, among others. In turn, KPCA may be a valuable resource for non-linear image denoising. Nonetheless, many current solutions rely on batch processing implemented in general-purpose computing resources.
      In general terms, PCA consists of a sequence of uncorrelated data projections ordered according to their variances and employing mutually orthogonal directions. PCA is mighty in extracting hidden linear structures in high-dimension datasets. The standard PCA implementation computes the eigenvectors of the data-covariance matrix, retaining those directions to which the data exhibit the highest projection variances. This concept can be extended to the so-called Kernel PCA, wherein the data instances are implicitly mapped into a high-dimensional feature space via some non-linear transform, typically unknown. Conversely, ICA strengthens the PCA maximization variance approach by imposing the strict premise of mutual independence on the resulting projections. In fact, ICA comes to rescue the traditional tools when one aims at assessing non-Gaussian sources from data, often not available for direct measurement. Frequently, ICA and KPCA are more powerful tools for solving challenging tasks than PCA since they exploit high-order statistics from data.
      All these methods require some simplifications to allow a simple online implementation when coping with streaming data. This monograph describes some state-of-the-art solutions for PCA, KPCA, and ICA, emphasizing their online deployments. Many online PCA and, more recently, KPCA techniques were proposed based on Hebbian learning rules and fixed-point iterative equations. Notably, online KPCA solutions also include data selection strategies to define a compact dictionary over which the kernel components are expanded. The complexity of these dictionaries is controlled by simply setting a single hyperparameter. In both cases, the online extensions proposed rely on simple equations, can track nonstationary environments, and require reduced storage, enabling their use in real-time applications operating in low-cost embedded hardware.
      This monograph discusses the state-of-the-art online PCA and KPCA techniques in a unified and principled manner, presenting solutions that achieve a higher convergence speed and accuracy in many applications, particularly image processing. Besides, this work also explains how to remove various artifacts from data records based on blind source separation (BSS) by ICA, splitting feature identification from feature separation. Herein, three FastICA online hardware architectures and implementations for biomedical signal processing are addressed. The main features are summarized as follows: 1) energy-efficient FastICA using the early determination scheme; 2) cost-effective variable-channel FastICA using the Gram-Schmidt-based whitening algorithm; and 3) moving-window-based online FastICA algorithm with limited memory. The post-layout simulation results with artificial and EEG data validate the design concepts.
      In summary, this monograph presents the leading algorithmic solutions for PCA, KPCA, ICA, Iterative PCA, Online KPCA, and Online ICA, focusing on approaches amenable to process streaming signals. Furthermore, it provides some insights into how to choose the right solution for practical systems. Along the way, some implementation examples are provided in a variety of areas.
      Suggested Citation: João B. O. Souza Filho, Lan-Da Van, Tzyy-Ping Jung and Paulo S. R. Diniz (2022), "Online Component Analysis, Architectures and Applications", Foundations and Trends® in Signal Processing: Vol. 16: No. 3-4, pp. 224-429. http://dx.doi.org/10.1561/2000000112
      PubDate: Wed, 23 Nov 2022 00:00:00 +010
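      As a concrete instance of the Hebbian-rule family mentioned above, the following sketch (an illustrative assumption, not one of the monograph's architectures) tracks the leading principal component of a synthetic stream one sample at a time with Oja's rule and compares it with the batch eigenvector of the sample covariance.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic 2-D stream with one dominant direction (illustrative data).
      direction = np.array([0.8, 0.6])
      stream = rng.normal(size=(5000, 1)) * 2.0 * direction + 0.2 * rng.normal(size=(5000, 2))

      # Oja's rule: w <- w + eta * y * (x - y * w) with y = w.x, i.e., a Hebbian
      # update plus a correction term that keeps ||w|| close to 1.
      w = rng.normal(size=2)
      w /= np.linalg.norm(w)
      eta = 0.01
      for x in stream:
          y = w @ x
          w += eta * y * (x - y * w)
      w /= np.linalg.norm(w)

      # Batch reference: leading eigenvector of the sample covariance matrix.
      cov = stream.T @ stream / len(stream)
      w_batch = np.linalg.eigh(cov)[1][:, -1]

      print("online estimate:", np.round(w, 3))
      print("batch estimate: ", np.round(w_batch, 3))
      print("|cosine similarity|:", round(abs(w @ w_batch), 3))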
       
  • An Introduction to Quantum Machine Learning for Engineers

      Abstract: In the current noisy intermediate-scale quantum (NISQ) era, quantum machine learning is emerging as a dominant paradigm to program gate-based quantum computers. In quantum machine learning, the gates of a quantum circuit are parameterized, and the parameters are tuned via classical optimization based on data and on measurements of the outputs of the circuit. Parameterized quantum circuits (PQCs) can efficiently address combinatorial optimization problems, implement probabilistic generative models, and carry out inference (classification and regression). This monograph provides a self-contained introduction to quantum machine learning for an audience of engineers with a background in probability and linear algebra. It first describes the background, concepts, and tools necessary to describe quantum operations and measurements. Then, it covers parameterized quantum circuits, the variational quantum eigensolver, as well as unsupervised and supervised quantum machine learning formulations.
      Suggested Citation: Osvaldo Simeone (2022), "An Introduction to Quantum Machine Learning for Engineers", Foundations and Trends® in Signal Processing: Vol. 16: No. 1-2, pp. 1-223. http://dx.doi.org/10.1561/2000000118
      PubDate: Wed, 27 Jul 2022 00:00:00 +020
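      The PQC workflow summarized above (parameterize gates, measure the circuit output, optimize classically) can be mimicked for a single qubit on a classical simulator. The sketch below is an illustrative assumption, not material from the monograph: it applies a parameterized RY rotation to the |0> state, estimates the Z expectation from simulated measurement shots, and minimizes it by gradient descent with the parameter-shift rule.

      import numpy as np

      rng = np.random.default_rng(0)

      def expectation_z(theta, shots=2000):
          # Estimate <Z> after applying RY(theta) to |0>, from simulated shots.
          p0 = np.cos(theta / 2) ** 2            # probability of outcome 0
          counts0 = rng.binomial(shots, p0)
          return (2 * counts0 - shots) / shots   # +1 for outcome 0, -1 for outcome 1

      theta = 0.1                                # initial circuit parameter
      lr = 0.2
      for step in range(60):
          # Parameter-shift rule: exact gradient of <Z> for a single RY rotation.
          grad = 0.5 * (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2))
          theta -= lr * grad

      print("optimized theta:", round(theta, 3), "(ideal: pi ~ 3.142)")
      print("final <Z>:", round(expectation_z(theta, shots=20000), 3), "(ideal: -1)")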
       
  • Wireless for Machine Learning: A Survey

      Abstract: As data generation increasingly takes place on devices without a wired connection, Machine Learning (ML) related traffic will be ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable to support ML, which creates the need for new wireless communication methods. In this monograph, we give a comprehensive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes within the literature, analog over-the-air computation and digital radio resource management optimized for ML. This survey gives an introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
      Suggested Citation: Henrik Hellström, José Mairton B. da Silva Jr., Mohammad Mohammadi Amiri, Mingzhe Chen, Viktoria Fodor, H. Vincent Poor and Carlo Fischione (2022), "Wireless for Machine Learning: A Survey", Foundations and Trends® in Signal Processing: Vol. 15: No. 4, pp. 290-399. http://dx.doi.org/10.1561/2000000114
      PubDate: Thu, 09 Jun 2022 00:00:00 +020
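      Analog over-the-air computation, one of the two themes the survey identifies, exploits the fact that simultaneously transmitted analog signals superpose at the receiver, so a sum (and hence an average) of local updates is obtained from a single channel use. The following sketch is an illustrative assumption, not taken from the survey: several devices transmit their local gradient vectors at once over an additive white Gaussian noise channel and the receiver recovers a noisy average.

      import numpy as np

      rng = np.random.default_rng(0)

      num_devices, dim = 20, 8
      # Hypothetical local gradients held by each device (illustrative data).
      local_grads = rng.normal(loc=1.0, scale=0.5, size=(num_devices, dim))

      noise_std = 0.1
      # All devices transmit simultaneously; the channel physically adds the
      # analog waveforms, and the receiver observes the sum plus noise.
      received = local_grads.sum(axis=0) + noise_std * rng.normal(size=dim)
      ota_average = received / num_devices

      true_average = local_grads.mean(axis=0)
      print("true average:        ", np.round(true_average, 3))
      print("over-the-air average:", np.round(ota_average, 3))
      print("aggregation error:   ", round(float(np.linalg.norm(ota_average - true_average)), 4))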
       
  • Bilevel Methods for Image Reconstruction

      Abstract: This review discusses methods for learning parameters for image reconstruction problems using bilevel formulations. Image reconstruction typically involves optimizing a cost function to recover a vector of unknown variables that agrees with collected measurements and prior assumptions. State-of-the-art image reconstruction methods learn these prior assumptions from training data using various machine learning techniques, such as bilevel methods.
      One can view the bilevel problem as formalizing hyperparameter optimization, as bridging machine learning and cost function based optimization methods, or as a method to learn variables best suited to a specific task. More formally, bilevel problems attempt to minimize an upper-level loss function, where variables in the upper-level loss function are themselves minimizers of a lower-level cost function.
      This review contains a running example problem of learning tuning parameters and the coefficients for sparsifying filters used in a regularizer. Such filters generalize the popular total variation regularization method, and learned filters are closely related to convolutional neural network approaches that are rapidly gaining in popularity. Here, the lower-level problem is to reconstruct an image using a regularizer with learned sparsifying filters; the corresponding upper-level optimization problem involves a measure of reconstructed image quality based on training data.
      This review discusses multiple perspectives to motivate the use of bilevel methods and to make them more easily accessible to different audiences. We then turn to ways to optimize the bilevel problem, providing pros and cons of the variety of proposed approaches. Finally, we overview bilevel applications in image reconstruction.
      Suggested Citation: Caroline Crockett and Jeffrey A. Fessler (2022), "Bilevel Methods for Image Reconstruction", Foundations and Trends® in Signal Processing: Vol. 15: No. 2-3, pp. 121-289. http://dx.doi.org/10.1561/2000000111
      PubDate: Thu, 05 May 2022 00:00:00 +020
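      The bilevel structure described above can be made concrete with a tiny denoising example. In the sketch below (an illustrative assumption, not the review's running example with learned sparsifying filters), the lower-level problem is Tikhonov denoising with a single tuning parameter lam, which has a closed-form minimizer, and the upper level selects lam to minimize the reconstruction error on training pairs.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 128
      t = np.linspace(0, 1, n)

      # Hypothetical training pairs: clean signals and their noisy measurements.
      clean = np.stack([np.sin(2 * np.pi * k * t) for k in (1, 2, 3)])
      noisy = clean + 0.3 * rng.normal(size=clean.shape)

      D = np.diff(np.eye(n), 2, axis=0)      # second-difference (roughness) operator
      DtD = D.T @ D

      def lower_level(y, lam):
          # Closed-form minimizer of 0.5*||x - y||^2 + 0.5*lam*||D x||^2.
          return np.linalg.solve(np.eye(n) + lam * DtD, y)

      def upper_level(lam):
          # Training loss: reconstruction error of the lower-level solutions.
          return sum(np.sum((lower_level(y, lam) - x) ** 2) for x, y in zip(clean, noisy))

      # Upper-level optimization over the single hyperparameter, here by grid search.
      grid = np.logspace(-2, 4, 60)
      best = grid[int(np.argmin([upper_level(lam) for lam in grid]))]
      print("learned regularization weight:", round(float(best), 3))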
       
 