APSIPA Transactions on Signal and Information Processing
Journal Prestige (SJR): 0.404
Citation Impact (citeScore): 2
Number of Followers: 8  

  This is an Open Access journal
ISSN (Print) 2048-7703 - ISSN (Online) 2048-7703
Published by Cambridge University Press
  • Subspace learning for facial expression recognition: an overview and a new
           perspective

    • Authors: Turan; Cigdem, Zhao, Rui, Lam, Kin-Man, He, Xiangjian
      First page: 1
      Abstract: For image recognition, an extensive number of subspace-learning methods have been proposed to overcome the high-dimensionality problem of the features being used. In this paper, we first give an overview of the most popular and state-of-the-art subspace-learning methods, and then present a novel manifold-learning method, named the soft locality preserving map (SLPM). SLPM aims to control the level of spread of the different classes, which is closely connected to the generalizability of the learned subspace. We also review the extension of manifold-learning methods to deep learning by formulating the corresponding loss functions for training, and further reformulate SLPM into a soft locality preserving (SLP) loss. These loss functions are applied as an additional regularization to the training of deep neural networks. We evaluate these subspace-learning methods, as well as their deep-learning extensions, on facial expression recognition. Experiments on four commonly used databases show that SLPM effectively reduces the dimensionality of the feature vectors and enhances the discriminative power of the extracted features. Moreover, experimental results also demonstrate that the learned deep features regularized by SLP acquire better discriminability and generalizability for facial expression recognition.
      PubDate: 2021-01-14
      DOI: 10.1017/ATSIP.2020.27
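
      The following is a minimal, illustrative sketch (PyTorch) of a locality-preserving style loss in the spirit of the SLP loss described in the abstract above; the paper's exact SLPM/SLP formulation is not reproduced, and the margin parameter and distance choice are assumptions.

        import torch

        def soft_locality_loss(feats, labels, margin=1.0):
            # feats: (N, D) deep features, labels: (N,) integer class labels
            dists = torch.cdist(feats, feats)                  # pairwise Euclidean distances
            same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class mask (N, N)
            intra = dists[same].mean()                         # pull same-class samples together
            inter = torch.clamp(margin - dists[~same], min=0).mean()  # softly spread classes apart
            return intra + inter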
       
  • Laplacian networks: bounding indicator function smoothness for neural
           networks robustness

    • Authors: Lassance; Carlos, Gripon, Vincent, Ortega, Antonio
      First page: 2
      Abstract: For the past few years, deep learning (DL) robustness (i.e. the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training using noisy examples. In this paper, we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised-learning vision datasets for various types of perturbations. We also show that it can be combined with existing methods to increase overall robustness.
      PubDate: 2021-02-05
      DOI: 10.1017/ATSIP.2021.2
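
      As a rough illustration of the idea above, the sketch below (PyTorch) computes a graph-Laplacian smoothness term for the class indicator on one layer's representations and penalizes its growth across consecutive layers; the Gaussian kernel and its bandwidth are assumptions, not the paper's exact construction.

        import torch

        def laplacian_smoothness(h, y, sigma=1.0):
            # h: (N, D) layer representations, y: (N, C) one-hot class indicator
            d2 = torch.cdist(h, h) ** 2             # squared pairwise distances
            w = torch.exp(-d2 / (2 * sigma ** 2))   # similarity graph (Gaussian kernel)
            lap = torch.diag(w.sum(dim=1)) - w      # unnormalised graph Laplacian
            return torch.trace(y.t() @ lap @ y)     # smoothness of the indicator function

        def laplacian_regularizer(layer_feats, y):
            s = [laplacian_smoothness(h, y) for h in layer_feats]
            # penalise large increases in smoothness between consecutive layers
            return sum(torch.relu(b - a) for a, b in zip(s[:-1], s[1:]))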
       
  • Toward community answer selection by jointly static and dynamic user
           expertise modeling

    • Authors: Liu; Yuchao, Liu, Meng, Yin, Jianhua
      First page: 3
      Abstract: Answer selection, ranking high-quality answers first, is a significant problem for community question answering sites. Existing approaches usually consider it as a text-matching task, and then calculate the quality of answers via their semantic relevance to the given question. However, they completely ignore the influence of other factors in the community, such as user expertise. In this paper, we propose an answer selection model based on user expertise modeling, which simultaneously considers the social influence and the personal interest that affect user expertise from different views. Specifically, we propose an inductive strategy to aggregate the social influence of neighbors. Besides, we introduce the explicit topic interest of users and capture the context-based personal interest by weighting the activation of each topic. Moreover, we construct two real-world datasets containing rich user information. Extensive experiments on the two datasets demonstrate that our model outperforms several state-of-the-art models.
      PubDate: 2021-03-01
      DOI: 10.1017/ATSIP.2020.28
       
  • Demystifying data and AI for manufacturing: case studies from a major
           computer maker

    • Authors: Chen; Yi-Chun, He, Bo-Huei, Lin, Shih-Sung, Soeseno, Jonathan Hans, Tan, Daniel Stanley, Chen, Trista Pei-Chun, Chen, Wei-Chao
      First page: 4
      Abstract: In this article, we discuss the backgrounds and technical details about several smart manufacturing projects in a tier-one electronics manufacturing facility. We devise a process to manage logistic forecast and inventory preparation for electronic parts using historical data and a recurrent neural network to achieve significant improvement over current methods. We present a system for automatically qualifying laptop software for mass production through computer vision and automation technology. The result is a reliable system that can save hundreds of man-years in the qualification process. Finally, we create a deep learning-based algorithm for visual inspection of product appearances, which requires significantly less defect training data compared to traditional approaches. For production needs, we design an automatic optical inspection machine suitable for our algorithm and process. We also discuss the issues for data collection and enabling smart manufacturing projects in a factory setting, where the projects operate on a delicate balance between process innovations and cost-saving measures.
      PubDate: 2021-03-08
      DOI: 10.1017/ATSIP.2021.3
       
  • Automatic Deception Detection using Multiple Speech and Language
           Communicative Descriptors in Dialogs

    • Authors: Chou; Huang-Cheng, Liu, Yi-Wen, Lee, Chi-Chun
      First page: 5
      Abstract: While deceptive behaviors are a natural part of human life, it is well known that humans are generally bad at detecting deception. In this study, we present an automatic deception detection framework that comprehensively integrates prior domain knowledge in deceptive behavior understanding. Specifically, we compute acoustics, textual information, implicatures with non-verbal behaviors, and conversational temporal dynamics to improve automatic deception detection in dialogs. The proposed model reaches state-of-the-art performance on the Daily Deceptive Dialogues corpus of Mandarin (DDDM) database, 80.61% unweighted accuracy recall in deception recognition. In further analyses, we reveal that (i) the deceivers’ deception behaviors can be observed from the interrogators’ behaviors in the conversational temporal dynamics features and (ii) some of the acoustic features (e.g. loudness and MFCC) and textual features are significant and effective indicators to detect deception behaviors.
      PubDate: 2021-04-16
      DOI: 10.1017/ATSIP.2021.6
       
  • Speech emotion recognition based on listener-dependent emotion perception
           models

    • Authors: Ando; Atsushi, Mori, Takeshi, Kobashikawa, Satoshi, Toda, Tomoki
      First page: 6
      Abstract: This paper presents a novel speech emotion recognition scheme that leverages the individuality of emotion perception. Most conventional methods simply poll multiple listeners and directly model the majority decision as the perceived emotion. However, emotion perception varies with the listener, which forces the conventional methods with their single models to create complex mixtures of emotion perception criteria. In order to mitigate this problem, we propose a majority-voted emotion recognition framework that constructs listener-dependent (LD) emotion recognition models. The LD model can estimate not only listener-wise perceived emotion, but also majority decision by averaging the outputs of the multiple LD models. Three LD models, fine-tuning, auxiliary input, and sub-layer weighting, are introduced, all of which are inspired by successful domain-adaptation frameworks in various speech processing tasks. Experiments on two emotional speech datasets demonstrate that the proposed approach outperforms the conventional emotion recognition frameworks in not only majority-voted but also listener-wise perceived emotion recognition.
      PubDate: 2021-04-20
      DOI: 10.1017/ATSIP.2021.7
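
      A minimal sketch of the majority-decision step mentioned above: outputs of several listener-dependent (LD) models are averaged to approximate the majority-voted emotion. The LD models themselves (fine-tuning, auxiliary input, sub-layer weighting) are not shown, and the softmax averaging is an illustrative assumption.

        import torch

        def majority_from_ld_models(ld_models, x):
            # each LD model maps an utterance representation x to per-emotion logits
            posteriors = torch.stack([m(x).softmax(dim=-1) for m in ld_models])
            avg = posteriors.mean(dim=0)   # average over listener-dependent models
            return avg.argmax(dim=-1)      # approximated majority-voted emotion class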
       
  • Audio-to-score singing transcription based on a CRNN-HSMM hybrid model

    • Authors: Nishikimi; Ryo, Nakamura, Eita, Goto, Masataka, Yoshii, Kazuyoshi
      First page: 7
      Abstract: This paper describes an automatic singing transcription (AST) method that estimates a human-readable musical score of a sung melody from an input music signal. Because of the considerable pitch and temporal variation of a singing voice, a naive cascading approach that estimates an F0 contour and quantizes it with estimated tatum times cannot avoid many pitch and rhythm errors. To solve this problem, we formulate a unified generative model of a music signal that consists of a semi-Markov language model representing the generative process of latent musical notes conditioned on musical keys and an acoustic model based on a convolutional recurrent neural network (CRNN) representing the generative process of an observed music signal from the notes. The resulting CRNN-HSMM hybrid model enables us to estimate the most-likely musical notes from a music signal with the Viterbi algorithm, while leveraging both the grammatical knowledge about musical notes and the expressive power of the CRNN. The experimental results showed that the proposed method outperformed the conventional state-of-the-art method and that the integration of the musical language model with the acoustic model had a positive effect on the AST performance.
      PubDate: 2021-04-20
      DOI: 10.1017/ATSIP.2021.4
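
      To illustrate the "most-likely notes" decoding step mentioned above, here is a generic Viterbi decoder in NumPy; the paper's HSMM additionally models note durations and key conditioning, which this sketch omits.

        import numpy as np

        def viterbi(log_init, log_trans, log_obs):
            # log_init: (S,), log_trans: (S, S) prev->cur, log_obs: (T, S)
            T, S = log_obs.shape
            delta = log_init + log_obs[0]
            back = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + log_trans   # score of every prev->cur transition
                back[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0) + log_obs[t]
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]                          # most-likely state sequence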
       
  • Analyzing public opinion on COVID-19 through different perspectives and
           stages

    • Authors: Gao; Yuqi, Hua, Hang, Luo, Jiebo
      First page: 8
      Abstract: In recent months, COVID-19 has become a global pandemic and has had a huge impact on the world. People under different conditions have very different attitudes toward the epidemic. Due to the real-time and large-scale nature of social media, we can continuously obtain a massive amount of public opinion information related to the epidemic from social media. In particular, researchers may ask questions such as “how is the public reacting to COVID-19 in China during different stages of the pandemic?”, “what factors affect the public opinion orientation in China?”, and so on. To answer such questions, we analyze the pandemic-related public opinion information on Weibo, China's largest social media platform. Specifically, we first collected a large amount of COVID-19-related public opinion microblogs. We then use a sentiment classifier to recognize and analyze different groups of users’ opinions. In the collected sentiment-oriented microblogs, we try to track the public opinion through different stages of the COVID-19 pandemic. Furthermore, we analyze further key factors that might have an impact on the public opinion of COVID-19 (e.g. users in different provinces or users with different education levels). Empirical results show that the public opinions vary along with these key factors. Furthermore, we analyze the public attitudes on different public-concerning topics, such as staying at home and quarantine. In summary, we uncover interesting patterns of users and events as an insight into the world through the lens of a major crisis.
      PubDate: 2021-03-17
      DOI: 10.1017/ATSIP.2021.5
       
  • The future of biometrics technology: from face recognition to related
           applications

    • Authors: Imaoka; Hitoshi, Hashimoto, Hiroshi, Takahashi, Koichi, Ebihara, Akinori F., Liu, Jianquan, Hayasaka, Akihiro, Morishita, Yusuke, Sakurai, Kazuyuki
      First page: 9
      Abstract: Biometric recognition technologies have become more important in modern society owing to their convenience amid recent informatization and the spread of network services. Among such technologies, face recognition is one of the most convenient and practical because it enables authentication from a distance without requiring any manual authentication operations. However, face recognition is known to be susceptible to changes in the appearance of faces due to aging, the surrounding lighting, and posture, and a number of technical challenges still need to be resolved. Recently, remarkable progress has been made thanks to the advent of deep learning methods. In this position paper, we provide an overview of face recognition technology and introduce its related applications, including face presentation attack detection, gaze estimation, person re-identification, and image data mining. We also discuss the research challenges that still need to be addressed and resolved.
      PubDate: 2021-05-28
      DOI: 10.1017/ATSIP.2021.8
       
  • A protection method of trained CNN model with a secret key from
           unauthorized access

    • Authors: Maungmaung; AprilPyone, Kiya, Hitoshi
      First page: 10
      Abstract: In this paper, we propose a novel method for protecting convolutional neural network models with a secret key set so that unauthorized users without the correct key set cannot access trained models. The method enables us to protect a model not only from copyright infringement but also from unauthorized use of its functionality, without any noticeable overhead. We introduce three block-wise transformations with a secret key set to generate learnable transformed images: pixel shuffling, negative/positive transformation, and format-preserving Feistel-based encryption. Protected models are trained by using transformed images. The results of experiments with the CIFAR and ImageNet datasets show that the performance of a protected model was close to that of non-protected models when the key set was correct, while the accuracy severely dropped when an incorrect key set was given. The protected model was also demonstrated to be robust against various attacks. Compared with the state-of-the-art model protection with passports, the proposed method does not have any additional layers in the network, and therefore, there is no overhead during training and inference.
      PubDate: 2021-07-09
      DOI: 10.1017/ATSIP.2021.9
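
      A minimal sketch of one of the block-wise transformations named above (keyed pixel shuffling); the negative/positive and format-preserving Feistel-based transforms are omitted, and the block size and key-to-permutation mapping are illustrative assumptions.

        import numpy as np

        def keyed_block_shuffle(img, key, block=4):
            # img: (H, W, C) array with H and W divisible by `block`
            rng = np.random.default_rng(key)       # the secret key fixes the permutation
            perm = rng.permutation(block * block)
            out = img.copy()
            h, w, c = img.shape
            for i in range(0, h, block):
                for j in range(0, w, block):
                    patch = out[i:i+block, j:j+block].reshape(-1, c)
                    out[i:i+block, j:j+block] = patch[perm].reshape(block, block, c)
            return out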
       
  • Compression efficiency analysis of AV1, VVC, and HEVC for random access
           applications

    • Authors: Nguyen; Tung, Marpe, Detlev
      First page: 11
      Abstract: AOM Video 1 (AV1) and Versatile Video Coding (VVC) are the outcome of two recent independent video coding technology developments. Although VVC is the successor of High Efficiency Video Coding (HEVC) in the lineage of international video coding standards jointly developed by ITU-T and ISO/IEC within an open and public standardization process, AV1 is a video coding scheme that was developed by the industry consortium Alliance for Open Media (AOM) and that has its technological roots in Google's proprietary VP9 codec. This paper presents a compression efficiency evaluation for the AV1, VVC, and HEVC video coding schemes in a typical video compression application requiring random access. The latter is an important property, without which essential functionalities in digital video broadcasting or streaming could not be provided. For the evaluation, we employed a controlled experimental environment that basically follows the guidelines specified in the Common Test Conditions of the Joint Video Experts Team. As representatives of the corresponding video coding schemes, we selected their freely available reference software implementations. Depending on the application-specific frequency of random access points, the experimental results show averaged bit-rate savings of about 10–15% for AV1 and 36–37% for the VVC reference encoder implementation (VTM), both relative to the HEVC reference encoder implementation (HM) and by using a test set of video sequences with different characteristics regarding content and resolution. A direct comparison between VTM and AV1 reveals averaged bit-rate savings of about 25–29% for VTM, while the averaged encoding and decoding run times of VTM relative to those of AV1 are around 300% and 270%, respectively.
      PubDate: 2021-07-13
      DOI: 10.1017/ATSIP.2021.10
       
  • 3D skeletal movement-enhanced emotion recognition networks

    • Authors: Shi; Jiaqi, Liu, Chaoran, Ishi, Carlos Toshinori, Ishiguro, Hiroshi
      First page: 12
      Abstract: Automatic emotion recognition has become an important trend in the fields of human–computer natural interaction and artificial intelligence. Although gesture is one of the most important components of nonverbal communication and has a considerable impact on emotion recognition, it is rarely considered in the study of emotion recognition. An important reason is the lack of large open-source emotional databases containing skeletal movement data. In this paper, we extract three-dimensional skeleton information from videos and apply the method to the IEMOCAP database to add a new modality. We propose an attention-based convolutional neural network which takes the extracted data as input to predict the speakers’ emotional state. We also propose a graph attention-based fusion method that combines our model with the models using other modalities, to provide complementary information in the emotion classification task and effectively fuse multimodal cues. The combined model utilizes audio signals, text information, and skeletal data, and significantly outperforms the bimodal model and other fusion strategies, proving the effectiveness of the method.
      PubDate: 2021-08-05
      DOI: 10.1017/ATSIP.2021.11
       
  • Immersive audio, capture, transport, and rendering: a review

    • Authors: Sun; Xuejing
      First page: 13
      Abstract: Immersive audio has received significant attention in the past decade. The emergence of a few groundbreaking systems and events (Dolby Atmos, MPEG-H, VR/AR, AI) has helped reshape the landscape of this field, accelerating the mass-market adoption of immersive audio. This review serves as a quick recap of immersive audio background and the end-to-end workflow, covering audio capture, compression, and rendering. The technical aspects of object audio and ambisonics are explored, as well as other related topics such as binauralization, virtual surround, and upmix. Industry trends and applications are also discussed, as user experience ultimately decides the future direction of immersive audio technologies.
      PubDate: 2021-09-16
      DOI: 10.1017/ATSIP.2021.12
       
  • Robust deep convolutional neural network against image distortions

    • Authors: Wang; Liang-Yao, Chen, Sau-Gee, Chien, Feng-Tsun
      First page: 14
      Abstract: Many approaches have been proposed in the literature to enhance the robustness of Convolutional Neural Network (CNN)-based architectures against image distortions. Attempts to combat various types of distortions can be made by combining multiple expert networks, each trained on a certain type of distorted images; this, however, leads to a large model with high complexity. In this paper, we propose a CNN-based architecture with a pre-processing unit in which only undistorted data are used for training. The pre-processing unit employs the discrete cosine transform (DCT) and the discrete wavelet transform (DWT) to remove high-frequency components while capturing prominent high-frequency features in the undistorted data by means of random selection. We further utilize the singular value decomposition (SVD) to extract features before feeding the preprocessed data into the CNN for training. During testing, distorted images directly enter the CNN for classification without having to go through the hybrid module. Five different types of distortions are produced in the SVHN dataset and the CIFAR-10/100 datasets. Experimental results show that the proposed DCT-DWT-SVD module built upon the CNN architecture provides a classifier robust to input image distortions, outperforming the state-of-the-art approaches in terms of accuracy under different types of distortions.
      PubDate: 2021-10-11
      DOI: 10.1017/ATSIP.2021.14
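
      A minimal sketch of the kind of pre-processing described above: a DCT low-pass step followed by an SVD-based feature step. The paper's random selection of high-frequency components and the DWT branch are omitted, and the cut-off and rank values are assumptions.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_lowpass(img, keep=16):
            # img: (H, W) grayscale; keep only the top-left `keep` x `keep` DCT coefficients
            coeffs = dctn(img, norm="ortho")
            mask = np.zeros_like(coeffs)
            mask[:keep, :keep] = 1.0
            return idctn(coeffs * mask, norm="ortho")

        def svd_features(img, k=8):
            # rank-k reconstruction used as the feature map fed to the CNN
            u, s, vt = np.linalg.svd(img, full_matrices=False)
            return (u[:, :k] * s[:k]) @ vt[:k]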
       
  • Two-stage pyramidal convolutional neural networks for image colorization

    • Authors: Wei; Yu-Jen, Wei, Tsu-Tsai, Kuo, Tien-Ying, Su, Po-Chyi
      First page: 15
      Abstract: The development of colorization algorithms through deep learning has become the current research trend. These algorithms colorize grayscale images automatically and quickly, but the colors produced are usually subdued and have low saturation. This research addresses this issue of existing algorithms by presenting a two-stage convolutional neural network (CNN) structure with the first and second stages being a chroma map generation network and a refinement network, respectively. To begin, we convert the color space of an image from RGB to HSV to predict its low-resolution chroma components and therefore reduce the computational complexity. Following that, the first-stage output is zoomed in and its detail is enhanced with a pyramidal CNN, resulting in a colorized image. Experiments show that, while using fewer parameters, our methodology produces results with more realistic color and higher saturation than existing methods.
      PubDate: 2021-10-08
      DOI: 10.1017/ATSIP.2021.13
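
      A minimal sketch of the colour-space and resolution step described above: convert RGB to HSV and build a low-resolution chroma target for the first-stage network. The scale factor, the use of scikit-image, and treating hue and saturation as the chroma channels are assumptions; the pyramidal refinement CNN is not shown.

        import numpy as np
        from skimage import color, transform

        def lowres_chroma_target(rgb, scale=0.25):
            # rgb: (H, W, 3) floats in [0, 1]
            hsv = color.rgb2hsv(rgb)
            chroma = hsv[..., :2]            # hue and saturation channels
            h, w = chroma.shape[:2]
            return transform.resize(chroma, (int(h * scale), int(w * scale)),
                                    anti_aliasing=True)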
       
  • Social rhythms measured via social media use for predicting psychiatric
           symptoms

    • Authors: Yokotani; Kenji, Takano, Masanori
      First page: 16
      Abstract: Social rhythms have been considered relevant to mood disorders, but detailed analysis of social rhythms has been limited. Hence, we aim to assess social rhythms via social media use and predict users' psychiatric symptoms through their social rhythms. A two-wave survey was conducted in the Pigg Party, a popular Japanese avatar application. First- and second-wave data were collected from 3504 and 658 Pigg Party users, respectively. The time stamps of their communication were sampled. Furthermore, the participants answered the General Health Questionnaire and reported perceived emotional support in the Pigg Party. The results indicated that the social rhythms of users with much social support were stable in a 24-h cycle, whereas the rhythms of users with little social support were disrupted. To predict psychiatric symptoms via social rhythms in the second-wave data, the first-wave data were used for training. We determined that the fast Chirplet transform was the optimal transformation for social rhythms, and the best accuracy scores on psychiatric symptoms and perceived emotional support in the second-wave data corresponded to 0.9231 and 0.7462, respectively. Hence, measuring social rhythms via social media use enables a detailed understanding of emotional disturbance from the perspective of time-varying frequencies.
      PubDate: 2021-10-28
      DOI: 10.1017/ATSIP.2021.17
       
  • TGHop: an explainable, efficient, and lightweight method for texture
           generation

    • Authors: Lei; Xuejing, Zhao, Ganning, Zhang, Kaitai, Kuo, C.-C. Jay
      First page: 17
      Abstract: An explainable, efficient, and lightweight method for texture generation, called TGHop (an acronym of Texture Generation PixelHop), is proposed in this work. Although synthesis of visually pleasant texture can be achieved by deep neural networks, the associated models are large in size, difficult to explain in theory, and computationally expensive in training. In contrast, TGHop is small in its model size, mathematically transparent, efficient in training and inference, and able to generate high-quality texture. Given an exemplary texture, TGHop first crops many sample patches out of it to form a collection of sample patches called the source. Then, it analyzes pixel statistics of samples from the source and obtains a sequence of fine-to-coarse subspaces for these patches by using the PixelHop++ framework. To generate texture patches with TGHop, we begin with the coarsest subspace, which is called the core, and attempt to generate samples in each subspace by following the distribution of real samples. Finally, texture patches are stitched to form texture images of a large size. It is demonstrated by experimental results that TGHop can generate texture images of superior quality with a small model size and at a fast speed.
      PubDate: 2021-10-27
      DOI: 10.1017/ATSIP.2021.15
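
      A minimal sketch of the "source" collection step described above: cropping overlapping patches from an exemplary texture. Patch size and stride are assumptions; the PixelHop++ subspace analysis and the generation stage are not shown.

        import numpy as np

        def crop_patches(texture, size=32, stride=8):
            # texture: (H, W) or (H, W, C) exemplary texture image
            h, w = texture.shape[:2]
            return np.stack([texture[i:i+size, j:j+size]
                             for i in range(0, h - size + 1, stride)
                             for j in range(0, w - size + 1, stride)])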
       
  • Cross-layer knowledge distillation with KL divergence and offline ensemble
           for compressing deep neural network

    • Authors: Chou; Hsing-Hung, Chiu, Ching-Te, Liao, Yi-Ping
      First page: 18
      Abstract: Deep neural networks (DNN) have solved many tasks, including image classification, object detection, and semantic segmentation. However, when a DNN model involves a huge number of parameters and a high level of computation, it becomes difficult to deploy on mobile devices. To address this difficulty, we propose an efficient compression method that can be split into three parts. First, we propose a cross-layer matrix to extract more features from the teacher's model. Second, we adopt Kullback-Leibler (KL) divergence in an offline environment to make the student model find a wider robust minimum. Finally, we propose an offline ensemble of pre-trained teachers to teach the student model. To address the dimension mismatch between teacher and student models, we adopt a convolution layer and two-stage knowledge distillation to relax this constraint. We conducted experiments with VGG and ResNet models, using the CIFAR-100 dataset. With VGG-11 as the teacher's model and VGG-6 as the student's model, experimental results showed that the Top-1 accuracy increased by 3.57% with a compression rate and 3.5x computation rate. With ResNet-32 as the teacher's model and ResNet-8 as the student's model, experimental results showed that Top-1 accuracy increased by 4.38% with a compression rate and computation rate. In addition, we conducted experiments using the ImageNet dataset. With MobileNet-16 as the teacher's model and MobileNet-9 as the student's model, experimental results showed that the Top-1 accuracy increased by 3.98% with a compression rate and computation rate.
      PubDate: 2021-11-17
      DOI: 10.1017/ATSIP.2021.16
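
      A minimal sketch of a temperature-scaled KL-divergence distillation loss of the general kind used above; the cross-layer matrix and the offline teacher ensemble are not reproduced, and the temperature value is an assumption.

        import torch
        import torch.nn.functional as F

        def kd_kl_loss(student_logits, teacher_logits, temperature=4.0):
            t = temperature
            p_teacher = F.softmax(teacher_logits / t, dim=-1)
            log_p_student = F.log_softmax(student_logits / t, dim=-1)
            # KL(teacher || student), scaled by t^2 as is common in distillation
            return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)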
       
 