  Subjects -> ELECTRONICS (Total: 187 journals)
Showing all 187 journals, sorted alphabetically
Acta Electronica Malaysia     Open Access  
Advances in Electrical and Electronic Engineering     Open Access   (Followers: 7)
Advances in Electronics     Open Access   (Followers: 90)
Advances in Magnetic and Optical Resonance     Full-text available via subscription   (Followers: 8)
Advances in Power Electronics     Open Access   (Followers: 38)
Advancing Microelectronics     Hybrid Journal  
Aerospace and Electronic Systems, IEEE Transactions on     Hybrid Journal   (Followers: 334)
American Journal of Electrical and Electronic Engineering     Open Access   (Followers: 26)
Annals of Telecommunications     Hybrid Journal   (Followers: 9)
APSIPA Transactions on Signal and Information Processing     Open Access   (Followers: 9)
Archives of Electrical Engineering     Open Access   (Followers: 14)
Autonomous Mental Development, IEEE Transactions on     Hybrid Journal   (Followers: 8)
Bell Labs Technical Journal     Hybrid Journal   (Followers: 30)
Bioelectronics in Medicine     Hybrid Journal  
Biomedical Engineering, IEEE Reviews in     Full-text available via subscription   (Followers: 20)
Biomedical Engineering, IEEE Transactions on     Hybrid Journal   (Followers: 38)
Biomedical Instrumentation & Technology     Hybrid Journal   (Followers: 6)
Broadcasting, IEEE Transactions on     Hybrid Journal   (Followers: 13)
BULLETIN of National Technical University of Ukraine. Series RADIOTECHNIQUE. RADIOAPPARATUS BUILDING     Open Access   (Followers: 1)
Bulletin of the Polish Academy of Sciences : Technical Sciences     Open Access   (Followers: 1)
Canadian Journal of Remote Sensing     Full-text available via subscription   (Followers: 47)
China Communications     Full-text available via subscription   (Followers: 9)
Chinese Journal of Electronics     Hybrid Journal  
Circuits and Systems     Open Access   (Followers: 15)
Consumer Electronics Times     Open Access   (Followers: 5)
Control Systems     Hybrid Journal   (Followers: 293)
ECTI Transactions on Computer and Information Technology (ECTI-CIT)     Open Access  
ECTI Transactions on Electrical Engineering, Electronics, and Communications     Open Access  
Edu Elektrika Journal     Open Access   (Followers: 1)
Electrica     Open Access  
Electronic Design     Partially Free   (Followers: 117)
Electronic Markets     Hybrid Journal   (Followers: 7)
Electronic Materials Letters     Hybrid Journal   (Followers: 4)
Electronics     Open Access   (Followers: 97)
Electronics and Communications in Japan     Hybrid Journal   (Followers: 10)
Electronics For You     Partially Free   (Followers: 100)
Electronics Letters     Hybrid Journal   (Followers: 26)
Elkha : Jurnal Teknik Elektro     Open Access  
Embedded Systems Letters, IEEE     Hybrid Journal   (Followers: 55)
Energy Harvesting and Systems     Hybrid Journal   (Followers: 4)
Energy Storage Materials     Full-text available via subscription   (Followers: 3)
EPJ Quantum Technology     Open Access   (Followers: 1)
EURASIP Journal on Embedded Systems     Open Access   (Followers: 11)
Facta Universitatis, Series : Electronics and Energetics     Open Access  
Foundations and Trends® in Communications and Information Theory     Full-text available via subscription   (Followers: 6)
Foundations and Trends® in Signal Processing     Full-text available via subscription   (Followers: 10)
Frequenz     Hybrid Journal   (Followers: 1)
Frontiers of Optoelectronics     Hybrid Journal   (Followers: 1)
Geoscience and Remote Sensing, IEEE Transactions on     Hybrid Journal   (Followers: 205)
Haptics, IEEE Transactions on     Hybrid Journal   (Followers: 4)
IACR Transactions on Symmetric Cryptology     Open Access  
IEEE Antennas and Propagation Magazine     Hybrid Journal   (Followers: 99)
IEEE Antennas and Wireless Propagation Letters     Hybrid Journal   (Followers: 80)
IEEE Journal of Emerging and Selected Topics in Power Electronics     Hybrid Journal   (Followers: 49)
IEEE Journal of the Electron Devices Society     Open Access   (Followers: 9)
IEEE Journal on Exploratory Solid-State Computational Devices and Circuits     Hybrid Journal   (Followers: 1)
IEEE Power Electronics Magazine     Full-text available via subscription   (Followers: 72)
IEEE Transactions on Antennas and Propagation     Full-text available via subscription   (Followers: 71)
IEEE Transactions on Automatic Control     Hybrid Journal   (Followers: 58)
IEEE Transactions on Circuits and Systems for Video Technology     Hybrid Journal   (Followers: 26)
IEEE Transactions on Consumer Electronics     Hybrid Journal   (Followers: 42)
IEEE Transactions on Electron Devices     Hybrid Journal   (Followers: 19)
IEEE Transactions on Information Theory     Hybrid Journal   (Followers: 26)
IEEE Transactions on Power Electronics     Hybrid Journal   (Followers: 78)
IEEE Transactions on Signal and Information Processing over Networks     Full-text available via subscription   (Followers: 12)
IEICE - Transactions on Electronics     Full-text available via subscription   (Followers: 12)
IEICE - Transactions on Information and Systems     Full-text available via subscription   (Followers: 5)
IET Cyber-Physical Systems : Theory & Applications     Open Access   (Followers: 1)
IET Energy Systems Integration     Open Access  
IET Microwaves, Antennas & Propagation     Hybrid Journal   (Followers: 35)
IET Nanodielectrics     Open Access  
IET Power Electronics     Hybrid Journal   (Followers: 55)
IET Smart Grid     Open Access  
IET Wireless Sensor Systems     Hybrid Journal   (Followers: 18)
IETE Journal of Education     Open Access   (Followers: 4)
IETE Journal of Research     Open Access   (Followers: 11)
IETE Technical Review     Open Access   (Followers: 13)
IJEIS (Indonesian Journal of Electronics and Instrumentation Systems)     Open Access   (Followers: 3)
Industrial Electronics, IEEE Transactions on     Hybrid Journal   (Followers: 70)
Industrial Technology Research Journal Phranakhon Rajabhat University     Open Access  
Industry Applications, IEEE Transactions on     Hybrid Journal   (Followers: 35)
Informatik-Spektrum     Hybrid Journal   (Followers: 2)
Instabilities in Silicon Devices     Full-text available via subscription   (Followers: 1)
Intelligent Transportation Systems Magazine, IEEE     Full-text available via subscription   (Followers: 13)
International Journal of Advanced Research in Computer Science and Electronics Engineering     Open Access   (Followers: 18)
International Journal of Advances in Telecommunications, Electrotechnics, Signals and Systems     Open Access   (Followers: 11)
International Journal of Antennas and Propagation     Open Access   (Followers: 11)
International Journal of Applied Electronics in Physics & Robotics     Open Access   (Followers: 4)
International Journal of Computational Vision and Robotics     Hybrid Journal   (Followers: 6)
International Journal of Control     Hybrid Journal   (Followers: 11)
International Journal of Electronics     Hybrid Journal   (Followers: 7)
International Journal of Electronics and Telecommunications     Open Access   (Followers: 13)
International Journal of Granular Computing, Rough Sets and Intelligent Systems     Hybrid Journal   (Followers: 3)
International Journal of High Speed Electronics and Systems     Hybrid Journal  
International Journal of Hybrid Intelligence     Hybrid Journal  
International Journal of Image, Graphics and Signal Processing     Open Access   (Followers: 16)
International Journal of Microwave and Wireless Technologies     Hybrid Journal   (Followers: 10)
International Journal of Nanoscience     Hybrid Journal   (Followers: 1)
International Journal of Numerical Modelling: Electronic Networks, Devices and Fields     Hybrid Journal   (Followers: 4)
International Journal of Power Electronics     Hybrid Journal   (Followers: 25)
International Journal of Review in Electronics & Communication Engineering     Open Access   (Followers: 4)
International Journal of Sensors, Wireless Communications and Control     Hybrid Journal   (Followers: 10)
International Journal of Systems, Control and Communications     Hybrid Journal   (Followers: 4)
International Journal of Wireless and Microwave Technologies     Open Access   (Followers: 6)
International Transaction of Electrical and Computer Engineers System     Open Access   (Followers: 2)
JAREE (Journal on Advanced Research in Electrical Engineering)     Open Access  
Journal of Advanced Dielectrics     Open Access   (Followers: 1)
Journal of Artificial Intelligence     Open Access   (Followers: 11)
Journal of Biosensors & Bioelectronics     Open Access   (Followers: 3)
Journal of Circuits, Systems, and Computers     Hybrid Journal   (Followers: 4)
Journal of Computational Intelligence and Electronic Systems     Full-text available via subscription   (Followers: 1)
Journal of Electrical and Electronics Engineering Research     Open Access   (Followers: 32)
Journal of Electrical Bioimpedance     Open Access   (Followers: 2)
Journal of Electrical Engineering & Electronic Technology     Hybrid Journal   (Followers: 7)
Journal of Electrical, Electronics and Informatics     Open Access  
Journal of Electromagnetic Analysis and Applications     Open Access   (Followers: 8)
Journal of Electromagnetic Waves and Applications     Hybrid Journal   (Followers: 9)
Journal of Electronic Design Technology     Full-text available via subscription   (Followers: 6)
Journal of Electronics (China)     Hybrid Journal   (Followers: 5)
Journal of Energy Storage     Full-text available via subscription   (Followers: 4)
Journal of Engineered Fibers and Fabrics     Open Access   (Followers: 2)
Journal of Field Robotics     Hybrid Journal   (Followers: 3)
Journal of Guidance, Control, and Dynamics     Hybrid Journal   (Followers: 173)
Journal of Information and Telecommunication     Open Access   (Followers: 1)
Journal of Intelligent Procedures in Electrical Technology     Open Access   (Followers: 3)
Journal of Low Power Electronics     Full-text available via subscription   (Followers: 10)
Journal of Low Power Electronics and Applications     Open Access   (Followers: 10)
Journal of Microelectronics and Electronic Packaging     Hybrid Journal  
Journal of Microwave Power and Electromagnetic Energy     Hybrid Journal   (Followers: 3)
Journal of Microwaves, Optoelectronics and Electromagnetic Applications     Open Access   (Followers: 11)
Journal of Nuclear Cardiology     Hybrid Journal  
Journal of Optoelectronics Engineering     Open Access   (Followers: 4)
Journal of Physics B: Atomic, Molecular and Optical Physics     Hybrid Journal   (Followers: 29)
Journal of Power Electronics & Power Systems     Full-text available via subscription   (Followers: 11)
Journal of Semiconductors     Full-text available via subscription   (Followers: 5)
Journal of Sensors     Open Access   (Followers: 26)
Journal of Signal and Information Processing     Open Access   (Followers: 9)
Jurnal ELTIKOM : Jurnal Teknik Elektro, Teknologi Informasi dan Komputer     Open Access  
Jurnal Rekayasa Elektrika     Open Access  
Jurnal Teknik Elektro     Open Access  
Jurnal Teknologi Elektro     Open Access  
Kinetik : Game Technology, Information System, Computer Network, Computing, Electronics, and Control     Open Access  
Learning Technologies, IEEE Transactions on     Hybrid Journal   (Followers: 12)
Magnetics Letters, IEEE     Hybrid Journal   (Followers: 7)
Majalah Ilmiah Teknologi Elektro : Journal of Electrical Technology     Open Access   (Followers: 2)
Metrology and Measurement Systems     Open Access   (Followers: 6)
Microelectronics and Solid State Electronics     Open Access   (Followers: 27)
Nanotechnology Magazine, IEEE     Full-text available via subscription   (Followers: 41)
Nanotechnology, Science and Applications     Open Access   (Followers: 6)
Nature Electronics     Hybrid Journal   (Followers: 1)
Networks: an International Journal     Hybrid Journal   (Followers: 5)
Open Electrical & Electronic Engineering Journal     Open Access  
Open Journal of Antennas and Propagation     Open Access   (Followers: 9)
Optical Communications and Networking, IEEE/OSA Journal of     Full-text available via subscription   (Followers: 15)
Paladyn. Journal of Behavioral Robotics     Open Access   (Followers: 1)
Power Electronics and Drives     Open Access   (Followers: 2)
Problemy Peredachi Informatsii     Full-text available via subscription  
Progress in Quantum Electronics     Full-text available via subscription   (Followers: 7)
Pulse     Full-text available via subscription   (Followers: 5)
Radiophysics and Quantum Electronics     Hybrid Journal   (Followers: 2)
Recent Advances in Communications and Networking Technology     Hybrid Journal   (Followers: 3)
Recent Advances in Electrical & Electronic Engineering     Hybrid Journal   (Followers: 9)
Research & Reviews : Journal of Embedded System & Applications     Full-text available via subscription   (Followers: 5)
Revue Méditerranéenne des Télécommunications     Open Access  
Security and Communication Networks     Hybrid Journal   (Followers: 2)
Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of     Hybrid Journal   (Followers: 56)
Semiconductors and Semimetals     Full-text available via subscription   (Followers: 1)
Sensing and Imaging : An International Journal     Hybrid Journal   (Followers: 2)
Services Computing, IEEE Transactions on     Hybrid Journal   (Followers: 4)
Software Engineering, IEEE Transactions on     Hybrid Journal   (Followers: 78)
Solid State Electronics Letters     Open Access  
Solid-State Circuits Magazine, IEEE     Hybrid Journal   (Followers: 13)
Solid-State Electronics     Hybrid Journal   (Followers: 9)
Superconductor Science and Technology     Hybrid Journal   (Followers: 3)
Synthesis Lectures on Power Electronics     Full-text available via subscription   (Followers: 3)
Technical Report Electronics and Computer Engineering     Open Access  
TELE     Open Access  
Telematique     Open Access  
TELKOMNIKA (Telecommunication, Computing, Electronics and Control)     Open Access   (Followers: 9)
Transactions on Electrical and Electronic Materials     Hybrid Journal  
Universal Journal of Electrical and Electronic Engineering     Open Access   (Followers: 6)
Ural Radio Engineering Journal     Open Access  
Visión Electrónica : algo más que un estado sólido     Open Access   (Followers: 1)
Wireless and Mobile Technologies     Open Access   (Followers: 6)
Wireless Power Transfer     Full-text available via subscription   (Followers: 4)
Women in Engineering Magazine, IEEE     Full-text available via subscription   (Followers: 11)
Електротехніка і Електромеханіка (Electrical Engineering & Electromechanics)     Open Access  


IEEE Transactions on Circuits and Systems for Video Technology
Journal Prestige (SJR): 0.977
Citation Impact (CiteScore): 5
Number of Followers: 26

Hybrid Journal (may contain Open Access articles)
ISSN (Print): 1051-8215
Published by IEEE [191 journals]
  • IEEE Transactions on Circuits and Systems for Video Technology publication information
    • PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Introduction to the Special Section on Deep Learning for Visual Surveillance
    • Authors: Fatih Porikli;Larry S. Davis;Qi Wang;Yi Li;Carlo Regazzoni;
      Pages: 2535 - 2537
      Abstract: We are now living in an era of visual information where data is unceasingly generated and pushed into consumption at astounding rates. A remarkable portion of this sensory input comes in the form of videos streaming from large-scale surveillance infrastructures as well as consumer-grade monitoring systems. The sheer amount of ground-based, aerial and mobile video surveillance data demands fittingly competent, accurate, effective techniques to extract useful cues and provide assistance for detection, prevention, and intervention tasks in traffic, safety, security, defense, forensic, health, biology, ethology, and retail space management applications.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Adaptive Deep Convolutional Neural Networks for Scene-Specific Object Detection
    • Authors: Xudong Li;Mao Ye;Yiguang Liu;Ce Zhu;
      Pages: 2538 - 2551
      Abstract: A deep convolutional neural network (CNN) becomes a widely used tool for object detection. Many previous works have achieved excellent performance on object detection benchmarks. However, these works present generic detectors whose performance will drop rapidly when they are applied to a surveillance scene. In this paper, we propose an efficient method to construct a scene-specific regression model based on a generic CNN-based classifier. Our regression model is an adaptive deep CNN (ADCNN), which can predict object locations in the surveillance scene. First, we transfer the generic CNN-based classifier to the surveillance scene by selecting useful kernels. Second, we learn the context information of the surveillance scene in our regression model for accurate location prediction. Our main contributions are: 1) a transfer learning method that selects useful kernels in the generic CNN-based classifier; 2) a special architecture that can effectively learn the local and global context information in the surveillance scene; and 3) a new objective function to effectively train parameters in ADCNN. Compared with some state-of-the-art models, ADCNN achieves the best performance on three surveillance data sets for pedestrian detection and one surveillance data set for vehicle detection.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Interactive Hierarchical Object Proposals
    • Authors: Mingliang Chen;Jiawei Zhang;Shengfeng He;Qingxiong Yang;Qing Li;Ming-Hsuan Yang;
      Pages: 2552 - 2566
Abstract: Object proposal algorithms have been demonstrated to be very successful in accelerating the object detection process. High object localization quality and detection recall can be obtained using thousands of proposals. However, the performance with a small number of proposals is still unsatisfactory. This paper demonstrates that the performance of a few proposals can be significantly improved with minimal human interaction—a single touch point. To this end, we first generate hierarchical superpixels using an efficient tree-organized structure as our initial object proposals, and then select only a few proposals from them by learning an effective convolutional neural network for objectness ranking. We explore and design an architecture to integrate human interaction with the global information of the whole image for objectness scoring, which is able to significantly improve the performance with a minimum number of object proposals. Extensive experiments show the proposed method outperforms all the state-of-the-art methods for locating the meaningful object under the touch point constraint. Furthermore, the proposed method is extended to video. By combining it with a novel interactive motion segmentation cue for generating hierarchical superpixels, the performance on a single proposal is satisfactory, and the method can be used in interactive vision systems, such as selecting the input of a real-time tracking system.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Pixelwise Deep Sequence Learning for Moving Object Detection
    • Authors: Yingying Chen;Jinqiao Wang;Bingke Zhu;Ming Tang;Hanqing Lu;
      Pages: 2567 - 2579
Abstract: Moving object detection is an essential, well-studied but still open problem in computer vision and plays a fundamental role in many applications. Traditional approaches usually reconstruct background images with hand-crafted visual features, such as color, texture, and edge. Due to lack of prior knowledge or semantic information, it is difficult to deal with complicated and rapidly changing scenes. To exploit the temporal structure of the pixel-level semantic information, in this paper, we propose an end-to-end deep sequence learning architecture for moving object detection. First, the video sequences are input into a deep convolutional encoder–decoder network for extracting pixel-wise semantic features. Then, to exploit the temporal context, we propose a novel attention long short-term memory (Attention ConvLSTM) to model pixelwise changes over time. A spatial transformer network and a conditional random field layer are finally appended to reduce the sensitivity to camera motion and smooth the foreground boundaries. A multi-task loss is proposed to jointly optimize frame-based classification and temporal prediction in an end-to-end network. Experimental results on CDnet 2014 and LASIESTA show 12.15% and 16.71% improvement over the state of the art, respectively.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
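
The “Attention ConvLSTM” named above extends the convolutional LSTM, which replaces the matrix products of a standard LSTM with convolutions so that the hidden state keeps a pixelwise spatial layout. Below is a minimal PyTorch sketch of a plain ConvLSTM cell; the paper’s attention gate, spatial transformer, and CRF layers are not reproduced, and all channel and image sizes are illustrative assumptions.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are convolutions, so the hidden state
    and cell memory keep a spatial (pixelwise) layout."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # A single convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g   # per-pixel memory update
        h = o * c.tanh()    # new hidden state
        return h, c

# Toy run over a 5-frame sequence of 32-channel feature maps.
cell = ConvLSTMCell(32, 16)
h = c = torch.zeros(1, 16, 24, 24)
for x in torch.randn(5, 1, 32, 24, 24):
    h, c = cell(x, (h, c))
print(h.shape)  # torch.Size([1, 16, 24, 24])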
       
  • Deep CNNs for Object Detection Using Passive Millimeter Sensors
    • Authors: Santiago López-Tapia;Rafael Molina;Nicolás Pérez de la Blanca;
      Pages: 2580 - 2589
      Abstract: Passive millimeter wave images (PMMWIs) can be used to detect and localize objects concealed under clothing. Unfortunately, the quality of the acquired images and the unknown position, shape, and size of the hidden objects render these tasks challenging. In this paper, we discuss a deep learning approach to this detection/localization problem. The effect of the nonstationary acquisition noise on different architectures is analyzed and discussed. A comparison with shallow architectures is also presented. The achieved detection accuracy defines a new state of the art in object detection on PMMWIs. The low computational training and testing costs of the solution allow its use in real-time applications.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Multi-View Vehicle Type Recognition With Feedback-Enhancement Multi-Branch CNNs
    • Authors: Zhibo Chen;Chenlu Ying;Chaoyi Lin;Sen Liu;Weiping Li;
      Pages: 2590 - 2599
Abstract: Vehicle type recognition (VTR) is a quite common requirement and one of the key challenges in real surveillance scenarios, such as intelligent traffic and unmanned driving. Usually, coarse-grained and fine-grained VTR are applied in different applications, and the challenge from multiple viewpoints is critical for both cases. In this paper, we propose a feedback-enhancement multi-branch CNN (FM-CNN) to solve the challenge in these two cases. The proposed FM-CNN takes three derivatives of an image as input and leverages the advantages of hierarchical details, feedback enhancement, model averaging, and stronger robustness to translation and mirroring. A single global cross-entropy loss is insufficient to train such a complex CNN, so we add extra branch losses to enhance feedback to each branch. While reusing pre-trained parameters, we propose a novel parameter update method to adapt FM-CNN to task-specific local visual patterns and global information in new datasets. To test the effectiveness of FM-CNN, we create our own multi-view VTR (MVVTR) data set, since no such data sets are available; for fine-grained VTR, we use the CompCars data set. Compared with state-of-the-art classification solutions without special preprocessing, the proposed FM-CNN demonstrates better performance in both coarse-grained and fine-grained scenarios. For coarse-grained VTR, it achieves 94.9% Top-1 accuracy on the MVVTR data set. For fine-grained VTR, it achieves 91.0% Top-1 and 97.8% Top-5 accuracies on the CompCars data set.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Real-Time Deep Tracking via Corrective Domain Adaptation
    • Authors: Hanxi Li;Xinyu Wang;Fumin Shen;Yi Li;Fatih Porikli;Mingwen Wang;
      Pages: 2600 - 2612
Abstract: Visual tracking is one of the fundamental problems in computer vision. Recently, some deep-learning-based tracking algorithms have demonstrated record-breaking performance. However, due to the high complexity of neural networks, most deep trackers suffer from low tracking speed and are, thus, impractical in many real-world applications. Some recently proposed deep trackers with smaller network structures achieve high efficiency, but at the cost of a significant decrease in precision. In this paper, we propose to transfer the deep feature, which is originally learned for image classification, to the visual tracking domain. The domain adaptation is achieved via some “grafted” auxiliary networks, which are trained by regressing the object location in tracking frames. This adaptation improves the tracking performance significantly in both accuracy and efficiency. The resulting deep tracker runs in real time and also achieves state-of-the-art accuracy in experiments involving two well-adopted benchmarks with more than 100 test videos. Furthermore, the adaptation is also naturally used for introducing the objectness concept into visual tracking. This removes a long-standing target ambiguity in visual tracking tasks, and we illustrate the empirical superiority of this more well-defined task.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Video Person Re-Identification for Wide Area Tracking Based on Recurrent Neural Networks
    • Authors: Niall McLaughlin;Jesus Martinez del Rincon;Paul Miller;
      Pages: 2613 - 2626
      Abstract: In this paper, we propose a video-based person re-identification system for wide area tracking based on a recurrent neural network architecture. Given short video sequences of a person, generated by a tracking algorithm, our video re-identification algorithm links these tracklets in full trajectories across a network of non-overlapping cameras in an open-world scenario. In our system, features are first extracted from each frame using a convolutional neural network. Then, a recurrent layer combines information across time-steps. The features from all time-steps are finally combined using temporal pooling to give an overall appearance feature for the complete sequence. Our system is trained to perform re-identification using a Siamese network architecture. Experiments are conducted on the iLIDS-VID and PRID-2011 video re-identification data sets as well as in the DukeMTMC multi-camera tracking data set.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
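
The pipeline described above (per-frame CNN, a recurrent layer across time-steps, then temporal pooling into one appearance vector per tracklet) can be sketched compactly. This is a minimal PyTorch illustration under assumed shapes: the tiny convolutional backbone, the plain RNN, and mean pooling are stand-ins for the authors’ actual architecture, which is trained with a Siamese objective on pairs of tracklets.

import torch
import torch.nn as nn

class SeqEmbed(nn.Module):
    """Per-frame CNN -> recurrent layer -> temporal (mean) pooling,
    giving one appearance vector per video tracklet."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(  # stand-in frame encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.RNN(32, feat_dim, batch_first=True)

    def forward(self, clip):  # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        frames = self.cnn(clip.flatten(0, 1))     # (B*T, 32)
        seq, _ = self.rnn(frames.view(b, t, -1))  # (B, T, feat_dim)
        return seq.mean(dim=1)  # temporal pooling over all time-steps

# Two tracklets of 10 frames each -> two 128-D sequence embeddings.
emb = SeqEmbed()(torch.randn(2, 10, 3, 64, 64))
print(emb.shape)  # torch.Size([2, 128])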
       
  • People Counting in Dense Crowd Images Using Sparse Head Detections
    • Authors: Mamoona Birkhez Shami;Salman Maqbool;Hasan Sajid;Yasar Ayaz;Sen-Ching Samson Cheung;
      Pages: 2627 - 2636
Abstract: People counting in extremely dense crowds is a challenging problem due to severe occlusions, few pixels per head, cluttered environments, and skewed camera perspectives. In this paper, we present a novel algorithm for people counting in highly dense crowd images. Our approach relies on the fact that the head is the most visible part of an individual in a dense crowd. As such, a head detector can be used to estimate the spatially varying head size, which is the key feature used in our head counting procedure. We leverage a state-of-the-art convolutional neural network for sparse head detection in a dense crowd. After sub-dividing the image into rectangular patches, we first use a speeded-up robust features-based support vector machine binary classifier to label each patch as crowd/not-crowd and eliminate all not-crowd patches. Regression is then performed on each crowd patch to estimate the average head size. The number of individuals in each patch is estimated by dividing the patch area by the estimated head size. For the crowd patches where no heads are detected, the counts are estimated by distance-based weighted averaging over the counts from neighboring patches. Finally, the individual patch counts are summed up to obtain the total count. We evaluate our approach on three publicly available datasets for extremely dense crowds: UCF_CC_50, ShanghaiTech, and AHU-Crowd. Our approach gives results comparable to other state-of-the-art algorithms on these challenging datasets but, unlike other algorithms, the proposed method does not require the laborious task of obtaining labeled training data of dense crowd images.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
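
Once the learned components have run, the counting rule above is plain arithmetic: each crowd patch contributes its area divided by the estimated average head size, crowd patches without head detections borrow a distance-weighted average from their neighbors, and the patch counts are summed. A minimal numpy sketch with hypothetical inputs, assuming the SURF/SVM crowd classifier and the head-size regressor have already produced them:

import numpy as np

def count_people(patch_area, head_size, is_crowd, centers):
    """patch_area : (N,) pixel area of each rectangular patch
    head_size  : (N,) estimated average head area; np.nan where
                 no heads were detected
    is_crowd   : (N,) bool output of the crowd/not-crowd classifier
    centers    : (N, 2) patch centers, for distance-weighted fill-in
    """
    counts = np.where(is_crowd, patch_area / head_size, 0.0)

    # Crowd patches with no detected heads: fill from neighboring
    # patches using inverse-distance weighting over known counts.
    missing = is_crowd & np.isnan(head_size)
    known = is_crowd & ~np.isnan(head_size)
    for i in np.flatnonzero(missing):
        d = np.linalg.norm(centers[known] - centers[i], axis=1)
        counts[i] = np.average(counts[known], weights=1.0 / (d + 1e-6))

    return counts.sum()

print(count_people(np.array([900.0, 900.0, 900.0]),
                   np.array([25.0, np.nan, 30.0]),
                   np.array([True, True, True]),
                   np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])))  # 99.0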
       
  • Scene Invariant Virtual Gates Using DNNs
    • Authors: Simon Denman;Clinton Fookes;Prasad K. D. V. Yarlagadda;Sridha Sridharan;
      Pages: 2637 - 2651
Abstract: Understanding where people are located and how they are moving about in an environment is critical for operators of large public spaces such as shopping centers, and of large public infrastructures such as airports. Automated analysis of CCTV footage is increasingly being used to address this need through techniques that can count crowd sizes, estimate their density, and estimate the through-put of people into and/or out of a choke-point. A limitation of using CCTV-based approaches, however, is the need to train models specific to each view, which, for large environments with 100s or 1000s of cameras, can quickly become problematic. While there has been some success in developing scene-invariant crowd counting and crowd density estimation approaches, much less attention has been given to developing scene-invariant solutions for through-put estimation. In this paper, we investigate the use of convolutional neural network and long short-term memory architectures to estimate pedestrian through-put from arbitrary CCTV viewpoints. To properly develop and demonstrate our approach, we present a new 22-view database featuring 44 h of pedestrian through-put annotation, containing over 11 000 annotated people; and using the proposed approach we show that we are able to outperform a scene-dependent approach across a diverse set of challenging viewpoints.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Feature-Based Image Patch Classification for Moving Shadow Detection
    • Authors: Mosin Russell;Ju Jia Zou;Gu Fang;Weidong Cai;
      Pages: 2652 - 2666
      Abstract: The presence of shadows in images significantly affects the performance of many computer vision tasks and visual processing applications, such as object tracking, object classification, and behavior recognition. Most methods have been designed to detect shadows in specific situations, but they often fail to distinguish shadow points from the foreground object in many problematic situations, such as chromatic shadows, non-textured and dark surfaces, and foreground–background camouflage. In this paper, we propose a new feature-based image patch approximation and multi-independent sparse representation technique to tackle these environmental problems. In this method, two illumination-invariant features—binary patterns of local color constancy and light-based gradient matching—are introduced, along with the intensity-reduction histogram. These features are extracted from image patches and are used to construct two over-complete dictionaries for objects and shadows, respectively. Given a new image patch, its best approximation for a number of iterations is found from each dictionary. For each iteration, an independent class assignment is performed by finding its distances from the reference dictionaries. The patch is then assigned to a class based on its probability of occurrence. The proposed framework is evaluated on common shadow detection data sets, and it shows improved performance in terms of the shadow detection rate and discrimination rate compared with the state-of-the-art methods.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
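
The classification step described above is a sparse-representation residual test: a patch is approximated from each class dictionary, and assigned to the class whose dictionary reconstructs it with the smaller error. A minimal sketch using scikit-learn’s orthogonal matching pursuit and random stand-in dictionaries; the paper’s illumination-invariant features, multi-iteration approximation, and probability-of-occurrence voting are not reproduced.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify_patch(feat, D_shadow, D_object, n_nonzero=10):
    """Assign a patch feature vector to whichever over-complete
    dictionary (feat_dim x n_atoms) gives the smaller sparse-coding
    reconstruction residual."""
    residuals = []
    for D in (D_shadow, D_object):
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
        omp.fit(D, feat)  # solve feat ~ D @ coef with sparse coef
        residuals.append(np.linalg.norm(feat - omp.predict(D)))
    return "shadow" if residuals[0] < residuals[1] else "object"

# Toy example: random dictionaries and a random 64-D patch feature.
rng = np.random.default_rng(0)
print(classify_patch(rng.normal(size=64),
                     rng.normal(size=(64, 256)),
                     rng.normal(size=(64, 256))))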
       
  • Multi-Modality Multi-Task Recurrent Neural Network for Online Action Detection
    • Authors: Jiaying Liu;Yanghao Li;Sijie Song;Junliang Xing;Cuiling Lan;Wenjun Zeng;
      Pages: 2667 - 2682
Abstract: Online action detection is a brand new challenge and plays a critical role in visual surveillance analytics. It goes one step further than a conventional action recognition task, which recognizes human actions from well-segmented clips. Online action detection is desired to identify the action type and localize action positions on the fly from untrimmed stream data. In this paper, we propose a multi-modality multi-task recurrent neural network, which incorporates both RGB and Skeleton networks. We design different temporal modeling networks to capture specific characteristics from various modalities. Then, a deep long short-term memory subnetwork is utilized effectively to capture the complex long-range temporal dynamics, naturally avoiding the conventional sliding window design and thus ensuring high computational efficiency. Constrained by a multi-task objective function in the training phase, this network achieves superior detection performance and is capable of automatically localizing the start and end points of actions more accurately. Furthermore, the embedded regression subtask provides the ability to forecast an action prior to its occurrence. We evaluate the proposed method and several other methods in action detection and forecasting on the online action detection and gaming action data sets. Experimental results demonstrate that our model achieves the state-of-the-art performance on both tasks.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Trajectory-Pooled Spatial-Temporal Architecture of Deep Convolutional Neural Networks for Video Event Detection
    • Authors: Yonggang Li;Rui Ge;Yi Ji;Shengrong Gong;Chunping Liu;
      Pages: 2683 - 2692
Abstract: Nowadays, content-based video event detection faces great challenges due to complex scenes and blurred actions in surveillance videos. To alleviate these challenges, we propose a novel spatial-temporal architecture of deep convolutional neural networks for this task. By taking advantage of spatial-temporal information, we fine-tune two-stream networks and then fuse spatial and temporal features at convolution layers using a 2D pooling fusion method to enforce the consistency of spatial-temporal information. Based on the two-stream networks and the spatial-temporal layer, a triple-channel model is obtained. Furthermore, we apply trajectory-constrained pooling to deep features and hand-crafted features to combine their merits. A fusion method on the triple channels yields the final detection result. The experiments on two benchmark surveillance video data sets, VIRAT 1.0 and VIRAT 2.0, which involve a suite of challenging events, such as a person loading an object into a vehicle or a person opening a vehicle trunk, demonstrate that the proposed method can achieve superior performance compared with the state-of-the-art methods on these event benchmarks.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Combined Static and Motion Features for Deep-Networks-Based Activity Recognition in Videos
    • Authors: Sameera Ramasinghe;Jathushan Rajasegaran;Vinoj Jayasundara;Kanchana Ranasinghe;Ranga Rodrigo;Ajith A. Pasqual;
      Pages: 2693 - 2707
Abstract: Activity recognition in videos in a deep-learning setting—or otherwise—uses both static and pre-computed motion components. The method of combining the two components, while keeping the burden on the deep network low, remains uninvestigated. Moreover, it is not clear what the level of contribution of each individual component is, or how to control that contribution. In this paper, we use a combination of convolutional-neural-network-generated static features and motion features in the form of motion tubes. We propose three schemas for combining static and motion components: based on a variance ratio, principal components, and Cholesky decomposition. The Cholesky-decomposition-based method allows the control of contributions. The ratio given by variance analysis of static and motion features matches well with the experimental optimal ratio used in the Cholesky-decomposition-based method. The resulting activity recognition system is better than or on par with the existing state of the art when tested with three popular data sets. The findings also enable us to characterize a data set with respect to its richness in motion information.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • On Input/Output Architectures for Convolutional Neural Network-Based Cross-View Gait Recognition
    • Authors: Noriko Takemura;Yasushi Makihara;Daigo Muramatsu;Tomio Echigo;Yasushi Yagi;
      Pages: 2708 - 2719
Abstract: In this paper, we discuss input/output architectures for convolutional neural network (CNN)-based cross-view gait recognition. For this purpose, we consider two aspects: verification versus identification, and the tradeoff between spatial displacements caused by subject difference and view difference. More specifically, we use a Siamese network with a pair of inputs and contrastive loss for verification, and a triplet network with a triplet of inputs and triplet ranking loss for identification. The aforementioned CNN architectures are insensitive to spatial displacement, because the difference between a matching pair is calculated at the last layer after passing through the convolution and max pooling layers; hence, they are expected to work relatively well under large view differences. By contrast, because the spatial displacement caused by subject difference is best exploited under small view differences, we also use CNN architectures where the difference between a matching pair is calculated at the input level, making them more sensitive to spatial displacement. We conducted experiments on cross-view gait recognition and confirmed that the proposed architectures outperformed the state-of-the-art benchmarks in the situations each is suited to, in terms of verification/identification tasks and view differences.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
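
The verification/identification split above maps onto two standard objectives: a Siamese network trained with contrastive loss on pairs, and a triplet network trained with triplet ranking loss. A minimal PyTorch sketch of both losses; the margins, batch size, and 128-D gait embeddings are placeholder assumptions rather than the authors’ settings.

import torch
import torch.nn.functional as F

def contrastive_loss(f1, f2, same, margin=1.0):
    """Siamese verification loss: pull matching pairs together and
    push non-matching pairs at least `margin` apart."""
    d = F.pairwise_distance(f1, f2)
    pos = same * d.pow(2)                         # same-subject pairs
    neg = (1 - same) * F.relu(margin - d).pow(2)  # different subjects
    return (pos + neg).mean()

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet ranking loss for identification: the anchor should be
    closer to the positive than to the negative by `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random embeddings for a batch of 8 input pairs.
f1, f2 = torch.randn(8, 128), torch.randn(8, 128)
same = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(f1, f2, same))
print(triplet_loss(torch.randn(8, 128), torch.randn(8, 128),
                   torch.randn(8, 128)))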
       
  • A Data Set for Airborne Maritime Surveillance Environments
    • Authors: Ricardo Ribeiro;Gonçalo Cruz;Jorge Matos;Alexandre Bernardino;
      Pages: 2720 - 2732
Abstract: This paper presents a data set with surveillance imagery over the sea captured by a small-sized UAV. The data set contains object examples ranging from cargo ships, small boats, and life rafts to hydrocarbon slicks. The video sequences were captured using different types of cameras, at different heights, and from different perspectives. The data set also contains thousands of labels with positions of objects of interest. This was only possible to achieve with the labeling tool also described in this paper. Additionally, using standard evaluation frameworks, we establish a baseline of results using algorithms developed by the authors, which are better adapted to the maritime environment.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Comprehensive Analysis of Deep Learning-Based Vehicle Detection in Aerial Images
    • Authors: Lars Sommer;Tobias Schuchert;Jürgen Beyerer;
      Pages: 2733 - 2747
Abstract: Vehicle detection in aerial images is a crucial image processing step for many applications such as screening of large areas as used for surveillance, reconnaissance, or rescue tasks. In recent years, several deep learning-based frameworks have been proposed for object detection. However, these detectors were developed for data sets that considerably differ from aerial images. In this paper, we systematically investigate the potential of Fast R-CNN and Faster R-CNN for aerial images, which achieve top-performing results on common detection benchmark data sets. Therefore, the applicability of eight state-of-the-art object proposal methods used to generate a set of candidate regions and of both detectors is examined. Relevant adaptations to account for the characteristics of the aerial images are provided. To overcome the shortcomings of the original approach in the case of handling small instances, we further propose our own networks that clearly outperform state-of-the-art methods for vehicle detection in aerial images. Furthermore, we analyze the impact of the different adaptations with respect to various ground sampling distances to provide a guideline for detecting small objects in aerial images. All experiments are performed on two publicly available data sets to account for differing characteristics such as varying object sizes, number of objects per image, and varying backgrounds.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Glance and Stare: Trapping Flying Birds in Aerial Videos by Adaptive Deep Spatio-Temporal Features
    • Authors: Shuman Tian;Xianbin Cao;Yan Li;Xiantong Zhen;Baochang Zhang;
      Pages: 2748 - 2759
Abstract: Flying bird detection has recently attracted increasing attention in computer vision, and it becomes an urgent task with the opening up of low-altitude airspace. However, compared to conventional object detection tasks, it is much more challenging to trap flying birds in aerial videos due to small target sizes, complex backgrounds with great variations, and disturbances from bird-like objects. In this paper, we propose a unified framework termed glance-and-stare detection (GSD) to trap flying birds in aerial videos. The GSD is inspired by the fact that human beings first glance at the whole image and then stare at the areas where a suspected object is most likely to appear until confirmation is obtained. Specifically, we propose a zooming-in algorithm to generate region proposals for accurate localization of flying birds; to represent region proposal sequences of different lengths, we propose adaptive deep spatio-temporal features by leveraging the strength of 3D convolutional neural networks, based on which classification is conducted to achieve the final detection. In contrast to conventional methods, the GSD enables localization and classification to be conducted jointly in an alternating iterative way, which mutually enhances each other to improve their performance. In order to validate the proposed GSD algorithm, we build flying bird data sets including images and videos, which provide new benchmarks for the evaluation of flying bird detection systems. Experiments on the data sets demonstrate that the GSD can achieve high detection accuracy and largely outperform the state-of-the-art detection methods.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • On the Minimum Perceptual Temporal Video Sampling Rate and Its Application to Adaptive Frame Skipping
    • Authors: Christoph Bachhuber;Amit Bhardwaj;Rastin Pries;Eckehard Steinbach;
      Pages: 2760 - 2774
Abstract: Media technology, in particular video recording and playback, keeps improving to provide users with high-quality real and virtual visual content. In recent years, increasing the temporal sampling rate of videos and the refresh rate of displays has become one focus of technical innovation. This raises the question of how high the sampling and refresh rates should be. To answer this question, we determine the minimum temporal sampling rate at which a video should be presented to make temporal sampling imperceptible to viewers. Through a psychophysical study, we find that this minimum sampling rate depends on both the speed of the objects in the image plane and the exposure time of the recording camera. We propose a model to compute the required minimum sampling rate based on these two parameters. In addition, state-of-the-art video codecs employ motion vectors from which the local object movement speed can be inferred. Therefore, we present a procedure to compute the minimum sampling rate given an encoded video and the camera exposure time. Since the object motion speed in a video may vary, the corresponding minimum frame rate also varies. This is why the results of this paper are particularly applicable when used together with adaptive-frame-rate computer-generated graphics or novel video communication solutions that drop insignificant frames. In our experiments, we show that videos played back at the minimum adaptive frame rate achieve an average bit rate reduction of 26% compared to constant frame rate playback, while perceptually no difference can be observed.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
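
The abstract names the model’s two inputs, object speed in the image plane (recoverable from codec motion vectors) and camera exposure time, but not the model itself. The sketch below is therefore only a hypothetical illustration of the adaptive idea: choose the lowest frame rate at which the fastest object moves less than some perceptual displacement threshold between displayed frames. The threshold value, the rate bounds, and the omission of exposure time are assumptions, not the paper’s model.

import numpy as np

def min_frame_rate(mv_px, base_fps, max_disp_px=2.0, fps_bounds=(24, 120)):
    """Per-chunk adaptive frame rate from codec motion vectors.

    mv_px      : (N, 2) motion vectors of one chunk, in pixels per
                 frame at the encoding rate base_fps
    max_disp_px: hypothetical threshold on how far an object may move
                 between displayed frames before sampling is visible
    """
    speed = np.linalg.norm(mv_px, axis=1).max() * base_fps   # px/s
    return float(np.clip(speed / max_disp_px, *fps_bounds))  # frames/s

# Fastest vector ~1.7 px/frame at 60 fps -> ~102 px/s -> 51 fps.
print(min_frame_rate(np.array([[0.5, 0.2], [1.5, -0.8]]), base_fps=60))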
       
  • Identifying Computer Generated Images Based on Quaternion Central Moments in Color Quaternion Wavelet Domain
    • Authors: Jinwei Wang;Ting Li;Xiangyang Luo;Yun-Qing Shi;Sunil Kr. Jha;
      Pages: 2775 - 2785
Abstract: In this paper, a novel forensics scheme for color images is proposed in the color quaternion wavelet transform (CQWT) domain. Compared with the discrete wavelet transform (DWT), the contourlet wavelet transform, and local binary patterns, the CQWT processes a color image as a unit and can therefore provide more forensics information to distinguish photographic (PG) from computer generated (CG) images by considering the quaternion magnitude and phase measures. Meanwhile, two novel quaternion central moments for color images, i.e., quaternion skewness and kurtosis, are proposed to extract forensics features. Under the same statistical model as Farid’s model, the CQWT can boost the performance of the existing identification models. Compared with Farid’s model and Li’s model on 7500 PG and 7500 CG images, the quaternion statistical features show better classification performance. Results in the comparative experiments show that the classification accuracy of the CQWT improves by 19% over Farid’s model, and the quaternion features improve by approximately 2% over the traditional features.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Accelerating Flexible Manifold Embedding for Scalable Semi-Supervised Learning
    • Authors: Suo Qiu;Feiping Nie;Xiangmin Xu;Chunmei Qing;Dong Xu;
      Pages: 2786 - 2795
      Abstract: In this paper, we address the problem of large-scale graph-based semi-supervised learning for multi-class classification. Most existing scalable graph-based semi-supervised learning methods are based on the hard linear constraint or cannot cope with the unseen samples, which limits their applications and learning performance. To this end, we build upon our previous work flexible manifold embedding (FME) [1] and propose two novel linear-complexity algorithms called fast flexible manifold embedding (f-FME) and reduced flexible manifold embedding (r-FME). Both of the proposed methods accelerate FME and inherit its advantages. Specifically, our methods address the hard linear constraint problem by combining a regression residue term and a manifold smoothness term jointly, which naturally provides the prediction model for handling unseen samples. To reduce computational costs, we exploit the underlying relationship between a small number of anchor points and all data points to construct the graph adjacency matrix, which leads to simplified closed-form solutions. The resultant f-FME and r-FME algorithms not only scale linearly in both time and space with respect to the number of training samples but also can effectively utilize information from both labeled and unlabeled data. Experimental results show the effectiveness and scalability of the proposed methods.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • 2D-LBP: An Enhanced Local Binary Feature for Texture Image Classification
    • Authors: Bin Xiao;Kaili Wang;Xiuli Bi;Weisheng Li;Junwei Han;
      Pages: 2796 - 2808
Abstract: The local binary pattern (LBP) and its variants have shown their effectiveness in texture image classification, face recognition, and other applications. However, most of these LBP methods only focus on the histogram of LBP patterns and ignore the spatial contextual information between LBP patterns. In this paper, we propose a 2D-LBP method which uses a sliding window to count the weighted occurrence number of rotation-invariant uniform LBP pattern pairs to obtain the spatial contextual information. Multi-resolution 2D-LBP features can also be obtained by changing the radius of the 2D-LBP. Finally, a two-stage classifier, which acts as an ensemble learning step, is applied to achieve accurate classification by combining the predictions on each single-resolution 2D-LBP. Theoretical proof shows that the proposed 2D-LBP is a general framework and can be integrated with other LBP variants to derive new feature extraction methods. Experimental results show that the proposed method achieves 99.71%, 97.09%, 98.48%, and 49.00% classification accuracy on the public “Brodatz,” “CUReT,” “UIUC,” and “FMD” texture image databases, respectively. Compared with the original LBP and its variants, the proposed method obtains higher classification accuracy in different cases while having lower time complexity.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
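
The descriptor above augments the usual LBP histogram with spatial context by counting co-occurring pattern pairs. A minimal numpy/scikit-image sketch of one such pair histogram at a fixed offset; the paper’s sliding-window weighting, multi-resolution radii, and two-stage classifier are omitted, and the offset is an arbitrary choice.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_pair_histogram(gray, P=8, R=1, offset=(0, 2)):
    """Co-occurrence histogram of rotation-invariant uniform LBP
    codes at one spatial offset (a single slice of a 2D-LBP-style
    descriptor)."""
    codes = local_binary_pattern(gray, P, R, method="uniform").astype(int)
    n = P + 2  # number of distinct 'uniform' codes
    dy, dx = offset
    a = codes[:codes.shape[0] - dy, :codes.shape[1] - dx]
    b = codes[dy:, dx:]
    hist = np.zeros((n, n))
    np.add.at(hist, (a.ravel(), b.ravel()), 1)  # count pattern pairs
    return hist / hist.sum()

gray = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(lbp_pair_histogram(gray).shape)  # (10, 10)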
       
  • An Adaptive Multi-Projection Metric Learning for Person Re-Identification Across Non-Overlapping Cameras
    • Authors: Hai-Miao Hu;Wen Fang;Bo Li;Qi Tian;
      Pages: 2809 - 2821
Abstract: Person re-identification is one of the most important and challenging problems in video analytics systems; it aims to match people across non-overlapping camera views. For person re-identification, metric learning is introduced to improve the performance by providing a metric adapted for cross-view matching. The essence of metric learning is to search for an optimal projection matrix to project the original features into a new feature space. However, most existing metric learning methods overlook the inconsistency of feature distributions in multiple cameras. In this paper, we propose a multi-projection metric learning (MPML) method to overcome the inconsistency among multiple cameras in person re-identification. Our solution is to jointly learn multiple projection matrices using paired samples from different cameras to project features from different cameras into a common feature space. To make our method adaptive to newly added cameras without affecting the learned projection matrices, we further propose an adaptive MPML method, which can learn new camera projection matrices without having to update any of the obtained projection matrices. The proposed methods are evaluated on four major person re-identification data sets, with comprehensive experiments showing the effectiveness of the proposed methods and notable improvements over the state-of-the-art approaches.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Attention-Based 3D-CNNs for Large-Vocabulary Sign Language Recognition
    • Authors: Jie Huang;Wengang Zhou;Houqiang Li;Weiping Li;
      Pages: 2822 - 2832
Abstract: Sign language recognition (SLR) is an important and challenging research topic in the multimedia field. Conventional techniques for SLR rely on hand-crafted features, which achieve limited success. In this paper, we present attention-based 3D-convolutional neural networks (3D-CNNs) for SLR. The framework has two advantages: 3D-CNNs learn spatio-temporal features from raw video without prior knowledge, and the attention mechanism helps to select the relevant clues. When training the 3D-CNN to capture spatio-temporal features, spatial attention is incorporated into the network to focus on the areas of interest. After feature extraction, temporal attention is utilized to select the significant motions for classification. The proposed method is evaluated on two large-scale sign language data sets. The first one, collected by ourselves, is a Chinese sign language data set that consists of 500 categories. The other is the ChaLearn14 benchmark. The experimental results demonstrate the effectiveness of our approach compared with state-of-the-art algorithms.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • GiantClient: Video HotSpot for Multi-User Streaming
    • Authors: Anis Elgabli;Muhamad Felemban;Vaneet Aggarwal;
      Pages: 2833 - 2843
      Abstract: In this paper, we propose a cooperative multi-user video streaming system, termed GiantClient, for videos encoded using scalable video coding (SVC). The proposed system allows a group of users to watch a video on a single screen. The users, who may have different data plans from different carriers or different levels of energy, can collaborate to fetch the SVC-encoded video at high quality and avoid running into re-buffering. Using SVC, each layer of every chunk of the video can be fetched by only one of the cooperating users. Therefore, we formulate the streaming problem that obtains the quality and the fetching policy decisions as an optimization problem. The objective is to optimize a novel quality-of-experience metric that maintains a tradeoff between maximizing the quality of every chunk and ensuring fairness among all video chunks for the minimum re-buffering time. The problem is constrained with the available bandwidth, the chunk deadlines, and the imposed maximum contribution constraints by users. Moreover, we propose a low-complexity algorithm to solve the proposed optimization problem. A real implementation of the system with real SVC-encoded videos and real bandwidth traces reveal the robustness and performance of the proposed algorithm.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Improved Search in Hamming Space Using Deep Multi-Index Hashing
    • Authors: Hanjiang Lai;Yan Pan;Si Liu;Zhenbin Weng;Jian Yin;
      Pages: 2844 - 2855
Abstract: Similarity-preserving hashing is a widely used method for nearest neighbor search in large-scale image retrieval tasks. Considerable research has been conducted on deep-network-based hashing approaches to improve the performance. However, the binary codes generated from deep networks may not be uniformly distributed over the Hamming space, which will greatly increase the retrieval time. To this end, we propose a deep-network-based multi-index hashing (MIH) for retrieval efficiency. We first introduce the MIH mechanism into the proposed deep architecture, which divides the binary codes into multiple substrings. Each substring corresponds to one hash table. Then, we add two balanced constraints to obtain more uniformly distributed binary codes: 1) balanced substrings, where the Hamming distances of each substring are equal for any two binary codes and 2) balanced hash buckets, where the sizes of each bucket are equal. Extensive evaluations on several benchmark image retrieval data sets show that the learned balanced binary codes bring dramatic speedups and achieve comparable performance over the existing baselines.
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
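
The MIH mechanism above is easy to show concretely: split each binary code into m substrings, index each substring in its own hash table, and probe every table with the corresponding query substring. A toy pure-Python sketch with hand-written bit strings; the deep network, the two balanced constraints, and any candidate re-ranking are not reproduced.

from collections import defaultdict

def build_tables(codes, m):
    """Index binary codes (bit strings) in m hash tables, one table
    per substring, as in multi-index hashing."""
    n = len(codes[0]) // m
    tables = [defaultdict(list) for _ in range(m)]
    for idx, code in enumerate(codes):
        for j in range(m):
            tables[j][code[j * n:(j + 1) * n]].append(idx)
    return tables, n

def candidates(query, tables, n):
    """By the pigeonhole principle, any code within Hamming radius
    r < m of the query matches it exactly in at least one substring,
    so probing each table with the query's substring retrieves all
    such neighbors (plus false candidates to re-check)."""
    cands = set()
    for j, table in enumerate(tables):
        cands.update(table[query[j * n:(j + 1) * n]])
    return cands

db = ["0101101001", "0101111111", "1111100000"]
tables, n = build_tables(db, m=2)  # two 5-bit substrings per code
print(candidates("0101100000", tables, n))  # {0, 1, 2}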
       
  • Introducing IEEE Collabratec
    • Pages: 2856 - 2856
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • IEEE Open Access Publishing
    • Pages: 2857 - 2857
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
  • Member Get-A-Member (MGM) Program
    • Pages: 2858 - 2858
      PubDate: Sept. 2019
      Issue No: Vol. 29, No. 9 (2019)
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +44 (0)131 4513762
Fax: +44 (0)131 4513327
 

JournalTOCs © 2009-