- A Single Image Dehazing Method Based on End-to-End CPAD-Net Network in
Deep Learning Environment-
Authors: Chaoda Song, Jun Liu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. To address the blurred details and color distortion in images recovered by the original AOD-Net dehazing method, this paper proposes CPAD-Net, a dehazing network model based on an attention mechanism and dense residual blocks. The network improves on AOD-Net, reducing the errors that arise from estimating the transmittance and atmospheric light values separately. A new dense residual block structure replaces the traditional convolution method, effectively improving the network's detail processing and its ability to represent image feature information. On this basis, the attention module learns weights according to the feature importance of distinct channels and distinct pixels, improving the recovery of color and texture. Experiments show that images dehazed by our method are richer in texture detail and more natural in color. Compared with other algorithms, our method achieves considerably higher PSNR and SSIM, demonstrating a more effective dehazing result and more realistic, natural recovered images. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-26T07:00:00Z DOI: 10.1142/S0218126623502729
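The channel-attention idea described in the abstract can be illustrated with a minimal pure-Python sketch. This is a hypothetical squeeze-and-excite-style simplification, not the paper's actual CPAD-Net module: each channel is rescaled by a sigmoid gate of its global average, so informative channels are emphasized; pixel attention would gate spatial locations the same way.

```python
# Minimal sketch of channel-wise attention on a feature map (hypothetical
# simplification of an attention module; not the paper's implementation).
import math

def channel_attention(feature_map):
    """feature_map: list of channels, each a 2-D list of floats.
    Returns channels rescaled by a sigmoid gate of their global mean."""
    gated = []
    for channel in feature_map:
        flat = [v for row in channel for v in row]
        mean = sum(flat) / len(flat)          # squeeze: global descriptor
        gate = 1.0 / (1.0 + math.exp(-mean))  # excite: sigmoid gate in (0, 1)
        gated.append([[v * gate for v in row] for row in channel])
    return gated

out = channel_attention([[[2.0, 2.0], [2.0, 2.0]],      # strong channel
                         [[-2.0, -2.0], [-2.0, -2.0]]])  # weak channel
```

In a real network the gate would be produced by small learned layers rather than a fixed sigmoid of the mean; the scaling mechanism is the same.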
- Applying Coding Behavior Features to Student Plagiarism Detection on
Programming Assignments-
Authors: Zheng Li, Yuting Zhang, Yong Liu, Yonghao Wu, ShuMei Wu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In programming education, the result of plagiarism detection is a crucial criterion for assessing whether students can pass course exams. The prevalent methods for detecting student plagiarism analyze source code: they extract features (such as tokens, abstract syntax trees and control flow graphs) from the source code, examine code similarity using various similarity detection methods, and then perform plagiarism detection based on a predefined plagiarism threshold. However, these methods have several problems. First, they are less effective at detecting structure-related code modifications. Second, they require a considerable amount of training data, demanding much computing time and space. Third, they cannot detect plagiarism in a timely manner. We propose a novel plagiarism detection method that analyzes the behavioral features of students during the coding process. Specifically, we extract five behavioral features based on students' programming habits. Then, we use a feature ranking-based suspiciousness algorithm to obtain the likelihood of student plagiarism. Based on our proposed method, we develop the Online Integrated Programming Platform. To evaluate the accuracy of our method, we conduct a series of experiments. Final experimental results indicate that our method achieves promising results with Accuracy, Precision, Recall and F1 values of 0.95, 0.90, 0.95 and 0.92, respectively. Finally, we also analyze the correlation between whether students plagiarized and their regular and final grades, which further verifies the effectiveness of our proposed method. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-26T07:00:00Z DOI: 10.1142/S0218126623502869
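The four metrics reported in the abstract relate through standard formulas; as a sanity check, the reported F1 of 0.92 is exactly what the reported precision (0.90) and recall (0.95) imply. A small sketch (the confusion-matrix counts below are hypothetical, only the closing check uses the paper's numbers):

```python
# Standard classification metrics; the confusion-matrix counts here are
# hypothetical examples, not the paper's data.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# F1 implied by the paper's reported precision/recall, independent of counts:
f1_reported = 2 * 0.90 * 0.95 / (0.90 + 0.95)  # ~0.924, rounds to 0.92
```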
- VDIBA-Based Current-Mode PID Controller Design
-
Authors: Umut Cem Oruçoğlu, Emre Özer, Firat Kaçar Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents a voltage differencing inverting buffered amplifier (VDIBA)-based current-mode (CM) proportional integral derivative (PID) controller circuit. The CM PID controller is designed with a single VDIBA, three resistors, and two grounded capacitors. The proposed circuit is easy to design, and the control parameters can be tuned without changing the design configuration. A sensitivity analysis of the control parameters with respect to the electronic components has been conducted. Simulation Program with Integrated Circuit Emphasis (SPICE) simulations have been performed using Taiwan Semiconductor Manufacturing Company (TSMC) [math]m complementary metal-oxide semiconductor (CMOS) technology parameters. An application circuit example is given to demonstrate the reliability of the proposed PID design. A comparison table of PID controllers previously reported in the literature is also presented. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-26T07:00:00Z DOI: 10.1142/S0218126623502882
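The control law that the analog VDIBA circuit realizes is the familiar PID relation u(t) = Kp·e + Ki·∫e dt + Kd·de/dt, whose gains the circuit's R and C values set. As a reference only, here is a discrete-time sketch of that law (the gains and time step below are hypothetical, not values from the paper):

```python
# Discrete-time reference PID: u[n] = Kp*e + Ki*sum(e)*dt + Kd*de/dt.
# The paper's controller is analog; this only illustrates the control law.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                    # integral term
        derivative = (error - self.prev_error) / self.dt    # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
u = pid.update(1.0)  # one step with constant unit error
```

Note how the derivative term dominates on the first step (the error jumps from 0 to 1) and vanishes once the error holds steady, while the integral term keeps accumulating.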
- Mathematical Modeling and Numerical Simulation of a Single-Turn MEMS
Piezoresistive Pressure Sensor for Enhancement of Performance Metrics-
Authors: Eshan Sabhapandit, Sumit Kumar Jindal, Dadasikandar Kanekal, Hemprasad Yashwant Patil Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Micro-Electro-Mechanical System (MEMS)-based pressure sensors operating on the principle of piezoresistivity have found wide application in fields such as automotive, aerospace, aviation, biomedical and consumer electronics. Various research studies have optimized the design of MEMS-based pressure sensors to meet the specific requirements of different fields, and modifying the structure of the piezoresistors placed on these sensors has proved particularly effective. However, most of these improvements have been validated through fabrication and measurement; few studies have developed analytical models to explain them. This paper studies the performance of a single-turn piezoresistor design on a square silicon diaphragm. An analytical model is laid out that relates the dimensions of the single-turn piezoresistor on a square diaphragm to the output voltage and hence to the sensor sensitivity. The correctness of the relation is validated through Finite Element Analysis (FEA) performed using COMSOL Multiphysics software. An optimized single-turn design is presented which achieves a sensitivity of 203.57 mV/V/MPa over a pressure range of 0–1 MPa. These results are then compared with existing literature; the comparison shows the improved performance achieved by optimizing the design through its derived analytical model. The proposed sensor can be utilized in disposable blood pressure measurement systems where high sensor sensitivity is required. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-23T07:00:00Z DOI: 10.1142/S0218126623502766
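The sensitivity unit mV/V/MPa used above is the bridge output per volt of supply per unit pressure. A minimal sketch of that definition, assuming an idealized full Wheatstone bridge where Vout/Vs equals the fractional resistance change ΔR/R (the example numbers are hypothetical, chosen only to land near the reported order of magnitude):

```python
# Sensitivity = (Vout/Vsupply) per unit pressure, expressed in mV/V/MPa.
# Assumes an ideal full Wheatstone bridge: Vout/Vs = dR/R.
def sensitivity_mV_per_V_per_MPa(delta_r_over_r, pressure_mpa):
    vout_over_vs = delta_r_over_r                   # ideal full-bridge relation
    return (vout_over_vs * 1000.0) / pressure_mpa   # V/V -> mV/V, per MPa

# A hypothetical fractional resistance change of 0.2 over the full 1 MPa
# range corresponds to 200 mV/V/MPa, the order of the reported 203.57.
s = sensitivity_mV_per_V_per_MPa(0.2, 1.0)
```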
- A 4.21-μV Offset Voltage and 42-nV/√Hz Input Noise Chopper
Operational Amplifier with Dynamic Element Matching-
Authors: Ruikai Zhu, Chenjian Wu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents a high-efficiency operational amplifier (op-amp) used in the readout circuit of a micro-sensor. Chopping and dynamic element matching (DEM) techniques are used in the proposed design, which greatly reduce the offset voltage and flicker (1/f) noise of the op-amp. By optimizing the circuit structure, the maximum chopping frequency is increased to 2 MHz while the offset voltage remains below [math] μV, so the maximum signal-processing bandwidth can reach 1 MHz. The proposed circuit is designed and fabricated in TSMC 0.18-μm 1P5M CMOS technology. It occupies an area of 0.16 mm² and consumes [math]A from a 1.8-V supply. At a chopping frequency of 100 kHz, the input-referred offset voltage is 4.21 μV and the input-referred noise is 42 nV/√Hz. The op-amp achieves a noise efficiency factor of 7.82 and a power efficiency factor of 110.07. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-23T07:00:00Z DOI: 10.1142/S0218126623502845
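Why chopping removes offset can be seen in a few lines: the input is modulated by a ±1 sequence before the amplifier adds its offset, then demodulated by the same sequence, so the signal is restored while the offset becomes a zero-mean square wave that averaging (the low-pass filter in a real chopper amp) removes. A toy numerical sketch with hypothetical gain and offset values:

```python
# Toy chopper amplifier: modulate, amplify (offset added inside), demodulate.
# Gain and offset values are hypothetical, for illustration only.
def chopper_amplify(x, gain, offset):
    out = []
    for n, v in enumerate(x):
        m = 1 if n % 2 == 0 else -1   # chopping sequence at fs/2
        y = gain * (v * m) + offset   # amplifier adds its offset after modulation
        out.append(y * m)             # synchronous demodulation: gain*v + offset*m
    return out

samples = chopper_amplify([0.001] * 8, gain=100.0, offset=0.5)
avg = sum(samples) / len(samples)     # offset averages out over full chop periods
```

The averaged output recovers gain × input (0.1) even though the amplifier's 0.5 offset is 5× larger than the amplified signal.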
- A New Design of NCFF Compensated Operational Amplifier for Continuous-Time
Delta Sigma Modulator-
Authors: Kasturi Ghosh Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. A power-efficient second-order no-capacitor feedforward (NCFF) op-amp has been designed in a 180 nm CMOS process. To attain high gain and better common-mode rejection, a cross-coupled loading network is used in each differential stage. The designed op-amp achieves over 40 dB gain up to 400 MHz and over 10 dB open-loop gain up to 4 GHz with 0.87 mW power consumption. It is suitable for continuous-time delta-sigma modulators (CT ΔΣMs) with sampling frequencies in the GHz range. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-23T07:00:00Z DOI: 10.1142/S0218126623502894
- Parallel Optimization of BLAS on a New-Generation Sunway Supercomputer
-
Authors: Yinqiao Ren, Yi Xu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The new-generation Sunway supercomputer has ultra-high computing capacity, but due to its unique heterogeneous architecture, the open-source versions of the basic linear algebra subprograms (BLAS) fall short in performance or compatibility. In addition, because the architecture has been updated, BLAS tuned for the previous Sunway cannot fully exploit the performance of its successor. To address these challenges, this paper proposes an optimized BLAS for the new-generation Sunway supercomputer. Specifically, to achieve efficient computation, a parallel optimization method is first proposed for Level-1 BLAS (vector–vector operations) and Level-2 BLAS (matrix–vector operations) on the new-generation Sunway. Then, an adaptive scheduling algorithm for various data sizes is proposed to balance the tasks across core groups. Finally, to achieve highly efficient general matrix multiplication (GEMM) kernels, a parallel optimization method for Level-3 BLAS (matrix–matrix operations) is proposed, covering both source-level and assembly-level optimization. Experimental results show that the memory bandwidth utilization of the optimized Level-1/2 BLAS exceeds 95%, and the computational efficiency of the optimized GEMM kernel exceeds 94%. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-23T07:00:00Z DOI: 10.1142/S0218126623502900
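The core idea behind an optimized Level-3 GEMM kernel is blocking (tiling): operating on cache-sized tiles so each loaded block of A and B is reused many times before being evicted. A minimal pure-Python sketch of blocked GEMM — real kernels like the paper's add vectorization and assembly-level tuning on top of this loop structure:

```python
# Blocked (tiled) matrix multiply, the loop structure at the heart of
# optimized Level-3 BLAS GEMM kernels. Pure-Python illustration only.
def gemm_blocked(a, b, block=2):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, block):          # tile over rows of C
        for j0 in range(0, m, block):      # tile over columns of C
            for p0 in range(0, k, block):  # tile over the shared dimension
                for i in range(i0, min(i0 + block, n)):
                    for p in range(p0, min(p0 + block, k)):
                        aip = a[i][p]      # held in a register across inner loop
                        for j in range(j0, min(j0 + block, m)):
                            c[i][j] += aip * b[p][j]
    return c

c = gemm_blocked([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```

The tile size `block` would be chosen to match the local store / cache of the target core group; here it is an arbitrary small value.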
- High-Performance FPGA Implementation of Single MAC Adaptive Filter for
Independent Component Analysis-
Authors: M. R. Ezilarasan, J. Britto Pari, Man-Fai Leung Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Blind source separation (BSS) is the process of extracting sources from mixed data with little or no knowledge of the sources. This paper uses a field programmable gate array (FPGA) to create and optimize an efficient implementation of independent component analysis (ICA), a BSS algorithm, with a single Multiply Accumulate (MAC) adaptive filter. Recently, space research has paid considerable attention to this technique. We address the problem in two parts. The first is ICA, which seeks a linear transformation that enhances the mutual independence of the mixture components so as to distinguish the source signals from the mixed signals. The second is a flexible, adaptive finite impulse response (FIR) filter construction built around a MAC core. The proposed design uses adjustable-coefficient filters to identify the unknown system via an optimized least mean square (LMS) technique. The filter considered in this paper has 32 taps, and its analysis and synthesis were carried out using a hardware description language (HDL) and FPGA devices. Compared with the reference architecture, the implemented filter architecture uses 80% fewer resources, raises the clock frequency nearly fivefold, and increases speed by up to 32%. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-23T07:00:00Z DOI: 10.1142/S0218126623502948
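The LMS adaptation at the heart of such a filter is one multiply-accumulate per tap per sample, which is what makes a single shared MAC unit feasible in hardware. A minimal sketch of LMS system identification (the 3-tap unknown system, step size, and sample count below are hypothetical; the paper's filter has 32 taps):

```python
# LMS system identification: the adaptive filter's coefficients converge
# toward an unknown FIR system. Plant taps and step size are hypothetical.
import random

def lms_identify(unknown, n_samples=3000, mu=0.05, seed=1):
    random.seed(seed)
    w = [0.0] * len(unknown)                          # adaptive coefficients
    for _ in range(n_samples):
        x = [random.uniform(-1, 1) for _ in unknown]  # input tap vector
        d = sum(u * xi for u, xi in zip(unknown, x))  # desired (plant) output
        y = sum(wi * xi for wi, xi in zip(w, x))      # filter output (MAC loop)
        e = d - y                                     # error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS update
    return w

w = lms_identify([0.5, -0.3, 0.1])
```

(For simplicity this draws an independent input vector each step rather than shifting a delay line; the update rule is the same.)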
- Electronically Tunable Differential Difference Current Conveyor Using
Commercially Available OTAs-
Authors: Boonying Knobnob Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents a new electronically tunable differential difference current conveyor (EDDCC) using commercially available operational transconductance amplifiers (OTAs). Unlike the conventional DDCC, the proposed EDDCC offers a current gain that can be electronically controlled. The EDDCC can also be used to realize a new electronically tunable fully differential difference second-generation current conveyor (EFDCCII), whose current gain can likewise be electronically controlled. To show the advantages of the proposed EDDCC and EFDCCII, the EDDCC has been used to realize a quadrature oscillator and the EFDCCII a current-mode universal filter. The proposed circuits have been investigated by simulation and experimental tests using commercially available LM13700 OTAs. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-23T07:00:00Z DOI: 10.1142/S021812662350295X
- An Android Malware Detection Method Using Multi-Feature and MobileNet
-
Authors: Zhiyao Yang, Xu Yang, Heng Zhang, Haipeng Jia, Mingliang Zhou, Qin Mao, Cheng Ji, Xuekai Wei Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Most existing static analysis-based detection methods adopt only one or a few types of typical static features to avoid the curse of dimensionality and excessive computational resource consumption. To further improve detection accuracy at reasonable resource cost, this paper proposes a new Android malware detection model based on multiple features, together with a feature selection method and feature vectorization methods. The feature selection method reduces the dimensionality of each type of feature set. A weight-based feature vectorization method for API calls, intents and permissions is designed to construct the feature vector, and a co-occurrence matrix-based vectorization method is proposed to vectorize the opcode sequence. To demonstrate the effectiveness of our method, we conducted comprehensive experiments with a total of 30,000 samples. Experimental results show that our method outperforms state-of-the-art methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-23T07:00:00Z DOI: 10.1142/S0218126623502997
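Co-occurrence-matrix vectorization of an opcode sequence can be sketched in a few lines: count how often opcode b directly follows opcode a, then flatten the row-normalized counts into a fixed-length feature vector. This is a minimal illustration, not the paper's exact scheme, and the opcode vocabulary is hypothetical:

```python
# Co-occurrence-matrix vectorization of an opcode sequence (minimal sketch;
# the vocabulary and normalization choice are illustrative assumptions).
def cooccurrence_vector(seq, vocab):
    idx = {op: i for i, op in enumerate(vocab)}
    n = len(vocab)
    mat = [[0.0] * n for _ in range(n)]
    for a, b in zip(seq, seq[1:]):             # adjacent opcode pairs
        mat[idx[a]][idx[b]] += 1.0
    for row in mat:                            # row-normalize to frequencies
        total = sum(row)
        if total:
            for j in range(n):
                row[j] /= total
    return [v for row in mat for v in row]     # flatten to a feature vector

vec = cooccurrence_vector(["move", "invoke", "move", "invoke", "return"],
                          ["move", "invoke", "return"])
```

The resulting vector has fixed length |vocab|², regardless of program length, which is what makes it usable as a classifier input.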
- Toward Design and Implementation of Self-Balancing Robot Using Deep
Learning-
Authors: Preeti Nagrath, Rachna Jain, Drishti Agarwal, Gopal Chaudhary, Tianhong Huang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In the Internet of Things (IoT) era, an immense number of sensing devices produce various sensory data over time for a wide range of disciplines and applications, generating large, fast, real-time data streams. Applying analytics over such streams to discover new information, predict future insights, and make control decisions is what makes IoT a worthy paradigm for businesses and a quality-of-life-improving technology. This paper presents a study of digital agriculture and the application of an IoT-based device, a two-wheeled self-balancing robot, to it. It then gives a thorough procedural explanation of the device's development: mathematical modeling of the system through the Euler–Lagrange method to obtain the equations of motion, followed by linearization of the equations to define the control method used to balance the robot structure, all based on the concept of the inverted pendulum. The paper then discusses the most suitable and efficient control method for these robots, the linear quadratic regulator (LQR). Finally, a deep learning-based LQR (DL-LQR) method is implemented on the robot, which balances it successfully. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-22T07:00:00Z DOI: 10.1142/S0218126623502602
- Cascaded Inner Loop Fuzzy SMC for DC–DC Boost Converter
-
Authors: Y. Rekha, V. Jamuna, I. William Christopher Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents the implementation of a Fuzzy Sliding Mode Controller (FSMC) in the inner loop of the cascaded control structure of a DC–DC boost converter. Owing to their nonlinearity and nonminimum-phase nature, switched-mode DC–DC converters show poor dynamic response. In most prior work, the inner loop is served by SMC or FLC and the outer loop by a PI controller. In this study, the proposed FSMC, which combines SMC and FLC, is applied in the inner current loop; together with the reaching law, it reduces the chattering phenomenon and improves robustness against uncertainties, disturbances and varying circuit parameters. The Lyapunov approach is used to study the stability of the proposed controller. A comparative analysis is made between the results obtained with the proposed FSMC and with fuzzy and SMC control. The effectiveness of the FSMC is validated by observing the system performance. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-22T07:00:00Z DOI: 10.1142/S0218126623502699
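The chattering that the FSMC targets comes from the discontinuous sign(s) term in a conventional reaching law. A minimal sketch of the idea, with tanh(s/φ) standing in as one simple smooth substitute for the fuzzy reaching term (the sliding surface s = ė + λe and all gains here are hypothetical, not the paper's design):

```python
# Sliding-mode control with a smooth reaching term: s = de + lam*e is the
# sliding surface; tanh(s/phi) is a simple stand-in for fuzzy smoothing of
# the discontinuous sign(s), which is what causes chattering. Gains are
# hypothetical.
import math

def smc_control(e, de, lam=5.0, k=2.0, phi=0.1):
    s = de + lam * e                 # sliding surface
    return -k * math.tanh(s / phi)   # smooth reaching law, bounded by k

u_far = smc_control(e=1.0, de=0.0)   # far from the surface: saturates near -k
u_on = smc_control(e=0.0, de=0.0)    # on the surface: zero control effort
```

Far from the surface the control saturates at ±k like classic SMC; inside the boundary layer of width φ it varies smoothly instead of switching, which is the chattering-reduction mechanism.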
- PipCKG-BS: A Method to Build Cybersecurity Knowledge Graph for Blockchain
Systems via the Pipeline Approach-
Authors: Jianbin Li, Jifang Li, Chunlei Xie, Yousheng Liang, Ketong Qu, Long Cheng, Zhiming Zhao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The increasing sophistication of cyberattacks on blockchain systems has made it significantly harder for security experts to gain immediate insight into the security situation. The Cybersecurity Knowledge Graph (CKG) provides a novel technical solution for blockchain system situational awareness by integrating massive fragmented Cyber Threat Intelligence (CTI) about blockchain technology. However, the existing literature does not provide a solution for building a CKG appropriate for blockchain systems; designing a method to construct such a CKG by efficiently extracting information from CTI is therefore necessary. This paper proposes PipCKG-BS, a pipeline-based approach that builds a CKG for blockchain systems. PipCKG-BS incorporates contextual features and Pre-trained Language Models (PLMs) to improve the performance of the information extraction process. Specifically, we develop Named Entity Recognition (NER) and Relation Extraction (RE) models for cybersecurity text in PipCKG-BS. In the NER model, we apply the prompt-based learning paradigm to cybersecurity text by constructing prompt templates. In the RE model, we employ external features and prior knowledge of sentences to improve the accuracy of entity relationship extraction. Experimental results demonstrate that PipCKG-BS outperforms advanced methods in extracting CTI information and is an appealing solution for building high-quality CKGs for blockchain systems. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-22T07:00:00Z DOI: 10.1142/S0218126623502742
- A Vision Comprehension-Driven Intelligent Recognition Approach for Actions
of Tennis Players Based on Improved Convolution Neural Networks-
Authors: Zhiqiang Cai, Zhixin Zhang, Zhengdao Lu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, we focus on two tasks in the visual semantic understanding of tennis sports images, semantic segmentation and target detection, and we optimize the network structure to mine more complete location and contour information of the target. In detail, we focus on a weakly supervised image semantic segmentation method based on dilated-convolution pixel relations. To address the problem of incomplete pixel-level pseudo-labeling, we introduce a dilated (atrous) convolution unit with multiple dilation rates and a self-attention mechanism into the classification model to adaptively enhance the target regions and suppress irrelevant regions while expanding the receptive field, generating high-quality pixel-level pseudo-labels with which the semantic segmentation model is then trained. The final experimental results show that the hierarchical fusion algorithm proposed in this paper significantly outperforms other algorithms, and the overall classification accuracy of the cascaded dilated-convolution neural network algorithm reaches 81%, with good overall classification results. The recognition accuracy of static movements is higher than that of dynamic movements. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-22T07:00:00Z DOI: 10.1142/S0218126623502778
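Dilated (atrous) convolution enlarges the receptive field without adding weights by spacing the kernel taps `dilation` samples apart. A 1-D pure-Python sketch of the mechanism the segmentation model uses in 2-D:

```python
# 1-D dilated (atrous) convolution: kernel taps are spaced `dilation`
# samples apart, so a 3-tap kernel with dilation 2 spans 5 input samples.
def dilated_conv1d(x, kernel, dilation):
    span = (len(kernel) - 1) * dilation        # receptive field minus one
    return [sum(kernel[j] * x[i + j * dilation] for j in range(len(kernel)))
            for i in range(len(x) - span)]

out = dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], dilation=2)
```

With dilation 1 this reduces to ordinary (valid) convolution; using several dilation rates in parallel, as the paper does, captures context at multiple scales with the same weight count.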
- Resource Allocation for 5G Network Considering Privacy Protection in Edge
Computing Environment-
Authors: Li Wang, Xiaokai Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In the Internet of Things, mobile communication will be deeply integrated with industries such as industrial construction, biomedicine, agricultural production and transportation to fully realize the "Internet of Everything" and enable smart cities and sustainable development. This paper addresses the increased latency, energy consumption and security problems that arise after deploying Mobile Edge Computing (MEC) servers in a single-cell multi-user network model, and proposes a 5G resource allocation strategy that incorporates a privacy protection mechanism in the edge computing environment. First, the problem is modeled in terms of time delay and energy consumption. Second, the uncertainty of the data is used to quantify privacy during task allocation and to facilitate the design of the objective function. Finally, the task migration problem of minimizing energy consumption under a completion-time constraint is formulated as a nonlinear 0–1 programming problem, and an optimal resource allocation strategy based on the discrete binary particle swarm optimization (BPSO) algorithm is designed to solve it. Results indicate that with 100 users, the total user expenditure of the proposed method is 18.3 J, lower than the 20.0 J and 30.7 J of the compared methods. Moreover, the proposed method has lower latency and energy consumption and balances the load of the MEC servers well. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-22T07:00:00Z DOI: 10.1142/S0218126623502857
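In binary PSO the velocities stay real-valued, and each bit is resampled through a sigmoid of its velocity — which is what makes the method suitable for 0–1 task-allocation decisions like those above. A minimal sketch with a toy objective (maximize the number of 1-bits) standing in for the paper's cost function; all coefficients are hypothetical:

```python
# Minimal binary PSO (BPSO) sketch. The sigmoid transfer function turns
# real-valued velocities into bit probabilities. The fitness used here
# (count of 1-bits) is a hypothetical stand-in for the paper's objective.
import math, random

def bpso(fitness, n_bits, n_particles=20, iters=60, seed=7):
    random.seed(seed)
    pos = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    gbest = max(pbest, key=fitness)[:]             # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                vel[i][d] += 2.0 * r1 * (pbest[i][d] - pos[i][d]) \
                           + 2.0 * r2 * (gbest[d] - pos[i][d])
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))   # sigmoid transfer
                pos[i][d] = 1 if random.random() < prob else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = bpso(fitness=sum, n_bits=8)   # maximize the number of 1-bits
```

In the paper's setting each bit would encode a task-migration decision and the fitness would be the (negated) energy cost under the completion-time constraint.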
- An Efficient Fully Automated Lung Cancer Classification Model Using
GoogLeNet Classifier-
Authors: P. Samundeeswari, R. Gunasundari Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Lung cancer (LC) has the highest mortality rate of all cancers globally. Medical experts diagnose the disease and its stage through prolonged procedures, yet early diagnosis is the only promising way to improve the survival rate. Extensive prior work has applied artificial intelligence systems to LC detection, but detection accuracy still has to be improved to match expert diagnosis, and LC type and TNM stage prediction have largely been neglected, even though treatment planning depends strictly on the cancer cell type and the survival rate is closely related to the stage. Hence, in this work, a new Fully Automated Lung Cancer Classification System (FALCCS) using the GoogLeNet classifier is proposed to detect non-small cell LC along with its types and stages. Initially, our previous segmentation work is adapted to automatically extract tumor regions from CT images. Then, a new post-processing technique is introduced to enhance image features and create the required training databases. Using deep learning techniques, the proposed system employs GoogLeNet to create five new automatic classifiers for LC detection and for type, T state, N state and M state prediction. Finally, the TNM state classifiers' outputs are combined to find the LC stage by referring to the eighth edition of the TNM staging system. The proposed system thus takes a novel step toward TNM stage classification on par with expert diagnosis. Experimental results show that the proposed system achieves a cancer detection accuracy of 99.2%, while the type and final TNM stage categorizers reach 96.5% and 90.5% accuracy, respectively. These results illustrate the proposed classifier's efficacy relative to existing methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-13T07:00:00Z DOI: 10.1142/S0218126623502468
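The final step the abstract describes — combining the T, N and M classifier outputs into an overall stage — amounts to a lookup in the TNM grouping table. A sketch of that combination step; the table below is a small hypothetical excerpt, not the full eighth-edition grouping:

```python
# Combine T/N/M classifier outputs into an overall stage via table lookup.
# STAGE_TABLE is a hypothetical excerpt, not the complete eighth-edition
# TNM stage grouping.
STAGE_TABLE = {
    ("T1", "N0", "M0"): "IA",
    ("T2", "N0", "M0"): "IB",
    ("T2", "N1", "M0"): "IIB",
    ("T4", "N2", "M0"): "IIIB",
}

def tnm_stage(t, n, m):
    if m != "M0":
        return "IV"                       # any distant metastasis is stage IV
    return STAGE_TABLE.get((t, n, m), "unknown")

stage = tnm_stage("T2", "N1", "M0")
```

Keeping the grouping rules in a data table rather than in code makes updating to a new staging edition a data change only.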
- A Deep Learning and Morphological Method for Concrete Cracks Detection
-
Authors: Qilin Jin, Qingbang Han, Nana Su, Yang Wu, Yufeng Han Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Concrete crack detection is essential for infrastructure safety, and detection efficiency and accuracy are the key issues. An improved YOLOv5 and three measurement algorithms are proposed in this paper; the original prediction heads are replaced by Transformer Heads (TH) to exploit the prediction potential of a self-attention model. Experiments show that the improved YOLOv5 effectively enhances the detection and classification of concrete cracks, with the Mean Average Precision (MAP) over all classes increasing to 99.5%. The first measurement method is more accurate for small cracks, whilst the average width obtained from the axial traverse correction method is more exact for large cracks. The crack width obtained from the concrete picture samples matches that obtained by manual detection, with a deviation rate of 0–5.5%. This research demonstrates the recognition and classification of concrete cracks by integrating deep learning and machine vision with high precision and efficiency, and it is helpful for the real-time measurement and analysis of concrete cracks posing potential safety hazards in bridges, high-rise buildings, etc. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-13T07:00:00Z DOI: 10.1142/S0218126623502717
- Recent Progress on Calibration Methods of Timing Skew in Time-Interleaved
ADCs-
Authors: Huijing Yang, Ruidong Zhang, Mingyuan Ren Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Time interleaving has become a very common way to increase ADC speed. However, it introduces offset, gain and timing-skew mismatches between the individual sub-ADCs, which can seriously degrade the performance of the overall ADC. Eliminating gain and offset errors is relatively straightforward, whereas calibration of the timing skew is still at an exploratory stage. This paper systematically reviews several current mainstream timing-skew calibration methods for time-interleaved ADCs and summarizes the characteristics and development trends of these methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-11T07:00:00Z DOI: 10.1142/S0218126623300040
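The error that timing-skew calibration must remove is easy to show numerically: if one sub-ADC of a 2-way interleaved converter samples late by `skew`, its samples deviate from the ideal sine by roughly amplitude × 2π·fin × skew. A small sketch (the input frequency, sample rate and skew values are illustrative only):

```python
# Effect of timing skew in a 2-way time-interleaved ADC: the odd sub-ADC
# samples late by `skew`. Frequencies and skew values are illustrative.
import math

def max_skew_error(fin, fs, skew, n=256):
    err = 0.0
    for k in range(n):
        t_ideal = k / fs
        t_real = t_ideal + (skew if k % 2 else 0.0)   # odd sub-ADC is late
        err = max(err, abs(math.sin(2 * math.pi * fin * t_real)
                           - math.sin(2 * math.pi * fin * t_ideal)))
    return err

small = max_skew_error(fin=1e6, fs=100e6, skew=1e-12)  # 1 ps skew
large = max_skew_error(fin=1e6, fs=100e6, skew=1e-9)   # 1 ns skew
```

The error scales linearly with both skew and input frequency (≈ 2π·fin·skew for a unit-amplitude sine), which is why skew that is tolerable at low input frequencies becomes the dominant spur at high ones.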
- A Self-Improved Optimizer-Based CNN for Wind Turbine Fault Detection
-
Authors: T. Ahilan, Andriya Narasimhulu, D. V. S. S. S. V. Prasad Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In comparison to other alternative energy sources, wind power is more affordable and environmentally friendly, making it one of the most significant energy sources in the world. Since difficulties with the operation and maintenance of wind farms contribute considerably to the rise in their overall expenses, it is vital to monitor the condition of each wind turbine in the farm and recognize the various states of alert. Continuous observation of wind turbine conditions based on Supervisory Control and Data Acquisition (SCADA) data is the most widely used existing strategy for detecting faults early, before the wind turbine reaches a shutdown stage. Many parameters irrelevant to the faults are saved in the SCADA system while the wind turbine is operating. To increase the efficacy of wind turbine fault diagnostics, optimally selected SCADA data parameters are required for fault prediction. Hence, this paper introduces an optimized Convolutional Neural Network (CNN)-based wind turbine fault identification method. For more precise detection, a Self-Improved Slime Mould Algorithm (SI-SMA), an enhanced form of the standard Slime Mould Algorithm (SMA), is used for the optimal selection of SCADA parameters and for the weight optimization of the CNN. Eventually, an error analysis and a stability analysis are carried out to check the overall effectiveness of the suggested approach. In particular, the root mean square error (RMSE) of the implemented algorithm is 0.69%, 1.58%, 0.81% and 1.71% lower than that of the existing FF, GWO, WOA and SMA models, respectively. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-11T07:00:00Z DOI: 10.1142/S021812662350247X
- Low-Phase Noise, Low-Power Four-Stage Ring VCO for OFDM Systems
-
Authors: Parul Trivedi, Brij Bihari Tiwari Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This work describes a new design approach for a four-stage ring Voltage-Controlled Oscillator (VCO) for Orthogonal Frequency Division Multiplexing (OFDM) systems, which is ideal for low-phase-noise and low-power applications. The phase noise of the proposed ring VCO is improved by limiting the delay cell's output current to a relatively narrow portion of the output waveform, and the complementary nature of the delay cell prevents the power from increasing substantially. The proposed VCO is designed and simulated in GPDK 90 nm CMOS technology using Cadence Virtuoso under a 1.0 V power supply. A tuning range of 112–362 MHz is obtained for control voltages of 0.0–1.0 V. The proposed ring VCO consumes 1.07 mW of power. At a 1 MHz offset frequency, a phase noise of [math] dBc/Hz is achieved. The proposed design is also validated by Process–Voltage–Temperature (PVT) analysis. The proposed VCO has a Figure of Merit (FOM) of [math] dBc/Hz and occupies a total area of 0.00085 mm². Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-11T07:00:00Z DOI: 10.1142/S0218126623502572
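As context for the reported numbers, the standard VCO figure of merit can be computed as follows. The 1.07 mW power and 1 MHz offset are from the abstract; the phase-noise value is elided there, so the -90 dBc/Hz below is purely a placeholder assumption, not the paper's result.

```python
import math

def vco_fom(pn_dbc_hz, f0_hz, df_hz, p_mw):
    """Standard VCO figure of merit (more negative is better):
    FOM = PN(df) - 20*log10(f0/df) + 10*log10(P / 1 mW)."""
    return pn_dbc_hz - 20 * math.log10(f0_hz / df_hz) + 10 * math.log10(p_mw)

# -90 dBc/Hz is a placeholder phase-noise value; 362 MHz is the top of the
# reported tuning range, 1.07 mW the reported power
fom = vco_fom(-90.0, 362e6, 1e6, 1.07)
print(f"FOM = {fom:.1f} dBc/Hz")
```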
- Area, Delay, and Energy-Efficient Full Dadda Multiplier
-
Authors: Muteen Munawar, Zain Shabbir, Muhammad Akram Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The Dadda algorithm is a parallel-structured multiplier, which is quite fast compared to array multipliers, e.g., Booth, Braun and Baugh-Wooley. However, it consumes more power and needs a larger number of gates for hardware implementation. In this paper, a modified Dadda-algorithm-based multiplier is designed using a proposed half-adder-based carry-select adder with a binary to excess-1 converter and an improved ripple-carry adder (RCA). The proposed design is simulated in different technologies, i.e., Taiwan Semiconductor Manufacturing Company (TSMC) 50 nm, 90 nm, and 120 nm, and at different frequencies, i.e., 0.5, 1, 2, and 3.33 GHz. Specifically, the 4-bit circuit of the proposed design in TSMC's 50 nm technology consumes 25 μW of power at 3.33 GHz with 76 ps of delay. The simulation results reveal that the design is faster, more power- and energy-efficient, and requires a smaller number of transistors for implementation compared to some closely related works. The proposed design can be a promising candidate for low-power and low-cost digital controllers. In the end, the design is compared with recent relevant works in the literature. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-11T07:00:00Z DOI: 10.1142/S0218126623502584
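The Dadda reduction scheme itself is easy to sketch in software. The following toy model is a generic bit-level Dadda reduction, not the paper's modified carry-select design: it reduces the partial-product matrix with full adders (3,2) and half adders (2,2) following the height sequence 2, 3, 4, 6, 9, …

```python
def dadda_multiply(a, b, n):
    """Multiply two n-bit numbers at the bit level with a Dadda scheme:
    build the partial-product matrix, then reduce each column until it
    holds at most `target` bits, stage by stage."""
    cols = [[] for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            cols[i + j].append(((a >> i) & 1) & ((b >> j) & 1))

    heights = [2]                       # allowed column heights per stage
    while heights[-1] * 3 // 2 < n:
        heights.append(heights[-1] * 3 // 2)

    for target in reversed(heights):    # e.g. n=4: targets 3, then 2
        for k in range(2 * n - 1):
            while len(cols[k]) > target:
                if len(cols[k]) == target + 1:      # half adder (2,2)
                    x, y = cols[k].pop(), cols[k].pop()
                    cols[k].append(x ^ y)
                    cols[k + 1].append(x & y)
                else:                               # full adder (3,2)
                    x, y, z = cols[k].pop(), cols[k].pop(), cols[k].pop()
                    cols[k].append(x ^ y ^ z)
                    cols[k + 1].append((x & y) | (x & z) | (y & z))

    # the remaining rows are summed by a ripple-carry adder
    result, carry = 0, 0
    for k in range(2 * n):
        s = sum(cols[k]) + carry
        result |= (s & 1) << k
        carry = s >> 1
    return result

print(dadda_multiply(13, 11, 4))   # 13 * 11 = 143
```

Each counter replaces bits x, y(, z) with a sum bit and a carry of double weight, so the weighted column total is invariant; the final adder then just sums the columns.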
- A Novel Business Scheduling Approach for Enterprises via Vision
Sensing-Based Automatic Documental Information Extraction-
Authors: Yang Zhang, Xiu Liu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Currently, the prevalence of various Internet intrusion technologies poses a serious challenge to enterprise management. For many core documents, information leakage may lead to the loss of enterprise secrets. Therefore, some core official documents in enterprises are kept in paper format rather than electronic format. Consequently, it is important to develop automatic information-processing techniques for official documents in paper format, so as to improve the working efficiency of enterprises. In this paper, a novel business scheduling approach for enterprises via vision sensing-based automatic documental information extraction is proposed. In the first stage, a vision sensing-based optical character recognition (OCR) technique is utilized to extract textual information from official paper documents. In the second stage, a deep neural network is utilized to output business scheduling results on the basis of the digitally recognized contents from the first stage. Finally, experimental simulation is carried out to verify the efficiency of the proposal. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-05-11T07:00:00Z DOI: 10.1142/S0218126623502663
- Mode Switching Technique for High Efficiency Buck Converter
-
Authors: Xu Xiao, Junjie Guo, Changyuan Chang, Pengyu Guo, Li Lu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In order to achieve high efficiency over a wider load range, a PWM/PFM/Power-Saved-mode dc–dc buck converter is proposed. The mode-switching technique is based on an accurate power-loss model covering the critical components and parameters: the size of the power transistor and the ON/OFF status of power-hungry subcircuits. The mode-switching circuit is realized with the reference, a pulse-skipped modulation (PSM) comparator and feedback resistors, and its own quiescent-current consumption is extremely low. At ultra-light load, by detecting the ZCD duration and the output ripple, the system can switch to the Power-Saved mode. The proposed buck converter was implemented in a 0.18 μm CMOS process. It achieves a peak efficiency of 97% and over 90% efficiency from 100 μA to 600 mA, with a quiescent current of 360 nA. With a low output ripple of less than 60 mV, the input voltage range and regulated output voltage are 3.6–5 V and 1.2–3.3 V, respectively. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-29T07:00:00Z DOI: 10.1142/S0218126623501591
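The trade-off that motivates mode switching can be seen from a first-order loss model. All component values below (Ron, gate charge, quiescent current, frequencies) are illustrative assumptions, not the paper's: at light load, switching loss scaling with frequency dominates, so dropping the effective frequency (PFM/power-saved operation) recovers efficiency.

```python
def buck_losses(i_load, fsw, vin=5.0, ron=0.1, qg=1e-9, iq=360e-9):
    """Very simplified buck power-loss model of the kind mode-switching
    logic relies on: conduction loss ~ I^2 * Ron, switching loss
    ~ fsw * Qg * Vin, plus the controller's quiescent draw."""
    p_cond = i_load ** 2 * ron
    p_sw = fsw * qg * vin
    p_q = iq * vin
    return p_cond + p_sw + p_q

def efficiency(i_load, fsw, vout=1.2):
    p_out = vout * i_load
    return p_out / (p_out + buck_losses(i_load, fsw))

# fixed-frequency PWM vs. a much lower effective frequency at 1 mA load
print(f"PWM @ 1 mA: {efficiency(1e-3, 1e6):.1%}")
print(f"PFM @ 1 mA: {efficiency(1e-3, 2e4):.1%}")
```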
- A Graph Neural Network-Based Digital Assessment Method for Vocational
Education Level of Specific Regions-
Authors: Weitai Luo, Haining Huang, Wei Yan, Daiyuan Wang, Man Yang, Zemin Zhang, Xiaoying Zhang, Meiyong Pan, Liyun Kong, Gengrong Zhang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. With the prevalence of artificial intelligence technologies, big data has been utilized to a greater extent in many cross-domain fields. This paper concentrates on the digital assessment of the vocational education level in specific areas, and proposes a graph neural network-based assessment model for this purpose. All vocational colleges inside a specific region are assumed to form a social graph, in which each college is a node and the relations among colleges are the edges. A graph neural network (GNN) model is formulated to capture global structured features of all the nodes together. The GNN is then employed in a sequential modeling pattern, so that the evolving characteristics of all the colleges can be captured. Experiments are conducted to evaluate the performance of the proposed GNN-VEL: it is compared with two typical forecasting methods under two evaluation metrics. The results show that it performs better than the other two methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-28T07:00:00Z DOI: 10.1142/S0218126623502626
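The node-update step of such a GNN can be sketched with a single mean-aggregation message-passing layer. This is a generic formulation with random toy data; the paper's exact architecture and features are not specified in the abstract.

```python
import numpy as np

def gnn_layer(h, adj, w):
    """One mean-aggregation message-passing layer: every node (college)
    averages its neighbours' features, combines them with its own, and
    applies a learned linear map with a ReLU."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    agg = (adj @ h) / deg                    # mean of neighbour features
    return np.maximum(0.0, (h + agg) @ w)    # combine + nonlinearity

rng = np.random.default_rng(0)
n, d = 5, 8                                  # 5 colleges, 8-dim features
adj = np.array([[0, 1, 1, 0, 0],             # symmetric relation graph
                [1, 0, 0, 1, 0],
                [1, 0, 0, 1, 1],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 0, 0]], dtype=float)
h = rng.normal(size=(n, d))
w = rng.normal(size=(d, d)) / np.sqrt(d)

h1 = gnn_layer(h, adj, w)
h2 = gnn_layer(h1, adj, w)    # stacking layers widens the receptive field
print(h2.shape)
```

Feeding the per-step node embeddings into a sequence model is one plausible reading of the "sequential modeling pattern" the abstract mentions.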
- Semantic Segmentation Algorithm of Night Images Based on Attention
Mechanism-
Authors: Xiaona Xie, Zhiyong Xu, Tao Jiang, JianYing Yuan, Zhengwei Chang, Linghao Zhang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. At present, there are many semantic segmentation algorithms with excellent performance for intelligent driving vehicles, but most of them only work well in scenes with good illumination. In order to solve the problem of scene segmentation under low illumination, this paper proposes a novel semantic segmentation algorithm that combines visible and infrared images. In this algorithm, two parallel encoders are designed as the inputs for the two image modalities, and the decoder segments the fused features output by the encoders. The model is based on ResNet, and a residual attention module is used in each branch to mine and enhance the spatial features of multilevel channels for extracting image information. Experiments are carried out on publicly available thermal infrared and visible datasets. The results show that the proposed algorithm is superior to algorithms using only visible images for semantic segmentation of traffic environments. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-28T07:00:00Z DOI: 10.1142/S0218126623502638
- Track Signal Intrusion Detection Method Based on Deep Learning in
Cloud-Edge Collaborative Computing Environment-
Authors: Yaojun Zhong, Shuhai Zhong Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Aiming at the low accuracy of track signal intrusion detection (IDe) algorithms in the traditional cloud-edge collaborative computing environment, this paper proposes a deep learning (D-L)-based track signal IDe method for the cloud-edge collaborative computing environment. First, the main framework of the IDe method is constructed by comprehensively considering the backbone network, network transmission and ground equipment, and edge computing (EC) is introduced into the cloud services. Then, a D-L method is proposed in which a CNN (Convolutional Neural Network)-attention-based BiLSTM (Bi-directional Long Short-Term Memory) neural network in the cloud center layer of the system is trained on the historical data. Finally, a pooling layer and a dropout layer are introduced into the model: the pooling layer accelerates model convergence, removes redundancy and reduces the feature dimension, while the dropout layer effectively prevents overfitting, enabling accurate detection of track signal intrusion. Through simulation experiments, the proposed IDe method and three other methods are compared and analyzed under the same conditions. The results show that the F1 value of the proposed method is optimal for four different types of sample data, ranging from 0.948 to 0.963, and its performance is better than that of the three comparison algorithms. The method proposed in this paper is important for solving track signal IDe in the cloud-edge collaborative environment, and also provides a theoretical basis for the track signal IDe direction. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-28T07:00:00Z DOI: 10.1142/S0218126623502675
- A 0.8-Volt 29.52-μW Current Mirror-Based OTA Design for Biomedical
Applications-
Authors: Pritty, Mansi Jhamb Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. An operational transconductance amplifier (OTA) is a fundamental component of electronic appliances. This paper introduces a novel OTA design for low-power, low-voltage applications. The proposed OTA comprises an ultra-low-power current mirror design with enhanced bandwidth. The proposed OTA circuit operates at 0.8 V, contributing an input noise of 26.33 nV/√Hz with a power consumption of 29.52 μW. Further parameters of the new OTA are the DC gain (87.32 dB), common-mode rejection ratio (145.47 dB), gain bandwidth (4.73 MHz) and phase margin (36.56°). These figures are significantly improved compared to conventional OTAs. An analog-to-digital converter (ADC) is also designed as an application of the proposed OTA, and the improvements it offers in terms of power and bandwidth are compared with the state of the art. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-25T07:00:00Z DOI: 10.1142/S0218126623502341
- An Ensemble Learning Method Based on One-Class and Binary Classification
for Credit Scoring-
Authors: Zaimei Zhang, Yujie Yuan, Yan Liu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. It is crucial to correctly assess whether a potential borrower can repay the loan in a credit scoring model. Credit loan data has a serious data imbalance because the number of defaulters is far smaller than that of nondefaulters. However, most current methods for dealing with data imbalance are designed to improve the classification performance on the minority data, which reduces the performance on the majority data. For a financial institution, the economic loss caused by a decrease in the classification performance on nondefaulters (the majority data) cannot be ignored. This paper proposes an ensemble learning method based on one-class and binary classification (EMOBC) for credit scoring. The purpose is to improve the classification accuracy of the minority class while mitigating the loss of classification accuracy of the majority class as much as possible. EMOBC undersamples the majority class (nondefault samples in credit scoring) and performs binary-class learning on the balanced data to improve the classification accuracy of the minority. To alleviate the decline in classification performance of the majority class, EMOBC trains one-class and binary classifiers collaboratively, and the classification result is determined by the average of the one-class and binary-class classifiers. The experimental results show that EMOBC has good comprehensive performance compared with the existing methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-25T07:00:00Z DOI: 10.1142/S0218126623502560
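The one-class/binary averaging idea can be illustrated with a deliberately simple numpy sketch. The centroid-based scorers below are stand-ins for real classifiers; EMOBC's actual base learners and data are not specified in the abstract, so everything here is an illustrative assumption.

```python
import numpy as np

def fit_predict_emobc(x_maj, x_min, x_test):
    """Toy sketch of the one-class + binary ensemble idea:
    (1) a one-class model learns the majority (non-default) region only;
    (2) a binary model is trained on undersampled, balanced data;
    (3) the final score is the average of the two scores (high = minority)."""
    rng = np.random.default_rng(0)
    # one-class branch: normalized distance to the majority centroid
    mu = x_maj.mean(axis=0)
    scale = np.linalg.norm(x_maj - mu, axis=1).mean()
    oc_score = np.linalg.norm(x_test - mu, axis=1) / scale

    # binary branch: undersample the majority to the minority size
    idx = rng.choice(len(x_maj), size=len(x_min), replace=False)
    mu_maj, mu_min = x_maj[idx].mean(axis=0), x_min.mean(axis=0)
    d_maj = np.linalg.norm(x_test - mu_maj, axis=1)
    d_min = np.linalg.norm(x_test - mu_min, axis=1)
    bin_score = d_maj / (d_maj + d_min)

    # ensemble: average of the two branch scores
    return (np.clip(oc_score, 0, 1) + bin_score) / 2

rng = np.random.default_rng(1)
x_maj = rng.normal(0.0, 1.0, size=(500, 2))   # non-defaulters (common)
x_min = rng.normal(4.0, 1.0, size=(25, 2))    # defaulters (rare)
scores = fit_predict_emobc(x_maj, x_min, np.vstack([x_maj[:5], x_min[:5]]))
print(scores.round(2))
```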
- Design of Fruit-Carrying Monitoring System for Monorail Transporter in
Mountain Orchard-
Authors: Zhen Li, Yuehuai Zhou, Shilei Lyu, Ying Huang, Yuanfei Yi, Chonghai Zhao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Real-time monitoring and detection of fruit carrying for the monorail transporter in mountain orchards are significant for transporter scheduling and safety. In this paper, we present a fruit-carrying monitoring system, including a pan-tilt camera platform, an AI edge computing platform, an improved detection algorithm and a web client. The system uses a pan-tilt camera to capture images of the truck body of the monorail transporter, realizing monitoring of fruit carrying. Besides, we present an improved fruit-carrying detection algorithm based on YOLOv5s, taking "basket", "orange" and "fullbasket" as the detection objects. We introduce E-CBAM (Efficient-Convolutional Block Attention Module), an improved attention mechanism based on CBAM, into the C3 module in the neck network of YOLOv5s, and introduce focal loss to improve the classification and confidence losses for higher detection accuracy. To better deploy the model on the embedded platform, we compress the model through the EagleEye pruning algorithm to reduce the parameters and improve the detection speed. In experiments on custom fruit-carrying datasets, the mAP was 91.5%, which is 9.6%, 9.9% and 12.0% higher than that of Faster-RCNN, RetinaNet-Res50 and YOLOv3-tiny, respectively, and the detection speed on a Jetson Nano was 72 ms/img. The monitoring system and detection algorithm proposed in this paper can provide technical support for the safe transportation of monorail transporters and for scheduling transportation equipment more efficiently. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-25T07:00:00Z DOI: 10.1142/S021812662350264X
- Remote Sensing Image Object Detection Based on Improved YOLOv3 in Deep
Learning Environment-
Authors: Tianle Yang, Jinghui Li Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. An improved YOLOv3 algorithm, a deep learning-based method, is proposed to tackle challenges of remote sensing images such as small object scale, uneven distribution, large scale variation and complicated backgrounds. The method uses DenseNet as the backbone, replacing Darknet-53, to realize feature reuse and make feature extraction more effective; introduces a spatial pyramid pooling module into the feature pyramid part to increase the receptive field and isolate the most prominent contextual features; and adds an SE attention module to the feature extraction process, obtaining richer features by learning more location and channel information from the images. On the DOTA dataset, the mean Average Precision reaches 86.78%, which is 4.16% higher than the baseline YOLOv3 network. The proposed model makes it easier to extract information from the feature map and achieves higher detection accuracy without affecting the real-time performance of detection. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-25T07:00:00Z DOI: 10.1142/S0218126623502651
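The SE attention module mentioned above is small enough to sketch directly. This is a generic squeeze-and-excitation block in numpy with randomly initialized weights for illustration, not the paper's trained module.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel ("squeeze"),
    pass through a two-layer bottleneck ending in a sigmoid ("excitation"),
    then rescale the channels of the input feature map."""
    z = x.mean(axis=(1, 2))                 # squeeze: (C,)
    s = np.maximum(0, z @ w1)               # reduction FC + ReLU
    s = 1 / (1 + np.exp(-(s @ w2)))         # expansion FC + sigmoid: (C,)
    return x * s[:, None, None]             # channel-wise re-weighting

rng = np.random.default_rng(0)
c, r = 16, 4                                # channels, reduction ratio
x = rng.normal(size=(c, 8, 8))              # (C, H, W) feature map
w1 = rng.normal(size=(c, c // r)) / np.sqrt(c)
w2 = rng.normal(size=(c // r, c)) / np.sqrt(c // r)
y = se_block(x, w1, w2)
print(y.shape)
```

Because the sigmoid output lies in (0, 1), the block can only attenuate channels, which is what lets the network emphasize informative channels relative to the rest.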
- Research Progress on Interface Circuit of Capacitive Micro Accelerometer
-
Authors: Huijing Yang, Runze Lv, Mingyuan Ren Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Micro-Electro-Mechanical System (MEMS) capacitive accelerometers have received extensive attention in recent years due to their excellent performance indicators; in particular, great progress has been made in noise, power consumption and bias instability. In noise, effective noise reduction is achieved by introducing oversampling modulation technology combined with digital noise-reduction technology. In power consumption, the power consumption of the accelerometer is effectively reduced by using a successive-approximation structure in the interface circuit and a finite state machine for precise control. In bias instability, the effects of temperature offset and zero-point drift are suppressed by using a hybrid topology connection structure in the interface circuit, achieving an effective reduction of bias instability. This paper reviews the research and progress of MEMS capacitive accelerometers in the fields of noise, power consumption and bias instability, summarizing articles published in recent years and discussing future prospects. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623300064
- A Deep Neural Network-Based Intelligent Detection Model for Manufacturing
Defects of Automobile Parts-
Authors: Wenbo Xu, Gang Liu, Mengmeng Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Image defect detection of casting parts is a key part of the production process in the machinery manufacturing industry. Traditional computer image-processing methods are ineffective because they require a large number of manually designed features, and the detection time is too long. In order to save human resources and improve the efficiency of image defect detection, this paper proposes a deep learning-based defect detection method for automobile parts. EfficientNetB0 is selected as the backbone of the target detection network, which significantly reduces the memory usage of the model and shortens the inference time while improving detection accuracy. Facing the problem of small defect-image datasets, we analyze the image characteristics of the dataset and introduce shape transformation and scale scaling as basic online data enhancement methods according to the industrial-field image projection law. We then combine traditional image-processing algorithms, matched to the characteristics of casting parts with different depth distributions and multiple morphological changes, to develop a dedicated image-defect data enhancement method. This further improves the performance of the model and increases the detection accuracy of the algorithm by 22.3% without increasing the data. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502365
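The basic online shape/scale augmentations can be sketched as follows. This is a minimal numpy version with nearest-neighbour rescaling; the paper's exact transforms and parameter ranges are assumptions here.

```python
import numpy as np

def augment(img, rng):
    """Minimal online augmentation sketch: random horizontal flip (shape
    transformation) and random rescale with crop/zero-pad back to the
    original size (scale scaling)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                      # horizontal flip
    s = rng.uniform(0.8, 1.2)                   # random scale factor
    h, w = img.shape
    hs, ws = max(1, int(h * s)), max(1, int(w * s))
    rows = np.arange(hs) * h // hs              # nearest-neighbour indices
    cols = np.arange(ws) * w // ws
    scaled = img[rows][:, cols]
    out = np.zeros((h, w), dtype=img.dtype)     # crop or pad back to (h, w)
    out[:min(h, hs), :min(w, ws)] = scaled[:h, :w]
    return out

rng = np.random.default_rng(0)
img = np.arange(64, dtype=float).reshape(8, 8)  # toy grayscale image
aug = augment(img, rng)
print(aug.shape)
```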
- Theoretical Investigation of Dual-Material Stacked Gate Oxide-Source
Dielectric Pocket TFET Based on Interface Trap Charges and Temperature Variations-
Authors: Kaushal Kumar Nigam, Dharmender, Vinay Anand Tikkiwal, Mukesh Kumar Bind Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, the performance of the dual-material stacked gate oxide-source dielectric pocket tunnel field-effect transistor (DMSGO-SDP-TFET) is investigated by considering fixed interface trap charges (ITCs) at the Si–SiO2 interface. During the analysis, both types of trap charges, positive (donor) and negative (acceptor), are considered to investigate their effect on the DC, analog/radio-frequency, linearity and harmonic-distortion performance parameters in terms of the carrier concentration, electric field, band-to-band tunneling rate, transfer characteristics, transconductance ([math]), unity gain frequency ([math]), gain–bandwidth product, device efficiency ([math]/[math]), transconductance frequency product, transit time ([math]), second- and third-order transconductance and voltage intercept points ([math], [math], VIP2 and VIP3), third-order input intercept point and intermodulation distortion (IIP3, IMD3), and second-order, third-order and total harmonic distortions (HD2, HD3 and THD). Further, the impact of temperature variations from [math] K to [math] K in the presence of ITCs is investigated, and the results are compared with the conventional DMSGO-TFET. In terms of percentage variation, the DMSGO-SDP-TFET exhibits lower variation than the conventional DMSGO-TFET, indicating that the proposed device is more immune to trap charges and can be used for energy-efficient, high-frequency and linearity applications at elevated temperatures. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502523
- Optimizing FPGA-Based Convolutional Neural Network Performance
-
Authors: Chi-Chou Kao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In deep learning, convolutional neural networks (CNNs) are a class of artificial neural networks (ANNs) most commonly applied to analyze visual imagery. They are also known as Shift-Invariant or Space-Invariant Artificial Neural Networks (SIANNs), based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Recently, various FPGA-based CNN architectures have been proposed because the FPGA platform has the advantages of high performance and a fast development cycle. However, some key issues still need to be addressed, including how to optimize the performance of CNN layers with different structures, how to design high-performance heterogeneous accelerators, and how to reduce the neural-network framework integration overhead. To overcome these problems, we propose dynamic cycle-pipeline tiling, data layout optimization, and a pipelined software–hardware (SW–HW) integrated architecture with flexibility and integration. Benchmarks have been tested and implemented on an FPGA board for the proposed architecture. The proposed dynamic tiling and data layout transformation improve performance by 2.3 times. Moreover, with two-level pipelining, we achieve up to five times speedup, and the proposed system is 3.8 times more energy-efficient than a GPU. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502547
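The benefit of tiling comes from loading each input region once and reusing it for every output inside the tile. A 1-D software analogue of output tiling (illustrative only, and far simpler than the paper's FPGA datapath):

```python
def conv_tiled(inp, ker, tile):
    """1-D convolution with output tiling: each tile's inputs are loaded
    once into a local buffer (on-chip BRAM in a real accelerator) and
    reused for every output inside the tile."""
    n, k = len(inp), len(ker)
    out = [0.0] * (n - k + 1)
    for t0 in range(0, len(out), tile):
        t1 = min(t0 + tile, len(out))
        buf = inp[t0:t1 + k - 1]          # one burst load per tile
        for i in range(t0, t1):
            out[i] = sum(buf[i - t0 + j] * ker[j] for j in range(k))
    return out

inp = [float(i) for i in range(10)]
ker = [1.0, 0.0, -1.0]
print(conv_tiled(inp, ker, tile=4))   # each output is inp[i] - inp[i+2]
```

The tile size trades local-buffer capacity against the number of burst loads, which is the knob a dynamic tiling scheme adjusts per layer.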
- High-Performance Multi-RNS-Assisted Concurrent RSA Cryptosystem
Architectures-
Authors: S. Elango, P. Sampath, S. Raja Sekar, Sajan P Philip, A. Danielraj Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In public-key cryptography, the RSA algorithm is an inevitable part of hardware security because of its ease of implementation and security. The RSA cryptographic algorithm uses many modular arithmetic operations that decide the overall performance of the architecture. This paper proposes VLSI architectures to implement an RSA public-key cryptosystem driven by the Residue Number System (RNS). Modular exponentiation in the RSA algorithm is executed by dividing the entire process into modular squaring and multiplication operations. Based on how the RNS is employed in the modular exponentiation, two RSA architectures are proposed. Verilog HDL code is used to model the entire RSA architecture, which is ported to a Zynq FPGA (XC7Z020CLG484-1) for Proof of Concept (PoC). The Cadence Genus synthesizer characterizes the system's performance for TSMC's standard-cell library. The Partial-RNS (Proposed-I) and Fully-RNS (Proposed-II) based RSA architectures increase the operation speed by 13% and 35%, respectively, compared with the existing RSA. Even though parameters like area, power and PDP increase for smaller key sizes, the improvement in area utilization and encryption/decryption speed of RSA for larger key sizes is evident from the analysis. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502559
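The RNS idea underlying both architectures is that a wide multiplication splits into small independent channels, with the Chinese Remainder Theorem (CRT) recovering the result. A plain-Python sketch with small illustrative moduli (real RSA datapaths use much larger, hardware-friendly moduli sets):

```python
from math import prod

def to_rns(x, moduli):
    """Forward conversion: represent x by its residue in every channel."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """Reverse conversion via the Chinese Remainder Theorem."""
    big_m = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        mi = big_m // m
        x += r * mi * pow(mi, -1, m)   # pow(., -1, m): modular inverse
    return x % big_m

# pairwise-coprime channel moduli; channel multiplications are independent,
# which is what gives the RNS datapath its parallel speedup
moduli = [251, 253, 255, 256]
a, b = 12345, 6789
prod_rns = [(ra * rb) % m
            for ra, rb, m in zip(to_rns(a, moduli), to_rns(b, moduli), moduli)]
print(from_rns(prod_rns, moduli))   # recovers a * b (mod 251*253*255*256)
```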
- A Memristive-Based Design of a Core Digital Circuit for Elliptic Curve
Cryptography-
Authors: Khalid Alammari, Majid Ahmadi, Arash Ahmadi Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The new emerging non-volatile memory (NVM) devices known as memristors could be promising candidates for future digital architectures, owing to their nanoscale size and their ability to integrate with existing CMOS technology. The device has been involved in various applications from memory design to analog and digital circuit design. In this paper, memristor devices and CMOS transistors work together to form a hybrid CMOS-memristor circuit for the XAX module, a core element used as a digital circuit for elliptic curve cryptography. The proposed design was implemented using a Pt/TaOx/Ta memristor device and simulated in Cadence Virtuoso. The simulation results demonstrate the design's functionality. The proposed module appears to be efficient in terms of layout area, delay and power consumption, since the design utilizes hybrid CMOS/memristor gates. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502596
- A Fuzzy Comprehensive Evaluation Method of Regional Economic Development
Quality Based on a Convolutional Neural Network-
Authors: Jiqiang Li Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents an in-depth analysis of the evaluation of regional economic development quality through an improved convolutional neural network algorithm, and uses it to design a fuzzy comprehensive evaluation model for the practical process. Based on the measured indices of different variables, a spatial econometric model is constructed and provincial panel data are selected to empirically analyze the impact and spatial spillover effects of financial agglomeration and technological innovation on regional economic quality development from both static and dynamic aspects, and to examine the spatial correlation of the factors. A new serial data flow model is adopted, which optimizes the control of data flow in convolutional computation, reduces the percentage of clock cycles used to read memory data, and increases computational efficiency. At the same time, with dynamic data caching, a convolutional computation can be completed in one clock cycle, reducing the memory capacity required for caching intermediate data. The effectiveness of the evaluation system constructed in this paper is further tested. Most of the indicators have a significant positive or negative impact on the quality level of economic development, and the direction of the impact is consistent with the positive and negative attributes of the indicators in this study, which verifies the validity of the evaluation indicator system constructed in this paper. Based on the findings, effective suggestions are made in terms of human capital investment, reasonable allocation of fiscal expenditure, enhancing regional green development and improving risk prevention measures. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502687
- An Efficient Model for Mitigating Power Transmission Congestion Using
Novel Rescheduling Approach-
Authors: Swakantik Mishra, Sudhansu Kumar Samal Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In the electricity business, power evacuation from source to load is a challenging task during contingencies, when agencies have to scramble for solutions in terms of rescheduling generator dispatch, load shedding, network expansion, etc. A more challenging scenario arises when the independent operator wants to re-dispatch a generator during the contingent situation. So, in this paper, we focus on a new generator rescheduling technique with congestion price and transmission security as an intervention. The main intention of this paper is to restrict the load by dispatching a particular generator or a set of generators at low cost over a secure transmission line. In addition, network congestion is unpredictable and does not follow any pattern, yet power supply in some zones is disrupted for hidden reasons. Therefore, a macroscopic or holistic approach is adopted for congestion forecasting through demand schedules, gathering, and minimum as well as maximum drawals. Here, two significant factors, namely the transmission utilization charges and the transmission congestion charges, are evaluated for predicting the congestion of a transmission line. Finally, an experimental analysis is carried out to determine the transmission utilization charges, transmission congestion charges, cost function, and generation supply as well as demand balance with congestion optimization. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502377
- Architectural Design Model Guided On-Demand Power Management of
Energy-Efficient GPGPU for SLAM-
Authors: Kaige Yan, Zhujun Ma, Caiwei Li, Xin Fu, Jingweijia Tan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Simultaneous localization and mapping (SLAM) is a core component in many embedded domains, e.g., robots, augmented and virtual reality. Due to SLAM’s high demand on computation resources, general-purpose graphic processing units (GPGPUs) are often used as its processing engine. Meanwhile, embedded systems usually have strict power constraints. Thus, how to deliver the required performance for SLAM while still meeting the power limit is a great challenge faced by GPGPU designers. In this work, we discover the general principles of designing energy-efficient GPGPUs for SLAM as “many SMs, enough SPs and registers, small caches”, by analyzing the implication of individual design parameters on both performance and power. Then, we conduct large-scale design space exploration and fit the Pareto frontier with a two-term exponential model. Further, we construct gradient boosting decision tree (GBDT)-based design models to predict the performance and power given the design parameters. The evaluation shows that our GBDT-based models can achieve [math]3% mean average percentage error, which significantly outperforms other machine learning models. With these models, a kernel’s requirement on hardware resources can be well understood. Based on such knowledge, we introduce design model guided power management strategies, including power gating and dynamic frequency and voltage scaling (DFVS). Overall, by combining these two power management strategies, we can improve the energy delay product by 36%. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502390
- Intelligent Edge Based Efficient Disease Diagnosis using Optimization
Based Deep Maxout Network-
Authors: W Ancy Breen, S Muthu Vijaya Pandian Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The healthcare model is an imperative part of remote health monitoring. Finding a disease requires constant monitoring of patients’ health and the detection of diseases. In order to diagnose disease utilizing an edge computing platform, this study develops a method called grey wolf invasive weed optimization-deep maxout network (GWIWO-DMN). The proposed GWIWO, which is developed by integrating invasive weed optimization (IWO) and grey wolf optimization (GWO), is used here to train the DMN. The distributed edge computing platform consists of four units, namely monitoring devices, the first-layer edge server, the second-layer edge server, and the cloud server. The monitoring devices are used for accumulating patient information. Preprocessing and feature selection are performed in the first-layer edge server. Here, the preprocessing is carried out using the exponential kernel function, and the selection of features is done using the Jaro–Winkler distance. Then, at the second-layer edge server, clustering and classification are carried out using deep fuzzy clustering and the DMN, respectively. Finally, the cloud server processes the decision fusion. The proposed GWIWO-DMN outperformed competing methods, with the highest true positive rate (TPR) of 89.2%, the highest true negative rate (TNR) of 93.7%, and the highest accuracy of 90.9%. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502419
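The Jaro–Winkler distance used for feature selection in the first-layer edge server is a standard string-similarity metric; a minimal pure-Python sketch (not the authors' implementation) looks like this:

```python
def jaro(s1, s2):
    """Jaro similarity: fraction of matched characters within a sliding
    window, penalized by transpositions."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    window = max(0, max(len1, len2) // 2 - 1)
    match1, match2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # transpositions: matched characters out of relative order, halved
    t, k = 0, 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    # Winkler's extension: boost pairs sharing a common prefix (up to 4 chars)
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

For example, `jaro_winkler("MARTHA", "MARHTA")` evaluates to roughly 0.961, the classic textbook value for this metric.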
- LSTM Neural Network-Based Credit Prediction Method for Food Companies
-
Authors: Luqi Miao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. As information technology expands across industries in the age of deep learning, companies face new changes in their credit assessment methods. One of the difficulties in financing food enterprises stems from the complexity of the investment required to review enterprises’ credit. Therefore, this paper proposes a deep learning-based credit prediction and evaluation model for food enterprises, which performs well on the dataset, achieving 85.73% and 88.56% accuracy on the performance and default test samples, respectively. In addition, the model was confirmed to have good robustness through ablation experiments. Finally, the paper concludes with relevant recommendations for food companies based on the study’s findings, offering new methods to improve corporate credit assessment. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-21T07:00:00Z DOI: 10.1142/S0218126623502420
- Innovative Energy Management System for Energy Storage Systems of
Multiple-Type with Cascade Utilization Battery-
Authors: Junhong Liu, Yongmi Zhang, Yanhong Li, Yulei Liu, Xingxing Wang, Lei Zhao, Qiguang Liang, Jun Ye Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The proposed system provides an energy management method for various types of energy storage systems, including cascade utilization batteries. The method is used to receive, store and manage the relevant operating data from the energy storage battery and also randomly determine the energy distribution coefficient of the energy storage battery. According to the adaptive energy distribution method, the power value distributed from the total energy storage power to the cascade utilization battery is calculated, and the energy distribution coefficient of the energy storage battery is adjusted in real time. Finally, the corrected command value of the energy storage battery power is obtained as an output. The system can not only prevent overcharging and over-discharging of the energy storage system, but also maintain the good performance of the energy storage system. To realize the coordinated control and energy management of the battery power plant, we use multiple battery types, including conventional batteries and cascade utilization power batteries. The performance metrics, namely real-time energy management, computational time and operating cost, are employed for the experimental evaluation. The simulation results show the superior performance of the proposed energy management system over other state-of-the-art methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-15T07:00:00Z DOI: 10.1142/S0218126623501980
- A Floating Decremental/Incremental Meminductor Emulator Using Voltage
Differencing Inverted Buffered Amplifier and Current Follower-
Authors: Bhawna Aggarwal, Shireesh Kumar Rai, Akanksha Arora, Amaan Siddiqui, Rupam Das Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents a floating meminductor emulator circuit using a voltage differencing inverted buffered amplifier (VDIBA), current follower (CF), and two grounded capacitors. The parasitic resistance at the input terminal of the current follower has been utilized. The idea of implementing a meminductor emulator is simple and works on the principle of putting memory inside the active inductor circuit. A capacitor (memory element) has been charged by the current flowing through the active inductor circuit. Therefore, the proposed meminductor emulator can be viewed as an active inductor circuit having memory inside it. The proposed floating meminductor emulator works over a significant range of frequencies and satisfies all the characteristics of a meminductor. The meminductor emulator has been realized and simulated in the LTspice simulation tool using TSMC’s 180-nm CMOS technology parameters. A chaotic oscillator circuit has been realized using the proposed meminductor emulator to verify its performance. The results obtained for the chaotic oscillators are found to be satisfactory and thus verify the performance of the proposed meminductor emulator. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-15T07:00:00Z DOI: 10.1142/S0218126623502432
- Single-Inductor, Multiple-Input, Multiple-Output, DC–DC Converter Based
on A New Software Zero-Current Switching Technique-
Authors: Akbar Asgharzadeh-Bonab, Samad Sheikhaei Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. A single-inductor multiple-input multiple-output converter is proposed in this paper that can be used in low-power systems due to its low output current and voltage. This converter is implemented discretely, and only one microcontroller is employed to control the system. The unique zero-current switching (ZCS) technique considered in this paper determines the optimal value of the inductor discharge duty cycle only by reading the inductor’s left-side voltage. This method can be generalized to low-power and high-power converters, whether implemented as discrete or integrated designs. This converter works in discontinuous conduction mode. It uses pulse width modulation control and the time-multiplexing control method, which gives the system high efficiency and makes the cross-regulation problem between the converter’s outputs negligible. The control algorithm considered in this converter is digital, and it determines the optimal charge and discharge duty cycles. Also, the switching frequency of this converter is constant, relatively low, and equal to 5[math]kHz. The efficiency of this converter reaches 91.6% by using the ZCS technique and the other mentioned control methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-15T07:00:00Z DOI: 10.1142/S0218126623502456
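The principle behind choosing the discharge duty cycle in DCM rests on inductor volt-second balance; for an idealized, lossless buck-boost-type stage it reduces to a one-line calculation (a simplifying sketch, not the paper's control algorithm, and all symbol names are illustrative):

```python
def discharge_duty(d_charge, v_in, v_out):
    """Inductor volt-second balance in DCM: V_in * D1 = V_out * D2,
    so the discharge duty that returns the inductor current to zero is
    D2 = D1 * V_in / V_out (idealized, lossless stage)."""
    return d_charge * v_in / v_out

def peak_inductor_current(v_in, d_charge, L, f_sw):
    """Peak current at the end of the charge phase:
    I_peak = V_in * D1 * T_s / L, with T_s = 1 / f_sw."""
    return v_in * d_charge / (L * f_sw)
```

For instance, with a 5 V input, 12 V output, a 0.2 charge duty, a 100 uH inductor and the paper's 5 kHz switching frequency, the ideal discharge duty is about 0.083 and the peak inductor current is 2 A.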
- PCSboost: A Multi-Model Machine Learning Framework for Key Fragments
Selection of Channelrhodopsins Achieving Optogenetics-
Authors: Xihe Qiu, Bo Zhang, Qiong Li, Xiaoyu Tan, Jue Chen Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Optogenetics combines optical and genetic methods to modulate light-controlled gene expression, protein localization, signal transduction and protein interactions to achieve precise control of specific neuronal activity, with the advantages of low tissue damage, high spatial and temporal resolution, and genetic specificity. It provides a cutting-edge approach to establishing a causal relationship between brain activity and behaviors associated with health and disease. Channelrhodopsin (ChR) functions as a photogenic activator for the control of neurons. As a result, ChR and its variants are widely used in the realization of optogenetics. To enable effective optogenetics, we propose a novel multi-model machine learning framework, i.e., PCSboost, to accurately assist the selection of key ChR fragments that realize optogenetics, based on a protein sequence structure and information dataset. We investigate the key regions of ChR variant protein fragments that impact photocurrent properties of interest and automatically screen important fragments that realize optogenetics. To address the issue of the dataset containing a limited quantity of data but a high feature dimension, we employ principal component analysis (PCA) to reduce the dimensionality of the data and perform feature extraction, followed by the XGBoost model to classify the ChRs based on their kinetics, photocurrent and spectral properties. Simultaneously, we employ SHAP interpretability analysis on the ChR variant proteins, using pointwise characteristic similarities to identify key regions of the protein fragment structure that contribute to the regulation of photocurrent intensity, photocurrent wavelength sensitivity and nonkinetic properties.
Experimental findings demonstrate that our proposed PCSboost approach can speed up genetic and protein engineering investigations, simplify the screening of important protein fragment sections, and potentially be used to advance research in the areas of optogenetics, genetic engineering and protein engineering. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-15T07:00:00Z DOI: 10.1142/S0218126623502493
- Optimal Sparse Volterra Modeling for Transient Behavior of Turbofan
Engines Based on Internet of Things-
Authors: Yidan Ma, Jianfu Cao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In recent years, Internet of Things (IoT) technologies have been increasingly utilized to collect enormous volumes of performance data for intelligent analysis and modeling of aero-engines. This has aided the development of numerous data-driven solutions, so that deep knowledge of the intricate operations within the equipment is no longer needed. To characterize the dynamic nonlinear transient behavior of turbofan engines, fast response changes can be captured accurately through high-order, long-memory-length Volterra series. However, the exponentially increasing number of coefficients remains challenging to handle properly. For fast and reliable modeling of turbofan engines, an Optimal Sparse Volterra (OSV) model is developed in this paper by reconstructing sparse nonzero coefficients after a global selection through particle swarm optimization. The OSV model focuses on the optimal sparsity of the Volterra kernels while being insensitive to the signal length. Besides, noise reduction and the correlation analysis method are specifically designed for sensor measurements of low-bypass-ratio turbofan engines. The OSV model, while retaining the powerful descriptive capability of the Volterra series for nonlinear characteristics, finds the most relevant sets of variables and the set of model parameters automatically under the minimum computing workload. According to the experimental results, when real test data are used for turbofan transient maneuvers, the OSV model ensures that the mean absolute error is less than [math] for high-pressure rotor speed, thrust and exhaust temperature. Moreover, the nonzero identification coefficients produced by the OSV model in the experiments are less than 6% of the total coefficients.
At the same time, the average running time required by the OSV model is less than 35% of that of traditional identification algorithms. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-15T07:00:00Z DOI: 10.1142/S0218126623502511
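The sparsity idea — keeping only the nonzero Volterra kernel coefficients, indexed by their lag tuples — can be illustrated with a toy evaluator (an illustrative data layout, not the authors' identification code):

```python
def sparse_volterra_output(x, kernels):
    """Evaluate a sparse truncated Volterra model.

    kernels: dict mapping lag tuples to coefficients, e.g.
      {(0,): 1.0, (1,): 0.5, (0, 1): 0.1}
    A first-order term (k,) contributes h * x[n-k];
    a second-order term (k1, k2) contributes h * x[n-k1] * x[n-k2].
    Only nonzero entries are stored, so cost scales with sparsity,
    not with the full exponential kernel size."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for lags, h in kernels.items():
            if n - max(lags) < 0:
                continue  # not enough history yet
            prod = h
            for k in lags:
                prod *= x[n - k]
            acc += prod
        y.append(acc)
    return y
```

With input `x = [1.0, 2.0]` and the example kernel dictionary above, the output is `[1.0, 2.7]`: at n = 1 the three retained terms contribute 2.0, 0.5 and 0.2.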
- A Cross Entropy-Based Approach to Controller Placement Problem with Link
Failures in SDN-
Authors: Hanmin Yin, Jue Chen Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The Controller Placement Problem (CPP) is a key research topic in Software Defined Networking (SDN), as the communication delay is influenced by the positions of controllers and switches. Moreover, network failures may happen occasionally, which can increase propagation latency and reduce network performance. As a result, it is essential to research the Controller Placement problem for Link Failures (CPLF). In this paper, the authors propose a method based on cross entropy to solve the CPP after link failures, and adopt the Halton sequence to reduce the computation overhead of simulating link failures while guaranteeing accuracy. In the experiments, we measure and compare the worst-case delay among three methods — our proposed cross entropy-based controller placement algorithm, the optimized controller placement algorithm and a greedy-based controller placement algorithm — on six real network topologies. The experimental results verify that our proposed method can reduce the worst-case delay by [math] in comparison with GPA. Moreover, the proposed method can always find optimized controller placement schemes no matter how the network scale or the number of controllers varies, with a less than [math] error when compared with the optimal solution. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-06T07:00:00Z DOI: 10.1142/S0218126623502407
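The Halton sequence the authors use to subsample link-failure scenarios is a standard low-discrepancy construction; a minimal sketch:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`.
    Successive indices fill the unit interval far more evenly than
    pseudo-random draws, which is why fewer samples are needed."""
    result, f = 0.0, 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def halton_sequence(n, bases=(2, 3)):
    # First n Halton points; each coordinate uses a distinct prime base
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]
```

In base 2 the sequence begins 1/2, 1/4, 3/4, 1/8, ..., halving gaps before refining them, so any prefix covers the sampling space nearly uniformly.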
- Graphical User Interface for Design, Analysis, Validation, and Reporting
of Continuous-Time Systems Using Wolfram Language-
Authors: Maja Lutovac-Banduka, Danijela Milosevic, Yigang Cen, Asutosh Kar, Vladimir Mladenovic Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The graphical user interface presented here is intended for fast design, symbolic analysis, accurate simulation, exact verification, and test report preparation. It helps bridge the gap between theory and practice in electrical engineering, because numeric analysis is usually approximate, while symbolic systems are often too slow even for simple engineering problems. The software is written using a computer algebra system that is free to use on small computers. The mathematical representation of the system can be obtained automatically from the schematic description. Further automated symbolic manipulations are possible according to the user’s needs. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-04-06T07:00:00Z DOI: 10.1142/S0218126623502444
- Design of an Approximate Multiplier with Time and Power Efficient
Approximation Methods-
Authors: Ruyi Liu, Wei Duan, Xiaodie Luo, Qian Ren, Yifan Li, Min Song Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Approximate multipliers have gradually become a focus of research due to the emergence of fault-tolerant applications. This paper deals with approximation methods for an approximate multiplier based on truncation, probability transformation and a majority gate-based compressor chain. With the help of probability analysis, the proposed approximation methods are utilized in an approximate [math] unsigned multiplier to achieve low accuracy loss and high time and power efficiency. Compared with precise and existing approximate multipliers, the proposed design brings 55.0% and 39.0% reductions in delay and 73.8% and 22.6% power savings, respectively. The proposed multiplier also achieves better peak signal-to-noise ratio (PSNR) values when evaluated with an image processing application. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-31T07:00:00Z DOI: 10.1142/S0218126623502481
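The effect of the truncation step alone can be illustrated with a small software model (this ignores the probability transformation and the compressor chain, and the bit widths are illustrative, not the paper's design point):

```python
def truncated_multiply(a, b, trunc=4):
    """Approximate unsigned multiply that models dropping the `trunc`
    least significant partial-product columns: the low bits of the
    exact product are simply zeroed out."""
    return ((a * b) >> trunc) << trunc

def mean_relative_error(width=8, trunc=4):
    """Exhaustive mean relative error over all nonzero operand pairs,
    the usual way small approximate multipliers are characterized."""
    total, count = 0.0, 0
    for a in range(1, 2 ** width):
        for b in range(1, 2 ** width):
            exact = a * b
            total += abs(exact - truncated_multiply(a, b, trunc)) / exact
            count += 1
    return total / count
```

For example, `truncated_multiply(13, 11)` yields 128 instead of the exact 143; the error is bounded by `2**trunc - 1`, which is why truncation is cheap for large products but costly for small ones.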
- Iterative Fusion and Dual Enhancement for Accurate and Efficient Object
Detection-
Authors: Zhipeng Duan, Zhiqiang Zhang, Xinzhi Liu, Guoan Cheng, Liangfeng Xu, Shu Zhan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The Single Shot Multibox Detector (SSD) uses multi-scale feature maps to detect and recognize objects, which balances accuracy and speed, but it is still limited in detecting small-sized objects. Many researchers design new detectors that improve accuracy by changing the structure of the multi-scale feature pyramid, which has proved very useful. But most of them simply merge several feature maps without making full use of the close connection between features at different scales. In contrast, a novel feature fusion module and an effective feature enhancement module are proposed here, which can significantly improve the performance of the original SSD. In the feature fusion module, the feature pyramid is produced by iteratively fusing three feature maps with different receptive fields to obtain contextual information. In the feature enhancement module, the features are enhanced along the channel and spatial dimensions at the same time to improve their expression ability. Our network can achieve 82.5% mean Average Precision (mAP) on the VOC 2007 [math], 81.4% mAP on the VOC 2012 [math] and 34.8% mAP on COCO [math]-[math]2017, respectively, with the input size [math]. Comparative experiments prove that our method outperforms many state-of-the-art detectors in terms of both accuracy and speed. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-21T07:00:00Z DOI: 10.1142/S0218126623502328
- An IoT-Enabled Ground Loop Detection System: Design, Implementation and
Testing-
Authors: Md. Saifur Rahman, Md. Palash Uddin, Sikyung Kim Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The ground loop is a serious problem in complex environments, including laboratories and industries. In particular, it creates spurious signals, which interfere with low-level instrumentation signals and can even endanger personnel. Manual ground loop detection is inefficient and requires considerable diagnosis time. As such, automatic ground loop detection is in demand, although it is still a complex task in an environment with massive numbers of instruments. In this paper, we exploit Internet of Things (IoT) technology to present a novel ground loop detection system that copes with such a difficult scenario. Specifically, the proposed scheme comprises an exciter block along with the IoT device to generate up to 100[math]kHz ground loop current, and a detector module that identifies the affected cable by receiving the test current. We also use multiple detectors to give a virtual cable identity (ID) number in a complex area for recognizing the faulty cable accurately. After detecting the ground loop, the affected cable ID number is sent to the server for immediate preventive action through the use of a smartphone (Android) application and website. The test results clarify the superiority of the proposed ground loop detection scheme in terms of accuracy, dependency and robustness. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-21T07:00:00Z DOI: 10.1142/S0218126623502389
- A Deep Learning-Based Surface Defects Detection and Color Classification
Method for Solar Cells-
Authors: Huimin Zhang, Yang Zhao, Shuangcheng Huang, Huifeng Kang, Haimin Han Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In recent years, solar photovoltaic-based power generation technology has become a key planning direction for many countries around the world. In the process of making solar cells, quality inspection requirements are very demanding, covering defects such as physical damage, surface scratches, broken grids and microcracks. In traditional factory production, the detection of the above defects requires professional inspectors to carry out visual inspection, which often leads to low detection efficiency, subjective judgment and fatigue, as well as detection errors. In recent years, the rapid development of computer vision has made it possible to detect defects in solar cells automatically. To overcome existing barriers, this paper proposes a method for detecting surface defects in solar cells based on a deep neural network. Specifically, an image segmentation model based on U-Net is developed for this purpose. By automatically segmenting small objects with the proposed recognition approach, surface defect detection can be realized. Finally, we use a set of experiments on images from real scenes to verify the proposed method. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-18T07:00:00Z DOI: 10.1142/S0218126623501566
- Dynamic Virtual Machine Allocation in Cloud Computing Using Elephant Herd
Optimization Scheme-
Authors: H. S. Madhusudhan, Punit Gupta, Dinesh Kumar Saini, Zhenhai Tan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Cloud computing is a computing technology that is evolving expeditiously. The cloud is a type of distributed computing system that provides scalable computational resources on demand, including storage, processing power and applications, as a service via the Internet. Cloud computing, with the assistance of virtualization, allows for transparent data and service sharing across cloud users, as well as access to thousands of machines in a single event. Virtual machine (VM) allocation is a difficult job in virtualization and an important aspect of VM migration. This process is performed to discover the optimum way to place VMs on physical machines (PMs), since it has clear implications for resource usage, energy efficiency and the performance of several applications, among other things. Hence an efficient solution to the VM placement problem is required. This paper presents a VM allocation technique based on the elephant herd optimization scheme. The proposed method is evaluated using real-time workload traces, and the empirical results show that the proposed method reduces energy consumption and maximizes resource utilization when compared to the existing methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-18T07:00:00Z DOI: 10.1142/S0218126623501888
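The elephant herd optimization operators — clan members following the matriarch, the matriarch moving toward the clan center, and the worst elephant being separated — can be sketched as a generic continuous minimizer (parameter names and defaults are illustrative, not taken from the paper, whose fitness function encodes VM-to-PM placement):

```python
import random

def eho_minimize(fitness, dim, n_clans=3, clan_size=5, iters=100,
                 alpha=0.5, beta=0.1, lo=-5.0, hi=5.0, seed=0):
    """Minimal elephant herd optimization sketch for a continuous fitness."""
    rng = random.Random(seed)
    clans = [[[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(clan_size)]
             for _ in range(n_clans)]
    for _ in range(iters):
        for clan in clans:
            clan.sort(key=fitness)          # matriarch = best elephant, index 0
            matriarch = clan[0]
            center = [sum(e[d] for e in clan) / len(clan) for d in range(dim)]
            # clan-updating operator: each elephant moves toward the matriarch
            for i in range(1, len(clan) - 1):
                clan[i] = [e + alpha * (m - e) * rng.random()
                           for e, m in zip(clan[i], matriarch)]
            # the matriarch itself moves to a scaled clan center
            clan[0] = [beta * c for c in center]
            # separating operator: the worst elephant is re-initialized randomly
            clan[-1] = [rng.uniform(lo, hi) for _ in range(dim)]
    best = min((e for clan in clans for e in clan), key=fitness)
    return best, fitness(best)

# Usage: minimize the 2-D sphere function
best, val = eho_minimize(lambda x: sum(v * v for v in x), dim=2)
```

For VM allocation, the fitness would instead score a candidate VM-to-PM mapping by energy consumption and resource utilization.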
- A Survey on an Analysis of Big Data Open Source Datasets, Techniques and
Tools for the Prediction of Coronavirus Disease-
Authors: R. Ame Rayan, A. Suruliandi, S. P. Raja, H. Benjamin Fredrick David Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Coronavirus disease-19 (COVID-19), an infectious disease that spreads when people live in close proximity, has greatly impacted healthcare systems worldwide. The pandemic has so disrupted human life economically and socially that the scientific community has been impelled to devise solutions that assist in the diagnosis, prevention and outbreak prediction of COVID-19. This has generated an enormous quantum of unstructured data that cannot be processed by traditional methods. To alleviate the COVID-19 threat and to process these unstructured data, big data analytics can be used. The main objective of this paper is to present a multidimensional survey of open source datasets, techniques and tools in big data to fight COVID-19. To this end, state-of-the-art articles have been analyzed, qualitatively and quantitatively, to put together a body of work on the prediction of COVID-19. The findings of this review show that machine learning classification algorithms in big data analytics help design predictive models for COVID-19 using the open source datasets. This survey may serve as a starting point for enhancing research on COVID-19. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-13T07:00:00Z DOI: 10.1142/S0218126623300039
- A Recurrent Attention Multi-Scale CNN–LSTM Network Based on
Hyperspectral Image Classification-
Authors: Xinyue Zhang, Jing Zuo Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Since hyperspectral images contain a variety of ground objects at different scales, capturing long-range dependencies between ground objects is necessary to fully extract the global spatial information of the image. However, most existing methods struggle to capture multi-scale information and global features simultaneously. Therefore, we combine two algorithms, MCNN and LSTM, and propose the MCNN–LSTM algorithm. The MCNN–LSTM model first performs multiple convolution operations on the image, and the result of each pooling layer undergoes feature fusion in a fully connected layer. Then, the results of the fully connected layers at multiple scales are fused with an attention mechanism to alleviate the information redundancy of the network. Next, the outputs of the fully connected layers are fed into the LSTM neural network, which enables the global information of the image to be captured more efficiently. In addition, to make the model meet the expected standard, a loop control module is added to the fully connected layer of the LSTM network to share weight information across multiple training runs. Finally, multiple public datasets are adopted for testing. The experimental results demonstrate that the proposed MCNN–LSTM model effectively extracts multi-scale features and global information of hyperspectral images, thus achieving higher classification accuracy. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-13T07:00:00Z DOI: 10.1142/S0218126623501967
- A 10-Bit 20 Channel LCD Column Driver Using Compact DAC
-
Authors: Neeraj Agarwal, Neeru Agarwal, Chih-Wen Lu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. A 20-channel liquid crystal display (LCD) driver architecture is implemented with [math]m CMOS technology. This work presents a novel design of 10-bit compact and high-resolution two-stage DAC to improve the linearity and uniformity of each channel performance. A complete column driver, including a compact DAC, low power buffer, global R-string and multiplexing circuit design, is implemented, and the layout of this 20-channel, 10-bit LCD driver is generated using [math]m CMOS technology. All the circuit blocks of the proposed LCD column driver were simulated using the EDA tool HSPICE and layout generation by Laker. This work also realizes a high-performance class AB operational amplifier with a gain of 140[math]dB for the proposed LCD driver. The 10-bit compact LCD driver has a 1.4 mV LSB and an output voltage of 1.7 V is achieved for the input range of 0.25–1.7[math]V. The compact DAC voltage selector with decoder in this design uses fewer switches in comparison to conventional tree-type RDAC, occupying a smaller chip area with fast response. The proposed design is sufficiently robust for high-color depth and resolution LCD driver applications. The experimental results exhibit maximum differential nonlinearity (DNL) and integral nonlinearity (INL) of 0.065 LSB and −0.12 LSB, respectively. The one channel area is [math]m and the settling time is [math]s for the [math] and 20[math]pF driving load. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-13T07:00:00Z DOI: 10.1142/S0218126623502225
- A New Design of a [math] Reversible Circuit Based on a Nanoscale
Quantum-Dot Cellular Automata-
Authors: Ling-Li Liu, Nima Jafari Navimipour Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Quantum-dot cellular automata (QCA) is the best-suggested nanotechnology for designing digital electronic circuits. It offers a higher switching frequency, low power consumption, small area, high speed and higher-scale integration. Recently, much research has been devoted to the design of reversible logic gates. Nevertheless, high demand exists for designing high-speed, high-performance and low-area QCA circuits. Reversible circuits have notably improved with developments in complementary metal–oxide–semiconductor (CMOS) and QCA technologies. In QCA systems, it is important to communicate with other circuits and reversible gates reliably. So, we have used efficient approaches for designing a [math] reversible circuit based on XOR gates. The suggested circuits can also be widely used in reversible and high-performance systems. The suggested architecture for the [math] reversible circuit in QCA is composed of 28 cells, occupying only 0.04[math][math]m2. Compared to the state-of-the-art, shorter delay, smaller area, higher operating frequency and better performance are the essential benefits of the suggested reversible gate design. Full simulations have been conducted using QCADesigner software. Additionally, the proposed [math] gate has been schematized using two XOR gates. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-13T07:00:00Z DOI: 10.1142/S0218126623502298
- Performance Investigation of Generalized Rain Pattern Absorption Attention
Network for Single-Image Deraining-
Authors: M. Pravin Kumar, Thiyagarajan Jayaraman, M. Senthilkumar, A. Sumaiya Begum Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Rainy weather conditions pose challenging issues for many computer vision applications. Rain streaks and rain patterns are two crucial environmental factors that degrade the visual appearance of high-definition images. Deep attention network-based single-image deraining algorithms have become popular for handling images with statistical rain patterns. However, existing deraining networks suffer from false detection of rain patterns under heavy rain conditions and ineffective detection of directional rain streaks. In this paper, we address these issues with the following contributions. We propose a multilevel shearlet transform-based image decomposition approach to identify the rain pattern at different scales. The rain streaks in various dimensions are enhanced using a residual recurrent rain feature enhancement module. We adopt the Rain Pattern Absorption Attention Network (RaPaat-Net) to capture and eliminate the rain pattern through the four-dilation-factor network. Experiments on synthetic and real-time images demonstrate that the proposed single-image attention network performs better than existing deraining approaches. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-13T07:00:00Z DOI: 10.1142/S0218126623502316
- A Context-Aware Image Generation Method for Assisted Design of Movie
Posters Using Generative Adversarial Network-
Authors: Yuan Lu, Ruoxu Hou, Jingya Zheng Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Considering the continuous development of the film industry and the improvement in people's living standards, movies have become part of everyday life. A good movie poster can effectively reflect the content of the movie, attract the audience, stimulate demand and achieve a good publicity effect. Current movie poster design is mainly carried out by professional designers, which requires a lot of time and labor. In this paper, we propose a context-aware image generation method for the assisted design of movie posters using a generative adversarial network (named MPAD-CIP for short). First, the basic information and visual contents of the movie are perceived and representative images are extracted using convolution operations. Then, a backbone network based on a deep convolutional generative neural network is formulated to generate images that summarize movies. The backbone network is composed of two components: a generator and a discriminator. Their combination realizes computer-assisted movie poster design by sensing visual context. In the experimental part, the proposed MPAD-CIP method is compared with several benchmark models to demonstrate that the posters it generates are more realistic and versatile, and some of the generated posters are exhibited. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-13T07:00:00Z DOI: 10.1142/S021812662350233X
- A Novel High Gain Non-Isolated Three-Port DC–DC Converter for DC
Microgrid Applications-
Authors: T. S. Bheemraj, Dandu Prajapathi, V. Karthikeyan, S. Kumaravel Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, a high-gain nonisolated three-port bidirectional DC–DC converter is proposed to interface a solar photovoltaic and battery energy storage system to a DC bus with a reduced number of components. Four modes of operation based on power flow and load demand are identified. The operating principle of the proposed converter and its operational waveforms for all four modes of operation are described in this paper. The steady-state analysis of the proposed converter is performed to determine the voltage gain of the converter in all four modes of operation. The steady-state analysis covers both continuous conduction mode and discontinuous conduction mode, as well as the boundary condition between them. The reduced number of components needed to achieve high voltage gain lowers the cost and weight of the converter. The analytical results are validated using simulation results from PSCAD and hardware results. Also, the results indicate that the ripple in the current and voltage is reduced significantly. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-09T08:00:00Z DOI: 10.1142/S0218126623502262
- Design and Simulation of Low Dropout, Low Power Capless Linear Voltage
Regulator-
Authors: K. S. Vasundhara Patel, Niranjan Kumar Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper demonstrates a low-power, low-dropout linear voltage regulator with on-chip frequency compensation. The proposed low-dropout regulator (LDO) delivers a constant output voltage of 2.4[math]V for an input range of 2.5–6.8[math]V. A minimal drop of 2[math]mV and 20[math]mV was observed at no-load and full-load output currents of 0 A and 100[math]mA, respectively. The LDO is realized with a high-gain two-stage error amplifier, internally compensated by a passive high-pass filter to achieve stability over a load current range of 0–100[math]mA without occupying as much area as an active high-pass filter. The LDO requires a bias current of 10[math]mA with a reference voltage of 1.5[math]V and is designed in 180-nm technology. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-08T08:00:00Z DOI: 10.1142/S0218126623502080
- Low Complex Analog Beamforming Design in Multi-User mmWave Non-Orthogonal
Multiple Access (NOMA)-
Authors: S. Sumathi, T. K. Ramesh, Zhiguo Ding Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, we investigate non-orthogonal multiple access (NOMA) scheme in millimeter wave (mmWave) communication to serve nonclustered multiple users. We explore a low complex design of an analog beamforming weight vector and the power requirement of users aiming to minimize the total power targeting to satisfy the spectral efficiency (SE) requirements of all users. We propose a low complex constant modulus analog beamforming (CMAB) algorithm, where we first reduce the number of signal to interference plus noise ratio (SINR) constraints, which is attained from the order of equivalent channel gain of users. Then, the nonconvex constraint of constant modulus (CM) is relaxed and semi-definite programming (SDP) is used to solve the problem. Obtained weight vector and power for all users are optimal since the rank of positive semi-definite (PSD) matrix is one. Later, CM constraint is included. Simulation results show that the proposed algorithm requires less power with minimum complexity compared to the existing research, digital beamforming NOMA and time division multiple access (TDMA) for the same SE requirements. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-08T08:00:00Z DOI: 10.1142/S0218126623502195
- Cloud-Edge Computing-Based ICICOS Framework for Industrial Automation and
Artificial Intelligence: A Survey-
Authors: Weibin Su, Gang Xu, Zhengfang He, Ivy Kim Machica, Val Quimno, Yi Du, Yanchun Kong Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Industrial Automation (IA) and Artificial Intelligence (AI) need an integrated platform. Due to the uncertainty of the time required for training or inference tasks, it is difficult to ensure the real-time performance of AI in the factory. Thus, in this paper, we carry out a detailed survey on a cloud-edge computing-based Industrial Cyber Intelligent Control Operating System (ICICOS) for industrial automation and artificial intelligence. The ICICOS is built on the IEC 61499 programming method and is used to replace the obsolete Programmable Logic Controller (PLC). It is widely known that the third industrial revolution produced an important device: the PLC. But the limited capability of the PLC suits only automation and cannot support AI, especially deep learning algorithms. Edge computing promotes the expansion of distributed architectures to the Internet of Things (IoT), but little effect has been achieved in the territory of the PLC. Therefore, ICICOS focuses on virtualization for IA and AI; we introduce our ICICOS in this paper and give its specific details. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-06T08:00:00Z DOI: 10.1142/S0218126623501682
- NER in Cyber Threat Intelligence Domain Using Transformer with TSGL
-
Authors: Yuhuang Huang, Mang Su, Yuting Xu, Tian Liu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In response to the continuous sophistication of cyber threat actors, it is imperative to make the best use of cyber threat intelligence converted from structured or semi-structured data and Named Entity Recognition (NER) techniques that contribute to extracting critical cyber threat intelligence. To promote the NER research in Cyber Threat Intelligence (CTI) domain, we provide a Large Dataset for NER in Cyber Threat Intelligence (LDNCTI). On the LDNCTI corpus, we investigated the feasibility of mainstream transformer-based models in CTI domain. To settle the problem of unbalanced label distribution, we introduce a transformer-based model with a Triplet Loss based on metric learning and Sorted Gradient harmonizing mechanism (TSGL). Our experimental results show that the LDNCTI well represents critical threat intelligence and that our transformer-based model with the new loss function outperforms previous schemes on the Dataset for NER in Threat Intelligence (DNRTI) and the dataset for NER in Advanced Persistent Threats (APTNER). Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-06T08:00:00Z DOI: 10.1142/S0218126623502018
- Hybrid Brent Kung Adder with Modified Sum Generator For Energy Efficient
Applications-
Authors: A. Niyas Ahamed, M. Madheswaran Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The demand for high-speed and energy-efficient adders in modern portable applications has drastically increased in recent decades. Existing adders achieve high speed due to their parallel prefix structure, but consume more power for wide operands. The proposed Hybrid Brent Kung-Modified Sum Generator (HBK-MSG) achieves high speed through the use of the Brent Kung (BK) adder and consumes low power with the help of the MSG unit. This hybrid adder architecture is applicable to larger operands. It also uses two different sum generator structures, which compute the sum of the operands by incorporating the complement of the carries. It is designed and simulated using Xilinx ISE 13.2 and coded in Verilog HDL. The performance of the proposed HBK-MSG adder is analyzed by measuring area, delay and power consumption. The proposed 64-bit HBK-MSG adder reduces energy consumption by 62.98%, 48.05%, 33.09% and 28.12%, and delay by 63.54%, 48.46%, 34.58% and 28.15%, when compared with existing adder designs such as RCA, CSLA, PPF/CSSA_4 and a hybrid adder. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-06T08:00:00Z DOI: 10.1142/S0218126623502122
- IoT Energy Management for Smart Homes’ Water Management System
-
Authors: P. Côrte, H. Sampaio, E. Lussi, C. Westphall Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. An Internet of Things (IoT) device that can automatically measure water consumption can help prevent excessive water usage or leaks. However, automating too many residences or condominiums with multiple IoT devices can lead to extra energy consumption and more network congestion. We propose controlling the energy consumption of an IoT water consumption management system by dynamically controlling its duty cycle. By analyzing the energy consumption of the developed prototype and its duty cycle variation, we calculated how much energy could be saved by controlling the antenna and the water flow sensor used in the IoT device. While controlling the antenna offered some energy savings, having some way to cut down on the water flow sensor’s consumption can have a dramatic impact on the overall IoT energy consumption or its battery longevity. Our results showed that we could get up to 69% extra energy savings compared to just putting the antenna in sleep mode. There is an observable trade-off in saving so much energy, as we can also see that water reading error rates go up alongside the extra energy savings. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-04T08:00:00Z DOI: 10.1142/S0218126623502171
- Caching Hybrid Rotation: A Memory Access Optimization Method for CNN on
FPGA-
Authors: Dong Dong, Hongxu Jiang, Xuekai Wei Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Custom computing architectures on field programmable gate array (FPGA) platforms are a viable solution for further accelerating convolutional neural network (CNN) inference. However, due to the large size of feature map matrix data, optimizing the storage and computation of CNN feature maps on FPGA remains a challenge. To overcome these challenges, an FPGA-oriented memory access optimization method for CNNs is proposed. First, a feature map partition strategy is used to group the feature maps efficiently. Second, input and output caching rotation methods are employed in an adaptive memory access mode. Third, a caching hybrid rotation method is proposed to optimize memory access performance, which can effectively reduce the access time of the CNN feature map. Experimental results based on SkyNet and VGG16 show that the inference speed of the proposed model is accelerated by 7.1 times compared with previous conventional memory access optimization for CNNs on FPGA. In terms of computational energy efficiency, our method improves by 6.4 times compared to current typical accelerators. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-04T08:00:00Z DOI: 10.1142/S0218126623502183
- A Jointly Guided Deep Network for Fine-Grained Cross-Modal Remote Sensing
Text–Image Retrieval-
Authors: Lei Yang, Yong Feng, Mingling Zhou, Xiancai Xiong, Yongheng Wang, Baohua Qiang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Remote sensing (RS) cross-modal text–image retrieval has great application value in many fields such as military and civilian. Existing methods utilize the deep network to project the images and texts into a common space and measure the similarity. However, the majority of those methods only utilize the inter-modality information between different modalities, which ignores the rich semantic information within the specific modality. In addition, due to the complexity of the RS images, there exists a lot of interference relation information within the extracted representation from the original features. In this paper, we propose a jointly guided deep network for fine-grained cross-modal RS text–image retrieval. First, we capture the fine-grained semantic information within the specific modality and then guide the learning of another modality of representation, which can make full use of the intra- and inter-modality information. Second, to filter out the interference information within the representation extracted from the two modalities of data, we propose an interference filtration module based on the gated mechanism. According to our experimental results, significant improvements in terms of retrieval tasks can be achieved compared with state-of-the-art algorithms. The source code is available at https://github.com/CQULab/JGDN. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-04T08:00:00Z DOI: 10.1142/S0218126623502213
- A Deep Learning-Based Multimodal Resource Reconstruction Scheme for
Digital Enterprise Management-
Authors: Tingting Yang, Bing Zheng Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Nowadays, almost all enterprises face resources and materials in multimodal formats. For example, textual information can be mixed with visual scenes, and visual information can also be mixed with textual scenarios. As a result, fusing information across multimodal materials costs a large amount of human labor in daily management affairs. To deal with this issue, this paper introduces deep learning to characterize the gap between vision and text, and proposes a deep learning-based multimodal resource reconstruction scheme via awareness of table documents, so as to facilitate digital enterprise management. A deep neural network is developed to automatically extract table texts from images, so that multimodal information fusion can be realized. This reduces much of the human labor in recognizing textual characteristics in visual scenarios, which further facilitates resource dispatching activities in the process of digital enterprise management. Experiments are also conducted on a real-world dataset, and the results prove that the proposal is considerably efficient. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-03T08:00:00Z DOI: 10.1142/S0218126623501876
- Multi-Objective Optimal Power Flow Solutions Using Improved
Multi-Objective Mayfly Algorithm (IMOMA)-
Authors: K. Vijaya Bhaskar, S. Ramesh, K. Karunanithi, S. P. Raja Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper realizes the implementation of Improved Multi-objective Mayfly Algorithm (IMOMA) for getting optimal solutions related to optimal power flow problem with smooth and nonsmooth fuel cost coefficients. It is performed by considering Simulated Binary Crossover, polynomial mutation and dynamic crowding distance in the existing Multi-objective Mayfly Algorithm. The optimal power flow problem is formulated as a Multi-objective Optimization Problem that consists of different objective functions, viz. fuel cost with/without valve point loading effect, active power losses, voltage deviation and voltage stability. The performance of Improved Multi-objective Mayfly Algorithm is interpreted in terms of the present Multi-objective Mayfly Algorithm and Nondominated Sorting Genetic Algorithm-II. The algorithms are applied under different operating scenarios of the IEEE 30-bus test system, 62-bus Indian utility system and IEEE 118-bus test system with different combinations of objective functions. The obtained Pareto fronts achieved through the implementation of Improved Multi-objective Mayfly Algorithm, Multi-objective Mayfly Algorithm and Nondominated Sorting Genetic Algorithm-II are compared with the reference Pareto front attained by using weighted sum method based on the Covariance Matrix-adapted Evolution Strategy method. The performances of these algorithms are individually analyzed and validated by considering the performance metrics such as convergence, divergence, generational distance, inverted generational distance, minimum spacing, spread and spacing. The best compromising solution is achieved by implementing the Technique for Order of Preference by Similarity to Ideal Solution method. 
The overall results show the effectiveness of the Improved Multi-objective Mayfly Algorithm for solving the multi-objective optimal power flow problem. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-01T08:00:00Z DOI: 10.1142/S0218126623502006
- A Low Spur 5.9-GHz CMOS Frequency Synthesizer with Loop Sampling Filter
for C-V2X Applications-
Authors: Emre Ulusoy, Ertan Zencir Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, a very low spur 5.9-GHz integer-N frequency synthesizer designed for a Cellular Vehicle-to-Everything (C-V2X) receiver is presented. The PLL is referenced to a 10-MHz crystal oscillator and the design is implemented in a 65-nm CMOS process. The output of the synthesizer has differential quadrature topology and provides the local oscillator signal to a downconverter mixer of C-V2X receiver. Post-layout simulations show that the reference spurs are better than −88[math]dBc through loop sampling technique which was implemented in a 11.8-GHz VCO design for the first time to the best of our knowledge. The best spur level without the loop sampling technique applied is limited to −55[math]dBc. Using the loop sampling technique provides a spur reduction of 33[math]dB which is a significant improvement at this frequency. Based on post-layout simulations, the design has a phase noise of −97/−99/−114[math]dBc for 10[math]kHz/100[math]kHz/1[math]MHz frequency offsets, respectively, which presents competitive numbers with the designs in the literature. The design has 1.2-V nominal supply voltage for the analog and digital blocks. The total power dissipation of the synthesizer core is 6[math]mW from a 1.2-V supply while the output buffers driving a 100-fF load consumes 18[math]mW. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-03-01T08:00:00Z DOI: 10.1142/S0218126623502237
- A Novel Multilevel DC–DC Flyback Converter-Fed H-Bridge Inverter-
Authors: Vijayalakshmi Subramanian, Marimuthu Marikannu, B. Senthilkumar, J. Reka, P. Rathe Devi, Venugopal Ramadoss Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The objective is to produce a high-gain DC-to-DC flyback boost converter integrated with a multilevel converter for inverter applications. The system has two stages: the first is a flyback multilevel converter, and the second comprises a level controller and an H-bridge inverter. The addition of voltage multiplier cells enhances the commercial flyback converter with multiple voltage outputs, producing high voltage gain. These multiple outputs are fed to an H-bridge inverter to produce a multilevel inverter output. The DC-to-DC converter steps up the DC supply while also reducing the number of switches, diodes, and capacitors, which in turn reduces voltage stress and total harmonic distortion. To increase the number of output levels, the number of capacitors and diodes in the DC-to-DC converter must be increased, without disturbing the main circuit, to achieve the required output voltage. The number of power switches in the proposed topology is compared with comparable topologies in the current literature. Simulation results, obtained in the MATLAB/Simulink environment, demonstrate the functionality of the recommended converter. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-27T08:00:00Z DOI: 10.1142/S0218126623502092
- A Hybrid Approximation Method for Integer-Order Approximate Realization of
Fractional-Order Derivative Operators-
Authors: Murat Köseoğlu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The use of fractional-order (FO) calculus for the solution of different problems in many fields has increased recently. However, the use of FO system models in practice brings some difficulties. The FO operator, the fractance device, is usually realized via several integer-order approximation methods, each with pros and cons in terms of operation frequency, time response and stability region; these methods may not meet all performance expectations. In this regard, the author proposes an efficient hybrid integer-order approximation method for the FO derivative operator that does not cause any additional difficulty in realization. The proposed method combines the Matsuda and modified stability boundary locus (M-SBL) approximation methods, so that the advantage of each method is combined in a single hybrid function by considering root mean square error (RMSE) rates of the step response. The performance of the hybrid transfer function is analyzed in comparison with the Matsuda, Oustaloup, continued fraction expansion (CFE) and M-SBL transfer functions for both frequency and time response. Analog realization of the proposed model is performed experimentally via the partial fraction expansion method, and the analog design is verified by both Multisim simulations and experimental results. The improvements due to the hybrid behavior and the consistency of the experimental results with the theoretical and simulation results demonstrate the practicality and usefulness of the hybrid model. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-27T08:00:00Z DOI: 10.1142/S0218126623502249
- Thermal Performance Analysis and Prediction of Printed Circuit Boards-
Authors: Yi Wan, Hailong Huang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Printed circuit boards (PCBs) are important components of electronic devices: they provide mechanical connection and electrical transmission, and thermal failure is their main failure mode, so heat-flow analysis and thermal reliability design are the basis and premise of improving the thermal performance of PCBs. In this paper, analysis models of PCB thermal performance are built based on the principles of fluid mechanics and the finite element method, and the influence of internal heat sources on PCB thermal performance is analyzed. The study provides a theoretical basis for PCB thermal reliability design, which can be applied to high-density Internet of Things and blockchain ICT integration. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-27T08:00:00Z DOI: 10.1142/S0218126623502250
- YOLOv5s-Cherry: Cherry Target Detection in Dense Scenes Based on Improved
YOLOv5s Algorithm-
Authors: Rongli Gai, Mengke Li, Zumin Wang, Lingyan Hu, Xiaomei Li Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Intelligent agriculture has become the future development trend of agriculture, with a wide range of research and application scenarios. Using machine learning to complete basic tasks for people has become a reality, and this ability is also used in machine vision. In order to save time in the fruit-picking process and reduce labor costs, robots are used to achieve automatic picking in the orchard environment. Cherry detection algorithms based on deep learning are proposed to identify and pick cherries. However, most of the existing methods are aimed at relatively sparse fruits and cannot solve the detection problem of small and dense fruits. In this paper, we propose a cherry detection model based on YOLOv5s. First, the shallow feature information is enhanced by convolving the two-times-downsampled feature maps in the Backbone layer of the original network model into the inputs of the second and third CSP modules. In addition, the depth of the CSP module is adjusted and an RFB module is added in the feature extraction stage to enhance the feature extraction capability. Finally, Soft Non-Maximum Suppression (Soft-NMS) is used to minimize the target loss caused by occlusion. We test the performance of the model, and the results show that the improved YOLOv5s-cherry model has the best detection performance for small and dense cherries, which is conducive to intelligent picking. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-25T08:00:00Z DOI: 10.1142/S0218126623502067
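As an aside on the occlusion-handling step: the Soft-NMS idea the abstract mentions can be sketched in plain Python. This is the generic Gaussian Soft-NMS procedure from the literature, not the paper's own implementation; scores of overlapping boxes are decayed rather than discarded.

```python
import math

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: repeatedly keep the highest-scoring box and decay
    the scores of the remaining boxes by exp(-iou^2 / sigma)."""
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    kept = []
    while dets:
        best_box, best_score = dets.pop(0)
        kept.append((best_box, best_score))
        # decay instead of delete: heavily occluded cherries survive with lower scores
        dets = [(b, s * math.exp(-iou(best_box, b) ** 2 / sigma)) for b, s in dets]
        dets = [d for d in dets if d[1] > score_thresh]
        dets.sort(key=lambda d: -d[1])
    return kept
```

With a plausible threshold, a box heavily overlapping a stronger detection keeps a reduced score instead of being removed outright, which is exactly what helps in dense-fruit scenes.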
- A Big Data-Driven Risk Assessment Method Using Machine Learning for Supply
Chains in Airport Economic Promotion Areas-
Authors: Zhijun Ma, Xiaobei Yang, Ruili Miao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. With the rapid development of economic globalization, population, capital and information are rapidly flowing and clustering between regions. As the most important transportation mode in high-speed transportation systems, airports play an increasingly important role in promoting regional economic development, yielding a number of airport economic promotion areas. To support the effective development management of these areas, accurate risk assessment through data analysis is quite important. Thus, in this paper, the idea of ensemble learning is utilized to propose a big data-driven assessment model for supply chains in airport economic promotion areas. In particular, we combine data from two kinds of sources: (1) national economic statistics and enterprise registration data from the Bureau of Industry and Commerce; (2) data from the Civil Aviation Administration of China and other multi-source data. On this basis, an integrated ensemble learning method is constructed to quantitatively analyze the supply chain security characteristics of domestic airport economic areas, providing important support for the security of supply chains in these areas. Finally, experiments are conducted on synthetic data to evaluate the method, demonstrating its effectiveness and practicality. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-23T08:00:00Z DOI: 10.1142/S0218126623501700
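The combination of heterogeneous data sources via ensemble learning can be illustrated with a toy voting sketch. Everything below (the feature names, the one-rule "models", the thresholds) is invented for illustration; the paper's actual features and learners are not given in the abstract.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model risk labels into one ensemble decision."""
    return Counter(predictions).most_common(1)[0][0]

# Each hypothetical "model" keys on one data source named in the abstract.
def econ_model(record):       # national economic statistics
    return "high" if record["debt_ratio"] > 0.7 else "low"

def registry_model(record):   # enterprise registration data
    return "high" if record["firm_age_years"] < 2 else "low"

def aviation_model(record):   # civil-aviation traffic data
    return "high" if record["cargo_volume_drop"] > 0.3 else "low"

def assess(record):
    """Ensemble risk assessment: each source-specific model votes on a label."""
    votes = [m(record) for m in (econ_model, registry_model, aviation_model)]
    return majority_vote(votes)
```

In a real system each rule would be replaced by a trained classifier, but the aggregation structure, one learner per data source combined by voting or stacking, is the ensemble idea the abstract describes.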
- Learning Spatiotemporal-Selected Representations in Videos for Action
Recognition-
Authors: Jiachao Zhang, Ying Tong, Liangbao Jiao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Action recognition is a challenging task that requires modeling both spatial and temporal context. Numerous works focus on architecture design and have made worthy progress on this task. Meanwhile, owing to temporal redundancy and limited computational resources, several works study efficiency, such as frame sampling, some for untrimmed videos and some for trimmed videos. With the intent of improving the effectiveness of action recognition, we propose a novel Computational Spatiotemporal Selector (CSS) to refine and reinforce the key frames carrying discriminative information in a video. Specifically, CSS includes two modules: a Temporal Adaptive Sampling (TAS) module and a Spatial Frame Resolution (SFR) module. The former refines the key frames in the temporal dimension to capture the key motion information, while the latter further zooms out some refined frames in the spatial dimension to eliminate discrimination-irrelevant structural information. The proposed CSS is flexible enough to be embedded into most representative action recognition models. Experiments on two challenging action recognition benchmarks, i.e., ActivityNet1.3 and UCF101, show that the proposed CSS improves the performance of most existing models, not only on trimmed videos but also on untrimmed videos. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-23T08:00:00Z DOI: 10.1142/S0218126623502031
- Removal of Redundant Information via Discrete Representation for Monocular
Depth Estimation-
Authors: Hao Du, Xinzhi Liu, Guoan Cheng, Ai Matsune, Liangfeng Xu, Shu Zhan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Monocular depth estimation aims at inferring three-dimensional (3D) cues from a single RGB image. Although existing methods have achieved a certain degree of success, the impact of redundant information has rarely been studied. We propose to improve estimation accuracy by implicitly eliminating redundant information. To this end, we creatively apply discrete representation to monocular depth estimation. By mapping continuous variables into the corresponding learning-based discrete latent space, a hierarchical multi-scale latent map is acquired as the decoder input. Removing redundant information can enhance prediction performance by helping the depth estimator balance the local and the global. Furthermore, to fully take advantage of the discrete representation, a lightweight fusion mechanism is introduced to aggregate information in multi-scale feature maps. Experiments on the NYU Depth V2 dataset demonstrate that our network is competitive with the state of the art. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-23T08:00:00Z DOI: 10.1142/S0218126623502079
- Rate Control in Versatile Video Coding with Cosh Rate–Distortion
Model-
Authors: Dongzi Wang, Bin Fang, Xuekai Wei, Weizhi Xian, Mingliang Zhou, Qin Mao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, we propose a rate control (RC) scheme for versatile video coding (VVC) to enhance coding performance. First, we propose a rate–distortion (R–D) model at the coding tree unit (CTU) level to describe the R–D relationship. Second, we adjust the quantization parameter (QP) and Lagrange multiplier at the coding unit (CU) level according to visual features to improve the rationality of local coding results. Finally, we propose a model parameter updating strategy to guarantee bitrate accuracy. The experimental results demonstrate that our RC method has better R–D performance with similar bitrate accuracy and better visual quality compared to the default RC strategy in VVC reference software VTM 10.2. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-23T08:00:00Z DOI: 10.1142/S0218126623502109
- New CMOS Linear Transconductors and their Applications-
Authors: Manish Rai, Raj Senani, Abdhesh Kumar Singh Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The object of this paper is to introduce a new CMOS differential input single output (DISO) transconductor (realizable with eight MOSFETs operating in saturation) and a dual input dual output (DIDO) transconductor (realizable with 16 MOSFETs operating in saturation), together with their applications in realizing grounded/floating, positive/negative, electronically controllable resistors and inductors. The workability of all the propositions has been demonstrated by SPICE simulations using TSMC [math] m CMOS technology parameters. The paper thus adds a number of useful electronically controllable circuits to the existing repertoire of CMOS analog circuits for signal processing. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-23T08:00:00Z DOI: 10.1142/S0218126623502110
- Chaotic Oscillator with Diode–Inductor Nonlinear Bipole-Based Jerk
Circuit: Dynamical Study and Synchronization-
Authors: K. Zourmba, C. Fischer, B. Gambo, J. Y. Effa, A. Mohamadou Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper proposes a novel jerk circuit obtained by using an alternative nonlinear bipole component consisting of an inductor and a diode in parallel. The circuit is described by five differential equations and investigated through stability analysis, equilibrium points, the Kaplan–Yorke dimension, phase portraits, Lyapunov characteristic exponent estimation, bifurcation diagrams and the 0–1 test for chaos detection. With the inductor value [math] as the control parameter, the system can display periodic orbits, quasi-periodic orbits and chaotic behavior. The dynamic influence of the diode transit capacitance is analyzed, which confirms the robustness of the system to noise. The validity of the numerical simulations is confirmed experimentally through the phase portraits of the circuit. Finally, the synchronization of the systems is studied and time-simulation results are presented. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-23T08:00:00Z DOI: 10.1142/S0218126623502146
- WFLTree: A Spanning Tree Construction for Federated Learning in Wireless
Networks-
Authors: huo Li, Yanwei Zheng, Yifei Zou Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Nowadays, more and more federated learning algorithms are implemented in edge computing to provide various customized services for mobile users, which has strongly supported the rapid development of edge intelligence. However, most of them are designed relying on reliable device-to-device communication, which is not a realistic assumption in wireless environments. This paper considers a realistic aggregation problem for federated learning in a single-hop wireless network, in which the parameters of machine learning models are aggregated from the learning agents to a parameter server via a wireless channel with a physical interference constraint. Assuming that all the learning agents and the parameter server are within a distance [math] of each other, we show that it is possible to construct a spanning tree connecting all the learning agents to the parameter server within [math] time steps. After the spanning tree is constructed, it takes only [math] time steps to aggregate all the training parameters from the learning agents to the parameter server, after which the server can update its machine learning model once according to the aggregated results. Theoretical analyses and numerical simulations show the performance of our algorithm. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-23T08:00:00Z DOI: 10.1142/S0218126623502201
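The aggregation step after the tree is built can be sketched abstractly. The sketch below sums parameters bottom-up along a spanning tree given as a parent array; the wireless-interference scheduling that is the paper's actual contribution is deliberately abstracted away, and the data layout is an assumption for illustration.

```python
def aggregate_to_server(parent, params, server=0):
    """Bottom-up aggregation of training parameters along a spanning tree.
    parent[v] is the tree parent of node v (parent[server] is ignored);
    returns the sum of all params gathered at the parameter server."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v in range(n):
        if v != server:
            children[parent[v]].append(v)

    def subtree_sum(v):
        # each node forwards its own parameter plus everything its subtree sent
        return params[v] + sum(subtree_sum(c) for c in children[v])

    return subtree_sum(server)
```

In the paper's setting each edge transmission must additionally be scheduled around physical interference, which is what drives the stated time-step bounds.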
- Prediction of the Test Yield of Future Integrated Circuits Through the
Deductive Estimation Method-
Authors: Chung-Huang Yeh, Jwu E. Chen Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In the past 20 years, semiconductor manufacturing technology has advanced rapidly, but the advancement of integrated circuit (IC) testers has been slow. Using obsolete testers to inspect advanced wafers has become a significant challenge for test manufacturers. In this research, we used DITM (digital IC testing model) to discuss the impact of the test guardband (TGB) on quality and yield. Considering the interaction between semiconductor fabrication capability parameters and test capability parameters, we proposed an estimation method [deductive estimation method (DEM)] to analyze the electrical distribution changes of products after chip production and deduce the yield of future products. The deductive estimation method can correctly depict the future test yield [math] curve using the chip frequency data published by IRDS (International Roadmap for Devices and Systems) in 2017. Furthermore, test manufacturers can measure whether the current test capabilities can cope with future semiconductor chip manufacturing capabilities by predicting the trends. Next, test manufacturers can maintain high-quality and high-yield chip output by pre-adjusting the hardware testing capabilities of ATE (automated test equipment) or proposing more effective chip testing methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-20T08:00:00Z DOI: 10.1142/S021812662350202X
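The interaction between a test guardband and yield can be illustrated with a one-sided Gaussian pass-fraction calculation. This is a generic textbook relation, not the paper's DEM: the distribution, spec limit, and guardband values below are assumptions for the sketch.

```python
import math

def pass_fraction(mean, sigma, spec_limit, guardband):
    """Fraction of a Gaussian-distributed parameter passing a lower spec limit
    tightened by a test guardband (one-sided): P(X > spec_limit + guardband)."""
    z = (spec_limit + guardband - mean) / sigma
    # survival function of the standard normal via erfc
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

Widening the guardband trades yield for outgoing quality: chips near the spec limit are rejected even though their true value passes, which is the tension the abstract's TGB analysis quantifies.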
- The DC Microgrid-Based SoC Adaptive Droop Control Algorithm-
Authors: Hongyu Yang, Nannan Zhang, Bo Gao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents a communication-less control algorithm for keeping an isolated direct current (DC) microgrid in a stable state. In order to balance the state of charge (SoC) among multiple battery energy storage systems (BESs), a control algorithm with an adaptive droop coefficient is proposed: when the SoCs of the BESs deviate from each other, the droop coefficient changes with the SoC. The DC bus voltage is kept at a stable value within a small deviation range by the proposed adaptive control algorithm, thus improving voltage quality. The coordinated control strategy is significant for keeping the system stable and ensuring the power balance of the DC microgrid. Finally, several representative cases are discussed and verified. The simulation results show that the DC bus voltage regulation has better stability and robustness without communication, and the proposed control algorithm performs well in balancing the SoC. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-20T08:00:00Z DOI: 10.1142/S0218126623502055
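The SoC-adaptive droop idea can be sketched with a common textbook form of the adaptive coefficient (not necessarily the paper's exact law, which the abstract does not give): in discharge, a unit with higher SoC is assigned a smaller virtual resistance, so it supplies more current and the SoCs converge.

```python
def adaptive_droop_coefficient(r0, soc, n=2):
    """SoC-adaptive virtual resistance for the discharging case.
    r0 is the nominal droop coefficient, soc in (0, 1], n tunes convergence
    speed.  Higher SoC -> smaller coefficient -> larger share of the load."""
    return r0 / (soc ** n)

def droop_voltage_ref(v_nom, r_droop, i_out):
    """Basic droop law: the converter's voltage reference sags with output current."""
    return v_nom - r_droop * i_out
```

At equal output current, the higher-SoC unit holds a higher voltage reference, so the current-sharing loop shifts load onto it until the SoCs equalize.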
- A Scalable Neuristor Based on a Half-Wave Memristor Emulator-
Authors: Lei Zhou, Sibei Yin, Chune Wang, Huibin Qin, Qianjin Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The neuristor based on memristors can be used to mimic synapse and neurons of biological neural systems, and it is the key unit of spiking neural networks. However, the resistance states of realistic memristors are nonvolatile, which is not conducive to mimicking the forgetting function of the brain. Given that the resistance states of memristor emulators are volatile after power down, this paper exhibits a scalable neuristor built with a half-wave memristor emulator. The proposed neuristor demonstrates four critical features for action-potential-based computing: the all-or-nothing spiking of an action potential, threshold-driven spiking, diverse periodic spiking and symmetric anti-Hebbian learning rule of spike-timing-dependent plasticity. Particularly, there are no complex shape and duration constraints on pre- and post-spikes for implementing the symmetric anti-Hebbian learning rule. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-20T08:00:00Z DOI: 10.1142/S0218126623502134
- A Novel Graphical Approach for the Fast Estimation of Filter Capacitor
Value and the Output Performance of Various Uncontrolled Rectifier-
Authors: Mehmet Akbaba, Omar Dakkak, Ferhat Atasoy, Adnan Cora Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper proposes a novel approach, for low-power applications, to the fast estimation of the filter capacitor value and of output performance parameters such as the average and root mean square (RMS) values of voltage and current for single-phase full-wave, three-phase half-wave and three-phase full-wave rectifier circuits. To this end, novel equations are derived for the average output voltage and the RMS ripple voltage separately for each of the three rectifier types. The % ripple factor for each rectifier type is then calculated using the newly derived equations and plotted versus the newly introduced Normalized Time Constant (NTC). Besides, taking the peak supply voltage as the base voltage, the per-unit (p.u.) average output voltage and RMS ripple voltage for each rectifier circuit are computed and plotted versus NTC. These are normalized graphs, since their values are independent of both the supply voltage amplitude and the supply frequency, and they need to be set up only once for each type of rectifier circuit. Then, for a pre-selected ripple factor value, the corresponding [math] value is read directly from the graph of % ripple factor versus [math]. Once the NTC value has been acquired, the formula of [math] yields the capacitor value required for the pre-selected ripple factor in one simple calculation step. Furthermore, the p.u. average output voltage and [math] ripple voltage values corresponding to the same [math] value are read directly from the graphs. Finally, the efficiency of the proposed method is demonstrated through design examples for each type of rectifier circuit. The three design examples highlight how the output performance values can be obtained easily, accurately and swiftly. Furthermore, the viability of the graphical approach is verified by experimental results, which demonstrate the suitability of the equations derived in the proposed method. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-20T08:00:00Z DOI: 10.1142/S0218126623502158
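For context, the sizing problem the normalized graphs solve can be sketched with the classic first-order approximation; the NTC definition used below is an assumption, since the abstract elides the exact formula.

```python
def filter_capacitor(ripple_fraction, r_load, f_ripple):
    """Classic first-order estimate of the filter capacitor (not the paper's
    NTC graphs): peak-to-peak ripple of a capacitor-filtered rectifier is
    roughly Vp / (f_ripple * R * C), so C ~ 1 / (ripple * f_ripple * R)."""
    return 1.0 / (ripple_fraction * f_ripple * r_load)

def capacitor_from_ntc(ntc, f_supply, r_load):
    """If the normalized time constant is defined as NTC = f * R * C (an
    assumed form -- the abstract elides the formula), then C = NTC / (f * R)."""
    return ntc / (f_supply * r_load)
```

For example, a 5% ripple fraction on a 100-ohm load with 100-Hz ripple frequency calls for about 2000 uF; the paper's method replaces such per-case algebra with a single graph lookup followed by one multiplication.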
- A New Tunable Gyrator-C-Based Active Inductor Circuit for Low Power
Applications-
Authors: Rasool Gardeshkhah, Ali Naderi Saatlo Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. A new CMOS implementation of an active inductor (AI) circuit based on the gyrator-C topology, suitable for low-voltage and RF applications, is presented in this paper. One of the critical features of active inductors, which directly affects their quality factor, is their series loss resistance. In this regard, a multi-regulated cascode stage is employed in the new structure, which decreases this loss. Moreover, by cascoding the input transistor, the transistors that determine the self-resonance frequency and the quality factor are separated from each other, so the properties of the designed active inductor can be set independently. The configuration of the conventional two-stage gyrator is improved, which gives the proposed design more freedom in determining and tuning its characteristics. In addition, by employing a common-source configuration, low-conductance nodes are achieved, which decreases the ohmic loss of the AI. Furthermore, the main properties of the design can be tuned without affecting each other. The power consumption of the circuit remains as low as 0.62 mW, so the circuit is suitable for low-power applications. The results show that the AI is suitable for RF applications over the 0.2–11.8 GHz frequency range. To verify the theoretical calculations, simulations are carried out using HSPICE with level-49 (BSIM3v3) parameters in 130 nm CMOS technology. In addition, corner and Monte Carlo analyses are performed to prove the robustness of the circuit against process variation. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-17T08:00:00Z DOI: 10.1142/S0218126623502353
- Comprehensive Performance Evaluation of Landing Gear Retraction Mechanism
in a Certain Model of Aircraft Based on RPCA Method-
Authors: Zhijuan Sun, Jing Zhao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The performance of the landing gear retraction mechanism in an aircraft directly affects its safe operation. Therefore, it is important to analyze and evaluate its comprehensive performance during the design process. Multiple single kinematic and dynamic performance indexes of the landing gear retraction mechanism can be solved by CAD/CAE software. The weighting factors of each single performance index, obtained by the expert investigation method, are used to distinguish their different contributions to the comprehensive evaluation. Combining the a priori information of the mechanism, the comprehensive performance of the landing gear retraction mechanism can be analyzed by the Relative Principal Component Analysis (RPCA) method, and the scale of the landing gear retraction mechanism with the best comprehensive performance can be effectively selected. Furthermore, RPCA can also provide a scientific reference basis for the optimization design of the landing gear retraction mechanism. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-16T08:00:00Z DOI: 10.1142/S0218126623501955
- Random Access Preamble Detection with Noise Suppression for 5G-Integrated
Satellite Communication Systems-
Authors: Li Zhen, Yan Zhao, Yanyan Zhu, Chenchen Pei, Yinghua Li Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Thanks to its capability of providing seamless massive access and extended coverage, satellite communication has been envisioned as a promising complementary part of the future 6G network. Due to the large satellite-to-ground propagation loss, noise mitigation is one of the most important considerations for implementing key interface technologies onboard a satellite, e.g., random access (RA). This paper aims at developing an effective preamble detection method with noise suppression for 5G-integrated satellite RA systems. Specifically, according to the satellite ephemeris and the user equipment location, we first perform pre-compensation of the timing and frequency offsets before preamble transmission to determine all the possible correlation peak positions in advance. By leveraging the advantage of the wavelet transform in signal-to-noise separation, we further design a novel detection framework based on wavelet denoising, which can efficiently reconstruct the preamble signature from the noisy power delay profile. Simulation results validate the feasibility of the proposed method and show that it achieves notably improved detection performance under extremely low signal-to-noise ratio conditions in comparison with the conventional method. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-15T08:00:00Z DOI: 10.1142/S0218126623501979
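The wavelet-denoising step can be illustrated with a minimal one-level Haar transform with soft thresholding, a stand-in for whatever wavelet family and decomposition depth the paper actually uses. The toy "power delay profile" in the usage below is invented for the sketch.

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising (even-length input).
    Detail coefficients, which are noise-dominated for a smooth peak, are
    shrunk toward zero; the signal is then reconstructed."""
    s2 = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    # soft threshold: shrink each detail coefficient by `threshold`, floor at 0
    soft = lambda d: math.copysign(max(abs(d) - threshold, 0.0), d)
    out = []
    for a, d in zip(approx, map(soft, detail)):
        out.extend([(a + d) / s2, (a - d) / s2])
    return out
```

Applied to a noisy power delay profile, small noise wiggles are suppressed while a broad correlation peak survives, which is what makes the subsequent peak-position test more reliable at low SNR.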
- DRMT: A Decentralized IoT Device Recognition and Management Technology in
Smart Cities-
Authors: Yu Tang, Yi Sun, Bin Ning, Jun Wun, Zhaowen Lin Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The Internet of Things (IoT) provides great convenience to society, but the large number of devices accessing the network exposes it to many security and privacy risks. At present, IoT devices are managed centrally: all terminal devices are administered in a centralized manner, and devices can only connect to the network with the permission of the centralized nodes. In this scenario, growth in the amount of equipment can cause central-node overload or single-node failure. Moreover, flexible expansion requirements are difficult to achieve in such a centralized architecture. This paper proposes DRMT, a safe and efficient IoT device recognition and management scheme based on blockchain and edge computing. The scheme realizes feature extraction, clustering and learning of IoT device traffic based on the distributed deployment of edge servers, so the recognition model can be continuously updated. A further advantage is that management efficiency and system scalability can be enhanced by establishing a blockchain network between edge servers to share identification information and automate security deployment. The scheme's effectiveness is verified by building an experimental network and a blockchain system. The results show that DRMT can help realize effective management of large-scale IoT devices in smart cities. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-13T08:00:00Z DOI: 10.1142/S0218126623501943
- A 6.7 GHz, 89.33 μW Power and 81.26% Tuning Range Dual Input
Ring VCO with PMOS Varactor-
Authors: Mohd Saqib, Subodh Wairya, Anurag Yadav Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper proposes improved two-stage and four-stage CMOS ring Voltage-Controlled Oscillator (VCO) designs with high output frequency, improved phase noise and reduced power consumption. A PMOS varactor is used in the conventional circuit to obtain a high tuning range and very low power consumption. Cadence Virtuoso 90 nm technology was used to simulate the differential ring VCO with the proposed dual-input differential delay cell. The two-stage and four-stage designs give wide tuning ranges of 1.254 to 6.694 GHz (81.26%) and 1.821 to 5.259 GHz (65.37%), respectively, as the control voltage changes from 0.1 to 1 V. The power consumption of the two-stage and four-stage ring VCOs varies from 48.02 to 89.33 μW and 66.81 to 157.02 μW, respectively. The proposed two-stage and four-stage VCOs exhibit phase noise of −114.46 and −111.06 dBc/Hz at 1 MHz offset from 6.694 and 5.259 GHz carrier frequencies, respectively. The proposed differential ring VCOs thus achieve a wide tuning range, very low power consumption, and an improved figure of merit and phase noise. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-10T08:00:00Z DOI: 10.1142/S0218126623501992
- Safety-Critical Task Offloading Heuristics for Workflow Applications in
Mobile Edge Computing-
Authors: Yushen Wang, Tianwen Sun, Guang Yang, Kai Yang, Xuefei Song, Changling Zheng Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. As the fundamental mechanism in mobile edge computing (MEC), task offloading strategy is of great significance to the quality of computing services provided by MEC systems. When coping with workflow applications, the precedence relations among tasks increase the difficulty in developing task offloading strategies. This paper studies the problem of safety-critical task offloading for workflow applications in a MEC environment. Considering the precedence constraints on workflow tasks and the overhead of security services, we formulate the safety-critical workflow offloading model with the objective of jointly optimizing the total completion time and energy consumption. By using a task sequence to represent a feasible solution to the optimization model, we introduce a family of heuristics to solve the safety-critical workflow offloading problem under precedence constraints upon workflow tasks. Depending on whether the offloading solution satisfies the precedence relations among workflow tasks, task sequences can be classified into two categories, i.e., precedence-aware and precedence-unaware offloading solutions. With the satisfaction of precedence constraints, a family of heuristics by using a precedence-aware strategy and a precedence-unaware strategy is designed to offload safety-critical workflow tasks. Given an offloading sequence and the operating conditions of MEC servers, the heuristic algorithms select the currently best MEC server to offload workflow tasks. Experimental results justify the performance of the proposed algorithms in solving the safety-critical workflow offloading problem under precedence constraints. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-09T08:00:00Z DOI: 10.1142/S0218126623501864
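The server-selection step described above (given an offloading sequence, "select the currently best MEC server" for each task) can be sketched as a greedy heuristic. The cost model, server parameters and the `alpha` weight below are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical greedy server selection: for each task in a given
# (precedence-feasible) offloading sequence, pick the server that
# minimizes a weighted sum of completion time and energy.
def offload(sequence, servers, alpha=0.5):
    """sequence: list of (task_id, workload); servers: {name: (speed, power)}.
    Returns ({task_id: server}, weighted objective)."""
    ready = {name: 0.0 for name in servers}   # time each server frees up
    plan, objective = {}, 0.0
    for task_id, workload in sequence:
        best = None
        for name, (speed, power) in servers.items():
            finish = ready[name] + workload / speed
            energy = power * workload / speed
            cost = alpha * finish + (1 - alpha) * energy
            if best is None or cost < best[0]:
                best = (cost, name, finish)
        cost, name, finish = best
        ready[name] = finish                  # server is busy until then
        plan[task_id] = name
        objective += cost
    return plan, objective

servers = {"edge1": (2.0, 1.0), "edge2": (1.0, 0.5)}
plan, obj = offload([("t1", 4.0), ("t2", 2.0)], servers)
```

Here the second task is routed to the slower but idle server, since waiting for the faster one would cost more in the combined objective.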
- An Example of Solution for Data Preparation Required for Some Purposes of
People Identification or Re-Identification-
Authors: Adnan Ramakić, Zlatko Bundalo, Dušanka Bundalo Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, we present an example solution for obtaining the key elements required for people identification or re-identification. During an identification or re-identification process, various elements are used. Depending on the method to be implemented, these elements can be RGB (red, green, blue) images, grayscale images, different types of features extracted from them, etc. In this work, we focus on obtaining elements suitable for use in gait recognition or re-identification methods that use features obtained from images of people in gait. The presented solution can be useful in many identification or re-identification applications, since the key elements required by the implemented methods can be obtained in a simple way. Different methods for identification or re-identification can also be added to the presented solution. Based on this, we have developed a simple system for people re-identification. An experiment was conducted on our own dataset containing 13 people in gait. The results obtained were over 90% for 4 out of 5 types of features used in the presented re-identification system. It is important to emphasize that more reliable results can be obtained by combining different types of features. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-08T08:00:00Z DOI: 10.1142/S0218126623501645
- A Big Data-Driven Intelligent Knowledge Discovery Method for Epidemic
Spreading Paths-
Authors: Yibo Zhang, Jierui Zhang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The prevention and control of communicable diseases such as COVID-19 is a worldwide problem, especially in terms of mining latent spreading paths. Although some communication models have been proposed from the perspective of the spreading mechanism, that mechanism is hard to describe at all times, because real-world disease-spreading scenarios are dynamic and cannot be captured by time-invariant model parameters. To remedy this gap, this paper explores the use of big data analysis in this area, replacing mechanism-driven methods with data-driven ones. In a modern society with a high level of digitization, the growing amount of data in various fields also provides much convenience for this purpose. This paper therefore proposes an intelligent knowledge discovery method for critical spreading paths based on epidemic big data. As the major roadmap, a directed acyclic graph of epidemic spread is constructed with each province and city in mainland China as nodes; all features of the same node are dimension-reduced, and a composite score is evaluated for each city per day by processing the features after principal component analysis. Then, the machine learning model XGBoost carries out feature importance ranking to discriminate latent candidate spreading paths. Finally, a shortest path algorithm is used to find the critical path of epidemic spreading between two nodes. Simulative experiments are implemented using realistic social network data. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-08T08:00:00Z DOI: 10.1142/S0218126623501931
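The final step of the roadmap above, finding a critical spreading path between two nodes, can be sketched with a standard shortest-path search over a weighted spread graph. The city names and edge weights below are illustrative, not real epidemic data; in the paper's setting the weights would come from the learned feature-importance scores.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra search. graph: {node: [(neighbor, weight), ...]};
    returns (cost, path) or (inf, []) if dst is unreachable."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Illustrative weighted spread graph over three cities
graph = {"Wuhan": [("Beijing", 2.0), ("Shanghai", 1.0)],
         "Shanghai": [("Beijing", 0.5)]}
cost, path = shortest_path(graph, "Wuhan", "Beijing")
```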
- Study and Design of Privacy-Preserving Range Query Protocol in Sensor
-
Authors: Yun Deng, Zitao Zheng, Yu Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In current research on data query for two-tiered WSNs, the privacy-preserving range query is one of the hotspots. However, existing research on two-tiered wireless sensor networks has problems such as high computational and communication costs for security comparison items and high energy consumption of sensing nodes. In this paper, a privacy-preserving range query protocol that integrates reversed 0-1 encoding with a Bloom filter is researched and designed. In the sensing data submission stage, the optimized reversed 0-1 encoding, the HMAC algorithm, the AES encryption algorithm and a variable-length Bloom filter are used to generate maximum–minimum comparison encodings and construct a shorter verification index chain, reducing the computational and communication costs of sensing nodes. In the private data range query stage, the base station uses the HMAC algorithm to convert the plaintext query range into a ciphertext query range and sends it to the storage node. At the storage node, the bitmap encoding information of the verification index chain is calculated with the comparison rule of the reversed 0-1 encoding, and it is returned to the base station together with the verification index chain and the data ciphertext that complies with the query rule. In the data integrity verification stage, the integrity of the query results is verified at the base station using the verification index chain and bitmap encoding. In the experimental section, a Cortex-M4 development board running the AliOS Things operating system serves as the sensing node and a Cortex-A9 development board running Linux serves as the storage node; the protocol is compared with existing protocols in three aspects: the number of data items collected in each cycle, the length of the data, and the number of data dimensions.
The experimental results show that the energy consumption of this protocol is lower under the same experimental environment. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-07T08:00:00Z DOI: 10.1142/S0218126623501852
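The 0-1 encoding comparison that the protocol above builds on can be illustrated with a minimal sketch of the classic 0/1-encoding trick: x > y exactly when the 1-encoding of x and the 0-encoding of y share an element, so two values can be compared via set intersection (in the full protocol, via HMACs of these sets) without revealing the values themselves. The bit width and function names below are illustrative.

```python
BITS = 8  # illustrative fixed bit width

def one_encoding(x):
    """Prefixes of x's binary form that end in 1."""
    b = format(x, f"0{BITS}b")
    return {b[:i + 1] for i in range(BITS) if b[i] == "1"}

def zero_encoding(y):
    """Prefixes of y's binary form ending in 0, with that 0 flipped to 1."""
    b = format(y, f"0{BITS}b")
    return {b[:i] + "1" for i in range(BITS) if b[i] == "0"}

def greater_than(x, y):
    # x > y  <=>  the two encodings intersect
    return bool(one_encoding(x) & zero_encoding(y))
```

In a deployed protocol the sets would be keyed with HMACs so the storage node can test intersection without learning x or y.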
- MMsRT: A Hardware Architecture for Ray Tracing in the Mobile Domain
-
Authors: Run Yan, Libo Huang, Hui Guo, Yashuai Lü, Ling Yang, Nong Xiao, Li Shen, Mengqiao Lan, Yongwen Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Today's desktop rendering platforms typically use GPUs, which have become the most powerful computing chips for meeting growing visual needs, especially in ray tracing. However, ray tracing is challenging for mobile platforms because mobile GPUs must cope with limited computing power, hardware resources and memory bandwidth. This paper presents a novel architecture for the mobile domain called Mobile Multiple stacks Ray Tracing (MMsRT). The most complicated calculations in ray tracing are completed through a lightweight embedded design. MMsRT has three key features: first, we use multiple stacks to keep multiple rays in flight in parallel; second, a stack cache stores stack data when the storage space of the multiple stacks is insufficient; third, we adopt a data prefetching mechanism for the caches to improve the cache hit rate and performance. Tests on an accurate simulator show that our design can be applied to mobile devices: it achieves about 82.9 Million Rays Per Second (MRPS) with a chip area of about 0.856 mm2, i.e., 96.85 MRPS/mm2. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-02-02T08:00:00Z DOI: 10.1142/S021812662350192X
- Tree Social Relations Optimization-Based ReLU-BiLSTM Framework for
Improving Video Quality in Video Compression-
Authors: K. Sivakumar, S. Sasikumar, M. Krishnamurthy Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Although High-Efficiency Video Coding (HEVC) has high coding efficiency, its encoding performance must be increased to keep up with the expanding number of multimedia applications. Therefore, this paper proposes a novel Rectified Linear Unit-Bidirectional Long Short-Term Memory-based Tree Social Relations Optimization (ReLU-BiLSTM-based TSRO) method to enhance the quality of video transmission. The main objective of the proposed method is to enhance the entropy encoding process in the HEVC standard. Here, the context-adaptive binary arithmetic coding (CABAC) framework, a prevalent and improved form of entropy coding, is utilized in the HEVC standard. In addition, the performance of the proposed method is determined by evaluating various measures such as mean square error (MSE), cumulative distribution factor, compression ratio, peak signal-to-noise ratio (PSNR) and bit error rate. Finally, the proposed method is examined with five different video sequences: football, tennis, garden, mobile and coastguard. The performance of the proposed method is compared with various approaches, and the result analysis shows that it attains minimum MSE loss with maximum PSNR. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-31T08:00:00Z DOI: 10.1142/S0218126623501797
- Intelligent Hyperparameter-Tuned Deep Learning-Based Android Malware
Detection and Classification Model-
Authors: Rincy Raphael, P. Mathiyalagan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Recently, Android applications have been playing a vital part in everyday life, as several services are offered via mobile applications. Due to its market dominance, Android is more at risk from malicious software, and this threat is growing. The exponential growth of malicious Android apps has made it essential to develop cutting-edge methods for identifying them. Despite the prevalence of a number of security-based approaches in the research, feature selection (FS) methods for Android malware detection still have to be developed. In this research, we provide a method for distinguishing malicious Android apps from legitimate ones using an intelligent hyperparameter-tuned deep learning-based malware detection (IHPT-DLMD) technique. Feature extraction and preliminary data processing are the main functions of the IHPT-DLMD method. The technique initially determines the significant permissions and API calls using the binary coyote optimization algorithm (BCOA)-based FS technique, which helps remove unnecessary features. Besides, a bidirectional long short-term memory (BiLSTM) model is employed for the detection and classification of Android malware. Finally, the glowworm swarm optimization (GSO) algorithm is applied to optimize the hyperparameters of the BiLSTM model to produce effectual outcomes for Android application classification. The IHPT-DLMD method is validated using a benchmark dataset and evaluated in several ways. The test data demonstrated overall higher performance of the IHPT-DLMD methodology in comparison to the most contemporary methods currently in use. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-28T08:00:00Z DOI: 10.1142/S0218126623501918
- Lightweight CNN-Based Image Recognition with Ecological IoT Framework for
Management of Marine Fishes-
Authors: Lulu Jia, Xikun Xie, Junchao Yang, Fukun Li, Yueming Zhou, Xingrong Fan, Yu Shen, Zhiwei Guo Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. With the development of emerging information technology, traditional management methods for marine fishes are slowly being replaced by new methods because of their high cost, time consumption and inaccuracy. Updating marine fish management technology is also a great help for the creation of smart cities. However, some newly studied methods are too specific, not applicable to other marine fishes, and generally low in recognition accuracy. Therefore, this paper proposes an ecological Internet of Things (IoT) framework in which a lightweight deep neural network, referred to as Fish-CNN, is implemented as an image recognition model for marine fishes. In this study, multiple rounds of training and evaluation of Fish-CNN are performed, and the accuracy of the final classification reaches 89.89%–99.83%. Moreover, the final evaluation, compared with Rem-CNN, linear regression and a multilayer perceptron, also verifies the stability and advantage of our method. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-25T08:00:00Z DOI: 10.1142/S0218126623501694
- High Selectivity, Dual-Mode Substrate Integrated Waveguide Cavity Bandpass
Filter Loaded Using CCSRR-
Authors: N. Praveena, N. Gunavathi Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, a third-order, high-selectivity Substrate Integrated Waveguide (SIW) bandpass filter (BPF) loaded with a Circular Complementary Split-Ring Resonator (CCSRR) is proposed for C-band applications. The SIW cavity BPF comprises two stepped resonators ([math] and [math]) and one CCSRR ([math]) engraved on the top metal plate. Initially, the cavity loaded with the stepped resonators alone does not give good selectivity on the lower out-of-band. Hence, the CCSRR is introduced between the stepped resonators to improve the filter's selectivity and enhance its bandwidth. The introduction of the CCSRR enhances the selectivity on the lower out-of-band at 5.5 GHz, and the extra pole increases the bandwidth. The metamaterial behavior of the CCSRR is validated by permittivity extraction. The 3 dB bandwidth of the third-order SIW filter is 840 MHz with a center frequency of 6.18 GHz. The designed filter uses an RT/duroid 5880 substrate with [math] = 2.2 and a size of 28.4 × 28.4 × 0.51 mm3. The measured and simulated results of the proposed filter agree with each other. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-25T08:00:00Z DOI: 10.1142/S0218126623501906
- THD Minimization and Reliability Analysis of Cascaded Multilevel Inverter
-
Authors: R. Kavitha Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Inverters are DC to AC power converters that are widely used in AC motor drives and distributed energy generation systems. Multi-Level Inverters (MLIs) have emerged as the preferred inverter technology because of their advantages of reduced switching losses and better harmonic profile. This paper deals with the Minimization of Total Harmonic Distortion (MTHD) of Symmetric Cascaded H-bridge Multi-Level Inverter (SCMLI) and Asymmetric Cascaded H-bridge Multi-Level Inverter (ACMLI). A hybrid memetic algorithm composed of heuristic PSO (Particle Swarm Optimization) algorithm and traditional MAS (Mesh Adaptive direct Search) is proposed to optimize the switching angles. In Photo Voltaic (PV) system, the input DC voltage generally varies from its nominal value due to the change in temperature and irradiance. Thus, the sensitivity analysis of THD and harmonics is also carried out considering the variations with non-integer magnitudes of input DC sources. The experimental prototype of SCMLI and ACMLI topology is developed and validated with simulation results. The reliability and Mean Time to Failure (MTTF) of SCMLI and ACMLI are investigated based on power losses of the MOSFET and thermal parameters. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-18T08:00:00Z DOI: 10.1142/S0218126623501815
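The heuristic PSO half of the hybrid memetic algorithm above can be sketched as follows. A stand-in quadratic objective replaces the paper's THD expression, and the swarm size, inertia and acceleration coefficients are illustrative defaults, not the paper's settings.

```python
import random

def pso(objective, dim, bounds, iters=100, swarm=20, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization over a box-constrained domain."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
# Stand-in objective: minimize distance from (1, 1, 1)
best, best_val = pso(lambda p: sum((x - 1.0) ** 2 for x in p),
                     dim=3, bounds=(-5, 5))
```

In the paper's setting, the decision variables would be the switching angles and the objective the THD expression, with the mesh adaptive direct search refining the PSO result.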
- Urban Digital Transformation and Enterprise Personal Data Protection
-
Authors: Wanyi Chen, Luqi Miao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. With the continuous digital transformation of Shanghai, the problem of illegal use of data has become prominent. This study examines the relationship between urban digital transformation and enterprises' personal data protection from both micro and macro viewpoints. Enterprises that establish a personal data protection system (PDPS) can gain public support in the process of urban digital transformation, accelerate that transformation, and develop the whole market. Public support, corporate governance and government guidance interact to provide the impetus for urban digital transformation. This study provides a theoretical basis for the establishment of smart cities in emerging markets, as well as practical implications for governance guidelines that promote the digital transformation of cities. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-12T08:00:00Z DOI: 10.1142/S0218126623501803
- An Artificial Neural Network-Based Intelligent Prediction Model for
Financial Credit Default Behaviors-
Authors: Zhuo Chen, Zihao Wu, Wenwei Ye, Shuang Wu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. With the rapid development of intelligent techniques, smart finance has become a hot topic in daily life. Financial credit currently faces increasing business volume, and intelligent algorithms are expected to help reduce human labor. In this area, predicting latent credit default behaviors can support loan approval, making it the most important research topic. Machine learning-based methods have received much attention here and can achieve proper performance in some scenarios. However, machine learning-based models lack a resilient objective function, which can cause failure to maintain stable performance across different problem scenarios. This work introduces deep learning, whose objective function has a high degree of freedom, and proposes an artificial neural network-based intelligent prediction model for financial credit default behaviors. The technical framework is composed of two stages: information encoding and a backbone network. The former encodes the initial features, and the latter builds a multi-layer perceptron to output prediction results. Finally, experiments are conducted on a real-world dataset to evaluate the efficiency of the proposed approach. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-11T08:00:00Z DOI: 10.1142/S0218126623501748
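The two-stage framework described above (encoded features fed into a multi-layer perceptron backbone) can be sketched as a forward pass. The layer sizes and random weights below are illustrative; a real model would be trained on encoded borrower features rather than random inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_predict(features, weights):
    """Forward pass: hidden ReLU layers, sigmoid output in (0, 1)."""
    h = features
    for W, b in weights[:-1]:
        h = relu(h @ W + b)          # hidden layers (the "backbone")
    W, b = weights[-1]
    return sigmoid(h @ W + b)        # predicted default probability

# 8 encoded borrower features -> 16 hidden units -> 1 output
weights = [(rng.normal(size=(8, 16)), np.zeros(16)),
           (rng.normal(size=(16, 1)), np.zeros(1))]
prob = mlp_predict(rng.normal(size=(4, 8)), weights)  # 4 sample borrowers
```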
- Statement-Level Software Defect Prediction Based on Improved R-Transformer
-
Authors: Yulei Zhu, Yufeng Zhang, Zhenbang Chen Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Engineers use software defect prediction (SDP) to locate vulnerable areas of software. Recently, statement-level SDP has attracted the attention of researchers due to its ability to localize faulty code areas. This paper proposes DP-Tramo, a new model dedicated to improving the state of the art in statement-level SDP. We use Clang to extract abstract syntax trees from source code and extract 32 statement-level metrics as static features for each statement. Then we feed the static features and token sequences as inputs to our improved R-Transformer to learn the syntactic and semantic features of the code. Furthermore, we use label smoothing and a weighted loss to improve the performance of DP-Tramo. To evaluate DP-Tramo, we perform 10-fold cross-validation on 119,989 C/C++ programs selected from Code4Bench. Experimental results show that DP-Tramo classifies the dataset with an average performance of 0.949, 0.602, 0.734 and 0.737 regarding recall, precision, accuracy and F1-measure, respectively. DP-Tramo outperforms the baseline method on F1-measure by 1.2% while maintaining a high recall rate. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-11T08:00:00Z DOI: 10.1142/S0218126623501839
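The two training refinements mentioned above, label smoothing and a weighted loss, can be sketched together. The smoothing factor and class weights below are illustrative, not DP-Tramo's settings.

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Label smoothing: move eps of the probability mass off the target
    class and spread it uniformly over all classes."""
    one_hot = np.eye(num_classes)[y]
    return one_hot * (1 - eps) + eps / num_classes

def weighted_cross_entropy(probs, y, class_weights, eps=0.1):
    """Cross-entropy against smoothed targets, weighted per sample by
    its class weight (to boost the rare, defective class)."""
    targets = smooth_labels(y, probs.shape[1], eps)
    w = np.asarray(class_weights)[y]                # weight per sample
    losses = -np.sum(targets * np.log(probs), axis=1)
    return float(np.mean(w * losses))

probs = np.array([[0.9, 0.1], [0.2, 0.8]])          # model outputs
y = np.array([0, 1])                                # true classes
loss = weighted_cross_entropy(probs, y, class_weights=[1.0, 5.0])
```

Up-weighting class 1 makes errors on the rare class dominate the loss, which is why the weighted value exceeds the unweighted one on the same predictions.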
- A 40% PAE and 34 dBm Peak OIP3 CMOS Power Amplifier with Integrated
Zero Power Consumption Phase Linearizer-
Authors: Premmilaah Gunasegaran, Jagadheswaran Rajendran, Selvakumar Mariappan, Yusman Yusof, Zulfiqar Ali Abd Aziz, Narendra Kumar Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, an Integrated Phase Linearizer (IPL) technique is designed to improve the linearity of a CMOS power amplifier (PA). The IPL is integrated at the gate of the PA so that the effect of the parasitic gate-to-source capacitance of the main transistor is compensated by the linearizer. Thus, it improves the third-order output intercept point (OIP3) without trading off power-added efficiency (PAE). The proposed solution is designed and fabricated in a 180 nm CMOS technology process, consuming a chip area of 2.25 mm2. At the operating frequency of 2.45 GHz, it exhibits a gain of 11.14 dB with unconditional stability from 1 GHz to 10 GHz. With a quiescent bias current of 19.35 mA, the IPL-PA delivers a maximum output power of 15.20 dBm with 40.86% peak PAE, 34.91 dBm peak OIP3 and a maximum power consumption of 63.65 mW at 2.45 GHz with a 1.8 V supply voltage. The proposed linearization scheme proves to be an excellent solution for integration in low-power transceivers. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-11T08:00:00Z DOI: 10.1142/S021812662350189X
- [math] Filtering Controller for Discrete Time-Varying Delay System with
Missing Measurements-
Authors: Fatima Zahra Darouiche, El Houssaine Tissir Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The aim of this paper is to address the analysis and design problem of [math] filtering for discrete time-varying delay systems with missing measurements. Our attention is focused on the filter design, using the Scaled Small Gain (SSG) approach, guaranteeing the asymptotic stability of the augmented system. A transformation model is obtained via the three-term approximation method and the Input–Output (IO) approach based on the SSG theorem. The use of the SSG theorem for the stability of discrete time-varying delayed systems with missing measurements has not been studied elsewhere in the literature; this is the main innovation of this paper. Less conservative results than previous ones are obtained and are established in terms of LMIs. Numerical examples illustrate the applicability of the proposed methodology. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-11T08:00:00Z DOI: 10.1142/S0218126623501463
- An Efficient Channel Attention-Enhanced Lightweight Neural Network Model
for Metal Surface Defect Detection-
Authors: Xikun Xie, Changjiang Li, Yang Liu, Junjie Song, Jonghyun Ahn, Zhong Zhang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Industrial surface defect detection based on deep learning object detection algorithms suffers from low detection accuracy, low detection speed and difficulty in deploying online inspection. To solve these problems effectively, an efficient channel attention-enhanced lightweight neural network model named EMV2-YOLOX is proposed in this paper. The algorithm incorporates the ECA module into the lightweight backbone extraction network MobileNetV2 to achieve adaptive adjustment of channel information weights, improving the algorithm's feature extraction capability. The YOLOX model is also introduced to enhance the model's identification and localization of tiny defects. The improved algorithm guarantees accuracy while improving detection performance and suiting the capacity of hardware devices. The experimental results show that the highest accuracy is achieved on the GCT10 and NEU public defect datasets, with mean Average Precision values of 0.86 and 0.68, respectively, higher than that of the EMV2-YOLOv4 model. The model has only 10.24 M parameters, and the detection rate is 54.25 f/s, the highest performance on embedded devices. EMV2-YOLOX, combined with the attention mechanism, can efficiently extract the location and semantic information of hard-to-detect defects and plays a vital role in intelligent detection methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-07T08:00:00Z DOI: 10.1142/S0218126623501785
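The ECA module mentioned above reweights channels by pooling each channel to a single descriptor, running a small 1-D convolution across the channel axis, and gating with a sigmoid. A minimal sketch, with fixed illustrative convolution weights standing in for the learned kernel:

```python
import numpy as np

def eca(feature_map, k=3):
    """Efficient Channel Attention sketch for a (C, H, W) feature map:
    global-average-pool the channels, convolve the channel descriptor
    with a 1-D kernel of size k, and reweight channels by a sigmoid gate."""
    c, h, w = feature_map.shape
    pooled = feature_map.mean(axis=(1, 2))          # (C,) channel descriptor
    pad = k // 2
    padded = np.pad(pooled, pad)
    kernel = np.full(k, 1.0 / k)                    # illustrative fixed weights
    conv = np.array([np.dot(padded[i:i + k], kernel) for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))              # sigmoid attention weights
    return feature_map * gate[:, None, None]

out = eca(np.ones((8, 4, 4)))
```

The appeal of ECA for a lightweight backbone is that the only parameters are the k kernel weights, versus the two fully connected layers of squeeze-and-excitation attention.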
- Research on Calibration Method of Rail Profile Measurement System-
Authors: Ning Wang, Hao Wang, Shengchun Wang, Xinxin Zhao, Fan Wang, Jinfei Hao Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The rail profile measurement system, based on the principle of infrared structured light, obtains the full-section profile of the rail, which is at the core of wheel/rail interaction. System calibration determines whether the accuracy is high enough to guide railway maintenance. In this study, we propose a convenient and efficient checkerboard plane-target calibration method built on partition-based calibration. The method theoretically resolves the three unavoidable factors that limit accuracy in the traditional method and, with the designed equipment, is easy to use in the field. Tests show that this method achieves higher accuracy. We also propose a correction method for the stitching calibration of the double-sided cameras: based on a standard block, high-precision stitching of the rail’s full-section profile is achieved. Finally, carefully designed field tests prove that the accuracy is significantly improved from 0.3 mm to 0.1 mm, and the repeatability is improved as well. The method proposed in this study can be extended to similar systems, improving accuracy and simplifying calibration procedures. Citation: Journal of Circuits, Systems and Computers PubDate: 2023-01-06T08:00:00Z DOI: 10.1142/S0218126623501736
- Impacts of Electric Vehicle Connected with Charging Station using Student
Psychology Optimization Algorithm (SPOA) and AdaBoost Algorithm-
Authors: M. Murugan, S. Satheesh Kumar, M. Panneer Selvam, P. Rajesh, Francis H. Shajin Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents an electric vehicle (EV) connected to a charging station (CS) based on the proposed method. The proposed technique is the joint execution of the Student Psychology Optimization Algorithm (SPOA) and the AdaBoost algorithm, and is therefore called the SPOA-AdaBoost algorithm. In particular, the annualized social cost of the CS and EV charging station (EVCS) set forms the objective function of the allocation model. The EVCS is linked with the CS and provides the charging service for electric vehicles. The vehicle-to-grid (V2G) functions of electric vehicles are properly considered in the present optimization model. The load demands are treated as controllable resources, and the EV charging optimization problems are coupled with the allocation problems. When an EV arrives at the charging station, it reports its energy demand and expected departure time to the EVCS operator. Every EVCS can access the details of the electric vehicles via the proposed method and thereby balance the energy demand against the total supply. The constraints are the power flow equations, equivalent load demands on the buses, branch current constraints, discrete size restrictions for CS, constraints on CS outputs, EV participation in V2G activities, mutual exclusivity of the EV charge and discharge statuses, EV owner charge satisfaction, EV SOC restrictions, occupied CF quantities, and EVCS CF sufficiency. The present model is executed on the MATLAB/Simulink platform, and the performance of the proposed model is compared with other systems. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-31T08:00:00Z DOI: 10.1142/S0218126623501530
- Novel Critical Gate-Based Circuit Path-Level NBTI-Aware Aging Circuit
Degradation Prediction-
Authors: Hui Xu, Rui Zhu, Xia Sun, Xianjin Fang, Pan Qi, Huaguo Liang, Zhengfeng Huang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. With the rapid development of semiconductor technology, chip integration is growing beyond imagination, and aging has become one of the main threats to circuit reliability. To develop aging degradation prediction, it is critical to evaluate aging so as to avoid circuit failures. At present, research on aging prediction focuses mainly on the transistor and gate levels: at the transistor level the precision is high but the speed is low, whereas at the gate level the accuracy is lower but the speed is very fast. In this paper, a path-level aging prediction framework based on the novel critical gate is proposed. The 10-year Negative Bias Temperature Instability (NBTI) aging delay of the critical subcircuit extracted by the novel critical gate is obtained, and the aging delay trend is learned using a linear regression model. The critical path aging delay can then be obtained quickly from the framework built by machine learning with the linear regression model. Experimental results on the ISCAS’85 and ISCAS’89 benchmark circuits, based on the 45-nm PTM, show that the proposed framework is superior to existing methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-31T08:00:00Z DOI: 10.1142/S021812662350175X
- Wireless Communication for Drilling Using Acoustic Wave Based on MIMO-OFDM-
Authors: Yanfeng Geng, Zhong Zheng Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Underground data transmission methods are divided into wired and wireless. Wired transmission includes traditional cable and optical fiber; wireless transmission includes pressure waves and electromagnetic waves. The limited down-hole data upload rate has become the bottleneck. This paper studies a wireless acoustic transmission system that uses the drilling fluid channel, a multi-path channel with frequency-selective fading; during drilling, the drill string takes a spatially spiral bent shape. The paper provides exhaustive research on MIMO-OFDM for drilling acoustic telemetry systems. The characteristics of the drilling fluid channel are discussed, and the performances of multiple-input multiple-output (MIMO), multiple-input single-output (MISO) and single-input single-output (SISO) wireless acoustic transmission systems are compared. Acoustic waves transmitted along the drilling fluid channel can realize underground wireless communication with a high transmission rate and a low bit error rate. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-31T08:00:00Z DOI: 10.1142/S0218126623501773
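The OFDM half of such a telemetry link can be sketched with a direct DFT: subcarrier symbols are transformed to the time domain, a cyclic prefix is prepended to absorb the multi-path spread of the channel, and the receiver strips the prefix and transforms back. This is a minimal noiseless sketch under those assumptions, not the paper’s system.

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform (O(N^2), fine for a sketch)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT with the 1/N factor on the synthesis side."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def ofdm_modulate(symbols, cp_len=2):
    """One OFDM symbol: IDFT of the subcarrier symbols plus a cyclic
    prefix copied from the tail to absorb multi-path spread."""
    t = idft(symbols)
    return t[-cp_len:] + t

def ofdm_demodulate(signal, cp_len=2):
    """Strip the cyclic prefix and return to the subcarrier domain."""
    return dft(signal[cp_len:])

tx = [1, -1, 1j, -1j]  # QPSK-like symbols on four subcarriers
rx = ofdm_demodulate(ofdm_modulate(tx))
```

The cyclic prefix is what turns the frequency-selective drilling fluid channel into a set of independent flat subchannels, the property MIMO-OFDM exploits.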
- An Integrated XI-UNet for Accurate Retinal Vessel Segmentation-
Authors: C. Aruna Vinodhini, S. Sabena Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Segmentation of blood vessels captured with a fundus camera is the cornerstone of the medical examination of several retinal vascular disorders. Recent research on vessel segmentation models focuses on deep neural learning. To tackle the segmentation of the toughest retinal vessels, such as thin vessels, a new neural network architecture is developed based on U-Net, integrating depth-wise separable convolution and an Inception network that exploits sparsity of information. The developed XI-UNet network is trained and tested on the DRIVE, STARE and CHASE_DB1 public datasets and outperforms the prevalent methods. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-31T08:00:00Z DOI: 10.1142/S0218126623501827
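The parameter saving from depth-wise separable convolution, one of the ideas folded into XI-UNet, is easy to see by counting weights. The 64-in/128-out/3x3 shapes below are arbitrary examples, not the paper’s layer sizes (bias terms omitted).

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution layer."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise (one k x k filter per input channel) followed by a
    pointwise 1 x 1 convolution."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 128, 3)        # 64*128*9  = 73728 weights
sep = separable_params(64, 128, 3)   # 576 + 8192 = 8768 weights
```

The roughly 8x reduction here is why separable convolutions let a U-Net grow deeper, and catch thinner vessels, at the same parameter budget.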
- A Space-Efficient Universal and Multi-Operative Reversible Gate Design
Based on Quantum-Dots-
Authors: Saeid Seyedi, Nima Jafari Navimipour Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Because of its high speed, low power consumption, low latency and possible use at the atomic and molecular levels, Quantum-dot Cellular Automata (QCA) technology is one of the future nanoscale technologies that can replace present transistor-based technology. Reversible logic can be regarded as an appropriate candidate for creating QCA circuits. In this research, a new structure for multi-operative reversible designs is suggested. The Saeid Nima Gate (SNG) proposed in this study is a new, highly effective, multi-operative, universal reversible gate implemented in QCA nanotechnology employing both majority and inverter gates. Reversible logic gates have equal numbers of inputs and outputs (n inputs and n outputs), which reduces the energy lost during computation. The proposed gate is modified and reorganized for further optimization, employing exact QCA cell interaction. All fundamental logic gates are implemented with it to demonstrate the universality of the proposed SNG. As reversible logic has advanced, the suggested solution attains a lower quantum cost than previously reported systems. The suggested design is simulated using the QCADesigner-E tool. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-28T08:00:00Z DOI: 10.1142/S0218126623501669
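Two notions from this abstract can be sketched directly: the three-input majority vote that, together with the inverter, forms the QCA primitive set (fixing one input to 0 or 1 turns it into AND or OR), and the bijection test that decides whether an n-input/n-output gate is reversible. The Fredkin gate below is a standard reversible example used purely for illustration, not the paper’s SNG.

```python
from itertools import product

def maj(a, b, c):
    """Three-input majority gate, the basic QCA logic primitive."""
    return (a & b) | (b & c) | (a & c)

def is_reversible(gate, n):
    """An n-input/n-output gate is reversible iff its truth table is a
    bijection, i.e. all 2**n output tuples are distinct."""
    outs = [gate(*bits) for bits in product((0, 1), repeat=n)]
    return len(set(outs)) == 2 ** n

def fredkin(c, a, b):
    """Classic reversible gate: swap a and b when the control c is 1."""
    return (c, b, a) if c else (c, a, b)
```

The bijection test is exactly why equal input/output counts matter: no two input patterns may collapse onto one output, so no information (and, by Landauer’s principle, no associated minimum energy) is lost.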
- Modified Dual Mode Transmission Gate Diffusion Input Logic for Improving
Energy Efficiency-
Authors: Neetika Yadav, Neeta Pandey, Deva Nand Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents a modified energy-efficient Dual Mode Transmission Gate Diffusion Input (DMTGDI) design, termed M-DMTGDI. A contention issue in the dynamic mode operation of the existing DMTGDI and DMPL designs is identified and illustrated through mathematical formulation and simulations. To resolve this issue, the pre-charge/pre-discharge transistor in the existing DMTGDI design is replaced by a dual mode inverter in the proposal. The functional verification and performance comparison of NAND, NOR and XOR gates and a 1-bit full adder based on the proposed M-DMTGDI are carried out using the 90 nm BSIM4 model card for bulk CMOS with the Symica DE tool. The performance of the circuits is evaluated in terms of power, delay and Power Delay Product (PDP) in both static and dynamic modes. The variation of PDP with the ratio of time the circuit runs in dynamic mode versus static mode is also investigated to analyze the energy efficiency of the M-DMTGDI design. The proposed approach offers maximum PDP reductions of 33.52%, 99.39% and 96.61% for 2-input gates compared with their footed DML, DMPL and DMTGDI counterparts, respectively. The reduction in PDP is even more significant in the 1-bit full adder circuit, where the corresponding values are 94.18%, 99.41% and 99.79%, respectively. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-28T08:00:00Z DOI: 10.1142/S0218126623501712
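The Power-Delay Product used as the figure of merit above is simply energy per switching event; the gate figures below are hypothetical, chosen only to show how percentage reductions like those reported are computed.

```python
def pdp(power_w, delay_s):
    """Power-Delay Product: energy per switching event (joules)."""
    return power_w * delay_s

def pdp_reduction(baseline, proposed):
    """Percentage PDP reduction of a proposed design over a baseline."""
    return 100.0 * (baseline - proposed) / baseline

# Hypothetical gate figures (not the paper's measured values):
base = pdp(12e-6, 90e-12)   # 12 uW at 90 ps
new = pdp(5e-6, 55e-12)     # 5 uW at 55 ps
```

Because PDP multiplies power by delay, a design that trades a little speed for a large power saving (or vice versa) can still win decisively on energy, which is why both modes are reported.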
- Congestion Detection and Alleviation Mechanism Using a Multi-Level Cluster
Based Cuckoo Hosted Rider Search Multi-Hop Hierarchical Routing Protocol in Wireless Sensor Networks-
Authors: Kavita K. Patil, T. Senthil Kumaran Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Congestion occurs easily in wireless sensor networks because of their centralized traffic pattern. A mono-sync wireless sensor network in a dense environment typically experiences congestion from multiple traffic flows, which leads to excess energy consumption and severe packet loss. To overcome these issues, this paper proposes a congestion detection and alleviation mechanism using a cluster-based, heuristically optimized hierarchical routing protocol. Congestion detection and alleviation utilize the features of the sensor nodes. Congestion is categorized into two types: (i) node-level congestion and (ii) link-level congestion. Node-level congestion is detected by assessing buffer utilization and the interval between consecutive data packets. Link-level congestion is assessed by computing link usage from the back-off step of round-robin carrier sense multiple access with collision avoidance. Congestion at an affected node/link is alleviated reactively through the cuckoo hosted rider search multi-hop routing algorithm, which has two phases: cluster head selection and multi-path routing. Cluster head selection is performed through the Taylor multiple random forest kernel fuzzy C-means clustering algorithm, and multi-path routing is performed through the cuckoo hosted rider search multi-hop routing algorithm. The proposed method is simulated in a network simulator tool, and performance metrics such as packet delivery ratio, delay, energy consumption, packet drop, overhead, network lifetime and throughput are calculated.
The experimental outcomes of the proposed technique show 11.6%, 18.4% and 28.1% lower delay, 78.2%, 65.4% and 52.6% higher packet delivery ratio, and 29.2%, 37.4% and 40.8% lower packet drop compared with three existing methods: congestion detection and alleviation using multi-attribute decision-making in an optimization-based hybrid congestion alleviation routing protocol; congestion detection and alleviation using hybrid K-means with greedy best-first search for packet rate reduction in an adaptive weight firefly algorithm/ant colony optimization based routing protocol; and congestion detection and alleviation using a multi-input time-on-task optimization algorithm for an altered gravitational search algorithm routing protocol in wireless sensor networks. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-23T08:00:00Z DOI: 10.1142/S0218126623501621
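Node-level congestion detection as described, buffer utilization combined with packet timing, can be sketched as a simple weighted score. The 0.5/0.5 weights and the 0.8 threshold are illustrative assumptions, not values from the paper.

```python
def node_congestion_level(buf_used, buf_size, arrival_interval, service_interval):
    """Blend buffer utilization with traffic intensity (service time over
    packet inter-arrival time, capped at 1) into a 0..1 congestion score."""
    utilization = buf_used / buf_size
    intensity = min(service_interval / arrival_interval, 1.0)
    return 0.5 * utilization + 0.5 * intensity

def is_congested(level, threshold=0.8):
    """Flag a node for reactive alleviation once the score crosses the
    threshold."""
    return level >= threshold
```

A node whose packets arrive faster than it can serve them (intensity near 1) with a nearly full buffer would trip the flag and trigger rerouting through the multi-path phase.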
- A Hybrid Neural Network-Based Intelligent Forecasting Approach for
Capacity of Photovoltaic Electricity Generation-
Authors: Yinjuan Zhang, Yongke Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In recent years, photovoltaic power generation technology has become a key national planning direction. Effective prediction of photovoltaic (PV) electricity generation capacity is important so that administrators can schedule resource allocation well. Currently, most PV electricity generation forecasting models take meteorological data as the input parameters of a neural network; however, the many input parameters and redundant data make the network difficult to converge. Besides, a single type of neural network model cannot capture comprehensive characteristics, which may degrade forecasting as the series evolves. We therefore propose a hybrid neural network-based intelligent forecasting approach for PV electricity generation capacity. First, a convolutional neural network (CNN) is adopted to extract the connection between features and data through convolution operations. Then, the extracted feature vector of the time series is fed into a long short-term memory (LSTM) model. Finally, the forecasting values are predicted by training the outlined LSTM network. The experimental results indicate that such a hybrid CNN-LSTM model can significantly improve the precision of PV electricity generation prediction and provides an effective way to forecast the generation power of a PV system. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-23T08:00:00Z DOI: 10.1142/S0218126623501724
- A New Radio Frequency Distortion Dynamic Range Index for Performance
Evaluation-
Authors: Desheng Wang, Yangjie Wei, Dong Ji, Yi Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Dynamic range and spurious-free dynamic range are two of the most critical performance indexes in the field of radio frequency (RF). However, the definitions of both indexes are ambiguous and their characterization ability is insufficient, resulting in unfair, and even mutually incompatible, performance evaluations in practice. In this study, a new index named radio frequency distortion dynamic range and its corresponding evaluation method are proposed to achieve a fair and detailed dynamic range evaluation by unifying the existing definitions and improving the performance resolution ability. First, a sliding threshold selection method is introduced to replace the classification-based definition of dynamic range and characterize more of its details. Second, a “performance body” evaluation method is proposed to obtain a more comprehensive evaluation by generalizing the current single-condition evaluation to one based on scanning critical conditions. Experiments show that the proposed radio frequency distortion dynamic range index, together with the proposed evaluation method, reduces the ambiguity of dynamic range evaluation and can distinguish performance differences that the current indexes cannot. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-19T08:00:00Z DOI: 10.1142/S0218126623501608
- FPGA-Based Implementation of an Error-Controllable and Resource-Efficient
Approximation Method for Transcendental Functions-
Authors: Zhenyu Zhang, Guangsen Wang, Qing Liu, Zhiwei Wang, Kang Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Transcendental functions cannot be expressed algebraically, which makes efficient and accurate approximation a major challenge. Lookup tables (LUTs) and piecewise fitting are common traditional methods; however, they either trade approximation accuracy for computation and storage, or require unaffordable resources when the expected accuracy is high. In this paper, we develop a high-precision approximation method that is error-controllable and resource-efficient. The method divides a transcendental function into two parts based on its slope. The steep part is approximated by a LUT with interpolation, while the gentle part is approximated by the range-addressable lookup table (RALUT) algorithm. The boundary between the two parts is adjusted adaptively according to the expected accuracy. Moreover, we analyze the error sources of the method in detail and propose an optimal selection method for table resolution and data bit-width. The proposed algorithm is verified on an actual FPGA board, and the results show that its error can be made arbitrarily low. Compared with other methods, the proposed algorithm exhibits a more stable increase in resource consumption as the required accuracy grows, consuming fewer hardware resources especially at middle accuracy, with at least 30% of LUT slices saved. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-19T08:00:00Z DOI: 10.1142/S0218126623501633
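The LUT-with-interpolation half of the scheme can be sketched as a uniform table plus linear interpolation; halving the table step roughly quarters the worst-case error, which is the knob behind "error-controllable". math.exp over [0, 1] is just a stand-in target function, not one from the paper.

```python
import math

def build_table(f, lo, hi, n):
    """Uniform lookup table: n+1 samples of f over [lo, hi]."""
    step = (hi - lo) / n
    return [f(lo + i * step) for i in range(n + 1)], lo, step

def lut_interp(table, lo, step, x):
    """Approximate f(x) by linear interpolation between the two
    neighbouring table entries (clamped at the upper edge)."""
    i = min(int((x - lo) / step), len(table) - 2)
    frac = (x - lo) / step - i
    return table[i] + frac * (table[i + 1] - table[i])

tbl, lo, step = build_table(math.exp, 0.0, 1.0, 256)
err = abs(lut_interp(tbl, lo, step, 0.337) - math.exp(0.337))
```

In hardware the table index comes from the top bits of x and the fraction from the remaining bits, so one multiply and one add replace any polynomial evaluation; the RALUT handles the gentle-slope region where far coarser spacing suffices.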
- A Neural Network-Based Method for Surface Metallization of Polymer
Materials-
Authors: Lina Liu, Yuhao Qiao, Dongxia Wang, Xiaoguang Tian, Feiyue Qin Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. It is no secret that polymers have been employed extensively in a variety of industries. Polymers, on the other hand, have faced difficulties in their development because of their complicated chemical composition and structure. Data-driven approaches in polymer science and technology have opened new research directions through deep learning models and vast data assets. In the growing area of polymer informatics, deep learning methods based on factual data are being used to speed up the performance assessment and process improvement of new polymers. In this research, we describe how a deep neural network (DNN) can forecast the surface metallization properties of polymer materials. First, we collect a raw dataset of polymer material characteristics. The raw data are filtered and normalized using the min–max normalization approach. To convert the normalized data into numerical features, principal component analysis (PCA) is employed. Polymer surface metallization characteristics are then predicted using the proposed DNN technique. The proposed and conventional approaches are also compared to evaluate the research thoroughly. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-19T08:00:00Z DOI: 10.1142/S0218126623501670
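The preprocessing chain described, min–max normalization followed by PCA, can be sketched without any ML library. Power iteration recovers only the first principal component and stands in here for the full PCA step; the toy data are invented for illustration.

```python
def minmax(col):
    """Min-max normalization of one feature column to [0, 1]."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def first_pc(rows, iters=200):
    """First principal component via power iteration on the sample
    covariance matrix -- a minimal stand-in for full PCA."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

Normalizing first matters: without it, features with large physical units dominate the covariance and PCA picks directions that reflect units rather than structure.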
- Long-Range Technology-Enabled Smart Communication: Challenges and
Comparison-
Authors: Sneha, Praveen Malik, Sudipta Das, Syed Inthiyaz Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Among other technologies, the Internet of Things (IoT) is one of the promising technologies that will help make our environment smarter and gather, analyze and process sensor data without any human intervention. The Low Power Wide Area Network (LPWAN) is an IoT technology family that includes long-range (LoRa) technology, which can provide wide coverage at very low power and boost data communication in a smart environment. The ease of integrating LoRa with electronic systems has opened much new scope for low-power wide-area networks in IoT applications. This paper provides a well-rounded review of LPWAN technology and in-depth details of LoRa technology. It gives good insight into the use of LoRa technology to design smart cities and the types of antenna technology used in LoRa data communications. Finally, a few open issues, possible solutions and future developments are pointed out. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-15T08:00:00Z DOI: 10.1142/S021812662350161X
- A Switched-Capacitor, Integrator-Multiplexing, Second-Order Delta-Sigma
Modulator Featuring a Single Differential Difference Amplifier for Portable EEG Application-
Authors: Quanzhen Duan, Dameng Kong, Chenxi Lin, Shengming Huang, Zhen Meng, Yuemin Ding Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. We present a novel switched-capacitor, integrator-multiplexing, second-order delta-sigma modulator (DSM) featuring a single differential difference amplifier (DDA). Power consumption is low and resolution is high when this DSM is used for portable electroencephalographic applications. A single DDA (rather than a conventional operational transconductance amplifier), with appropriate switch and capacitor architectures, is used to create the second-order switched-capacitor DSM; this configuration ensures high resolution. The modulator was implemented in a standard 180 nm complementary metal–oxide–semiconductor process. At a supply voltage of 1.8 V, a signal bandwidth of 250 Hz and a sampling frequency of 200 kHz, simulations demonstrated that the modulator achieved an 82 dB peak signal-to-noise-and-distortion ratio and an effective number of bits of 14. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-12T08:00:00Z DOI: 10.1142/S0218126623501554
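Second-order delta-sigma behavior, two cascaded integrators, a 1-bit quantizer and feedback, can be sketched in discrete time: averaging the ±1 bitstream recovers a DC input, which is the noise-shaping property the modulator relies on. This is a behavioral sketch under textbook assumptions, not the paper’s switched-capacitor/DDA circuit.

```python
def dsm2(samples):
    """Behavioral second-order delta-sigma modulator: two discrete-time
    integrators and a 1-bit quantizer inside a feedback loop."""
    i1 = i2 = 0.0
    bits = []
    for x in samples:
        y = 1.0 if i2 >= 0.0 else -1.0  # 1-bit quantizer decision
        i1 += x - y                      # first integrator
        i2 += i1 - y                     # second integrator
        bits.append(y)
    return bits

# A DC input of 0.4 should yield a bitstream whose mean approaches 0.4.
out = dsm2([0.4] * 2000)
avg = sum(out) / len(out)
```

The feedback forces the long-run mean of y to track x, pushing quantization noise to high frequencies where the 200 kHz sampling rate versus 250 Hz bandwidth (oversampling ratio 400) lets the decimation filter remove it, hence the 14 effective bits.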
- A Wireless Virtual Reality-Based Multimedia-Assisted Teaching System
Framework under Mobile Edge Computing-
Authors: Wei Cui, Ding Eng Na, Yuting Zhang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In recent years, virtual reality (VR) has gradually moved from pure scientific research into daily education and teaching activities. In the area of assisted teaching, typical computer software still plays an important role, yet remote teaching built on it conveys only voice and lacks a sense of realistic presence. Especially in the COVID-19 scenario, remote teaching activities with proper perceptibility are in urgent demand. To address this challenge, this paper proposes a wireless VR-based multimedia-assisted teaching system framework for mobile edge computing networks. In this framework, cooperative edge caching and adaptive streaming based on viewport prediction are adopted to jointly improve the quality of experience (QoE) of VR users. We investigate the resource management problem of caching and adaptive streaming in this framework and, considering the complexity of the formulated problem, propose a distributed learning scheme to solve it. The experimental results verify that the studied methods improve user QoE. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-10T08:00:00Z DOI: 10.1142/S0218126623501165
- An Improved Sparrow Search Algorithm for Location Optimization of
Logistics Distribution Centers-
Authors: Yaqin Ou, Lei Yu, Ailing Yan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In a modern society of booming business development, it is of great importance to investigate the location optimization of logistics distribution centers. To achieve greater distribution system efficiency and save both the distribution costs and the construction cost of a logistics distribution center, the good point set method and a decreasing nonlinear inertia weight are proposed to improve the sparrow search algorithm (SSA) for solving the mathematical model of logistics distribution center location. First, to prevent the SSA from falling into local optima while improving its convergence speed and efficiency, the good point set method and the decreasing nonlinear inertia weight are applied to improve the SSA. Test results on eight benchmark functions show that the proposed ISSA achieves smaller fitness values and faster convergence. Second, compared with SSA, WOSA and PSO, the ISSA attains the lowest total cost when solving the mathematical model of logistics distribution center location. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-10T08:00:00Z DOI: 10.1142/S0218126623501505
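The decreasing nonlinear inertia weight can be sketched as a simple schedule: large early in the search for global exploration, small late for local exploitation. The quadratic form and the 0.9 to 0.4 bounds are common illustrative choices from the swarm-optimization literature, not necessarily the paper’s exact formula.

```python
def nonlinear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Nonlinearly decreasing inertia weight over iterations 0..t_max:
    quadratic decay from w_start down to w_end."""
    return w_end + (w_start - w_end) * (1.0 - t / t_max) ** 2

schedule = [nonlinear_inertia(t, 100) for t in range(101)]
```

Compared with a linear ramp, the quadratic decay spends proportionally more iterations at low inertia, biasing the later search toward refinement around the best distribution-center candidates.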
- DMPRA: A Dynamic Reconfiguration Mechanism for a Dual-Mode Programmable
Reconfigurable Array Architecture-
Authors: Kangle Li, Lin Jiang, Xingjie Huang, Kun Yang, Xiaoyan Xie, Junyong Deng, Rui Shan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. To improve the performance and power efficiency of various algorithms in specific applications, reconfigurable architecture has become an effective choice in academia and industry. However, due to slow context updates and insufficient flexibility, existing reconfigurable architectures suffer from performance bottlenecks. This paper therefore proposes a dynamic reconfiguration mechanism for a dual-mode programmable reconfigurable array architecture. The mechanism adopts Huffman-like coding and mask addressing and, through an H-tree transmission network, can transmit a reconfiguration instruction/context to a specific processing element or processing element cluster in unicast, multicast or broadcast mode within one clock cycle, while shutting down unnecessary processing elements or clusters according to the current configuration. Meanwhile, a homogeneous reconfigurable array, in which each processing element supports both instruction flow and data flow modes, is designed to verify the correctness and effectiveness of the proposed dynamic reconfiguration mechanism. Finally, the proposed work is implemented at the register transfer level and as a field-programmable gate array prototype, and its performance is verified using a high-efficiency video coding algorithm. The results show that the proposed reconfiguration mechanism effectively improves hardware resource utilization and reconfiguration efficiency and achieves a performance breakthrough for the reconfigurable architecture. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-10T08:00:00Z DOI: 10.1142/S0218126623501578
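The unicast/multicast/broadcast addressing can be sketched with a simple id/mask match: a full mask names one processing element, a partial mask names a cluster, and an empty mask reaches every PE. The 16-PE array size and bit layout are hypothetical choices for illustration, not the paper’s encoding.

```python
def select_pes(pe_ids, target, mask):
    """A PE is addressed when its id agrees with the target on every bit
    the mask cares about: full mask = unicast, partial mask = multicast
    (a cluster), zero mask = broadcast."""
    return [pe for pe in pe_ids if (pe & mask) == (target & mask)]

pes = list(range(16))                        # hypothetical 4x4 PE array
unicast = select_pes(pes, 0b0101, 0b1111)    # exactly one PE
multicast = select_pes(pes, 0b0100, 0b1100)  # one 4-PE cluster
broadcast = select_pes(pes, 0, 0)            # every PE
```

Because each PE evaluates the match locally and in parallel as the context travels down the H-tree, all three delivery modes complete in the same single cycle.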
- Optimization of Performance Parameters of Phase Frequency Detector Using Taguchi DoE and Pareto ANOVA Techniques
Authors: Jyoti Sharma, Gaurav Kumar Sharma, Tarun Varma, Dharmendar Boolchandani Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper utilizes the Taguchi design of experiments (DoE) and Pareto analysis of variance (ANOVA) statistical approaches to demonstrate circuit optimization. The phase-frequency detector (PFD) circuit based on dynamic logic has been chosen for optimization. For the various MOSFETs of the PFD, three levels and three factors of power supply and width of PMOS and NMOS ([math], [math], and [math]) are considered to be the critical performance-governing factors. The Taguchi technique determines the level of significance of a factor that influences a given performance parameter. The crucial factor for a given response is determined via ANOVA analysis. The optimum values of the parameters [math], [math], and [math] are likewise determined using this procedure to maximize the circuit’s overall performance. Taguchi DoE and Pareto ANOVA analyses have been performed using the Minitab software. Simulating the circuit with GPDK 180 nm CMOS technology using these methods confirms that the acquired parameters give the best performance. The Cadence Virtuoso tool has been used to conduct pre-layout and post-layout simulations. The simulation outcomes are reasonably close to the ANOVA-predicted results. The phase noise, power dissipation, and operating frequency of the proposed PFD are [math] dBc/Hz, 9.83 μW, and 10.21 GHz, respectively, and it occupies a chip area of 300.41 [math]. The proposed PFD is used to implement a charge-pump PLL which performs effectively with a settling time of 2.59 μs. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-10T08:00:00Z DOI: 10.1142/S021812662350158X
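The Taguchi technique mentioned above ranks factor levels by a signal-to-noise (S/N) ratio. A minimal sketch of the standard "larger-the-better" S/N ratio and a per-level main-effect table, using made-up response values rather than the paper's data:

```python
import math

def sn_larger_better(ys):
    """Taguchi 'larger-the-better' S/N ratio in dB for replicate responses ys."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

def main_effects(levels, sn):
    """Average S/N per factor level; the level with the highest mean is preferred.
    levels[i] is this factor's level (0..2) in experimental run i."""
    means = {}
    for lvl, s in zip(levels, sn):
        means.setdefault(lvl, []).append(s)
    return {lvl: sum(v) / len(v) for lvl, v in means.items()}

# Illustrative 9-run example: one S/N value per run and one factor's
# level assignment (a column of an L9 orthogonal array)
sn = [sn_larger_better([y]) for y in [8.1, 7.9, 9.4, 10.2, 9.8, 8.8, 11.0, 10.5, 9.9]]
print(main_effects([0, 0, 0, 1, 1, 1, 2, 2, 2], sn))
```

Pareto ANOVA then apportions the total variation of these S/N values among the factors to identify the dominant one.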
- Construction Technique and Evaluation of High Performance [math]-bit Burst Error Correcting Codes for Protecting MCUs
Authors: Raj Kumar Maity, Jagannath Samanta, Jaydeb Bhaumik Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Multiple Cell Upsets (MCUs) are increasingly likely in modern memory systems as microelectronics technology scales from micron to deep-submicron dimensions. These MCUs are mainly induced by radiation in memory systems. Error Correcting Codes (ECCs) with lower design complexity are generally preferred for the mitigation of MCUs. The major drawback of existing ECCs is that their overheads grow as the error correction capability increases. In this paper, the authors propose a new class of high-performance [math]-bit Burst Error Correcting (BEC) codes. Parity check matrices ([math]) have been proposed for 3-bit and 4-bit BEC codes with word lengths of 16, 32 and 64 bits. A simplified decoding scheme has also been introduced for these codes. The proposed codecs have been designed and implemented on FPGA and ASIC platforms. They are more compact in area, faster and more power-efficient than existing related schemes. However, these gains come at the cost of increased redundancy, so the proposed codecs can be employed in applications where redundancy is not the primary constraint for correcting [math]-bit burst errors caused by MCUs. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-08T08:00:00Z DOI: 10.1142/S0218126623501426
- Design and Implementation of a Sense Amplifier for Low-Power Cardiac Pacemaker
Authors: Pavankumar Bikki, Yenduri Dhiraj, R. V. S. Nivas Kumar Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper presents the implementation of a sense amplifier for a low-power cardiac pacemaker using the Differential Voltage Current Conveyor (DVCC). The two significant functions of the pacemaker are sensing and pacing. The pulse generator, the heart of the pacemaker, consists of a sense amplifier, a logic unit and a timing control unit. The sense amplifier comprises an instrumentation amplifier, a bandpass filter and a comparator, which are used to detect the QRS complex in the cardiac signal. Based on the output of the sense amplifier, the logic unit and the timing control unit decide whether to pace the heart, which fulfills the requirement of demand pacing. In this paper, a novel design of the sense amplifier using a DVCC is proposed, and the simulations are performed using 130-nm TSMC technology. Furthermore, the pacemaker modes VVI and DDD and rate-responsive algorithms have been implemented using a structural approach in VHDL, taking the timing cycles of a pacemaker into consideration. The design analysis shows that the proposed pacemaker model is highly efficient and consumes significantly less energy. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-08T08:00:00Z DOI: 10.1142/S0218126623501487
- On-Chip GaN Planar Transformer Design for Highly Integrated RF Systems
Authors: Mokhtaria Derkaoui, Yamina Benhadda, Ghacen Chaabene, Pierre Spiteri Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The work presented in this paper concerns the design of an on-chip GaN transformer. The integrated transformer is composed of two planar stacked coils with spiral octagonal geometry. Different analytical methods for calculating the inductance of the transformer’s spiral planar coils are compared. Three transformers with different outer diameters are compared to illustrate the influence of the coil geometry, and their estimated inductance and DC series resistance are evaluated. Using COMSOL Multiphysics 5.3 software, the thermal effect in the integrated transformer operating at high frequencies is illustrated. The various parasitic effects created by the planar stacked layers are validated with an equivalent electrical circuit, and the corresponding electrical parameters are calculated. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-08T08:00:00Z DOI: 10.1142/S0218126623501499
- The Construction of an Intelligent Service System for Students’ Physique and Health
Authors: Huan Zhai Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The development of young people’s physique and health is the core element of national manpower reserves, which is related to the rise and fall of national power in the future. In recent years, a large-scale physical fitness test for students has been carried out by the state every year; thus, abundant data have been accumulated. However, for a long time, the collection, integration, analysis and utilization of these data resources have been seriously insufficient; thus, it is difficult to meet the needs of student health services. Wireless devices are emerging rapidly due to their sensing, computing and communication capabilities and are gradually being applied to physique and health research. This is expected to improve the traditional service model. Personalized physique and health information of students can be obtained via wearable devices. However, to effectively analyze and utilize these data and improve the effectiveness of corresponding health management decisions, intelligent analysis methods are needed. Machine learning, as the core of artificial intelligence technology, can learn from big data and mine the potential value of data in order to predict events and propose countermeasures. This paper aims to collect and transmit various kinds of physique and health data of students through wireless communication technology and to realize intelligent analysis and management of these data based on a machine learning algorithm. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-08T08:00:00Z DOI: 10.1142/S0218126623501517
- GPS Receivers Spoofing Detection Based on Subtractive, FCM and DBSCAN Clustering Algorithms
Authors: Z. Sarpanah, M. R. Mosavi, E. Shafiee Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. GPS receivers have a wide range of applications, but are not always secure. A spoofing attack is a source of deliberate errors in which a counterfeit signal overcomes the authentic GPS signal and takes control of the receiver’s operation. Recently, GPS spoofing attack detection based on computational algorithms, such as machine learning, classification, wavelet transforms and clustering, has been developing. This paper proposes multiple clustering algorithms, namely subtractive, FCM and DBSCAN clustering, for accurately clustering authentic and spoofing signals. The spoofing attack is recognized using two distinct features: moving phase detector variance and norms of correlators. Spoofing and authentic signals show different patterns in the proposed features. The results are validated using the Dunn and Silhouette indexes. The Dunn values for the proposed approaches are 0.8592, 0.5285 and 0.6039 for DBSCAN, FCM and subtractive clustering, respectively. The DBSCAN algorithm is also implemented at the RTL level because it achieves the highest Dunn index and is readily verifiable. Using the Vivado tools, this algorithm is designed and implemented on a Xilinx Virtex 7 xc7vx690tffg1930-3 device for two-dimensional data with 32-bit accuracy and 130 data points. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-08T08:00:00Z DOI: 10.1142/S0218126623501529
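DBSCAN, the best-performing of the three clustering algorithms above, can be sketched in a few lines. This is a generic textbook DBSCAN run on synthetic 2-D points standing in for the (phase-detector variance, correlator norm) features, not the paper's RTL implementation or data:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point (0, 1, ... clusters; -1 noise)."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        cluster += 1                  # i is a core point: start a cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            js = neighbors(j)
            if len(js) >= min_pts:    # j is a core point: expand the cluster
                queue.extend(js)
    return labels

# Two well-separated synthetic "authentic"/"spoofed" feature clusters
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
print(dbscan(pts, eps=0.5, min_pts=2))
```

Unlike K-means-style methods, DBSCAN needs no preset cluster count and flags outliers explicitly, which suits anomaly-style spoofing detection.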
- Validation of the HOSPITAL Score as Predictor of 30-Day Potentially Avoidable Readmissions in a Brazilian Population: Retrospective Cohort Study
Authors: Nayara Cristina da Silva, Marcelo Keese Albertini, André Ricardo Backes, Geórgia das Graças Pena Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Background: Hospital readmissions are associated with several negative health outcomes and higher hospital costs. The HOSPITAL score is one of the tools developed to identify patients at high risk of hospital readmission, but its predictive capacity in more heterogeneous populations involving different diagnoses and clinical contexts is poorly understood. Objective: The aim of this study is to externally validate the HOSPITAL score in a hospitalized Brazilian population. Methods: A retrospective cohort study was carried out with patients over the age of 18 years in a tertiary university hospital. We refitted the HOSPITAL score with the same definitions and predictive variables as the original and compared the predictive capacity of both. Receiver operating characteristic curves were constructed, and the performance of the risk forecasting tools was compared by measuring the area under the curve (AUC). Results: Of the 47,464 patients, 50.9% were over 60 years old and 58.4% were male. The frequency of 30-day potentially avoidable readmission was 7.70%. The accuracy of the original and refitted HOSPITAL scores was close, although statistically different ([math]): AUC 0.733 (95% CI: 0.718, 0.748) and 0.7401 (95% CI: 0.7256, 0.7547), respectively. The frequency of 60-, 90-, 180- and 365-day readmissions ranged from 10.60% to 18.30%. Conclusion: Both the original and refitted HOSPITAL scores are useful tools to identify patients at high risk of 30-day potentially avoidable readmission among patients with different diagnoses in public tertiary hospitals. In this sense, our study expands and reinforces the usefulness of the HOSPITAL score as part of intervention strategies to reduce the rate of hospital readmission.
Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-08T08:00:00Z DOI: 10.1142/S0218126623501542
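The AUC comparison in this study rests on the rank (Mann–Whitney) formulation of the area under the ROC curve, which can be computed directly from labels and scores; the values below are illustrative, not study data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the probability that a random positive outranks a random negative,
    with ties counted as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy readmission scores: higher score should mean readmitted (label 1)
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 is chance level, 1.0 perfect ranking; the study's ~0.73 indicates moderate discrimination.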
- Analysis on Millimeter-Wave Channel Dispersion Over Nonreciprocal Beam Patterns Based on the Propagation-Graph Model
Authors: Jiachi Zhang, Liu Liu, Zhenhui Tan, Kai Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Millimeter-wave (mmWave) together with multiple-input multiple-output (MIMO)-enabled beamforming technology offers greater bandwidths than previously available and overcomes the high propagation loss. Nonreciprocal beam patterns, i.e., transceivers using beams with different beamwidths, can not only reduce hardware cost but also make beam alignment more efficient. In this paper, we use the propagation-graph (PG) model to fully investigate channel characterization over nonreciprocal beam patterns at 45 GHz. Specifically, we propose a beam-enabled propagation-graph channel model that accounts for the spatial filtering effect of beams, in which the propagation gain of each path with certain angular information is filtered by the array response. Channel dispersion in the delay and frequency domains is then analyzed over nonreciprocal beam patterns. Simulation results reveal that the downlink in a macrocell scenario covers more important scatterers than the uplink, leading to an evident dispersion effect in the delay, frequency, and angular domains. The uplink, in contrast, shows little or no dispersion and can be approximated as a line-of-sight (LoS) scenario with a single propagation path. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-07T08:00:00Z DOI: 10.1142/S0218126623501153
- Research on Output Power of Series–Series Resonance Wireless Power Transmission System
Authors: Haokun Chi, Chunxiao Mu, Yingjie Wang, Fei Wang, Yongchao Hou, Feixiang Gong Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The popularization of smart mobile devices has driven research on wireless power transfer (WPT) technology. Previous research shows that adding an excitation coil or an energy transfer coil can increase the energy transmission distance of a WPT system, but existing mutual-inductance calculations for coils are not accurate enough at close range. This paper presents an improved formula for the mutual inductance between two round inductance coils. In contrast with previous calculation methods, computer simulations and experiments validate the accuracy of the formula, especially when the two coils are close together. On the basis of this formula, the maximum output power (MOP) and the maximum output power distance (MOPD) of the two-, three- and four-coil series–series resonance WPT systems based on circuit models are studied and compared. The simulation and experimental results show high consistency with the theoretical calculations. The farther the MOPD of the system, the smaller the MOP. Moreover, when the operating frequency increases, the MOPD of the two-coil configuration increases, while the four-coil configuration shows the opposite trend. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-07T08:00:00Z DOI: 10.1142/S0218126623501177
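Independently of the paper's improved formula (which the abstract does not reproduce), the mutual inductance of two coaxial circular loops can always be evaluated numerically from the Neumann double-line integral, reduced to a single integral by symmetry. A sketch for idealized filamentary loops:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def mutual_inductance(a, b, d, n=2000):
    """Mutual inductance (H) of two coaxial filamentary circular loops of
    radii a and b separated by axial distance d, from the Neumann formula
    M = (mu0*a*b/2) * integral over psi in [0, 2*pi] of
        cos(psi) / sqrt(a^2 + b^2 - 2*a*b*cos(psi) + d^2),
    evaluated with an n-point midpoint rule."""
    total = 0.0
    for i in range(n):
        psi = 2 * math.pi * (i + 0.5) / n
        r = math.sqrt(a * a + b * b - 2 * a * b * math.cos(psi) + d * d)
        total += math.cos(psi) / r
    return MU0 * a * b / 2 * total * (2 * math.pi / n)

# Two 5 cm loops: coupling weakens as the gap grows
print(mutual_inductance(0.05, 0.05, 0.01))
print(mutual_inductance(0.05, 0.05, 0.05))
```

This brute-force integral is a useful cross-check for any closed-form approximation, precisely in the close-coupling regime the paper targets.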
- Analog Compatible Logic Functions Using EXCCIIs and an Extended Application for Sigmoid Activation Function
Authors: Sudhanshu Maheshwari Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper introduces the use of an extra X current conveyor (EXCCII) for realizing logic operations suited for mixed-signal design, without necessitating separate digital blocks. The NOR and OR logic functions are shown to be realized using this current-mode analog building block operating at low voltage, enabled by a [math] V supply. The EXCCII bias current used in the design is 25 μA. The input currents defining logic 0 and logic 1 are taken as 0 mA and 0.5 mA, respectively. The proposed circuits’ delay is found to be 10 ns for the low-to-high transition and 2 ns for the high-to-low transition of the NOR gate under capacitive loading. The current-mode approach makes the new circuits apt for current-mode signal processing. The proposed logic function circuits are compatible with analog circuits, thus providing an easy merger for mixed-signal system design and paving the way to a modular design approach. As an extended application in artificial neural networks, the proposed circuit is shown to generate a sigmoid activation function with convincing results. Several results are included by varying the reference current, and the circuit’s robustness is also tested in the presence of fluctuation in the EXCCII bias current. The novel approach of utilizing an analog building block for realizing digital functions is therefore verified, with promising future applications. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-07T08:00:00Z DOI: 10.1142/S0218126623501475
- GOKA: A Network Partition and Cluster Fusion Algorithm for Controller Placement Problem in SDN
Authors: Changwei Xiao, Jue Chen, Xihe Qiu, Dun He, Hanmin Yin Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Software Defined Networking (SDN) is a promising new network architecture that decouples the data plane from the control plane and logically centralizes the network topology, making the network more agile than traditional networks. However, with the continuous expansion of network scales, the single-controller SDN architecture is unable to meet the performance requirements of the network. As a result, the logically centralized and physically separated SDN multi-controller architecture has emerged, giving rise to the Controller Placement Problem (CPP). To minimize the propagation latency in Wide Area Networks (WANs), we propose the Greedy Optimized K-means Algorithm (GOKA), which combines K-means with a greedy algorithm. The main idea is to divide the network into multiple clusters, merge them greedily and iteratively until the given number of controllers is reached, and place a controller in each cluster through the K-means algorithm. To prove the effectiveness of GOKA, we conduct experiments comparing it with Pareto Simulated Annealing (PSA), Adaptive Bacterial Foraging Optimization (ABFO), K-means and K-means[math] on six real topologies from the Internet Topology Zoo and Internet2 OS3E. The results demonstrate that GOKA finds better and more stable solutions than the other four heuristic algorithms, and can decrease the propagation latency by up to [math], [math], [math] and [math] relative to PSA, ABFO, K-means and K-means[math], respectively. Moreover, the error rate between GOKA and the best solution is always less than [math], which attests to the precision of the proposed algorithm. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-02T08:00:00Z DOI: 10.1142/S021812662350144X
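The merge-until-k idea behind such placement heuristics can be sketched generically: greedily fuse the closest clusters until the controller budget is met, then place each controller at the member node minimizing distance within its cluster. The closest-centroid merge criterion and the Euclidean latency proxy below are simplifying assumptions for illustration, not GOKA itself:

```python
import math

def merge_into_k(nodes, k):
    """Greedily merge singleton clusters until k remain, always joining the
    two clusters whose centroids are closest (a crude latency proxy)."""
    clusters = [[p] for p in nodes]

    def centroid(c):
        return (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: math.dist(centroid(clusters[ij[0]]),
                                            centroid(clusters[ij[1]])))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

def place_controller(cluster):
    """Pick the member node minimizing total distance to its cluster (medoid)."""
    return min(cluster, key=lambda p: sum(math.dist(p, q) for q in cluster))

# Two obvious node groups; expect one controller per group
nodes = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
clusters = merge_into_k(nodes, 2)
print([place_controller(c) for c in clusters])
```

A real CPP solver would replace Euclidean distance with shortest-path propagation latency over the actual topology.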
- A CMOS Relaxation Oscillator with Process and Temperature Variation Compensation
Authors: Zhenyan Huang, Kewei Hu, Yi Ding, Nick Nianxiong Tan, Hanming Wu, Xiao-Peng Yu Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, a CMOS relaxation oscillator with trimming and temperature compensation is presented for on-chip multi-sensor systems that need an MHz-level frequency source. The proposed scheme uses a single current branch to charge the capacitor and generate the oscillation, with a voltage average feedback (VAF) circuit. A binary-weighted current trimming array is adopted to reduce the frequency variation caused by process variation under different process corners. A compensation calibration resistor array with a Kelvin connection is utilized to improve the frequency variation with temperature. With the help of VAF, the frequency spread caused by the comparator delay is suppressed. This relaxation oscillator, with a typical frequency of 13.4 MHz, is implemented in a standard 180 nm CMOS process. Simulation results show that it achieves a frequency temperature coefficient of 28.3 ppm/°C from [math]C to 125°C and a 0.074%/0.1 V frequency variation when the supply voltage changes from 2.9 to 3.7 V. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-12-02T08:00:00Z DOI: 10.1142/S0218126623501451
- Delay Boundary Analysis of RC Flows in the TTE Switch
Authors: Wei Xu, Ruiqi Lu, Fei Peng, Ran Li, Jianmei Lei, Guoqi Xie Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. With the rapid development of advanced driver assistance systems (ADASs) and infotainment systems, automotive networks have higher bandwidth and real-time requirements than before. Time-triggered Ethernet (TTE) supports the coexistence of real-time time-triggered communication and event-triggered communication, and is used as a backbone network in intelligent connected vehicles to meet growing real-time and reliability requirements. To design a real-time TTE switch, the delay of data streams needs to be analyzed. Most recent works analyze end-to-end latency, are usually based on a particular integration policy, and ignore the impact of best-effort (BE) flows. In this paper, we analyze the delay bounds of rate-constrained (RC) flows in the TTE switch using delay reachability techniques. We consider the impacts of time-triggered (TT) flows, RC flows and BE flows on RC flows under three integration policies: shuffling, preemption and timely blocking, respectively. We obtain a safe upper bound on the delay through analysis. Experiments illustrate the degree of influence of each factor on the delay bound of RC flows, which provides a reference for the optimal design of the TTE switch in intelligent connected vehicles. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-30T08:00:00Z DOI: 10.1142/S0218126623501116
- Erratum: Prediction of Elephant Movement Using Intellectual Virtual Fencing Model
Authors: R. Vasanth, A. Pandian Abstract: Journal of Circuits, Systems and Computers, Ahead of Print.
Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-30T08:00:00Z DOI: 10.1142/S0218126623920019
- CCII-Based Lossless Floating Frequency-Dependent Negative Resistor with Minimum Passive Elements
Authors: Tolga Yucehan, Erkan Yuce, Zafer Dicle Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper proposes a new lossless floating frequency-dependent negative resistor (FDNR). The proposed floating FDNR circuit is designed with two dual-output second-generation current conveyors (DO-CCIIs) and a minimum number of passive elements. The first DO-CCII behaves like a minus-type second-generation current conveyor, while the other is a modified DO-CCII. The proposed floating FDNR does not require any passive element matching conditions, but all the passive elements are floating. Simulations are performed with the SPICE program. A second-order high-pass filter (HPF) is given as an application example. In addition, some experimental results are included for the second-order HPF, in which AD844s are utilized. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-28T08:00:00Z DOI: 10.1142/S0218126623501244
- Optimization Approach in Window Function Design for Real-Time Filter Applications
Authors: Fatmanur Serbet, Turgay Kaya Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Eliminating the Gibbs oscillations that arise during Finite Impulse Response (FIR) digital filter design with the Fourier series method ensures correct filtering. For this reason, improving the window function improves the performance of the filter and, therefore, the system. In this study, the cosh window function is designed using Particle Swarm Optimization, an optimization method preferred in many areas. Thus, alternatives to the standard results obtained from existing traditional calculations are produced, and different windows that perform the same function are obtained. In addition, exponential and cosh window functions were designed in the LabVIEW environment, a graphical programming language, and the designed windows were analyzed at different parameter values. LabVIEW provides a fast and easy programming environment and the opportunity to realize real-time applications with external hardware. Utilizing this feature, the amplitude spectrum of the cosh window designed in LabVIEW is displayed in real time for different window parameter values. As a result, FIR digital filters were designed using the optimization-based cosh window and the cosh window designed in LabVIEW, and a distorted EEG signal was filtered using these filters and displayed in real time. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-28T08:00:00Z DOI: 10.1142/S0218126623501438
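One commonly cited form of the cosh window (a Kaiser-like window with cosh in place of the modified Bessel function) is easy to evaluate; the exact parameterization optimized in the paper may differ from this sketch:

```python
import math

def cosh_window(N, alpha):
    """Cosh window of length N: w[n] = cosh(alpha*sqrt(1 - x^2)) / cosh(alpha),
    with x = 2n/(N-1) - 1 in [-1, 1]. As with the Kaiser window, a larger
    alpha lowers the side lobes at the cost of a wider main lobe."""
    w = []
    for n in range(N):
        x = 2 * n / (N - 1) - 1
        w.append(math.cosh(alpha * math.sqrt(1 - x * x)) / math.cosh(alpha))
    return w

w = cosh_window(33, alpha=3.0)
print(round(min(w), 4), round(max(w), 4))  # endpoints ~1/cosh(alpha), peak 1 at center
```

An FIR filter is then obtained by multiplying the ideal (sinc) impulse response sample-by-sample with such a window; the optimizer's job is choosing alpha (and the window form) to trade side-lobe level against transition width.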
- Simple Yet Secure Encoder Architecture and Ultralightweight Mutual Authentication Protocol for RFID Tags in IoT
Authors: Manikandan Nagarajan, Muthaiah Rajappa Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The Internet of things (IoT) has evolved into the internet of everything and has attracted broad research interest in recent years. Almost all objects, including non-electronic devices, can be connected to the internet through radio frequency identification (RFID) technology. The security of the perception layer is crucial to securing the entire IoT network. The RFID-enabled IoT perception layer has a secured reader-to-server channel but an unsecured tag-to-reader channel; securing the communication channel between the reader and the tag is therefore an urgent need. This work proposes a simple yet secure permutation approximate adder (SYSPXA)-based RFID mutual authentication protocol to address this need. The proposed protocol dramatically reduces the tag’s storage and computational overhead, needing 40% less storage and 66.7% fewer permutation operations than existing protocols. Nondisclosure of the key, together with the freshness of the key, IDS and random numbers at every mutual authentication, makes the protocol resistant to de-synchronization attacks, disclosure attacks, tag tracking and replay attacks. The security features of the SYSPXA protocol are formally verified using Burrows–Abadi–Needham (BAN) logic. The performance and security of the proposed protocol are contrasted with various recent permutation-based protocols, and its superiority over them is highlighted. We have simulated the SYSPXA protocol with the ModelSim tool to verify its functionality. The protocol encoder architecture is implemented on the Intel Cyclone IV Field Programmable Gate Array (FPGA) EP4CE115F29C7 device. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-25T08:00:00Z DOI: 10.1142/S0218126623501189
- Design and Lifetime Estimation of Low-Power 6-Input Look-Up Table Used in Modern FPGA
Authors: Vivek Kumar Singh, Abhishek Nag, Abhishek Bhattacharjee, Sambhu Nath Pradhan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Due to technological advancements and voltage scaling, leakage power has become an important concern in CMOS design. Unlike an application-specific integrated circuit (ASIC), a circuit implemented on a field-programmable gate array (FPGA) utilizes only a portion of the FPGA’s resources, yet both the utilized and unutilized parts of the FPGA dissipate leakage power. In this work, two dynamic power gating techniques, PSG-1 and PSG-2, are proposed, which reduce the leakage power of the 6-input look-up table (LUT) used in the Xilinx Spartan-6 series. The obtained results show that PSG-1 and PSG-2 reduce average leakage power by 54.61% and 66.69%, respectively, at the expense of nominal area and delay overhead. The proposed method also lowers the average total power of the 6-input LUT: PSG-1 and PSG-2 reduce average power by 53.75% and 60.83%, respectively. However, header-based power supply gating is extremely vulnerable to the negative-bias temperature instability (NBTI) aging effect, which considerably reduces the lifetime of the circuit. Therefore, a lifetime estimation-based analysis is performed by varying the stress probability of the sleep transistor. The results show that the LUT with the PSG-1 and PSG-2 techniques has a lifetime of 4.55 years and 11.13 years, respectively, with a sleep-transistor stress time of 50% for PSG-1 and 25% for PSG-2. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-24T08:00:00Z DOI: 10.1142/S021812662350113X
- Dimensionality Reduction with Weighted Voting Ensemble Classification Model Using Speech Data Based Parkinson’s Disease Diagnosis
Authors: A. Manjula, P. K. Vaishali, P. Pranitha, S. Ashok Kumar Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Parkinson’s disease (PD) is a progressive neurodegenerative illness that frequently affects the phonation, articulation, fluency, and prosody of speech. Speech impairment is a major sign of PD, which can be employed for earlier identification of the disease and proper treatment. Machine learning (ML) models are commonly employed for PD detection and classification using speech data. Since speech data exhibit high redundancy, aliasing, and small sample sizes, dimensionality reduction (DR) techniques become essential for effective PD diagnosis. Therefore, this paper presents a new DR with weighted voting ensemble classification (DR-WVEC) model for PD diagnosis. The presented DR-WVEC model operates in several stages: pre-processing, DR, classification, and voting. Primarily, the speech data undergo a min–max normalization process. Besides, the linear discriminant analysis (LDA) technique is applied to reduce the dimensionality of the features. In addition, an ensemble of two ML models, namely extreme learning machine (ELM) and Adaboost, is employed for classification. Finally, a weighted voting-based classification process integrates the two ML models, and the highest outcome is chosen as the final result. To assess the PD diagnostic outcome, an extensive set of simulations was carried out on the Parkinson’s telemonitoring dataset. The obtained experimental results demonstrated the superiority of the DR-WVEC technique over the compared methods across several metrics. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-24T08:00:00Z DOI: 10.1142/S0218126623501207
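The staged pipeline this abstract describes (min–max normalization, LDA reduction, ELM + AdaBoost ensemble, accuracy-weighted voting) can be sketched as follows. This is a minimal stand-in on synthetic Gaussian data, not the Parkinson's telemonitoring dataset, and the tiny ELM and the training-accuracy voting weights are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Toy stand-in for speech features: two well-separated Gaussian classes.
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(2.5, 1, (100, 10))])
y = np.array([0] * 100 + [1] * 100)

# Stage 1: min-max normalization.
X = MinMaxScaler().fit_transform(X)

# Stage 2: LDA dimensionality reduction (1 component for 2 classes).
Z = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)

# Stage 3a: a minimal extreme learning machine -- random hidden layer,
# least-squares output weights, softmax to get class probabilities.
H = np.tanh(Z @ rng.normal(size=(1, 20)) + rng.normal(size=20))
beta = np.linalg.pinv(H) @ np.eye(2)[y]
raw = np.exp(H @ beta)
elm_proba = raw / raw.sum(axis=1, keepdims=True)

# Stage 3b: AdaBoost on the reduced features.
ada = AdaBoostClassifier(random_state=0).fit(Z, y)
ada_proba = ada.predict_proba(Z)

# Stage 4: weighted voting -- weight each model by its training accuracy
# and pick the class with the highest fused score.
w_elm = (elm_proba.argmax(1) == y).mean()
w_ada = (ada.predict(Z) == y).mean()
fused = (w_elm * elm_proba + w_ada * ada_proba) / (w_elm + w_ada)
acc = (fused.argmax(1) == y).mean()
```

On this separable toy data the fused classifier is near-perfect; the point is only to show how the four stages compose.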
- Differential Input First-Order Universal Filter with Two DVCC+s
Authors: Tayfun Unuk, Erkan Yuce, Shahram Minaei Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. In this paper, a differential-input plus-type differential voltage current conveyor-based first-order universal filter is designed. This voltage-mode filter uses a grounded capacitor. In addition, it can provide all the noninverting and inverting first-order universal filter responses. The circuit provides a high common-mode rejection ratio of about 76.5 dB. Nevertheless, it requires a single matching condition and comprises two floating resistors. A quadrature oscillator (QO) design is obtained using this filter as an application example. The designed filter and QO circuits are simulated through the SPICE program, and some experimental studies are carried out using AD844 ICs to verify the theory. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-24T08:00:00Z DOI: 10.1142/S0218126623501220
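The "universal" first-order responses this filter family provides can be checked numerically from their ideal transfer functions. The sketch below uses a generic time constant RC (an arbitrary illustrative value, not taken from the paper) and verifies two textbook properties: the all-pass response has unit magnitude everywhere, and the low-pass magnitude is 1/√2 at the pole frequency.

```python
import numpy as np

RC = 1e-4                     # illustrative time constant, not from the paper
w = np.logspace(2, 6, 200)    # angular frequency sweep, rad/s
s = 1j * w

H_lp = 1 / (1 + s * RC)               # low-pass
H_hp = s * RC / (1 + s * RC)          # high-pass
H_ap = (1 - s * RC) / (1 + s * RC)    # all-pass

# All-pass: unit magnitude at every frequency.
ap_flat = bool(np.allclose(np.abs(H_ap), 1.0))

# Low-pass magnitude at the pole frequency w0 = 1/RC is 1/sqrt(2) (-3 dB).
w0 = 1 / RC
mag_lp_w0 = abs(1 / (1 + 1j * w0 * RC))
```

These identities hold for any ideal first-order realization; the DVCC+ circuit in the paper additionally offers the inverting versions of each response.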
- Design and Development of Microbial Fuel Cells Based Low Power Energy Harvesting Mechanism for Ecological Monitoring and Farming of Agricultural Applications
Authors: P Suganya, J Divya Navamani, A Lavanya, Rishabh Mrinal Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Energy harvesting from microbial fuel cells has attracted significant attention in recent years due to its cost efficiency, simple design structure, and self-powered operation. The emergence of the internet of things also plays a vital role in many real-time application scenarios, such as agricultural activities, but incorporating these techniques together remains a challenging and interesting research task. In conventional works, the internet of things has been utilized as a cloud storage domain for activating the sensors used for environmental monitoring and control. The main intention of this paper is to design robust and cost-effective sludge-water-based microbial fuel cells and utilize them in internet-of-things-based ecological monitoring and farming applications by activating smart sensors. The paper discusses various electrode combinations with several substrate mixtures to study the optimum performance of microbial fuel cells. To ease the comparative study, the ThingSpeak platform is used along with the necessary sensors for continuous monitoring. In addition, the efficiency of single- and dual-chamber microbial fuel cells is analyzed based on parameters such as cost, size, and construction. In this work, the microbial fuel cell-based energy harvesting scheme is also developed with a switched-capacitance-based metal oxide semiconductor field effect transistor and a relay-based charge pump circuit, which can be incorporated into internet-of-things-based agriculture applications. Here, the cost of the microbial fuel cell with and without a DC–DC converter is compared to select the most suitable option for the application system. Moreover, the digital temperature and humidity sensor can be utilized with the proposed microbial fuel cell system to gather inputs from the ecological system, acting as an interface between the microbial fuel cell and cloud systems. During experimentation, the results of both energy harvesting schemes are evaluated and compared using various performance indicators. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-23T08:00:00Z DOI: 10.1142/S0218126623501128
- Fast Bipartite Synchronization of Complex Networks with Signed Graph Based on TS Fuzzy System by Fixed-Time Technique
Authors: Dongmei Ruan, Shiju Yang, Qin Zhang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. This paper discusses the problem of fast fixed-time bipartite synchronization in complex networks with a signed graph based on the TS fuzzy system. By designing a suitable and effective controller, the synchronization of the considered complex networks is achieved, with a convergence rate superior to the great majority of existing results. With the assistance of a comparison system and the theory of Lyapunov stability, sufficient criteria are established for achieving fast fixed-time bipartite synchronization. Finally, a numerical simulation example demonstrates the performance of the obtained results. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-19T08:00:00Z DOI: 10.1142/S0218126623501190
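The defining feature of fixed-time (as opposed to finite-time) convergence is that the settling time is bounded by a constant independent of the initial condition. The scalar simulation below illustrates that property with a classic fixed-time stabilizer; the gains and exponents are hypothetical illustration values and have nothing to do with the paper's TS-fuzzy network controller.

```python
import numpy as np

def settle(x0, a=1.0, b=1.0, p=0.5, q=1.5, dt=1e-3, T=6.0):
    """Euler simulation of dx/dt = -a*sig(x)^p - b*sig(x)^q over [0, T],
    where sig(x)^r = sign(x)*|x|**r. The p < 1 term gives finite-time
    convergence near the origin; the q > 1 term bounds the time spent
    far from it, making the settling time uniform in x0."""
    x = float(x0)
    for _ in range(int(T / dt)):
        x -= dt * (a * np.sign(x) * abs(x) ** p + b * np.sign(x) * abs(x) ** q)
    return x

# The residual error after the same fixed horizon T stays small even as the
# initial condition grows by orders of magnitude.
errs = [abs(settle(x0)) for x0 in (0.5, 10.0, 1e3)]
```

A plain exponential controller (dx/dt = -a·x) would show settling time growing with log|x0|; here the horizon T = 6 suffices for all three initial conditions.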
- Distributed Logistics Resources Allocation with Blockchain, Smart Contract, and Edge Computing
Authors: Junhua Chen, Jiatong Zhang, Chenggen Pu, Ping Wang, Min Wei, Seungho Hong Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The traditional centralized logistics resources allocation method can no longer adapt to the new business model of decentralized e-commerce, which requires transaction security for all parties involved in the logistics process. Utilizing blockchain and smart contract technologies to build the logistics resources allocation network foundation, and edge computing technology to assist resource-constrained transport nodes with complex computation, this paper proposes a distributed logistics resources allocation chain (DLRAChain) concept and designs a DLRAChain network that supports independent decision-making, fair bidding, and secure allocation of interests for all resources allocation participants. The corresponding system models are constructed according to the different roles of DLRAChain participants. Furthermore, the logistics resources requester–provider negotiation process is formulated as a two-stage Stackelberg game. To solve the optimization problem of the game, an iterative game algorithm (IGA) and a distributed logistics resources allocation algorithm (DLRAA) are proposed. Finally, the utility of warehouse and transport nodes and the reward of mobile edge computing (MEC) nodes are analyzed with experimental simulation results. The results demonstrate that the proposed models adequately address the DLRA problem, and that the proposed game and corresponding algorithms efficiently achieve the optimal strategy while reducing the response time of resources allocation participants. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-19T08:00:00Z DOI: 10.1142/S0218126623501219
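A two-stage Stackelberg game of the kind mentioned above is solved by backward induction: the leader optimizes its strategy while anticipating the follower's best response. The toy below uses a made-up linear-demand model (the intercept, slope, and cost are all illustrative assumptions, not the paper's requester–provider utilities) purely to show the solution structure.

```python
# Toy two-stage Stackelberg game: a resource provider (leader) posts a
# price; a requester (follower) best-responds with a demand.
a, b, c = 10.0, 1.0, 2.0   # demand intercept, demand slope, provider unit cost

def follower_demand(price):
    # Follower maximizes u(d) = (a - price)*d - (b/2)*d**2  ->  d* = (a - price)/b
    return max((a - price) / b, 0.0)

def leader_profit(price):
    # Leader's profit, already folding in the follower's best response.
    return (price - c) * follower_demand(price)

# Backward induction via a coarse grid search over prices in [c, a]
# (1 milli-unit steps); the analytic optimum here is p* = (a + c) / 2.
prices = [c + i / 1000 for i in range(8001)]
p_star = max(prices, key=leader_profit)
```

The grid search stands in for the paper's iterative game algorithm; for this quadratic toy model the maximizer lands on the analytic Stackelberg price (a + c) / 2 = 6.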
- New FTFN-Based Tunable Memristor Emulator Circuit and its Mutation to Meminductor and Memcapacitor Emulators
Authors: Kapil Bhardwaj, Ravuri Narayana, Mayank Srivastava Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. For the first time, a new memristor emulator structure using a single four-terminal floating nullor (FTFN) and a transconductance stage is presented with a tunable circuit configuration. In addition, the circuit requires only a single grounded capacitance and two external MOS transistors to realize both incremental and decremental memductance functions. The use of the FTFN block to build such a compact memristor emulator, which fully utilizes the employed circuit resources, is demonstrated for the first time. The wide operating frequency range (1 kHz–3 MHz) is another attractive feature of the proposed emulator. Moreover, the mutation of the proposed memristor emulator into meminductor and memcapacitor emulators is also presented via FTFN-based mutators. All the presented circuits have been tested through PSPICE simulations using 0.18-μm CMOS technology. The generated simulation results clearly show the ideal nonvolatile nature of the realized memristor, which has also been utilized in an op-amp-based circuit designed to exhibit associative learning phenomena. The proposed FTFN-based memristor has been implemented using the commercially available ICs LM13700 and AD844, and the generated pinched hysteresis loop (PHL) plot is discussed. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-19T08:00:00Z DOI: 10.1142/S0218126623501232
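The pinched hysteresis loop (PHL) that validates a memristor emulator can be reproduced numerically from the ideal flux-controlled model: under a sinusoidal drive, the i-v curve forms a loop that always passes through the origin. The parameters below are arbitrary illustration values, not the FTFN circuit's memductance.

```python
import numpy as np

# Ideal flux-controlled memristor: memductance G(phi) = G0 + k*phi,
# driven by v(t) = A*sin(w*t). Illustrative parameters only.
G0, k, A, w = 1e-3, 5e-4, 1.0, 2 * np.pi * 1e3

t = np.linspace(0, 2e-3, 4001)          # two drive periods at 1 kHz
v = A * np.sin(w * t)
phi = (A / w) * (1 - np.cos(w * t))     # flux = integral of v(t)
i = (G0 + k * phi) * v                  # memristor current

# Pinched hysteresis: current is zero whenever voltage is zero, so the
# i-v loop is "pinched" at the origin...
zero_v = np.isclose(v, 0.0, atol=1e-9)

# ...yet it is a genuine loop: the same voltage on the rising and falling
# sweep carries a different flux, hence a different current (e.g. the
# samples at w*t = pi/4 and 3*pi/4 have equal v but unequal i).
```

Plotting `i` against `v` would show the classic figure-of-eight; the incremental/decremental distinction in the paper corresponds to the sign of the flux-dependent term.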
- Multi-Modal Emotion Recognition Combining Face Image and EEG Signal
Authors: Ying Hu, Feng Wang Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Facial expression can be used to identify human emotions, but it is easy to misjudge when expressions are deliberately hidden. In addition, single-modality sentiment recognition often yields a low recognition rate due to the limitations of the single modality itself. To solve these problems, a fusion of a spatio-temporal neural network and a separable residual network is proposed to realize emotion recognition from EEG and face data. The average recognition rates on the EEG and face data sets are 78.14% and 70.89%, respectively, and the recognition rate of decision fusion on the DEAP data set is 84.53%. Experimental results show that, compared with a single modality, the proposed two-mode emotion recognition architecture performs better and can effectively integrate the emotional information contained in facial visual signals and EEG signals. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-19T08:00:00Z DOI: 10.1142/S0218126623501256
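Decision-level fusion of two modality classifiers, as used above, can be sketched as a weighted average of per-class probabilities. The weights below reuse the recognition rates reported in the abstract (EEG 78.14%, face 70.89%) as plausible reliability weights; the two probability matrices are made-up examples, and this simple weighting scheme is an assumption, not necessarily the paper's exact fusion rule.

```python
import numpy as np

# Per-modality reliability weights (the recognition rates from the abstract).
w_eeg, w_face = 0.7814, 0.7089

# Hypothetical per-sample class probabilities from each modality classifier
# (2 samples, 2 emotion classes).
probs_eeg  = np.array([[0.8, 0.2], [0.4, 0.6]])
probs_face = np.array([[0.6, 0.4], [0.3, 0.7]])

# Weighted decision fusion: normalize so the fused rows remain distributions,
# then take the class with the highest fused score.
fused = (w_eeg * probs_eeg + w_face * probs_face) / (w_eeg + w_face)
preds = fused.argmax(axis=1)
```

When the modalities disagree, the fused decision leans toward the more reliable modality, which is how fusion can beat either single-modality rate.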
- Intrusion Detection for In-Vehicle CAN Bus Based on Lightweight Neural Network
Authors: Defeng Ding, Yehua Wei, Can Cheng, Jing Long Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. With the rapid development of automotive intelligence and networking, substantial information is exchanged between the in-vehicle network system and the outside world, threatening vehicle security. Intrusion detection is an important technology for securing in-vehicle networks. Existing research on in-vehicle network intrusion detection mainly focuses on improving detection accuracy but lacks consideration of timeliness, whereas the in-vehicle network is a time-sensitive system. This study proposes an anomaly detection method for the in-vehicle Controller Area Network (CAN) based on a lightweight neural network, reducing runtime while maintaining detection accuracy. A redundant-neuron screening method and a model compression algorithm for layer-by-layer neuron pruning are designed. The presented method can delete neurons with small contributions and obtain a lightweight neural network model. The detection performance of the compressed and uncompressed models is compared through experiments. Results show that on two real in-vehicle datasets, detection time is accelerated by up to 47.7 times and 34.2 times, and average accuracy is increased by 14.5% and 15.7%. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-16T08:00:00Z DOI: 10.1142/S0218126623501104
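Layer-by-layer neuron pruning of the kind described above can be sketched in a few lines: score each hidden neuron, delete the least-contributing ones, and carry the deletion through the adjacent weight matrices. The contribution metric below (L2 norm of a neuron's outgoing weights) is an illustrative stand-in for the paper's redundant-neuron screening method, and the network is a made-up toy.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny MLP layer pair: 8 inputs -> 6 hidden (ReLU) -> 3 outputs.
W1, b1 = rng.normal(size=(8, 6)), rng.normal(size=6)
W2 = rng.normal(size=(6, 3))
W2[4, :] = 0.0      # hidden neuron 4 contributes nothing downstream

def forward(x, W1, b1, W2):
    return np.maximum(x @ W1 + b1, 0.0) @ W2

# Score each hidden neuron by the L2 norm of its outgoing weights and
# remove the single least-contributing one; layer-by-layer pruning would
# repeat this per layer, re-evaluating accuracy after each pass.
scores = np.linalg.norm(W2, axis=1)
drop = int(scores.argmin())
W1p, b1p = np.delete(W1, drop, axis=1), np.delete(b1, drop)
W2p = np.delete(W2, drop, axis=0)

x = rng.normal(size=(5, 8))
out_full, out_pruned = forward(x, W1, b1, W2), forward(x, W1p, b1p, W2p)
```

Because the pruned neuron's outgoing weights were zero, the compressed model reproduces the original outputs exactly while carrying one fewer neuron per forward pass, which is the source of the runtime savings the paper targets.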
- A Parallel Text Recognition in Electrical Equipment Nameplate Images Based on Apache Flink
Authors: Zhen Liu, Lin Li, Da Zhang, Liangshuai Liu, Ze Deng Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. The information on an equipment nameplate is important for the storage, transportation, verification, and maintenance of electrical equipment. However, because a natural image of the text on a device nameplate may be multidirectional, curved, noisy, or blurry, automatically recognizing text from the nameplate can be difficult. Meanwhile, image preprocessing methods are usually carried out serially, so processing is slow and time-consuming. Accordingly, this study proposes a parallel, deep-learning-based automatic text recognition method. In the proposed method, a pretreatment stage comprising edge detection, morphological manipulation, and projection transformation is used to obtain the corrected nameplate region. The connectionist text proposal network (CTPN) is then applied to detect text lines in the corrected nameplate area. Next, a deep-learning method combining convolutional recurrent neural networks and connectionist temporal classification identifies the text in each line detected by CTPN. Finally, we use Apache Flink to parallelize the above processes, including the preprocessing and the bidirectional long short-term memory stages of text line detection and text recognition. Experimental results on the collected nameplates show that the proposed image processing method has good recognition performance and that the parallelization significantly reduces data processing time. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-14T08:00:00Z DOI: 10.1142/S0218126623501098
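The parallelization idea above, the same per-image pipeline mapped over many nameplate images concurrently, can be sketched without a Flink cluster. Apache Flink itself is JVM-based, so the snippet below uses a Python thread pool as a stand-in, and the pipeline body is a trivial grayscale + binarization stub rather than the paper's edge detection and projection transform.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def preprocess(img):
    """Stand-in per-image pipeline: naive RGB -> grayscale, then a global
    mean-threshold binarization (not the paper's pretreatment steps)."""
    gray = img.mean(axis=2)
    return (gray > gray.mean()).astype(np.uint8)

# Synthetic 32x32 RGB "nameplate" images.
rng = np.random.default_rng(2)
images = [rng.integers(0, 256, size=(32, 32, 3)).astype(np.float64)
          for _ in range(8)]

# Data-parallel map over the images, as a Flink pipeline would distribute
# the same operator across partitions of the input stream.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(preprocess, images))

serial = [preprocess(img) for img in images]
```

Because each image is processed independently, the parallel map is guaranteed to match the serial loop bit-for-bit; the speedup comes purely from overlapping the per-image work.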
- Framework for QCA Layout Generation and Rules for Rotated Cell Design
Authors: Raja Sekar Kumaresan, Marshal Raj, Lakshminarayanan Gopalakrishnan Abstract: Journal of Circuits, Systems and Computers, Ahead of Print. Quantum-dot Cellular Automata (QCA) is a nontransistor-based nanotechnology circuit design paradigm. Circuits are implemented using cells containing quantum dots and electrons. There are several cell configurations with varying combinations of electrons and quantum dots, but the widely used cell has the four-dot, two-electron structure. Circuits are realized and validated using QCADesigner; however, layouts are developed manually in this tool, and layout generation is not fully automated in QCA. Hence, in this work, the existing QCA tools and the techniques proposed in the literature to improve QCA layout generation are analyzed, and a complete framework for QCA layout generation is proposed. The work also explores the gap that must be filled to achieve a reliable CAD tool for QCA layout generation. In addition, design rules and cost functions are proposed for designing circuits with rotated cells. Novel multiplexer and D-flip-flop circuits using rotated cells are also proposed. The proposed designs have better output polarization than other designs. Verification is done in QCADesigner. Citation: Journal of Circuits, Systems and Computers PubDate: 2022-11-14T08:00:00Z DOI: 10.1142/S0218126623501141