Arabian Journal for Science and Engineering   [SJR: 0.345]   [H-I: 20]
Hybrid journal (It can contain Open Access articles)
ISSN (Print): 1319-8025
Published by Springer-Verlag
• Forecasting Chaotic Time Series Via Anfis Supported by Vortex Optimization
Algorithm: Applications on Electroencephalogram Time Series
• Authors: Utku Kose; Ahmet Arslan
Pages: 3103 - 3114
Abstract: In time series analysis, forecasting is an important sub-field, and forecasting chaotic systems in particular has been a remarkable research direction. Several application areas stand to benefit from such forecasts; for instance, forecasting electroencephalogram (EEG) time series enables researchers to learn more about the future status of brain activity in physical or pathological cases. In this sense, this work introduces a hybrid system based on ANFIS and a new optimization algorithm called the vortex optimization algorithm (VOA). The system provides a simple yet robust alternative for forecasting EEG time series. The evaluation applications performed show that the ANFIS–VOA approach forecasts EEG time series effectively, as a result of the learning–reasoning infrastructure achieved by combining two different artificial intelligence techniques.
PubDate: 2017-08-01
DOI: 10.1007/s13369-016-2279-z
Issue No: Vol. 42, No. 8 (2017)
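The idea of forecasting a chaotic series from its own past can be illustrated with a far simpler baseline than the paper's ANFIS–VOA system: a Takens delay embedding plus nearest-neighbour lookup. The sketch below is illustrative only (it is not the paper's method, and a chaotic logistic map stands in for EEG data):

```python
# Baseline chaotic-series forecaster: delay embedding + 1-nearest-neighbour.
# NOT the paper's ANFIS-VOA model; a minimal reference approach for intuition.

def logistic_series(n, x0=0.4, r=3.9):
    """Generate a chaotic logistic-map series as a stand-in for EEG data."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def forecast_next(history, dim=3):
    """Predict the next value: find the past window of length `dim` most
    similar to the latest `dim` values and return that window's observed
    successor (valid because a chaotic system is still deterministic)."""
    query = tuple(history[-dim:])
    n = len(history)
    def dist(i):
        return sum((history[i + k] - query[k]) ** 2 for k in range(dim))
    best = min(range(n - dim), key=dist)   # window history[best:best+dim]
    return history[best + dim]             # its known successor
```

With a few thousand samples the one-step forecast tracks the true next value closely, which is the behaviour any more sophisticated forecaster (such as ANFIS–VOA) must improve upon.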

• DCCD: Distributed N-Body Rigid Continuous Collision Detection for
Large-Scale Virtual Environments
• Authors: Peng Du; Jieyi Zhao; Weijuan Cao; Yigang Wang
Pages: 3141 - 3147
Abstract: Continuous collision detection (CCD) is a process to interpolate the trajectory of polygons and detect collisions between successive time steps. However, this process is time-consuming, especially for a large number of moving polygons. In this paper, we present a parallel CCD algorithm, which aims to accelerate N-body rigid CCD culling by distributing the load across a distributed-memory system. This algorithm is particularly suitable for large-scale distributed simulations. Experimental results, based on a message passing interface implementation, demonstrate that our approach is more computationally efficient than existing sequential CCD approaches.
PubDate: 2017-08-01
DOI: 10.1007/s13369-016-2411-0
Issue No: Vol. 42, No. 8 (2017)
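For intuition about what CCD computes, the simplest primitive pair, two linearly moving spheres, reduces to solving a quadratic for the first time within the step at which the centre distance equals the radius sum. This sketch is illustrative (it is not the paper's distributed N-body algorithm):

```python
import math

def ccd_spheres(p1, v1, r1, p2, v2, r2):
    """Continuous collision detection for two spheres moving linearly over
    one time step: return the first contact time t in [0, 1], or None.
    Positions/velocities are tuples of any matching dimension."""
    # Work in the frame of sphere 1: relative position and velocity.
    p = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(v1, v2)]
    R = r1 + r2
    a = sum(x * x for x in v)
    b = 2.0 * sum(px * vx for px, vx in zip(p, v))
    c = sum(x * x for x in p) - R * R
    if c <= 0.0:
        return 0.0                       # already overlapping at t = 0
    if a == 0.0:
        return None                      # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # paths never reach distance R
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # earlier root = first contact
    return t if 0.0 <= t <= 1.0 else None
```

Unlike discrete per-frame tests, this catches fast objects that would "tunnel" past each other between two sampled positions; the paper's contribution is distributing many such pairwise tests across a cluster.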

• An Efficient Distributed Algorithm for Big Data Processing
• Authors: Mohammed S. Al-kahtani; Lutful Karim
Pages: 3149 - 3157
Abstract: This paper introduces an efficient distributed data analysis framework for big data that performs processing both at the data-collecting nodes and at the central server, in contrast to existing frameworks that process data only at the central server. Because data are pre-processed at the collecting end, the volume that the commodity computers at the server side must process is reduced. The proposed distributed algorithm runs both on low-powered nodes such as sensors and on high-speed commodity computers, and it performs sequential or parallel processing depending on the amount of data received at the central server. Simulation results demonstrate that the proposed distributed algorithm outperforms traditional distributed algorithms in terms of the size of data to be processed at the central server and the data processing time.
PubDate: 2017-08-01
DOI: 10.1007/s13369-016-2405-y
Issue No: Vol. 42, No. 8 (2017)
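The node-side/server-side split described above can be sketched as follows. Function names and the choice of summary statistics are illustrative assumptions, not taken from the paper; the point is that each collecting node ships a small summary instead of its raw readings:

```python
# Sketch of edge pre-aggregation: each collecting node reduces raw readings
# to a fixed-size summary, so the server merges summaries, not raw data.

def node_summarize(readings):
    """Run at a sensor/collecting node: compress raw readings into
    (count, sum, min, max) so only four numbers travel to the server."""
    return (len(readings), sum(readings), min(readings), max(readings))

def server_merge(summaries):
    """Run at the central server: combine per-node summaries into global
    statistics without ever receiving the raw data."""
    n = sum(s[0] for s in summaries)
    total = sum(s[1] for s in summaries)
    return {"count": n,
            "mean": total / n,
            "min": min(s[2] for s in summaries),
            "max": max(s[3] for s in summaries)}
```

The server workload now scales with the number of nodes rather than the number of readings, which is the data reduction the abstract claims.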

• A Controlled Experiment on Comparison of Data Perspectives for Software
Requirements Documentation
• Authors: Iyas Ibriwesh; Sin-Ban Ho; Ian Chai; Chuie-Hong Tan
Pages: 3175 - 3189
Abstract: Requirements define what the stakeholders or end users want and what the system must have to satisfy their needs. Poor requirements arise from a documentation format that paints an incomplete picture: documentation lacking necessary details causes requirements engineers to waste time arguing over what to do and how to do it, leading to budget and schedule overruns. This research was conducted using the E-market application domain as a test context. Three documentation data perspectives frequently applied to document stakeholders' statements were evaluated, namely entity relationship diagram (ERD), natural language (NL), and class diagram (CD). Due to the lack of research in this area, a controlled experiment focusing on requirements documentation was conducted, in which ERD, NL, and CD were compared among 103 participants. The experiment rests on the participants' ability to transform each perspective into a use case model; this transformation measures the efficiency of each perspective. Participants who used the ERD perspective scored significantly higher, reported fewer difficulties, and used less time than those who used the NL and CD perspectives. The study results indicate that the ERD is easier to understand, more helpful, and less time-consuming for documenting requirements than the other two perspectives. In conclusion, using the ERD perspective in developing the E-market application domain could be more effective for documenting preliminary requirements than the NL and CD perspectives.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2425-2
Issue No: Vol. 42, No. 8 (2017)

• Pedestrian Detection with Minimal False Positives per Color-Thermal Image
• Authors: Masoud Afrakhteh; Park Miryong
Pages: 3207 - 3219
Abstract: This research builds on aggregate channel features for pedestrian detection, with the main focus on a simple way to reduce the number of false positives per image. Removing the excessive false positives increases the accuracy of the detector, provided the miss rate is kept as low as possible. To omit such unwanted false positives, we utilize an image categorization method that separates day and night images in order to minimize the misclassification rate. Furthermore, the best extension of the aggregate channel features method is analyzed and recommended as a base detector. As a result, a night-time pre-trained pedestrian detector is applied only to night images, and a daytime detector only to daytime images. Thus, a large number of false positives are avoided while the miss rate is greatly reduced.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2424-3
Issue No: Vol. 42, No. 8 (2017)
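The routing idea, categorize each frame and hand it to the detector trained for that condition, can be sketched as below. The brightness-threshold categorizer and the threshold value are illustrative assumptions; the paper's categorization method may differ:

```python
# Sketch of day/night routing: classify each frame by mean brightness and
# dispatch it to the matching pre-trained detector. Threshold is made up.

DAY_THRESHOLD = 0.35  # mean intensity in [0, 1]; would be tuned on validation data

def categorize(frame):
    """frame: 2-D list of grey intensities in [0, 1]."""
    pixels = [p for row in frame for p in row]
    return "day" if sum(pixels) / len(pixels) >= DAY_THRESHOLD else "night"

def detect_pedestrians(frame, day_detector, night_detector):
    """Route the frame to the matching detector, so a night-trained model
    never runs on daytime images and vice versa, cutting false positives."""
    detector = day_detector if categorize(frame) == "day" else night_detector
    return detector(frame)
```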

• A Novel Shape-Based Character Segmentation Method for Devanagari Script
• Authors: Khushneet Jindal; Rajiv Kumar
Pages: 3221 - 3228
Abstract: This paper presents a new algorithm to extract shape-oriented feature vectors using pixel intensities from offline printed Devanagari script documents. Almost all characters of the script carry a Shirorekha (header line) on their upper portion, which makes segmentation a difficult and complex problem. The problem gets more challenging when images have multiple gray levels or are skewed and noisy. A new fast and effective algorithm is designed using gradient structural information, and its performance is evaluated on a challenging dataset containing 80 printed documents comprising around 87,000 characters. Experimental results show that the proposed algorithm achieves 98.56% accuracy, which is 2.66% higher than that reported in the literature. The proposed algorithm is also time efficient and less complex than existing methods.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2420-7
Issue No: Vol. 42, No. 8 (2017)
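Why the Shirorekha complicates segmentation, and the standard first step of locating and removing it, can be shown with a simple projection-profile sketch. The paper uses gradient structural information; the far simpler variant below only illustrates the idea:

```python
# Sketch of Shirorekha (header line) localization by horizontal projection.
# Illustrative only; the paper's algorithm uses gradient structural info.

def find_shirorekha(image):
    """image: 2-D list of 0/1 pixels (1 = ink). The Shirorekha is a
    near-solid horizontal stroke, so it appears as the row with the
    highest ink count in the upper half of the word image."""
    upper = image[: max(1, len(image) // 2)]
    counts = [sum(row) for row in upper]
    return counts.index(max(counts))          # row index of the header line

def remove_shirorekha(image, row):
    """Blank the header row so the characters below it disconnect,
    making vertical segmentation straightforward."""
    return [r if i != row else [0] * len(r) for i, r in enumerate(image)]
```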

• An Enhanced Vertical Handover Based on Fuzzy Inference MADM Approach for
Heterogeneous Networks
• Authors: Aymen Ben Zineb; Mohamed Ayadi; Sami Tabbane
Pages: 3263 - 3274
Abstract: Seamless handover between different radio access technologies is a great challenge, since it must guarantee link continuity and satisfy the subscriber's quality-of-service (QoS) requirements. The performance of roaming across different technologies depends on the number of implemented handover criteria, which can be large. Multiple attribute decision making (MADM) methods, such as simple additive weighting, the technique for order preference by similarity to ideal solution, and the compromise ranking method VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje), have been proposed to solve this kind of multi-criteria problem. However, the major shortcomings of these techniques are high decision delay and complexity. In this paper, we propose a novel handover (HO) scheme called Fuzzy-MADM, which combines a classical MADM method with a fuzzy logic inference system in order to reduce decision time. The optimal network is selected based on multiple criteria such as network QoS indicators, mobile station speed, battery level and signal strength. The efficiency of our approach is evaluated through appropriate simulations; the results on HO delay, throughput, complexity and the number of executed handovers show the performance of our proposal compared to classical MADM and load balancing methods.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2418-1
Issue No: Vol. 42, No. 8 (2017)
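As background, the simple additive weighting (SAW) step that such schemes build on ranks candidate networks by a weighted sum of min-max-normalized criteria. The sketch below is a generic SAW implementation with illustrative criteria, not the paper's Fuzzy-MADM scheme:

```python
# Minimal simple-additive-weighting (SAW) network selection.
# Criteria, weights and values here are illustrative, not from the paper.

def saw_select(networks, weights, benefit):
    """networks: {name: [criterion values]}; weights sum to 1;
    benefit[i] is True if a larger value of criterion i is better
    (e.g. signal strength) and False if smaller is better (e.g. delay)."""
    n_crit = len(weights)
    cols = [[v[i] for v in networks.values()] for i in range(n_crit)]

    def norm(value, i):
        lo, hi = min(cols[i]), max(cols[i])
        if hi == lo:
            return 1.0                      # criterion does not discriminate
        x = (value - lo) / (hi - lo)
        return x if benefit[i] else 1.0 - x # invert cost criteria

    scores = {name: sum(w * norm(v[i], i) for i, w in enumerate(weights))
              for name, v in networks.items()}
    return max(scores, key=scores.get), scores
```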

• EIMAKP: Heterogeneous Cross-Domain Authenticated Key Agreement Protocols
in the EIM System
• Authors: Chao Yuan; Wenfang Zhang; Xiaomin Wang
Pages: 3275 - 3287
Abstract: In recent years, instant messaging (IM) has become a popular communication technology around the world, and the enterprise instant messaging (EIM) system is one of IM's applications for enterprise use. Existing studies of EIM systems address the design of functional components and the communication process, usually based on the XMPP protocol suite. This paper instead considers EIM security from another perspective: identity authentication and key agreement between users and services. Several EIM systems rely on public key infrastructure (PKI) to achieve the high-security requirements of enterprises, while identity-based cryptography (IBC) offers a new development direction for EIM systems. Although most EIM applications are deployed independently in different enterprises, heterogeneous cross-domain service access by users has become an inevitable trend; however, no heterogeneous cross-domain authentication protocol between the PKI domain and the IBC domain has yet been proposed. To address this problem, a novel and detailed heterogeneous cross-domain authenticated key agreement scheme is proposed in this paper. By utilizing a PKI-based distributed trust model and access authorization tickets, the scheme realizes interconnection and seamless authentication between the PKI domain and the IBC domain. Analysis shows that the proposed scheme is theoretically correct while guaranteeing high security and efficiency.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2447-9
Issue No: Vol. 42, No. 8 (2017)

• Accurate and Fast Computation of Exponent Fourier Moment
• Authors: Satya P Singh; Shabana Urooj
Pages: 3299 - 3306
Abstract: Exponent Fourier moments (EFMs) are suitable for image representation and invariant pattern recognition. EFMs possess more uniformly distributed zeros than Zernike moments. However, these moments tend to be unstable near the center of the image and show rising reconstruction error at higher moment orders. In this paper, we propose a new computational framework that calculates the traditional EFM by partitioning the radial and angular parts into equally spaced sectors. The proposed approach is simple and yields better image representation capability, numerical stability, and computational speed. Moreover, it is completely stable near the center of the image.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2465-7
Issue No: Vol. 42, No. 8 (2017)

• Enhancing Efficiency of the Test Case Prioritization Technique by
Improving the Rate of Fault Detection
• Authors: Soumen Nayak; Chiranjeev Kumar; Sachin Tripathi
Pages: 3307 - 3323
Abstract: Test case prioritization techniques order test cases for execution so as to enhance their efficacy with respect to some performance goal. The main aim of regression testing is to test the amended software to ensure that the amendments are correct. It is not always feasible to re-run an entire test suite because resources are limited, so effective techniques are needed that enhance regression testing by ordering test cases according to some testing criterion. One such criterion is the test suite's fault detection rate: test cases are arranged so that higher-priority test cases run earlier than lower-priority ones. This paper proposes a methodology for prioritizing regression test cases based on four factors, namely the rate of fault detection, the number of faults detected, the test case's ability to detect risk, and the test case's effectiveness. The proposed approach is implemented on two projects. The resulting test case order is compared with other prioritization techniques, namely no prioritization, random, reverse, and optimal prioritization, and with previous works for project 1. We apply the approach to order test cases so as to maximize fault coverage with the least test suite execution, and compare its effectiveness with the other orderings. The proposed approach yields a higher average percentage of faults detected and outperforms all the other approaches.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2466-6
Issue No: Vol. 42, No. 8 (2017)
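A multi-factor prioritization of this kind reduces to scoring each test case by a weighted combination of its factors and sorting in descending order. The weights and the linear combination below are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of four-factor test-case prioritization. The weight values are
# made up for illustration; the paper defines its own combination.

def prioritize(test_cases, weights=(0.4, 0.3, 0.2, 0.1)):
    """test_cases: {tc_id: (fault_rate, faults_found, risk, effectiveness)},
    each factor already normalized to [0, 1]. Returns ids ordered so the
    highest-priority test cases run first."""
    def score(factors):
        return sum(w * x for w, x in zip(weights, factors))
    return sorted(test_cases, key=lambda tc: score(test_cases[tc]), reverse=True)
```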

• A Generic Framework for Building Heterogeneous Simulations of Parallel and
Distributed Computing Systems
• Authors: Taner Dursun; Hasan Dağ
Pages: 3357 - 3373
Abstract: Many systems are available for parallel and distributed computing (PDC) applications, such as grids, clusters, supercomputers, clouds, peer-to-peer and volunteer computing systems. High-performance computing (HPC) has been an obvious candidate domain to take advantage of PDC systems. Most HPC research has been conducted with simulations and has generally focused on a specific type of PDC system. This paper, however, introduces a general-purpose simulation model that can easily be extended to construct simulations of many of the most well-known PDC system types. Current simulation tools do not properly support cooperation between software working in real time and in simulation time, although such support could open a new vision for research activities in the simulation community. We therefore also present a promising approach for constructing hybrid simulations that offers great potential for many research areas. As a proof of concept, we implemented a prototype of our simulation model and used it to build simulations of various PDC systems. Thanks to the hybrid simulation support of our model, we are able to combine and manage the simulated PDC systems with our previously developed policy-based management framework in simulation runs.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2497-z
Issue No: Vol. 42, No. 8 (2017)

• Popularity-Aware Content Caching for Distributed Wireless Helper Nodes
• Authors: Furqan H. Khan; Zeashan Khan
Pages: 3375 - 3389
Abstract: Content caching enables end users to obtain contents quickly. To further reduce access time in a mobile wireless network, we consider a device-to-device communication scenario where contents are distributed and users may obtain them directly from neighbors instead of accessing the base station. To maximize the cache hit rate per unit of bandwidth consumed, we derive a theoretical model based on the probability distribution of content popularity. To make the problem tractable, we propose a popularity-aware content caching mechanism using a modified weighted Zipf distribution. In simulations with a real YouTube trace, the proposed algorithm is compared with the existing well-known least-recently-used algorithm in terms of the achieved hit rate, caching time, cache eviction rate, and consumed bandwidth. The results show that our approach achieves near-optimal performance while approximating the optimal time that a cached content resides at a device.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2505-3
Issue No: Vol. 42, No. 8 (2017)
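The role of the Zipf law here is easy to see in code: if request popularity follows Zipf and the cache holds the most popular items, the expected hit rate is simply their total request probability. The sketch below uses a plain Zipf law with an illustrative skew parameter; the paper's modified weighted variant differs:

```python
# Sketch of popularity-aware caching under a Zipf popularity law.
# Skew and cache size are illustrative; the paper uses a modified
# weighted Zipf distribution.

def zipf_popularity(n_items, skew=0.8):
    """Request probability of item ranked k: proportional to 1/(k+1)^skew,
    normalized to sum to 1 over the catalogue."""
    raw = [(k + 1) ** -skew for k in range(n_items)]
    total = sum(raw)
    return [r / total for r in raw]

def expected_hit_rate(popularity, cache_size):
    """If the cache holds the `cache_size` most popular items, the
    expected hit rate equals their combined request probability."""
    top = sorted(popularity, reverse=True)[:cache_size]
    return sum(top)
```

Because Zipf mass concentrates on the head ranks, a small cache already captures a large share of requests, which is what makes popularity-aware placement worthwhile.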

• Electronically Tunable Fractional Order Filter
• Authors: Rakesh Verma; Neeta Pandey; Rajeshwari Pandey
Pages: 3409 - 3422
Abstract: In this paper, an electronically tunable resistorless fractional order filter (FOF) based on the operational transconductance amplifier (OTA) is presented. It uses two fractional capacitors (FCs) of the same order and provides fractional order low-pass and band-pass filter responses simultaneously. Mathematical formulations are outlined for various critical frequencies and transfer function sensitivities of the presented FOF. FCs of orders 0.5 and 0.9 are considered for illustrating the proposal; they are realized using the fourth-order continued fraction expansion-based RC ladder and characterized using SPICE simulations. Functional verification of the presented FOF with FCs of orders 0.5 and 0.9 is exhibited through SPICE simulations. The OTA is implemented using 0.5 µm CMOS technology model parameters. Electronic tunability of the half-power and right-phase frequencies of the presented FOF is achieved through bias current variation of the OTA. The transfer functions' sensitivity with respect to various circuit parameters is also examined through simulations, and the values remain well within unity for most circuit parameters. Furthermore, the presented FOF is attractive from an integration viewpoint as it achieves tunability via bias current variation, in contrast to tuning through resistor variation in existing FOFs.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2500-8
Issue No: Vol. 42, No. 8 (2017)

• Comparative Modular Exponentiation with Randomized Exponent to Resist
Power Analysis Attacks
• Authors: Hridoy Jyoti Mahanta; Ajoy Kumar Khan
Pages: 3423 - 3434
Abstract: We present a secure variant of modular exponentiation, implemented in RSA and CRT-RSA, to resist power analysis attacks. For speed, modular exponentiation is generally computed by squaring and multiplying according to the bits of the secret exponent, popularly known as the straightforward (square-and-multiply) method. However, such computation leaves behind distinct power consumption traces, enabling power analysis attacks. These attacks are powerful enough to reveal the secret exponent or key, challenging the security of any cryptosystem. In this work, we enhance the security of modular exponentiation by first randomizing the exponent (the secret key) and then executing comparative bitwise squaring and multiplication. The proposed method can be implemented left-to-right or right-to-left without any modification of the algorithm, which removes the dependency of squaring and multiplication on the bits of the key. Since RSA decryption carries the greater security privilege, we implement the proposed method only while decrypting the message, although it can be used during encryption too. The randomized key, bit-independent squaring and multiplication, and comparative modular exponentiation together generate non-uniform, random power traces, resisting power analysis attacks.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2517-z
Issue No: Vol. 42, No. 8 (2017)
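The exponent-randomization idea can be demonstrated with classic exponent blinding: replace the fixed secret exponent d with d' = d + r·φ(n) for a fresh random r, so the square-and-multiply bit pattern differs on every decryption while the result is unchanged by Euler's theorem. This is a standard countermeasure shown for intuition, not the paper's specific comparative scheme; the toy parameters are deliberately tiny:

```python
# Exponent blinding sketch: d' = d + r*phi(n) gives the same plaintext
# (for gcd(c, n) = 1) but a different exponent bit pattern each run,
# so repeated power traces no longer align. Toy RSA sizes only.

import random

def blinded_decrypt(c, d, n, phi_n):
    r = random.randrange(1, 1 << 16)   # fresh random blind per decryption
    d_blind = d + r * phi_n            # same residue class modulo phi(n)
    return pow(c, d_blind, n)          # equals pow(c, d, n) by Euler's theorem

# Toy RSA parameters (never use sizes like this in practice).
p, q = 61, 53
n, phi_n = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi_n)                  # modular inverse (Python >= 3.8)
```

Every call exponentiates with a different d', yet all of them decrypt to the same message.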

• A Novel LWCSO-PKM-Based Feature Optimization and Classification of Attack
• Authors: Dhanalakshmi Krishnan Sadhasivan; Kannapiran Balasubramanian
Pages: 3435 - 3449
Abstract: Supervisory Control and Data Acquisition (SCADA) systems are widely used for remote monitoring and control of large-scale manufacturing plants and power grids. Developing high-security SCADA is a major requirement because architectural constraints leave these systems vulnerable to attack. Decision making about controlling power flows and replacing faulty devices rests on classifying the system state as normal or attacked, and sensor observations play the major role in classifying normal and abnormal patterns. As the number of observations increases, the dimensionality of the features grows, raising the chance of misleading results during classification. Various classification and intrusion detection (ID) algorithms are available to reduce feature dimensionality for better classification. This paper proposes a novel approach for feature optimization and classification of attack types in the SCADA network with better performance than the existing algorithms. The Linear Weighted Cuckoo Search Optimization (LWCSO) algorithm selects the best features from the overall feature set, and a Probabilistic Kernel Model (PKM) updates the weight function of each node to form clusters representing the optimal features. A label is applied to each cluster based on the difference between the labeled training feature set and the testing feature set, and these labels are used to detect anomalous nodes in the network area. If the classified attack type is already known, appropriate action is taken immediately; if it is unknown, the type is added to the database. Periodic discovery of attack types and database updates with unknown attacks effectively increase detection ability. Performance analysis shows that the proposed LWCSO-PKM approach outperforms existing classification techniques and IDS algorithms.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2524-0
Issue No: Vol. 42, No. 8 (2017)

• Evolution Prediction and Process Support of OSS Studies: A Systematic
Mapping
• Authors: Ghulam Rasool; Nancy Fazal
Pages: 3465 - 3502
Abstract: Open source software (OSS) evolution is an important research domain that is continuously receiving more attention from researchers. A large number of studies have been published on different aspects of OSS evolution, presenting various metrics, models, processes and tools for predicting it. This body of work motivates a contemporary and comprehensive review of the literature on OSS evolution prediction. We present a systematic mapping that covers two contexts of OSS evolution studies conducted so far: OSS evolution prediction and OSS evolution process support. We selected 98 primary studies from a large dataset, comprising 56 conference, 35 journal and 7 workshop papers. The major focus of this systematic mapping is to study and analyze the metrics, models, methods and tools used for OSS evolution prediction and evolution process support. We identified 20 different categories of metrics used by OSS evolution studies; the results show that the SLOC metric is the most widely used. We found 13 different models applied to different areas of evolution prediction, with auto-regressive integrated moving average models used most often. Furthermore, we report 13 different approaches, methods and tools in the existing literature for evolution process support, addressing different aspects of evolution.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2556-5
Issue No: Vol. 42, No. 8 (2017)

• A Novel, Efficient, Robust, and Blind Imperceptible 3D Anaglyph Image
Watermarking
• Authors: Hidangmayum Saxena Devi; Khumanthem Manglem Singh
Pages: 3521 - 3533
Abstract: This paper proposes a novel, robust and efficient blind three-dimensional anaglyph image-watermarking scheme for copyright protection, which utilizes the nonsubsampled contourlet transform and principal component analysis for embedding and extracting a binary watermark. The binary watermark is scrambled using the Arnold transform for security and embedded, using principal component analysis, into the selected subband of the nonsubsampled contourlet-transformed 3D anaglyph image. The nonsubsampled contourlet transform is chosen because its properties help resist intentional image processing attacks, and principal component analysis is used to decorrelate the data. The proposed system is tested on the Middlebury stereo dataset, and experimental results show that the scheme is highly imperceptible and outperforms existing anaglyph watermarking schemes in terms of both robustness and imperceptibility.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2531-1
Issue No: Vol. 42, No. 8 (2017)
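The Arnold transform used to scramble the watermark is a concrete, well-defined step worth seeing in code: on an N×N image, pixel (x, y) maps to ((x + y) mod N, (x + 2y) mod N), a bijection (the matrix has determinant 1), so the scramble is exactly invertible. A minimal sketch, not tied to the paper's embedding pipeline:

```python
# Arnold (cat map) scrambling of a square binary watermark. The forward
# map is a bijection mod N, so the inverse map recovers the original.

def arnold(image):
    """One forward Arnold iteration on an N x N image (lists of rows)."""
    n = len(image)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            out[(x + 2 * y) % n][(x + y) % n] = image[y][x]
    return out

def arnold_inverse(image):
    """Undo one Arnold iteration by reading pixels back from their
    mapped positions."""
    n = len(image)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            out[y][x] = image[(x + 2 * y) % n][(x + y) % n]
    return out
```

In practice the watermark is scrambled for a secret number of iterations before embedding, and unscrambled with the same count (or by exploiting the map's periodicity) at extraction time.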

• An Algorithmic Approach for Predicting Unknown Information in Incomplete
Fuzzy Soft Set
• Authors: Sujit Das; Sumonta Ghosh; Samarjit Kar; Tandra Pal
Pages: 3563 - 3571
Abstract: This paper proposes a novel approach to estimate the missing or unknown information in incomplete fuzzy soft sets (FSSs). Incomplete information in fuzzy soft sets leads to more uncertainty and ambiguity in decision making, so representing unknown or missing information using the available knowledge is becoming increasingly important. The proposed method initially finds the mean value of each parameter exploiting the existing information; then the average distance of each parameter from the mean is computed. Two useful distance measures are derived from the average distance and the mean. Next we determine the unknown information using the probabilistic weight and the distance information. To generalize the concept, we also extend the proposed approach to find missing or unknown information in interval-valued fuzzy soft sets. Two illustrative examples show the effectiveness of the developed approaches. The result of the proposed method for FSSs has been compared with the existing method using two well-known entropy measures, Kosko's (Inf Sci 40:165–174, 1986) entropy and De Luca and Termini's (Inf Control 20:301–312, 1972) entropy. The comparative analysis shows that the proposed approach is preferable as it has less entropy, i.e., a lower degree of fuzziness, than the existing approach.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2591-2
Issue No: Vol. 42, No. 8 (2017)
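The first step described above, summarizing each parameter from its known entries, can be sketched with plain per-parameter mean imputation. The paper refines this with distance-based probabilistic weights; the version below only illustrates the mechanics of completing an incomplete fuzzy soft set:

```python
# Sketch: complete an incomplete fuzzy soft set by filling each unknown
# membership value with the mean of the known values for that parameter.
# The paper's method additionally weights by distance information.

def impute(fss):
    """fss: rows = objects, columns = parameters, values in [0, 1] or
    None for unknown. Returns a completed copy."""
    n_params = len(fss[0])
    col_means = []
    for j in range(n_params):
        known = [row[j] for row in fss if row[j] is not None]
        col_means.append(sum(known) / len(known))   # assumes >= 1 known value
    return [[v if v is not None else col_means[j]
             for j, v in enumerate(row)] for row in fss]
```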

• Extended Absolute Fuzzy Connectedness Segmentation Algorithm Utilizing
Region and Boundary-Based Information
• Authors: T. H. Farag; W. A. Hassan; H. A. Ayad; A. S. AlBahussain; U. A. Badawi; M. K. Alsmadi
Pages: 3573 - 3583
Abstract: Image segmentation is the process of dividing an image into meaningful objects on which different analysis operations can be performed. Fuzzy connectedness (FC)-based segmentation methods usually give robust results; on the other hand, they suffer from some weaknesses. The generalized or absolute fuzzy connectivity (GFC) segmentation method is the foundation of most FC-based methods and has two apparent weaknesses: it merges different objects when their boundaries are blurred, and it cannot find the object of interest if the threshold value is determined non-interactively. In this manuscript, we introduce extensions to the GFC algorithm to tackle these weaknesses. The FC and affinity functions in the extended algorithm utilize region- and boundary-based information to overcome the first weakness. Moreover, the extended algorithm generates a near-optimal threshold automatically, eliminating the need for any interaction. Comparisons have been made to quantitatively evaluate the proposed algorithm over three sorts of scene data sets, and relevance measures have been calculated for two of them. Results indicate improved segmentation accuracy and show that the weaknesses of the traditional GFC algorithm have been eliminated to some extent.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2577-0
Issue No: Vol. 42, No. 8 (2017)
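The core GFC computation, which the extension above builds on, is worth sketching: the connectedness of a pixel to a seed is the strength of the best path between them, where a path is only as strong as its weakest affinity link, and the object is the set of pixels whose connectedness exceeds a threshold. The sketch below uses a simple intensity-only affinity (the paper's extension adds boundary-based information) and a Dijkstra-style propagation:

```python
# Sketch of absolute fuzzy connectedness: best-path "weakest link"
# strengths from a seed, via a max-heap Dijkstra variant. Intensity-only
# affinity; illustrative, not the paper's extended algorithm.

import heapq

def fuzzy_connect(image, seed, thresh=0.8):
    """image: 2-D list of intensities in [0, 1]; seed: (row, col).
    Returns the set of pixels whose connectedness to the seed >= thresh."""
    h, w = len(image), len(image[0])
    def affinity(a, b):                      # similar intensities bind strongly
        return 1.0 - abs(image[a[0]][a[1]] - image[b[0]][b[1]])
    conn = {seed: 1.0}
    heap = [(-1.0, seed)]                    # max-heap via negated strengths
    while heap:
        strength, (y, x) = heapq.heappop(heap)
        strength = -strength
        if strength < conn.get((y, x), 0.0):
            continue                         # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Path strength = weakest link along the path.
                s = min(strength, affinity((y, x), (ny, nx)))
                if s > conn.get((ny, nx), 0.0):
                    conn[(ny, nx)] = s
                    heapq.heappush(heap, (-s, (ny, nx)))
    return {p for p, s in conn.items() if s >= thresh}
```

The threshold here is the quantity the paper's extension derives automatically instead of requiring interactive tuning.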

• Foreground Detection via Background Subtraction and Improved Three-Frame
Differencing
• Authors: Sandeep Singh Sengar; Susanta Mukhopadhyay
Pages: 3621 - 3633
Abstract: Moving object detection is a widely used and important research topic in computer vision and video processing. Foreground aperture, ghosting and sudden illumination changes are the main problems in moving object detection. To address these problems, this work proposes two approaches: (i) an improved three-frame difference method and (ii) a combination of background subtraction and the improved three-frame difference method, for detecting multiple moving objects in indoor and outdoor real video datasets. This work accurately detects moving objects of varying size and number in different complex environments. We compute the detection error and processing time of the two proposed approaches as well as of previously existing ones. Experimental results and error rate analysis show that our methods detect moving targets more efficiently and effectively than the traditional approaches.
PubDate: 2017-08-01
DOI: 10.1007/s13369-017-2672-2
Issue No: Vol. 42, No. 8 (2017)
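As background, the classic three-frame differencing that the paper improves on marks a pixel as foreground only if it changed between frames 1 and 2 AND between frames 2 and 3, which suppresses the "ghost" that plain two-frame differencing leaves at an object's old position. A minimal sketch of that baseline (the threshold value is illustrative):

```python
# Classic three-frame differencing baseline: foreground = pixels that
# changed in BOTH consecutive frame pairs, suppressing ghost regions.

def three_frame_diff(f1, f2, f3, thresh=0.1):
    """f1..f3: consecutive grey frames as 2-D lists of intensities in
    [0, 1]; returns a 0/1 foreground mask for the middle frame f2."""
    h, w = len(f2), len(f2[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d1 = abs(f2[y][x] - f1[y][x]) > thresh   # changed between 1 and 2
            d2 = abs(f3[y][x] - f2[y][x]) > thresh   # changed between 2 and 3
            mask[y][x] = 1 if (d1 and d2) else 0     # AND kills the ghost
    return mask
```

For a bright pixel sliding right one position per frame, only the object's position in the middle frame survives the AND; the vacated position (the ghost) is rejected because it only changed in one of the two pairs.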

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
