Arabian Journal for Science and Engineering   [SJR: 0.345]   [H-Index: 20]   Hybrid journal (may contain Open Access articles)   ISSN (Print) 1319-8025   Published by Springer-Verlag
• Weighting Factor Selection Techniques for Predictive Torque Control of
Induction Motor Drives: A Comparison Study
• Authors: M. Mamdouh; M. A. Abido; Z. Hamouz
Pages: 433 - 445
Abstract: For the last few years, predictive torque control (PTC) has attracted the attention of researchers due to its simplicity and effectiveness. In PTC, the flux-weighting factor needs to be adjusted carefully, since it greatly affects the performance of the drive system. Many research efforts have been devoted to selecting this weighting factor or even eliminating it. Each of these efforts illustrates a method to overcome this problem and presents an alternative to the conventional weighting factor calculation. This paper presents a critical evaluation of the performance of recently proposed methods for weighting factor selection in finite control set PTC. Based on the way the weighting factor is calculated, the methods are classified into offline and online methods. In this study, more focus is directed to the evaluation of the online methods, since they can update the weighting factor automatically when the operating point changes. Specifically, four recently developed methods, along with the conventional method, are considered. Flux ripple, torque ripple, current total harmonic distortion, and average switching frequency are adopted as the judging criteria for this comparison. Simulations at different operating points are used to assess the performance of each method, and the characteristics of each method are compared according to the suggested performance indices. The strengths and weaknesses of each method are highlighted, so that a suitable method can be identified for different application requirements.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2842-2
Issue No: Vol. 43, No. 2 (2018)
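The weighting-factor trade-off the abstract describes can be sketched as follows. This is an illustrative minimum only, not the paper's formulation: the torque/flux prediction model is omitted, and the candidate list, error values, and `lambda_flux` are hypothetical placeholders.

```python
# Sketch of a finite-control-set PTC cost evaluation (illustrative only).

def ptc_cost(torque_err, flux_err, lambda_flux):
    """Conventional PTC cost: torque error plus weighted flux error."""
    return abs(torque_err) + lambda_flux * abs(flux_err)

def select_voltage_vector(predictions, lambda_flux):
    """Pick the candidate inverter state minimising the cost.

    predictions: list of (state_index, torque_err, flux_err) tuples,
    one per candidate voltage vector.
    """
    return min(predictions,
               key=lambda p: ptc_cost(p[1], p[2], lambda_flux))[0]

# Three hypothetical candidate states with predicted errors.
candidates = [(0, 0.8, 0.02), (1, 0.3, 0.10), (2, 0.5, 0.01)]
best = select_voltage_vector(candidates, lambda_flux=10.0)
```

Note how the chosen state flips as `lambda_flux` changes, which is exactly why the factor must be tuned (or eliminated) with care.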

• A Systematic Review of Agent-Based Test Case Generation for Regression
Testing
• Authors: Pardeep Kumar Arora; Rajesh Bhatia
Pages: 447 - 470
Abstract: There is an urgent need to create awareness about the potential benefits of using agents in software test case generation and to identify the need to develop agent-based regression testing techniques and approaches, which may help reduce the time and cost required for testing. This study reports a systematic literature review of existing test case generation approaches for regression testing and of agent-based software testing systems. The emphasis is placed on agent-based regression test case generation, and further research directions are recommended. In the systematic literature review, we framed three sets of research questions. Based on our inclusion and exclusion criteria, we identified 115 potential research papers on test case generation in regression testing and agent-based software testing. We explored journals, international conferences and workshops, and identified 59 studies on test case generation for regression testing and 56 studies on agent-based software testing. The data extracted from our study are classified into seven broader areas of agent-based software testing. Based on our systematic literature survey, we identified available techniques, approaches, platforms and methodologies for regression test case generation and for developing agent-based software testing systems. This study will help researchers carry their work forward in the domain of regression test case generation and agent-based software testing. To cut down on schedule and cost, mobile agent-based software testing can be a promising alternative.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2796-4
Issue No: Vol. 43, No. 2 (2018)

• Robust Visual Tracking via Incremental Subspace Learning and Local Sparse
Representation
• Authors: Guoliang Yang; Zhengwei Hu; Jun Tang
Pages: 627 - 636
Abstract: Single-target tracking is an important part of computer vision, and its robustness is always limited by target occlusion, illumination change, target pose change and so on. To deal with this problem, this paper proposes a robust visual tracking algorithm based on incremental subspace learning and local sparse representation. The algorithm adopts local sparse representation to detect occlusion and rectifies the incremental learning error according to the occlusion detection outcome, so as to overcome the influence of occlusion on the target template. Moreover, the similarity between target templates and candidate templates is computed on the basis of local sparse representation. Within a particle filter framework, target tracking is achieved by combining the incremental error and the similarity measurement. Experimental results on several challenging sequences show that the proposed method performs better than state-of-the-art trackers.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2734-5
Issue No: Vol. 43, No. 2 (2018)

• Using the Modified Diffie–Hellman Problem to Enhance Client
Computational Performance in a Three-Party Authenticated Key Agreement
• Authors: Hung-Yu Chien
Pages: 637 - 644
Abstract: A three-party authenticated key agreement (3PAKA) scheme is a protocol that enables a pair of registered clients to establish session keys with the help of a trusted server, such that each client pre-shares its secret key with the server only. This approach greatly improves the scalability of key agreement protocols and provides better user convenience. Conventionally, 3PAKA schemes, like many other key agreement schemes, are based on the classic computational Diffie–Hellman problem (CDHP) to establish the session keys, and each client requires at least two modular exponentiations. However, as more and more mobile devices with limited resources become popular, it is desirable to reduce the computational load for those clients while still preserving strong security. In this paper, based on the modified CDHP, we propose new 3PAKA schemes which require only four message steps and reduce clients' exponentiation computations by up to 50%, compared to schemes that are based on the CDHP and provide the same functions. The security of the proposed schemes is formally proved. The excellent performance makes them very attractive for clients with limited resources.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2725-6
Issue No: Vol. 43, No. 2 (2018)
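As a reminder of the baseline cost the paper reduces, here is a toy two-party Diffie–Hellman exchange (textbook, deliberately insecure parameters) showing the two modular exponentiations each client performs in a classic CDHP-based scheme; the 3PAKA message flow itself is not reproduced here.

```python
# Toy Diffie-Hellman key exchange illustrating the per-client cost of a
# classic CDHP-based scheme: one exponentiation for the public value,
# one to derive the shared key. Parameters are tiny, for illustration only.

p, g = 23, 5          # public group parameters (insecure textbook values)
a, b = 6, 15          # each client's ephemeral secret

A = pow(g, a, p)      # client 1, exponentiation #1 (public value)
B = pow(g, b, p)      # client 2, exponentiation #1 (public value)

k1 = pow(B, a, p)     # client 1, exponentiation #2 (session key)
k2 = pow(A, b, p)     # client 2, exponentiation #2 (session key)
assert k1 == k2       # both sides derive the same key
```

Halving this exponentiation count, as the paper claims for its modified-CDHP construction, directly benefits resource-limited mobile clients.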

• Cloud Computing: A Multi-workflow Scheduling Algorithm with Dynamic
Reusability
• Authors: Mainak Adhikari; Santanu Koley
Pages: 645 - 660
Abstract: Cloud computing provides a dynamic environment for the well-organized deployment of commodity hardware and software and supports heterogeneous workflow applications in order to achieve high performance and improved throughput; the most demanding task is scheduling multiple workflow applications, each bounded by a fixed deadline. These workflow applications consist of interconnected jobs and data. Nevertheless, few initiatives have been tailored to the multi-workflow scheduling effort, although such scheduling problems have been studied methodically in the cloud setting. The availability of computing resources in the data center (DC) determines the exact execution time of each process, whereas in the majority of existing multi-workflow scheduling work the execution time of every process within a workflow is pre-calculated. System overhead is an additional concern when dynamically creating virtual machines (VMs) and reclaiming them to reduce power consumption. The aim of this paper is to reduce the execution time of every job and to complete the execution of all workflows within their deadlines by producing VMs dynamically in the DC and recycling them as necessary. We propose a dynamic multi-workflow scheduling algorithm, named the competent dynamic multi-workflow scheduling (CDMWS) algorithm. Simulation results show that it is among the best-performing algorithms compared with existing ones and opens a new direction for multi-workflow applications.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2739-0
Issue No: Vol. 43, No. 2 (2018)

• An Enhanced and Provably Secure Chaotic Map-Based Authenticated Key
Agreement in Multi-Server Architecture
Pages: 811 - 828
Abstract: In the multi-server authentication (MSA) paradigm, a subscriber may avail itself of multiple services of different service providers after registering with a registration authority. In this approach, the user has to remember only a single password for all service providers, and servers are relieved of individualized registrations. Many MSA-related schemes have been presented so far, however with several drawbacks. In this connection, Li et al. (Wirel. Pers. Commun., 2016, doi:10.1007/s11277-016-3293-x) recently presented a chaotic map-based multi-server authentication scheme. However, we observed that Li et al.'s scheme suffers from a malicious server insider attack, a stolen smart card attack, and a session-specific temporary information attack. This work improves the security of Li et al.'s protocol at the minimum possible computational cost. We also evaluate the security of the contributed work, which is provable under formal security analysis employing the random oracle model and BAN logic.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2764-z
Issue No: Vol. 43, No. 2 (2018)

Cloud
• Authors: Neha Garg; Major Singh Goraya
Pages: 829 - 841
Abstract: Data centers in the cloud environment consume a large amount of energy, which not only raises the electricity bills of the organizations hosting them but also leaves a strong environmental footprint. Therefore, the energy efficiency of data centers has become an important research issue, and many energy efficiency approaches have been proposed in the literature for the cloud. Efficient resource scheduling is one of the important approaches to achieve energy efficiency in the cloud. In this paper, a task deadline-aware energy-efficient scheduling model for the virtualized cloud is presented. Independent and dynamically arriving deadline-aware tasks are scheduled by virtualizing the physical hosts in the data center. The proposed scheduling model achieves energy efficiency, first, by executing the maximum workload in the operational state of the host and, second, by maximizing energy saving in the idle state of the host. In the operational state, the maximum workload is executed by exploiting the task slack time in a new context, and in the idle state, maximum energy is saved by deploying core-level dynamic voltage and frequency scaling. The presented scheduling model is evaluated on synthetic and real-world workloads. Results clearly indicate that it outperforms the existing scheduling model in terms of guarantee ratio, total energy consumption, energy consumption per task and resource utilization.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2779-5
Issue No: Vol. 43, No. 2 (2018)
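The slack-time notion the abstract leans on can be sketched roughly as follows; the task fields and the admission policy are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of deadline-aware task admission using slack time
# (slack = how long a task can still wait and meet its deadline).

def slack(task, now):
    """Remaining slack of a task at time `now`."""
    return task["deadline"] - now - task["exec_time"]

def admit(tasks, now):
    """Keep only tasks whose deadlines are still feasible, tightest slack first."""
    feasible = [t for t in tasks if slack(t, now) >= 0]
    return sorted(feasible, key=lambda t: slack(t, now))

tasks = [
    {"id": "t1", "exec_time": 4, "deadline": 20},   # slack 16 at now=0
    {"id": "t2", "exec_time": 5, "deadline": 7},    # slack 2
    {"id": "t3", "exec_time": 9, "deadline": 8},    # slack -1: infeasible
]
order = [t["id"] for t in admit(tasks, now=0)]
```

A scheduler can use the positive slack of already-admitted tasks to pack extra work onto an operational host before idling it, which is the intuition behind the paper's first energy-saving instance.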

• NLP-MTFLR: Document-Level Prioritization and Identification of Dominant
Multi-word Named Products in Customer Reviews
• Authors: R. Sivashankari; B. Valarmathi
Pages: 843 - 855
Abstract: The accessibility of large datasets in commercial domains has accentuated the importance of data mining in the last few years. Practitioners as well as researchers rely on them to reflect on the magnitude and effect of data-related problems that require solutions in business environments. In recent years, the volume of online data submissions (e-commerce data) on products, services and organizations has increased exponentially. However, the submitted data are highly unstructured and largely language dependent. Mining and extracting useful information from such data is a colossal task, as analysis of the data should include opinion word identification/extraction, aspect extraction and entity extraction. Of the three, entity extraction is one of the governing approaches in text analysis; it plays a major role in the e-commerce, biomedical and automobile industries and supports categorizing records based on entity names, generating short summaries on the entities and grouping similar records. Existing entity extraction approaches are capable of recognizing and extracting single-word named entities. However, product names are often given as a sequence of words (multi-word named entities) and therefore cannot be recognized by the existing methods. To resolve this issue, this paper presents NLP-Modified Token-based Frequencies of Left and Right (NLP-MTFLR), an effective approach to detect and extract multi-word named products and the dominant multi-word named product from a customer review corpus. Using NLP-MTFLR, subwords and multi-subwords are identified in the review corpus and mapped to their multi-word named products to recognize the dominant product of that corpus. With this dominant product identification, the proposed method reveals that the identified dominant product is the most highly reviewed product in the corpus compared with other products. The NLP-MTFLR approach achieves 97% accuracy, 77% precision, 89% recall and an 82% F-score.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2773-y
Issue No: Vol. 43, No. 2 (2018)

• RIFT: A Rule Induction Framework for Twitter Sentiment Analysis
• Authors: Muhammad Zubair Asghar; Aurangzeb Khan; Furqan Khan; Fazal Masud Kundi
Pages: 857 - 877
Abstract: The rapid evolution of microblogging and the emergence of sites such as Twitter have propelled online communities to flourish by enabling people to create, share and disseminate free-flowing messages and information globally. The exponential growth of product-based user reviews has become an ever-increasing resource playing a key role in emerging Twitter-based sentiment analysis (SA) techniques and applications to collect and analyse customer trends and reviews. Existing studies on supervised black-box sentiment analysis systems do not provide adequate information regarding the rules behind why a certain review was assigned to a class, and their accuracy can fall short of personal judgement. To address these shortcomings, alternative approaches, such as supervised white-box classification algorithms, need to be developed to improve the classification of Twitter-based microblogs. The purpose of this study was to develop a supervised white-box microblogging SA system to analyse user reviews on certain products using rough set theory (RST)-based rule induction algorithms. RST classifies microblogging reviews of products into positive, negative or neutral classes using rules extracted from training decision tables by RST-centric rule induction algorithms. The primary focus of this study is sentiment classification of microblogs (also known as tweets) of product reviews using conventional and RST-based rule induction algorithms. The proposed RST-centric rule induction algorithms, namely Learning from Examples Module version 2 (LEM2) and LEM2 $$+$$ Corpus-based rules (LEM2 $$+$$ CBR), an extension of the traditional LEM2 algorithm, are used; corpus-based rules are generated from tweets that remain unclassified by the conventional LEM2 rules. Experimental results show that the proposed method, compared with baseline methods, excels with regard to accuracy, coverage and the number of rules employed, achieving an average accuracy of 92.57% and an average coverage of 100%, with an average of 19.14 rules.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2770-1
Issue No: Vol. 43, No. 2 (2018)

• An Approach Toward Amelioration of a New Cloudlet Allocation Strategy
Using Cloudsim
• Authors: Sourav Banerjee; Aritra Roy; Amritap Chowdhury; Ranit Mutsuddy; Riman Mandal; Utpal Biswas
Pages: 879 - 902
Abstract: Cloud computing is a versatile computing paradigm uniting the benefits of service-oriented architecture and utility computing. In cloud computing, resource allocation and its proper utilization, to achieve higher throughput and quality of service (QoS), has become a major research issue. This paper presents a new cloudlet allocation strategy that utilizes all available resources efficiently and enhances QoS by applying deadline-based workload distribution. It is believed that this paper will benefit both cloud users and researchers in various aspects. The entire experiment is done in the CloudSim Toolkit 3.0.3, by modifying the required classes.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2781-y
Issue No: Vol. 43, No. 2 (2018)

• A Fast Parallel Modular Exponentiation Algorithm
• Authors: Khaled A. Fathy; Hazem M. Bahig; A. A. Ragab
Pages: 903 - 911
Abstract: Modular exponentiation is a fundamental and highly time-consuming operation in several public-key cryptosystems such as the RSA cryptosystem. In this paper, we propose two new parallel algorithms. The first is a fast parallel algorithm to multiply n numbers of a large number of bits. We then use it to design a fast parallel algorithm for modular exponentiation. We implement the parallel modular exponentiation algorithm on the Google cloud system using a machine with 32 processors and measure its performance on data sizes from $$2^{12}$$ to $$2^{20}$$ bits. The results show that our algorithm runs faster and is more scalable than previous works.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2797-3
Issue No: Vol. 43, No. 2 (2018)
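The general shape of such an algorithm — a tree-structured product whose levels are independent and could be distributed across processors, driving a square-and-multiply exponentiation — can be sketched as below. This is a sequential illustration of the idea only, not the authors' algorithm.

```python
# Sketch: modular exponentiation built on a tree-shaped n-number product.
# Each while-loop level halves the list; the multiplications within a
# level are independent, which is where parallelism would apply.

def tree_mod_product(nums, m):
    """Multiply a list of numbers modulo m by pairwise (tree) reduction."""
    while len(nums) > 1:
        nums = [(nums[i] * nums[i + 1]) % m if i + 1 < len(nums) else nums[i]
                for i in range(0, len(nums), 2)]
    return nums[0]

def mod_exp(base, exp, m):
    """base**exp mod m: collect the squarings selected by exp's set bits,
    then combine them with the tree product above."""
    factors, sq = [], base % m
    while exp:
        if exp & 1:
            factors.append(sq)
        sq = (sq * sq) % m
        exp >>= 1
    return tree_mod_product(factors or [1 % m], m)
```

Note the chain of squarings itself is inherently sequential; only the final combination step parallelizes in this naive sketch.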

• Task Partitioning Scheduling Algorithms for Heterogeneous Multi-Cloud
Environment
• Authors: Sanjaya Kumar Panda; Sohan Kumar Pande; Satyabrata Das
Pages: 913 - 933
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2798-2
Issue No: Vol. 43, No. 2 (2018)

• Interest-Based Clustering Approach for Social Networks
• Authors: Lulwah AlSuwaidan; Mourad Ykhlef
Pages: 935 - 947
Abstract: Recently, applications of community detection have increased because of their effectiveness in identifying communities correctly. Many methods and algorithms have been introduced to bring new insights that improve community detection in social networks. While such algorithms can find useful communities, they tend to focus on network structure and ignore node interests and interconnections. However, accurate community detection requires the consideration of both network structure and node interests, which is best achieved with unsupervised models. In this article, we introduce a new approach for social network clustering, termed Interest-based Clustering, which clusters nodes in social networks based on a measure of interest similarity. It considers structure, interaction and node interests, along with the interests of a node's friends. The empirical evaluation of this new approach was done using a real dataset crawled from Twitter. The approach outperforms well-known community detection algorithms (SCAN, Fast Modularity, and Zhao et al.) in terms of modularity, connectivity and overlap.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2800-z
Issue No: Vol. 43, No. 2 (2018)
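One simple way to measure interest similarity between nodes — a stand-in for the paper's measure, which the abstract does not specify — is cosine similarity over per-topic interest vectors; the greedy clustering pass and the threshold below are illustrative assumptions.

```python
# Sketch: interest-based clustering via cosine similarity of interest
# vectors (dicts mapping topic -> weight). Hypothetical data and threshold.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse interest vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_by_interest(nodes, threshold=0.5):
    """Greedy single pass: attach each node to the first cluster whose
    representative is similar enough, else start a new cluster."""
    clusters = []
    for name, interests in nodes.items():
        for c in clusters:
            if cosine(interests, c["rep"]) >= threshold:
                c["members"].append(name)
                break
        else:
            clusters.append({"rep": interests, "members": [name]})
    return clusters

nodes = {
    "alice": {"sports": 1.0},
    "bob":   {"sports": 0.9, "music": 0.1},
    "carol": {"tech": 1.0},
}
clusters = cluster_by_interest(nodes)
```

A structure-aware method like the paper's would additionally weight in edges and friends' interests rather than interests alone.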

• A Feature Selection Approach to Detect Spam in the Facebook Social Network
• Authors: Mohammad Karim Sohrabi; Firoozeh Karimi
Pages: 949 - 958
Abstract: The widespread adoption of social networks and their enormous facilities and growing opportunities have attracted many users and audiences. But along with attractive and interesting messages and topics, inappropriate and sometimes criminal content, such as spam, is also released on these networks. Malicious spammers intend to send inaccurate or irrelevant content to distribute malformed information on online social networks. This paper is about spam comment detection on the Facebook social network. By reviewing posts and comments, and studying their features, an online spam filtering system has been designed. The proposed filtering system is able to exploit various exploration methods and optimization algorithms, such as simulated annealing, particle swarm optimization, ant colony optimization and differential evolution, to detect and filter malicious content and to prevent the publication of spam comments, providing a secure environment for users of this popular social network. Furthermore, supervised machine learning methods, clustering techniques and decision trees have been exploited to provide accurate performance and appropriate speed for the proposed filtering system.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2855-x
Issue No: Vol. 43, No. 2 (2018)

• Distributed Denial-of-Service Attack Detection and Mitigation Using
Feature Selection and Intensive Care Request Processing Unit
• Authors: Nitesh Bharot; Priyanka Verma; Sangeeta Sharma; Veenadhari Suraparaju
Pages: 959 - 967
Abstract: Worldwide acceptance of cloud computing is increasing day by day because it provides a large amount of IT resources in a very simplified and economical manner. The cloud provides high security to its customers, but some vulnerabilities still attract many attackers. The distributed denial-of-service (DDoS) attack is a nightmare for many cloud providers, as it affects the availability of resources in the cloud network. This paper proposes a DDoS attack detection and mitigation model using a feature selection method and an Intensive Care Request Processing Unit (ICRPU). In the proposed work, traffic is initially analyzed using the Hellinger distance function; if a significant distance is found, all packets are analyzed and classified into two categories, DDoS and legitimate requests, on the basis of the features selected for classification. All legitimate requests are forwarded to the Normal Request Processing Unit, where they are served. All DDoS requests are sent to the ICRPU, where they are kept busy with question-and-answer interactions while, in parallel, their sources are identified and blocked from further access. The specialty of the ICRPU is that attackers never realize that the requests they sent to exhaust resources have been trapped, so they do not take any reflex action, making it easy to track them. Results show that the proposed method provides the best detection rate, accuracy and false alarm rate in comparison with existing filter methods and other such proposed methods.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2844-0
Issue No: Vol. 43, No. 2 (2018)
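The Hellinger distance check described above can be sketched as follows; the feature bins (per-source request frequencies) and the alert threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: Hellinger distance between a baseline traffic profile and an
# observed window, as a cheap first-stage anomaly signal.
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions
    defined over the same bins; ranges from 0 (identical) to 1."""
    return sqrt(0.5 * sum((sqrt(pi) - sqrt(qi)) ** 2 for pi, qi in zip(p, q)))

baseline = [0.25, 0.25, 0.25, 0.25]   # learned normal traffic profile
observed = [0.70, 0.10, 0.10, 0.10]   # skewed window: one source dominates

d = hellinger(baseline, observed)
suspicious = d > 0.2                   # illustrative threshold only
```

Only windows flagged here would proceed to the heavier per-packet feature-based classification the paper describes.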

• Proportionate Flow Shop Scheduling with Multi-agents to Maximize Total
Gains of JIT Jobs
• Authors: Shi-Sheng Li; Ren-Xia Chen; Wen-Jie Li
Pages: 969 - 978
Abstract: Different variants of multi-agent scheduling have been studied in the literature due to their wide applications in artificial intelligence, decision theory, operations research, etc. Most previous research focused on the single-machine environment and two-agent scheduling. In this paper, we address a multi-agent scheduling problem on a set of m machines in a proportionate flow shop system, where the job processing times are machine independent. Each agent desires to maximize its total gain from JIT jobs, i.e. jobs completed exactly at their due dates. The goal is to find a feasible schedule in which each agent's objective value is not less than a given lower bound. When the number of agents is part of the input, we use a reduction to show that the general problem is strongly $$\mathcal {NP}$$ -complete even if all jobs have unit processing times. When the number of agents is fixed, we first develop a dynamic programming algorithm that runs in pseudo-polynomial time, and then design a fully polynomial time approximation scheme by exploiting the technique of trimming the state space. The results presented in this paper imply that, by relaxing each agent's desired objective value by a small fraction, we can obtain an efficient approximate schedule for the problem with a fixed number of agents in polynomial time; when the number of agents is part of the input, the problem becomes much more intractable and needs more sophisticated methods in future research.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2900-9
Issue No: Vol. 43, No. 2 (2018)

• Robust Reversible Watermarking Algorithm Based on RIWT and Compressed
Sensing
• Authors: Zhengwei Zhang; Lifa Wu; Shangbing Gao; He Sun; Yunyang Yan
Pages: 979 - 992
Abstract: In order to improve the robustness of existing reversible watermarking algorithms and strengthen the imperceptibility of watermarked images, a robust reversible watermarking algorithm based on redundant integer wavelet transform (RIWT) and compressed sensing is proposed. First, the algorithm selects high-capacity embedding regions in the original image; a redundant integer wavelet transform is then conducted within the selected areas, and wavelet coefficient matrices are obtained after sparsification. After that, the two intermediate-frequency parts of each wavelet coefficient matrix are processed by compressed sensing with the same observation matrix, and the two generated compressed observation values are merged. Finally, the watermark is embedded into the observation value of the intermediate-frequency coefficient part, the sparse signal is recovered using a reconstruction algorithm, and the watermarked image is obtained through the inverse redundant integer wavelet transform. Simulation results indicate that the algorithm not only realizes blind extraction but also significantly improves robustness, imperceptibility and embedded watermark capacity compared with other similar algorithms, and it is easy to implement. Moreover, the generated watermarked images have high quality, and the original carrier image can be restored completely losslessly in the absence of attacks.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2898-z
Issue No: Vol. 43, No. 2 (2018)

• Hybrid Hierarchical Backtracking Search Optimization Algorithm and Its
Application
• Authors: Feng Zou; Debao Chen; Renquan Lu
Pages: 993 - 1014
Abstract: As a young intelligent optimization algorithm, the backtracking search optimization algorithm (BSA) has been used to solve many optimization problems successfully. However, BSA has some disadvantages, such as easily falling into local optima, lacking learning from the optimal individual, and difficulty in adjusting the control parameter F. Motivated by these observations, a new hybrid hierarchical backtracking search optimization algorithm (HHBSA) is proposed in this paper to improve the optimization performance of the original BSA. In the proposed method, a two-layer hierarchical population structure and a randomized regrouping strategy are introduced to improve population diversity, a mutation strategy is used to help the population when evolution stagnates, and an adaptive control parameter is presented to increase the learning ability of BSA. To verify the proposed approach, 48 benchmark functions and three real-world optimization problems are evaluated. Experimental results indicate that HHBSA is competitive with some existing evolutionary algorithms (EAs).
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2852-0
Issue No: Vol. 43, No. 2 (2018)

• Embedding Advanced Harmony Search in Ordinal Optimization to Maximize
Throughput Rate of Flow Line
• Authors: Shih-Cheng Horng; Shieh-Shing Lin
Pages: 1015 - 1031
Abstract: Flow line systems are production systems in which successive operations are performed on a product in a manner so that it moves through the factory in a certain direction. This work first formulates a flow line system as an integer-ordered inequality-constrained simulation–optimization problem and presents a stochastic simulation procedure to estimate the throughput rate. The mathematical formulation and simulation procedure can be used for any distribution of processing rates and can be applied to high-dimensional problems. An approach that embeds advanced harmony search (AHS) in ordinal optimization (OO), abbreviated AHSOO, is developed to find a near-optimal design of the flow line system that maximizes the throughput rate. The proposed approach comprises three levels: meta-modeling, diversification and intensification. A radial basis function network serves as a meta-model to approximate the performance of a design. The approach integrates AHS for diversification with improved optimal computing budget allocation (IOCBA) for intensification. AHS favorably explores the solution space initially and moves toward exploiting good solutions close to the end, while IOCBA maximizes the overall simulation efficiency in finding an optimal solution. The proposed AHSOO is tested on three examples. In the moderately sized example, simulation results reveal that the average best-so-far performances determined using PSO, GA and ES were 6.12, 9.65 and 8.53% less than that obtained using AHSOO, even after the former took more than 50 times the CPU time consumed by AHSOO upon completion. Analytical results reveal that the proposed method yields designs of much higher quality with much higher computing efficiency than the seven competing methods.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2864-9
Issue No: Vol. 43, No. 2 (2018)
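A standard harmony search improvisation step, of the kind AHS builds on, looks roughly like this; the parameter values and the pitch-adjustment form are generic textbook choices, not the AHS variant used in the paper.

```python
# Sketch of one harmony-improvisation step: each decision variable is
# taken from harmony memory with probability hmcr (then pitch-adjusted
# with probability par), otherwise drawn uniformly from its bounds.
import random

def improvise(memory, bounds, hmcr=0.9, par=0.3, bw=0.1, rng=random.Random(1)):
    new = []
    for j, (lo, hi) in enumerate(bounds):
        if rng.random() < hmcr:
            x = rng.choice(memory)[j]            # memory consideration
            if rng.random() < par:
                x += rng.uniform(-bw, bw)        # pitch adjustment
            x = min(max(x, lo), hi)              # clamp to bounds
        else:
            x = rng.uniform(lo, hi)              # random selection
        new.append(x)
    return new

memory = [[0.5, 0.5], [0.2, 0.8]]                # hypothetical stored harmonies
bounds = [(0.0, 1.0), (0.0, 1.0)]
v = improvise(memory, bounds)
```

In AHSOO, each improvised design would be screened cheaply by the meta-model before any expensive simulation budget is spent on it.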

• An Improved Localization Scheme Based on PMCL Method for Large-Scale
Mobile Wireless Aquaculture Sensor Networks
• Authors: Chunfeng Lv; Jianping Zhu; Zhengsu Tao
Pages: 1033 - 1052
Abstract: Localization is crucial to many applications in wireless sensor networks (WSNs) because measurement data and information exchanges in WSNs are meaningless without location information. Most localization schemes for mobile WSNs are based on the Sequential Monte Carlo (SMC) algorithm. These SMC-based methods often suffer from too many iterations, sample impoverishment and low sample diversity, which leads to low sampling and filtering efficiency and, consequently, low localization accuracy and high localization costs. In this paper, we propose an improved range-free localization scheme for mobile WSNs based on an improved Population Monte Carlo localization (PMCL) method, accompanied by a Hidden Terminal Couple scheme. A population of probability density functions is proposed to approximate the distribution of unknown locations based on a set of observations through an iterative importance sampling procedure. Performance is enhanced by adopting three improvements to increase accuracy, reduce delay and save cost. First, resampling with importance weights is introduced in the PMCL method to avoid sample degeneracy. Second, twofold constraints, constraining the number of random samples in the initialization step and constraining valid observations in the resampling step, are proposed to decrease the number of iterations. Third, a mixture perspective is introduced to maintain the diversity of samples in the weighted resampling process. Then, localization error, delay and consumption, especially delay, are predicted from a statistical point of view, taking the RWP mobility model into account. Moreover, performance comparisons of PMCL with other SMC-based schemes are also presented. Simulation results show that the delay of PMCL is superior to that of other schemes, and accuracy and energy consumption are improved in some cases of lower anchor rates and lower mobile velocity.
PubDate: 2018-02-01
DOI: 10.1007/s13369-017-2871-x
Issue No: Vol. 43, No. 2 (2018)
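The importance-resampling step that SMC- and PMCL-style localization filters rely on (and whose degeneracy the paper's first improvement targets) can be sketched as below; the particle representation is a hypothetical 2-D position tuple.

```python
# Sketch of importance resampling in a Monte Carlo localization filter:
# draw a new particle set proportionally to the importance weights.
import random

def resample(particles, weights, rng=random.Random(42)):
    """Return a new particle set of the same size, sampled with
    probability proportional to each particle's weight."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(particles, weights=probs, k=len(particles))

particles = [(0.0, 0.0), (5.0, 5.0), (9.0, 9.0)]
weights = [0.0, 1.0, 0.0]      # degenerate case: one particle has all weight
new = resample(particles, weights)
```

The degenerate example shows the problem the paper attacks: naive resampling collapses the population onto a few high-weight particles, which is why PMCL adds a mixture perspective to preserve diversity.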

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
