- Logistics labor: Insights from the sociologies of globalization, the
economy, and work
- Authors: Elizabeth A. Sowers
Abstract: This article identifies the origins of the rise of the logistics industry to highlight the powerful structural position that this confers on the industry and its workers. I begin by analyzing an often-neglected aspect of globalization, describing the logistics, or goods movement, industry and identifying the role that the “logistics revolution” plays within the contemporary capitalist system. Then, synthesizing insights from global, economic, and labor sociology, I argue that the structural “brokerage” position of logistics workers in the global economy offers them key advantages on which labor and political movements might capitalize in struggles for economic justice and worker rights. I examine empirical research on labor organizing within logistics to determine whether workers leverage this powerful position into concrete gains. Finally, I argue that more attention needs to be paid to how logistics workers recognize, articulate, and utilize their potentially powerful position in globalization flows. Future research should endeavor to understand how this can be achieved among wide groups of logistics workers to achieve the most success in labor and political movements.
- Why social status matters for understanding the interrelationships between
testosterone, economic risk-taking, and gender
- Authors: Susan Rebecca Fisk; Brennan J. Miller, Jon Overton
Abstract: We conduct an extensive review of the literature on testosterone and economic risk-taking behavior. In sum, there is evidence of a positive association between testosterone and economic risk taking, although it is unlikely to be a strong association given the abundance of null results. However, we argue that the existing literature may overstate the causal effects of testosterone on economic risk taking (or even report a spurious correlation) because this research has not considered the potentially confounding role of social status. Status could concurrently influence both testosterone and economic risk taking, given that testosterone is a social hormone with a reciprocal relationship with social status, and social status has been found to drive risk-taking behavior. We also argue against using findings from this literature to make gender essentialist claims, primarily because social phenomena influence the size—and existence—of gender differences in economic risk-taking behavior. We conclude with suggestions for future research.
- Old age rational suicide
- Authors: Naomi Richards
Abstract: In the societal debate surrounding voluntary euthanasia or physician-assisted suicide, there is a concern that older people will be left exposed by any legislation, subject to either faint suggestion or outright coercion from familial or professional carers. Whilst it is critical to take account of older people's potential vulnerability to any current or proposed assisted suicide legislation, there is a parallel strand of research exploring another relationship which older people can have with this debate: one of activism. Sociological research has shown that older people make up the “rank and file” of those active within the right-to-die movement. One of the stated motivations of some older people requesting hastened death has been that, in spite of an absence of life-threatening disease, they feel “tired of life” or that they have lived a “completed life” and feel ready to die. The notion of suicide for reasons of longevity and being tired of life is becoming increasingly significant given the fact of global ageing. This article brings together empirical and theoretical research on the phenomenon of old age rational suicide in order to develop an underexplored area in both the sociology of death and the sociology of ageing.
- Issue Information
- Abstract: No abstract is available for this article.
- What is new with old? What old age teaches us about inequality and stratification
- Authors: Corey M. Abramson; Elena Portacolone
Abstract: Aging is remarkably unequal. Who survives to grow old in America and the circumstances they face once there reflect durable racial, socioeconomic, and gender inequalities that structure our lives from birth. Yet within the field of social stratification and mainstream sociology proper, examinations of the rapidly growing population of older Americans are often relegated to a “gerontological” periphery. This essay posits that the failure to place aging as a core concern in stratification and inequality is a missed opportunity. We argue for the importance of reintegrating studies on the stratification of aging and explain why such a move is necessary. Specifically, we posit that (a) examining the aging population is necessary for understanding American inequality because aging is an outcome that is ubiquitous yet highly stratified; (b) aging and being seen as “old” in a youth-focused society are stratifying processes in their own right; and (c) later life provides for analytical comparisons that are illustrative of how key mechanisms of inequality structure and stratify. After examining insights provided by a new wave of research on the aging U.S. population, we revisit the implications for understanding inequality and stratification in a graying and unequal America.
- Conceptualizing and understanding the Gülen movement
- Authors: Scott T. Fitzgerald
Abstract: The Gülen movement (GM) is a controversial international Islamic movement originating in Turkey. Interestingly, the movement seems to be “in between” the standard conceptual categories used by social movement scholars: The GM's focus on individual transformation and religious practices suggests that it is a religious movement; its extensive outreach into various institutions (i.e., education, health care, and media) suggests a social movement seeking legitimacy and broad social change; its purported infiltration of key government and military offices suggests a political movement. In this article, I demonstrate the utility of conceptualizing the GM as an everyday-life-based movement and of using a multi-institutional politics model to examine this type of movement. By doing so, it becomes clear that sometimes, movements focusing on individual change may also be seeking to transform social, economic, and political institutions.
- From bias to coverage: What explains how news organizations treat social movements
- Authors: Edwin Amenta; Thomas Alan Elliott, Nicole Shortt, Amber Celina Tierney, Didem Türkoğlu, Burrel Vann
Abstract: Why do newspapers cover social movement actors, and why is this coverage sometimes favorable? Early scholarship saw the news media mainly as a source of data on collective action and sought to ascertain its biases, but scholarship has increasingly focused directly on why movements gain coverage, especially coverage that can advance their goals. To understand why and how newspapers cover movement actors, we start with the insight that movements rely on the news media for many reasons, but their coverage is largely in the control of news institutions. In this review, we focus on perspectives that specify three-way interactions between the characteristics of newspapers, social movement actors, and the social and political contexts, but we begin with how news media institutions are organized. We conclude with suggestions for future research that take advantage of the digital revolution of the last generation.
- WikiLeaks: Between disclosure and whistle-blowing in digital times
- Authors: Benedetta Brevini
Abstract: The 2010 WikiLeaks disclosures of U.S. war logs were the first megaleaks to shake the world of international diplomacy and political elites. Since then, more leaks have followed, from the Snowden revelations to the Panama Papers. As this phenomenon continues to evolve, a significant body of scholarly work has analysed the emergence, the struggle, and the history of WikiLeaks. This article aims to provide a cross-disciplinary overview of the research that has explored the rise and the legacy of the disclosure platform and whistle-blowing website WikiLeaks. It identifies four scholarly approaches to research focusing on Julian Assange's platform in order to understand its impact on various aspects of the media and of public life. The approaches considered range from the effect WikiLeaks has had on traditional journalism to the platform's challenge to power in the balance between openness and secrecy in domestic and international politics; further scholarship uses WikiLeaks as a case study to understand the relationship between media and social movements and to study the platform's ethics and the legal consequences of its operations. The impact of WikiLeaks's revelations still poses relevant questions that the media, politics, and regulators must address at a pivotal time of changing news consumption and an increasingly bitter debate between online privacy and transparency. The conclusion reflects upon current developments of what the author calls the “new digital culture of disclosure.” Future research should explore questions about the opportunities, challenges, and obstacles for this emerging culture of disclosure. What are the socio-political-economic conditions that have enabled this new culture? Are these leaks becoming a renewed example of democratic accountability? Is this culture of disclosure replacing public interest journalism in times of crisis?
- Blurring the boundaries: Using Gamergate to examine “real” and
symbolic violence against women in contemporary gaming culture
- Authors: Kishonna L. Gray; Bertan Buyukozturk, Zachary G. Hill
Abstract: Recent controversies in gaming culture (i.e., Gamergate) highlight the lack of attention devoted to discussions of the actual violence women experience in gaming. The focus is often on in-game violence; however, we must extend discussions of in-game violence and increased aggression to account for the “real-world” violent realities of women as gamers, developers, and even critics of the medium. As such, we provide context with a brief introduction to the events of Gamergate. We then discuss the connections between the continued marginalization of women both in video games and in “real life.” Drawing from a range of sociological and ludological research, especially Bourdieu and Wacquant's conceptualization of symbolic violence, we examine the normalization of violence towards women in gaming culture. We conclude with considerations for future work involving symbolic violence and other conceptualizations of violence. This focus allows a fuller consideration of why and how codified simulated violence affects marginalized members of communities. Using symbolic violence to connect trends within games to the lived experiences of women in gaming communities binds virtual experiences to “real” ones.
- Colonial criminology: A survey of what it means and why it is important
- Authors: Sanna King
Abstract: As the United States experiences unprecedentedly high rates of incarceration, especially of minorities and marginalized communities, racialized punishment has been addressed by many scholars (Alexander 2010; Wacquant 2001; Cole 1999; Tonry 2011; Stevenson 2014). Studies have shown the connection between racialized structures of inequality, punishment, and colonization (Agozino 2000, 2003; Irwin and Umemoto 2016; Bosworth and Flavin 2007). However, scholars have recognized a void in the discussion of colonial theory in the field of criminology (Agozino 2003; Cunneen and Tauri 2016; Bosworth and Flavin 2007). In this paper, I identify several ways in which criminology is closely tied to colonialism. I argue that a colonial criminology perspective assists in identifying the power distinctions that construct notions of difference, thus providing a more nuanced understanding of crime, violence, and criminalization as a response to oppression and alienation. I focus primarily on colonialism in Hawai'i because of its fairly recent colonization and continuing indigenous struggle for Hawaiian sovereignty. Furthermore, Hawai'i is representative of racial and ethnic inequality and disparity within the United States criminal justice system, as the majority of both the adult and juvenile incarcerated populations in Hawai'i are of Native Hawaiian and/or Pacific Islander descent.
- Applying agile methods to aircraft embedded software: an experimental analysis
- Authors: Samoel Mirachi; Valdir Costa Guerra, Adilson Marques Cunha, Luiz Alberto Vieira Dias, Emilia Villani
Abstract: This paper discusses the applicability of agile methods to aircraft embedded software development. It presents the main results of an experiment that combines agile practices from Scrum with model-based development and distributed development. The experiment consists of the development of an aircraft cockpit display system divided among five distributed teams. Three features are analysed and quantified using the output artefacts of each team: the artefacts' quality, the adherence to agile methods, and the adherence to the DO-178C standard. The main conclusion of the experiment is that there is a high correlation between adherence to agile methods and the artefacts' quality, motivating the use of agile methods in the aircraft industry. The experiment also showed that agile methods do not specifically address the integration of distributed teams or hardware/software integration, and this lacuna affects the artefacts' quality. The results emphasize the importance of concentrating future work on proposing specific agile practices for these activities. Copyright © 2017 John Wiley & Sons, Ltd.
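The headline result above is a correlation between per-team agile adherence and artefact quality. A minimal sketch of how such a Pearson correlation is computed; the per-team scores below are hypothetical, not the experiment's data:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-team scores: agile adherence vs. artefact quality (0-100).
adherence = [62, 71, 80, 85, 93]
quality = [55, 66, 74, 83, 90]
r = pearson(adherence, quality)
```

A value of r close to 1 would correspond to the paper's "high correlation" finding.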
- StopGap: elastic VMs to enhance server consolidation
- Authors: Vlad Nitu; Boris Teabe, Leon Fopa, Alain Tchana, Daniel Hagimont
Abstract: Virtualized cloud infrastructures (also known as IaaS platforms) generally rely on a server consolidation system to pack virtual machines (VMs) onto as few servers as possible. However, an important limitation of consolidation is not addressed by such systems. Because the managed VMs may be of various sizes (small, medium, large, etc.), VM packing may be obstructed when VMs do not fit the available spaces. This phenomenon leaves servers with sets of unused resources (‘holes’). It is similar to memory fragmentation, a well-known problem in the operating system domain. In this paper, we propose a solution that consists of resizing VMs so that they can fit the holes. This operation leads to the management of what we call elastic VMs and requires cooperation between the application level and the IaaS level, because it impacts management at both levels. To this end, we propose a new resource negotiation and allocation model in the IaaS, called HRNM. We demonstrate HRNM's applicability through the implementation of a prototype compatible with two main IaaS managers (OpenStack and OpenNebula). By performing thorough experiments with SPECvirt_sc2010 (a reference benchmark for server consolidation), we show that the impact of HRNM on customers' applications is negligible. Finally, using Google data center traces, we show an improvement of about 62.5% over traditional consolidation engines. Copyright © 2017 John Wiley & Sons, Ltd.
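The core idea of shrinking an elastic VM so that it fills a hole can be sketched in a few lines. This is only an illustration of the placement logic, under the assumption that a VM advertises a nominal and a minimum size; it is not StopGap's actual HRNM negotiation protocol:

```python
def place_elastic_vm(holes, nominal, minimum):
    """Try to place a VM of `nominal` size into one of the free-capacity
    `holes`; if no hole is big enough, shrink the VM (down to `minimum`)
    so that it fills the largest available hole.
    Returns (hole_index, allocated_size), or None if placement fails."""
    best = max(range(len(holes)), key=lambda i: holes[i])
    if holes[best] >= nominal:
        # best fit: smallest hole that still accommodates the nominal size
        fitting = [i for i, h in enumerate(holes) if h >= nominal]
        i = min(fitting, key=lambda i: holes[i])
        return i, nominal
    if holes[best] >= minimum:
        return best, holes[best]  # elastic case: shrink the VM to the hole
    return None

# Holes of 2, 3 and 5 units; a VM wants 4 units but tolerates 3.
hole, size = place_elastic_vm([2, 3, 5], nominal=4, minimum=3)
```

With holes [2, 3], the same request would be shrunk to 3 units and placed in the second hole, which is exactly the fragmentation-filling behaviour the abstract describes.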
- Sampled suffix array with minimizers
- Authors: Szymon Grabowski; Marcin Raniszewski
Abstract: Sampling (evenly) the suffixes from the suffix array is an old idea trading the pattern search time for reduced index space. A few years ago Claude et al. showed an alphabet sampling scheme allowing for more efficient pattern searches compared with the sparse suffix array, for long enough patterns. A drawback of their approach is the requirement that sought patterns need to contain at least one character from the chosen subalphabet. In this work, we propose an alternative suffix sampling approach with only a minimum pattern length as a requirement, which is more convenient in practice. Experiments show that our algorithm (in a few variants) achieves competitive time-space tradeoffs on most standard benchmark data. Copyright © 2017 John Wiley & Sons, Ltd.
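The suffix-sampling idea above can be illustrated with minimizers: in every window of w consecutive k-mers, keep the position of the lexicographically smallest one, and sample suffixes only at those positions. This is a generic minimizer sketch under assumed parameters, not necessarily the paper's exact sampling scheme:

```python
def minimizer_positions(text, k, w):
    """Positions of (k, w)-minimizers of `text`: for each window of w
    consecutive k-mers, record the position of the lexicographically
    smallest k-mer (ties broken by leftmost position)."""
    positions = set()
    n = len(text)
    for start in range(n - k - w + 2):          # all window start positions
        window = [(text[i:i + k], i) for i in range(start, start + w)]
        positions.add(min(window)[1])           # smallest k-mer in window
    return sorted(positions)

pos = minimizer_positions("mississippi", k=3, w=4)
```

The useful property is that any pattern of length at least k + w - 1 is guaranteed to contain a sampled position, which is what makes a minimum pattern length the only search requirement.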
- RESeED: A secure regular-expression search tool for storage clouds
- Authors: Mohsen Amini Salehi; Thomas Caldwell, Alejandro Fernandez, Emmanuel Mickiewicz, Eric W. D. Rozier, Saman Zonouz, David Redberg
Abstract: Lack of trust has become one of the main concerns of users who utilize one or multiple Cloud providers. Trustworthy Cloud-based computing and data storage require secure and efficient solutions that allow clients to remotely store and process their data in the Cloud. User-side encryption is an established method to secure user data on the Cloud. However, with encryption, we lose processing capabilities, such as searching, over the Cloud data. In this paper, we present RESeED, a tool that provides user-transparent and Cloud-agnostic regular-expression search functionality over encrypted data across multiple Clouds. Upon a client's intent to upload a new document to the Cloud, RESeED analyzes the document's content and updates its data structures accordingly. Then, it encrypts and transfers the document to the Cloud. RESeED provides regular-expression search over encrypted data by translating search queries on-the-fly to finite automata and analyzing concise and secure representations of the data before asking the Cloud to download the encrypted documents. RESeED's parallel architecture enables efficient search over large-scale (and potentially big-data-scale) data-sets. We evaluate the performance of RESeED experimentally and demonstrate its scalability and correctness using real-world data-sets from arXiv.org and the Internet Engineering Task Force (IETF). Our results show that RESeED produces accurate query responses with a reasonable (≃6%) storage overhead. The results also demonstrate that for many search queries, RESeED performs faster than the grep utility operating on unencrypted data. Copyright © 2017 John Wiley & Sons, Ltd.
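The "analyze a concise representation before downloading" step can be illustrated with a toy trigram pre-filter: the client keeps, per encrypted document, only a set of trigrams, and downloads and decrypts only the candidate documents that could match. Everything here (the class, the document names, the plain-substring query model) is an illustrative stand-in; RESeED's actual representation and its regex-to-automaton translation are more involved:

```python
def trigrams(s):
    return {s[i:i + 3] for i in range(len(s) - 2)}

class TrigramFilter:
    """Toy client-side index: for each (encrypted) document id, keep only
    its trigram set. For a plain substring query, candidates are the
    documents containing every trigram of the query; only those would be
    downloaded and decrypted."""
    def __init__(self):
        self.index = {}
    def add(self, doc_id, plaintext):
        self.index[doc_id] = trigrams(plaintext.lower())
    def candidates(self, query):
        q = trigrams(query.lower())
        return {d for d, grams in self.index.items() if q <= grams}

f = TrigramFilter()
f.add("doc1.enc", "secure regular expression search")
f.add("doc2.enc", "virtual machine consolidation")
```

Querying for "regular" then touches only `doc1.enc`, which is the kind of pruning that lets an encrypted-search tool avoid downloading the whole corpus.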
- Schedulability analysis and efficient scheduling of rate constrained
messages in the TTEthernet protocol
- Authors: Omar Kermia
Abstract: Over time, cyber-physical systems are becoming mixed-criticality systems. As the complexity and size of these systems grow, computation/communication resources should be used more efficiently than in traditional systems. TTEthernet is a communication infrastructure that enables the use of a single physical communication infrastructure for distributed mixed-criticality applications while providing timely determinism. TTEthernet distinguishes between two traffic categories: the standard event-triggered and the time-triggered. The latter, which is granted higher priority, is subject to strong timing guarantees because of a strict periodicity constraint that fixes the start-time cycles of time-triggered messages. In addition, event-triggered traffic includes rate-constrained messages, which have lower priority and a minimum time interval between their transmissions. The paper proposes both an efficient on-line scheduling algorithm and a necessary and sufficient schedulability condition based on the worst-case response time computation for rate-constrained messages, while taking into account time-triggered message transmissions. Copyright © 2017 John Wiley & Sons, Ltd.
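Worst-case response-time analysis of this kind is usually a fixed-point iteration over the interference from higher-priority traffic. The sketch below is the classical fixed-priority recurrence R = C + sum(ceil(R/T) * Ct), shown only to convey the flavour; the paper derives a TTEthernet-specific condition, and the stream parameters here are hypothetical:

```python
import math

def response_time(C, higher):
    """Classical worst-case response-time fixed point for a message of
    transmission time C, interfered with by higher-priority streams given
    as (period, transmission_time) pairs. Iterates R = C + interference(R)
    until it converges (assumes the system is schedulable)."""
    R = C
    while True:
        nxt = C + sum(math.ceil(R / T) * Ct for T, Ct in higher)
        if nxt == R:
            return R
        R = nxt

# A rate-constrained frame (C=2) delayed by two higher-priority streams.
R = response_time(2, [(10, 1), (20, 3)])
```

Comparing the converged R against the message's deadline yields a schedulability test; the paper's contribution is making such a condition both necessary and sufficient in the presence of time-triggered traffic.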
- UKI: universal Kinect-type controller by ICE Lab
- Authors: Pujana Paliyawan; Ruck Thawonmas
Abstract: Universal Kinect-type controller by ICE Lab (UKI, pronounced ‘You-key’) was developed to allow users to control any existing application by using body motions as inputs. The middleware works by converting detected motions into keyboard and/or mouse-click events and sending them to a target application. This paper presents the structure and design of the core modules, along with examples from real cases to illustrate how the middleware can be configured to fit a variety of applications. We present our designs for interfaces that decode all configuration details into a human-interpretable language; these interfaces significantly improve the user experience and eliminate the need for programming skill. The performance of the middleware is evaluated on fighting-game motion data, and we make the data publicly available so that they can be used in other research. UKI may be used by everyone without restriction; for instance, it can be used to promote a healthy life through gaming and/or to conduct serious research on motion systems. The middleware serves as a shortcut in the development of motion applications: coding an application to detect motions can be replaced with simple clicks in UKI. Copyright © 2017 John Wiley & Sons, Ltd.
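The motion-to-key conversion at the heart of such middleware reduces to a configurable dispatch table. In this sketch the motion names and bindings are hypothetical, and `send_key` only records events instead of injecting real keyboard input:

```python
class MotionMapper:
    """Minimal sketch of converting detected body motions into key events
    for a target application, as UKI's core loop does."""
    def __init__(self, bindings):
        self.bindings = bindings   # motion name -> key name
        self.sent = []
    def send_key(self, key):
        # Real middleware would inject a keyboard event here.
        self.sent.append(key)
    def on_motion(self, motion):
        key = self.bindings.get(motion)
        if key is not None:
            self.send_key(key)
        return key

m = MotionMapper({"punch": "z", "kick": "x", "jump": "space"})
m.on_motion("punch")
m.on_motion("jump")
```

Reconfiguring the controller for a different application then amounts to swapping the bindings dictionary, which matches the paper's "simple clicks instead of coding" claim.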
- Modular composition of multiple applications with architectural event
- Authors: Somayeh Malakuti
Abstract: A complex software system is usually developed as a system of systems (SoS) in which multiple constituent applications are composed and coordinated to fulfill desired system-level requirements. To facilitate the interoperability of the constituent applications, they must be augmented with suitable coordination-specific interfaces, through which they can participate in coordinated interactions. To increase the reusability of the applications and the comprehensibility of SoSs, suitable mechanisms are required to modularize the coordination rules and interfaces away from the constituent applications. We introduce a new abstraction named architectural event modules (AEMs), which facilitates defining constituent applications and desired coordination rules as modules of an SoS. AEMs modularly augment the constituent applications with event-based interfaces to let them participate in coordinated interactions. We introduce the EventArch language, in which the concept of AEMs is implemented, and illustrate its suitability using a case study in the domain of energy optimization. Copyright © 2017 John Wiley & Sons, Ltd.
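The idea of keeping coordination rules outside the constituent applications can be sketched with a minimal publish/subscribe bus. EventArch's abstractions are considerably richer; the event names and the energy-optimization rule below are hypothetical:

```python
class EventBus:
    """Minimal pub/sub bus standing in for the event-based interfaces
    that AEMs add to constituent applications: applications publish
    events, and coordination rules live in separate subscriber modules."""
    def __init__(self):
        self.handlers = {}
    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)
    def publish(self, event, payload):
        for h in self.handlers.get(event, []):
            h(payload)

bus = EventBus()
log = []
# Hypothetical coordination rule: throttle an application on high load.
bus.subscribe("load.high", lambda p: log.append(("throttle", p["app"])))
# A constituent application merely publishes; it knows nothing of the rule.
bus.publish("load.high", {"app": "hvac"})
```

Because the rule is a separate module, the publishing application stays reusable in other compositions, which is exactly the modularity argument the abstract makes.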
- JDAS: a software development framework for multidatabases
- Authors: Guoqi Xie; Yuekun Chen, Yan Liu, Chunnian Fan, Renfa Li, Keqin Li
Abstract: Modern software development for services computing and cloud computing software systems is no longer based on a single database but on existing multidatabases, and this convergence needs new software architecture and framework design. Most currently popular frameworks are not designed for multidatabases, and many practical problems arise in development. This study designs and implements a software development framework called Java data access service (JDAS) for multidatabases using the object-oriented programming language Java. The JDAS framework solves problems that arise when other frameworks are employed in practical software development with multidatabases by presenting and introducing design methods. JDAS consists of modules for database abstraction, object-relational mapping, connection pool management, configuration management, data access service, and inversion of control. Results and a case study reveal that the JDAS framework effectively reduces the development complexity and improves the development efficiency of software systems with multidatabases. Copyright © 2017 John Wiley & Sons, Ltd.
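The basic multidatabase problem, routing each query to the right backing store behind one data-access interface, can be sketched with a small registry. JDAS itself is a Java framework with ORM and connection pooling; this Python sketch shows only the routing idea, using two in-memory SQLite databases and hypothetical logical names:

```python
import sqlite3

class MultiDbRouter:
    """Toy data-access layer that routes queries to one of several
    databases by logical name, the kind of abstraction a multidatabase
    framework provides on top of its connection management."""
    def __init__(self):
        self.dbs = {}
    def register(self, name, conn):
        self.dbs[name] = conn
    def query(self, name, sql, params=()):
        return self.dbs[name].execute(sql, params).fetchall()

router = MultiDbRouter()
for name in ("users", "orders"):
    router.register(name, sqlite3.connect(":memory:"))
router.dbs["users"].execute("CREATE TABLE u (id INTEGER, name TEXT)")
router.dbs["users"].execute("INSERT INTO u VALUES (1, 'ada')")
rows = router.query("users", "SELECT name FROM u WHERE id = ?", (1,))
```

A real framework would add per-database dialects, pooled connections, and ORM mapping on top of this dispatch point.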
- Issue Information
- Pages: 503 - 504
Abstract: No abstract is available for this article.
- EMP: execution time measurement protocol for compute-bound programs
- Authors: Young-Kyoon Suh; Richard T. Snodgrass, John D. Kececioglu, Peter J. Downey, Robert S. Maier, Cheng Yi
Pages: 559 - 597
Abstract: Measuring execution time is one of the most widely used performance evaluation techniques in computer science research. Inaccurate measurements cannot be used for a fair performance comparison between programs. Despite the prevalence of its use, the intrinsic variability in time measurement makes it hard to obtain repeatable and accurate timing results for a program running on an operating system. We propose a novel execution time measurement protocol (termed EMP) for measuring the execution time of a compute-bound program on Linux while minimizing that measurement's variability. During the development of EMP, we identified several factors that disturb execution time measurement. We introduce successive refinements to the protocol by addressing each of these factors in concert, reducing variability by more than an order of magnitude. We also introduce a new visualization technique, which we term the ‘dual-execution scatter plot’, that highlights infrequent, long-running daemons, differentiating them from frequent and/or short-running daemons. Our empirical results show that the proposed protocol achieves three major aspects of execution time measurement (precision, accuracy, and scalability) and works for both open-source and proprietary software. Copyright © 2017 John Wiley & Sons, Ltd.
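A common first step toward taming timing variability is repeating the measurement and preferring order statistics (minimum, median) over the mean, since OS noise only ever adds time. This sketch shows that baseline practice only; EMP goes much further, also controlling OS-level disturbances such as daemons:

```python
import time

def measure(fn, repeats=7):
    """Time `fn` several times with a high-resolution monotonic clock and
    report the minimum and median; outliers caused by background activity
    inflate the mean but barely move these statistics."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return {"min": samples[0], "median": samples[len(samples) // 2]}

stats = measure(lambda: sum(range(100000)))
```

A large gap between the minimum and the median is itself a diagnostic that something (for example, an infrequent long-running daemon, as in the paper's dual-execution scatter plot) is disturbing the runs.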
- Supporting collaborative software development over GitHub
- Authors: Ritu Arora; Sanjay Goel, Ravi Kant Mittal
Abstract: GitHub is a web-based, distributed Software Configuration Management (SCM) system built on Git, which enables developers to host shared repositories over the Internet and access them from any location, at any time. It helps developers to effectively orchestrate their activities over shared codebases by capturing direct conflicts arising from concurrent editing of the same shared artifact. However, SCM systems have limited support for capturing inconsistencies arising from indirect conflicts, which arise from the software dependency relationships that exist between related artifacts and lead to the introduction of syntactic and semantic inconsistencies in codebases. In this paper, we propose a novel collaborative software development (CSD) tool named Collaboration over GitHub (COG) that provides real-time information about arising direct and indirect conflicts among collaborative developers working over GitHub, through a collection of workspace awareness widgets. These widgets provide people-centric information about direct and indirect collaborators over GitHub. Resource-centric information about the current and conflicting activities of real-time collaborators is captured and propagated to others, based on the dependency relationships between the software artifacts being manipulated by them. COG uses dependency graphs to store and process the dependency relationship information required to ascertain indirect conflicts. Notably, the most important novel contribution of COG is that it captures not only indirect conflicts that lead to the introduction of syntactic inconsistencies but also changes that lead to semantic inconsistencies in the codebase. It also does so at finer levels of granularity, with changes to individual methods' bodies being traced, thereby capturing statement-level conflicts as well. Copyright © 2016 John Wiley & Sons, Ltd.
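Indirect-conflict detection over a dependency graph boils down to reachability over reversed dependency edges: an edit to artifact A potentially conflicts with concurrent edits to anything that (transitively) depends on A. The artifact names and the file-level granularity below are illustrative simplifications of COG's method-level tracking:

```python
class ConflictDetector:
    """Sketch of dependency-graph-based indirect-conflict detection:
    given `deps` mapping each artifact to the artifacts it depends on,
    `affected_by(a)` returns every artifact whose behaviour may break
    when `a` changes (transitive reverse dependencies)."""
    def __init__(self, deps):
        self.rdeps = {}
        for a, targets in deps.items():
            for t in targets:
                self.rdeps.setdefault(t, set()).add(a)
    def affected_by(self, artifact):
        seen, stack = set(), [artifact]
        while stack:
            for a in self.rdeps.get(stack.pop(), ()):
                if a not in seen:
                    seen.add(a)
                    stack.append(a)
        return seen

# ui.py depends on api.py, which depends on db.py (hypothetical files).
d = ConflictDetector({"ui.py": {"api.py"}, "api.py": {"db.py"}})
hit = d.affected_by("db.py")
```

Intersecting `hit` with the set of artifacts other developers are currently editing yields the indirect conflicts a workspace-awareness widget would surface.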
- Definition of REST web services with JSON schema
- Authors: Guido Barbaglia; Simone Murzilli, Stefano Cudini
- Multi-criteria IoT resource discovery: a comparative analysis
- Authors: Luiz Henrique Nunes; Julio Cezar Estrella, Charith Perera, Stephan Reiff-Marganiec, Alexandre Cláudio Botazzo Delbem
Abstract: The growth of real-world objects with embedded and globally networked sensors is consolidating the Internet of things paradigm and increasing the number of applications in the domains of ubiquitous and context-aware computing. The merging of cloud computing and the Internet of things, named the cloud of things, will be the key to handling thousands of sensors and their data. One of the main challenges in the cloud of things is context-aware sensor search and selection. Typically, sensors must be searched using two or more conflicting context properties. Most of the existing work uses some kind of multi-criteria decision analysis to perform sensor search and selection but shows little concern for the quality of the selections these methods produce. In this paper, we analyse the behaviour of the SAW, TOPSIS, and VIKOR multi-objective decision methods and their quality of selection, comparing them with the Pareto-optimality solutions. The gathered results allow us to analyse and compare these algorithms regarding their behaviour, the number of optimal solutions, and redundancy. Copyright © 2016 John Wiley & Sons, Ltd.
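Two of the notions being compared above, a weighted scoring method (SAW) and Pareto-optimality, are compact enough to sketch side by side. The sensor criteria and weights below are hypothetical, and all criteria are treated as "higher is better" for simplicity:

```python
def saw(alternatives, weights):
    """Simple Additive Weighting: max-normalize each criterion, then
    return the index of the alternative with the best weighted sum."""
    maxima = [max(a[j] for a in alternatives) for j in range(len(weights))]
    def score(a):
        return sum(w * (v / m) for v, w, m in zip(a, weights, maxima))
    return max(range(len(alternatives)), key=lambda i: score(alternatives[i]))

def pareto_front(alternatives):
    """Indices of non-dominated alternatives (all criteria maximized)."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and a != b
    return [i for i, a in enumerate(alternatives)
            if not any(dominates(b, a) for b in alternatives)]

# Hypothetical sensors scored on (accuracy, battery level), both maximized.
sensors = [(0.9, 0.2), (0.7, 0.8), (0.6, 0.6)]
best = saw(sensors, weights=(0.5, 0.5))
front = pareto_front(sensors)
```

Checking whether the alternative a weighted method picks actually lies on the Pareto front is exactly the kind of selection-quality comparison the paper performs.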
- In-memory distributed software solution to improve the performance of recommender systems
- Authors: Enrique Costa-Montenegro; Alexander Tsybanev, Héctor Cerezo-Costas, Francisco Javier González-Castaño, Felipe Gil-Castiñeira, Belén Barragáns-Martínez, Diego Almuiña-Troncoso
Abstract: Many recommender systems are currently available for proposing content (movies, TV series, music, etc.) to users according to different profiling metrics, such as ratings of previously consumed items and ratings of people with similar tastes. Recommendation algorithms are typically executed by powerful servers, as they are computationally expensive. In this paper, we propose a new software solution to improve the performance of recommender systems. Its implementation relies heavily on Apache Spark technology to speed up the computation of recommendation algorithms. It also includes a webserver, a REST API, and a content cache. To prove that our solution is valid and adequate, we have developed a movie recommender system based on two methods, both tested on the freely available Movielens and Netflix datasets. Performance was assessed by calculating root-mean-square error values and the times needed to produce a recommendation. We also provide quantitative measures of the speed improvement of the recommendation algorithms when the implementation is supported by a computing cluster. The contribution of this paper lies in the fact that our solution, which improves the performance of competitor recommender systems, is the first proposal combining a webserver, a REST API, a content cache, and Apache Spark technology. Copyright © 2016 John Wiley & Sons, Ltd.
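The accuracy metric used above, root-mean-square error between predicted and observed ratings, is simple to state precisely. The ratings below are hypothetical:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and observed ratings,
    the standard accuracy metric for rating-prediction recommenders."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

err = rmse([3.5, 4.0, 2.0], [4.0, 4.0, 3.0])
```

Lower is better; because errors are squared before averaging, RMSE penalizes a few badly mispredicted ratings more heavily than mean absolute error would.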
- A new digital watermarking evaluation and benchmarking methodology using an external group of evaluators and multi-criteria analysis based on large-scale data
- Authors: B. B. Zaidan; A. A. Zaidan, H. Abdul. Karim, N. N. Ahmad
Abstract: Digital watermarking evaluation and benchmarking are challenging tasks because of multiple, conflicting evaluation criteria. A few approaches have been presented to implement digital watermarking evaluation and benchmarking frameworks. However, these approaches still possess a number of limitations, such as fixing several attributes at the expense of other attributes, and well-known benchmarking approaches are limited to robust watermarking. Therefore, this paper presents a new methodology for digital watermarking evaluation and benchmarking based on large-scale data, using external evaluators and a group decision-making context. Two experiments are performed. In the first experiment, a noise gate-based digital watermarking approach is developed, and the scheme for the noise gate digital watermarking approach is enhanced. Sixty audio samples from different audio styles are tested with two algorithms. A total of 120 samples were evaluated according to three different metrics, namely, quality, payload, and complexity, to generate a set of digital watermarking samples. In the second experiment, the situation in which digital watermarking evaluators have different preferences is discussed. Weight measurement with a decision-making solution is required to solve this issue. The analytic hierarchy process is used to measure evaluator preference. In the decision-making solution, the technique for order of preference by similarity to the ideal solution (TOPSIS) is utilized with different contexts (e.g., individual and group). Therefore, selecting the proper context with different aggregation operators to benchmark the results of experiment 1 (i.e., the digital watermarking approaches) is recommended. The findings of this research are as follows: (1) group and individual decision making provide the same result in this case study; however, in the case of selection where the priority weights are generated from the evaluators, group decision making is the recommended solution to resolve the trade-off reflected in the benchmarking process for digital watermarking approaches. (2) Internal and external aggregations show that the enhanced watermarking approach performs better than the original watermarking approach. © 2016 The Authors. Software: Practice and Experience published by John Wiley & Sons Ltd.
- Robust power optimization scheme for cooperative wireless relay system in
- Authors: Zhixin Liu; Peng Zhang, Hak-Keung Lam, Kit Yan Chan, Kai Ma
Abstract: Ultra-dense deployment of base stations is one of the most significant features of smart city communication networks. Aiming at the large-scale wireless communication issue in smart cities, we propose a distributed robust power allocation scheme with proportional fairness for cooperative orthogonal frequency-division multiple-access relay networks. With the amplify-and-forward relay mode, all of the relays assist the information transmission simultaneously on orthogonal subcarriers. Considering the uncertainty of channel gains, we first aim at achieving the maximum utility subject to the constraints of outage probability threshold and power bound. Subsequently, the problem is transformed into a solvable convex optimization problem with deterministic constraints. The dual-decomposition method is applied to solve the formulated optimization problem. To reduce the information exchange of the whole system, we propose a computationally efficient distributed iteration algorithm. Numerical results reveal the effectiveness of the proposed robust optimization algorithm. Copyright © 2016 John Wiley & Sons, Ltd.
- Energy consumption analysis of data stream processing: a benchmarking
- Authors: Miyuru Dayarathna; Yuanlong Li, Yonggang Wen, Rui Fan
Abstract: Energy efficiency of data analysis systems has become a very important issue in recent times because of the increasing costs of data center operations. Although distributed streaming workloads have increasingly been present in modern data centers, energy-efficient scheduling of such applications remains a significant challenge. In this paper, we conduct an energy consumption analysis of data stream processing systems in order to identify their energy consumption patterns. We follow a stream system benchmarking approach to this problem. Specifically, we implement the Linear Road benchmark on six stream processing environments (S4, Storm, ActiveMQ, Esper, Kafka, and Spark Streaming) and characterize these systems' performance in a real-world data center. We study the energy consumption characteristics of each system with a varying number of roads as well as with different types of component layouts. We also use a microbenchmark to capture raw energy consumption characteristics. We observed that the S4, Esper, and Spark Streaming environments had the highest average energy efficiency compared with the other systems. Using a neural network-based technique with the power/performance information gathered from our experiments, we developed a model of the power consumption behavior of a streaming environment. We observed that energy-efficient execution of a streaming application cannot be specifically attributed to system CPU usage. We also observed that communication between compute nodes with moderate tuple sizes and scheduling plans with balanced system overhead produce better power consumption behavior in data stream processing systems. Copyright © 2016 John Wiley & Sons, Ltd.
- Modelling a family of systems for crisis management with concern-oriented
- Authors: Omar Alam; Jörg Kienzle, Gunter Mussbacher
Abstract: Concern-oriented reuse (CORE) proposes the concern as a new unit of model-based reuse encapsulating software artefacts pertaining to a domain of interest that span multiple development phases and levels of abstraction. With CORE, a concern encapsulates multiple reusable features, while allowing its generic models to be customized to problem-specific contexts. We report on our experience of designing a family of crisis management systems (CMS) with the help of reusable concern libraries. The collected metrics show a considerable amount of reuse in our CMS design. The study provides encouraging evidence that CORE's vision to create large-scale, generic and reusable entities that are expressed with the most appropriate modelling formalisms at the right level of abstraction is feasible. We present our experience in the design of the CMS and elaborate on the advantages as well as the efforts required to adopt CORE in an industrial setting. Copyright © 2016 John Wiley & Sons, Ltd.
- Adaptive trade-off between consistency and performance in data replication
- Authors: Hailong Sun; Bang Xiao, Xu Wang, Xudong Liu
Abstract: Replication is widely adopted in modern Internet applications and distributed systems to improve reliability and performance. Although maintaining strong consistency among replicas guarantees the correctness of application behaviors, it also hurts application performance because of the well-known trade-off between consistency and performance. Many real-world applications favoring performance therefore choose to enforce weak consistency. Although there has been some work on flexible configuration of consistency, most of it focuses on design or deployment time. As system settings constantly change during runtime, the tuning of the consistency-performance trade-off needs to be handled dynamically; failing to do so causes either underestimation or overestimation of the consistency and performance that can be achieved. Existing work does not support dynamic runtime tuning of this trade-off well, mainly because of the lack of an appropriate quantitative model of consistency and performance. In this work, based on our previous effort on a quantitative model of consistency and latency, we design a replication protocol, CC-Paxos, to achieve an adaptive trade-off between consistency and performance according to application preferences and runtime information. By design, CC-Paxos is not bound to any specific underlying data store. We have implemented CC-Paxos and applied it to MySQL databases. Real experiments both within a data center and across data centers show that CC-Paxos not only can dynamically adjust the delivered consistency in return for ensured performance but also outperforms MySQL Cluster in the case of strong consistency guarantees. Copyright © 2016 John Wiley & Sons, Ltd.
- Regular and almost universal hashing: an efficient implementation
- Authors: Dmytro Ivanchykhin; Sergey Ignatchenko, Daniel Lemire
Abstract: Random hashing can provide guarantees regarding the performance of data structures such as hash tables – even in an adversarial setting. Many existing families of hash functions are universal: given two data objects, the probability that they have the same hash value is low when the hash function is picked at random. However, universality fails to ensure that all hash functions are well behaved. We might further require regularity: when picking data objects at random, they should have a low probability of having the same hash value, for any fixed hash function. We present an efficient implementation of a family of non-cryptographic hash functions (PM+) offering good running times, good memory usage, and distinguishing theoretical guarantees: almost universality and component-wise regularity. On a variety of platforms, our implementations are comparable with the state of the art in performance. On recent Intel processors, PM+ achieves a speed of 4.7 bytes per cycle for 32-bit outputs and 3.3 bytes per cycle for 64-bit outputs. We review vectorization through Single Instruction Multiple Data (SIMD) instructions (e.g., AVX2) and optimizations for superscalar execution. Copyright © 2016 John Wiley & Sons, Ltd.
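The PM+ construction itself is not reproduced here, but the universality property the abstract relies on can be illustrated with the classic Carter–Wegman multiply-mod-prime scheme (a minimal sketch; the parameter names and the choice of prime are ours, not the paper's):

```python
import random

P = (1 << 61) - 1  # a Mersenne prime, larger than any 32-bit key


def make_universal_hash(m, rng=random):
    """Draw one function h(x) = ((a*x + b) mod P) mod m from a universal family.

    For any fixed pair of distinct keys x != y, the probability (over the
    random draw of a and b) that h(x) == h(y) is at most about 1/m -- the
    universality guarantee the abstract refers to.
    """
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m


if __name__ == "__main__":
    h = make_universal_hash(1024, random.Random(42))
    print(h(123456), h(654321))  # two bucket indices in [0, 1024)
```

Note that universality is a statement about a random draw from the family, not about any single function, which is exactly the gap that the paper's regularity property addresses.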
- Exploiting long-term and short-term preferences and RFID trajectories in
- Authors: Yue Ding; Dong Wang, Guoqiang Li, Daniel Sun, Xin Xin, Shiyou Qian
Abstract: Shop recommendation in large shopping malls is useful in the mobile Internet era. With the maturity of indoor positioning technology, customers' indoor trajectories can be captured by radio frequency identification (RFID) readers, which provides a new way to analyze customers' potential preferences. In this paper, we design three methods for the top-N shop recommendation problem. The first method is an improved matrix factorization method that fuses an estimated prior customer preference matrix constructed by Session-based Temporal Graph computing. The second method is a Bayesian personalized ranking method built on the first method. The third method uses tensor decomposition combined with the Session-based Temporal Graph. In addition, we exploit customers' historical RFID trajectory information to find their frequent paths and revise predicted rating values to improve recommendation accuracy. Our methods are effective in modeling customers' temporal dynamics. At the same time, our approach supports repeated recommendation of the same shop through designed rating update rules. The test dataset is formed from the customer behavior records of JoyCity, a large-scale modern shopping center in downtown Shanghai, China. The results show that our approaches are effective and outperform previous state-of-the-art approaches. Copyright © 2016 John Wiley & Sons, Ltd.
- Dynamic reconfiguration of cloud application architectures
- Authors: Miguel Zúñiga-Prieto; Javier González-Huerta, Emilio Insfran, Silvia Abrahão
Abstract: Service-based cloud applications are software systems that continuously evolve to satisfy new user requirements and technological changes. These applications also require elasticity, scalability, and high availability, which means that the deployment of new functionalities or architectural adaptations to fulfill service level agreements (SLAs) should be performed while the application is in execution. Dynamic architectural reconfiguration is essential to minimize system disruption while new or modified services are being integrated into existing cloud applications. Thus, cloud applications should be developed following principles that support dynamic reconfiguration of services, and tools are needed to automate these reconfigurations at runtime. This paper presents an extension of a model-driven method for dynamic and incremental architecture reconfiguration of cloud services that allows developers to specify new services as software increments, together with a tool that generates the implementation code for the service integration logic as well as the deployment and architectural reconfiguration scripts specific to the cloud environment in which the service will be deployed (e.g., Microsoft Azure). We also report the results of a quasi-experiment that empirically validates our method by evaluating its perceived ease of use, perceived usefulness, and perceived intention to use. The results show that the participants perceive the method to be useful, and they also expressed their intention to use it in the future. Although further experiments must be carried out to corroborate these results, the method has proven to be a promising architectural reconfiguration process for cloud applications in the context of agile and incremental development processes. Copyright © 2016 John Wiley & Sons, Ltd.
- JAMES: An object-oriented Java framework for discrete optimization using
local search metaheuristics
- Authors: Herman De Beukelaer; Guy F. Davenport, Geert De Meyer, Veerle Fack
Abstract: This paper describes the Java Metaheuristics Search framework (JAMES, v1.1): an object-oriented Java framework for discrete optimization using local search algorithms that exploits the generality of such metaheuristics by clearly separating search implementation and application from problem specification. A wide range of generic local searches are provided, including (stochastic) hill climbing, tabu search, variable neighbourhood search and parallel tempering. These can be applied to any user-defined problem by plugging in a custom neighbourhood for the corresponding solution type. Using an automated analysis workflow, the performance of different search algorithms can be compared in order to select an appropriate optimization strategy. Implementations of specific components are included for subset selection, such as a predefined solution type, generic problem definition and several subset neighbourhoods used to modify the set of selected items. Additional components for other types of problems (e.g. permutation problems) are provided through an extensions module which also includes the analysis workflow. In comparison with existing Java metaheuristics frameworks that mainly focus on population-based algorithms, JAMES has a much lower memory footprint and promotes efficient application of local searches by taking full advantage of move-based evaluation. Releases of JAMES are deployed to the Maven Central Repository so that the framework can easily be included as a dependency in other Java applications. The project is fully open source and hosted on GitHub. More information can be found at http://www.jamesframework.org. Copyright © 2016 John Wiley & Sons, Ltd.
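JAMES is a Java framework, but the move-based local search it generalizes can be sketched compactly. The following toy hill climber for subset selection (in Python, not the JAMES API; function and parameter names are ours) mirrors the framework's idea of a solution type plus a swap neighbourhood:

```python
import random


def hill_climb_subset(items, k, score, steps=1000, rng=random):
    """Stochastic hill climbing for selecting k of len(items) items.

    A 'move' swaps one selected item for one unselected item; the move is
    kept only if it does not decrease the score -- the move-based
    evaluation style that frameworks like JAMES exploit.
    """
    selected = set(rng.sample(range(len(items)), k))
    best = score([items[i] for i in selected])
    for _ in range(steps):
        out = rng.choice(sorted(selected))
        into = rng.choice(sorted(set(range(len(items))) - selected))
        candidate = (selected - {out}) | {into}
        s = score([items[i] for i in candidate])
        if s >= best:  # accept improving (and sideways) moves
            selected, best = candidate, s
    return selected, best


if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    sel, val = hill_climb_subset(data, 3, sum, rng=random.Random(0))
    print(sorted(data[i] for i in sel), val)  # should approach the 3 largest
```

Swapping in tabu search or parallel tempering, as JAMES provides, changes only the acceptance rule around the same neighbourhood.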
- Pattern-based multi-cloud architecture migration
- Authors: Pooyan Jamshidi; Claus Pahl, Nabor C. Mendonça
Abstract: Many organizations migrate on-premise software applications to the cloud. However, current coarse-grained cloud migration solutions have made such migrations a non-transparent task, an endeavor based on trial and error. This paper presents Variability-based, Pattern-driven Architecture Migration (V-PAM), a migration method based on (i) a catalogue of fine-grained, service-based cloud architecture migration patterns that target multi-cloud settings, (ii) a situational migration process framework to guide pattern selection and composition, and (iii) a variability model to structure system migration into a coherent framework. The proposed migration patterns are based on empirical evidence from several migration projects, best practice for cloud architectures, and a systematic literature review of existing research. V-PAM allows an organization to (i) select appropriate migration patterns, (ii) compose them to define a migration plan, and (iii) extend them based on the identification of new patterns in new contexts. The patterns are at the core of our solution, embedded into a process model, with their selection governed by a variability model. Copyright © 2016 John Wiley & Sons, Ltd.
- A low-cost real-time face tracking system for ITSs and SDASs
- Authors: Leyuan Liu; Jingying Chen, Changxin Gao, Nong Sang
Abstract: It is important to track people's faces efficiently and accurately in many Intelligent Transportation Systems (ITSs) and Safety Driving Assistant Systems (SDASs). This paper presents a high-performance, low-cost real-time face tracking system that runs on a general onboard computer with very low CPU consumption. The proposed face tracking system is composed of four modules: a motion detector, a face detector, a face tracker, and a face validator. The motion detector extracts motion areas using a spatial-temporal bi-differential method with a very low computational cost. The face detector integrates motion cues into a cascade face detection framework to reject most non-face scanning windows and thus ensure efficient face localization. The face tracker fuses motion features with color features to alleviate the drifting problem during tracking. The face validator builds face appearance models online and identifies each specific tracked face to avoid confusion. Experimental results on three challenging video sequences show that the proposed face tracking system outperforms state-of-the-art face trackers and consumes only 5–13% of the CPU resources of a low-spec onboard computer while processing in real time. Copyright © 2016 John Wiley & Sons, Ltd.
- Freeze'nSense: estimation of performance isolation in cloud environments
- Authors: Alexander Kandalintsev; Dzmitry Kliazovich, Renato Lo Cigno
Abstract: Modern computing hardware offers very good task parallelism, but resource contention between tasks remains high. This renders large fractions of CPU time wasted and leads to application interference. Even tasks running on dedicated CPU cores can still incur interference from other tasks, most notably because of the caches and other hardware components shared by more than one core. The level of interference depends on the nature of the executed tasks and is difficult to predict. A customer who has been promised that his task will run as if it were alone (e.g., on a CPU core dedicated to a virtual machine) may indeed suffer significant performance degradation due to the time spent waiting for resources occupied by other tasks. Measuring the actual performance of a task or a virtual machine can be difficult; even more challenging is estimating what the performance of the task would be if it were running completely in isolation. In this paper, we present Freeze'nSense, a measurement technique based on hardware performance counters that measures the actual performance of a task and estimates its performance as if the task were in isolation, all during runtime. To estimate performance in isolation, the proposed technique performs a short-time freezing of the potentially interfering tasks. Freeze'nSense introduces less than 1% overhead and is confirmed to provide accurate and reliable measurements. In practice, Freeze'nSense becomes a valuable tool for automatically identifying the tasks that suffer the most in a shared environment and moving them to a distant core. The observed performance improvement can be as large as 80–100% for individual tasks and up to 15–20% for the computing node. Copyright © 2016 John Wiley & Sons, Ltd.
- Toward cost-effective replica placements in cloud storage systems with
- Authors: Lingfang Zeng; Shijie Xu, Yang Wang, Kenneth B. Kent, David Bremner, Chengzhong Xu
Abstract: In this paper, we propose a simulation model to study real-world replication workflows for cloud storage systems. With this model, we present three new methods to maximize storage space usage during replica creation and two novel QoS-aware greedy algorithms for replica placement optimization. Our algorithms are evaluated by simulation, through a comparison with existing placement algorithms, to show that (i) a more even distribution of replicas for a data set can be achieved by using round-robin methods in the replica creation phase and (ii) the two proposed greedy algorithms, named GS_QoS and GS_QoS_C1, not only produce more economical results than those of Chen et al. but also guarantee QoS for clients. Copyright © 2016 John Wiley & Sons, Ltd.
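The first finding above, that round-robin creation spreads replicas evenly across nodes, is easy to illustrate with a toy placement loop (our own sketch, not the paper's simulator; node and object names are assumptions):

```python
from collections import Counter
from itertools import cycle


def place_round_robin(objects, nodes, replicas=3):
    """Assign `replicas` copies of each object to distinct nodes in
    round-robin order, keeping per-node load within one replica of even.

    Assumes replicas <= len(nodes) so distinct nodes can always be found.
    """
    ring = cycle(range(len(nodes)))
    placement = {}
    for obj in objects:
        chosen = []
        while len(chosen) < replicas:
            n = next(ring)
            if n not in chosen:  # a replica never lands twice on one node
                chosen.append(n)
        placement[obj] = [nodes[i] for i in chosen]
    return placement


if __name__ == "__main__":
    p = place_round_robin([f"blk{i}" for i in range(10)],
                          ["n0", "n1", "n2", "n3", "n4"])
    load = Counter(node for reps in p.values() for node in reps)
    print(dict(load))  # 30 replicas over 5 nodes -> 6 per node
```

A QoS-aware greedy algorithm such as GS_QoS would additionally filter candidate nodes by client latency or bandwidth constraints before this balancing step.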
- Parallel computation of the reachability graph of Petri net models with
- Authors: Eduardo González-López de Murillas; Javier Fabra, Pedro Álvarez, Joaquín Ezpeleta
Abstract: Formal verification plays a crucial role when dealing with the correctness of systems. In a previous work, the authors proposed a class of models, Unary Resource Description Framework Petri Nets (U-RDF-PN), which integrate Petri nets and (RDF-based) semantic information. That work also proposed a model checking approach for the analysis of system behavioural properties that made use of the net reachability graph. Computing such a graph, especially when dealing with high-level structures such as RDF graphs, is a very expensive task. This paper describes the development of a parallel solution for computing the reachability graph of U-RDF-PN models. In addition, the paper presents experimental results obtained when the tool was deployed on cluster and cloud frameworks. The results show not only the improvement in the total time required to compute the graph, but also the high scalability of the solution, which makes it very useful given the current (and future) availability of cloud infrastructures. Copyright © 2016 John Wiley & Sons, Ltd.
- Perils of opportunistically reusing software module
- Authors: Naveen Kulkarni; Vasudeva Varma
Abstract: Opportunistic reuse is need-based sourcing of software modules without a prior reuse plan. It is a common tactical approach in software development: developers often reuse an external software module opportunistically to improve their productivity. However, studies have shown that this results in extensive refactoring and adds maintenance woes. We attribute this problem to mismatches between the software under development and the reused external module, caused by their differing assumptions and constraints. We highlight the problems of such opportunistic reuse practices with the help of a case study, in which we found issues such as unanticipated behavior, violated constraints, conflicting assumptions, fragile structure, and software bloat. In this paper, we draw the attention of the research community to widespread opportunistic reuse practices and the lack of methods to proactively identify and resolve the mismatches. We propose the need for supporting developers in reasoning before reuse, from the perspective of identifying and fixing both local and global mismatches. Furthermore, we identify other opportunistic software development practices where similar issues can be observed and suggest research areas where further investigation can help developers improve their productivity. Copyright © 2016 John Wiley & Sons, Ltd.
- A novel model-driven approach for seamless integration
- Authors: Ahmet F. Mustacoglu
Abstract: This research discusses integration problems and describes a novel model-driven approach that aims to achieve a higher degree of interoperability among software development tools coming from different technological spaces (TSs) by representing tool data through models. The proposed concept introduces a way to integrate various software-related tools and aims to provide a modular syntax for tool integration that leverages the collaboration of different tools. Because model-driven tool integration has a wide scope and it is difficult to show all of its aspects in one study, the proposed approach has been tested through a case study demonstrating a single aspect of it. We show that model-driven tool integration between different TSs is possible based on the proposed concept, and we provide the formulation of the proposed approach. As the results indicate, the proposed system integrates selected software-related tools coming from different TSs and enables them to use each other's capabilities. This work paves the way for standardization efforts in model-driven tool integration. Finally, further research opportunities are outlined. Copyright © 2016 John Wiley & Sons, Ltd.
- Optimising Unicode regular expression evaluation with previews
- Authors: Howard Chivers
Abstract: The jsre regular expression library was designed to provide fast matching of complex expressions over large input streams using user-selectable character encodings. An established design approach was used: a simulated non-deterministic finite automaton (NFA) implemented as a virtual machine, avoiding exponential cost functions in either space or time. A deterministic finite automaton (DFA) was chosen as a general dispatching mechanism for Unicode character classes, and this also provided the opportunity to use compact DFAs in various optimization strategies. The result was the development of the regular expression preview, which summarizes all the matches possible from a given point in a regular expression in a form that can be implemented as a compact DFA and can be used to further improve the performance of the standard NFA simulation algorithm. This paper formally defines a preview and describes and evaluates several optimizations using this construct. They provide significant speed improvements accrued from fast scanning of anchor positions, avoidance of retesting of repeated strings in unanchored searches, and efficient searching of multiple alternate expressions, which in the case of keyword searching has a time complexity logarithmic in the number of words to be searched. Copyright © 2016 John Wiley & Sons, Ltd.
- A Bloom filter based semi-index on q-grams
- Authors: Szymon Grabowski; Robert Susik, Marcin Raniszewski
Abstract: We present a simple q-gram-based semi-index, which typically allows a pattern to be searched for in only a small fraction of text blocks. Several space-time tradeoffs are presented. Experiments on the Pizza & Chili datasets show that our solution is up to three orders of magnitude faster than the semi-index of Claude et al. (Journal of Discrete Algorithms 2012; 11:37) at comparable space usage. Moreover, the construction of our data structure is fast and easily parallelizable. Copyright © 2016 John Wiley & Sons, Ltd.
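The idea behind such a semi-index can be sketched as follows: each text block stores a small Bloom filter of its q-grams, and a block is scanned only if every q-gram of the pattern might be present. This is an illustrative toy under our own parameter choices (filter size, q, block overlap), not the authors' implementation:

```python
import hashlib

BITS, Q, K = 512, 3, 3  # filter size in bits, q-gram length, hashes per q-gram


def _positions(qgram):
    """Derive K bit positions for a q-gram from a single blake2b digest."""
    h = hashlib.blake2b(qgram.encode()).digest()
    return [int.from_bytes(h[4 * i:4 * i + 4], "big") % BITS for i in range(K)]


def build_semi_index(text, block=64, overlap=16):
    """Split text into overlapping blocks (overlap bounds the longest
    findable pattern) and record each block's q-grams in a Bloom filter."""
    blocks = [text[i:i + block + overlap] for i in range(0, len(text), block)]
    filters = []
    for b in blocks:
        f = 0
        for i in range(len(b) - Q + 1):
            for pos in _positions(b[i:i + Q]):
                f |= 1 << pos
        filters.append(f)
    return blocks, filters


def search(pattern, blocks, filters):
    """Scan only those blocks whose filter may contain every pattern q-gram."""
    sigs = [_positions(pattern[i:i + Q]) for i in range(len(pattern) - Q + 1)]
    return [i for i, (b, f) in enumerate(zip(blocks, filters))
            if all(f >> p & 1 for sig in sigs for p in sig) and pattern in b]
```

Bloom filters admit false positives but never false negatives, so the filter step can only over-select blocks; the final `pattern in b` check keeps the answer exact, which is what makes this a semi-index rather than a full index.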
- A cloud-based taxi trace mining framework for smart city
- Authors: Jin Liu; Xiao Yu, Zheng Xu, Kim-Kwang Raymond Choo, Liang Hong, Xiaohui Cui
Abstract: As a well-known field of big data applications, the smart city takes advantage of massive data analysis to achieve efficient management and sustainable development in the current worldwide urbanization process. An important problem in smart cities is how to discover frequent trajectory sequence patterns and cluster trajectories. To solve this problem, this paper proposes a cloud-based taxi trajectory pattern mining and trajectory clustering framework for smart cities. Our work mainly includes (1) preprocessing raw Global Positioning System traces by calling the Baidu Geocoding API; (2) proposing a distributed trajectory pattern mining (DTPM) algorithm based on Spark; and (3) proposing a distributed trajectory clustering (DTC) algorithm based on Spark. The proposed DTPM and DTC algorithms overcome high input/output and communication overheads by adopting in-memory computation. In addition, the DTPM algorithm avoids generating redundant local trajectory patterns, which significantly improves overall performance. The DTC algorithm enhances the performance of trajectory similarity computation by transforming the trajectory similarity calculation into AND and OR operations. Experimental results indicate that the DTPM and DTC algorithms significantly improve the overall performance and scalability of trajectory pattern mining and trajectory clustering on massive taxi trace data. Copyright © 2016 John Wiley & Sons, Ltd.
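The DTC trick of reducing similarity computation to AND/OR operations can be illustrated with Jaccard similarity over bitset-encoded trajectories. The grid-cell encoding below is our assumption for illustration, not necessarily the paper's exact representation:

```python
def to_bitset(cells):
    """Encode the set of grid cells a trajectory visits as an integer bitset."""
    b = 0
    for c in cells:
        b |= 1 << c
    return b


def jaccard(a, b):
    """Jaccard similarity computed with only AND, OR, and bit counts."""
    union = a | b
    if union == 0:
        return 1.0  # convention: two empty trajectories are identical
    return bin(a & b).count("1") / bin(union).count("1")


if __name__ == "__main__":
    t1 = to_bitset([3, 7, 8, 12, 15])   # cells visited by trajectory 1
    t2 = to_bitset([3, 7, 9, 12, 15])   # trajectory 2 differs in one cell
    print(jaccard(t1, t2))  # 4 shared cells / 6 total = 0.666...
```

Because whole machine words are intersected at once, each pairwise comparison costs a handful of bitwise instructions rather than a set-membership loop, which is the source of the speedup the abstract describes.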
- Engineering order-preserving pattern matching with SIMD parallelism
- Authors: Tamanna Chhabra; Simone Faro, M. Oğuzhan Külekci, Jorma Tarhio
Abstract: The order-preserving pattern matching problem has gained attention in recent years. It consists of finding all substrings of the text that have the same length and relative order as the input pattern; typically, both the text and the pattern consist of numbers. In recent years, there has been a tendency to exploit the word RAM model to increase the efficiency of string matching algorithms. This model works on computer words, reading and processing blocks of characters at once, so that the usual arithmetic and logic operations on words can be performed in one unit of time. In this paper, we present a fast order-preserving pattern matching algorithm that uses specialized word-size packed string matching instructions based on the single instruction multiple data (SIMD) instruction set architecture. We show with experimental results that the new algorithm is more efficient than previous solutions. © 2016 The Authors. Software: Practice and Experience published by John Wiley & Sons Ltd.
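Stripped of the SIMD machinery, the problem reduces to comparing relative-order encodings of fixed-length windows. A naive baseline (a reference definition of the problem, not the authors' packed algorithm) looks like this:

```python
def rank_signature(seq):
    """Relative-order encoding: the rank of each element within the window.

    Assumes values within a window are distinct, as is usual for this problem.
    """
    order = sorted(range(len(seq)), key=seq.__getitem__)
    ranks = [0] * len(seq)
    for r, i in enumerate(order):
        ranks[i] = r
    return tuple(ranks)


def op_match(text, pattern):
    """Return start positions of substrings order-isomorphic to the pattern."""
    m = len(pattern)
    sig = rank_signature(pattern)
    return [i for i in range(len(text) - m + 1)
            if rank_signature(text[i:i + m]) == sig]


if __name__ == "__main__":
    # 3,41,57 has the same relative order (low, mid, high) as 10,20,30
    print(op_match([5, 3, 41, 57, 12, 9], [10, 20, 30]))  # [1]
```

This baseline costs O(m log m) per window; the paper's contribution is filtering candidate windows with packed word-size comparisons so that full signature checks are rarely needed.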
- Analytics-as-a-service in a multi-cloud environment through
semantically-enabled hierarchical data processing
- Authors: Prem Prakash Jayaraman; Charith Perera, Dimitrios Georgakopoulos, Schahram Dustdar, Dhavalkumar Thakker, Rajiv Ranjan
Abstract: A large number of cloud middleware platforms and tools are deployed to support a variety of Internet-of-Things (IoT) data analytics tasks. It is common practice that such cloud platforms are used only by their owners to achieve their primary and predefined objectives, where raw and processed data are consumed only by them. However, allowing third parties to access processed data to achieve their own objectives significantly increases integration and cooperation, and can also lead to innovative uses of the data. Multi-cloud, privacy-aware environments facilitate such data access, allowing different parties to share processed data and thereby reduce computation resource consumption collectively. However, such environments raise interoperability issues involving heterogeneous data and analytics-as-a-service providers. There is a lack of both architectural blueprints that can support such diverse, multi-cloud environments and corresponding empirical studies that show the feasibility of such architectures. In this paper, we outline an innovative hierarchical data-processing architecture that utilises semantics at all levels of the IoT stack in multi-cloud environments. We demonstrate the feasibility of this architecture by building a system based on it, using OpenIoT as middleware and Google Cloud and Microsoft Azure as cloud environments. The evaluation shows that the system is scalable and has no significant limitations or overheads. Copyright © 2016 John Wiley & Sons, Ltd.
- Modeling and verification of Web services composition based on model
- Authors: Yi Zhu; Zhiqiu Huang, Hang Zhou
Abstract: With the rapid development of cloud computing, social computing, and the Web of Things, increasingly complex requirements on the reliability of Web services composition have emerged. As more reliable methods are needed to model and verify today's complex Web services compositions, this paper proposes a method to model and verify Web services composition based on model transformation. First, a modeling and verification framework based on model transformation is established. Then, a Communicating Sequential Processes (CSP) notation is defined according to the features of Web services composition, and the corresponding model checking tool Failures-Divergences Refinement (FDR) is introduced. The transformation approaches between the Business Process Execution Language (BPEL) and CSP are then defined in detail. Lastly, the method is evaluated by modeling and verifying the Web services composition of an online shopping system. The experimental results show that this method can greatly increase the reliability of Web services composition. Copyright © 2016 John Wiley & Sons, Ltd.
- A sensitive object-oriented approach to big surveillance data compression
for social security applications in smart cities
- Authors: Jing Xiao; Zhongyuan Wang, Yu Chen, Liang Liao, Jun Xiao, Gen Zhan, Ruimin Hu
Abstract: Surveillance has become a fairly common practice with the global boom in "smart cities". How to efficiently store and manage the vast quantities of surveillance data is a persistent challenge in terms of analyzing social security problems. Developing compression technology suited to the analytic requirements of surveillance data is the key to solving the storage problem. Criminal investigation demands the quality preservation of sensitive objects, typically pedestrians, human faces, vehicles, and license plates; however, the analytical value of surveillance data is rapidly lost as the compression ratio increases. In this paper, we propose a sensitive object-oriented, regions-of-interest-based coding strategy for preserving the analytical value of surveillance data. In the proposed method, instead of generating a saliency map based on human visual perception, we consider saliency as a set of characteristics important for object detection and recognition. With this modification, almost all sensitive objects relevant to a criminal investigation are assigned a high saliency value, rather than only one or two salient regions. Motions in the temporal domain are integrated to place emphasis on moving objects, so that moving sensitive objects gain the highest saliency. Finally, a saliency-based rate control algorithm embedded in High Efficiency Video Coding is used to maintain the quality of sensitive objects in the encoded video under a fixed bitrate. Experiments were conducted on two analytical indexes: feature similarity and object detection accuracy. The results showed that, at the same feature similarity and object detection accuracy, our method saves 20% and 40% of the bitrate, respectively, over High Efficiency Video Coding for the storage of big surveillance data. Copyright © 2016 John Wiley & Sons, Ltd.
- Synergies and tradeoffs in software reuse – a systematic mapping
- Authors: Denise Bombonatti; Miguel Goulão, Ana Moreira
Abstract: Software reuse is a broadly accepted practice to improve software development quality and productivity. Although software reuse has been an object of study in software engineering since the late sixties, achieving effective reuse remains challenging for many software development organizations. This paper reports a systematic mapping study on how reusability relates to other non-functional requirements and how different contextual factors influence the success of a reuse initiative. The conclusion is that the relationships are discussed rather informally, and that human, organizational, and technological domain factors are extremely relevant to a particular reuse context. This mapping study highlights the need for further research to better understand how exactly the different non-functional requirements and context factors affect reusability.
- Big forensic data management in heterogeneous distributed systems: quick
analysis of multimedia forensic data
- Authors: Darren Quick; Kim-Kwang Raymond Choo
Abstract: The growth in the data volume and number of evidential data from heterogeneous distributed systems in smart cities, such as cloud and fog computing systems and Internet-of-Things devices (e.g. IP-based CCTVs), has led to increased collection, processing and analysis times, potentially resulting in vulnerable persons (e.g. victims of terrorism incidents) being at risk. A process of Digital Forensic Data Reduction of source multimedia and forensic images has provided a method to reduce the collection time and volume of data. In this paper, a methodology of Digital Forensic Quick Analysis is outlined, which describes a method to review Digital Forensic Data Reduction subsets to pinpoint relevant evidence and intelligence from heterogeneous distributed systems in a timely manner. Applying the proposed methodology to real-world data from an Australian police agency highlighted the timeliness of the process, resulting in significant improvements in processing times in comparison with processing a full forensic image. The Quick Analysis methodology, combined with Digital Forensic Data Reduction, has potential to locate evidence and intelligence in a timely manner.
- Variability management of plugin-based systems using feature models
- Authors: André L. Santos
Abstract: Plugin-based systems are typically realized using a component framework that offers an infrastructure for assembling plugin components, which can be composed to form system variants. Feature models have been proposed as an abstraction to manage software variability, where feature configurations describe variants of a software system. In this paper, we propose an automated approach to map the artifacts of plugin-based component frameworks to feature models. We describe a methodology for structuring the architecture of a plugin-based system, so that the variability space and variants are reflected in a feature model and its configurations. We materialized the proposed approach for the Eclipse Equinox component framework in a tool to visualize the variability of plugin-based systems in feature diagrams, which can be used to generate system variants. We carried out an experiment where we developed a small plugin-based product line on top of Equinox in the context of an advanced software development course.
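The core idea of mapping feature configurations to plugin-based variants can be illustrated with a toy model. All feature names, plugin identifiers, and constraints below are hypothetical; the paper's tooling targets Eclipse Equinox and derives the model automatically from plugin artifacts.

```python
# Toy feature model: each feature maps to the plugin(s) that realize it.
FEATURE_TO_PLUGINS = {
    "core": {"org.example.core"},
    "spellcheck": {"org.example.spellcheck"},
    "pdf_export": {"org.example.export.pdf"},
}

# Cross-tree constraints: a feature may require other features.
REQUIRES = {"spellcheck": {"core"}, "pdf_export": {"core"}}


def plugins_for(configuration):
    """Validate a feature configuration and return the plugin set of
    the corresponding system variant."""
    selected = set(configuration)
    for feature in selected:
        missing = REQUIRES.get(feature, set()) - selected
        if missing:
            raise ValueError(f"{feature} requires {sorted(missing)}")
    plugins = set()
    for feature in selected:
        plugins |= FEATURE_TO_PLUGINS[feature]
    return plugins
```

In this sketch, each valid configuration (a set of selected features) yields the plugin assembly of one variant, mirroring how feature-model configurations describe variants of the plugin-based system.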
- Improving scientific application execution on Android mobile devices via
- Authors: Ana Rodriguez; Cristian Mateos, Alejandro Zunino
Abstract: The increasing number of mobile devices with ever-growing capabilities makes them useful for running scientific applications. However, these applications have high computational demands, whereas mobile devices have limited capabilities when compared with non-mobile devices. More importantly, mobile devices rely on batteries for their power supply. We initially measure the battery consumption of different versions of known micro-benchmarks representing common programming primitives found in scientific applications. Then, we analyze the performance of such micro-benchmarks in CPU-intensive mobile applications. We apply good programming practices and code refactorings to reduce battery consumption of scientific mobile applications. Our results show the reduction in energy usage from applying these refactorings to three scientific applications, and we consequently propose guidelines for high-performance computing applications. Our focus is on Android, the dominant mobile operating system. As a long-term contribution, our results represent one more step in the progress towards hybrid distributed infrastructures comprising fixed and mobile nodes, that is, the so-called mobile grids.
- Frameworks compiled from declarations: a language-independent approach
- Authors: Paul Walt; Charles Consel, Emilie Balland
Abstract: Programming frameworks are an accepted fixture in the object-oriented world, motivated by the need for code reuse, developer guidance and restriction. A new trend is emerging where frameworks require domain experts to provide declarations using a domain-specific language, influencing the structure and behaviour of the resulting application. These mechanisms address concerns such as user privacy. Although many popular open platforms such as Android are based on declaration-driven frameworks, current implementations provide ad hoc and narrow solutions to concerns raised by their openness to non-certified developers. Most widely used frameworks fail to address serious privacy leaks and provide the user with little insight into application behaviour. To address these shortcomings, we show that declaration-driven frameworks can limit privacy leaks, as well as guide developers, independently from the underlying programming paradigm. To do so, we identify concepts that underlie declaration-driven frameworks and apply them systematically to an object-oriented language (Java) and a dynamic functional language (Racket). The resulting programming framework generators are used to develop a prototype mobile application, illustrating how we mitigate a common class of privacy leaks. Finally, we explore the possible design choices and propose development principles for developing domain-specific language compilers to produce frameworks, applicable across a spectrum of programming paradigms.
- ContainerCloudSim: An environment for modeling and simulation of
containers in cloud data centers
- Authors: Sareh Fotuhi Piraghaj; Amir Vahid Dastjerdi, Rodrigo N. Calheiros, Rajkumar Buyya
Pages: 505 - 521
Abstract: Containers are increasingly gaining popularity and becoming one of the major deployment models in cloud environments. To evaluate the performance of scheduling and allocation policies in containerized cloud data centers, there is a need for evaluation environments that support scalable and repeatable experiments. Simulation techniques provide repeatable and controllable environments, and hence, they serve as a powerful tool for such purpose. This paper introduces ContainerCloudSim, which provides support for modeling and simulation of containerized cloud computing environments. We developed a simulation architecture for containerized clouds and implemented it as an extension of CloudSim. We described a number of use cases to demonstrate how one can plug in and compare their container scheduling and provisioning policies in terms of energy efficiency and SLA compliance. Our system is highly scalable as it supports simulation of a large number of containers, given that there are more containers than virtual machines in a data center.
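To give a flavor of the kind of container scheduling policy one might plug into such a simulator and compare, here is a minimal first-fit placement sketch in plain Python. This is purely illustrative and is not ContainerCloudSim's API (which is Java-based and extends CloudSim); the host representation is a hypothetical `(capacity, used)` CPU pair.

```python
def first_fit(container_cpu, hosts):
    """First-fit container placement: assign the container to the first
    host with enough spare CPU.

    hosts is a mutable list of (cpu_capacity, cpu_used) tuples; on
    success the chosen host's usage is updated and its index returned,
    otherwise None is returned (placement failed).
    """
    for i, (capacity, used) in enumerate(hosts):
        if capacity - used >= container_cpu:
            hosts[i] = (capacity, used + container_cpu)
            return i
    return None
```

A simulator makes it cheap to compare such a baseline against, say, an energy-aware policy that consolidates containers onto fewer hosts, measuring energy use and SLA violations under identical workloads.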
- All-in-one implementation framework for binary heaps
- Authors: Jyrki Katajainen
Pages: 523 - 558
Abstract: Even a rough literature review reveals that there are many alternative ways of implementing a binary heap, the fundamental priority-queue structure loved by us all. Which one of these alternatives is the best in practice? The opinions of crowd-pullers and textbook authors are aligned: use an array. Of course, the correct answer is ‘it depends’. To get from opinions to facts, a framework—a set of class templates—was written that provides a variety of customization options so it could be used to realize a large part of the proposed variants. Also, some of the derived implementations were performance benchmarked. From this work, three conclusions can be drawn: (i) It is difficult to achieve space efficiency and speed at the same time. If n denotes the current number of values in the data structure, ϵ is a small positive real, ϵ
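As a concrete point of reference for the array-based variant that the abstract calls the textbook favourite, a minimal binary min-heap over a plain array looks as follows. This is an illustrative textbook sketch, not part of the paper's class-template framework.

```python
class BinaryMinHeap:
    """Textbook array-backed binary min-heap.

    The children of the element at index i live at 2i+1 and 2i+2,
    so no explicit pointers are needed.
    """

    def __init__(self):
        self._a = []

    def push(self, value):
        # Append at the end, then sift up to restore the heap order.
        a = self._a
        a.append(value)
        i = len(a) - 1
        while i > 0:
            parent = (i - 1) // 2
            if a[parent] <= a[i]:
                break
            a[parent], a[i] = a[i], a[parent]
            i = parent

    def pop(self):
        # Remove the minimum: move the last element to the root,
        # then sift down until the heap order holds again.
        a = self._a
        top = a[0]
        last = a.pop()
        if a:
            a[0] = last
            i = 0
            while True:
                smallest = i
                for child in (2 * i + 1, 2 * i + 2):
                    if child < len(a) and a[child] < a[smallest]:
                        smallest = child
                if smallest == i:
                    break
                a[i], a[smallest] = a[smallest], a[i]
                i = smallest
        return top
```

Both operations run in O(log n) time and the structure uses exactly one array slot per stored value, which is precisely the space/speed trade-off the paper's framework lets one vary and benchmark.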
- Architecting cloud-enabled systems: a systematic survey of challenges and
- Authors: Muhammad Aufeef Chauhan; Muhammad Ali Babar, Boualem Benatallah
Pages: 599 - 644
Abstract: The literature on the challenges of and potential solutions to architecting cloud-based systems is rapidly growing but is scattered. It is important to systematically analyze and synthesize the existing research on architecting cloud-based software systems in order to build a cohesive body of knowledge of the reported challenges and solutions. We have systematically identified and reviewed 133 papers that report architecture-related challenges and solutions for cloud-based software systems. This paper reports the methodological details, findings, and implications of a systematic review that has enabled us to identify 44 unique categories of challenges and associated solutions for architecting cloud-based software systems. We assert that the identified challenges and solutions classified into the categories form a body of knowledge that can be leveraged for designing or evaluating software architectures for cloud-based systems. Our key conclusions are that a large number of primary studies focus on middleware services aimed at achieving scalability, performance, response time, and efficient resource optimization. Architecting cloud-based systems presents unique challenges as the systems to be designed range from pervasive embedded systems and enterprise applications to smart devices and the Internet of Things. We also conclude that there is great potential for research on architecting cloud-based systems in areas related to green computing, energy-efficient systems, mobile cloud computing, and the Internet of Things.