Advances in Software Engineering
Open Access journal
ISSN (Print) 1687-8655 - ISSN (Online) 1687-8663
Published by Hindawi
- Tag-Protector: An Effective and Dynamic Detection of Illegal Memory
Accesses through Compile Time Code Instrumentation
Abstract: Programming languages that permit direct memory access through pointers often give rise to applications with memory-related errors, which may lead to unpredictable failures and security vulnerabilities. This paper presents a lightweight solution for dynamically tackling such illegal memory accesses in C/C++ based applications. We propose a new and effective method of instrumenting an application’s source code at compile time in order to detect illegal spatial and temporal memory accesses. It is based on creating tags to be coupled with each memory allocation and then placing additional tag-checking instructions for each access made to the memory. The proposed solution is evaluated by instrumenting applications from the BugBench benchmark suite and the publicly available run-time intrusion prevention evaluator (RIPE), and it detects all the bugs successfully. The performance and memory overheads are further analyzed by instrumenting and executing real-world applications from various renowned benchmark suites. In addition, the proposed solution is tested to analyze the performance overhead for multithreaded applications in multicore environments. Overall, our technique can detect a wide range of memory bugs and attacks with reduced performance overhead and a higher detection rate compared to similar existing countermeasures tested under the same experimental setup.
PubDate: Sun, 19 Jun 2016 12:25:21 +000
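The abstract's core idea can be sketched in a few lines. This is an illustrative simulation only, not the paper's actual compile-time C/C++ instrumentation: every allocation is coupled with a tag recording its base, size, and liveness, and every access is checked against that tag, catching both spatial (out-of-bounds) and temporal (use-after-free) violations. All class and method names here are invented.

```python
class IllegalAccess(Exception):
    pass

class TaggedHeap:
    """Toy heap where each allocation carries a checking tag."""
    def __init__(self):
        self._tags = {}      # base address -> tag dict
        self._next_addr = 0

    def allocate(self, size):
        base = self._next_addr
        self._next_addr += size
        self._tags[base] = {"size": size, "alive": True}
        return base

    def free(self, base):
        self._tags[base]["alive"] = False    # temporal tag update

    def access(self, base, offset):
        tag = self._tags[base]
        if not tag["alive"]:                 # temporal error
            raise IllegalAccess("use after free")
        if not (0 <= offset < tag["size"]):  # spatial error
            raise IllegalAccess("out-of-bounds access")
        return base + offset                 # access is legal

heap = TaggedHeap()
p = heap.allocate(8)
heap.access(p, 7)          # fine: last valid byte
try:
    heap.access(p, 8)      # one past the end -> spatial violation
except IllegalAccess as e:
    print(e)               # out-of-bounds access
heap.free(p)
try:
    heap.access(p, 0)      # dangling pointer -> temporal violation
except IllegalAccess as e:
    print(e)               # use after free
```

In the paper's setting the equivalent checks are inserted as extra instructions around each memory access during compilation rather than performed by a wrapper object.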
- Locating Minimal Fault Interaction in Combinatorial Testing
Abstract: Combinatorial testing (CT) can significantly reduce testing cost and increase software system quality. By using the test suite generated by CT as input for black-box testing of a system, we are able to detect the interactions that trigger the system’s faults. For a given test case, only a subset of its parameters may be relevant to the defects in the system, and the interaction formed by those parameters is the key factor in triggering the fault. Locating those parameters accurately facilitates the software diagnosis and testing process. This paper proposes a novel algorithm named complete Fault Interaction Location (comFIL) that locates the interactions causing a system’s failures and meanwhile obtains the minimal set of target interactions in the test suite produced by CT. By applying this method, testers can analyze and locate the factors relevant to the system’s defects more precisely, thus making software testing and debugging easier and more efficient. The results of our empirical study indicate that comFIL performs better than known fault location techniques in combinatorial testing because of its improved effectiveness and precision.
PubDate: Sun, 29 May 2016 07:51:42 +000
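The notion of a minimal fault-triggering interaction can be illustrated with a simple greedy reduction. This is not the comFIL algorithm itself (the abstract does not give its details), just a delta-debugging-style sketch: starting from a failing test case, drop every parameter whose removal still reproduces the failure; what remains is a fault-triggering interaction. The oracle and parameter names are made up.

```python
def minimal_fault_interaction(test_case, fails):
    """Greedy 1-minimal reduction of a failing test case (illustrative,
    not the paper's comFIL algorithm)."""
    interaction = dict(test_case)
    changed = True
    while changed:
        changed = False
        for p in list(interaction):
            trial = {k: v for k, v in interaction.items() if k != p}
            if fails(trial):          # failure reproduced without p
                interaction = trial   # so p is irrelevant: drop it
                changed = True
    return interaction

# Hypothetical system under test: fails iff os == "linux" AND browser == "ff"
def oracle(cfg):
    return cfg.get("os") == "linux" and cfg.get("browser") == "ff"

failing = {"os": "linux", "browser": "ff", "db": "mysql", "lang": "en"}
print(minimal_fault_interaction(failing, oracle))
# -> {'os': 'linux', 'browser': 'ff'}
```

A real CT fault locator must additionally handle multiple interacting faults and the cost of extra test executions, which is where comFIL's contribution lies.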
- A Slice-Based Change Impact Analysis for Regression Test Case
Prioritization of Object-Oriented Programs
Abstract: Test case prioritization focuses on finding a suitable order of execution of the test cases in a test suite to meet some performance goals like detecting faults early. It is likely that some test cases execute the program parts that are more prone to errors and will detect more errors if executed early during the testing process. Finding an optimal order of execution for the selected regression test cases saves time and cost of retesting. This paper presents a static approach to prioritizing the test cases by computing the affected component coupling (ACC) of the affected parts of object-oriented programs. We construct a graph named affected slice graph (ASG) to represent these affected program parts. We determine the fault-proneness of the nodes of ASG by computing their respective ACC values. We assign higher priority to those test cases that cover the nodes with higher ACC values. Our analysis with mutation faults shows that the test cases executing the fault-prone program parts have a higher chance to reveal faults earlier than other test cases in the test suite. The result obtained from seven case studies justifies that our approach is feasible and gives acceptable performance in comparison to some existing techniques.
PubDate: Sun, 08 May 2016 09:33:15 +000
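The final prioritization step described in the abstract reduces to a scoring and sorting problem. Assuming an ACC value has already been computed for every node of the affected slice graph, each test case is scored by the ACC values of the nodes it covers and run in descending score order. Node names and values below are invented for illustration.

```python
acc = {"A": 5.0, "B": 2.5, "C": 1.0, "D": 0.5}   # ASG node -> ACC value

coverage = {                 # test case -> nodes of the ASG it covers
    "t1": {"C", "D"},
    "t2": {"A", "B"},
    "t3": {"B", "C"},
}

def prioritize(coverage, acc):
    """Order test cases by the total ACC of the nodes they cover."""
    score = lambda t: sum(acc[n] for n in coverage[t])
    return sorted(coverage, key=score, reverse=True)

print(prioritize(coverage, acc))   # -> ['t2', 't3', 't1']
```

The hard part of the approach, computing the ACC values from the affected slice graph, is of course where the paper's actual contribution lies.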
- Innovation Drivers and Outputs for Software Firms: Literature Review and
Abstract: Software innovation, the ability to produce novel and useful software systems, is an important capability for software development organizations and information system developers alike. However, the software development literature has traditionally focused on automation and efficiency while the innovation literature has given relatively little consideration to the software development context. As a result, there is a gap in our understanding of how software product and process innovation can be managed. Specifically, little attention has been directed toward synthesizing prior learning or providing an integrative perspective on the key concepts and focus of software innovation research. We therefore identify 93 journal articles and conference papers within the domain of software innovation and analyse repeating patterns in this literature using content analysis and causal mapping. We identify drivers and outputs for software innovation and develop an integrated theory-oriented concept map. We then discuss the implications of this map for future research.
PubDate: Sun, 10 Apr 2016 15:20:18 +000
- Classifying Obstructive and Nonobstructive Code Clones of Type I Using
Simplified Classification Scheme: A Case Study
Abstract: Code cloning is a part of many commercial and open source development products. Multiple methods for detecting code clones have been developed, and clone detection is often used in modern industrial quality assurance tools. There is no consensus on whether the detected clones are harmful to the product, and the detected clones are therefore often left unmanaged in the product code base. In this paper we investigate how obstructive Type I code clones (duplicated exact code fragments) are in large software systems from the perspective of post-release product quality. We conduct a case study at Ericsson on three of its large products, which handle mobile data traffic. We show how to use automated analogy-based classification to decrease the classification effort required to determine whether a clone pair should be refactored or left untouched. The automated method classifies 96% of Type I clones (both algorithms and data declarations), leaving the remaining 4% for manual classification. The results show that cloning is common in the studied commercial software, but that only 1% of these clones are potentially obstructive and can jeopardize the quality of the product if left unmanaged.
PubDate: Mon, 21 Dec 2015 08:00:16 +000
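Type I clones, being exact duplicates modulo whitespace, are the easiest clone class to detect. A minimal sketch (not Ericsson's tooling): normalize whitespace, hash every window of k consecutive lines, and report windows whose hashes collide.

```python
def type1_clones(lines, k=3):
    """Report pairs of k-line windows that are identical after
    whitespace normalization (minimal Type I clone detector)."""
    norm = [" ".join(l.split()) for l in lines]
    seen, clones = {}, []
    for i in range(len(norm) - k + 1):
        key = hash(tuple(norm[i:i + k]))
        if key in seen:
            clones.append((seen[key], i))   # (first occurrence, duplicate)
        else:
            seen[key] = i
    return clones

code = [
    "x = load()",
    "y = x + 1",
    "save(y)",
    "log('done')",
    "x = load()",
    "y = x + 1",
    "save(y)",
]
print(type1_clones(code))   # -> [(0, 4)]
```

The paper's contribution is not detection but the downstream question: classifying which of the detected pairs are obstructive enough to be worth refactoring.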
- Supporting Technical Debt Cataloging with TD-Tracker Tool
Abstract: Technical debt (TD) is an emergent area that has stimulated academic concern. Managers must have information about debt in order to balance time-to-market advantages against the issues of TD, and to plan its repayment. Development tasks such as designing, coding, and testing generate different sorts of TD, each with specific information. Moreover, a literature review pointed out a gap in identifying and accurately cataloging technical debt. Tools exist that can identify technical debt, but no described solution supports cataloging all types of debt. This paper presents an approach to create an integrated catalog of technical debts from different software development tasks. The approach allows tabulating and managing TD properties in order to support managers in the decision process, and it also allows managers to track TD. The approach is implemented in the TD-Tracker tool, which can integrate different TD identification tools and import the identified debts. We present integrations between TD-Tracker and two external tools used to identify potential technical debts. As part of the approach, we describe how to map the relationship between TD-Tracker and the external tools, and we show how to manage external information within TD-Tracker.
PubDate: Thu, 17 Sep 2015 06:55:43 +000
- Breaking the Web Barriers of the e-Administration Using an Accessible
Digital Certificate Based on a Cryptographic Token
Abstract: The purpose of developing e-Government is to make public administrations more efficient and transparent and to allow citizens to access information more comfortably and effectively. Such benefits are even more important to people with a physical disability, allowing them to reduce waiting times in procedures and travel. However, e-Government is not in widespread use among this group, as they not only harbor the same fears as other citizens but also must cope with the barriers inherent to their disability. This research proposes a solution to help persons with disabilities access e-Government services. This work, carried out in cooperation with the Spanish Federation of Spinal-Cord Injury Victims and the Severely Disabled, includes the development of a portal specially oriented towards people with disabilities to help them locate and access services offered by Spanish administrations. Use of the portal relies on digital authentication of users based on X.509 certificates, which are embedded in the identity cards of Spanish citizens. However, an analysis of their use reveals that this feature constitutes a significant barrier to accessibility. This paper proposes a more accessible solution using a USB cryptographic token that conceals from users all the complexity entailed in accessing certificate-based applications, while assuring the required security.
PubDate: Mon, 14 Sep 2015 14:11:18 +000
- LTTng CLUST: A System-Wide Unified CPU and GPU Tracing Tool for OpenCL
Abstract: As computation schemes evolve and many new tools become available to programmers to enhance the performance of their applications, many programmers have started to look towards highly parallel platforms such as the Graphical Processing Unit (GPU). Offloading computations that can take advantage of the GPU’s architecture is a technique that has proven fruitful in recent years. This technology enhances the speed and responsiveness of applications. As a side effect, it also reduces the power requirements for those applications, thereby extending the battery life of portable devices and helping computing clusters run more power efficiently. Many performance analysis tools such as LTTng, strace, and SystemTap already allow Central Processing Unit (CPU) tracing and help programmers use CPU resources more efficiently. On the GPU side, tools such as Nvidia’s Nsight, AMD’s CodeXL, and the third-party TAU and VampirTrace allow tracing Application Programming Interface (API) calls and OpenCL kernel execution. These tools are useful but completely separate, and none of them allows a unified CPU-GPU tracing experience. We propose an extension to the existing scalable and highly efficient LTTng tracing platform to allow unified tracing of the GPU along with LTTng’s full CPU tracing capabilities.
PubDate: Wed, 19 Aug 2015 06:29:59 +000
- On Using Fuzzy Linguistic 2-Tuples for the Evaluation of Human Resource
Suitability in Software Development Tasks
Abstract: Efficient allocation of human resources to the development tasks comprising a software project is a key challenge in software project management. To address this critical issue, a systematic human resource evaluation and selection approach can be proven helpful. In this paper, a fuzzy linguistic approach is introduced to evaluate the suitability of candidate human resources (software developers) considering their technical skills (i.e., provided skills) and the technical skills required to perform a software development task (i.e., task-related skills). The proposed approach is based on qualitative evaluations which are derived in the form of fuzzy linguistic 2-tuples from a group of decision makers (project managers). The approach applies a group/similarity degree-based aggregation technique to obtain an objective aggregation of the ratings of task-related skills and provided skills. To further analyse the suitability of each candidate developer, possible skill relationships are considered, which reflect the contribution of provided skills to the capability of learning other skills. The applicability of the approach is demonstrated and discussed through an exemplar case study scenario.
PubDate: Wed, 24 Jun 2015 09:14:46 +000
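The 2-tuple representation the abstract relies on is standard in the fuzzy linguistic literature: an aggregated rating beta in [0, g] over labels s_0..s_g becomes the pair (s_i, alpha) with i = round(beta) and symbolic translation alpha = beta - i in [-0.5, 0.5). A minimal sketch with an invented label set and made-up ratings:

```python
LABELS = ["none", "low", "medium", "high", "perfect"]   # s_0 .. s_4

def to_two_tuple(beta):
    """Convert an aggregated value beta into its linguistic 2-tuple
    (label, symbolic translation)."""
    i = int(round(beta))
    return LABELS[i], round(beta - i, 2)

# Three project managers rate a developer's skill on s_0..s_4;
# aggregate by arithmetic mean, then convert back to a 2-tuple.
ratings = [3, 2, 3]                  # indices of the chosen labels
beta = sum(ratings) / len(ratings)   # 2.67
print(to_two_tuple(beta))            # -> ('high', -0.33)
```

The 2-tuple keeps the information lost by plain rounding: the result reads as "high, but a third of a step below it", which is what makes group aggregation of qualitative ratings tractable.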
- Mutation Analysis Approach to Develop Reliable Object-Oriented Software
Abstract: Modern programs are generally large and complex, and it is essential that they be highly reliable. To support the development of highly reliable software, the Java programming language provides a rich set of exceptions and exception handling mechanisms, which are intended to help developers build robust programs. Given a program with exception handling constructs, effective testing must detect whether all possible exceptions are raised and caught. However, complex exception handling constructs make it tedious to trace which exceptions are handled and where, and which exceptions are passed on. In this paper, we address this problem and propose a mutation analysis approach to developing reliable object-oriented programs. We apply a number of mutation operators to create a large set of mutant programs with different types of faults. We then generate test cases and test data to uncover exception-related faults. The resulting test suite is applied to the mutant programs, measuring the mutation score and hence verifying the test suite's effectiveness. We have evaluated our approach on a number of case studies to substantiate the efficacy of the proposed mutation analysis technique.
PubDate: Thu, 25 Dec 2014 07:16:00 +000
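The mechanics of mutation analysis can be shown in miniature. This toy uses generic operator-replacement mutants rather than the paper's exception-related operators: each mutant swaps one operator in the source, the test suite runs against each mutant, and the mutation score is the fraction of mutants the suite kills. The function and tests are invented.

```python
SOURCE = "def absdiff(a, b):\n    return a - b if a > b else b - a\n"

MUTATIONS = [("-", "+"), (">", "<")]       # operator-replacement mutants

def make_mutant(source, old, new):
    """Compile a mutant created by replacing the first occurrence
    of one operator with another."""
    ns = {}
    exec(source.replace(old, new, 1), ns)
    return ns["absdiff"]

def run_suite(fn):
    """Return True if fn passes every test in the suite."""
    tests = [((5, 3), 2), ((3, 5), 2), ((4, 4), 0)]
    return all(fn(*args) == expected for args, expected in tests)

killed = sum(1 for old, new in MUTATIONS
             if not run_suite(make_mutant(SOURCE, old, new)))
print(f"mutation score: {killed}/{len(MUTATIONS)}")   # -> 2/2
```

A surviving mutant would indicate a gap in the suite; in the paper's setting, the mutants specifically exercise exception-raising and exception-handling paths.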
- SPOT: A DSL for Extending Fortran Programs with Metaprogramming
Abstract: Metaprogramming has shown much promise for improving the quality of software by offering programming language techniques to address issues of modularity, reusability, maintainability, and extensibility. Thus far, the power of metaprogramming has not been explored deeply in the area of high performance computing (HPC). There is a vast body of legacy code written in Fortran running throughout the HPC community. In order to facilitate software maintenance and evolution in HPC systems, we introduce a DSL that can be used to perform source-to-source translation of Fortran programs by providing a higher level of abstraction for specifying program transformations. The underlying transformations are actually carried out through a metaobject protocol (MOP) and a code generator is responsible for translating a SPOT program to the corresponding MOP code. The design focus of the framework is to automate program transformations through techniques of code generation, so that developers only need to specify desired transformations while being oblivious to the details about how the transformations are performed. The paper provides a general motivation for the approach and explains its design and implementation. In addition, this paper presents case studies that illustrate the potential of our approach to improve code modularity, maintainability, and productivity.
PubDate: Wed, 17 Dec 2014 07:57:53 +000
- A Cost Effective and Preventive Approach to Avoid Integration Faults
Caused by Mistakes in Distribution of Software Components
Abstract: In distributed software and hardware environments, ensuring proper operation of software is a challenge. The complexity of distributed systems increases the number of integration faults resulting from configuration errors in the distribution of components. Therefore, the main contribution of this work is a change in perspective, where integration faults (in the context of mistakes in distribution of components) are prevented rather than being dealt with through postmortem approaches. To this purpose, this paper proposes a preventive, low cost, minimally intrusive, and reusable approach. The generation of hash tags and a built-in version control for the application are at the core of the solution. Prior to deployment, the tag generator creates a list of components along with their respective tags, which will be a part of the deployment package. During production and execution of the application, on demand, the version control compares the tag list with the components of the user station and shows the discrepancies. The approach was applied to a complex application of a large company and was proven to be successful by avoiding integration faults and reducing the diagnosis time by 65%.
PubDate: Mon, 15 Dec 2014 10:51:43 +000
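The two halves of the approach, generating a tag list before deployment and comparing it against the user station at run time, amount to hashing each component and diffing the results. A minimal sketch with invented component names and contents:

```python
import hashlib

def tag(data: bytes) -> str:
    """Content tag for one component (SHA-256 hex digest)."""
    return hashlib.sha256(data).hexdigest()

def generate_tag_list(components: dict) -> dict:
    """Build the tag list shipped in the deployment package.
    `components` maps file name -> file contents (bytes)."""
    return {name: tag(data) for name, data in components.items()}

def verify(deployed: dict, tag_list: dict) -> list:
    """Return components whose tag is missing or does not match."""
    return sorted(name for name in tag_list
                  if tag(deployed.get(name, b"")) != tag_list[name])

release = {"core.dll": b"v2 code", "ui.dll": b"v2 ui"}
tag_list = generate_tag_list(release)

station = {"core.dll": b"v1 code", "ui.dll": b"v2 ui"}  # stale component
print(verify(station, tag_list))   # -> ['core.dll']
```

Running the check on demand before the application starts is what turns an integration fault that would surface in production into a discrepancy report at deployment time.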
- MetricsCloud: Scaling-Up Metrics Dissemination in Large Organizations
Abstract: The evolving software development practices in modern software development companies often bring in more empowerment to software development teams. The empowered teams change the way in which software products and projects are measured and how the measures are communicated. In this paper we address the problem of dissemination of measurement information by designing a measurement infrastructure in a cloud environment. The described cloud system (MetricsCloud) utilizes file-sharing as the underlying mechanism to disseminate the measurement information at the company. Based on the types of measurement systems identified in this paper MetricsCloud realizes a set of synchronization strategies to fulfill the needs specific for these types. The results from migrating traditional, web-based, folder-sharing distribution of measurement systems to the cloud show that this type of measurement dissemination is flexible, reliable, and easy to use.
PubDate: Wed, 10 Dec 2014 06:31:46 +000
- The Impact of the PSP on Software Quality: Eliminating the Learning Effect
Threat through a Controlled Experiment
Abstract: Data from the Personal Software Process (PSP) courses indicate that the PSP improves the quality of the developed programs. However, since the programs (exercises of the course) are in the same application domain, the improvement could be due to programming repetition. In this research we try to eliminate this threat to validity in order to confirm that the quality improvement is due to the PSP. In a previous study we designed and performed a controlled experiment with software engineering undergraduate students at the Universidad de la República. The students performed the same exercises of the PSP course but without applying the PSP techniques. Here we present a replication of this experiment. The results indicate that the PSP and not programming repetition is the most plausible cause of the important software quality improvements.
PubDate: Tue, 30 Sep 2014 10:54:35 +000
- An Improved Approach for Reduction of Defect Density Using Optimal Module
Abstract: Software developers today face challenges in minimizing the number of defects introduced during software development. Using the defect density parameter, developers can identify opportunities for improvement in the product. Since the total number of defects depends on module size, the optimal module size needs to be calculated in order to minimize defect density. In this paper, an improved model is formulated that captures the relationship between defect density and variable module size. This relationship can be used to optimize overall defect density through an effective distribution of module sizes. Three available data sets related to this problem have been examined with the proposed model, taking distinct values of the variables and placing some constraints on the parameters. A curve fitting method is used to obtain the module size with minimum defect density, and goodness-of-fit measures are computed to validate the proposed model on the data sets. The defect density can thus be optimized by an effective distribution of module sizes: larger modules can be broken into smaller modules, and smaller modules can be merged, to minimize the overall defect density.
PubDate: Sun, 24 Aug 2014 05:46:54 +000
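The abstract does not give the paper's actual model, but the idea of an optimal module size can be illustrated with one commonly assumed form: defect density D(s) = a/s + b*s, where a/s captures a fixed per-module defect overhead spread over the module's size and b*s captures complexity that grows with size. Setting dD/ds = -a/s^2 + b = 0 gives the minimum at s* = sqrt(a/b). Parameter values below are made up.

```python
import math

def optimal_module_size(a, b):
    """Minimizer of the hypothetical density model D(s) = a/s + b*s."""
    return math.sqrt(a / b)

a, b = 4.0, 0.01                     # fitted parameters (illustrative)
s_star = optimal_module_size(a, b)
print(round(s_star), "LOC")          # -> 20 LOC
density = a / s_star + b * s_star    # density at the optimum
print(round(density, 2))             # -> 0.4
```

In the paper the parameters a and b would come from curve fitting against the historical data sets, after which modules far from s* are candidates for splitting or merging.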
- Model-Driven Development of Automation and Control Applications: Modeling
and Simulation of Control Sequences
Abstract: The scope and responsibilities of control applications are increasing due to, for example, the emergence of industrial internet. To meet the challenge, model-driven development techniques have been in active research in the application domain. Simulations that have been traditionally used in the domain, however, have not yet been sufficiently integrated to model-driven control application development. In this paper, a model-driven development process that includes support for design-time simulations is complemented with support for simulating sequential control functions. The approach is implemented with open source tools and demonstrated by creating and simulating a control system model in closed-loop with a large and complex model of a paper industry process.
PubDate: Thu, 07 Aug 2014 09:07:55 +000
- Prediction Model for Object Oriented Software Development Effort
Estimation Using One Hidden Layer Feed Forward Neural Network with Genetic
Abstract: The budget computation for software development is affected by the prediction of software development effort and schedule. Software development effort and schedule can be predicted precisely on the basis of past software project data sets. In this paper, a model for object-oriented software development effort estimation using a one hidden layer feed forward neural network (OHFNN) has been developed. The model has been further optimized with the help of a genetic algorithm, taking the weight vector obtained from the OHFNN as the initial population for the genetic algorithm. Convergence has been obtained by minimizing the sum of squared errors over each input vector, and the optimal weight vector has been determined to predict the software development effort. The model has been empirically validated on the PROMISE software engineering repository dataset. The model's performance is more accurate than that of the well-established constructive cost model (COCOMO).
PubDate: Tue, 03 Jun 2014 11:15:29 +000
- Recovering Software Design from Interviews Using the NFR Approach: An
Abstract: In the US Air Force there exist several systems for which design documentation does not exist. Chief reasons for this lack of documentation include the software having been developed several decades ago, the natural evolution of software, and the software existing mostly in its binary versions. However, the systems are still in use, and the US Air Force would like to know their actual designs so that they may be reengineered for future requirements. Any knowledge of such systems lies mostly with their users and managers. A project was commissioned to recover designs for such systems based on knowledge obtained by interviewing stakeholders. In this paper we describe our application of the NFR Approach, where NFR stands for Nonfunctional Requirements, to recover the software design of a middleware system used by the Air Force called the Phoenix system. In our project we interviewed stakeholders of the Phoenix system, applied the NFR Approach to recover design artifacts, and validated the artifacts with the design engineers of the Phoenix system. Our study indicated a high correlation between the recovered design and the actual design of the Phoenix system.
PubDate: Thu, 17 Apr 2014 11:37:23 +000
- Tuning of Cost Drivers by Significance Occurrences and Their Calibration
with Novel Software Effort Estimation Method
Abstract: Estimation is an important part of software engineering projects, and the ability to produce accurate effort estimates has an impact on key economic processes, including budgeting, bid proposals, and deciding the execution boundaries of the project. The work in this paper explores the interrelationship among different dimensions of software projects, namely, project size, effort, and effort-influencing factors. The study aims at providing better effort estimates on the parameters of a modified COCOMO, along with the detailed use of a binary genetic algorithm as a novel optimization algorithm. The significance of the 15 cost drivers is shown by their impact on the MMRE of effort for the original 63 NASA projects. The proposed method produces tuned values of the cost drivers, which are effective enough to improve the productivity of the projects. Prediction at different levels of MRE for each project reflects the percentage of projects estimated with the desired accuracy. Furthermore, the model is validated on two different datasets, showing better estimation accuracy compared to COCOMO 81 on the NASA 63 and NASA 93 datasets.
PubDate: Tue, 31 Dec 2013 13:45:57 +000
- Thematic Review and Analysis of Grounded Theory Application in Software
Abstract: We present metacodes, a new concept to guide grounded theory (GT) research in software engineering. Metacodes are high level codes that can help software engineering researchers guide the data coding process. Metacodes are constructed in the course of analyzing software engineering papers that use grounded theory as a research methodology. We performed a high level analysis to discover common themes in such papers and discovered that GT had been applied primarily in three software engineering disciplines: agile development processes, geographically distributed software development, and requirements engineering. For each category, we collected and analyzed all grounded theory codes and created, following a GT analysis process, what we call metacodes that can be used to drive further theory building. This paper surveys the use of grounded theory in software engineering and presents an overview of successes and challenges of applying this research methodology.
PubDate: Tue, 22 Oct 2013 11:33:15 +000
- A Granular Hierarchical Multiview Metrics Suite for Statecharts Quality
Abstract: This paper presents a bottom-up approach for a multiview measurement of statechart size, topological properties, and internal structural complexity, for understandability prediction and assurance purposes. It tackles the problem at different conceptual depths or, equivalently, at several abstraction levels. The main idea is to study and evaluate a statechart at different levels of granularity corresponding to different conceptual depth levels, or levels of detail. The highest level corresponds to a flat process view diagram (depth = 0); the adequate upper depth limit, which corresponds to the all-states view, is determined by the modelers according to the inherent complexity of the problem under study and the level of detail required for the situation at hand. For measurement, we proceed bottom-up, starting with the all-states view diagram, identifying and measuring the constituent parts of its deepest composite states, and then gradually collapsing them to obtain the next intermediate view (decrementing the depth) while aggregating measures incrementally, until the flat process view diagram is reached. To this end, we first identify, define, and derive a relevant metrics suite useful for predicting the level of understandability and other quality aspects of a statechart, and then we propose a fuzzy rule-based system prototype for understandability prediction, assurance, and validation purposes.
PubDate: Sun, 22 Sep 2013 10:41:48 +000
- A New Software Development Methodology for Clinical Trial Systems
Abstract: Clinical trials are crucial to modern healthcare industries, and information technologies have been employed to improve the quality of data collected in trials and to reduce the overall cost of data processing. When developing software for clinical trials, one needs to take into account the patterns shared by all clinical trial software. Such patterns exist because of the unique properties of clinical trials and the rigorous regulations imposed by the government for reasons of subject safety. Unfortunately, none of the existing software development methodologies was built specifically upon these properties and patterns, and none therefore works sufficiently well. In this paper, the process of clinical trials is reviewed, and the unique properties of clinical trial system development are explained thoroughly. Based on these properties, a new software development methodology is then proposed specifically for developing electronic clinical trial systems. A case study shows that, by adopting the proposed methodology, high-quality software products can be delivered on schedule and within budget. With such high-quality software, data collection, management, and analysis can be more efficient, accurate, and inexpensive, which in turn will improve the overall quality of clinical trials.
PubDate: Thu, 21 Mar 2013 17:38:01 +000
- Gesture Recognition Using Neural Networks Based on HW/SW Cosimulation
Abstract: Hardware/software (HW/SW) cosimulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW cosimulation platform is used to ease debugging and verification in very large-scale integration (VLSI) design. To accelerate the computation of a gesture recognition technique, an HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are (1) a novel design of the memory controller in the Verilog Hardware Description Language (Verilog HDL) to reduce memory consumption and load on the processor; (2) hardwiring the testing part of the neural network algorithm to improve speed and performance; and (3) a design that takes only a few milliseconds to recognize a hand gesture, making it computationally more efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach, and several experiments were carried out on four databases of gestures (alphabet signs A to Z).
PubDate: Sun, 24 Feb 2013 11:25:10 +000
- Accountability in Enterprise Mashup Services
Abstract: As a result of the proliferation of Web 2.0 style web sites, the practice of mashup services has become increasingly popular in the web development community. While mashup services bring flexibility and speed in delivering valuable new services to consumers, the issue of accountability associated with the mashup practice remains largely ignored by the industry. Furthermore, realizing the great benefits of mashup services, industry leaders are eagerly pushing these solutions into the enterprise arena. Although enterprise mashup services hold great promise in delivering a flexible SOA solution in a business context, the lack of accountability in current mashup solutions may render them ineffective in the enterprise environment. This paper defines accountability for mashup services, analyses the underlying issues in practice, and finally proposes a framework and ontology to model accountability. This model may then be used to develop effective accountability solutions for mashup environments. Compared to the traditional method of using QoS or SLA monitoring to address accountability requirements, our approach addresses more fundamental aspects of accountability specification to facilitate machine interpretability, thereby enabling automation in monitoring.
PubDate: Mon, 21 Jan 2013 09:38:55 +000
- Combining Slicing and Constraint Solving for Better Debugging: The CONBAS
Abstract: Although slices provide a good basis for analyzing programs during debugging, they lack the capability to provide precise information regarding the most likely root causes of faults. Hence, a lot of work is left to the programmer during fault localization. In this paper, we present an approach that combines an advanced dynamic slicing method with constraint solving in order to reduce the number of delivered fault candidates. The approach is called Constraints Based Slicing (CONBAS). The idea behind CONBAS is to convert an execution trace of a failing test case into its constraint representation and to check whether it is possible to find values for all variables in the execution trace so that there is no contradiction with the test case. For doing so, we make use of the correctness and incorrectness assumptions behind a diagnosis and of the given failing test case. Besides the theoretical foundations and the algorithm, we present empirical results and discuss future research. The obtained empirical results indicate an improvement of about 28% for the single-fault case and 50% for the double-fault case compared to dynamic slicing approaches.
PubDate: Mon, 31 Dec 2012 16:40:08 +000
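The core CONBAS idea described above, turning a failing run into constraints and blaming the statement whose removal restores consistency with the test case, can be sketched in a few lines. The trace, the variable domains, and the brute-force solver below are invented for illustration and are not the authors' implementation:

```python
# Minimal sketch of constraint-based fault localization in the spirit of
# CONBAS (hypothetical trace; a real tool uses a constraint solver, not
# brute force). Each executed statement becomes a constraint; a statement
# is a fault candidate if assuming it incorrect (dropping its constraint)
# makes the trace consistent with the failing test's expected output.

# Trace of a tiny failing run: x = 2; y = 3; z = x * y  (bug: should be x + y).
# Test case: expected z == 5, observed z == 6.
trace = [
    ("s1", lambda env: env["x"] == 2),                    # x = 2
    ("s2", lambda env: env["y"] == 3),                    # y = 3
    ("s3", lambda env: env["z"] == env["x"] * env["y"]),  # z = x * y
]

def consistent(assumed_incorrect, expected_z):
    """Is there a variable assignment satisfying every statement assumed
    correct plus the test oracle (z == expected_z)? Brute force over a
    small domain for illustration."""
    domain = range(10)
    for x in domain:
        for y in domain:
            env = {"x": x, "y": y, "z": expected_z}
            if all(c(env) for sid, c in trace if sid != assumed_incorrect):
                return True
    return False

# A statement survives as a fault candidate only if blaming it resolves
# the contradiction with the expected output.
candidates = [sid for sid, _ in trace if consistent(sid, expected_z=5)]
print(candidates)
```

Only the buggy multiplication remains a candidate: dropping either input assignment still leaves `z == x * y` contradicting the expected output, which is how the constraint view prunes the slice.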
- Applying a Goal Programming Model to Support the Selection of Artifacts in
a Testing Process
Abstract: This paper proposes a goal programming model for the selection of artifacts to be developed during a testing process, so that the set of selected artifacts is better suited to the reality of micro and small enterprises. The model is based on IEEE Standard 829, which establishes a set of artifacts that must be generated throughout the test activities. Several factors can influence the definition of this set of artifacts. Therefore, to take such factors into account, we developed a multicriteria model that helps determine the priority of artifacts according to the reality of micro and small enterprises.
PubDate: Sun, 30 Dec 2012 08:22:52 +000
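The essence of a goal programming formulation for artifact selection can be sketched as follows. The artifact names are inspired by IEEE Std 829, but the costs, values, goals, and weights are invented for illustration and are not taken from the paper:

```python
from itertools import combinations

# Hypothetical artifacts (cost in hours, QA value); real models would be
# elicited from the enterprise and solved with an LP/GP solver rather
# than enumerated.
artifacts = {
    "test plan":        (10, 8),
    "test design spec": (6, 5),
    "test case spec":   (8, 7),
    "test log":         (3, 2),
    "incident report":  (4, 4),
}

COST_GOAL = 20    # a micro-enterprise effort budget, in hours
VALUE_GOAL = 15   # desired total QA value
W_COST, W_VALUE = 1.0, 2.0  # penalize missing value more than overspending

def deviation(selection):
    """Classic goal-programming objective: weighted sum of the undesired
    deviations, i.e. cost above the budget plus value below the target."""
    cost = sum(artifacts[a][0] for a in selection)
    value = sum(artifacts[a][1] for a in selection)
    return W_COST * max(0, cost - COST_GOAL) + W_VALUE * max(0, VALUE_GOAL - value)

names = list(artifacts)
best = min(
    (set(sel) for r in range(len(names) + 1) for sel in combinations(names, r)),
    key=deviation,
)
print(sorted(best), deviation(best))
```

The point of the formulation is that goals are soft: a small enterprise that cannot meet both targets still gets the selection that deviates least, rather than an infeasible model.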
- An SOA-Based Model for the Integrated Provisioning of Cloud and Grid
Resources
Abstract: In recent years, the availability and usage models of networked computing resources within reach of e-Science have been changing rapidly, with many disparate paradigms coexisting: high-performance computing, grid, and, more recently, cloud. Unfortunately, none of these paradigms is recognized as the ultimate solution, and a convergence of them all should be pursued. At the same time, recent works have proposed a number of models and tools to address the growing needs and expectations in the field of e-Science. In particular, they have shown the advantages and the feasibility of modeling e-Science environments and infrastructures according to the service-oriented architecture. In this paper, we suggest a model to promote the convergence and integration of the different computing paradigms and infrastructures for the dynamic on-demand provisioning of resources from multiple providers as a cohesive aggregate, leveraging the service-oriented architecture. In addition, we propose a design aimed at supporting a flexible, modular, workflow-based computing model for e-Science. The model is complemented by a working prototype implementation together with a case study in the application domain of bioinformatics, which is used to validate the presented approach and to carry out some performance and scalability measurements.
PubDate: Tue, 20 Nov 2012 17:55:19 +000
- Towards Self-Adaptive KPN Applications on NoC-Based MPSoCs
Abstract: Self-adaptivity is the ability of a system to adapt itself dynamically to internal and external changes. Such a capability helps systems meet their performance and quality goals while judiciously using the available resources. In this paper, we propose a framework to implement application-level self-adaptation capabilities in KPN applications running on NoC-based MPSoCs. The monitor-controller-adapter mechanism is used at the application level. The monitor measures various parameters to check whether the system meets the assigned goals. The controller makes decisions to steer the system towards the goal, and the adapters apply them. The proposed framework requires minimal modifications to the application code and offers ease of integration. It incorporates a generic adaptation controller based on fuzzy logic. We present an MJPEG encoder as a case study to demonstrate the effectiveness of the approach. Our results show that even if the parameters of the fuzzy controller are not tuned optimally, adaptation convergence is achieved within reasonable time and error limits. Moreover, the steady-state overhead incurred by the framework is 4% for average frame-rate, 3.5% for average bit-rate, and 0.5% for the additional control data introduced in the network.
PubDate: Mon, 19 Nov 2012 17:18:09 +000
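A fuzzy adaptation controller of the kind described above can be sketched with triangular membership functions and weighted-average defuzzification. The rule base, membership shapes, and quality knob below are invented for illustration and are not the paper's controller:

```python
# Minimal sketch of a monitor-controller-adapter loop's fuzzy controller:
# the monitor reports the frame-rate error, the controller outputs a
# relative change to a quality knob, and the adapter would apply it.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def control(error):
    """error = target_fps - measured_fps. Returns a quality delta:
    negative lowers quality to regain frame-rate, positive raises it."""
    # Rule base: (membership of the error, consequent quality delta).
    rules = [
        (tri(error, -10.0, -5.0, 0.0), +0.2),  # running too fast: raise quality
        (tri(error, -2.0, 0.0, 2.0), 0.0),     # on target: hold
        (tri(error, 0.0, 5.0, 10.0), -0.2),    # running too slow: lower quality
    ]
    # Weighted-average defuzzification over the fired rules.
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0

print(control(5.0))  # frame-rate well below target
print(control(0.0))  # frame-rate on target
```

The abstract's observation that convergence tolerates imperfectly tuned parameters is typical of such controllers: the overlapping membership functions yield a smooth control surface, so moderately wrong breakpoints still push the system in the right direction.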
- Assessing the Open Source Development Processes Using OMM
Abstract: The assessment of development practices in Free Libre Open Source Software (FLOSS) projects can contribute to the improvement of the development process by identifying poor practices and providing a list of necessary practices. Available assessment methods (e.g., Capability Maturity Model Integration (CMMI)) do not sufficiently address FLOSS-specific aspects (e.g., geographically distributed development, importance of the contributions, reputation of the project, etc.). We present a FLOSS-focused, CMMI-like assessment/improvement model: the QualiPSo Open Source Maturity Model (OMM). OMM focuses on the development process, which distinguishes it from existing assessment models that focus on the product. We have assessed six FLOSS projects using OMM. Three projects were started and led by a software company, and three are developed by three different FLOSS communities. We identified poorly addressed development activities, such as the number of commit/bug reports, external contributions, and risk management. The results showed that FLOSS projects led by companies adopt standard project management approaches, such as product planning, design definition, and testing, that are less often addressed by community-led FLOSS projects. OMM is valuable both for the FLOSS community, by identifying critical development activities that need to be improved, and for potential users, who can better decide which product to adopt.
PubDate: Thu, 04 Oct 2012 12:47:09 +000
- A Multi-Layered Control Approach for Self-Adaptation in Automotive
Embedded Systems
Abstract: We present an approach for self-adaptation in automotive embedded systems using a hierarchical, multi-layered control approach. We model automotive systems as a set of constraints and define a hierarchy of control loops based on different criteria. Adaptations are first performed locally on a lower layer of the architecture. If this fails due to the restricted scope of the control cycle, the next higher layer is in charge of finding a suitable adaptation. We compare different options for splitting responsibility in multi-layered control in a self-healing scenario, with a setup adopted from automotive in-vehicle networks. We show that a multi-layer control approach has clear performance benefits over central control, even though all layers work on the same set of constraints. Furthermore, we show that a responsibility split with respect to network topology is preferable to a functional split.
PubDate: Thu, 04 Oct 2012 11:31:16 +000
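The escalation scheme described above, where a lower layer adapts within its restricted scope and hands the problem upward on failure, can be sketched as follows. The layer names, ECUs, and capacity model are hypothetical and stand in for the paper's constraint-based setup:

```python
# Sketch of multi-layered self-healing: each layer controls a scope of
# hosts; if the lower layer cannot remap a failed component within its
# scope, responsibility escalates to the next higher layer.

class Layer:
    def __init__(self, name, scope, parent=None):
        self.name, self.scope, self.parent = name, scope, parent

    def adapt(self, failed_component, spare_capacity):
        # Try to remap the failed component inside this layer's scope.
        for host in self.scope:
            if spare_capacity.get(host, 0) > 0:
                spare_capacity[host] -= 1
                return f"{self.name}: remapped {failed_component} to {host}"
        # Restricted scope exhausted: escalate to the next higher layer.
        if self.parent is not None:
            return self.parent.adapt(failed_component, spare_capacity)
        raise RuntimeError(f"no layer could heal {failed_component}")

# A two-layer hierarchy: a local cluster of ECUs below a vehicle-wide layer.
vehicle = Layer("vehicle", scope=["ecu3", "ecu4"])
local = Layer("local", scope=["ecu1", "ecu2"], parent=vehicle)

# The local ECUs have no spare capacity, so the adaptation escalates.
capacity = {"ecu1": 0, "ecu2": 0, "ecu3": 1}
msg = local.adapt("brake-monitor", capacity)
print(msg)
```

The performance benefit reported in the abstract comes from the common case being handled entirely in the lower layer; escalation, as in this run, is the fallback path.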