Abstract: We propose a new Statistical Model Checking (SMC) method to identify bugs in variability-intensive systems (VIS). The state space of such systems is exponential in the number of variants, which makes the verification problem harder than for classical systems. To reduce verification time, we propose to combine SMC with featured transition systems (FTS)—a model that jointly represents the state spaces of all variants. Our new methods allow the sampling of executions from one or more (potentially all) variants. We investigate their utility in two complementary use cases. The first case considers the problem of finding all variants that violate a given property expressed in Linear-Time Logic (LTL) within a given simulation budget. To achieve this, we perform random walks in the featured transition system seeking accepting lassos. We show that our method allows us to find bugs much faster (up to 16 times according to our experiments) than exhaustive methods. As with any simulation-based approach, however, there is a risk of Type-1 errors. We provide a lower bound and an upper bound for the number of simulations to perform to achieve the desired level of confidence. Our empirical study involving 59 properties over three case studies reveals that our method manages to discover all variants violating 41 of the properties. This indicates that SMC can act as a coarse-grained analysis method to quickly identify the set of buggy variants. The second case complements the first one. If the coarse-grained analysis reveals that no variant can guarantee to satisfy an intended property in all of its executions, one should identify the variant that minimizes the probability of violating this property. Thus, we propose a fine-grained SMC method that quickly identifies promising variants and accurately estimates their violation probability. We evaluate different selection strategies and reveal that a genetic algorithm combined with elitist selection yields the best results. PubDate: 2021-12-15 DOI: 10.1007/s00165-021-00563-2
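The random-walk search for accepting lassos can be pictured with a small, self-contained sketch. The toy transition system, accepting set, walk cut-off, and budget below are invented for illustration and are not taken from the paper:

```python
import random

# Illustrative only: random walks over a toy transition system looking for
# accepting lassos, i.e. walks that close a cycle through an accepting state,
# within a fixed simulation budget.
TRANSITIONS = {
    "s0": ["s1", "s2"],
    "s1": ["s0", "s3"],
    "s2": ["s2"],
    "s3": ["s1"],
}
ACCEPTING = {"s2"}          # states whose infinite revisits violate the property
MAX_STEPS = 50              # cut-off per walk
BUDGET = 1000               # number of simulations

def random_walk_finds_lasso(start="s0"):
    path, seen = [start], {start: 0}
    for _ in range(MAX_STEPS):
        nxt = random.choice(TRANSITIONS[path[-1]])
        if nxt in seen:                      # walk closes a cycle -> lasso
            loop = path[seen[nxt]:]          # states on the cycle
            return any(s in ACCEPTING for s in loop)
        seen[nxt] = len(path)
        path.append(nxt)
    return False

hits = sum(random_walk_finds_lasso() for _ in range(BUDGET))
print(f"accepting lassos found in {hits}/{BUDGET} walks")
```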
Abstract: Model management is a central activity in Software Engineering. The most challenging aspect of model management is to keep inter-related models consistent with each other while they evolve. As a consequence, there is a lot of scientific activity in this area, which has produced an extensive body of knowledge, methods, results and tools. The majority of these approaches, however, are limited to binary inter-model relations, i.e. the synchronisation of exactly two models. Yet, not every multi-ary relation can be factored into a family of binary relations. In this paper, we propose and investigate a novel comprehensive system construction, which is able to represent multi-ary relations among multiple models in an integrated manner and thus serves as a formal foundation for artefacts used in consistency management activities involving multiple models. The construction is based on the definition of partial commonalities among a set of models, expressed in the same language that is used to denote the (local) models. The main theoretical results of this paper are proofs of the facts that comprehensive systems are an admissible environment for (i) applying formal means of consistency verification (diagrammatic predicate framework) and (ii) performing algebraic graph transformation (weak adhesive HLR category), and (iii) that they generalise the underlying setting of graph diagrams and triple graph grammars. PubDate: 2021-12-01 DOI: 10.1007/s00165-021-00555-2
Abstract: We propose \(\varepsilon^\downarrow(\mathcal{\vec{D}})\)-logic as a formal foundation for the specification and development of event-based systems with data states. The framework is presented as an institution in the sense of Goguen and Burstall, and the logic itself is parametrised by an underlying institution \(\mathcal{\vec{D}}\) whose structures are used to model data states. \(\varepsilon^\downarrow(\mathcal{\vec{D}})\)-logic is intended to cover a broad range of abstraction levels, from abstract requirements specifications up to constructive specifications. It uses modal diamond and box operators over complex actions adopted from dynamic logic. Atomic actions are pairs \((e, \psi)\) where e is an event and \(\psi\) is a state transition predicate capturing the allowed reactions to the event. To write concrete specifications of recursive process structures, we integrate (control) state variables and binders from hybrid logic. The semantic interpretation relies on event/data transition systems. For the presentation of constructive specifications we propose operational event/data specifications allowing for familiar, diagrammatic representations by state transition graphs. We show that \(\varepsilon^\downarrow(\mathcal{\vec{D}})\)-logic is powerful enough to characterise the semantics of an operational specification by a single \(\varepsilon^\downarrow(\mathcal{\vec{D}})\)-sentence. Thus the whole (formal) development process for event/data-based systems relies on \(\varepsilon^\downarrow(\mathcal{\vec{D}})\)-logic and its semantics as a common basis. It is supported by a variety of implementation constructors which can express, among others, event refinement and parallel composition. Due to the genericity of the approach, it is also possible to change the data state institution during system development when needed. All steps of our formal treatment are illustrated by a running example. PubDate: 2021-12-01 DOI: 10.1007/s00165-021-00550-7
Abstract: Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of state-based operational models such as Markov decision processes. Description logics (DLs) provide a well-suited formalism to describe and reason about knowledge and are used as the basis for the web ontology language (OWL). We investigate how such knowledge described by DLs can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose ontologized programs as a formalism that links ontologies to behaviors specified by probabilistic guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Through DL reasoning, inconsistent states in the modeled system can be detected. We present three ways to resolve these inconsistencies, leading to different Markov decision process semantics. We analyze the computational complexity of checking whether an ontologized program is consistent under these semantics. Further, we present and implement a technique for the quantitative analysis of ontologized programs relying on standard DL reasoning and PMC tools. This way, we enable the application of PMC techniques to analyze knowledge-intensive systems. We evaluate our approach and implementation on a multi-server system case study, where different DL ontologies are used to provide specifications of different server platforms and situations the system is executed in. PubDate: 2021-12-01 DOI: 10.1007/s00165-021-00549-0
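The idea of detecting and resolving inconsistent states can be pictured with a small sketch. The state variables, guarded commands, and the toy consistency predicate standing in for DL reasoning are all invented here and are not the paper's formalism; disabling commands that reach inconsistent targets is just one plausible resolution, offered for illustration:

```python
from itertools import product

# Toy stand-in for DL reasoning: a server cannot be both "down" and "serving".
def consistent(state):
    return not (state["down"] and state["serving"])

STATES = [dict(down=d, serving=s) for d, s in product([False, True], repeat=2)]

COMMANDS = [
    # (guard, list of (probability, update)) in the style of probabilistic guarded commands
    (lambda st: not st["down"],
     [(0.9, {"serving": True}), (0.1, {"down": True, "serving": False})]),
    (lambda st: st["down"],
     [(1.0, {"down": False, "serving": False})]),
]

def successors(state):
    """Enabled probabilistic transitions; commands reaching an inconsistent target are disabled."""
    for guard, branches in COMMANDS:
        if guard(state):
            dist = [(p, {**state, **upd}) for p, upd in branches]
            if all(consistent(t) for _, t in dist):
                yield dist

for st in filter(consistent, STATES):
    print(st, "->", list(successors(st)))
```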
Abstract: Assurance cases are often required to certify critical systems. The use of formal methods in assurance can improve automation, increase confidence, and overcome errant reasoning. However, assurance cases can never be fully formalised, as the use of formal methods is contingent on models that are validated by informal processes. Consequently, assurance techniques should support both formal and informal artifacts, with explicated inferential links between them. In this paper, we contribute a formal machine-checked interactive language, called Isabelle/SACM, supporting the computer-assisted construction of assurance cases compliant with the OMG Structured Assurance Case Meta-Model. The use of Isabelle/SACM guarantees well-formedness, consistency, and traceability of assurance cases, and allows a tight integration of formal and informal evidence of various provenance. In particular, Isabelle brings a diverse range of automated verification techniques that can provide evidence. To validate our approach, we present a substantial case study based on the Tokeneer secure entry system benchmark. We embed its functional specification into Isabelle, verify its security requirements, and form a modular security case in Isabelle/SACM that combines the heterogeneous artifacts. We thus show that Isabelle is a suitable platform for critical systems assurance. PubDate: 2021-12-01 DOI: 10.1007/s00165-021-00537-4
Abstract: The substitution of traditional occupancy-detection sensors with an Autonomous Positioning System (APS) is a promising solution to contain costs and improve performance of current tramway signalling systems. The APS is an on-board system using satellite positioning and other inertial platforms to autonomously estimate the position of the tram with the needed levels of uncertainty and protection. However, autonomous positioning introduces, even in the absence of faults, a quantitative uncertainty with respect to traditional sensors. This paper investigates this issue in the context of an industrial project: a model of the envisaged solution is proposed, and it is analysed using the Uppaal Statistical Model Checker. A novel model-driven hazard analysis approach to the exploration of emerging hazards is proposed. The analysis emphasises how the virtualisation of legacy track circuits and on-board satellite positioning equipment may give rise to new hazards not present in the traditional system. PubDate: 2021-12-01 DOI: 10.1007/s00165-021-00556-1
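A back-of-the-envelope Monte Carlo sketch illustrates the kind of quantitative question such positioning uncertainty raises. The Gaussian error model, protection margin, and trial count are invented for the sketch and do not come from the paper's Uppaal model:

```python
import random

# Estimate the probability that the positioning error exceeds a protection margin,
# i.e. the tram is further ahead than the on-board estimate assumes.
TRIALS = 100_000
PROTECTION_MARGIN_M = 20.0      # assumed margin added to the estimated position
SIGMA_M = 8.0                   # assumed std deviation of the positioning error

unsafe = 0
for _ in range(TRIALS):
    error = random.gauss(0.0, SIGMA_M)          # positioning error in metres
    if error > PROTECTION_MARGIN_M:
        unsafe += 1

print(f"estimated hazard probability: {unsafe / TRIALS:.5f}")
```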
Abstract: Decentralised railway signalling systems have the potential to increase capacity and availability and to reduce maintenance costs of railway networks. However, given the safety-critical nature of railway signalling and the complexity of novel distributed signalling solutions, their safety should be guaranteed through thorough system validation methods. To achieve such a high level of safety assurance for these complex signalling systems, scenario-based testing methods are far from sufficient, even though they are still widely used in the industry. Formal verification is an alternative that provides a rigorous approach to verifying complex systems and has been successfully used in the railway domain. Despite these successes, little work has been done on applying formal methods to distributed railway systems. In our research we are working towards a multifaceted formal development methodology for complex railway signalling systems. The methodology is based on the Event-B modelling language, which provides an expressive modelling notation, stepwise development, and proof-based model verification. In this paper, we present the application of the methodology to the development and verification of a distributed protocol for the reservation of railway sections. The main challenge of this work is developing a distributed protocol which ensures safety and liveness of the distributed railway system when message delays are allowed in the model. PubDate: 2021-11-24 DOI: 10.1007/s00165-021-00567-y
Abstract: Model design is not a linear, one-shot process. It proceeds through refinements and revisions. To effectively support developers in generating model refinements and revisions, it is desirable to have some automated support to verify evolvable models. To address this problem, we recently proposed to adopt topological proofs, which are slices of the original model that witness property satisfaction. We implemented TOrPEDO, a framework that provides automated support for using topological proofs during model design. Our results showed that topological proofs are significantly smaller than the original models and that, in most cases, they allow the property to be re-verified by relying only on a simple syntactic check. However, our results also show that the procedure that computes topological proofs, which requires extracting unsatisfiable cores of LTL formulae, is computationally expensive. For this reason, TOrPEDO currently handles only small models. With the intent of providing practical and efficient support for flexible model design and wider adoption of our framework, in this paper we propose an enhanced, re-engineered version of TOrPEDO. The new version of TOrPEDO relies on a novel procedure to extract topological proofs, which has so far represented the bottleneck of TOrPEDO's performance. We implemented our procedure within TOrPEDO by considering Partial Kripke Structures (PKSs) and Linear-time Temporal Logic (LTL): two widely used formalisms to express models with uncertain parts and their properties. To extract topological proofs, the new version of TOrPEDO converts the LTL formulae into an SMT instance and reuses an existing SMT solver (e.g., Microsoft Z3) to compute an unsatisfiable core. The unsatisfiable core returned by the SMT solver is then automatically processed to generate the topological proof. We evaluated TOrPEDO by assessing (i) how the size of the proofs generated by TOrPEDO compares to the size of the models being analyzed, and (ii) how frequently the use of the topological proof returned by TOrPEDO avoids re-executing the model checker. Our results show that TOrPEDO provides proofs that are smaller (\(\approx\) 60%) than their respective initial models, effectively supporting designers in creating model revisions. In a significant number of cases (\(\approx\) 79%), the topological proofs returned by TOrPEDO enable assessing property satisfaction without re-running the model checker. We evaluated our new version of TOrPEDO by assessing (i) how it compares to the previous one and (ii) how useful it is in supporting the evaluation of alternative design choices of (small) model instances in applied domains. The results show that the new version of TOrPEDO is significantly more efficient than the previous one and can compute topological proofs for models with fewer than 40 states within two hours. The topological proofs and counterexamples provided by TOrPEDO are useful for supporting the development of alternative design choices of (small) model instances in applied domains. PubDate: 2021-11-06 DOI: 10.1007/s00165-021-00564-1
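The unsat-core step at the heart of the new procedure can be illustrated with Z3's Python API. The tracked Boolean assertions below merely stand in for model fragments; this is not TOrPEDO's actual LTL-to-SMT encoding, only a sketch of how a core identifies the fragments that suffice for the proof:

```python
from z3 import Bool, Implies, Not, Solver, unsat

req, ack, err = Bool("req"), Bool("ack"), Bool("err")

s = Solver()
s.set(unsat_core=True)
s.assert_and_track(Implies(req, ack), "m1")       # model fragment
s.assert_and_track(Implies(ack, Not(err)), "m2")  # model fragment
s.assert_and_track(Bool("spare"), "m3")           # fragment irrelevant to the proof
s.assert_and_track(req, "m4")                     # model fragment
s.assert_and_track(err, "neg_property")           # negation of the property to verify

if s.check() == unsat:
    # The core plays the role of the proof slice: m1, m2, m4, neg_property (m3 is dropped).
    print("proof slice:", s.unsat_core())
```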
Abstract: Schematic drawings showing railway tracks and equipment are commonly used to visualize railway operations and to communicate system specifications and construction blueprints. Recent advances in on-line collaboration and modeling tools have raised the expectations for quickly making changes to models, resulting in frequent changes to layouts, text, and/or symbols in schematic drawings. Automating the creation of high-quality schematic views from geographical and topological models can help engineers produce and update drawings efficiently. This paper introduces four methods for automatically producing schematic railway drawings with increasing levels of quality and control over the result. The final method, implemented in the open-source tool that we have developed, can use any combination of the following optimization criteria, which can have different priorities in different use cases: width and height of the drawing, the diagonal line lengths, and the number of bends. We show how to encode schematic railway drawings as an optimization problem over Boolean and numerical domains, combining unary number encoding, lazy difference constraints, and numerical optimization into an incremental SAT formulation. We compare drawings resulting from each of the four methods, applied to models of real-world engineering projects and existing railway infrastructure. We also show how to add symbols and labels to the track plan, which is important for the usefulness of the final outputs. Since the proposed tool is customizable and efficiently produces high-quality drawings from railML 2.x models, it can be used (as-is or extended) either as an integrated module in an industrial design tool like RailCOMPLETE or by researchers for visualization purposes. PubDate: 2021-10-20 DOI: 10.1007/s00165-021-00566-z
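The flavour of the constraint-based layout can be conveyed with a toy sketch using Z3's Optimize engine: integer x-coordinates for a handful of track nodes, difference constraints for ordering, and a minimised drawing width. The node names, edges, and objective are invented and are not the tool's actual encoding (which also uses unary number encoding and incremental SAT):

```python
from z3 import Int, Optimize, sat

nodes = ["a", "b", "c", "d"]
x = {n: Int(f"x_{n}") for n in nodes}

opt = Optimize()
for n in nodes:
    opt.add(x[n] >= 0)

# Difference constraints: each listed edge needs at least one unit of horizontal space.
for u, v in [("a", "b"), ("b", "c"), ("b", "d")]:
    opt.add(x[v] - x[u] >= 1)

width = Int("width")
for n in nodes:
    opt.add(width >= x[n])
opt.minimize(width)             # objective: minimise the drawing width

if opt.check() == sat:
    m = opt.model()
    print({n: m[x[n]].as_long() for n in nodes}, "width:", m[width].as_long())
```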
Abstract: The current travel demand in railways requires the adoption of novel approaches and technologies in order to increase network capacity. Virtual Coupling is considered one of the most innovative solutions to increase railway capacity by drastically reducing train headways. The aim of this paper is to provide an approach to investigate the potential of Virtual Coupling in railways by composing stochastic activity network model templates. The paper starts by describing the Virtual Coupling paradigm with a focus on standard European railway traffic controllers. Based on stochastic activity network model templates, we provide an approach to perform a quantitative evaluation of capacity increase in reference Virtual Coupling scenarios. The approach can be used to estimate system capacity over a modelled track portion, accounting for the scheduled service as well as possible failures. Due to its modularity, the approach can be extended towards the inclusion of safety model components. The contribution of this paper is a preliminary result of the PERFORMINGRAIL (PERformance-based Formal modelling and Optimal tRaffic Management for movING-block RAILway signalling) project funded by the European Shift2Rail Joint Undertaking. PubDate: 2021-09-24 DOI: 10.1007/s00165-021-00560-5
Abstract: In the field of Model-Driven Engineering, Triple Graph Grammars (TGGs) play an important role as a rule-based means of implementing consistency management. From a declarative specification of a consistency relation, several operations including forward and backward transformations, (concurrent) synchronisation, and consistency checks can be automatically derived. For TGGs to be applicable in realistic application scenarios, expressiveness in terms of supported language features is very important. A TGG tool is schema compliant if it can take domain constraints, such as multiplicity constraints in a meta-model, into account when performing consistency management tasks. To guarantee schema compliance, most TGG tools allow application conditions to be attached as necessary to relevant rules. This strategy is problematic for at least two reasons: First, ensuring compliance to a sufficiently expressive schema for all previously mentioned derived operations is still an open challenge; to the best of our knowledge, all existing TGG tools only support a very restricted subset of application conditions. Second, it is conceptually demanding for the user to indirectly specify domain constraints as application conditions, especially because this has to be completely revisited every time the TGG or domain constraint is changed. While domain constraints can in theory be automatically transformed to obtain the required set of application conditions, this has only been successfully transferred to TGGs for a very limited subset of domain constraints. To address these limitations, this paper proposes a search-based strategy for achieving schema compliance. We show that all correctness and completeness properties, previously proven in a setting without domain constraints, still hold when schema compliance is to be additionally guaranteed. An implementation and experimental evaluation are provided to support our claim of practical applicability. PubDate: 2021-08-24 DOI: 10.1007/s00165-021-00557-0
Abstract: Designing and reasoning about complex systems such as wireless sensor networks (WSNs) is hard due to highly dynamic environments: sensors are heterogeneous, battery-powered, and mobile. While formal modelling can provide rigorous mechanisms for design and reasoning, it is often viewed as difficult to use. Graph rewrite-based modelling techniques increase usability by providing an intuitive, flexible, and diagrammatic form of modelling in which graph-like structures express relationships between entities while rewriting mechanisms allow model evolution. Two major graph-based formalisms are Graph Transformation Systems (GTS) and Bigraphical Reactive Systems (BRS). While both use similar underlying structures, how they are employed in modelling is quite different. To gain a deeper understanding of GTS and BRS, and to guide future modelling, theory, and tool development, in this experience report we compare the practical modelling abilities and style of GTS and BRS when applied to topology control in WSNs. To show the value of the models, we describe how analysis may be performed in both formalisms. A comparison of the approaches shows that although the two formalisms are different, from both a theoretical and practical modelling standpoint, they are each successful in modelling topology control in WSNs. We found that GTS, while featuring a small set of entities and transformation rules, relied on entity attributes, rule application based on attribute/variable side-conditions, and imperative control flow units. BRS, on the other hand, required a larger number of entities in order to both encode attributes directly in the model (via nesting) and provide tagging functionality that, when coupled with rule priorities, implements control flow. Promising research remains in mapping techniques between the formalisms to further enable flexible and expressive modelling. PubDate: 2021-08-17 DOI: 10.1007/s00165-021-00558-z
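A hedged sketch of one topology-control rewrite step, in the spirit of the WSN case study rather than the paper's actual GTS or BRS rules, can make the rewrite idea concrete: within every triangle of links, the longest (most expensive) link is marked inactive. The graph, weights, and attribute names are invented:

```python
import itertools
import networkx as nx

# Toy sensor network: nodes are sensors, weighted edges are radio links.
G = nx.Graph()
G.add_weighted_edges_from([("n1", "n2", 3.0), ("n2", "n3", 4.0),
                           ("n1", "n3", 6.0), ("n3", "n4", 2.0)])
nx.set_edge_attributes(G, True, "active")

# One rewrite pass: in each triangle, deactivate the longest link.
for u, v, w in itertools.combinations(G.nodes, 3):
    if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w):
        longest = max([(u, v), (v, w), (u, w)],
                      key=lambda e: G.edges[e]["weight"])
        G.edges[longest]["active"] = False

print([(u, v, d["active"]) for u, v, d in G.edges(data=True)])
```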
Abstract: Modelling is an essential activity in software engineering. It typically involves two meta-levels: one includes meta-models that describe modelling languages, and the other contains models built by instantiating those meta-models. Multi-level modelling generalizes this approach by allowing models to span an arbitrary number of meta-levels. A scenario that profits from multi-level modelling is the definition of language families that can be specialized (e.g., for different domains) by successive refinements at subsequent meta-levels, hence promoting language reuse. This enables an open set of variability options given by all possible specializations of the language family. However, multi-level modelling lacks the ability to express closed variability regarding the availability of language primitives or the possibility to opt between alternative primitive realizations. This limits the reuse opportunities of a language family. To improve this situation, we propose a novel combination of product lines with multi-level modelling to cover both open and closed variability. Our proposal is backed by a formal theory that guarantees correctness, enables top-down and bottom-up language variability design, and is implemented atop the MetaDepth multi-level modelling tool. PubDate: 2021-08-10 DOI: 10.1007/s00165-021-00554-3
Abstract: In this paper, an efficient approach to data validation of distributed geographical interlocking systems (IXLs) is presented. In the distributed IXL paradigm, track elements are controlled by local computers communicating with other control components over local and wide area networks. The overall control logic is distributed over these track-side computers and remote server computers that may even reside in one or more cloud server farms. Redundancy is introduced to ensure fail-safe behaviour and fault tolerance, and to increase the availability of the overall system. To cope with the configuration-related complexity of such distributed IXLs, the software is designed according to the digital twin paradigm: physical track elements are associated with software objects implementing supervision and control for the element. The objects communicate with each other and with high-level IXL control components in the cloud over logical channels realised by distributed communication mechanisms. The objective of this article is to explain how configuration rules for this type of IXL can be specified by temporal logic formulae interpreted on Kripke Structure representations of the IXL configuration. Violations of configuration rules can be specified using formulae from a well-defined subset of LTL. By decomposing the complete configuration model into sub-models corresponding to routes through the model, the LTL model checking problem can be transformed into a CTL checking problem for which highly efficient algorithms exist. Specialised rule violation queries that are hard to express in LTL can be simplified and checked faster by performing sub-model transformations that add auxiliary variables to the states of the underlying Kripke Structures. Further performance enhancements are achieved by checking each sub-model concurrently. The approach presented here has been implemented in a model checking tool which is applied by Siemens Mobility for data validation of geographical IXLs. PubDate: 2021-08-10 DOI: 10.1007/s00165-021-00551-6
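A simplified illustration of the CTL checking idea (not the Siemens tool itself) is to pose a configuration-rule violation as a reachability question EF(rule_violation) on a small Kripke structure and compute it as a backward fixpoint. The states, transitions, and labels below are invented:

```python
# Toy Kripke structure: each state has successor states and atomic-proposition labels.
KRIPKE = {
    "s0": {"succ": ["s1", "s2"], "labels": set()},
    "s1": {"succ": ["s3"],       "labels": set()},
    "s2": {"succ": ["s2"],       "labels": set()},
    "s3": {"succ": ["s3"],       "labels": {"rule_violation"}},
}

def ef(label):
    """States from which a state labelled `label` is reachable (least fixpoint of EF)."""
    sat = {s for s, d in KRIPKE.items() if label in d["labels"]}
    changed = True
    while changed:
        changed = False
        for s, d in KRIPKE.items():
            if s not in sat and any(t in sat for t in d["succ"]):
                sat.add(s)
                changed = True
    return sat

print("EF rule_violation holds in:", sorted(ef("rule_violation")))  # s0, s1, s3
```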
Abstract: This paper presents counterexample-guided inductive synthesis (CEGIS) to automatically synthesise probabilistic models. The starting point is a family of finite-state Markov chains with related but distinct topologies. Such families can succinctly be described by a sketch of a probabilistic program. Program sketches are programs containing holes. Every hole has a finite repertoire of possible program snippets by which it can be filled. We study several synthesis problems—feasibility, optimal synthesis, and complete partitioning—for a given quantitative specification \(\varphi\). Feasibility amounts to determining a family member satisfying \(\varphi\), optimal synthesis amounts to finding a family member that maximises the probability of satisfying \(\varphi\), and complete partitioning splits the family into satisfying and refuting members. Each of these problems can be considered under the additional constraint of minimising the total cost of instantiations, e.g., what are all possible instantiations for \(\varphi\) that are within a certain budget? The synthesis problems are tackled using a CEGIS approach. The crux is to aggressively prune the search space by using counterexamples provided by a probabilistic model checker. Counterexamples can be viewed as sub-Markov chains that rule out all family members that share this sub-chain. Our CEGIS approach leverages efficient probabilistic model checking, modern SMT solving, and program snippets as counterexamples. Experiments on case studies of a diverse nature—controller synthesis, program sketching, and security—show that synthesis among up to a million candidate designs can be done using a few thousand verification queries. PubDate: 2021-08-01 DOI: 10.1007/s00165-021-00547-2
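The overall CEGIS loop can be conveyed with a schematic sketch: holes are filled from finite option sets, a verifier either accepts a candidate or returns a counterexample, and the counterexample prunes every remaining candidate sharing the offending assignment. The hole names, option sets, and the stub verifier below are invented; a real instantiation would call a probabilistic model checker and prune via sub-Markov chains as described in the abstract:

```python
from itertools import product

HOLES = {"h1": [0, 1, 2], "h2": ["a", "b"]}

def verify(candidate):
    """Stub verifier: accepts candidates with h1 == 2, otherwise blames h1."""
    if candidate["h1"] == 2:
        return None                      # property satisfied
    return {"h1": candidate["h1"]}       # counterexample: this partial assignment fails

def conflicts(candidate, counterexample):
    return all(candidate[h] == v for h, v in counterexample.items())

candidates = [dict(zip(HOLES, vals)) for vals in product(*HOLES.values())]
while candidates:
    cand = candidates.pop(0)
    cex = verify(cand)
    if cex is None:
        print("feasible instantiation:", cand)
        break
    # Prune every remaining candidate ruled out by the counterexample.
    candidates = [c for c in candidates if not conflicts(c, cex)]
```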