Authors: Hazem Torfah; Martin Zimmermann
Pages: 191–212
Abstract: We determine the complexity of counting models of bounded size of specifications expressed in linear-time temporal logic. Counting word-models is #P-complete if the bound is given in unary, and as hard as counting accepting runs of nondeterministic polynomial-space Turing machines if the bound is given in binary. Counting tree-models is as hard as counting accepting runs of nondeterministic exponential-time Turing machines if the bound is given in unary. For a binary encoding of the bound, the problem is at least as hard as counting accepting runs of nondeterministic exponential-space Turing machines, and no harder than counting accepting runs of nondeterministic doubly-exponential-time Turing machines. Finally, counting arbitrary transition systems satisfying a formula is #P-hard and no harder than counting accepting runs of nondeterministic polynomial-time Turing machines with a PSPACE oracle, if the bound is given in unary. If the bound is given in binary, then counting arbitrary models is as hard as counting accepting runs of nondeterministic exponential-time Turing machines.
PubDate: 2018-05-01
DOI: 10.1007/s00236-016-0284-z
Issue No: Vol. 55, No. 3 (2018)
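For intuition about model counting, here is a toy finite-trace sketch (not the paper's construction): brute-force counting of the length-n words over a single atomic proposition p that satisfy "eventually p" together with "p never holds at two consecutive positions". The function names are illustrative.

```python
from itertools import product

def satisfies(word):
    # "eventually p" and "globally (p implies next not-p)", read on a finite trace
    return any(word) and all(not (word[i] and word[i + 1])
                             for i in range(len(word) - 1))

def count_models(n):
    # brute-force model counting over all 2**n traces of length n
    return sum(1 for word in product([False, True], repeat=n) if satisfies(word))
```

For n = 3 this counts the four traces 100, 010, 001, 101; real model counters avoid this exponential enumeration, which is exactly where the complexity results above come in.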

Authors: Hadassa Daltrophe; Shlomi Dolev; Zvi Lotker
Pages: 213–225
Abstract: Given a large set of measurement data, in order to identify a simple function that captures the essence of the data, we suggest representing the data by an abstract function, in particular by polynomials. We interpolate the datapoints to define a polynomial that represents the data succinctly. The interpolation is challenging, since in practice the data can be noisy and even Byzantine, where Byzantine data represents an adversarial value that is not limited to being close to the correct measured data. We present two solutions: one extends the Welch-Berlekamp technique (Error correction for algebraic block codes, 1986) to eliminate the appearance of outliers in the case of multidimensional data, and copes with discrete noise and Byzantine data; the other is based on the method of Arora and Khot (J Comput Syst Sci 67(2):325–340, 2003), which handles noisy data, and we generalize it to the case of multidimensional noisy and Byzantine data.
PubDate: 2018-05-01
DOI: 10.1007/s00236-016-0288-8
Issue No: Vol. 55, No. 3 (2018)
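A minimal one-dimensional sketch of Byzantine-tolerant polynomial fitting, using exhaustive search over candidate supports rather than the Welch-Berlekamp decoder itself; `robust_fit` and its parameters are illustrative names, and distinct x-coordinates are assumed.

```python
from fractions import Fraction
from itertools import combinations

def lagrange_eval(pts, x):
    # evaluate the unique interpolant of degree < len(pts) through pts at x
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

def robust_fit(points, degree, max_byzantine):
    # try every (degree+1)-point support; accept an interpolant that agrees
    # with all but at most max_byzantine of the points
    n = len(points)
    for support in combinations(points, degree + 1):
        agree = sum(1 for (x, y) in points if lagrange_eval(support, x) == y)
        if agree >= n - max_byzantine:
            return support  # evaluate the fitted polynomial via lagrange_eval
    return None
```

This exhaustive search is exponential in the degree; the point of the techniques in the abstract is to recover the polynomial efficiently even under such adversarial corruption.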

Authors: Roberto Barbuti; Roberta Gori; Francesca Levi; Paolo Milazzo
Pages: 227–267
Abstract: Reaction systems are a qualitative formalism for the modelling of systems of biochemical reactions. In their original formulation, a reaction system executes in an environment (or context) that can supply it with new objects at each evolution step. The context drives the behaviour of a reaction system: it can provide different inputs to the system that can lead to different behaviours. In order to more faithfully deal with open systems, in this paper we propose a more powerful notion of context having not only the capability to provide objects, but also to absorb (or remove) objects at each evolution step. For such reaction systems with generalized context we investigate properties of dynamic causality by revising the previously proposed concept of formula based predictor. A formula based predictor is a Boolean formula characterising all contexts that lead to the production of a certain object after a given number of steps. In this paper, we revise the theory of formula based predictors in order to deal with reaction systems executed in a context of the new kind. As applications, we show an example of interaction between biochemical pathways and a reaction system modelling cell metabolism and respiration.
PubDate: 2018-05-01
DOI: 10.1007/s00236-017-0296-3
Issue No: Vol. 55, No. 3 (2018)

Authors: Patricia Bouyer; Nicolas Markey; Mickael Randour; Kim G. Larsen; Simon Laursen
Pages: 91–127
Abstract: Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in \(\mathsf{NP}\cap \mathsf{coNP}\) and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
PubDate: 2018-03-01
DOI: 10.1007/s00236-016-0274-1
Issue No: Vol. 55, No. 2 (2018)
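For intuition, when a memoryless play settles into a weight cycle that sums to zero, its average-energy is simply the mean of the energy levels along one period. A minimal sketch under that assumption (the function name is illustrative):

```python
from itertools import accumulate

def average_energy(cycle_weights, start_energy=0):
    # long-run average of the accumulated energy along an infinitely
    # repeated weight cycle; well-defined when the cycle sums to zero
    # (otherwise the energy diverges and the average is infinite)
    assert sum(cycle_weights) == 0, "cycle must be energy-neutral"
    levels = list(accumulate(cycle_weights, initial=start_energy))[1:]
    return sum(levels) / len(levels)
```

For the cycle with weights 2, -1, -1 starting at energy 0, the energy levels are 2, 1, 0, so the average-energy is 1; the games studied in the paper ask which player can control this quantity.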

Authors: Martin Zimmermann
Pages: 129–152
Abstract: We continue the investigation of parameterized extensions of linear temporal logic (LTL) that retain the attractive algorithmic properties of LTL: a polynomial space model checking algorithm and a doubly-exponential time algorithm for solving games. Alur et al. and Kupferman et al. showed that this is the case for parametric LTL (PLTL) and PROMPT-LTL, respectively, which have temporal operators equipped with variables that bound their scope in time. Later, this was also shown to be true for parametric LDL (PLDL), which extends PLTL to be able to express all \(\omega\)-regular properties. Here, we generalize PLTL to systems with costs, i.e., we do not bound the scope of operators in time, but bound the scope in terms of the cost accumulated during time. Again, we show that model checking and solving games for specifications in PLTL with costs is not harder than the corresponding problems for LTL. Finally, we discuss PLDL with costs and extensions to multiple cost functions.
PubDate: 2018-03-01
DOI: 10.1007/s00236-016-0279-9
Issue No: Vol. 55, No. 2 (2018)

Authors: Rayna Dimitrova; Rupak Majumdar
Pages: 153–189
Abstract: Extensions to finite-state automata on strings, such as multi-head automata or multi-counter automata, have been successfully used to encode many infinite-state non-regular verification problems. In this paper, we consider a generalization of automata-theoretic infinite-state verification from strings to labelled series–parallel graphs. We define a model of non-deterministic, 2-way, concurrent automata working on series–parallel graphs and communicating through shared registers on the nodes of the graph. We consider the following verification problem: given a family of series–parallel graphs described by a context-free graph transformation system (GTS), and a concurrent automaton over series–parallel graphs, is some graph generated by the GTS accepted by the automaton? The general problem is undecidable already for (one-way) multi-head automata over strings. We show that a bounded version, where the automata make a fixed number of reversals along the graph and use a fixed number of shared registers, is decidable, even though there is no bound on the sizes of series–parallel graphs generated by the GTS. Our decidability result is based on establishing that the number of context switches can be bounded and on an encoding of the computation of bounded concurrent automata that allows us to reduce the reachability problem to the emptiness problem for pushdown automata.
PubDate: 2018-03-01
DOI: 10.1007/s00236-016-0290-1
Issue No: Vol. 55, No. 2 (2018)

Authors: Édouard Bonnet; Vangelis Th. Paschos
Pages: 1–15
Abstract: Instance sparsification is well-known in the world of exact computation since it is very closely linked to the Exponential Time Hypothesis. In this paper, we extend the concept of sparsification in order to capture subexponential time approximation. We develop a new tool for inapproximability, called approximation preserving sparsification, and use it in order to get strong inapproximability results in subexponential time for several fundamental optimization problems such as min dominating set, min feedback vertex set, min set cover, min feedback arc set, and others.
PubDate: 2018-02-01
DOI: 10.1007/s00236-016-0281-2
Issue No: Vol. 55, No. 1 (2018)

Authors: Henrik Björklund; Wim Martens; Thomas Schwentick
Pages: 17–56
Abstract: We study the containment, satisfiability, and validity problems for conjunctive queries over trees with respect to a schema. We show that conjunctive query containment and validity are 2EXPTIME-complete with respect to a schema, in both cases where the schema is given as a DTD or as a tree automaton. Furthermore, we show that satisfiability for conjunctive queries with respect to a schema can be decided in NP. The problem is NP-hard already for queries using only one kind of axis. Finally, we consider conjunctive queries that can test for equalities and inequalities of data values. Here, satisfiability and validity are decidable, but containment is undecidable, even without schema information. On the other hand, containment with respect to a schema becomes decidable again if the “larger” query is not allowed to use both equalities and inequalities.
PubDate: 2018-02-01
DOI: 10.1007/s00236-016-0282-1
Issue No: Vol. 55, No. 1 (2018)

Authors: Lene M. Favrholdt; Jesper W. Mikkelsen
Pages: 57–80
Abstract: We study a version of online edge coloring, where the goal is to color as many edges as possible using only a given number, k, of available colors. All of our results are with regard to competitive analysis. Previous attempts to identify optimal algorithms for this problem have failed, even for bipartite graphs. Thus, in this paper, we analyze even more restricted graph classes, paths and trees. For paths, we consider \(k=2\), and for trees, we consider any \(k \ge 2\). We prove that a natural greedy algorithm called \(\textsc{First-Fit}\) is optimal among deterministic algorithms, on paths as well as trees. For paths, we give a randomized algorithm, which is optimal and better than the best possible deterministic algorithm. For trees, we prove that to obtain a better competitive ratio than \(\textsc{First-Fit}\), the algorithm would have to be both randomized and unfair (i.e., reject edges that could have been colored), and even such algorithms cannot be much better than \(\textsc{First-Fit}\).
PubDate: 2018-02-01
DOI: 10.1007/s00236-016-0283-0
Issue No: Vol. 55, No. 1 (2018)
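The greedy First-Fit strategy from the abstract is simple to state: each arriving edge gets the lowest color in {0, ..., k-1} not already used at either endpoint, and is rejected when none exists. A minimal sketch (not the paper's competitive analysis):

```python
def first_fit_edge_coloring(edges, k):
    # online First-Fit: color each arriving edge with the smallest color
    # unused at both endpoints; reject the edge if all k colors clash
    used = {}       # vertex -> set of colors on its incident colored edges
    coloring = {}   # edge -> assigned color
    for (u, v) in edges:
        free = set(range(k)) - used.get(u, set()) - used.get(v, set())
        if free:
            c = min(free)
            coloring[(u, v)] = c
            used.setdefault(u, set()).add(c)
            used.setdefault(v, set()).add(c)
    return coloring
```

On a star with three edges and k = 2, First-Fit can color only two edges, which is the kind of loss the competitive analysis in the paper quantifies.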

Authors: Juha Honkala
Pages: 81–88
Abstract: We study the language equivalence problem for smooth and loop-free D0L systems. We show that the number of initial terms in the associated D0L sequences we have to consider to decide language equivalence depends only on the cardinality of the underlying alphabet.
PubDate: 2018-02-01
DOI: 10.1007/s00236-016-0286-x
Issue No: Vol. 55, No. 1 (2018)
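For background, a D0L system iterates a morphism h on an axiom, producing the sequence w, h(w), h(h(w)), ...; the result above says a bounded number of these initial terms suffices to decide language equivalence. A minimal sketch that generates and compares initial terms (names illustrative, and comparing terms is only the simplest ingredient of such a decision procedure):

```python
def d0l_sequence(morphism, axiom, n):
    # first n terms w, h(w), h(h(w)), ... of a D0L system,
    # where the morphism maps each letter to a word
    terms, w = [], axiom
    for _ in range(n):
        terms.append(w)
        w = "".join(morphism[c] for c in w)
    return terms

def same_initial_terms(sys1, sys2, n):
    # compare the first n terms of two D0L sequences
    return d0l_sequence(*sys1, n) == d0l_sequence(*sys2, n)
```

With h(a) = ab, h(b) = b and axiom a, the sequence starts a, ab, abb, abbb, so bounding how many such terms must be inspected is what makes the equivalence problem decidable in practice.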

Authors: Soumyadip Bandyopadhyay; Dipankar Sarkar; Chittaranjan Mandal
Abstract: Extensive optimizing and parallelizing transformations are carried out on programs, both by (untrusted) compilers and human experts, before deploying them on some platform architecture which is by and large parallel. It is therefore important to devise some suitable modelling paradigm which is capable of capturing parallelism in such a way that proving equivalence of the source programs and their transformed versions becomes easier. In the present work, an untimed Petri net model with data constraints, called the CPN (Coloured Petri Net) model, is used to model the parallel behaviours. Being value based, such models depict more vividly the data dependencies which lie at the core of such transformations; accordingly, they are likely to provide more suitable internal representations (IRs) of both the source and the transformed programs than IRs like sequential control flow graphs (CFGs). A path based equivalence checking method for CPN models, with rigorous treatment of complexity and correctness issues, is presented. Experimental results show the effectiveness of the approach.
PubDate: 2018-04-23
DOI: 10.1007/s00236-018-0320-2

Authors: Sascha Fendrich; Gerald Lüttgen
Abstract: Interface theories allow system designers to reason about the composability and compatibility of concurrent system components. Such theories often extend both de Alfaro and Henzinger’s Interface Automata and Larsen’s Modal Transition Systems, which leads, however, to several issues that are undesirable in practice: an unintuitive treatment of specified unwanted behaviour, a binary compatibility concept that does not scale to multi-component assemblies, and compatibility guarantees that are insufficient for software product lines. In this article we show that communication mismatches are central to all these problems and, thus, the ability to represent such errors semantically is an important feature of an interface theory. Accordingly, we present the error-aware interface theory EMIA, where the above shortcomings are remedied by introducing explicit fatal error states. In addition, we prove via a Galois insertion that EMIA is a conservative generalisation of the established Modal Interface Automata theory.
PubDate: 2018-04-17
DOI: 10.1007/s00236-018-0319-8

Authors: Jörg Desel; Javier Esparza; Philipp Hoffmann
Abstract: This paper introduces negotiations, a model of concurrency close to Petri nets, with multi-party negotiations as concurrency primitive. We study two fundamental analysis problems. The soundness problem consists in deciding if it is always possible for a negotiation to terminate successfully, whatever the current state is. Given a sound negotiation, the summarization problem aims at computing an equivalent one-step negotiation with the same input/output behavior. The soundness and summarization problems can be solved by means of simple algorithms acting on the state space of the negotiation, which however face the well-known state explosion problem. We study alternative algorithms that avoid the construction of the state space. In particular, we define reduction rules that simplify a negotiation while preserving the sound/non-sound character of the negotiation and its summary. In a first result we show that our rules are complete for the class of weakly deterministic acyclic negotiations, meaning that they reduce all sound negotiations in this class, and only them, to equivalent one-step negotiations. This provides algorithms for both the soundness and the summarization problem that avoid the construction of the state space. We then study the class of deterministic negotiations. Our second main result shows that the rules are also complete for this class, even if the negotiations contain cycles. Moreover, we present an algorithm that completely reduces all sound deterministic negotiations, and only them, in polynomial time.
PubDate: 2018-03-20
DOI: 10.1007/s00236-018-0318-9
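The naive state-space check of soundness can be sketched directly: a system is sound when every state reachable from the initial state can still reach a successful final state. A minimal sketch on an explicit graph (this is the brute-force method the paper's reduction rules are designed to avoid; names are illustrative):

```python
from collections import deque

def reachable(graph, sources):
    # breadth-first search from a set of source states
    seen, queue = set(sources), deque(sources)
    while queue:
        s = queue.popleft()
        for t in graph.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def is_sound(graph, initial, final):
    # sound: from every state reachable from `initial`, some state in
    # `final` remains reachable (successful termination always possible)
    fwd = reachable(graph, {initial})
    rev = {}  # reversed graph, to find states that can still reach `final`
    for s, ts in graph.items():
        for t in ts:
            rev.setdefault(t, set()).add(s)
    can_finish = reachable(rev, set(final))
    return fwd <= can_finish
```

A self-loop state with no path to a final state (a livelock) makes the check fail; the contribution of the paper is deciding the same property without ever building this state space.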

Authors: Rosa Abbasi; Fatemeh Ghassemi; Ramtin Khosravi
Abstract: Component-based systems evolve as a new component is added or an existing one is replaced by a newer version. Hence, it is appealing to assure that the new system still preserves its safety properties. However, instead of inspecting the new system as a whole, which may result in a large state space, it is beneficial to reuse the verification results by inspecting the newly added component in isolation. To this aim, we study the problem of model checking component-based asynchronously communicating systems in the presence of an unspecified component against safety properties. Our solution is based on assume-guarantee reasoning, adapted for asynchronous environments, which generates the weakest assumption. If the newly added component conforms to the assumption, then the whole system still satisfies the property. To make the approach efficient and convergent, we produce an overapproximated interface of the missing component, and by its composition with the rest of the system components we achieve an overapproximated specification of the system, from which we remove those traces of the system that violate the property and generate an assumption for the missing component. We have implemented our approach on two case studies. Furthermore, we compared our results with the state-of-the-art direct approach; our resulting assumptions are smaller and are computed faster.
PubDate: 2018-03-07
DOI: 10.1007/s00236-018-0317-x

Authors: Lijun Zhang; Pengfei Yang; Lei Song; Holger Hermanns; Christian Eisentraut; David N. Jansen; Jens Chr. Godskesen
Abstract: Weak distribution bisimilarity is an equivalence notion on probabilistic automata, originally proposed for Markov automata. It has gained some popularity as the coarsest behavioral equivalence enjoying valuable properties like preservation of trace distribution equivalence and compositionality. This holds in the classical context of arbitrary schedulers, but it has been argued that this class of schedulers is unrealistically powerful. This paper studies a strictly coarser notion of bisimilarity, which still enjoys these properties in the context of realistic subclasses of schedulers: Trace distribution equivalence is implied for partial information schedulers, and compositionality is preserved by distributed schedulers. The intersection of the two scheduler classes thus spans a coarser and still reasonable compositional theory of behavioral semantics.
PubDate: 2018-02-24
DOI: 10.1007/s00236-018-0313-1

Authors: Hongbo Zhang
Abstract: In this paper, we analyze an M/M/1 queue with batch arrival and multiple working vacations. We describe the queueing model by a special GI/M/1 type Markov process with infinite phases, and by the matrix analytic method, we not only give the stationary queue length distribution of the model, but also obtain the exact number of vacations completed by the server.
PubDate: 2018-02-05
DOI: 10.1007/s00236-018-0316-y
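For background, the plain M/M/1 queue (without batch arrivals or working vacations) has the classical geometric stationary queue-length law P(N = n) = (1 - rho) * rho^n with rho = lambda/mu < 1; the paper generalizes this baseline via matrix-analytic methods. A minimal sketch of the baseline only, not of the paper's GI/M/1-type model:

```python
def mm1_queue_length_pmf(lam, mu, n_max):
    # stationary distribution of a stable M/M/1 queue:
    # P(N = n) = (1 - rho) * rho**n, with utilization rho = lam / mu
    rho = lam / mu
    assert rho < 1, "stability requires arrival rate < service rate"
    return [(1 - rho) * rho ** n for n in range(n_max + 1)]
```

With lam = 1 and mu = 2 the utilization is 1/2 and the queue is empty half the time; batch arrivals and working vacations replace this scalar geometric law with a matrix-geometric one.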

Authors: Henning Fernau; Lakshmanan Kuppusamy; Indhumathi Raman
Abstract: A graph-controlled insertion–deletion system is a regulated extension of an insertion–deletion system. It has several components and each component contains some insertion–deletion rules. These components are the vertices of a directed control graph. A transition is performed by any applicable rule in the current component on a string and the resultant string is then moved to the target component specified in the rule. This also describes the arcs of the control graph. Starting from an axiom in the initial component, strings thus move through the control graph. The language of the system is the set of all terminal strings collected in the final component. In this paper, we investigate a variant of the main question in this area: which combinations of size parameters (the maximum number of components, the maximal length of the insertion string, the maximal length of the left context for insertion, the maximal length of the right context for insertion; plus three similar restrictions with respect to deletion) are sufficient to maintain computational completeness of such restricted systems under the additional restriction that the (undirected) control graph is a path? Notice that these results also bear consequences for the domain of insertion–deletion P systems, improving on a number of previous results from the literature, concerning in particular the number of components (membranes) that are necessary for computational completeness results.
PubDate: 2018-02-05
DOI: 10.1007/s00236-018-0312-2
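A single contextual insertion or deletion step, as described above, can be sketched directly: a rule (left, x, right) inserts (or deletes) x at any position where the surrounding contexts match. A minimal sketch with illustrative names (empty strings serve as empty contexts):

```python
def apply_insertion(word, left, ins, right):
    # all words obtained by inserting `ins` at a position whose
    # surrounding text matches the contexts `left` and `right`
    results = []
    for i in range(len(word) + 1):
        if word[:i].endswith(left) and word[i:].startswith(right):
            results.append(word[:i] + ins + word[i:])
    return results

def apply_deletion(word, left, sub, right):
    # all words obtained by deleting one occurrence of `sub`
    # appearing between the contexts `left` and `right`
    results = []
    for i in range(len(word) - len(sub) + 1):
        if (word[i:i + len(sub)] == sub
                and word[:i].endswith(left)
                and word[i + len(sub):].startswith(right)):
            results.append(word[:i] + word[i + len(sub):])
    return results
```

The size parameters in the abstract bound exactly the lengths of `ins`/`sub` and of the `left`/`right` contexts; the graph control then restricts which rule set may fire next.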

Authors: Masoud Ebrahimi; Gholamreza Sotudeh; Ali Movaghar
Abstract: Few fuzzy temporal logics and modeling formalisms have been developed such that their model checking is both effective and efficient. State-space explosion makes model checking of fuzzy temporal logics inefficient, either because the modeling formalism itself is not compact, or because the verification approach requires an exponentially larger intermediate representation of the modeling formalism. To exemplify, Fuzzy Program Graph (FzPG) is a very compact and powerful formalism to model fuzzy systems; yet, it must be translated into an equivalent Fuzzy Kripke model, with an exponential blow-up, should it be formally verified. In this paper, we introduce Fuzzy Computation Tree Logic (FzCTL) and its direct symbolic model checking over FzPG, which avoids the aforementioned state-space explosion. Considering the compactness and readability of FzPG along with the expressiveness of FzCTL, we believe the proposed method is applicable in real-world scenarios. Finally, we study formal verification of fuzzy flip-flops to demonstrate the capabilities of the proposed method.
PubDate: 2018-02-03
DOI: 10.1007/s00236-018-0311-3

Authors: Paolo Baldan; Fabio Gadducci
Abstract: In a seminal paper Montanari and Meseguer have shown that an algebraic interpretation of Petri nets in terms of commutative monoids can be used to provide an elegant characterisation of the deterministic computations of a net, accounting for their sequential and parallel composition. A smoother and more complete theory for deterministic computations has been later developed by relying on the concept of pre-net, a variation of Petri nets with a non-commutative flavor. This paper shows that, along the same lines, by adding an (idempotent) operation and thus considering dioids (idempotent semirings) rather than just monoids, one can faithfully characterise the non-deterministic computations of a net.
PubDate: 2018-01-24
DOI: 10.1007/s00236-018-0314-0
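The commutative-monoid view of markings can be made concrete: a marking is a multiset of places, and firing a transition subtracts its preset and adds its postset, both multisets under the same monoid operation. A minimal sketch using Python multisets (illustrative only, not the algebraic construction of the paper):

```python
from collections import Counter

def enabled(marking, pre):
    # a transition is enabled when its preset is contained in the marking
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    # monoid view of firing: new marking = (marking - pre) + post
    assert enabled(marking, pre)
    out = Counter(marking)
    out.subtract(pre)
    out.update(post)
    return +out  # unary + drops places with zero tokens
```

Multiset union here is the commutative monoid operation of the classical construction; the paper's move to dioids adds an idempotent sum on top of it to capture non-deterministic branching.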