Abstract: Publication date: Available online 10 May 2016
Source: Journal of Applied Logic
Author(s): Paul D. Thorn, Gerhard Schurz
In previous work, we studied four well-known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.
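As a side note on the entropy bias mentioned above: sampling a distribution by normalizing independent uniform draws (one simple procedure, not necessarily the one used in the paper) does tend to produce moderately high entropy. A minimal Python sketch:

```python
import random, math

def shannon_entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def random_distribution(n, rng=random):
    """Sample a distribution over n outcomes by normalizing uniform draws."""
    weights = [rng.random() for _ in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

# Distributions sampled this way tend to land fairly close to the maximum
# entropy log2(n), illustrating how a naive sampling procedure biases the
# simulated "environment" toward moderately high entropy levels.
random.seed(0)
samples = [shannon_entropy(random_distribution(8)) for _ in range(1000)]
avg = sum(samples) / len(samples)
print(f"average entropy: {avg:.2f} bits (max = {math.log2(8):.2f})")
```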

Abstract: Publication date: Available online 10 May 2016
Source: Journal of Applied Logic
Author(s): Nico Potyka, Engelbert Mittermeier, David Marenke
The expert system shell MECore provides a series of knowledge management operations to define probabilistic knowledge bases and to reason under uncertainty. To provide a reference work for MECore algorithmics, we bring together results from different sources that have been applied in MECore and explain their intuitive ideas. Additionally, we report on our ongoing work regarding further development of MECore's algorithms to compute optimum entropy distributions and provide some empirical results. Altogether, this paper explains the intuition behind important theoretical results and their practical implications, compares old and new algorithmic approaches, and points out their benefits as well as possible limitations and pitfalls.

Abstract: Publication date: Available online 10 May 2016
Source: Journal of Applied Logic
Author(s): Achim Kuwertz, Jürgen Beyerer
Adaptive knowledge modeling is an approach for extending the abilities of the Object-Oriented World Model, a system for representing the state of an observed real-world environment, to open-world modeling. In open environments, entities unforeseen at the design-time of a world model can occur. For coping with such circumstances, adaptive knowledge modeling is tasked with adapting the underlying knowledge model according to the environment. The approach is based on quantitative measures, introduced previously, for rating the quality of knowledge models. In this contribution, adaptive knowledge modeling is extended by measures for detecting the need for model adaptation and identifying the potential starting points of necessary model change as well as by an approach for applying such change. Being an extended and more detailed version of [1], the contribution also provides background information on the architecture of the Object-Oriented World Model and on the principles of adaptive knowledge modeling, as well as examination results for the proposed methods. In addition, a more complex scenario is used to evaluate the overall approach.

Abstract: Publication date: Available online 11 May 2016
Source: Journal of Applied Logic
Author(s): Lieven Decock, Igor Douven, Marta Sznajder
That one's degrees of belief at any one time obey the axioms of probability theory is widely regarded as a necessary condition for static rationality. Many theorists hold that it is also a sufficient condition, but according to critics this yields too subjective an account of static rationality. However, there are currently no good proposals as to how to obtain a tenable stronger probabilistic theory of static rationality. In particular, the idea that one might achieve the desired strengthening by adding some symmetry principle to the probability axioms has appeared hard to maintain. Starting from an idea of Carnap and drawing on relatively recent work in cognitive science, this paper argues that conceptual spaces provide the tools to devise an objective probabilistic account of static rationality. Specifically, we propose a principle that derives prior degrees of belief from the geometrical structure of concepts.

Abstract: Publication date: Available online 27 April 2016
Source: Journal of Applied Logic
Author(s): Gerhard Jäger, Michel Marti
Starting off from the usual language of modal logic for multi-agent systems dealing with the agents' knowledge/belief and common knowledge/belief, we define so-called epistemic Kripke structures for intuitionistic (common) knowledge/belief. We then introduce corresponding deductive systems and show that they are sound and complete with respect to these semantics.

Abstract: Publication date: Available online 25 April 2016
Source: Journal of Applied Logic
Author(s): Wenyan Xu
Cirquent calculus is a new proof-theoretic and semantic approach introduced by G. Japaridze for the needs of his theory of computability logic (CoL). The earlier article “From formulas to cirquents in computability logic” by Japaridze generalized formulas in CoL to circuit-style structures termed cirquents. It showed that, through cirquents with what are termed clustering and ranking, one can capture, refine and generalize independence-friendly (IF) logic. Specifically, the approach allows us to account for independence from propositional connectives in the same spirit as IF logic accounts for independence from quantifiers. Japaridze's treatment of IF logic, however, was purely semantical, and no deductive system was proposed. The present paper syntactically constructs a cirquent calculus system with clustering and ranking, sound and complete w.r.t. the propositional fragment of cirquent-based semantics. Such a system captures the propositional version of what is called extended IF logic, thus being an axiomatization of a nontrivial fragment of that logic.

Abstract: Publication date: Available online 2 March 2016
Source: Journal of Applied Logic
Author(s): J.A. Bergstra, C.A. Middelburg
Meadows are alternatives for fields with a purely equational axiomatization. At the basis of meadows lies the decision to make the multiplicative inverse operation total by imposing that the multiplicative inverse of zero is zero. Divisive meadows are meadows with the multiplicative inverse operation replaced by a division operation. Viewing a fraction as a term over the signature of divisive meadows that is of the form p / q , we investigate which divisive meadows admit transformation of fractions into simple fractions, i.e. fractions without proper subterms that are fractions.
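The meadow convention that the multiplicative inverse of zero is zero can be illustrated directly; the following Python sketch (an illustration, not from the paper) makes the inverse and division operations total over the rationals:

```python
from fractions import Fraction

def minv(x):
    """Total multiplicative inverse in the sense of meadows: minv(0) = 0."""
    return Fraction(0) if x == 0 else 1 / Fraction(x)

def mdiv(p, q):
    """Division in a divisive meadow: p / q = p * minv(q), total since minv is."""
    return Fraction(p) * minv(q)

# Totality lets familiar identities hold unconditionally, e.g.
# x * minv(x) * x == x for every x, including x == 0.
assert all(x * minv(x) * x == x for x in map(Fraction, [-3, 0, 2, 7]))
print(mdiv(1, 0))  # -> 0, rather than a division-by-zero error
```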

Abstract: Publication date: Available online 21 February 2016
Source: Journal of Applied Logic
Author(s): Mauricio Osorio Galindo, Verónica Borja Macías, José Ramón Enrique Arrazola Ramírez
In [25] Priest developed the da Costa logic (daC); this is a paraconsistent logic which is also a co-intuitionistic logic that contains the logic Cω. Due to its interesting properties it has been studied by Castiglioni, Ertola and Ferguson, and some remarkable results about it and its extensions are shown in [8,11]. In the present article we continue the study of daC; we prove that a restricted Hilbert system for daC, named DC, satisfies certain properties that help us show that this logic is not a maximal paraconsistent system. We also study an extension of daC called PH1 and give different characterizations of it. Finally we compare daC and PH1 with several paraconsistent logics.

Abstract: Publication date: Available online 4 March 2016
Source: Journal of Applied Logic
Author(s): Sergey Babenyshev, Manuel A. Martins
This work advances a research agenda which has as its main aim the application of Abstract Algebraic Logic (AAL) methods and tools to the specification and verification of software systems. It uses a generalization of the notion of an abstract deductive system to handle multi-sorted deductive systems which differentiate visible and hidden sorts. Two main results of the paper are obtained by generalizing properties of the Leibniz congruence — the central notion in AAL. In this paper we discuss a question we posed in [1] about the relationship between the behavioral equivalences of equivalent hidden logics. We also present a necessary and sufficient intrinsic condition for two hidden logics to be equivalent.

Abstract: Publication date: December 2015
Source: Journal of Applied Logic, Volume 13, Issue 4, Part 1
Author(s): Maximiliano C.D. Budán, Mauro Gómez Lucero, Ignacio Viglizzo, Guillermo R. Simari
To increase the expressivity of an argumentation formalism, we propose adding meta-level information to the arguments in the form of labels representing quantifiable data such as reliability degree, strength measure, skill, time availability, or any other feature about arguments. The extra information attached to an argument is then used in the acceptability determination process. We present a Labeled Argumentation Framework (LAF), combining the knowledge representation capabilities provided by the Argument Interchange Format (AIF) with an Algebra of Argumentation Labels, enabling us to handle the labels associated with the arguments. These labels are propagated through an argumentative graph according to the relations of support, conflict, and aggregation between arguments. Through this process we obtain final labels attached to the arguments that are useful to determine their acceptability.

Abstract: Publication date: December 2015
Source: Journal of Applied Logic, Volume 13, Issue 4, Part 1
Author(s): Andrea Cohen, Sebastian Gottifredi, Alejandro J. García, Guillermo R. Simari
This work introduces the Attack–Support Argumentation Framework (ASAF), an approach to abstract argumentation that allows for the representation and combination of attack and support relations. This framework extends the Argumentation Framework with Recursive Attacks (AFRA) in two ways. Firstly, it adds a support relation enabling the expression of support for arguments; this support can also be given to attacks, and to the support relation itself. Secondly, it extends AFRA's attack relation by allowing attacks on the aforementioned support relation. Moreover, since the support relation of the ASAF has a necessity interpretation, the ASAF also extends the Argumentation Framework with Necessities (AFN). Thus, the ASAF provides a unified framework for representing attack and support for arguments, as well as attack and support for the attack and support relations at any level.

Abstract: Publication date: December 2015
Source: Journal of Applied Logic, Volume 13, Issue 4, Part 1
Author(s): Andreas Ecke, Rafael Peñaloza, Anni-Yasmin Turhan
In Description Logic (DL) knowledge bases (KBs), information is typically captured by clear-cut concepts. For many practical applications, querying the KB by crisp concepts is too restrictive; a user might be willing to lose some precision in the query in exchange for a larger selection of answers. Similarity measures can offer a controlled way of gradually relaxing a query concept within a user-specified limit. In this paper we formalize the task of instance query answering for DL KBs using concepts relaxed by concept similarity measures (CSMs). We investigate computation algorithms for this task in the DL EL, their complexity, and the properties of the CSMs employed, depending on whether unfoldable or general TBoxes are used. For the case of general TBoxes we define a family of CSMs that take the full TBox information into account when assessing the similarity of concepts.

Abstract: Publication date: July 2016
Source: Journal of Applied Logic, Volume 16
Author(s): Albert Kadji, Celestin Lele, Jean B. Nganou
The goal of the present article is to extend the study of commutative rings whose ideals form an MV-algebra as carried out by Belluce and Di Nola [1] to non-commutative rings. We study and characterize all rings whose ideals form a pseudo MV-algebra, which shall be called here generalized Łukasiewicz rings. We obtain that these are (up to isomorphism) exactly the direct sums of unitary special primary rings.

Abstract: Publication date: Available online 10 March 2016
Source: Journal of Applied Logic
Author(s): Rob Egrot
A poset is representable if it can be embedded in a field of sets in such a way that existing finite meets and joins become intersections and unions respectively (we say finite meets and joins are preserved). More generally, for cardinals α and β, a poset is said to be (α, β)-representable if an embedding into a field of sets exists that preserves meets of sets smaller than α and joins of sets smaller than β. We show, using an ultraproduct/ultraroot argument, that when 2 ≤ α, β ≤ ω the class of (α, β)-representable posets is elementary, but does not have a finite axiomatization when either α or β equals ω. We also show that the classes of posets with representations preserving either countable or all meets and joins are pseudoelementary.

Abstract: Publication date: Available online 6 April 2016
Source: Journal of Applied Logic
Author(s): Philippe Balbiani
We consider the binary relations of negligibility, comparability and proximity in the set of all hyperreals. Associating with negligibility, comparability and proximity the binary predicates N, C and P and the connectives [N], [C] and [P], we consider a first-order theory based on these predicates and a modal logic based on these connectives. We investigate the axiomatization/completeness and the decidability/complexity of this first-order theory and this modal logic.

Abstract: Publication date: May 2016
Source: Journal of Applied Logic, Volume 15
Author(s): Norihiro Kamide
It is known that the logic BI of bunched implications is a logic of resources. Many studies have reported on the applications of BI to computer science. In this paper, an extension BIS of BI by adding a sequence modal operator is introduced and studied in order to formalize more fine-grained resource-sensitive reasoning. By the sequence modal operator of BIS, we can appropriately express “sequential information” in resource-sensitive reasoning. A Gentzen-type sequent calculus SBIS for BIS is introduced, and the cut-elimination and decidability theorems for SBIS are proved. An extension of the Grothendieck topological semantics for BI is introduced for BIS, and the completeness theorem with respect to this semantics is proved. The cut-elimination, decidability and completeness theorems for SBIS and BIS are proved using some theorems for embedding BIS into BI.

Abstract: Publication date: May 2016
Source: Journal of Applied Logic, Volume 15
Author(s): Robert Demolombe, Luis Fariñas del Cerro, Naji Obeid
A translation technique is presented which transforms a class of First Order Logic formulas, called Restricted formulas, into ground formulas. For the formulas in this class the range of quantified variables is restricted by Domain formulas. If we have complete knowledge of the predicates involved in the Domain formulas, their extensions can be evaluated with the Relational Algebra, and these extensions are used to transform universal (respectively existential) quantifiers into finite conjunctions (respectively disjunctions). It is assumed that the complete knowledge is represented by Completion Axioms and Unique Name Axioms à la Reiter. These axioms involve the equality predicate. However, the translation allows the equality predicate to be removed from the ground formulas, and for a large class of formulas their consequences are the same as those of the initial First Order formulas. This result opens the door to the design of efficient deduction techniques.

Abstract: Publication date: May 2016
Source: Journal of Applied Logic, Volume 15
Author(s): Lorenzo Magnani
In the companion article “The eco-cognitive model of abduction” [66] I illustrated the main features of my eco-cognitive model of abduction (EC-Model). With the aim of delineating further aspects of that “naturalization of logic” recently urged by John Woods [94], I will now set out to analyze further some properties of abduction that are essential from a logical standpoint. When dealing with the so-called “inferential problem”, I will opt for the more general concepts of input and output instead of those of premisses and conclusions, and show that in this framework two consequences can be derived that help clarify basic logical aspects of abductive reasoning: 1) it is more natural to accept the “multimodal” and “context-dependent” character of the inferences involved; 2) inferences are not merely conceived of in terms of the process leading to the “generation of an output” or to the proof of it, as in the traditional and standard view of deductive proofs; rather, from this perspective abductive inferences can be seen as related to logical processes in which input and output fail to hold each other in an expected relation, with the solution involving the modification of inputs, not that of outputs. The chance of finding an abductive solution still appears to depend on the Aristotelian concept of “leading away” (ἀπαγωγή) I dealt with in the companion article, that is, on initiating the application of a supplementary logic implementing an appropriate formal inference engine. An important result I will emphasize is that irrelevance and implausibility are not always offensive to reason. In addition, we cannot be sure, more broadly, that our guessed hypotheses are plausible (even if we know that looking – in advance – for plausibility is a good and wise human heuristic); indeed, an implausible hypothesis can later turn out to be plausible.
In the last part of the article I will argue that if we wish to naturalize the logic of abductive processes and its special consequence relation, we should refer to the following main aspects: “optimization of situatedness”, “maximization of changeability” of both input and output, and high “information-sensitiveness”. Furthermore, I will point out that a logic of abduction must acknowledge the importance of keeping a record of the “past life” of abductive inferential praxes, in contrast to traditional demonstrative ideal systems, which are prototypically characterized by what I call “maximization of memorylessness”.

Abstract: Publication date: May 2016
Source: Journal of Applied Logic, Volume 15
Author(s): Amnon Rosenmann
We present a novel approach, which is based on multiple-valued logic (MVL), to the verification and analysis of digital hardware designs, which extends the common ternary or quaternary approaches for simulations. The simulations which are performed in the more informative MVL setting reveal details which are either invisible or harder to detect through binary or ternary simulations. In equivalence verification, detecting different behavior under MVL simulations may lead to the discovery of a genuine binary non-equivalence or to a qualitative gap between two designs. The value of a variable in a simulation may hold information about its degree of truth and its “place of birth” and “date of birth”. Applications include equivalence verification, initialization, assertions generation and verification, partial control on the flow of data by prioritizing and block-oriented simulations. Much of the paper is devoted to theoretical aspects behind the MVL approach, including the reason for choosing a specific algebra for computations and the introduction of the notions of De Morgan Canonical Form and of verification complexity of Boolean expressions. Two basic simulation-based algorithms are presented, one for satisfying and verifying combinational designs and the other for equivalence verification of sequential designs.
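As a rough illustration of how binary simulation embeds into a multiple-valued setting, here is the common min/max lattice choice of MVL connectives; this is a generic textbook algebra, not necessarily the specific algebra the paper selects:

```python
# Values 0..N-1 ordered by "degree of truth"; AND/OR as min/max and NOT as
# order reversal -- a standard MVL choice, not the paper's chosen algebra.
N = 4  # e.g. {0: false, 1: weakly false, 2: weakly true, 3: true}

def mvl_and(a, b): return min(a, b)
def mvl_or(a, b):  return max(a, b)
def mvl_not(a):    return (N - 1) - a

# Restricted to the extreme values {0, N-1}, these collapse to ordinary
# Boolean gates, which is why binary simulation results embed into MVL ones.
print(mvl_and(3, 2), mvl_or(0, 1), mvl_not(2))  # -> 2 1 1
```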

Abstract: Publication date: December 2015
Source: Journal of Applied Logic, Volume 13, Issue 4, Part 1
Author(s): Laurence Cholvy
This paper presents a version of belief function theory in which masses are assigned to propositional formulas and which allows the modeler to consider integrity constraints. It also provides three combination rules which apply within this framework. It proves that the initial version of belief function theory and its extension to non-exclusive hypotheses are two particular cases of this proposal. It finally shows that, even if this framework is not more expressive than belief function theory as defined by Dempster and Shafer, its interest resides in the fact that it offers the modeler a rich language in which to express their beliefs, namely the propositional language.
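For orientation, the standard Dempster–Shafer belief and plausibility measures that such frameworks build on can be sketched in a few lines of Python, with focal elements represented as sets of possible worlds (a generic illustration, not the paper's extended framework):

```python
def belief(mass, a):
    """Bel(A): total mass of focal sets contained in A."""
    return sum(m for b, m in mass.items() if b <= a)

def plausibility(mass, a):
    """Pl(A): total mass of focal sets consistent with A (non-empty overlap)."""
    return sum(m for b, m in mass.items() if b & a)

# A toy mass function over possible worlds {1, 2, 3}; focal elements play
# the role of (sets of models of) propositional formulas.
mass = {frozenset({1}): 0.5, frozenset({1, 2}): 0.3, frozenset({1, 2, 3}): 0.2}
a = frozenset({1, 2})
print(belief(mass, a), plausibility(mass, a))  # Bel(A) <= Pl(A) always holds
```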

Abstract: Publication date: December 2015
Source: Journal of Applied Logic, Volume 13, Issue 4, Part 1
Author(s): Henrietta Eyre, Jonathan Lawry
A Dempster–Shafer theory based model of assertion is proposed for multi-agent communications so as to capture both epistemic and strategic uncertainty. Treating assertion as a choice problem, we argue that for complex multi-agent communication systems, individual agents will only tend to have sufficient information to allow them to formulate imprecise strategies for choosing between different possible true assertions. In a propositional logic setting, an imprecise assertion strategy is defined as a functional mapping between a valuation and a set of true sentences, where the latter is assumed to contain the optimal assertion given that particular state of the world. Uncertainty is then quantified in terms of probability distributions defined on the joint space of valuations and strategies, naturally leading to Dempster–Shafer belief and plausibility measures on sets of possible assertions. This model is extended so as to include imprecise valuations and to provide a meta-level treatment of weak and strong assertions. As a case study, we consider the application of our proposed assertion models to the problem of choosing between a number of different vague descriptions, in the context of both epistemic and supervaluationist approaches to vagueness.

Abstract: Publication date: May 2016
Source: Journal of Applied Logic, Volume 15
Author(s): B.O. Akinkunmi
Logical theories have been developed which have allowed temporal reasoning about eventualities (à la Galton) such as states, processes, actions, events and complex eventualities such as sequences and recurrences of other eventualities. This paper presents the problem of coincidence within the framework of a first order logical theory formalizing the temporal multiple recurrence of two sequences of fixed-duration eventualities, and presents a solution to it. The coincidence problem is described as: if two complex eventualities (or eventuality sequences) consisting respectively of component eventualities x0, x1, …, xr and y0, y1, …, ys both recur over an interval k, and all eventualities are of fixed durations, is there a subinterval of k over which the incidences of xt and yu, for 0 ≤ t ≤ r and 0 ≤ u ≤ s, coincide? The solution presented here formalizes the intuition that a solution can be found by temporal projection over a cycle of the multiple recurrence of both sequences.
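The intuition behind projection over a cycle can be sketched computationally: since both sequences have fixed total durations, their joint behaviour repeats with period lcm of the two, so a coincidence exists iff one occurs within that single cycle. A hypothetical Python illustration (the function names are mine, not the paper's):

```python
from math import gcd

def occupant(durations, t):
    """Index of the component eventuality active at time t, for a sequence of
    fixed-duration eventualities recurring with period sum(durations)."""
    t %= sum(durations)
    for i, d in enumerate(durations):
        if t < d:
            return i
        t -= d

def coincidence(xs, ys, t_target, u_target):
    """Scan one full cycle of the joint recurrence (lcm of the two periods)
    for an instant where x[t_target] and y[u_target] are both in progress."""
    px, py = sum(xs), sum(ys)
    period = px * py // gcd(px, py)
    for t in range(period):
        if occupant(xs, t) == t_target and occupant(ys, t) == u_target:
            return t
    return None  # no coincidence anywhere, by periodicity

# Two recurring sequences with periods 3 and 4; their joint behaviour
# repeats every lcm(3, 4) = 12 time units.
print(coincidence([1, 2], [2, 2], 1, 1))  # -> 2
```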

Abstract: Publication date: May 2016
Source: Journal of Applied Logic, Volume 15
Author(s): Tahel Ronel, Alena Vencovská
We investigate the notion of a signature in Polyadic Inductive Logic and study the probability functions satisfying the Principle of Signature Exchangeability. We prove a representation theorem for such functions on binary languages and show that they satisfy a binary version of the Principle of Instantial Relevance. We discuss polyadic versions of the Principle of Instantial Relevance and Johnson's Sufficientness Postulate.

Abstract: Publication date: December 2015
Source: Journal of Applied Logic, Volume 13, Issue 4, Part 1
Author(s): Nico Potyka, Christoph Beierle, Gabriele Kern-Isberner
Coping with uncertain knowledge and changing beliefs is essential for reasoning in dynamic environments. We generalize an approach to adjust probabilistic belief states by use of the relative entropy in a propositional setting to relational languages, leading to a concept for the evolution of relational probabilistic belief states. As a second contribution of this paper, we present a method to compute the corresponding belief state changes by considering a dual problem and present first application and experimental results. The computed belief state usually factorizes and we explain how the factorization can be exploited to store the belief state more compactly and to simplify its computation by exploiting equivalences of worlds. Finally, we present results on the computational complexity of determining equivalence classes.
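For reference, the relative entropy (Kullback–Leibler divergence) that drives such belief adjustment is the following standard quantity; this sketch illustrates only the measure itself, not the paper's relational generalization or its dual optimization method:

```python
import math

def relative_entropy(p, q):
    """KL divergence D(p || q) in nats: how far the adjusted belief state p
    has moved from the prior q; zero iff the two distributions coincide."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

prior = [0.25, 0.25, 0.25, 0.25]
posterior = [0.4, 0.3, 0.2, 0.1]
print(round(relative_entropy(posterior, prior), 4))  # -> 0.1064
```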

Abstract: Publication date: May 2016
Source: Journal of Applied Logic, Volume 15
Author(s): José Luis Castiglioni, Hernán Javier San Martín
Let us write ℓGuf for the category whose objects are lattice-ordered abelian groups (l-groups for short) with a strong unit and finite prime spectrum, endowed with a collection of Archimedean elements, one for each prime l-ideal, which satisfy certain properties, and whose arrows are l-homomorphisms with additional structure. In this paper we show that a functor which assigns to each object (A, û) ∈ ℓGuf the prime spectrum of A, and to each arrow f : (A, û) → (B, v̂) in ℓGuf the naturally induced p-morphism, has a left adjoint.

Abstract: Publication date: December 2015
Source: Journal of Applied Logic, Volume 13, Issue 4, Part 1
Author(s): Vaishak Belle, Hector J. Levesque
Location estimation is a fundamental sensing task in robotic applications, where the world is uncertain, and sensors and effectors are noisy. Most systems make various assumptions about the dependencies between state variables, and especially about how these dependencies change as a result of actions. Building on a general framework by Bacchus, Halpern and Levesque for reasoning about degrees of belief in the situation calculus, and a recent extension to it for continuous probability distributions, in this paper we illustrate location estimation in the presence of a rich theory of actions using examples. The formalism also allows specifications with incomplete knowledge and strict uncertainty, as a result of which the agent's initial beliefs need not be characterized by a unique probability distribution. Finally, we show that while actions might affect prior distributions in nonstandard ways, suitable posterior beliefs are nonetheless entailed as a side-effect of the overall specification.

Abstract: Publication date: Available online 1 October 2015
Source: Journal of Applied Logic
Author(s): Eric Raidl
This paper presents a progic, or probabilistic logic, in the sense of Haenni et al. [8]. The progic presented here is based on Bayesianism, like the progic discussed by Williamson [15]. However, the underlying generalised Bayesianism differs from the objective Bayesianism used by Williamson in the calibration norm and in the liberalisation and interpretation of the reference probability in the norm of equivocation. As a consequence, the updating dynamics of the two Bayesianisms differ essentially. Whereas objective Bayesianism is based on a probabilistic re-evaluation, orthodox Bayesianism is based on a probabilistic revision. I formulate a generalised and iterable orthodox Bayesian revision dynamics. This allows us to define an updating procedure for the generalised Bayesian progic. The paper compares the generalised Bayesian progic and Williamson's objective Bayesian progic in strength, in update dynamics, and with respect to language (in)sensitivity.

Abstract: Publication date: Available online 3 October 2015
Source: Journal of Applied Logic
Author(s): S.A. Selesnick, J.P. Rawling
Ordinary quantum logic has well known pathologies rendering it useless for the purposes of computation. However, loosely related logics, based upon variants of Girard's Linear Logic, have been found useful in the context of quantum computation. In one sense, the use of such computational schemes affords a meta level view of the possible provenance of certain expressions not otherwise apparent. Since such logics are presumed to encapsulate the essential behavior of quantum “resources” we may entertain the question as to whether this logical or computational approach could have any bearing upon quantum physics itself. In this article we address the question of the genesis of certain fundamental Lagrangians, namely those occurring in the standard model. If a certain set of sentences in a logic are added to the set of axioms of the logic the resulting structure is generally called a theory by logicians. In this paper we shall introduce a version of such a logic and deduce some of its physical ramifications. Namely, we will show that there is a single type of sequent that, when added to the logical calculus at hand as an axiom, generates in the theory so defined, series whose leading terms match exactly the Yang–Mills Lagrangian density (including a gauge fixing term) and also the Einstein–Hilbert Lagrangian density, most of the remaining terms being negligible at low intensities in both cases. By expanding the logic somewhat, in the manner of second quantization, we are able also to give an account of interaction terms in the Yang–Mills case. This shows that there is a common form ancestral to all the Lagrangians of the standard model in the ensemble of “evolutionary” trees provided by deductions in a certain clearly specified logic, and reveals the differences between the Yang–Mills and gravitational kinetic terms. Thus we acquire a new paradigm for “unification” of the fundamental forces at the level of the underlying logic.

Abstract: Publication date: Available online 11 September 2015
Source: Journal of Applied Logic
Author(s): Nissim Francez
The paper highlights a difference between Model-Theoretic Semantics (MTS) and Proof-Theoretic Semantics (PTS) regarding the meanings of NL-realisable determiners. While MTS uses conservativity as a major filter on GQs serving as NL-realisable determiners' meanings, in PTS conservativity fails to serve as such a filter. According to the PTS rendering of conservativity, all determiners are conservative. Instead of conservativity, PTS methodology focuses on other criteria, originating from the inferential role of determiners as captured by the introduction and elimination rules of a meaning-conferring ND-system. The criteria considered in the paper are harmony, stability and purity of rules. The paper presents two examples of conservative “determiners” that can be excluded by the suggested criteria.

Abstract: Publication date: Available online 10 September 2015
Source: Journal of Applied Logic
Author(s): Fayçal Touazi, Claudette Cayrol, Didier Dubois
This paper presents the extension of results on reasoning with totally ordered belief bases to the partially ordered case. The idea is to reason from logical bases equipped with a partial order expressing relative certainty and to construct a partially ordered deductive closure. The difficult point lies in the fact that equivalent definitions in the totally ordered case are no longer equivalent in the partially ordered one. At the syntactic level we can either use a language expressing pairs of related formulas and axioms describing the properties of the ordering, or use formulas with partially ordered symbolic weights attached to them in the spirit of possibilistic logic. A possible semantics consists in assuming the partial order on formulas stems from a partial order on interpretations. It requires the capability of inducing a partial order on subsets of a set from a partial order on its elements so as to extend possibility theory functions. Among different possible definitions of induced partial order relations, we select the one generalizing necessity orderings (closely related to epistemic entrenchments). We study such a semantic approach inspired from possibilistic logic, and show its limitations when relying on a unique partial order on interpretations. We propose a more general sound and complete approach to relative certainty, inspired by conditional modal logics, in order to get a partial order on the whole propositional language. Some links between several inference systems, namely conditional logic, modal epistemic logic and non-monotonic preferential inference are established. Possibilistic logic with partially ordered symbolic weights is also revisited and a comparison with the relative certainty approach is made via mutual translations.

Abstract: Publication date: Available online 28 September 2015
Source:Journal of Applied Logic
Author(s): Jürgen Landes, Jon Williamson
This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers.

Abstract: Publication date: Available online 28 September 2015
Source:Journal of Applied Logic
Author(s): Jonathan Lawry
We describe an integrated approach to vagueness and uncertainty within a propositional logic setting, based on a combination of three-valued logic and probability. Three-valued valuations are employed in order to model borderline cases explicitly, and in this context we give an axiomatic characterisation of two well-known three-valued models: supervaluations and Kleene valuations. We then demonstrate the close relationship between Kleene valuations and a sub-class of supervaluations. Belief pairs are lower and upper measures on the sentences of the language, generated from a probability distribution defined over a finite set of three-valued valuations. We describe links between these measures and other uncertainty theories, and we show the close relationship between Kleene belief pairs and a sub-class of supervaluation belief pairs. Finally, a probabilistic approach to conditioning is explored within this framework.
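As a brief illustration of one of the two models characterised in the abstract (a standard sketch of the strong Kleene connectives, not code from the paper; the numeric encoding is my own), the third truth value marking a borderline case propagates through the connectives as follows:

```python
# Strong Kleene three-valued connectives, with truth values ordered
# F < U < T, where U marks a borderline case. Encoding is illustrative.
F, U, T = 0, 1, 2

def kleene_not(a):
    return 2 - a          # swaps T and F, leaves U fixed

def kleene_and(a, b):
    return min(a, b)      # conjunction is the meet under F < U < T

def kleene_or(a, b):
    return max(a, b)      # disjunction is the join

# A borderline conjunct stays borderline unless overridden by F:
print(kleene_and(T, U))   # → 1 (U)
print(kleene_and(F, U))   # → 0 (F)
```

The min/max formulation makes the lattice structure behind Kleene valuations explicit, which is the feature that links them to the sub-class of supervaluations discussed in the paper.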

Abstract: Publication date: Available online 30 September 2015
Source:Journal of Applied Logic
Author(s): Teddy Groves
In [39], Imre Lakatos influentially argued that Carnapian inductive logic was a degenerate research programme. This paper argues that Lakatos's criticism was mistaken and that, according to Lakatos's own standards, Carnapian inductive logic was progressive rather than degenerate.

Abstract: Publication date: Available online 30 September 2015
Source:Journal of Applied Logic
Author(s): Angelo Gilio, Niki Pfeifer, Giuseppe Sanfilippo
We study probabilistically informative (weak) versions of transitivity by using suitable definitions of defaults and negated defaults in the setting of coherence and imprecise probabilities. We represent p-consistent sequences of defaults and/or negated defaults by g-coherent imprecise probability assessments on the respective sequences of conditional events. Moreover, we prove the coherent probability propagation rules for Weak Transitivity and the validity of selected inference patterns by proving p-entailment of the associated knowledge bases. Finally, we apply our results to study selected probabilistic versions of classical categorical syllogisms and construct a new version of the square of opposition in terms of defaults and negated defaults.

Abstract: Publication date: Available online 30 September 2015
Source:Journal of Applied Logic
Author(s): Pavel Janda
I will propose an alternative philosophical approach to the representation of uncertain doxastic states. I will argue that the current account of measuring inaccuracy of uncertain doxastic states is inadequate for Belnap's four-valued logic. Specifically, a situation can be found in which either an inaccuracy measure returns a completely wrong result or an agent's inaccuracy score is inadequate relative to the mistake in her doxastic attitude. This will motivate an alternative representation of uncertain doxastic states based on ordered pairs. I will describe a possible inaccuracy measure that is suitable for ordered pairs, and I will show that it has all the qualities that are required for an inaccuracy measure to be legitimate. Finally, I will introduce conditions of rationality for uncertain doxastic states represented by ordered pairs.

Abstract: Publication date: Available online 28 August 2015
Source:Journal of Applied Logic
Author(s): Manfred Eppe, Mehul Bhatt
We present an answer set programming realization of the h-approximation (HPX) theory [8] as an efficient and provably sound reasoning method for epistemic planning and projection problems that involve postdictive reasoning. The efficiency of HPX stems from an approximate knowledge state representation that involves only a linear number of state variables, as compared to an exponential number for theories that utilize a possible-worlds based semantics. This yields a relatively low computational complexity, i.e., the planning problem is in NP under reasonable restrictions, at the cost that HPX is incomplete. In this paper, we use the implementation of HPX to investigate this incompleteness and present an empirical evaluation of the solvable fragment and its performance. We find that the solvable fragment of HPX is indeed fairly large: on average, about 85% of the considered projection problem instances can be solved, compared to a PWS-based approach with exponential complexity as baseline. In addition to the empirical results, we demonstrate how HPX can be applied in a real robotic control task within a smart home, where our scenario illustrates the usefulness of postdictive reasoning for achieving error tolerance through abnormality detection in a high-level decision-making task.

Abstract: Publication date: Available online 28 August 2015
Source:Journal of Applied Logic
Author(s): Manfred Eppe, Mehul Bhatt
We propose an approximation of the possible worlds semantics (PWS) of knowledge with support for postdiction – a fundamental inference pattern for diagnostic reasoning and explanation tasks in a wide range of real-world applications such as cognitive robotics, visual perception for cognitive vision, ambient intelligence and smart environments. We present the formal framework, an operational semantics, and an analysis of the resulting soundness and completeness results. The advantage of our approach is that only a linear number of state variables is required to represent an agent's knowledge state. This is achieved by modeling knowledge as the history of a single approximate state, instead of using an exponential number of possible worlds as in Kripke semantics. That is, we add a temporal dimension to the knowledge representation, which facilitates efficient postdiction. Since we consider knowledge histories, we call our theory h-approximation (HPX). Due to the linear number of state variables, HPX features a comparably low computational complexity. Specifically, we show that HPX can solve the projection problem in polynomial (tractable) time. It can solve planning problems in NP, whereas, e.g., for the action language A_K [48] this problem is Σ₂^P-complete. In addition to the temporal dimension of knowledge, our theory supports concurrent acting and sensing, and is in this sense more expressive than existing approximations.

Abstract: Publication date: Available online 30 June 2015
Source:Journal of Applied Logic
Author(s): Yan Zhang, Zhaohui Zhu, Jinjin Zhang, Yong Zhou
In the framework of logic labelled transition systems, a variant of weak ready simulation has been presented by Lüttgen and Vogler. It has been shown that this behavioural preorder is the largest precongruence w.r.t. parallel and conjunction composition satisfying some desirable properties. This paper offers a ground-complete axiomatization for this precongruence over recursion-free processes in the calculus CLL_R. Compared with the usual inference systems for process calculi, in addition to axioms about process operators, this system contains a number of axioms characterizing the interaction between process operators and logical operators.

Abstract: Publication date: Available online 16 June 2015
Source:Journal of Applied Logic
Author(s): E. Howarth, J.B. Paris, A. Vencovská
Within the framework of (Unary) Pure Inductive Logic we investigate four possible formulations of a probabilistic principle of analogy based on a template considered by Paul Bartha in the Stanford Encyclopedia of Philosophy [1] and give some characterizations of the probability functions which satisfy them. In addition we investigate an alternative interpretation of analogical support, also considered by Bartha, based not on the enhancement of probability but on the creation of possibility.

Abstract: Publication date: Available online 9 June 2015
Source:Journal of Applied Logic
Author(s): Scarlett Liu, Mark Whitty
Precise yield estimation in vineyards using image processing techniques has only been demonstrated conceptually on a small scale. Expanding this scale requires significant computational power where, by necessity, only small parts of the images of vines contain useful features. This paper introduces an image processing algorithm combining colour and texture information and the use of a support vector machine, to accelerate fruit detection by isolating and counting bunches in images. Experiments carried out on two varieties of red grapes (Shiraz and Cabernet Sauvignon) demonstrate an accuracy of 88.0% and recall of 91.6%. This method is also shown to remove the restriction on the field of view and background which plagued existing methods and is a first step towards precise and reliable yield estimation on a large scale.
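For readers unfamiliar with the reported metrics, the figures above can be related to detection counts in the usual way (a generic sketch, not the paper's evaluation code; the counts below are invented so that the ratios roughly match the reported 88.0% and 91.6%):

```python
# Detection metrics from raw counts: true positives (correctly detected
# bunches), false positives (spurious detections), false negatives
# (missed bunches). Counts here are hypothetical.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of actual bunches detected
    return precision, recall

p, r = precision_recall(tp=88, fp=12, fn=8)
print(p)  # → 0.88
print(r)  # → 0.9166...
```

The trade-off between the two is what matters for yield estimation: a high-precision, low-recall detector undercounts bunches, while the reverse overcounts.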

Abstract: Publication date: Available online 18 March 2015
Source:Journal of Applied Logic
Author(s): Christian Eichhorn, Gabriele Kern-Isberner
OCF-networks provide the possibility to combine qualitative information expressed by rankings of (conditional) formulas with the strong structural information of a network, in this respect being a qualitative variant of the better known Bayesian networks. As with Bayesian networks, a global ranking function can be calculated quickly and efficiently from the locally distributed information, and this local representation significantly reduces the exponentially high complexity of the semantical ranking approach. This qualifies OCF-networks for practical applications. However, in such applications the provided ranking information may not be in the format needed for representation by an OCF-network, or some values may simply be missing. In this paper, we present techniques for filling in the missing values using methods of inductive reasoning, and we elaborate on formal properties of OCF-networks.

Abstract: Publication date: Available online 18 March 2015
Source:Journal of Applied Logic
Author(s): Parot Ratnapinda, Marek J. Druzdzel
We compare three approaches to learning numerical parameters of discrete Bayesian networks from continuous data streams: (1) the EM algorithm applied to all data, (2) the EM algorithm applied to data increments, and (3) the online EM algorithm. Our results show that learning from all data at each step, whenever feasible, leads to the highest parameter accuracy and model classification accuracy. When facing computational limitations, incremental learning approaches are a reasonable alternative. While the differences in speed between incremental algorithms are not large (online EM is slightly slower), for all but small data sets online EM tends to be more accurate than incremental EM.
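The batch/incremental/online distinction above can be made concrete with a toy analogy (my own illustration, not the paper's EM algorithms): updating a simple mean parameter under the three regimes. For a plain mean all three agree exactly; the EM variants compared in the paper differ precisely because the E-step responsibilities change as parameters are re-estimated.

```python
# Three update regimes for a running parameter estimate, mirroring the
# batch / incremental / online distinction. Illustrative only: EM on a
# Bayesian network is not a simple mean, but the data-access pattern is
# the same.
def batch_estimate(all_data):
    return sum(all_data) / len(all_data)           # refit on all data each step

def incremental_estimate(prev_est, prev_n, chunk):
    total = prev_est * prev_n + sum(chunk)         # fold in one data increment
    return total / (prev_n + len(chunk))

def online_estimate(prev_est, n, x):
    return prev_est + (x - prev_est) / (n + 1)     # one data point at a time

data = [1, 0, 1, 1, 0, 1]
b = batch_estimate(data)
inc = incremental_estimate(batch_estimate(data[:3]), 3, data[3:])
on = batch_estimate(data[:1])
for i, x in enumerate(data[1:], start=1):
    on = online_estimate(on, i, x)
# b, inc and on coincide here; with latent variables they generally do not
```

The paper's finding fits this picture: batch access to all data gives the most accurate fit, while the cheaper increment-based regimes trade some accuracy for bounded computation.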

Abstract: Publication date: Available online 18 March 2015
Source:Journal of Applied Logic
Author(s): Ofer Arieli
In this paper we incorporate integrity constraints in abstract argumentation frameworks. Two types of semantics are considered for these constrained frameworks: conflict-free and conflict-tolerant. The first one is a conservative extension of standard approaches for giving coherence-based semantics to argumentation frameworks, where in addition certain constraints must be satisfied. A primary consideration behind this approach is a dismissal of any contradiction between accepted arguments of the constrained frameworks. The second type of semantics preserves contradictions, which are regarded as meaningful and sometimes even critical for the conclusions. We show that this approach is particularly useful for assuring the existence of non-empty extensions and for handling contradictions among the constraints, in which cases conflict-free extensions are not available. Both types of semantics are represented by propositional sets of formulas and are evaluated in the context of three-valued and four-valued logics. Among other results, we show a one-to-one correspondence between the models of these theories, the extensions, and the labelings of the underlying constrained argumentation frameworks.

Abstract: Publication date: Available online 18 March 2015
Source:Journal of Applied Logic
Author(s): Nicholas Mattei, Judy Goldsmith, Andrew Klapper, Martin Mundhenk
We study the computational complexity of bribery and manipulation schemes for sports tournaments with uncertain information. We introduce a general probabilistic model for multi-round tournaments and consider several special types of tournament: challenge (or caterpillar); cup; and round robin. In some ways, tournaments are similar to the sequential pair-wise, cup and Copeland voting rules. The complexity of bribery and manipulation are well studied for elections, usually assuming deterministic information about votes and results. We assume that for tournament entrants i and j, the probability that i beats j and the costs of lowering each probability by fixed increments are known to the manipulators. We provide complexity analyses for several problems related to manipulation and bribery for the various types of tournaments. Complexities range from probabilistic log space to NP^PP. This shows that the introduction of uncertainty into the reasoning process drastically increases the complexity of bribery problems in some instances.
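The probabilistic cup model can be sketched as follows (a standard recursion over the bracket, not taken from the paper; the function name, bracket convention and probability matrix are my own):

```python
# Winner probabilities in a single-elimination cup, given a matrix P
# where P[i][j] is the probability that entrant i beats entrant j.
# Assumes the bracket pairs players in list order and that match
# outcomes are independent.
def cup_win_probs(players, P):
    if len(players) == 1:
        return {players[0]: 1.0}
    half = len(players) // 2
    left = cup_win_probs(players[:half], P)    # winner distribution, left half
    right = cup_win_probs(players[half:], P)   # winner distribution, right half
    probs = {}
    for i, pi in left.items():
        probs[i] = pi * sum(pj * P[i][j] for j, pj in right.items())
    for j, pj in right.items():
        probs[j] = pj * sum(pi * P[j][i] for i, pi in left.items())
    return probs

# Four hypothetical entrants, stronger entrants having lower indices:
P = [[0.5, 0.6, 0.7, 0.8],
     [0.4, 0.5, 0.6, 0.7],
     [0.3, 0.4, 0.5, 0.6],
     [0.2, 0.3, 0.4, 0.5]]
probs = cup_win_probs([0, 1, 2, 3], P)
# e.g. entrant 0: 0.6 * (0.6*0.7 + 0.4*0.8) = 0.444
```

A bribery question in this model then asks how cheaply these win probabilities can be shifted by paying to lower individual P[i][j] entries, which is where the hardness results in the abstract arise.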