Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.
Abstract: The μ-calculus is one of the most important logics for describing specifications of transition systems. It has been extensively explored for formal verification in model checking due to its exceptional balance between expressiveness and algorithmic properties. From the perspective of evolving systems and knowledge, one may want to discard some atoms (elements) that have become irrelevant or unnecessary in a specification; one may also need to know what makes something true, or the minimal condition under which something holds. This paper aims to address these scenarios for the μ-calculus in terms of knowledge forgetting. In particular, it proposes a notion of forgetting based on a generalized bisimulation and explores the semantic and logical properties of forgetting, including some reasoning complexity results. It also shows that forgetting can be employed to perform knowledge update. PubDate: 2023-02-01
Abstract: In this paper, we study arbitrary infinite binary information systems, each of which consists of an infinite set called the universe and an infinite set of two-valued functions (attributes) defined on the universe. We consider the notion of a problem over an information system, which is described by a finite number of attributes and a mapping associating a decision with each tuple of attribute values. As algorithms for problem solving, we use deterministic and nondeterministic decision trees. As measures of time and space complexity, we study the depth and the number of nodes in the decision trees. In the worst case, with the growth of the number of attributes in the problem description, (i) the minimum depth of deterministic decision trees grows either almost logarithmically or linearly, (ii) the minimum depth of nondeterministic decision trees is either bounded from above by a constant or grows linearly, (iii) the minimum number of nodes in deterministic decision trees has either polynomial or exponential growth, and (iv) the minimum number of nodes in nondeterministic decision trees has either polynomial or exponential growth. Based on these results, we divide the set of all infinite binary information systems into five complexity classes and study, for each class, issues related to the time-space trade-off for decision trees. PubDate: 2023-02-01
Abstract: The support vector machine (SVM) is one of the best-known supervised machine learning models. Standard SVM models deal with the situation in which the exact values of the data points are known. This paper studies the SVM model when the data set contains uncertain or mislabelled data points. To ensure a small probability of misclassification for the uncertain data, a chance constrained conic-segmentation SVM model is proposed for multiclass classification. Based on the data set, a mixed integer programming formulation for the chance constrained conic-segmentation SVM is derived. Kernelization of the chance constrained conic-segmentation SVM model is also exploited for nonlinear classification. A geometric interpretation is presented to show how the chance constrained conic-segmentation SVM works on uncertain data. Finally, experimental results are presented to demonstrate the effectiveness of the chance constrained conic-segmentation SVM on both artificial and real-world data. PubDate: 2023-01-16
Abstract: Sensitivity analysis deals, in general, with the question of how changes in a model's input data affect its output data. In the context of optimization problems, such an analysis could, for instance, address how changes in capacity constraints affect the optimal solution value. Although well established in the domain of linear programming, sensitivity analysis approaches for combinatorial optimization problems are model-specific, limited in scope, and not applicable to practical optimization problems. To overcome these limitations, Schulte et al. developed the concept of bilevel innovization. By using evolutionary bilevel optimization in combination with data mining and visualization techniques, bilevel innovization provides decision-makers with deeper insights into the behavior of the optimization model and supports decision-making related to model building and configuration. Originally introduced in the field of evolutionary computation, bilevel innovization has most recently been proposed as an approach to sensitivity analysis for combinatorial problems in general. Building on previous work on bilevel innovization, our paper illustrates this concept as a tool for sensitivity analysis by providing a comprehensive analysis of the generalized assignment problem. Furthermore, we investigate how different algorithms for solving the combinatorial problem affect the insights gained by the sensitivity analysis, thus evaluating the robustness and reliability of the sensitivity analysis results. PubDate: 2023-01-14
Abstract: A simple search problem is studied in which a binary n-tuple is to be found in a list by sequential bit comparisons with cost. The problem can be solved (for small n) using dynamic programming. We show how the “bottom up” part of the algorithm can be organized by means of Formal Concept Analysis. PubDate: 2022-12-17
Abstract: Today, artificial intelligence systems support efficient management in different fields of social activity. In particular, congestion control in modern networks seems to be impossible without proper mathematical models of traffic flow assignment. Thus, the network design problem can be cast as a Stackelberg game with independent lower-level drivers acting in a non-cooperative manner to minimize their individual costs. In turn, the upper-level decision-maker seeks to minimize overall travel time in the network by investing in its capacity. Hence, the decision-maker faces a challenge with a hierarchical structure whose solution is important due to its influence on the sustainable development of modern large cities. However, it is well known that a bilevel programming problem is often strongly NP-hard, while the hierarchical optimization structure of a bilevel problem raises such difficulties as non-convexity and discontinuity. The present paper is devoted to the network design problem with affine cost functions. First, we obtain exact optimality conditions for the network design problem in the case of a single-commodity network with non-interfering routes. Second, we show that the obtained conditions can be exploited as optimization strategies for solving the network design problem in the case of an arbitrary network with affine cost functions. These findings give fresh managerial insights to traffic engineers dealing with network design. PubDate: 2022-12-13
Abstract: In this paper, we present a set of algorithms to display a Hasse diagram of triadic concepts in Triadic Concept Analysis and to compute triadic generators and association rules, including implications, without any need for a preprocessing step that converts the triadic representation into a dyadic one. Our contributions are as follows. First, we adapt the iPred algorithm for precedence link computation in concept lattices to the triadic framework. Then, new algorithms are proposed to compute triadic generators by extending the notion of faces, in order to further calculate association rules. Finally, an empirical study is conducted, mainly to show the performance of our prototype on triadic contexts and to estimate the cost of each of its components. PubDate: 2022-12-01
Abstract: A fundamental task in formal concept analysis is the enumeration of formal concepts. Among the fastest algorithms for this task are those based on Close-by-One (CbO), a recursive tree-search algorithm that uses the lexicographical order of formal concepts to ensure that each formal concept is enumerated exactly once. State-of-the-art algorithms based on CbO, e.g. FCbO, In-Close4, and In-Close5, employ several techniques, which we call pruning, to avoid some unnecessary computations. However, the number of formal concepts can be exponential w.r.t. the dimension of the input data. Therefore, the algorithms do not scale well, and large datasets become intractable. To address this weakness, several parallel and distributed algorithms have been proposed. We propose four new CbO-based algorithms intended for Apache Spark or a similar programming model and show how the pruning can be incorporated into them. We experimentally evaluate the impact of the pruning and demonstrate the scalability of the new algorithms. PubDate: 2022-12-01
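The sequential CbO procedure that these algorithms build on can be sketched as follows. This is a minimal illustration of Close-by-One with its canonicity test, not the paper's Spark-based algorithms; the encoding of the context as a list of per-object attribute sets is an assumption made for the example.

```python
def intent_of(extent, context, n_attrs):
    """Attributes shared by all objects in the extent (all attrs if empty)."""
    common = set(range(n_attrs))
    for g in extent:
        common &= context[g]
    return common

def close_by_one(context, n_attrs):
    """Enumerate all formal concepts (extent, intent) of a binary context.

    context: list of attribute sets, one per object. The canonicity test
    guarantees each concept is generated exactly once, in the
    lexicographical order of intents.
    """
    concepts = []

    def generate(extent, intent, y):
        concepts.append((frozenset(extent), frozenset(intent)))
        for j in range(y, n_attrs):
            if j in intent:
                continue
            new_extent = {g for g in extent if j in context[g]}
            new_intent = intent_of(new_extent, context, n_attrs)
            # Canonicity test: reject if the closure added an attribute < j
            if any(a < j and a not in intent for a in new_intent):
                continue
            generate(new_extent, new_intent, j + 1)

    top_extent = set(range(len(context)))
    generate(top_extent, intent_of(top_extent, context, n_attrs), 0)
    return concepts
```

For instance, the 3×3 context `[{0, 1}, {0, 2}, {1, 2}]` yields all 8 concepts of its lattice, illustrating the exponential worst case the abstract mentions.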
Abstract: In this paper we investigate, from a graph-theoretical point of view, the notion of acceptability in Dung semantics for abstract argumentation frameworks. We advance the state of the art by introducing and analyzing combinatorial structures exploited for taming, in particular cases, the exponential blow-up of acceptance algorithms. We conclude the paper with a series of observations that deepen the intuition regarding the practical use of Dung's acceptance-based semantics. PubDate: 2022-12-01
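The exponential blow-up referred to above is already visible in the most naive approach to Dung acceptability. The sketch below is a generic brute-force enumeration of admissible sets (conflict-free sets defending all their members), given only as background; it is not the combinatorial structures developed in the paper, and the pair-set encoding of the attack relation is an assumption for the example.

```python
from itertools import combinations

def conflict_free(att, S):
    """No argument in S attacks another argument in S."""
    return not any((a, b) in att for a in S for b in S)

def defends(att, args, S, a):
    """S attacks every attacker of a."""
    return all(any((c, b) in att for c in S)
               for b in args if (b, a) in att)

def admissible_sets(args, att):
    """All admissible sets, by checking every subset: O(2^n) subsets."""
    result = []
    for r in range(len(args) + 1):
        for S in combinations(sorted(args), r):
            S = set(S)
            if conflict_free(att, S) and all(defends(att, args, S, a) for a in S):
                result.append(frozenset(S))
    return result
```

On the chain a → b → c, the admissible sets are the empty set, {a}, and {a, c}: c alone cannot defend itself against b, but together with a it can.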
Abstract: This paper extends Semiotic-Conceptual Analysis (SCA) to provide a means for comparing and evaluating semiotic relations (i.e. sets of signs) with respect to their ability to express conceptual structures, and the efficiency with which they do so. One contributing factor for efficiency is polysemy, which refers to reusing representamens of signs in different contexts. Two different types of polysemy are identified: ambiguous polysemy, which encodes a view from part signs to compound signs, and simultaneous polysemy, for analysing how compound signs simultaneously denote more than one meaning. Two detailed examples are included. PubDate: 2022-12-01
Abstract: We propose the use of two power indices from cooperative game theory and public choice theory for ranking the attributes of closed sets, namely intents of formal concepts (or closed itemsets). The introduced indices are related to extensional concept stability and are also based on counting generators, especially those that contain a selected attribute. The introduction of such indices is motivated by so-called interpretable machine learning, which supposes that we have not only the class membership decision of a trained model for a particular object, but also a set of attributes (in the form of JSM-hypotheses or other patterns) along with the individual importance of its single attributes (or more complex constituent elements). We characterise the computation of the Shapley and Banzhaf-Penrose values of a formal concept in terms of minimal generators and their order filters, provide the reader with their properties important for computational purposes, prove related #P-completeness results, and show experimental results on model and real datasets. We also show how this approach can be applied in both supervised (classification) and unsupervised (pattern mining) settings. PubDate: 2022-11-22
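For readers unfamiliar with the Shapley value itself, the textbook definition can be computed exactly as below. This is a generic sketch over an arbitrary characteristic function, not the generator-based characterisation the paper develops, and it enumerates all coalitions, so it is exponential in the number of players.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values of a cooperative game.

    players: list of players; v: characteristic function accepting any
    iterable of players. Uses the standard weighted sum of marginal
    contributions over all coalitions not containing the player.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # Probability that coalition S precedes i in a random order
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi
```

As a sanity check, for an additive game (each player contributes a fixed weight) the Shapley value of each player is exactly its weight.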
Abstract: We propose a neural network architecture in infinite dimensional spaces for which we can show the universal approximation property. Indeed, we derive approximation results for continuous functions from a Fréchet space \(\mathfrak {X}\) into a Banach space \(\mathfrak {Y}\). These results generalise the well-known universal approximation theorem for continuous functions from \(\mathbb {R}^{n}\) to \(\mathbb {R}\), where approximation is done with (multilayer) neural networks (Cybenko, Math. Control Signals Syst. 2:303–314, 1989; Hornik et al., Neural Netw. 2:359–366, 1989; Funahashi, Neural Netw. 2:183–192, 1989; Leshno, Neural Netw. 6:861–867, 1993). Our infinite dimensional networks are constructed using activation functions that are nonlinear operators, together with affine transforms. Several examples of such activation functions are given. We furthermore show that our neural networks on infinite dimensional spaces can be projected down to finite dimensional subspaces with any desired accuracy, thus obtaining approximating networks that are easy to implement and allow for fast computation and fitting. The resulting neural network architecture is therefore applicable to prediction tasks based on functional data. PubDate: 2022-11-16
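The classical finite-dimensional theorem being generalised here can be made concrete with the well-known "staircase" construction: a one-hidden-layer sigmoid network whose units switch on, one after another, the increments of the target function over a grid. This is a minimal illustrative sketch of Cybenko-style approximation on [0, 1], not the paper's infinite dimensional architecture; the unit count and steepness are parameters chosen for the example.

```python
import math

def sigmoid(t):
    """Logistic sigmoid, clipped to avoid overflow in math.exp."""
    if t > 30:
        return 1.0
    if t < -30:
        return 0.0
    return 1.0 / (1.0 + math.exp(-t))

def make_network(f, n_units=100, steepness=5000.0):
    """One-hidden-layer sigmoid network approximating f on [0, 1].

    Unit k is switched on near x = (k - 0.5) / n_units and contributes
    the increment of f over the k-th subinterval, so the network traces
    a smoothed staircase through the values f(k / n_units).
    """
    thresholds = [(k - 0.5) / n_units for k in range(1, n_units + 1)]
    weights = [f(k / n_units) - f((k - 1) / n_units)
               for k in range(1, n_units + 1)]
    bias = f(0.0)

    def net(x):
        return bias + sum(w * sigmoid(steepness * (x - t))
                          for w, t in zip(weights, thresholds))
    return net
```

With 100 units the maximum error against a smooth target such as sin(2πx) is on the order of the grid spacing, and it shrinks as more units are added.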
Abstract: Agents use their own vocabularies to reason and talk about the world. Public signature awareness is satisfied if agents are aware of the vocabularies, or signatures, used by all agents they may eventually interact with. Multi-agent modal logics, and in particular Dynamic Epistemic Logic, rely on public signature awareness for modeling information flow in multi-agent systems. However, this assumption is not desirable for dynamic and open multi-agent systems because (1) it prevents agents from using unique signatures other agents are unaware of, (2) it prevents agents from openly extending their signatures when encountering new information, and (3) it requires that all future knowledge and beliefs of agents be bounded by the current state. We propose a new semantics for awareness that enables us to drop public signature awareness. This semantics is based on partial valuation functions and weakly reflexive relations. Dynamics for raising public and private awareness are then defined in such a way as to differentiate between becoming aware of a proposition and learning its truth value. With this, we show that knowledge and beliefs are not affected by the raising operations. PubDate: 2022-10-27 DOI: 10.1007/s10472-022-09809-y
Abstract: We prove a new property of dual hypergraphs and derive from it the Nash-solvability of the corresponding (tight) game forms. This result has been known since 1975, but the new proof is much simpler. PubDate: 2022-10-19 DOI: 10.1007/s10472-022-09820-3
Abstract: One category of vehicle routing problems involving groups of people, where there can be multiple possible drop-off locations, is the School Bus Routing Problem (SBRP). In this paper, we investigate a form of SBRP in which each passenger has a well-ordered set of preferred stops; for example, the preferences can be based on the distance to their destination. The passengers continue their journey from the stop to their destination by an alternative mode, such as walking. Different drop-off choices allow a different set of passengers to be dropped off together, affecting the total vehicle driving distance. As the transport provider and the passengers have their own objectives, there are inevitably two opposing goals (tradeoff aspects): reducing total driving distance versus reducing passenger walking distance. Unlike typical SBRP settings, we focus on practical problems where the set of passengers is not known well in advance and there is only a short time window in which to compute a solution (a real-time requirement). Hence, runtime efficiency is more important than optimality. We introduce this variant as SBRP-RT, a school bus routing problem with tradeoff aspects and real-time requirements. We propose a model with an efficient greedy local search for such problems and demonstrate its efficacy with experiments on real-world datasets, obtaining quick solutions that balance the tradeoffs. PubDate: 2022-10-12 DOI: 10.1007/s10472-022-09812-3
Abstract: In this article we formalise the notion of knowing a secret as a modality, by combining standard notions of knowledge and ignorance from modal epistemic logic. Roughly speaking, Ann knows a secret if and only if she knows it and she knows that everyone else does not know it. The main aim is to study the properties of these secretly knowing modalities. It turns out that the modalities are non-normal and are characterised by a derivation rule we call Interpolation, which is stronger than Equivalence but weaker than Monotonicity. We study the Interpolation rule and position it in the landscape of non-normal modal logics. We show that, in combination with basic axioms, it gives us a complete characterisation of the properties of the secretly knowing modalities under weak assumptions about the properties of individual knowledge, in the form of a sound and complete axiomatisation. This characterisation gives us the most basic and fundamental principles of secretly knowing. PubDate: 2022-10-10 DOI: 10.1007/s10472-022-09815-0
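The definition quoted above ("she knows it and she knows that everyone else does not know it") can be checked directly on a Kripke model. The sketch below is an illustrative model-checking fragment only, not the paper's axiomatisation; the dictionary encoding of accessibility relations and valuations is an assumption for the example.

```python
def knows(R, val, agent, p, w):
    """K_a p at world w: p holds in every world the agent considers possible."""
    return all(p in val[v] for v in R[agent][w])

def secretly_knows(R, val, agents, a, p, w):
    """a secretly knows p at w: a knows p, and in every world a considers
    possible, no other agent knows p (the definition from the abstract)."""
    if not knows(R, val, a, p, w):
        return False
    return all(not knows(R, val, b, p, v)
               for v in R[a][w]
               for b in agents if b != a)
```

In a two-world model where Ann can distinguish the worlds but Bob cannot, and p holds only in w1, Ann secretly knows p at w1 while Bob does not even know p.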
Abstract: As the number of graph-level embedding techniques increases at an unprecedented speed, questions arise about their behavior and performance when training data undergo perturbations. This is the case when an external entity maliciously alters the training data to invalidate the embedding. This paper explores the effects of such attacks on several graph datasets by applying different graph-level embedding techniques. The main attack strategy involves manipulating the training data to produce an altered model. In this context, our goal is to examine in depth the methods, resources, experimental settings, and performance results, in order to observe and study all the aspects that derive from the attack stage. PubDate: 2022-10-06 DOI: 10.1007/s10472-022-09811-4
Abstract: Recent approaches to knowledge-graph embeddings aim at connecting the quantitative data structures used in machine learning to the qualitative structures of logics. Such embeddings are of a hybrid nature: they are data models that also exhibit conceptual structures inherent to logics. One motivation for investigating embeddings is to design conceptually adequate machine learning (ML) algorithms that learn or incorporate ontologies expressed in some logic. This paper investigates a new approach to embedding ontologies into geometric models that interpret concepts by geometrical structures based on convex cones. The ontologies are assumed to be represented in an orthologic, a logic with a full (ortho)negation. As a proof of concept, this cone-based embedding was implemented within two ML algorithms for weakly supervised multi-label learning. Both algorithms rely on cones, but the first addresses ontologies expressed in classical propositional logic whereas the second addresses a weaker propositional logic, namely a weak orthologic that does not satisfy distributivity. The algorithms were evaluated and showed promising results that call for investigating other (sub)classes of cones and for developing fine-tuned algorithms based on them. PubDate: 2022-10-01 DOI: 10.1007/s10472-022-09806-1