Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.

Abstract: The Universum provides prior knowledge about the data in a learning problem to improve the generalization performance of classifiers. Several works have shown that the Universum twin support vector machine (\(\mathfrak{U}\)-TSVM) is an efficient method for binary classification problems. In this paper, we improve the \(\mathfrak{U}\)-TSVM method and propose an improved Universum twin bounded support vector machine (named IUTBSVM). By introducing different Lagrangian functions for the primal problems, we obtain new dual formulations of \(\mathfrak{U}\)-TSVM that do not require computing inverse matrices. To reduce the computational time of the proposed method, we use rectangular kernel matrices of smaller size than those of the other methods. Numerical experiments on gender classification of human faces, handwritten digit recognition, and several UCI benchmark data sets indicate that IUTBSVM is more efficient than four other algorithms, namely \(\mathfrak{U}\)-SVM, TSVM, \(\mathfrak{U}\)-TSVM, and IUTSVM, in terms of classification accuracy. PubDate: 2022-01-18

Abstract: A default consequence relation is a well-behaved collection of conditional assertions (defaults). A default conditional \(\alpha \mathbin{\mid\!\sim} \beta\) is read as 'if α, then normally β' and can be given several interpretations, including a 'size'-oriented one: 'in most α-situations, β is also true'. Typically, this asks for making the set of (α ∧ β)-worlds a 'large' subset of the α-worlds and the set of (α ∧ ¬β)-worlds a 'small' subset of the same set. Technically, this is achieved via a 'most' generalized quantifier ('most As are Bs'), and we proceed to investigate the default consequence relations emerging upon defining such quantifiers with tools from mathematical analysis. Within topology, we identify 'large' sets with topologically dense sets: we show that the unrestricted topological interpretation introduces a consequence relation weaker than the KLM preferential relations (system P), while the restriction to the finite complement topology over infinite sets captures rational consequence (system R). Measure theory, seemingly the most fitting tool for a 'size'-oriented treatment of default conditionals, introduces a rather weak consequence relation, in accordance with probabilistic approaches. It turns out, however, that our measure-theoretic approach is essentially equivalent to J. Hawthorne's system O supplemented with negation rationality. Our results in this paper show that a 'size'-oriented interpretation of default reasoning is context-sensitive and in 'most' cases it departs from the preferential approach. PubDate: 2022-01-07
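
The 'size'-oriented reading described in this abstract can be stated compactly. The notation below is ours, not the paper's: \(\mathrm{Mod}(\varphi)\) stands for the set of φ-worlds, and 'large'/'small' are instantiated per framework (e.g., topologically dense in the topological reading):

```latex
% 'most'-quantifier reading of a default conditional (notation assumed):
% alpha |~ beta holds iff the beta-worlds form a 'large' subset of the
% alpha-worlds, and the counterexample worlds a 'small' one.
\[
  \alpha \mathbin{\mid\!\sim} \beta
  \quad\Longleftrightarrow\quad
  \mathrm{Mod}(\alpha \wedge \beta)\ \text{is `large' in}\ \mathrm{Mod}(\alpha)
  \ \text{ and }\
  \mathrm{Mod}(\alpha \wedge \neg\beta)\ \text{is `small' in}\ \mathrm{Mod}(\alpha)
\]
```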

Abstract: Let \(\text{NFA}_b(q)\) denote the set of languages accepted by nondeterministic finite automata with q states over an alphabet with b letters. Let \(B_n\) denote the set of words of length n. We give a quadratic lower bound on the VC dimension of $$ \text{NFA}_{2}(q)\cap B_{n} = \{L\cap B_{n} \mid L \in \text{NFA}_{2}(q)\} $$ as a function of q. Next, the work of Gruber and Holzer (Theoret. Comput. Sci. 387(2): 155–166, 2007) gives an upper bound for the nondeterministic state complexity of finite languages contained in \(B_n\), which we strengthen using our methods. Finally, we give some theoretical and experimental results on the dependence on n of the VC dimension and testing dimension of \(\text{NFA}_2(q) \cap B_n\). PubDate: 2022-01-01

Abstract: In this paper, we discuss algorithms for the problem of finding read-once resolution refutations of unsatisfiable 2CNF formulas within the resolution refutation system. Broadly, a read-once resolution refutation is one in which each constraint (input or derived) is used at most once. Read-once resolution refutations have been widely studied in the literature for a number of constraint system-refutation system pairs. For instance, read-once resolution has been analyzed for boolean formulas in conjunctive normal form (CNF), and read-once cutting planes have been analyzed for polyhedral systems. By definition, read-once refutations are compact and hence valuable in applications that place a premium on visualization. The satisfiability problem (SAT) is concerned with finding a satisfying assignment for a boolean formula in CNF. While SAT is NP-complete in general, there exist some interesting subclasses of CNF formulas for which it is decidable in polynomial time. One such subclass is the class of 2CNF formulas, i.e., CNF formulas in which each clause has at most two literals. The existence of efficient algorithms for satisfiability checking in 2CNF formulas (2SAT) makes this class useful for modeling selected logic programs. The work in this paper is concerned with the read-once refutability problem (under resolution) in this subclass. Although 2SAT is decidable in polynomial time, the problem of finding a read-once resolution refutation of an unsatisfiable 2CNF formula is NP-complete. We design non-trivial parameterized and exact exponential algorithms for this problem. Additionally, we study the computational complexity of finding a shortest read-once resolution refutation of a 2CNF formula. PubDate: 2022-01-01
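
To illustrate the polynomial-time decidability of 2SAT mentioned above, here is a minimal sketch of the classical implication-graph method (Aspvall, Plass, and Tarjan); it decides satisfiability only and is not the paper's read-once refutation algorithm:

```python
# 2SAT via the implication graph: a 2CNF formula is unsatisfiable iff some
# variable lies in the same strongly connected component as its negation.
def solve_2sat(n_vars, clauses):
    """clauses: pairs of non-zero ints; literal v > 0 means x_v, v < 0 means not x_v.
    Returns a satisfying assignment {var: bool}, or None if unsatisfiable."""
    def node(lit):                       # graph node index of a literal
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    n = 2 * n_vars
    adj = [[] for _ in range(n)]
    for a, b in clauses:                 # (a or b) gives ~a -> b and ~b -> a
        adj[node(a) ^ 1].append(node(b))
        adj[node(b) ^ 1].append(node(a))

    # Iterative Tarjan SCC; components are numbered in reverse topological order.
    index, low = [None] * n, [0] * n
    on_stack, comp = [False] * n, [None] * n
    stack, counter, ncomp = [], 0, 0
    for root in range(n):
        if index[root] is not None:
            continue
        work = [(root, 0)]
        while work:
            v, pi = work[-1]
            if pi == 0:
                index[v] = low[v] = counter
                counter += 1
                stack.append(v)
                on_stack[v] = True
            descended = False
            for i in range(pi, len(adj[v])):
                w = adj[v][i]
                if index[w] is None:
                    work[-1] = (v, i + 1)
                    work.append((w, 0))
                    descended = True
                    break
                if on_stack[w]:
                    low[v] = min(low[v], index[w])
            if descended:
                continue
            if low[v] == index[v]:       # v is the root of an SCC
                while True:
                    w = stack.pop()
                    on_stack[w] = False
                    comp[w] = ncomp
                    if w == v:
                        break
                ncomp += 1
            work.pop()
            if work:
                u = work[-1][0]
                low[u] = min(low[u], low[v])

    assignment = {}
    for x in range(1, n_vars + 1):
        if comp[node(x)] == comp[node(-x)]:
            return None                  # x equivalent to its own negation
        # set true the literal whose SCC comes later topologically
        # (= smaller Tarjan component number)
        assignment[x] = comp[node(x)] < comp[node(-x)]
    return assignment
```

For example, `solve_2sat(1, [(1, 1), (-1, -1)])` returns `None`, since the clauses force x₁ and ¬x₁ into one component.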

Abstract: Privacy has traditionally been a major motivation for distributed problem solving. One popular approach to enable privacy in distributed environments is to implement complex cryptographic protocols. In this paper, we propose a different, orthogonal approach, which is to control the quality and the quantity of publicized data. We consider the Open Constraint Programming model and focus on algorithms that solve Distributed Constraint Optimization Problems (DCOPs) using a local search approach. Two such popular algorithms exist to find good solutions to DCOPs: DSA and GDBA. In this paper, we propose DSAB, a new algorithm that merges ideas from both algorithms to allow extensive handling of constraint privacy. We also study how algorithms behave when solving Utilitarian DCOPs, where utilitarian agents want to reach an agreement while reducing the privacy loss. We experimentally study how the utilitarian approach impacts the quality of the solution and of publicized data. PubDate: 2022-01-01
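
As a flavor of the local search family this abstract refers to, here is a DSA-style update loop on a toy problem. This is only a sketch of the generic DSA idea: the cost structure, parameter names, and stopping rule are our own illustrative choices, not the DSAB algorithm proposed in the paper:

```python
import random

# DSA-style local search on a tiny DCOP: three agents on a triangle, each
# holding one variable; a binary constraint charges cost 1 whenever two
# neighbours pick the same value. In each round every agent computes its
# best local value and, with activation probability P, moves to it.
random.seed(0)

NEIGHBOURS = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
DOMAIN = [0, 1, 2]
P = 0.7                                  # activation probability

def local_cost(agent, value, assign):
    return sum(1 for n in NEIGHBOURS[agent] if assign[n] == value)

def dsa(steps=50):
    assign = {a: random.choice(DOMAIN) for a in NEIGHBOURS}
    for _ in range(steps):
        new = dict(assign)
        for a in NEIGHBOURS:             # agents decide in parallel
            best = min(DOMAIN, key=lambda v: local_cost(a, v, assign))
            if (local_cost(a, best, assign) < local_cost(a, assign[a], assign)
                    and random.random() < P):
                new[a] = best
        assign = new
    return assign

sol = dsa()
print(sol)
```

The randomized activation (`P < 1`) is what keeps parallel moves from oscillating forever; note that agents only ever see their neighbours' current values, which is where the privacy considerations of the paper enter.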

Abstract: The inner representation of deep neural networks (DNNs) is indecipherable, which makes it difficult to tune DNN models, control their training process, and interpret their outputs. In this paper, we propose a novel approach to investigate the inner representation of DNNs through topological data analysis (TDA). Persistent homology (PH), one of the outstanding methods in TDA, was employed for investigating the complexities of trained DNNs. We constructed clique complexes on trained DNNs and calculated the one-dimensional PH of DNNs. The PH reveals the combinational effects of multiple neurons in DNNs at different resolutions, which are difficult to capture without PH. Evaluations were conducted using fully connected networks (FCNs) and networks combining FCNs and convolutional neural networks (CNNs) trained on the MNIST and CIFAR-10 data sets. Evaluation results demonstrate that the PH of DNNs reflects both the excess of neurons and problem difficulty, making PH one of the prominent methods for investigating the inner representation of DNNs. PubDate: 2022-01-01

Abstract: For nonmonotonic reasoning in the context of a knowledge base \(\mathcal {R}\) containing conditionals of the form If A then usually B, system P provides generally accepted axioms. Inference solely based on system P, however, is inherently skeptical because it coincides with reasoning that takes all ranking models of \(\mathcal {R}\) into account. System Z uses only the unique minimal ranking model of \(\mathcal {R}\), and c-inference, realized via a complex constraint satisfaction problem, takes all c-representations of \(\mathcal {R}\) into account. C-representations constitute the subset of all ranking models of \(\mathcal {R}\) that are obtained by assigning non-negative integer impacts to each conditional in \(\mathcal {R}\) and summing up, for every world, the impacts of all conditionals falsified by that world. While system Z and c-inference in general license different sets of desirable entailments, the first major objective of this article is to present system W. System W fully captures and strictly extends both system Z and c-inference. Moreover, system W can be represented by a single strict partial order on the worlds over the signature of \(\mathcal {R}\). We show that system W exhibits further inference properties worthwhile for nonmonotonic reasoning, like satisfying the axioms of system P, respecting conditional indifference, and avoiding the drowning problem. The other main goal of this article is to present results of our investigations, underlying the development of system W, into upper and lower bounds that can be used to restrict the set of c-representations that have to be taken into account for realizing c-inference. We show that the upper bound of n − 1 is sufficient for capturing c-inference with respect to \(\mathcal {R}\) having n conditionals if there is at least one world verifying all conditionals in \(\mathcal {R}\).
In contrast to the previous conjecture that the number of conditionals in \(\mathcal {R}\) is always sufficient, we prove that there are knowledge bases requiring an upper bound of \(2^{n-1}\), implying that there is no polynomial upper bound on the impacts assigned to the conditionals in \(\mathcal {R}\) for fully capturing c-inference. PubDate: 2021-12-14
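
The construction of c-representations described in this abstract can be stated compactly. The symbols \((B_i | A_i)\) for the conditionals in \(\mathcal{R}\) and \(\eta_i\) for the impacts are our notational choices:

```latex
% A c-representation assigns a non-negative integer impact eta_i to each
% conditional (B_i | A_i) in R and ranks every world omega by the summed
% impacts of the conditionals omega falsifies (omega falsifies (B_i | A_i)
% iff omega satisfies A_i and not B_i):
\[
  \kappa(\omega) \;=\; \sum_{\substack{1 \le i \le n \\ \omega \,\models\, A_i \wedge \neg B_i}} \eta_i ,
  \qquad \eta_i \ge 0 .
\]
```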

Abstract: We consider the problem of cost sensitive multiclass classification, where we would like to increase the sensitivity of an important class at the expense of a less important one. We adopt an apportioned margin framework to address this problem, which enables an efficient margin shift between classes that share the same boundary. The decision boundary between all pairs of classes divides the margin between them in accordance with a given prioritization vector, which yields a tighter error bound for the important classes while also reducing the overall out-of-sample error. In addition to demonstrating an efficient implementation of our framework, we derive generalization bounds, demonstrate Fisher consistency, adapt the framework to Mercer kernels and to neural networks, and report promising empirical results on all counts. PubDate: 2021-12-01

Abstract: This paper introduces a novel approach to computing logic programming semantics. First, a propositional Herbrand base is represented in a vector space, and if-then rules in a program are encoded in a matrix. Then the least fixpoint of a definite logic program is computed by matrix-vector products with a non-linear operation. Second, disjunctive logic programs are represented in third-order tensors and their minimal models are computed by algebraic manipulation of tensors. Third, normal logic programs are represented by matrices and third-order tensors, and their stable models are computed. The result of this paper exploits a new connection between linear algebraic computation and symbolic computation, which has the potential to realize logical inference over very large knowledge bases. PubDate: 2021-12-01
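
As an illustrative sketch of the first idea (computing the least fixpoint of a definite program via thresholded matrix-vector products), here is a minimal pure-Python version. The toy program, the 1/k body weighting, and the threshold convention are our own illustrative choices, not the paper's exact construction:

```python
# Toy linear-algebraic evaluation of the definite program:
#   q.   r.   p :- q, r.
# Atoms are indexed [true, p, q, r]; the constant 'true' stays fixed at 1.
# A rule for head h with k body atoms contributes weight 1/k to each body
# column of row h, so the row product reaches 1 exactly when the whole
# body is satisfied; the non-linear step thresholds at 1.
ATOMS = ["true", "p", "q", "r"]
M = [
    [1.0, 0.0, 0.0, 0.0],   # true <- true
    [0.0, 0.0, 0.5, 0.5],   # p    <- q, r   (two body atoms: weights 1/2)
    [1.0, 0.0, 0.0, 0.0],   # q.             (fact: body is 'true')
    [1.0, 0.0, 0.0, 0.0],   # r.             (fact: body is 'true')
]

def step(v):
    """One application of the immediate-consequence operator T_P:
    a matrix-vector product followed by thresholding at 1."""
    return [1.0 if sum(m * x for m, x in zip(row, v)) >= 1.0 else 0.0
            for row in M]

def least_fixpoint(v):
    """Iterate T_P from the initial interpretation until it stabilizes."""
    while True:
        w = step(v)
        if w == v:
            return v
        v = w

model = least_fixpoint([1.0, 0.0, 0.0, 0.0])
print({a for a, x in zip(ATOMS, model) if x == 1.0 and a != "true"})
```

Starting from only `true`, the iteration first derives the facts q and r, then p from its now-satisfied body, reaching the least model {p, q, r} in two steps.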

Abstract: This paper partially bridges a gap in the literature on Circumscription in Description Logics by investigating the tractability of conjunctive query answering in OWL2's profiles. It turns out that the data complexity of conjunctive query answering is coNP-hard in circumscribed \(\mathcal{EL}\) and DL-Lite, while in circumscribed OWL2-RL conjunctive queries retain their classical semantics. In an attempt to capture nonclassical inferences in OWL2-RL, we consider conjunctive queries with safe negation. They can detect some of the nonclassical consequences of circumscribed knowledge bases, but data complexity becomes coNP-hard. In circumscribed \(\mathcal{EL}\), answering queries with safe negation is undecidable. PubDate: 2021-12-01

Abstract: In this paper we study the One-to-Many Shortest Path Problem (OMSPP), which is the problem of solving k shortest path problems that share the same start node. This problem has been studied in the context of routing in road networks, where past work relied on pre-processing the network, which is assumed to be provided explicitly. We explore how the OMSPP can be solved with heuristic search techniques, allowing the searched graph to be given either explicitly or implicitly. Two fundamental heuristic search approaches are analyzed: searching for the k goals one at a time, or searching for all k goals as one compound goal. The former approach, denoted k×A∗, is simpler to implement but can be computationally inefficient, as it may expand a node multiple times across the different searches. The latter approach, denoted kA∗, can resolve this potential inefficiency, but implementing it raises fundamental questions, such as how to combine k heuristic estimates, one per goal, and what to do after the shortest path to one of the goals has been found. We propose several ways to implement kA∗, and characterize necessary and sufficient conditions on the available heuristics, and on how they are aggregated, for the solution to be admissible. Then, we analytically compare the runtime and memory requirements of k×A∗ and kA∗, identifying when each approach should be used. Finally, we compare these approaches experimentally on two representative domains, providing empirical support for our theoretical analysis. These results shed light on when each approach is beneficial. PubDate: 2021-12-01
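
A minimal sketch of the simpler k×A∗ baseline discussed in this abstract: one independent A∗ search per goal on a toy 4-connected grid. The grid, the Manhattan heuristic, and the function names are our illustrative choices, not the paper's experimental setup:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the shortest path cost, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_list = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_list:
        f, g, (r, c) = heapq.heappop(open_list)
        if (r, c) == goal:
            return g
        if g > best_g.get((r, c), float("inf")):
            continue                     # stale queue entry; skip
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_list, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

def k_times_astar(grid, start, goals):
    """k x A*: one independent search per goal. Nodes may be re-expanded
    across the k searches, which is exactly the inefficiency kA* targets."""
    return {goal: astar(grid, start, goal) for goal in goals}

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(k_times_astar(grid, (0, 0), [(2, 2), (2, 0)]))
```

A kA∗ variant would instead keep one open list for all goals, which raises the heuristic-aggregation question the abstract mentions (e.g., taking the minimum of the per-goal estimates preserves admissibility).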

Abstract: The gaussoid axioms are conditional independence inference rules which characterize regular Gaussian CI structures over a three-element ground set. It is known that no finite set of inference rules completely describes regular Gaussian CI as the ground set grows. In this article we show that the gaussoid axioms logically imply every inference rule of at most two antecedents which is valid for regular Gaussians over any ground set. The proof is accomplished by exhibiting, for each inclusion-minimal gaussoid extension of at most two CI statements, a regular Gaussian realization. Moreover, we prove that all those gaussoids have rational positive-definite realizations inside every ε-ball around the identity matrix. For the proof we introduce the concept of algebraic Gaussians over arbitrary fields and of positive Gaussians over ordered fields, and obtain the same two-antecedental completeness of the gaussoid axioms for algebraic and positive Gaussians over all fields of characteristic zero as a byproduct. PubDate: 2021-11-25

Abstract: Probability kinematics is a leading paradigm in probabilistic belief change. It is based on the idea that conditional beliefs should be independent of changes of their antecedents' probabilities. In this paper, we propose a re-interpretation of this paradigm for Spohn's ranking functions, which we call Generalized Ranking Kinematics, as a new principle for iterated belief revision of ranking functions by sets of conditional beliefs with respect to their specific subcontext. By taking into account semantical independencies, we can reduce the complexity of the revision task to local contexts. We show that global belief revision can be set up from revisions on the local contexts via a merging operator. Furthermore, we formalize a variant of the Ramsey test based on the idea of local contexts which connects conditional and propositional revision in a straightforward way. We extend the belief change methodology of c-revisions to strategic c-revisions, which serve as a proof of concept. PubDate: 2021-11-01

Abstract: The exact relationship between formal argumentation and nonmonotonic logics is a research topic that keeps on eluding researchers despite recent intensified efforts. We contribute to a deeper understanding of this relation by investigating characterizations of abstract dialectical frameworks in conditional logics for nonmonotonic reasoning. We first show that in general, there is a gap between argumentation and conditional semantics when applying several intuitive translations, but then prove that this gap can be closed when focusing on specific classes of translations. PubDate: 2021-11-01

Abstract: How humans reason in general about syllogisms is, despite a century of research and many proposed cognitive theories, still an unanswered question. It is even more difficult, however, to answer how an individual human reasons. The goal of this article is twofold: First, it analyses the predictive quality of existing cognitive theories by providing a standardized (re-) implementation of existing theories. Towards this, theories are algorithmically formalized, including their potential capabilities for adaptation to an individual reasoner. The implementations are modular with regard to the underlying mental operations defined by the cognitive theories. Second, it proposes a novel composite approach based on existing cognitive theories, resulting in a cognitive model for predicting an individual reasoner before s/he draws a conclusion. This approach uses sequences of operations, inherited and combined from different theories, to form its predictions. Among the existing models, our implementations of PHM, mReasoner, and Verbal Models make the most accurate predictions of the conclusions drawn by individual reasoners. The designed composite model, however, is able to significantly surpass those implementations by exploiting synergies between different models. In particular, it successfully combines operations from PHM and Verbal Models. Therefore, the composite approach is a promising tool to model and study syllogistic reasoning and to generate tailored cognitive theories. At the same time it provides a general method that can potentially be applied to predict individual human reasoners in other domains, too. PubDate: 2021-11-01

Abstract: Clustering high-dimensional data under the curse of dimensionality is an arduous task in many application domains. The wide dimension yields complexity-related challenges, and the limited number of records leads to the overfitting trap. We propose to tackle this problem using the graphical and probabilistic power of Bayesian networks. Our contribution is a new loose hierarchical Bayesian network model that encloses latent variables. These hidden variables are introduced to ensure a multi-view clustering of the records. We propose a new framework for learning our proposed Bayesian network model. It starts by extracting cliques of highly dependent features and proceeds to learn a representative latent variable for each feature clique. The experimental results of our comparative analysis prove the efficiency of our model in tackling the distance-concentration challenge. They also show the effectiveness of our model learning framework in avoiding the overfitting trap on benchmark high-dimensional datasets. PubDate: 2021-11-01

Abstract: Modeling high-dimensional multivariate distributions is a computationally challenging task. In the discrete case, Bayesian networks have been successfully used to reduce the complexity and to simplify the problem. However, they lack a general model for continuous variables. To overcome this problem, Elidan (2010) proposed the model of copula Bayesian networks, which parametrizes Bayesian networks using copula functions. We propose a new learning algorithm for this model based on the PC algorithm and a conditional independence test proposed by Bouezmarni et al. (2009). Since this test is non-parametric, no model assumptions are made, keeping the approach as general as possible. The algorithm is compared on generated data with the parametric method proposed by Elidan (2010) and proves to have better results. PubDate: 2021-11-01