Abstract: This paper introduces and studies the sequential composition and decomposition of propositional logic programs. We show that acyclic programs can be decomposed into single-rule programs and provide a general decomposition result for arbitrary programs. We show that the immediate consequence operator of a program can be represented via composition, which allows us to compute its least model without any explicit reference to operators. This bridges the conceptual gap between the syntax and semantics of a propositional logic program in a mathematically satisfactory way. PubDate: 2024-02-15
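The abstract contrasts its composition-based computation with the usual operator-based one. For reference, a minimal sketch of the standard approach, iterating the immediate consequence operator \(T_P\) to its least fixpoint; the `(head, body)` rule encoding is hypothetical, not the paper's:

```python
# Minimal sketch: least model of a definite propositional program by
# iterating the immediate consequence operator T_P to its least fixpoint.
# Rules are (head, [body atoms]); this encoding is illustrative only.

def least_model(rules):
    model = set()
    while True:
        derived = {head for head, body in rules
                   if all(atom in model for atom in body)}  # T_P(model)
        if derived <= model:  # fixpoint reached
            return model
        model |= derived

program = [("q", []), ("p", ["q"]), ("r", ["p", "q"]), ("s", ["t"])]
print(sorted(least_model(program)))  # ['p', 'q', 'r']
```

Since `T_P` is monotone, starting from the empty interpretation and accumulating derived atoms converges to the least model; `s` is never derived because `t` is not.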

Abstract: This paper studies analogical proportions in monounary algebras, which consist only of a universe and a single unary function. We analyze the role of congruences and show that in the infinite monounary algebra formed by the natural numbers together with the successor function, the analogical proportion relation is characterized via difference proportions. PubDate: 2024-02-10
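As a concrete illustration of the difference-proportion characterization mentioned above, over the naturals with successor an analogical proportion \(a:b::c:d\) amounts to the differences agreeing; the function name below is illustrative:

```python
# Illustrative check of a difference proportion over the naturals:
# a : b :: c : d holds iff b - a == d - c.

def difference_proportion(a, b, c, d):
    return b - a == d - c

print(difference_proportion(2, 5, 10, 13))  # True  (5 - 2 == 13 - 10)
print(difference_proportion(2, 5, 10, 12))  # False
```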

Abstract: The successes of Machine Learning, and in particular of Deep Learning systems, have led to a reformulation of the Artificial Intelligence agenda. One of the pressing issues in the field is the extraction of knowledge from the behavior of those systems. In this paper we propose a semiotic analysis of that behavior, based on the formal model of learners. We analyze the topos-theoretic properties that ensure the logical expressivity of the knowledge embodied by learners. Furthermore, we show that there exists an ideal universal learner, able to interpret the knowledge gained about any possible function as well as about itself, which can be monotonically approximated by networks of increasing size. PubDate: 2024-02-09

Abstract: Systems of decision rules and decision trees are widely used as a means of knowledge representation, as classifiers, and as algorithms. They are among the most interpretable models for classifying and representing knowledge. The study of the relationships between these two models is an important task of computer science. It is easy to transform a decision tree into a decision rule system; the inverse transformation is more difficult. In this paper, we study unimprovable upper and lower bounds on the minimum depth of decision trees derived from decision rule systems with discrete attributes, depending on various parameters of these systems. To illustrate the process of transforming decision rule systems into decision trees, we generalize a well-known result for Boolean functions to the case of functions of k-valued logic. PubDate: 2024-02-08
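The "easy" direction mentioned in the abstract, turning a decision tree into a decision rule system, reads one rule off each root-to-leaf path. A small sketch with a hypothetical tuple encoding of trees:

```python
# Sketch: extract a decision rule system from a decision tree, one rule
# per root-to-leaf path. The tuple encoding of trees is hypothetical:
# a leaf is ('leaf', label); an inner node is ('node', attr, {value: child}).

def tree_to_rules(node, conditions=()):
    if node[0] == "leaf":
        return [(list(conditions), node[1])]
    _, attr, children = node
    rules = []
    for value, child in children.items():
        rules.extend(tree_to_rules(child, conditions + ((attr, value),)))
    return rules

tree = ("node", "x1", {0: ("leaf", "no"),
                       1: ("node", "x2", {0: ("leaf", "no"),
                                          1: ("leaf", "yes")})})
for conds, label in tree_to_rules(tree):
    print(conds, "->", label)
```

Each rule's premise is the conjunction of attribute-value tests along its path, which is why this direction is easy, while bounding the depth of trees built from rule systems (the paper's topic) is the hard one.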

Abstract: There are many real-life applications where data cannot be effectively represented in Hilbert spaces and/or where the data points are uncertain. In this context, we address the issue of binary classification in Banach spaces in the presence of uncertainty. We show that a number of results from classical support vector machine theory can be appropriately generalized to their robust counterparts in Banach spaces. These include the representer theorem, strong duality for the associated optimization problem, as well as their geometrical interpretation. Furthermore, we propose a game-theoretical interpretation of the class separation problem when the underlying space is reflexive and smooth. The proposed Nash equilibrium formulation draws connections and emphasizes the interplay between class separation in machine learning and game theory in the general setting of Banach spaces. PubDate: 2024-02-08

Abstract: This paper concerns preference elicitation and the learning of decision models in the context of multicriteria decision making. We propose an approach to learning a representation of preferences by a non-additive multiattribute utility function, namely a Choquet or bi-Choquet integral. This preference model is parameterized by one-dimensional utility functions measuring the attractiveness of consequences w.r.t. various points of view, and by one or two set functions (capacities) used to weight the coalitions and control the intensity of interactions among criteria, on the positive and possibly the negative sides of the utility scale. Our aim is to show how we can successively learn marginal utilities from properly chosen preference examples and then learn where the interactions matter in the overall model. We first present a preference elicitation method to learn spline representations of marginal utilities on every component of the model. Then we propose a sparse learning approach based on adaptive \(L_1\)-regularization for determining a compact Möbius representation fitted to the observed preferences. We present numerical tests comparing different regularization methods. We also show the advantages of our approach compared to basic methods that do not seek sparsity or that force sparsity a priori by requiring k-additivity. PubDate: 2024-02-07
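For reference, the discrete Choquet integral that parameterizes this kind of model aggregates marginal utilities sorted in increasing order, weighting each increment by the capacity of the coalition of criteria scoring at least that much. A minimal sketch with a hypothetical two-criterion capacity (standard formula only; the paper's bi-Choquet model and elicitation machinery are not reproduced):

```python
# Sketch of the discrete Choquet integral w.r.t. a capacity.
# utilities: dict criterion -> marginal utility in [0, 1]
# capacity:  dict frozenset of criteria -> weight, monotone, with
#            capacity[empty] = 0 and capacity[all criteria] = 1.

def choquet(utilities, capacity):
    order = sorted(utilities, key=utilities.get)  # increasing utility
    total, prev = 0.0, 0.0
    for i, crit in enumerate(order):
        coalition = frozenset(order[i:])          # criteria scoring >= this
        total += (utilities[crit] - prev) * capacity[coalition]
        prev = utilities[crit]
    return total

capacity = {frozenset(): 0.0,
            frozenset({"a"}): 0.4,
            frozenset({"b"}): 0.3,
            frozenset({"a", "b"}): 1.0}           # hypothetical example
score = choquet({"a": 0.5, "b": 0.8}, capacity)
print(round(score, 3))  # 0.59
```

Here capacity({"a"}) + capacity({"b"}) < capacity({"a", "b"}), so the two criteria interact positively, which is exactly the kind of effect the capacity (and its Möbius representation) is meant to capture.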

Abstract: In the context of Multiple Criteria Decision Aiding, decision makers often face problems with multiple conflicting criteria that justify the use of preference models to help advance towards a decision. In order to determine the parameters of these preference models, preference elicitation makes use of preference learning algorithms, usually taking as input holistic judgments, i.e., overall preferences on some of the alternatives, expressed by the decision maker. Tools to achieve this goal in the context of a ranking model based on multiple reference profiles usually rely on mixed-integer linear programming, Boolean satisfiability formulations, or metaheuristics. However, they are usually unable to handle realistic problems involving many criteria and a large amount of input information. We propose here an evolutionary metaheuristic to address this issue. Extensive experiments illustrate its ability to handle problem instances that previous proposals cannot. PubDate: 2024-02-06

Abstract: Nondeterministic planning is the process of computing plans or policies of actions achieving given goals, when there is nondeterministic uncertainty about the initial state and/or the outcomes of actions. This process encompasses many precise computational problems, from classical planning, where there is no uncertainty, to contingent planning, where the agent has access to observations about the current state. Fundamental to these problems is belief tracking, that is, obtaining information about the current state after a history of actions and observations. At an abstract level, belief tracking can be seen as maintaining and querying the current belief state, that is, the set of states consistent with the history. We take a knowledge compilation perspective on these processes, by defining the queries and transformations which pertain to belief tracking. We study them for propositional domains, considering a number of representations for belief states, actions, observations, and goals. In particular, for belief states, we consider explicit propositional representations with and without auxiliary variables, as well as implicit representations by the history itself; and for actions, we consider propositional action theories as well as ground PDDL and conditional STRIPS. For all combinations, we investigate the complexity of relevant queries (for instance, whether an action is applicable at a belief state) and transformations (for instance, revising a belief state by an observation); we also discuss the relative succinctness of representations. Though many results show an expected tradeoff between succinctness and tractability, we identify some interesting combinations. We also discuss the choice of representations by existing planners in light of our study. PubDate: 2024-01-31
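The two core belief-tracking operations described above, progressing an explicit belief state through a nondeterministic action and revising it by an observation, can be sketched over plain sets of states (a toy encoding, not any of the paper's concrete representations):

```python
# Schematic belief tracking with an explicit belief state (set of states).

def progress(belief, action):
    """action maps each state to the set of its possible successors."""
    return {s2 for s in belief for s2 in action(s)}

def revise(belief, observation):
    """observation is a predicate on states; keep consistent states only."""
    return {s for s in belief if observation(s)}

# Toy domain: states are integers; the action nondeterministically adds 1 or 2.
belief = {0, 1}
belief = progress(belief, lambda s: {s + 1, s + 2})  # {1, 2, 3}
belief = revise(belief, lambda s: s % 2 == 1)        # {1, 3}
print(sorted(belief))  # [1, 3]
```

The paper's complexity questions arise precisely because such explicit sets can be exponentially large, motivating the compiled and implicit representations it compares.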

Abstract: While most models of human choice are linear to ease interpretation, it is not clear whether linear models are good models of human decision making. And while prior studies have investigated how task conditions and group characteristics, such as personality or socio-demographic background, influence human decisions, no prior work has investigated how to use less personal information for choice prediction. We propose a deep learning model based on self-attention and cross-attention to model human decision making, taking into account both subject-specific information and task conditions. We show that our model can consistently predict human decisions more accurately than linear models and other baseline models while remaining interpretable. In addition, although a larger amount of subject-specific information will generally lead to more accurate choice prediction, collecting more surveys to gather subject background information is a burden to subjects, as well as costly and time-consuming. To address this, we introduce a training scheme that reduces the number of surveys that must be collected in order to achieve more accurate predictions. PubDate: 2024-01-30

Abstract: In this paper we propose a new notion of clique reliability, understood as the ratio of the number of statistically significant links in a clique to the number of edges of the clique. This notion relies on a recently proposed technique for separating inferences about pairwise connections between vertices of a network into significant and admissible ones. We extend this technique to the problem of clique detection and propose a method for the step-by-step construction of a clique with a given reliability. The results of constructing cliques with a given reliability using data on the returns of stocks included in the Dow Jones index are presented. PubDate: 2024-01-29
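The reliability ratio defined above is straightforward to compute once links have been classified; in this sketch the set of significant edges is simply given (the statistical technique from the cited work is not reproduced):

```python
# Sketch: clique reliability = (# statistically significant links in the
# clique) / (# edges of the clique). Which links count as significant is
# assumed to come from a separate significance test, abstracted here.
from itertools import combinations

def clique_reliability(clique, significant_edges):
    edges = list(combinations(sorted(clique), 2))
    hits = sum(1 for e in edges if frozenset(e) in significant_edges)
    return hits / len(edges)

significant = {frozenset(e) for e in [("A", "B"), ("A", "C"), ("B", "D")]}
print(clique_reliability({"A", "B", "C"}, significant))  # 2 of 3 edges -> 2/3
```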

Abstract: We consider comparative dissimilarity relations on pairs of fuzzy description profiles, the latter providing a fuzzy set-based representation of pairs of objects. Such a relation expresses the idea of “no more dissimilar than” and is used by a decision maker when performing a case-based decision task under vague information. We first limit ourselves to those relations admitting a weighted \(\varvec{L}^p\) distance representation, for which we provide an axiomatic characterization in case the relation is complete, transitive, and defined on the entire space of pairs of fuzzy description profiles. Next, we switch to the more general class of comparative dissimilarity relations representable by a Choquet \(\varvec{L}^p\) distance, parameterized by a completely alternating normalized capacity. PubDate: 2024-01-24
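For concreteness, a weighted \(L^p\) distance between description profiles \(u\) and \(v\) with nonnegative weights \(w_i\) takes the standard form (the paper's exact parameterization may differ):

```latex
d_w(u, v) = \Big( \sum_{i} w_i \, |u_i - v_i|^p \Big)^{1/p}
```

The Choquet \(L^p\) distance studied next generalizes this by replacing the weighted sum with a Choquet integral with respect to a capacity.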

Abstract: We study the stability of accuracy during the training of deep neural networks (DNNs). In this context, the training of a DNN is performed via the minimization of a cross-entropy loss function, and the performance metric is accuracy (the proportion of objects that are classified correctly). While training results in a decrease of loss, the accuracy does not necessarily increase during the process and may sometimes even decrease. The goal of achieving stability of accuracy is to ensure that if accuracy is high at some initial time, it remains high throughout training. A recent result by Berlyand, Jabin, and Safsten introduces a doubling condition on the training data, which ensures the stability of accuracy during training for DNNs using the absolute value activation function. For training data in \(\mathbb{R}^n\), this doubling condition is formulated using slabs in \(\mathbb{R}^n\) and depends on the choice of the slabs. The goal of this paper is twofold. First, to make the doubling condition uniform, that is, independent of the choice of slabs. This leads to sufficient conditions for stability in terms of training data only. In other words, for a training set T that satisfies the uniform doubling condition, there exists a family of DNNs such that a DNN from this family with high accuracy on the training set at some training time \(t_0\) will have high accuracy for all time \(t>t_0\). Moreover, establishing uniformity is necessary for the numerical implementation of the doubling condition. We demonstrate how to numerically implement a simplified version of this uniform doubling condition on a dataset and apply it to achieve stability of accuracy using a few model examples. The second goal is to extend the original stability results from the absolute value activation function to a broader class of piecewise linear activation functions with finitely many critical points, such as the popular Leaky ReLU. PubDate: 2024-01-19

Abstract: We study a problem of best-effort adaptation motivated by several applications and considerations, which consists of determining an accurate predictor for a target domain, for which a moderate number of labeled samples is available, while leveraging information from another domain for which substantially more labeled samples are at one’s disposal. We present a new and general discrepancy-based theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights. We show how these bounds can guide the design of learning algorithms that we discuss in detail. We further show that our learning guarantees and algorithms provide improved solutions for standard domain adaptation problems, for which few or no labeled data are available from the target domain. We finally report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms, as well as comparisons with several baselines. We also discuss how our analysis can benefit the design of principled solutions for fine-tuning. PubDate: 2024-01-13

Abstract: A simple search problem is studied in which a binary n-tuple is to be found in a list by sequential bit comparisons with cost. The problem can be solved (for small n) using dynamic programming. We show how the “bottom up” part of the algorithm can be organized by means of Formal Concept Analysis. PubDate: 2024-01-01

Abstract: A digital plane is a digitization of a Euclidean plane. A plane is specified by its normal, here taken to be a 3D vector with integer coordinates. It is established that a 3D digital straight line segment, shifted by an integer amount, can produce the digitized plane. The plane normals are classified based on the Greatest Common Divisor (GCD) of their components, and the net code is calculated separately for each case. Experimental results are provided for several normals. We also show that the digital plane segment generated is a connected digital plane. The proposed method mainly involves integer arithmetic. PubDate: 2024-01-01

Abstract: A digitized rigid motion is called digitally continuous if two neighbor pixels stay neighbors after the motion. This concept plays an important role when people or computers (artificial intelligence, machine vision) need to recognize the object shown in an image. In this paper, digital rotations of a pixel together with its closest neighbors are of interest. We compare the neighborhood motion map results among the three regular grids when the center of rotation is the midpoint of a main pixel, a grid point (corner of a pixel), or an edge midpoint. A first measure of the quality of digital rotations is based on bijectivity, e.g., measuring how many of the cases produce bijective and how many produce non-bijective neighborhood motion maps (Avkan et al., 2022). Here, a second measure is investigated: the quality of bijective digital rotations is measured by the digital continuity of the resulting image, i.e., we measure how many of the cases are both bijective and digitally continuous. We show that rotations on the triangular grid prove to be digitally continuous at many more real angles, and, as a special case, at many more integer angles, compared to the square grid or the hexagonal grid with respect to the three different rotation centers. PubDate: 2024-01-01

Abstract: Road network studies have attracted unprecedented interest in recent years due to the clear relationship between human existence and city evolution. Current studies cover many aspects of a road network, for example, road feature extraction from video/image data, road map generalisation, traffic simulation, optimal route finding problems, and traffic state prediction. However, analysing road networks as complex graphs is a field still to be explored. This study presents comparative studies of sections of the road network of Porto, Portugal, mainly of the Matosinhos, Paranhos, and Maia municipalities, regarding degree distributions, clustering coefficients, centrality measures, connected components, k-nearest neighbours, and shortest paths. Further insights into the networks took into account community structures, PageRank, and small-world analysis. The results show that the information exchange efficiency of Matosinhos is 0.8, which is 10% and 12.8% greater than that of the Maia and Paranhos networks, respectively. Other findings are: (1) the studied road networks are very accessible and densely linked; (2) they are small-world in nature, with an average shortest-path length between any two roads of 29.17 units, as found for the Maia road network; and (3) based on the analysis of centrality measures, the most critical intersections of the studied network are ’Avenida da Boavista, 4100-119 Porto (latitude: 41.157944, longitude: −8.629105)’ and ’Autoestrada do Norte, Porto (latitude: 41.1687869, longitude: −8.6400656)’. PubDate: 2024-01-01