
Publisher: Springer-Verlag (Total: 2574 journals)

Annals of Mathematics and Artificial Intelligence
Journal Prestige (SJR): 0.413   Citation Impact (CiteScore): 1   Number of Followers: 12
Hybrid journal (may contain Open Access articles)
ISSN (Print) 1012-2443 - ISSN (Online) 1573-7470
• Data-driven Koopman operator approach for computational neuroscience
• Abstract: This article presents a novel, nonlinear, data-driven signal processing method, which can help neuroscience researchers visualize and understand complex dynamical patterns in both time and space. Specifically, we present applications of a Koopman operator approach for eigendecomposition of electrophysiological signals into orthogonal, coherent components and examine their associated spatiotemporal dynamics. This approach thus provides enhanced capabilities over conventional computational neuroscience tools restricted to analyzing signals in either the time or space domain. This is achieved via machine learning and kernel methods for data-driven approximation of skew-product dynamical systems. The approximations successfully converge to theoretical values in the limit of long embedding windows. First, we describe the method; then, using electrocorticographic (ECoG) data from a mismatch negativity experiment, we extract time-separable frequencies without bandpass filtering or prior selection of wavelet features. Finally, we discuss in detail two of the extracted components, the Beta (∼13 Hz) and high Gamma (∼50 Hz) frequencies, and explore the spatiotemporal dynamics of high- and low-frequency components.
PubDate: 2019-11-11
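The core idea can be illustrated with a much simpler relative of the Koopman eigendecomposition: exact dynamic mode decomposition (DMD) on a time-delay embedding of a signal, which likewise recovers coherent oscillatory components without bandpass filtering. This is a sketch of the general idea only, not the article's kernel-based skew-product approximation; the signal, window length, and rank tolerance are invented for the example.

```python
import numpy as np

def delay_embed(x, q):
    """Stack q time-delayed copies of signal x into a Hankel-style matrix."""
    n = len(x) - q + 1
    return np.column_stack([x[i:i + n] for i in range(q)]).T

def dmd_frequencies(x, q, dt):
    """Estimate oscillation frequencies via exact DMD on a delay embedding."""
    H = delay_embed(x, q)
    X, Y = H[:, :-1], H[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = int((s > 1e-10 * s[0]).sum())      # numerical rank of the embedding
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A = U.conj().T @ Y @ Vh.conj().T / s   # reduced Koopman/DMD operator
    eigvals = np.linalg.eigvals(A)
    freqs = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)
    return np.unique(np.round(freqs, 1))

dt = 1e-3
t = np.arange(0, 2, dt)
sig = np.sin(2 * np.pi * 13 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
freqs = dmd_frequencies(sig, q=64, dt=dt)
print(freqs)
```

On this noise-free two-tone signal the eigenvalues of the reduced operator lie on the unit circle at the two driving frequencies, so both components are separated without any prior wavelet or filter choice.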

• Guest editorial: revised selected papers from the LION 12 conference
• PubDate: 2019-11-05

• Multi-valued logic in graph transformation theory and self-adaptive
systems
• Abstract: Graph transformation theory uses rules to perform a graph transformation. However, there is no way to choose between different transformations when several of them are applicable. A way to make this choice is suggested here, based on comparing the values of the implications that correspond to the different transformation variants. The relationship between the topos of bundles and the set of graphs with the same vertices is introduced to incorporate logic into graph transformation theory. Thus, one can use the special type of implication and the set of truth values of such a topos to evaluate different variants of graph transformations. In this approach, the maximal part of the initial graph, relative to the terminal one, is conserved in the chosen variant. The analysis of self-adaptive systems uses graph grammars. Self-adaptive systems autonomously perform an adaptation to changes both in user needs and in their operational environments, while still maintaining desired properties. The suggested way of choosing graph transformation variants may be used to choose between different graph grammars when modeling such systems. This approach is illustrated on a model of business processes, resulting in an automated choice of business-process adaptation under the assumption that the process changes toward the terminal state are minimal.
PubDate: 2019-11-02
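The selection principle — keep the variant that conserves the maximal part of the initial graph — can be sketched without any topos machinery, using the number of shared edges as a crude stand-in for the implication truth values the article actually compares. All graphs and names here are invented for illustration.

```python
def choose_variant(initial_edges, variants):
    """Among candidate transformed graphs (edge sets over the same
    vertices), pick the variant that conserves the largest part of the
    initial graph, measured here simply by the number of shared edges."""
    return max(variants, key=lambda v: len(set(initial_edges) & set(v)))

initial = {("a", "b"), ("b", "c"), ("c", "d")}
variants = [{("a", "b"), ("d", "a")},              # conserves 1 edge
            {("a", "b"), ("b", "c"), ("d", "a")}]  # conserves 2 edges
best = choose_variant(initial, variants)
print(best)
```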

• Deep learning models for brain machine interfaces
• Abstract: Deep Learning methods have been rising in popularity in the past few years, and are now used as a fundamental component in various application domains such as computer vision, natural language processing, and bioinformatics. Supervised learning with Convolutional Neural Networks has become the state-of-the-art approach in many image-related tasks. However, despite their great success in other areas, deep learning methods remain relatively unexplored in the brain imaging field. In this paper we give an overview of recent achievements in using Deep Learning to automatically extract features from brain signals that enable building Brain-Machine Interfaces (BMI). A major challenge in BMI research is to find common, subject-independent neural signatures, owing to the high variability of brain data across multiple subjects. To address this problem, we propose a Deep Neural Autoencoder with a sparsity constraint as a promising approach to extract hidden features from Electroencephalogram data (in-depth feature learning) and build a subject-independent noninvasive BMI in the affective neurocomputing framework. Future directions for research are also outlined.
PubDate: 2019-10-02
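A minimal numpy sketch of the sparsity mechanism: a one-hidden-layer autoencoder loss with a KL-divergence penalty that pushes the mean activation of each hidden unit toward a small target rate rho, so only a few units respond to any given input. The architecture, penalty weights, and random stand-in data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Mean-squared reconstruction loss plus a KL-divergence sparsity
    penalty comparing each hidden unit's mean activation to rho."""
    H = sigmoid(X @ W1 + b1)              # hidden activations
    X_hat = H @ W2 + b2                   # linear reconstruction
    recon = np.mean((X - X_hat) ** 2)
    rho_hat = H.mean(axis=0)              # mean activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl

X = rng.standard_normal((128, 16))        # stand-in for EEG feature vectors
W1 = 0.1 * rng.standard_normal((16, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal((8, 16)); b2 = np.zeros(16)
loss = sparse_autoencoder_loss(X, W1, b1, W2, b2)
print(loss)
```

Training would minimize this loss by gradient descent on the weights; the hidden activations H then serve as the learned features.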

• A framework for comparing query languages in their ability to express
boolean queries
• Abstract: For any query language $$\mathcal {F}$$ , we consider three natural families of boolean queries. Nonemptiness queries are expressed as e ≠ ∅ with e an $$\mathcal {F}$$ expression. Emptiness queries are expressed as e = ∅. Containment queries are expressed as e1 ⊆ e2. We refer to syntactic constructions of boolean queries as modalities. In first order logic, the emptiness, nonemptiness and containment modalities have exactly the same expressive power. For other classes of queries, e.g., expressed in weaker query languages, the modalities may differ in expressiveness. We propose a framework for studying the expressive power of boolean query modalities. Along one dimension, one may work within a fixed query language and compare the three modalities. Here, we identify crucial query features that enable us to go from one modality to another. Furthermore, we identify semantical properties that reflect the lack of these query features to establish separations. Along a second dimension, one may fix a modality and compare different query languages. This second dimension is the one that has already received quite some attention in the literature, whereas in this paper we emphasize the first dimension. Combining both dimensions, it is interesting to compare the expressive power of a weak query language using a strong modality, against that of a seemingly stronger query language but perhaps using a weaker modality. We present some initial results within this theme. The two main query languages to which we apply our framework are the algebra of binary relations, and the language of conjunctive queries.
PubDate: 2019-10-01
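The three modalities can be phrased directly over query results represented as Python sets; `R` and `S` below stand in for the results of two hypothetical expressions e1 and e2 evaluated on the same instance.

```python
def nonemptiness(result):
    """Boolean query 'e != emptyset'."""
    return bool(result)

def emptiness(result):
    """Boolean query 'e = emptyset'."""
    return not result

def containment(result1, result2):
    """Boolean query 'e1 subseteq e2'."""
    return result1 <= result2

R = {(1, 2), (2, 3)}          # result of e1 on some instance
S = {(1, 2), (2, 3), (3, 4)}  # result of e2 on the same instance
res = (nonemptiness(R), emptiness(R), containment(R, S))
print(res)
```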

• Contextual rational closure for defeasible $$\mathcal {ALC}$$
• Abstract: Description logics have been extended in a number of ways to support defeasible reasoning in the KLM tradition. Such features include preferential or rational defeasible concept inclusion, and defeasible roles in complex concept descriptions. Semantically, defeasible subsumption is obtained by means of a preference order on objects, while defeasible roles are obtained by adding a preference order to role interpretations. In this paper, we address an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects. We do this by inducing a modular preference order on objects from each modular preference order on roles, and using these to relativise defeasible subsumption. This yields a notion of contextualised rational defeasible subsumption, with contexts described by roles. We also provide a semantic construction for rational closure and a method for its computation, and present a correspondence result between the two.
PubDate: 2019-10-01

• Voting-based ensemble learning for partial lexicographic preference
forests over combinatorial domains
• Abstract: We study preference representation models based on partial lexicographic preference trees (PLP-trees). We propose to represent preference relations as forests of small PLP-trees (PLP-forests), and to use voting rules to aggregate orders represented by the individual trees into a single order to be taken as a model of the agent’s preference relation. We show that when learned from examples, PLP-forests have better accuracy than single PLP-trees. We also show that the choice of a voting rule does not have a major effect on the aggregated order, thus rendering the problem of selecting the “right” rule less critical. Next, for the proposed PLP-forest preference models, we develop methods to compute optimal and near-optimal outcomes, the tasks that appear difficult for some other common preference models. Lastly, we compare our models with those based on decision trees, which brings up questions for future research.
PubDate: 2019-10-01
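As a concrete instance of the aggregation step, here is a Borda-count aggregation of the orders produced by the individual trees. Borda is an illustrative choice; the paper compares several voting rules and finds the choice has little effect on the aggregated order.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate total orders (best-to-worst lists over the same outcomes)
    into one order via Borda scores: an outcome ranked at position i among
    m outcomes earns m - 1 - i points per voter."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for i, outcome in enumerate(ranking):
            scores[outcome] += m - 1 - i
    return sorted(scores, key=lambda o: -scores[o])

# Three "trees" of a small PLP-forest each contribute a ranking.
forest = [["a", "b", "c", "d"],
          ["a", "c", "b", "d"],
          ["b", "a", "c", "d"]]
order = borda_aggregate(forest)
print(order)
```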

• First-order definable counting-only queries
• Abstract: Many data sources can be represented easily by collections of sets of objects. For several practical queries on such collections of sets of objects, the answer does not depend on the precise composition of these sets, but only on the number of sets to which each object belongs. This is the case k = 1 of the more general situation where the query answer only depends on the number of sets to which each collection of at most k objects belongs. We call such queries k-counting-only. Here, we focus on k-SyCALC, i.e., k-counting-only queries that are first-order definable. As k-SyCALC is semantically defined, it is not surprising that it is already undecidable whether a first-order query is in 1-SyCALC. Therefore, we introduce SimpleCALC-k, a syntactically defined (strict) fragment of k-SyCALC. It turns out that many practical queries in k-SyCALC can already be expressed in SimpleCALC-k. We also define the query language GCount-k, which expresses counting-only queries directly by using generalized counting terms, and show that this language is equivalent to SimpleCALC-k. We prove that the k-counting-only queries form a non-collapsing hierarchy: for every k, there exist (k + 1)-counting-only queries that are not k-counting-only. This result specializes to both SimpleCALC-k and k-SyCALC. Finally, we establish a strong dichotomy between 1-SyCALC and SimpleCALC-k on the one hand and 2-SyCALC on the other by showing that satisfiability, validity, query containment, and query equivalence are decidable for the former two languages, but not for the latter.
PubDate: 2019-10-01

• Lattice map spiking neural networks (LM-SNNs) for clustering and
classifying image data
• Abstract: Spiking neural networks (SNNs) with a lattice architecture are introduced in this work, combining several desirable properties of SNNs and self-organizing maps (SOMs). Networks are trained with biologically motivated, unsupervised learning rules to obtain a self-organized grid of filters via cooperative and competitive excitatory-inhibitory interactions. Several inhibition strategies are developed and tested, such as (i) incrementally increasing the inhibition level over the course of network training, and (ii) switching the inhibition level from low to high (two-level) after an initial training segment. During the labeling phase, the spiking activity generated by data with known labels is used to assign neurons to categories of data, which are then used to evaluate the network’s classification ability on a held-out set of test data. Several biologically plausible evaluation rules are proposed and compared, including a population-level confidence rating and an n-gram inspired method. The effectiveness of the proposed self-organized learning mechanism is tested using the MNIST benchmark dataset, as well as using images produced by playing the Atari Breakout game.
PubDate: 2019-09-21
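The two inhibition strategies reduce to simple schedules over training steps: a linear ramp, and a low-to-high switch after an initial segment. The numeric levels and switch point below are invented for illustration, not the paper's settings.

```python
def inhibition_level(step, total_steps, low=1.0, high=17.5,
                     strategy="two-level", switch_frac=0.5):
    """Return the inhibition level at a given training step under either
    an incremental ramp or a two-level (low-then-high) schedule."""
    if strategy == "ramp":
        return low + (high - low) * step / max(total_steps - 1, 1)
    if strategy == "two-level":
        return low if step < switch_frac * total_steps else high
    raise ValueError(f"unknown strategy: {strategy}")

ramp = [round(inhibition_level(s, 10, strategy="ramp"), 2) for s in range(10)]
two_level = [inhibition_level(s, 10) for s in range(10)]
print(ramp)
print(two_level)
```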

• Compilation of static and evolving conditional knowledge bases for
computing induced nonmonotonic inference relations
• Abstract: Several different semantics have been proposed for conditional knowledge bases $$\mathcal {R}$$ containing qualitative conditionals of the form “If A, then usually B”, leading to different nonmonotonic inference relations induced by $$\mathcal {R}$$ . For the notion of c-representations, which are a subclass of all ranking functions accepting $$\mathcal {R}$$ , a skeptical inference relation, called c-inference and taking all c-representations of $$\mathcal {R}$$ into account, has been suggested. In this article, we develop a 3-phase compilation scheme for both knowledge bases and skeptical queries to constraint satisfaction problems. In addition to skeptical c-inference, we show how credulous and weakly skeptical c-inference can also be modelled as constraint satisfaction problems, and that the compilation scheme extends to such queries. We further extend the compilation approach to knowledge bases evolving over time. The compiled form of $$\mathcal {R}$$ is reused for incrementally compiling extensions, contractions, and updates of $$\mathcal {R}$$ . For each compilation step, we prove its soundness and completeness, and demonstrate significant efficiency benefits when querying the compiled version of $$\mathcal {R}$$ . These findings are also supported by experiments with the software system InfOCF that employs the proposed compilation scheme.
PubDate: 2019-08-30
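The basic acceptance test underlying ranking-function semantics can be sketched directly: a ranking function (OCF) accepts "if A, then usually B" iff the most plausible A-and-B world has strictly lower rank (is less surprising) than the most plausible A-and-not-B world. The bird/flies worlds and ranks below are a toy example, not the article's compilation scheme.

```python
def accepts(rank, A, B, worlds):
    """Return True iff the OCF `rank` accepts the conditional (B|A):
    min rank of A-and-B worlds < min rank of A-and-not-B worlds."""
    best_AB = min((rank[w] for w in worlds if A(w) and B(w)),
                  default=float("inf"))
    best_AnB = min((rank[w] for w in worlds if A(w) and not B(w)),
                   default=float("inf"))
    return best_AB < best_AnB

# Worlds over (bird, flies); a ranking where birds usually fly.
worlds = [(b, f) for b in (True, False) for f in (True, False)]
rank = {(True, True): 0, (True, False): 1,
        (False, True): 0, (False, False): 0}
res = accepts(rank, A=lambda w: w[0], B=lambda w: w[1], worlds=worlds)
print(res)
```

A c-representation is a ranking function of a particular constrained shape that accepts every conditional in $$\mathcal {R}$$ ; the article compiles the search for such rankings into constraint satisfaction problems.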

• Statistical learning based on Markovian data: maximal deviation
inequalities and learning rates
• Abstract: In statistical learning theory, numerous works have established non-asymptotic bounds assessing the generalization capacity of empirical risk minimizers under a large variety of complexity assumptions on the class of decision rules over which optimization is performed. These bounds rely on sharp control of the uniform deviation of i.i.d. averages from their expectation, while generally ignoring possible dependence across training data. The purpose of this paper is to show that similar results can be obtained when statistical learning is based on a data sequence drawn from a (Harris positive) Markov chain X, through the running example of estimating minimum volume sets (MV-sets) related to X’s stationary distribution, an unsupervised statistical learning approach to anomaly/novelty detection. Based on novel maximal deviation inequalities that we establish using the regenerative method, we derive learning rate bounds that depend not only on the complexity of the class of candidate sets but also on the ergodicity rate of the chain X, expressed in terms of tail conditions on the length of the regenerative cycles. In particular, this approach, fully tailored to Markovian data, permits interpreting the rate bounds in frequentist terms, in contrast to alternative coupling techniques based on mixing conditions: the larger the expected number of cycles over a trajectory of finite length, the more accurate the MV-set estimates. Beyond the theoretical analysis, this phenomenon is supported by illustrative numerical experiments.
PubDate: 2019-08-29
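In one dimension, the empirical MV-set reduces to the shortest interval covering a fraction alpha of the sample; points outside it are flagged as anomalies. This sketch ignores the Markovian aspects that are the article's actual subject and uses i.i.d. toy data with a planted cluster of outliers.

```python
import numpy as np

def mv_interval(sample, alpha=0.95):
    """Empirical minimum-volume interval: the shortest interval containing
    a fraction alpha of the sorted sample."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    k = int(np.ceil(alpha * n))            # points the interval must cover
    widths = x[k - 1:] - x[:n - k + 1]     # width of each candidate window
    i = int(np.argmin(widths))
    return x[i], x[i + k - 1]

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 980),   # bulk of the distribution
                       rng.uniform(8, 12, 20)]) # planted anomalies
lo, hi = mv_interval(data, alpha=0.95)
coverage = float(np.mean((data >= lo) & (data <= hi)))
print(lo, hi, coverage)
```

The recovered interval hugs the Gaussian bulk and excludes the planted cluster, which is exactly the anomaly-detection use of MV-sets.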

• Analyzing cognitive processes from complex neuro-physiologically based
data: some lessons
• Abstract: In the past few years, owing to their ability to extract multivariate correlations, machine learning tools have become more and more important for discovering information in very complex data sets. This has found specific application to various data sets related to human brain tasks. However, this is far from a simple and direct methodology. Some of the issues involve dealing with extreme signal-to-noise ratios, as well as variation between individuals. Moreover, the huge number of features relative to the number of data points is a challenge. As a result, in attacking these problems we found it necessary to adapt a large variety of methodologies, chosen to overcome specific obstacles in specific problems. In this paper, we describe our experience working on several examples at the edge of these systems’ capabilities and describe the varied methodologies we needed to overcome these sorts of challenges. We hope these cases will serve as a guideline for other applications.
PubDate: 2019-08-28

• Dynamic search trajectory methods for global optimization
• Abstract: A detailed review of dynamic search trajectory methods for global optimization is given. In addition, a family of dynamic search trajectory methods, created using numerical methods for solving autonomous ordinary differential equations, is presented. Furthermore, a strategy for developing globally convergent methods that is applicable to the proposed family is given, and the corresponding theorem is proved. Finally, theoretical results are given for obtaining nonmonotone convergent methods that exploit the accumulated information regarding the most recent values of the objective function.
PubDate: 2019-08-27
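The simplest member of this family is explicit Euler applied to the gradient flow x' = -∇f(x): discretizing the autonomous ODE turns it into a search trajectory. The step size, horizon, and test function below are illustrative choices, not the paper's methods.

```python
import numpy as np

def gradient_flow_trajectory(grad, x0, step=0.05, n_steps=200):
    """Trace the search trajectory of the autonomous ODE x' = -grad f(x)
    using the explicit Euler scheme."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        x = x - step * grad(x)
        path.append(x.copy())
    return np.array(path)

# f(x) = ||x||^2 has gradient 2x; the trajectory contracts to the minimum.
path = gradient_flow_trajectory(lambda x: 2 * x, x0=[3.0, -4.0])
print(path[-1])
```

Different ODE solvers (and different ODEs) yield different members of the family; the global-convergence strategy in the article is about choosing these so the trajectory is not trapped by the first local minimizer.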

• Targeting solutions in Bayesian multi-objective optimization: sequential
and batch versions
• Abstract: Multi-objective optimization aims at finding trade-off solutions to conflicting objectives, which constitute the Pareto optimal set. In the context of expensive-to-evaluate functions, it is impossible, and often not informative, to look for the entire set. As an end-user would typically prefer a certain part of the objective space, we modify the Bayesian multi-objective optimization algorithm, which uses Gaussian Processes and works by maximizing the Expected Hypervolume Improvement, to focus the search on the preferred region. The combined effects of the Gaussian Processes and the targeting strategy lead to particularly efficient convergence to the desired part of the Pareto set. To take advantage of parallel computing, a multi-point extension of the targeting criterion is proposed and analyzed.
PubDate: 2019-08-20
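The deterministic core of the Expected Hypervolume Improvement is the hypervolume a candidate point would add to the current front; for two minimization objectives it is computable with a simple sweep. The fronts and reference point below are invented, and the expectation over the Gaussian Process posterior is omitted.

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a front of 2-D minimization objectives,
    relative to reference point ref, via a left-to-right sweep."""
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                    # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def hv_improvement(front, candidate, ref):
    """Hypervolume gained by adding one candidate point to the front."""
    return hypervolume_2d(front + [candidate], ref) - hypervolume_2d(front, ref)

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
gain = hv_improvement(front, (1.5, 1.5), ref=(4.0, 4.0))
print(gain)
```

A dominated candidate yields zero gain, which is what lets the criterion steer evaluations toward (a preferred part of) the Pareto front.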

• Fundamental conditions on the sampling pattern for union of low-rank
subspaces retrieval
• Abstract: This paper investigates the fundamental conditions on the locations of the sampled entries, i.e., the sampling pattern, for finite completability of a matrix that represents the union of several subspaces with given ranks. In contrast with the existing analysis on the Grassmannian manifold for conventional matrix completion, we propose a geometric analysis of the manifold structure for the union of several subspaces, which incorporates all given rank constraints simultaneously. To obtain deterministic conditions on the sampling pattern, we characterize the algebraic independence of a set of polynomials defined by the sampling pattern, which is closely related to finite completion. We also give a probabilistic condition in terms of the number of samples per column, i.e., the sampling probability, which leads to finite completability with high probability. Furthermore, using the proposed geometric analysis for finite completability, we characterize sufficient conditions on the sampling pattern that ensure there exists only one completion of the sampled data.
PubDate: 2019-08-16

• The price to pay for forgoing normalization in fair division of
indivisible goods
• Abstract: We study the complexity of fair division of indivisible goods and consider settings where agents can have nonzero utility for the empty bundle. This is a deviation from a common normalization assumption in the literature, and we show that this inconspicuous change can lead to an increase in complexity: In particular, while an allocation maximizing social welfare by the Nash product is known to be easy to detect in the normalized setting whenever there are as many agents as there are resources, without normalization it can no longer be found in polynomial time, unless P = NP. The same statement also holds for egalitarian social welfare. Moreover, we show that it is NP-complete to decide whether there is an allocation whose Nash product social welfare is above a certain threshold if the number of resources is a multiple of the number of agents. Finally, we consider elitist social welfare and prove that the increase in expressive power by allowing negative coefficients again yields NP-completeness.
PubDate: 2019-08-16
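Given the hardness results, small instances can still be solved by brute force. The sketch below maximizes the Nash product over all allocations, with additive values for goods plus a possibly nonzero utility for the empty bundle, matching the unnormalized setting above; the numbers are invented for illustration.

```python
from itertools import product
from math import prod

def max_nash_welfare(u, base):
    """Brute-force the allocation of indivisible goods maximizing the
    Nash product (product of agent utilities). u[i][g] is agent i's
    additive value for good g; base[i] is the utility of the empty
    bundle. Runs in n^m time for n agents and m goods."""
    n, m = len(u), len(u[0])
    best_val, best_alloc = float("-inf"), None
    for alloc in product(range(n), repeat=m):   # alloc[g] = receiving agent
        vals = [base[i] + sum(u[i][g] for g in range(m) if alloc[g] == i)
                for i in range(n)]
        nash = prod(vals)
        if nash > best_val:
            best_val, best_alloc = nash, alloc
    return best_val, best_alloc

u = [[4, 1, 2], [1, 3, 3]]                     # 2 agents, 3 goods
val0, alloc0 = max_nash_welfare(u, base=[0, 0])
val1, alloc1 = max_nash_welfare(u, base=[2, 1])  # nonzero empty bundles
print(val0, alloc0)
print(val1, alloc1)
```

The exponential enumeration is what the NP-hardness result suggests cannot be avoided in general once the normalization assumption is dropped.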

• Checking inference-proofness of attribute-disjoint and
duplicate-preserving fragmentations
• Abstract: The transmission of one’s own, partly confidential data to another agent carries the risk of enabling the receiver to infer information he is not entitled to learn. We consider a specific countermeasure against unwanted inferences about associations between data values whose combinations of attributes are declared sensitive. This countermeasure fragments a relation instance into attribute-disjoint and duplicate-preserving projections such that no sensitive attribute combination is contained in any projection. Unfortunately, the intended goal of inference-proofness is not always accomplished. Inferences might be based on combinatorial reasoning, since duplicate preservation implies that the frequencies of value associations in visible fragments equal those in the original relation instance. In addition, the receiver might exploit entailment reasoning about functional dependencies, numerical dependencies and tuple-generating dependencies, as presumably known from the underlying database schema. We investigate possible interferences of combinatorial reasoning and entailment reasoning and identify basic conditions under which a fragmentation violates inference-proofness. Moreover, we outline a comprehensive method to effectively check the inference-proofness of a given fragmentation, and we experimentally evaluate the computational efficiency of a partial prototype implementation.
PubDate: 2019-08-09
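The syntactic side of the countermeasure is easy to state in code: fragments must be attribute-disjoint, and no fragment may contain a whole sensitive attribute combination. As the abstract notes, passing this check does not by itself guarantee inference-proofness. The attribute names below are invented.

```python
def check_fragmentation(fragments, sensitive):
    """Check the two syntactic conditions: the fragments are
    attribute-disjoint, and no sensitive attribute combination is fully
    contained in a single fragment."""
    seen = set()
    for frag in fragments:
        f = set(frag)
        if f & seen:
            return False                  # fragments share an attribute
        seen |= f
        if any(set(s) <= f for s in sensitive):
            return False                  # a sensitive combination is exposed
    return True

fragments = [{"name", "zip"}, {"illness", "salary"}]
sensitive = [{"name", "illness"}, {"zip", "salary"}]
ok = check_fragmentation(fragments, sensitive)
bad = check_fragmentation([{"name", "illness"}], sensitive)
print(ok, bad)
```

The article's contribution is precisely that a receiver can sometimes defeat such a syntactically valid fragmentation by combining duplicate counts with schema dependencies.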

• Revising event calculus theories to recover from unexpected observations
• Abstract: Recent extensions of the Event Calculus have resulted in powerful formalisms able to reason about a multitude of commonsense phenomena in causal domains, involving epistemic notions, functional fluents and probabilistic aspects, among others. Less attention has been paid to the problem of automatically revising (correcting) a knowledge base when an observation contradicts inferences made about the world state. Despite mature work in the related field of belief revision, adapting such results to the case of action theories is non-trivial. This paper describes how to address this problem for deterministic, yet partially observable, domains by proposing a generic framework in the context of the Event Calculus, along with ASP encodings of the revision algorithm and a web-based tester of the formalism implementation.
PubDate: 2019-08-09

• A linear relation between input and first layer in neural networks
• Abstract: Artificial neural networks are growing in both number of applications and complexity, which makes minimizing the number of units important for some practical implementations. A particular problem is the minimum number of units that a feed-forward neural network needs in its first layer. To study this problem, we define a family of classification problems following a continuity hypothesis, in which inputs that are close to some set of points may share the same category. Given a set S of k-dimensional inputs, let $$\mathcal {N}$$ be a feed-forward neural network that classifies any input in S within a fixed error. We prove that $$\mathcal {N}$$ requires $${\Theta } \left (k \right )$$ units in the first layer if $$\mathcal {N}$$ can solve any instance from the given family of classification problems. Furthermore, this asymptotic result is optimal.
PubDate: 2019-08-08
