Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.

Abstract: We consider the problem of cost-sensitive multiclass classification, where we would like to increase the sensitivity of an important class at the expense of a less important one. We adopt an apportioned margin framework to address this problem, which enables an efficient margin shift between classes that share the same boundary. The decision boundary between each pair of classes divides the margin between them in accordance with a given prioritization vector, which yields a tighter error bound for the important classes while also reducing the overall out-of-sample error. In addition to demonstrating an efficient implementation of our framework, we derive generalization bounds, demonstrate Fisher consistency, adapt the framework to Mercer kernels and to neural networks, and report promising empirical results on all accounts. PubDate: 2021-10-08
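As a toy illustration of the margin-apportioning idea (not the paper's actual framework), the sketch below places a one-dimensional decision boundary between two class representatives so that each class's margin share is proportional to its priority; the 1-D setting and all names are assumptions of this sketch.

```python
# Hypothetical 1-D margin apportioning: the boundary between class
# representatives a and b is placed so that class a's share of the gap
# is proportional to its priority p_a.

def boundary_position(a, b, p_a, p_b):
    """Place the boundary the fraction p_a/(p_a+p_b) of the way from a
    toward b, giving the higher-priority class the larger margin."""
    return a + (b - a) * p_a / (p_a + p_b)

def classify(x, boundary):
    """Label a 1-D point by which side of the apportioned boundary it is on."""
    return "a" if x < boundary else "b"
```

With equal priorities the boundary is the usual midpoint; raising class a's priority pushes the boundary away from a, so more ambiguous points are labeled with the important class.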

Abstract: Unification is a central operation in constructing a range of computational logic systems based on first-order and higher-order logics. First-order unification has several properties that guide its incorporation in such systems. In particular, first-order unification is decidable, unary, and can be performed on untyped term structures. None of these three properties hold for full higher-order unification: unification is undecidable, unifiers can be incomparable, and term-level typing can dominate the search for unifiers. The so-called pattern subset of higher-order unification was designed to be a small extension to first-order unification that respects the laws governing λ-binding (i.e., the equalities for α, β, and η-conversion) but also satisfies those three properties. While the pattern fragment of higher-order unification has been used in numerous implemented systems and in various theoretical settings, it is too weak for many applications. This paper defines an extension of pattern unification that should make it more generally applicable, especially in proof assistants that allow for higher-order functions. This extension’s main idea is that the arguments to a higher-order, free variable can be more than just distinct bound variables. In particular, such arguments can be terms constructed from (sufficient numbers of) such bound variables using term constructors and where no argument is a subterm of any other argument. We show that this extension to pattern unification satisfies the three properties mentioned above. PubDate: 2021-09-30
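For contrast with the higher-order setting, here is a minimal sketch of first-order unification, the decidable, unary baseline the abstract refers to; the term representation (variables as '?'-prefixed strings, compound terms as tuples) is an assumption of this sketch, not the paper's.

```python
# Minimal first-order unification with occurs check: variables are
# strings starting with '?', compound terms are tuples (functor, args...).

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings to a representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(t1, t2, subst=None):
    """Return a most general unifier as a dict, or None on failure."""
    subst = dict(subst or {})
    stack = [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, subst):
                return None          # occurs check: no finite unifier
            subst[a] = b
        elif is_var(b):
            stack.append((b, a))
        elif (isinstance(a, tuple) and isinstance(b, tuple)
                and a[0] == b[0] and len(a) == len(b)):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None              # clash of distinct functors/constants
    return subst
```

Because the algorithm always terminates and any two unifiers of the same pair are instances of the one it returns, this baseline is decidable and unary, which is exactly what fails for full higher-order unification.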

Abstract: Generalization, also called anti-unification, is the dual of unification. A generalizer of two terms t and \(t^{\prime }\) is a term \(t^{\prime \prime }\) of which t and \(t^{\prime }\) are substitution instances. The dual of most general equational unifiers is that of least general equational generalizers, i.e., most specific anti-instances modulo equations. In a previous work, we extended the classical untyped generalization algorithm to: (1) an order-sorted typed setting with sorts, subsorts, and subtype polymorphism; (2) work modulo equational theories, where function symbols can obey any combination of associativity, commutativity, and identity axioms (including the empty set of such axioms); and (3) the combination of both, which results in a modular, order-sorted equational generalization algorithm. However, Cerna and Kutsia showed that our algorithm is generally incomplete for the case of identity axioms and a counterexample was given. Furthermore, they proved that, in theories with two identity elements or more, generalization with identity axioms is generally nullary, yet it is finitary for both the linear and one-unital fragments, i.e., either solutions with repeated variables are disregarded or the considered theories are restricted to having just one function symbol with an identity or unit element. In this work, we show how we can easily extend our original inference system to cope with the non-linear fragment and identify a more general class than one-unital theories where generalization with identity axioms is finitary. PubDate: 2021-09-25
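The dual operation can be illustrated with a minimal syntactic anti-unification sketch, without sorts or equational axioms; the term representation (tuples for compound terms, '?'-prefixed strings for variables) and helper names are assumptions of this sketch.

```python
# Syntactic least general generalizer: every mismatched subterm pair is
# replaced by a fresh variable, and the same variable is reused for
# repeated pairs (the non-linear case the abstract discusses).

def lgg(t1, t2, store=None, counter=None):
    store = {} if store is None else store   # maps (t1, t2) -> variable
    counter = counter or [0]
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: recurse into the arguments.
        return (t1[0],) + tuple(lgg(a, b, store, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    if t1 == t2:
        return t1
    if (t1, t2) not in store:                # reuse variables across mismatches
        store[(t1, t2)] = "?v%d" % counter[0]
        counter[0] += 1
    return store[(t1, t2)]
```

Reusing the variable for repeated mismatch pairs is what makes the result *least* general: f(a, a) and f(b, b) generalize to f(?v0, ?v0) rather than the strictly more general f(?v0, ?v1).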

Abstract: In this paper we consider two closely related problems of selecting a diverse subset of points with respect to squared Euclidean distance. Given a set of points in Euclidean space, the first problem is to find a subset of a specified size M maximizing the sum of squared Euclidean distances between the chosen points. The second problem asks for a minimum cardinality subset of points, given a constraint on the sum of squared Euclidean distances between them. We consider the computational complexity of both problems and propose exact dynamic programming algorithms in the case of integer input data. If the dimension of the Euclidean space is bounded by a constant, these algorithms have a pseudo-polynomial time complexity. We also develop an FPTAS for the special case of the first problem, where the dimension of the Euclidean space is bounded by a constant. PubDate: 2021-09-22
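For tiny instances, the first problem can be checked against a brute-force reference like the following hedged sketch; it is exponential in M, so it is only a sanity check, not the paper's dynamic programming algorithm or FPTAS.

```python
# Brute-force reference for the first problem: among all size-m subsets,
# pick the one maximizing the sum of pairwise squared Euclidean distances.

from itertools import combinations

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def dispersion(subset):
    """Sum of squared Euclidean distances over all pairs in the subset."""
    return sum(sq_dist(p, q) for p, q in combinations(subset, 2))

def best_subset(points, m):
    """Exhaustively search all C(n, m) subsets; fine only for small n."""
    return max(combinations(points, m), key=dispersion)
```

The second problem (minimum cardinality subject to a dispersion lower bound) can be checked the same way by iterating over subset sizes in increasing order.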

Abstract: We briefly review properties of explainable AI proposed by various researchers. We take a structural approach to the problem of explainable AI, examine the feasibility of these aspects and extend them where appropriate. Afterwards, we review combinatorial methods for explainable AI which are based on combinatorial testing-based approaches to fault localization. Last, we view the combinatorial methods for explainable AI through the lens provided by the properties of explainable AI that are elaborated in this work. We pose resulting research questions that need to be answered and point towards possible solutions, which involve a hypothesis about a potential parallel between software testing, human cognition and brain capacity. PubDate: 2021-09-20

Abstract: This research investigates an integrated inventory and production scheduling problem (IIPSP) for a manufacturer that deals with perishable goods. The objective is to find an optimal schedule that minimizes the sum of inventory cost and production cost. Both the single-plant and multi-plant problems are investigated in this paper. For the single-plant problem, we prove that it is optimal to arrange the processing of raw materials in descending order of the product of consumption rate and unit inventory cost. For the more complex multi-plant problem, we first prove that it is NP-hard, and then we propose a hybrid intelligent algorithm to solve it. Experiments show that the proposed algorithm is superior to several other algorithms in both effectiveness and efficiency. PubDate: 2021-09-01 DOI: 10.1007/s10472-021-09739-1
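The single-plant ordering rule stated in the abstract can be sketched as follows; the field names and input format are illustrative assumptions, not the paper's notation.

```python
# Sketch of the single-plant rule: process raw materials in descending
# order of consumption_rate * unit_inventory_cost. Field names are
# hypothetical.

def schedule_order(materials):
    """materials: list of (name, consumption_rate, unit_inventory_cost).
    Returns names in the stated optimal processing order."""
    return [name for name, rate, cost in
            sorted(materials, key=lambda m: m[1] * m[2], reverse=True)]
```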

Abstract: Software-Defined Networking (SDN) has gained tremendous attention in the past few years for its advantages in network controllability. Nonetheless, for budget reasons, the deployment of SDN in a legacy network is likely to span multiple periods over months or years. Network operators, especially of networks consisting of a thousand or more nodes, are eager to understand how legacy networks can be migrated gradually towards SDN under a variety of constraints. Network reliability should be the utmost concern during SDN deployment in a legacy network. In particular, node exclusion is essential in the SDN deployment process, as excluding critical nodes during the process greatly improves network stability and reliability. We therefore propose a heuristic algorithm for multi-period SDN node migration in legacy networks that respects node exclusion and budget constraints. We first formulate the problem as an Integer Linear Programming (ILP) model, and we evaluate both the ILP model and our heuristic algorithm with regard to solution quality and processing time. The results show that our algorithm maintains its overall correctness with \(O(N^2)\) polynomial time complexity. In addition to computing a migration sequence, our algorithm reveals the impact of each node in SDN deployment through Key Node, Critical Node, and Diminishing Return analyses. This information supports better decisions by network operators during hybrid SDN deployment. Ultimately, the results of our algorithm offer valuable insights that can reduce investment by up to 83% while reaping the most performance gain from SDN deployment in a legacy network. PubDate: 2021-09-01 DOI: 10.1007/s10472-021-09750-6

Abstract: This paper reviews papers on applications of VNS in the health care area by analyzing the characteristics of VNS across different problems. In the health care field, many complex optimization problems must be tackled in a short time while considering multiple influencing factors, such as personnel preferences and resource limitations. As a metaheuristic, the variable neighborhood search (VNS) algorithm can obtain an approximate solution to a complex problem in a short time, and it has been widely used on real-world health care optimization problems, based on systematic neighborhood changes and perturbation operations when needed. We classify the surveyed papers into five categories based on topic: home health care problems, operating room problems, nurse rostering problems, routing optimization problems, and other topics. For each topic, the detailed operations of VNS and the differences in solving the various problems are discussed. In addition, we analyze the studied problems and summarize the approaches used in the literature. The application tendency of VNS in the health care area is also discussed at the end of the paper. PubDate: 2021-09-01 DOI: 10.1007/s10472-021-09727-5
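A generic VNS skeleton of the kind such papers instantiate might look like the following sketch; the problem-specific pieces (shaking, local search, cost) are supplied by the caller, and everything here is illustrative rather than any particular surveyed algorithm.

```python
# Basic VNS loop: shake within neighborhood k, local-search the result,
# and restart from the smallest neighborhood on improvement.

import random

def vns(initial, cost, shake, local_search, k_max, iters, seed=0):
    rng = random.Random(seed)
    best = initial
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k, rng))
            if cost(candidate) < cost(best):
                best, k = candidate, 1   # improvement: restart shaking
            else:
                k += 1                   # escalate to a larger neighborhood
    return best

# Toy usage: minimize x^2 over the integers.
def descend(x):
    """Greedy local search to the nearest local minimum of x^2."""
    while (x - 1) ** 2 < x * x:
        x -= 1
    while (x + 1) ** 2 < x * x:
        x += 1
    return x

result = vns(10, lambda x: x * x, lambda x, k, rng: x + rng.randint(-k, k),
             descend, k_max=3, iters=5)
```

Health care applications differ mainly in what the neighborhoods and local search do (e.g. swapping nurse shifts or reordering home-visit routes); the surrounding loop is essentially this one.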

Abstract: In this paper, we address the Single Source Capacitated Facility Location Problem (SSCFLP), which considers a set of possible locations for opening facilities and a set of clients whose demand must be satisfied. The objective is to minimize the cost of assigning the clients to the facilities, ensuring that each client is served by exactly one facility without exceeding the capacity of the facilities. We propose a Relaxation Adaptive Memory Programming (RAMP) heuristic for solving the SSCFLP that efficiently explores the relation between the primal and the dual sides of this combinatorial optimisation problem. Computational experiments demonstrate that the proposed heuristic is very effective in terms of solution quality, with reasonable computing times. PubDate: 2021-09-01 DOI: 10.1007/s10472-021-09756-0

Abstract: In many Natural Language Processing problems the combination of machine learning and optimization techniques is essential. One of these problems is the estimation of the human effort needed to improve a text that has been translated using a machine translation method. Recent advances in this area have shown that Gaussian Processes can be effective in post-editing effort prediction. However, Gaussian Processes require a kernel function to be defined, the choice of which highly influences the quality of the prediction. On the other hand, the extraction of features from the text can be very labor-intensive, although recent advances in sentence embedding have shown that this process can be automated. In this paper, we use a Genetic Programming algorithm to evolve kernels for Gaussian Processes to predict post-editing effort based on sentence embeddings. We show that the combination of evolutionary optimization and Gaussian Processes removes the need for a priori specification of the kernel choice, and, by using a multi-objective variant of the Genetic Programming approach, kernels that are suitable for predicting several metrics can be learned. We also investigate the effect that the choice of the sentence embedding method has on the kernel learning process. PubDate: 2021-09-01 DOI: 10.1007/s10472-021-09751-5

Abstract: The open shop is a classical scheduling problem known since 1976, which can be described as follows. A number of jobs have to be processed by a given set of machines, each machine should perform an operation on every job, and the processing times of all the operations are given. One has to construct a schedule performing all the operations that minimizes the finish time, also known as the makespan. The open shop problem is known to be NP-hard for three or more machines, while it is polynomially solvable in the case of two machines. We consider the routing open shop problem, a generalization of both the open shop problem and the metric traveling salesman problem. In this setting, jobs are located at the nodes of a transportation network and have to be processed by mobile machines, initially located at a predefined depot. The machines have to process all the jobs and return to the depot so as to minimize the makespan. A feasible schedule is referred to as normal if its makespan coincides with the standard lower bound. We introduce the Irreducible Bin Packing decision problem, use it to describe new sufficient conditions of normality for the two-machine problem, and discuss the possibility of extending these results to the problem with three or more machines. To that end, we present two new computer-aided optima localization results. PubDate: 2021-09-01 DOI: 10.1007/s10472-021-09759-x
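The "standard lower bound" of the routing variant builds on the classical open shop bound; the sketch below computes only that classical, non-routing part (the makespan can never beat the heaviest machine load or the longest job), as a hedged illustration of what "normal" schedules must match.

```python
# Classical open shop makespan lower bound: p[i][j] is the processing
# time of job j on machine i. The routing variant augments this bound
# with travel-time terms, which this sketch omits.

def open_shop_lower_bound(p):
    machine_loads = [sum(row) for row in p]          # total work per machine
    job_lengths = [sum(col) for col in zip(*p)]      # total work per job
    return max(max(machine_loads), max(job_lengths))
```

For the plain two-machine open shop this bound is always attainable, which is one reason the two-machine case is polynomially solvable.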

Abstract: In this paper, we address the Capacitated Facility Location Problem (CFLP), in which the assignment of facilities to customers must ensure enough facility capacity and all the customers must be served. We propose both sequential and parallel Relaxation Adaptive Memory Programming approaches for the CFLP, combining a Lagrangean subgradient search with an improvement method to explore primal-dual relationships and create advanced memory structures that integrate information from both primal and dual solution spaces. Computational experiments demonstrating the effectiveness of this approach are presented and discussed. PubDate: 2021-09-01 DOI: 10.1007/s10472-021-09757-z

Abstract: The unification problem in a propositional logic is to determine, given a formula φ, whether there exists a substitution σ such that σ(φ) is in that logic. In that case, σ is a unifier of φ. When a unifiable formula has minimal complete sets of unifiers, it is either infinitary, finitary, or unitary, depending on the cardinality of its minimal complete sets of unifiers. Otherwise, it is nullary. In this paper, we prove that in modal logic \(\mathbf {K}+\square \square \bot \), unifiable formulas are either finitary or unitary. PubDate: 2021-09-01 DOI: 10.1007/s10472-021-09768-w
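The definition of a unifier can be illustrated in classical propositional logic, where "σ(φ) is in that logic" means "σ(φ) is a tautology"; this truth-table sketch is only a didactic stand-in for the modal logic studied in the paper, and the formula encoding is an assumption of the sketch.

```python
# Formulas are Python predicates over a dict of variable values; sigma is
# a unifier of phi iff sigma(phi) holds under every valuation.

from itertools import product

def is_tautology(formula, variables):
    return all(formula(dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables)))

# phi = (p <-> q) is not a tautology on its own ...
phi = lambda v: v["p"] == v["q"]
# ... but the substitution p := q unifies it: (q <-> q) is a tautology.
sigma_phi = lambda v: v["q"] == v["q"]
```

The classification in the abstract then asks how many essentially different such substitutions a minimal complete set must contain: one (unitary), finitely many (finitary), infinitely many (infinitary), or no minimal complete set at all (nullary).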

Abstract: This paper partially bridges a gap in the literature on Circumscription in Description Logics by investigating the tractability of conjunctive query answering in OWL2’s profiles. It turns out that the data complexity of conjunctive query answering is coNP-hard in circumscribed \(\mathcal {E}{\mathscr{L}}\) and DL-lite, while in circumscribed OWL2-RL conjunctive queries retain their classical semantics. In an attempt to capture nonclassical inferences in OWL2-RL, we consider conjunctive queries with safe negation. They can detect some of the nonclassical consequences of circumscribed knowledge bases, but data complexity becomes coNP-hard. In circumscribed \(\mathcal {E}{\mathscr{L}}\) , answering queries with safe negation is undecidable. PubDate: 2021-08-31 DOI: 10.1007/s10472-021-09770-2

Abstract: Let \(\text{NFA}_{b}(q)\) denote the set of languages accepted by nondeterministic finite automata with q states over an alphabet with b letters. Let \(B_{n}\) denote the set of words of length n. We give a quadratic lower bound on the VC dimension of $$ \text{NFA}_{2}(q)\cap B_{n} = \{L\cap B_{n} \mid L \in \text{NFA}_{2}(q)\} $$ as a function of q. Next, the work of Gruber and Holzer (Theoret. Comput. Sci. 387(2): 155–166, 2007) gives an upper bound for the nondeterministic state complexity of finite languages contained in \(B_{n}\), which we strengthen using our methods. Finally, we give some theoretical and experimental results on the dependence on n of the VC dimension and testing dimension of \(\text{NFA}_{2}(q) \cap B_{n}\). PubDate: 2021-08-24 DOI: 10.1007/s10472-021-09769-9
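The VC dimension of any finite concept class can be checked by brute force on tiny instances, as in this hedged sketch (concepts as frozensets over a finite domain is an assumption of the sketch, and the enumeration is exponential, so it only supports small experiments of the kind mentioned above):

```python
# Brute-force VC dimension of a finite concept class over a finite domain.

from itertools import combinations

def shatters(concepts, points):
    """True iff the concepts realize all 2^|points| labelings of points."""
    patterns = {frozenset(p for p in points if p in c) for c in concepts}
    return len(patterns) == 2 ** len(points)

def vc_dimension(concepts, domain):
    """Largest size of a subset of the domain shattered by the class."""
    d = 0
    for size in range(1, len(domain) + 1):
        if any(shatters(concepts, s) for s in combinations(domain, size)):
            d = size
    return d
```

To study \(\text{NFA}_{2}(q)\cap B_{n}\) this way, one would enumerate the languages of all q-state NFAs restricted to length-n words and feed them in as the concept class.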

Abstract: Conditionals are defeasible rules of the form If A then usually B, and they play a central role in many approaches to nonmonotonic reasoning. Normal forms of conditional knowledge bases consisting of a set of such conditionals are useful to create, process, and compare the knowledge represented by them. In this article, we propose several new normal forms for conditional knowledge bases. Compared to the previously introduced antecedent normal form, the reduced antecedent normal form (RANF) represents conditional knowledge with significantly fewer conditionals by taking into account nonmonotonic entailments licenced by system P. The renaming normal form (ρNF) addresses equivalences among conditional knowledge bases induced by renamings of the underlying signature. Combining the concept of renaming normal form with the other normal forms yields the renaming antecedent normal form (ρANF) and the renaming reduced antecedent normal form (ρRANF). For all newly introduced normal forms, we show their key properties regarding existence, uniqueness, model equivalence, and inferential equivalence, and we develop algorithms transforming every conditional knowledge base into an equivalent knowledge base in the respective normal form. For the most succinct normal form, the ρRANF, we present an algorithm KBρra that systematically generates knowledge bases over a given signature in ρRANF. We show that the generated knowledge bases are consistent, pairwise not antecedentwise equivalent, and pairwise not equivalent under signature renaming. Furthermore, the algorithm is complete in the sense that, when taking signature renamings and model equivalence into account, every consistent knowledge base is generated. Observing that normalizing the set of all knowledge bases over a signature Σ to ρRANF yields exactly the same result as KBρra(Σ) highlights the interrelationship between normal form transformations on the one hand and systematically generating knowledge bases in normal form on the other. PubDate: 2021-08-17 DOI: 10.1007/s10472-021-09745-3

Abstract: This paper introduces a novel approach to computing logic programming semantics. First, a propositional Herbrand base is represented in a vector space and if-then rules in a program are encoded in a matrix. Then the least fixpoint of a definite logic program is computed by matrix-vector products with a non-linear operation. Second, disjunctive logic programs are represented in third-order tensors and their minimal models are computed by algebraic manipulation of tensors. Third, normal logic programs are represented by matrices and third-order tensors, and their stable models are computed. The results of this paper exploit a new connection between linear algebraic computation and symbolic computation, which has the potential to realize logical inference over huge-scale knowledge bases. PubDate: 2021-08-16 DOI: 10.1007/s10472-021-09767-x
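The first idea, computing a least fixpoint by matrix-vector products plus a non-linear step, can be sketched as follows under the simplifying assumption of at most one rule per head atom; the 1/|body| weighting and the threshold at 1 are one common encoding choice for conjunctive bodies, not necessarily the paper's exact construction.

```python
# Row i of M encodes the body of the (single) rule for atom i, with each
# body atom weighted 1/|body| so the threshold >= 1 fires only when the
# whole body is true. Facts are a 0/1 vector.

def step(M, v):
    """One immediate-consequence application: threshold(M v)."""
    return [1 if sum(m * x for m, x in zip(row, v)) >= 1 else 0
            for row in M]

def least_fixpoint(M, facts):
    """Iterate the thresholded matrix-vector product until stable."""
    v = list(facts)
    while True:
        w = [max(a, b) for a, b in zip(v, step(M, v))]
        if w == v:
            return v
        v = w
```

With atoms (p, q, r, s), facts p and q, and rules r :- p, q and s :- r, the iteration derives r in the first step and s in the second, then stabilizes, mirroring the usual bottom-up fixpoint computation.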

Abstract: In recent years, extensive discussion has taken place in the literature on the effectiveness of meta-heuristics, and especially of nature-inspired algorithms. Authors usually state that such an approach should embody a well-balanced exploration and exploitation strategy. Sonar Inspired Optimization (SIO) is a recently presented algorithm that already counts a number of successful real-world applications. Its novel mechanisms provide this equilibrium between exploration and exploitation, as stated in previous studies. In this work, the authors prove that this equilibrium exists and that it is one of the main reasons behind the high-quality performance of SIO. PubDate: 2021-07-22 DOI: 10.1007/s10472-021-09755-1