Constraints
Journal Prestige (SJR): 0.288 · Citation Impact (CiteScore): 1 · Number of Followers: 0 · Hybrid journal (may contain Open Access articles) · ISSN (Print): 1572-9354 · ISSN (Online): 1383-7133 · Published by Springer-Verlag
 A feature-commonality-based search strategy to find high t-wise
covering solutions in feature models
Abstract: t-wise coverage is one of the most important techniques used to test configurations of software for finding bugs. It ensures that interactions between features of a Software Product Line (SPL) are tested. The size of SPLs (thousands of features) makes the problem of finding a good test suite computationally expensive, as the number of t-wise combinations grows exponentially. In this article, we leverage Constraint Programming's search strategies to generate test suites with a high coverage of configurations. We analyse the behaviour of the default random search strategy, and then we propose an improvement based on the commonalities (frequencies) of the features. We experimentally compare it to uniform sampling and state-of-the-art sampling approaches. We show that our new search strategy outperforms all the other approaches and has the fastest running time.
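The coverage measure discussed in this abstract can be made concrete with a small sketch. The function below is a hypothetical illustration (not the authors' tool): it counts, for a suite of boolean configurations, how many of the possible t-wise feature-value combinations are exercised, deliberately ignoring the feature model's cross-tree constraints.

```python
from itertools import combinations

def twise_coverage(configs, n_features, t=2):
    """Fraction of t-wise feature-value combinations covered by a suite.

    configs: iterable of tuples of booleans, one value per feature.
    Cross-tree constraints are ignored, so this is only a rough measure.
    """
    covered = set()
    for cfg in configs:
        for idx in combinations(range(n_features), t):
            covered.add((idx, tuple(cfg[i] for i in idx)))
    n_groups = sum(1 for _ in combinations(range(n_features), t))
    return len(covered) / (n_groups * 2 ** t)

# Two configurations over 3 features cover 6 of the 12 pairwise combinations.
print(twise_coverage([(True, True, False), (False, False, True)], 3))  # → 0.5
```

The denominator grows as \(\binom{n}{t} 2^t\), which is the exponential blow-up the abstract refers to.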
PubDate: 2023-11-30

 Learning to select SAT encodings for pseudo-Boolean and linear integer
constraints
Abstract: Many constraint satisfaction and optimisation problems can be solved effectively by encoding them as instances of the Boolean Satisfiability problem (SAT). However, even the simplest types of constraints have many encodings in the literature with widely varying performance, and the problem of selecting suitable encodings for a given problem instance is not trivial. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however, we obtain better performance with a new set of features specifically designed for pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes. Our results compare favourably to AutoFolio when using the same feature set. We discuss the relative importance of instance features to the task of selecting the best encodings, and compare several variations of the machine learning method.
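The flavour of per-instance encoding selection can be illustrated with a deliberately simplified stand-in for the paper's supervised models: pick the encoding that performed best on the most similar training instance. The feature vectors and the encoding names ("sequential", "totalizer") below are invented for the example, not taken from the paper.

```python
def select_encoding(features, training):
    """Nearest-neighbour sketch of per-instance encoding selection.

    training: list of (feature_vector, best_encoding) pairs observed
    on instances whose fastest encoding is already known.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best = min(training, key=lambda pair: sq_dist(pair[0], features))
    return best

# Hypothetical features: (number of terms, coefficient skew).
training = [((10, 0.2), "sequential"), ((500, 0.9), "totalizer")]
print(select_encoding((480, 0.8), training))  # → totalizer
```

A real selector would normalise features and use a learned model rather than raw nearest neighbour, but the input/output contract is the same.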
PubDate: 2023-11-02

 Reasoning and inference for (Maximum) satisfiability: new insights

Abstract: At the heart of computer science and artificial intelligence, logic is often used as a powerful language to model and solve complex problems that arise in academia and in real-world applications. A well-known formalism in this context is the Satisfiability (SAT) problem, which simply checks whether a given propositional formula, in the form of a set of constraints called clauses, can be satisfied. A natural optimization extension of this problem is Maximum Satisfiability (MaxSAT), which consists in determining the maximum number of clausal constraints that can be satisfied within the formula. In our work, we are interested in studying the power and limits of inference and reasoning in the context of (Maximum) Satisfiability. Our first contributions revolve around investigating inference in SAT and MaxSAT solving. First, we study statistical inference within a Multi-Armed Bandit (MAB) framework for online selection of branching heuristics in SAT, and we show that it can further enhance the efficiency of modern clause-learning solvers. Moreover, we provide further insights on the power of inference in Branch and Bound algorithms for MaxSAT solving through the property of UP-resilience. Our contributions also extend to SAT and MaxSAT proof theory. In particular, we attempt to theoretically bridge the gap between SAT and MaxSAT inference.
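The MAB component mentioned here can be pictured with the standard UCB1 selection rule. This is a generic sketch, not the thesis's exact bandit formulation: each arm stands for a branching heuristic, and the reward would be some per-restart measure of solver progress.

```python
import math

def ucb1_select(counts, rewards):
    """UCB1 arm selection over candidate branching heuristics.

    counts[i]  = times heuristic i was used (e.g. one use per restart),
    rewards[i] = its accumulated reward (some solver progress measure).
    Returns the arm maximising empirical mean + exploration bonus.
    """
    total = sum(counts)
    best, best_score = 0, float("-inf")
    for arm in range(len(counts)):
        if counts[arm] == 0:
            return arm  # try every heuristic once before exploiting
        score = rewards[arm] / counts[arm] \
            + math.sqrt(2 * math.log(total) / counts[arm])
        if score > best_score:
            best, best_score = arm, score
    return best

# After 10 restarts each, the heuristic with higher mean reward is chosen.
print(ucb1_select([10, 10], [9.0, 1.0]))  # → 0
```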
PubDate: 2023-10-23

 CSP beyond tractable constraint languages

Abstract: The constraint satisfaction problem (CSP) is among the most studied computational problems. While NP-hard, many tractable subproblems have been identified (Bulatov 2017, Zhuk 2017). Backdoors, introduced by Williams, Gomes, and Selman (2003), gradually extend such a tractable class to all CSP instances of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a CSP instance and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021) for SAT, is a more refined distance measure, which admits the parallel utilization of different backdoor variables. Bounded backdoor size implies bounded backdoor depth, but there are instances of constant backdoor depth and arbitrarily large backdoor size. Dreier, Ordyniak, and Szeider (2022) provided fixed-parameter algorithms for finding backdoors of small depth into the classes of Horn and Krom formulas. In this paper, we consider backdoor depth for CSP. We consider backdoors w.r.t. tractable subproblems \(C_{\varvec{\Gamma }}\) of the CSP defined by a constraint language \(\varvec{\Gamma }\), i.e., where all the constraints use relations from the language \(\varvec{\Gamma }\). Building upon Dreier et al.'s game-theoretic approach and their notion of separator obstructions, we show that for any finite, tractable, semi-conservative constraint language \(\varvec{\Gamma }\), the CSP is fixed-parameter tractable parameterized by the backdoor depth into \(C_{\varvec{\Gamma }}\) plus the domain size. With backdoors of low depth, we reach classes of instances that require backdoors of arbitrarily large size. Hence, our results strictly generalize several known results for CSP that are based on backdoor size.
PubDate: 2023-10-11

 Learn and route: learning implicit preferences for vehicle routing

Abstract: We investigate a learning decision support system for vehicle routing, where the routing engine learns implicit preferences that human planners have when manually creating route plans (or routings). The goal is to use these learned subjective preferences on top of the distance-based objective criterion in vehicle routing systems. This is an alternative to the practice of distinctively formulating a custom vehicle routing problem (VRP) for every company with its own routing requirements. Instead, we assume the presence of past vehicle routing solutions over similar sets of customers, and learn to make similar choices. The learning approach is based on the concept of learning a Markov model, which corresponds to a probabilistic transition matrix, rather than a deterministic distance matrix. This nevertheless allows us to use existing arc routing VRP software in creating the actual routings, and to optimize over both distances and preferences at the same time. For the learning, we explore different schemes to construct the probabilistic transition matrix that can co-evolve with changing preferences over time. Our results on randomly generated instances and on a use case with a small transportation company show that our method is able to generate results that are close to the manually created solutions, without needing to characterize all constraints and sub-objectives explicitly. Even in the case of changes in the customer sets, our approach is able to find solutions that are closer to the actual routings than when using only distances, and hence, solutions that require fewer manual changes when transformed into practical routings.
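The core idea of learning a probabilistic transition matrix from past routings can be sketched in a few lines. This is an illustrative sketch under simple assumptions (stops indexed 0..n-1, Laplace smoothing), not the paper's exact learning scheme:

```python
import math

def transition_matrix(routings, n_stops, smoothing=1.0):
    """Estimate P[i][j], the probability that stop j directly follows
    stop i, from past route plans given as lists of stop indices.
    Laplace smoothing keeps arcs never seen in the history possible;
    the diagonal is zero since a stop never follows itself.
    """
    counts = [[0.0 if i == j else smoothing for j in range(n_stops)]
              for i in range(n_stops)]
    for route in routings:
        for a, b in zip(route, route[1:]):
            counts[a][b] += 1.0
    return [[c / sum(row) for c in row] for row in counts]

# Turn preferences into arc costs usable by a distance-based VRP solver:
# likely transitions become cheap arcs, unlikely ones expensive.
P = transition_matrix([[0, 1, 2], [0, 1, 2]], 3)
costs = [[math.inf if p == 0.0 else -math.log(p) for p in row] for row in P]
```

The negative-log transform is what lets an ordinary shortest-route engine maximise the likelihood of the learned transitions.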
PubDate: 2023-10-11

 Complexity of minimum-size arc-inconsistency explanations

Abstract: Explaining the outcome of programs has become one of the main concerns in AI research. In constraint programming, a user may want the system to explain why a given variable assignment is not feasible or how it came to the conclusion that the problem does not have any solution. One solution to the latter is to return to the user a sequence of simple reasoning steps that lead to inconsistency. Arc consistency is a well-known form of reasoning that can be understood by a human. We consider explanations as sequences of propagation steps of a constraint on a variable (i.e. the ubiquitous revise function in arc-consistency algorithms) that lead to inconsistency. We characterize several cases for which providing a shortest such explanation is easy: for instance, when constraints are binary and variables have maximum degree two. However, these polynomial cases are tight. For instance, providing a shortest explanation is NP-hard when constraints are binary and the maximum degree is three, even if the number of variables is bounded. It remains NP-hard on trees, despite the fact that arc consistency is a decision procedure on trees. The problem is not even FPT-approximable unless the FPT \(\ne\) W[2] hypothesis is false.
PubDate: 2023-10-02

 Spacetime programming: a synchronous language for constraint search

Abstract: Constraint programming is a paradigm for computing with mathematical relations named constraints. It is a declarative approach to describing many real-world problems, including scheduling, vehicle routing, biology and musical composition. Constraint programming must be contrasted with procedural approaches that describe how a problem is solved, whereas constraint models describe what the problem is. How a constraint problem is solved is left to a general constraint solver. Unfortunately, no solving algorithm is efficient enough for every problem, because the search strategy must often be customized per problem to attain reasonable efficiency. This is a daunting task that requires expertise and a good understanding of the solver's internals. Moreover, higher-level constraint-based languages provide limited support for specifying search strategies. In this dissertation, we tackle this challenge by designing a programming language for specifying search strategies. The dissertation is built around two axes: a novel theory of constraint programming based on lattice theory, and a programming language, called spacetime programming, that builds on lattice theory for its data structures and on synchronous programming for its computational model.
PubDate: 2023-09-23

 Constraint programming approaches to electric vehicle and robot routing
problems
Abstract: Driven by global efforts to curb greenhouse gas emissions, there has been significant investment in electric vehicle (EV) technology in recent years, resulting in a substantial increase in EV market share. Concurrently, the demand for mobile robots, such as unmanned aerial vehicles (UAVs) and land-based robots, has also experienced rapid growth, encouraged by recent advances in the autonomy and capabilities of these systems. Common to both of these technologies is the use of electric motors for propulsion and batteries for mobile energy storage. Techniques for the coordination of electric vehicle fleets, whether human-operated or autonomous, must address a variety of unique challenges, including sparse recharging infrastructure, significant recharge durations, and limited battery capacities.
PubDate: 2023-09-22

 Optimization methods based on decision diagrams for constraint
programming, AI planning, and mathematical programming
Abstract: Decision diagrams (DDs) are graphical structures that can be used to solve discrete optimization problems by representing the set of feasible solutions as paths in a graph. This graphical encoding of the feasibility set can represent complex combinatorial structures and is the foundation of several novel optimization techniques. Due to their flexibility, DDs have become an attractive optimization tool for researchers in different fields, including operations research and computer science.
PubDate: 2023-09-22

 Scheduling through logic-based tools

Abstract: A scheduling problem can be defined, in a nutshell, as the problem of determining when and how the activities of a project have to be run, according to some project requirements. Such problems are ubiquitous nowadays, since they frequently appear in industry and services. In most cases the computation of solutions to scheduling problems is hard, especially when some objective, such as the duration of the project, has to be optimised. Recent performance advances in solving the problems of Boolean Satisfiability (SAT) and SAT Modulo Theories (SMT) have raised interest in formulating hard combinatorial problems as SAT or SMT formulas, which are then solved with efficient algorithms. One of the principal advantages of such logic-based techniques is that they can certify optimality of solutions.
PubDate: 2023-09-18

 Floating-point numbers round-off error analysis by constraint programming

Abstract: Floating-point numbers are used in many applications to perform computations, often without the user's knowledge. The mathematical models of these applications use real numbers that are often not representable on a computer. Indeed, a finite binary representation is not sufficient to represent the continuous and infinite set of real numbers. The problem is that computing with floating-point numbers often introduces a rounding error compared to the equivalent computation over the real numbers. Knowing the order of magnitude of this error is essential in order to correctly understand the behaviour of a program. Many error analysis tools calculate an over-approximation of the errors. These over-approximations are often too coarse to effectively assess the impact of the error on the behaviour of the program. Other tools calculate an under-approximation of the maximum error, i.e., the largest possible error in absolute value. These under-approximations are either incorrect or unreachable. In this thesis, we propose a constraint system capable of capturing and reasoning about the error produced by a program that performs computations with floating-point numbers. We also propose an algorithm to search for the maximum error. For this purpose, our algorithm computes both a rigorous over-approximation and a rigorous under-approximation of the maximum error. An over-approximation is obtained from the constraint system for the errors, while a reachable under-approximation is produced using a generate-and-test procedure and a local search. Our algorithm is the first to combine both an over-approximation and an under-approximation of the error. Our methods are implemented in a solver, called FErA. Performance on a set of common problems is competitive: the rigorous enclosure produced is accurate and compares well with other state-of-the-art tools.
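The rounding error the thesis reasons about can be observed directly. The sketch below (an illustration, not the FErA solver) replays a floating-point summation in exact rational arithmetic: every float is an exact binary rational, so `Fraction(v)` is lossless and the comparison itself introduces no additional error.

```python
from fractions import Fraction

def summation_error(values):
    """Exact rounding error of a left-to-right floating-point sum."""
    fp_sum = 0.0
    exact_sum = Fraction(0)
    for v in values:
        fp_sum += v               # rounds after every addition
        exact_sum += Fraction(v)  # exact rational arithmetic
    return abs(Fraction(fp_sum) - exact_sum)

print(summation_error([0.5] * 4))   # → 0 (powers of two sum exactly)
err = summation_error([0.1] * 10)   # nonzero: each addition rounds
```

Searching for inputs that maximise such an error over all admissible values is the hard problem the thesis's over- and under-approximations bracket.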
PubDate: 2023-09-15

 Correction to: Solution sampling with random table constraints

PubDate: 2023-09-07

 From declarative models to local search

PubDate: 2023-08-12
DOI: 10.1007/s10601-023-09359-y

 Security-Aware Database Migration Planning

Abstract: Database migration is an important problem faced by companies dealing with big data. Not only is migration a costly procedure, but it also involves serious security risks. For some institutions, the primary focus is on reducing the cost of the migration operation, which manifests itself in application testing. For other institutions, minimizing security risks is the most important goal, especially if the data involved is of a sensitive nature. In the literature, the database migration problem has been studied from a test cost minimization perspective. In this paper, we focus on an orthogonal measure, i.e., security risk minimization. We associate security with the number of shifts needed to complete the migration task. Ideally, we want to complete the migration in as few shifts as possible, so that the risk of data exposure is minimized. In this paper, we provide a formal framework for studying the database migration problem from the perspective of security risk minimization (shift minimization) and establish the computational complexities of several models within it. For the NP-hard models, we develop memetic algorithms that produce solutions within \(10\%\) and \(7\%\) of the optimal in \(95\%\) of the instances, under 8 and 82 seconds respectively.
PubDate: 2023-08-10
DOI: 10.1007/s10601-023-09351-6

 The extensional constraint

Abstract: Extensional constraints are crucial in CP. They represent allowed combinations of values for a subset of variables (scope) using extensional representation forms such as tables (lists of tuples of constraint solutions) or MDDs (layered acyclic directed graphs where each path represents a constraint solution). Such extensional forms allow the modelling of virtually any kind of constraint. This type of constraint is among the first ones available in CP solvers. A lot of progress has been made since the design of its first propagator: supports, tabular reduction, bitwise computations, resets, table compression, and MDDs. The most recent algorithm prior to this thesis is \(\texttt{CompactTable}\). It uses a data structure called reversible sparse bitsets to speed up the computations. The work in this thesis starts from \(\texttt{CompactTable}\) and extends it to handle other kinds of extensional representation. The first addressed representation concerns compressed tables, i.e. tables whose tuples contain single values and simple unary (\(*\), \(\ne v\), \(\le v\), \(\in S\), ...) or binary (\(= x+v\), \(\ne x+v\), \(\le x+v\), ...) restrictions. One such tuple represents several classical ones. This led to the \(\texttt{CT}^*\) (for short tables) and \(\texttt{CT}^{bs}\) (for basic smart tables) algorithms. The second addressed issue concerns negative tables, i.e. tables listing forbidden combinations of values. This results in the \(\texttt{CT}_{neg}\) (for negative tables) and \(\texttt{CT}_{neg}^*\) (for negative short tables) algorithms. The third and last adaptation addresses diagrams, i.e. layered graphs such as \(\texttt{MDDs}\). This led to the \(\texttt{CD}\) (for diagrams) and \(\texttt{CD}^{bs}\) (for basic smart diagrams) algorithms.
Being able to use such a diversity of representations helps to counterbalance the main drawback of classical table representations, which is their potentially exponential growth in size. Compressed tables, negative tables, and diagrams help reduce the memory consumption (storage size) required to store an equivalent representation.
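The table-filtering idea behind CompactTable can be sketched without the reversible sparse bitsets (which are the actual source of its speed). The simplified pass below, an illustration rather than the thesis's algorithm, drops tuples unsupported by the current domains and then shrinks each domain to the values that still occur in some valid tuple:

```python
def filter_table(table, domains):
    """One simplified table-filtering pass for a positive table constraint.

    table:   list of tuples of values, one value per variable in the scope.
    domains: list of sets, the current domain of each scope variable.
    Returns the surviving tuples and the pruned domains.
    """
    valid = [row for row in table
             if all(row[x] in domains[x] for x in range(len(domains)))]
    new_domains = [{row[x] for row in valid} for x in range(len(domains))]
    return valid, new_domains

# x0 loses value 1: its only support required x1 = 1, which is no
# longer in x1's domain.
valid, doms = filter_table([(0, 0), (1, 1)], [{0, 1}, {0}])
print(valid, doms)
```

CompactTable obtains the same pruning by intersecting per-value support bitsets with a single bitset of currently valid tuples, which is far cheaper than rescanning rows.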
PubDate: 2023-08-03
DOI: 10.1007/s10601-023-09358-z

 Hybrid optimization of vehicle routing problems

Abstract: Vehicle routing problems are combinatorial optimization problems that aspire to design vehicle routes that minimize some measure of cost, such as the total distance traveled or the time at which the last vehicle returns to a depot, while adhering to various restrictions. Vehicle routing problems are of profound interest in both academia and industry, because they offer opportunities to study graph structures and algorithms, and because they underpin practical applications in a multitude of industries, notably transportation and logistics. This dissertation presents two applications relevant to industry and develops a fully hybrid method for solving a classical vehicle routing problem.
PubDate: 2023-07-17
DOI: 10.1007/s10601-023-09352-5

 Block-coordinate descent and local consistencies in linear programming

Abstract: Even though linear programming (LP) problems can be solved in polynomial time, solving large-scale LP instances using off-the-shelf solvers may be difficult in practice, which creates demand for specialized scalable methods. One such method for large-scale problems is block-coordinate descent (BCD). However, the fixed points of this method need not be global optima, even for convex optimization problems. Despite this limitation, various BCD algorithms (also called 'convergent message-passing algorithms') are successfully used for approximately solving the dual LP relaxation of the weighted constraint satisfaction problem (WCSP, also known as MAP inference in graphical models), and their fixed points can be characterized using local consistencies, typically variants of arc consistency. In this work, we focus on optimizing linear programs by BCD or constraint propagation and on theoretically relating these approaches. To this end, we propose a general constraint-propagation-based framework for approximate optimization of large-scale linear programs whose applicability is evaluated on publicly available benchmarks. In detail, we employ this approach to approximately optimize the dual LP relaxation of weighted MaxSAT and an LP formulation of WCSP. In the latter case, we show that one can use any classical CSP constraint propagation method in order to obtain an upper bound on the optimal value. This is in contrast to existing methods, which needed to be tailored to a specific chosen kind of local consistency. The cost, however, is that our approach may not preserve the properties of the input WCSP instance, such as the set of optimal assignments, and only provides an upper bound on its optimal value, which is nevertheless important for pruning the search space during branch-and-bound search.
Although one can use our general framework with any constraint propagation method in a system of linear inequalities, we identify the precise form of constraint propagation such that the stopping points of the resulting algorithm coincide with the fixed points of BCD. In other words, we identify the kind of local consistency that is enforced by BCD in any linear program. Depending on the problem being solved, this condition may be interpreted, e.g., as arc consistency or positive consistency. Thanks to these results, we characterize linear programs that are optimally solvable by BCD via refutation-completeness of the associated propagator (i.e., whether it can always detect infeasibility of a certain class of systems of linear inequalities and equalities). This allows us to identify new classes of linear programs exactly solvable by BCD, including, e.g., an LP formulation of the maximum flow problem and LP relaxations of some combinatorial problems.
PubDate: 2023-07-08
DOI: 10.1007/s10601-023-09350-7

 SAT-based optimal classification trees for non-binary data

Abstract: Decision trees are a popular classification model in machine learning due to their interpretability and performance. Decision-tree classifiers are traditionally constructed using greedy heuristic algorithms that do not provide guarantees regarding the quality of the resultant trees. In contrast, a recent line of work employed exact optimization techniques to construct optimal decision-tree classifiers. However, most of these approaches are designed for datasets with binary features. While numeric and categorical features can be transformed into binary features, this transformation can introduce a large number of binary features and may not be efficient in practice. In this work, we present a SAT-based encoding for decision trees that directly supports non-binary data and use it to solve two well-studied variants of the optimal decision tree problem. Furthermore, we extend our approach to support cost-sensitive learning of optimal decision trees and introduce tree pruning constraints to reduce overfitting. We perform extensive experiments based on real-world and synthetic datasets, which show that our approach obtains superior performance to state-of-the-art exact techniques on non-binary datasets and has significantly smaller memory consumption. We also show that our extension for cost-sensitive learning and our tree pruning constraints can help improve the prediction quality on unseen test data.
PubDate: 2023-07-08
DOI: 10.1007/s10601-023-09348-1

 Computing relaxations for the three-dimensional stable matching problem
with cyclic preferences
Abstract: Constraint programming has proven to be a successful framework for determining whether a given instance of the three-dimensional stable matching problem with cyclic preferences (3DSM-cyc) admits a solution. If such an instance is satisfiable, constraint models can even compute its optimal solution for several different objective functions. On the other hand, the only existing output for unsatisfiable 3DSM-cyc instances is a simple declaration of impossibility. In this paper, we explore four ways to adapt constraint models designed for 3DSM-cyc to the maximum relaxation version of the problem, that is, the computation of the smallest part of an instance whose modification leads to satisfiability. We also extend our models to support the presence of costs on elements in the instance, and to return the relaxation with the lowest total cost for each of the four types of relaxation. Empirical results reveal that our relaxation models are efficient, as in most cases they show little overhead compared to the satisfaction version.
PubDate: 2023-06-03
DOI: 10.1007/s10601-023-09346-3

 Super-reparametrizations of weighted CSPs: properties and optimization
perspective
Abstract: The notion of reparametrizations of Weighted CSPs (WCSPs) (also known as equivalence-preserving transformations of WCSPs) is well-known and finds its use in many algorithms to approximate or bound the optimal WCSP value. In contrast, the concept of super-reparametrizations (which are changes of the weights that keep or increase the WCSP objective for every assignment) was already proposed but never studied in detail. To fill this gap, we present a number of theoretical properties of super-reparametrizations and compare them to those of reparametrizations. Furthermore, we propose a framework for computing upper bounds on the optimal value of the (maximization version of) WCSP using super-reparametrizations. We show that it is in principle possible to employ arbitrary (under some technical conditions) constraint propagation rules to improve the bound. For arc consistency in particular, the method reduces to the known Virtual AC (VAC) algorithm. We implemented the method for singleton arc consistency (SAC) and compared it to other strong local consistencies in WCSPs on a public benchmark. The results show that the bounds obtained from SAC are superior for many instance groups.
PubDate: 2023-05-16
DOI: 10.1007/s10601-023-09343-6
