Constraints
Journal Prestige (SJR): 0.288 | Citation Impact (CiteScore): 1 | ISSN (Print): 1572-9354 | ISSN (Online): 1383-7133 | Published by Springer-Verlag
- Counting QBF solutions at level two
Abstract: We lift the problem of enumerative solution counting to quantified Boolean formulas (QBFs) at the second quantifier block. In contrast to the well-explored model counting problem for SAT (#SAT), where models are simply assignments to the Boolean variables of a formula, we are now dealing with tree (counter-)models reflecting the dependencies between the variables of the first and the second quantifier block. It turns out that enumerative counting on the second level does not give the complete solution count and a more fine-grained view is necessary. We present a level-2 solution counting approach that works for true and false formulas. We implemented the presented approach in a counting tool exploiting state-of-the-art QBF solving technology. We present several kinds of benchmarks for testing our implementation and show that even with this very basic approach of solution enumeration, the solution counts of challenging benchmarks can be found.
PubDate: 2024-08-08
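For readers unfamiliar with the propositional baseline, enumerative #SAT counting is easy to sketch. The snippet below is an illustrative brute-force counter (not the article's tool, which handles tree models at the second quantifier block) for a small CNF in DIMACS-style literal encoding:

```python
from itertools import product

def count_models(clauses, n_vars):
    """Count satisfying assignments of a CNF formula by enumeration.

    `clauses` is a list of clauses; each clause is a list of non-zero
    integers, where literal v means variable v is true and -v means it
    is false (DIMACS-style). This brute-force #SAT count is the
    propositional baseline that the article lifts to level two.
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2): models are {x2} and {x1, x2}.
print(count_models([[1, 2], [-1, 2]], 2))  # → 2
```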
- Applying constraint programming to minimal lottery designs
Abstract: We develop and deploy a set of constraints for the purpose of calculating minimal sizes of lottery designs. Specifically, we find the minimum number of tickets of size six which are needed to match at least two balls on any draw of size six, whenever there are at most 70 balls.
PubDate: 2024-07-09
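The covering property being minimized can be verified exhaustively at toy scale. The sketch below checks a candidate design for a hypothetical 5-ball, size-3 instance (far smaller than the paper's size-six, up-to-70-ball setting):

```python
from itertools import combinations

def is_lottery_design(tickets, n_balls, draw_size, t=2):
    """Check the covering property from the article at toy scale:
    every draw of `draw_size` balls from {1..n_balls} must share at
    least `t` balls with some ticket. The paper proves minimality via
    constraint programming; here we only verify a candidate design by
    exhaustive enumeration."""
    for draw in combinations(range(1, n_balls + 1), draw_size):
        if not any(len(set(draw) & set(ticket)) >= t for ticket in tickets):
            return False
    return True

# Toy instance: 5 balls, tickets and draws of size 3, match >= 2.
print(is_lottery_design([(1, 2, 3), (3, 4, 5)], 5, 3))  # → True
print(is_lottery_design([(1, 2, 3)], 5, 3))             # → False
```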
- Optimal multivariate decision trees
Abstract: Recently, mixed-integer programming (MIP) techniques have been applied to learn optimal decision trees. Empirical research has shown that optimal trees typically have better out-of-sample performance than heuristic approaches such as CART. However, the underlying MIP formulations often suffer from weak linear programming (LP) relaxations. Many existing MIP approaches employ big-M constraints to ensure observations are routed throughout the tree in a feasible manner. This paper introduces new MIP formulations for learning optimal decision trees with multivariate branching rules and no assumptions on the feature types. We first propose a strong baseline MIP formulation that still uses big-M constraints, but yields a stronger LP relaxation than its counterparts in the literature. We then introduce a problem-specific class of valid inequalities called shattering inequalities. Each inequality encodes an inclusion-minimal set of points that cannot be shattered by a multivariate split, and in the context of a MIP formulation, the inequalities are sparse, involving at most the number of features plus two variables. We propose a separation procedure that attempts to find a violated inequality given a (possibly fractional) solution to the LP relaxation; in the case where the solution is integer, the separation is exact. Numerical experiments show that our MIP approach outperforms two other MIP formulations in terms of solution time and relative gap, and is able to improve solution time while remaining competitive with regard to out-of-sample accuracy in comparison to a wider range of approaches from the literature.
PubDate: 2023-12-27
DOI: 10.1007/s10601-023-09367-y
- A feature commonality-based search strategy to find high t-wise covering solutions in feature models
Abstract: t-wise coverage is one of the most important techniques used to test configurations of software for finding bugs. It ensures that interactions between features of a Software Product Line (SPL) are tested. The size of SPLs (thousands of features) makes the problem of finding a good test suite computationally expensive, as the number of t-wise combinations grows exponentially. In this article, we leverage Constraint Programming's search strategies to generate test suites with a high coverage of configurations. We analyse the behaviour of the default random search strategy, and then we propose an improvement based on the commonalities (frequency) of the features. We experimentally compare our strategy to uniform sampling and state-of-the-art sampling approaches. We show that our new search strategy outperforms all the other approaches and has the fastest running time.
PubDate: 2023-11-30
DOI: 10.1007/s10601-023-09366-z
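Pairwise (t = 2) coverage of a test suite can be measured directly. The sketch below is illustrative only: it treats features as independent Booleans and ignores feature-model constraints, which the article's CP search strategy enforces:

```python
from itertools import combinations

def pairwise_coverage(configs, n_features):
    """Fraction of feature-pair interactions covered by a test suite.

    Each configuration is a tuple of booleans (feature on/off). A pair
    of features (i, j) has four possible value combinations; pairwise
    coverage measures how many such combinations appear in at least
    one configuration of the suite.
    """
    covered = set()
    for config in configs:
        for i, j in combinations(range(n_features), 2):
            covered.add((i, j, config[i], config[j]))
    total = len(list(combinations(range(n_features), 2))) * 4
    return len(covered) / total

suite = [(True, True, True), (False, False, False), (True, False, True)]
print(pairwise_coverage(suite, 3))  # 8 of 12 pair combinations covered
```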
- Learning to select SAT encodings for pseudo-Boolean and linear integer constraints
Abstract: Many constraint satisfaction and optimisation problems can be solved effectively by encoding them as instances of the Boolean Satisfiability problem (SAT). However, even the simplest types of constraints have many encodings in the literature with widely varying performance, and the problem of selecting suitable encodings for a given problem instance is not trivial. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however, we obtain better performance with a new set of features specifically designed for the pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes. Our results compare favourably to AutoFolio when using the same feature set. We discuss the relative importance of instance features to the task of selecting the best encodings, and compare several variations of the machine learning method.
PubDate: 2023-11-02
DOI: 10.1007/s10601-023-09364-1
- Reasoning and inference for (Maximum) satisfiability: new insights
Abstract: At the heart of computer science and artificial intelligence, logic is often used as a powerful language to model and solve complex problems that arise in academia and in real-world applications. A well-known formalism in this context is the Satisfiability (SAT) problem which simply checks whether a given propositional formula in the form of a set of constraints, called clauses, can be satisfied. A natural optimization extension of this problem is Maximum Satisfiability (Max-SAT) which consists in determining the maximum number of clausal constraints that can be satisfied within the formula. In our work, we are interested in studying the power and limits of inference and reasoning in the context of (Maximum) Satisfiability. Our first contributions revolve around investigating inference in SAT and Max-SAT solving. First, we study statistical inference within a Multi-Armed Bandit (MAB) framework for online selection of branching heuristics in SAT and we show that it can further enhance the efficiency of modern clause-learning solvers. Moreover, we provide further insights into the power of inference in Branch and Bound algorithms for Max-SAT solving through the property of UP-resilience. Our contributions also extend to SAT and Max-SAT proof theory. We particularly attempt to theoretically bridge the gap between SAT and Max-SAT inference.
PubDate: 2023-10-23
DOI: 10.1007/s10601-023-09365-0
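The UCB1 rule is the classic selection policy a Multi-Armed Bandit framework can build on. The snippet below is a generic sketch of UCB1 over candidate branching heuristics; the article's actual reward definition for SAT solving is not reproduced here:

```python
import math

def ucb1(pulls, mean_rewards, t):
    """UCB1 arm selection: pick the arm maximising empirical mean
    reward plus an exploration bonus sqrt(2 ln t / n). Arms never
    pulled are tried first. In the article's setting, arms would be
    branching heuristics and rewards derived from solver behaviour."""
    for arm, n in enumerate(pulls):
        if n == 0:
            return arm
    scores = [mean_rewards[a] + math.sqrt(2 * math.log(t) / pulls[a])
              for a in range(len(pulls))]
    return max(range(len(pulls)), key=scores.__getitem__)

# Two heuristics: arm 0 looks slightly better on average, but arm 1
# is barely explored, so UCB1 picks the under-explored arm.
print(ucb1([100, 2], [0.6, 0.5], 102))  # → 1
```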
- CSP beyond tractable constraint languages
Abstract: The constraint satisfaction problem (CSP) is among the most studied computational problems. While NP-hard, many tractable subproblems have been identified (Bulatov 2017, Zhuk 2017). Backdoors, introduced by Williams, Gomes, and Selman (2003), gradually extend such a tractable class to all CSP instances of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a CSP instance and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021) for SAT, is a more refined distance measure, which admits the parallel utilization of different backdoor variables. Bounded backdoor size implies bounded backdoor depth, but there are instances of constant backdoor depth and arbitrarily large backdoor size. Dreier, Ordyniak, and Szeider (2022) provided fixed-parameter algorithms for finding backdoors of small depth into the classes of Horn and Krom formulas. In this paper, we consider backdoor depth for CSP. We consider backdoors w.r.t. tractable subproblems \(C_{\varvec{\Gamma }}\) of the CSP defined by a constraint language \(\varvec{\Gamma }\) , i.e., where all the constraints use relations from the language \(\varvec{\Gamma }\) . Building upon Dreier et al.'s game-theoretic approach and their notion of separator obstructions, we show that for any finite, tractable, semi-conservative constraint language \(\varvec{\Gamma }\) , the CSP is fixed-parameter tractable parameterized by the backdoor depth into \(C_{\varvec{\Gamma }}\) plus the domain size. With backdoors of low depth, we reach classes of instances that require backdoors of arbitrarily large size. Hence, our results strictly generalize several known results for CSP that are based on backdoor size.
PubDate: 2023-10-11
DOI: 10.1007/s10601-023-09362-3
- Learn and route: learning implicit preferences for vehicle routing
Abstract: We investigate a learning decision support system for vehicle routing, where the routing engine learns implicit preferences that human planners have when manually creating route plans (or routings). The goal is to use these learned subjective preferences on top of the distance-based objective criterion in vehicle routing systems. This is an alternative to the practice of distinctively formulating a custom vehicle routing problem (VRP) for every company with its own routing requirements. Instead, we assume the presence of past vehicle routing solutions over similar sets of customers, and learn to make similar choices. The learning approach is based on the concept of learning a Markov model, which corresponds to a probabilistic transition matrix, rather than a deterministic distance matrix. This nevertheless allows us to use existing arc routing VRP software in creating the actual routings, and to optimize over both distances and preferences at the same time. For the learning, we explore different schemes to construct the probabilistic transition matrix that can co-evolve with changing preferences over time. Our results on randomly generated instances and on a use-case with a small transportation company show that our method is able to generate results that are close to the manually created solutions, without needing to characterize all constraints and sub-objectives explicitly. Even in the case of changes in the customer sets, our approach is able to find solutions that are closer to the actual routings than when using only distances, and hence, solutions that require fewer manual changes when transformed into practical routings.
PubDate: 2023-10-11
DOI: 10.1007/s10601-023-09363-2
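The core idea of replacing a distance matrix with a learned transition matrix can be sketched as follows. The smoothing scheme and data layout here are illustrative assumptions, not the article's exact construction:

```python
from collections import defaultdict

def learn_transition_matrix(routes, smoothing=1.0):
    """Estimate a Markov transition matrix from historical routes.

    Each route is a sequence of stop identifiers. We count observed
    arcs and normalise each row with Laplace smoothing so unseen arcs
    keep a small non-zero probability; probabilities like these can
    then replace a distance matrix (e.g. as -log p arc costs) inside
    a standard VRP solver, as the article proposes.
    """
    stops = sorted({s for route in routes for s in route})
    counts = defaultdict(float)
    for route in routes:
        for a, b in zip(route, route[1:]):
            counts[(a, b)] += 1.0
    matrix = {}
    for a in stops:
        row_total = sum(counts[(a, b)] for b in stops) + smoothing * len(stops)
        matrix[a] = {b: (counts[(a, b)] + smoothing) / row_total for b in stops}
    return matrix

P = learn_transition_matrix([["depot", "A", "B"], ["depot", "A", "C"]])
print(P["depot"]["A"])  # → 0.5 (2 observed + 1 smoothing, over 6 total)
```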
- Complexity of minimum-size arc-inconsistency explanations
Abstract: Explaining the outcome of programs has become one of the main concerns in AI research. In constraint programming, a user may want the system to explain why a given variable assignment is not feasible or how it came to the conclusion that the problem does not have any solution. One solution to the latter is to return to the user a sequence of simple reasoning steps that lead to inconsistency. Arc consistency is a well-known form of reasoning that can be understood by a human. We consider explanations as sequences of propagation steps of a constraint on a variable (i.e. the ubiquitous revise function in arc-consistency algorithms) that lead to inconsistency. We characterize several cases for which providing a shortest such explanation is easy: For instance when constraints are binary and variables have maximum degree two. However, these polynomial cases are tight. For instance, providing a shortest explanation is NP-hard when constraints are binary and the maximum degree is three, even if the number of variables is bounded. It remains NP-hard on trees, despite the fact that arc consistency is a decision procedure on trees. The problem is not even FPT-approximable unless the FPT \(\ne \) W[2] hypothesis is false.
PubDate: 2023-10-02
DOI: 10.1007/s10601-023-09360-5
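The revise function mentioned in the abstract, and a short propagation sequence ending in an empty domain, can be sketched as follows (an illustrative toy instance, not one from the article):

```python
def revise(domains, constraint, x, y):
    """The ubiquitous arc-consistency revise step: prune values of x
    that have no support in y under the given binary constraint.
    Returns True iff the domain of x changed."""
    removed = {a for a in domains[x]
               if not any(constraint(a, b) for b in domains[y])}
    domains[x] -= removed
    return bool(removed)

# Three variables that must be pairwise different over tiny domains:
# a sequence of revise calls empties a domain, which is exactly the
# kind of inconsistency explanation (here of length three) whose
# shortest form the article studies.
domains = {"x": {0, 1}, "y": {0, 1}, "z": {0}}
neq = lambda a, b: a != b
revise(domains, neq, "x", "z")   # x loses 0
revise(domains, neq, "y", "z")   # y loses 0
revise(domains, neq, "x", "y")   # x loses 1: domain wiped out
print(domains["x"])  # → set()
```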
- Spacetime programming: a synchronous language for constraint search
Abstract: Constraint programming is a paradigm for computing with mathematical relations named constraints. It is a declarative approach to describe many real-world problems including scheduling, vehicle routing, biology and musical composition. Constraint programming must be contrasted with procedural approaches that describe how a problem is solved, whereas constraint models describe what the problem is. The part of how a constraint problem is solved is left to a general constraint solver. Unfortunately, no solving algorithm is efficient enough for every problem, because the search strategy must often be customized per problem to attain reasonable efficiency. This is a daunting task that requires expertise and a good understanding of the solver's intrinsics. Moreover, higher-level constraint-based languages provide limited support to specify search strategies. In this dissertation, we tackle this challenge by designing a programming language for specifying search strategies. The dissertation is constructed around two axes: a novel theory of constraint programming based on lattice theory, and a programming language, called spacetime programming, building on lattice theory for its data structures and on synchronous programming for its computational model.
PubDate: 2023-09-23
DOI: 10.1007/s10601-023-09356-1
- Constraint programming approaches to electric vehicle and robot routing problems
Abstract: Driven by global efforts to curb greenhouse gas emissions, there has been significant investment in electric vehicle (EV) technology in recent years, resulting in a substantial increase in EV market share. Concurrently, the demand for mobile robots, such as unmanned aerial vehicles (UAVs) and land-based robots, has also experienced rapid growth, encouraged by recent advances in the autonomy and capabilities of these systems. Common to both of these technologies is the use of electric motors for propulsion and batteries for mobile energy storage. Techniques for the coordination of electric vehicle fleets, whether human-operated or autonomous, must address a variety of unique challenges, including sparse recharging infrastructure, significant recharge durations, and limited battery capacities.
PubDate: 2023-09-22
DOI: 10.1007/s10601-023-09355-2
- Optimization methods based on decision diagrams for constraint programming, AI planning, and mathematical programming
Abstract: Decision diagrams (DDs) are graphical structures that can be used to solve discrete optimization problems by representing the set of feasible solutions as paths in a graph. This graphical encoding of the feasibility set can represent complex combinatorial structures and is the foundation of several novel optimization techniques. Due to their flexibility, DDs have become an attractive optimization tool for researchers in different fields, including operations research and computer science.
PubDate: 2023-09-22
DOI: 10.1007/s10601-023-09353-4
- Scheduling through logic-based tools
Abstract: A scheduling problem can be defined in a nutshell as the problem of determining when and how the activities of a project have to be run, according to some project requirements. Such problems are ubiquitous nowadays since they frequently appear in industry and services. In most cases the computation of solutions of scheduling problems is hard, especially when some objective, such as the duration of the project, has to be optimised. The recent performance advances on solving the problems of Boolean Satisfiability (SAT) and SAT Modulo Theories (SMT) have raised interest in formulating hard combinatorial problems as SAT or SMT formulas, which are then solved with efficient algorithms. One of the principal advantages of such logic-based techniques is that they can certify optimality of solutions.
PubDate: 2023-09-18
DOI: 10.1007/s10601-023-09357-0
- Floating-point numbers round-off error analysis by constraint programming
Abstract: Floating-point numbers are used in many applications to perform computations, often without the user's knowledge. The mathematical models of these applications use real numbers that are often not representable on a computer. Indeed, a finite binary representation is not sufficient to represent the continuous and infinite set of real numbers. The problem is that computing with floating-point numbers often introduces a rounding error compared to its equivalent over real numbers. Knowing the order of magnitude of this error is essential in order to correctly understand the behaviour of a program. Many error analysis tools calculate an over-approximation of the errors. These over-approximations are often too coarse to effectively assess the impact of the error on the behaviour of the program. Other tools calculate an under-approximation of the maximum error, i.e., the largest possible error in absolute value. These under-approximations are either incorrect or unreachable. In this thesis, we propose a constraint system capable of capturing and reasoning about the error produced by a program that performs computations with floating-point numbers. We also propose an algorithm to search for the maximum error. For this purpose, our algorithm computes both a rigorous over-approximation and a rigorous under-approximation of the maximum error. An over-approximation is obtained from the constraint system for the errors, while a reachable under-approximation is produced using a generate-and-test procedure and a local search. Our algorithm is the first to combine both an over-approximation and an under-approximation of the error. Our methods are implemented in a solver, called FErA. Performance on a set of common problems is competitive: the rigorous enclosure produced is accurate and compares well with other state-of-the-art tools.
PubDate: 2023-09-15
DOI: 10.1007/s10601-023-09354-3
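At the scale of a single operation, the round-off error being bounded can be computed exactly with rational arithmetic. This small sketch illustrates the phenomenon; the thesis's constraint system reasons about such errors across whole programs, not single additions:

```python
from fractions import Fraction

def roundoff_error(a, b):
    """Exact rounding error of a single floating-point addition.

    Fraction(x) gives the exact rational value of the binary float x,
    so comparing the float result of a + b with the exact rational sum
    isolates the round-off introduced by that one operation.
    """
    exact = Fraction(a) + Fraction(b)   # exact sum of the two floats
    computed = Fraction(a + b)          # rounded float result
    return abs(computed - exact)

# 0.1 and 0.2 are not exactly representable in binary, and their
# float sum rounds again: the error is tiny but non-zero.
print(roundoff_error(0.1, 0.2) > 0)    # → True
print(roundoff_error(0.5, 0.25) == 0)  # exactly representable → True
```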
- Correction to: Solution sampling with random table constraints
PubDate: 2023-09-07
DOI: 10.1007/s10601-023-09361-4
- From declarative models to local search
PubDate: 2023-08-12
DOI: 10.1007/s10601-023-09359-y
- Security-Aware Database Migration Planning
Abstract: Database migration is an important problem faced by companies dealing with big data. Not only is migration a costly procedure, but it also involves serious security risks. For some institutions, the primary focus is on reducing the cost of the migration operation, which manifests itself in application testing. For other institutions, minimizing security risks is the most important goal, especially if the data involved is of a sensitive nature. In the literature, the database migration problem has been studied from a test cost minimization perspective. In this paper, we focus on an orthogonal measure, i.e., security risk minimization. We associate security with the number of shifts needed to complete the migration task. Ideally, we want to complete the migration in as few shifts as possible, so that the risk of data exposure is minimized. In this paper, we provide a formal framework for studying the database migration problem from the perspective of security risk minimization (shift minimization) and establish the computational complexities of several models in this framework. For the NP-hard models, we develop memetic algorithms that produce solutions that are within \(10\%\) and \(7\%\) of the optimal in \(95\%\) of the instances under 8 and 82 seconds, respectively.
PubDate: 2023-08-10
DOI: 10.1007/s10601-023-09351-6
- The extensional constraint
Abstract: Extensional constraints are crucial in CP. They represent allowed combinations of values for a subset of variables (scope) using extensional representation forms such as tables (lists of tuples of constraint solutions) or MDDs (layered acyclic directed graphs where each path represents a constraint solution). Such extensional forms allow the modelling of virtually any kind of constraint. This type of constraint was among the first ones available in CP solvers. A lot of progress has been made since the design of its first propagator: supports, tabular reduction, bitwise computations, resets, table compression, and MDDs. The most recent algorithm prior to this thesis is \(\mathtt {Compact-Table}\) . It uses a data structure called reversible sparse bitsets to speed up the computations. The work in this thesis starts from \(\mathtt {Compact-Table}\) and extends it to handle other kinds of extensional representation. The first representation addressed is compressed tables, i.e. tables containing tuples which contain single values and simple unary ( \(*\) , \(\not = v\) , \(\le v\) , \(\in S\) , ...) or binary ( \(=x+v\) , \(\not =x+v\) , \(\le x+v\) , ...) restrictions. One such tuple represents several classical ones. This led to the \(\texttt{CT}^*\) (for short tables) and \(\texttt {CT}^{bs}\) (for basic smart tables) algorithms. The second issue addressed concerns negative tables, i.e. tables listing forbidden combinations of values. This results in the \(\texttt {CT}_{neg}\) (for negative tables) and \(\texttt{CT}_{neg}^*\) (for negative short tables) algorithms. The third and last adaptation addresses diagrams, i.e. layered graphs such as \(\texttt {MDDs}\) . This led to the \(\texttt {CD}\) (for diagrams) and \(\texttt {CD}^{bs}\) (for basic smart diagrams) algorithms.
Being able to use such diversity of representation helps to counterbalance the main drawback of classical table representations, which is their potentially exponential growth in size. Compressed tables, negative tables, and diagrams help reduce the memory consumption (storage size) required to store an equivalent representation.
PubDate: 2023-08-03
DOI: 10.1007/s10601-023-09358-z
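The support-based filtering that Compact-Table accelerates can be sketched naively for a positive table. This is the plain fixpoint version, without the reversible sparse bitsets or incrementality the thesis builds on:

```python
def propagate_table(domains, scope, table):
    """Prune domain values that have no support in a positive table.

    `table` lists the allowed tuples over `scope` (the extensional
    representation). A value survives only if some tuple that is
    valid under the current domains uses it; we iterate to a
    fixpoint since pruning one variable can invalidate tuples.
    """
    changed = True
    while changed:
        changed = False
        valid = [t for t in table
                 if all(t[i] in domains[v] for i, v in enumerate(scope))]
        for i, v in enumerate(scope):
            supported = {t[i] for t in valid}
            if domains[v] - supported:
                domains[v] &= supported
                changed = True
    return domains

domains = {"x": {0, 1, 2}, "y": {0, 1}}
table = [(0, 0), (1, 1), (2, 2)]   # allowed (x, y) pairs
propagate_table(domains, ("x", "y"), table)
print(domains)  # x loses 2: the tuple (2, 2) has no support in y
```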
- Hybrid optimization of vehicle routing problems
Abstract: Vehicle routing problems are combinatorial optimization problems that aspire to design vehicle routes that minimize some measure of cost, such as the total distance traveled or the time at which the last vehicle returns to a depot, while adhering to various restrictions. Vehicle routing problems are of profound interest in both academia and industry because they are opportunities to study graph structures and algorithms, and because they underpin practical applications in a multitude of industries, but notably, the transportation and logistics industries. This dissertation presents two applications relevant to industry and develops a fully hybrid method for solving a classical vehicle routing problem.
PubDate: 2023-07-17
DOI: 10.1007/s10601-023-09352-5
- Block-coordinate descent and local consistencies in linear programming
Abstract: Even though linear programming (LP) problems can be solved in polynomial time, solving large-scale LP instances using off-the-shelf solvers may be difficult in practice, which creates demand for specialized scalable methods. One such method for large-scale problems is block-coordinate descent (BCD). However, the fixed points of this method need not be global optima even for convex optimization problems. Despite this limitation, various BCD algorithms (also called 'convergent message-passing algorithms') are successfully used for approximately solving the dual LP relaxation of the weighted constraint satisfaction problem (WCSP, also known as MAP inference in graphical models) and their fixed points can be characterized using local consistencies, typically variants of arc consistency. In this work, we focus on optimizing linear programs by BCD or constraint propagation and theoretically relating these approaches. To this end, we propose a general constraint-propagation-based framework for approximate optimization of large-scale linear programs whose applicability is evaluated on publicly available benchmarks. In detail, we employ this approach to approximately optimize the dual LP relaxation of weighted Max-SAT and an LP formulation of WCSP. In the latter case, we show that one can use any classical CSP constraint propagation method in order to obtain an upper bound on the optimal value. This is in contrast to existing methods that needed to be tailored to a specific chosen kind of local consistency. However, the cost for this is that our approach may not preserve the properties of the input WCSP instance, such as the set of optimal assignments, and only provides an upper bound on its optimal value, which is nevertheless important for pruning the search space during branch-and-bound search.
Although one can use our general framework with any constraint propagation method in a system of linear inequalities, we identify the precise form of constraint propagation such that the stopping points of the resulting algorithm coincide with the fixed points of BCD. In other words, we identify the kind of local consistency that is enforced by BCD in any linear program. Depending on the problem being solved, this condition may be interpreted, e.g., as arc consistency or positive consistency. Thanks to these results, we characterize linear programs that are optimally solvable by BCD via refutation-completeness of the associated propagator (i.e., whether it can always detect infeasibility of a certain class of systems of linear inequalities and equalities). This allows us to identify new classes of linear programs exactly solvable by BCD, including, e.g., an LP formulation of the maximum flow problem or LP relaxations of some combinatorial problems.
PubDate: 2023-07-08
DOI: 10.1007/s10601-023-09350-7