Mathematical Programming
Journal Prestige (SJR): 2.49 | Citation Impact (CiteScore): 3 | Number of Followers: 15 | ISSN (Print): 0025-5610 | ISSN (Online): 1436-4646 | Published by Springer-Verlag
- Special Issue: Integer Programming and Combinatorial Optimization (IPCO) 2023
PubDate: 2025-03-01
DOI: 10.1007/s10107-025-02205-4
- Towards an optimal contention resolution scheme for matchings
Abstract: In this paper, we study contention resolution schemes for matchings. Given a fractional matching x and a random set R(x) where each edge e appears independently with probability \(x_e\), we want to select a matching \(M \subseteq R(x)\) such that \(\Pr [e \in M \mid e \in R(x)] \ge c\), for c as large as possible. We call such a selection method a c-balanced contention resolution scheme. Our main results are (i) an asymptotically optimal \(\simeq 0.544\)-balanced contention resolution scheme for general matchings when \(\Vert x\Vert _\infty \rightarrow 0\), and (ii) a 0.509-balanced contention resolution scheme for bipartite matchings (without any restriction on x). To the best of our knowledge, this result establishes for the first time, in any natural relaxation of a combinatorial optimization problem, a separation between (i) offline and random order online contention resolution schemes, and (ii) monotone and non-monotone contention resolution schemes.
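For intuition, here is a Monte-Carlo sketch of the simplest contention resolution scheme for matchings (a hypothetical greedy baseline, not the paper's scheme): sample R(x), scan its edges in random order, and keep an edge whenever both endpoints are still free; the estimated quantity is the balancedness \(\min _e \Pr [e \in M \mid e \in R(x)]\). All names and the toy instance are illustrative.

```python
import random
from collections import defaultdict

def greedy_crs(edges, x, rng):
    """One run: sample R(x), then greedily keep edges in random order.
    Returns (R, M) with M a matching contained in R."""
    R = [e for e in edges if rng.random() < x[e]]
    rng.shuffle(R)
    matched, M = set(), set()
    for (u, v) in R:
        if u not in matched and v not in matched:
            M.add((u, v))
            matched.update((u, v))
    return set(R), M

def estimate_balance(edges, x, trials=20000, seed=0):
    """Monte-Carlo estimate of min_e Pr[e in M | e in R(x)]."""
    rng = random.Random(seed)
    in_R, in_M = defaultdict(int), defaultdict(int)
    for _ in range(trials):
        R, M = greedy_crs(edges, x, rng)
        for e in R:
            in_R[e] += 1
        for e in M:
            in_M[e] += 1
    return min(in_M[e] / in_R[e] for e in edges if in_R[e] > 0)

# Toy instance: a 4-cycle with the uniform fractional matching x_e = 1/2.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
x = {e: 0.5 for e in edges}
print(estimate_balance(edges, x))
```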
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02178-w
- Cut-sufficient directed 2-commodity multiflow topologies
Abstract: In multicommodity network flows, a supply–demand graph pair (G, H) (called a multiflow topology) is cut-sufficient if, for all capacity and demand weights, the cut condition is enough to guarantee the existence of a feasible multiflow. We characterize cut-sufficiency for two classes of directed 2-commodity flows: roundtrip demands, where H is a 2-cycle, and 2-path demands, where H is a directed path of length two. We then extend these characterizations to some larger demand graphs, namely directed stars and directed triangles. To obtain such characterizations, we introduce a theory of relevant minors. Unlike the undirected setting, for directed graphs the cut-sufficient topologies are not minor-closed. They are, however, relevant-minor-closed. A single forbidden relevant minor characterizes roundtrip cut-sufficiency, and two suffice for the other classes. We also provide a partial characterization in the case of two independent demands (2-matching demands), showing that one of two relevant minors exists if the weights that break cut-sufficiency have unit demand values. As an application of our results, we show that recognizing cut-sufficiency for directed multiflow topologies is co-NP-hard, even for roundtrip demands. This is in contrast to undirected 2-commodity flows, for which topologies are always cut-sufficient.
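For reference, the cut condition invoked in the abstract takes the following standard form in the directed setting (generic notation with capacities u on the supply graph G and demands d on the demand graph H; this is textbook background, not the paper's own statement):

```latex
% Directed cut condition: for every vertex set S, the supply capacity
% leaving S must cover the demand leaving S.
\[
  \sum_{a \in \delta^{+}_{G}(S)} u(a) \;\ge\; \sum_{a \in \delta^{+}_{H}(S)} d(a)
  \qquad \text{for all } S \subseteq V.
\]
```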
PubDate: 2025-03-01
DOI: 10.1007/s10107-025-02195-3
- Monoidal strengthening of simple $$\mathcal {V}$$-polyhedral disjunctive cuts
Abstract: Disjunctive cutting planes can tighten a relaxation of a mixed-integer linear program. Traditionally, such cuts are obtained by solving a higher-dimensional linear program, whose additional variables cause the procedure to be computationally prohibitive. Adopting a \(\mathcal {V}\)-polyhedral perspective is a practical alternative that enables the separation of disjunctive cuts via a linear program with only as many variables as the original problem. The drawback is that the classical approach of monoidal strengthening cannot be directly employed without the values of the extra variables appearing in the extended formulation, which constitute a certificate of validity of the cut. We derive how to compute this certificate from a solution to the linear program generating \(\mathcal {V}\)-polyhedral disjunctive cuts. We then present computational experiments with monoidal strengthening of cuts from disjunctions with as many as 64 terms. Some instances are dramatically impacted, with strengthening increasing the gap closed by the cuts from 0 to 100%. However, for larger disjunctions, monoidal strengthening appears to be less effective, for which we identify a potential cause. Lastly, the certificates of validity also enable us to verify which disjunctive cuts are equivalent to intersection cuts, which happens increasingly rarely for larger disjunctions.
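For context, the "higher-dimensional linear program" mentioned at the start of the abstract is typically a cut-generating LP in the spirit of Balas. A standard version, assuming a relaxation \(\{x \ge 0 : Ax \ge b\}\) and a disjunction with terms \(D^t x \ge d^t\) (general background with generic symbols, not the paper's specific formulation), is sketched below.

```latex
% An inequality  \alpha^\top x \ge \beta  is valid for every disjunctive term
%   \{x \ge 0 : Ax \ge b,\ D^t x \ge d^t\},  t = 1,\dots,T,
% if and only if there exist multipliers u^t, v^t \ge 0 with
\[
  \alpha \;\ge\; (u^t)^\top A + (v^t)^\top D^t,
  \qquad
  \beta \;\le\; (u^t)^\top b + (v^t)^\top d^t,
  \qquad t = 1,\dots,T.
\]
% The cut-generating LP maximizes the violation of (\alpha, \beta) at the point
% to be separated, subject to these constraints and a normalization; the
% multipliers u^t, v^t are the "extra variables" whose values certify validity.
```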
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02185-x
- A linear time algorithm for linearizing quadratic and higher-order shortest path problems
Abstract: An instance of the NP-hard Quadratic Shortest Path Problem (QSPP) is called linearizable iff it is equivalent to an instance of the classic Shortest Path Problem (SPP) on the same input digraph. The linearization problem for the QSPP (LinQSPP) decides whether a given QSPP instance is linearizable and determines the corresponding SPP instance in the positive case. We provide a novel linear time algorithm for the LinQSPP on acyclic digraphs which runs considerably faster than the previously best algorithm. The algorithm is based on a new insight revealing that the linearizability of the QSPP for acyclic digraphs can be seen as a local property. Our approach extends to the more general higher-order shortest path problem.
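In the usual formulation (standard background, not quoted from the paper), an s-t path P has linear arc costs c plus pairwise interaction costs q, and linearizability asks for surrogate arc costs \(\hat{c}\) reproducing that cost on every path:

```latex
\[
  f(P) \;=\; \sum_{a \in P} c_a \;+\; \sum_{\{a,\,a'\} \subseteq P} q_{a a'},
  \qquad\text{linearizable}\iff
  \exists\, \hat{c}:\;
  \sum_{a \in P} \hat{c}_a \;=\; f(P)\ \text{ for every } s\text{-}t \text{ path } P.
\]
```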
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02086-z
- An update-and-stabilize framework for the minimum-norm-point problem
Abstract: We consider the minimum-norm-point (MNP) problem over polyhedra, a well-studied problem that encompasses linear programming. We present a general algorithmic framework that combines two fundamental approaches for this problem: active set methods and first order methods. Our algorithm performs first order update steps, followed by iterations that aim to ‘stabilize’ the current iterate with additional projections, i.e., find a locally optimal solution whilst keeping the current tight inequalities. Such steps have been previously used in active set methods for the nonnegative least squares (NNLS) problem. We bound the number of iterations polynomially in the dimension and in the associated circuit imbalance measure. In particular, the algorithm is strongly polynomial for network flow instances. Classical NNLS algorithms such as the Lawson–Hanson algorithm are special instantiations of our framework; as a consequence, we obtain convergence bounds for these algorithms. Our preliminary computational experiments show promising practical performance.
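For readers unfamiliar with the NNLS subroutine referenced above, here is a minimal sketch using SciPy, whose nnls routine is based on the Lawson–Hanson active-set method; the data are random and illustrative only, and this shows the classical subroutine rather than the paper's framework.

```python
import numpy as np
from scipy.optimize import nnls

# Small random instance of  min ||Ax - b||_2  subject to  x >= 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)

x, residual = nnls(A, b)   # Lawson–Hanson-style active-set solver
print("x =", x)
print("residual norm =", residual)
# Coordinates fixed at zero correspond to the active (tight) nonnegativity
# constraints, i.e. the "active set" the abstract alludes to.
print("active set:", np.where(x == 0)[0])
```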
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02077-0
- Monoidal strengthening and unique lifting in MIQCPs
Abstract: Using the recently proposed maximal quadratic-free sets and the well-known monoidal strengthening procedure, we show how to improve intersection cuts for quadratically constrained optimization problems by exploiting integrality requirements. We provide an explicit construction that allows an efficient implementation of the strengthened cuts, along with computational results showing their improvements over the standard intersection cuts. We also show that, in our setting, there is a unique lifting, which implies that our strengthening procedure generates the best possible cut coefficients for the integer variables.
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02112-0
- Optimizing for strategy diversity in the design of video games
Abstract: We consider the problem of designing a linear program that has diverse solutions as the right-hand side varies. This problem arises in video game settings where designers aim to have players use different “weapons” or “tactics” as they progress. We model this design question as a choice over the constraint matrix A and cost vector c to maximize the number of possible supports of unique optimal solutions (what we call “loadouts”) of linear programs \(\max \{c^\top x \mid Ax \le b, x \ge 0\}\) with nonnegative data, considered over all resource vectors b. We provide an upper bound on the optimal number of loadouts and provide a family of constructions that have an asymptotically optimal number of loadouts. The upper bound is based on a connection between our problem and the study of triangulations of point sets arising from polyhedral combinatorics, and specifically the combinatorics of the cyclic polytope. Our asymptotically optimal construction also draws inspiration from the properties of the cyclic polytope.
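As a purely illustrative experiment (toy data, not from the paper; it also does not verify uniqueness of the optimum, which the paper's definition of a loadout requires), one can enumerate the supports that a fixed (A, c) produces as the resource vector b varies:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy design: 2 resources, 3 "weapons" (variables).
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])
c = np.array([3.0, 4.0, 5.0])

supports = set()
for b in itertools.product(np.linspace(1, 10, 10), repeat=2):
    # linprog minimizes, so negate c to solve  max c^T x  s.t.  Ax <= b, x >= 0.
    res = linprog(-c, A_ub=A, b_ub=np.array(b), bounds=(0, None))
    if res.success:
        support = frozenset(i for i, xi in enumerate(res.x) if xi > 1e-8)
        supports.add(support)

print(sorted(tuple(sorted(s)) for s in supports))
```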
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02126-8
- A nearly optimal randomized algorithm for explorable heap selection
Abstract: Explorable heap selection is the problem of selecting the nth smallest value in a binary heap. The key values can only be accessed by traversing through the underlying infinite binary tree, and the complexity of the algorithm is measured by the total distance traveled in the tree (each edge has unit cost). This problem was originally proposed as a model to study search strategies for the branch-and-bound algorithm with storage restrictions by Karp, Saks and Wigderson (FOCS ’86), who gave deterministic and randomized \(n\cdot \exp (O(\sqrt{\log {n}}))\) time algorithms using \(O(\log (n)^{2.5})\) and \(O(\sqrt{\log n})\) space respectively. We present a new randomized algorithm with running time \(O(n\log (n)^3)\) against an oblivious adversary using \(O(\log n)\) space, substantially improving the previous best randomized running time at the expense of slightly increased space usage. We also show an \(\Omega (\log (n)n/\log (\log (n)))\) lower bound for any algorithm that solves the problem in the same amount of space, indicating that our algorithm is nearly optimal.
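To make the problem concrete, here is a toy best-first sketch on a procedurally generated infinite heap (all names hypothetical). It returns the nth smallest key but deliberately ignores the travel-cost model that the paper optimizes, in which an algorithm pays for walking between tree nodes and may only keep a small number of nodes in memory.

```python
import hashlib
import heapq

def key(node):
    """Deterministic pseudo-random keys on the infinite binary tree.
    The root is () and the children of v are v+(0,) and v+(1,); keys strictly
    increase along every root-to-leaf path, as in a binary min-heap."""
    k = 0.0
    for i in range(1, len(node) + 1):
        h = hashlib.sha256(str(node[:i]).encode()).hexdigest()
        k += 1 + (int(h, 16) % 1000) / 1000.0   # increment in [1, 2)
    return k

def nth_smallest(n):
    """Best-first search: the n-th popped frontier node has the n-th smallest
    key, because every unexplored node is no smaller than its frontier ancestor.
    Note: this ignores the travel distance charged by the paper's model."""
    frontier = [(key(()), ())]
    for _ in range(n - 1):
        _, node = heapq.heappop(frontier)
        for child in (node + (0,), node + (1,)):
            heapq.heappush(frontier, (key(child), child))
    return heapq.heappop(frontier)[0]

print(nth_smallest(10))
```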
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02145-5
- From approximate to exact integer programming
Abstract: Approximate integer programming is the following: For a given convex body \(K \subseteq {\mathbb {R}}^n\), either determine whether \(K \cap {\mathbb {Z}}^n\) is empty, or find an integer point in the convex body \(2\cdot (K - c) +c\), which is K scaled by 2 from its center of gravity c. Approximate integer programming can be solved in time \(2^{O(n)}\), while the fastest known methods for exact integer programming run in time \(2^{O(n)} \cdot n^n\). So far, there are no efficient methods for integer programming known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point \(x^* \in (K \cap {\mathbb {Z}}^n)\) can be found in time \(2^{O(n)}\), provided that the remainders of each component \(x_i^* \mod \ell \) for some arbitrarily fixed \(\ell \ge 5(n+1)\) of \(x^*\) are given. The algorithm is based on a cutting-plane technique, iteratively halving the volume of the feasible set. The cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a \(2^{O(n)}n^n\) algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (Integer programming, lattice algorithms, and deterministic volume estimation. PhD thesis, Georgia Institute of Technology, Atlanta, 2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation-standard form \(Ax = b, 0 \le x \le u, \, x \in {\mathbb {Z}}^n\). Such a problem can be reduced to the solution of \(\prod _i O(\log u_i +1)\) approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range \(0 \le x_i \le p(n)\) can be solved in time \((\log n)^{O(n)}\). For these problems, the best running time so far was \(n^n \cdot 2^{O(n)}\).
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02084-1
- Optimal general factor problem and jump system intersection
Abstract: In the optimal general factor problem, given a graph \(G=(V, E)\) and a set \(B(v) \subseteq {\mathbb {Z}}\) of integers for each \(v \in V\), we seek an edge subset F of maximum cardinality subject to \(d_F(v) \in B(v)\) for \(v \in V\), where \(d_F(v)\) denotes the number of edges in F incident to v. A recent crucial work by Dudycz and Paluch shows that this problem can be solved in polynomial time if each B(v) has no gap of length more than one. While their algorithm is very simple, its correctness proof is quite complicated. In this paper, we formulate the optimal general factor problem as the jump system intersection, and reveal when the algorithm by Dudycz and Paluch can be applied to this abstract form of the problem. By using this abstraction, we give another correctness proof of the algorithm, which is simpler than the original one. We also extend our result to the valuated case.
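The definition is easy to play with on tiny graphs; the brute-force sketch below (a hypothetical helper, exponential time, illustration only) returns a maximum-cardinality F with \(d_F(v) \in B(v)\) for all v, or None if no feasible F exists.

```python
from itertools import combinations

def optimal_general_factor(V, E, B):
    """Largest F ⊆ E with deg_F(v) ∈ B(v) for every v (brute force)."""
    for k in range(len(E), -1, -1):          # try larger edge sets first
        for F in combinations(E, k):
            deg = {v: 0 for v in V}
            for (u, w) in F:
                deg[u] += 1
                deg[w] += 1
            if all(deg[v] in B[v] for v in V):
                return set(F)
    return None

# Toy example: a triangle where every vertex must have degree 0 or 2.
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c"), ("c", "a")]
B = {v: {0, 2} for v in V}
print(optimal_general_factor(V, E, B))  # the whole triangle (3 edges)
```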
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02098-9
- A fast combinatorial algorithm for the bilevel knapsack problem with interdiction constraints
Abstract: We consider the bilevel knapsack problem with interdiction constraints, a fundamental bilevel integer programming problem which generalizes the 0–1 knapsack problem. In this problem, there are two knapsacks and n items. The objective is to select some items to pack into the first knapsack such that the maximum profit attainable from packing some of the remaining items into the second knapsack is minimized. We present a combinatorial branch-and-bound algorithm which outperforms the current state-of-the-art solution method in computational experiments for 99% of the instances reported in the literature. On many of the harder instances, our algorithm is orders of magnitude faster, which enabled it to solve 53 of the 72 previously unsolved instances. Our result relies fundamentally on a new dynamic programming algorithm which computes very strong lower bounds. This dynamic program solves a relaxation of the problem from bilevel to 2n-level where the items are processed in an online fashion. The relaxation is easier to solve but approximates the original problem surprisingly well in practice. We believe that this same technique may be useful for other interdiction problems.
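One common way to write this problem (standard background with generic symbols, not necessarily the paper's notation: leader weights a and budget A, follower weights w, capacity C, and profits p) is the min–max program below; the interdiction constraints \(y_i \le 1 - x_i\) forbid the follower from taking items the leader packed.

```latex
\[
  \min_{x \in \{0,1\}^n}\;\max_{y \in \{0,1\}^n}\;
  \sum_{i=1}^{n} p_i\, y_i
  \qquad \text{s.t.} \quad
  \sum_{i=1}^{n} a_i\, x_i \le A, \qquad
  \sum_{i=1}^{n} w_i\, y_i \le C, \qquad
  y_i \le 1 - x_i \ \ (i = 1,\dots,n).
\]
```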
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02133-9
- Stabilization of capacitated matching games
Abstract: An edge-weighted, vertex-capacitated graph \(G\) is called stable if the value of a maximum-weight capacity-matching equals the value of a maximum-weight fractional capacity-matching. Stable graphs play a key role in characterizing the existence of stable solutions for popular combinatorial games that involve the structure of matchings in graphs, such as network bargaining games and cooperative matching games. The vertex-stabilizer problem asks to compute a minimum number of players to block (i.e., vertices of \(G\) to remove) in order to ensure stability for such games. The problem has been shown to be solvable in polynomial time for unit-capacity graphs. This stays true even if we impose the restriction that the set of players to block must not intersect a given maximum matching of \(G\). In this work, we investigate these algorithmic problems in the more general setting of arbitrary capacities. We show that the vertex-stabilizer problem with the additional restriction of avoiding a given maximum matching remains polynomial-time solvable. In contrast, without this restriction, the vertex-stabilizer problem becomes NP-hard and even hard to approximate, unlike in the unit-capacity case.
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02169-x
- Advances on strictly $$\Delta$$-modular IPs
Abstract: There has been significant work recently on integer programs (IPs) \(\min \{c^\top x :Ax\le b,\,x\in \mathbb {Z}^n\}\) with a constraint matrix A with bounded subdeterminants. This is motivated by a well-known conjecture claiming that, for any constant \(\Delta \in \mathbb {Z}_{>0}\), \(\Delta \)-modular IPs are efficiently solvable; these are IPs where the constraint matrix \(A\in \mathbb {Z}^{m\times n}\) has full column rank and all \(n\times n\) minors of A are within \(\{-\Delta , \dots , \Delta \}\). Previous progress on this question, in particular for \(\Delta =2\), relies on algorithms that solve an important special case, namely strictly \(\Delta \)-modular IPs, which further restrict the \(n\times n\) minors of A to be within \(\{-\Delta , 0, \Delta \}\). Even for \(\Delta =2\), such problems include well-known combinatorial optimization problems like the minimum odd/even cut problem. The conjecture remains open even for strictly \(\Delta \)-modular IPs. Prior advances were restricted to prime \(\Delta \), which allows for employing strong number-theoretic results. In this work, we make the first progress beyond the prime case by presenting techniques that do not rely on such strong number-theoretic prime results. In particular, our approach implies that there is a randomized algorithm to check feasibility of strictly \(\Delta \)-modular IPs in strongly polynomial time if \(\Delta \le 4\).
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02148-2
- ReLU neural networks of polynomial size for exact maximum flow computation
Abstract: This paper studies the expressive power of artificial neural networks with rectified linear units. In order to study them as a model of real-valued computation, we introduce the concept of Max-Affine Arithmetic Programs and show equivalence between them and neural networks concerning natural complexity measures. We then use this result to show that two fundamental combinatorial optimization problems can be solved with polynomial-size neural networks. First, we show that for any undirected graph with n nodes, there is a neural network (with fixed weights and biases) of size \(\mathcal {O}(n^3)\) that takes the edge weights as input and computes the value of a minimum spanning tree of the graph. Second, we show that for any directed graph with n nodes and m arcs, there is a neural network of size \(\mathcal {O}(m^2n^2)\) that takes the arc capacities as input and computes a maximum flow. Our results imply that these two problems can be solved with strongly polynomial time algorithms that solely use affine transformations and maxima computations, but no comparison-based branchings.
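The basic building block behind such constructions is that maxima (and minima) are exactly representable with ReLUs and affine maps, as in this small numpy sketch (illustrative background only, not the paper's network for spanning trees or flows):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max2(a, b):
    # max(a, b) = a + relu(b - a): one hidden ReLU unit plus affine maps.
    return a + relu(b - a)

def min2(a, b):
    # min(a, b) = a - relu(a - b)
    return a - relu(a - b)

def max_all(values):
    # A balanced tree of pairwise maxima uses O(len(values)) ReLU units
    # and O(log len(values)) layers.
    vals = list(values)
    while len(vals) > 1:
        vals = [max2(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

print(max_all([3.0, -1.0, 7.5, 2.0]))  # 7.5
```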
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02096-x
- A $$\frac{4}{3}$$-approximation algorithm for half-integral cycle cut instances of the TSP
Abstract: A long-standing conjecture for the traveling salesman problem (TSP) states that the integrality gap of the standard linear programming relaxation of the TSP (sometimes called the Subtour LP or the Held-Karp bound) is at most 4/3 for symmetric instances of the TSP obeying the triangle inequality; that is, the cost of an optimal tour is at most 4/3 times the value of the corresponding linear program. There is a variety of evidence in support of the conjecture (see, for instance, Goemans in Math Program 69:335–349, 1995; Benoit and Boyd in Math Oper Res 33:921–931, 2008). It has long been known that the integrality gap is at most 3/2 (Wolsey in Math Program Study 13:121–134, 1980; Shmoys and Williamson in Inf Process Lett 35:281–285, 1990). Despite significant efforts by the community, the conjecture remains open. In this paper we consider the half-integral case, in which a feasible solution to the LP has solution values in \(\{0, 1/2, 1\}\). Such instances have been conjectured to be the most difficult instances for the overall four-thirds conjecture (Schalekamp et al. in Math Oper Res 39(2):403–417, 2014). Karlin et al. (in: Proceedings of the 52nd Annual ACM Symposium on the Theory of Computing, ACM, New York, 2020), in a breakthrough result, were able to show that in the half-integral case, the integrality gap is at most 1.49993; Gupta et al. (in: Integer Programming and Combinatorial Optimization. Lecture Notes in Computer Science, 2022. https://arxiv.org/abs/2111.09290) showed a slight improvement of this result to 1.4983. Additionally, this result led to the first significant progress on the overall conjecture in decades; the same authors showed the integrality gap of the Subtour LP is at most \(1.5-\epsilon \) for some \(\epsilon >10^{-36}\) (Karlin et al. in 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS). https://doi.org/10.1109/FOCS54457.2022.00084). With the improvements on the 3/2 bound remaining very incremental, even in the half-integral case, we turn the question around and look for a large class of half-integral instances for which we can prove that the 4/3 conjecture is correct, preferably one containing the known worst-case instances. In Karlin et al.’s work on the half-integral case, they perform induction on a hierarchy of critical tight sets in the support graph of the LP solution, in which some of the sets correspond to cycle cuts and the others to degree cuts. Here we show that if all the sets in the hierarchy correspond to cycle cuts, then we can find a distribution of tours whose expected cost is at most 4/3 times the value of the half-integral LP solution; sampling from the distribution gives us a randomized 4/3-approximation algorithm. We note that two important bad cases with an integrality gap of 4/3 have a half-integral LP solution in which all the critical tight sets in the hierarchy are cycle cuts; thus our result is tight. Our overall approach is novel. Most recent work has focused on showing that some variation of the Christofides-Serdyukov algorithm (Christofides in “Worst case analysis of a new heuristic for the traveling salesman problem”. Report 388, Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, 1976; Serdyukov in Upravlyaemye Sistemy 17:76–79, 1978), which combines a randomly sampled spanning tree with a T-join (or a matching), can be shown to give a bound better than 1.5. Here we show that for any point in a region of “patterns” of edges incident to each cycle cut, we can give a distribution of patterns connecting all the child cycle cuts such that the distribution of patterns for each child also falls in the region. This region gives rise to a distribution on Eulerian tours in which each edge in the support of the LP is used with probability at most four-thirds of its LP value, which then gives the result.
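For reference, the Subtour LP (Held–Karp relaxation) mentioned in the abstract is the following standard program; the half-integral case restricts attention to feasible solutions with \(x_e \in \{0, 1/2, 1\}\).

```latex
\[
  \min \sum_{e \in E} c_e x_e
  \quad \text{s.t.} \quad
  x(\delta(v)) = 2 \ \ \forall v \in V, \qquad
  x(\delta(S)) \ge 2 \ \ \forall\, \emptyset \ne S \subsetneq V, \qquad
  0 \le x_e \le 1 \ \ \forall e \in E.
\]
```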
PubDate: 2025-03-01
DOI: 10.1007/s10107-025-02193-5
- Inapproximability of shortest paths on perfect matching polytopes
Abstract: We consider the computational problem of finding short paths in the skeleton of the perfect matching polytope of a bipartite graph. We prove that unless \({\textsf {P}}={\textsf {NP}}\), there is no polynomial-time algorithm that computes a path of constant length between two vertices at distance two of the perfect matching polytope of a bipartite graph. Conditioned on \({\textsf {P}}\ne {\textsf {NP}}\), this disproves a conjecture by Ito et al. (SIAM J Discrete Math 36(2):1102–1123, 2022). Assuming the Exponential Time Hypothesis we prove the stronger result that there exists no polynomial-time algorithm computing a path of length at most \(\left( \frac{1}{4}-o(1)\right) \log N / \log \log N\) between two vertices at distance two of the perfect matching polytope of an N-vertex bipartite graph. These results remain true if the bipartite graph is restricted to be of maximum degree three. The above has the following interesting implication for the performance of pivot rules for the simplex algorithm on simply-structured combinatorial polytopes: If \({\textsf {P}}\ne {\textsf {NP}}\), then for every simplex pivot rule executable in polynomial time and every constant \(k \in {\mathbb {N}}\) there exists a linear program on a perfect matching polytope and a starting vertex of the polytope such that the optimal solution can be reached in two monotone non-degenerate steps from the starting vertex, yet the pivot rule will require at least k non-degenerate steps to reach the optimal solution. This result remains true in the more general setting of pivot rules for so-called circuit-augmentation algorithms.
PubDate: 2025-03-01
DOI: 10.1007/s10107-023-02025-4
- Multiplicative auction algorithm for approximate maximum weight bipartite matching
Abstract: We present an auction algorithm using multiplicative instead of constant weight updates to compute a \((1-\varepsilon )\)-approximate maximum weight matching (MWM) in a bipartite graph with n vertices and m edges in time \(O(m\varepsilon ^{-1})\), beating the running time of the fastest known approximation algorithm of Duan and Pettie [JACM ’14] that runs in \(O(m\varepsilon ^{-1}\log \varepsilon ^{-1})\). Our algorithm is very simple and it can be extended to give a dynamic data structure that maintains a \((1-\varepsilon )\)-approximate maximum weight matching under (1) one-sided vertex deletions (with incident edges) and (2) one-sided vertex insertions (with incident edges sorted by weight) to the other side. The total time used is \(O(m\varepsilon ^{-1})\), where m is the sum of the number of initially existing and inserted edges.
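For background, the classic additive-\(\varepsilon \) auction of Bertsekas conveys the bidding dynamic that the paper speeds up; the sketch below is that classic variant on a dense toy instance (not the paper's multiplicative-update algorithm, and all names are illustrative). The returned assignment is within \(n\varepsilon \) of the maximum weight.

```python
import numpy as np

def auction_matching(W, eps=0.01):
    """Classic epsilon-auction for max-weight assignment on an n x n weight
    matrix W (n >= 2).  Returns owner[j] = bidder holding object j."""
    n = W.shape[0]
    prices = np.zeros(n)
    owner = [-1] * n
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = W[i] - prices            # net value of each object for bidder i
        j = int(np.argmax(values))
        best = values[j]
        values[j] = -np.inf
        second = np.max(values)
        prices[j] += best - second + eps  # raise the price by the bid increment
        if owner[j] != -1:
            unassigned.append(owner[j])   # previous owner is outbid
        owner[j] = i
    return owner

# Toy instance: 3 bidders (rows) and 3 objects (columns).
W = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 3.0, 6.0]])
owner = auction_matching(W)
print(sum(W[owner[j], j] for j in range(3)))  # 15.0, the optimal weight here
```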
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02066-3
- Information complexity of mixed-integer convex optimization
Abstract: We investigate the information complexity of mixed-integer convex optimization under different types of oracles. We establish new lower bounds for the standard first-order oracle, improving upon the previous best known lower bound. This leaves only a lower order linear term (in the dimension) as the gap between the lower and upper bounds. This is derived as a corollary of a more fundamental “transfer” result that shows how lower bounds on information complexity of continuous convex optimization under different oracles can be transferred to the mixed-integer setting in a black-box manner. Further, we (to the best of our knowledge) initiate the study of, and obtain the first set of results on, information complexity under oracles that only reveal partial first-order information, e.g., where one can only make a binary query over the function value or subgradient at a given point. We give algorithms for (mixed-integer) convex optimization that work under these less informative oracles. We also give lower bounds showing that, for some of these oracles, every algorithm requires more iterations to achieve a target error compared to when complete first-order information is available. That is, these oracles are provably less informative than full first-order oracles for the purpose of optimization.
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02099-8
- A characterization of maximal homogeneous-quadratic-free sets
Abstract: The intersection cut framework was introduced by Balas in 1971 as a method for generating cutting planes in integer optimization. In this framework, one uses a full-dimensional convex S-free set, where S is the feasible region of the integer program, to derive a cut separating S from a non-integral vertex of a linear relaxation of S. Among all S-free sets, it is the inclusion-wise maximal ones that yield the strongest cuts. Recently, this framework has been extended beyond the integer case in order to obtain cutting planes in non-linear settings. In this work, we consider the specific setting when S is defined by a homogeneous quadratic inequality. In this ‘quadratic-free’ setting, every function \(\Gamma : D^m \rightarrow D^n\), where \(D^k\) is the unit sphere in \(\mathbb {R}^k\), generates a representation of a quadratic-free set. While not every \(\Gamma \) generates a maximal quadratic-free set, it is the case that every full-dimensional maximal quadratic-free set is generated by some \(\Gamma \). Our main result shows that the corresponding quadratic-free set is full-dimensional and maximal if and only if \(\Gamma \) is non-expansive and satisfies a technical condition. This result yields a broader class of maximal S-free sets than previously known. Our result stems from a new characterization of maximal S-free sets (for general S beyond the quadratic setting) based on sequences that ‘expose’ inequalities defining the S-free set.
PubDate: 2025-03-01
DOI: 10.1007/s10107-024-02092-1