SIAM Journal on Computing
Journal Prestige (SJR): 0.874 · CiteScore: 2 · Hybrid journal (2 Open Access articles in this issue) · ISSN (Print) 0097-5397 · ISSN (Online) 1095-7111 · Published by the Society for Industrial and Applied Mathematics
 Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree
Polynomials, and Langevin Dynamics
Authors: David Gamarnik, Aukosh Jagannath, Alexander S. Wein
Pages: 1–46
Abstract: SIAM Journal on Computing, Volume 53, Issue 1, Pages 1–46, February 2024.
Abstract. We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Such problems arise widely in the theory of random graphs, theoretical computer science, and statistical physics. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising [math]-spin glass model and (b) finding a large independent set in a sparse Erdős–Rényi graph. The following families of algorithms are considered: (a) low-degree polynomials of the input—a general framework that captures many prior algorithms; (b) low-depth Boolean circuits; (c) the Langevin dynamics algorithm, a canonical Monte Carlo analogue of the gradient descent algorithm. We show that these families of algorithms cannot have high success probability. For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory (although we consider the search problem as opposed to the decision problem). Our proof uses the fact that these models are known to exhibit a variant of the overlap gap property (OGP) of near-optimal solutions. Specifically, for both models, every two solutions whose objectives are above a certain threshold are either close to or far from each other. The crux of our proof is that the classes of algorithms we consider exhibit a form of stability (noise-insensitivity): a small perturbation of the input induces a small perturbation of the output. We show by an interpolation argument that stable algorithms cannot overcome the OGP barrier. The stability of Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials and Boolean circuits is established using tools from Gaussian and Boolean analysis—namely hypercontractivity and total influence, as well as a novel lower bound for random walks avoiding certain subsets, which we expect to be of independent interest.
In the case of Boolean circuits, the result also makes use of Linial–Mansour–Nisan’s classical theorem. Our techniques apply more broadly to low-influence functions, and we expect that they may apply more generally.
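To make algorithm family (c) concrete, here is a minimal discretized sketch of Langevin dynamics on a spherical 2-spin Hamiltonian H(x) = xᵀJx/√n with ‖x‖² = n; the dimension, step size, and inverse temperature below are illustrative choices, not parameters from the paper.

```python
import math
import random


def langevin_spherical_2spin(n=30, steps=2000, eta=0.01, beta=5.0, seed=0):
    """Discretized Langevin ascent of H(x) = x^T J x / sqrt(n) on the sphere |x|^2 = n."""
    rng = random.Random(seed)
    # Symmetric Gaussian coupling matrix J
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            J[i][j] = J[j][i] = rng.gauss(0, 1)

    def energy(x):
        return sum(x[i] * J[i][j] * x[j] for i in range(n) for j in range(n)) / math.sqrt(n)

    # start uniformly at random on the sphere of radius sqrt(n)
    x = [rng.gauss(0, 1) for _ in range(n)]
    r = math.sqrt(sum(xi * xi for xi in x))
    x = [xi * math.sqrt(n) / r for xi in x]
    e0 = energy(x)

    sigma = math.sqrt(2 * eta / beta)  # noise scale at inverse temperature beta
    for _ in range(steps):
        grad = [2 / math.sqrt(n) * sum(J[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + eta * grad[i] + sigma * rng.gauss(0, 1) for i in range(n)]
        r = math.sqrt(sum(xi * xi for xi in x))
        x = [xi * math.sqrt(n) / r for xi in x]  # retract to the sphere
    return e0, energy(x)
```

The hardness result says that no tuning of such a chain, run for the relevant time horizon, can reliably reach the near-optimal energy band once the OGP kicks in.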
Citation: SIAM Journal on Computing
PubDate: 2024-02-13T08:00:00Z
DOI: 10.1137/22M150263X
Issue No: Vol. 53, No. 1 (2024)

 The Shortest Even Cycle Problem Is Tractable
Open Access Article
Authors: Andreas Björklund, Thore Husfeldt, Petteri Kaski
Pages: STOC22-22–STOC22-45
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. Given a directed graph as input, we show how to efficiently find a shortest (directed, simple) cycle on an even number of vertices. As far as we know, no polynomial-time algorithm was previously known for this problem. In fact, finding any even cycle in a directed graph in polynomial time was open for more than two decades until Robertson, Seymour, and Thomas (Ann. of Math. (2), 150 (1999), pp. 929–975) and, independently, McCuaig (Electron. J. Combin., 11 (2004), R79) (announced jointly at STOC 1997) gave an efficiently testable structural characterization of even-cycle-free directed graphs. Methodologically, our algorithm relies on the standard framework of algebraic fingerprinting and randomized polynomial identity testing over a finite field and, in fact, relies on a generating polynomial implicit in a paper of Vazirani and Yannakakis (Discrete Appl. Math., 25 (1989), pp. 179–190) that enumerates weighted cycle covers by the parity of their number of cycles as a difference of a permanent and a determinant polynomial. The need to work with the permanent—known to be #P-hard apart from a very restricted choice of coefficient rings (L. G. Valiant, Theoret. Comput. Sci., 8 (1979), pp. 189–201)—is where our main technical contribution occurs. We design a family of finite commutative rings of characteristic 4 that simultaneously (i) give a nondegenerate representation for the generating polynomial identity via the permanent and the determinant, (ii) support efficient permanent computations by extension of Valiant’s techniques, and (iii) enable emulation of finite-field arithmetic in characteristic 2. Here our work is foreshadowed by that of Björklund and Husfeldt (SIAM J. Comput., 48 (2019), pp. 1698–1710), who used a considerably less efficient commutative ring design—in particular, one lacking finite-field emulation—to obtain a polynomial-time algorithm for the shortest two disjoint paths problem in undirected graphs.
Building on work of Gilbert and Tarjan (Numer. Math., 50 (1986), pp. 377–404) as well as Alon and Yuster (J. ACM, 42 (2013), pp. 844–856), we also show how ideas from the nested dissection technique for solving linear equation systems—introduced by George (SIAM J. Numer. Anal., 10 (1973), pp. 345–363) for symmetric positive definite real matrices—lead to faster algorithm designs in our present finite-ring randomized context when we have control of the separator structure of the input graph; for example, this happens when the input has bounded genus.
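The permanent-minus-determinant identity behind the fingerprint can be checked directly on tiny matrices: a permutation on n elements with c cycles has sign (-1)^(n-c), so perm(A) - det(A) equals twice the total weight of cycle covers with n - c odd. A brute-force sketch for illustration only (the paper, of course, never expands the permanent this way):

```python
from itertools import permutations


def perm_det_cycle_parity(A):
    """Brute-force permanent, determinant, and total weight of sign -1 cycle covers
    (those whose number of cycles c satisfies n - c odd) of a square matrix A."""
    n = len(A)
    perm = det = odd = 0
    for p in permutations(range(n)):
        w = 1
        for i in range(n):
            w *= A[i][p[i]]
        if w == 0:
            continue  # not a cycle cover of the digraph
        # count the cycles of the permutation p
        seen, c = [False] * n, 0
        for i in range(n):
            if not seen[i]:
                c += 1
                j = i
                while not seen[j]:
                    seen[j] = True
                    j = p[j]
        perm += w
        if (n - c) % 2:
            det -= w
            odd += w
        else:
            det += w
    return perm, det, odd
```

On any input, perm - det == 2 * odd, which is the enumeration-by-parity the generating polynomial exploits.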
Citation: SIAM Journal on Computing
PubDate: 2024-02-15T08:00:00Z
DOI: 10.1137/22M1538260

 A Strong Version of Cobham’s Theorem

Authors: Philipp Hieronymi, Chris Schulz
Pages: STOC22-1–STOC22-21
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. Let [math] be two multiplicatively independent integers. Cobham’s famous theorem states that a set [math] is both [math]-recognizable and [math]-recognizable if and only if it is definable in Presburger arithmetic. Here we show the following strengthening: let [math] be [math]-recognizable, and let [math] be [math]-recognizable such that both [math] and [math] are not definable in Presburger arithmetic. Then the first-order logical theory of [math] is undecidable. This is in contrast to a well-known theorem of Büchi stating that the first-order logical theory of [math] is decidable.
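As a toy illustration of [math]-recognizability (our example, not the paper’s): the powers of 2 are 2-recognizable, since the base-2 expansions of its elements form exactly the regular language 10*, checkable by a two-state automaton.

```python
def run_dfa(transitions, accept, start, word):
    """Run a DFA given as {(state, symbol): state}; a missing edge rejects."""
    state = start
    for ch in word:
        if (state, ch) not in transitions:
            return False
        state = transitions[(state, ch)]
    return state in accept


# automaton for the regular language 10*: base-2 expansions of powers of two
POW2 = ({('s', '1'): 'a', ('a', '0'): 'a'}, {'a'}, 's')


def is_power_of_two(n):
    return n >= 1 and run_dfa(*POW2, bin(n)[2:])
```

Cobham’s theorem says a set recognizable in this sense in two multiplicatively independent bases must already be Presburger-definable; the strengthening above makes the failure of that conclusion drastically undecidable.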
Citation: SIAM Journal on Computing
PubDate: 2024-01-17T08:00:00Z
DOI: 10.1137/22M1538065

 Discrepancy Minimization via a Self-Balancing Walk

Authors: Ryan Alweiss, Yang P. Liu, Mehtaab S. Sawhney
Pages: STOC21-211–STOC21-224
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. We study discrepancy minimization for vectors in [math] under various settings. The main result is the analysis of a new simple random process in high dimensions through a comparison argument. As corollaries, we obtain bounds which are tight up to logarithmic factors for online vector balancing against oblivious adversaries, resolving several questions posed by Bansal et al. [STOC, ACM, New York, 2020, pp. 1139–1152], as well as a linear time algorithm for logarithmic bounds for the Komlós conjecture.
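The “simple random process” admits a short sketch: maintain the running signed sum w and bias each new sign against it. The clamping and the concrete choice of c below are illustrative simplifications; the analyzed process instead declares failure (with negligible probability) when the inner product grows too large.

```python
import math
import random


def self_balancing_walk(vectors, c, rng):
    """Online signing of unit vectors: bias each sign to push the prefix sum toward 0."""
    w = [0.0] * len(vectors[0])
    signs = []
    for v in vectors:
        dot = sum(wi * vi for wi, vi in zip(w, v))
        # probability of +1 drifts the walk back toward the origin; clamp for safety
        p = min(1.0, max(0.0, 0.5 - dot / (2 * c)))
        eps = 1 if rng.random() < p else -1
        signs.append(eps)
        w = [wi + eps * vi for wi, vi in zip(w, v)]
    return signs, w


# usage: 500 random unit vectors in R^20
rng = random.Random(1)
vecs = []
for _ in range(500):
    g = [rng.gauss(0, 1) for _ in range(20)]
    norm = math.sqrt(sum(x * x for x in g))
    vecs.append([x / norm for x in g])
signs, w = self_balancing_walk(vecs, c=30 * math.log(520), rng=rng)
```

The comparison argument in the paper shows the resulting prefix sums stay polylogarithmically bounded against an oblivious adversary, which is what yields the near-tight online vector balancing bounds.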
Citation: SIAM Journal on Computing
PubDate: 2024-01-12T08:00:00Z
DOI: 10.1137/21M1442450

 Non-Black-Box Worst-Case to Average-Case Reductions Within [math]

Authors: Shuichi Hirahara
Pages: FOCS18-349–FOCS18-382
Abstract: SIAM Journal on Computing, Volume 52, Issue 6, Pages FOCS18-349–FOCS18-382, December 2023.
Abstract. There are significant obstacles to establishing an equivalence between the worst-case and average-case hardness of [math]. Several results suggest that black-box worst-case to average-case reductions are not likely to be used for reducing any worst-case problem outside [math] to a distributional [math] problem. This paper overcomes the barrier. We present the first non-black-box worst-case to average-case reduction from a problem conjectured to be outside [math] to a distributional [math] problem. Specifically, we consider the minimum time-bounded Kolmogorov complexity problem (MINKT) and prove that there exists a zero-error randomized polynomial-time algorithm approximating the minimum time-bounded Kolmogorov complexity [math] within an additive error [math] if its average-case version admits an errorless heuristic polynomial-time algorithm. We observe that the approximation version of MINKT is Random 3SAT-hard, and more generally it is harder than avoiding any polynomial-time computable hitting set generator that extends its seed of length [math] by [math], which provides strong evidence that the approximation problem is outside [math] and thus our reductions are non-black-box. Our reduction can be derandomized at the cost of the quality of the approximation. We also show that, given a truth table of size [math], approximating the minimum circuit size within a factor of [math] is in [math] for some constant [math] iff its average-case version is easy. Our results can be seen as a new approach for excluding Heuristica. In particular, proving [math]-hardness of the approximation versions of MINKT or the minimum circuit size problem is sufficient for establishing an equivalence between the worst-case and average-case hardness of [math].
Citation: SIAM Journal on Computing
PubDate: 2023-12-15T08:00:00Z
DOI: 10.1137/19M124705X
Issue No: Vol. 52, No. 6 (2023)

 Special Section on the Fifty-Ninth Annual IEEE Symposium on Foundations of
Computer Science (2018)
Authors: Elette Boyle, Vincent Cohen-Addad, Alexandra Kolla, Mikkel Thorup
Pages: FOCS18-i–FOCS18-i
Abstract: SIAM Journal on Computing, Volume 52, Issue 6, Pages FOCS18-i–FOCS18-i, December 2023.
Citation: SIAM Journal on Computing
PubDate: 2023-12-14T08:00:00Z
DOI: 10.1137/23M1617011
Issue No: Vol. 52, No. 6 (2023)

 Sampling Graphs without Forbidden Subgraphs and Unbalanced Expanders with
Negligible Error
Authors: Benny Applebaum, Eliran Kachlon
Pages: 1321–1368
Abstract: SIAM Journal on Computing, Volume 52, Issue 6, Pages 1321–1368, December 2023.
Abstract. Suppose that you wish to sample a random graph [math] over [math] vertices and [math] edges conditioned on the event that [math] does not contain a “small” [math]-size graph [math] (e.g., clique) as a subgraph. Assuming that most such graphs are [math]-free, the problem can be solved by a simple rejection-sampling algorithm (that tests for [math]-cliques) with an expected running time of [math]. Is it possible to solve the problem in a running time that does not grow polynomially with [math]? In this paper, we introduce the general problem of sampling a “random-looking” graph [math] with a given edge density that avoids some arbitrary predefined [math]-size subgraph [math]. As our main result, we show that the problem is solvable with respect to some specially crafted [math]-wise independent distribution over graphs. That is, we design a sampling algorithm for [math]-wise independent graphs that supports efficient testing for subgraph-freeness in time [math], where [math] is a function of [math] and the constant [math] in the exponent is independent of [math]. Our solution extends to the case where both [math] and [math] are [math]-uniform hypergraphs. We use these algorithms to obtain the first probabilistic construction of constant-degree polynomially unbalanced expander graphs whose failure probability is negligible in [math] (i.e., [math]). In particular, given constants [math], we output a bipartite graph that has [math] left nodes and [math] right nodes with right-degree of [math] so that any right set of size at most [math] expands by a factor of [math]. This result is extended to the setting of unique expansion as well. We observe that such a negligible-error construction can be employed in many useful settings and present applications in coding theory (batch codes and low-density parity-check codes), pseudorandomness (low-bias generators and randomness extractors), and cryptography.
Notably, we show that our constructions yield a collection of polynomial-stretch locally computable cryptographic pseudorandom generators based on Goldreich’s one-wayness assumption, resolving a central open problem in the area of parallel-time cryptography (e.g., Applebaum, Ishai, and Kushilevitz [SIAM J. Comput., 36 (2006), pp. 845–888] and Ishai et al. [Proceedings of the 40th Annual ACM Symposium on Theory of Computing, ACM, 2008, pp. 433–442]).
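The rejection-sampling baseline the abstract starts from is easy to state; here is a sketch for the triangle-free case (the subgraph test is the expensive step, and generalizing it to k-vertex subgraphs is what brings in the polynomial dependence on the subgraph size):

```python
import random
from itertools import combinations


def sample_triangle_free(n, m, rng, max_tries=100000):
    """Rejection sampling: draw G(n, m) repeatedly until a draw contains no triangle."""
    all_edges = list(combinations(range(n), 2))
    for _ in range(max_tries):
        edges = set(rng.sample(all_edges, m))
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        # subgraph-freeness test: an edge whose endpoints share a neighbor closes a triangle
        if not any(adj[u] & adj[v] for u, v in edges):
            return edges
    raise RuntimeError("no triangle-free draw found; density is likely too high")
```

The paper’s contribution is to replace this truly random (and expensively tested) draw by a crafted [math]-wise independent distribution whose subgraph-freeness can be tested far faster.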
Citation: SIAM Journal on Computing
PubDate: 2023-11-14T08:00:00Z
DOI: 10.1137/22M1484134
Issue No: Vol. 52, No. 6 (2023)

 A Faster Exponential Time Algorithm for Bin Packing With a Constant Number
of Bins via Additive Combinatorics
Authors: Jesper Nederlof, Jakub Pawlewicz, Céline M. F. Swennenhuis, Karol Węgrzycki
Pages: 1369–1412
Abstract: SIAM Journal on Computing, Volume 52, Issue 6, Pages 1369–1412, December 2023.
Abstract. In the Bin Packing problem one is given [math] items with weights [math] and [math] bins with capacities [math]. The goal is to partition the items into sets [math] such that [math] for every bin [math], where [math] denotes [math]. Björklund, Husfeldt, and Koivisto [SIAM J. Comput., 39 (2009), pp. 546–563] presented an [math] time algorithm for Bin Packing (the [math] notation omits factors polynomial in the input size). In this paper, we show that for every [math] there exists a constant [math] such that an instance of Bin Packing with [math] bins can be solved in [math] randomized time. Before our work, such improved algorithms were not known even for [math]. A key step in our approach is the following new result in Littlewood–Offord theory on the additive combinatorics of subset sums: For every [math] there exists an [math] such that if [math] for some [math], then [math].
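For contrast with the improved running times above, the problem itself has a straightforward subset dynamic program (a simplified textbook scheme, not the algorithm of Björklund, Husfeldt, and Koivisto, and far slower than either bound):

```python
def bin_packing(weights, capacities):
    """Decide whether the items can be partitioned into bins with the given capacities.

    feasible holds every item-subset (as a bitmask) packable into the bins processed so far.
    """
    n = len(weights)

    def weight(mask):
        return sum(weights[i] for i in range(n) if mask >> i & 1)

    feasible = {0}
    for cap in capacities:
        nxt = set()
        for packed in feasible:
            free = ((1 << n) - 1) ^ packed
            sub = free
            while True:  # enumerate all subsets of the not-yet-packed items
                if weight(sub) <= cap:
                    nxt.add(packed | sub)
                if sub == 0:
                    break
                sub = (sub - 1) & free
        feasible = nxt
    return (1 << n) - 1 in feasible
```

The additive-combinatorics result in the paper is what lets the authors beat enumeration of this kind once the number of bins is a constant.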
Citation: SIAM Journal on Computing
PubDate: 2023-11-29T08:00:00Z
DOI: 10.1137/22M1478112
Issue No: Vol. 52, No. 6 (2023)

 A Structural Theorem for Local Algorithms with Applications to Coding,
Testing, and Verification
Authors: Marcel Dall’Agnol, Tom Gur, Oded Lachish
Pages: 1413–1463
Abstract: SIAM Journal on Computing, Volume 52, Issue 6, Pages 1413–1463, December 2023.
Abstract. We prove a general structural theorem for a wide family of local algorithms, which includes property testers, local decoders, and probabilistically checkable proofs of proximity. Namely, we show that the structure of every algorithm that makes [math] adaptive queries and satisfies a natural robustness condition admits a sample-based algorithm with [math] sample complexity, following the definition of Goldreich and Ron [ACM Trans. Comput. Theory, 8 (2016), 7]. We prove that this transformation is nearly optimal. Our theorem also admits a scheme for constructing privacy-preserving local algorithms. Using the unified view that our structural theorem provides, we obtain results regarding various types of local algorithms, including the following. We strengthen the state-of-the-art lower bound for relaxed locally decodable codes, obtaining an exponential improvement on the dependency in query complexity; this resolves an open problem raised by Gur and Lachish [SIAM J. Comput., 50 (2021), pp. 788–813]. We show that any (constant-query) testable property admits a sample-based tester with sublinear sample complexity; this resolves a problem left open in a work of Fischer, Lachish, and Vasudev [Proceedings of the 56th Annual Symposium on Foundations of Computer Science, IEEE, 2015, pp. 1163–1182], bypassing an exponential blowup caused by previous techniques in the case of adaptive testers. We prove that the known separation between proofs of proximity and testers is essentially maximal; this resolves a problem left open by Gur and Rothblum [Proceedings of the 8th Innovations in Theoretical Computer Science Conference, 2017, pp. 39:1–39:43; Comput. Complexity, 27 (2018), pp. 99–207] regarding sublinear-time delegation of computation. Our techniques strongly rely on relaxed sunflower lemmas and the Hajnal–Szemerédi theorem.
Citation: SIAM Journal on Computing
PubDate: 2023-12-06T08:00:00Z
DOI: 10.1137/21M1422781
Issue No: Vol. 52, No. 6 (2023)

 A Polynomial Lower Bound on the Number of Rounds for Parallel Submodular
Function Minimization and Matroid Intersection
Authors: Deeparnab Chakrabarty, Yu Chen, Sanjeev Khanna
Pages: 38–66
Abstract: SIAM Journal on Computing, Volume 52, Issue 1, Pages 38–66, February 2023.
Abstract. Submodular function minimization (SFM) and matroid intersection are fundamental discrete optimization problems with applications in many fields. It is well known that both of these can be solved making [math] queries to a relevant oracle (evaluation oracle for SFM and rank oracle for matroid intersection), where [math] denotes the universe size. However, all known polynomial query algorithms are highly adaptive, requiring at least [math] rounds of querying the oracle. A natural question is whether these can be efficiently solved in a highly parallel manner, namely, with [math] queries using only polylogarithmic rounds of adaptivity. An important step towards understanding the adaptivity needed for efficient parallel SFM was taken recently in the work of Balkanski and Singer, who showed that any SFM algorithm making [math] queries necessarily requires [math] rounds. This left open the possibility of efficient SFM algorithms in polylogarithmic rounds. For matroid intersection, even the possibility of a constant-round, [math]-query algorithm was not hitherto ruled out. In this work, we prove that any, possibly randomized, algorithm for submodular function minimization or matroid intersection making [math] queries requires (throughout the paper, we use the usual convention of using [math] to denote [math] and using [math] to denote [math] for some unspecified constant [math]) [math] rounds of adaptivity. In fact, we show a polynomial lower bound on the number of rounds of adaptivity even for algorithms that make at most [math] queries for any constant [math]. Therefore, even though SFM and matroid intersection are efficiently solvable, they are not highly parallelizable in the oracle model.
Citation: SIAM Journal on Computing
PubDate: 2023-02-02T08:00:00Z
DOI: 10.1137/22M147685X
Issue No: Vol. 52, No. 1 (2023)

 Approximation Algorithms for LCS and LIS with Truly Improved Running Times

Authors: Aviad Rubinstein, Saeed Seddighin, Zhao Song, Xiaorui Sun
Pages: FOCS19-276–FOCS19-331
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. Longest common subsequence (LCS) is a classic and central problem in combinatorial optimization. While LCS admits a quadratic time solution, recent evidence suggests that solving the problem may be impossible in truly subquadratic time. A special case of LCS wherein each character appears at most once in every string is equivalent to the longest increasing subsequence (LIS) problem, which can be solved in quasilinear time. In this work, we present novel algorithms for approximating LCS in truly subquadratic time and LIS in truly sublinear time. Our approximation factors depend on the ratio of the optimal solution size to the input size. We denote this ratio by [math] and obtain the following results for LCS and LIS without any prior knowledge of [math]: a truly subquadratic time algorithm for LCS with approximation factor [math] and a truly sublinear time algorithm for LIS with approximation factor [math]. The triangle inequality was recently used by M. Boroujeni, S. Ehsani, M. Ghodsi, M. Hajiaghayi, and S. Seddighin [Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, 2018, pp. 1170–1189] and D. Chakraborty, D. Das, E. Goldenberg, M. Koucký, and M. Saks [Proceedings of the 59th Annual IEEE Symposium on Foundations of Computer Science, 2018, pp. 979–990] to present new approximation algorithms for edit distance. Our techniques for LCS extend the notion of the triangle inequality to nonmetric settings.
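The quasilinear-time exact LIS algorithm that the sublinear approximation is measured against is the standard patience-sorting scheme, sketched here for reference:

```python
from bisect import bisect_left


def lis_length(seq):
    """Length of the longest strictly increasing subsequence in O(n log n).

    tails[k] is the smallest possible last element of an increasing
    subsequence of length k + 1 found so far.
    """
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)
```

Even this linear-time-per-element scan reads the whole input; the point of the paper is to approximate the answer after reading only a sublinear portion of it.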
Citation: SIAM Journal on Computing
PubDate: 2023-11-17T08:00:00Z
DOI: 10.1137/20M1316500

 Breaking the Cubic Barrier for (Unweighted) Tree Edit Distance

Authors: Xiao Mao
Pages: FOCS21-195–FOCS21-223
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. The (unweighted) tree edit distance problem for [math] node trees asks to compute a measure of dissimilarity between two rooted trees with node labels. The current best algorithm from more than a decade ago runs in [math] time [Demaine et al., Automata, Languages and Programming, Springer, Berlin, 2007, pp. 146–157]. The same paper also showed that [math] is the best possible running time for any algorithm using the so-called decomposition strategy, which underlies almost all the known algorithms for this problem. These algorithms would also work for the weighted tree edit distance problem, which cannot be solved in truly subcubic time under the All-Pairs Shortest Paths conjecture [Bringmann et al., ACM Trans. Algorithms, 16 (2020), pp. 48:1–48:22]. In this paper, we break the cubic barrier by showing an [math] time algorithm for the unweighted tree edit distance problem. We consider an equivalent maximization problem and use a dynamic programming scheme involving matrices with many special properties. By using a decomposition scheme as well as several combinatorial techniques, we reduce tree edit distance to the max-plus product of bounded-difference matrices, which can be solved in truly subcubic time [Bringmann et al., Proceedings of the IEEE 57th Annual Symposium on Foundations of Computer Science, 2016, pp. 375–384].
Citation: SIAM Journal on Computing
PubDate: 2023-11-16T08:00:00Z
DOI: 10.1137/22M1480719

 A Single-Exponential Time 2-Approximation Algorithm for Treewidth

Authors: Tuukka Korhonen
Pages: FOCS21-174–FOCS21-194
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. We give an algorithm that, given an [math]-vertex graph [math] and an integer [math], in time [math] either outputs a tree decomposition of [math] of width at most [math] or determines that the treewidth of [math] is larger than [math]. This is the first 2-approximation algorithm for treewidth that is faster than the known exact algorithms, and in particular improves upon the previous best approximation ratio of 5 in time [math] given by Bodlaender et al. [SIAM J. Comput., 45 (2016), pp. 317–378]. Our algorithm works by applying incremental improvement operations to a tree decomposition, using an approach inspired by a proof of Bellenbaum and Diestel [Combin. Probab. Comput., 11 (2002), pp. 541–547].
Citation: SIAM Journal on Computing
PubDate: 2023-11-14T08:00:00Z
DOI: 10.1137/22M147551X

 Adversarial Laws of Large Numbers and Optimal Regret in Online
Classification
Authors: Noga Alon, Omri Ben-Eliezer, Yuval Dagan, Shay Moran, Moni Naor, Eylon Yogev
Pages: STOC21-154–STOC21-210
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. Laws of large numbers guarantee that given a large enough sample from some population, the measure of any fixed subpopulation is well-estimated by its frequency in the sample. We study laws of large numbers in sampling processes that can affect the environment they are acting upon and interact with it. Specifically, we consider the sequential sampling model proposed by [O. Ben-Eliezer and E. Yogev, The adversarial robustness of sampling, in Proceedings of the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS), 2020, pp. 49–62] and characterize the classes which admit a uniform law of large numbers in this model: these are exactly the classes that are online learnable. Our characterization may be interpreted as an online analogue to the equivalence between learnability and uniform convergence in statistical (PAC) learning. The sample-complexity bounds we obtain are tight for many parameter regimes, and as an application, we determine the optimal regret bounds in online learning, stated in terms of Littlestone’s dimension, thus resolving the main open question from [S. Ben-David, D. Pál, and S. Shalev-Shwartz, Agnostic online learning, in Proceedings of the 22nd Conference on Learning Theory (COLT), 2009], which was also posed by [A. Rakhlin, K. Sridharan, and A. Tewari, J. Mach. Learn. Res., 16 (2015), pp. 155–186].
Citation: SIAM Journal on Computing
PubDate: 2023-11-03T07:00:00Z
DOI: 10.1137/21M1441924

 Optimal Mixing of Glauber Dynamics: Entropy Factorization via
High-Dimensional Expansion
Open Access Article
Authors: Zongchen Chen, Kuikui Liu, Eric Vigoda
Pages: STOC21-104–STOC21-153
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. We prove an optimal mixing time bound for the single-site update Markov chain known as the Glauber dynamics or Gibbs sampling in a variety of settings. Our work presents an improved version of the spectral independence approach of Anari, Liu, and Oveis Gharan [Proceedings of the 61st Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2020, pp. 1319–1330] and shows [math] mixing time on any [math]-vertex graph of bounded degree when the maximum eigenvalue of an associated influence matrix is bounded. As an application of our results, for the hardcore model on independent sets weighted by a fugacity [math], we establish [math] mixing time for the Glauber dynamics on any [math]-vertex graph of constant maximum degree [math] when [math], where [math] is the critical point for the uniqueness/nonuniqueness phase transition on the [math]-regular tree. More generally, for any antiferromagnetic 2-spin system we prove the [math] mixing time of the Glauber dynamics on any bounded degree graph in the corresponding tree uniqueness region. Our results apply more broadly; for example, we also obtain [math] mixing for [math]-colorings of triangle-free graphs of maximum degree [math] when the number of colors satisfies [math], where [math], and [math] mixing for generating random matchings of any graph with bounded degree and [math] edges. Our approach is based on two steps. First, we show that the approximate tensorization of entropy (i.e., factorizing entropy into single vertices), which is a key step for establishing the modified log-Sobolev inequality in many previous works, can be deduced from entropy factorization into blocks of fixed linear size. Second, we adapt the local-to-global scheme of Alev and Lau [Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing (STOC), 2020, pp. 1198–1211] to establish such block factorization of entropy in a more general setting of pure weighted simplicial complexes satisfying local spectral expansion; this also substantially generalizes the result of Cryan, Guo, and Mousa [Proceedings of the 60th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2019, pp. 1358–1370].
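For reference, a single step of the Glauber dynamics for the hardcore model is simple to state; this is the standard chain, with lam standing in for the fugacity [math]:

```python
import random


def glauber_hardcore_step(adj, occupied, lam, rng):
    """One single-site update: resample a uniformly chosen vertex from its conditional law."""
    v = rng.randrange(len(adj))
    occupied.discard(v)
    if not any(u in occupied for u in adj[v]):
        # an unblocked vertex is occupied with conditional probability lam / (1 + lam)
        if rng.random() < lam / (1.0 + lam):
            occupied.add(v)
    return occupied


# usage: run the chain on a 6-cycle starting from the empty independent set
cycle = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
state = set()
rng = random.Random(0)
for _ in range(1000):
    state = glauber_hardcore_step(cycle, state, lam=1.0, rng=rng)
```

Each update keeps the configuration an independent set by construction; the theorem above is about how fast this chain reaches its stationary distribution in the tree uniqueness regime.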
Citation: SIAM Journal on Computing
PubDate: 2023-10-20T07:00:00Z
DOI: 10.1137/21M1443340

 Unambiguous DNFs and Alon–Saks–Seymour

Authors: Kaspars Balodis, Shalev Ben-David, Mika Göös, Siddhartha Jain, Robin Kothari
Pages: FOCS21-157–FOCS21-173
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. We exhibit an unambiguous [math]-DNF (disjunctive normal form) formula that requires conjunctive normal form width [math], which is optimal up to logarithmic factors. As a consequence, we get a near-optimal solution to the Alon–Saks–Seymour problem in graph theory (posed in 1991), which asks: How large a gap can there be between the chromatic number of a graph and its biclique partition number? Our result is also known to imply several other improved separations in query/communication complexity, learning theory, and automata theory.
Citation: SIAM Journal on Computing
PubDate: 2023-10-20T07:00:00Z
DOI: 10.1137/22M1480616

 Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than
Goldberg–Rao
Authors: Yu Gao, Yang Liu, Richard Peng
Pages: FOCS21-85 – FOCS21-156
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. We give an algorithm for computing exact maximum flows on graphs with [math] edges and integer capacities in the range [math] in [math] time. We use [math] to suppress logarithmic factors in [math]. For sparse graphs with polynomially bounded integer capacities, this is the first improvement over the [math] time bound from Goldberg and Rao [J. ACM, 45 (1998), pp. 783–797]. Our algorithm revolves around dynamically maintaining the augmenting electrical flows at the core of the interior-point-method-based algorithm from Mądry [Proceedings of the 57th IEEE Annual Symposium on Foundations of Computer Science, 2016, pp. 593–602]. This entails designing data structures that, in limited settings, return edges with large electrical energy in a graph undergoing resistance updates.
Citation: SIAM Journal on Computing
PubDate: 2023-04-26T07:00:00Z
DOI: 10.1137/22M1476666
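For context on the problem being accelerated: the textbook baseline for exact max flow with integer capacities uses shortest augmenting paths. The sketch below is a plain Edmonds–Karp implementation, shown only to pin down the problem; it is nothing like the paper's interior-point approach, and the `cap` dict-of-dicts representation is an assumption for illustration:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest path in
    the residual graph. cap[u][v] is the capacity of edge (u, v)."""
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():          # add zero-capacity reverse edges
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}             # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug
```

This runs in O(VE^2) time; the point of the abstract above is to beat the long-standing Goldberg–Rao bound for the same exact problem on sparse graphs.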

 Bridging the Gap Between Tree and Connectivity Augmentation: Unified and
Stronger Approaches
Authors: Federica Cecchetto, Vera Traub, Rico Zenklusen
Pages: STOC21-26 – STOC21-103
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. We consider the connectivity augmentation problem (CAP), a classical problem in the area of survivable network design. It is about increasing the edge-connectivity of a graph by one unit in the cheapest possible way. More precisely, given a [math]-edge-connected graph [math] and a set of extra edges, the task is to find a minimum-cardinality subset of extra edges whose addition to [math] makes the graph [math]-edge-connected. If [math] is odd, the problem is known to reduce to the tree augmentation problem (TAP)—i.e., [math] is a spanning tree—for which significant progress has been achieved recently, leading to approximation factors below 1.5 (the current best factor is 1.458). However, advances on TAP have not carried over to CAP so far. Indeed, only very recently, Byrka, Grandoni, and Ameli [Proceedings of the 52nd ACM Symposium on Theory of Computing, 2020, pp. 815–825] managed to obtain the first approximation factor below 2 for CAP by presenting a 1.91-approximation algorithm based on a method that is disjoint from recent advances for TAP. We first bridge the gap between TAP and CAP by presenting techniques that allow for leveraging insights and methods from TAP to approach CAP. We then introduce a new way to get approximation factors below 1.5, based on a new analysis technique. Through these ingredients, we obtain a 1.393-approximation algorithm for CAP, and therefore also for TAP. This leads to the current best approximation result for both problems in a unified way, by significantly improving on the above-mentioned 1.91-approximation for CAP and also the previously best approximation factor of 1.458 for TAP by Grandoni, Kalaitzis, and Zenklusen [Proceedings of the 50th ACM Symposium on Theory of Computing, 2018, pp. 632–645]. Additionally, a feature we inherit from recent TAP advances is that our approach can deal with the weighted setting when the ratio of the largest to smallest cost on extra links is bounded, in which case we obtain approximation factors below 1.5.
Citation: SIAM Journal on Computing
PubDate: 2023-04-12T07:00:00Z
DOI: 10.1137/21M1430601

 FIXP-Membership via Convex Optimization: Games, Cakes, and Markets

Authors: Aris Filos-Ratsikas, Kristoffer A. Hansen, Kasper Høgh, Alexandros Hollender
Pages: FOCS21-30 – FOCS21-84
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. We introduce a new technique for proving membership of problems in FIXP: the class capturing the complexity of computing a fixed point of an algebraic circuit. Our technique constructs a "pseudogate," which can be used as a black box when building FIXP circuits. This pseudogate, which we term the "OPT-gate," can solve most convex optimization problems. Using the OPT-gate, we prove new FIXP-membership results, and we generalize and simplify several known results from the literature on fair division, game theory, and competitive markets. In particular, we prove complexity results for two classic problems: Computing a market equilibrium in the Arrow–Debreu model with general concave utilities is in FIXP, and computing an envy-free division of a cake with very general valuations is FIXP-complete. We further showcase the wide applicability of our technique by using it to obtain simplified proofs and extensions of known FIXP-membership results for equilibrium computation for various types of strategic games as well as the pseudomarket mechanism of Hylland and Zeckhauser.
Citation: SIAM Journal on Computing
PubDate: 2023-04-04T07:00:00Z
DOI: 10.1137/22M1472656

 Improved Truthful Mechanisms for Combinatorial Auctions with Submodular
Bidders
Authors: Sepehr Assadi, Sahil Singla
Pages: FOCS19-253 – FOCS19-275
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. A long-standing open problem in algorithmic mechanism design is to design truthful mechanisms that are computationally efficient and (approximately) maximize welfare in combinatorial auctions with submodular bidders. The first such mechanism was obtained by Dobzinski, Nisan, and Schapira [Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, ACM, New York, 2005, pp. 610–618], who gave an [math]-approximation, where [math] is the number of items. This problem has been studied extensively since, culminating in an [math]-approximation mechanism by Dobzinski [Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, ACM, New York, 2016, pp. 940–948]. We present a computationally efficient truthful mechanism with an approximation ratio that improves upon the state of the art by an exponential factor. In particular, our mechanism achieves an [math]-approximation in expectation, uses only [math] demand queries, and has a universal truthfulness guarantee. This settles, in the negative, an open question of Dobzinski on whether [math] is the best approximation ratio achievable in this setting.
Citation: SIAM Journal on Computing
PubDate: 2023-02-14T08:00:00Z
DOI: 10.1137/20M1316068
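The mechanism in the abstract above accesses bidders only through demand queries. As a hypothetical illustration of what a single such query computes (brute force, tractable only for tiny instances; the function and value model here are assumptions, not the paper's construction):

```python
from itertools import combinations

def demand_query(value, items, prices):
    """Demand oracle: return the bundle S maximizing the bidder's
    utility value(S) - sum of item prices, by exhaustive search."""
    best, best_u = frozenset(), value(frozenset())
    for r in range(1, len(items) + 1):
        for S in combinations(items, r):
            S = frozenset(S)
            u = value(S) - sum(prices[i] for i in S)
            if u > best_u:
                best, best_u = S, u
    return best
```

For submodular valuations a demand query can be answered far more efficiently than this exhaustive search; the point of the abstract is that only [math] such queries suffice.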

 Hop-Constrained Oblivious Routing

Authors: Mohsen Ghaffari, Bernhard Haeupler, Goran Zuzic
Pages: STOC21-1 – STOC21-25
Abstract: SIAM Journal on Computing, Ahead of Print.
Abstract. We prove the existence of an oblivious routing scheme that is [math]-competitive in terms of [math], thus resolving a well-known question in oblivious routing. Concretely, consider an undirected network and a set of packets, each with its own source and destination. The objective is to choose a path for each packet, from its source to its destination, so as to minimize [math], defined as follows: the dilation is the maximum path hop length, and the congestion is the maximum number of paths that include any single edge. The routing scheme obliviously and randomly selects a path for each packet independent of (the existence of) the other packets. Despite this obliviousness, the selected paths have [math] within a [math] factor of the best possible value. More precisely, for any integer hop constraint [math], this oblivious routing scheme selects paths of length at most [math] and is [math]-competitive in terms of congestion in comparison to the best possible congestion achievable via paths of length at most [math] hops. These paths can be sampled in polynomial time. This result can be viewed as an analogue of the celebrated oblivious routing results of Räcke [Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002; Proceedings of the 40th Annual ACM Symposium on Theory of Computing, 2008], which are [math]-competitive in terms of congestion but are not competitive in terms of dilation.
Citation: SIAM Journal on Computing
PubDate: 2023-02-07T08:00:00Z
DOI: 10.1137/21M1443467
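The objective in the abstract above combines two quantities of a fixed path assignment. A small helper that evaluates it for a given set of paths (an illustrative sketch of the metric only, not part of the routing scheme) could be:

```python
from collections import Counter

def cong_plus_dil(paths):
    """congestion + dilation for a fixed choice of paths:
    dilation  = maximum hop length over all paths,
    congestion = maximum number of paths crossing any single
    (undirected) edge."""
    edge_load = Counter()
    dilation = 0
    for p in paths:  # each path is a list of vertices
        dilation = max(dilation, len(p) - 1)
        for u, v in zip(p, p[1:]):
            edge_load[frozenset((u, v))] += 1
    congestion = max(edge_load.values(), default=0)
    return congestion + dilation
```

The routing scheme in the paper must commit to a path distribution per source-destination pair before seeing the packet set, yet remain competitive in this metric against the best offline choice.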
