SIAM Journal on Computing
Journal Prestige (SJR): 0.874; Citation Impact (CiteScore): 2; Number of Followers: 11; Hybrid journal (it can contain Open Access articles); ISSN (Print) 0097-5397; ISSN (Online) 1095-7111; Published by the Society for Industrial and Applied Mathematics [17 journals]
 Satisfiability in Multi-Valued Circuits

Authors: Paweł M. Idziak, Jacek Krzaczkowski
Pages: 337–378
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 337–378, June 2022.
The satisfiability of Boolean circuits is NP-complete in general but becomes polynomial-time decidable when restricted either to monotone gates or to linear gates. We go outside the Boolean realm and consider circuits built of any fixed set of gates on an arbitrarily large finite domain. From the complexity point of view this is connected with solving equations over finite algebras. We want to characterize the finite algebras $\mathbf{A}$ with a polynomial-time algorithm deciding if an equation over $\mathbf{A}$ has a solution. We are also looking for a polynomial-time algorithm deciding if two circuits over a finite algebra compute the same function. Although we have not managed to solve these problems in the most general setting, we have obtained such a characterization (in terms of nice structural algebraic properties) for a very broad class of algebras from congruence modular varieties, including groups, rings, and lattices and their extensions.
Citation: SIAM Journal on Computing
PubDate: 2022-05-05T07:00:00Z
DOI: 10.1137/18M1220194
Issue No: Vol. 51, No. 3 (2022)
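As a toy illustration of the two decision problems the abstract mentions (equation solvability and circuit equivalence over a finite algebra), the following brute-force sketch simply enumerates all assignments over a small domain. The helper names are ours, and the exponential search is of course nothing like the paper's structural approach:

```python
from itertools import product

def equation_solvable(f, g, domain, arity):
    """Does the equation f(x1..xn) = g(x1..xn) have a solution over the domain?"""
    return any(f(*xs) == g(*xs) for xs in product(domain, repeat=arity))

def circuits_equivalent(f, g, domain, arity):
    """Do the two circuits compute the same function on the whole domain?"""
    return all(f(*xs) == g(*xs) for xs in product(domain, repeat=arity))

Z4 = range(4)   # the ring Z_4, a small finite algebra
# 2xy + 1 = 0 has no solution over Z_4: the left-hand side is always odd.
print(equation_solvable(lambda x, y: (2 * x * y + 1) % 4, lambda x, y: 0, Z4, 2))  # False
# x^2 and x^4 compute the same unary function over Z_4.
print(circuits_equivalent(lambda x: x * x % 4, lambda x: x ** 4 % 4, Z4, 1))  # True
```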

 On a Combinatorial Generation Problem of Knuth

Authors: Arturo Merino, Ondřej Mička, Torsten Mütze
Pages: 379–423
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 379–423, June 2022.
The well-known middle levels conjecture asserts that for every integer $n\geq 1$, all binary strings of length $2(n+1)$ with exactly $n+1$ many 0s and 1s can be ordered cyclically so that any two consecutive strings differ in swapping the first bit with a complementary bit at some later position. In his book The Art of Computer Programming, Knuth raised a stronger form of this conjecture (Problem 56 in Section 7.2.1.3), which requires that the sequence of positions with which the first bit is swapped in each step of such an ordering has $2n+1$ blocks of the same length, and each block is obtained by adding $s=1$ (modulo $2n+1$) to the previous block. In this work, we prove Knuth's conjecture in a more general form, allowing for arbitrary shifts $s\geq 1$ that are coprime to $2n+1$. We also present an algorithm to compute this ordering, generating each new bitstring in $\mathcal{O}(n)$ time, using $\mathcal{O}(n)$ memory in total.
Citation: SIAM Journal on Computing
PubDate: 2022-05-05T07:00:00Z
DOI: 10.1137/20M1377394
Issue No: Vol. 51, No. 3 (2022)
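For intuition, the required move (swap the first bit with a complementary bit at a later position) can be checked exhaustively in the smallest case $n=1$. This illustrative sketch, with names of our own choosing, searches for the cyclic ordering by brute force rather than by the paper's efficient algorithm:

```python
def neighbors(s):
    """Strings reachable by swapping the first bit with a complementary later bit."""
    return [s[j] + s[1:j] + s[0] + s[j + 1:]
            for j in range(1, len(s)) if s[j] != s[0]]

def middle_level_strings(n):
    """All binary strings of length 2(n+1) with exactly n+1 ones."""
    length = 2 * (n + 1)
    return [format(i, f'0{length}b') for i in range(2 ** length)
            if format(i, f'0{length}b').count('1') == n + 1]

def hamiltonian_cycle(verts):
    """Depth-first search for a cyclic ordering using only the moves above."""
    start = verts[0]
    def dfs(path, seen):
        if len(path) == len(verts):
            return path if start in neighbors(path[-1]) else None
        for t in neighbors(path[-1]):
            if t not in seen:
                found = dfs(path + [t], seen | {t})
                if found:
                    return found
        return None
    return dfs([start], {start})

print(hamiltonian_cycle(middle_level_strings(1)))
# e.g. ['0011', '1001', '0101', '1100', '0110', '1010']
```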

 Sorting Short Keys in Circuits of Size $o(n \log n)$

Authors: Gilad Asharov, Wei-Kai Lin, Elaine Shi
Pages: 424–466
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 424–466, June 2022.
We consider the classical problem of sorting an input array containing $n$ elements, where each element is described with a $k$-bit comparison key and a $w$-bit payload. A long-standing open problem is whether there exist $(k + w) \cdot o(n \log n)$-sized Boolean circuits for sorting. A landmark result in this area is the work by Ajtai, Komlós, and Szemerédi (An $O(n \log n)$ sorting network, STOC'83), where they showed how to achieve sorting circuits with $(k + w) \cdot O(n \log n)$ Boolean gates. The recent work of Farhadi et al. (Lower bounds for external memory integer sorting via network coding, STOC'19) showed that if the famous Li-Li network coding conjecture is true, then sorting circuits of size $w \cdot o(n \log n)$ do not exist for general $k$; however, no unconditional lower bound is known (in fact, proving superlinear circuit lower bounds in general is out of the reach of existing techniques). In this paper, we show that one can overcome the $n\log n$ barrier when the keys to be sorted are short. Specifically, we prove that there is a circuit with $(k + w) \cdot O(n k) \cdot (\log^* n - \log^* (w + k))^{2+\epsilon}$ Boolean gates capable of sorting any input array containing $n$ elements, each described with a $k$-bit key and a $w$-bit payload. Therefore, if the keys to be sorted are short, say, $k = o(\log n)$, our result is asymptotically better than the classical Ajtai, Komlós, and Szemerédi sorting network (ignoring ${\sf poly}\log^*$ terms); and we also overcome the $n \log n$ barrier in such cases. Such a result might be surprising initially because it has long been known that comparator-based techniques must incur $\Omega(n \log n)$ comparator gates even when the keys to be sorted are only 1 bit long (e.g., see Knuth's “The Art of Computer Programming”). To the best of our knowledge, we are the first to achieve nontrivial results for sorting circuits using non-comparison-based techniques.
We also show that if the Li-Li network coding conjecture is true, our upper bound is optimal, barring ${\sf poly}\log^*$ terms, for every $k$ as long as $k = O(\log n)$.
Citation: SIAM Journal on Computing
PubDate: 2022-05-05T07:00:00Z
DOI: 10.1137/20M1380983
Issue No: Vol. 51, No. 3 (2022)
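The circuit construction in the paper is involved, but the underlying intuition that short keys circumvent the comparison barrier has a familiar word-RAM analogue: counting sort arranges $n$ key-payload pairs with $k$-bit keys using $O(n + 2^k)$ operations and no comparisons at all. A minimal sketch (our own illustration, not the paper's construction):

```python
def counting_sort(pairs, k):
    """Stable, comparison-free sort of (key, payload) pairs with k-bit keys."""
    buckets = [[] for _ in range(1 << k)]    # one bucket per possible key value
    for key, payload in pairs:
        buckets[key].append((key, payload))  # O(n) bucket placements, no comparisons
    return [p for b in buckets for p in b]   # concatenate buckets in key order

data = [(3, 'a'), (0, 'b'), (3, 'c'), (1, 'd')]
print(counting_sort(data, 2))  # [(0, 'b'), (1, 'd'), (3, 'a'), (3, 'c')]
```

Stability (equal keys keep their input order) comes for free because each bucket is filled in input order.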

 Optimal (Euclidean) Metric Compression

Authors: Piotr Indyk, Tal Wagner
Pages: 467–491
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 467–491, June 2022.
We study the problem of representing all distances between $n$ points in ${\mathbb R}^d$, with arbitrarily small distortion, using as few bits as possible. We give asymptotically tight bounds for this problem, for Euclidean metrics, for $\ell_1$ (also known as Manhattan) metrics, and for general metrics. Our bounds for Euclidean metrics mark the first improvement over compression schemes based on discretizing the classical dimensionality reduction theorem of Johnson and Lindenstrauss [Contemp. Math. 26 (1984), pp. 189–206]. Since it is known that no better dimension reduction is possible, our results establish that Euclidean metric compression is possible beyond dimension reduction.
Citation: SIAM Journal on Computing
PubDate: 2022-05-10T07:00:00Z
DOI: 10.1137/20M1371324
Issue No: Vol. 51, No. 3 (2022)

 On the Complexity of Optimal Lottery Pricing and Randomized Mechanisms for
a Unit-Demand Buyer
Authors: Xi Chen, Ilias Diakonikolas, Anthi Orfanou, Dimitris Paparas, Xiaorui Sun, Mihalis Yannakakis
Pages: 492–548
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 492–548, June 2022.
We study the optimal lottery problem and the optimal mechanism design problem in the setting of a single unit-demand buyer with item values drawn from independent distributions. Optimal solutions to both problems are characterized by a linear program with exponentially many variables. For the menu size complexity of the optimal lottery problem, we present an explicit, simple instance with distributions of support size 2, and show that exponentially many lotteries are required to achieve the optimal revenue. We also show that, when distributions have support size 2 and share the same high value, the simpler scheme of item pricing can achieve the same revenue as the optimal menu of lotteries. The same holds for the case of two items with support size 2 (but not necessarily the same high value). For the computational complexity of the optimal mechanism design problem, we show that unless the polynomial-time hierarchy collapses (more exactly, ${P}^{{NP}}={P}^{{\#P}}$), there is no efficient randomized algorithm to implement an optimal mechanism even when distributions have support size 3.
Citation: SIAM Journal on Computing
PubDate: 2022-05-12T07:00:00Z
DOI: 10.1137/17M1136481
Issue No: Vol. 51, No. 3 (2022)

 Elastic-Degenerate String Matching via Fast Matrix Multiplication

Authors: Giulia Bernardini, Paweł Gawrychowski, Nadia Pisanti, Solon P. Pissis, Giovanna Rosone
Pages: 549–576
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 549–576, June 2022.
An elastic-degenerate (ED) string is a sequence of $n$ sets of strings of total length $N$, which was recently proposed to model a set of similar sequences. The ED string matching (EDSM) problem is to find all occurrences of a pattern of length $m$ in an ED text. The EDSM problem has recently received some attention in the combinatorial pattern matching community, and an $\mathcal{O}(nm^{1.5}\sqrt{\log m} + N)$-time algorithm is known [Aoyama et al., CPM 2018]. The standard assumption in the prior work on this question is that $N$ is substantially larger than both $n$ and $m$, and thus we would like to have a linear dependency on the former. Under this assumption, the natural open problem is whether we can decrease the 1.5 exponent in the time complexity, similarly as in the related (but, to the best of our knowledge, not equivalent) word break problem [Backurs and Indyk, FOCS 2016]. Our starting point is a conditional lower bound for the EDSM problem. We use the popular combinatorial Boolean matrix multiplication (BMM) conjecture stating that there is no truly subcubic combinatorial algorithm for BMM [Abboud and Williams, FOCS 2014]. By designing an appropriate reduction, we show that a combinatorial algorithm solving the EDSM problem in $\mathcal{O}(nm^{1.5-\epsilon} + N)$ time, for any $\epsilon>0$, refutes this conjecture. Our reduction should be understood as an indication that decreasing the exponent requires fast matrix multiplication. String periodicity and fast Fourier transform are two standard tools in string algorithms. Our main technical contribution is that we successfully combine these tools with fast matrix multiplication to design a noncombinatorial $\tilde{\mathcal{O}}(nm^{\omega-1}+N)$-time algorithm for EDSM, where $\omega$ denotes the matrix multiplication exponent and the $\tilde{\mathcal{O}}(\cdot)$ notation suppresses polylog factors. To the best of our knowledge, we are the first to combine these tools. In particular, using the fact that $\omega < 2.373$, our algorithm runs in $\tilde{\mathcal{O}}(nm^{1.373}+N)$ time.
Citation: SIAM Journal on Computing
PubDate: 2022-05-19T07:00:00Z
DOI: 10.1137/20M1368033
Issue No: Vol. 51, No. 3 (2022)
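To make the EDSM problem statement concrete, here is an exponential-time brute force (our own toy code, nothing like the paper's algorithm) that checks whether a pattern occurs in some string spelled out by a small elastic-degenerate text:

```python
from itertools import product

def edsm_naive(pattern, ed_text):
    """Check whether `pattern` occurs in some string spelled by the ED text.

    An elastic-degenerate text is a sequence of sets of strings; a spelled
    string picks one element from every set and concatenates them.  This
    exhaustive search is only meant to make the definition concrete.
    """
    return any(pattern in ''.join(choice) for choice in product(*ed_text))

# {AC} {A, CG, empty} {GT} spells the strings ACAGT, ACCGGT, and ACGT.
ed = [['AC'], ['A', 'CG', ''], ['GT']]
print(edsm_naive('CGG', ed))   # True  (occurs inside ACCGGT)
print(edsm_naive('AG', ed))    # True  (occurs inside ACAGT)
print(edsm_naive('TT', ed))    # False
```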

 Constraint Satisfaction Problems with Global Modular Constraints:
Algorithms and Hardness via Polynomial Representations
Authors: Joshua Brakensiek, Sivakanth Gopi, Venkatesan Guruswami
Pages: 577–626
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 577–626, June 2022.
We study the complexity of Boolean constraint satisfaction problems (CSPs) when the assignment must have Hamming weight in some congruence class modulo $M$, for various choices of the modulus $M$. Due to the known classification of tractable Boolean CSPs, this mainly reduces to the study of three cases: 2-SAT, HORN-SAT, and LIN-2 (linear equations mod 2). We classify the moduli $M$ for which these respective problems are polynomial-time solvable, and when they are not (assuming the exponential time hypothesis). Our study reveals that this modular constraint lends a surprising richness to these classic, well-studied problems, with interesting broader connections to complexity theory and coding theory. The HORN-SAT case is connected to the covering complexity of polynomials representing the NAND function mod $M$. The LIN-2 case is tied to the sparsity of polynomials representing the OR function mod $M$, which in turn has connections to modular weight distribution properties of linear codes and locally decodable codes. In both cases, the analysis of our algorithm as well as the hardness reduction rely on these polynomial representations, highlighting an interesting algebraic common ground between hard cases for our algorithms and the gadgets which show hardness. These new complexity measures of polynomial representations merit further study. The inspiration for our study comes from a recent work by Nägele, Sudakov, and Zenklusen on submodular minimization with a global congruence constraint. Our algorithm for HORN-SAT has strong similarities to their algorithm, and in particular identical kinds of set systems arise in both cases. Our connection to polynomial representations leads to a simpler analysis of such set systems and also sheds light on (but does not resolve) the complexity of submodular minimization with a congruency requirement modulo a composite $M$.
Citation: SIAM Journal on Computing
PubDate: 2022-05-19T07:00:00Z
DOI: 10.1137/19M1291054
Issue No: Vol. 51, No. 3 (2022)
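The global constraint can be made concrete with a toy brute force: given a 2-SAT instance, does some satisfying assignment have Hamming weight in a prescribed residue class mod $M$? This exhaustive sketch (all names ours) is exponential in the number of variables and only illustrates the problem definition:

```python
from itertools import product

def modular_2sat(clauses, n, M, r):
    """Is some satisfying assignment's Hamming weight congruent to r mod M?

    Literals: +i stands for x_i, -i for NOT x_i (variables are 1-indexed).
    Exhaustive over all 2^n assignments; illustration only.
    """
    def value(assign, lit):
        v = assign[abs(lit) - 1]
        return v if lit > 0 else 1 - v

    return any(all(value(a, p) or value(a, q) for p, q in clauses)
               and sum(a) % M == r
               for a in product((0, 1), repeat=n))

clauses = [(1, 2), (-1, 3)]   # (x1 or x2) and (not x1 or x3)
print(modular_2sat(clauses, 3, 3, 0))  # True: x1=x2=x3=1 has weight 3 = 0 (mod 3)
print(modular_2sat(clauses, 3, 4, 0))  # False: satisfying weights are only 1, 2, 3
```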

 Optimal Algorithms for Geometric Centers and Depth

Authors: Timothy M. Chan, Sariel Har-Peled, Mitchell Jones
Pages: 627–663
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 627–663, June 2022.
We develop a general randomized technique for solving implicit linear programming problems, where the collection of constraints is defined implicitly by an underlying ground set of elements. In many cases, the structure of the implicitly defined constraints can be used to obtain faster linear program solvers. We apply this technique to obtain near-optimal algorithms for a variety of fundamental problems in geometry. For a given point set $P$ of size $n$ in $\mathbb{R}^d$, we develop algorithms for computing geometric centers of a point set, including the centerpoint and the Tukey median, and several other more involved measures of centrality. For $d=2$, the new algorithms run in $O(n\log n)$ expected time, which is optimal, and for higher constant $d>2$, the expected time bound is within one logarithmic factor of $O(n^{d-1})$, which is also likely near optimal for some of the problems.
Citation: SIAM Journal on Computing
PubDate: 2022-05-19T07:00:00Z
DOI: 10.1137/21M1423324
Issue No: Vol. 51, No. 3 (2022)
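One of the centrality notions above, Tukey depth, can be computed exactly in the plane by an $O(n^2)$ brute force, far from the paper's near-optimal bounds but useful to fix the definition. The code and names are ours; it relies on the observation that the minimizing halfplane can be taken just past a direction orthogonal to some $p - q$:

```python
import math

def tukey_depth(q, pts):
    """Exact Tukey depth of q with respect to pts in the plane (O(n^2) brute force).

    The depth is the minimum, over all closed halfplanes whose boundary
    passes through q, of the number of points of pts in the halfplane.
    The count only changes at directions orthogonal to some p - q, so it
    suffices to check each such critical direction, nudged slightly.
    """
    eps = 1e-9
    best = len(pts)
    criticals = [math.atan2(p[1] - q[1], p[0] - q[0]) + sgn * math.pi / 2
                 for p in pts if p != q for sgn in (-1, 1)]
    for c in criticals:
        ux, uy = math.cos(c + eps), math.sin(c + eps)
        inside = sum(1 for p in pts
                     if ux * (p[0] - q[0]) + uy * (p[1] - q[1]) >= -1e-12)
        best = min(best, inside)
    return best

square = [(0, 0), (2, 0), (0, 2), (2, 2), (1, 1)]
print(tukey_depth((1, 1), square))  # 3: the center is deepest
print(tukey_depth((0, 0), square))  # 1: a corner is shallow
```

The Tukey median is then the point of maximum depth, which the paper computes far more efficiently.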

 Matrices of Optimal Tree-Depth and a Row-Invariant Parameterized Algorithm
for Integer Programming
Authors: Timothy F. Chan, Jacob W. Cooper, Martin Koutecký, Daniel Král, Kristýna Pekárková
Pages: 664–700
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 664–700, June 2022.
A long line of research on the fixed-parameter tractability of integer programming culminated with showing that integer programs with $n$ variables and a constraint matrix with dual tree-depth $d$ and largest entry $\Delta$ are solvable in time $g(d,\Delta)\,\mathrm{poly}(n)$ for some function $g$. However, the dual tree-depth of a constraint matrix is not preserved by row operations, i.e., a given integer program can be equivalent to another with a smaller dual tree-depth, and thus does not reflect its geometric structure. We prove that the minimum dual tree-depth of a row-equivalent matrix is equal to the branch-depth of the matroid defined by the columns of the matrix. We design a fixed-parameter algorithm for computing the branch-depth of matroids represented over a finite field and a fixed-parameter algorithm for computing a row-equivalent matrix with minimum dual tree-depth. Finally, we use these results to obtain an algorithm for integer programming running in time $g(d^*,\Delta)\,\mathrm{poly}(n)$ where $d^*$ is the branch-depth of the constraint matrix; the branch-depth cannot be replaced by the more permissive notion of branch-width.
Citation: SIAM Journal on Computing
PubDate: 2022-05-24T07:00:00Z
DOI: 10.1137/20M1353502
Issue No: Vol. 51, No. 3 (2022)

 Almost Tight Bounds for Reordering Buffer Management

Authors: Anna Adamaszek, Artur Czumaj, Matthias Englert, Harald Räcke
Pages: 701–722
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 701–722, June 2022.
We give almost tight bounds for the online reordering buffer management problem on the uniform metric. Specifically, we present the first nontrivial lower bounds for this problem by showing that deterministic online algorithms have a competitive ratio of at least $\Omega(\sqrt{\log k/\log\log k})$ and randomized online algorithms have a competitive ratio of at least $\Omega(\log\log k)$, where $k$ denotes the size of the buffer. We complement this by presenting a deterministic online algorithm for the reordering buffer management problem that obtains a competitive ratio of $O(\sqrt{\log k})$, almost matching the lower bound. This improves upon an algorithm by Avigdor-Elgrabli and Rabani that achieves a competitive ratio of $O(\log k/\log\log k)$.
Citation: SIAM Journal on Computing
PubDate: 2022-05-24T07:00:00Z
DOI: 10.1137/20M1326167
Issue No: Vol. 51, No. 3 (2022)
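To fix the model: items arrive online, a buffer holds up to $k$ of them, and the uniform-metric cost is the number of color changes in the output sequence. The following sketch simulates a naive greedy strategy of our own, with no competitive guarantee and unrelated to the paper's $O(\sqrt{\log k})$ algorithm:

```python
from collections import Counter

def greedy_reorder_cost(items, k):
    """Number of color blocks a naive greedy strategy outputs (buffer size k)."""
    buf = Counter()
    cost = 0
    current = None

    def evict():
        nonlocal cost, current
        if current is None or buf[current] == 0:
            current = buf.most_common(1)[0][0]  # switch to the most frequent color
            cost += 1                           # each color switch costs 1
        buf[current] -= 1
        if buf[current] == 0:
            del buf[current]

    for x in items:
        buf[x] += 1
        if sum(buf.values()) > k:   # buffer overflow: one item must be output
            evict()
    while buf:                      # flush the remaining buffered items
        evict()
    return cost

print(greedy_reorder_cost("aabbaabb", 4))  # 2: the buffer merges the color blocks
print(greedy_reorder_cost("aabbaabb", 1))  # 4: no reordering is possible
```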

 Nearly Optimal Planar $k$ Nearest Neighbors Queries under General Distance
Functions
Authors: Chih-Hung Liu
Pages: 723–765
Abstract: SIAM Journal on Computing, Volume 51, Issue 3, Pages 723–765, June 2022.
We study the $k$ nearest neighbors problem in the plane for general, convex, pairwise disjoint sites of constant description complexity such as line segments, disks, and quadrilaterals under a general family of distance functions including the $L_p$ norms and additively weighted Euclidean distances. We compose a static data structure for this general setting with nearly optimal $O(n\log\log n)$ space, optimal $O(\log n+k)$ query time, and optimal expected $O(n\log n)$ preprocessing time. We also devise a dynamic data structure (that allows insertions and deletions of sites) with $O(n\log n)$ space, $O(\log^2n+k\log n)$ query time, and expected amortized $O(\log^2 n)$ insertion time and $O(\log^4n)$ deletion time, matching the best known time complexities for point sites in the Euclidean metric and improving many applications such as dynamic minimum spanning trees in a general planar metric and dynamic connectivity in disk intersection graphs. Our results, to some extent, indicate that for the $k$ nearest neighbors problem, general distance functions may share the same time complexities with point sites in the Euclidean metric. To achieve this progress, we design vertical shallow cuttings of linear size for general distance functions. Vertical shallow cuttings are a key technique to tackle the $k$ nearest neighbors problem for point sites in the Euclidean metric, while existing generalizations to general distance functions are either not vertical or not of linear size. Our innovation is a new random sampling technique for the analysis of geometric structures, and since this technique provides a new way to develop geometric algorithms, we believe it is of independent interest.
Citation: SIAM Journal on Computing
PubDate: 2022-05-26T07:00:00Z
DOI: 10.1137/20M1388371
Issue No: Vol. 51, No. 3 (2022)

 Special Section on the Forty-Ninth Annual ACM Symposium on the Theory of
Computing (STOC 2017)
Authors: Andy Drucker, Ravi Kumar, Amit Sahai, Mohit Singh
Abstract: SIAM Journal on Computing, Volume 51, Issue 2, Pages Si–Sii, April 2022.
This issue of SICOMP contains ten specially selected papers from STOC 2017, the Forty-ninth Annual ACM Symposium on the Theory of Computing, which was held June 19–23 in Montreal, Canada. The papers here were chosen to represent the range and quality of the STOC program. These papers have been revised and extended by their authors and subjected to the standard thorough reviewing process of SICOMP. The program committee for STOC 2017 consisted of Nina Balcan, Allan Borodin, Keren Censor-Hillel, Edith Cohen, Artur Czumaj, Yevgeniy Dodis, Andrew Drucker, Nick Harvey, Monika Henzinger, Russell Impagliazzo, Ken-ichi Kawarabayashi, Ravi Kumar, James R. Lee, Katrina Ligett, Aleksander Mądry, Cristopher Moore, Jelani Nelson, Eric Price, Amit Sahai, Jared Saia, Shubhangi Saraf, Alexander Sherstov, Mohit Singh, and Gábor Tardos. The program chair was Valerie King. Included in this issue are the following papers: ``Short Presburger Arithmetic Is Hard," by Danny Nguyen and Igor Pak, proves that the satisfiability of short sentences in Presburger arithmetic with $m+2$ alternating quantifiers is $\Sigma^{{P}}_m$-complete or $\Pi^{{P}}_m$-complete when the first quantifier is $\exists$ or $\forall$, respectively. ``An Efficient Reduction from Two-Source to Nonmalleable Extractors: Achieving Near-Logarithmic Min-Entropy," by Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma, gets an explicit bipartite Ramsey graph (or a two-source extractor) for sets of size $2^k$ for $k = O(\log n \log \log n)$, using the currently best explicit nonmalleable extractors. ``Holographic Algorithm with Matchgates Is Universal for Planar \#CSP over Boolean Domain," by Jin-Yi Cai and Zhiguo Fu, classifies all counting CSPs over Boolean variables into one of three categories: polynomial-time tractable, \#P-hard for general instances but solvable in polynomial time over planar graphs, and \#P-hard over planar graphs. ``Deciding Parity Games in Quasipolynomial Time," by Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan, shows that the parameterized parity game, with $n$ nodes and $m$ priorities, is in the class of fixed-parameter tractable problems when parameterized over $m$. ``New Hardness Results for Routing on Disjoint Paths," by Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat, proves that node-disjoint paths is $2^{\Omega(\sqrt{\log n})}$-hard to approximate, unless all problems in NP have algorithms with running time $n^{O(\log n)}$. ``A Weighted Linear Matroid Parity Algorithm," by Satoru Iwata and Yusuke Kobayashi, presents a combinatorial, deterministic, strongly polynomial-time algorithm for the weighted linear matroid parity problem. ``Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace," by William M. Hoza and Chris Umans, shows that $\mathbf{BPL} \subseteq \bigcap_{\alpha> 0} {DSPACE}(\log^{1 + \alpha} n)$, assuming that for every derandomization result for logspace algorithms there is a pseudorandom generator strong enough to nearly recover the derandomization by iterating over all seeds and taking a majority vote. ``Approximating Rectangles by Juntas and Weakly Exponential Lower Bounds for LP Relaxations of CSPs," by Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra, shows that for CSPs, subexponential size LP relaxations are as powerful as $n^{\Omega(1)}$ rounds of the Sherali-Adams LP hierarchy. ``Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model," by Ran Canetti, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam, defines a new type of encryption and shows that Yao's garbling scheme, implemented with this encryption mechanism, is secure against adaptive adversaries.
``Geodesic Walks in Polytopes," by Yin Tat Lee and Santosh Vempala, introduces the geodesic walk for sampling Riemannian manifolds and applies it to the problem of generating uniform random points from the interior of polytopes in ${\mathbb{R}}^{n}$ specified by $m$ inequalities; the resulting sampling algorithm for polytopes mixes in $O^{*}(mn^{\frac{3}{4}})$ steps. We thank the authors, the STOC 2017 program committee, the STOC 2017 external reviewers, and the SICOMP referees for all of their hard work. Andy Drucker, Ravi Kumar, Amit Sahai, Mohit Singh, Guest editors
Citation: SIAM Journal on Computing
PubDate: 2022-04-28T07:00:00Z
DOI: 10.1137/22N975482
Issue No: Vol. 51, No. 2 (2022)

 Geodesic Walks in Polytopes

Authors: Yin Tat Lee, Santosh Vempala
Abstract: SIAM Journal on Computing, Volume 51, Issue 2, Pages STOC17-400–STOC17-488, April 2022.
We introduce the geodesic walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from the interior of polytopes in $\mathbb{R}^{n}$ specified by $m$ inequalities. The walk is a discrete-time simulation of a stochastic differential equation on the Riemannian manifold equipped with the metric induced by the Hessian of a convex function; each step is the solution of an ordinary differential equation (ODE). The resulting sampling algorithm for polytopes mixes in $O^{*}(mn^{\frac{3}{4}})$ steps. This is the first walk that breaks the quadratic barrier for mixing in high dimension, improving on the previous best bound of $O^{*}(mn)$ by Kannan and Narayanan for the Dikin walk. We also show that each step of the geodesic walk (solving an ODE) can be implemented efficiently, thus improving the time complexity for sampling polytopes. Our analysis of the geodesic walk for general Hessian manifolds does not assume positive curvature and might be of independent interest.
Citation: SIAM Journal on Computing
PubDate: 2022-04-28T07:00:00Z
DOI: 10.1137/17M1145999
Issue No: Vol. 51, No. 2 (2022)

 When Symmetries Are Not Enough: A Hierarchy of Hard Constraint
Satisfaction Problems
Authors: Pierre Gillibert, Julius Jonušas, Michael Kompatscher, Antoine Mottet, Michael Pinsker
Pages: 175–213
Abstract: SIAM Journal on Computing, Volume 51, Issue 2, Pages 175–213, April 2022.
We produce a class of $\omega$-categorical structures with finite signature by applying a model-theoretic construction, a refinement of the Hrushovski encoding, to $\omega$-categorical structures in a possibly infinite signature. We show that the encoded structures retain desirable algebraic properties of the original structures, but that the constraint satisfaction problems (CSPs) associated with these structures can be badly behaved in terms of computational complexity. This method allows us to systematically generate $\omega$-categorical templates whose CSPs are complete for a variety of complexity classes of arbitrarily high complexity and $\omega$-categorical templates that show that membership in any given complexity class containing AC$^0$ cannot be expressed by a set of identities on the polymorphisms. It moreover enables us to prove that recent results about the relevance of topology on polymorphism clones of $\omega$-categorical structures also apply for CSP templates, i.e., structures in a finite language. Finally, we obtain a concrete algebraic criterion which could constitute a description of the delineation between tractability and NP-hardness in the dichotomy conjecture for first-order reducts of finitely bounded homogeneous structures.
Citation: SIAM Journal on Computing
PubDate: 2022-03-09T08:00:00Z
DOI: 10.1137/20M1383471
Issue No: Vol. 51, No. 2 (2022)

 Anchored Parallel Repetition for Nonlocal Games

Authors: Mohammad Bavarian, Thomas Vidick, Henry Yuen
Pages: 214–253
Abstract: SIAM Journal on Computing, Volume 51, Issue 2, Pages 214–253, April 2022.
We introduce a simple transformation on two-player nonlocal games, called “anchoring,” and prove an exponential-decay parallel repetition theorem for all anchored games in the setting of quantum entangled players. This transformation is inspired in part by the Feige-Kilian transformation [SIAM J. Comput., 30 (2000), pp. 324–346], and has the property that if the quantum value of the original game $G$ is $v$, then the quantum value of the anchored game $G_{{\perp}}$ is $1 - (1 - \alpha)^2 \cdot (1 - v)$, where $\alpha$ is a parameter of the transformation. In particular, the anchored game has quantum value 1 if and only if the original game $G$ has quantum value 1. This provides the first gap amplification technique for general two-player nonlocal games that achieves exponential decay of the quantum value.
Citation: SIAM Journal on Computing
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/21M1405927
Issue No: Vol. 51, No. 2 (2022)

 A Subexponential Parameterized Algorithm for Directed Subset Traveling
Salesman Problem on Planar Graphs
Authors: Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk
Pages: 254–289
Abstract: SIAM Journal on Computing, Volume 51, Issue 2, Pages 254–289, April 2022.
There are numerous examples of the so-called “square root phenomenon” in the field of parameterized algorithms: many of the most fundamental graph problems, parameterized by some natural parameter $k$, become significantly simpler when restricted to planar graphs, and in particular the best possible running time is exponential in $\mathcal{O}(\sqrt{k})$ instead of $\mathcal{O}(k)$ (modulo standard complexity assumptions). We consider the classic optimization problem Subset Traveling Salesman, where we are asked to visit all the terminals of a given set $T$ by a minimum-weight closed walk. We investigate the parameterized complexity of this problem in planar graphs, where the number $k=|T|$ of terminals is regarded as the parameter. We show that Subset TSP can be solved in time $2^{\mathcal{O}(\sqrt{k}\log k)}\cdot n^{\mathcal{O}(1)}$ even on edge-weighted directed planar graphs. This improves upon the algorithm of Klein and Marx [SODA 2014, SIAM, Philadelphia, 2014, pp. 1812–1830] with the same running time that worked only on undirected planar graphs with polynomially large integer weights.
Citation: SIAM Journal on Computing
PubDate: 2022-04-07T07:00:00Z
DOI: 10.1137/19M1304088
Issue No: Vol. 51, No. 2 (2022)

 Metric Embedding via Shortest Path Decompositions

Authors: Ittai Abraham, Arnold Filtser, Anupam Gupta, Ofer Neiman
Pages: 290-314
Abstract: SIAM Journal on Computing, Volume 51, Issue 2, Page 290-314, April 2022.
We study the problem of embedding shortest-path metrics of weighted graphs into $\ell_p$ spaces. We introduce a new embedding technique based on low-depth decompositions of a graph via shortest paths. The notion of shortest path decomposition (SPD) depth is defined inductively: a (weighted) path graph has SPD depth $1$, and a general graph has SPD depth at most $k$ if it contains a shortest path whose deletion leads to a graph each of whose components has SPD depth at most $k-1$. In this paper we give an $O(k^{\min\{\nicefrac{1}{p},\nicefrac{1}{2}\}})$-distortion embedding for graphs of SPD depth at most $k$. This result is asymptotically tight for any fixed $p>1$, while for $p=1$ it is tight up to second-order terms. As a corollary, we show that graphs of pathwidth $k$ embed into $\ell_p$ with distortion $O(k^{\min\{\nicefrac{1}{p},\nicefrac{1}{2}\}})$. For $p=1$, this improves over the best previous bound of Lee and Sidiropoulos, which was exponential in $k$; moreover, for other values of $p$ it gives the first embeddings whose distortion is independent of the graph size $n$. Furthermore, we use the fact that planar graphs have SPD depth $O(\log n)$ to give a new proof that any planar graph embeds into $\ell_1$ with distortion $O(\sqrt{\log n})$. Our approach also gives new results for graphs with bounded treewidth, and for graphs excluding a fixed minor.
Citation: SIAM Journal on Computing
PubDate: 2022-04-11T07:00:00Z
DOI: 10.1137/19M1296021
Issue No: Vol. 51, No. 2 (2022)

 Derandomization from Algebraic Hardness

Authors: Zeyu Guo, Mrinal Kumar, Ramprasad Saptharishi, Noam Solomon
Pages: 315-335
Abstract: SIAM Journal on Computing, Volume 51, Issue 2, Page 315-335, April 2022.
A hitting-set generator (HSG) is a polynomial map ${{\mathsf{Gen}}}:{\mathbb{F}}^k \to {\mathbb{F}}^n$ such that for all $n$-variate polynomials $C$ of small enough circuit size and degree, if $C$ is nonzero, then $C\circ {{\mathsf{Gen}}}$ is nonzero. In this paper, we give a new construction of such an HSG assuming that we have an explicit polynomial of sufficient hardness. Formally, we prove the following result over any field ${\mathbb{F}}$ of characteristic zero: Let $k\in {\mathbb{N}}$ and $\delta> 0$ be arbitrary constants. Suppose ${\left\{ {P_d} \right\}}_{d\in {\mathbb{N}}}$ is an explicit family of $k$-variate polynomials such that $\deg P_d = d$ and $P_d$ requires algebraic circuits of size $d^\delta$. Then, there are explicit hitting sets of polynomial size for the class ${\mathsf{VP}}$. This is the first HSG in the algebraic setting that yields a complete derandomization of polynomial identity testing (PIT) for general circuits from a suitable algebraic hardness assumption. Unlike the prior constructions of such maps [N. Nisan and A. Wigderson, J. Comput. System Sci., 49 (1994), pp. 149-167; V. Kabanets and R. Impagliazzo, Comput. Complexity, 13 (2004), pp. 1-46; M. Agrawal, S. Ghosh, and N. Saxena, Proc. Natl. Acad. Sci. USA, 116 (2019), pp. 8107-8118; M. Kumar, R. Saptharishi, and A. Tengse, Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms, 2019, pp. 639-646], our construction is purely algebraic and does not rely on the notion of combinatorial designs. As a direct consequence, we show that even saving a single point from the “trivial” explicit, exponential-sized hitting sets for constant-variate polynomials of low individual degree which are computable by small circuits implies a deterministic polynomial-time algorithm for PIT. More precisely, we show the following: Let $k\in {\mathbb{N}}$ and $\delta> 0$ be arbitrary constants. Suppose for every $s$ large enough, there is an explicit hitting set of size at most $((s+1)^k - 1)$ for the class of $k$-variate polynomials of individual degree $s$ that are computable by size-$s^\delta$ circuits. Then there is an explicit hitting set of size ${\operatorname{poly}}(s)$ for the class of $s$-variate polynomials of degree $s$ that are computable by size-$s$ circuits. As a consequence, we give a deterministic polynomial-time construction of hitting sets for algebraic circuits, if a strengthening of the $\tau$-conjecture of Shub and Smale [M. Shub and S. Smale, Duke Math. J., 81 (1995), pp. 47-54; S. Smale, Math. Intelligencer, 20 (1998), pp. 7-15] is true.
Citation: SIAM Journal on Computing
PubDate: 2022-04-21T07:00:00Z
DOI: 10.1137/20M1347395
Issue No: Vol. 51, No. 2 (2022)

 An $O(\log \log m)$ Prophet Inequality for Subadditive Combinatorial
Auctions
Authors: Paul Dütting, Thomas Kesselheim, Brendan Lucier
Abstract: SIAM Journal on Computing, Ahead of Print.
Prophet inequalities compare the expected performance of an online algorithm for a stochastic optimization problem to the expected optimal solution in hindsight. They are a major alternative to classic worst-case competitive analysis, of particular importance in the design and analysis of simple (posted-price) incentive-compatible mechanisms with provable approximation guarantees. A central open problem in this area concerns subadditive combinatorial auctions. Here $n$ agents with subadditive valuation functions compete for the assignment of $m$ items. The goal is to find an allocation of the items that maximizes the total value of the assignment. The question is whether there exists a prophet inequality for this problem that significantly beats the best known approximation factor of $O(\log m)$. We make major progress on this question by providing an $O(\log \log m)$ prophet inequality. Our proof goes through a novel primal-dual approach. It is also constructive, resulting in an online policy that takes the form of static and anonymous item prices that can be computed in polynomial time given appropriate query access to the valuations. As an application of our approach, we construct a simple and incentive-compatible mechanism based on posted prices that achieves an $O(\log \log m)$ approximation to the optimal revenue for subadditive valuations under an item-independence assumption.
Citation: SIAM Journal on Computing
PubDate: 2022-05-19T07:00:00Z
DOI: 10.1137/20M1382799
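As a toy illustration of the posted-price idea, here is the classic single-item special case, where a static price of half the expected maximum already guarantees half the prophet's value (a folklore warm-up, not the paper's combinatorial mechanism; the distributions below are ours):

```python
import itertools

# Toy single-item prophet inequality (a warm-up special case of the
# posted-price setting in the abstract, not the paper's mechanism).
# A gambler sees independent discrete values in order and may stop once;
# the static price T = E[max]/2 guarantees expected value >= E[max]/2.
dists = [                      # each stage: list of (value, probability)
    [(0, 0.5), (10, 0.5)],
    [(2, 0.8), (6, 0.2)],
    [(1, 0.9), (20, 0.1)],
]

def expectations(dists, price):
    e_max = e_policy = 0.0
    for outcome in itertools.product(*dists):
        p = 1.0
        for _, q in outcome:
            p *= q
        vals = [v for v, _ in outcome]
        e_max += p * max(vals)
        # the gambler takes the first value meeting the posted price
        e_policy += p * next((v for v in vals if v >= price), 0)
    return e_max, e_policy

e_max, _ = expectations(dists, float("inf"))
_, e_policy = expectations(dists, e_max / 2)
print(e_max, e_policy)
```

Here the posted price $3.88$ recovers expected value $6.4$ against the prophet's $7.76$, comfortably above the guaranteed half.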

 Strong Average-Case Circuit Lower Bounds from Nontrivial Derandomization

Authors: Lijie Chen, Hanlin Ren
Abstract: SIAM Journal on Computing, Ahead of Print.
In a recent breakthrough, [C. Murray and R. R. Williams, STOC 2018, ACM, New York, 2018, pp. 890-901] proved that ${NQP} = {NTIME}[n^{{polylog}(n)}]$ cannot be computed by polynomial-size ${ACC}^0$ circuits (constant-depth circuits consisting of ${AND}$/${OR}$/${MOD}_m$ gates for a fixed constant $m$, a frontier class in circuit complexity). This was recently strengthened by [L. Chen, FOCS 2019, IEEE, Piscataway, NJ, 2019, pp. 1281-1304] to show that ${NQP}$ cannot be $(1/2+1/{polylog}(n))$-approximated by polynomial-size ${ACC}^0$ circuits. In this work we prove that ${NQP}$ cannot be $(1/2+1/n^{\omega(1)})$-approximated by polynomial-size ${ACC}^0$ circuits. As a straightforward application, we obtain an infinitely often nondeterministic pseudorandom generator for poly-size ${ACC}^0$ circuits with subpolynomial seed length. More generally, we establish a connection showing that, for a typical circuit class $\mathscr{C}$, nontrivial deterministic algorithms estimating the acceptance probability of $\mathscr{C}$ circuits imply strong ($1/2 + 1/n^{\omega(1)}$) average-case lower bounds against $\mathscr{C}$ circuits. We also apply this connection to prove new lower bounds against several subclasses of ${TC}^0$ circuits (constant-depth circuits consisting entirely of majority gates), and show that nontrivial derandomization of ${MAJ} \circ {MAJ}$ would imply worst-case lower bounds for ${TC}^0_3$ (${MAJ} \circ {MAJ} \circ {MAJ}$), suggesting that ${TC}^0_3$ lower bounds are probably within reach. Our two important new technical ingredients are (1) techniques from cryptography in ${NC}^0$ [B. Applebaum, Y. Ishai, and E. Kushilevitz, SIAM J. Comput., 36 (2006), pp. 845-888], and (2) probabilistically checkable proofs of proximity with ${NC}^1$-computable proofs.
Citation: SIAM Journal on Computing
PubDate: 2022-05-02T07:00:00Z
DOI: 10.1137/20M1364886

 Nearly Optimal Static Las Vegas Succinct Dictionary

Authors: Huacheng Yu
Abstract: SIAM Journal on Computing, Ahead of Print.
For positive integers $U$, $n$, and $\sigma$, given a set $S$ of $n$ (distinct) keys from key space $[U]$, each associated with a value from $[\sigma]$, the static dictionary problem asks one to preprocess these (key, value) pairs into a data structure, supporting value-retrieval queries: for any given $x\in [U]$, ${valRet}(x)$ must return the value associated with $x$ if $x\in S$, or return $\bot$ if $x\notin S$. The special case where $\sigma=1$ is called the membership problem. The “textbook” solution is to use a hash table, which occupies linear space and answers each query in constant time. On the other hand, the minimum possible space to encode all (key, value) pairs is only ${OPT}:= \lceil\lg_2\binom{U}{n}+n\lg_2 \sigma\rceil$ bits, which could be much smaller than a hash table. In this paper, we design a randomized dictionary data structure using ${OPT}+{poly}\lg n+O(\lg^{(\ell)} U)$ bits of space with expected constant query time, assuming the query algorithm can access an external lookup table of size $n^{\epsilon}$, for any constants $\ell$ and $\epsilon$. The lookup table depends only on $U$, $n$, and $\sigma$, and not on the input. Previously, even for membership queries and $U\leq n^{O(1)}$, the best known data structure with constant query time required ${OPT}+n/{poly}\lg n$ bits of space (Pagh [SIAM J. Comput., 31 (2001), pp. 353-363] and Pǎtraşcu [FOCS, IEEE Computer Society, Los Alamitos, CA, 2008, pp. 305-313]); the best known using ${OPT}+n^{1-\epsilon}$ space had query time $O(\lg n)$. Our new data structure answers open questions by Pǎtraşcu and Thorup [FOCS, IEEE Computer Society, Los Alamitos, CA, 2008, pp. 305-313; Bull. Eur. Assoc. Theor. Comput. Sci. EATCS, 109 (2013), pp. 7-13]. We also present a scheme that compresses a sequence $X\in[\sigma]^n$ to its zeroth-order (empirical) entropy plus up to $\sigma\cdot{poly}\lg n$ extra bits, supporting decoding of each $X_i$ in $O(\lg \sigma)$ expected time.
Citation: SIAM Journal on Computing
PubDate: 2022-05-02T07:00:00Z
DOI: 10.1137/20M1363649
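The gap between the information-theoretic bound ${OPT}$ and a plain hash table is easy to see numerically; a sketch with illustrative parameters of our own choosing:

```python
import math

# OPT = ceil(log2(binom(U, n)) + n * log2(sigma)) bits, per the abstract,
# versus a hash table that stores every (key, value) pair explicitly.
def opt_bits(U, n, sigma):
    return math.ceil(math.log2(math.comb(U, n)) + n * math.log2(sigma))

U, n, sigma = 2 ** 32, 10 ** 4, 2 ** 8   # 32-bit keys, 8-bit values
hash_table_bits = n * (32 + 8)           # one key + one value per slot, no overhead
print(opt_bits(U, n, sigma), hash_table_bits)
```

Even charging the hash table nothing for empty slots or pointers, ${OPT}$ comes out roughly 30% smaller here; succinct dictionaries aim for ${OPT}$ plus lower-order terms.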

 A Parameterized Approximation Scheme for Min $k$-Cut

Authors: Daniel Lokshtanov, Saket Saurabh, Vaishali Surianarayanan
Abstract: SIAM Journal on Computing, Ahead of Print.
In the Min $k$-Cut problem, the input consists of an edge-weighted graph $G$ and an integer $k$, and the task is to partition the vertex set into $k$ nonempty sets such that the total weight of the edges with endpoints in different parts is minimized. When $k$ is part of the input, the problem is NP-complete and hard to approximate within any factor less than 2. Recently, the problem has received significant attention from the perspective of parameterized approximation. Gupta, Lee, and Li [Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, A. Czumaj, ed., SIAM, Philadelphia, 2018, pp. 2821-2837] initiated the study of FPT-approximation for the Min $k$-Cut problem and gave a 1.9997-approximation algorithm running in time $2^{\mathcal{O}(k^6)}n^{\mathcal{O}(1)}$. Later, the same set of authors [Proceedings of the 59th IEEE Annual Symposium on Foundations of Computer Science, M. Thorup, ed., 2018, pp. 113-123] designed a $(1 +\epsilon)$-approximation algorithm that runs in time $(k/\epsilon)^{\mathcal{O}(k)}n^{k+\mathcal{O}(1)}$ and a 1.81-approximation algorithm running in time $2^{\mathcal{O}(k^2)}n^{\mathcal{O}(1)}$. More recently, Kawarabayashi and Lin [Proceedings of the 31st ACM-SIAM Symposium on Discrete Algorithms, S. Chawla, ed., SIAM, Philadelphia, 2020, pp. 990-999] gave a $(5/3 + \epsilon)$-approximation for Min $k$-Cut running in time $2^{\mathcal{O}(k^2 \log k)}n^{\mathcal{O}(1)}$. In this paper, we give a parameterized approximation algorithm with the best possible approximation guarantee and the best possible running time dependence on said guarantee (up to the exponential time hypothesis and constants in the exponent). In particular, for every $\epsilon> 0$, the algorithm obtains a $(1 +\epsilon)$-approximate solution in time $(k/\epsilon)^{\mathcal{O}(k)}n^{\mathcal{O}(1)}$. The main ingredients of our algorithm are a simple sparsification procedure, a new polynomial-time algorithm for decomposing a graph into highly connected parts, and a new exact algorithm with running time $s^{\mathcal{O}(k)}n^{\mathcal{O}(1)}$ on unweighted (multi)graphs, where $s$ denotes the number of edges in a minimum $k$-cut. The latter two are of independent interest.
Citation: SIAM Journal on Computing
PubDate: 2022-04-25T07:00:00Z
DOI: 10.1137/20M1383197
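To make the objective concrete, here is a brute-force reference implementation of the Min $k$-Cut objective (exponential in the number of vertices; purely illustrative, not the paper's algorithm):

```python
import itertools

# Brute-force reference for the Min k-Cut objective: partition the
# vertices into k nonempty parts, minimizing the total weight of edges
# crossing between parts. Exponential in |V|; for illustration only,
# not the paper's (k/eps)^O(k) * poly(n) scheme.
def min_k_cut(vertices, edges, k):
    best = float("inf")
    for labels in itertools.product(range(k), repeat=len(vertices)):
        if len(set(labels)) < k:          # every part must be nonempty
            continue
        part = dict(zip(vertices, labels))
        cost = sum(w for u, v, w in edges if part[u] != part[v])
        best = min(best, cost)
    return best

# two weight-5 triangles joined by one weight-1 edge:
# the optimal 2-cut removes just the light bridge
edges = [("a", "b", 5), ("b", "c", 5), ("a", "c", 5),
         ("d", "e", 5), ("e", "f", 5), ("d", "f", 5), ("c", "d", 1)]
print(min_k_cut("abcdef", edges, 2))  # -> 1
```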

 Unit Capacity Maxflow in Almost $m^{4/3}$ Time

Authors: Tarun Kathuria, Yang P. Liu, Aaron Sidford
Abstract: SIAM Journal on Computing, Ahead of Print.
We present an algorithm which, given any $m$-edge directed graph with positive integer capacities at most $U$, vertices $a$ and $b$, and an approximation parameter $\epsilon \in (0, 1)$, computes an additive $\epsilon mU$-approximate $a$-$b$ maximum flow in time $m^{1+o(1)}/\sqrt{\epsilon}$. By applying the algorithm with $\epsilon = (mU)^{-2/3}$, rounding to an integral flow, and using augmenting paths, we obtain an algorithm which computes an exact $a$-$b$ maximum flow in time $m^{4/3+o(1)}U^{1/3}$, as well as an algorithm which, given an $m$-edge bipartite graph, computes an exact maximum cardinality matching in time $m^{4/3+o(1)}$.
Citation: SIAM Journal on Computing
PubDate: 2022-04-18T07:00:00Z
DOI: 10.1137/20M1383525
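The parameter choice above checks out by direct substitution: with $\epsilon = (mU)^{-2/3}$,

```latex
\[
  \epsilon m U = (mU)^{1/3},
  \qquad
  \frac{m^{1+o(1)}}{\sqrt{\epsilon}}
    = m^{1+o(1)} \cdot (mU)^{1/3}
    = m^{4/3+o(1)} \, U^{1/3},
\]
```

so after rounding to an integral flow, at most $(mU)^{1/3}$ units of flow remain to be routed, each via an $O(m)$-time augmenting path, matching the stated exact bound.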

 Counting Small Induced Subgraphs Satisfying Monotone Properties

Authors: Marc Roth, Johannes Schmitt, Philip Wellnitz
Abstract: SIAM Journal on Computing, Ahead of Print.
Given a graph property $\Phi$, the problem $\#{IndSub}(\Phi)$ asks, on input a graph $G$ and a positive integer $k$, to compute the number $\#{IndSub}(\Phi, k \to G)$ of induced subgraphs of size $k$ in $G$ that satisfy $\Phi$. The search for explicit criteria on $\Phi$ ensuring that $\#{IndSub}(\Phi)$ is hard was initiated by Jerrum and Meeks [J. Comput. System Sci., 81 (2015), pp. 702-716] and is part of the major line of research on counting small patterns in graphs. However, apart from an implicit result due to Curticapean, Dell, and Marx [STOC, ACM, New York, pp. 151-158] proving that a full classification into “easy” and “hard” properties is possible, and some partial results on edge-monotone properties due to Meeks [Discrete Appl. Math., 198 (2016), pp. 170-194] and Dörfler et al. [MFCS, LIPIcs Leibniz Int. Proc. Inform. 138, Wadern, Germany, 2019, 26], not much is known. In this work, we fully answer and explicitly classify the case of monotone, that is, subgraph-closed, properties: We show that for any nontrivial monotone property $\Phi$, the problem $\#{IndSub}(\Phi)$ cannot be solved in time $f(k)\cdot |V(G)|^{o(k/ {\log^{1/2}(k)})}$ for any function $f$, unless the exponential time hypothesis fails. By this, we establish that any significant improvement over the brute-force approach is unlikely; in the language of parameterized complexity, we also obtain a $\#{W[1]}$-completeness result. The methods we develop for the above problem also allow us to prove a conjecture of Jerrum and Meeks [ACM Trans. Comput. Theory, 7 (2015), 11; Combinatorica, 37 (2017), pp. 965-990]: $\#{IndSub}(\Phi)$ is $\#{W[1]}$-complete if $\Phi$ is a nontrivial graph property depending only on the number of edges of the graph.
Citation: SIAM Journal on Computing
PubDate: 2022-04-11T07:00:00Z
DOI: 10.1137/20M1365624
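A brute-force counter makes the problem statement concrete (an exponential-time reference of our own; the hardness result above says no algorithm running in time $f(k)\cdot |V(G)|^{o(k/\log^{1/2}(k))}$ exists for nontrivial monotone $\Phi$):

```python
import itertools

# Brute force for #IndSub(Phi, k -> G): count the k-vertex induced
# subgraphs of G that satisfy Phi. Phi below is "triangle-free", a
# nontrivial monotone (subgraph-closed) property, so the paper's
# hardness result applies to it.
def count_indsub(n, edges, k, phi):
    E = set(map(frozenset, edges))
    total = 0
    for S in itertools.combinations(range(n), k):
        sub = [e for e in itertools.combinations(S, 2) if frozenset(e) in E]
        total += bool(phi(S, sub))
    return total

def triangle_free(S, sub):
    E = set(map(frozenset, sub))
    return not any({frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= E
                   for a, b, c in itertools.combinations(S, 3))

# K4: every 3 of its vertices induce a triangle
edges = list(itertools.combinations(range(4), 2))
print(count_indsub(4, edges, 3, triangle_free))  # -> 0
```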

 Separating the Communication Complexity of Truthful and Non-truthful
Algorithms for Combinatorial Auctions
Authors: Sepehr Assadi, Hrishikesh Khandeparkar, Raghuvansh R. Saxena, S. Matthew Weinberg
Abstract: SIAM Journal on Computing, Ahead of Print.
We provide the first separation in the approximation guarantee achievable by truthful and non-truthful algorithms for combinatorial auctions with polynomial communication. Specifically, we prove that any truthful mechanism guaranteeing a $(\nicefrac{3}{4}-\nicefrac{1}{240}+\varepsilon)$-approximation for two buyers with XOS valuations over $m$ items requires $\exp(\Omega(\varepsilon^2 \cdot m))$ communication, whereas a non-truthful algorithm by Dobzinski and Schapira [Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SIAM, 2006, pp. 1064-1073] and Feige [SIAM J. Comput., 39 (2009), pp. 122-142] is already known to achieve a $\nicefrac{3}{4}$-approximation in $\mathrm{poly}(m)$ communication. We obtain our separation by proving that any simultaneous protocol (not necessarily truthful) which guarantees a $(\nicefrac{3}{4}-\nicefrac{1}{240}+\varepsilon)$-approximation requires communication $\exp(\Omega(\varepsilon^2 \cdot m))$. The taxation complexity framework of Dobzinski [Proceedings of the 57th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, 2016, pp. 209-218] extends this lower bound to all truthful mechanisms (including interactive truthful mechanisms).
Citation: SIAM Journal on Computing
PubDate: 2022-04-07T07:00:00Z
DOI: 10.1137/20M1370021

 Smoothing the Gap Between NP and $\exists \mathbb{R}$

Authors: Jeff Erickson, Ivor van der Hoog, Tillmann Miltzow
Abstract: SIAM Journal on Computing, Ahead of Print.
We study algorithmic problems that belong to the complexity class of the existential theory of the reals ($\exists \mathbb{R}$). A problem is $\exists \mathbb{R}$-complete if it is as hard as the problem existential theory of the reals (ETR) and if it can be written as an ETR formula. Traditionally, these problems are studied in the real random access machine (RAM), a model of computation that assumes that the storage and comparison of real-valued numbers can be done in constant space and time, with infinite precision. The complexity class $\exists \mathbb{R}$ is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. The real RAM assumption that arbitrary irrational values can be represented and compared in constant space and time is not very realistic. Yet this assumption is vital, since some $\exists \mathbb{R}$-complete problems have an “exponential bit phenomenon,” where there exists an input for the problem such that the witness of the solution requires geometric coordinates which need exponential word size when represented in binary. The problems that exhibit this phenomenon are NP-hard (since ETR is NP-hard), but it is unknown if they lie in NP. NP membership is often shown by using the famous Cook-Levin theorem, which states that the existence of a polynomial-time verification algorithm for the problem witness is equivalent to NP membership. The exponential bit phenomenon prohibits a straightforward application of the Cook-Levin theorem. In this paper we first present a result which we believe to be of independent interest: we prove a real RAM analogue of the Cook-Levin theorem, which shows that $\exists \mathbb{R}$ membership is equivalent to having a verification algorithm that runs in polynomial time on a real RAM. This gives an easy way to prove $\exists \mathbb{R}$-membership, as verification algorithms on a real RAM are much more versatile than ETR formulas.
We use this result to construct a framework to study $\exists \mathbb{R}$-complete problems under smoothed analysis. We show that for a wide class of $\exists \mathbb{R}$-complete problems, the witness can be represented with logarithmic input-precision by using smoothed analysis on the real RAM verification algorithm. This shows in a formal way that the boundary between NP and $\exists \mathbb{R}$ (formed by inputs whose solution witness needs high input-precision) consists of contrived inputs. We apply our framework to well-studied $\exists \mathbb{R}$-complete recognition problems which have the exponential bit phenomenon, such as the recognition of realizable order types or the Steinitz problem in fixed dimension. Interestingly, our techniques also generalize to problems with a natural notion of resource augmentation (geometric packing, the art gallery problem).
Citation: SIAM Journal on Computing
PubDate: 2022-04-07T07:00:00Z
DOI: 10.1137/20M1385287

 Truly Optimal Euclidean Spanners

Authors: Hung Le, Shay Solomon
Abstract: SIAM Journal on Computing, Ahead of Print.
Euclidean spanners are important geometric structures, having found numerous applications over the years. Cornerstone results in this area from the late 1980s and early 1990s state that for any $d$-dimensional $n$-point Euclidean space, there exists a $(1+\epsilon)$-spanner with $n \cdot O(\epsilon^{-d+1})$ edges and lightness (normalized weight) $O(\epsilon^{-2d})$. Surprisingly, the fundamental question of whether these dependencies on $\epsilon$ and $d$ for small $d$ can be improved has remained elusive, even for $d = 2$. This question naturally arises in any application of Euclidean spanners where precision is a necessity (thus $\epsilon$ is tiny). In the most extreme case $\epsilon$ is inverse polynomial in $n$, and then one could potentially improve the size and lightness bounds by factors that are polynomial in $n$. The state-of-the-art bounds $n \cdot O(\epsilon^{-d+1})$ and $O(\epsilon^{-2d})$ on the size and lightness of spanners are realized by the greedy spanner. In 2016, in a preliminary version, Filtser and Solomon [SIAM J. Comput., 49 (2020), pp. 429-447] proved that, in low-dimensional spaces, the greedy spanner is “near-optimal”; informally, their result states that the greedy spanner for dimension $d$ is just as sparse and light as any other spanner, but for dimension larger by a constant factor. Hence the question of whether the greedy spanner is truly optimal remained open to date. The contribution of this paper is twofold: (1) We resolve these longstanding questions by nailing down the dependencies on $\epsilon$ and $d$ and showing that the greedy spanner is truly optimal. Specifically, for any $d = O(1)$ and $\epsilon = \Omega(n^{-\frac{1}{d-1}})$, (a) we show that there are $n$-point sets in $\mathbb{R}^d$ for which any $(1+\epsilon)$-spanner must have $n \cdot \Omega(\epsilon^{-d+1})$ edges, implying that the greedy (and other) spanners achieve the optimal size; (b) we show that there are $n$-point sets in $\mathbb{R}^d$ for which any $(1+\epsilon)$-spanner must have lightness $\Omega(\epsilon^{-d})$, and we then improve the upper bound on the lightness of the greedy spanner from $O(\epsilon^{-2d})$ to $O(\epsilon^{-d} \log(\epsilon^{-1}))$. (The lightness upper and lower bounds match up to a lower-order term.) (2) We then complement our negative result for the size of spanners with a rather counterintuitive positive result: Steiner points lead to a quadratic improvement in the size of spanners! Our bound for the size of Steiner spanners in $\mathbb{R}^2$ is tight as well (up to a lower-order term).
Citation: SIAM Journal on Computing
PubDate: 2022-03-31T07:00:00Z
DOI: 10.1137/20M1317906
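The greedy spanner discussed above has a well-known textbook form: process point pairs by increasing distance and add an edge only when the current spanner does not already provide a $(1+\epsilon)$-approximate path. A compact sketch:

```python
import itertools
import heapq
import math

def graph_dist(points, edges, s, t):
    """Dijkstra over the current edge set; returns inf if t is unreachable."""
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == t:
            return du
        if du > dist.get(u, math.inf):
            continue
        for a, b in edges:
            if u in (a, b):
                v = b if u == a else a
                nd = du + math.dist(points[a], points[b])
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return math.inf

def greedy_spanner(points, eps):
    # scan pairs in nondecreasing distance order; add an edge only if the
    # spanner built so far stretches the pair by more than (1 + eps)
    edges = []
    pairs = sorted(itertools.combinations(range(len(points)), 2),
                   key=lambda p: math.dist(points[p[0]], points[p[1]]))
    for i, j in pairs:
        if graph_dist(points, edges, i, j) > (1 + eps) * math.dist(points[i], points[j]):
            edges.append((i, j))
    return edges

pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)]
spanner = greedy_spanner(pts, 0.5)  # keeps 5 of the 10 possible edges
```

By construction every pair ends up with stretch at most $1+\epsilon$; the paper's contribution is showing that this greedy output is also optimally sparse and light.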

 Performance of Johnson-Lindenstrauss Transform for $k$-Means and $k$-Medians Clustering
Authors: Konstantin Makarychev, Yury Makarychev, Ilya Razenshteyn
Abstract: SIAM Journal on Computing, Ahead of Print.
Consider an instance of Euclidean $k$-means or $k$-medians clustering. We show that the cost of the optimal solution is preserved up to a factor of $(1+\varepsilon)$ under a projection onto a random $O(\log(k / \varepsilon) / \varepsilon^2)$-dimensional subspace. Further, the cost of every clustering is preserved within a factor of $(1+\varepsilon)$. More generally, our result applies to any dimension reduction map satisfying a mild sub-Gaussian-tail condition. Our bound on the dimension is nearly optimal. Additionally, our result applies to Euclidean $k$-clustering with the distances raised to the $p$th power for any constant $p$. For $k$-means, our result resolves an open problem posed by Cohen et al. [STOC 2015, ACM, New York, 2015, pp. 163-172]; for $k$-medians, it answers a question raised by Kannan.
Citation: SIAM Journal on Computing
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/20M1330701
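The statement can be sanity-checked empirically; the sketch below projects Gaussian data and compares the cost of one fixed (arbitrary) clustering before and after a random projection. The constants are ad hoc for the demo, not the $O(\log(k/\varepsilon)/\varepsilon^2)$ bound itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 1000, 40                 # n points in R^d, target dimension m
X = rng.normal(size=(n, d))
labels = rng.integers(0, 4, size=n)     # a fixed, arbitrary 4-clustering

def kmeans_cost(X, labels):
    # sum of squared distances of each point to its cluster's centroid
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

G = rng.normal(size=(d, m)) / np.sqrt(m)  # random projection map
ratio = kmeans_cost(X @ G, labels) / kmeans_cost(X, labels)
print(round(ratio, 3))
```

For this seed the ratio lands close to 1, as the theorem predicts simultaneously for every clustering, not just the optimal one.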

 Polynomial Data Structure Lower Bounds in the Group Model

Authors: Alexander Golovnev, Gleb Posobin, Oded Regev, Omri Weinstein
Abstract: SIAM Journal on Computing, Ahead of Print.
Proving superlogarithmic data structure lower bounds in the static group model has been a fundamental challenge in computational geometry since the early '80s. We prove a polynomial ($n^{\Omega(1)}$) lower bound for an explicit range counting problem of $n^3$ convex polygons in $\mathbb{R}^2$ (each with $n^{\tilde{O}(1)}$ facets/semialgebraic complexity), against linear-storage arithmetic data structures in the group model. Our construction and analysis are based on a combination of techniques in Diophantine approximation, pseudorandomness, and compressed sensing; in particular, on the existence and partial derandomization of optimal binary compressed sensing matrices in the polynomial sparsity regime ($k = n^{1-\delta}$). As a byproduct, this establishes a (logarithmic) separation between compressed sensing matrices and the stronger RIP property.
Citation: SIAM Journal on Computing
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/20M1381988

 Efficient Construction of Rigid Matrices Using an NP Oracle

Authors: Josh Alman, Lijie Chen
Abstract: SIAM Journal on Computing, Ahead of Print.
For a matrix $H$ over a field $\mathbb{F}$, its rank-$r$ rigidity, denoted by $\mathscr{R}_{H}(r)$, is the minimum Hamming distance from $H$ to a matrix of rank at most $r$ over $\mathbb{F}$. A central open challenge in complexity theory is to give explicit constructions of rigid matrices for a variety of parameter settings. In this work, building on Williams' seminal connection between circuit-analysis algorithms and lower bounds [J. ACM, 61 (2014), 2], we give a construction of rigid matrices in ${P}^{{NP}}$. Letting $q = p^r$ be a prime power, we show that there is an absolute constant $\delta>0$ such that, for all constants $\varepsilon>0$, there is a ${P}^{{NP}}$ machine $M$ such that, for infinitely many $N$'s, $M(1^N)$ outputs a matrix $H_N \in \{0,1\}^{N \times N}$ with $\mathscr{R}_{H_N}(2^{(\log N)^{1/4 - \varepsilon}}) \ge \delta \cdot N^2$ over $\mathbb{F}_q$. Using known connections between matrix rigidity and other topics in complexity theory, we derive several consequences of our constructions, including the following: There is a function $f \in {TIME}[2^{(\log n)^{\omega(1)}}]^{{NP}}$ such that $f \notin {PH}^{{cc}}$; previously, it was open whether ${E}^{{NP}} \subset {PH}^{{cc}}$. For all $\varepsilon>0$, there is a ${P}^{{NP}}$ machine $M$ such that, for infinitely many $N$'s, $M(1^N)$ outputs an $N \times N$ matrix $H_N \in \{0,1\}^{N \times N}$ whose linear transformation requires depth-2 $\mathbb{F}_q$-linear circuits of size $\Omega(N \cdot 2^{(\log N)^{1/4 - \varepsilon}})$. The previous best lower bound for an explicit family of $N \times N$ matrices over $\mathbb{F}_q$ was only $\Omega(N \log^2 N / (\log\log N)^2)$, for asymptotically good error-correcting codes.
Citation: SIAM Journal on Computing
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/20M1322297
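The rigidity definition at the start of the abstract can be checked by brute force for tiny matrices over $\mathbb{F}_2$ (illustration only; explicit constructions at scale are exactly what is hard):

```python
import itertools

# Brute-force check of the rank-r rigidity definition: R_H(r) is the
# minimum Hamming distance from H to any matrix of rank at most r.
# Rows are stored as n-bit integers; rank is computed over F_2.
def gf2_rank(rows):
    rank, rows = 0, list(rows)
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        pivot = r & -r          # eliminate this pivot column from the rest
        rows = [x ^ r if x & pivot else x for x in rows]
    return rank

def rigidity(H, n, r):
    best = n * n
    for M in itertools.product(range(2 ** n), repeat=n):
        if gf2_rank(M) <= r:
            dist = sum(bin(h ^ m).count("1") for h, m in zip(H, M))
            best = min(best, dist)
    return best

identity = (0b100, 0b010, 0b001)
print(rigidity(identity, 3, 1))  # -> 2: two entries must change to reach rank <= 1
```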

 The Complexity of Necklace Splitting, Consensus-Halving, and Discrete Ham
Sandwich
Authors: Aris FilosRatsikas, Paul W. Goldberg
Abstract: SIAM Journal on Computing, Ahead of Print.
We resolve the computational complexity of three problems known as Necklace Splitting, Consensus-Halving, and Discrete Ham Sandwich, showing that they are PPA-complete. For Necklace Splitting, this result is specific to the important special case in which two thieves share the necklace. These are the first PPA-completeness results for problems whose definition does not contain an explicit circuit, thus settling the status of PPA as a class that captures the complexity of such “natural” problems.
Citation: SIAM Journal on Computing
PubDate: 2022-02-28T08:00:00Z
DOI: 10.1137/20M1312678

 Weak Zero-Knowledge beyond the Black-Box Barrier

Authors: Nir Bitansky, Dakshita Khurana, Omer Paneth
Abstract: SIAM Journal on Computing, Ahead of Print.
The round complexity of zero-knowledge protocols is a longstanding open question and is yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Like full-fledged zero knowledge, protocols satisfying these relaxed notions under standard assumptions have at least four messages. The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box. We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. Our main results are (1) weak zero-knowledge for NP in two messages, assuming quasi-polynomially secure fully homomorphic encryption and other standard primitives (known from quasi-polynomial hardness of learning with errors) as well as subexponentially secure one-way functions; and (2) weak zero-knowledge for NP in three messages under standard polynomial assumptions (following, for example, from fully homomorphic encryption and factoring). We also give, under polynomial assumptions, a two-message witness-hiding protocol for any language $L$ in NP that has a witness encryption scheme. This protocol is also publicly verifiable. Our technique is based on a new homomorphic trapdoor paradigm, which can be seen as a non-black-box analogue of the classic Feige-Lapidot-Shamir trapdoor paradigm.
Citation: SIAM Journal on Computing
PubDate: 2022-01-27T08:00:00Z
DOI: 10.1137/20M1319565

 Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Authors: Ran Canetti, Oxana Poburinnaya, Muthuramakrishnan Venkitasubramaniam
Abstract: SIAM Journal on Computing, Ahead of Print.
Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We answer this question in the affirmative. We define a new type of symmetric encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. Using standard techniques, this protocol can be made standalone secure against malicious corruptions in the plain model and universally composable in the common random string model. Additional applications include the first fully leakage-tolerant general multiparty computation protocol (with preprocessing), as well as a public-key version of FEE, which can serve as a replacement for non-committing encryption with better efficiency than what is possible for the latter.
Citation: SIAM Journal on Computing
PubDate: 2022-01-13T08:00:00Z
DOI: 10.1137/17M1151602

 Improved Bounds for Perfect Sampling of $k$-Colorings in Graphs

Authors: Siddharth Bhandari, Sayantan Chakraborty
Abstract: SIAM Journal on Computing, Ahead of Print.
We present a randomized algorithm that takes as input an undirected $n$-vertex graph $G$ with maximum degree $\Delta$ and an integer $k > 3\Delta$ and returns a random proper $k$-coloring of $G$. The distribution of the coloring is perfectly uniform over the set of all proper $k$-colorings; the expected running time of the algorithm is ${poly}(k,n)=\widetilde{O}(n\Delta^2\cdot \log(k))$. This improves upon a result of Huber [Proceedings of the 30th ACM Symposium on Theory of Computing (STOC), 1998, pp. 31-40], who obtained a polynomial time perfect sampling algorithm for $k>\Delta^2+2\Delta$. Prior to our work, no algorithm with expected running time ${poly}(k,n)$ was known to guarantee perfect sampling with a subquadratic number of colors in general. Our algorithm (like several other perfect sampling algorithms, including Huber's) is based on the coupling from the past method. Inspired by the bounding chain approach, pioneered independently by Huber (STOC 1998) and Häggström and Nelander [Scand. J. Stat., 26 (1999), pp. 395-411], we employ a novel bounding chain to derive our result for the graph coloring problem.
Citation: SIAM Journal on Computing
PubDate: 2022-01-05T08:00:00Z
DOI: 10.1137/20M1366666
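As a point of reference for what "perfectly uniform" means in the abstract above, a naive rejection sampler is also exactly uniform over proper $k$-colorings, though its expected running time is exponential in general; the paper's bounding-chain coupling-from-the-past algorithm achieves ${poly}(k,n)$ expected time instead. The sketch below is this hypothetical baseline, not the paper's method; function names are ours.

```python
import random

def is_proper(coloring, edges):
    """A coloring is proper if no edge is monochromatic."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def rejection_sample_coloring(n, edges, k, rng=random):
    """Exactly uniform proper k-coloring by rejection: draw uniform colorings
    until one is proper.  Perfect but slow: expected time is k^n divided by
    the number of proper colorings, versus poly(k, n) for the paper's CFTP."""
    while True:
        c = [rng.randrange(k) for _ in range(n)]
        if is_proper(c, edges):
            return c

# triangle graph (Delta = 2), k = 3 colors
edges = [(0, 1), (1, 2), (0, 2)]
sample = rejection_sample_coloring(3, edges, 3)
assert is_proper(sample, edges)
```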

 Decremental Strongly Connected Components and Single-Source Reachability in Near-Linear Time
Authors: Aaron Bernstein, Maximilian Probst Gutenberg, Christian Wulff-Nilsen
Abstract: SIAM Journal on Computing, Ahead of Print.
Computing the strongly connected components (SCCs) in a graph $G=(V,E)$ is known to take only $O(m + n)$ time using an algorithm by Tarjan [SIAM J. Comput., 1 (1972), pp. 146-160], where $m = |E|$ and $n = |V|$. For fully dynamic graphs, conditional lower bounds provide evidence that the update time cannot be improved by polynomial factors over recomputing the SCCs from scratch after every update. Nevertheless, substantial progress has been made to find algorithms with fast update time for decremental graphs, i.e., graphs that undergo edge deletions. In this paper, we present the first algorithm for general decremental graphs that maintains the SCCs in total update time $\tilde{O}(m)$, thus only a polylogarithmic factor from the optimal running time. (We use $\tilde{O}(f(n))$ notation to suppress logarithmic factors, i.e., $g(n) = \tilde{O}(f(n))$ if $g(n) = O(f(n) {polylog}(n)).$) Our result also yields the fastest algorithm for the decremental single-source reachability (SSR) problem, which can be reduced to decrementally maintaining SCCs. Using a well-known reduction, we use our decremental result to achieve new update/query-time tradeoffs in the fully dynamic setting. We can maintain the reachability of pairs $S \times V$, $S \subseteq V$, in fully dynamic graphs with update time $\tilde{O}(\frac{|S| m}{t})$ and query time $O(t)$ for all $t \in [1, |S|]$; this matches, up to polylogarithmic factors, the best all-pairs reachability algorithm for $S = V$.
Citation: SIAM Journal on Computing
PubDate: 2021-12-14T08:00:00Z
DOI: 10.1137/20M1312149
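The static $O(m+n)$ baseline cited in the abstract above can be sketched compactly; for brevity this sketch uses Kosaraju's two-pass DFS rather than Tarjan's one-pass algorithm (both meet the same bound). Rerunning such a routine after every edge deletion is the naive decremental strategy that the paper's $\tilde{O}(m)$ total-update-time result improves on.

```python
def sccs(n, edges):
    """Strongly connected components of a digraph in O(n + m) time
    (Kosaraju's algorithm, iterative to avoid recursion limits).
    Returns a component label per vertex."""
    adj, radj = [[] for _ in range(n)], [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], [False] * n
    for s in range(n):                       # pass 1: record DFS finish order on G
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    comp, label = [-1] * n, 0                # pass 2: DFS on reversed G
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = label
        stack = [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = label
                    stack.append(v)
        label += 1
    return comp

comp = sccs(5, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 3)])
print(comp)  # vertices 0,1,2 share one label; 3,4 share another
```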

 Near-Optimal Communication Lower Bounds for Approximate Nash Equilibria

Authors: Mika Göös, Aviad Rubinstein
Abstract: SIAM Journal on Computing, Ahead of Print.
We prove an $N^{2-o(1)}$ lower bound on the randomized communication complexity of finding an $\epsilon$-approximate Nash equilibrium (for constant $\epsilon>0$) in a two-player $N\times N$ game.
Citation: SIAM Journal on Computing
PubDate: 2021-11-30T08:00:00Z
DOI: 10.1137/19M1242069

 Distributed Exact Weighted All-Pairs Shortest Paths in Randomized Near-Linear Time
Authors: Aaron Bernstein, Danupon Nanongkai
Abstract: SIAM Journal on Computing, Ahead of Print.
In the distributed all-pairs shortest paths problem, every node in the weighted undirected distributed network (the CONGEST model) needs to know the distance from every other node using the least number of communication rounds (typically called time complexity). The problem admits a $(1+o(1))$-approximation $\tilde\Theta(n)$-time algorithm and a nearly tight $\tilde \Omega(n)$ lower bound [D. Nanongkai, STOC'14, ACM, New York, 2014, pp. 565-573; C. Lenzen and B. Patt-Shamir, PODC'15, ACM, New York, 2015, pp. 153-162]. ($\tilde \Theta$, $\tilde O$, and $\tilde \Omega$ hide polylogarithmic factors. The lower bounds hold even in the unweighted case and in the weighted case with polynomial approximation ratios [C. Lenzen and D. Peleg, PODC, ACM, New York, 2013, pp. 375-382; S. Holzer and R. Wattenhofer, PODC, ACM, New York, 2012, pp. 355-364; D. Peleg, L. Roditty, and E. Tal, ICALP, Springer, Berlin, 2012, pp. 660-672; D. Nanongkai, STOC, ACM, New York, 2014, pp. 565-573].) For the exact case, Elkin [STOC'17, ACM, New York, 2017, pp. 757-790] presented an $O(n^{5/3} \log^{2/3} n)$-time bound, which was later improved to $\tilde O(n^{5/4})$ [C.-C. Huang, D. Nanongkai, T. Saranurak, FOCS'17, IEEE Computer Society, Los Alamitos, CA, 2017, pp. 168-179]. It was shown that any superlinear lower bound (in $n$) requires a new technique [K. Censor-Hillel, S. Khoury, A. Paz, DISC'17, LIPIcs Leibniz Int. Proc. Inform., Vol. 91, Schloss Dagstuhl, Wadern, Germany, 2017, 10], but otherwise it remained wide open whether there exists a $\tilde O(n)$-time algorithm for the exact case, which would match the best possible approximation algorithm. This paper resolves this question positively: we present a randomized (Las Vegas) $\tilde O(n)$-time algorithm, matching the lower bound up to polylogarithmic factors. Like the previous $\tilde O(n^{5/4})$ bound, our result works for directed graphs with zero (and even negative) edge weights.
In addition to the improved running time, our algorithm works in a more general setting than that required by the previous $\tilde O(n^{5/4})$ bound; in our setting (i) the communication is only along edge directions (as opposed to bidirectional), and (ii) edge weights are arbitrary (as opposed to integers in $\{1, 2, \ldots, \operatorname{poly}(n)\}$). As far as we know, ours is the first $o(n^2)$ algorithm that only requires unidirectional communication. For arbitrary weights, the previous state of the art required $\tilde O(n^{4/3})$ time [U. Agarwal and V. Ramachandran, IPDPS 2019, IEEE Computer Society, Los Alamitos, CA, 2019, and SPAA 2020, ACM, New York, 2020, pp. 11-21]. Our algorithm is extremely simple and relies on a new technique called random filtered broadcast. Given any sets of nodes $A,B\subseteq V$, and assuming that every $b \in B$ knows all distances from nodes in $A$ and every node $v \in V$ knows all distances from nodes in $B$, we want every $v\in V$ to know ${\sf DistThrough}_B(a,v) = \min_{b\in B} \{{\sf dist}(a,b) + {\sf dist}(b,v)\}$ for every $a\in A$. Previous works typically solve this problem by broadcasting all knowledge of every $b\in B$, causing superlinear edge congestion and time. We show a randomized algorithm that can reduce edge congestion and thus solve this problem in $\tilde O(n)$ expected time.
Citation: SIAM Journal on Computing
PubDate: 2021-11-23T08:00:00Z
DOI: 10.1137/20M1312782
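The ${\sf DistThrough}_B$ quantity in the abstract above has a direct centralized reading, which the sketch below computes naively from precomputed distance tables; the paper's contribution is realizing the same quantity inside the CONGEST model via random filtered broadcast without superlinear congestion, which this sketch does not model. The function name mirrors the abstract's notation; the toy metric is ours.

```python
def dist_through(A, B, V, dist):
    """DistThrough_B(a, v) = min over b in B of dist(a, b) + dist(b, v),
    for every a in A and v in V, computed centrally from a distance table."""
    return {(a, v): min(dist[a][b] + dist[b][v] for b in B) for a in A for v in V}

# toy metric: a path 0 - 1 - 2 - 3, so dist(i, j) = |i - j|
dist = {i: {j: abs(i - j) for j in range(4)} for i in range(4)}
dt = dist_through([0], [1, 2], range(4), dist)
print(dt[(0, 3)])  # min(dist(0,1) + dist(1,3), dist(0,2) + dist(2,3)) = 3
```

Note that ${\sf DistThrough}_B(a, a)$ need not be $0$, since the path is forced through $B$.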

 Low-Density Parity-Check Codes Achieve List-Decoding Capacity

Authors: Jonathan Mosheiff, Nicolas Resch, Noga Ron-Zewi, Shashwat Silas, Mary Wootters
Abstract: SIAM Journal on Computing, Ahead of Print.
We show that Gallager's ensemble of low-density parity-check (LDPC) codes achieves list-decoding capacity with high probability. These are the first graph-based codes shown to have this property. This result opens up a potential avenue toward truly linear-time list-decodable codes that achieve list-decoding capacity. Our result on list-decoding follows from a much more general result: any local property satisfied with high probability by a random linear code is also satisfied with high probability by a random LDPC code from Gallager's distribution. Local properties are properties characterized by the exclusion of small sets of codewords and include list-decodability, list-recoverability, and average-radius list-decodability. In order to prove our results on LDPC codes, we establish sharp thresholds for when local properties are satisfied by a random linear code. More precisely, we show that for any local property $\mathcal{P}$, there is some $R^*$ so that random linear codes of rate slightly less than $R^*$ satisfy $\mathcal{P}$ with high probability, while random linear codes of rate slightly more than $R^*$, with high probability, do not. We also give a characterization of the threshold rate $R^*$.
Citation: SIAM Journal on Computing
PubDate: 2021-11-23T08:00:00Z
DOI: 10.1137/20M1365934

 Limits on All Known (and Some Unknown) Approaches to Matrix Multiplication

Authors: Josh Alman, Virginia Vassilevska Williams
Abstract: SIAM Journal on Computing, Ahead of Print.
We study the known techniques for designing matrix multiplication algorithms. The two main approaches are the Laser method of Strassen and the group-theoretic approach of Cohn and Umans. We define a generalization based on zeroing outs which subsumes these two approaches, which we call the Solar method, and an even more general method based on monomial degenerations, which we call the Galactic method. We then design a suite of techniques for proving lower bounds on the value of $\omega$, the exponent of matrix multiplication, which can be achieved by algorithms using many tensors $T$ and the Galactic method. Some of our techniques exploit “local” properties of $T$, like finding a subtensor of $T$ which is so “weak” that $T$ itself couldn't be used to achieve a good bound on $\omega$, while others exploit “global” properties, like $T$ being a monomial degeneration of the structural tensor of a group algebra. Our main result is that there is a universal constant $\ell>2$ such that a large class of tensors generalizing the Coppersmith-Winograd tensor $CW_q$ cannot be used within the Galactic method to show a bound on $\omega$ better than $\ell$ for any $q$. We give evidence that previous lower-bounding techniques were not strong enough to show this. We also prove a number of complementary results along the way, including that for any group $G$, the structural tensor of $\mathbb{C}[G]$ can be used to recover the best bound on $\omega$ which the Coppersmith-Winograd approach gets using $CW_{|G|-2}$, as long as the asymptotic rank of the structural tensor is not too large.
Citation: SIAM Journal on Computing
PubDate: 2021-11-22T08:00:00Z
DOI: 10.1137/19M124695X

 A Tight Analysis of Bethe Approximation for Permanent

Authors: Nima Anari, Alireza Rezaei
Abstract: SIAM Journal on Computing, Ahead of Print.
We prove that the permanent of nonnegative matrices can be deterministically approximated within a factor of $\sqrt{2}^n$ in polynomial time, improving upon previous deterministic approximations. We show this by proving that the Bethe approximation of the permanent, a quantity computable in polynomial time, is at least as large as the permanent divided by $\sqrt{2}^{n}$. This resolves a conjecture of [L. Gurvits, Unleashing the Power of Schrijver's Permanental Inequality with the Help of the Bethe Approximation, preprint, arXiv:1106.2844, 2011]. Our bound is tight and, when combined with previously known inequalities lower bounding the permanent, fully resolves the quality of Bethe approximation for the permanent. As an additional corollary of our methods, we resolve a conjecture of [M. Chertkov and A. B. Yedidia, J. Mach. Learn. Res., 14 (2013), pp. 2029-2066], proving that fractional belief propagation with fractional parameter $\gamma=-1/2$ yields an upper bound on the permanent.
Citation: SIAM Journal on Computing
PubDate: 2021-11-18T08:00:00Z
DOI: 10.1137/19M1306142
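The permanent itself, the quantity whose Bethe approximation the abstract above bounds within a factor of $\sqrt{2}^n$, can be computed exactly for small $n$ by Ryser's inclusion-exclusion formula. This is a standard exact baseline, not the paper's polynomial-time approximation; the code is a minimal sketch.

```python
from itertools import combinations

def permanent(A):
    """Exact permanent via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty S of (-1)^|S| * prod_i sum_{j in S} A[i][j],
    running in O(2^n * n^2) time."""
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        sign = (-1) ** size
        for S in combinations(range(n), size):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += sign * prod
    return (-1) ** n * total

print(permanent([[1, 1], [1, 1]]))  # 2: both permutations of the 2x2 all-ones matrix
```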

 Reducing Path TSP to TSP

Authors: Vera Traub, Jens Vygen, Rico Zenklusen
Abstract: SIAM Journal on Computing, Ahead of Print.
We present a black-box reduction from the path version of the traveling salesman problem (Path TSP) to the classical tour version (TSP). More precisely, given an $\alpha$-approximation algorithm for TSP, we obtain, for any $\epsilon>0$, an $(\alpha+\epsilon)$-approximation algorithm for the more general Path TSP. This reduction implies that the approximability of Path TSP is the same as for TSP, up to an arbitrarily small error. This avoids future discrepancies between the best known approximation factors achievable for these two problems, as they have existed until very recently. A well-studied special case of TSP, Graph TSP, asks for tours in unit-weight graphs. Our reduction shows that any $\alpha$-approximation algorithm for Graph TSP implies an $(\alpha+\epsilon)$-approximation algorithm for its path version. By applying our reduction to the 1.4-approximation algorithm for Graph TSP by Sebő and Vygen, we obtain a polynomial-time $(1.4+\epsilon)$-approximation algorithm for Graph Path TSP, improving on a recent $1.497$-approximation algorithm of Traub and Vygen. We obtain our results through a variety of new techniques, including a novel way to set up a recursive dynamic program to guess significant parts of an optimal solution. At the core of our dynamic program, we deal with instances of a new generalization of (Path) TSP which combines parity constraints with certain connectivity requirements. This problem, which we call $\Phi$-TSP, has a constant-factor approximation algorithm and can be reduced to TSP in certain cases when the dynamic program would not make sufficient progress.
Citation: SIAM Journal on Computing
PubDate: 2021-10-18T07:00:00Z
DOI: 10.1137/20M135594X

 Lower Bounds for External Memory Integer Sorting via Network Coding

Authors: Alireza Farhadi, MohammadTaghi Hajiaghayi, Kasper Green Larsen, Elaine Shi
Abstract: SIAM Journal on Computing, Ahead of Print.
Sorting extremely large datasets is a frequently occurring task in practice. These datasets are usually much larger than the computer's main memory; thus external memory sorting algorithms, first introduced by Aggarwal and Vitter [Commun. ACM, 31 (1988), pp. 1116-1127], are often used. The complexity of comparison-based external memory sorting has been understood for decades by now, but the situation remains elusive if we assume the keys to be sorted are integers. In internal memory, one can sort a set of $n$ integer keys of $\Theta({lg} n)$ bits each in $O(n)$ time using the classic radix sort algorithm, but in external memory, no integer sorting algorithms faster than the simple comparison-based ones are known. Whether such algorithms exist has remained a central open problem in external memory algorithms for more than three decades. In this paper, we present a tight conditional lower bound on the complexity of external memory sorting of integers. Our lower bound is based on a famous conjecture in network coding by Li and Li [Proceedings of the 42nd Allerton Annual Conference on Communication, Control and Computing, 2004], who conjectured that network coding cannot help anything beyond the standard multicommodity flow rate in undirected graphs. The only previous work connecting the Li and Li conjecture to lower bounds for algorithms is due to Adler et al. [Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, 2006, pp. 241-250]. Adler et al. indeed obtain relatively simple lower bounds for oblivious algorithms (the memory access pattern is fixed and independent of the input data). Unfortunately, obliviousness is a strong limitation, especially for integer sorting: we show that the Li and Li conjecture implies an $\Omega(n \log n)$ lower bound for internal memory oblivious sorting when the keys are $\Theta({lg} n)$ bits. This is in sharp contrast to the classic (non-oblivious) radix sort algorithm. 
Indeed, going beyond obliviousness is highly nontrivial; we need to introduce several new methods and involved techniques, which are of independent interest, to obtain our tight lower bound for external memory integer sorting.
Citation: SIAM Journal on Computing
PubDate: 2021-09-30T07:00:00Z
DOI: 10.1137/20M1321887
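The classic radix sort invoked in the abstract above sorts $n$ keys of $\Theta({lg}\,n)$ bits in $O(n)$ internal-memory time by making a constant number of stable bucketing passes. A minimal least-significant-digit sketch (the chunk width of 8 bits is an arbitrary choice here, not from the paper):

```python
def radix_sort(keys, n_bits, chunk=8):
    """LSD radix sort for nonnegative integers: O(n * n_bits / chunk) time.
    Each pass stably distributes keys into 2^chunk buckets by one digit,
    least significant digit first."""
    mask = (1 << chunk) - 1
    for shift in range(0, n_bits, chunk):
        buckets = [[] for _ in range(mask + 1)]
        for x in keys:
            buckets[(x >> shift) & mask].append(x)
        keys = [x for b in buckets for x in b]  # stable concatenation
    return keys

print(radix_sort([170, 45, 75, 90, 2, 802, 24, 66], 10))  # [2, 24, 45, 66, 75, 90, 170, 802]
```

With keys of $\Theta({lg}\,n)$ bits and chunk width $\approx {lg}\,n$, only $O(1)$ passes are needed, giving the $O(n)$ bound; the access pattern depends on the data, which is exactly the non-obliviousness the paper's lower bound has to overcome.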

 The Average-Case Complexity of Counting Cliques in Erdös-Rényi Hypergraphs
Authors: Enric Boix-Adserà, Matthew Brennan, Guy Bresler
Abstract: SIAM Journal on Computing, Ahead of Print.
We consider the problem of counting $k$-cliques in $s$-uniform Erdös-Rényi hypergraphs $G(n,c,s)$ with edge density $c$ and show that its fine-grained average-case complexity can be based on its worst-case complexity. We prove the following: (1) Dense Erdös-Rényi graphs and hypergraphs: counting $k$-cliques on $G(n,c,s)$ with $k$ and $c$ constant matches its worst-case complexity up to a ${polylog}(n)$ factor. Assuming randomized ETH, it takes $n^{\Omega(k)}$ time to count $k$-cliques in $G(n,c,s)$ if $k$ and $c$ are constant. (2) Sparse Erdös-Rényi graphs and hypergraphs: when $c = \Theta(n^{-\alpha})$, we give several algorithms exploiting the sparsity of $G(n, c, s)$ that are faster than the best known worst-case algorithms. Complementing this, based on a fine-grained worst-case assumption, our reduction implies a different average-case phase diagram for each fixed $\alpha$ depicting a tradeoff between a runtime lower bound and $k$. Surprisingly, in the hypergraph case ($s \ge 3$), these lower bounds are tight against our algorithms exactly when $c$ is above the Erdös-Rényi $k$-clique percolation threshold. Our reduction yields the first known average-case hardness result on Erdös-Rényi hypergraphs based on worst-case hardness conjectures. We also give a variant of our worst-case to average-case reduction for computing the parity of the $k$-clique count that requires a milder assumption on the error probability of the black box solving the problem on $G(n, c, s)$.
Citation: SIAM Journal on Computing
PubDate: 2021-09-14T07:00:00Z
DOI: 10.1137/20M1316044
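The counting problem studied above is easy to state concretely. The sketch below is the naive $O(n^k)$ worst-case baseline for the graph case ($s = 2$), together with a hypothetical `erdos_renyi` helper for sampling $G(n, c)$ instances; neither is the paper's reduction or its sparse-case algorithms.

```python
import random
from itertools import combinations

def count_k_cliques(n, edges, k):
    """Brute-force k-clique count in O(n^k) time: check every k-subset of
    vertices for pairwise adjacency."""
    adj = {frozenset(e) for e in edges}
    return sum(
        all(frozenset((u, v)) in adj for u, v in combinations(verts, 2))
        for verts in combinations(range(n), k)
    )

def erdos_renyi(n, c, rng=random):
    """Sample the edge set of G(n, c): each pair is an edge with probability c
    (hypothetical helper, graph case s = 2)."""
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < c]

print(count_k_cliques(4, list(combinations(range(4), 2)), 3))  # K_4 contains 4 triangles
```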

 Near-Optimal Lower Bounds on the Threshold Degree and Sign-Rank of AC$^0$

Authors: Alexander A. Sherstov, Pei Wu
Abstract: SIAM Journal on Computing, Ahead of Print.
The threshold degree of a Boolean function $f\colon{{\{0,1\}}^n}\to{\{0,1\}}$ is the minimum degree of a real polynomial $p$ that represents $f$ in sign: ${sgn}\,p(x)=(-1)^{f(x)}.$ A related notion is sign-rank, defined for a Boolean matrix $F=[F_{ij}]$ as the minimum rank of a real matrix $M$ with ${sgn}\,M_{ij}=(-1)^{F_{ij}}$. Determining the maximum threshold degree and sign-rank achievable by constant-depth circuits (${AC}^{0}$) is a well-known and extensively studied open problem with complexity-theoretic and algorithmic applications. We give an essentially optimal solution to this problem. For any $\epsilon>0,$ we construct an ${AC}^{0}$ circuit in $n$ variables that has threshold degree $\Omega(n^{1-\epsilon})$ and sign-rank $\exp(\Omega(n^{1-\epsilon})),$ improving on the previous best lower bounds of $\Omega(\sqrt{n})$ and $\exp(\tilde{\Omega}(\sqrt{n}))$, respectively. Our results subsume all previous lower bounds on the threshold degree and sign-rank of ${AC}^{0}$ circuits of any given depth, with a strict improvement starting at depth 4. As a corollary, we also obtain near-optimal bounds on the discrepancy, threshold weight, and threshold density of ${AC}^{0}$, strictly subsuming previous work on these quantities. Our work gives some of the strongest lower bounds to date on the communication complexity of ${AC}^{0}$.
Citation: SIAM Journal on Computing
PubDate: 2021-08-20T07:00:00Z
DOI: 10.1137/20M1312459

 Spectral Independence in High-Dimensional Expanders and Applications to the Hardcore Model
Authors: Nima Anari, Kuikui Liu, Shayan Oveis Gharan
Abstract: SIAM Journal on Computing, Ahead of Print.
We say a probability distribution $\mu$ is spectrally independent if an associated pairwise influence matrix has a bounded largest eigenvalue for the distribution and all of its conditional distributions. We prove that if $\mu$ is spectrally independent, then the corresponding high-dimensional simplicial complex is a local spectral expander. Using a line of recent works on the mixing time of high-dimensional walks on simplicial complexes [T. Kaufman and D. Mass, Proceedings of ITCS, 2017, pp. 4:1-4:27; I. Dinur and T. Kaufman, Proceedings of the IEEE 58th Annual Symposium on Foundations of Computer Science, 2017, pp. 974-985; T. Kaufman and I. Oppenheim, Proceedings of APPROX/RANDOM, 2018, pp. 47:1-47:17; V. L. Alev and L. C. Lau, Proceedings of the 52nd Annual ACM Symposium on Theory of Computing, 2020], this implies that the corresponding Glauber dynamics mixes rapidly and generates (approximate) samples from $\mu$. As an application, we show that the natural Glauber dynamics mixes rapidly (in polynomial time) to generate a random independent set from the hardcore model up to the uniqueness threshold. This improves the quasi-polynomial running time of Weitz's deterministic correlation decay algorithm [D. Weitz, Proceedings of the 38th Annual ACM Symposium on Theory of Computing, 2006, pp. 140-149] for estimating the hardcore partition function, also answering a longstanding open problem on the mixing time of Glauber dynamics [M. Luby and E. Vigoda, Proceedings of the 29th Annual ACM Symposium on Theory of Computing, 1997, pp. 682-687; M. Luby and E. Vigoda, Random Structures Algorithms, 15 (1999), pp. 229-241; M. Dyer and C. Greenhill, J. Algorithms, 35 (2000), pp. 17-49; E. Vigoda, Electron. J. Combin., 8 (2001); C. Efthymiou et al., Proceedings of FOCS, 2016, pp. 704-713].
Citation: SIAM Journal on Computing
PubDate: 2021-07-12T07:00:00Z
DOI: 10.1137/20M1367696
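The Glauber dynamics whose mixing time the abstract above analyzes resamples one vertex at a time. For the hardcore model with fugacity $\lambda$, a single update leaves a blocked vertex (one with an occupied neighbor) empty and otherwise occupies the vertex with probability $\lambda/(1+\lambda)$. A minimal sketch of the chain itself (parameter and function names are ours; the paper's content is the mixing-time bound, not this update rule):

```python
import random

def glauber_hardcore(adj, fugacity, steps, rng=random):
    """Single-site Glauber dynamics for the hardcore model on a graph given
    as an adjacency list.  The chain's state is always an independent set;
    its stationary distribution weights a set S by fugacity**len(S)."""
    n = len(adj)
    occupied = [False] * n
    for _ in range(steps):
        v = rng.randrange(n)
        if any(occupied[u] for u in adj[v]):
            occupied[v] = False           # blocked: v must be empty
        else:
            occupied[v] = rng.random() < fugacity / (1 + fugacity)
    return occupied

# 5-cycle: the returned configuration is always an independent set
adj = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
occ = glauber_hardcore(adj, fugacity=1.0, steps=2000)
assert all(not (occ[v] and occ[u]) for v in range(5) for u in adj[v])
```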

 Approximating Rectangles by Juntas and Weakly Exponential Lower Bounds for
LP Relaxations of CSPs
Authors: Pravesh K. Kothari, Raghu Meka, Prasad Raghavendra
Abstract: SIAM Journal on Computing, Ahead of Print.
We show that for constraint satisfaction problems (CSPs), weakly exponential size linear programming relaxations are as powerful as the explicit linear program described by $n^{\Omega(1)}$ rounds of the Sherali-Adams linear programming hierarchy. Combining with the known lower bounds on the Sherali-Adams hierarchy, we obtain subexponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3-SAT. This is a nearly exponential improvement over previous results; previously, it was only known that linear programs of size $n^{o(\log n)}$ cannot beat random guessing for such CSPs [Chan et al., FOCS 2013, IEEE, Piscataway, NJ, 2013, pp. 350-359]. Our bounds are obtained by exploiting and extending the recent progress in communication complexity for “lifting” query lower bounds to communication problems. The main ingredient in our results is a new structural result on “high-entropy rectangles” that may be of independent interest in communication complexity.
Citation: SIAM Journal on Computing
PubDate: 2021-05-20T07:00:00Z
DOI: 10.1137/17M1152966

 Targeted Pseudorandom Generators, Simulation Advice Generators, and
Derandomizing Logspace
Authors: William M. Hoza, Chris Umans
Abstract: SIAM Journal on Computing, Ahead of Print.
Assume that for every derandomization result for logspace algorithms, there is a pseudorandom generator strong enough to nearly recover the derandomization by iterating over all seeds and taking a majority vote. We prove under a precise version of this assumption that ${BPL} \subseteq \bigcap_{\alpha> 0} {DSPACE}(\log^{1 + \alpha} n)$. We strengthen the theorem to an equivalence by considering two generalizations of the concept of a pseudorandom generator against logspace. A targeted pseudorandom generator against logspace takes as input a short uniform random seed and a finite automaton; it outputs a long bitstring that looks random to that particular automaton. A simulation advice generator for logspace stretches a small uniform random seed into a long advice string; the requirement is that there is some logspace algorithm that, given a finite automaton and this advice string, simulates the automaton reading a long uniform random input. We prove that $\bigcap_{\alpha> 0} {promise}{BPSPACE}(\log^{1 + \alpha} n) = \bigcap_{\alpha> 0} {promise}{DSPACE}(\log^{1 + \alpha} n)$ if and only if for every targeted pseudorandom generator against logspace, there is a simulation advice generator for logspace with similar parameters. Finally, we observe that in a certain uniform setting (namely, if we only worry about sequences of automata that can be generated in logspace), targeted pseudorandom generators against logspace can be transformed into simulation advice generators with similar parameters.
Citation: SIAM Journal on Computing
PubDate: 2021-03-09T08:00:00Z
DOI: 10.1137/17M1145707

 Explicit Near-Ramanujan Graphs of Every Degree

Authors: Sidhanth Mohanty, Ryan O'Donnell, Pedro Paredes
Abstract: SIAM Journal on Computing, Ahead of Print.
For every constant $d \geq 3$ and $\epsilon > 0$, we give a deterministic $\operatorname{poly}(n)$-time algorithm that outputs a $d$-regular graph on $\Theta(n)$ vertices that is $\epsilon$-near-Ramanujan; i.e., its eigenvalues are bounded in magnitude by $2\sqrt{d-1} + \epsilon$ (excluding the single trivial eigenvalue of $d$).
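The eigenvalue condition is easy to check numerically on a small example. A minimal sketch, assuming NumPy is available, using the 3-regular Petersen graph (which is in fact exactly Ramanujan):

```python
# Sketch: verify the Ramanujan bound |lambda| <= 2*sqrt(d-1) on the Petersen graph.
import numpy as np

# Petersen graph: outer 5-cycle (0..4), inner pentagram (5..9), spokes i -- i+5.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # outer cycle
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),   # inner pentagram
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]   # spokes
A = np.zeros((10, 10))
for u, v in edges:
    A[u, v] = A[v, u] = 1

d = 3
eigs = np.linalg.eigvalsh(A)
nontrivial = sorted(abs(x) for x in eigs)[:-1]  # drop the trivial eigenvalue d
print(max(nontrivial) <= 2 * np.sqrt(d - 1))    # True: Petersen is Ramanujan
```

The spectrum of the Petersen graph is $\{3, 1^{(5)}, (-2)^{(4)}\}$, so the largest nontrivial eigenvalue magnitude is $2 \leq 2\sqrt{2} \approx 2.83$.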
Citation: SIAM Journal on Computing
PubDate: 2021-02-23T08:00:00Z
DOI: 10.1137/20M1342112

 Quantum Algorithm for Simulating Real Time Evolution of Lattice
Hamiltonians
Authors: Jeongwan Haah, Matthew B. Hastings, Robin Kothari, Guang Hao Low
Abstract: SIAM Journal on Computing, Ahead of Print.
We study the problem of simulating the time evolution of a lattice Hamiltonian, where the qubits are laid out on a lattice and the Hamiltonian only includes geometrically local interactions (i.e., a qubit may only interact with qubits in its vicinity). This class of Hamiltonians is very general and is believed to capture fundamental interactions of physics. Our algorithm simulates the time evolution of such a Hamiltonian on $n$ qubits for time $T$ up to error $\epsilon$ using ${\mathcal O}( nT {polylog} (nT/\epsilon))$ gates with depth ${\mathcal O}(T {polylog} (nT/\epsilon))$. Our algorithm is the first simulation algorithm that achieves gate cost quasilinear in $nT$ and polylogarithmic in $1/\epsilon$. Our algorithm also readily generalizes to time-dependent Hamiltonians and yields an algorithm with similar gate count for any piecewise slowly varying time-dependent bounded local Hamiltonian. We also prove a matching lower bound on the gate count of such a simulation, showing that any quantum algorithm that can simulate a piecewise constant bounded local Hamiltonian in one dimension to constant error requires ${\widetilde{\Omega}}(nT)$ gates in the worst case. The lower bound holds even if we only require the output state to be correct on local measurements. To the best of our knowledge, this is the first nontrivial lower bound on the gate complexity of the simulation problem. Our algorithm is based on a decomposition of the time-evolution unitary into a product of small unitaries using Lieb-Robinson bounds. In the appendix, we prove a Lieb-Robinson bound tailored to Hamiltonians with small commutators between local terms, giving zero Lieb-Robinson velocity in the limit of commuting Hamiltonians. This improves the performance of our algorithm when the Hamiltonian is close to commuting.
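For intuition only, product-formula simulation of a small 1D lattice Hamiltonian can be sketched as below. This is plain first-order Trotterization, not the paper's Lieb-Robinson-based decomposition, and the chain length, couplings, and evolution time are arbitrary choices:

```python
# Sketch: first-order Trotter simulation of exp(-iHT) for a 4-qubit chain with
# nearest-neighbour ZZ couplings and on-site X fields; the approximation error
# shrinks as the number of Trotter steps grows.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 4  # qubits
terms = [kron_all([Z if j in (i, i + 1) else I2 for j in range(n)]) for i in range(n - 1)]
terms += [kron_all([X if j == i else I2 for j in range(n)]) for i in range(n)]
H = sum(terms)

def expm_hermitian(M, t):
    """exp(-i*M*t) for Hermitian M via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

T = 1.0
exact = expm_hermitian(H, T)

def trotter(steps):
    """Spectral-norm error of the first-order product formula with this many steps."""
    step = np.eye(2 ** n, dtype=complex)
    for term in terms:
        step = step @ expm_hermitian(term, T / steps)
    return np.linalg.norm(np.linalg.matrix_power(step, steps) - exact, 2)

errors = [trotter(s) for s in (1, 8, 64)]
print(errors)  # error shrinks roughly like 1/steps
```

Dense matrices cap this at a handful of qubits, which is exactly why circuit decompositions with quasilinear gate cost matter.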
Citation: SIAM Journal on Computing
PubDate: 2021-01-12T08:00:00Z
DOI: 10.1137/18M1231511

 A Weighted Linear Matroid Parity Algorithm

Authors: Satoru Iwata, Yusuke Kobayashi
Abstract: SIAM Journal on Computing, Ahead of Print.
The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Nevertheless, Lovász [Acta Sci. Math., 42 (1980), pp. 121-131] showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then, efficient algorithms have been developed for the linear matroid parity problem. In this paper, we present a combinatorial, deterministic, polynomial-time algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach based on the augmenting path algorithm of Gabow and Stallmann [Combinatorica, 6 (1986), pp. 123-150] for the unweighted problem.
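The polynomial matrix formulation the abstract alludes to can be illustrated in miniature: by Lovász's generic-rank characterization, the optimum of linear matroid parity equals half the maximum rank of $\sum_i x_i (a_i b_i^T - b_i a_i^T)$, and random $x_i$ attain that rank with high probability. The three-line instance below is a made-up toy, not an example from the paper:

```python
# Sketch: evaluate the skew-symmetric generic matrix at random scalars and read
# off the linear matroid parity value as half its rank.
import numpy as np

rng = np.random.default_rng(0)

e = np.eye(4)
# Three "lines" (vector pairs) in R^4; the first two are independent and the
# third duplicates the first, so the optimum parity value is 2.
lines = [(e[0], e[1]), (e[2], e[3]), (e[0], e[1])]

Y = np.zeros((4, 4))
for a, b in lines:
    x = rng.normal()                              # random substitution for x_i
    Y += x * (np.outer(a, b) - np.outer(b, a))    # skew-symmetric rank-2 update

parity_value = np.linalg.matrix_rank(Y) // 2
print(parity_value)  # 2
```

The duplicate line only rescales the first rank-2 block instead of adding a new one, so the rank stays 4 and the parity value is 2, matching the combinatorial optimum.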
Citation: SIAM Journal on Computing
PubDate: 2021-01-07T08:00:00Z
DOI: 10.1137/17M1141709

 Optimization of the Sherrington-Kirkpatrick Hamiltonian

Authors: Andrea Montanari
Abstract: SIAM Journal on Computing, Ahead of Print.
Let ${A}\in{\mathbb R}^{n\times n}$ be a symmetric random matrix with independent and identically distributed (i.i.d.) Gaussian entries above the diagonal. We consider the problem of maximizing $\langle{\sigma},{A}{\sigma}\rangle$ over binary vectors ${\sigma}\in\{+1,-1\}^n$. In the language of statistical physics, this amounts to finding the ground state of the Sherrington-Kirkpatrick model of spin glasses. The asymptotic value of this optimization problem was characterized by Parisi via a celebrated variational principle, subsequently proved by Talagrand. We give an algorithm that, for any $\varepsilon>0$, outputs ${\sigma}_*\in\{-1,+1\}^n$ such that $\langle{\sigma}_*,{A}{\sigma}_*\rangle$ is at least $(1-\varepsilon)$ of the optimum value, with probability converging to one as $n\to\infty$. The algorithm's time complexity is $C(\varepsilon)\, n^2$. We generalize it to matrices with i.i.d., but not necessarily Gaussian, entries, and obtain an algorithm that computes the MAXCUT of a dense Erdős-Rényi random graph to within a factor $(1-\varepsilon\cdot n^{-1/2})$. As a side result, we prove that, at (low) nonzero temperature, the algorithm constructs approximate solutions of the Thouless-Anderson-Palmer equations.
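To get a feel for the objective, the sketch below runs naive single-spin-flip local search on a random instance; this is not the paper's message-passing algorithm, just a baseline showing that energies of order $n^{3/2}$ are reachable. The size and random seed are arbitrary:

```python
# Sketch: greedy single-spin-flip local search for max <sigma, A sigma> on a
# random symmetric matrix with N(0,1) entries above the diagonal.
import numpy as np

rng = np.random.default_rng(1)
n = 200
G = rng.normal(size=(n, n))
A = (G + G.T) / np.sqrt(2)              # symmetric, N(0,1) entries above the diagonal
sigma = rng.choice([-1.0, 1.0], size=n)

def energy(s):
    return float(s @ A @ s)

# Flip any spin whose flip increases the objective; repeat until a local maximum.
improved = True
while improved:
    improved = False
    for i in range(n):
        h = A[i] @ sigma - A[i, i] * sigma[i]   # field on spin i from the other spins
        if sigma[i] * h < 0:                    # flipping i gains -4*sigma[i]*h > 0
            sigma[i] = -sigma[i]
            improved = True

scaled = energy(sigma) / n ** 1.5
print(scaled)  # an order-one constant; the Parisi limit is 2*P_* ~ 1.526
```

Each accepted flip strictly increases the energy, so the loop terminates at a configuration where no single flip helps; closing the remaining gap to $(1-\varepsilon)$ of the Parisi value is what the paper's algorithm achieves.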
Citation: SIAM Journal on Computing
PubDate: 2021-01-07T08:00:00Z
DOI: 10.1137/20M132016X
