SIAM Journal on Computing
Journal Prestige (SJR): 0.874; Citation Impact (CiteScore): 2; Hybrid journal (may contain Open Access articles); ISSN (Print): 0097-5397; ISSN (Online): 1095-7111; Published by the Society for Industrial and Applied Mathematics
 Approximately Counting and Sampling Small Witnesses Using a Colorful Decision Oracle
Authors: Holger Dell, John Lapinskas, Kitty Meeks
Pages: 849-899
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 849-899, August 2022.
In this paper, we design efficient algorithms to approximately count the number of edges of a given $k$-hypergraph, and to sample an approximately uniform random edge. The hypergraph is not given explicitly and can be accessed only through its colorful independence oracle, which returns yes or no depending on whether a given subset of the vertices contains an edge that is colorful with respect to a given vertex-coloring. Our results extend and/or strengthen recent results in the graph oracle literature due to Beame et al. [ACM Trans. Algorithms, 16 (2020), 52], Dell and Lapinskas [Proceedings of STOC, ACM, 2018, pp. 281-288], and Bhattacharya et al. [Proceedings of ISAAC, 2019]. Our results have consequences for approximate counting/sampling: we can turn certain kinds of decision algorithms into approximate counting/sampling algorithms without causing much overhead in the running time. We apply this approximate counting/sampling-to-decision reduction to key problems in fine-grained complexity (such as $k$-SUM, $k$-OV, and weighted $k$-Clique) and parameterized complexity (such as induced subgraphs of size $k$ or weight-$k$ solutions to constraint satisfaction problems).
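The oracle interface described in this abstract can be sketched concretely. The representation below (an explicit edge list and a brute-force scan) is purely illustrative: the point of the paper is that the hypergraph is not stored explicitly, and the function and variable names here are assumptions, not the authors' code.

```python
def colorful_independence_oracle(edges, k, subset, coloring):
    """Return True iff some edge of the k-hypergraph lies inside `subset`
    and is colorful (its k vertices receive k distinct colors) under
    `coloring`.  A real instance would answer this via a decision
    algorithm rather than by enumerating edges as done here."""
    s = set(subset)
    for e in edges:
        if set(e) <= s and len({coloring[v] for v in e}) == k:
            return True
    return False

# Toy 2-hypergraph (an ordinary graph) on vertices 0..3.
edges = [(0, 1), (1, 2), (2, 3)]
coloring = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}
print(colorful_independence_oracle(edges, 2, {0, 1}, coloring))  # False: edge (0,1) is monochromatic
print(colorful_independence_oracle(edges, 2, {1, 2}, coloring))  # True: edge (1,2) is colorful
```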
Citation: SIAM Journal on Computing
PubDate: 2022-07-05T07:00:00Z
DOI: 10.1137/19M130604X
Issue No: Vol. 51, No. 4 (2022)

 A Local Search Framework for Experimental Design

Authors: Lap Chi Lau, Hong Zhou
Pages: 900-951
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 900-951, August 2022.
We present a local search framework to design and analyze both combinatorial algorithms and rounding algorithms for experimental design problems. This framework provides a unifying approach to match and improve all known results in D/A/E-design and to obtain new results in previously unknown settings. For combinatorial algorithms, we provide a new analysis of the classical Fedorov exchange method. We prove that this simple local search algorithm works well as long as there exists an almost-optimal solution with a good condition number. Moreover, we design a new combinatorial local search algorithm for E-design using the regret minimization framework. For rounding algorithms, we provide a unified randomized exchange algorithm to match and improve previous results for D/A/E-design. Furthermore, the algorithm works in the more general setting of approximately satisfying multiple knapsack constraints, which can be used for weighted experimental design and for incorporating fairness constraints into experimental design.
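As an illustration of the kind of local search analyzed here, the following is a minimal sketch of a Fedorov-style exchange for D-design: choose $k$ rows of a design matrix to maximize $\log\det$ of the information matrix. The function name, stopping rule, and greedy swap order are assumptions for illustration, not the algorithm or analysis from the paper.

```python
import numpy as np

def fedorov_exchange_D(X, k, max_rounds=100, seed=0):
    """Illustrative Fedorov-style exchange for D-design: select k rows of
    the design matrix X (one per candidate experiment) to maximize
    log det of the information matrix sum_i x_i x_i^T.  Starts from a
    random subset and greedily swaps a chosen row for an unchosen one
    whenever the swap increases the log-determinant."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape

    def logdet(idx):
        sign, val = np.linalg.slogdet(X[idx].T @ X[idx])
        return val if sign > 0 else -np.inf

    S = list(rng.choice(n, size=k, replace=False))
    best = logdet(S)
    for _ in range(max_rounds):
        improved = False
        for i in range(k):
            for j in range(n):
                if j in S:
                    continue
                cand = S.copy()
                cand[i] = j
                val = logdet(cand)
                if val > best + 1e-12:
                    S, best, improved = cand, val, True
        if not improved:
            break
    return sorted(int(i) for i in S), best
```

On a toy matrix whose best pair of rows is a long orthogonal pair, the exchange reaches that pair from any random start, since every intermediate swap strictly improves the determinant.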
Citation: SIAM Journal on Computing
PubDate: 2022-07-07T07:00:00Z
DOI: 10.1137/20M1386542
Issue No: Vol. 51, No. 4 (2022)

 Differentially Private Learning of Geometric Concepts

Authors: Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer
Pages: 952-974
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 952-974, August 2022.
We present efficient differentially private algorithms for learning unions of polygons in the plane (which are not necessarily convex). Our algorithms are $(\alpha,\beta)$-probably approximately correct and $(\varepsilon,\delta)$-differentially private using a sample of size $\tilde{O}\left(\frac{1}{\alpha\varepsilon}k\log d\right)$, where the domain is $[d]\times[d]$ and $k$ is the number of edges in the union of polygons. Our algorithms are obtained by designing a private variant of the classical (non-private) learner for conjunctions using the greedy algorithm for set cover.
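The greedy set-cover subroutine underlying the classical (non-private) conjunction learner mentioned above is the textbook algorithm; a minimal sketch follows. The private variant in the paper, which replaces the exact greedy choice with a differentially private selection, is not shown.

```python
def greedy_set_cover(universe, sets):
    """Textbook greedy set cover: repeatedly pick the set covering the
    most still-uncovered elements; achieves a ln(n)-factor approximation."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Index of the set with the largest marginal coverage.
        i = max(range(len(sets)), key=lambda j: len(sets[j] & uncovered))
        if not sets[i] & uncovered:
            raise ValueError("universe not coverable by the given sets")
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {4, 5}, {5}]))  # [0, 2]
```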
Citation: SIAM Journal on Computing
PubDate: 2022-07-07T07:00:00Z
DOI: 10.1137/21M1406428
Issue No: Vol. 51, No. 4 (2022)

 Caching with Time Windows and Delays

Authors: Anupam Gupta, Amit Kumar, Debmalya Panigrahi
Pages: 975-1017
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 975-1017, August 2022.
We consider two generalizations of the classical weighted paging problem that incorporate the notion of delayed service of page requests. The first is the (weighted) paging with time windows (PageTW) problem, which is like the classical weighted paging problem except that each page request only needs to be served before a given deadline. This problem arises in many practical applications of online caching, such as the “deadline” I/O scheduler in the Linux kernel and video-on-demand streaming. The second, and more general, problem is the (weighted) paging with delay (PageD) problem, where the delay in serving a page request results in a penalty being added to the objective. This problem generalizes the caching problem to allow delayed service, a line of work that has recently gained traction in online algorithms (e.g., [Y. Emek, S. Kutten, and R. Wattenhofer, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, 2016, pp. 333-344; Y. Azar et al., Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017, pp. 551-563; Y. Azar and N. Touitou, Proceedings of the 60th IEEE Annual Symposium on Foundations of Computer Science, 2019, pp. 60-71]). We give $O(\log k\log n)$-competitive algorithms for both the PageTW and PageD problems on $n$ pages with a cache of size $k$. This significantly improves on the previous best bounds of $O(k)$ for both problems [Y. Azar et al., Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017, pp. 551-563]. We also consider the offline PageTW and PageD problems, for which we give $O(1)$-approximation algorithms and prove APX-hardness. These are the first results for the offline problems; even NP-hardness was not known before our work. At the heart of our algorithms is a novel “hitting-set” LP relaxation of the PageTW problem that overcomes the $\Omega(k)$ integrality gap of the natural LP for the problem. To the best of our knowledge, this is the first example of an LP-based algorithm for an online problem with delays/deadlines.
Citation: SIAM Journal on Computing
PubDate: 2022-07-07T07:00:00Z
DOI: 10.1137/20M1346286
Issue No: Vol. 51, No. 4 (2022)

 A Spectral Approach to Network Design

Authors: Lap Chi Lau, Hong Zhou
Pages: 1018-1064
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1018-1064, August 2022.
We present a spectral approach to designing approximation algorithms for network design problems. We observe that the underlying mathematical questions are spectral rounding problems, which were studied in spectral sparsification and in discrepancy theory. We extend these results to incorporate additional nonnegative linear constraints and show that they can be used to significantly extend the scope of network design problems that can be solved. Our algorithm for spectral rounding is an iterative randomized rounding algorithm based on the regret minimization framework. In some settings, this provides an alternative spectral algorithm that achieves a constant-factor approximation for the classical survivable network design problem, and it partially answers a question of Bansal about survivable network design with the concentration property. We also show many other applications of the spectral rounding results, including weighted experimental design and spectral network design.
Citation: SIAM Journal on Computing
PubDate: 2022-07-13T07:00:00Z
DOI: 10.1137/20M1330762
Issue No: Vol. 51, No. 4 (2022)

 On Ray Shooting for Triangles in 3-Space and Related Problems

Authors: Esther Ezra, Micha Sharir
Pages: 1065-1095
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1065-1095, August 2022.
We consider several intersection searching problems that involve lines in ${\mathbb R}^3$ and present improved algorithms for solving them. The problems include (i) ray shooting amid triangles in ${\mathbb R}^3$, (ii) reporting intersections between query lines (segments, or rays) and input triangles in ${\mathbb R}^3$, as well as approximately counting the number of such intersections, (iii) computing the intersection of two nonconvex polyhedra in ${\mathbb R}^3$, (iv) detecting, counting, or reporting intersections in a set of lines in ${\mathbb R}^3$, and (v) output-sensitive construction of an arrangement of triangles in ${\mathbb R}^3$. Our approach is based on the polynomial partitioning technique. Our ray-shooting algorithm processes a set of $n$ triangles in ${\mathbb R}^3$ into a data structure for answering ray-shooting queries amid the given triangles, which uses $O(n^{3/2+{\varepsilon}})$ storage and expected preprocessing time, and answers a query in $O(n^{1/2+{\varepsilon}})$ time, for any ${\varepsilon}>0$. This is a significant improvement over known results, obtained more than 25 years ago, in which, with this amount of storage, the query time bound is roughly $n^{5/8}$. The algorithms for the other problems have similar performance bounds, with similar improvements over previous results. We also derive a nontrivial improved tradeoff between storage and query time. Using it, we obtain algorithms that answer $m$ queries on $n$ objects in $\max \left\{ O(m^{2/3}n^{5/6+{\varepsilon}} + n^{1+{\varepsilon}}),\; O(m^{5/6+{\varepsilon}}n^{2/3} + m^{1+{\varepsilon}}) \right\}$ expected time for any ${\varepsilon}>0$.
Citation: SIAM Journal on Computing
PubDate: 2022-07-14T07:00:00Z
DOI: 10.1137/21M1408245
Issue No: Vol. 51, No. 4 (2022)

 Contextual Search via Intrinsic Volumes

Authors: Renato Paes Leme, Jon Schneider
Pages: 1096-1125
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1096-1125, August 2022.
We study the problem of contextual search, a multidimensional generalization of binary search that captures many problems in contextual decision-making. In contextual search, a learner is trying to learn the value of a hidden vector $v \in [0,1]^d$. Every round, the learner is provided an adversarially chosen context $u_t \in \mathbb{R}^d$, submits a guess $p_t$ for the value of $\langle u_t, v\rangle$, learns whether $p_t < \langle u_t, v\rangle$, and incurs loss $\ell(\langle u_t, v\rangle, p_t)$ (for some loss function $\ell$). The learner's goal is to minimize their total loss over the course of $T$ rounds. We present an algorithm for the contextual search problem with the symmetric loss function $\ell(\theta, p) = |\theta - p|$ that achieves $O_{d}(1)$ total loss. We present a new algorithm for the dynamic pricing problem (which can be realized as a special case of the contextual search problem) that achieves $O_{d}(\log \log T)$ total loss, improving on the previous best known upper bound of $O_{d}(\log T)$ and matching the known lower bounds (up to a polynomial dependence on $d$). Both algorithms make significant use of ideas from the field of integral geometry, most notably the notion of intrinsic volumes of a convex set. To the best of our knowledge, this is the first application of intrinsic volumes to algorithm design.
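In one dimension (with every context $u_t = 1$), contextual search with the symmetric loss reduces to ordinary binary search, where $O(1)$ total loss is easy to see: the feasible interval halves each round, so the per-round losses are dominated by a geometric series. Below is a sketch of only this baseline; the paper's $d$-dimensional algorithm via intrinsic volumes is far more involved.

```python
def binary_search_total_loss(v, rounds=50):
    """d = 1 baseline: guess the midpoint of the current interval,
    learn which side the hidden v lies on, and pay |v - guess|.
    The loss in round t is at most 2**-t, so the total is below 1."""
    lo, hi = 0.0, 1.0
    total = 0.0
    for _ in range(rounds):
        p = (lo + hi) / 2
        total += abs(v - p)
        if p < v:
            lo = p
        else:
            hi = p
    return total

print(binary_search_total_loss(0.3) < 1.0)  # True
```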
Citation: SIAM Journal on Computing
PubDate: 2022-07-18T07:00:00Z
DOI: 10.1137/20M1385718
Issue No: Vol. 51, No. 4 (2022)

 Tighter Bounds on Multi-Party Coin Flipping via Augmented Weak Martingales and Differentially Private Sampling
Authors: Amos Beimel, Iftach Haitner, Nikolaos Makriyannis, Eran Omri
Pages: 1126-1171
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1126-1171, August 2022.
In his seminal work, Cleve [Proceedings of the 18th Annual ACM Symposium on Theory of Computing, 1986, pp. 364-369] proved that any $r$-round coin-flipping protocol can be efficiently biased by $\Theta(1/r)$. This lower bound was met for the two-party case by Moran, Naor, and Segev [J. Cryptology, 29 (2016), pp. 491-513] and for the three-party case (up to a $\mathrm{polylog}$ factor) by Haitner and Tsfadia [SIAM J. Comput., 46 (2017), pp. 479-542], and was approached for $n$-party protocols when $n < \log\log r$ by Buchbinder et al. [Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms, 2017, pp. 2580-2600]. For $n > \log\log r$, however, the best bias for $n$-party coin-flipping protocols remains $O(n/\sqrt{r})$, achieved by the majority protocol of Awerbuch et al. [How to Implement Bracha's ${O}(\log n)$ Byzantine Agreement Algorithm, manuscript, 1985]. Our main result is a tighter lower bound on the bias of coin-flipping protocols, showing that, for every constant $\varepsilon>0$, an $r^{\varepsilon}$-party $r$-round coin-flipping protocol can be efficiently biased by $\widetilde{\Omega}(1/\sqrt{r})$. As far as we know, this is the first improvement of Cleve's bound, and it is only a (multiplicative) factor of $n=r^{\varepsilon}$ away from the aforementioned upper bound of Awerbuch et al. We prove the above bound using two new results that we believe are of independent interest. The first result is that a sequence of (``augmented'') weak martingales has a large gap: with constant probability there exist two adjacent variables whose gap is at least the ratio between the gap between the first and last variables and the square root of the number of variables. This generalizes the result of Cleve and Impagliazzo [Martingales, Collective Coin Flipping and Discrete Control Processes (Extended Abstract), http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.1797, 1993], who showed that the above holds for strong martingales, and in some settings allows efficient algorithms to exploit this gap. We prove the above using a novel argument that does not follow the more complicated approach of Cleve and Impagliazzo. The second result is a new sampling algorithm that uses a differentially private mechanism to minimize the effect of data divergence.
Citation: SIAM Journal on Computing
PubDate: 2022-07-21T07:00:00Z
DOI: 10.1137/18M1210782
Issue No: Vol. 51, No. 4 (2022)

 Classical Verification of Quantum Computations

Authors: Urmila Mahadev
Pages: 1172-1229
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1172-1229, August 2022.
We present the first protocol allowing a classical computer to interactively verify the result of an efficient quantum computation. We achieve this by constructing a measurement protocol, which enables a classical verifier to use a quantum prover as a trusted measurement device. The protocol forces the prover to behave as follows: the prover must construct an $n$-qubit state of his choice, measure each qubit in the Hadamard or standard basis as directed by the verifier, and report the measurement results to the verifier. The soundness of this protocol is enforced based on the assumption that the learning with errors problem is computationally intractable for efficient quantum machines.
Citation: SIAM Journal on Computing
PubDate: 2022-07-21T07:00:00Z
DOI: 10.1137/20M1371828
Issue No: Vol. 51, No. 4 (2022)

 Testing Linear-Invariant Properties

Authors: Jonathan Tidor, Yufei Zhao
Pages: 1230-1279
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1230-1279, August 2022.
We study the property testing of functions $\mathbb F_p^n\to[R]$ for fixed prime $p$ and positive integer $R$. We work in the natural model where we are allowed to query the function on a random subspace of constant dimension. We say that a property is testable if queries of this form can detect the property with one-sided error. Furthermore, a property is proximity-oblivious testable (PO-testable) if the test is also independent of the proximity parameter $\epsilon$. It is known that a number of natural properties, such as linearity and being a low-degree polynomial, are PO-testable. These properties are examples of linear-invariant properties, meaning that they are preserved under linear automorphisms of the domain. Following work of Kaufman and Sudan, the study of linear-invariant properties has been an important problem in arithmetic property testing. A central conjecture in this field, proposed by Bhattacharyya, Grigorescu, and Shapira, is that a linear-invariant property is testable if and only if it is semi subspace-hereditary. We prove two results; the first resolves this conjecture and the second classifies PO-testable properties: (1) a linear-invariant property is testable if and only if it is semi subspace-hereditary; (2) a linear-invariant property is PO-testable if and only if it is locally characterized. Our innovations are twofold. We give a more powerful version of the compactness argument first introduced by Alon and Shapira. This relies on a new strong arithmetic regularity lemma in which one mixes different levels of Gowers uniformity. This allows us to extend the work of Bhattacharyya, Fischer, Hatami, Hatami, and Lovett by removing the bounded-complexity restriction in their work. Our second innovation is a novel recoloring technique called patching that builds on earlier work by the authors and Fox. This Ramsey-theoretic technique is critical for working in the linear-invariant setting and allows us to remove the translation-invariant restriction present in previous work.
Citation: SIAM Journal on Computing
PubDate: 2022-07-21T07:00:00Z
DOI: 10.1137/21M1397246
Issue No: Vol. 51, No. 4 (2022)

 Perfect Sampling in Infinite Spin Systems Via Strong Spatial Mixing

Authors: Konrad Anand, Mark Jerrum
Pages: 1280-1295
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1280-1295, August 2022.
We present a simple algorithm that perfectly samples configurations from the unique Gibbs measure of a spin system on a potentially infinite graph $G$. The sampling algorithm assumes strong spatial mixing together with subexponential growth of $G$. It produces a finite window onto a perfect sample from the Gibbs distribution. The runtime is linear in the size of the window; in particular, it is constant per vertex.
Citation: SIAM Journal on Computing
PubDate: 2022-07-26T07:00:00Z
DOI: 10.1137/21M1437433
Issue No: Vol. 51, No. 4 (2022)

 A Near-Optimal Algorithm for Shortest Paths Among Curved Obstacles in the Plane
Authors: John Hershberger, Subhash Suri, Hakan Yildiz
Pages: 1296-1340
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1296-1340, August 2022.
We propose an algorithm for the problem of computing shortest paths among curved obstacles in the plane. If the obstacles have $O(n)$ description complexity, then the algorithm runs in $O(n\log n)$ time plus a term dependent on the properties of the boundary arcs. Specifically, if the arcs allow a certain kind of bisector intersection to be computed in constant time, or even in $O(\log n)$ time, then the running time of the overall algorithm is $O(n \log n)$. If the arcs support only constant-time tangent, intersection, and length queries, as is customarily assumed, then the algorithm computes an approximate shortest path, with relative error $\varepsilon$, in time $O(n\log n + n\log \frac{1}{\varepsilon})$. In fact, the algorithm computes an approximate shortest path map, a data structure of $O(n\log n)$ size, that allows it to report the (approximate) length of a shortest path from a fixed source point to any query point in the plane in $O(\log n)$ time. By applying an idea due to Wang [Proceedings of the 32nd Annual ACM-SIAM Symposium on Discrete Algorithms, 2021, pp. 810-821], the algorithm's working storage and the size of the approximate shortest path map can be reduced to $O(n)$.
Citation: SIAM Journal on Computing
PubDate: 2022-08-09T07:00:00Z
DOI: 10.1137/21M1428248
Issue No: Vol. 51, No. 4 (2022)

 Approximating Longest Common Subsequence in Linear Time: Beating the $\sqrt{n}$ Barrier
Authors: MohammadTaghi HajiAghayi, Masoud Seddighin, Saeedreza Seddighin, Xiaorui Sun
Pages: 1341-1367
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1341-1367, August 2022.
Longest common subsequence (LCS) is one of the most fundamental problems in combinatorial optimization. Apart from its theoretical importance, LCS has enormous applications in bioinformatics, revision control systems, and data comparison programs. Although a simple dynamic program computes LCS in quadratic time, it has recently been proven that the problem admits a conditional lower bound and may not be solvable in truly subquadratic time [2, 18]. In addition, LCS is notoriously hard with respect to approximation algorithms. Apart from a trivial sampling technique that obtains an $n^{x}$-approximate solution in time $O(n^{2-2x})$, nothing else is known for LCS. This is in sharp contrast to its dual problem, edit distance, for which several linear-time solutions were obtained in the past two decades [5, 6, 10, 11, 19]. In this work, we present the first nontrivial algorithm for approximating LCS in linear time. Our main result is a linear-time algorithm for LCS with an approximation factor of $O(n^{0.4991319})$ in expectation. This beats the $\sqrt{n}$ barrier for approximating LCS in linear time.
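The "simple dynamic program" that computes LCS in quadratic time is the classical textbook recurrence; for reference:

```python
def lcs_length(a, b):
    """Classical O(|a|*|b|) dynamic program:
    L[i][j] = length of the LCS of a[:i] and b[:j]."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g., "BCBA")
```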
Citation: SIAM Journal on Computing
PubDate: 2022-08-18T07:00:00Z
DOI: 10.1137/19M1272068
Issue No: Vol. 51, No. 4 (2022)

 On CDCL-Based Proof Systems with the Ordered Decision Strategy

Authors: Nathan Mull, Shuo Pang, Alexander Razborov
Pages: 1368-1399
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1368-1399, August 2022.
We prove that conflict-driven clause learning (CDCL) SAT solvers with the ordered decision strategy and the DECISION learning scheme are equivalent to ordered resolution. We also prove that by replacing this learning scheme with its opposite, which stops after the first nonconflict clause when backtracking, they become equivalent to general resolution. To the best of our knowledge, along with [M. Vinyals, “Hard Examples for Common Variable Decision Heuristics,” in Proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020, pp. 1652-1659], this is the first theoretical study of the interplay between specific decision strategies and clause learning. For both results, we allow nondeterminism in the solver's ability to perform unit propagation, conflict analysis, and restarts, in a way that is similar to previous works in the literature. To aid the presentation of our results, and possibly future research, we define a model and language for discussing CDCL-based proof systems that allow for succinct and precise theorem statements.
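Unit propagation, one of the CDCL components in which the model above allows nondeterminism, is a standard subroutine; a minimal sketch follows. The literal encoding and function name are illustrative assumptions, not the paper's formalism.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly find a unit clause (all but one literal false under the
    partial assignment) and assign its remaining literal.  Returns the
    extended assignment, or None on a conflict (an all-false clause).
    Literals are nonzero ints; -x denotes the negation of variable x."""
    assignment = dict(assignment)  # var -> bool
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == val:
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None  # conflict: clause is falsified
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment
```

For example, under the formula $(x_1 \vee x_2)(\bar{x}_1 \vee x_3)(\bar{x}_3 \vee \bar{x}_2)$ with $x_1$ decided false, propagation forces $x_2$ true and then $x_3$ false.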
Citation: SIAM Journal on Computing
PubDate: 2022-08-23T07:00:00Z
DOI: 10.1137/20M1362528
Issue No: Vol. 51, No. 4 (2022)

 QMA-Hardness of Consistency of Local Density Matrices with Applications to Quantum Zero-Knowledge
Authors: Anne Broadbent, Alex Bredariol Grilo
Pages: 1400-1450
Abstract: SIAM Journal on Computing, Volume 51, Issue 4, Page 1400-1450, August 2022.
We provide several advances in the understanding of the class of quantum Merlin-Arthur (QMA) proof systems, the quantum analogue of NP. Our central contribution is proving a long-standing conjecture that the consistency of local density matrices (CLDM) problem is QMA-hard under Karp reductions. The input of CLDM consists of local reduced density matrices on sets of at most $k$ qubits, and the problem asks whether there is an $n$-qubit global quantum state that is locally consistent with all of the $k$-qubit local density matrices. The containment of this problem in QMA and its QMA-hardness under Turing reductions were proved by Liu [in Proceedings of APPROX/RANDOM, 2006, pp. 438-449]. Liu also conjectured that CLDM is QMA-hard under Karp reductions, which is desirable for applications, and we finally prove this conjecture. We establish this result using the techniques of simulatable codes of Grilo, Slofstra, and Yuen [in Proceedings of FOCS, 2019, pp. 611-635], simplifying their proofs and tailoring them to the context of QMA. In order to develop applications of CLDM, we propose a framework that we call locally simulatable proofs for QMA: this provides QMA proofs that can be efficiently verified by probing only $k$ qubits and, furthermore, the reduced density matrix of any $k$-qubit subsystem of an accepting witness can be computed in polynomial time, independently of the witness. Within this framework, we show several advances in zero-knowledge in the quantum setting. We show for the first time a commit-and-open computational zero-knowledge proof system for all of QMA, as a quantum analogue of a “sigma” protocol. We then define a proof of quantum knowledge, which guarantees that a prover is effectively in possession of a quantum witness in an interactive proof, and show that our zero-knowledge proof system satisfies this definition. Finally, we show that our proof system can be used to establish that QMA has a quantum non-interactive zero-knowledge proof system in the secret parameter setting.
Citation: SIAM Journal on Computing
PubDate: 2022-08-31T07:00:00Z
DOI: 10.1137/21M140729X
Issue No: Vol. 51, No. 4 (2022)

 Constant Depth Formula and Partial Function Versions of MCSP Are Hard

Authors: Rahul Ilango
Abstract: SIAM Journal on Computing, Ahead of Print.
Attempts to prove the intractability of the Minimum Circuit Size Problem ($\mathsf{MCSP}$) date as far back as the 1950s and are well motivated by connections to cryptography, learning theory, and average-case complexity. In this work, we make progress on two fronts toward showing that $\mathsf{MCSP}$ is intractable under worst-case assumptions. While Masek showed in the late 1970s that the version of $\mathsf{MCSP}$ for $\mathsf{DNF}$ formulas is $\mathsf{NP}$-hard, extending this result to the case of depth-3 AND/OR formulas was open. We show that determining the minimum size of a depth-$d$ formula computing a given Boolean function is $\mathsf{NP}$-hard under quasi-polynomial-time randomized reductions for all constant $d \geq 2$. Our approach is based on a method to “lift” depth-$d$ formula lower bounds to depth-$(d+1)$. This method also implies the existence of a function with a $2^{\Omega_d(n)}$ additive gap between its depth-$d$ and depth-$(d+1)$ formula complexity. We also make progress in the case of general, unrestricted circuits. We show that the version of $\mathsf{MCSP}$ where the input is a partial function (represented by a string in $\{0,1,\star\}^*$) is not in $\mathsf{P}$ under the Exponential Time Hypothesis (ETH). Intriguingly, we formulate a notion of lower bound statements being $(\mathsf{P/poly})$-recognizable that is closely related to Razborov and Rudich's definition of being $(\mathsf{P/poly})$-constructive. We show that unless there are subexponential-sized circuits computing $\mathsf{SAT}$, the collection of lower bound statements used to prove the correctness of our reductions cannot be $(\mathsf{P/poly})$-recognizable.
Citation: SIAM Journal on Computing
PubDate: 2022-08-31T07:00:00Z
DOI: 10.1137/20M1383562

 Random Walks and Forbidden Minors II: A $\mathrm{poly}(d\varepsilon^{-1})$-Query Tester for Minor-Closed Properties of Bounded-Degree Graphs
Authors: Akash Kumar, C. Seshadhri, Andrew M. Stolman
Abstract: SIAM Journal on Computing, Ahead of Print.
Let $G$ be a graph with $n$ vertices and maximum degree $d$. Fix some minor-closed property $\mathcal{P}$ (such as planarity). We say that $G$ is $\varepsilon$-far from $\mathcal{P}$ if one has to remove $\varepsilon dn$ edges to make it have $\mathcal{P}$. The problem of property testing $\mathcal{P}$ was introduced in the seminal work of Benjamini, Schramm, and Shapira (Symposium on the Theory of Computing 2008), which gave a tester with query complexity triply exponential in $\varepsilon^{-1}$. Levi and Ron [ACM Trans. Algorithms, 11 (2015), 24] have given the best tester to date, with a quasi-polynomial (in $\varepsilon^{-1}$) query complexity. It remained an open problem whether there is a property tester whose query complexity is $\mathrm{poly}(d\varepsilon^{-1})$, even for planarity. In this paper, we resolve this open question. For any minor-closed property, we give a tester with query complexity $d\cdot \mathrm{poly}(\varepsilon^{-1})$. The previous line of work on (independent of $n$, two-sided) testers is primarily combinatorial. Our work, on the other hand, employs techniques from spectral graph theory. This paper is a continuation of recent work of the authors (Foundations of Computer Science 2018) analyzing random walk algorithms that find forbidden minors.
Citation: SIAM Journal on Computing
PubDate: 2022-08-16T07:00:00Z
DOI: 10.1137/20M1312824

 $O(\log^2{k}/\log\log{k})$-Approximation Algorithm for Directed Steiner Tree: A Tight Quasi-Polynomial Time Algorithm
Authors: Fabrizio Grandoni, Bundit Laekhanukit, Shi Li
Abstract: SIAM Journal on Computing, Ahead of Print.
In the directed Steiner tree (DST) problem, we are given an $n$-vertex directed edge-weighted graph, a root $r$, and a collection of $k$ terminal nodes. Our goal is to find a minimum-cost subgraph that contains a directed path from $r$ to every terminal. We present an $O(\log^2 k/\log\log{k})$-approximation algorithm for DST that runs in quasi-polynomial time, i.e., in time $n^{\mathrm{poly}\log(k)}$. By assuming the projection game conjecture and $\mathrm{NP}\not\subseteq\bigcap_{0<\epsilon<1}\mathrm{ZPTIME}(2^{n^{\epsilon}})$, we show that our approximation guarantee is asymptotically tight for quasi-polynomial-time algorithms.
Citation: SIAM Journal on Computing
PubDate: 2022-07-28T07:00:00Z
DOI: 10.1137/20M1312988

 Correlation Decay and Partition Function Zeros: Algorithms and Phase
Transitions
Authors: Jingcheng Liu, Alistair Sinclair, Piyush Srivastava
Abstract: SIAM Journal on Computing, Ahead of Print.
We explore connections between the phenomenon of correlation decay (more precisely, strong spatial mixing) and the location of Lee-Yang and Fisher zeros for various spin systems. In particular, we show that, in many instances, proofs showing that weak spatial mixing on the Bethe lattice (infinite $\Delta$-regular tree) implies strong spatial mixing on all graphs of maximum degree $\Delta$ can be lifted to the complex plane, establishing the absence of zeros of the associated partition function in a complex neighborhood of the region in parameter space corresponding to strong spatial mixing. This allows us to give unified proofs of several recent results of this kind, including the resolution by Peters and Regts of the Sokal conjecture for the partition function of the hard-core lattice gas. It also allows us to prove new results on the location of Lee-Yang zeros of the antiferromagnetic Ising model. We show further that our methods extend to the case when weak spatial mixing on the Bethe lattice is not known to be equivalent to strong spatial mixing on all graphs. In particular, we show that results on strong spatial mixing in the antiferromagnetic Potts model can be lifted to the complex plane to give new zero-freeness results for the associated partition function, significantly sharpening previous results of Sokal and others. This new extension is also of independent algorithmic interest: it allows us to give the first polynomial-time deterministic approximation algorithm (a fully polynomial time approximation scheme (FPTAS)) for counting the number of $q$-colorings of a graph of maximum degree $\Delta$ provided only that $q\ge 2\Delta$, a question that has been studied intensively. This matches the natural bound for randomized algorithms obtained by a straightforward application of Markov chain Monte Carlo.
In the case when the graph is also triangle-free, we show that our algorithm applies under the weaker condition $q \geq \alpha\Delta + \beta$, where $\alpha \approx 1.764$ and $\beta = \beta(\alpha)$ are absolute constants.
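The counting problem targeted by the FPTAS can be stated concretely. Below is a minimal brute-force sketch (exponential in the number of vertices, for illustration only; the function name and graph encoding are our own, and this is unrelated to the paper's algorithm):

```python
from itertools import product

def count_proper_colorings(n, edges, q):
    """Count assignments of q colors to n vertices in which no edge
    is monochromatic (brute force, exponential in n)."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(q), repeat=n)
    )
```

For example, a triangle has $3! = 6$ proper 3-colorings; the FPTAS approximates such counts deterministically in polynomial time whenever $q \ge 2\Delta$.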
Citation: SIAM Journal on Computing
PubDate: 2022-07-26T07:00:00Z
DOI: 10.1137/20M1317384

 Decodable Quantum LDPC Codes beyond the $\sqrt{n}$ Distance Barrier Using High-Dimensional Expanders
Authors: Shai Evra, Tali Kaufman, Gilles Zémor
Abstract: SIAM Journal on Computing, Ahead of Print.
Constructing quantum low-density parity-check (LDPC) codes with a minimum distance that grows faster than a square root of the length has been a major challenge of the field. With this challenge in mind, we investigate constructions that come from high-dimensional expanders, in particular Ramanujan complexes. These naturally give rise to very unbalanced quantum error correcting codes that have a large $X$-distance but a much smaller $Z$-distance. However, together with a classical expander LDPC code and a tensoring method that generalizes a construction of Hastings and also the Tillich-Zémor construction of quantum codes, we obtain quantum LDPC codes whose minimum distance exceeds the square root of the code length and whose dimension comes close to a square root of the code length. When the ingredient is a 2-dimensional Ramanujan complex, or the 2-skeleton of a 3-dimensional Ramanujan complex, we obtain a quantum LDPC code of minimum distance $n^{1/2}\log^{1/2}n$. We then exploit the expansion properties of the complex to devise the first polynomial-time algorithm that decodes above the square root barrier for quantum LDPC codes. Using a 3-dimensional Ramanujan complex, we also obtain an overall quantum code of minimum distance $n^{1/2}\log n$, which sets a new record for quantum LDPC codes.
Citation: SIAM Journal on Computing
PubDate: 2022-06-14T07:00:00Z
DOI: 10.1137/20M1383689

 An $O(\log \log m)$ Prophet Inequality for Subadditive Combinatorial
Auctions
Authors: Paul Dütting, Thomas Kesselheim, Brendan Lucier
Abstract: SIAM Journal on Computing, Ahead of Print.
Prophet inequalities compare the expected performance of an online algorithm for a stochastic optimization problem to the expected optimal solution in hindsight. They are a major alternative to classic worst-case competitive analysis, of particular importance in the design and analysis of simple (posted-price) incentive compatible mechanisms with provable approximation guarantees. A central open problem in this area concerns subadditive combinatorial auctions. Here $n$ agents with subadditive valuation functions compete for the assignment of $m$ items. The goal is to find an allocation of the items that maximizes the total value of the assignment. The question is whether there exists a prophet inequality for this problem that significantly beats the best known approximation factor of $O(\log m)$. We make major progress on this question by providing an $O(\log \log m)$ prophet inequality. Our proof goes through a novel primal-dual approach. It is also constructive, resulting in an online policy that takes the form of static and anonymous item prices that can be computed in polynomial time given appropriate query access to the valuations. As an application of our approach, we construct a simple and incentive compatible mechanism based on posted prices that achieves an $O(\log \log m)$ approximation to the optimal revenue for subadditive valuations under an item-independence assumption.
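To illustrate the mechanics of a static, anonymous posted-price policy, here is a drastically simplified sketch with additive (rather than subadditive) valuations and hand-picked prices; the function name and data encoding are our own, and the paper's actual guarantee comes from a careful choice of prices:

```python
def posted_price_allocation(valuations, prices):
    """Toy posted-price mechanism. Agents arrive online; each takes every
    remaining item it values strictly above the static, anonymous price.

    valuations: list of dicts item -> value (additive, a simplification;
    the paper handles general subadditive valuations)."""
    remaining = set(prices)
    allocation = []
    for val in valuations:
        bought = {i for i in remaining if val.get(i, 0) > prices[i]}
        remaining -= bought
        allocation.append(bought)
    return allocation
```

With prices `{'a': 1, 'b': 2}` and agents valuing `{'a': 3, 'b': 1}` and `{'b': 5}`, the first agent takes only `a` and the second takes `b`.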
Citation: SIAM Journal on Computing
PubDate: 2022-05-19T07:00:00Z
DOI: 10.1137/20M1382799

 A Parameterized Approximation Scheme for Min $k$-Cut

Authors: Daniel Lokshtanov, Saket Saurabh, Vaishali Surianarayanan
Abstract: SIAM Journal on Computing, Ahead of Print.
In the Min $k$-Cut problem, the input consists of an edge-weighted graph $G$ and an integer $k$, and the task is to partition the vertex set into $k$ nonempty sets such that the total weight of the edges with endpoints in different parts is minimized. When $k$ is part of the input, the problem is NP-complete and hard to approximate within any factor less than 2. Recently, the problem has received significant attention from the perspective of parameterized approximation. Gupta, Lee, and Li [Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, A. Czumaj, ed., SIAM, Philadelphia, 2018, pp. 2821-2837] initiated the study of FPT-approximation for the Min $k$-Cut problem and gave a 1.9997-approximation algorithm running in time $2^{\mathcal{O}(k^6)}n^{\mathcal{O}(1)}$. Later, the same set of authors [Proceedings of the 59th IEEE Annual Symposium on Foundations of Computer Science, M. Thorup, ed., 2018, pp. 113-123] designed a $(1+\epsilon)$-approximation algorithm that runs in time $(k/\epsilon)^{\mathcal{O}(k)}n^{k+\mathcal{O}(1)}$ and a 1.81-approximation algorithm running in time $2^{\mathcal{O}(k^2)}n^{\mathcal{O}(1)}$. More recently, Kawarabayashi and Lin [Proceedings of the 31st ACM-SIAM Symposium on Discrete Algorithms, S. Chawla, ed., SIAM, Philadelphia, 2020, pp. 990-999] gave a $(5/3 + \epsilon)$-approximation for Min $k$-Cut running in time $2^{\mathcal{O}(k^2 \log k)}n^{\mathcal{O}(1)}$. In this paper, we give a parameterized approximation algorithm with the best possible approximation guarantee and the best possible running time dependence on said guarantee (up to the exponential time hypothesis and constants in the exponent). In particular, for every $\epsilon > 0$, the algorithm obtains a $(1+\epsilon)$-approximate solution in time $(k/\epsilon)^{\mathcal{O}(k)}n^{\mathcal{O}(1)}$.
The main ingredients of our algorithm are a simple sparsification procedure, a new polynomial-time algorithm for decomposing a graph into highly connected parts, and a new exact algorithm with running time $s^{\mathcal{O}(k)}n^{\mathcal{O}(1)}$ on unweighted (multi)graphs. Here, $s$ denotes the number of edges in a minimum $k$-cut. The latter two are of independent interest.
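To make the Min $k$-Cut objective concrete, here is a brute-force sketch (exponential in the number of vertices, for illustration only; identifiers are our own, and this is unrelated to the paper's algorithm):

```python
from itertools import product

def k_cut_weight(edges, parts):
    """Total weight of edges whose endpoints lie in different parts."""
    side = {v: i for i, part in enumerate(parts) for v in part}
    return sum(w for u, v, w in edges if side[u] != side[v])

def min_k_cut_brute_force(vertices, edges, k):
    """Try every assignment of vertices to k labels, keeping only those
    that use all k labels (so every part is nonempty)."""
    best = None
    for assign in product(range(k), repeat=len(vertices)):
        if len(set(assign)) < k:
            continue
        parts = [[v for v, a in zip(vertices, assign) if a == i]
                 for i in range(k)]
        w = k_cut_weight(edges, parts)
        if best is None or w < best:
            best = w
    return best
```

On the weighted triangle $ab{=}1$, $bc{=}2$, $ac{=}3$ with $k=2$, the optimum isolates $b$ for a cut weight of $1+2=3$.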
Citation: SIAM Journal on Computing
PubDate: 2022-04-25T07:00:00Z
DOI: 10.1137/20M1383197

 Unit Capacity Maxflow in Almost $m^{4/3}$ Time

Authors: Tarun Kathuria, Yang P. Liu, Aaron Sidford
Abstract: SIAM Journal on Computing, Ahead of Print.
We present an algorithm which, given any $m$-edge directed graph with positive integer capacities at most $U$, vertices $a$ and $b$, and an approximation parameter $\epsilon \in (0, 1)$, computes an additive $\epsilon mU$-approximate $a$-$b$ maximum flow in time $m^{1+o(1)}/\sqrt{\epsilon}$. By applying the algorithm for $\epsilon = (mU)^{-2/3}$, rounding to an integral flow, and using augmenting paths, we obtain an algorithm which computes an exact $a$-$b$ maximum flow in time $m^{4/3+o(1)}U^{1/3}$ and an algorithm which, given an $m$-edge bipartite graph, computes an exact maximum cardinality matching in time $m^{4/3+o(1)}$.
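The parameter choice behind the exact bound can be checked directly. Setting $\epsilon = (mU)^{-2/3}$ balances the two phases:

```latex
\underbrace{\epsilon m U = (mU)^{1/3}}_{\text{additive error} \;\Rightarrow\; \le (mU)^{1/3} \text{ augmenting paths}},
\qquad
\frac{m^{1+o(1)}}{\sqrt{\epsilon}} = m^{1+o(1)} \cdot (mU)^{1/3} = m^{4/3+o(1)} U^{1/3}.
```

Since each augmenting path takes $O(m)$ time, the augmentation phase also costs at most $(mU)^{1/3} \cdot O(m) = O(m^{4/3}U^{1/3})$, matching the approximate-flow phase.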
Citation: SIAM Journal on Computing
PubDate: 2022-04-18T07:00:00Z
DOI: 10.1137/20M1383525

 Counting Small Induced Subgraphs Satisfying Monotone Properties

Authors: Marc Roth, Johannes Schmitt, Philip Wellnitz
Abstract: SIAM Journal on Computing, Ahead of Print.
Given a graph property $\Phi$, the problem $\#\mathsf{IndSub}(\Phi)$ asks, on input of a graph $G$ and a positive integer $k$, to compute the number of induced subgraphs of size $k$ in $G$ that satisfy $\Phi$. The search for explicit criteria on $\Phi$ ensuring that $\#\mathsf{IndSub}(\Phi)$ is hard was initiated by Jerrum and Meeks [J. Comput. System Sci., 81 (2015), pp. 702-716] and is part of the major line of research on counting small patterns in graphs. However, apart from an implicit result due to Curticapean, Dell, and Marx [STOC, ACM, New York, pp. 151-158] proving that a full classification into “easy” and “hard” properties is possible and some partial results on edge-monotone properties due to Meeks [Discrete Appl. Math., 198 (2016), pp. 170-194] and Dörfler et al. [MFCS, LIPIcs Leibniz Int. Proc. Inform. 138, Wadern, Germany, 2019, 26], not much is known. In this work, we fully answer and explicitly classify the case of monotone, that is, subgraph-closed, properties: We show that for any nontrivial monotone property $\Phi$, the problem $\#\mathsf{IndSub}(\Phi)$ cannot be solved in time $f(k)\cdot |V(G)|^{o(k/\log^{1/2}(k))}$ for any function $f$, unless the exponential time hypothesis fails. By this, we establish that any significant improvement over the brute-force approach is unlikely; in the language of parameterized complexity, we also obtain a $\#\mathsf{W[1]}$-completeness result. The methods we develop for the above problem also allow us to prove a conjecture by Jerrum and Meeks [ACM Trans. Comput. Theory, 7 (2015), 11; Combinatorica, 37 (2017), pp. 965-990]: $\#\mathsf{IndSub}(\Phi)$ is $\#\mathsf{W[1]}$-complete if $\Phi$ is a nontrivial graph property only depending on the number of edges of the graph.
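On small inputs, $\#\mathsf{IndSub}(\Phi)$ can be computed by exactly the brute-force enumeration that the lower bound shows is essentially unavoidable. A sketch (identifiers and the graph encoding are our own):

```python
from itertools import combinations

def count_ind_sub(n, edges, k, prop):
    """#IndSub(Phi): count k-vertex subsets of an n-vertex graph whose
    induced subgraph satisfies prop, by brute force over all C(n, k)
    subsets. prop receives the chosen vertices and the induced edges."""
    edge_set = {frozenset(e) for e in edges}
    total = 0
    for subset in combinations(range(n), k):
        s = set(subset)
        induced = [e for e in edge_set if e <= s]
        if prop(s, induced):
            total += 1
    return total

# A monotone (subgraph-closed) property: being edgeless.
edgeless = lambda vertices, induced_edges: len(induced_edges) == 0
```

On the path $0\!-\!1\!-\!2$ with $k=2$, the only edgeless pair is $\{0,2\}$, so the count is 1.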
Citation: SIAM Journal on Computing
PubDate: 2022-04-11T07:00:00Z
DOI: 10.1137/20M1365624

 Smoothing the Gap Between NP and $\exists \mathbb{R}$

Authors: Jeff Erickson, Ivor van der Hoog, Tillmann Miltzow
Abstract: SIAM Journal on Computing, Ahead of Print.
We study algorithmic problems that belong to the complexity class of the existential theory of the reals ($\exists \mathbb{R}$). A problem is $\exists \mathbb{R}$-complete if it is as hard as the problem existential theory of the reals (ETR) and if it can be written as an ETR formula. Traditionally, these problems are studied in the real random access machine (RAM), a model of computation that assumes that the storage and comparison of real-valued numbers can be done in constant space and time, with infinite precision. The complexity class $\exists \mathbb{R}$ is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. The real RAM assumption that we can represent and compare arbitrary irrational values in constant space and time is not very realistic. Yet this assumption is vital, since some $\exists \mathbb{R}$-complete problems have an “exponential bit phenomenon,” where there exists an input for the problem such that the witness of the solution requires geometric coordinates which need exponential word size when represented in binary. The problems that exhibit this phenomenon are NP-hard (since ETR is NP-hard), but it is unknown if they lie in NP. NP membership is often shown by using the famous Cook-Levin theorem, which states that the existence of a polynomial-time verification algorithm for the problem witness is equivalent to NP membership. The exponential bit phenomenon prohibits a straightforward application of the Cook-Levin theorem. In this paper we first present a result which we believe to be of independent interest: we prove a real RAM analogue to the Cook-Levin theorem which shows that $\exists \mathbb{R}$ membership is equivalent to having a verification algorithm that runs in polynomial time on a real RAM. This gives an easy proof of $\exists \mathbb{R}$-membership, as verification algorithms on a real RAM are much more versatile than ETR formulas.
We use this result to construct a framework to study $\exists \mathbb{R}$-complete problems under smoothed analysis. We show that for a wide class of $\exists \mathbb{R}$-complete problems, their witness can be represented with logarithmic input precision by using smoothed analysis on their real RAM verification algorithm. This shows in a formal way that the boundary between NP and $\exists \mathbb{R}$ (formed by inputs whose solution witness needs high input precision) consists of contrived inputs. We apply our framework to well-studied $\exists \mathbb{R}$-complete recognition problems which have the exponential bit phenomenon, such as the recognition of realizable order types or the Steinitz problem in fixed dimension. Interestingly, our techniques also generalize to problems with a natural notion of resource augmentation (geometric packing, the art gallery problem).
Citation: SIAM Journal on Computing
PubDate: 2022-04-07T07:00:00Z
DOI: 10.1137/20M1385287

 Truly Optimal Euclidean Spanners

Authors: Hung Le, Shay Solomon
Abstract: SIAM Journal on Computing, Ahead of Print.
Euclidean spanners are important geometric structures, having found numerous applications over the years. Cornerstone results in this area from the late 1980s and early 1990s state that for any $d$-dimensional $n$-point Euclidean space, there exists a $(1+\epsilon)$-spanner with $n \cdot O(\epsilon^{-d+1})$ edges and lightness (normalized weight) $O(\epsilon^{-2d})$. Surprisingly, the fundamental question of whether or not these dependencies on $\epsilon$ and $d$ for small $d$ can be improved has remained elusive, even for $d = 2$. This question naturally arises in any application of Euclidean spanners where precision is a necessity (thus $\epsilon$ is tiny). In the most extreme case $\epsilon$ is inverse polynomial in $n$, and then one could potentially improve the size and lightness bounds by factors that are polynomial in $n$. The state-of-the-art bounds $n \cdot O(\epsilon^{-d+1})$ and $O(\epsilon^{-2d})$ on the size and lightness of spanners are realized by the greedy spanner. In 2016, in a preliminary version, Filtser and Solomon [SIAM J. Comput., 49 (2020), pp. 429-447] proved that, in low-dimensional spaces, the greedy spanner is “near-optimal”; informally, their result states that the greedy spanner for dimension $d$ is just as sparse and light as any other spanner but for dimension larger by a constant factor. Hence the question of whether the greedy spanner is truly optimal remained open to date. The contribution of this paper is twofold: (1) We resolve these longstanding questions by nailing down the dependencies on $\epsilon$ and $d$ and showing that the greedy spanner is truly optimal.
Specifically, for any $d = O(1)$ and $\epsilon = \Omega(n^{-\frac{1}{d-1}})$, (a) we show that there are $n$-point sets in $\mathbb{R}^d$ for which any $(1+\epsilon)$-spanner must have $n \cdot \Omega(\epsilon^{-d+1})$ edges, implying that the greedy (and other) spanners achieve the optimal size; (b) we show that there are $n$-point sets in $\mathbb{R}^d$ for which any $(1+\epsilon)$-spanner must have lightness $\Omega(\epsilon^{-d})$, and then improve the upper bound on the lightness of the greedy spanner from $O(\epsilon^{-2d})$ to $O(\epsilon^{-d} \log(\epsilon^{-1}))$. (The lightness upper and lower bounds match up to a lower-order term.) (2) We then complement our negative result for the size of spanners with a rather counterintuitive positive result: Steiner points lead to a quadratic improvement in the size of spanners! Our bound for the size of Steiner spanners in $\mathbb{R}^2$ is tight as well (up to a lower-order term).
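The greedy spanner central to these bounds is simple to state: examine point pairs in order of increasing distance, and add an edge only when the partial spanner already stretches that pair by more than $1+\epsilon$. A small unoptimized sketch (identifiers are ours):

```python
import heapq
import math
from itertools import combinations

def greedy_spanner(points, eps):
    """Greedy (1+eps)-spanner on Euclidean points; returns the edge list."""
    n = len(points)

    def dist(i, j):
        return math.dist(points[i], points[j])

    adj = {i: [] for i in range(n)}

    def spanner_dist(s, t):
        # Dijkstra on the partial spanner built so far.
        dists = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > dists.get(u, math.inf):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dists.get(v, math.inf):
                    dists[v] = nd
                    heapq.heappush(pq, (nd, v))
        return math.inf

    edges = []
    for i, j in sorted(combinations(range(n), 2), key=lambda p: dist(*p)):
        d = dist(i, j)
        if spanner_dist(i, j) > (1 + eps) * d:  # pair too stretched: add edge
            adj[i].append((j, d))
            adj[j].append((i, d))
            edges.append((i, j))
    return edges
```

On the unit square, a strict stretch bound ($\epsilon = 0.1$) forces all six edges, while a loose one ($\epsilon = 1$) keeps only the four sides, since each diagonal is covered by a two-hop path of length 2.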
Citation: SIAM Journal on Computing
PubDate: 2022-03-31T07:00:00Z
DOI: 10.1137/20M1317906

 Performance of Johnson-Lindenstrauss Transform for $k$-Means and $k$-Medians Clustering
Authors: Konstantin Makarychev, Yury Makarychev, Ilya Razenshteyn
Abstract: SIAM Journal on Computing, Ahead of Print.
Consider an instance of Euclidean $k$-means or $k$-medians clustering. We show that the cost of the optimal solution is preserved up to a factor of $(1+\varepsilon)$ under a projection onto a random $O(\log(k / \varepsilon) / \varepsilon^2)$-dimensional subspace. Further, the cost of every clustering is preserved within $(1+\varepsilon)$. More generally, our result applies to any dimension reduction map satisfying a mild sub-Gaussian-tail condition. Our bound on the dimension is nearly optimal. Additionally, our result applies to Euclidean $k$-clustering with the distances raised to the $p$th power for any constant $p$. For $k$-means, our result resolves an open problem posed by Cohen et al. [STOC 2015, ACM, New York, 2015, pp. 163-172]; for $k$-medians, it answers a question raised by Kannan.
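The dimension reduction map itself is elementary; below is a pure-Python sketch of a Gaussian Johnson-Lindenstrauss projection (our own encoding; the paper's contribution is the analysis showing that such maps preserve clustering costs, not the map itself):

```python
import math
import random

def jl_project(points, target_dim, seed=0):
    """Map points (lists of floats) to target_dim dimensions via a random
    matrix with i.i.d. N(0, 1/target_dim) entries, so squared Euclidean
    distances are preserved in expectation."""
    rng = random.Random(seed)
    d = len(points[0])
    rows = [[rng.gauss(0.0, 1.0) / math.sqrt(target_dim) for _ in range(d)]
            for _ in range(target_dim)]
    return [[sum(r[i] * p[i] for i in range(d)) for r in rows] for p in points]
```

With `target_dim` on the order of $\log(k/\varepsilon)/\varepsilon^2$, all pairwise distances (and hence clustering costs) are preserved up to small multiplicative error with high probability.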
Citation: SIAM Journal on Computing
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/20M1330701

 Polynomial Data Structure Lower Bounds in the Group Model

Authors: Alexander Golovnev, Gleb Posobin, Oded Regev, Omri Weinstein
Abstract: SIAM Journal on Computing, Ahead of Print.
Proving super-logarithmic data structure lower bounds in the static group model has been a fundamental challenge in computational geometry since the early '80s. We prove a polynomial ($n^{\Omega(1)}$) lower bound for an explicit range counting problem of $n^3$ convex polygons in $\mathbb{R}^2$ (each with $n^{\tilde{O}(1)}$ facets/semialgebraic complexity), against linear storage arithmetic data structures in the group model. Our construction and analysis are based on a combination of techniques in Diophantine approximation, pseudorandomness, and compressed sensing; in particular, on the existence and partial derandomization of optimal binary compressed sensing matrices in the polynomial sparsity regime ($k = n^{1-\delta}$). As a byproduct, this establishes a (logarithmic) separation between compressed sensing matrices and the stronger RIP property.
Citation: SIAM Journal on Computing
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/20M1381988

 Efficient Construction of Rigid Matrices Using an NP Oracle

Authors: Josh Alman, Lijie Chen
Abstract: SIAM Journal on Computing, Ahead of Print.
For a matrix $H$ over a field $\mathbb{F}$, its rank-$r$ rigidity, denoted by $\mathscr{R}_{H}(r)$, is the minimum Hamming distance from $H$ to a matrix of rank at most $r$ over $\mathbb{F}$. A central open challenge in complexity theory is to give explicit constructions of rigid matrices for a variety of parameter settings. In this work, building on Williams' seminal connection between circuit-analysis algorithms and lower bounds [J. ACM, 61 (2014), 2], we give a construction of rigid matrices in $\mathsf{P}^{\mathsf{NP}}$. Letting $q = p^r$ be a prime power, we show that there is an absolute constant $\delta>0$ such that, for all constants $\varepsilon>0$, there is a $\mathsf{P}^{\mathsf{NP}}$ machine $M$ such that, for infinitely many $N$'s, $M(1^N)$ outputs a matrix $H_N \in \{0,1\}^{N \times N}$ with $\mathscr{R}_{H_N}(2^{(\log N)^{1/4 - \varepsilon}}) \ge \delta \cdot N^2$ over $\mathbb{F}_q$. Using known connections between matrix rigidity and other topics in complexity theory, we derive several consequences of our constructions, including that there is a function $f \in \mathsf{TIME}[2^{(\log n)^{\omega(1)}}]^{\mathsf{NP}}$ such that $f \notin \mathsf{PH}^{\mathsf{cc}}$. Previously, it was open whether $\mathsf{E}^{\mathsf{NP}} \subset \mathsf{PH}^{\mathsf{cc}}$. For all $\varepsilon>0$, there is a $\mathsf{P}^{\mathsf{NP}}$ machine $M$ such that, for infinitely many $N$'s, $M(1^N)$ outputs an $N \times N$ matrix $H_N \in \{0,1\}^{N \times N}$ whose linear transformation requires depth-2 $\mathbb{F}_q$-linear circuits of size $\Omega(N \cdot 2^{(\log N)^{1/4 - \varepsilon}})$. The previous best lower bound for an explicit family of $N \times N$ matrices over $\mathbb{F}_q$ was only $\Omega(N \log^2 N / (\log\log N)^2)$, for asymptotically good error-correcting codes.
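Rigidity is easy to compute exactly for tiny matrices, which helps fix intuition for the definition. A brute-force sketch over $\mathbb{F}_2$ for rank $r = 1$ (identifiers are ours):

```python
from itertools import product

def rigidity_rank1_gf2(H):
    """R_H(1) over F_2 for a small 0/1 matrix H: the minimum Hamming
    distance from H to any matrix of rank <= 1. Such matrices over F_2
    are exactly the outer products u v^T (including the zero matrix)."""
    n, m = len(H), len(H[0])
    best = n * m
    for u in product((0, 1), repeat=n):
        for v in product((0, 1), repeat=m):
            d = sum(H[i][j] != (u[i] & v[j])
                    for i in range(n) for j in range(m))
            best = min(best, d)
    return best
```

For the $3 \times 3$ identity, the nearest rank-$\le 1$ matrix (e.g., $e_1 e_1^{\mathsf{T}}$) differs in the two remaining diagonal entries, so the rigidity is 2.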
Citation: SIAM Journal on Computing
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/20M1322297

 The Complexity of Necklace Splitting, Consensus-Halving, and Discrete Ham Sandwich
Authors: Aris FilosRatsikas, Paul W. Goldberg
Abstract: SIAM Journal on Computing, Ahead of Print.
We resolve the computational complexity of three problems known as Necklace Splitting, Consensus-Halving, and Discrete Ham Sandwich, showing that they are PPA-complete. For Necklace Splitting, this result is specific to the important special case in which two thieves share the necklace. These are the first PPA-completeness results for problems whose definition does not contain an explicit circuit, thus settling the status of PPA as a class that captures the complexity of such “natural” problems.
Citation: SIAM Journal on Computing
PubDate: 2022-02-28T08:00:00Z
DOI: 10.1137/20M1312678

 Weak Zero-Knowledge beyond the Black-Box Barrier

Authors: Nir Bitansky, Dakshita Khurana, Omer Paneth
Abstract: SIAM Journal on Computing, Ahead of Print.
The round complexity of zero-knowledge protocols is a longstanding open question and is yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Like full-fledged zero knowledge, protocols satisfying these relaxed notions under standard assumptions have at least four messages. The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box. We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. Our main results are (1) weak zero-knowledge for NP in two messages, assuming quasi-polynomially secure fully homomorphic encryption and other standard primitives (known from quasi-polynomial hardness of learning with errors) as well as subexponentially secure one-way functions; and (2) weak zero-knowledge for NP in three messages under standard polynomial assumptions (following, for example, from fully homomorphic encryption and factoring). We also give, under polynomial assumptions, a two-message witness-hiding protocol for any language $L$ in NP that has a witness encryption scheme. This protocol is also publicly verifiable. Our technique is based on a new homomorphic trapdoor paradigm, which can be seen as a non-black-box analogue of the classic Feige-Lapidot-Shamir trapdoor paradigm.
Citation: SIAM Journal on Computing
PubDate: 2022-01-27T08:00:00Z
DOI: 10.1137/20M1319565
