Abstract: We consider two known lower bounds on randomized communication complexity: the smooth rectangle bound and the logarithm of the approximate nonnegative rank. Our main result is that they are the same up to a multiplicative constant and a small additive term. The logarithm of the nonnegative rank is known to be a nearly tight lower bound on the deterministic communication complexity. Our result indicates that proving an analogous result for the randomized case, namely that the log approximate nonnegative rank is a nearly tight bound on randomized communication complexity, would imply the tightness of the information complexity bound. Another corollary of our result is the existence of a Boolean function with a quasipolynomial gap between its approximate rank and approximate nonnegative rank. We also show that our method yields an alternative simple proof of the equivalence between the approximate rank and the approximate μ norm, first shown by Lee and Shraibman. PubDate: 2019-03-01

Abstract: We present an upper bound on the exponent of the asymptotic behaviour of the tensor rank of a family of tensors defined by the complete graph on k vertices. For \({k \geq 4}\) , we show that the exponent per edge is at most 0.77, outperforming the best known upper bound on the exponent per edge for matrix multiplication (k = 3), which is approximately 0.79. We raise the question whether for some k the exponent per edge can be below 2/3, i.e. can outperform matrix multiplication even if the matrix multiplication exponent equals 2. In order to obtain our results, we generalize to higher-order tensors a result by Strassen on the asymptotic subrank of tight tensors and a result by Coppersmith and Winograd on the asymptotic rank of matrix multiplication. Our results have applications in entanglement theory and communication complexity. PubDate: 2019-03-01

Abstract: We prove that the \({{\rm P}^{\rm NP}}\)-type query complexity (alternatively, decision list width) of any Boolean function f is quadratically related to the \({{\rm P}^{\rm NP}}\)-type communication complexity of a lifted version of f. As an application, we show that a certain “product” lower bound method of Impagliazzo and Williams (CCC 2010) fails to capture \({{\rm P}^{\rm NP}}\) communication complexity up to polynomial factors, which answers a question of Papakonstantinou, Scheder, and Song (CCC 2014). PubDate: 2019-03-01
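For intuition, a decision list (in the sense of Rivest) is a sequence of “if conjunction-of-literals then output” rules, and its width is the largest number of literals in any rule. A hedged sketch assuming this standard definition, which the abstract does not restate; the example function and names are our own:

```python
def eval_decision_list(rules, default, x):
    """Evaluate a decision list: the first rule whose conjunction of
    literals is satisfied fires. Each rule is ([(index, value), ...], output)."""
    for literals, out in rules:
        if all(x[i] == v for (i, v) in literals):
            return out
    return default

def width(rules):
    """Width = the largest number of literals in any single rule."""
    return max((len(lits) for lits, _ in rules), default=0)

# A width-2 decision list computing x0 XOR x1 (toy example):
xor_dl = [([(0, 1), (1, 0)], 1),   # x0 = 1 and x1 = 0  ->  1
          ([(0, 0), (1, 1)], 1)]   # x0 = 0 and x1 = 1  ->  1; otherwise default 0
```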

Authors:Benjamin Rossman Pages: 209 - 223 Abstract: We show that unbounded fan-in Boolean formulas of depth d + 1 and size s have average sensitivity \({O(\frac{1}{d} \log s)^d}\) . In particular, this gives a tight \({2^{\Omega(d(n^{1/d}-1))}}\) lower bound on the size of depth d + 1 formulas computing the parity function. These results strengthen the corresponding \({2^{\Omega(n^{1/d})}}\) and \({O(\log s)^d}\) bounds for circuits due to Håstad (Proceedings of the 18th annual ACM symposium on theory of computing, ACM, New York, 1986) and Boppana (Inf Process Lett 63(5): 257–261, 1997). Our proof technique studies a random process where the switching lemma is applied to formulas in an efficient manner. PubDate: 2018-06-01 DOI: 10.1007/s00037-017-0156-0 Issue No:Vol. 27, No. 2 (2018)
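The average sensitivity of a Boolean function f is the expected number of coordinates i such that flipping \({x_i}\) changes f(x), over a uniformly random input x. The brute-force sketch below only checks this definition on small n (it has nothing to do with the paper's switching-lemma technique) and illustrates why parity is the extremal example: its average sensitivity is the maximum possible, n.

```python
from itertools import product

def average_sensitivity(f, n):
    """E_x[#{i : f(x) != f(x with bit i flipped)}] over uniform x in {0,1}^n."""
    total = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1            # flip coordinate i
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2 ** n

parity = lambda x: sum(x) % 2    # every coordinate is sensitive at every input
```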

Authors:Emanuele Viola; Avi Wigderson Pages: 225 - 244 Abstract: A map \({f : \{0,1\}^{n} \to \{0,1\}^{n}}\) has locality t if every output bit of f depends only on t input bits. Arora et al. (Colloquium on automata, languages and programming, ICALP, 2009) asked if there exist bounded-degree expander graphs on \({2^n}\) nodes such that the neighbors of a node \({x\in\{0,1\}^{n}}\) can be computed by maps of constant locality. We give an explicit construction of such graphs with locality one. We then give three applications of this construction: (1) lossless expanders with constant locality, (2) more efficient error reduction for randomized algorithms, and (3) more efficient hardness amplification of one-way permutations. We also give, for n of the form \({n=4\cdot3^{t}}\) , an explicit construction of bipartite Ramanujan graphs of degree 3 with \({2^n - 1}\) nodes on each side such that the neighbors of a node \({x\in \{0,1\}^{n}{\setminus} \{0^{n}\}}\) can be computed either (1) with constant locality or (2) in constant time using standard operations on words of length \({\Omega(n)}\) . Our results use, in a black-box fashion, deep explicit constructions of Cayley expander graphs, by Kassabov (Invent Math 170(2):327–354, 2007) for the symmetric group \({S_{n}}\) and by Morgenstern (J Comb Theory Ser B 62(1):44–62, 1994) for the special linear group SL \({(2,F_{2^{n}})}\) . PubDate: 2018-06-01 DOI: 10.1007/s00037-017-0155-1 Issue No:Vol. 27, No. 2 (2018)
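Locality can be checked mechanically from the definition: output bit j depends on input bit i if flipping bit i changes output bit j on some input. The brute-force sketch below is illustrative only; the cyclic shift is our own toy example of a locality-1 map, not the paper's expander construction.

```python
from itertools import product

def locality(f, n):
    """Max over output bits j of the number of input bits that bit j depends on."""
    worst = 0
    for j in range(n):
        deps = 0
        for i in range(n):
            # bit j depends on bit i if flipping bit i changes f(x)[j] for some x
            if any(f(x)[j] != f(tuple(b ^ (1 if k == i else 0)
                                      for k, b in enumerate(x)))[j]
                   for x in product([0, 1], repeat=n)):
                deps += 1
        worst = max(worst, deps)
    return worst

# A cyclic shift has locality 1: output bit j is exactly input bit j-1.
shift = lambda x: tuple(x[(j - 1) % len(x)] for j in range(len(x)))
# XOR of adjacent bits has locality 2.
xor2 = lambda x: tuple(x[j] ^ x[(j + 1) % len(x)] for j in range(len(x)))
```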

Authors:Oded Goldreich; Avishay Tal Pages: 305 - 350 Abstract: A matrix A is said to have rigidity s for rank r if A differs from any matrix of rank r on more than s entries. We prove that random n-by-n Toeplitz matrices over \({\mathbb{F}_{2}}\) (i.e., matrices of the form \({A_{i,j} = a_{i-j}}\) for random bits \({a_{-(n-1)}, \ldots, a_{n-1}}\) ) have rigidity \({\Omega(n^3/(r^2\log n))}\) for rank \({r \ge \sqrt{n}}\) , with high probability. This improves, for \({r = o(n/\log n \log\log n)}\) , over the \({\Omega(\frac{n^2}{r} \cdot\log(\frac{n}{r}))}\) bound that is known for many explicit matrices. Our result implies that the explicit trilinear \({[n]\times [n] \times [2n]}\) function defined by \({F(x,y,z) = \sum_{i,j}{x_i y_j z_{i+j}}}\) has complexity \({\Omega(n^{3/5})}\) in the multilinear circuit model suggested by Goldreich and Wigderson (Electron Colloq Comput Complex 20:43, 2013), which yields an \({\exp(n^{3/5})}\) lower bound on the size of the so-called canonical depth-three circuits for F. We also prove that F has complexity \({\tilde{\Omega}(n^{2/3})}\) if the multilinear circuits are further restricted to be of depth 2. In addition, we show that a matrix whose entries are sampled from a \({2^{-n}}\) -biased distribution has complexity \({\tilde{\Omega}(n^{2/3})}\) , regardless of depth restrictions, almost matching the known \({O(n^{2/3})}\) upper bound for any matrix. We turn this randomized construction into an explicit 4-linear construction with similar lower bounds, using the quadratic small-biased construction of Mossel et al. (Random Struct Algorithms 29(1):56–81, 2006). PubDate: 2018-06-01 DOI: 10.1007/s00037-016-0144-9 Issue No:Vol. 27, No. 2 (2018)
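The matrices in question are easy to generate from the definition \({A_{i,j} = a_{i-j}}\), and for tiny n the distance to rank 1 can be computed by brute force, since the rank-\({\leq 1}\) matrices over \({\mathbb{F}_2}\) are exactly the outer products \({uv^T}\) (including zero). A definitional sketch only, nothing like the paper's probabilistic argument:

```python
import random
from itertools import product

def random_toeplitz(n, rng=random):
    """A_{i,j} = a_{i-j} over F_2 for random bits a_{-(n-1)}, ..., a_{n-1}."""
    a = {d: rng.randrange(2) for d in range(-(n - 1), n)}
    return [[a[i - j] for j in range(n)] for i in range(n)]

def distance_to_rank_one(A):
    """Min Hamming distance from A to any rank-<=1 matrix over F_2 (brute force).
    A has rigidity s for rank 1 exactly when this distance exceeds s."""
    n = len(A)
    best = n * n
    for u in product([0, 1], repeat=n):       # enumerate all outer products u v^T
        for v in product([0, 1], repeat=n):
            dist = sum(A[i][j] != (u[i] & v[j])
                       for i in range(n) for j in range(n))
            best = min(best, dist)
    return best

M = random_toeplitz(4)   # constant along every diagonal
```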

Authors:V. Arvind; Srikanth Srinivasan Pages: 1 - 29 Abstract: In this paper, we study the computational complexity of computing the noncommutative determinant. We first consider the arithmetic circuit complexity of computing the noncommutative determinant polynomial. Then, more generally, we also examine the complexity of computing the determinant (as a function) over noncommutative domains. Our hardness results are summarized below: We show that if the noncommutative determinant polynomial has small noncommutative arithmetic circuits then so does the noncommutative permanent. Consequently, the commutative permanent polynomial has small commutative arithmetic circuits. For any field \({\mathbb{F}}\) we show that computing the \({n\times n}\) permanent over \({\mathbb{F}}\) is polynomial-time reducible to computing the \({2n\times 2n}\) (noncommutative) determinant whose entries are \({O(n^2)\times O(n^2)}\) matrices over the field \({\mathbb{F}}\) . We also derive as a consequence that computing the \({n\times n}\) permanent over nonnegative rationals is polynomial-time reducible to computing the noncommutative determinant over Clifford algebras of \({n^{O(1)}}\) dimension. Our techniques are elementary and use primarily the notion of the Hadamard Product of noncommutative polynomials. PubDate: 2018-03-01 DOI: 10.1007/s00037-016-0148-5 Issue No:Vol. 27, No. 1 (2018)
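The Hadamard product named in the abstract multiplies, word by word, the coefficients of two noncommutative polynomials; since variables do not commute, xy and yx are distinct words. A minimal sketch of this standard operation (the coefficient-dictionary representation is our own choice, not the paper's):

```python
def hadamard(f, g):
    """Hadamard product of noncommutative polynomials represented as
    {word: coefficient} dicts, where a word is a tuple of variable names:
    multiply the coefficients of identical words."""
    return {w: f[w] * g[w] for w in f if w in g and f[w] * g[w] != 0}

f = {('x', 'y'): 1, ('y', 'x'): 1}   # xy + yx
g = {('x', 'y'): 2}                  # 2xy
```

The product keeps only the word xy, with coefficient 1 · 2 = 2; the word yx is killed because it is absent from g.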

Abstract: We introduce and study interactive proofs in the framework of real number computations as introduced by Blum, Shub, and Smale. Ivanov and de Rougemont started this line of research showing that an analogue of Shamir’s result holds in the real additive Blum–Shub–Smale model of computation when only Boolean messages can be exchanged. Here, we introduce interactive proofs in the full BSS model, in which multiplications can also be performed and reals can be exchanged. The ultimate goal is to give a Shamir-like characterization of the real counterpart \({{\rm IP}_\mathbb{R}}\) of classical IP. Whereas classically Shamir’s result implies IP = PSPACE = PAT = PAR, in our framework a major difficulty arises: In contrast to Turing complexity theory, the real number classes \({{\rm PAR}_\mathbb{R}}\) and \({{\rm PAT}_\mathbb{R}}\) differ and space resources considered separately are not meaningful. It is not obvious whether \({{\rm IP}_\mathbb{R}}\) is characterized by any of the above classes at all, and if so, by which. We obtain two main results, an upper and a lower bound for the new class \({{\rm IP}_\mathbb{R}.}\) As upper bound we establish \({{{\rm IP}_\mathbb{R}} \subseteq {\rm MA\exists}_\mathbb{R}}\) , where \({{\rm MA} \exists_\mathbb{R}}\) is a real complexity class introduced by Cucker and Briquel satisfying \({{\rm PAR}_\mathbb{R} \subsetneq {\rm MA}\exists_{\mathbb{R}} \subseteq {\rm PAT}_\mathbb{R}}\) and conjectured to be different from \({{\rm PAT}_\mathbb{R}}\) . We then complement this result and prove a non-trivial lower bound for \({{\rm IP}_\mathbb{R}}\) .
More precisely, we design interactive real protocols verifying function values for a large class of functions introduced by Koiran and Perifel and denoted by UniformVPSPACE\({^{0}}\). As a consequence, we show \({{\rm PAR}_\mathbb{R} \subseteq {\rm IP}_\mathbb{R}}\) , which in particular implies co- \({{\rm NP}_\mathbb{R} \subseteq {\rm IP}_\mathbb{R}}\) , and \({{\rm P}_\mathbb{R}^{Res} \subseteq {\rm IP}_\mathbb{R}}\) , where Res denotes certain multivariate Resultant polynomials. Our proof techniques are guided by the question of how far Shamir’s classical proof carries over to the real number setting. Towards this aim, results by Koiran and Perifel on UniformVPSPACE\({^{0}}\) are extremely helpful. PubDate: 2018-11-07

Abstract: We investigate the power of Non-commutative Arithmetic Circuits, which compute polynomials over the free non-commutative polynomial ring \({\mathbb{F}\langle x_1,\ldots,x_N\rangle}\) , where variables do not commute. We consider circuits that are restricted in the ways in which they can compute monomials: this can be seen as restricting the families of parse trees that appear in the circuit. Such restrictions capture essentially all non-commutative circuit models for which lower bounds are known. We prove several results about such circuits. We show exponential lower bounds for circuits with up to an exponential number of parse trees, strengthening the work of Lagarde et al. [Electronic Colloquium on Comput Complexity (ECCC) vol 23, no 94, 2016], who prove such a result for Unique Parse Tree (UPT) circuits which have a single parse tree. The polynomial we prove a lower bound for is in fact computable by a polynomial-sized non-commutative circuit. We show exponential lower bounds for circuits whose parse trees are rotations of a single tree. This simultaneously generalizes recent lower bounds of Limaye et al. (Theory Comput 12(1):1–38, 2016) and the above lower bounds of Lagarde et al. (2016), which are known to be incomparable. Here too, the hard polynomial is computable by a polynomial-sized non-commutative circuit. We make progress on a question of Nisan (STOC, pp 410–418, 1991) regarding separating the power of Algebraic Branching Programs (ABPs) and Formulas in the non-commutative setting by showing a tight lower bound of \({n^{\Omega(\log d)}}\) for any UPT formula computing the product of d \({n \times n}\) matrices. When \({d \leq \log n}\) , we can also prove superpolynomial lower bounds for formulas with up to \({2^{o(d)}}\) many parse trees (for computing the same polynomial). Improving this bound to allow for \({2^{O(d)}}\) trees would give an unconditional separation between ABPs and Formulas.
We give deterministic whitebox PIT algorithms for UPT circuits over any field, strengthening a result of Lagarde et al. (2016), and also for sums of a constant number of UPT circuits with different parse trees. PubDate: 2018-09-29

Abstract: We define the notion of diversity for families of finite functions and express the limitations of a simple class of holographic algorithms, called elementary algorithms, in terms of limitations on diversity. We show that this class of elementary algorithms is too weak to solve the Boolean circuit value problem, or Boolean satisfiability, or the permanent. The lower bound argument is a natural but apparently novel combination of counting and algebraic dependence arguments that is viable in the holographic framework. We go on to describe polynomial time holographic algorithms that go beyond the elementarity restriction in the two respects that they use exponential size fields, and multiple oracle calls in the form of polynomial interpolation. These new algorithms, which use bases of three components, compute the parity of the following quantities for degree three planar undirected graphs: the number of 3-colorings up to permutation of colors, the number of connected vertex covers, and the number of induced forests or feedback vertex sets. In each case, the parity can also be computed for any one slice of the problem, in particular for colorings where the first color is used a certain number of times, or where the connected vertex cover, feedback set or induced forest has a certain number of nodes. PubDate: 2018-09-01
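For intuition, the quantities whose parity the holographic algorithms compute can be checked by brute force on tiny instances. A definitional sketch for connected vertex covers (exponential time, nothing like the paper's polynomial-time holographic algorithms):

```python
from itertools import combinations

def parity_connected_vertex_covers(vertices, edges):
    """Parity of the number of vertex covers whose induced subgraph is connected."""
    def connected(S):
        S = set(S)
        if not S:
            return False
        start = next(iter(S))
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for (a, b) in edges:
                for u, w in ((a, b), (b, a)):
                    if u == v and w in S and w not in seen:
                        seen.add(w)
                        stack.append(w)
        return seen == S

    count = 0
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            # vertex cover: every edge has an endpoint in S
            if all(a in S or b in S for (a, b) in edges) and connected(S):
                count += 1
    return count % 2
```

A single edge has three connected vertex covers ({0}, {1}, {0,1}), so its parity is 1; the triangle has four ({0,1}, {0,2}, {1,2}, {0,1,2}), parity 0.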

Abstract: One of the major challenges of the research in circuit complexity is proving super-polynomial lower bounds for de Morgan formulas. Karchmer et al. (Comput Complex 5(3/4):191–204, 1995b) suggested to approach this problem by proving that formula complexity behaves “as expected” with respect to the composition of functions \({f\diamond g}\) . They showed that this conjecture, if proved, would imply super-polynomial formula lower bounds. The first step toward proving the KRW conjecture was made by Edmonds et al. (Comput Complex 10(3):210–246, 2001), who proved an analogue of the conjecture for the composition of “universal relations.” In this work, we extend the argument of Edmonds et al. (2001) further to \({f\diamond g}\) where f is an arbitrary function and g is the parity function. While this special case of the KRW conjecture was already proved implicitly in Håstad’s work on random restrictions (Håstad in SIAM J Comput 27(1):48–64, 1998), our proof seems more likely to be generalizable to other cases of the conjecture. In particular, our proof uses an entirely different approach, based on the communication complexity technique of Karchmer & Wigderson (SIAM J Discrete Math 3(2):255–265, 1990). In addition, our proof gives a new structural result, which roughly says that the naive way for computing \({f\diamond g}\) is the only optimal way. Along the way, we obtain a new proof of the state-of-the-art formula lower bound of \({n^{3-o(1)}}\) due to Håstad (1998). PubDate: 2018-09-01
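The composition \({f\diamond g}\) central to the KRW conjecture feeds the outputs of g, applied to disjoint blocks of the input, into f. A minimal sketch assuming the standard block-composition definition (the abstract does not restate it); the concrete functions below are toy examples:

```python
def compose(f, g, block):
    """Block composition f ⋄ g: split the input into consecutive blocks,
    apply g to each block, and feed the resulting bits to f."""
    def h(x):
        assert len(x) % block == 0
        return f(tuple(g(x[i:i + block]) for i in range(0, len(x), block)))
    return h

parity = lambda x: sum(x) % 2        # g = XOR, the case treated in the paper
f_and = lambda x: int(all(x))        # f = AND, a toy choice of outer function
h = compose(f_and, parity, 2)        # (AND_2 ⋄ XOR_2) on 4 bits
```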

Abstract: We introduce a simple model illustrating the utility of context in compressing communication and the challenge posed by uncertainty of knowledge of context. We consider a variant of distributional communication complexity where Alice gets some information \({X \in \{0,1\}^n}\) and Bob gets \({Y \in \{0,1\}^n}\) , where (X, Y) is drawn from a known distribution, and Bob wishes to compute some function g(X, Y) or some close approximation to it (i.e., the output is g(X, Y) with high probability over (X, Y)). In our variant, Alice does not know g, but only knows some function f which is a very close approximation to g. Thus, the function being computed forms the context for the communication. It is an enormous implicit input, potentially described by a truth table of size \({2^n}\) . Imprecise knowledge of this function models the (mild) uncertainty in this context. We show that uncertainty can lead to a huge cost in communication. Specifically, we construct a distribution \({\mu}\) over \({(X,Y)\in \{0,1\}^n \times \{0,1\}^n}\) and a class of function pairs (f, g) which are very close (i.e., disagree with o(1) probability when (X, Y) are sampled according to \({\mu}\) ), for which the communication complexity of f or g in the standard setting is one bit, whereas the (two-way) communication complexity in the uncertain setting is at least \({\Omega(\sqrt{n})}\) bits even when allowing a constant probability of error. It turns out that this blow-up in communication complexity can be attributed in part to the mutual information between X and Y. In particular, we give an efficient protocol for communication under contextual uncertainty that incurs only a small blow-up in communication if this mutual information is small.
Namely, we show that if g has a communication protocol with complexity k in the standard setting and the mutual information between X and Y is I, then g has a one-way communication protocol with complexity \({O((1+I)\cdot 2^k)}\) in the uncertain setting. This result is an immediate corollary of an even stronger result which shows that if g has one-way communication complexity k, then it has one-way uncertain-communication complexity at most \({O((1+I)\cdot k)}\) . In the particular case where the input distribution is a product distribution (and so I = 0), the protocol in the uncertain setting only incurs a constant factor blow-up in one-way communication and error. PubDate: 2018-09-01
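The quantity I = I(X; Y) entering the \({O((1+I)\cdot k)}\) bound is the standard mutual information of the input distribution. A small definitional sketch: perfectly correlated bits carry 1 bit of mutual information, independent bits carry 0 (the product-distribution case of the abstract).

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution given as a dict {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)
```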

Abstract: Statistical Zero-knowledge proofs (Goldwasser et al. in SICOMP: SIAM J Comput, 1989) allow a computationally unbounded server to convince a computationally limited client that an input x is in a language \({\Pi}\) without revealing any additional information about x that the client cannot compute by herself. Randomized encoding (RE) of functions (Ishai & Kushilevitz in FOCS 2000) allows a computationally limited client to publish a single (randomized) message, \({{\rm Enc}(x)}\) , from which the server learns whether x is in \({\Pi}\) and nothing else. It is known that \({\mathcal{SRE}}\) , the class of problems that admit statistically private randomized encoding with polynomial-time client and computationally unbounded server, is contained in the class \({\mathcal{SZK}}\) of problems that have statistical zero-knowledge proof. However, the exact relation between these two classes, and, in particular, the possibility of equivalence was left as an open problem. In this paper, we explore the relationship between \({\mathcal{SRE}}\) and \({\mathcal{SZK}}\) , and derive the following results: In a non-uniform setting, statistical randomized encoding with one-side privacy ( \({\mathcal{1RE}}\) ) is equivalent to non-interactive statistical zero-knowledge ( \({\mathcal{NISZK}}\) ). These variants were studied in the past as natural relaxation/strengthening of the original notions. Our theorem shows that proving \(\mathcal{SRE}=\mathcal{SZK}\) is equivalent to showing that \({\mathcal{1RE} = \mathcal{SRE}}\) and \({\mathcal{SZK} = \mathcal{NISZK}}\) . The latter is a well-known open problem (Goldreich et al. in CCC 1999). If \({\mathcal{SRE}}\) is non-trivial (not in \({\mathcal{BPP}}\) ), then infinitely often one-way functions exist. The analog hypothesis for \({\mathcal{SZK}}\) yields only auxiliary-input one-way functions (Ostrovsky in Sixth Annual Structure in Complexity Theory Conference 1991), which is believed to be a significantly weaker notion. 
If there exists an average-case hard language with perfect randomized encoding, then collision-resistant hash functions (CRH) exist. Again, a similar assumption for \({\mathcal{SZK}}\) implies only constant-round statistically hiding commitments, a primitive which seems weaker than CRH. We believe that our results sharpen the relationship between \({\mathcal{SRE}}\) and \({\mathcal{SZK}}\). PubDate: 2018-08-20

Authors:Sergey Fomin; Dima Grigoriev; Dorian Nogneng; Éric Schost Abstract: Semiring complexity is the version of arithmetic circuit complexity that allows only two operations: addition and multiplication. We show that semiring complexity of a Schur polynomial \({s_\lambda(x_1,\dots,x_k)}\) labeled by a partition \({\lambda=(\lambda_1\ge\lambda_2\ge\cdots)}\) is bounded by \({O(\log(\lambda_1))}\) provided the number of variables k is fixed. PubDate: 2018-06-04 DOI: 10.1007/s00037-018-0169-3
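A Schur polynomial can be defined as the generating function of semistandard Young tableaux of shape \({\lambda}\): rows weakly increase left to right, columns strictly increase top to bottom, and each tableau contributes the product of \({x_i}\) over its entries. The brute-force evaluator below only illustrates this definition; the paper's point is that \({O(\log(\lambda_1))}\) additions and multiplications suffice for fixed k, which this sketch does not attempt.

```python
from itertools import product

def schur_eval(lam, xs):
    """Evaluate s_lambda(x_1, ..., x_k) by summing over all semistandard
    Young tableaux of shape lam with entries in {1, ..., k} (brute force)."""
    k = len(xs)
    cells = [(r, c) for r, row_len in enumerate(lam) for c in range(row_len)]
    total = 0
    for filling in product(range(1, k + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            term = 1
            for v in filling:
                term *= xs[v - 1]
            total += term
    return total
```

For example, \({s_{(2,1)}(x_1,x_2) = x_1^2 x_2 + x_1 x_2^2}\), coming from exactly two tableaux.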

Authors:Clément L. Canonne; Tom Gur Abstract: Adaptivity is known to play a crucial role in property testing. In particular, there exist properties for which there is an exponential gap between the power of adaptive testing algorithms, wherein each query may be determined by the answers received to prior queries, and their non-adaptive counterparts, in which all queries are independent of answers obtained from previous queries. In this work, we investigate the role of adaptivity in property testing at a finer level. We first quantify the degree of adaptivity of a testing algorithm by considering the number of “rounds of adaptivity” it uses. More accurately, we say that a tester is k-(round) adaptive if it makes queries in \({k+1}\) rounds, where the queries in the \({i}\) ’th round may depend on the answers obtained in the previous \({i-1}\) rounds. Then, we ask the following question: Does the power of testing algorithms smoothly grow with the number of rounds of adaptivity? We provide a positive answer to the foregoing question by proving an adaptivity hierarchy theorem for property testing. Specifically, our main result shows that for every \({n \in \mathbb{N}}\) and \({0 \le k \le n^{0.33}}\) there exists a property \({\mathcal{P}_{n,k}}\) of functions for which (1) there exists a \({k}\) -adaptive tester for \({\mathcal{P}_{n,k}}\) with query complexity \({{\tilde O}{(k)}}\) , yet (2) any \({(k-1)}\) -adaptive tester for \({\mathcal{P}_{n,k}}\) must make \({{\tilde \Omega}{(n/k^2)}}\) queries. In addition, we show that such a qualitative adaptivity hierarchy can be witnessed for testing natural properties of graphs. PubDate: 2018-05-24 DOI: 10.1007/s00037-018-0168-4

Authors:Anurag Pandey; Nitin Saxena; Amit Sinhababu Abstract: The motivation for this work (Pandey et al. 2016) comes from two problems: testing algebraic independence of arithmetic circuits over a field of small characteristic and generalizing the structural property of algebraic dependence used by Kumar, Saraf, CCC’16 to arbitrary fields. It is known that in the case of zero, or large characteristic, using a classical criterion based on the Jacobian, we get a randomized poly-time algorithm to test algebraic independence. Over small characteristic, the Jacobian criterion fails and there is no subexponential time algorithm known. This problem could well be conjectured to be in RP, but the current best algorithm puts it in NP \({^{\#{\rm P}}}\) (Mittmann, Saxena, Scheiblechner, Trans.AMS’14). Currently, even the case of two bivariate circuits over \({\mathbb{F}_2}\) is open. We come up with a natural generalization of Jacobian criterion that works over all characteristics. The new criterion is efficient if the underlying inseparable degree is promised to be a constant. This is a modest step toward the open question of fast independence testing, over finite fields, posed in (Dvir, Gabizon, Wigderson, FOCS’07). In a set of linearly dependent polynomials, any polynomial can be written as a linear combination of the polynomials forming a basis. The analogous property for algebraic dependence is false, but a property approximately in that spirit is named as “functional dependence” in Kumar, Saraf, CCC’16 and proved for zero or large characteristics. We show that functional dependence holds for arbitrary fields, thereby answering the open questions in Kumar, Saraf, CCC’16. Following them, we use the functional dependence lemma to prove the first exponential lower bound for locally low algebraic rank circuits for arbitrary fields (a model that strongly generalizes homogeneous depth-4 circuits). 
We also recover their quasipoly-time hitting-set for such models, for fields of characteristic smaller than the ones known before. Our results show that approximate functional dependence is indeed a more fundamental concept than the Jacobian as it is field independent. We achieve the former by first picking a “good” transcendence basis, then translating the circuits by new variables, and finally approximating them by truncating higher degree monomials. We give a tight analysis of the “degree” of approximation needed in the criterion. To get the locally low-algebraic-rank circuit applications, we follow the known shifted partial derivative-based methods. PubDate: 2018-05-14 DOI: 10.1007/s00037-018-0167-5

Authors:Matthias Christandl; Jeroen Zuiddam Abstract: We introduce a method for transforming low-order tensors into higher-order tensors and apply it to tensors defined by graphs and hypergraphs. The transformation proceeds according to a surgery-like procedure that splits vertices, creates and absorbs virtual edges and inserts new vertices and edges. We show that tensor surgery is capable of preserving the low rank structure of an initial tensor decomposition and thus allows to prove nontrivial upper bounds on tensor rank, border rank and asymptotic rank of the final tensors. We illustrate our method with a number of examples. Tensor surgery on the triangle graph, which corresponds to the matrix multiplication tensor, leads to nontrivial rank upper bounds for all odd cycle graphs, which correspond to the tensors of iterated matrix multiplication. In the asymptotic setting we obtain upper bounds in terms of the matrix multiplication exponent ω and the rectangular matrix multiplication parameter α. These bounds are optimal if ω equals two. We also give examples that illustrate that tensor surgery on general graphs might involve the absorption of virtual hyperedges and we provide an example of tensor surgery on a hypergraph. Besides its relevance in algebraic complexity theory, our work has applications in quantum information theory and communication complexity. PubDate: 2018-03-22 DOI: 10.1007/s00037-018-0164-8

Authors:Gábor Ivanyos; Youming Qiao; K. V. Subrahmanyam Abstract: We extend the techniques developed in Ivanyos et al. (Comput Complex 26(3):717–763, 2017) to obtain a deterministic polynomial-time algorithm for computing the non-commutative rank of linear spaces of matrices over any field. The key new idea that causes a reduction in the time complexity of the algorithm in Ivanyos et al. (2017) from exponential time to polynomial time is a reduction procedure that keeps the blow-up parameter small, and there are two methods to implement this idea: the first one is a greedy argument that removes certain rows and columns, and the second one is an efficient algorithmic version of a result of Derksen & Makam (Adv Math 310:44–63, 2017b), who were the first to observe that the blow-up parameter can be controlled. Both methods rely crucially on the regularity lemma from Ivanyos et al. (2017). In this note, we improve that lemma by removing a coprime condition there. PubDate: 2018-03-22 DOI: 10.1007/s00037-018-0165-7

Authors:Mika Göös; Toniann Pitassi; Thomas Watson Abstract: We prove several results which, together with prior work, provide a nearly-complete picture of the relationships among classical communication complexity classes between \({\mathsf{P}}\) and \({\mathsf{PSPACE}}\) , short of proving lower bounds against classes for which no explicit lower bounds were already known. Our article also serves as an up-to-date survey on the state of structural communication complexity. Among our new results we show that \({\mathsf{MA} \not\subseteq \mathsf{ZPP}^{\mathsf{NP}[1]}}\) , that is, Merlin–Arthur proof systems cannot be simulated by zero-sided error randomized protocols with one \({\mathsf{NP}}\) query. Here the class \(\mathsf{ZPP}^{\mathsf{NP}[1]}\) has the property that generalizing it in the slightest ways would make it contain \({\mathsf{AM} \cap \mathsf{coAM}}\) , for which it is notoriously open to prove any explicit lower bounds. We also prove that \({\mathsf{US} \not\subseteq \mathsf{ZPP}^{\mathsf{NP}[1]}}\) , where \({\mathsf{US}}\) is the class whose canonically complete problem is the variant of set-disjointness where yes-instances are uniquely intersecting. We also prove that \({\mathsf{US} \not\subseteq \mathsf{coDP}}\) , where \({\mathsf{DP}}\) is the class of differences of two \({\mathsf{NP}}\) sets. Finally, we explore an intriguing open issue: Are rank-1 matrices inherently more powerful than rectangles in communication complexity? We prove a new separation concerning \({\mathsf{PP}}\) that sheds light on this issue and strengthens some previously known separations. PubDate: 2018-03-22 DOI: 10.1007/s00037-018-0166-6

Authors:Alexander Razborov Abstract: We show that the total space in resolution, as well as in any other reasonable proof system, is equal (up to a polynomial and \({(\log n)^{O(1)}}\) factors) to the minimum refutation depth. In particular, all these variants of total space are equivalent in this sense. The same conclusion holds for variable space as long as we penalize excessively (that is, super-exponentially) long proofs, which makes the question of the equivalence of variable space and depth essentially the same as the question of the (non-)existence of “supercritical” tradeoffs between the variable space and the proof length. We provide a partial negative answer to this question: for all \({s(n) \leq n^{1/2}}\) there exist CNF contradictions \({\tau_n}\) that possess refutations with variable space s(n) but such that every refutation of \({\tau_n}\) with variable space \({o(s^2)}\) must have double exponential length \({2^{2^{\Omega(s)}}}\) . We also include a much weaker tradeoff result between variable space and depth in the opposite range \({s(n) \ll \log n}\) and show that no supercritical tradeoff is possible in this range. PubDate: 2017-10-17 DOI: 10.1007/s00037-017-0163-1