Computational Complexity   [SJR: 0.607]   [H-I: 27]   Hybrid journal (may contain Open Access articles)   ISSN (Print) 1420-8954 - ISSN (Online) 1016-3328   Published by Springer-Verlag
• On the hardness of the noncommutative determinant
• Authors: V. Arvind; Srikanth Srinivasan
Pages: 1 - 29
Abstract: In this paper, we study the computational complexity of computing the noncommutative determinant. We first consider the arithmetic circuit complexity of computing the noncommutative determinant polynomial. Then, more generally, we also examine the complexity of computing the determinant (as a function) over noncommutative domains. Our hardness results are summarized below: We show that if the noncommutative determinant polynomial has small noncommutative arithmetic circuits, then so does the noncommutative permanent. Consequently, the commutative permanent polynomial has small commutative arithmetic circuits. For any field $\mathbb{F}$, we show that computing the $n\times n$ permanent over $\mathbb{F}$ is polynomial-time reducible to computing the $2n\times 2n$ (noncommutative) determinant whose entries are $O(n^2)\times O(n^2)$ matrices over the field $\mathbb{F}$. We also derive as a consequence that computing the $n\times n$ permanent over nonnegative rationals is polynomial-time reducible to computing the noncommutative determinant over Clifford algebras of dimension $n^{O(1)}$. Our techniques are elementary and primarily use the notion of the Hadamard product of noncommutative polynomials.
PubDate: 2018-03-01
DOI: 10.1007/s00037-016-0148-5
Issue No: Vol. 27, No. 1 (2018)
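The hardness results above relate the permanent to the determinant. As a reference point for the two (commutative) polynomials whose relative complexity is at stake, here is a minimal brute-force sketch of both via the Leibniz expansion. This is purely illustrative (exponential time; the function names are ours) and is not the paper's reduction.

```python
from itertools import permutations
from math import prod

def _sign(p):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def determinant(M):
    # Leibniz expansion: signed sum over all permutations
    n = len(M)
    return sum(_sign(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def permanent(M):
    # the same expansion with the signs dropped -- the "hard" sibling of the determinant
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))
```

Dropping the signs is exactly what makes the permanent #P-hard (Valiant), while the determinant remains polynomial-time computable via Gaussian elimination.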

• Short lists with short programs in short time
• Authors: Bruno Bauwens; Anton Makhlin; Nikolay Vereshchagin; Marius Zimand
Pages: 31 - 61
Abstract: Given a machine U, a c-short program for x is a string p such that U(p) = x and the length of p is bounded by c plus the length of a shortest program for x. We show that for any standard Turing machine, it is possible to compute in polynomial time, on input x, a list of polynomial size guaranteed to contain an $O(\log |x|)$-short program for x. We also show that there exists a computable function that maps every x to a list of size $|x|^2$ containing an $O(1)$-short program for x. This is essentially optimal, because we prove that for each such function there is a c and infinitely many x for which the list has size at least $c\,|x|^2$. Finally, we show that for some standard machines, computable functions generating lists with 0-short programs must infinitely often have list sizes proportional to $2^{|x|}$.
PubDate: 2018-03-01
DOI: 10.1007/s00037-017-0154-2
Issue No: Vol. 27, No. 1 (2018)

• Non-interactive proofs of proximity
• Authors: Tom Gur; Ron D. Rothblum
Pages: 99 - 207
Abstract: We initiate a study of non-interactive proofs of proximity. These proof systems consist of a verifier that wishes to ascertain the validity of a given statement, using a short (sublinear-length) explicitly given proof and a sublinear number of queries to its input. Since the verifier cannot even read the entire input, we only require it to reject inputs that are far from being valid. Thus, the verifier is only assured of the proximity of the statement to a correct one. Such proof systems can be viewed as the $\mathcal{NP}$ (or, more accurately, $\mathcal{MA}$) analogue of property testing. We explore both the power and the limitations of non-interactive proofs of proximity. We show that such proof systems can be exponentially stronger than property testers, but are exponentially weaker than the interactive proofs of proximity studied by Rothblum, Vadhan and Wigderson (STOC 2013). In addition, we show a natural problem that has a full and (almost) tight multiplicative trade-off between the length of the proof and the verifier's query complexity. On the negative side, we also show that there exist properties for which even a linearly long (non-interactive) proof of proximity cannot significantly reduce the query complexity.
PubDate: 2018-03-01
DOI: 10.1007/s00037-016-0136-9
Issue No: Vol. 27, No. 1 (2018)
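For context, the property-testing baseline that such proof systems strengthen can be illustrated with a toy example: a tester that distinguishes the all-zeros string from strings that are $\epsilon$-far from it using only $O(1/\epsilon)$ queries. This sketch is ours (the names are hypothetical) and is far simpler than the proofs of proximity studied in the paper.

```python
import random

def all_zeros_tester(query, n, eps):
    """Toy property tester with oracle access to a length-n bit string.
    Accepts if the string is all zeros; rejects with probability >= 2/3
    if the string differs from all-zeros in at least eps*n positions.
    Uses O(1/eps) queries -- sublinear in n, like the verifiers above."""
    trials = int(2 / eps) + 1
    for _ in range(trials):
        if query(random.randrange(n)) != 0:
            return False  # found a 1: certainly not all zeros
    return True
```

An $\epsilon$-far input survives each query with probability at most $1-\epsilon$, so it survives all $\lceil 2/\epsilon \rceil$ queries with probability at most $e^{-2} < 1/3$.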

• Deterministic Identity Testing for Sum of Read-Once Oblivious Arithmetic Branching Programs
• Authors: Rohit Gurjar; Arpita Korwar; Nitin Saxena; Thomas Thierauf
Pages: 835 - 880
Abstract: A read-once oblivious arithmetic branching program (ROABP) is an arithmetic branching program (ABP) in which each variable occurs in at most one layer. We give the first polynomial-time whitebox identity test for a polynomial computed by a sum of constantly many ROABPs. We also give a corresponding blackbox algorithm with quasi-polynomial time complexity $n^{O(\log n)}$. In both cases, our time complexity is doubly exponential in the number of ROABPs. ROABPs are a generalization of set-multilinear depth-3 circuits. The prior results for the sum of constantly many set-multilinear depth-3 circuits were only slightly better than brute force, i.e., exponential time. Our techniques are a new interplay of three concepts for ROABPs: low evaluation dimension, basis-isolating weight assignments and low-support rank concentration. We relate basis isolation to rank concentration and extend it to a sum of two ROABPs using evaluation dimension.
PubDate: 2017-12-01
DOI: 10.1007/s00037-016-0141-z
Issue No: Vol. 26, No. 4 (2017)
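The classical randomized baseline for blackbox identity testing, which deterministic results like the one above aim to replace for restricted circuit classes, is the Schwartz–Zippel test: a nonzero polynomial of total degree d over a domain of size q vanishes at a uniformly random point with probability at most d/q. A minimal sketch over the integers mod a prime (our own illustration, not the paper's algorithm):

```python
import random

def schwartz_zippel_is_zero(poly, n_vars, prime, trials=20):
    """Blackbox randomized identity test: evaluate at random points mod a prime.
    A nonzero polynomial of total degree d vanishes at a uniform point with
    probability at most d/prime, so surviving many trials suggests poly == 0."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(n_vars)]
        if poly(point) % prime != 0:
            return False  # witness found: definitely not identically zero
    return True  # identically zero with high probability
```

A deterministic blackbox test, as in the paper, must instead construct a small explicit hitting set of evaluation points in advance.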

• Sparse multivariate polynomial interpolation on the basis of Schubert
polynomials
• Authors: Priyanka Mukhopadhyay; Youming Qiao
Pages: 881 - 909
Abstract: Schubert polynomials were discovered by A. Lascoux and M. Schützenberger in the study of cohomology rings of flag manifolds in the 1980s. These polynomials generalize Schur polynomials and form a linear basis of multivariate polynomials. In 2003, Lenart and Sottile introduced skew Schubert polynomials, which generalize skew Schur polynomials and expand in the Schubert basis with generalized Littlewood–Richardson coefficients. In this paper, we initiate the study of these two families of polynomials from the perspective of computational complexity theory. We first observe that skew Schubert polynomials, and therefore Schubert polynomials, are in #P (when evaluated on nonnegative integral inputs) and in VNP. Our main result is a deterministic algorithm that computes the expansion of a polynomial f of degree d in $\mathbb{Z}[x_1,\dots,x_n]$ in the basis of Schubert polynomials, assuming an oracle computing Schubert polynomials. This algorithm runs in time polynomial in n, d, and the bit size of the expansion. This generalizes, and derandomizes, the sparse interpolation algorithm for symmetric polynomials in the Schur basis by Barvinok and Fomin (Adv Appl Math 18(3):271–285, 1997). In fact, our interpolation algorithm is general enough to accommodate any linear basis satisfying certain natural properties. Applications of the above results include a new algorithm that computes the generalized Littlewood–Richardson coefficients.
PubDate: 2017-12-01
DOI: 10.1007/s00037-016-0142-y
Issue No: Vol. 26, No. 4 (2017)
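Schur polynomials, the special case handled by Barvinok and Fomin's algorithm, can at least be evaluated directly from the bialternant formula $s_\lambda(x_1,\dots,x_n) = \det\bigl(x_i^{\lambda_j+n-j}\bigr)/\det\bigl(x_i^{n-j}\bigr)$. A small exact-arithmetic sketch of that evaluation (ours, for illustration; it requires distinct evaluation points and is not the interpolation algorithm of the paper):

```python
from fractions import Fraction
from itertools import permutations

def _det(M):
    # determinant by Leibniz expansion -- fine for the small matrices used here
    n = len(M)
    total = Fraction(0)
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = Fraction(1)
        for i in range(n):
            term *= M[i][p[i]]
        total += (-1) ** inv * term
    return total

def schur(lam, xs):
    """Evaluate the Schur polynomial s_lam at the point xs (distinct values)
    via the bialternant formula: det(x_i^(lam_j + n - j)) / det(x_i^(n - j))."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))  # pad the partition to n parts
    num = [[Fraction(x) ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]
    den = [[Fraction(x) ** (n - 1 - j) for j in range(n)] for x in xs]  # Vandermonde
    return _det(num) / _det(den)
```

For example, $s_{(1)}(x_1,x_2) = x_1 + x_2$ and $s_{(1,1)}(x_1,x_2) = x_1 x_2$, the elementary symmetric polynomials.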

• Tight Size-Degree Bounds for Sums-of-Squares Proofs
• Authors: Massimo Lauria; Jakob Nordström
Pages: 911 - 948
Abstract: We exhibit families of 4-CNF formulas over n variables that have sums-of-squares (SOS) proofs of unsatisfiability of degree (a.k.a. rank) d but require SOS proofs of size $n^{\Omega(d)}$, for values of d = d(n) from constant all the way up to $n^{\delta}$ for some universal constant $\delta$. This shows that the $n^{O(d)}$ running time obtained by using the Lasserre semidefinite programming relaxations to find degree-d SOS proofs is optimal up to constant factors in the exponent. We establish this result by combining NP-reductions expressible as low-degree SOS derivations with the idea of relativizing CNF formulas from Krajíček (Arch Math Log 43(4):427–441, 2004) and Dantchev & Riis (Proceedings of the 17th international workshop on computer science logic (CSL '03), 2003), and then applying a restriction argument as in Atserias et al. (J Symb Log 80(2):450–476, 2015; ACM Trans Comput Log 17:19:1–19:30, 2016). This yields a generic method for amplifying SOS degree lower bounds to size lower bounds, and it also generalizes the approach used in Atserias et al. (2016) to obtain size lower bounds for the proof systems resolution, polynomial calculus, and Sherali–Adams from lower bounds on width, degree, and rank, respectively.
PubDate: 2017-12-01
DOI: 10.1007/s00037-017-0152-4
Issue No: Vol. 26, No. 4 (2017)
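For readers unfamiliar with the proof system, the core idea of an SOS certificate is easy to state and check in the simplest univariate case: writing a polynomial as a sum of squares proves it nonnegative. A toy coefficient-level checker (our illustration; the paper's setting of SOS refutations of CNF formulas is far more involved):

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    # add coefficient lists, padding the shorter one with zeros
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def check_sos(target, squares):
    """Check that target equals the sum of s^2 over the given polynomials s.
    A valid decomposition is a certificate that target is nonnegative on R."""
    total = [0]
    for s in squares:
        total = poly_add(total, poly_mul(s, s))
    diff = poly_add(total, [-c for c in target])
    return all(c == 0 for c in diff)
```

For example, $2x^2 - 2x + 1 = x^2 + (x-1)^2$ is a degree-2 SOS certificate of nonnegativity.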

• Low-degree test with polynomially small error
• Authors: Dana Moshkovitz
Pages: 531 - 582
Abstract: A long line of work in theoretical computer science shows that a function is close to a low-degree polynomial iff it is locally close to a low-degree polynomial. This is known as low-degree testing and is at the core of the algebraic approach to the construction of PCPs. We obtain a low-degree test whose error, i.e., the probability that it accepts a function that does not correspond to a low-degree polynomial, is polynomially smaller than that of existing low-degree tests. A key tool in our analysis is a study of the sampling properties of the incidence graph of degree-k curves and k′-tuples of points in a finite space $\mathbb{F}^m$. We show that the Sliding Scale Conjecture in PCP, namely the conjecture that there are PCP verifiers whose error is exponentially small in their randomness, would follow from a derandomization of our low-degree test.
PubDate: 2017-09-01
DOI: 10.1007/s00037-016-0149-4
Issue No: Vol. 26, No. 3 (2017)

• Dual VP Classes
• Authors: Eric Allender; Anna Gál; Ian Mertz
Pages: 583 - 625
Abstract: We consider the complexity class $ACC^1$ and related families of arithmetic circuits. We prove a variety of collapse results, showing several settings in which no loss of computational power results if the fan-in of gates is severely restricted, as well as presenting a natural class of arithmetic circuits in which no expressive power is lost by severely restricting the algebraic degree of the circuits. We draw attention to the strong connections that exist between $ACC^1$ and VP, via connections to the classes $CC^1[m]$ for various m. These results tend to support a conjecture regarding the computational power of the complexity class VP over finite algebras, and they also highlight the significance of a class of arithmetic circuits that is in some sense dual to VP. In particular, these dual-VP classes provide new characterizations of $ACC^1$ and $TC^1$ in terms of circuits of semiunbounded fan-in. As a corollary, we show that $ACC^i = CC^i$ for all $i \geq 1$.
PubDate: 2017-09-01
DOI: 10.1007/s00037-016-0146-7
Issue No: Vol. 26, No. 3 (2017)

• Graph Isomorphism, Color Refinement, and Compactness
• Authors: V. Arvind; Johannes Köbler; Gaurav Rattan; Oleg Verbitsky
Pages: 627 - 685
Abstract: Color refinement is a classical technique used to show that two given graphs G and H are non-isomorphic; it is very efficient, although it does not succeed on all graphs. We call a graph G amenable to color refinement if the color refinement procedure succeeds in distinguishing G from any non-isomorphic graph H. Babai et al. (SIAM J Comput 9(3):628–635, 1980) have shown that random graphs are amenable with high probability. We determine the exact range of applicability of color refinement by showing that amenable graphs are recognizable in time $O((n+m)\log n)$, where n and m denote the number of vertices and the number of edges in the input graph. We use our characterization of amenable graphs to analyze the approach to Graph Isomorphism based on the notion of compact graphs. A graph is called compact if the polytope of its fractional automorphisms is integral. Tinhofer (Discrete Appl Math 30(2–3):253–264, 1991) noted that isomorphism testing for compact graphs can be done quite efficiently by linear programming. However, the problem of characterizing compact graphs and recognizing them in polynomial time remains an open question. Our results in this direction are summarized below: We show that all amenable graphs are compact. In other words, the applicability range for Tinhofer's linear programming approach to isomorphism testing is at least as large as that of the combinatorial approach based on color refinement. Exploring the relationship between color refinement and compactness further, we study related combinatorial and algebraic graph properties introduced by Tinhofer and Godsil. We show that the corresponding classes of graphs form a hierarchy, and we prove that recognizing each of these graph classes is P-hard. In particular, this gives a first complexity lower bound for recognizing compact graphs.
PubDate: 2017-09-01
DOI: 10.1007/s00037-016-0147-6
Issue No: Vol. 26, No. 3 (2017)
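The color refinement procedure itself is simple to state: repeatedly recolor each vertex by its current color together with the multiset of its neighbors' colors, until the partition stops refining. A naive sketch (ours; quadratic per round, not an optimized $O((n+m)\log n)$ implementation):

```python
def color_refinement(adj):
    """Classical color refinement (1-dimensional Weisfeiler-Leman).
    adj maps each vertex to a list of its neighbors. A vertex's new color is
    (old color, sorted multiset of neighbor colors); iterate until stable."""
    colors = {v: 0 for v in adj}
    while True:
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        # canonically rename the distinct signatures to small integers
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == colors:
            return colors
        colors = new
```

Two graphs whose stable color histograms differ are certainly non-isomorphic; the amenable graphs studied above are exactly those for which this test distinguishes every non-isomorphic partner.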

• Topological lower bounds for arithmetic networks
• Authors: Andrei Gabrielov; Nicolai Vorobjov
Pages: 687 - 715
Abstract: We prove that the depth of any arithmetic network for deciding membership in a semialgebraic set $\Sigma \subset \mathbb{R}^n$ is bounded from below by $$c_1 \sqrt{\frac{\log (\mathrm{b}(\Sigma))}{n}} - c_2 \log n,$$ where $\mathrm{b}(\Sigma)$ is the sum of the Betti numbers of $\Sigma$ with respect to "ordinary" (singular) homology, and $c_1, c_2$ are some (absolute) positive constants. This result complements the similar lower bound of Montaña et al. (Appl Algebra Engrg Comm Comput 7:41–51, 1996) for locally closed semialgebraic sets in terms of the sum of Borel–Moore Betti numbers. We also prove that if $\rho: \mathbb{R}^{n} \to \mathbb{R}^{n-r}$ is the projection map, for some $r = 0, \ldots, n$, then the depth of any arithmetic network deciding membership in $\Sigma$ is bounded from below by $$c_1 \sqrt{\frac{\log (\mathrm{b}(\rho(\Sigma)))}{n}} - c_2 \log n$$ for some positive constants $c_1, c_2$.
PubDate: 2017-09-01
DOI: 10.1007/s00037-016-0145-8
Issue No: Vol. 26, No. 3 (2017)

• Non-commutative Edmonds’ problem and matrix semi-invariants
• Authors: Gábor Ivanyos; Youming Qiao; K. V. Subrahmanyam
Pages: 717 - 763
Abstract: In 1967, J. Edmonds introduced the problem of computing the rank, over the rational function field, of an $n \times n$ matrix T whose entries are integral homogeneous linear polynomials. In this paper, we consider the non-commutative version of Edmonds' problem: compute the rank of T over the free skew field. This problem has been proposed, sometimes in disguise, from several different perspectives in the study of, for example, the free skew field itself (Cohn in J Symbol Log 38(2):309–314, 1973), matrix spaces of low rank (Fortin-Reutenauer in Sémin Lothar Comb 52:B52f, 2004), Edmonds' original problem (Gurvits in J Comput Syst Sci 69(3):448–484, 2004), and, more recently, non-commutative arithmetic circuits with divisions (Hrubeš and Wigderson in Theory Comput 11:357–393, 2015. doi:10.4086/toc.2015.v011a014). It is known that this problem relates to the following invariant ring, which we call the $\mathbb{F}$-algebra of matrix semi-invariants, denoted R(n, m). For a field $\mathbb{F}$, it is the ring of invariant polynomials for the action of $\mathrm{SL}(n, \mathbb{F}) \times \mathrm{SL}(n, \mathbb{F})$ on tuples of matrices: $(A, C) \in \mathrm{SL}(n, \mathbb{F}) \times \mathrm{SL}(n, \mathbb{F})$ sends $(B_1, \ldots, B_m) \in M(n, \mathbb{F})^{\oplus m}$ to $(A B_1 C^{\mathrm{T}}, \ldots, A B_m C^{\mathrm{T}})$. Then those T with non-commutative rank $< n$ correspond to the points in the nullcone of R(n, m). In particular, if the nullcone of R(n, m) is defined by elements of degree $\leq \sigma$, then there follows a $\mathrm{poly}(n, \sigma)$-time randomized algorithm to decide whether the non-commutative rank of T is full. To our knowledge, the best previous bound for $\sigma$ was $O(n^2 \cdot 4^{n^2})$ over algebraically closed fields of characteristic 0 (Derksen in Proc Am Math Soc 129(4):955–964, 2001). 
We now state the main contributions of this paper: We observe that, by using an algorithm of Gurvits and assuming the above bound $\sigma$ for R(n, m) over $\mathbb{Q}$, deciding whether or not T has non-commutative rank $< n$ over $\mathbb{Q}$ can be done deterministically in time polynomial in the input size and $\sigma$. When $\mathbb{F}$ is large enough, we devise an algorithm for the non-commutative Edmonds pr...
PubDate: 2017-09-01
DOI: 10.1007/s00037-016-0143-x
Issue No: Vol. 26, No. 3 (2017)

• On Space and Depth in Resolution
• Authors: Alexander Razborov
Abstract: We show that the total space in resolution, as well as in any other reasonable proof system, is equal (up to a polynomial and $(\log n)^{O(1)}$ factors) to the minimum refutation depth. In particular, all these variants of total space are equivalent in this sense. The same conclusion holds for variable space as long as we penalize excessively (that is, super-exponentially) long proofs, which makes the question of the equivalence of variable space and depth essentially the same as the question of the (non-)existence of "supercritical" trade-offs between variable space and proof length. We provide a partial negative answer to this question: for all $s(n) \leq n^{1/2}$ there exist CNF contradictions $\tau_n$ that possess refutations with variable space s(n), but such that every refutation of $\tau_n$ with variable space $o(s^2)$ must have doubly exponential length $2^{2^{\Omega(s)}}$. We also include a much weaker trade-off result between variable space and depth in the opposite range $s(n) \ll \log n$ and show that no supercritical trade-off is possible in this range.
PubDate: 2017-10-17
DOI: 10.1007/s00037-017-0163-1

• The Complexity of Approximating complex-valued Ising and Tutte partition functions
• Authors: Leslie Ann Goldberg; Heng Guo
Abstract: We study the complexity of approximately evaluating the Ising and Tutte partition functions with complex parameters. Our results are partly motivated by the study of the quantum complexity classes BQP and IQP. Recent results show how to encode quantum computations as evaluations of classical partition functions. These results rely on interesting and deep results about quantum computation in order to obtain hardness results about the difficulty in (classically) evaluating the partition functions for certain fixed parameters. The motivation for this paper is to study more comprehensively the complexity of (classically) approximating the Ising and Tutte partition functions with complex parameters. Partition functions are combinatorial in nature, and quantifying their approximation complexity does not require a detailed understanding of quantum computation. Using combinatorial arguments, we give the first full classification of the complexity of multiplicatively approximating the norm and additively approximating the argument of the Ising partition function for complex edge interactions (as well as of approximating the partition function according to a natural complex metric). We also study the norm approximation problem in the presence of external fields, for which we give a complete dichotomy when the parameters are roots of unity. Previous results were known just for a few such points, and we strengthen these results from BQP-hardness to #P-hardness. Moreover, we show that computing the sign of the Tutte polynomial is #P-hard at certain points related to the simulation of BQP. Using our classifications, we then revisit the connections to quantum computation, drawing conclusions that are a little different from (and incomparable to) ones in the quantum literature, but along similar lines.
PubDate: 2017-09-13
DOI: 10.1007/s00037-017-0162-2
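As a baseline for the quantity whose approximation is studied above, the Ising partition function on a small graph can be computed exactly by brute force, with the edge interaction allowed to be complex. A sketch (ours; simplified to one uniform interaction parameter and no external field):

```python
import cmath
from itertools import product

def ising_partition(edges, n, beta):
    """Brute-force Ising partition function on a graph with n vertices:
    Z = sum over spin assignments sigma in {-1,+1}^n of
    exp(beta * sum over edges (u, v) of sigma_u * sigma_v).
    beta may be complex, as in the approximation problems discussed above."""
    Z = 0
    for sigma in product((-1, 1), repeat=n):
        energy = sum(sigma[u] * sigma[v] for u, v in edges)
        Z += cmath.exp(beta * energy)
    return Z
```

The sum has $2^n$ terms, which is why the complexity question concerns approximation; for a single edge, $Z = 2e^{\beta} + 2e^{-\beta}$.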

• Some observations on holographic algorithms
• Authors: Leslie G. Valiant
Abstract: We define the notion of diversity for families of finite functions and express the limitations of a simple class of holographic algorithms, called elementary algorithms, in terms of limitations on diversity. We show that this class of elementary algorithms is too weak to solve the Boolean circuit value problem, or Boolean satisfiability, or the permanent. The lower bound argument is a natural but apparently novel combination of counting and algebraic dependence arguments that is viable in the holographic framework. We go on to describe polynomial time holographic algorithms that go beyond the elementarity restriction in the two respects that they use exponential size fields, and multiple oracle calls in the form of polynomial interpolation. These new algorithms, which use bases of three components, compute the parity of the following quantities for degree three planar undirected graphs: the number of 3-colorings up to permutation of colors, the number of connected vertex covers, and the number of induced forests or feedback vertex sets. In each case, the parity can also be computed for any one slice of the problem, in particular for colorings where the first color is used a certain number of times, or where the connected vertex cover, feedback set or induced forest has a certain number of nodes.
PubDate: 2017-08-29
DOI: 10.1007/s00037-017-0160-4

• Communication with Contextual Uncertainty
Abstract: We introduce a simple model illustrating the utility of context in compressing communication and the challenge posed by uncertainty of knowledge of context. We consider a variant of distributional communication complexity where Alice gets some information $X \in \{0,1\}^n$ and Bob gets $Y \in \{0,1\}^n$, where (X, Y) is drawn from a known distribution, and Bob wishes to compute some function g(X, Y) or some close approximation to it (i.e., the output is g(X, Y) with high probability over (X, Y)). In our variant, Alice does not know g, but only knows some function f which is a very close approximation to g. Thus, the function being computed forms the context for the communication. It is an enormous implicit input, potentially described by a truth table of size $2^n$. Imprecise knowledge of this function models the (mild) uncertainty in this context. We show that uncertainty can lead to a huge cost in communication. Specifically, we construct a distribution $\mu$ over $(X, Y) \in \{0,1\}^n \times \{0,1\}^n$ and a class of function pairs (f, g) which are very close (i.e., disagree with o(1) probability when (X, Y) are sampled according to $\mu$), for which the communication complexity of f or g in the standard setting is one bit, whereas the (two-way) communication complexity in the uncertain setting is at least $\Omega(\sqrt{n})$ bits, even when allowing a constant probability of error. It turns out that this blow-up in communication complexity can be attributed in part to the mutual information between X and Y. In particular, we give an efficient protocol for communication under contextual uncertainty that incurs only a small blow-up in communication if this mutual information is small. 
Namely, we show that if g has a communication protocol with complexity k in the standard setting and the mutual information between X and Y is I, then g has a one-way communication protocol with complexity $O((1+I)\cdot 2^k)$ in the uncertain setting. This result is an immediate corollary of an even stronger result, which shows that if g has one-way communication complexity k, then it has one-way uncertain-communication complexity at most $O((1+I)\cdot k)$. In the particular case where the input distribution is a product distribution (and so I = 0), the protocol in the uncertain setting incurs only a constant-factor blow-up in one-way communication and error.
PubDate: 2017-08-29
DOI: 10.1007/s00037-017-0161-3

• On vanishing of Kronecker coefficients
• Authors: Christian Ikenmeyer; Ketan D. Mulmuley; Michael Walter
Abstract: We show that the problem of deciding positivity of Kronecker coefficients is NP-hard. Previously, this problem was conjectured to be in P, just as for the Littlewood–Richardson coefficients. Our result establishes in a formal way that Kronecker coefficients are more difficult than Littlewood–Richardson coefficients, unless P = NP. We also show that there exists a #P-formula for a particular subclass of Kronecker coefficients whose positivity is NP-hard to decide. This is evidence that, despite the hardness of the positivity problem, there may well exist a positive combinatorial formula for the Kronecker coefficients. Finding such a formula is a major open problem in representation theory and algebraic combinatorics. Finally, we consider the existence of partition triples $(\lambda, \mu, \pi)$ such that the Kronecker coefficient $k^\lambda_{\mu,\pi} = 0$ but the Kronecker coefficient $k^{l\lambda}_{l\mu,l\pi} > 0$ for some integer l > 1. Such "holes" are of great interest, as they witness the failure of the saturation property for the Kronecker coefficients, which is still poorly understood. Using insight from computational complexity theory, we turn our hardness proof into a positive result: we show that not only do there exist many such triples, but they can also be found efficiently. Specifically, we show that, for any $0 < \epsilon \leq 1$, there exists $0 < a < 1$ such that, for all m, there exist $\Omega(2^{m^a})$ partition triples $(\lambda, \mu, \mu)$ in the Kronecker cone such that: (a) the Kronecker coefficient $k^\lambda_{\mu,\mu}$ is zero, (b) the height of $\mu$ is m, (c) the height of $\lambda$ is $\leq m^\epsilon$, and (d) $|\lambda| = |\mu| \leq m^3$. The proof of the last result illustrates the effectiveness of the explicit proof strategy of GCT.
PubDate: 2017-07-27
DOI: 10.1007/s00037-017-0158-y

• Toward the KRW Composition Conjecture: Cubic Formula Lower Bounds via Communication Complexity
• Authors: Irit Dinur; Or Meir
Abstract: One of the major challenges in circuit complexity research is proving super-polynomial lower bounds for de Morgan formulas. Karchmer et al. (Comput Complex 5(3/4):191–204, 1995b) suggested approaching this problem by proving that formula complexity behaves "as expected" with respect to the composition of functions $f \diamond g$. They showed that this conjecture, if proved, would imply super-polynomial formula lower bounds. The first step toward proving the KRW conjecture was made by Edmonds et al. (Comput Complex 10(3):210–246, 2001), who proved an analogue of the conjecture for the composition of "universal relations." In this work, we extend the argument of Edmonds et al. (2001) further to $f \diamond g$ where f is an arbitrary function and g is the parity function. While this special case of the KRW conjecture was already proved implicitly in Håstad's work on random restrictions (Håstad in SIAM J Comput 27(1):48–64, 1998), our proof seems more likely to generalize to other cases of the conjecture. In particular, our proof uses an entirely different approach, based on the communication complexity technique of Karchmer & Wigderson (SIAM J Discrete Math 3(2):255–265, 1990). In addition, our proof gives a new structural result, which roughly says that the naive way of computing $f \diamond g$ is the only optimal way. Along the way, we obtain a new proof of the state-of-the-art formula lower bound of $n^{3-o(1)}$ due to Håstad (1998).
PubDate: 2017-07-27
DOI: 10.1007/s00037-017-0159-x

• Autoreducibility of NP-Complete Sets under Strong Hypotheses
• Authors: John M. Hitchcock; Hadi Shafei
Abstract: We study the polynomial-time autoreducibility of NP-complete sets and obtain separations under strong hypotheses for NP. Assuming there is a p-generic set in NP, we show the following: For every $k \geq 2$, there is a k-T-complete set for NP that is k-T-autoreducible, but is not k-tt-autoreducible or (k − 1)-T-autoreducible. For every $k \geq 3$, there is a k-tt-complete set for NP that is k-tt-autoreducible, but is not (k − 1)-tt-autoreducible or (k − 2)-T-autoreducible. There is a tt-complete set for NP that is tt-autoreducible, but is not btt-autoreducible. Under the stronger assumption that there is a p-generic set in NP ∩ coNP, we show: For every $k \geq 2$, there is a k-tt-complete set for NP that is k-tt-autoreducible, but is not (k − 1)-T-autoreducible. Our proofs are based on constructions from separating NP-completeness notions. For example, the construction of a 2-T-complete set for NP that is not 2-tt-complete also separates 2-T-autoreducibility from 2-tt-autoreducibility.
PubDate: 2017-07-18
DOI: 10.1007/s00037-017-0157-z

• The Average Sensitivity of Bounded-Depth Formulas
• Authors: Benjamin Rossman
Abstract: We show that unbounded fan-in Boolean formulas of depth d + 1 and size s have average sensitivity $O(\frac{1}{d}\log s)^d$. In particular, this gives a tight $2^{\Omega(d(n^{1/d}-1))}$ lower bound on the size of depth-(d + 1) formulas computing the parity function. These results strengthen the corresponding $2^{\Omega(n^{1/d})}$ and $O(\log s)^d$ bounds for circuits due to Håstad (Proceedings of the 18th annual ACM symposium on theory of computing, ACM, New York, 1986) and Boppana (Inf Process Lett 63(5):257–261, 1997). Our proof technique studies a random process in which the switching lemma is applied to formulas in an efficient manner.
PubDate: 2017-07-05
DOI: 10.1007/s00037-017-0156-0
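Average sensitivity has a direct finite definition that can be checked by brute force for small n: average, over a uniform input x, the number of coordinates whose flip changes the function's value. A small sketch (ours) confirming, for instance, that parity on n bits has average sensitivity exactly n, which is why the bound above forces parity formulas to be large:

```python
from itertools import product

def average_sensitivity(f, n):
    """Average over uniform x in {0,1}^n of the number of coordinates i
    such that flipping x_i changes f(x)."""
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2 ** n

parity = lambda x: sum(x) % 2  # every flip changes parity: sensitivity n everywhere
```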

• Local Expanders
• Authors: Emanuele Viola; Avi Wigderson
Abstract: A map $f : \{0,1\}^n \to \{0,1\}^n$ has locality t if every output bit of f depends on at most t input bits. Arora et al. (Colloquium on automata, languages and programming, ICALP, 2009) asked whether there exist bounded-degree expander graphs on $2^n$ nodes such that the neighbors of a node $x \in \{0,1\}^n$ can be computed by maps of constant locality. We give an explicit construction of such graphs with locality one. We then give three applications of this construction: (1) lossless expanders with constant locality, (2) more efficient error reduction for randomized algorithms, and (3) more efficient hardness amplification of one-way permutations. We also give, for n of the form $n = 4 \cdot 3^t$, an explicit construction of bipartite Ramanujan graphs of degree 3 with $2^n - 1$ nodes on each side, such that the neighbors of a node $x \in \{0,1\}^n \setminus \{0^n\}$ can be computed either (1) with constant locality or (2) in constant time using standard operations on words of length $\Omega(n)$. Our results use, in a black-box fashion, deep explicit constructions of Cayley expander graphs, by Kassabov (Invent Math 170(2):327–354, 2007) for the symmetric group $S_n$ and by Morgenstern (J Comb Theory Ser B 62(1):44–62, 1994) for the special linear group $\mathrm{SL}(2, \mathbb{F}_{2^n})$.
PubDate: 2017-06-20
DOI: 10.1007/s00037-017-0155-1
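The locality of a map on small n can be measured by brute force: output bit j depends on input bit i exactly when flipping i changes bit j on some input. A sketch (ours, not the paper's construction) showing, e.g., that a cyclic bit rotation has locality one in the sense defined above:

```python
from itertools import product

def locality(f, n):
    """Max over output bits j of the number of input bits that output j
    depends on, determined by exhaustively flipping every bit of every input."""
    deps = [set() for _ in range(n)]
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            fy = f(tuple(y))
            for j in range(n):
                if fx[j] != fy[j]:
                    deps[j].add(i)  # output j depends on input i
    return max(len(d) for d in deps)

rotate = lambda x: x[1:] + x[:1]  # cyclic shift: each output bit reads one input bit
```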

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
