Authors: Igor Pak; Greta Panova Pages: 1 - 36 Abstract: We study the complexity of computing Kronecker coefficients \({g(\lambda,\mu,\nu)}\). We give explicit bounds in terms of the number of parts \({\ell}\) in the partitions, their largest part size N, and the smallest second part M of the three partitions. When M = O(1), i.e., one of the partitions is hook-like, the bounds are linear in log N, but depend exponentially on \({\ell}\). Moreover, similar bounds hold even when \({M=e^{O(\ell)}}\). By a separate argument, we show that the positivity of Kronecker coefficients can be decided in O(log N) time for a bounded number \({\ell}\) of parts and without restriction on M. Related problems of computing Kronecker coefficients when one partition is a hook and computing characters of \({S_n}\) are also considered. PubDate: 2017-03-01 DOI: 10.1007/s00037-015-0109-4 Issue No: Vol. 26, No. 1 (2017)
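For orientation, the Kronecker coefficient \({g(\lambda,\mu,\nu)}\) is the multiplicity of the irreducible character \({\chi^{\nu}}\) in the product \({\chi^{\lambda}\chi^{\mu}}\). The following minimal sketch evaluates the standard character formula for the small case \({S_3}\) with a hardcoded character table; it illustrates the definition only, not the bounds or algorithms of the paper.

```python
from fractions import Fraction

# Character table of S_3: rows indexed by partitions of 3, columns by the
# conjugacy classes (identity, transpositions, 3-cycles).
CLASS_SIZES = [1, 3, 2]
CHI = {
    (3,):      [1,  1,  1],   # trivial
    (2, 1):    [2,  0, -1],   # standard
    (1, 1, 1): [1, -1,  1],   # sign
}

def kronecker(lam, mu, nu):
    """g(lam, mu, nu) = (1/|S_3|) * sum over classes of |class| * chi products."""
    total = sum(c * a * b * d
                for c, a, b, d in zip(CLASS_SIZES, CHI[lam], CHI[mu], CHI[nu]))
    return Fraction(total, 6)

print(kronecker((2, 1), (2, 1), (2, 1)))  # 1
```

For instance, `kronecker((3,), (3,), (1,1,1))` is 0, since the trivial character squared contains no copy of the sign character.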

Authors: Eli Ben-Sasson; Noga Ron-Zewi; Madhu Sudan Pages: 37 - 77 Abstract: We show that sparse affine-invariant linear properties over arbitrary finite fields are locally testable with a constant number of queries. Given a finite field \({\mathbb{F}_q}\) and an extension field \({\mathbb{F}_{q^n}}\), a property is a set of functions mapping \({\mathbb{F}_{q^n}}\) to \({\mathbb{F}_q}\). The property is said to be affine-invariant if it is invariant under affine transformations of \({\mathbb{F}_{q^n}}\), linear if it is an \({\mathbb{F}_q}\)-vector space, and sparse if its size is polynomial in the domain size. Our work completes a line of work initiated by Grigorescu et al. (2009) and followed by Kaufman & Lovett (2011). The latter showed such a result for the case when q is prime. Extending to non-prime cases turns out to be non-trivial, and our proof involves some detours into additive combinatorics, as well as a new calculus for building property testers for affine-invariant linear properties. PubDate: 2017-03-01 DOI: 10.1007/s00037-015-0115-6 Issue No: Vol. 26, No. 1 (2017)

Authors: Barış Aydınlıoğlu; Dieter van Melkebeek Pages: 79 - 118 Abstract: In several settings, derandomization is known to follow from circuit lower bounds that themselves are equivalent to the existence of pseudorandom generators. This leaves open the question whether derandomization implies the circuit lower bounds that are known to imply it, i.e., whether the ability to derandomize in any way implies the ability to do so in the canonical way through pseudorandom generators. For the setting of decision problems, Impagliazzo, Kabanets, and Wigderson (2002) implicitly showed the following equivalence: Randomized polynomial-time decision procedures for promise problems can be simulated in \({\sf{NSUBEXP}}\) (the subexponential version of \({\sf{NP}}\)) with subpolynomial advice on infinitely many input lengths if and only if \({{\sf{NEXP}} \not\subseteq \sf{P}/poly}\). We establish a full analog in the setting of verification procedures: Arthur-Merlin games for promise problems can be simulated in \({\sf{\Sigma_{2}SUBEXP}}\) (the subexponential version of \({\sf{\Sigma_{2}P}}\)) with subpolynomial advice on infinitely many input lengths if and only if \({\sf{\Sigma_2EXP} \not\subseteq \sf{NP}/poly}\). A key ingredient in our proofs is improved Karp-Lipton-style collapse results for nondeterministic circuits. The following are two instantiations that may be of independent interest: Assuming that Arthur-Merlin games can be derandomized in \({\sf{\Sigma_{2}P}}\), we show that (i) \({\sf{PSPACE} \subseteq \sf{NP}/poly}\) implies \({\sf{PSPACE} \subseteq \sf{\Sigma_2P}}\) and (ii) \({\sf{coNP} \subseteq \sf{NP}/poly}\) implies \({\sf{PH} \subseteq \sf{P^{\Sigma_2P}}}\).
Of possible independent interest is a generic framework that we provide, which captures several results in the literature of the form “derandomization implies circuit lower bounds.” In particular, our framework enables a unified view of our result, the one by Impagliazzo et al. (2002) mentioned above, as well as the result by Kabanets and Impagliazzo (2004) that derandomizing polynomial identity testing yields arithmetic circuit lower bounds. PubDate: 2017-03-01 DOI: 10.1007/s00037-014-0095-y Issue No:Vol. 26, No. 1 (2017)

Authors: Salman Beigi; Omid Etesami; Amin Gohari Pages: 119 - 145 Abstract: “Help bits” are some limited trusted information about an instance or instances of a computational problem that may reduce the computational complexity of solving that instance or instances. In this paper, we study the value of help bits in the settings of randomized and average-case complexity. If k instances of a decision problem can be efficiently solved using \({\ell < k}\) help bits, then without access to help bits one can efficiently compute a k-bit vector that is not equal to the k-bit vector of solutions to the k instances. A decision problem with this property is called k-membership comparable. Amir, Beigel, and Gasarch (1990) show that for constant k, all k-membership comparable languages are in P/poly. We extend this result to the setting of randomized computation: We show that for k at most logarithmic, the decision problem is k-membership comparable if, using \({\ell}\) help bits, k instances of the problem can be efficiently solved with probability greater than \({2^{\ell-k}}\). The same conclusion holds if, using less than \({k(1 - h(\alpha))}\) help bits (where \({h(\cdot)}\) is the binary entropy function), we can efficiently solve a \({1-\alpha}\) fraction of the instances correctly with non-vanishing probability. We note that when k is constant, k-membership comparability implies membership in P/poly. Next, we consider the setting of average-case complexity: Assume that we can solve k instances of a decision problem using some help bits whose entropy is less than k when the k instances are drawn independently from a particular distribution. Then we can efficiently solve an instance drawn from that distribution with probability better than 1/2. Finally, we show that in the case where k is super-logarithmic, assuming k-membership comparability of a decision problem, one cannot prove that the problem is in P/poly by a “relativizing” proof technique.
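The counting argument behind membership comparability can be made concrete: a solver that uses \({\ell < k}\) help bits has at most \({2^{\ell} < 2^{k}}\) possible outputs, so some k-bit vector escapes them all, and that vector cannot equal the true solution vector. A minimal Python sketch (the solver and its help-bit interface here are hypothetical illustrations, not from the paper):

```python
from itertools import product

def non_solution_vector(solver, k, ell):
    """Given solver(help_bits) -> k-bit tuple, return a k-bit vector that
    differs from every possible solver output (hence from the true solution
    vector), since the solver can emit at most 2**ell < 2**k outputs."""
    reachable = {solver(h) for h in product((0, 1), repeat=ell)}
    for candidate in product((0, 1), repeat=k):
        if candidate not in reachable:
            return candidate

# toy solver: ignores the instances and just echoes its single help bit 3 times
solver = lambda h: (h[0],) * 3
print(non_solution_vector(solver, k=3, ell=1))  # (0, 0, 1)
```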
All previously known proofs in this area have been relativizing. PubDate: 2017-03-01 DOI: 10.1007/s00037-016-0135-x Issue No: Vol. 26, No. 1 (2017)

Authors: Gábor Braun; Rahul Jain; Troy Lee; Sebastian Pokutta Pages: 147 - 197 Abstract: Common information was introduced by Wyner (IEEE Trans Inf Theory 21(2):163–179, 1975) as a measure of dependence of two random variables. This measure has recently been resurrected as a lower bound on the logarithm of the nonnegative rank of a nonnegative matrix by Braun and Pokutta (Proceedings of FOCS, 2013) and Jain et al. (Proceedings of SODA, 2013). Lower bounds on nonnegative rank have important applications to several areas, such as communication complexity and combinatorial optimization. We begin a systematic study of common information, extending the dual characterization of Witsenhausen (SIAM J Appl Math 31(2):313–333, 1976). Our main results are: (i) Common information is additive under tensoring of matrices. (ii) It characterizes the (logarithm of the) amortized nonnegative rank of a matrix, i.e., the minimal nonnegative rank under tensoring and small \({\ell_1}\) perturbations. We also provide quantitative bounds generalizing previous asymptotic results by Wyner (IEEE Trans Inf Theory 21(2):163–179, 1975). (iii) We deliver explicit witnesses from the dual problem for several matrices, leading to explicit lower bounds on common information that are robust under \({\ell_1}\) perturbations. This includes improved lower bounds for perturbations of the all-important unique disjointness partial matrix, as well as new insights into its information-theoretic structure. PubDate: 2017-03-01 DOI: 10.1007/s00037-016-0125-z Issue No: Vol. 26, No. 1 (2017)

Authors: Rahul Jain; Zhaohui Wei; Penghui Yao; Shengyu Zhang Pages: 199 - 228 Abstract: The concepts of quantum correlation complexity and quantum communication complexity were recently proposed to quantify the minimum amount of resources needed to generate bipartite classical or quantum states in the single-shot setting. The former is the minimum size of the initially shared state \({\sigma}\) on which local operations by the two parties (without communication) can generate the target state \({\rho}\), and the latter is the minimum amount of communication needed when initially sharing nothing. In this paper, we generalize these two concepts to multipartite cases, for both exact and approximate state generation. Our results are summarized as follows. For multipartite pure states, the correlation complexity can be completely characterized by the local ranks of the subsystems. We extend the notion of the PSD-rank of matrices to tensors, and use it to bound the quantum correlation complexity of generating multipartite classical distributions. For generating multipartite mixed quantum states, communication complexity is not always equal to correlation complexity (as opposed to the bipartite case), but they differ by at most a factor of 2. Generating a multipartite mixed quantum state has the same communication complexity as generating its optimal purification, but the correlation complexities of these two tasks can differ (though still by less than a factor of 2). To generate a bipartite classical distribution P(x, y) approximately, the quantum communication complexity is completely characterized by the approximate PSD-rank of P. The quantum correlation complexity of approximately generating multipartite pure states is bounded by approximate local ranks. PubDate: 2017-03-01 DOI: 10.1007/s00037-016-0126-y Issue No: Vol. 26, No. 1 (2017)

Authors: Amir Shpilka; Avishay Tal; Ben Lee Volk Pages: 229 - 273 Abstract: In this paper, we prove results regarding Boolean functions with small spectral norm (the spectral norm of f is \({\|\hat{f}\|_1 = \sum_{\alpha} |\hat{f}(\alpha)|}\)). Specifically, we prove the following results for functions \({f : \{0, 1\}^n \to \{0, 1\}}\) with \({\|\hat{f}\|_1 = A}\). There is an affine subspace V of co-dimension at most \({A^2}\) such that \({f|_V}\) is constant. f can be computed by a parity decision tree of size at most \({2^{A^2} n^{2A}}\). (A parity decision tree is a decision tree whose nodes are labeled with arbitrary linear functions.) f can be computed by a De Morgan formula of size \({O(2^{A^2} n^{2A + 2})}\) and by a De Morgan formula of depth \({O(A^2 + \log(n) \cdot A)}\). If in addition f has at most s nonzero Fourier coefficients, then f can be computed by a parity decision tree of depth at most \({A^2 \log s}\). For every \({\epsilon > 0}\) there is a parity decision tree of depth \({O(A^2 + \log(1/\epsilon))}\) and size \({2^{O(A^2)} \cdot \min \{1/\epsilon^2, \log(1/\epsilon)^{2A}\}}\) that \({\epsilon}\)-approximates f. Furthermore, this tree can be learned (in the uniform distribution model), with probability \({1 - \delta}\), using \({{\tt poly}(n, {\rm exp}(A^2), 1/\epsilon, \log(1/\delta))}\) membership queries. All of the results above (except the third) also hold (with a slight change in parameters) for functions \({f : \mathbb{Z}_p^n \to \{0, 1\}}\). PubDate: 2017-03-01 DOI: 10.1007/s00037-015-0110-y Issue No: Vol. 26, No. 1 (2017)
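For small n, the spectral norm in the definition above can be computed by brute force: evaluate every Fourier coefficient \({\hat{f}(\alpha) = 2^{-n}\sum_x f(x)(-1)^{\alpha\cdot x}}\) and sum absolute values. A minimal sketch (illustrating the definition only, not the paper's structural results):

```python
from itertools import product

def fourier_coeffs(f, n):
    """Fourier coefficients of f: {0,1}^n -> {0,1} w.r.t. the characters
    chi_a(x) = (-1)^(a.x): fhat(a) = 2^-n * sum_x f(x) * chi_a(x)."""
    pts = list(product((0, 1), repeat=n))
    return {a: sum(f(x) * (-1) ** sum(ai * xi for ai, xi in zip(a, x))
                   for x in pts) / 2 ** n
            for a in pts}

def spectral_norm(f, n):
    return sum(abs(c) for c in fourier_coeffs(f, n).values())

# AND on 2 bits: every fhat(a) is +-1/4, so the spectral norm is 1
and2 = lambda x: x[0] & x[1]
print(spectral_norm(and2, 2))  # 1.0
```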

Authors: Russell Impagliazzo; Valentine Kabanets Pages: 275 - 321 Abstract: For a class \({\mathcal{F}}\) of formulas (general de Morgan or read-once de Morgan), the shrinkage exponent \({\Gamma_{\mathcal{F}}}\) is the parameter measuring the reduction in size of a formula \({F\in\mathcal{F}}\) after \({F}\) is hit with a random restriction. A Boolean function \({f\colon \{0,1\}^n\to\{1,-1\}}\) is Fourier-concentrated if, when viewed in the Fourier basis, \({f}\) has most of its total mass on “low-degree” coefficients. We show a direct connection between the two notions by proving that shrinkage implies Fourier concentration: for a shrinkage exponent \({\Gamma_{\mathcal{F}}}\), a formula \({F\in\mathcal{F}}\) of size \({s}\) will have most of its Fourier mass on the coefficients of degree up to about \({s^{1/\Gamma_{\mathcal{F}}}}\). More precisely, for a Boolean function \({f\colon\{0,1\}^n\to\{1,-1\}}\) computable by a formula of (large enough) size \({s}\) and for any parameter \({r > 0}\), $$\sum_{A\subseteq [n]\;:\;|A| \geq s^{1/\Gamma} \cdot r} \hat{f}(A)^2 \leq s\cdot \mathrm{polylog}(s)\cdot \exp\left(-\frac{r^{\frac{\Gamma}{\Gamma-1}}}{s^{o(1)}}\right),$$ where \({\Gamma}\) is the shrinkage exponent for the corresponding class of formulas: \({\Gamma=2}\) for de Morgan formulas, and \({\Gamma=1/\log_2(\sqrt{5}-1)\approx 3.27}\) for read-once de Morgan formulas. This Fourier concentration result is optimal, to within the \({o(1)}\) term in the exponent of \({s}\). As a standard application of these Fourier concentration results, we get that subquadratic-size de Morgan formulas have negligible correlation with parity. We also show the tight \({\Theta(s^{1/\Gamma})}\) bound on the average sensitivity of read-once formulas of size \({s}\), which mirrors the known tight bound \({\Theta(\sqrt{s})}\) on the average sensitivity of general de Morgan \({s}\)-size formulas. PubDate: 2017-03-01 DOI: 10.1007/s00037-016-0134-y Issue No: Vol. 26, No. 1 (2017)

Authors: Elena Grigorescu; Chris Peikert Abstract: The question of list-decoding error-correcting codes over finite fields (under the Hamming metric) has been widely studied in recent years. Motivated by the similar discrete linear structure of linear codes and point lattices in \({\mathbb{R}^{N}}\), and their many shared applications across complexity theory, cryptography, and coding theory, we initiate the study of list decoding for lattices. Namely: for a lattice \({\mathcal{L}\subseteq \mathbb{R}^N}\), given a target vector \({r \in \mathbb{R}^N}\) and a distance parameter d, output the set of all lattice points \({w \in \mathcal{L}}\) that are within distance d of r. In this work, we focus on combinatorial and algorithmic questions related to list decoding for the well-studied family of Barnes–Wall lattices. Our main contributions are twofold: We give tight combinatorial bounds on the worst-case list size, showing it to be polynomial in the lattice dimension for any error radius bounded away from the lattice’s minimum distance (in the Euclidean norm). We use our combinatorial bounds to generalize the unique-decoding algorithm of Micciancio and Nicolosi (IEEE International Symposium on Information Theory 2008) to work beyond the unique-decoding radius and still run in polynomial time up to the list-decoding radius. Just like the Micciancio–Nicolosi algorithm, our algorithm is highly parallelizable, and with sufficiently many processors it can run in parallel time only poly-logarithmic in the lattice dimension. PubDate: 2017-01-12 DOI: 10.1007/s00037-016-0151-x
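The list-decoding task defined above is easy to state concretely in a toy setting: for the integer lattice \({\mathbb{Z}^2}\) (not Barnes–Wall, and nothing like the paper's efficient algorithm), one can simply enumerate a bounding box around the target and keep the points within radius d:

```python
from itertools import product
from math import dist, floor, ceil

def list_decode_Z2(r, d):
    """All points of the integer lattice Z^2 within Euclidean distance d
    of the target r, found by brute force over a bounding box."""
    x, y = r
    box_x = range(floor(x - d), ceil(x + d) + 1)
    box_y = range(floor(y - d), ceil(y + d) + 1)
    return sorted(p for p in product(box_x, box_y) if dist(p, r) <= d)

print(list_decode_Z2((0.5, 0.5), 0.8))  # the 4 unit-square corners
```

The combinatorial question studied in the paper is how large such a list can get as the radius grows toward the minimum distance; here each corner is at distance \({\sqrt{0.5}\approx 0.707}\) from the target.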

Authors: Yuichi Yoshida Pages: 737 - 773 Abstract: In the List H-Homomorphism Problem, for a graph H that is a parameter of the problem, an instance consists of an undirected graph G with a list constraint \({L(v) \subseteq V(H)}\) for each vertex \({v \in V(G)}\), and the objective is to determine whether there is a list H-homomorphism \({f:V(G) \to V(H)}\), that is, \({f(v) \in L(v)}\) for every \({v \in V(G)}\) and \({(f(u),f(v)) \in E(H)}\) whenever \({(u,v) \in E(G)}\). We consider the problem of testing list H-homomorphisms in the following weighted setting: An instance consists of an undirected graph G, list constraints L, weights imposed on the vertices of G, and a map \({f:V(G) \to V(H)}\) given as oracle access. The objective is to determine whether f is a list H-homomorphism or far from any list H-homomorphism, where farness is measured by the total weight of the vertices \({v \in V(G)}\) at which f(v) must be changed so as to make f a list H-homomorphism. In this paper, we classify graphs H with respect to the number of queries to f required to test list H-homomorphisms. Specifically, we show that (i) list H-homomorphisms are testable with a constant number of queries if and only if H is a reflexive complete graph or an irreflexive complete bipartite graph and (ii) list H-homomorphisms are testable with a sublinear number of queries if and only if H is a bi-arc graph. PubDate: 2016-12-01 DOI: 10.1007/s00037-014-0093-0 Issue No: Vol. 25, No. 4 (2016)
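The definition in the first sentence is a direct pair of conditions, which a short checker makes explicit (this sketches the definition only; the paper is about property-testing f with few queries, not exhaustive verification):

```python
def is_list_hom(f, G_edges, H_edges, lists):
    """Check that f is a list H-homomorphism: f(v) in L(v) for every
    vertex, and every edge of G maps to an edge of H (undirected)."""
    H = {frozenset(e) for e in H_edges}
    return (all(f[v] in lists[v] for v in lists) and
            all(frozenset((f[u], f[v])) in H for u, v in G_edges))

# G = path a-b-c; H = a single edge {0,1}, an irreflexive complete bipartite graph
G_edges = [("a", "b"), ("b", "c")]
H_edges = [(0, 1)]
lists = {"a": {0}, "b": {0, 1}, "c": {0, 1}}
print(is_list_hom({"a": 0, "b": 1, "c": 0}, G_edges, H_edges, lists))  # True
print(is_list_hom({"a": 0, "b": 0, "c": 1}, G_edges, H_edges, lists))  # False
```

The second call fails because the edge (a, b) maps to the non-edge (0, 0) of H.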

Authors: Michael Blondin; Andreas Krebs; Pierre McKenzie Pages: 775 - 814 Abstract: The problem of determining whether several finite automata accept a word in common is closely related to the well-studied membership problem in transformation monoids. We raise the issue of limiting the number of final states in the automata intersection problem. For automata with two final states, we show the problem to be \({\oplus}\)L-complete or NP-complete according to whether a nontrivial monoid other than a direct product of cyclic groups of order 2 is allowed in the automata. We further consider idempotent commutative automata and (mainly Abelian) group automata with one, two, or three final states over a singleton or larger alphabet, elucidating (under the usual hypotheses on complexity classes) the complexity of the intersection nonemptiness and related problems in each case. PubDate: 2016-12-01 DOI: 10.1007/s00037-014-0089-9 Issue No: Vol. 25, No. 4 (2016)
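The intersection nonemptiness problem itself is the textbook product construction: run all automata in lockstep and search for a tuple of states that is simultaneously final. A minimal BFS sketch (the generic exponential-state procedure, not the paper's fine-grained completeness results):

```python
from collections import deque

def intersection_nonempty(automata, alphabet):
    """BFS over the product automaton. Each automaton is a triple
    (start, finals, delta) with delta[(state, symbol)] -> state."""
    start = tuple(a[0] for a in automata)
    seen, queue = {start}, deque([start])
    while queue:
        states = queue.popleft()
        if all(s in a[1] for s, a in zip(states, automata)):
            return True  # every automaton is in a final state
        for sym in alphabet:
            nxt = tuple(a[2][(s, sym)] for s, a in zip(states, automata))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# two DFAs over {a, b}: "even number of a's" and "at least one b"
even_a = (0, {0}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1})
one_b = (0, {1}, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 1})
print(intersection_nonempty([even_a, one_b], "ab"))  # True (e.g. the word "b")
```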

Authors: Laurent Evain; Mathias Lederer; Bjarke Hammersholt Roune Pages: 815 - 848 Abstract: We introduce standard decomposition, a natural way of decomposing a labeled graph into a sum of certain labeled subgraphs. We motivate this graph-theoretic concept by relating it to Connect Four decompositions of standard sets. We prove that all standard decompositions can be generated in polynomial time as a function of the combined size of the input and the output. This implies that all Connect Four decompositions can be generated in polynomial time. PubDate: 2016-12-01 DOI: 10.1007/s00037-016-0123-1 Issue No: Vol. 25, No. 4 (2016)

Authors: Andrew Hughes; Debasis Mandal; A. Pavan; Nathan Russell; Alan L. Selman Pages: 883 - 919 Abstract: Even, Selman, and Yacobi (Even et al., Inf Control 61(2):159–173, 1984; Selman and Yacobi, Proceedings of the 8th International Colloquium on Automata, Languages, and Programming, Lecture Notes in Computer Science, vol 140, Springer, Berlin, pp 502–509, 1982) formulated a conjecture that in current terminology asserts that there do not exist disjoint NP-pairs all of whose separators are NP-hard via Turing reductions. In this paper, we consider a variant of this conjecture: there do not exist disjoint NP-pairs all of whose separators are NP-hard via bounded-truth-table reductions. We provide evidence for this conjecture. We also observe that if the original conjecture holds, then some of the known probabilistic public-key cryptosystems are not NP-hard to crack. PubDate: 2016-12-01 DOI: 10.1007/s00037-015-0107-6 Issue No: Vol. 25, No. 4 (2016)

Authors: Jean Bourgain; Zeev Dvir; Ethan Leeman Pages: 921 - 931 Abstract: We describe a construction of explicit affine extractors over large finite fields with exponentially small error and linear output length. Our construction relies on a deep theorem of Deligne giving tight estimates for exponential sums over smooth varieties in high dimensions. PubDate: 2016-12-01 DOI: 10.1007/s00037-015-0108-5 Issue No: Vol. 25, No. 4 (2016)

Authors: Dana Moshkovitz Abstract: A long line of work in Theoretical Computer Science shows that a function is close to a low-degree polynomial iff it is locally close to a low-degree polynomial. This is known as low-degree testing and is the core of the algebraic approach to the construction of PCPs. We obtain a low-degree test whose error, i.e., the probability that it accepts a function that does not correspond to a low-degree polynomial, is polynomially smaller than that of existing low-degree tests. A key tool in our analysis is an analysis of the sampling properties of the incidence graph of degree-k curves and k′-tuples of points in a finite space \({\mathbb{F}^m}\). We show that the Sliding Scale Conjecture in PCP, namely the conjecture that there are PCP verifiers whose error is exponentially small in their randomness, would follow from a derandomization of our low-degree test. PubDate: 2016-12-19 DOI: 10.1007/s00037-016-0149-4

Authors: V. Arvind; Johannes Köbler; Gaurav Rattan; Oleg Verbitsky Abstract: Color refinement is a classical technique used to show that two given graphs G and H are non-isomorphic; it is very efficient, although it does not succeed on all graphs. We call a graph G amenable to color refinement if the color refinement procedure succeeds in distinguishing G from any non-isomorphic graph H. Babai et al. (SIAM J Comput 9(3):628–635, 1980) have shown that random graphs are amenable with high probability. We determine the exact range of applicability of color refinement by showing that amenable graphs are recognizable in time \({O((n+m)\log n)}\), where n and m denote the number of vertices and the number of edges in the input graph. We use our characterization of amenable graphs to analyze the approach to Graph Isomorphism based on the notion of compact graphs. A graph is called compact if the polytope of its fractional automorphisms is integral. Tinhofer (Discrete Appl Math 30(2–3):253–264, 1991) noted that isomorphism testing for compact graphs can be done quite efficiently by linear programming. However, the problem of characterizing compact graphs and recognizing them in polynomial time remains an open question. Our results in this direction are summarized below: We show that all amenable graphs are compact. In other words, the applicability range of Tinhofer’s linear programming approach to isomorphism testing is at least as large as that of the combinatorial approach based on color refinement. Exploring the relationship between color refinement and compactness further, we study related combinatorial and algebraic graph properties introduced by Tinhofer and Godsil. We show that the corresponding classes of graphs form a hierarchy, and we prove that recognizing each of these graph classes is P-hard. In particular, this gives a first complexity lower bound for recognizing compact graphs. PubDate: 2016-12-19 DOI: 10.1007/s00037-016-0147-6
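The color refinement procedure itself is simple: repeatedly replace each vertex's color by the pair (own color, multiset of neighbors' colors) until the partition stabilizes. A minimal sketch of the basic iteration (not the near-linear-time implementation established in the paper):

```python
def color_refinement(adj):
    """Iteratively refine vertex colors: a vertex's new color is its old
    color together with the sorted multiset of its neighbors' old colors.
    Returns the stable coloring."""
    colors = {v: 0 for v in adj}
    while True:
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        # canonically rename signatures to small integers
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new = {v: palette[signatures[v]] for v in adj}
        if new == colors:
            return colors
        colors = new

# a path on 4 vertices: endpoints get one color, inner vertices another
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(color_refinement(path4))  # {0: 0, 1: 1, 2: 1, 3: 0}
```

The stable partition here leaves vertices 1 and 2 in one class, consistent with the automorphism of the path that swaps them.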

Abstract: We show that approximating the second eigenvalue of stochastic operators is BPL-complete, thus giving a natural problem complete for this class. We also show that approximating any eigenvalue of a stochastic and Hermitian operator with constant accuracy can be done in BPL. This work, together with related work on the subject, reveals a picture in which the various space-bounded classes (e.g., probabilistic logspace, quantum logspace, and the class DET) can be characterized by algebraic problems (such as approximating the spectral gap), where, roughly speaking, the difference between the classes lies in the kind of operators they can handle (e.g., stochastic, Hermitian, or arbitrary). PubDate: 2016-12-16 DOI: 10.1007/s00037-016-0150-y
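For intuition about the quantity involved: a row-stochastic matrix has top eigenvalue 1 (right eigenvector all-ones, left eigenvector the stationary distribution), and the second eigenvalue governs mixing. The sketch below estimates it numerically by deflating the top eigenpair and power-iterating; this is a floating-point illustration only, unrelated to the logspace algorithms of the paper.

```python
def second_eigenvalue(M, iters=2000):
    """Estimate |lambda_2| of a row-stochastic matrix M by deflating the
    top eigenpair and power-iterating on N = M - ones*pi."""
    n = len(M)
    # stationary distribution pi via left power iteration
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * M[i][j] for i in range(n)) for j in range(n)]
    # power iteration on the deflated matrix N[i][j] = M[i][j] - pi[j]
    v = [1.0] + [0.0] * (n - 1)
    lam = 0.0
    for _ in range(iters):
        w = [sum((M[i][j] - pi[j]) * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0:
            break
        v = [x / lam for x in w]
    return lam

M = [[0.9, 0.1], [0.2, 0.8]]           # lazy two-state chain
print(round(second_eigenvalue(M), 6))  # 0.7
```

For a two-state chain with transition probabilities a and b, the second eigenvalue is 1 - a - b, which the iteration recovers (here 1 - 0.1 - 0.2 = 0.7); the method assumes a real, simple second eigenvalue.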

Authors: V. Arvind; Srikanth Srinivasan Abstract: In this paper, we study the computational complexity of computing the noncommutative determinant. We first consider the arithmetic circuit complexity of computing the noncommutative determinant polynomial. Then, more generally, we also examine the complexity of computing the determinant (as a function) over noncommutative domains. Our hardness results are summarized below: We show that if the noncommutative determinant polynomial has small noncommutative arithmetic circuits, then so does the noncommutative permanent. Consequently, the commutative permanent polynomial has small commutative arithmetic circuits. For any field \({\mathbb{F}}\), we show that computing the \({n\times n}\) permanent over \({\mathbb{F}}\) is polynomial-time reducible to computing the \({2n\times 2n}\) (noncommutative) determinant whose entries are \({O(n^2)\times O(n^2)}\) matrices over the field \({\mathbb{F}}\). We also derive as a consequence that computing the \({n\times n}\) permanent over the nonnegative rationals is polynomial-time reducible to computing the noncommutative determinant over Clifford algebras of dimension \({n^{O(1)}}\). Our techniques are elementary and use primarily the notion of the Hadamard product of noncommutative polynomials. PubDate: 2016-11-30 DOI: 10.1007/s00037-016-0148-5
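One reason the noncommutative determinant is delicate: already for a 2×2 matrix, the value of the Cayley determinant ad - bc depends on the multiplication order, so ad - bc and da - cb can disagree over a noncommutative ring. A small pure-Python check with matrix-valued entries (an illustration of noncommutativity only, not the paper's reduction):

```python
def matmul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

# entries of a 2x2 "matrix over matrices": the matrix units E12, E11, E21, E22
a = [[0, 1], [0, 0]]
b = [[1, 0], [0, 0]]
c = [[0, 0], [1, 0]]
d = [[0, 0], [0, 1]]

det_row = matsub(matmul(a, d), matmul(b, c))  # ad - bc
det_col = matsub(matmul(d, a), matmul(c, b))  # da - cb
print(det_row == det_col)  # False: the order of multiplication matters
```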

Authors: Yasuhiro Takahashi; Seiichiro Tani Abstract: We study the quantum complexity class \({\mathsf{QNC}^\mathsf{0}_\mathsf{f}}\) of quantum operations implementable exactly by constant-depth polynomial-size quantum circuits with unbounded fan-out gates. Our main result is that the quantum OR operation is in \({\mathsf{QNC}^\mathsf{0}_\mathsf{f}}\), which affirmatively answers a question posed by Høyer and Špalek. In sharp contrast to the strict hierarchy of the classical complexity classes \({\mathsf{NC}^{0} \subsetneq \mathsf{AC}^{0} \subsetneq \mathsf{TC}^{0}}\), our result, combined with Høyer and Špalek’s, implies the collapse of the hierarchy of the corresponding quantum classes: \({\mathsf{QNC}^\mathsf{0}_\mathsf{f}=\mathsf{QAC}^\mathsf{0}_\mathsf{f}=\mathsf{QTC}^\mathsf{0}_\mathsf{f}}\). We then show that there exists a constant-depth subquadratic-size quantum circuit for the quantum threshold operation. This allows us to obtain a better bound on the size difference between \({\mathsf{QNC}^\mathsf{0}_\mathsf{f}}\) and \({\mathsf{QTC}^\mathsf{0}_\mathsf{f}}\) circuits implementing the same operation. Lastly, we show that if the quantum Fourier transform modulo a prime is in \({\mathsf{QNC}^\mathsf{0}_\mathsf{f}}\), then there exists a polynomial-time exact classical algorithm for a discrete logarithm problem using a \({\mathsf{QNC}^\mathsf{0}_\mathsf{f}}\) oracle. This implies that, under a plausible assumption, there exists a classically hard problem that is solvable exactly by a \({\mathsf{QNC}^\mathsf{0}_\mathsf{f}}\) circuit with gates for the quantum Fourier transform. PubDate: 2016-06-23 DOI: 10.1007/s00037-016-0140-0

Authors: Andris Ambainis; Kazuo Iwama; Masaki Nakanishi; Harumichi Nishimura; Rudy Raymond; Seiichiro Tani; Shigeru Yamashita Abstract: This paper considers the quantum query complexity of almost all functions in the set \({\mathcal{F}_{N,M}}\) of \({N}\)-variable Boolean functions with on-set size \({M}\) \({(1\le M \le 2^{N}/2)}\), where the on-set size is the number of inputs on which the function is true. The main result is that, for all functions in \({\mathcal{F}_{N,M}}\) except a polynomially small fraction, the quantum query complexity is \({\Theta\left(\frac{\log{M}}{c + \log{N} - \log\log{M}} + \sqrt{N}\right)}\) for a constant \({c > 0}\). This is quite different from the quantum query complexity of the hardest function in \({\mathcal{F}_{N,M}}\): \({\Theta\left(\sqrt{N\frac{\log{M}}{c + \log{N} - \log\log{M}}} + \sqrt{N}\right)}\). In contrast, almost all functions in \({\mathcal{F}_{N,M}}\) have the same randomized query complexity \({\Theta(N)}\) as the hardest one, up to a constant factor. PubDate: 2016-06-21 DOI: 10.1007/s00037-016-0139-6