Abstract: An ordering constraint satisfaction problem (OCSP) is defined by a family \(\mathcal F\) of predicates mapping permutations on \(\{1,\ldots,k\}\) to \(\{0,1\}\). An instance of Max-OCSP(\(\mathcal F\)) on n variables consists of a list of constraints, each consisting of a predicate from \(\mathcal F\) applied to k distinct variables. The goal is to find an ordering of the n variables that maximizes the number of constraints for which the induced ordering on the k variables satisfies the predicate. OCSPs capture well-studied problems including "maximum acyclic subgraph" (MAS) and "maximum betweenness". In this work, we consider the task of approximating the maximum number of satisfiable constraints in the (single-pass) streaming setting, when an instance is presented as a stream of constraints. We show that for every \(\mathcal F\), Max-OCSP(\(\mathcal F\)) is approximation-resistant to o(n)-space streaming algorithms, i.e., algorithms using o(n) space cannot distinguish streams where almost every constraint is satisfiable from streams where no ordering beats the random ordering by a noticeable amount. This space bound is tight up to polylogarithmic factors. In the case of MAS, our result shows that for every \(\epsilon>0\), MAS is not \((1/2+\epsilon)\)-approximable in o(n) space. The previous best inapproximability result, due to Guruswami & Tao (2019), only ruled out 3/4-approximations in \(o(\sqrt n)\) space. Our results build on recent works of Chou et al. (2022b, 2024), who provide a tight, linear-space inapproximability theorem for a broad class of "standard" (i.e., non-ordering) constraint satisfaction problems (CSPs) over arbitrary (finite) alphabets. Our results are obtained by building a family of appropriate standard CSPs (one for every alphabet size q) from any given OCSP and applying their theorem to this family of CSPs.
To convert the resulting hardness results for standard CSPs back to our OCSP, we show that the hard instances from this earlier theorem have the following “partition expansion” property with high probability: For every partition of the n variables into small blocks, for most of the constraints, all variables are in distinct blocks. PubDate: 2024-05-29
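To make the MAS objective concrete, here is a small brute-force sketch of our own (an illustration of the problem definition, not code from the paper; the function names are ours):

```python
from itertools import permutations

def mas_value(ordering, edges):
    # Fraction of directed edges (u, v) with u placed before v in the
    # ordering -- the fraction of constraints this ordering satisfies.
    pos = {v: i for i, v in enumerate(ordering)}
    return sum(pos[u] < pos[v] for u, v in edges) / len(edges)

def max_mas(variables, edges):
    # Brute-force optimum over all n! orderings (offline and exponential;
    # the paper's point is that o(n)-space streaming algorithms cannot
    # approximate this value beyond the trivial factor 1/2).
    return max(mas_value(p, edges) for p in permutations(variables))

# A directed 3-cycle: every ordering satisfies exactly 2 of the 3 edges.
assert max_mas([0, 1, 2], [(0, 1), (1, 2), (2, 0)]) == 2 / 3
```

A uniformly random ordering satisfies each edge with probability 1/2, which is the benchmark that, by the theorem, small-space streaming algorithms cannot beat.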

Abstract: The constraint satisfaction problem (CSP) on a finite relational structure B is to decide, given a set of constraints on variables where the relations come from B, whether or not there is an assignment to the variables satisfying all of the constraints; the surjective CSP is the variant where one decides the existence of a surjective satisfying assignment onto the universe of B. We present an algebraic framework for proving hardness results on surjective CSPs; essentially, this framework computes global gadgetry that permits one to present a reduction from a classical CSP to a surjective CSP. We show how to derive a number of hardness results for surjective CSP in this framework, including the hardness of the disconnected cut problem, of the no-rainbow three-coloring problem, and of the surjective CSP on all two-element structures known to be intractable (in this setting). Our framework thus allows us to unify these hardness results and reveal common structure among them; we believe that our hardness proof for the disconnected cut problem is more succinct than the original. In our view, the framework also makes very transparent a way in which classical CSPs can be reduced to surjective CSPs. PubDate: 2024-05-29
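The surjectivity condition is easy to state operationally; the following brute-force sketch (our illustration, with constraints given as scope/relation pairs of our own devising) makes the difference from the classical CSP concrete:

```python
from itertools import product

def surjective_csp(domain, n_vars, constraints):
    # Brute-force the surjective CSP: is there an assignment of the
    # n variables to the domain that satisfies every constraint AND
    # uses every domain element at least once?
    for a in product(domain, repeat=n_vars):
        if set(a) == set(domain) and all(
                tuple(a[v] for v in scope) in rel for scope, rel in constraints):
            return True
    return False

# x != y over {0, 1}: surjectively satisfiable.
neq = {(0, 1), (1, 0)}
assert surjective_csp([0, 1], 2, [((0, 1), neq)])

# x = y forces a constant map, which is not surjective onto {0, 1}.
eq = {(0, 0), (1, 1)}
assert not surjective_csp([0, 1], 2, [((0, 1), eq)])
```

The second instance is classically satisfiable but surjectively unsatisfiable, which is exactly the gap the paper's reductions exploit.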

Abstract: It is a classical result that the inner product function cannot be computed by an \({\rm AC}^0\) circuit. It is conjectured that this holds even if we allow arbitrary preprocessing of each of the two inputs separately. We prove this conjecture when the preprocessing of one of the inputs is limited to output \(n + n/(\log^{\omega(1)}n)\) bits, and we obtain a tight correlation bound. Our methods extend to many other functions, including pseudorandom functions, and imply a weak yet nontrivial limitation on the power of encoding inputs in low-complexity cryptography. Finally, under cryptographic assumptions, we relate the question of proving variants of the above conjecture to the question of learning \({\rm AC}^0\) under simple input distributions. PubDate: 2024-05-23
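For reference, the function in question is simply the mod-2 inner product of two bit vectors; a minimal sketch (ours, for illustration only):

```python
def inner_product(x, y):
    # IP_n: the mod-2 inner product of two n-bit vectors.  The conjecture
    # discussed above is that no AC^0 circuit computes this, even after
    # arbitrary separate preprocessing of x and of y.
    return sum(a & b for a, b in zip(x, y)) % 2

assert inner_product([1, 0, 1], [1, 1, 1]) == 0
assert inner_product([1, 0, 1], [1, 1, 0]) == 1
```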

Abstract: One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., \(P \not\subseteq {NC}^1\)). Karchmer et al. (Comput Complex 5(3/4):191–204, 1995) suggested approaching this problem by proving that depth complexity behaves "as expected" with respect to the composition of functions \(f \Diamond g\). They showed that the validity of this conjecture would imply that \(P \not\subseteq {NC}^1\). Several works have made progress toward resolving this conjecture by proving special cases. In particular, these works proved the KRW conjecture for every outer function f, but only for a few inner functions g. Thus, it is an important challenge to prove the KRW conjecture for a wider range of inner functions. In this work, we significantly extend the range of inner functions that can be handled. First, we consider the monotone version of the KRW conjecture. We prove it for every monotone inner function g whose depth complexity can be lower-bounded via a query-to-communication lifting theorem. This allows us to handle several new and well-studied functions such as the s-t-connectivity, clique, and generation functions. In order to carry this progress back to the non-monotone setting, we introduce a new notion of semi-monotone composition, which combines the non-monotone complexity of the outer function f with the monotone complexity of the inner function g. In this setting, we prove the KRW conjecture for a similar selection of inner functions g, but only for a specific choice of the outer function f. PubDate: 2024-04-29
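The block composition \(f \Diamond g\) itself is simple to state: apply the inner function g to each row of inputs and feed the resulting bits to the outer function f. A tiny sketch (ours; the helper names are hypothetical):

```python
def compose(f, g, rows):
    # Block composition f <> g: g is evaluated on each row of inputs,
    # and f is evaluated on the vector of results.
    return f([g(row) for row in rows])

xor = lambda bits: sum(bits) % 2
and_ = lambda bits: int(all(bits))

# (XOR <> AND) on two rows: output is 1 iff exactly one row is all-ones.
assert compose(xor, and_, [[1, 1], [1, 0]]) == 1
assert compose(xor, and_, [[1, 1], [1, 1]]) == 0
```

The KRW conjecture, informally, is that the depth complexity of this composed function is roughly the sum of the depth complexities of f and g.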

Abstract: For a polynomial f, a weighted sum-of-squares representation (SOS) has the form \(f = \sum_{i\in [s]} c_i f_i^2\), where the weights \(c_i\) are field elements. The size of the representation is the number of monomials that appear across the \(f_i\)'s. Its minimum across all such decompositions is called the support-sum S(f) of f. For a univariate polynomial f of degree d with full support, the support-sum satisfies the lower bound \(S(f) \ge \sqrt{d}\). We show that the existence of an explicit univariate polynomial f with support-sum just slightly larger than this lower bound, that is, \(S(f) \ge d^{0.5+\varepsilon}\) for some \(\varepsilon > 0\), implies that VP \(\ne\) VNP, the major open problem in algebraic complexity. In fact, our proof works for some subconstant functions \(\varepsilon(d) > 0\) as well. We also consider the sum-of-cubes representation (SOC) of polynomials. We show that an explicit polynomial that is hard for SOC implies both that blackbox-PIT is in P and that VP \(\neq\) VNP. PubDate: 2024-04-16
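As a concrete worked instance (ours, not from the paper): over the rationals, \(2x^2 + 2 = 1\cdot(x+1)^2 + 1\cdot(x-1)^2\) is a weighted SOS representation; counting monomials across the two \(f_i\)'s (namely \(x, 1, x, 1\)) its size is 4. A quick numerical check:

```python
import random

def sos_value(x, terms):
    # Evaluate a weighted sum of squares  sum_i c_i * f_i(x)**2,
    # where each f_i is given as a callable and c_i as a weight.
    return sum(c * f(x) ** 2 for c, f in terms)

# 2x^2 + 2 = 1*(x+1)^2 + 1*(x-1)^2
terms = [(1, lambda x: x + 1), (1, lambda x: x - 1)]
for _ in range(100):
    x = random.randint(-50, 50)
    assert sos_value(x, terms) == 2 * x * x + 2
```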

Abstract: A matrix is blocky if it is a "blowup" of a permutation matrix. The blocky rank of a matrix M is the minimum number of blocky matrices that linearly span M. Hambardzumyan, Hatami and Hatami defined blocky rank and showed that it is connected to communication complexity and operator theory. We describe additional connections to circuit complexity and combinatorics, and we prove upper and lower bounds on blocky rank in various contexts. PubDate: 2024-03-06 DOI: 10.1007/s00037-024-00248-1
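To illustrate the "blowup" in the definition (our sketch, using one common reading of the term: rows and columns are partitioned into blocks, and each 1 of the permutation matrix is replaced by an all-ones block):

```python
def blocky(perm, row_sizes, col_sizes):
    # Build a blocky matrix: row-block i has height row_sizes[i],
    # column-block j has width col_sizes[j], and block (i, perm[i])
    # is all-ones while every other entry is 0.
    col_start = [sum(col_sizes[:j]) for j in range(len(col_sizes))]
    n_cols = sum(col_sizes)
    M = []
    for i, height in enumerate(row_sizes):
        row = [0] * n_cols
        s = col_start[perm[i]]
        for c in range(s, s + col_sizes[perm[i]]):
            row[c] = 1
        M.extend([row[:] for _ in range(height)])
    return M

# Blowup of the 2x2 permutation matrix swapping the two blocks.
M = blocky([1, 0], [2, 1], [1, 2])
assert M == [[0, 1, 1], [0, 1, 1], [1, 0, 0]]
```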

Abstract: We show that for any non-real algebraic number q such that \(|q-1|>1\) or \(\Re(q)>\frac{3}{2}\), it is #P-hard to compute a multiplicative (resp. additive) approximation to the absolute value (resp. argument) of the chromatic polynomial evaluated at q on planar graphs. This implies #P-hardness for all non-real algebraic q on the family of all graphs. Moreover, we prove several hardness results for q such that \(|q-1|\leq 1\). Our hardness results are obtained by showing that a polynomial-time algorithm for approximately computing the chromatic polynomial of a planar graph at non-real algebraic q (satisfying some properties) leads to a polynomial-time algorithm for exactly computing it, which is known to be hard by a result of Vertigan. Many of our results in fact extend to the more general partition function of the random cluster model, a well-known reparametrization of the Tutte polynomial. PubDate: 2024-01-18 DOI: 10.1007/s00037-023-00247-8

Abstract: While it can be easily shown by counting that almost all Boolean predicates of n variables have circuit size \(\Omega(2^n/n)\), we have no example of an NP function requiring even a superlinear number of gates. Moreover, only modest linear lower bounds are known. Until recently, the strongest known lower bound was \(3n - o(n)\), presented by Blum in 1984. In 2011, Demenkov and Kulikov presented a much simpler proof of the same lower bound, but for a more complicated function: an affine disperser for sublinear dimension. Informally, this is a function that is resistant to any \(n - o(n)\) affine substitutions. In 2011, Ben-Sasson and Kopparty gave an explicit construction of such a function. The proof of the lower bound basically goes by showing that for any circuit there exists an affine hyperplane where the function complexity decreases by at least three gates. In this paper, we prove the following two extensions. 1. A \((3+\frac{1}{86})n - o(n)\) lower bound for the circuit size of an affine disperser for sublinear dimension. The proof is based on the gate elimination technique extended with the following three ideas: (i) generalizing the computational model by allowing circuits to contain cycles, which in turn allows us to perform affine substitutions, (ii) a carefully chosen circuit complexity measure to track the progress of the gate elimination process, and (iii) quadratic substitutions that may be viewed as delayed affine substitutions. 2. A much simpler proof of a stronger lower bound of \(3.11n\) for a quadratic disperser. Informally, a quadratic disperser is resistant to sufficiently many substitutions of the form \(x \leftarrow p\), where p is a polynomial of degree at most two. Currently, there are no constructions of quadratic dispersers in NP (although there are constructions over large fields, and constructions with weaker parameters over GF(2)).
The key ingredient of this proof is induction on the size of the underlying quadratic variety instead of the number of variables as in the previously known proofs. PubDate: 2023-12-16 DOI: 10.1007/s00037-023-00246-9

Abstract: We introduce a novel take on sum-of-squares that is able to reason with complex numbers and still make use of polynomial inequalities. This proof system might be of independent interest, since it allows representing multivalued domains with both Boolean and Fourier encodings. We show degree and size lower bounds in this system for a natural generalization of knapsack: the vanishing sums of roots of unity. These lower bounds naturally apply to polynomial calculus as well. PubDate: 2023-11-12 DOI: 10.1007/s00037-023-00242-z
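The underlying question of a "vanishing sum of roots of unity" is whether a chosen multiset of n-th roots of unity sums to zero. A floating-point sketch (ours; the proof system of course reasons about this exactly, not numerically):

```python
import cmath

def vanishes(ks, n, tol=1e-9):
    # Numerically test whether the n-th roots of unity with exponents in
    # ks sum to (approximately) zero.  Illustration only: floating point,
    # whereas the lower bounds concern exact algebraic reasoning.
    s = sum(cmath.exp(2j * cmath.pi * k / n) for k in ks)
    return abs(s) < tol

assert vanishes([0, 1, 2], 3)      # 1 + w + w^2 = 0 for a cube root w
assert not vanishes([0, 1], 3)     # 1 + w has absolute value 1
```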

Abstract: For a natural number \(m\), a distribution is called \(m\)-grained if each element appears with a probability that is an integer multiple of \(1/m\). We prove that, for any constant \(c<1\), testing whether a distribution over \([\Theta(m)]\) is \(m\)-grained requires \(\Omega(m^c)\) samples, where testing a property of distributions means distinguishing between distributions that have the property and distributions that are far (in total variation distance) from any distribution that has the property. PubDate: 2023-10-26 DOI: 10.1007/s00037-023-00245-w
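The property itself is easy to check given the distribution explicitly; the lower bound is about checking it from samples. A minimal sketch of the definition (ours, with exact rational probabilities):

```python
from fractions import Fraction

def is_grained(dist, m):
    # A distribution (mapping elements to exact probabilities) is
    # m-grained iff every probability is an integer multiple of 1/m.
    assert sum(dist.values()) == 1
    return all((Fraction(p) * m).denominator == 1 for p in dist.values())

assert is_grained({'a': Fraction(2, 4), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}, 4)
assert not is_grained({'a': Fraction(1, 3), 'b': Fraction(2, 3)}, 4)
```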

Abstract: Power circuits were introduced in 2012 by Myasnikov, Ushakov and Won as a data structure for non-elementarily compressed integers supporting the arithmetic operations addition and \((x,y) \mapsto x\cdot 2^y\). The same authors applied power circuits to give a polynomial-time solution to the word problem of the Baumslag group, which has a non-elementary Dehn function. In this work, we examine power circuits and the word problem of the Baumslag group under parallel complexity aspects. In particular, we establish that the word problem of the Baumslag group can be solved in NC, even though one of the essential steps is to compare two integers given by power circuits, which, in general, is shown to be P-complete. The key observation is that the depth of the occurring power circuits is logarithmic, and such power circuits can be compared in NC. PubDate: 2023-09-13 DOI: 10.1007/s00037-023-00244-x
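The two supported operations have a simple reference semantics over plain integers (our sketch; the actual data structure keeps such numbers non-elementarily compressed rather than written out):

```python
def padd(x, y):
    # Addition, the first power-circuit operation.
    return x + y

def pmul2(x, y):
    # (x, y) -> x * 2**y, the second power-circuit operation.  Plain
    # Python integers serve only as a naive reference semantics here.
    return x * 2 ** y

# Towers of exponentials appear after only a few operations,
# which is why compression is essential:
t = 1
for _ in range(4):
    t = pmul2(1, t)            # t -> 2**t
assert t == 2 ** 2 ** 2 ** 2   # a tower of height 4, i.e. 65536
```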

Abstract: We study the power of preprocessing adversaries in finding bounded-length collisions in the widely used Merkle-Damgård (MD) hashing in the random oracle model. Specifically, we consider adversaries that have arbitrary S-bit advice about the random oracle and can make at most T queries to it. Our goal is to characterize the advantage of such adversaries in finding a B-block collision in an MD hash function constructed using the random oracle with range size N as the compression function (given a random salt). The answer to this question is completely understood for very large values of B (essentially \(\Omega(T)\)) as well as for B = 1, 2. For \(B\approx T\), Coretti et al. (EUROCRYPT '18) gave matching upper and lower bounds of \(\tilde\Theta(ST^2/N)\). Akshima et al. (CRYPTO '20) observed that the attack of Coretti et al. could be adapted to work for any value of B > 1, giving an attack with advantage \(\tilde\Omega(STB/N + T^2/N)\). Unfortunately, they could only prove that this attack is optimal for B = 2. Their proof involves a compression argument with exhaustive case analysis and, as they claim, a naive attempt to generalize their bound to larger values of B (even for B = 3) would lead to an explosion in the number of cases needed to be analyzed, making it unmanageable. With the lack of a more general upper bound, they formulated the STB conjecture, stating that the best-possible advantage is \(\tilde O(STB/N + T^2/N)\) for any B > 1. In this work, we confirm the STB conjecture in many new parameter settings. For instance, in one result, we show that the conjecture holds for all constant values of B, significantly extending the result of Akshima et al. Further, using combinatorial properties of graphs, we are able to confirm the conjecture even for super-constant values of B, as long as some restriction is made on S. For instance, we confirm the conjecture for all \(B \le T^{1/4}\) as long as \(S \le T^{1/8}\).
Technically, we develop structural characterizations for bounded-length collisions in MD hashing that allow us to give a compression argument in which the number of cases needed to be handled does not explode. PubDate: 2023-09-13 DOI: 10.1007/s00037-023-00243-y
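For readers less familiar with MD hashing: the construction folds a compression function over the message blocks, starting from the salt, and a B-block collision is a pair of distinct messages of at most B blocks each with the same final state. A minimal sketch (ours; SHA-256 stands in for the paper's random-oracle compression function):

```python
import hashlib

def md_hash(salt, blocks, compress):
    # Plain Merkle-Damgard chaining: fold the compression function over
    # the message blocks, starting from the (random) salt.
    state = salt
    for block in blocks:
        state = compress(state, block)
    return state

def compress(state, block):
    # Toy compression function; here the range size N is 2**256.
    return hashlib.sha256(state + block).digest()

h = md_hash(b"salt", [b"block1", b"block2"], compress)
assert len(h) == 32
```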

Abstract: We study the decomposition of multivariate polynomials as sums of powers of linear forms. We give a randomised algorithm for the following problem: if a homogeneous polynomial \(f \in K[x_1,\ldots, x_n]\) (where \(K \subseteq \mathbb{C}\)) of degree d is given as a blackbox, decide whether it can be written as a linear combination of d-th powers of linearly independent complex linear forms. The main novel features of the algorithm are: For \(d = 3\), we improve by a factor of n on the running time of the algorithm in Koiran & Skomra (2021). The price to be paid for this improvement is that the algorithm now has two-sided error. For \(d > 3\), we provide the first randomised blackbox algorithm for this problem that runs in time \(\text{poly}(n,d)\) (in an algebraic model where only arithmetic operations and equality tests are allowed). Previous algorithms for this problem (Kayal 2011), as well as most of the existing reconstruction algorithms for other classes, appeal to a polynomial factorisation subroutine. This requires extraction of complex polynomial roots at unit cost, and in standard models such as the unit-cost RAM or the Turing machine this approach does not yield polynomial-time algorithms. For \(d > 3\), when f has rational coefficients (i.e. \(K = \mathbb{Q}\)), the running time of the blackbox algorithm is polynomial in n, d and the maximal bit size of any coefficient of f. This yields the first algorithm for this problem over \(\mathbb{C}\) with polynomial running time in the bit model of computation. These results remain true even when we replace \(\mathbb{C}\) by \(\mathbb{R}\). We view the problem as a tensor decomposition problem and use linear algebraic methods such as checking the simultaneous diagonalisability of the slices of a tensor. The number of such slices is exponential in d, but surprisingly, we show that after a random change of variables, computing just three special slices is enough.
We also show that our approach can be extended to the computation of the actual decomposition. In forthcoming work we plan to extend these results to overcomplete decompositions, i.e. decompositions into more than n powers of linear forms. PubDate: 2023-08-03 DOI: 10.1007/s00037-023-00239-8

Abstract: This paper studies the expressive and computational power of discrete Ordinary Differential Equations (ODEs), a.k.a. (Ordinary) Difference Equations. It presents a new framework using these equations as a central tool for computation and algorithm design. We present the general theory of discrete ODEs for computation theory, illustrate it with various examples of algorithms, and provide several implicit characterizations of complexity and computability classes. The proposed framework presents an original point of view on complexity and computation classes. It unifies several constructions that have been proposed for characterizing these classes, including classical approaches in implicit complexity using restricted recursion schemes, as well as recent characterizations of computability and complexity by classes of continuous ordinary differential equations. It also helps in understanding the relationships between analog computations and classical discrete models of computation theory. From a more technical point of view, this paper points out the fundamental role of linear (discrete) ODEs and classical ODE tools, such as changes of variables, in capturing computability and complexity measures, and as tools for programming many algorithms. PubDate: 2023-07-12 DOI: 10.1007/s00037-023-00240-1
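A discrete ODE replaces the derivative by the finite difference \(f(x+1) - f(x) = h(f(x), x)\), so "solving" it is just iteration. A minimal sketch of ours (not the paper's framework, which works with restricted classes of such equations):

```python
def solve_discrete_ode(h, f0, steps):
    # Iterate the discrete ODE  f(x+1) - f(x) = h(f(x), x)  from
    # f(0) = f0, returning the trajectory f(0), ..., f(steps).
    f, traj = f0, [f0]
    for x in range(steps):
        f = f + h(f, x)
        traj.append(f)
    return traj

# The linear discrete ODE f(x+1) - f(x) = f(x) has solution f(x) = 2**x.
assert solve_discrete_ode(lambda f, x: f, 1, 5) == [1, 2, 4, 8, 16, 32]
```

Linear discrete ODEs such as this one are exactly the objects the abstract singles out as fundamental for capturing complexity measures.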

Abstract: We study the power of randomized complexity classes that are given oracle access to a natural property of Razborov and Rudich (JCSS, 1997) or its special case, the Minimal Circuit Size Problem (MCSP). We show that in a number of complexity-theoretic results that use the SAT oracle, one can use the MCSP oracle instead. For example, we show that \(\mathsf{ZPEXP}^\mathsf{MCSP}\nsubseteq \mathsf{P} / \mathsf {poly}\) , which should be contrasted with the previously known circuit lower bound \(\mathsf{ZPEXP}^\mathsf{NP}\nsubseteq \mathsf{P} / \mathsf{poly}\) . We also show that, assuming the existence of indistinguishability obfuscators (IOs), SAT and MCSP are equivalent in the sense that one has an efficient randomized (BPP) algorithm if and only if the other one does. We interpret our results as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions. PubDate: 2023-07-06 DOI: 10.1007/s00037-023-00241-0

Abstract: Yao’s XOR lemma states that for every function \(f:\{0,1\}^k \rightarrow \{0,1\}\), if f has hardness 2/3 for P/poly (meaning that for every circuit C in P/poly, \(\Pr[C(X)=f(X)] \le 2/3\) on a uniform input X), then the task of computing \(f(X_1) \oplus \ldots \oplus f(X_t)\) for sufficiently large t has hardness \(\frac{1}{2} + \epsilon\) for P/poly. Known proofs of this lemma cannot achieve \(\epsilon=\frac{1}{k^{\omega(1)}}\), and even for \(\epsilon=\frac{1}{k}\), we do not know how to replace P/poly by AC0[parity] (the class of constant-depth circuits with the gates {and, or, not, parity} of unbounded fan-in). Grinberg, Shaltiel and Viola (FOCS 2018) (building on a sequence of earlier works) showed that these limitations cannot be circumvented by black-box reductions, namely, reductions \({\rm Red}^{(\cdot)}\) that, given oracle access to a function D that violates the conclusion of Yao’s XOR lemma, implement a circuit that violates its assumption. There are a few known reductions in the related literature on worst-case to average-case reductions that are non-black-box. Specifically, the reductions of Gutfreund, Shaltiel and Ta-Shma (Computational Complexity 2007) and Hirahara (FOCS 2018) are “class reductions” that are only guaranteed to succeed when given oracle access to an oracle D from some efficient class of algorithms. These works seem to circumvent some black-box impossibility results. In this paper, we extend the previous limitations of Grinberg, Shaltiel and Viola to several types of class reductions, giving evidence that class reductions cannot yield the desired improvements in Yao’s XOR lemma. To the best of our knowledge, this is the first limitation on reductions for hardness amplification that applies to class reductions.
Our technique imitates the previous lower bounds for black-box reductions, replacing the inefficient oracle used in that proof with an efficient one based on limited independence, and developing tools to deal with the technical difficulties that arise following this replacement. PubDate: 2023-05-23 DOI: 10.1007/s00037-023-00238-9
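As a back-of-the-envelope benchmark of our own (not how the lemma is proved): agreement probability translates into correlation, and for independent predictors on independent inputs correlations multiply,

```latex
\Pr[C(X)=f(X)] \le \tfrac{2}{3}
\;\Longleftrightarrow\;
\mathbb{E}\bigl[(-1)^{\,C(X)\oplus f(X)}\bigr] = 2\Pr[C(X)=f(X)] - 1 \le \tfrac{1}{3},
\qquad
\prod_{i=1}^{t} \tfrac{1}{3} = 3^{-t},
```

which corresponds to agreement at most \(\tfrac{1}{2} + \tfrac{1}{2}\cdot 3^{-t}\) on the XOR of t copies. This heuristic suggests \(\epsilon\) exponentially small in t; the limitations discussed above concern how close reductions can actually get to that benchmark.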

Abstract: Suppose \(\varphi\) and \(\psi\) are two angles satisfying \(\tan(\varphi) = 2 \tan(\psi) > 0\). We prove that under this condition \(\varphi\) and \(\psi\) cannot both be rational multiples of π. We use this number-theoretic result to prove a classification of the computational complexity of spin systems on k-regular graphs with general (not necessarily symmetric) real-valued edge weights. We establish explicit criteria, according to which the partition functions of all such systems are classified into three classes: (1) polynomial-time computable, (2) #P-hard in general but polynomial-time computable on planar graphs, and (3) #P-hard on planar graphs. In particular, problems in (2) are precisely those that can be transformed by a holographic reduction to a form solvable by the Fisher-Kasteleyn-Temperley algorithm for counting perfect matchings in a planar graph. PubDate: 2023-05-17 DOI: 10.1007/s00037-023-00237-w
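The partition function being classified can be stated in a few lines; here is a brute-force sketch of ours for two-spin systems with (possibly asymmetric) edge weights:

```python
from itertools import product
from math import prod

def partition_function(n, edges, w):
    # Brute-force partition function of a two-spin system: sum over all
    # 2**n spin assignments of the product of edge weights w[s_u][s_v].
    # The weight matrix w need not be symmetric, matching the general
    # (directed-edge) setting of the abstract.
    return sum(
        prod(w[s[u]][s[v]] for u, v in edges)
        for s in product((0, 1), repeat=n))

# A single edge with Ising-type weights: Z = 2 + 1 + 1 + 2 = 6.
assert partition_function(2, [(0, 1)], [[2, 1], [1, 2]]) == 6
```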

Abstract: Schur polynomials are families of symmetric polynomials that have been classically studied in combinatorics and algebra alike. They play a central role in the study of symmetric functions, in representation theory (Stanley 1999), in Schubert calculus (Ledoux & Malham 2010), as well as in enumerative combinatorics (Gasharov 1996; Stanley 1984, 1999). In recent years, they have also shown up in various incarnations in computer science, e.g., quantum computation (Hallgren et al. 2000; O'Donnell & Wright 2015) and geometric complexity theory (Ikenmeyer & Panova 2017). However, unlike some other families of symmetric polynomials, such as the elementary symmetric polynomials, the power symmetric polynomials and the complete homogeneous symmetric polynomials, the computational complexity of syntactically computing Schur polynomials has not been studied much. In particular, it is not known whether Schur polynomials can be computed efficiently by algebraic formulas. In this work, we address this question and show that, unless every polynomial with a small algebraic branching program (ABP) has a small algebraic formula, there are Schur polynomials that cannot be computed by algebraic formulas of polynomial size. In other words, unless the algebraic complexity class VBP is equal to the complexity class VF, there exist Schur polynomials which do not have polynomial-size algebraic formulas. As a consequence of our proof, we also show that computing the determinant of certain generalized Vandermonde matrices is essentially as hard as computing the general symbolic determinant. To the best of our knowledge, these are among the first hardness results of this kind for families of polynomials which are not multilinear. A key ingredient of our proof is the study of the composition of well-behaved algebraically independent polynomials with a homogeneous polynomial, which might be of independent interest. PubDate: 2023-05-13 DOI: 10.1007/s00037-023-00236-x
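The connection to generalized Vandermonde matrices comes from the classical bialternant formula, which expresses a Schur polynomial as a ratio of a generalized Vandermonde determinant and the ordinary one. A small exact-arithmetic sketch of ours evaluating that formula at a point:

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def det(M):
    # Leibniz expansion with the sign from the inversion count;
    # fine for the tiny matrices used below.
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inv * prod(M[i][p[i]] for i in range(n))
    return total

def schur(lam, xs):
    # Bialternant formula: s_lambda(x_1..x_n) =
    #   det(x_i ** (lam_j + n - j)) / det(x_i ** (n - j)),
    # the numerator being a generalized Vandermonde determinant.
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))
    num = [[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]
    den = [[x ** (n - 1 - j) for j in range(n)] for x in xs]
    return Fraction(det(num), det(den))

# s_(1,1)(x1, x2) = x1*x2  and  s_(1)(x1, x2) = x1 + x2
assert schur([1, 1], [2, 3]) == 6
assert schur([1], [2, 3]) == 5
```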

Abstract: A constant-locality function is one in which each output bit depends on just a constant number of input bits. Viola and Wigderson (2018) gave an explicit construction of bipartite degree-3 Ramanujan graphs such that each neighbor of a vertex can be computed using a constant-locality function. In this work, we construct the first explicit local Ramanujan graph (bipartite) of degree q + 1, where q > 2 is any prime power. Alon and Capalbo (2002) used 4-regular, 8-regular and 44-regular Ramanujan graphs to construct unique-neighbor expanders that were 3-regular, 4-regular, 6-regular and "bipartite" (respectively). Viola and Wigderson (2018) had asked if a local construction of such unique-neighbor expanders exists. Our construction gives local 4-regular, 8-regular and 44-regular Ramanujan graphs, which also solves the corresponding open problem of the construction of local unique-neighbor expanders. The only known explicit constructions of Ramanujan graphs are for degree q + 1, where q is a prime power. In this paper, we essentially localize the explicit Ramanujan graphs for all these degrees. Our results use the explicit Ramanujan graphs of Morgenstern (1994) and a significant generalization of the ideas used in Viola and Wigderson (2018). PubDate: 2023-04-06 DOI: 10.1007/s00037-023-00235-y

Abstract: We present a uniform description of sets of m linear forms in n variables over the field of rational numbers whose computation requires m(n - 1) additions. PubDate: 2022-11-29 DOI: 10.1007/s00037-022-00234-5