Abstract: We consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption. This question is particularly well-motivated in light of new devastating attacks on obfuscation candidates and their underlying building blocks, which are currently the only known source for PPAD hardness. Central to the study of obfuscation-based PPAD hardness is the sink-of-verifiable-line (SVL) problem, an intermediate step in constructing instances of the PPAD-complete problem source-or-sink. Within the framework of black-box reductions, we prove the following results: (i) Average-case PPAD hardness (and even SVL hardness) does not imply any form of cryptographic hardness (not even one-way functions). Moreover, even when assuming the existence of one-way functions, average-case PPAD hardness (and, again, even SVL hardness) does not imply any public-key primitive. Thus, strong cryptographic assumptions (such as obfuscation-related ones) are not essential for average-case PPAD hardness. (ii) Average-case SVL hardness cannot be based either on standard cryptographic assumptions or on average-case PPAD hardness. In particular, average-case SVL hardness is not essential for average-case PPAD hardness. (iii) Any attempt to base the average-case hardness of the PPAD-complete problem source-or-sink on standard cryptographic assumptions must result in instances with a nearly exponential number of solutions. This stands in striking contrast to the obfuscation-based approach, which results in instances having a unique solution. Taken together, our results imply that it may still be possible to base PPAD hardness on standard cryptographic assumptions, but any such black-box attempt must significantly deviate from the obfuscation-based approach: It cannot go through the SVL problem, and it must result in source-or-sink instances with a nearly exponential number of solutions. PubDate: 2021-01-12
Abstract: Gentry’s bootstrapping technique is still the only known method of obtaining fully homomorphic encryption where the system’s parameters do not depend on the complexity of the evaluated functions. Bootstrapping involves a recryption procedure where the scheme’s decryption algorithm is evaluated homomorphically. Prior to this work, there were very few implementations of recryption and fewer still that could handle “packed ciphertexts” that encrypt vectors of elements. In the current work, we report on an implementation of recryption of fully packed ciphertexts using the HElib library for somewhat homomorphic encryption. This implementation required extending previous recryption algorithms from the literature, as well as many aspects of the HElib library. Our implementation supports bootstrapping of packed ciphertexts over many extension fields/rings. One example that we tested involves ciphertexts that encrypt vectors of 1024 elements from \({\text {GF}}(2^{16})\). In that setting, the recryption procedure takes under 3 min (at security level \(\approx 80\)) on a single core and allows a multiplicative depth-11 computation before the next recryption is needed. This report updates the results that we reported in Eurocrypt 2015 in several ways. Most importantly, it includes a much more robust method for deriving the parameters, ensuring that recryption errors only occur with negligible probability. Many aspects of this analysis are proved, and for the few well-specified heuristics that we rely on, we report on thorough experimentation to validate them. The procedure that we describe here is also significantly more efficient than in the previous version, incorporating many optimizations that were reported elsewhere (such as more efficient linear transformations) and adding a few new ones. Finally, our implementation now also incorporates Chen and Han’s techniques from Eurocrypt 2018 for more efficient digit extraction (for some parameters), as well as for “thin bootstrapping” when the ciphertext is only sparsely packed. PubDate: 2021-01-06
Abstract: Almost perfect nonlinear (APN) functions play an important role in the design of block ciphers as they offer the strongest resistance against differential cryptanalysis. Despite more than 25 years of research, only a limited number of APN functions are known. In this paper, we show that a recent construction by Taniguchi provides at least \(\frac{\varphi (m)}{2}\left\lceil \frac{2^m+1}{3m} \right\rceil \) inequivalent APN functions on the finite field with \({2^{2m}}\) elements, where \(\varphi \) denotes Euler’s totient function. This is a significant improvement over previous results: for even m, the best known lower bound has been \(\frac{\varphi (m)}{2}\left( \lfloor \frac{m}{4}\rfloor +1\right) \); for odd m, there has been no such lower bound at all. Moreover, we determine the automorphism group of Taniguchi’s APN functions. PubDate: 2021-01-06
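To see the size of the improvement concretely, the short Python script below (ours, purely illustrative) evaluates both lower bounds stated in the abstract for a few even values of m; already for m = 8 the new bound exceeds the previous one by more than a factor of three, and the gap widens rapidly.

```python
# Compare the new lower bound phi(m)/2 * ceil((2^m + 1)/(3m)) on the number of
# inequivalent Taniguchi APN functions with the previously best known bound
# phi(m)/2 * (floor(m/4) + 1) for even m. (Note that phi(m) is even for m >= 3.)
from math import gcd

def phi(m):
    """Euler's totient function, computed naively."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def new_bound(m):
    return phi(m) // 2 * -(-(2**m + 1) // (3 * m))   # ceiling division

def old_bound_even(m):
    assert m % 2 == 0
    return phi(m) // 2 * (m // 4 + 1)

for m in (4, 6, 8, 10, 12):
    print(m, old_bound_even(m), new_bound(m))
```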
Abstract: In (STOC, 2008), Gentry, Peikert, and Vaikuntanathan proposed the first identity-based encryption (GPV-IBE) scheme based on a post-quantum assumption, namely the learning with errors assumption. Since their proof was only made in the random oracle model (ROM) instead of the quantum random oracle model (QROM), it remained unclear whether the scheme was truly post-quantum or not. In (CRYPTO, 2012), Zhandry developed new techniques to be used in the QROM and proved security of GPV-IBE in the QROM, hence answering in the affirmative that GPV-IBE is indeed post-quantum. However, since the general technique developed by Zhandry incurred a large reduction loss, there was a wide gap between the concrete efficiency and security level provided by GPV-IBE in the ROM and QROM. Furthermore, regardless of being in the ROM or QROM, GPV-IBE is not known to have a tight reduction in the multi-challenge setting. Considering that in the real world an adversary can obtain many ciphertexts, it is desirable to have a security proof that does not degrade with the number of challenge ciphertexts. In this paper, we provide a much tighter proof for the GPV-IBE in the QROM in the single-challenge setting. In addition, we show that a slight variant of the GPV-IBE has an almost tight reduction in the multi-challenge setting both in the ROM and QROM, where the reduction loss is independent of the number of challenge ciphertexts. Our proof departs from the traditional partitioning technique and resembles the approach used in the public key encryption scheme of Cramer and Shoup (CRYPTO, 1998). Our proof strategy allows the reduction algorithm to program the random oracle the same way for all identities and naturally fits the QROM setting where an adversary may query a superposition of all identities in one random oracle query. Notably, our proofs are much simpler than the one by Zhandry and conceptually much easier to follow for cryptographers not familiar with quantum computation. Although the techniques used for the single- and multi-challenge settings are similar at a high level, the technical details are quite different. For the multi-challenge setting, we rely on the Katz–Wang technique (CCS, 2003) to overcome some obstacles regarding the leftover hash lemma. PubDate: 2021-01-06
Abstract: Non-interactive zero-knowledge (\(\mathsf {NIZK}\)) is a fundamental primitive that is widely used in the construction of cryptographic schemes and protocols. Our main result is a reduction from constructing \(\mathsf {NIZK}\) proof systems for all of \(\mathbf {NP}\) based on \(\mathsf {LWE}\), to constructing a \(\mathsf {NIZK}\) proof system for a particular computational problem on lattices, namely a decisional variant of the bounded distance decoding (\(\mathsf {BDD}\)) problem. That is, we show that assuming \(\mathsf {LWE}\), every language \(L \in \mathbf {NP}\) has a \(\mathsf {NIZK}\) proof system if (and only if) the decisional \(\mathsf {BDD}\) problem has a \(\mathsf {NIZK}\) proof system. This (almost) confirms a conjecture of Peikert and Vaikuntanathan (CRYPTO, 2008). To construct our \(\mathsf {NIZK}\) proof system, we introduce a new notion that we call prover-assisted oblivious ciphertext sampling (\(\mathsf {POCS}\)), which we believe to be of independent interest. This notion extends the idea of oblivious ciphertext sampling, which allows one to sample ciphertexts without knowing the underlying plaintext. Specifically, we augment the oblivious ciphertext sampler with access to an (untrusted) prover to help it accomplish this task. We show that the existence of encryption schemes with a \(\mathsf {POCS}\) procedure, as well as some additional natural requirements, suffices for obtaining \(\mathsf {NIZK}\) proofs for \(\mathbf {NP}\). We further show that such encryption schemes can be instantiated based on \(\mathsf {LWE}\), assuming the existence of a \(\mathsf {NIZK}\) proof system for the decisional \(\mathsf {BDD}\) problem. PubDate: 2021-01-06
Abstract: Public key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary. In this work, we investigate quantum lightning, a formalization of “collision-free quantum money” defined by Lutomirski et al. [ICS’10], where no-cloning holds even when the adversary herself generates the quantum state to be cloned. We then study quantum money and quantum lightning, showing the following results: We demonstrate the usefulness of quantum lightning beyond quantum money by showing several potential applications, ranging from generating random strings with a proof of entropy to a completely decentralized cryptocurrency without a blockchain, where transactions are instantaneous and local. We give win–win results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of security, or they yield quantum money or lightning. Given the difficulty in constructing public key quantum money, this suggests that natural schemes do attain strong security guarantees. We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC’12] with indistinguishability obfuscation that is secure against quantum computers yields a secure quantum money scheme. This construction can be seen as an instance of our win–win result for signatures, giving the first separation between two security notions for signatures from the literature. Finally, we give a plausible construction for quantum lightning, which we prove secure under an assumption related to the multicollision resistance of degree-2 hash functions. Our construction is inspired by our win–win result for hash functions and yields the first plausible standard model instantiation of a non-collapsing collision-resistant hash function. This improves on a result of Unruh [Eurocrypt’16], which is relative to a quantum oracle. Thus, we provide the first constructions of public key quantum money from several cryptographic assumptions. Along the way, we develop several new techniques including a new precise variant of the no-cloning theorem. PubDate: 2021-01-06
Abstract: Lattice signature schemes generally require particular care when it comes to preventing secret information from leaking through signature transcripts. For example, the Goldreich–Goldwasser–Halevi (GGH) signature scheme and the NTRUSign scheme were completely broken by the parallelepiped-learning attack of Nguyen and Regev (Eurocrypt 2006). Several heuristic countermeasures were also shown vulnerable to similar statistical attacks. At PKC 2008, Plantard, Susilo and Win proposed a new variant of GGH, informally arguing resistance to such attacks. Based on this variant, Plantard, Sipasseuth, Dumondelle and Susilo proposed a concrete signature scheme, called DRS, that is in round 1 of the NIST post-quantum cryptography project. In this work, we propose yet another statistical attack and demonstrate a weakness of the DRS scheme: one can recover some partial information about the secret key from sufficiently many signatures. One difficulty is that, due to the DRS reduction algorithm, the relation between the statistical leak and the secret seems more intricate. We work around this difficulty by training a statistical model, using a few features that we designed according to a simple heuristic analysis. While we only recover partial secret coefficients, this information is easily exploited by lattice attacks, significantly decreasing their complexity. Concretely, we claim that, provided that \(100\,000\) signatures are available, the secret key may be recovered using BKZ-138 for the first set of DRS parameters submitted to the NIST. This puts the security level of this parameter set below 80 bits (maybe even 70 bits), to be compared to an original claim of 128 bits. Furthermore, we review the DRS v2 scheme that was proposed to resist the above statistical attack. For this countermeasure, while one may not recover partial secret coefficients exactly by learning, it seems feasible to gain some information on the secret key. Exploiting this information, we can still effectively reduce the cost of lattice attacks. PubDate: 2020-11-25
Abstract: Fuzzy extractors (Dodis et al., in Advances in cryptology—EUROCRYPT 2014, Springer, Berlin, 2014, pp 93–110) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, in Proceedings of the 11th ACM conference on computer and communications security, CCS, ACM, New York, 2004, pp 82–91) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person’s biometric is enrolled with multiple unrelated organizations). We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated. The extractor works for binary strings with Hamming noise; it achieves computational security assuming the existence of digital lockers (Canetti and Dakdouk, in Advances in cryptology—EUROCRYPT 2008, Springer, Berlin, 2008, pp 489–508). It is simple and tolerates near-linear error rates. Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates—lower than those supported by prior (nonreusable) constructions—assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. Structure beyond entropy is necessary to support distributions with low entropy rates. We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, building a computationally secure and an information-theoretically secure construction for large-alphabet sources. PubDate: 2020-11-23
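To make the roles of the helper string and of the assumption that random subsequences retain min-entropy concrete, here is a schematic Python sketch in the spirit of a sample-then-lock construction from digital lockers. It is our simplification rather than the paper's exact scheme: the hash-based lock/unlock pair below is only a toy stand-in for a real digital locker, and all parameters (string length, number of lockers, sample size) are hypothetical.

```python
# Schematic sketch (ours, illustrative only) of a sample-then-lock style reusable
# fuzzy extractor built from digital lockers. The hash-based "locker" below is a
# toy stand-in for a real digital locker; all parameters are hypothetical.
import hashlib, secrets

def lock(value: bytes, key: bytes):
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(nonce + value).digest()[:len(key)]
    tag = hashlib.sha256(b"tag" + nonce + value).digest()[:8]
    return nonce, bytes(k ^ p for k, p in zip(key, pad)), tag

def unlock(locker, value: bytes):
    nonce, masked, tag = locker
    if hashlib.sha256(b"tag" + nonce + value).digest()[:8] != tag:
        return None                       # wrong value: the locker stays closed
    pad = hashlib.sha256(nonce + value).digest()[:len(masked)]
    return bytes(m ^ p for m, p in zip(masked, pad))

def gen(w, ell=32, k=12):
    """Enrollment: lock a fresh key under ell random k-bit subsequences of w."""
    key = secrets.token_bytes(16)
    helper = []
    for _ in range(ell):
        pos = [secrets.randbelow(len(w)) for _ in range(k)]
        helper.append((pos, lock(bytes(w[i] for i in pos), key)))
    return key, helper

def rep(w_noisy, helper):
    """Reproduction: a locker opens iff its sampled positions are all error-free."""
    for pos, locker in helper:
        key = unlock(locker, bytes(w_noisy[i] for i in pos))
        if key is not None:
            return key
    return None

w = [secrets.randbelow(2) for _ in range(128)]   # enrollment reading
w2 = list(w); w2[3] ^= 1; w2[70] ^= 1            # later reading with 2 bit flips
key, helper = gen(w)
assert rep(w2, helper) == key                    # succeeds with high probability
```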
Abstract: We present practical attacks on OCB2. This mode of operation of a blockcipher was designed with the aim of providing particularly efficient and provably secure authenticated encryption services, and since its proposal about 15 years ago it has been among the top performers in this realm. OCB2 was included in an ISO standard in 2009. An internal building block of OCB2 is the tweakable blockcipher obtained by operating a regular blockcipher in \({\text {XEX}}^*\) mode. The latter provides security only when evaluated in accordance with certain technical restrictions that, as we note, are not always respected by OCB2. This leads to devastating attacks against OCB2’s security promises: We develop a range of very practical attacks that, among other things, demonstrate universal forgeries and full plaintext recovery. We complete our report with proposals for (provably) repairing OCB2. As a direct consequence of our findings, OCB2 is currently in the process of being removed from ISO standards. Our attacks do not apply to OCB1 and OCB3, and our privacy attacks on OCB2 require an active adversary. PubDate: 2020-10-01
Abstract: The universal composability paradigm allows for the modular design and analysis of cryptographic protocols. It has been widely and successfully used in cryptography. However, devising a coherent yet simple and expressive model for universal composability is, as the history of such models shows, highly non-trivial. For example, several partly severe problems have been pointed out in the literature for the UC model. In this work, we propose a coherent model for universal composability, called the IITM model (“Inexhaustible Interactive Turing Machine”). A main feature of the model is that it is stated without a priori fixing irrelevant details, such as a specific way of addressing machines by session and party identifiers, a specific modeling of corruption, or a specific protocol hierarchy. In addition, we employ a very general notion of runtime. All reasonable protocols and ideal functionalities should be expressible based on this notion in a direct and natural way, and without tweaks, such as (artificial) padding of messages or (artificially) adding extra messages. Not least because of these features, the model is simple and expressive. The general results that we prove, such as composition theorems, also hold independently of how such details are fixed for concrete applications. Since the IITM model is inspired by other models for universal composability, in particular the UC model, and because of its flexibility and expressivity, results formulated in those models conceptually carry over directly to the IITM model. PubDate: 2020-10-01
Abstract: Group signatures allow members of a group to anonymously sign on behalf of the group. Membership is administered by a designated group manager. The group manager can also reveal the identity of a signer if and when needed to enforce accountability and deter abuse. For group signatures to be applicable in practice, they need to support fully dynamic groups, i.e., users may join and leave at any time. Existing security definitions for fully dynamic group signatures are informal, have shortcomings, and are mutually incompatible. We fill the gap by providing a formal rigorous security model for fully dynamic group signatures. Our model is general and not tailored toward a specific design paradigm; it can therefore, as we show, be used to argue about the security of different existing constructions following different design paradigms. Our definitions are stringent and, when possible, incorporate protection against maliciously chosen keys. We consider both the case where the group management and tracing of signatures are administered by the same authority, i.e., a single group manager, and also the case where those roles are administered by two separate authorities, i.e., a group manager and an opening authority. We also show that a specialization of our model captures existing models for static and partially dynamic schemes. In the process, we identify a subtle gap in the security achieved by group signatures using revocation lists. We show that in such schemes new members achieve a slightly weaker notion of traceability. The flexibility of our security model allows us to capture such a relaxation of traceability. PubDate: 2020-10-01
Abstract: In threshold cryptography, private keys are divided into n shares, each one of which is given to a different server in order to avoid single points of failure. In the case of threshold public-key encryption, at least \(t \le n\) servers need to contribute to the decryption process. A threshold primitive is said to be robust if no coalition of t malicious servers can prevent the remaining honest servers from successfully completing private key operations. Non-interactive schemes, considered the most practical ones, allow servers to contribute to decryption without interaction. So far, most non-interactive threshold cryptosystems were only proved secure against static corruptions. In the adaptive corruption scenario (where the adversary can corrupt servers at any time, based on its complete view), all existing robust threshold encryption schemes that also resist chosen-ciphertext attacks until recently required interaction in the decryption phase. A very specific method (in composite order groups) for getting rid of interaction was recently suggested, leaving open the question of more generic frameworks and constructions with better security and, in particular, better flexibility (i.e., compatibility with distributed key generation). This paper advances the state of the art and describes a general construction of adaptively secure robust non-interactive threshold cryptosystems with chosen-ciphertext security. We define the novel notion of all-but-one perfectly sound threshold hash proof systems that can be seen as (threshold) hash proof systems with publicly verifiable and simulation-sound proofs. We show that this notion generically implies threshold cryptosystems combining the aforementioned properties. Then, we provide efficient instantiations under well-studied assumptions in bilinear groups (e.g., in such groups of prime order). These instantiations have a tighter security proof in the single-challenge setting and are indeed compatible with distributed key generation protocols. PubDate: 2020-10-01
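For readers unfamiliar with the setting, the toy Python sketch below illustrates what non-interactive threshold decryption means in its classical textbook form (Shamir-shared ElGamal): each of t servers publishes a single decryption share computed locally from the ciphertext, and the shares are combined by Lagrange interpolation in the exponent. This is emphatically not the paper's construction (it is neither robust, adaptively secure, nor chosen-ciphertext secure), and the parameters are deliberately tiny and insecure.

```python
# Toy non-interactive threshold decryption: Shamir-shared ElGamal (illustration only).
import random

p = 2039               # small safe prime (toy parameters only), p = 2q + 1
q = 1019
g = 4                  # generator of the order-q subgroup of Z_p^* (4 = 2^2)
n, t = 5, 3            # n servers, any t of them can decrypt

# Dealer: Shamir-share the secret key x over Z_q with a degree-(t-1) polynomial f, f(0) = x.
coeffs = [random.randrange(q) for _ in range(t)]
x = coeffs[0]
shares = {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q
          for i in range(1, n + 1)}
h = pow(g, x, p)                                    # public key

# Encrypt a message m (an element of the order-q subgroup).
m = pow(g, 123, p)
r = random.randrange(q)
c1, c2 = pow(g, r, p), m * pow(h, r, p) % p

# Each participating server i computes its decryption share c1^{x_i} locally,
# with no interaction between servers.
dec_shares = {i: pow(c1, shares[i], p) for i in (1, 2, 4)}   # any t servers

def lagrange_at_zero(i, S):
    """Lagrange coefficient lambda_i for interpolating f(0) from the set S."""
    num = den = 1
    for j in S:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

# Combiner: reconstruct c1^x in the exponent and recover m = c2 / c1^x.
S = list(dec_shares)
c1_x = 1
for i in S:
    c1_x = c1_x * pow(dec_shares[i], lagrange_at_zero(i, S), p) % p
assert m == c2 * pow(c1_x, -1, p) % p
```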
Abstract: In this work, we present two new actively secure, constant-round multi-party computation (MPC) protocols with security against all-but-one corruptions. Our protocols both start with an actively secure MPC protocol, which may have linear round complexity in the depth of the circuit, and compile it into a constant-round protocol based on garbled circuits, with very low overhead. Our first protocol takes a generic approach using any secret-sharing-based MPC protocol for binary circuits, and a correlated oblivious transfer functionality. Our second protocol builds on secret-sharing-based MPC with information-theoretic MACs. This approach is less flexible, being based on a specific form of MPC, but requires no additional oblivious transfers to compute the garbled circuit. In both approaches, the underlying secret-sharing-based protocol is only used for one actively secure \(\mathbb {F}_2\) multiplication per AND gate. An interesting consequence of this is that, with current techniques, constant-round MPC for binary circuits is not much more expensive than practical, non-constant-round protocols. We demonstrate the practicality of our second protocol with an implementation and perform experiments with up to 9 parties securely computing the AES and SHA-256 circuits. Our running times improve upon the best possible performance achievable with previous protocols in this setting by a factor of 60. PubDate: 2020-10-01
Abstract: In frameworks for universal composability, complex protocols can be built from sub-protocols in a modular way using composition theorems. However, as first pointed out and studied by Canetti and Rabin, this modular approach often leads to impractical implementations. For example, when using a functionality for digital signatures within a more complex protocol, parties have to generate new verification and signing keys for every session of the protocol. This motivates generalizing composition theorems to so-called joint state (composition) theorems, where different copies of a functionality may share some state, e.g., the same verification and signing keys. In this paper, we present a joint state theorem that is more general than the original theorem of Canetti and Rabin, for which we point out several problems and limitations. We apply our theorem to obtain joint state realizations for three functionalities: public-key encryption, replayable public-key encryption, and digital signatures. Unlike most other formulations, our functionalities model that ciphertexts and signatures are computed locally, rather than being provided by the adversary. To obtain the joint state realizations, the functionalities have to be designed carefully. Other formulations proposed in the literature are shown to be unsuitable. Our work is based on the IITM model. Our definitions and results demonstrate the expressivity and simplicity of this model. For example, unlike Canetti’s UC model, in the IITM model no explicit joint state operator needs to be defined and the joint state theorem follows immediately from the composition theorem in the IITM model. PubDate: 2020-10-01
Abstract: We construct the first tightly secure hierarchical identity-based encryption (HIBE) scheme based on standard assumptions, which solves an open problem from Blazy, Kiltz, and Pan (CRYPTO 2014). At the core of our constructions is a novel randomization technique that enables us to randomize user secret keys for identities with flexible length. The security reductions of previous HIBEs lose at least a factor of Q, which is the number of user secret key queries. In contrast, the security loss of our schemes depends only on the security parameter. Our schemes are adaptively secure based on the Matrix Diffie-Hellman assumption, which is a generalization of standard Diffie-Hellman assumptions such as k-Linear. We have two tightly secure constructions, one with constant ciphertext size, and the other with tighter security at the cost of linear ciphertext size. Among other things, our schemes imply the first tightly secure identity-based signature scheme by a variant of the Naor transformation. PubDate: 2020-09-15
Abstract: We construct a verifiable delay function (VDF). A VDF is a function whose evaluation requires running a given number of sequential steps, yet the result can be efficiently verified. They have applications in decentralised systems, such as the generation of trustworthy public randomness in a trustless environment, or resource-efficient blockchains. To construct our VDF, we actually build a trapdoor VDF. A trapdoor VDF is essentially a VDF which can be evaluated efficiently by parties who know a secret (the trapdoor). By setting up this scheme so that the trapdoor is known to no one (not even the party running the setup, so that there is no need for a trusted setup environment), we obtain a simple VDF. Our construction is based on groups of unknown order such as an RSA group or the class group of an imaginary quadratic field. The output of our construction is very short (the result and the proof of correctness are each a single element of the group), and the verification of correctness is very efficient. PubDate: 2020-09-09
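As a rough illustration of the kind of sequential computation involved, the Python sketch below (ours; toy parameters, and it omits the succinct proof of correctness that makes the output efficiently verifiable) shows repeated squaring in an RSA group: anyone can evaluate by performing T squarings one after another, while a party holding the trapdoor (the group order) can shortcut the exponentiation.

```python
# Toy sketch of trapdoor evaluation of x^(2^T) in an RSA group. The modulus below
# is hypothetical and absurdly small; in a real instantiation the factorization
# (and hence the group order) would be unknown to everyone, or kept as a trapdoor.
N = 61 * 53          # RSA modulus
order = 60 * 52      # the trapdoor: the order of the group (phi(N))
x = 5                # input element, coprime to N
T = 100_000          # delay parameter: number of sequential squarings

def eval_sequential(x, T, N):
    """Evaluation without the trapdoor: T squarings, inherently sequential."""
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def eval_with_trapdoor(x, T, N, order):
    """With the group order, the exponent 2^T can be reduced before exponentiating."""
    return pow(x, pow(2, T, order), N)

assert eval_sequential(x, T, N) == eval_with_trapdoor(x, T, N, order)
```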
Abstract: One approach toward basing public-key encryption (PKE) schemes on weak and credible assumptions is to build “stronger” or more general schemes generically from “weaker” or more restricted ones. One particular line of work in this context was initiated by Myers and Shelat (FOCS ’09) and continued by Hohenberger, Lewko, and Waters (Eurocrypt ’12), who provide constructions of multi-bit CCA-secure PKE from single-bit CCA-secure PKE. It is well known that encrypting each bit of a plaintext string independently is not CCA-secure—the resulting scheme is malleable. We therefore investigate whether this malleability can be dealt with using the conceptually simple approach of applying a suitable non-malleable code (Dziembowski et al., ICS ’10) to the plaintext and subsequently encrypting the resulting codeword bit by bit. We find that an attacker’s ability to ask multiple decryption queries requires that the underlying code be continuously non-malleable (Faust et al., TCC ’14). Since, as we show, this flavor of non-malleability can only be achieved if the code is allowed to “self-destruct,” the resulting scheme inherits this property and therefore only achieves a weaker variant of CCA security. We formalize this new notion of so-called indistinguishability under self-destruct attacks (IND-SDA) as CCA security with the restriction that the decryption oracle stops working once the attacker submits an invalid ciphertext. We first show that the above approach based on non-malleable codes yields a solution to the problem of domain extension for IND-SDA-secure PKE, provided that the underlying code is continuously non-malleable against (a reduced form of) bit-wise tampering. Then, we prove that the code of Dziembowski et al. is actually already continuously non-malleable against bit-wise tampering. We further investigate the notion of security under self-destruct attacks and combine IND-SDA security with non-malleability under chosen-plaintext attacks (NM-CPA) to obtain the strictly stronger notion of non-malleability under self-destruct attacks (NM-SDA). We show that NM-SDA security can be obtained from basic IND-CPA security by means of a black-box construction based on the seminal work by Choi et al. (TCC ’08). Finally, we provide a domain extension technique for building a multi-bit NM-SDA scheme from a single-bit NM-SDA scheme. To achieve this goal, we define and construct a novel type of continuous non-malleable code, called secret-state NMC, since, as we show, standard continuous NMCs are insufficient for the natural “encode-then-encrypt-bit-by-bit” approach to work. PubDate: 2020-08-04
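The “encode-then-encrypt-bit-by-bit” approach and the self-destruct behavior can be summarized by the following schematic Python sketch (ours). The objects nmc and pke are hypothetical placeholders supplied by the caller: nmc.encode/nmc.decode stand for a continuously non-malleable code whose decoder returns None on invalid codewords, and pke.enc/pke.dec for a single-bit encryption scheme.

```python
# Schematic sketch (ours) of the "encode-then-encrypt-bit-by-bit" domain extension
# described in the abstract. nmc and pke are hypothetical objects: nmc.encode /
# nmc.decode implement a continuously non-malleable code (decode returns None on an
# invalid codeword), and pke.enc / pke.dec encrypt and decrypt a single bit.
from dataclasses import dataclass

@dataclass
class DecryptionState:
    self_destructed: bool = False      # set once an invalid ciphertext is seen

def multibit_encrypt(pke, nmc, pk, message_bits):
    codeword = nmc.encode(message_bits)                 # non-malleable encoding
    return [pke.enc(pk, b) for b in codeword]           # encrypt bit by bit

def multibit_decrypt(pke, nmc, sk, ciphertexts, state: DecryptionState):
    if state.self_destructed:                           # IND-SDA: oracle has stopped
        return None
    codeword = [pke.dec(sk, c) for c in ciphertexts]
    message = nmc.decode(codeword)
    if message is None:                                 # invalid codeword detected
        state.self_destructed = True                    # trigger self-destruct
    return message
```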
Abstract: Assuming the existence of one-way functions, we show that there is no polynomial-time differentially private algorithm \({\mathcal {A}}\) that takes a database \(D\in (\{0,1\}^d)^n\) and outputs a “synthetic database” \({\hat{D}}\) all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows \(x\in \{0,1\}^d\) with a given pair of values in a given pair of columns.) This answers a question of Barak et al. (PODS ’07), who gave an algorithm running in time \(\mathrm {poly}(n,2^d)\). Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by Dwork et al., STOC ’09) with encodings based on the PCP theorem. We also present both negative and positive results for generating “relaxed” synthetic data, where the fraction of rows in D satisfying a predicate c is estimated by applying c to each row of \({\hat{D}}\) and aggregating the results in some way. PubDate: 2020-07-31
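For concreteness, the following small Python example (ours; the database is made up) computes a two-way marginal exactly as defined in the parenthetical above.

```python
import numpy as np

def two_way_marginal(D, i, j, a, b):
    """Fraction of rows x of the binary database D with x[i] == a and x[j] == b."""
    return float(np.mean((D[:, i] == a) & (D[:, j] == b)))

# Tiny hypothetical database with n = 4 rows over d = 3 binary attributes.
D = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 1]])
print(two_way_marginal(D, 0, 2, 1, 1))   # 0.5: two of the four rows have x[0] = x[2] = 1
```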
Abstract: Non-malleable codes (Dziembowski et al., ICS’10 and J. ACM’18) are a natural relaxation of error correcting/detecting codes with useful applications in cryptography. Informally, a code is non-malleable if an adversary trying to tamper with an encoding of a message can only leave it unchanged or modify it to the encoding of an unrelated value. This paper introduces continuous non-malleability, a generalization of standard non-malleability where the adversary is allowed to tamper continuously with the same encoding. This is in contrast to the standard definition of non-malleable codes, where the adversary can only tamper a single time. The only restriction is that after the first invalid codeword is ever generated, a special self-destruct mechanism is triggered and no further tampering is allowed; this restriction can easily be shown to be necessary. We focus on the split-state model, where an encoding consists of two parts and the tampering functions can be arbitrary as long as they act independently on each part. Our main contributions are outlined below. We show that continuous non-malleability in the split-state model is impossible without relying on computational assumptions. We construct a computationally secure split-state code satisfying continuous non-malleability in the common reference string (CRS) model. Our scheme can be instantiated assuming the existence of collision-resistant hash functions and (doubly enhanced) trapdoor permutations, but we also give concrete instantiations based on standard number-theoretic assumptions. We revisit the application of non-malleable codes to protecting arbitrary cryptographic primitives against related-key attacks. Previous applications of non-malleable codes in this setting required perfect erasures and a memory-bounded adversary. We show that continuously non-malleable codes allow us to avoid these restrictions. PubDate: 2020-07-24
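The continuous tampering experiment in the split-state model, together with the self-destruct rule, can be pictured with the following schematic Python oracle (ours, purely illustrative; decode stands for the decoder of an arbitrary split-state code and is assumed to return None on invalid codewords).

```python
# Schematic oracle for continuous split-state tampering with self-destruct.
# The target encoding (L, R) has two halves; each query supplies a pair of
# tampering functions (f, g) that act independently on the two halves.
class ContinuousTamperingOracle:
    def __init__(self, L, R, decode):
        self.L, self.R = L, R
        self.decode = decode            # returns None on an invalid codeword
        self.self_destructed = False

    def tamper(self, f, g):
        if self.self_destructed:
            return None                 # no further tampering after self-destruct
        result = self.decode(f(self.L), g(self.R))
        if result is None:              # first invalid codeword triggers self-destruct
            self.self_destructed = True
        return result
```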
Abstract: A protocol for computing a functionality is secure if an adversary in this protocol cannot cause more harm than in an ideal computation, where parties give their inputs to a trusted party that returns the output of the functionality to all parties. In particular, in the ideal model, such a computation is fair—if the corrupted parties get the output, then the honest parties get the output. Cleve (STOC 1986) proved that, in general, fairness is not possible without an honest majority. To overcome this impossibility, Gordon and Katz (Eurocrypt 2010) suggested a relaxed definition—1/p-secure computation—which guarantees partial fairness. For two parties, they constructed 1/p-secure protocols for functionalities for which the size of either their domain or their range is polynomial (in the security parameter). Gordon and Katz asked whether their results can be extended to multiparty protocols. We study 1/p-secure protocols in the multiparty setting for general functionalities. Our main results are constructions of 1/p-secure protocols that are resilient against any number of corrupted parties provided that the number of parties is constant and the size of the range of the functionality is at most polynomial (in the security parameter \({n}\)). If fewer than 2/3 of the parties are corrupted, the size of the domain of each party is constant, and the functionality is deterministic, then our protocols are efficient even when the number of parties is \(\log \log {n}\). On the negative side, we show that when the number of parties is super-constant, 1/p-secure protocols are not possible when the size of the domain of each party is polynomial. Thus, our feasibility results for 1/p-secure computation are essentially tight. We further motivate our results by constructing protocols with stronger guarantees: If in the execution of the protocol there is a majority of honest parties, then our protocols provide full security. However, if only a minority of the parties are honest, then our protocols are 1/p-secure. Thus, our protocols provide the best of both worlds, where the 1/p-security is only a fall-back option if there is no honest majority. PubDate: 2020-07-16