A Computational Separation Between Quantum No-Cloning and No-Teleportation


Two of the fundamental no-go theorems of quantum information are the no-cloning
theorem (that it is impossible to make copies of general quantum states) and the
no-teleportation theorem (the prohibition on sending quantum states over classical
channels without pre-shared entanglement). They are known to be equivalent, in the
sense that a collection of quantum states is clonable if and only if it is
teleportable.
Our main result suggests that this is not the case when computational efficiency is considered. We give a collection of quantum states and oracles relative to which these states are efficiently clonable but not efficiently teleportable. Given that the opposite scenario is impossible (states that can be teleported can always trivially be cloned), this gives the most complete oracle separation possible between these two important no-go properties. In doing so, we introduce a related quantum no-go property, reconstructibility, which refers to the ability to construct a quantum state from a uniquely identifying classical description. We show the stronger result of a collection of quantum states that are efficiently clonable but not efficiently reconstructible. This novel no-go property only exists in relation to computational efficiency, as it is trivial for unbounded computation. It thus opens up the possibility of further computational no-go properties that have not yet been studied because they do not exist outside the computational context.
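For context, the standard one-qubit teleportation protocol makes explicit the resources that the no-teleportation theorem says cannot be dropped: two classical bits plus a pre-shared EPR pair. Below is a minimal illustrative sketch (real amplitudes only; all names are ours, not from the paper):

```python
from math import sqrt

X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

def teleport(psi, outcome=(0, 0)):
    """Teleport psi = [a, b] from Alice to Bob: tensor with a pre-shared
    EPR pair, project Alice's two qubits onto the Bell state indexed by
    `outcome`, send the two classical bits, and let Bob correct."""
    s = 1 / sqrt(2)
    epr = [[s, 0.0], [0.0, s]]                      # (|00> + |11>)/sqrt(2)
    # 3-qubit amplitudes state[m][e][f]: (message, Alice's half, Bob's half)
    state = [[[psi[m] * epr[e][f] for f in range(2)]
              for e in range(2)] for m in range(2)]
    bells = {(0, 0): [[s, 0], [0, s]], (0, 1): [[0, s], [s, 0]],
             (1, 0): [[s, 0], [0, -s]], (1, 1): [[0, s], [-s, 0]]}
    bell = bells[outcome]
    # project (message, Alice) onto the Bell state; Bob's qubit remains
    bob = [sum(bell[m][e] * state[m][e][f]
               for m in range(2) for e in range(2)) for f in range(2)]
    norm = sqrt(sum(x * x for x in bob))
    bob = [x / norm for x in bob]
    zbit, xbit = outcome                            # the classical channel
    if xbit:
        bob = matvec(X, bob)
    if zbit:
        bob = matvec(Z, bob)
    return bob
```

All four measurement outcomes recover psi exactly; removing either the EPR pair or the classical bits breaks the protocol, which is exactly what the no-teleportation theorem formalizes.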
@misc{NehZha22,
author = {Barak Nehoran and Mark Zhandry}, title = {A Computational Separation Between Quantum No-Cloning and No-Teleportation}, year = {2022} }
Full Quantum Equivalence of Group Action DLog and CDH, and More


Cryptographic group actions are a relaxation of standard cryptographic groups that
have less structure. This lack of structure allows them to be plausibly quantum
resistant despite Shor's algorithm, while still supporting a number of applications.
The most famous examples of group actions are built from isogenies on elliptic curves.
Our main result is that CDH for abelian group actions is quantumly equivalent to discrete log. Galbraith et al. (Mathematical Cryptology) previously showed perfectly solving CDH to be equivalent to discrete log quantumly; our result works for any non-negligible advantage. We also explore several other questions about group action and isogeny protocols.
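To make the objects concrete: a cryptographic group action lets a group act on a set, group-action DLog asks to recover the acting element, and group-action CDH asks to combine two actions. The toy instantiation below (acting by multiplication with g^a in a small subgroup of Z_23^*) is emphatically not quantum resistant; it only illustrates the interface and the CDH relation:

```python
# Toy abelian group action: (Z_q, +) acts on the subgroup <g> of Z_p^*
# by a * x = (g^a mod p) * x mod p. All parameters here are tiny,
# illustrative values; real instantiations use isogenies.
p, q, g = 23, 11, 2          # g generates an order-11 subgroup of Z_23^*

def act(a, x):
    """Apply the group element a to the set element x."""
    return (pow(g, a, p) * x) % p

x0 = 1                        # public base point
a, b = 3, 7                   # the two parties' secrets
xa, xb = act(a, x0), act(b, x0)          # exchanged "public keys"
# CDH: each party combines its secret with the other's public point,
# and both arrive at (a + b) * x0.
shared1, shared2 = act(a, xb), act(b, xa)
assert shared1 == shared2 == act(a + b, x0)
```

Group-action DLog here is recovering a from (x0, xa); the paper shows that, quantumly, solving CDH with any non-negligible advantage is as hard as this DLog problem for abelian actions.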
@inproceedings{AC:MonZha22,
author = {Hart Montgomery and Mark Zhandry}, booktitle = {ASIACRYPT 2022}, title = {Full Quantum Equivalence of Group Action DLog and CDH, and More}, year = {2022} }
Another Round of Breaking and Making Quantum Money: How to Not Build It from Lattices, and More


Public verification of quantum money has been one of the central objects in quantum
cryptography ever since Wiesner's pioneering idea of using quantum mechanics to
construct banknotes against counterfeiting. So far, we do not know any
publicly verifiable quantum money scheme that is provably secure from standard
assumptions.
In this work, we provide both negative and positive results for publicly verifiable quantum money.
• In the first part, we give a general theorem, showing that a certain natural class of quantum money schemes from lattices cannot be secure. We use this theorem to break the recent quantum money scheme of Khesin, Lu, and Shor.
• In the second part, we propose a framework for building quantum money and quantum lightning we call invariant money, which abstracts some of the ideas of quantum money from knots by Farhi et al. (ITCS '12). In addition to formalizing this framework, we provide concrete hard computational problems loosely inspired by classical knowledge-of-exponent assumptions, whose hardness would imply the security of quantum lightning, a strengthening of quantum money where not even the bank can duplicate banknotes.
• We discuss potential instantiations of our framework, including an oracle construction using cryptographic group actions and instantiations from re-randomizable functional encryption, isogenies over elliptic curves, and knots.
@misc{EPRINT:LiuMonZha22,
author = {Jiahui Liu and Hart Montgomery and Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2022/1620}, note = {\url{https://eprint.iacr.org/2022/1620}}, title = {Another Round of Breaking and Making Quantum Money: How to Not Build It from Lattices, and More}, year = {2022} }  
Collusion-Resistant Copy-Protection for Watermarkable Functionalities


Copy-protection is the task of encoding a program into a quantum state to prevent
illegal duplication. A line of recent works studied copy-protection schemes under
"1-to-2 attacks": the adversary receiving one program copy cannot produce two valid
copies. However, under most circumstances, vendors need to sell more than one copy of
a program and still ensure that no duplicates can be generated. In this work, we
initiate the study of collusion-resistant copy-protection in the plain model. Our
results are twofold:
• For the first time, we show that all major watermarkable functionalities can be copy-protected (including unclonable decryption, digital signatures, and PRFs). Among these, copy-protection of digital signature schemes was not known before. The feasibility of copy-protecting all watermarkable functionalities is an open question raised by Aaronson et al. (CRYPTO '21).
• We make all the above schemes k-bounded collusion-resistant for any polynomial k, giving the first bounded collusion-resistant copy-protection for various functionalities in the plain model.
@inproceedings{TCC:LLQZ22,
author = {Jiahui Liu and Qipeng Liu and Luowen Qian and Mark Zhandry}, booktitle = {TCC 2022}, title = {Collusion-Resistant Copy-Protection for Watermarkable Functionalities}, year = {2022} }
Adaptive Multiparty NIKE


We construct adaptively secure multiparty non-interactive key exchange (NIKE) from
polynomially hard indistinguishability obfuscation and other standard assumptions.
This improves on all prior such protocols, which required subexponential hardness.
Along the way, we establish several compilers which simplify the task of constructing
new multiparty NIKE protocols, and also establish a close connection with a particular
type of constrained PRF.
@inproceedings{TCC:KWZ22,
author = {Venkata Koppula and Brent Waters and Mark Zhandry}, booktitle = {TCC 2022}, title = {Adaptive Multiparty NIKE}, year = {2022} }
Verifiable Quantum Advantage without Structure


We show that the following hold, unconditionally unless otherwise stated, relative to
a random oracle with probability 1:
• There are NP search problems solvable by BQP machines but not BPP machines.
• There exist functions that are one-way, and even collision resistant, against classical adversaries but are easily inverted quantumly. Similar separations hold for digital signatures and CPA-secure public key encryption (the latter requiring the assumption of a classically CPA-secure encryption scheme). Interestingly, the separation does not necessarily extend to the case of other cryptographic objects such as PRGs.
• There are unconditional publicly verifiable proofs of quantumness with the minimal rounds of interaction: for uniform adversaries, the proofs are non-interactive, whereas for non-uniform adversaries the proofs are two-message public coin.
• Our results do not appear to contradict the Aaronson-Ambainis conjecture. Assuming this conjecture, there exists publicly verifiable certifiable randomness, again with the minimal rounds of interaction.
By replacing the random oracle with a concrete cryptographic hash function such as SHA2, we obtain plausible Minicrypt instantiations of the above results. Previous analogous results all required substantial structure, either in terms of highly structured oracles and/or algebraic assumptions in Cryptomania and beyond.
@inproceedings{FOCS:YamZha22,
author = {Takashi Yamakawa and Mark Zhandry}, booktitle = {63rd FOCS}, title = {Verifiable Quantum Advantage without Structure}, year = {2022}, publisher = {{IEEE} Computer Society}, month = {oct} }
Commitments to Quantum States


What does it mean to commit to a quantum state? In this work, we propose a simple
answer: a commitment to quantum messages is binding if, after the commit phase, the
committed state is hidden from the sender's view. We accompany this new definition
with several instantiations. We build the first non-interactive succinct quantum state
commitments, which can be seen as an analogue of collision-resistant hashing for
quantum messages. We also show that hiding quantum state commitments (QSCs) are
implied by any commitment scheme for classical messages. All of our constructions can
be based on quantum-cryptographic assumptions that are implied by, but are potentially
weaker than, one-way functions.
Commitments to quantum states open the door to many new cryptographic possibilities. Our flagship application of a succinct QSC is a quantum-communication version of Kilian's succinct arguments for any language that has quantum PCPs with constant error and polylogarithmic locality. Plugging in the PCP theorem, this yields succinct arguments for NP under significantly weaker assumptions than required classically; moreover, if the quantum PCP conjecture holds, this extends to QMA. At the heart of our security proof is a new rewinding technique for extracting quantum information.
@misc{EPRINT:GJMZ22,
author = {Sam Gunn and Nathan Ju and Fermi Ma and Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2022/1358}, note = {\url{https://eprint.iacr.org/2022/1358}}, title = {Commitments to Quantum States}, year = {2022} }  
Augmented Random Oracles


We propose a new paradigm for justifying the security of random oracle-based
protocols, which we call the Augmented Random Oracle Model (AROM). We show that the
AROM captures a wide range of important random oracle impossibility results. Thus a
proof in the AROM implies some resiliency to such impossibilities. We then consider
three ROM transforms which are subject to impossibilities: Fiat-Shamir (FS),
Fujisaki-Okamoto (FO), and Encrypt-with-Hash (EwH). We show in each case how to obtain
security in the AROM by strengthening the building blocks or modifying the transform.
Along the way, we give a couple of other results. We improve the assumptions needed for the FO and EwH impossibilities from indistinguishability obfuscation to circularly secure LWE; we argue that our AROM still captures this improved impossibility. We also demonstrate that there is no ``best possible'' hash function, by giving a pair of security properties, both of which can be instantiated in the standard model separately, but which cannot be simultaneously satisfied by a single hash function.
@inproceedings{C:Zhandry22a,
author = {Mark Zhandry}, booktitle = {CRYPTO 2022}, note = {\url{https://eprint.iacr.org/2022/783}}, title = {Augmented Random Oracles}, year = {2022} }
On the Feasibility of Unclonable Encryption, and More


Unclonable encryption, first introduced by Broadbent and Lord (TQC '20), is a one-time
encryption scheme with the following security guarantee: any non-local adversary
(A, B, C) cannot simultaneously distinguish encryptions of two equal-length messages.
This notion is termed unclonable indistinguishability. Prior works focused on
achieving a weaker notion of unclonable encryption, where it is only required that any
non-local adversary (A, B, C) cannot simultaneously recover the entire message m.
Though seemingly innocuous, establishing the feasibility of encryption schemes satisfying
unclonable indistinguishability (even for 1-bit messages) has remained elusive. We
make progress towards establishing the feasibility of unclonable encryption.
• We show that encryption schemes satisfying unclonable indistinguishability exist unconditionally in the quantum random oracle model.
• Towards understanding the necessity of oracles, we present a negative result stipulating that a large class of encryption schemes cannot satisfy unclonable indistinguishability.
• Finally, we also establish the feasibility of another closely related primitive: copy-protection for single-bit output point functions. Prior works only established the feasibility of copy-protection for multi-bit output point functions, or they achieved constant security error for single-bit output point functions.
@inproceedings{C:AKLLZ22,
author = {Prabhanjan Ananth and Fatih Kaleoglu and Xingjian Li and Qipeng Liu and Mark Zhandry}, booktitle = {CRYPTO 2022}, title = {On the Feasibility of Unclonable Encryption, and More}, year = {2022} }
To Label, or Not To Label (in Generic Groups)


Generic groups are an important tool for analyzing the feasibility and infeasibility
of group-based cryptosystems. There are two distinct widespread versions of generic
groups, Shoup's and Maurer's, the main difference being whether or not group elements
are given explicit labels. The two models are often treated as equivalent. In this
work, however, we demonstrate that the models are in fact quite different, and care is
needed when stating generic group results:
• We show that numerous textbook constructions are not captured by Maurer, but are captured by Shoup. In the other direction, any construction captured by Maurer is captured by Shoup.
• For constructions that exist in both models, we show that security is equivalent for "single-stage" games, but Shoup security is strictly stronger than Maurer security for some "multi-stage" games.
• The existing generic group uninstantiability results do not apply to Maurer. We fill this gap with a new uninstantiability result.
• We explain how the known black box separations between generic groups and identity-based encryption do not fully apply to Shoup, and resolve this by providing such a separation.
• We give a new uninstantiability result for the algebraic group model.
@inproceedings{C:Zhandry22b,
author = {Mark Zhandry}, booktitle = {CRYPTO 2022}, note = {\url{https://eprint.iacr.org/2022/226}}, title = {To Label, or Not To Label (in Generic Groups)}, year = {2022} }
New Constructions of Collapsing Hashes


Collapsing is the preferred post-quantum security notion for hash functions, needed to
lift many classical results to the quantum setting. Unfortunately, the only existing
standard-model proofs of collapsing hashes require LWE. We construct the first
collapsing hashes from the quantum hardness of any one of the following problems:
• LPN in a variety of low-noise or high-hardness regimes, essentially matching what is known for collision resistance from LPN.
• Finding cycles on certain exponentially large expander graphs, such as those arising from isogenies on elliptic curves.
• The "optimal" hardness of finding collisions in any hash function.
• The polynomial hardness of finding collisions, assuming a certain plausible regularity condition on the hash.
As an immediate corollary, we obtain the first statistically hiding post-quantum commitments and post-quantum succinct arguments (of knowledge) under the same assumptions. Our results are obtained by a general theorem which shows how to construct a collapsing hash H' from a post-quantum collision-resistant hash function H, regardless of whether or not H itself is collapsing, assuming H satisfies a certain regularity condition we call "semi-regularity".
@inproceedings{C:Zhandry22c,
author = {Mark Zhandry}, booktitle = {CRYPTO 2022}, note = {\url{https://eprint.iacr.org/2022/678}}, title = {New Constructions of Collapsing Hashes}, year = {2022} }
Quantum Algorithms for Variants of Average-Case Lattice Problems via Filtering


We show polynomialtime quantum algorithms for the following problems:
• Short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime, and the bound on the infinity norm is set to half of the modulus minus a constant.
• Extrapolated dihedral coset problem (EDCP) with certain parameters.
• Learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded uniform distributions and Laplace distributions.
The SIS, EDCP, and LWE problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve are not in the parameter regimes known to be as hard as solving worst-case lattice problems. Still, no classical or quantum polynomial-time algorithms were known for those variants. Our algorithms for variants of SIS and EDCP use the existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving LWE given LWE-like quantum states. Our main contributions are introducing a filtering technique and solving LWE given LWE-like quantum states with interesting parameters.
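For concreteness, a brute-force toy instance of SIS under the infinity norm is sketched below. The parameters are illustrative and far from the wide-matrix, beta ≈ q/2 regime the quantum algorithm targets; the point is only to pin down the problem statement:

```python
import itertools

# Toy SIS-infinity instance: find a nonzero integer vector z with
# A z = 0 (mod q) and ||z||_inf <= beta. Here n=2 equations, m=4
# variables, and beta = (q-1)/2 mirrors the "half of modulus" regime.
q, beta = 7, 3
A = [[1, 2, 3, 4],
     [5, 6, 1, 2]]

def is_sis_solution(z):
    nonzero = any(z)
    short = all(abs(c) <= beta for c in z)
    zero_mod_q = all(sum(a * c for a, c in zip(row, z)) % q == 0
                     for row in A)
    return nonzero and short and zero_mod_q

# Exhaustive search is fine at this size; real parameters make it
# infeasible classically, which is what the quantum algorithm exploits.
sol = next(z for z in itertools.product(range(-beta, beta + 1), repeat=4)
           if is_sis_solution(z))
```

Since m > n here, nonzero short solutions are guaranteed to exist; the search just exhibits one.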
@inproceedings{EC:CheLiuZha22,
author = {Yilei Chen and Qipeng Liu and Mark Zhandry}, booktitle = {EUROCRYPT~2022, Part~III}, editor = {Orr Dunkelman and Stefan Dziembowski}, month = may # {~/~} # jun, pages = {372--401}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Quantum Algorithms for Variants of Average-Case Lattice Problems via Filtering}, volume = {13277}, year = {2022} }
Incompressible Cryptography


Incompressible encryption allows us to make the ciphertext size flexibly large and
ensures that an adversary learns nothing about the encrypted data, even if the
decryption key later leaks, unless she stores essentially the entire ciphertext.
Incompressible signatures can be made arbitrarily large and ensure that an adversary
cannot produce a signature on any message, even one she has seen signed before, unless
she stores one of the signatures essentially in its entirety.
In this work, we give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures, recently introduced by Guan and Zhandry (TCC 2021), whose previous constructions relied on sophisticated techniques and strong, nonstandard assumptions. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.
@inproceedings{EC:GuaWicZha22,
author = {Jiaxin Guan and Daniel Wichs and Mark Zhandry}, booktitle = {EUROCRYPT~2022, Part~I}, editor = {Orr Dunkelman and Stefan Dziembowski}, month = may # {~/~} # jun, pages = {700--730}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Incompressible Cryptography}, volume = {13275}, year = {2022} }
Post-Quantum Succinct Arguments: Breaking the Quantum Rewinding Barrier


We prove that Kilian's four-message succinct argument system is post-quantum secure
in the standard model when instantiated with any probabilistically checkable proof and
any collapsing hash function (which in turn exist based on the post-quantum hardness
of Learning with Errors).
At the heart of our proof is a new "measure-and-repair" quantum rewinding procedure that achieves asymptotically optimal knowledge error.
@inproceedings{FOCS:CMSZ21,
author = {Alessandro Chiesa and Fermi Ma and Nicholas Spooner and Mark Zhandry}, booktitle = {62nd FOCS}, title = {Post-Quantum Succinct Arguments: Breaking the Quantum Rewinding Barrier}, year = {2022}, pages = {49--58}, publisher = {{IEEE} Computer Society}, month = {feb} }
Redeeming Reset Indifferentiability and Applications to Post-Quantum Security


Indifferentiability is used to analyze the security of constructions of idealized
objects, such as random oracles or ideal ciphers. Reset indifferentiability is a
strengthening of plain indifferentiability which is applicable in far more scenarios,
but has largely been abandoned due to significant impossibility results and a lack of
positive results. Our main results are:
• Under weak reset indifferentiability, ideal ciphers imply (fixed size) random oracles, and domain shrinkage is possible. We thus show reset indifferentiability is more useful than previously thought.
• We lift our analysis to the quantum setting, showing that ideal ciphers imply random oracles under quantum indifferentiability.
• Despite Shor's algorithm, we observe that generic groups are still meaningful quantumly, showing that they are quantumly (reset) indifferentiable from ideal ciphers; combined with the above, cryptographic groups yield post-quantum symmetric key cryptography. In particular, we obtain a plausible post-quantum random oracle that is a subset-product followed by two modular reductions.
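The "subset-product followed by two modular reductions" shape can be sketched as follows; the moduli, generators, and output reduction below are illustrative stand-ins, not the paper's parameters:

```python
# Sketch of the shape of the candidate post-quantum random oracle:
# a subset product in Z_N^* followed by two modular reductions.
# All constants here are toy values for illustration only.
N = 1009                               # toy group modulus
p, r = 251, 128                        # the two reduction moduli
gens = [2, 3, 5, 7, 11, 13, 17, 19]    # one fixed group element per input bit

def candidate_oracle(bits):
    """Hash an 8-bit input: multiply the generators selected by the set
    bits (mod N), then apply two successive modular reductions."""
    prod = 1
    for b, g in zip(bits, gens):
        if b:
            prod = (prod * g) % N      # subset product
    return (prod % p) % r              # two modular reductions
```

At cryptographic sizes the subset product lives in a group that is (reset) indifferentiable from an ideal cipher, which is what makes the overall construction a plausible post-quantum random oracle.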
@inproceedings{AC:Zhandry21,
author = {Mark Zhandry}, booktitle = {ASIACRYPT~2021, Part~I}, editor = {Mehdi Tibouchi and Huaxiong Wang}, month = dec, pages = {518--548}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Redeeming Reset Indifferentiability and Applications to Post-Quantum Security}, volume = {13090}, year = {2021} }
Franchised Quantum Money


Quantum computers, which harness the power of quantum physics, offer the potential for
new kinds of security. For instance, classical bits can be copied, but quantum bits,
in general, cannot. As a result, there is interest in creating uncounterfeitable
quantum money, in which a set of qubits can be spent as money but cannot be
duplicated. However, existing constructions of quantum money are limited: the
verification key, which is used to verify that a banknote was honestly generated, can
also be used to create counterfeit banknotes. Recent attempts have tried to allow
public key verification, where any untrusted user, even a would-be counterfeiter, can
verify the banknotes. Despite many attempts, however, a secure construction of
public-key quantum money has remained elusive.
Here we introduce franchised quantum money, a new notion that is weaker than public-key quantum money but brings us closer to realizing it. Franchised quantum money allows any untrusted user to verify the banknotes, and every user gets a unique secret verification key. Furthermore, we give a construction of franchised quantum money and prove security assuming the quantum hardness of the short integer solution problem (SIS). This is the first construction of quantum money that allows an untrusted user to verify the banknotes and which has a proof of security based on widespread assumptions. It is therefore an important step toward public-key quantum money.
@inproceedings{AC:RobZha21,
author = {Bhaskar Roberts and Mark Zhandry}, booktitle = {ASIACRYPT~2021, Part~I}, editor = {Mehdi Tibouchi and Huaxiong Wang}, month = dec, pages = {549--574}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Franchised Quantum Money}, volume = {13090}, year = {2021} }
Disappearing Cryptography in the Bounded Storage Model


In this work, we study disappearing cryptography in the bounded storage model. Here, a
component of the transmission, say a ciphertext, a digital signature, or even a
program, is streamed bit by bit. The stream is too large for anyone to store in its
entirety, meaning the transmission effectively disappears once the stream stops.
We first propose the notion of online obfuscation, capturing the goal of disappearing programs in the bounded storage model. We give a negative result for VBB security in this model, but propose candidate constructions for a weaker security goal, namely VGB security. We then demonstrate the utility of VGB online obfuscation, showing that it can be used to generate disappearing ciphertexts and signatures. None of our applications are possible in the standard model of cryptography, regardless of the computational assumptions used.
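As background on the bounded storage model (and not this paper's construction), the classic Maurer-style encryption illustrates how a huge stream plus a short key yields ciphertexts that effectively disappear once the stream ends:

```python
import secrets

# Background sketch: bounded-storage-model one-time-pad encryption in
# the style of Maurer. A long random stream is broadcast; the key is a
# short list of positions, and the pad is the stream bits at those
# positions. Anyone unable to store (most of) the stream loses the pad
# once the stream stops. Toy sizes; in the model the stream length
# exceeds every adversary's storage bound.
STREAM_LEN = 10_000

def keygen(msg_len):
    """Short key: msg_len random positions into the stream."""
    return [secrets.randbelow(STREAM_LEN) for _ in range(msg_len)]

def encrypt(stream, key, msg_bits):
    return [b ^ stream[i] for b, i in zip(msg_bits, key)]

def decrypt(stream, key, ct_bits):
    return [b ^ stream[i] for b, i in zip(ct_bits, key)]

stream = [secrets.randbelow(2) for _ in range(STREAM_LEN)]
msg = [1, 0, 1, 1, 0]
key = keygen(len(msg))
assert decrypt(stream, key, encrypt(stream, key, msg)) == msg
```

Online obfuscation as studied in the paper pushes this idea from ciphertexts to whole programs, which is a much stronger goal.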
@inproceedings{TCC:GuaZha21,
author = {Jiaxin Guan and Mark Zhandry}, booktitle = {TCC~2021, Part~II}, editor = {Kobbi Nissim and Brent Waters}, month = nov, pages = {365--396}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Disappearing Cryptography in the Bounded Storage Model}, volume = {13043}, year = {2021} }
Classical vs Quantum Random Oracles


In this paper, we study the relationship between the security of cryptographic schemes
in the random oracle model (ROM) and the quantum random oracle model (QROM). First, we introduce
a notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol
to prove the capability to quantumly access a random oracle to a classical verifier.
We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20)
can be seen as a PoQRO. We also give a construction of a publicly verifiable PoQRO
relative to a classical oracle. Based on them, we construct digital signature and
public key encryption schemes that are secure in the ROM but insecure in the QROM. In
particular, we obtain the first examples of natural cryptographic schemes that
separate the ROM and QROM under a standard cryptographic assumption.
On the other hand, we give lifting theorems from security in the ROM to security in the QROM for certain types of cryptographic schemes and security notions. For example, our lifting theorems are applicable to Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, and Full-Domain-Hash signatures. We also discuss applications of our lifting theorems to quantum query complexity.
@inproceedings{EC:YamZha21,
author = {Takashi Yamakawa and Mark Zhandry}, booktitle = {EUROCRYPT~2021, Part~II}, editor = {Anne Canteaut and Fran\c{c}ois-Xavier Standaert}, month = oct, pages = {568--597}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Classical vs Quantum Random Oracles}, volume = {12697}, year = {2021} }
Iterated Inhomogeneous Polynomials


Let p be a polynomial, and let p^{(i)}(x) be the result of iterating the
polynomial i times, starting at an input x. The case where p(x) is the homogeneous
polynomial x^{2} has been extensively studied in cryptography. Due to its associated
group structure, iterating this polynomial gives rise to a number of interesting
cryptographic applications such as time-lock puzzles and verifiable delay functions.
On the other hand, the associated group structure leads to quantum attacks on the
applications.
In this work, we consider whether inhomogeneous polynomials, such as 2x^{2}+3x+1, can have useful cryptographic applications. We focus on the case of polynomials mod 2^{n}, due to some useful mathematical properties. The natural group structure no longer exists, so the quantum attacks, but also the applications, no longer immediately apply. We nevertheless show classical polynomial-time attacks on analogs of hard problems from the homogeneous setting. We conclude by proposing new computational assumptions relating to these inhomogeneous polynomials, with cryptographic applications.
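The abstract's running example can be iterated directly; a minimal sketch mod 2^n:

```python
# Iterating the example inhomogeneous polynomial p(x) = 2x^2 + 3x + 1
# mod 2^n. Without the group structure of the homogeneous x^2 case,
# there is no obvious algebraic shortcut for computing the i-th iterate.
n = 16
MOD = 1 << n

def p(x):
    return (2 * x * x + 3 * x + 1) % MOD

def p_iter(x, i):
    """Compute p^{(i)}(x), the i-fold iterate starting at x."""
    for _ in range(i):
        x = p(x)
    return x
```

The paper's question is precisely which delay-style hardness assumptions about such iterates survive once the group structure, and with it both the known quantum attacks and the known applications, is gone.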
@misc{EPRINT:GuaZha21,
author = {Jiaxin Guan and Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2021/1399}, note = {\url{https://eprint.iacr.org/2021/1399}}, title = {Iterated Inhomogeneous Polynomials}, year = {2021} }  
New Approaches for Quantum Copy-Protection


Quantum copy protection uses the unclonability of quantum states to construct quantum
software that provably cannot be pirated. Copy protection would be immensely useful,
but unfortunately little is known about how to achieve it in general. In this work, we
make progress on this goal, by giving the following results:
• We show how to copy protect any program that cannot be learned from its input/output behavior, relative to a classical oracle. This improves on Aaronson [CCC '09], which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection.
• We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying, but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public-key quantum money. Our construction is general, applicable to many recent watermarking schemes.
@inproceedings{C:ALLZZ21,
author = {Scott Aaronson and Jiahui Liu and Qipeng Liu and Mark Zhandry and Ruizhe Zhang}, booktitle = {CRYPTO~2021, Part~I}, editor = {Tal Malkin and Chris Peikert}, month = aug, pages = {526--555}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {New Approaches for Quantum Copy-Protection}, volume = {12825}, year = {2021} }
Hidden Cosets and Applications to Unclonable Cryptography


In 2012, Aaronson and Christiano introduced the idea of hidden subspace states to
build public-key quantum money [STOC '12]. Since then, this idea has been applied to
realize several other cryptographic primitives which enjoy some form of unclonability.
In this work, we study a generalization of hidden subspace states to hidden coset
states. This notion was considered independently by Vidick and Zhang [Eurocrypt '21],
in the context of proofs of quantum knowledge from quantum money schemes. We explore
unclonable properties of coset states and several applications:
• We show that, assuming indistinguishability obfuscation (iO), hidden coset states possess a certain direct product hardness property, which immediately implies a tokenized signature scheme in the plain model. Previously, a tokenized signature scheme was known only relative to an oracle, from a work of Ben-David and Sattath [QCrypt '17].
• Combining a tokenized signature scheme with extractable witness encryption, we give a construction of an unclonable decryption scheme in the plain model. The latter primitive was recently proposed by Georgiou and Zhandry [ePrint '20], who gave a construction relative to a classical oracle.
• We conjecture that coset states satisfy a certain natural (information-theoretic) monogamy-of-entanglement property. Assuming this conjecture is true, we remove the requirement for extractable witness encryption in our unclonable decryption construction, by relying instead on compute-and-compare obfuscation for the class of unpredictable distributions. As potential evidence in support of the monogamy conjecture, we prove a weaker version of this monogamy property, which we believe will still be of independent interest.
• Finally, we give a construction of a copy-protection scheme for pseudorandom functions (PRFs) in the plain model. Our scheme is secure either assuming iO, OWF, and extractable witness encryption, or assuming iO, OWF, compute-and-compare obfuscation for the class of unpredictable distributions, and the conjectured monogamy property mentioned above. This is the first example of a copy-protection scheme with provable security in the plain model for a class of functions that is not evasive.
@inproceedings{C:CLLZ21,
author = {Andrea Coladangelo and Jiahui Liu and Qipeng Liu and Mark Zhandry}, booktitle = {CRYPTO~2021, Part~I}, editor = {Tal Malkin and Chris Peikert}, month = aug, pages = {556--584}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Hidden Cosets and Applications to Unclonable Cryptography}, volume = {12825}, year = {2021} }
White Box Traitor Tracing


Traitor tracing aims to identify the source of leaked decryption keys. Since the
"traitor" can try to hide their key within obfuscated code in order to evade tracing,
the tracing algorithm should work for general, potentially obfuscated, decoder
programs. In the setting of such general decoder programs, prior work uses
black box tracing: the tracing algorithm ignores the implementation of the
decoder, and instead traces just by making queries to the decoder and observing the
outputs.
We observe that, in some settings, such black box tracing leads to consistency and user privacy issues. On the other hand, these issues do not appear inherent to white box tracing, where the tracing algorithm actually inspects the decoder implementation. We therefore develop new white box traitor tracing schemes providing consistency and/or privacy. Our schemes can be instantiated under various assumptions ranging from public key encryption and NIZKs to indistinguishability obfuscation, with different tradeoffs. To the best of our knowledge, ours is the first work to consider white box tracing in the general decoder setting.
@inproceedings{C:Zhandry21,
author = {Mark Zhandry}, booktitle = {CRYPTO~2021, Part~IV}, editor = {Tal Malkin and Chris Peikert}, month = aug, pages = {303--333}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {White Box Traitor Tracing}, volume = {12828}, year = {2021} }
The Relationship Between Idealized Models Under Computationally Bounded Adversaries


The random oracle, generic group, and generic bilinear map models (ROM, GGM, GBM,
respectively) are fundamental heuristics used to justify new computational assumptions
and prove the security of efficient cryptosystems. While known to be invalid in some
contrived settings, the heuristics generally seem reasonable for real-world
applications.
In this work, we ask: which heuristics are closer to reality? Or conversely, which heuristics are a larger leap? We answer this question through the framework of computational indifferentiability, showing that the ROM is a strictly "milder" heuristic than the GGM, which in turn is strictly milder than the GBM. While this may seem like the expected outcome, we explain why it does not follow from prior works and is not the a priori obvious conclusion. In order to prove our results, we develop new ideas for proving computational indifferentiability separations.
@misc{EPRINT:ZhaZha21,
author = {Mark Zhandry and Cong Zhang}, howpublished = {Cryptology ePrint Archive, Report 2021/240}, note = {\url{https://eprint.iacr.org/2021/240}}, title = {The Relationship Between Idealized Models Under Computationally Bounded Adversaries}, year = {2021} }  
Towards Non-Interactive Witness Hiding


Witness hiding proofs require that the verifier cannot find a witness after seeing a
proof. The exact round complexity needed for witness hiding proofs has so far remained
an open question. In this work, we provide compelling evidence that witness hiding
proofs are achievable non-interactively for wide classes of languages. We use
non-interactive witness-indistinguishable proofs as the basis for all of our
protocols. We give four schemes in different settings under different assumptions:
• A universal non-interactive proof that is witness hiding as long as any proof system, possibly an inefficient and/or non-uniform scheme, is witness hiding, has a known bound on verifier runtime, and has short proofs of soundness.
• A non-uniform non-interactive protocol justified under a worst-case complexity assumption that is witness hiding and efficient, but may not have short proofs of soundness.
• A new security analysis of the two-message argument of Pass [Crypto 2003], showing witness hiding for any non-uniformly hard distribution. We propose a heuristic approach to removing the first message, yielding a non-interactive argument.
• A witness hiding non-interactive proof system for languages with unique witnesses, assuming the non-existence of a weak form of witness encryption for any language in NP ∩ coNP.
@inproceedings{TCC:KuyZha20,
author = {Benjamin Kuykendall and Mark Zhandry}, booktitle = {TCC~2020, Part~I}, editor = {Rafael Pass and Krzysztof Pietrzak}, month = nov, pages = {627--656}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Towards Non-interactive Witness Hiding}, volume = {12550}, year = {2020} }
Schrödinger's Pirate: How To Trace a Quantum Decoder


We explore the problem of traitor tracing where the pirate decoder can contain a
quantum state. Our main results include:
• We show how to overcome numerous definitional challenges to give a meaningful notion of tracing for quantum decoders.
• We give negative results, demonstrating barriers to adapting classical tracing algorithms to the quantum decoder setting.
• On the other hand, we show how to trace quantum decoders in the setting of (public key) private linear broadcast encryption, capturing a common approach to traitor tracing.
@inproceedings{TCC:Zhandry20,
author = {Mark Zhandry}, booktitle = {TCC~2020, Part~III}, editor = {Rafael Pass and Krzysztof Pietrzak}, month = nov, pages = {61--91}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {{Schr{\"o}dinger}'s Pirate: How to Trace a Quantum Decoder}, volume = {12552}, year = {2020} }
New Techniques for Traitor Tracing: Size N^{⅓} and More from Pairings


The best existing pairing-based traitor tracing schemes have O(√N)-sized
parameters, a bound that has stood since 2006. This intuitively seems consistent with
the fact that pairings allow for degree-2 computations, yielding a quadratic
compression.
In this work, we show that this intuition is false by building a tracing scheme from pairings with O(∛N)-sized parameters. We additionally give schemes with a variety of parameter size tradeoffs, including a scheme with constant-size ciphertexts and public keys (but linear-sized secret keys). All of our schemes make black-box use of the pairings. We obtain our schemes by developing a number of new traitor tracing techniques, giving the first significant parameter improvements in pairing-based traitor tracing in over a decade.
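To make the quantitative claim concrete, the following toy snippet compares the long-standing square-root parameter growth with the cube-root growth achieved here, for a few user counts N. Constants and the exact parameter being measured are ignored; the functions are purely illustrative.

```python
# Illustrative comparison of parameter growth in N (number of users):
# the prior sqrt(N) bound vs the new cube-root bound, constants ignored.
def sqrt_size(N):
    return round(N ** 0.5)

def cuberoot_size(N):
    return round(N ** (1 / 3))

for N in (10**6, 10**9):
    print(N, sqrt_size(N), cuberoot_size(N))
```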
@inproceedings{C:Zhandry20,
author = {Mark Zhandry}, booktitle = {CRYPTO~2020, Part~I}, editor = {Daniele Micciancio and Thomas Ristenpart}, month = aug, pages = {652--682}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {New Techniques for Traitor Tracing: Size {$N^{1/3}$} and More from Pairings}, volume = {12170}, year = {2020} }
Indifferentiability for Public Key Cryptosystems


We initiate the study of indifferentiability for public key encryption and other
public key primitives. Our main results are definitions and constructions of public
key cryptosystems that are indifferentiable from ideal cryptosystems, in the random
oracle model. Cryptosystems include:
• Public key encryption;
• Digital signatures;
• Non-interactive key agreement.
Our schemes are based on standard public key assumptions. By being indifferentiable from an ideal object, our schemes satisfy any security property that can be represented as a single-stage game and can be composed to operate in higher-level protocols.
@inproceedings{C:ZhaZha20,
author = {Mark Zhandry and Cong Zhang}, booktitle = {CRYPTO~2020, Part~I}, editor = {Daniele Micciancio and Thomas Ristenpart}, month = aug, pages = {63--93}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Indifferentiability for Public Key Cryptosystems}, volume = {12170}, year = {2020} }
Unclonable Decryption Keys


We initiate the study of encryption schemes where the decryption keys are unclonable
quantum objects, which we call single decryptor encryption. We give a number of
initial results in this area:
• We formalize the notion of single decryptor encryption.
• We show that secret-key single decryptor encryption is possible unconditionally, in the setting where a limited number of ciphertexts are given. However, given an encryption oracle, we show that unconditional security is impossible.
• We show how to use a very recent notion of one-shot signatures, together with sufficiently powerful witness encryption, to achieve public key single decryptor encryption.
• We demonstrate several extensions of our scheme, achieving a number of interesting properties that are not possible classically.
@misc{EPRINT:GeoZha20,
author = {Marios Georgiou and Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2020/877}, note = {\url{https://eprint.iacr.org/2020/877}}, title = {Unclonable Decryption Keys}, year = {2020} }  
Quantum Immune One-Time Memories


One-time memories (OTM) are the hardware version of oblivious transfer, and are useful
for constructing objects that are impossible with software alone, such as one-time
programs. In this work, we consider attacks on OTMs where a quantum adversary can
leverage his physical access to the memory to mount quantum "superposition attacks"
against the memory. Such attacks result in significantly weakened OTMs. For example,
in the application to one-time programs, it may appear that such an adversary can
always “quantumize” the classical protocol by running it on a superposition of inputs,
and therefore learn superpositions of outputs of the protocol.
Perhaps surprisingly, we show that this intuition is false: we construct one-time programs from quantum-accessible one-time memories where the view of an adversary, despite making quantum queries, can be simulated by making only classical queries to the ideal functionality. At the heart of our work is a method of immunizing one-time memories against superposition attacks.
@misc{EPRINT:LiuSahZha20,
author = {Qipeng Liu and Amit Sahai and Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2020/871}, note = {\url{https://eprint.iacr.org/2020/871}}, title = {Quantum Immune One-Time Memories}, year = {2020} }
One-shot Signatures and Applications to Hybrid Quantum/Classical Authentication


We define the notion of one-shot signatures, which are signatures where any
secret key can be used to sign only a single message, after which it self-destructs. While
such signatures are of course impossible classically, we construct one-shot signatures
using quantum no-cloning. In particular, we show that such signatures exist
relative to a classical oracle, which we can then heuristically obfuscate using known
indistinguishability obfuscation schemes.
We show that one-shot signatures have numerous applications for hybrid quantum/classical cryptographic tasks, where all communication is required to be classical, but local quantum operations are allowed. Applications include one-time signature tokens, quantum money with classical communication, decentralized blockchain-less cryptocurrency, signature schemes with unclonable secret keys, non-interactive certifiable min-entropy, and more. We thus position one-shot signatures as a powerful new building block for novel quantum cryptographic protocols.
@inproceedings{STOC:AGKZ20,
author = {Ryan Amos and Marios Georgiou and Aggelos Kiayias and Mark Zhandry}, booktitle = {52nd ACM STOC}, editor = {Konstantin Makarychev and Yury Makarychev and Madhur Tulsiani and Gautam Kamath and Julia Chuzhoy}, month = jun, pages = {255--268}, publisher = {{ACM} Press}, title = {One-shot signatures and applications to hybrid quantum/classical authentication}, year = {2020} }
Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption


An affine determinant program ADP: {0,1}^{n} → {0,1} is specified
by a tuple (A,B_{1},…,B_{n}) of square matrices over
F_{q} and a function Eval: F_{q} → {0,1}, and is evaluated on x
∈ {0,1}^{n} by computing Eval(det(A + ∑ x_{i}B_{i})).
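The definition above can be made concrete with a tiny evaluator. A minimal sketch follows, where the matrices, the field size q = 7, and the choice Eval(d) = [d ≠ 0] are all illustrative assumptions, not taken from the paper; the particular (A, B_1, B_2) below happens to make the ADP compute NAND.

```python
# Toy evaluator for an affine determinant program (ADP) over F_q.
# All concrete choices here (q, Eval, the matrices) are illustrative.
q = 7  # small prime; all entries are taken mod q

def det(M):
    """Integer determinant by Laplace expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def adp_eval(A, Bs, x):
    """Compute Eval(det(A + sum_i x_i * B_i)) over F_q, with Eval(d) = [d != 0]."""
    n = len(A)
    M = [[(A[r][c] + sum(x[i] * Bs[i][r][c] for i in range(len(Bs)))) % q
          for c in range(n)] for r in range(n)]
    return 0 if det(M) % q == 0 else 1

# det([[1, x_1], [x_2, 1]]) = 1 - x_1*x_2, which vanishes only at x = (1, 1),
# so this ADP computes NAND(x_1, x_2).
A  = [[1, 0], [0, 1]]
B1 = [[0, 1], [0, 0]]
B2 = [[0, 0], [1, 0]]
assert [adp_eval(A, [B1, B2], x) for x in ((0, 0), (0, 1), (1, 0), (1, 1))] == [1, 1, 1, 0]
```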
In this work, we suggest ADPs as a new framework for building general-purpose obfuscation and witness encryption. We provide evidence to suggest that constructions following our ADP-based framework may one day yield secure, practically feasible obfuscation. As a proof of concept, we give a candidate ADP-based construction of indistinguishability obfuscation for all circuits along with a simple witness encryption candidate. We provide cryptanalysis demonstrating that our schemes resist several potential attacks, and leave further cryptanalysis to future work. Lastly, we explore practically feasible applications of our witness encryption candidate, such as public-key encryption with near-optimal key generation.
@inproceedings{ITCS:BIJMSZ20,
author = {James Bartusek and Yuval Ishai and Aayush Jain and Fermi Ma and Amit Sahai and Mark Zhandry}, booktitle = {ITCS 2020}, editor = {Thomas Vidick}, month = jan, pages = {82:1--82:39}, publisher = {{LIPIcs}}, title = {Affine Determinant Programs: {A} Framework for Obfuscation and Witness Encryption}, volume = {151}, year = {2020} }
The Distinction Between Fixed and Random Generators in Group-Based Assumptions


There is surprisingly little consensus on the precise role of the generator g in
groupbased assumptions such as DDH. Some works consider g to be a fixed part of the
group description, while others take it to be random. We study this subtle distinction
from a number of angles.
• In the generic group model, we demonstrate the plausibility of groups in which random-generator DDH (resp. CDH) is hard but fixed-generator DDH (resp. CDH) is easy. We observe that such groups have interesting cryptographic applications.
• We find that seemingly tight generic lower bounds for the Discrete-Log and CDH problems with preprocessing (Corrigan-Gibbs and Kogan, Eurocrypt 2018) are not tight in the sub-constant success probability regime if the generator is random. We resolve this by proving tight lower bounds for the random-generator variants; our results formalize the intuition that using a random generator will reduce the effectiveness of preprocessing attacks.
• We observe that DDH-like assumptions in which exponents are drawn from low-entropy distributions are particularly sensitive to the fixed- vs. random-generator distinction. Most notably, we discover that the Strong Power DDH assumption of Komargodski and Yogev (Eurocrypt 2018) used for non-malleable point obfuscation is in fact false precisely because it requires a fixed generator. In response, we formulate an alternative fixed-generator assumption that suffices for a new construction of non-malleable point obfuscation, and we prove the assumption holds in the generic group model. We also give a generic group proof for the security of fixed-generator, low-entropy DDH (Canetti, Crypto 1997).
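The fixed- vs. random-generator distinction can be illustrated mechanically: under the fixed convention one generator is reused across all instances, while under the random convention each instance samples a fresh generator. The sketch below uses a tiny prime-order subgroup of Z_p^* with toy parameters (p = 1019, q = 509), far from cryptographic size and purely for illustration.

```python
# Toy contrast of fixed- vs. random-generator DDH instance sampling
# in a small prime-order subgroup of Z_p^* (illustrative parameters only).
import secrets

p, q = 1019, 509                   # q is prime and divides p - 1
g_fixed = pow(2, (p - 1) // q, p)  # one generator of the order-q subgroup, fixed once

def random_generator():
    """Sample a fresh generator of the order-q subgroup."""
    while True:
        h = pow(secrets.randbelow(p - 1) + 1, (p - 1) // q, p)
        if h != 1:
            return h

def ddh_tuple(g):
    """A DDH instance (g^a, g^b, g^ab) relative to the given generator."""
    a, b = secrets.randbelow(q), secrets.randbelow(q)
    return pow(g, a, p), pow(g, b, p), pow(g, a * b, p)

fixed_inst = ddh_tuple(g_fixed)             # fixed-generator convention
rand_inst = ddh_tuple(random_generator())   # random-generator convention
```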
@inproceedings{C:BarMaZha19,
author = {James Bartusek and Fermi Ma and Mark Zhandry}, booktitle = {CRYPTO~2019, Part~II}, editor = {Alexandra Boldyreva and Daniele Micciancio}, month = aug, pages = {801--830}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {The Distinction Between Fixed and Random Generators in Group-Based Assumptions}, volume = {11693}, year = {2019} }
Revisiting Post-Quantum Fiat-Shamir


The Fiat-Shamir transformation is a useful approach to building non-interactive
arguments (of knowledge) in the random oracle model. Unfortunately, existing proof
techniques are incapable of proving the security of Fiat-Shamir in the quantum
setting. The problem stems from (1) the difficulty of quantum rewinding, and (2) the
inability of current techniques to adaptively program random oracles in the quantum
setting.
In this work, we show how to overcome the limitations above in many settings. In particular, we give mild conditions under which Fiat-Shamir is secure in the quantum setting. As an application, we show that existing lattice signatures based on Fiat-Shamir are secure without any modifications.
@inproceedings{C:LiuZha19,
author = {Qipeng Liu and Mark Zhandry}, booktitle = {CRYPTO~2019, Part~II}, editor = {Alexandra Boldyreva and Daniele Micciancio}, month = aug, pages = {326--355}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Revisiting Post-quantum {Fiat-Shamir}}, volume = {11693}, year = {2019} }
How to Record Quantum Queries, and Applications to Quantum Indifferentiability


The quantum random oracle model (QROM) has become the standard model in which to prove
the post-quantum security of random-oracle-based constructions. Unfortunately, none of
the known proof techniques allow the reduction to record information about the
adversary's queries, a crucial feature of many classical ROM proofs, including all
proofs of indifferentiability for hash function domain extension. In this work, we
give a new QROM proof technique that overcomes this "recording barrier". Our central
observation is that when viewing the adversary's query and the oracle itself in the
Fourier domain, an oracle query switches from writing to the adversary's space to
writing to the oracle itself. This allows a reduction to simulate the oracle by simply
recording information about the adversary's query in the Fourier domain.
We then use this new technique to show the indifferentiability of the Merkle-Damgård domain extender for hash functions. Given the threat posed by quantum computers and the push toward quantum-resistant cryptosystems, our work represents an important tool for efficient post-quantum cryptosystems.
@inproceedings{C:Zhandry19,
author = {Mark Zhandry}, booktitle = {CRYPTO~2019, Part~II}, editor = {Alexandra Boldyreva and Daniele Micciancio}, month = aug, pages = {239--268}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {How to Record Quantum Queries, and Applications to Quantum Indifferentiability}, volume = {11693}, year = {2019} }
Simple Schemes in the Bounded Storage Model


The bounded storage model promises unconditional security proofs against
computationally unbounded adversaries, so long as the adversary's space is bounded.
In this work, we develop simple new constructions of twoparty key agreement, bit
commitment, and oblivious transfer in this model. In addition to simplicity, our
constructions have several advantages over prior work, including an improved number of
rounds and enhanced correctness. Our schemes are based on Raz's lower bound for
learning parities.
@inproceedings{EC:GuaZha19,
author = {Jiaxin Guan and Mark Zhandry}, booktitle = {EUROCRYPT~2019, Part~III}, editor = {Yuval Ishai and Vincent Rijmen}, month = may, pages = {500--524}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Simple Schemes in the Bounded Storage Model}, volume = {11478}, year = {2019} }
Quantum Lightning Never Strikes the Same State Twice


Public key quantum money can be seen as a version of the quantum no-cloning theorem
that holds even when the quantum states can be verified by the adversary. In this
work, we investigate quantum lightning, a formalization of "collision-free quantum
money" defined by Lutomirski et al. [ICS'10], where no-cloning holds even when the
adversary herself generates the quantum state to be cloned. We then study quantum
adversary herself generates the quantum state to be cloned. We then study quantum
money and quantum lightning, showing the following results:
• We demonstrate the usefulness of quantum lightning beyond quantum money by showing several potential applications, ranging from generating random strings with a proof of entropy to a completely decentralized cryptocurrency without a blockchain, where transactions are instant and local.
• We give win-win results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of security, or they yield quantum money or lightning. Given the difficulty in constructing public key quantum money, this gives some indication that natural schemes do attain strong security guarantees.
• We construct quantum lightning under the assumed multi-collision resistance of random degree-2 systems of polynomials. Our construction is inspired by our win-win result for hash functions, and yields the first plausible standard-model instantiation of a non-collapsing collision-resistant hash function. This improves on a result of Unruh [Eurocrypt'16] that requires a quantum oracle.
• We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC'12] with indistinguishability obfuscation that is secure against quantum computers yields a secure quantum money scheme. This construction can be seen as an instance of our win-win result for signatures, giving the first separation between two security notions for signatures from the literature.
Thus, we provide the first constructions of public key quantum money from several cryptographic assumptions. Along the way, we develop several new techniques, including a new precise variant of the no-cloning theorem.
@inproceedings{EC:Zhandry19b,
author = {Mark Zhandry}, booktitle = {EUROCRYPT~2019, Part~III}, editor = {Yuval Ishai and Vincent Rijmen}, month = may, pages = {408--438}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Quantum Lightning Never Strikes the Same State Twice}, volume = {11478}, year = {2019} }
On ELFs, Deterministic Encryption, and Correlated-Input Security


We construct deterministic public key encryption secure for any constant number
of arbitrarily correlated computationally unpredictable messages. Prior works
required either random oracles or nonstandard knowledge assumptions. In contrast,
our constructions are based on the exponential hardness of DDH, which is plausible in
elliptic curve groups. Our central tool is a new trapdoored extremely lossy
function, which modifies extremely lossy functions by adding a trapdoor.
@inproceedings{EC:Zhandry19a,
author = {Mark Zhandry}, booktitle = {EUROCRYPT~2019, Part~III}, editor = {Yuval Ishai and Vincent Rijmen}, month = may, pages = {3--32}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {On {ELFs}, Deterministic Encryption, and Correlated-Input Security}, volume = {11478}, year = {2019} }
On Finding Quantum Multicollisions


A k-collision for a compressing hash function H is a set of k distinct inputs that all
map to the same output. In this work, we show that for any constant k,
Θ(N^{(1/2)(1−1/(2^{k}−1))}) quantum queries are both necessary and
sufficient to find a k-collision with constant probability. This improves on
the best prior upper bound (Hosoyamada et al., ASIACRYPT 2017) and provides the first
non-trivial lower bound, completely resolving the problem.
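The tight exponent stated in the abstract is easy to tabulate. The snippet below just evaluates (1/2)(1 − 1/(2^k − 1)) exactly; note that k = 2 recovers the familiar N^{1/3} bound for ordinary collisions, and the exponent tends to 1/2 (the Grover preimage-search bound) as k grows.

```python
# Exact evaluation of the tight k-collision query exponent:
# Theta(N^{(1/2)(1 - 1/(2^k - 1))}) quantum queries.
from fractions import Fraction

def kcollision_exponent(k):
    return Fraction(1, 2) * (1 - Fraction(1, 2 ** k - 1))

assert kcollision_exponent(2) == Fraction(1, 3)  # ordinary collisions: N^{1/3}
assert kcollision_exponent(3) == Fraction(3, 7)
```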
@inproceedings{EC:LiuZha19,
author = {Qipeng Liu and Mark Zhandry}, booktitle = {EUROCRYPT~2019, Part~III}, editor = {Yuval Ishai and Vincent Rijmen}, month = may, pages = {189--218}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {On Finding Quantum Multicollisions}, volume = {11478}, year = {2019} }
New Techniques for Obfuscating Conjunctions


A conjunction is a function f(x_{1},...,x_{n}) = ∧_{i ∈
S} l_{i} where S ⊆ [n] and each l_{i} is x_{i} or
¬ x_{i}. Bishop et al. (CRYPTO 2018) recently proposed obfuscating
conjunctions by embedding them in the error positions of a noisy Reed-Solomon codeword
and placing the codeword in a group exponent. They prove distributional virtual black
box (VBB) security in the generic group model for random conjunctions where |S| ≥
0.226n. While conjunction obfuscation is known from LWE, these constructions rely on
substantial technical machinery.
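The conjunction syntax above is straightforward to implement directly; a minimal sketch (the helper name and representation are my own, purely for illustration):

```python
# Direct implementation of the conjunction syntax above: S is the set of
# indices carrying a literal, and each literal is x_i or NOT x_i.
def make_conjunction(literals):
    """literals maps i -> True for literal x_i, i -> False for NOT x_i;
    S = literals.keys(), and indices outside S are unconstrained."""
    def f(x):
        return int(all(x[i] == int(want) for i, want in literals.items()))
    return f

# f(x) = x_0 AND (NOT x_2) over {0,1}^3, i.e. the wildcard pattern 1?0:
f = make_conjunction({0: True, 2: False})
assert f([1, 1, 0]) == 1 and f([1, 0, 0]) == 1
assert f([0, 1, 0]) == 0 and f([1, 1, 1]) == 0
```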
In this work, we conduct an extensive study of simple conjunction obfuscation techniques.
• We abstract the Bishop et al. scheme to obtain an equivalent yet more efficient "dual" scheme that handles conjunctions over exponential-size alphabets. We give a significantly simpler proof of generic group security, which we combine with a novel combinatorial argument to obtain distributional VBB security for |S| of any size.
• If we replace the Reed-Solomon code with a random binary linear code, we can prove security from standard LPN and avoid encoding in a group. This addresses an open problem posed by Bishop et al. to prove security of this simple approach in the standard model.
• We give a new construction that achieves information-theoretic distributional VBB security and weak functionality preservation for |S| ≥ n − n^{δ} and δ < 1. Assuming discrete log and δ < 1/2, we satisfy a stronger notion of functionality preservation for computationally bounded adversaries while still achieving information-theoretic security.
@inproceedings{EC:BLMZ19,
author = {James Bartusek and Tancr{\'e}de Lepoint and Fermi Ma and Mark Zhandry}, booktitle = {EUROCRYPT~2019, Part~III}, editor = {Yuval Ishai and Vincent Rijmen}, month = may, pages = {636--666}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {New Techniques for Obfuscating Conjunctions}, volume = {11478}, year = {2019} }
Parameter-Hiding Order Revealing Encryption


Order-revealing encryption (ORE) is a popular primitive for outsourcing encrypted
databases, as it allows for efficiently performing range queries over encrypted data.
Unfortunately, a series of works, starting with Naveed et al. (CCS 2015), have shown
that when the adversary has a good estimate of the distribution of the data, ORE
provides little protection. In this work, we consider the case that the database
entries are drawn identically and independently from a distribution of known shape,
but for which the mean and variance are not (and thus the attacks of Naveed et al. do
not apply). We define a new notion of security for ORE, called parameter-hiding ORE,
which maintains the secrecy of these parameters. We give a construction of ORE
satisfying our new definition from bilinear maps.
@inproceedings{AC:CLOZZ18,
author = {David Cash and FengHao Liu and Adam O'Neill and Mark Zhandry and Cong Zhang}, booktitle = {ASIACRYPT~2018, Part~I}, editor = {Thomas Peyrin and Steven Galbraith}, month = dec, pages = {181--210}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Parameter-Hiding Order Revealing Encryption}, volume = {11272}, year = {2018} }
Preventing Zeroizing Attacks on GGH15


The GGH15 multilinear maps have served as the foundation for a number of cutting-edge
cryptographic proposals. Unfortunately, many schemes built on GGH15 have been
explicitly broken by so-called "zeroizing attacks," which exploit leakage from honest
zero-test queries. The precise settings in which zeroizing attacks are possible have
remained unclear. Most notably, none of the current indistinguishability obfuscation
(iO) candidates from GGH15 have any formal security guarantees against zeroizing
attacks.
In this work, we demonstrate that all known zeroizing attacks on GGH15 implicitly construct algebraic relations between the results of zero-testing and the encoded plaintext elements. We then propose a "GGH15 zeroizing model" as a new general framework which greatly generalizes known attacks. Our second contribution is to describe a new GGH15 variant, which we formally analyze in our GGH15 zeroizing model. We then construct a new iO candidate using our multilinear map, which we prove secure in the GGH15 zeroizing model. This implies resistance to all known zeroizing strategies. The proof relies on the Branching Program Un-Annihilatability (BPUA) Assumption of Garg et al. [TCC 16-B] (which is implied by PRFs in NC^1 secure against P/poly) and the complexity-theoretic p-Bounded Speedup Hypothesis of Miles et al. [ePrint 14] (a strengthening of the Exponential Time Hypothesis).
@inproceedings{TCC:BGMZ18,
author = {James Bartusek and Jiaxin Guan and Fermi Ma and Mark Zhandry}, booktitle = {TCC~2018, Part~II}, editor = {Amos Beimel and Stefan Dziembowski}, month = nov, pages = {544--574}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Return of {GGH15}: Provable Security Against Zeroizing Attacks}, volume = {11240}, year = {2018} }
The MMap Strikes Back: Obfuscation and New Multilinear Maps Immune to CLT13 Zeroizing Attacks


We devise the first weak multilinear map model for CLT13 multilinear maps (Coron et
al., CRYPTO 2013) that captures all known classical polynomialtime attacks on the
maps. We then show important applications of our model. First, we show that in our
model, several existing obfuscation and orderrevealing encryption schemes, when
instantiated with CLT13 maps, are secure against known attacks under a mild algebraic
complexity assumption used in prior work. These are schemes that are actually being
implemented for experimentation. However, until our work, they had no rigorous
justification for security.
Next, we turn to building constant degree multilinear maps on top of CLT13 for which there are no known attacks. Precisely, we prove that our scheme achieves the ideal security notion for multilinear maps in our weak CLT13 model, under a much stronger variant of the algebraic complexity assumption used above. Our multilinear maps do not achieve the full functionality of multilinear maps as envisioned by Boneh and Silverberg (Contemporary Mathematics, 2003), but do allow for rerandomization and for encoding arbitrary plaintext elements.
@inproceedings{TCC:MaZha18,
author = {Fermi Ma and Mark Zhandry}, booktitle = {TCC~2018, Part~II}, editor = {Amos Beimel and Stefan Dziembowski}, month = nov, pages = {513--543}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {The {MMap} Strikes Back: Obfuscation and New Multilinear Maps Immune to {CLT13} Zeroizing Attacks}, volume = {11240}, year = {2018} }
Impossibility of OrderRevealing Encryption in Idealized Models


An Order-Revealing Encryption (ORE) scheme gives a public procedure by which two
ciphertexts can be compared to reveal the order of their underlying plaintexts. The
ideal security notion for ORE is that only the order is revealed — anything
else, such as the distance between plaintexts, is hidden. The only known constructions
of ORE achieving such ideal security are based on cryptographic multilinear maps, and
are currently too impractical for realworld applications. In this work, we give
evidence that building ORE from weaker tools may be hard. Indeed, we show black-box
separations between ORE and most symmetric-key primitives, as well as public key
encryption and anything else implied by generic groups in a black-box way. Thus, any
construction of ORE must either (1) achieve weaker notions of security, (2) be based
on more complicated cryptographic tools, or (3) require non-black-box techniques. This
suggests that any ORE achieving ideal security will likely be somewhat inefficient.
Central to our proof is a proof of impossibility for what we call information-theoretic ORE, which has connections to tournament graphs and a theorem of Erdős. This impossibility result will be useful for proving other black-box separations for ORE.
@inproceedings{TCC:ZhaZha18,
author = {Mark Zhandry and Cong Zhang}, booktitle = {TCC~2018, Part~II}, editor = {Amos Beimel and Stefan Dziembowski}, month = nov, pages = {129--158}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Impossibility of Order-Revealing Encryption in Idealized Models}, volume = {11240}, year = {2018} }
Multiparty Non-Interactive Key Exchange and More From Isogenies on Elliptic Curves


We describe a framework for constructing an efficient non-interactive key exchange
(NIKE) protocol for n parties for any n≥2. Our approach is based on the problem of
computing isogenies between isogenous elliptic curves, which is believed to be
difficult. We do not obtain a working protocol because of a missing step that is
currently an open problem. What we need to complete our protocol is an efficient
algorithm that takes as input an abelian variety presented as a product of isogenous
elliptic curves, and outputs an isomorphism invariant of the abelian variety.
Our framework builds a cryptographic invariant map, which is a new primitive closely related to a cryptographic multilinear map, but whose range does not necessarily have a group structure. Nevertheless, we show that a cryptographic invariant map can be used to build several cryptographic primitives, including NIKE, that were previously constructed from multilinear maps and indistinguishability obfuscation.
@article{Boneh2018MultipartyNK,
title = {Multiparty Non-Interactive Key Exchange and More From Isogenies on Elliptic Curves}, author = {Dan Boneh and Darren B. Glass and Daniel Krashen and Kristin E. Lauter and Shahed Sharif and Alice Silverberg and Mehdi Tibouchi and Mark Zhandry}, journal = {Journal of Mathematical Cryptology}, year = {2018}, volume = {14}, pages = {5--14} }
Decomposable Obfuscation: A Framework for Building Applications of Obfuscation From Polynomial Hardness


There is some evidence that indistinguishability obfuscation (iO) requires either
exponentially many assumptions or (sub)exponentially hard assumptions, and indeed, all
known ways of building obfuscation suffer one of these two limitations. As such, any
application built from iO suffers from these limitations as well. However, for most
applications, such limitations do not appear to be inherent to the application, just
the approach using iO. Indeed, several recent works have shown how to base
applications of iO instead on functional encryption (FE), which can in turn be based
on the polynomial hardness of just a few assumptions. However, these constructions are
quite complicated and recycle a lot of similar techniques.
In this work, we unify the results of previous works in the form of a weakened notion of obfuscation, called Decomposable Obfuscation. We show (1) how to build decomposable obfuscation from functional encryption, and (2) how to build a variety of applications from decomposable obfuscation, including all of the applications already known from FE. The construction in (1) hides most of the difficult techniques in the prior work, whereas the constructions in (2) are much closer to the comparatively simple constructions from iO. As such, decomposable obfuscation represents a convenient new platform for obtaining more applications from polynomial hardness.
@inproceedings{TCC:LiuZha17,
author = {Qipeng Liu and Mark Zhandry}, booktitle = {TCC~2017, Part~I}, editor = {Yael Kalai and Leonid Reyzin}, month = nov, pages = {138--169}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Decomposable Obfuscation: {A} Framework for Building Applications of Obfuscation from Polynomial Hardness}, volume = {10677}, year = {2017} }
New Security Notions and Feasibility Results for Authentication of Quantum Data


We give a new class of security definitions for authentication in the quantum setting.
Our definitions capture and strengthen several existing definitions, including
superposition attacks on classical authentication, as well as full authentication of
quantum data. We argue that our definitions resolve some of the shortcomings of
existing definitions.
We then give several feasibility results for our strong definitions. As a consequence, we obtain several interesting results, including: (1) the classical Carter-Wegman authentication scheme with 3-universal hashing is secure against superposition attacks, as well as adversaries with quantum side information; (2) quantum authentication where the entire key can be reused if verification is successful; (3) conceptually simple constructions of quantum authentication; and (4) a conceptually simple QKD protocol.
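For a sense of the classical object in result (1), here is an illustrative sketch (not the paper's exact scheme, and with no quantum security analysis) of a Carter-Wegman-style MAC: a uniformly random polynomial of degree at most 2 over a prime field is a 3-universal hash family, and its output is masked with a one-time pad. The field size and key format are assumptions for the sketch.

```python
# Sketch of a Carter-Wegman-style MAC from a 3-universal hash family.
# h_{a,b,c}(x) = a*x^2 + b*x + c mod P is 3-wise independent for random a, b, c.
import secrets

P = (1 << 61) - 1  # a Mersenne prime defining the field

def keygen():
    # three random coefficients define the hash, plus a one-time pad
    coeffs = [secrets.randbelow(P) for _ in range(3)]
    pad = secrets.randbelow(P)
    return coeffs, pad

def mac(key, msg):
    (a, b, c), pad = key
    h = (a * msg * msg + b * msg + c) % P  # 3-universal hash of the message
    return (h + pad) % P                   # mask with the one-time pad

def verify(key, msg, tag):
    return mac(key, msg) == tag
```

Each pad should be used for only one message, which is what gives the scheme its information-theoretic flavor classically.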
@inproceedings{C:GarYueZha17,
author = {Sumegha Garg and Henry Yuen and Mark Zhandry}, booktitle = {CRYPTO~2017, Part~II}, editor = {Jonathan Katz and Hovav Shacham}, month = aug, pages = {342--371}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {New Security Notions and Feasibility Results for Authentication of Quantum Data}, volume = {10402}, year = {2017} }
Breaking the Sub-Exponential Barrier in Obfustopia


Indistinguishability obfuscation (iO) has emerged as a surprisingly powerful notion.
Almost all known cryptographic primitives can be constructed from general purpose iO
and other minimalistic assumptions such as one-way functions. A major challenge in
this direction of research is to develop novel techniques for using iO since iO by
itself offers virtually no protection for secret information in the underlying
programs. When dealing with complex situations, often these techniques have to
consider an exponential number of hybrids (usually one per input) in the security
proof. This results in a sub-exponential loss in the security reduction.
Unfortunately, this scenario is becoming more and more common and appears to be a
fundamental barrier to many current techniques.
A parallel research challenge is building obfuscation from simpler assumptions. Unfortunately, it appears that such a construction would likely incur an exponential loss in the security reduction. Thus, achieving any application of iO from simpler assumptions would also require a sub-exponential loss, even if the iO-to-application security proof incurred a polynomial loss. Functional encryption (FE) is known to be equivalent to iO up to a sub-exponential loss in the FE-to-iO security reduction; yet, unlike iO, FE can be achieved from simpler assumptions (namely, specific multilinear map assumptions) with only a polynomial loss. In the interest of basing applications on weaker assumptions, we therefore argue for using FE as the starting point, rather than iO, and restricting to reductions with only a polynomial loss. By significantly expanding on ideas developed by Garg, Pandey, and Srinivasan (CRYPTO 2016), we achieve the following early results in this line of study: • We construct universal samplers based only on polynomially-secure public-key FE. As an application of this result, we construct a non-interactive multiparty key exchange (NIKE) protocol for an unbounded number of users without a trusted setup. Prior to this work, such constructions were only known from indistinguishability obfuscation. • We also construct trapdoor one-way permutations (OWP) based on polynomially-secure public-key FE. This improves upon the recent result of Bitansky, Paneth, and Wichs (TCC 2016), which requires iO of sub-exponential strength. We proceed in two steps, first giving a construction requiring iO of polynomial strength, and then specializing the FE-to-iO conversion to our specific application. Many of the techniques that have been developed for using iO, including many of those based on the "punctured programming" approach, become inapplicable when we insist on polynomial reductions to FE.
As such, our results above require many new ideas that will likely be useful for future works on basing security on FE.
@inproceedings{EC:GPSZ17,
author = {Sanjam Garg and Omkant Pandey and Akshayaram Srinivasan and Mark Zhandry}, booktitle = {EUROCRYPT~2017, Part~III}, editor = {Jean-S{\'{e}}bastien Coron and Jesper Buus Nielsen}, month = apr # {~/~} # may, pages = {156--181}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Breaking the Sub-Exponential Barrier in Obfustopia}, volume = {10212}, year = {2017} }
Encryptor Combiners: A Unified Approach to Multiparty NIKE, (H)IBE, and Broadcast Encryption


We define the concept of an encryptor combiner. Roughly, such a combiner takes as
input n public keys for a public key encryption scheme, and produces a new combined
public key. Anyone knowing a secret key for one of the input public keys can learn the
secret key for the combined public key, but an outsider who just knows the input
public keys (who can therefore compute the combined public key for himself) cannot
decrypt ciphertexts from the combined public key. We actually think of public keys
more generally as encryption procedures, which can correspond to, say, encrypting to a
particular identity under an IBE scheme or encrypting to a set of attributes under an
ABE scheme.
We show that encryptor combiners satisfying certain natural properties can give natural constructions of multiparty non-interactive key exchange, low-overhead broadcast encryption, and hierarchical identity-based encryption. We then show how to construct two different encryptor combiners. Our first is built from universal samplers (which can in turn be built from indistinguishability obfuscation) and is sufficient for each application above, in some cases improving on existing obfuscation-based constructions. Our second is built from lattices, and is sufficient for hierarchical identity-based encryption. Thus, encryptor combiners serve as a new abstraction that (1) is a useful tool for designing cryptosystems, (2) unifies constructing hierarchical IBE from vastly different assumptions, and (3) provides a target for instantiating obfuscation applications from better tools.
@misc{EPRINT:MaZha17,
author = {Fermi Ma and Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2017/152}, note = {\url{https://eprint.iacr.org/2017/152}}, title = {Encryptor Combiners: {A} Unified Approach to Multiparty {NIKE}, ({H}){IBE}, and Broadcast Encryption}, year = {2017} }  
How to Generate and Use Universal Samplers


The random oracle is an idealization that allows one to model a hash function as an
oracle that outputs a uniformly random string on each input. We introduce the notion of
a universal sampler scheme as a method for securely sampling from arbitrary
distributions.
We first motivate such a notion by describing several applications, including generating the trusted parameters for many schemes from just a single trusted setup. We further demonstrate the versatility of universal samplers by showing how they give rise to applications such as identity-based encryption and multiparty key exchange. We give a solution in the random oracle model based on indistinguishability obfuscation. At the heart of our construction and proof is a new technique we call "delayed backdoor programming".
@inproceedings{AC:HJKSWZ16,
author = {Dennis Hofheinz and Tibor Jager and Dakshita Khurana and Amit Sahai and Brent Waters and Mark Zhandry}, booktitle = {ASIACRYPT~2016, Part~II}, editor = {Jung Hee Cheon and Tsuyoshi Takagi}, month = dec, pages = {715--744}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {How to Generate and Use Universal Samplers}, volume = {10032}, year = {2016} }
A Note on Quantum-Secure PRPs


We show how to construct pseudorandom permutations (PRPs) that remain secure even if
the adversary can query the permutation on a quantum superposition of inputs. Such
PRPs are called quantum-secure. Our construction combines a quantum-secure
pseudorandom function together with constructions of classical format-preserving
encryption. By combining known results, we obtain the first quantum-secure
PRP in this model whose security relies only on the existence of one-way functions.
Previously, to the best of the author's knowledge, quantum security of PRPs had to be
assumed, and there were no prior security reductions to simpler primitives, let alone
one-way functions.
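The high-level recipe can be illustrated classically: format-preserving-encryption constructions build a permutation from a PRF via a Feistel network, and the abstract's point is that instantiating the round function with a quantum-secure PRF yields a quantum-secure PRP. In this sketch, HMAC-SHA256 merely stands in for the PRF, and the round count and block size are illustrative choices, not the paper's parameters.

```python
# Balanced Feistel network: a permutation on 64-bit blocks built from a PRF.
import hmac
import hashlib

def _round_fn(key, round_idx, half):
    # PRF keyed by (key, round index), applied to one 32-bit half
    data = bytes([round_idx]) + half.to_bytes(8, "big")
    return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest()[:4], "big")

def feistel_encrypt(key, block, rounds=4, half_bits=32):
    mask = (1 << half_bits) - 1
    left, right = block >> half_bits, block & mask
    for i in range(rounds):
        # swap halves, XOR the round function into the new right half
        left, right = right, left ^ (_round_fn(key, i, right) & mask)
    return (left << half_bits) | right

def feistel_decrypt(key, block, rounds=4, half_bits=32):
    mask = (1 << half_bits) - 1
    left, right = block >> half_bits, block & mask
    for i in reversed(range(rounds)):
        # undo one round: the pre-swap right half is the current left half
        left, right = right ^ (_round_fn(key, i, left) & mask), left
    return (left << half_bits) | right
```

Because each round is its own inverse given the round function, decryption simply runs the rounds in reverse.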
@misc{EPRINT:Zhandry16b,
author = {Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2016/1076}, note = {\url{https://eprint.iacr.org/2016/1076}}, title = {A Note on Quantum-Secure {PRPs}}, year = {2016} }
Strong Hardness of Privacy from Weak Traitor Tracing


Despite much study, the computational complexity of differential privacy remains
poorly understood. In this paper we consider the computational complexity of
accurately answering a family Q of statistical queries over a data
universe X under differential privacy. A statistical query on a dataset
D∈X^{n} asks "what fraction of the elements of D satisfy a given
predicate p on X?" Dwork et al. (STOC'09) and Boneh and Zhandry (CRYPTO'14) showed
that if both Q and X are of polynomial size, then there is an efficient differentially
private algorithm that accurately answers all the queries, and if both Q and X are
of exponential size, then, under a plausible assumption, no efficient algorithm exists.
We show that, under the same assumption, if either the number of queries or the data universe is of exponential size, then there is no differentially private algorithm that answers all the queries. Specifically, we prove that if one-way functions and indistinguishability obfuscation exist, then: • For every n, there is a family Q of O(n^{7}) queries on a data universe X of size 2^{d} such that no poly(n,d) time differentially private algorithm takes a dataset D∈X^{n} and outputs accurate answers to every query in Q. • For every n, there is a family Q of 2^{d} queries on a data universe X of size O(n^{7}) such that no poly(n,d) time differentially private algorithm takes a dataset D∈X^{n} and outputs accurate answers to every query in Q. In both cases, the result is nearly quantitatively tight, since there is an efficient differentially private algorithm that answers Ω(n^{2}) queries on an exponential-size data universe, and one that answers exponentially many queries on a data universe of size Ω(n^{2}). Our proofs build on the connection between hardness results in differential privacy and traitor-tracing schemes (Dwork et al., STOC'09; Ullman, STOC'13). We prove our hardness results for a polynomial-size query set (resp., data universe) by showing that they follow from the existence of a special type of traitor-tracing scheme with very short ciphertexts (resp., secret keys), but very weak security guarantees, and then constructing such a scheme.
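For intuition about the object being studied, here is a minimal, standard-textbook example (not a construction from the paper) of answering one statistical query with differential privacy via the Laplace mechanism; the dataset, predicate, and parameters are illustrative.

```python
# Answer a single statistical query "what fraction of D satisfies p?"
# with epsilon-differential privacy using the Laplace mechanism.
import random

def dp_statistical_query(dataset, predicate, epsilon):
    true_answer = sum(1 for x in dataset if predicate(x)) / len(dataset)
    # a statistical query has sensitivity 1/n, so Laplace noise of scale
    # 1/(n * epsilon) suffices for epsilon-differential privacy;
    # a Laplace sample is the difference of two i.i.d. exponential samples
    scale = 1.0 / (len(dataset) * epsilon)
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_answer + noise
```

The hardness results above concern answering *many* such queries simultaneously, where the per-query noise approach breaks down.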
@inproceedings{TCC:KMUZ16,
author = {Lucas Kowalczyk and Tal Malkin and Jonathan Ullman and Mark Zhandry}, booktitle = {TCC~2016B, Part~I}, editor = {Martin Hirt and Adam D. Smith}, month = oct # {~/~} # nov, pages = {659--689}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Strong Hardness of Privacy from Weak Traitor Tracing}, volume = {9985}, year = {2016} }
Secure Obfuscation in a Weak Multilinear Map Model


All known candidate indistinguishability obfuscation (iO) schemes rely on candidate
multilinear maps. Until recently, the strongest proofs of security available for iO
candidates were in a generic model that only allows "honest" use of the multilinear
map. Most notably, in this model the zero-test procedure only reveals whether an
encoded element is 0, and nothing more.
However, this model is inadequate: there have been several attacks on multilinear maps that exploit extra information revealed by the zero-test procedure. In particular, Miles, Sahai and Zhandry [Crypto'16] recently gave a polynomial-time attack on several iO candidates when instantiated with the multilinear maps of Garg, Gentry, and Halevi [Eurocrypt'13], and also proposed a new "weak multilinear map model" that captures all known polynomial-time attacks on GGH13. In this work, we give a new iO candidate which can be seen as a small modification or generalization of the original candidate of Garg, Gentry, Halevi, Raykova, Sahai, and Waters [FOCS'13]. We prove its security in the weak multilinear map model, thus giving the first iO candidate that is provably secure against all known polynomial-time attacks on GGH13. The proof of security relies on a new assumption about the hardness of computing annihilating polynomials, and we show that this assumption is implied by the existence of pseudorandom functions in NC^{1}.
@inproceedings{TCC:GMMSSZ16,
author = {Sanjam Garg and Eric Miles and Pratyay Mukherjee and Amit Sahai and Akshayaram Srinivasan and Mark Zhandry}, booktitle = {TCC~2016B, Part~II}, editor = {Martin Hirt and Adam D. Smith}, month = oct # {~/~} # nov, pages = {241--268}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Secure Obfuscation in a Weak Multilinear Map Model}, volume = {9986}, year = {2016} }
The Magic of ELFs


We introduce the notion of an Extremely Lossy Function (ELF). An ELF is a
family of functions with an image size that is tunable anywhere from injective to
having a polynomial-sized image. Moreover, for any efficient adversary, for a
sufficiently large polynomial r (necessarily chosen to be larger than the running time
of the adversary), the adversary cannot distinguish the injective case from the case
of image size r.
We develop a handful of techniques for using ELFs, and show that such extreme lossiness is useful for instantiating random oracles in several settings. In particular, we show how to use ELFs to build secure point function obfuscation with auxiliary input, as well as polynomially-many hardcore bits for any one-way function. Such applications were previously known from strong knowledge assumptions — for example polynomially-many hardcore bits were only known from differing-inputs obfuscation, a notion whose plausibility has been seriously challenged. We also use ELFs to build a simple hash function with output intractability, a new notion we define that may be useful for generating common reference strings. Next, we give a construction of ELFs relying on the exponential hardness of the decisional Diffie-Hellman problem, which is plausible in elliptic curve groups. Combining with the applications above, our work gives several practical constructions relying on qualitatively different — and arguably better — assumptions than prior works.
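The ELF *interface* (though emphatically not its security) can be illustrated with a toy: a family with an injective mode and a lossy mode whose image has size at most r. A real ELF additionally requires that no adversary running in time much less than r can distinguish the two modes, which this toy does not even attempt.

```python
# Toy illustration of the ELF interface only. Insecure: the two modes here
# are trivially distinguishable, whereas a real ELF hides which mode is used
# from any adversary running in time much less than r.
def elf_gen(r=None):
    if r is None:
        return lambda x: x   # injective mode: identity on the domain
    return lambda x: x % r   # lossy mode: at most r distinct outputs
```

The interesting regime is choosing r to be a polynomial larger than any particular adversary's running time, so that to that adversary the lossy mode looks injective.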
@inproceedings{C:Zhandry16,
author = {Mark Zhandry}, booktitle = {CRYPTO~2016, Part~I}, editor = {Matthew Robshaw and Jonathan Katz}, month = aug, pages = {479--508}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {The Magic of {ELFs}}, volume = {9814}, year = {2016} }
Annihilation Attacks for Multilinear Maps: Cryptanalysis of Indistinguishability Obfuscation over GGH13


In this work, we put forward a new class of polynomial-time attacks on the original
multilinear maps of Garg, Gentry, and Halevi (2013). Previous polynomial-time attacks
on GGH13 were "zeroizing" attacks that generally required the availability of
low-level encodings of zero. Most significantly, such zeroizing attacks were not
applicable to candidate indistinguishability obfuscation (iO) schemes. iO has been the
subject of intense study.
To address this gap, we introduce annihilation attacks, which attack multilinear maps using nonlinear polynomials. Annihilation attacks can work in situations where there are no low-level encodings of zero. Using annihilation attacks, we give the first polynomial-time cryptanalysis of candidate iO schemes over GGH13. More specifically, we exhibit two simple programs that are functionally equivalent, and show how to efficiently distinguish between the obfuscations of these two programs. Given the enormous applicability of iO, it is important to devise iO schemes that can avoid attack.
@inproceedings{C:MilSahZha16,
author = {Eric Miles and Amit Sahai and Mark Zhandry}, booktitle = {CRYPTO~2016, Part~II}, editor = {Matthew Robshaw and Jonathan Katz}, month = aug, pages = {629--658}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Annihilation Attacks for Multilinear Maps: Cryptanalysis of Indistinguishability Obfuscation over {GGH13}}, volume = {9815}, year = {2016} }
Post-Zeroizing Obfuscation: New Mathematical Tools, and the Case of Evasive Circuits


Recent devastating attacks by Cheon et al.~[Eurocrypt'15] and others have highlighted
significant gaps in our intuition about security in candidate multilinear map schemes,
and in candidate obfuscators that use them. The new attacks, and some that were
previously known, are typically called "zeroizing" attacks because they all crucially
rely on the ability of the adversary to create encodings of 0.
In this work, we initiate the study of post-zeroizing obfuscation, and we present a construction for the special case of evasive functions. We show that our obfuscator survives all known attacks on the underlying multilinear maps, by proving that no encodings of 0 can be created by a generic-model adversary. Previous obfuscators (for both evasive and general functions) were either analyzed in a less-conservative "pre-zeroizing" model that does not capture recent attacks, or were proved secure relative to assumptions that are now known to be false. To prove security, we introduce a new technique for analyzing polynomials over multilinear map encodings. This technique shows that the types of encodings an adversary can create are much more restricted than was previously known, and is a crucial step toward achieving post-zeroizing security. We also believe the technique is of independent interest, as it yields efficiency improvements for existing schemes.
@inproceedings{EC:BMSZ16,
author = {Saikrishna Badrinarayanan and Eric Miles and Amit Sahai and Mark Zhandry}, booktitle = {EUROCRYPT~2016, Part~II}, editor = {Marc Fischlin and Jean-S{\'{e}}bastien Coron}, month = may, pages = {764--791}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Post-zeroizing Obfuscation: New Mathematical Tools, and the Case of Evasive Circuits}, volume = {9666}, year = {2016} }
Anonymous Traitor Tracing: How to Embed Arbitrary Information in a Key


In a traitor tracing scheme, each user is given a different decryption key. A content
distributor can encrypt digital content using a public encryption key and each user in
the system can decrypt it using her decryption key. Even if a coalition of users
combines their decryption keys and constructs some "pirate decoder" that is capable of
decrypting the content, there is a public tracing algorithm that is guaranteed to
recover the identity of at least one of the users in the coalition given black-box
access to such a decoder.
In prior solutions, the users are indexed by numbers 1,…,N and the tracing algorithm recovers the index i of a user in a coalition. Such solutions implicitly require the content distributor to keep a record that associates each index i with the actual identifying information for the corresponding user (e.g., name, address, etc.) in order to ensure accountability. In this work, we construct traitor tracing schemes where all of the identifying information about the user can be embedded directly into the user's key and recovered by the tracing algorithm. In particular, the content distributor does not need to separately store any records about the users of the system, and honest users can even remain anonymous to the content distributor. The main technical difficulty comes in designing tracing algorithms that can handle an exponentially large universe of possible identities, rather than just a polynomial set of indices i∈[N]. We solve this by abstracting out an interesting algorithmic problem that has surprising connections with seemingly unrelated areas in cryptography. We also extend our solution to a full "broadcast, trace, and revoke" scheme in which the traced users can subsequently be revoked from the system. Depending on parameters, some of our schemes can be based only on the existence of public-key encryption while others rely on indistinguishability obfuscation.
@inproceedings{EC:NisWicZha16,
author = {Ryo Nishimaki and Daniel Wichs and Mark Zhandry}, booktitle = {EUROCRYPT~2016, Part~II}, editor = {Marc Fischlin and Jean-S{\'{e}}bastien Coron}, month = may, pages = {388--419}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Anonymous Traitor Tracing: How to Embed Arbitrary Information in a Key}, volume = {9666}, year = {2016} }
Order-Revealing Encryption and the Hardness of Private Learning


An order-revealing encryption scheme gives a public procedure by which two ciphertexts
can be compared to reveal the ordering of their underlying plaintexts. We show how to
use order-revealing encryption to separate computationally efficient PAC learning from
efficient (ε,δ)-differentially private PAC learning. That is, we
construct a concept class that is efficiently PAC learnable, but for which every
efficient learner fails to be differentially private. This answers a question of
Kasiviswanathan et al. (FOCS '08, SIAM J. Comput. '11).
To prove our result, we give a generic transformation from an order-revealing encryption scheme into one with strongly correct comparison, which enables the consistent comparison of ciphertexts that are not obtained as the valid encryption of any message. We believe this construction may be of independent interest.
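For a sense of what the order-revealing interface looks like, here is a small-domain sketch in the style of later practical ORE schemes (not a construction from this paper): a public compare procedure takes two ciphertexts and outputs only the order of the plaintexts. HMAC-SHA256 stands in for the PRF, and the 8-bit domain is an illustrative choice.

```python
# Small-domain ORE sketch: each plaintext bit is masked mod 3 by a PRF of the
# bits above it (its prefix), so comparison leaks only the order.
import hmac
import hashlib

def encrypt(key, m, bits=8):
    ct = []
    for i in range(bits - 1, -1, -1):
        prefix = m >> (i + 1)          # bits of m above position i
        bit = (m >> i) & 1
        pad = hmac.new(key, prefix.to_bytes(4, "big") + bytes([i]),
                       hashlib.sha256).digest()[0] % 3
        ct.append((bit + pad) % 3)
    return ct

def compare(ct1, ct2):
    # public procedure: returns -1, 0, or 1 for <, =, > of the plaintexts.
    # At the first differing position the prefixes (hence pads) agree, so
    # the difference mod 3 reveals which plaintext bit is larger.
    for u, v in zip(ct1, ct2):
        if u != v:
            return -1 if (v - u) % 3 == 1 else 1
    return 0
```

Note the leakage profile: the comparison reveals the index of the first differing bit, which is weaker than the "best-possible" semantic security targeted by the paper's multilinear-map construction.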
@inproceedings{TCC:BunZha16,
author = {Mark Bun and Mark Zhandry}, booktitle = {TCC~2016A, Part~I}, editor = {Eyal Kushilevitz and Tal Malkin}, month = jan, pages = {176--206}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Order-Revealing Encryption and the Hardness of Private Learning}, volume = {9562}, year = {2016} }
Functional Encryption without Obfuscation


Previously known functional encryption (FE) schemes for general circuits relied on
indistinguishability obfuscation, which in turn either relies on an exponential number
of assumptions (basically, one per circuit), or a polynomial set of assumptions, but
with an exponential loss in the security reduction. Additionally, these schemes are
proved in an unrealistic selective security model, where the adversary is forced to
specify its target before seeing the public parameters. For these constructions, full
security can be obtained but at the cost of an exponential loss in the security
reduction.
In this work, we overcome the above limitations and realize a fully secure functional encryption scheme without using indistinguishability obfuscation. Specifically the security of our scheme relies only on the polynomial hardness of simple assumptions on multilinear maps.
@inproceedings{TCC:GGHZ16,
author = {Sanjam Garg and Craig Gentry and Shai Halevi and Mark Zhandry}, booktitle = {TCC~2016A, Part~II}, editor = {Eyal Kushilevitz and Tal Malkin}, month = jan, pages = {480--511}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Functional Encryption Without Obfuscation}, volume = {9563}, year = {2016} }
How to Avoid Obfuscation Using Witness PRFs


We propose a new cryptographic primitive called witness pseudorandom functions
(witness PRFs). Witness PRFs are related to witness encryption, but appear strictly
stronger: we show that witness PRFs can be used for applications such as multiparty
key exchange without trusted setup, polynomially-many hardcore bits for any one-way
function, and several others that were previously only possible using obfuscation.
Current candidate obfuscators are far from practical and typically rely on unnatural
hardness assumptions about multilinear maps. We give a construction of witness PRFs
from multilinear maps that is simpler and much more efficient than current obfuscation
candidates, thus bringing several applications of obfuscation closer to practice. Our
construction relies on new but very natural hardness assumptions about the underlying
maps that appear to be resistant to a recent line of attacks.
@inproceedings{TCC:Zhandry16,
author = {Mark Zhandry}, booktitle = {TCC~2016A, Part~II}, editor = {Eyal Kushilevitz and Tal Malkin}, month = jan, pages = {421--448}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {How to Avoid Obfuscation Using Witness {PRFs}}, volume = {9563}, year = {2016} }
Cutting-Edge Cryptography Through the Lens of Secret Sharing


Secret sharing is a mechanism by which a trusted dealer holding a secret "splits" a
secret into many "shares" and distributes the shares to a collection of parties.
Associated with the sharing is a monotone access structure that specifies which
parties are "qualified" and which are not: any qualified subset of parties can
(efficiently) reconstruct the secret, but no unqualified subset can learn anything
about the secret. In the most general form of secret sharing, the access structure
can be any monotone NP language.
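The classic concrete instance of this definition is Shamir's scheme for the threshold access structure, where any t of n parties are qualified (a textbook sketch; the abstract's point is that access structures can be far more general, up to monotone NP):

```python
# Shamir secret sharing over a prime field: any t shares reconstruct the
# secret; fewer than t reveal nothing information-theoretically.
import secrets

P = 2**127 - 1  # a Mersenne prime modulus

def share(secret, t, n):
    # random polynomial of degree t-1 with constant term = secret
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over any t shares
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Here the qualified sets are exactly those of size at least t; the extensions below change who plays the dealer and what qualified sets learn.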
In this work, we consider two very natural extensions of secret sharing. In the first, which we call distributed secret sharing, there is no trusted dealer at all, and instead the role of the dealer is distributed amongst the parties themselves. Distributed secret sharing can be thought of as combining the features of multiparty non-interactive key exchange and standard secret sharing, and may be useful in settings where the secret is so sensitive that no one individual dealer can be trusted with the secret. Our second notion is called functional secret sharing, which incorporates some of the features of functional encryption into secret sharing by providing more fine-grained access to the secret. Qualified subsets of parties do not learn the secret, but instead learn some function applied to the secret, with each set of parties potentially learning a different function. Our main result is that both of the extensions above are equivalent to several recent cutting-edge primitives. In particular, general-purpose distributed secret sharing is equivalent to witness PRFs, and general-purpose functional secret sharing is equivalent to indistinguishability obfuscation. Thus, our work shows that it is possible to view some of the recent developments in cryptography through a secret sharing lens, yielding new insights about both these cutting-edge primitives and secret sharing.
@inproceedings{TCC:KomZha16,
author = {Ilan Komargodski and Mark Zhandry}, booktitle = {TCC~2016A, Part~II}, editor = {Eyal Kushilevitz and Tal Malkin}, month = jan, pages = {449--479}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Cutting-Edge Cryptography Through the Lens of Secret Sharing}, volume = {9563}, year = {2016} }
Quantum Oracle Classification: The Case of Group Structure


The Quantum Oracle Classification (QOC) problem is to classify a function, given only
quantum black-box access, into one of several classes without necessarily determining
the entire function. Generally, QOC captures a very wide range of problems in quantum
query complexity. However, relatively little is known about many of these problems.
In this work, we analyze a subclass of QOC problems where there is a group structure. That is, suppose the range of the unknown function A is a commutative group G, which induces a commutative group law over the entire function space. Then we consider the case where A is drawn uniformly at random from some subgroup A of the function space. Moreover, there is a homomorphism f on A, and the goal is to determine f(A). This class of problems is very general, and covers several interesting cases, such as oracle evaluation; polynomial interpolation, evaluation, and extrapolation; and parity. These problems are important in the study of message authentication codes in the quantum setting, and may have other applications. We exactly characterize the quantum query complexity of every instance of QOC with group structure in terms of a particular counting problem. That is, we provide an algorithm for this general class of problems whose success probability is determined by the solution to the counting problem, and prove its exact optimality. Unfortunately, solving this counting problem in general is a nontrivial task, and we resort to analyzing special cases. Our bounds unify some existing results, such as the existing oracle evaluation and parity bounds. In the case of polynomial interpolation and evaluation, our bounds give new results for secret sharing and information-theoretic message authentication codes in the quantum setting.
@article{ARXIV:Zhandry2015,
title = {Quantum Oracle Classification: The Case of Group Structure}, author = {Mark Zhandry}, journal = {ArXiv}, year = {2015}, volume = {abs/1510.08352} }
Cryptography in the Age of Quantum Computers (PhD Thesis)


It is well established that fullfledged quantum computers, when realized, will
completely break many of today's cryptosystems. This looming threat has led to the
proposal of so-called "post-quantum" systems, namely those that appear resistant to
quantum attacks. We argue, however, that the attacks considered in prior works model
only the near future, where the attacker may be equipped with a quantum computer, but
the end-users implementing the protocols are still running classical devices.
Eventually, quantum computers will reach maturity and everyone — even the end-users — will be running quantum computers. In this event, attackers can interact with the end-users over quantum channels, opening up a new set of attacks that have not been considered before. This thesis puts forth new security models and new security analyses showing how to ensure security against such quantum channel attacks. In particular, we rebuild many core cryptographic functionalities, including pseudorandom functions, encryption, digital signatures, and more, resulting in the first protocols that are safe to use in a ubiquitous quantum computing world. Along the way, we resolve several open problems in quantum query complexity, such as the Collision Problem for random functions, the Set Equality Problem, and the Oracle Interrogation Problem.
Semantically Secure Order-Revealing Encryption: Multi-Input Functional Encryption Without Obfuscation


Deciding "greaterthan" relations among data items just given their encryptions is at
the heart of search algorithms on encrypted data, most notably, non-interactive binary
search on encrypted data. Orderpreserving encryption provides one solution, but
provably provides only limited security guarantees. Twoinput functional encryption is
another approach, but requires the full power of obfuscation machinery and is
currently not implementable.
We construct the first implementable encryption system supporting greater-than comparisons on encrypted data that provides the "best-possible" semantic security. In our scheme there is a public algorithm that given two ciphertexts as input, reveals the order of the corresponding plaintexts and nothing else. Our constructions are inspired by obfuscation techniques, but do not use obfuscation. For example, to compare two 16-bit encrypted values (e.g., salaries or age) we only need a 9-way multilinear map. More generally, comparing k-bit values requires only a (k/2+1)-way multilinear map. The required degree of multilinearity can be further reduced, but at the cost of increasing ciphertext size. Beyond comparisons, our results give an implementable secret-key multi-input functional encryption scheme for functionalities that can be expressed as (generalized) branching programs of polynomial length and width. Comparisons are a special case of this class, where for k-bit inputs the branching program is of length k+1 and width 4.
@inproceedings{EC:BLRSZZ15,
author = {Dan Boneh and Kevin Lewi and Mariana Raykova and Amit Sahai and Mark Zhandry and Joe Zimmerman}, booktitle = {EUROCRYPT~2015, Part~II}, editor = {Elisabeth Oswald and Marc Fischlin}, month = apr, pages = {563--594}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Semantically Secure Order-Revealing Encryption: Multi-input Functional Encryption Without Obfuscation}, volume = {9057}, year = {2015} }  
Adaptively Secure Broadcast Encryption with Small System Parameters


We build the first public-key broadcast encryption systems that simultaneously achieve
adaptive security against an arbitrary number of colluders, have small system
parameters, and have security proofs that do not rely on knowledge assumptions or
complexity leveraging. Our schemes are built from either composite-order multilinear
maps or obfuscation and enjoy a ciphertext overhead, private key size, and public key
size that are all polylogarithmic in the total number of users. Previous broadcast
schemes with similar parameters are either proven secure in a weaker static model, or
rely on non-falsifiable knowledge assumptions.
@misc{EPRINT:Zhandry14b,
author = {Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2014/757}, note = {\url{https://eprint.iacr.org/2014/757}}, title = {Adaptively Secure Broadcast Encryption with Small System Parameters}, year = {2014} }  
Low Overhead Broadcast Encryption from Multilinear Maps


We use multilinear maps to provide a solution to the long-standing problem of
public-key broadcast encryption where all parameters in the system are small. In our
constructions, ciphertext overhead, private key size, and public key size are all
polylogarithmic in the total number of users. The systems are fully
collusion-resistant against any number of colluders. All our systems are based on an
O(log N)-way multilinear map to support a broadcast system for N users. We present
three constructions based on different types of multilinear maps and providing
different security guarantees. Our systems naturally give identity-based broadcast
systems with short parameters.
@inproceedings{C:BonWatZha14,
author = {Dan Boneh and Brent Waters and Mark Zhandry}, booktitle = {CRYPTO~2014, Part~I}, editor = {Juan A. Garay and Rosario Gennaro}, month = aug, pages = {206--223}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Low Overhead Broadcast Encryption from Multilinear Maps}, volume = {8616}, year = {2014} }  
Multiparty Key Exchange, Efficient Traitor Tracing, and More from Indistinguishability Obfuscation


In this work, we show how to use indistinguishability obfuscation (iO) to build
multiparty key exchange, efficient broadcast encryption, and efficient traitor
tracing. Our schemes enjoy several interesting properties that have not been
achievable before:
• Our multiparty non-interactive key exchange protocol does not require a trusted setup. Moreover, the size of the published value from each user is independent of the total number of users.
• Our broadcast encryption schemes support distributed setup, where users choose their own secret keys rather than be given secret keys by a trusted entity. The broadcast ciphertext size is independent of the number of users.
• Our traitor tracing system is fully collusion resistant with short ciphertexts, secret keys, and public key. Ciphertext size is logarithmic in the number of users and secret key size is independent of the number of users. Our public key size is polylogarithmic in the number of users. The recent functional encryption system of Garg, Gentry, Halevi, Raykova, Sahai, and Waters also leads to a traitor tracing scheme with similar ciphertext and secret key size, but the construction in this paper is simpler and more direct. These constructions resolve an open problem relating to differential privacy.
• Generalizing our traitor tracing system gives a private broadcast encryption scheme (where broadcast ciphertexts reveal minimal information about the recipient set) with optimal ciphertext size.
Several of our proofs of security introduce new tools for proving security using indistinguishability obfuscation.
@inproceedings{C:BonZha14,
author = {Dan Boneh and Mark Zhandry}, booktitle = {CRYPTO~2014, Part~I}, editor = {Juan A. Garay and Rosario Gennaro}, month = aug, pages = {480--499}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Multiparty Key Exchange, Efficient Traitor Tracing, and More from Indistinguishability Obfuscation}, volume = {8616}, year = {2014} }  
Fully Secure Attribute Based Encryption from Multilinear Maps


We construct the first fully secure attribute-based encryption (ABE) scheme that can
handle access control policies expressible as polynomial-size circuits. Previous ABE
schemes for general circuits were proved secure only in an unrealistic selective
security model, where the adversary is forced to specify its target before seeing the
public parameters, and full security could be obtained only by complexity leveraging,
where the reduction succeeds only if it correctly guesses the adversary's target
string x*, incurring a 2^{|x*|} loss factor in the tightness of the reduction.
At a very high level, our basic ABE scheme is reminiscent of Yao's garbled circuits, with 4 gadgets per gate of the circuit, but where the decrypter in our scheme puts together the appropriate subset of gate gadgets like puzzle pieces by using a cryptographic multilinear map to multiply the pieces together. We use a novel twist of Waters' dual encryption methodology to prove the full security of our scheme. Most importantly, we show how to preserve the delicate information-theoretic argument at the heart of Waters' dual system by enfolding it in an information-theoretic argument similar to that used in Yao's garbled circuits.
@misc{EPRINT:GGHZ14,
author = {Sanjam Garg and Craig Gentry and Shai Halevi and Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2014/622}, note = {\url{https://eprint.iacr.org/2014/622}}, title = {Fully Secure Attribute Based Encryption from Multilinear Maps}, year = {2014} }  
A Note on the Quantum Collision and Set Equality Problems


The results showing a quantum query complexity of Θ(N^{1/3}) for the
collision problem do not apply to random functions. The issues are twofold. First,
the Ω(N^{1/3}) lower bound only applies when the range is no larger than
the domain, which precludes many of the cryptographically interesting applications.
Second, most of the results in the literature only apply to r-to-1 functions, which
are quite different from random functions.
Understanding the collision problem for random functions is of great importance to cryptography, and we seek to fill the gaps of knowledge for this problem. To that end, we prove that, as expected, a quantum query complexity of Θ(N^{1/3}) holds for all interesting domain and range sizes. Our proofs are simple, and combine existing techniques with several novel tricks to obtain the desired results. Using our techniques, we also give an optimal Ω(N^{1/3}) lower bound for the set equality problem. This new lower bound can be used to improve the relationship between classical randomized query complexity and quantum query complexity for so-called permutation-symmetric functions.
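For contrast with the quantum Θ(N^{1/3}) bound above, the classical baseline is the birthday-style search, which finds a collision in a random function after roughly N^{1/2} queries. The sketch below is an illustration of that classical baseline only, not code from the paper:

```python
# Illustration (not from the paper): a classical birthday-style search
# finds a collision in a random function f: [N] -> [N] after about
# N^{1/2} queries, versus the quantum query complexity of Θ(N^{1/3}).

import random

def find_collision(f, domain_size):
    """Query f at random points until two distinct inputs collide."""
    seen = {}  # maps observed output f(x) -> input x
    while True:
        x = random.randrange(domain_size)
        y = f(x)
        if y in seen and seen[y] != x:
            return seen[y], x
        seen[y] = x

N = 1 << 16
table = [random.randrange(N) for _ in range(N)]  # a random function on [N]
x1, x2 = find_collision(lambda x: table[x], N)
assert x1 != x2 and table[x1] == table[x2]
```

With N = 2^16 the loop terminates after a few hundred queries in expectation, matching the N^{1/2} birthday bound.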
@article{CIS:Zhandry15,
author = {Zhandry, Mark}, title = {A Note on the Quantum Collision and Set Equality Problems}, year = {2015}, issue_date = {May 2015}, publisher = {Rinton Press, Incorporated}, volume = {15}, number = {7--8}, journal = {Quantum Info. Comput.}, month = may, pages = {557--567}, numpages = {11} }  
Differing-Inputs Obfuscation and Applications


In this paper, we study the notion of differing-input obfuscation, introduced by
Barak et al. (CRYPTO 2001, JACM 2012). For any two circuits C0 and C1, a
differing-input obfuscator diO guarantees that the nonexistence of an adversary
that can find an input on which C0 and C1 differ implies that diO(C0) and diO(C1) are
computationally indistinguishable. We show many applications of this notion:
• We define the notion of a differing-input obfuscator for Turing machines and give a construction for the same (without converting it to a circuit) with input-specific running times. More specifically, for each input, our obfuscated Turing machine takes time proportional to the running time of the Turing machine on that specific input rather than the machine's worst-case running time.
• We give a functional encryption scheme that is fully secure even when the adversary can obtain an unbounded number of secret keys. Furthermore, our scheme allows for secret keys to be associated with Turing machines and thereby achieves input-specific running times and can be equipped with delegation properties. We stress that this is the first functional encryption scheme with security for an unbounded number of secret keys satisfying any of these properties.
• We construct a multiparty non-interactive key exchange protocol with no trusted setup where all parties post only logarithmic-size messages. It is the first such scheme with such short messages. We similarly obtain a broadcast encryption system where the ciphertext overhead and secret-key size is constant (i.e., independent of the number of users), and the public key is logarithmic in the number of users.
Both our constructions make inherent use of the power provided by differing-input obfuscation. It is not currently known how to construct systems with these properties from the weaker notion of indistinguishability obfuscation.
@misc{EPRINT:ABGSZ13,
author = {Prabhanjan Ananth and Dan Boneh and Sanjam Garg and Amit Sahai and Mark Zhandry}, howpublished = {Cryptology ePrint Archive, Report 2013/689}, note = {\url{https://eprint.iacr.org/2013/689}}, title = {Differing-Inputs Obfuscation and Applications}, year = {2013} }  
Secure Signatures and Chosen Ciphertext Security in a Quantum Computing World


We initiate the study of quantumsecure digital signatures and quantum
chosen ciphertext security. In the case of signatures, we enhance the standard chosen
message query model by allowing the adversary to issue quantum chosen message
queries: given a superposition of messages, the adversary receives a superposition of
signatures on those messages. Similarly, for encryption, we allow the adversary to
issue quantum chosen ciphertext queries: given a superposition of ciphertexts,
the adversary receives a superposition of their decryptions. These adversaries model a
natural post-quantum environment where end-users sign messages and decrypt ciphertexts
on a personal quantum computer.
We construct classical systems that remain secure when exposed to such quantum queries. For signatures, we construct two compilers that convert classically secure signatures into signatures secure in the quantum setting and apply these compilers to existing post-quantum signatures. We also show that standard constructions such as Lamport one-time signatures and Merkle signatures remain secure under quantum chosen message attacks, thus giving signatures whose quantum security is based on generic assumptions. For encryption, we define security under quantum chosen ciphertext attacks and present both public-key and symmetric-key constructions.
@inproceedings{C:BonZha13,
author = {Dan Boneh and Mark Zhandry}, booktitle = {CRYPTO~2013, Part~II}, editor = {Ran Canetti and Juan A. Garay}, month = aug, pages = {361--379}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Secure Signatures and Chosen Ciphertext Security in a Quantum Computing World}, volume = {8043}, year = {2013} }  
QuantumSecure Message Authentication Codes


We construct the first Message Authentication Codes (MACs) that are existentially
unforgeable against a quantum chosen message attack. These chosen message
attacks model a quantum adversary's ability to obtain the MAC on a superposition of
messages of its choice. We begin by showing that a quantum secure PRF is sufficient
for constructing a quantum secure MAC, a fact that is considerably harder to prove
than its classical analogue. Next, we show that a variant of Carter-Wegman MACs can
be proven to be quantum secure. Unlike in the classical setting, we present an attack
showing that a pairwise independent hash family is insufficient to construct a
quantum secure one-time MAC, but we prove that a four-wise independent family
is sufficient for one-time security.
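A standard way to realize a four-wise independent family (our illustration, not necessarily the paper's instantiation) is to hash with a uniformly random polynomial of degree at most 3 over a prime field:

```python
# Standard k-wise independent hash family: a uniformly random polynomial
# of degree <= k-1 over a prime field. With k = 4 this gives the
# four-wise independence the abstract shows suffices for quantum-secure
# one-time MACs. (Illustrative construction, not the paper's scheme.)

import random

P = (1 << 61) - 1  # prime modulus (2^61 - 1 is a Mersenne prime)

def sample_fourwise_hash():
    """Sample h(x) = c0*x^3 + c1*x^2 + c2*x + c3 mod P with random ci."""
    coeffs = [random.randrange(P) for _ in range(4)]
    def h(x: int) -> int:
        acc = 0
        for c in coeffs:        # Horner's rule evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return h

h = sample_fourwise_hash()
tag = h(123456)                 # one-time tag for message 123456
assert h(123456) == tag and 0 <= tag < P
```

Dropping to a degree-1 (pairwise independent) polynomial gives exactly the family the abstract's attack rules out for quantum one-time security.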
@inproceedings{EC:BonZha13,
author = {Dan Boneh and Mark Zhandry}, booktitle = {EUROCRYPT~2013}, editor = {Thomas Johansson and Phong Q. Nguyen}, month = may, pages = {592--608}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Quantum-Secure Message Authentication Codes}, volume = {7881}, year = {2013} }  
How to Construct Quantum Random Functions


In the presence of a quantum adversary, there are two possible definitions of security
for a pseudorandom function. The first, which we call standard-security, allows the
adversary to be quantum, but requires queries to the function to be classical. The
second, quantum-security, allows the adversary to query the function on a quantum
superposition of inputs, thereby giving the adversary a superposition of the values of
the function at many inputs at once. Existing proof techniques for proving the
security of pseudorandom functions fail when the adversary can make quantum queries.
We give the first quantum-security proofs for pseudorandom functions by showing that
some classical constructions of pseudorandom functions are quantum-secure. Namely, we
show that the standard constructions of pseudorandom functions from pseudorandom
generators or pseudorandom synthesizers are secure, even when the adversary can make
quantum queries. We also show that a direct construction from lattices is
quantum-secure. To prove security, we develop new tools to prove the
indistinguishability of distributions under quantum queries.
In light of these positive results, one might hope that all standard-secure pseudorandom functions are quantum-secure. To the contrary, we show a separation: under the assumption that standard-secure pseudorandom functions exist, there are pseudorandom functions secure against quantum adversaries making classical queries, but insecure once the adversary can make quantum queries.
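The "standard construction from pseudorandom generators" mentioned above is the classic GGM tree. The sketch below illustrates its shape; SHA-256 serves only as a stand-in for a genuine length-doubling PRG:

```python
# Illustration of the standard PRG-to-PRF (GGM) construction referenced
# above: walk a binary tree keyed by the input bits, applying a
# length-doubling PRG at each step. SHA-256 is only a stand-in for a
# real pseudorandom generator here.

import hashlib

def prg(seed: bytes) -> bytes:
    """Toy length-doubling PRG: 32 bytes in, 64 bytes out."""
    return hashlib.sha256(seed + b"L").digest() + hashlib.sha256(seed + b"R").digest()

def ggm_prf(key: bytes, input_bits: str) -> bytes:
    """Evaluate the GGM PRF on a bit-string input."""
    state = key
    for bit in input_bits:
        out = prg(state)
        state = out[32:] if bit == "1" else out[:32]  # take right/left half
    return state

key = b"\x00" * 32
assert ggm_prf(key, "0101") == ggm_prf(key, "0101")  # deterministic
assert ggm_prf(key, "0101") != ggm_prf(key, "1101")
```

The paper's result is that this construction remains secure even under quantum superposition queries, which the classical hybrid argument over individual queries cannot show.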
@inproceedings{FOCS:Zhandry12,
author = {Mark Zhandry}, booktitle = {53rd FOCS}, month = oct, pages = {679--687}, publisher = {{IEEE} Computer Society Press}, title = {How to Construct Quantum Random Functions}, year = {2012} }  
Secure Identity-Based Encryption in the Quantum Random Oracle Model


We give the first proof of security for an identity-based encryption scheme in the
quantum random oracle model. This is the first proof of security for any
scheme in this model that requires no additional assumptions. Our techniques are quite
general and we use them to obtain security proofs for two random oracle hierarchical
identity-based encryption schemes and a random oracle signature scheme, all of which
have previously resisted quantum security proofs, even using additional assumptions.
We also explain how to remove the extra assumptions from prior quantum random oracle
model proofs. We accomplish these results by developing new tools for arguing that
quantum algorithms cannot distinguish between two oracle distributions. Using a
particular class of oracle distributions, so-called semi-constant
distributions, we argue that the aforementioned cryptosystems are secure against
quantum adversaries.
@inproceedings{C:Zhandry12,
author = {Mark Zhandry}, booktitle = {CRYPTO~2012}, editor = {Reihaneh SafaviNaini and Ran Canetti}, month = aug, pages = {758--775}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Secure Identity-Based Encryption in the Quantum Random Oracle Model}, volume = {7417}, year = {2012} }  
Random Oracles in a Quantum World


The interest in post-quantum cryptography — classical systems that remain
secure in the presence of a quantum adversary — has generated elegant proposals
for new cryptosystems. Some of these systems are set in the random oracle model and
are proven secure relative to adversaries that have classical access to the random
oracle. We argue that to prove post-quantum security one needs to prove security in
the quantum-accessible random oracle model where the adversary can query the random
oracle with quantum state.
We begin by separating the classical and quantum-accessible random oracle models by presenting a scheme that is secure when the adversary is given classical access to the random oracle, but is insecure when the adversary can make quantum oracle queries. We then set out to develop generic conditions under which a classical random oracle proof implies security in the quantum-accessible random oracle model. We introduce the concept of a history-free reduction, which is a category of classical random oracle reductions that basically determine oracle answers independently of the history of previous queries, and we prove that such reductions imply security in the quantum model. We then show that certain post-quantum proposals, including ones based on lattices, can be proven secure using history-free reductions and are therefore post-quantum secure. We conclude with a rich set of open problems in this area.
@inproceedings{AC:BDFLSZ11,
author = {Dan Boneh and {\"O}zg{\"u}r Dagdelen and Marc Fischlin and Anja Lehmann and Christian Schaffner and Mark Zhandry}, booktitle = {ASIACRYPT~2011}, editor = {Dong Hoon Lee and Xiaoyun Wang}, month = dec, pages = {41--69}, publisher = {Springer, Heidelberg}, series = {{LNCS}}, title = {Random Oracles in a Quantum World}, volume = {7073}, year = {2011} }  
ACM Security TechPack


