## CryptoDB

### Manoj Prabhakaran

#### Publications

**Year · Venue · Title**

2021

CRYPTO

Secure Computation from One-Way Noisy Communication, or: Anti-Correlation via Anti-Concentration
Abstract

Can a sender encode a pair of messages (m_0,m_1) jointly, and send their encoding over (say) a binary erasure channel, so that the receiver can decode exactly one of the two messages and the sender does not know which one?
Garg et al. (Crypto 2015) showed that this is information-theoretically impossible.
We show how to circumvent this impossibility by assuming that the receiver is computationally bounded, settling for an inverse-polynomial security error (which is provably necessary), and relying on ideal obfuscation.
Our solution creates a ``computational anti-correlation'' between the events of receiving m_0 and receiving m_1 by exploiting the anti-concentration of the binomial distribution.
The ideal obfuscation primitive in our construction can either be directly realized using (stateless) tamper-proof hardware, yielding an unconditional result, or heuristically instantiated using existing indistinguishability obfuscation schemes. We put forward a new notion of obfuscation that suffices to securely instantiate our construction.
As a corollary, we get similar feasibility results for general secure computation of sender-receiver functionalities by leveraging the completeness of the above ``random oblivious transfer'' functionality.
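The anti-concentration fact exploited above is elementary and easy to check numerically: no single value of a Binomial(n, 1/2) random variable carries more than $O(1/\sqrt{n})$ of the probability mass. A minimal sketch (illustrating only this distributional fact, not the construction itself):

```python
from math import comb, sqrt

# Anti-concentration of Binomial(n, 1/2): the most likely single outcome
# (k = n//2) has probability ~ sqrt(2/(pi*n)), so no value concentrates
# more than O(1/sqrt(n)) of the total mass.
def max_point_mass(n):
    return max(comb(n, k) for k in range(n + 1)) / 2**n

for n in [16, 64, 256, 1024]:
    # max_point_mass(n) * sqrt(n) approaches sqrt(2/pi) ~ 0.798
    print(n, max_point_mass(n), max_point_mass(n) * sqrt(n))
```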

2021

TCC

On Communication Models and Best-Achievable Security in Two-Round MPC
Abstract

Recently, a sequence of works has made strong advances in two-round (i.e., round-optimal) secure multi-party computation (MPC). In the {\em honest-majority} setting -- the focus of this work -- Ananth et al. [CRYPTO'18, EC'19], Applebaum et al. [TCC'18, EC'19] and Garg et al. [TCC'18] have established the feasibility of general two-round MPC in standard communication models involving broadcast ($\BC$) and private point-to-point ($\PTP$) channels.
In this work, we set out to understand what features of the communication model are necessary for these results, and more broadly the design of two-round MPC. Focusing our study on the plain model -- the most natural model for honest-majority MPC -- we obtain the following results:
1. {\bf Dishonest majority from Honest majority:}
In the two round setting, honest-majority MPC and dishonest-majority MPC are surprisingly close, and often {\em equivalent}. This follows from our results that the former implies 2-message oblivious transfer, in many settings. (i) We show that without private point-to-point ($\PTP$) channels, i.e., when we use only broadcast ($\BC$) channels, {\em honest-majority MPC implies 2-message oblivious transfer}. (ii) Furthermore, this implication holds even when we use both $\PTP$ and $\BC$, provided that the MPC protocol is robust against ``fail-stop'' adversaries.
2. {\bf Best-Achievable Security:} While security with guaranteed output delivery (and even fairness) against malicious adversaries is impossible in two rounds, nothing is known with regards to the ``next best'' security notion, namely, security with identifiable abort (\IA). We show that \IA\ is also {\em impossible} to achieve with honest-majority even if we use both $\PTP$ and $\BC$ channels. However, if we replace $\PTP$ channels with a ``bare'' (i.e., untrusted) public-key infrastructure ($\PKI$), then even security with guaranteed output delivery (and hence $\IA$) is possible to achieve.
These results ``explain'' that the reliance on $\PTP$ channels (together with $\BC$) in the recent two-round protocols in the plain model was in fact {\em necessary}, and that these protocols {\em couldn't} have achieved a stronger security guarantee, namely, $\IA$. Overall, our results (put together with prior works) fully determine the best-achievable security for honest-majority MPC in different communication models in two rounds. As a consequence, they yield the following hierarchy of communication models:
$\BC < \PTP < \BC+\PTP < \BC+\PKI$.
This shows that $\BC$ channel is the {\em weakest} communication model, and that $\BC+\PKI$ model is strictly stronger than $\BC+\PTP$ model.

2020

PKC

Witness Maps and Applications
Abstract

We introduce Witness Maps, a cryptographic notion of a proof system. A Unique Witness Map (UWM) deterministically maps all witnesses for an $\mathbf{NP}$ statement to a single representative witness, resulting in a computationally sound, deterministic-prover, non-interactive, witness-independent proof system. A relaxation of UWM, called Compact Witness Map (CWM), maps all the witnesses to a small number of witnesses, resulting in a “lossy” deterministic-prover, non-interactive proof system. We also define a Dual Mode Witness Map (DMWM), which adds an “extractable” mode to a CWM.
Our main construction is a DMWM for all $\mathbf{NP}$ relations, assuming sub-exponentially secure indistinguishability obfuscation ($i\mathcal{O}$), along with standard cryptographic assumptions. The DMWM construction relies on a CWM and a new primitive called Cumulative All-Lossy-But-One Trapdoor Functions (C-ALBO-TDF), both of which are in turn instantiated based on $i\mathcal{O}$ and other primitives. Our instantiation of a CWM is in fact a UWM; in turn, we show that a UWM implies Witness Encryption. Along the way to constructing UWM and C-ALBO-TDF, we also construct, from standard assumptions, Puncturable Digital Signatures and a new primitive called Cumulative Lossy Trapdoor Functions (C-LTDF). The former improves upon a construction of Bellare et al. (Eurocrypt 2016), who relied on sub-exponentially secure $i\mathcal{O}$ and sub-exponentially secure OWF.
As an application of our constructions, we show how to use a DMWM to construct the first leakage- and tamper-resilient signatures with a deterministic signer, thereby solving a decade-old open problem posed by Katz and Vaikuntanathan (Asiacrypt 2009), by Boyle, Segev and Wichs (Eurocrypt 2011), as well as by Faonio and Venturi (Asiacrypt 2016). Our construction achieves the optimal leakage rate of $1 - o(1)$.

2020

TCC

Zero-Communication Reductions
Abstract

We introduce a new primitive in information-theoretic cryptography, namely zero-communication reductions (ZCR), with different levels of security. We relate ZCR to several other important primitives, and obtain new results on upper and lower bounds.
In particular, we obtain new upper bounds for PSM, CDS and OT complexity of functions, which are exponential in the information complexity of the functions. These upper bounds complement the results of Beimel et al. (2014) which broke the circuit-complexity barrier for ``high complexity'' functions; our results break the barrier of input size for ``low complexity'' functions.
We also show that lower bounds on secure ZCR can be used to establish lower bounds for OT-complexity. We recover the known (linear) lower bounds on OT-complexity by Beimel and Malkin (2004) via this new route. We also formulate the lower bound problem for secure ZCR in purely linear-algebraic terms, by defining the invertible rank of a matrix.
We present an Invertible Rank Conjecture, proving which will establish super-linear lower bounds for OT-complexity (and if accompanied by an explicit construction, will provide explicit functions with super-linear circuit lower bounds).

2020

ASIACRYPT

Cryptography from One-Way Communication: On Completeness of Finite Channels
Abstract

Garg et al. (Crypto 2015) initiated the study of cryptographic protocols over noisy channels in the non-interactive setting, namely when only one party speaks. A major question left open by this work is the completeness of {\em finite} channels, whose input and output alphabets do not grow with the desired level of security. In this work, we address this question by obtaining the following results:
Completeness of Bit-ROT with Inverse Polynomial Error: We show that bit-ROT (i.e., Randomized Oblivious Transfer channel, where each of the two messages is a single bit) can be used to realize general randomized functionalities with inverse polynomial error. Towards this, we provide a construction of string-ROT from bit-ROT with inverse polynomial error.
No Finite Channel is Complete with Negligible Error: To complement the above, we show that {\it no} finite channel can be used to realize string-ROT with negligible error, implying that the inverse polynomial error in the completeness of bit-ROT is inherent. This holds even with semi-honest parties and for computational security, and is contrasted with the (negligible-error) completeness of string-ROT shown by Garg et al.
Characterization of Finite Channels Enabling Zero-Knowledge Proofs: An important instance of secure computation is zero-knowledge proofs.
Noisy channels can potentially be used to realize truly non-interactive zero-knowledge proofs, without trusted common randomness, and with non-transferability and deniability features that cannot be realized in the plain model. Garg et al. obtain such zero-knowledge proofs from the binary erasure channel (BEC) and the binary symmetric channel (BSC). We complete the picture by showing that in fact any non-trivial channel suffices.

2019

EUROCRYPT

Uncovering Algebraic Structures in the MPC Landscape
Abstract

A fundamental problem in the theory of secure multi-party computation (MPC) is to characterize functions with more than 2 parties which admit MPC protocols with information-theoretic security against passive corruption. This question has seen little progress since the work of Chor and Ishai (1996), which demonstrated difficulties in resolving it. In this work, we make significant progress towards resolving this question in the important case of aggregating functionalities, in which $m$ parties $P_1,\dots,P_m$ hold inputs $x_1,\dots,x_m$ and an aggregating party $P_0$ must learn $f(x_1,\dots,x_m)$.
We uncover a rich class of algebraic structures that are closely related to secure computability, namely, “Commuting Permutations Systems” (CPS) and its variants. We present an extensive set of results relating these algebraic structures among themselves and to MPC, including new protocols, impossibility results and separations. Our results include a necessary algebraic condition and a slightly stronger sufficient algebraic condition for a function to admit information-theoretically secure MPC protocols.
We also introduce and study new models of minimally interactive MPC (called UNIMPC and UNIMPC$^*$), which not only help in understanding our positive and negative results better, but also open up new avenues for studying the cryptographic complexity landscape of multi-party functionalities. Our positive results include novel protocols in these models, which may be of independent practical interest.
Finally, we extend our results to a definition that requires UC security as well as semi-honest security (which we term strong security). In this model we are able to carry out the characterization of all computable functions, except for a gap in the case of aggregating functionalities.

2018

PKC

Towards Characterizing Securely Computable Two-Party Randomized Functions
Abstract

A basic question of cryptographic complexity is to combinatorially characterize all randomized functions which have information-theoretic semi-honest secure 2-party computation protocols. The corresponding question for deterministic functions was answered almost three decades back, by Kushilevitz [Kus89]. In this work, we make progress towards understanding securely computable randomized functions. We bring tools developed in the study of completeness to bear on this problem. In particular, our characterizations are obtained by considering only symmetric functions with a combinatorial property called simplicity [MPR12].
Our main result is a complete combinatorial characterization of randomized functions with ternary output kernels that have information-theoretic semi-honest secure 2-party computation protocols. In particular, we show that there exist simple randomized functions with ternary output that do not have secure computation protocols. (For deterministic functions, the smallest output alphabet size of such a function is 5, due to an example given by Beaver [Bea89].) Also, we give a complete combinatorial characterization of randomized functions that have 2-round information-theoretic semi-honest secure 2-party computation protocols.
We also give a counter-example to a natural conjecture for the full characterization, namely, that all securely computable simple functions have secure protocols with a unique transcript for each output value. This conjecture is in fact true for deterministic functions, and, as our results above show, for ternary functions and for functions with 2-round secure protocols.

2016

TCC

2015

TCC

2015

TCC

2012

CRYPTO

2010

EPRINT

A Zero-One Law for Deterministic 2-Party Secure Computation
Abstract

We use security in the Universal Composition framework as a means to study the ``cryptographic complexity'' of 2-party secure computation tasks (functionalities). We say that a functionality $F$ {\em reduces to} another functionality $G$ if there is a UC-secure protocol for $F$ using ideal access to $G$. This reduction is a natural and fine-grained way to compare the relative complexities of cryptographic tasks. There are two natural ``extremes'' of complexity under the reduction: the {\em trivial} functionalities, which can be reduced to any other functionality; and the {\em complete} functionalities, to which any other functionality can be reduced.
In this work we show that under a natural computational assumption (the existence of a protocol for oblivious transfer secure against semi-honest adversaries), there is a {\bf zero-one law} for the cryptographic complexity of 2-party deterministic functionalities. Namely, {\em every such functionality is either trivial or complete.} No other qualitative distinctions exist among functionalities, under this computational assumption.
While nearly all previous work classifying multi-party computation functionalities has been restricted to the case of secure function evaluation, our results are the first to consider completeness of arbitrary {\em reactive} functionalities, which receive input and give output repeatedly throughout several rounds of interaction. One important technical contribution in this work is to initiate the comprehensive study of the cryptographic properties of reactive functionalities. We model these functionalities as finite automata and develop an automata-theoretic methodology for classifying and studying their cryptographic properties. Consequently, we completely characterize the reactive behaviors that lead to cryptographic non-triviality. Another contribution of independent interest is to optimize the hardness assumption used by Canetti et al. (STOC 2002) in showing that the common random string functionality is complete (a result independently obtained by Damgård et al. (TCC 2010)).

2008

CRYPTO

2008

EPRINT

Homomorphic Encryption with CCA Security
Abstract

We address the problem of constructing public-key encryption schemes that meaningfully combine useful {\em computability features} with {\em non-malleability}. In particular, we investigate schemes in which anyone can change an encryption of an unknown message $m$ into an encryption of $T(m)$ (as a {\em feature}), for a specific set of allowed functions $T$, but the scheme is ``non-malleable'' with respect to all other operations. We formulate precise definitions that capture these intuitive requirements and also show relationships among our new definitions and other more standard ones (IND-CCA, gCCA, and RCCA). We further justify our definitions by showing their equivalence to a natural formulation of security in the Universally Composable framework. We also consider extending the definitions to features which combine {\em multiple} ciphertexts, and show that a natural definition is unattainable for a useful class of features. Finally, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations $T$, and which are secure under the standard Decisional Diffie-Hellman (DDH) assumption.
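To make the computability feature concrete: plain ElGamal, which is likewise DDH-based, already lets anyone transform an encryption of an unknown $m$ into one of $T(m) = c \cdot m$ without the secret key (though, unlike the schemes in this paper, plain ElGamal is malleable in every other way too). A hedged sketch with toy, insecure parameters chosen only for illustration:

```python
import random

# Toy DDH-style group: the order-q subgroup of Z_p^*.
# Tiny parameters for illustration only; real schemes use large groups.
p, q, g = 467, 233, 4   # 467 = 2*233 + 1; g = 4 generates the order-233 subgroup

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)                       # secret key x, public key h = g^x

def enc(h, m):
    r = random.randrange(1, q)
    return pow(g, r, p), (m * pow(h, r, p)) % p  # (g^r, m * h^r)

def dec(x, ct):
    c1, c2 = ct
    return (c2 * pow(c1, q - x, p)) % p          # c2 / c1^x, since c1 has order q

def transform(ct, c):
    # The "feature": map Enc(m) -> Enc(c*m) without knowing m or the key.
    c1, c2 = ct
    return c1, (c2 * c) % p

x, h = keygen()
assert dec(x, transform(enc(h, 10), 3)) == 30
```

The point of the paper's definitions is to permit exactly such an allowed transformation $T$ while forbidding all others, which plain ElGamal does not.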

2008

EPRINT

Attribute-Based Signatures: Achieving Attribute-Privacy and Collusion-Resistance
Abstract

We introduce a new and versatile cryptographic primitive called {\em Attribute-Based Signatures} (ABS), in which a signature attests not to the identity of the individual who endorsed a message, but instead to a (possibly complex) claim regarding the attributes she possesses. ABS offers:
* A strong unforgeability guarantee for the verifier,
that the signature was produced by a {\em single} party whose
attributes satisfy the claim being made; i.e., not by a
collusion of individuals who pooled their attributes together.
* A strong privacy guarantee for the signer, that the
signature reveals nothing about the identity or attributes of the
signer beyond what is explicitly revealed by the claim being made.
We formally define the security requirements of ABS as a cryptographic primitive, and then describe an efficient ABS construction based on groups with bilinear pairings. We prove that our construction is secure in the generic group model. Finally, we illustrate several applications of this new tool; in particular, ABS fills a critical security requirement in attribute-based messaging (ABM) systems.
A powerful feature of our ABS construction is that unlike many other attribute-based cryptographic primitives, it can be readily used
in a {\em multi-authority} setting, wherein users can make claims involving combinations of attributes issued by independent
and mutually distrusting authorities.

2008

EPRINT

Secure Arithmetic Computation with No Honest Majority
Abstract

We study the complexity of securely evaluating arithmetic circuits over finite rings. This question is motivated by natural secure computation tasks. Focusing mainly on the case of {\em two-party} protocols with security against {\em malicious} parties, our main goals are to: (1) only make black-box calls to the ring operations and standard cryptographic primitives, and (2) minimize the number of such black-box calls as well as the communication overhead.
We present several solutions which differ in their efficiency, generality, and underlying intractability assumptions. These include:
\begin{itemize}
\item An {\em unconditionally secure} protocol in the OT-hybrid model which makes a black-box use of an arbitrary ring $R$, but where the number of ring operations grows linearly with (an upper bound on) $\log|R|$.
\item Computationally secure protocols in the OT-hybrid model which make a black-box use of an underlying ring, and in which the number of ring operations does not grow with the ring size. The protocols rely on variants of previous intractability assumptions related to linear codes. In the most efficient instance of these protocols, applied to a suitable class of fields, the (amortized) communication cost is a constant number of field elements per multiplication gate and the computational cost is dominated by $O(\log k)$ field operations per gate, where $k$ is a security parameter. These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation ({\em SIAM J.\ Comput.}, 35(5), 2006).
\item A protocol for the rings $\mathbb{Z}_m=\mathbb{Z}/m\mathbb{Z}$ which only makes a black-box use of a homomorphic encryption scheme. When $m$ is prime, the (amortized) number of calls to the encryption scheme for each gate of the circuit is constant.
\end{itemize}
All of our protocols are in fact {\em UC-secure} in the OT-hybrid model and can be generalized to {\em multiparty} computation with an arbitrary number of malicious parties.
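As a minimal illustration of black-box use of a ring (a sketch of the standard building block, not of this paper's protocols): additive secret sharing works over an arbitrary $\mathbb{Z}_m$ using only ring addition, and addition gates are then evaluated with no interaction at all:

```python
import random

# Additive secret sharing over Z_m, for arbitrary m (not necessarily prime).
# Only black-box ring operations (addition/subtraction mod m) are used.
m = 1000

def share(x):
    a = random.randrange(m)
    return a, (x - a) % m            # two shares summing to x mod m

def reconstruct(shares):
    return sum(shares) % m

xa, xb = share(123)
ya, yb = share(456)
# Addition gate: each party adds its own shares locally, no interaction.
assert reconstruct([(xa + ya) % m, (xb + yb) % m]) == (123 + 456) % m
```

Multiplication gates are where the OT-hybrid machinery of the protocols above comes in; only addition is free in this way.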

2008

EPRINT

Complexity of Multiparty Computation Problems: The Case of 2-Party Symmetric Secure Function Evaluation
Abstract

In symmetric secure function evaluation (SSFE), Alice has an input
$x$, Bob has an input $y$, and both parties wish to securely
compute $f(x,y)$. We classify these functions $f$ according
to their ``cryptographic complexities,'' and show that the
landscape of complexity among these functions is surprisingly
rich.
We give combinatorial characterizations of the SSFE functions $f$ that have passive-secure protocols, and of those which have protocols secure in the standalone setting. With respect to universally composable security (for unbounded parties), we show that there is an infinite hierarchy of increasing complexity for SSFE functions. That is, we describe a family of SSFE functions $f_1, f_2, \ldots$ such that there exists a UC-secure protocol for $f_i$ in the $f_j$-hybrid world if and only if $i \le j$.
Our main technical tool for deriving complexity separations
is a powerful protocol simulation theorem which states that,
even in the strict setting of UC security, the canonical
protocol for $f$ is as secure as any other protocol for $f$,
as long as $f$ satisfies a certain combinatorial characterization.
We can then show intuitively clear impossibility results by
establishing the combinatorial properties of $f$ and then
describing attacks against the very simple canonical
protocols, which by extension are also feasible
attacks against {\em any} protocol for the same functionality.

2007

EPRINT

Rerandomizable RCCA Encryption
Abstract

We give the first perfectly rerandomizable, Replayable-CCA (RCCA) secure encryption scheme, positively answering an open problem of Canetti et al. [CRYPTO 2003]. Our encryption scheme, which we call the Double-strand Cramer-Shoup scheme, is a non-trivial extension of the popular Cramer-Shoup encryption. Its security is based on the standard DDH assumption. To justify our definitions, we define a powerful "Replayable Message Posting" functionality in the Universally Composable (UC) framework, and show that any encryption scheme that satisfies our definitions of rerandomizability and RCCA security is a UC-secure implementation of this functionality. Finally, we enhance the notion of rerandomizable RCCA security by adding a receiver-anonymity (or key-privacy) requirement, and show that it results in a correspondingly enhanced UC functionality. We leave open the problem of constructing a scheme that achieves this enhancement.
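For intuition, here is perfect rerandomization in plain ElGamal, a hedged sketch with toy parameters. Plain ElGamal is fully malleable and far from RCCA-secure (achieving both properties at once is this paper's contribution), but it shows what "perfectly rerandomizable" means: the rerandomized ciphertext is distributed identically to a fresh encryption of the same message.

```python
import random

# Tiny ElGamal over the order-q subgroup of Z_p^* (toy parameters only).
p, q, g = 467, 233, 4
x = random.randrange(1, q)
h = pow(g, x, p)                                  # public key

def enc(m, r):
    return pow(g, r, p), (m * pow(h, r, p)) % p   # (g^r, m * h^r)

def rerandomize(ct):
    c1, c2 = ct
    s = random.randrange(1, q)
    # Multiplying in a fresh (g^s, h^s) pair: the result is a valid
    # encryption of the same m under randomness r + s, identically
    # distributed to a fresh encryption.
    return (c1 * pow(g, s, p)) % p, (c2 * pow(h, s, p)) % p

def dec(ct):
    c1, c2 = ct
    return (c2 * pow(c1, q - x, p)) % p

ct = enc(42, random.randrange(1, q))
assert dec(rerandomize(ct)) == 42
```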

2007

EPRINT

Statistically Hiding Sets
Abstract

Zero-knowledge set, a primitive introduced by Micali, Rabin, and Kilian (FOCS 2003), enables a prover to commit a set to a verifier, without revealing even the size of the set. Later the prover can give zero-knowledge proofs to convince the verifier of membership/non-membership of elements in/not in the committed set.
We present a new primitive called {\em Statistically Hiding Sets} (SHS), similar to zero-knowledge sets, but providing an information theoretic hiding guarantee. This is comparable to relaxing zero-knowledge proofs to {\em witness independent proofs}. More precisely, we continue to use the simulation paradigm for our definition, but do not require the simulator (nor the distinguisher) to be efficient.
We present a new scheme for statistically hiding sets, which does not fit into the ``Merkle-tree/mercurial-commitment'' paradigm used for {\em all} zero-knowledge set constructions so far. This not only provides some efficiency gains compared to the best possible schemes in that paradigm, but also lets us provide {\em statistical} hiding, without the prover having to maintain growing amounts of state with each new proof; this is not known to be possible with the previous approach.
Our construction is based on an algebraic tool called {\em trapdoor DDH groups} (TDG), introduced recently by Dent and Galbraith (ANTS 2006). Ours is perhaps the first non-trivial application of this tool. However the specific hardness assumptions we associate with TDG are different, and of a strong nature --- strong RSA and a knowledge-of-exponent assumption. Our new knowledge-of-exponent assumption may be of independent interest.

2006

EPRINT

Concurrent Non-Malleable Zero Knowledge
Abstract

We provide the first construction of a
concurrent and non-malleable zero knowledge argument for every
language in NP. We stress that our construction is in the plain
model with no common random string, trusted parties, or
super-polynomial simulation. That is, we construct a zero knowledge
protocol $\Pi$ such that for every polynomial-time adversary that
can adaptively and concurrently schedule polynomially many
executions of $\Pi$, and corrupt some of the verifiers and some of
the provers in these sessions, there is a polynomial-time simulator
that can simulate a transcript of the entire execution, along with
the witnesses for all statements proven by a corrupt prover to an
honest verifier.
Our security model is the traditional model for concurrent zero
knowledge, where the statements to be proven by the honest provers
are fixed in advance and do not depend on the previous history (but
can be correlated with each other); corrupted provers, of course,
can choose the statements adaptively. We also prove that there exists
some functionality F (a combination of zero knowledge and
oblivious transfer) such that it is impossible to obtain a
concurrent non-malleable protocol for F in this model.
Previous impossibility results for composable protocols ruled out
existence of protocols for a wider class of functionalities
(including zero knowledge!) but only if these protocols were
required to remain secure when executed concurrently with
arbitrarily chosen different protocols (Lindell, FOCS 2003) or if
these protocols were required to remain secure when the honest
parties' inputs in each execution are chosen adaptively based on the
results of previous executions (Lindell, TCC 2004).
We obtain an $\tilde{O}(n)$-round protocol under the assumption that
one-to-one one-way functions exist. This can be improved to
$\tilde{O}(k\log n)$ rounds under the assumption that there exist
$k$-round statistically hiding commitment schemes. Our protocol is a
black-box zero knowledge protocol.

2005

EPRINT

Concurrent Composition of Secure Protocols in the Timing Model
Abstract

In the setting of secure multiparty computation, a set of mutually
distrustful parties wish to securely compute some joint function
of their inputs. In the stand-alone case, it has been shown that
{\em every} efficient function can be securely computed.
However, in the setting of concurrent composition, broad
impossibility results have been proven for the case of no honest
majority and no trusted setup phase. These results hold both for
the case of general composition (where a secure protocol is run
many times concurrently with arbitrary other protocols) and self
composition (where a single secure protocol is run many times
concurrently).
In this paper, we investigate the feasibility of obtaining
security in the concurrent setting, assuming that each party has a
local clock and that these clocks proceed at approximately the
same rate. We show that under this mild timing assumption, it is
possible to securely compute {\em any} multiparty functionality
under concurrent \emph{self} composition. We also show that it
is possible to securely compute {\em any} multiparty
functionality under concurrent {\em general} composition, as
long as the secure protocol is run only with protocols whose
messages are delayed by a specified amount of time. On the
negative side, we show that it is impossible to achieve security
under concurrent general composition with no restrictions
whatsoever on the network (like the aforementioned delays), even
in the timing model.

2005

EPRINT

Resource Fairness and Composability of Cryptographic Protocols
Abstract

We introduce the notion of {\em resource-fair} protocols.
Informally, this property states that if one party learns the
output of the protocol, then so can all other parties, as long as
they expend roughly the same amount of resources. As opposed to
similar previously proposed definitions, our definition follows
the standard simulation paradigm and enjoys strong composability
properties. In particular, our definition is similar to the
security definition in the universal composability (UC) framework,
but works in a model that allows any party to request additional
resources from the environment to deal with dishonest parties that
may prematurely abort.
In this model we specify the ideally fair functionality as
allowing parties to ``invest resources'' in return for outputs,
but in such an event offering all other parties a fair deal. (The
formulation of fair dealings is kept independent of any particular
functionality, by defining it using a ``wrapper.'') Thus, by
relaxing the notion of fairness, we avoid a well-known
impossibility result for fair multi-party computation with
corrupted majority; in particular, our definition admits
constructions that tolerate arbitrary number of corruptions. We
also show that, as in the UC framework, protocols in our framework
may be arbitrarily and concurrently composed.
Turning to constructions, we define a ``commit-prove-fair-open''
functionality and design an efficient resource-fair protocol that
securely realizes it, using a new variant of a cryptographic
primitive known as ``time-lines.'' With (the fairly wrapped
version of) this functionality we show that some of the existing
secure multi-party computation protocols can be easily transformed
into resource-fair protocols while preserving their security.

2004

EPRINT

Positive Results and Techniques for Obfuscation
Abstract

Informally, an obfuscator $\Obf$ is an efficient, probabilistic ``compiler''
that transforms a program $P$ into a new program $\Obf(P)$ with the same
functionality as $P$, but such that $\Obf(P)$ protects any secrets that may
be built into and used by $P$. Program obfuscation, if possible, would have
numerous important cryptographic applications, including: (1) ``Intellectual
property'' protection of secret algorithms and keys in software, (2) Solving
the long-standing open problem of homomorphic public-key encryption, (3)
Controlled delegation of authority and access, (4) Transforming Private-Key
Encryption into Public-Key Encryption, and (5) Access Control Systems.
Unfortunately, however, program obfuscators that work on arbitrary programs
cannot exist [Barak et al.]. No positive results for program obfuscation
were known prior to this work.
In this paper, we provide the first positive results in program obfuscation.
We focus on the goal of access control, and give several provable
obfuscations for complex access control functionalities, in the random
oracle model. Our results are obtained through non-trivial compositions of
obfuscations; we note that general composition of obfuscations is
impossible, and so developing techniques for composing obfuscations is an
important goal. Our work can also be seen as making initial progress toward
the goal of obfuscating finite automata or regular expressions, an important
general class of machines which are not ruled out by the impossibility
results of Barak et al. We also note that our work provides the first
formal proof techniques for obfuscation, which we expect to be useful in
future work in this area.
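The simplest access-control functionality in this vein is a point function, i.e., a password check. A hedged sketch of its random-oracle obfuscation, with the oracle heuristically instantiated by SHA-256 (all names here are illustrative, not from the paper):

```python
import hashlib, os

# Obfuscating a point function ("accept iff the input equals the secret
# password") in the random-oracle model. The obfuscated program stores
# only (salt, H(salt || password)), which reveals nothing useful about
# the password if H behaves like a random oracle.
def obfuscate(password: bytes):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password).digest()
    def check(attempt: bytes) -> bool:
        return hashlib.sha256(salt + attempt).digest() == digest
    return check

check = obfuscate(b"s3cret")
assert check(b"s3cret") and not check(b"guess")
```

The complex access-control obfuscations in the paper are built by composing such pieces, which is where the non-trivial composition techniques come in.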

2004

EPRINT

New Notions of Security: Achieving Universal Composability without Trusted Setup
Abstract

We propose a modification to the framework of Universally Composable (UC) security [Canetti'01]. Our new notion involves comparing the protocol executions with an ideal execution involving ideal functionalities (just as in UC security), but allows the environment and adversary access to some super-polynomial computational power. We argue the meaningfulness of the new notion, which in particular subsumes many of the traditional notions of security.
We generalize the Universal Composition theorem of [Canetti'01] to the new setting. Then under new computational assumptions, we realize secure multi-party computation (for static adversaries) without a common reference string or any other set-up assumptions, in the new framework. This is known to be impossible under the UC framework.

2002

EPRINT

Concurrent Zero Knowledge Proofs with Logarithmic Round-Complexity
Abstract

We consider the problem of constructing Concurrent Zero Knowledge
Proofs, in which the fascinating and useful ``zero
knowledge'' property is guaranteed even in situations where
multiple concurrent proof sessions are executed with many
colluding dishonest verifiers. Canetti et al.
show that black-box concurrent zero knowledge proofs for
non-trivial languages require $\tilde\Omega(\log k)$ rounds where
$k$ is the security parameter. Till now the best known upper
bound on the number of rounds for NP languages was $\omega(\log^2
k)$, due to Kilian, Petrank and Richardson. We establish an
upper bound of $\omega(\log k)$ on the number of rounds for NP
languages, thereby closing the gap between the upper and lower
bounds, up to a $\omega(\log\log k)$ factor.

#### Program Committees

- Crypto 2016
- Eurocrypt 2015
- Asiacrypt 2013
- Crypto 2013
- TCC 2011
- Asiacrypt 2009
- TCC 2009
- Asiacrypt 2008
- Crypto 2008
- TCC 2006

#### Coauthors

- Navneet Agarwal (1)
- Divesh Aggarwal (2)
- Shweta Agrawal (6)
- Shashank Agrawal (11)
- Sanat Anand (1)
- Prabhanjan Ananth (1)
- Saikrishna Badrinarayanan (1)
- Boaz Barak (1)
- Suvradip Chakraborty (1)
- Deepesh Data (3)
- Juan A. Garay (3)
- Daniel Genkin (1)
- Vipul Goyal (2)
- Carl A. Gunter (1)
- Divya Gupta (5)
- Yuval Ishai (10)
- Abhishek Jain (2)
- Yael Tauman Kalai (1)
- Dakshita Khurana (1)
- Daniel Kraschewski (3)
- Abishek Kumarasubramanian (1)
- Eyal Kushilevitz (5)
- Yehuda Lindell (1)
- Benjamin Lynn (1)
- Ben Lynn (1)
- Philip D. MacKenzie (3)
- Mohammad Mahmoody (1)
- Hemanta K. Maji (15)
- Varun Narayanan (3)
- Muhammad Naveed (1)
- Rafail Ostrovsky (2)
- Pichayoot Ouppaphan (1)
- Omkant Pandey (6)
- Vinod Prabhakaran (2)
- Vinod M. Prabhakaran (3)
- Aarushi Goel (1)
- Rajeev Raghunath (1)
- Alon Rosen (3)
- Mike Rosulek (12)
- Amit Sahai (20)
- Eran Tromer (1)
- David Wagner (1)
- Daniel Wichs (1)
- Jürg Wullschleger (1)
- Rui Xue (1)
- Ke Yang (3)
- Ching-Hua Yu (2)