Quantum Encodings and Applications to Locally Decodable Codes and
Communication Complexity
by
Iordanis Kerenidis
GRAD. (National Technical University of Athens, Greece) 2000
A dissertation submitted in partial satisfaction of the
requirements for the degree of
Doctor of Philosophy
in
Computer Science
in the
GRADUATE DIVISION
of the
UNIVERSITY of CALIFORNIA at BERKELEY
Committee in charge:
Professor Umesh V. Vazirani, Chair
Professor Luca Trevisan
Professor Elwyn Berlekamp
Fall 2004
The dissertation of Iordanis Kerenidis is approved:
University of California at Berkeley
Fall 2004
Quantum Encodings and Applications to Locally Decodable Codes and
Communication Complexity
Copyright Fall 2004
by
Iordanis Kerenidis
Abstract
Quantum Encodings and Applications to Locally Decodable Codes and
Communication Complexity
by
Iordanis Kerenidis
Doctor of Philosophy in Computer Science
University of California at Berkeley
Professor Umesh V. Vazirani, Chair
Quantum computation and information has become a central research area in
theoretical computer science. It studies how information is encoded in nature according
to the laws of quantum mechanics and what this means for its computational power. On
one hand, the ability of a quantum system to be in a superposition of states provides
a way to encode an exponential amount of information in a quantum state. However,
this information is accessible only indirectly via measurements. The goal of this thesis
is precisely to investigate the fundamental question of how information is encoded and
processed in quantum mechanical systems.
• We investigate the power of quantum encodings relative to classical encodings in the model of one-way
communication complexity and show that they can be exponentially more efficient,
answering a long-standing open question in the area of quantum communication complexity.
• We show how the theory developed for the study of quantum information can be
employed in order to answer questions about classical coding theory and complexity.
We resolve a main open question about the efficiency of Locally Decodable Codes
by reducing the problem to one about quantum codes and solving the latter using
tools from quantum information theory. This is the first example where quantum
techniques are essential in resolving a purely classical question.
• We investigate the power of quantum encodings in the area of cryptography. We
study certain cryptographic primitives, such as Private Information Retrieval, Symmetrically-Private Information Retrieval and Coin Flipping.
Professor Umesh V. Vazirani
Dissertation Committee Chair
Contents
1 Introduction
 1.1 The power of quantum encodings
  1.1.1 Communication Complexity
  1.1.2 Coding Theory and Complexity
  1.1.3 Quantum Cryptography
 1.2 Results and Organization of this thesis
2 Preliminaries
 2.1 Quantum states and evolution
 2.2 Quantum Queries
 2.3 Bipartite states and Impossibility of Bit Commitment
 2.4 Quantum Information Theory and Random Access Codes
3 Communication Complexity
 3.1 The Hidden Matching Problem
 3.2 The Hidden Matching Problem and complete problems for one-way communication
 3.3 The quantum upper bound
 3.4 The randomized lower bound
 3.5 An exponential separation for Simultaneous Messages
 3.6 The complexity of Boolean Hidden Matching
  3.6.1 Lower bound for 0-error protocols
  3.6.2 Lower bound for linear randomized protocols
 3.7 Remarks
4 Locally Decodable Codes
 4.1 Lower Bound for 2-Query Locally Decodable Codes
  4.1.1 From 2 Classical to 1 Quantum Query
  4.1.2 Lower Bound for 1-Query LQDCs
  4.1.3 Lower Bound for 2-Query LDCs
 4.2 Extensions
  4.2.1 Non-Binary Alphabets
  4.2.2 Bounds for More Than 2 Queries
  4.2.3 Locally Quantum-Decodable Codes with Few Queries
  4.2.4 Locally Decodable Erasure Codes
  4.2.5 Optimal 1-Query Quantum Algorithms for 2-Bit Functions
 4.3 Remarks
5 Private Information Retrieval
 5.1 Lower Bounds for Binary 2-Server PIR
 5.2 Extensions
  5.2.1 Lower Bounds for 2-Server PIR with Larger Answers
  5.2.2 Lower Bounds for General 2-Server PIR
  5.2.3 Upper Bounds for Quantum PIR
 5.3 Symmetrically Private Information Retrieval
  5.3.1 Honest-user QSPIRs from PIR schemes
  5.3.2 Honest-user 2-server QSPIR with Bell states
  5.3.3 Dishonest-user quantum SPIR schemes
 5.4 Remarks
6 Weak Coin Flipping
 6.1 A game with small bias
 6.2 A strong coin flipping protocol
 6.3 Remarks
7 Conclusions
Bibliography
Acknowledgements
First of all, I would like to thank my advisor, Umesh Vazirani, for all his guidance
and support throughout my stay in Berkeley. The most important thing he tried to teach me
was how to be elegant in research. I hope I learned how to pick interesting and challenging
problems, immerse myself in them and hopefully solve them. His advice always was: “You
should be having fun with it”. And that I did.
I would also like to thank Luca Trevisan for the valuable research discussions we
had during my time in Berkeley as well as our fun outings in the city. Luca is one of the
people who made me realize that academia is the best place to be. My thanks to Elwyn
Berlekamp for his helpful comments and for being in my dissertation committee.
The research in this thesis was done in collaboration with some incredible re-
searchers and friends. I am extremely grateful to Ziv Bar-Yossef, T.S. Jayram, Ashwin
Nayak, and Ronald de Wolf for everything I learned from them. Thanks also to my
co-authors Petros Drineas, Prabhakar Raghavan and Chris Harrelson.
My decision to come to Berkeley for graduate school was probably one of the
easiest in my life. Berkeley was as liberal a place as I could hope to find this side of the
Atlantic and the sea was just minutes away. Four years later, never having swum in the
ocean and with Arnold’s signature on my degree, I still think it’s the best decision anyone
can make. The Theory group in Berkeley makes it almost impossible for anyone not to have
an amazing time here both academically and personally. A big thanks to all my friends in
Berkeley, Hoeteck, Anand, Andrej, Sam, Kamalika and especially Jhala, Kevin and Kunal
for the endless fun we had together. A special thanks to Eleni for all the happy times we
shared. Also my friends from San Francisco and Greece for putting me in touch with the
real world and my sister and parents for their love and for never questioning my sanity.
Last, my dearest thanks to Ashwin, for he always makes me smile.
Chapter 1
Introduction
1.1 The power of quantum encodings
Quantum computation and information studies how information is encoded and
processed in quantum mechanical systems. Although it is a rather new research area, there
have been numerous surprising results, for example Shor’s factoring algorithm ([62]) and the
existence of unconditionally secure key distribution ([19]), that make quantum information
a very exciting and fundamental research field.
The carrier of quantum information is the quantum bit or qubit, which can be
not only in one of the two classical states 0 and 1, but also in any linear combination or
superposition of them. From a mathematical point of view, a qubit is a unit vector in a
2-dimensional Hilbert space, i.e. a complex vector space with an inner product. Let |0〉, |1〉 be an orthonormal
basis for this space, then the state of a qubit is
α0|0〉+ α1|1〉,
where α0, α1 are complex amplitudes, and |α0|2 + |α1|2 = 1. Likewise, a log n-qubit system
|φ〉 is a unit vector in an n-dimensional Hilbert space and its state can be expressed as a
linear combination of the basis states |1〉, |2〉, · · · , |n〉
|φ〉 = ∑_{i=1}^{n} α_i |i〉
In order to describe a quantum system of log n qubits one needs to specify n
amplitudes. This suggests, on one hand, that it is hard to simulate the evolution of quantum
systems since one needs to keep track of an exponential number of amplitudes. More
importantly, one can use these amplitudes to encode an exponential amount of information.
For example, we can encode an n-bit string x using only log n qubits by construct-
ing the state
(1/√n) ∑_{i=1}^{n} (−1)^{x_i} |i〉
This state is a uniform superposition of all basis states |i〉, where the sign in front of each
state |i〉 is “+” if xi = 0 and “−” if xi = 1. For example, we can encode the 4-bit string
x = 0011 using the state (1/2)(|1〉 + |2〉 − |3〉 − |4〉).
However, the access we have to this information is indirect. We cannot explicitly
examine a qubit to determine its quantum state, i.e. learn the values of the amplitudes
αi. We can only obtain more restricted information about a quantum state by performing
measurements on it. For example, measuring a qubit α0|0〉 + α1|1〉 will give outcome |0〉
with probability |α0|2 and outcome |1〉 with probability |α1|2. Moreover, after we perform
the measurement on the state, the qubit collapses to the state |0〉 or |1〉 and hence, it no
longer contains any of the initial information.
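These encoding and measurement rules are easy to simulate classically for small n. The sketch below is an illustration of ours, not part of the thesis: it builds the state (1/√n) ∑_i (−1)^{x_i} |i〉 as a NumPy vector and samples computational-basis measurement outcomes; all function names are our own choices.

```python
import numpy as np

def encode(x):
    """Encode an n-bit string x as the n-dimensional unit vector
    (1/sqrt(n)) * sum_i (-1)^{x_i} |i>."""
    n = len(x)
    return np.array([(-1) ** b for b in x]) / np.sqrt(n)

def measure(state, rng):
    """Computational-basis measurement: outcome i with probability |alpha_i|^2."""
    return rng.choice(len(state), p=np.abs(state) ** 2)

psi = encode([0, 0, 1, 1])     # the 4-bit example: amplitudes (1/2, 1/2, -1/2, -1/2)
rng = np.random.default_rng(0)
outcomes = [measure(psi, rng) for _ in range(1000)]
# Every basis state occurs with probability 1/4; the signs carrying the
# bits of x are invisible to this measurement, illustrating the indirect access.
```

Note how the measurement statistics depend only on |α_i|², so the phase information encoding x is lost once we measure in this basis.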
The ability of a quantum bit to be in a superposition of states enables us to encode
an exponential amount of information in a quantum state. This information, however,
can only be accessed indirectly via measurements. The goal of this thesis is precisely to
study this boundary of how information is encoded and processed in quantum systems.
This is a fundamental question, whose answer is essential for a better understanding of the
computational power of nature.
The first question we consider is whether quantum encodings can be more powerful
than classical ones. On one hand, Holevo’s bound shows that if we want to encode n
classical bits x1, · · · , xn into k qubits, such that we can recover every bit xi with probability
p, then we must use k ≥ (1−H(p))n qubits, where H() is the binary entropy. This means
that we cannot use quantum encodings to compress information and suggests that quantum
encodings may in fact be only as powerful as classical ones. However, many surprising
results, such as 2-to-1 dense coding [7] and exponential separations in the communication
complexity models [8, 21, 60], have refuted this view and showed scenarios where quantum
encodings are indeed more powerful. In this thesis, we resolve a long-standing open question
about the power of quantum encodings, first posed by Kremer [45] and also considered by
Raz [60], by showing that they are exponentially more efficient than probabilistic ones in
the model of one-way communication complexity.
The study of the strengths and limitations of quantum encodings is a very im-
portant research area and has given rise to powerful mathematical concepts, such as von
Neumann entropy and Holevo’s bound. In this thesis, we go one step further and show
how quantum information theory can be used as a tool for the study of classical information. Such interaction between seemingly different theories has proved extremely fruitful
in theoretical computer science. For example, the probabilistic method, though it concerns
probabilistic computation, has been used extensively for the study of deterministic com-
putations. In chapter 4, we manage to resolve an important question about the efficiency
of classical Locally Decodable Codes. These codes play a central role in complexity theory; however, the study of their efficiency has proven notoriously hard. Our
proof involves quantum information theory in an essential way. We reduce the classical Lo-
cally Decodable Codes to Locally quantum-Decodable Codes and use concepts like Holevo’s
bound and quantum Random Access Codes to study their efficiency. This is the first ex-
ample where tools from quantum information theory have been used in a fundamental way
for the study of classical coding theory.
Cryptography is another area where the use of quantum encodings has resulted in
fascinating results, such as an unconditionally secure quantum protocol for key distribution
between two parties. In chapters 5 and 6, we study the primitives of private information
retrieval and weak coin flipping. We demonstrate limitations of classical private informa-
tion retrieval schemes and show that by using quantum encodings symmetrically-private
information retrieval and weak coin flipping are possible. We provide more details for these
results in the following sections.
The way information is encoded and decoded in the quantum world is very intrigu-
ing and has led to remarkable results such as Shor’s factoring algorithm and the existence of
unconditionally secure key distribution. In this thesis, we further investigate the strengths
and limitations of quantum encodings by answering some main open questions in the areas
of communication complexity, coding theory and cryptography.
1.1.1 Communication Complexity
Communication complexity is a central model of computation with numerous ap-
plications. It was introduced by Yao [69] and has been used for proving lower bounds in
many areas including Boolean circuits, time-space tradeoffs, data structures, automata, for-
mulae size, etc. Examples of these applications can be found in the textbook of Kushilevitz
and Nisan [46]. In this model, two players, Alice and Bob, want to compute a function on
their inputs, while minimizing the communication between them. The two players can con-
sult a common random string, toss private coins and are also allowed to output the wrong
answer with small probability. For some problems, the optimal communication protocol is
essentially to have one player send his or her entire input to the other player, who can then
solve the problem. However, there are many problems, for which there exist much more
efficient protocols. For example, in the EQUALITY problem, Alice and Bob are given
an n-bit string each, x and y respectively, and their goal is to determine whether the two
strings x and y are equal or not. Alice and Bob can use their common random string to
pick a random n-bit string r and then Alice can communicate the inner product x ·r to Bob.
When x ≠ y then with probability 1/2 this inner product will be different from y · r. If
x = y then the inner product is always equal. By repeating this process a constant number
of times and hence using only constant communication, Alice and Bob can decide whether
x = y with very high probability.
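The randomized EQUALITY protocol just described can be written out in a few lines. The sketch below is an illustrative implementation of ours (the function names are not from the thesis); the common random string is modeled by a generator with a shared seed.

```python
import random

def equality_protocol(x, y, repetitions=25, seed=0):
    """Public-coin randomized protocol for EQUALITY: in each round the players
    draw a shared random r and Alice sends the single bit x.r (mod 2)."""
    rng = random.Random(seed)                 # models the common random string
    for _ in range(repetitions):
        r = [rng.randrange(2) for _ in range(len(x))]
        a = sum(xi & ri for xi, ri in zip(x, r)) % 2   # Alice's 1-bit message
        b = sum(yi & ri for yi, ri in zip(y, r)) % 2
        if a != b:
            return False    # a mismatch proves x != y
    return True             # equal, or unequal yet undetected (prob 2**-repetitions)

x = [1, 0, 1, 1, 0, 0, 1, 0]
print(equality_protocol(x, x))           # True: equal inputs always agree
print(equality_protocol(x, [1] * 8))     # False, except with probability 2**-25
```

Each round costs one bit of communication, so the total communication is a constant independent of n, exactly as claimed above.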
There are two other variants of communication complexity that are going to be of
interest to us. In one-way communication complexity, we allow Alice to send a single
message to Bob, who afterwards outputs the solution to the communication problem. Here
we are looking for a succinct encoding of Alice’s input, such that Bob can decode with high
probability the information that will enable him to solve the communication problem. In
the Simultaneous Messages (SM) Alice and Bob send a single message each to a Referree,
who afterwards outputs the solution to the problem.
The definition of quantum communication complexity is due to Yao [71]. Similarly
to the classical case, quantum communication complexity, apart from being of interest in
itself, has been used to prove bounds on quantum formulae size, automata, data structures,
etc. (e.g., [71, 43, 61]). In this setting, Alice and Bob hold qubits, some of which are
initialized to the input. In a communication round, each player can perform some arbitrary
unitary operation on his/her part of the qubits and send some of them to the other player.
At the end of the protocol they perform a measurement and decide on an outcome. The
output of the protocol is required to be correct with high probability. The quantum one-way
communication and quantum SM are defined similarly. We provide more precise definitions
in chapter 3.
It is a natural and important question to ask whether the ability to encode and
transmit information using quantum bits can significantly reduce the amount of communica-
tion necessary to solve certain problems. A series of papers have investigated the power and
limitations of quantum communication complexity. An exponential separation with respect
to randomized protocols was given by Ambainis et al. [8] in the so-called sampling model.
The quantum protocol uses only one round; the separation, however, does not hold when
the two players share public coins. Buhrman et al. [21] were able to solve the EQUALITY
problem in the SM model with a quantum protocol of complexity O(log n) rather than the
Θ(√n) bits necessary in any randomized SM protocol with private coins [57, 11]. As we
saw before, if we allow the players to share random coins, then EQUALITY can be solved
classically with O(1) communication.
Ran Raz [60] was the first to show an exponential gap between the quantum and
the public-coin randomized communication complexity models. He described a problem P
with an efficient quantum protocol of complexity O(log n). He then proved a lower bound of
Ω(n1/4) on the classical randomized communication complexity of P . The quantum protocol
given for P uses two rounds, hence the separation holds only for interactive protocols
that use two rounds or more. The main question that remained unresolved since then
was whether there exists an exponential separation in the case of one-way communication
complexity.
In this thesis we resolve this question in the affirmative by proving the first expo-
nential separation between quantum and classical one-way communication complexity. We
define and analyze the communication complexity of the Hidden Matching Problem. We
describe a quantum one-way protocol of communication O(log n) and also prove that any
randomized one-way protocol requires Ω(√n) bits of communication.
The Hidden Matching Problem:
Let n be a positive even integer. In the Hidden Matching Problem, denoted HMn,
Alice is given x ∈ {0, 1}^n and Bob is given M ∈ Mn (Mn denotes the family of all possible
perfect matchings on n nodes). Their goal is to output a tuple 〈i, j, b〉 such that the edge
(i, j) belongs to the matching M and b = xi ⊕ xj .
This problem is new and we believe that its definition plays the major role in
obtaining our result. The inspiration came from our work on Locally Decodable Codes
that we describe in chapter 4. Let us give the intuition for why this problem is hard for
communication complexity protocols. Suppose (to make the problem even easier) that Bob’s
matching M is restricted to be one of n fixed edge-disjoint matchings on [n]. Bob’s goal is
to find the value of xi ⊕ xj for some (i, j) ∈ M . However, since Alice has no information
about which matching Bob has, her message needs to contain information about the parity
of at least one pair from each matching. Hence, she needs to communicate parities of Ω(n)
different pairs to Bob. It can be shown that such a message must be of size Ω(√n). In
Section 3.4 we turn this intuition into a proof for the randomized one-way communication
complexity of HMn. We also show that our lower bound is tight by describing a randomized
one-way protocol with communication O(√n). In this protocol, Alice just sends O(√n)
random bits of her input. By the birthday paradox, with high probability, Bob can recover
the value of at least one of his matching pairs from Alice’s message.
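This birthday-paradox argument is easy to check empirically. The sketch below is our illustration (the constant c = 3, the trial count, and all names are assumptions, not values from the thesis): it estimates how often c√n randomly revealed positions contain both endpoints of some edge of a fixed perfect matching.

```python
import random

def covers_an_edge(n, revealed_count, matching, rng):
    """Do `revealed_count` random positions contain both endpoints of some edge?"""
    revealed = set(rng.sample(range(n), revealed_count))
    return any(i in revealed and j in revealed for (i, j) in matching)

n = 1024
matching = [(2 * k, 2 * k + 1) for k in range(n // 2)]   # a fixed perfect matching
rng = random.Random(1)
c = 3                                # illustrative constant in c*sqrt(n)
trials = 200
hits = sum(covers_an_edge(n, int(c * n ** 0.5), matching, rng) for _ in range(trials))
print(hits / trials)   # close to 1: Bob almost always learns some pair's parity
```

With m revealed positions, the expected number of fully revealed edges is about (n/2)(m/n)², which is Θ(1) once m = Θ(√n), matching the birthday-paradox intuition.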
Remarkably, this problem remains easy for quantum one-way communication. Al-
ice only needs to send a uniform superposition of her string x
|ψ〉 = (1/√n) ∑_{i=1}^{n} (−1)^{x_i} |i〉,
hence communicating only log n qubits. Bob performs a measurement on this superposition
that depends on his matching M; specifically, he measures the state |ψ〉 in the basis
B = { (1/√2)(|k〉 ± |ℓ〉) : (k, ℓ) ∈ M }.
“Measuring in a basis B = {b1, . . . , bn}” means that we apply the projective measurement given by the projections Ei = |bi〉〈bi|. Applying the measurement to a state |ψ〉
gives the resulting state |bi〉 with probability pi = |〈ψ|bi〉|². In other words, measuring a state
|ψ〉 in a basis B collapses the state onto one of the basis states |bi〉 with probability equal to the
square of the inner product between |ψ〉 and |bi〉.
It’s not hard to see that when Bob performs his measurement, the probability that
the outcome is a basis state (1/√2)(|k〉 + |ℓ〉) is

|〈ψ| (1/√2)(|k〉 + |ℓ〉)〉|² = (1/2n) ((−1)^{x_k} + (−1)^{x_ℓ})².
Hence, if the outcome of the measurement is a state (1/√2)(|k〉 + |ℓ〉) then Bob knows with
certainty that x_k ⊕ x_ℓ = 0 and outputs 〈k, ℓ, 0〉. Similarly, if the outcome is a state
(1/√2)(|k〉 − |ℓ〉) then Bob knows with certainty that x_k ⊕ x_ℓ = 1 and hence outputs 〈k, ℓ, 1〉. In Section
3.3 we describe the quantum protocol in more detail.
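For small n the whole quantum protocol can be simulated with explicit state vectors. The sketch below is our illustration (names and the sample inputs are our own): it prepares |ψ〉, computes the outcome probabilities in Bob's matching basis, samples an outcome, and checks that the decoded parity is always correct.

```python
import numpy as np

def state_for(x):
    """|psi> = (1/sqrt(n)) sum_i (-1)^{x_i} |i>, as a real vector."""
    return np.array([(-1) ** b for b in x]) / np.sqrt(len(x))

def bob_measure(psi, matching, rng):
    """Measure |psi> in the basis {(1/sqrt(2))(|k> +/- |l>) : (k, l) in M}
    and return the tuple <k, l, parity> read off from the sampled outcome."""
    outcomes, probs = [], []
    for (k, l) in matching:
        for sign, parity in ((1, 0), (-1, 1)):   # "+" <-> parity 0, "-" <-> parity 1
            b = np.zeros(len(psi))
            b[k], b[l] = 1, sign
            b /= np.sqrt(2)
            outcomes.append((k, l, parity))
            probs.append(float(np.dot(psi, b)) ** 2)
    idx = rng.choice(len(outcomes), p=np.array(probs) / np.sum(probs))
    return outcomes[idx]

x = [0, 1, 1, 0, 1, 0, 0, 0]
matching = [(0, 1), (2, 3), (4, 5), (6, 7)]      # one fixed perfect matching
rng = np.random.default_rng(0)
k, l, b = bob_measure(state_for(x), matching, rng)
assert b == x[k] ^ x[l]   # wrong-parity outcomes have probability exactly 0
```

Whichever edge the measurement collapses onto, the ± sign of the surviving basis vector reveals that edge's parity with certainty, while the identity of the edge itself is random.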
In the words of coding theory, we showed that there exists an encoding of n-bit
strings x ∈ {0, 1}^n into log n qubits, such that for any perfect matching M on [n] there
exists a decoding procedure that outputs an edge (i, j) ∈M and the bit xi ⊕ xj with high
probability. On the other hand any classical encoding that succeeds with high probability
must have length Ω(√n).
1.1.2 Coding Theory and Complexity
Error correcting codes allow one to encode an n-bit string x into an m-bit code-
word C(x) in such a way that x can still be recovered even if the codeword is corrupted
in a number of places. For example, codewords of length m = O(n) already suffice to
recover from errors in a constant fraction of the bit positions of the codeword, even in lin-
ear time [63]. Error-correcting codes and related combinatorial constructions have played a
very important role in complexity theory, for example in the construction of hard-core pred-
icates, hash functions, randomness extractors and graph expanders. Some error-correcting
codes have the additional property of local decodability. In other words, if one is only in-
terested in recovering one or a few of the bits of x, then locally decodable codes (LDCs)
allow us to extract small parts of encoded information from a corrupted codeword, while
looking at (“querying”) only a few positions of that word. These codes are very powerful
and they have found numerous applications in complexity theory and cryptography, such
as self-correcting computations [14, 47, 31, 29, 32], Probabilistically Checkable Proofs [9],
worst-case to average-case reductions [10, 66], private information retrieval [23], and extrac-
tors [50]. Informally, LDCs are described as follows:
A (q, δ, ε)-locally decodable code encodes n-bit strings x into m-bit codewords C(x), such that, for each i, the bit xi can be recovered with probability 1/2 + ε by making only q queries, even if the codeword is corrupted in δm of the bits.
In the query model, we assume that we can ask an oracle for a bit of the codeword by giving
as input an index k and getting back the bit C(x)k.
For example, the Hadamard code is a locally decodable code where two queries
suffice for predicting any bit with constant advantage, even with a constant fraction of
errors. The code has m = 2^n and C(x)_j = j · x mod 2 for all j ∈ {0, 1}^n. Recovery from a
corrupted codeword y is possible by picking a random j ∈ {0, 1}^n, querying y_j and y_{j⊕e_i},
and computing the XOR of those two bits as our guess for xi. If neither of the two queried
bits has been corrupted, then we output y_j ⊕ y_{j⊕e_i} = j · x ⊕ (j ⊕ e_i) · x = e_i · x = x_i, as
we should. If C(x) has been corrupted in at most δm positions, then a fraction of at least
1 − 2δ of all (j, j ⊕ e_i) pairs of indices is uncorrupted, so the recovery probability is at least
1 − 2δ. This is > 1/2 as long as δ < 1/4. The main drawback of the Hadamard code is its
exponential length.
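The Hadamard encoding and its 2-query local decoder are small enough to write out in full. The code below is an illustrative sketch of ours (names, the corruption level, and the majority-vote amplification are our choices, not from the thesis).

```python
import random

def hadamard_encode(x):
    """Hadamard code: C(x)_j = <j, x> mod 2 for every j in {0,1}^n, so m = 2^n."""
    xval = sum(bit << i for i, bit in enumerate(x))
    return [bin(j & xval).count("1") % 2 for j in range(2 ** len(x))]

def decode_bit(y, i, n, rng):
    """2-query local decoding of x_i: pick a random j, XOR positions j and j^e_i."""
    j = rng.randrange(2 ** n)
    return y[j] ^ y[j ^ (1 << i)]

x = [1, 0, 1, 1]                        # n = 4, codeword length m = 16
y = hadamard_encode(x)
rng = random.Random(0)
for pos in rng.sample(range(len(y)), 2):   # corrupt delta*m = 2 positions, delta = 1/8
    y[pos] ^= 1
# Each 2-query decode is correct with probability >= 1 - 2*delta = 3/4,
# so a majority vote over independent decodings recovers x_2 reliably.
votes = [decode_bit(y, 2, 4, rng) for _ in range(99)]
print(max(set(votes), key=votes.count))    # 1, i.e. x_2, with overwhelming probability
```

The m = 2^n blow-up is visible directly: encoding 4 bits already costs a 16-bit codeword, which is why the codeword length is the central complexity question for LDCs.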
Despite their prominent role in complexity theory, very little is known about the
complexity of Locally Decodable Codes. Clearly, we would like both the codeword length
m and the number of queries q to be small. The main complexity question about LDCs is
how large m needs to be, as a function of n, q, δ, and ε. For q = polylog(n), Babai et al. [9]
showed how to achieve almost linear size codes, for some fixed δ and ε. Beimel et al. [17]
recently improved the best known upper bounds for constant q to m = 2^{n^{O(log log q/(q log q))}},
with some more precise bounds for small q.
The study of lower bounds on m was initiated by Katz and Trevisan [38]. They
proved that for q = 1, LDCs do not exist and for q ≥ 2, they proved a bound of m =
Ω(n^{1+1/(q−1)}) if the q queries are made non-adaptively; this bound was generalized to the
adaptive case by Deshpande et al. [27]. This establishes superlinear (but at most quadratic)
lower bounds on the length of LDCs with a constant number of queries. There is still a
large gap between the best known upper and lower bounds. In particular, it is open whether
m = poly(n) is achievable with constant q. Goldreich et al. [35] examined the case of q = 2
and proved that for the special case of linear codes m ≥ 2^{Ω(n)}.
Proving a tight lower bound for general 2-query LDCs proved to be a surprisingly
hard problem to tackle.
In this thesis, we resolve this open question by proving that any 2-query LDC has
exponential size m ≥ 2^{Ω(n)}. This is the first superpolynomial lower bound on general LDCs
with more than one query.
Very recently, Ben-Sasson et al. [18] studied a relaxed notion of LDCs where the
decoder is allowed to output “don’t know” for a constant fraction of the indices. They
construct relaxed LDCs with a constant number of queries and size m = n^{1+ε}.
The very surprising aspect of our result is that it introduces one radically new
ingredient: quantum computing. We show that if two classical queries can recover xi
with high probability, then xi can also be recovered with high probability using only one
“quantum query”. In the quantum query model we assume that we have an oracle which
maps a state |k〉 to the state (−1)^{C(x)_k}|k〉 and we can ask queries in superposition.
For simplicity, let us assume that the classical decoder computes the bit xi by
outputting the XOR of the two codeword bits C(x)_k and C(x)_ℓ, as in the case of the Hadamard
code. We can compute the XOR of two bits with only one quantum query:
(1/√2)(|k〉 + |ℓ〉) ↦ (1/√2)((−1)^{C(x)_k}|k〉 + (−1)^{C(x)_ℓ}|ℓ〉) ≡ |ψ〉

By performing a measurement in the basis B = {b0, b1} = {(1/√2)(|k〉 + |ℓ〉), (1/√2)(|k〉 − |ℓ〉)}
we can compute C(x)_k ⊕ C(x)_ℓ with certainty. The probabilities of the outcomes of this
measurement are:
Pr[outcome b0] = |〈ψ|b0〉|² = (1/4)((−1)^{C(x)_k} + (−1)^{C(x)_ℓ})²

Pr[outcome b1] = |〈ψ|b1〉|² = (1/4)((−1)^{C(x)_k} − (−1)^{C(x)_ℓ})²
If the outcome is the state b0 then we know with certainty that C(x)_k ⊕ C(x)_ℓ = 0 and if
the outcome is b1 then C(x)_k ⊕ C(x)_ℓ = 1. In other words, a 2-query locally decodable code
is a 1-query locally quantum-decodable code.
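The claimed certainty is a finite computation: for each of the four values of (C(x)_k, C(x)_ℓ) we can evaluate both outcome probabilities directly. The sketch below (our illustration, working in the 2-dimensional span of |k〉 and |ℓ〉) does exactly that.

```python
import numpy as np
from itertools import product

b0 = np.array([1, 1]) / np.sqrt(2)     # (1/sqrt(2))(|k> + |l>)
b1 = np.array([1, -1]) / np.sqrt(2)    # (1/sqrt(2))(|k> - |l>)

for ck, cl in product((0, 1), repeat=2):
    # state after one phase-oracle query applied to (1/sqrt(2))(|k> + |l>)
    psi = np.array([(-1) ** ck, (-1) ** cl]) / np.sqrt(2)
    p0 = float(np.dot(psi, b0)) ** 2   # ((-1)^{C_k} + (-1)^{C_l})^2 / 4
    p1 = float(np.dot(psi, b1)) ** 2   # ((-1)^{C_k} - (-1)^{C_l})^2 / 4
    # The measurement is deterministic and reveals exactly the XOR.
    assert round(p0) == 1 - (ck ^ cl) and round(p1) == (ck ^ cl)
```

In every case one of the two probabilities is 1 and the other is 0, so a single quantum query replaces the two classical ones without any loss in success probability.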
We then prove an exponential lower bound for 1-query LQDCs by showing that a
1-query LQDC of length m induces a quantum random access code for x of length about
log m. Roughly speaking, we show that if C(x) is a 1-query LQDC, then from the quantum state

(1/√m) ∑_{j=1}^{m} (−1)^{C(x)_j} |j〉
a user can recover each bit xi of his choice with high probability. Such an encoding is called
a quantum random access code. Using Holevo’s bound, Nayak [54] proved a linear lower
bound on the length of such codes, and hence we have that log m = Ω(n). In Chapter 4, we
provide a detailed exposition of our results.
Our lower bound for classical LDCs is one of the very few examples where tools
from quantum computing enable one to prove new results in classical computer science. We
know only a few other examples of this, e.g. [59, 61, 44, 34], however in all these cases, the
underlying proof techniques easily yield a classical proof: one just replaces quantum notions
like von Neumann entropy and trace distance by their classical analogues to get a classical
proof for the classical case. In contrast, our proof seems to be more “inherently quantum”
since there is no classical analog of our 2-classical-queries-to-1-quantum-query reduction
(2-query LDCs exist but 1-query LDCs don’t). Furthermore, despite the attempts of many
researchers, there is still no natural “classical” proof of the same result.
We also extend our results to codes with larger alphabets and provide a better, but
still polynomial, bound on LDCs with more than two queries. Lastly, we describe quantum
LDCs which are more succinct than the classical codes with the same number of queries.
As we noted above, our result is a prime example of how the study of quantum
encodings provides us with new insight on classical encoding of information. It is also closely
related to our result in communication complexity; in fact, the definition of the problem
that provided the exponential separation in the one-way communication complexity model
is inspired by our work on LDCs.
1.1.3 Quantum Cryptography
The impact of quantum computing in the area of cryptography is irrefutable.
Shor’s remarkable algorithm for factoring showed that public-key cryptosystems, such as
RSA, are not secure against quantum computers. On the other hand, the possibility of
unconditionally secure key distribution shows that quantum encodings can be used to enhance cryptography, since their security relies on the laws of quantum mechanics and not
on computational assumptions.
A large body of research has been focused on which cryptographic primitives are
possible in the quantum world. For example, despite the fact that key distribution is pos-
sible, the primitives of bit commitment, coin flipping and oblivious transfer are impossible.
However, one can achieve weaker versions of these primitives using quantum encodings,
which are still impossible classically. It is fascinating to determine the exact power
and limitations of quantum cryptography.
Private Information Retrieval
One very interesting and well-studied cryptographic task is private information
retrieval. A Private Information Retrieval scheme allows a user to retrieve some information
from a database, without revealing what information she is interested in.
If there is only one copy of the database, then it is easy to see that the only possible
PIR scheme is when the user receives the entire database. However, when we allow multiple
copies of the database to reside in different servers, then much more efficient schemes are
possible. The best PIR schemes with k = 2 servers have communication O(n^{1/3}) ([23]) and
for k > 2 servers the communication is n^{O(log log k/(k log k))} ([17]).
Katz and Trevisan, and Goldreich et al. established a close connection between
locally decodable codes and private information retrieval (PIR) schemes. Roughly, the
queries in an LDC correspond to the servers in a PIR scheme. In fact, the best known
LDCs for constant q are derived from PIR schemes. No general lower bounds better than
Ω(log n) are known for PIRs with k ≥ 2 servers. For the case of 2 servers, the best known
lower bound was 4 log n, due to Mann [51].
The techniques we developed for the study of Locally Decodable Codes allow us to
reduce classical 2-server PIR schemes with 1-bit answers to quantum 1-server PIRs, which
in turn can be reduced to a random access code [54]. Thus we obtain an Ω(n) lower bound
on the communication complexity for all classical 2-server PIRs with 1-bit answers. We
extend our results to PIR schemes with larger answers, prove a 4.4 log n lower bound for
general 2-server PIRs, and obtain quantum PIR schemes that beat the best known classical
PIRs.
Subsequently to our work, Beigel, Fortnow, and Gasarch [16] found a classical
proof that a 2-server PIR with perfect recovery (ε = 1/2) and 1-bit answers needs query
length ≥ n − 2. However, their proof does not seem to extend to the case ε < 1/2, or to
larger answers.
In its standard form, PIR just protects the privacy of the user: the individual
servers learn nothing about i. But now suppose we also want to protect the privacy of the
data. That is, we don’t want the user to learn anything about x beyond the bit xi that she asks
for. This setting of Symmetrically-Private Information Retrieval (SPIR) was introduced
by Gertner et al. [33] and is closely related to oblivious transfer.
In this thesis, we study the existence and efficiency of Symmetrically PIR schemes
in the quantum world, where user and servers have quantum computers and can commu-
nicate qubits. Our main result is that honest-user quantum SPIR schemes exist even in
the case where the servers do not share any randomness. Such honest-user SPIRs without
shared randomness are impossible in the classical world. This gives another example of a
cryptographic task that can be performed with information-theoretic security in the quan-
tum world but that is impossible classically (key distribution [19] is the main example of
this).
Weak Coin flipping
Another central cryptographic primitive is coin flipping. In the classic example
from [20], Alice and Bob are getting a divorce, and would like to decide who gets the
car. They decide to toss a coin for that purpose, but don’t trust each other. In such a
scenario, instead of a coin tossing protocol, they could play any fair game to decide the
issue. Motivated by this, we consider the following weaker version of coin-flipping.
A weak coin flipping protocol with bias ε, is a two-party communication game in
the style of [71], in which Alice and Bob compute a value 0 or 1 or declare that the other
player is cheating. The outcome 0 is identified with Alice winning, and 1 with Bob winning.
When both players are honest, then each player has probability 1/2 of winning. When one
player is dishonest, then he or she cannot win with probability more than 1/2 + ε.
In a strong coin flipping protocol, the goal is instead to produce a random bit that a
cheating player cannot bias toward any particular value 0 or 1. The primitive of quantum strong coin
flipping has been studied extensively, e.g. in [49, 52, 2, 5, 64]. The best known protocol,
with bias 1/4 = 0.25, is due to Ambainis [5], also independently proposed by Spekkens
and Rudolph [64]. We present a protocol that demonstrates that weak coin flipping with
bias ≈ 0.239, strictly less than 1/4, is possible. Our protocol is obtained by modifying the
protocol of [5] so that the winning party is also checked for cheating. We also describe a related
strong coin flipping protocol with bias 1/4 that has the advantage over [5] that the analysis
is considerably simpler. A similar analysis for a restricted class of cheating strategies has
been given by [64].
Since the discovery of the abovementioned protocol, we have learnt of several
exciting developments. Kitaev [42] has shown that in any protocol for strong coin flipping,
the product of the probabilities with which each of the players can achieve outcome (say) 0,
has to be at least 1/2. Hence protocols with arbitrarily small bias are not possible; the
bias is always at least 1/√2 − 1/2 ≈ 0.207. (Previous lower bounds applied only to certain
kinds of protocol [5, 64, 55].) Furthermore, Ambainis [6] and Spekkens and Rudolph [65]
have constructed a family of protocols for weak coin flipping, where the product of the
winning probabilities is exactly 1/2. By making the winning probabilities equal, they get
protocols in which each player wins with probability at most 1/√2, and hence the bias
is 1/√2 − 1/2 ≈ 0.207. Subsequently, Mochon [53] constructed a weak coin flipping protocol
with bias 0.192, which is less than 1/√2 − 1/2, hence showing that weak coin flipping is
strictly weaker than strong coin flipping. Whether there exists a weak coin flipping protocol
with arbitrarily small bias is still an open question.
1.2 Results and Organization of this thesis
Preliminaries
We start by giving the essential definitions of the quantum formalism and some examples
that illustrate the definitions. We first define quantum states and their evolution by unitary
operations and measurements. Then, we describe bipartite quantum systems and the notion
of von Neumann entropy of a quantum system. In order to illustrate the definitions
we give the following examples: we show how to compute the XOR of two bits by a single
quantum query, we describe a proof of the impossibility of Bit Commitment and prove a
lower bound on the length of Random Access Codes. We also provide some essential defi-
nitions of classical Information Theory.
Quantum one-way communication complexity
In Chapter 3, we define the model of communication complexity and prove our first result,
namely an exponential separation between quantum and classical one-way communication
complexity. We describe a problem which can be solved by a quantum one-way communica-
tion protocol exponentially faster than any classical one. No asymptotic gap was previously
known. We prove a similar result in the model of Simultaneous Messages with public coins.
This is joint work with Ziv Bar-Yossef and T.S. Jayram [12].
Locally Decodable Codes
Locally Decodable Codes are central in the area of Probabilistically Checkable Proofs. In
Chapter 4, we resolve a main open question about their efficiency by first reducing the
problem to one about quantum codes and then employing tools from quantum information
theory to solve the latter. This is the first example of a proof of a classical result in an
“inherently quantum” way. This is joint work with Ronald de Wolf [40].
Private Information Retrieval
Private Information Retrieval (PIR) is an interesting cryptographic primitive that is closely
related to Locally Decodable Codes. In Chapter 5, we prove new lower bounds for PIR
and new upper bounds for QPIR by using the same technique as in the study of Locally
Decodable Codes. We also study the primitive of Symmetrically-Private Information Re-
trieval and show that honest-user quantum SPIR schemes exist even in the case where the
servers do not share any randomness. Such honest-user SPIRs without shared randomness
are impossible in the classical world. This is joint work with Ronald de Wolf [40, 41].
Weak Coin Flipping
Coin Flipping is another very important cryptographic primitive that has been studied ex-
tensively both in the classical and quantum models. In Chapter 6, we describe a weak coin
flipping protocol with small bias and we give an alternative protocol for strong coin flipping
that achieves the best known bias and has a much simpler analysis. This is joint work with
Ashwin Nayak [39].
Conclusions
We conclude with a discussion of our results and some interesting directions for further
research.
Chapter 2
Preliminaries
In this chapter we give the essential definitions of the quantum formalism and
give some examples that will illustrate the definitions. We first define quantum states
and their evolution by unitary operations and measurements. Then, we describe bipartite
quantum systems and the notion of von Neumann entropy of a quantum system. In order
to illustrate the definitions we give the following examples: we show how to compute the
XOR of two bits by a single quantum query, we describe a proof of the impossibility of Bit
Commitment and prove a lower bound on the length of Random Access Codes. We also
provide some essential definitions of classical Information Theory.
2.1 Quantum states and evolution
The fundamental carrier of classical information is the bit, which is always in one
of two possible states, either 0 or 1. The quantum analog is the quantum bit or qubit, which
can be not only in one of these two classical states but also in any linear combination or
superposition of them. As a physical object, a qubit can be realized by a photon, an ion
trap, a cavity etc. Here we are interested in a qubit as a mathematical object with certain
properties.
More formally, let H denote a 2-dimensional complex vector space, equipped with
the standard inner product. We pick an orthonormal basis for this space, label the two basis
vectors |0〉 and |1〉, and for simplicity identify them with the column vectors (1, 0)^T and
(0, 1)^T, respectively. A qubit is a unit length vector in this space, and so can be expressed
as a linear combination of the basis states:
α0|0〉 + α1|1〉 = (α0, α1)^T.
Here α0, α1 are complex amplitudes, and |α0|^2 + |α1|^2 = 1.
An m-qubit system is a unit vector in the m-fold tensor space H ⊗ · · · ⊗ H. The
2^m basis states of this space are the m-fold tensor products of the states |0〉 and |1〉. For
example, the basis states of a 2-qubit system are the four 4-dimensional unit vectors |0〉⊗|0〉,
|0〉 ⊗ |1〉, |1〉 ⊗ |0〉, and |1〉 ⊗ |1〉. We abbreviate, e.g., |1〉 ⊗ |0〉 to |1〉|0〉, or |1, 0〉, or |10〉,
or even |2〉 (since 2 is 10 in binary). With these basis states, an m-qubit state |φ〉 is a
2^m-dimensional complex unit vector
|φ〉 = ∑_{i∈{0,1}^m} αi|i〉.
We use 〈φ| = |φ〉∗ to denote the conjugate transpose of the vector |φ〉, and 〈φ|ψ〉 = 〈φ| · |ψ〉 for
the inner product between states |φ〉 and |ψ〉. These two states are orthogonal if 〈φ|ψ〉 = 0.
The norm of |φ〉 is ‖φ‖ = √〈φ|φ〉.
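These definitions are easy to experiment with numerically. The following sketch (our own illustration in numpy; the variable names are not from the text) represents basis states, a superposition, the inner product, and a tensor-product basis state as vectors:

```python
import numpy as np

# Computational basis states of one qubit, identified with (1, 0)^T and (0, 1)^T.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A qubit a0|0> + a1|1> must be a unit vector: |a0|^2 + |a1|^2 = 1.
a0, a1 = 1 / np.sqrt(2), 1j / np.sqrt(2)
phi = a0 * ket0 + a1 * ket1
assert np.isclose(np.linalg.norm(phi), 1.0)

# The inner product <psi|phi> conjugates the left argument.
inner = np.vdot(ket0, phi)
assert np.isclose(abs(inner) ** 2, 0.5)   # |<0|phi>|^2 = |a0|^2

# Basis states of two qubits are tensor products: |10> is |1> tensor |0>,
# i.e. the basis vector with index 2 (10 in binary).
ket10 = np.kron(ket1, ket0)
assert ket10[2] == 1
```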
A mixed state {pi, |φi〉} is a classical distribution over pure quantum states, where
the system is in state |φi〉 with probability pi. We can represent a mixed quantum state
by its density matrix, defined as ρ = ∑_i pi|φi〉〈φi|. Note that ρ is a positive
semidefinite operator with trace (sum of diagonal entries) equal to 1. The density matrix
of a pure state |φ〉 is ρ = |φ〉〈φ|.
A quantum state can evolve by a unitary operation or by a measurement. A
unitary transformation is a linear mapping that preserves the ℓ2 norm; in other words, it is
a rigid rotation of the space. If we apply a unitary U to a state |φ〉, it evolves to U|φ〉. A mixed state
ρ evolves to UρU †.
Apart from unitary operations, we can perform a measurement on a quantum state.
A special type of measurement is a projective measurement. Let |φ〉 be an m-qubit state and
B = {|bi〉} be an orthonormal basis of the m-qubit space. “Measuring in the B-basis”
means that we apply the projective measurement given by the projections Ei = |bi〉〈bi|.
Applying the measurement to the pure state |φ〉 gives a resulting state |bi〉 with probability
pi = |〈φ|bi〉|2. In other words, measuring a state |φ〉 in a basis B collapses the state into
one of basis states |bi〉 with probability equal to the square of the inner product between
|φ〉 and |bi〉.
The most general measurement allowed by quantum mechanics is specified by a
family of positive semidefinite operators Ei = Mi∗Mi, 1 ≤ i ≤ k, subject to the condition
that ∑_i Ei = I. Given a density matrix ρ, the probability of observing the i-th outcome
under this measurement is given by the trace pi = Tr(Eiρ) = Tr(MiρMi∗). These pi are
nonnegative because Ei and ρ are positive semidefinite. They also sum to 1, as they should:
∑_{i=1}^{k} pi = ∑_{i=1}^{k} Tr(Eiρ) = Tr((∑_{i=1}^{k} Ei)ρ) = Tr(Iρ) = 1.
If the measurement yields outcome i, then the resulting quantum state is MiρMi∗/Tr(MiρMi∗).
In particular, if ρ = |φ〉〈φ|, then pi = 〈φ|Ei|φ〉 = ‖Mi|φ〉‖^2, and the resulting state is
Mi|φ〉/‖Mi|φ〉‖.
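As a sanity check on these formulas, the following numpy sketch (ours, not from the text) instantiates a projective measurement as a special case of the general measurement, with Mi = |i〉〈i|, and verifies that the pi = Tr(Eiρ) form a probability distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 2-qubit mixed state: rho = A A* is positive semidefinite;
# normalize it to trace 1.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# The computational-basis projective measurement, M_i = |i><i|, as a
# special case of the general measurement with E_i = M_i* M_i.
Ms = [np.outer(e, e.conj()) for e in np.eye(4)]
Es = [M.conj().T @ M for M in Ms]
assert np.allclose(sum(Es), np.eye(4))            # completeness: sum_i E_i = I

# Outcome probabilities p_i = Tr(E_i rho) are nonnegative and sum to 1.
ps = np.array([np.trace(E @ rho).real for E in Es])
assert np.all(ps >= -1e-12) and np.isclose(ps.sum(), 1.0)

# Post-measurement state for outcome i: M_i rho M_i* / Tr(M_i rho M_i*).
i = int(np.argmax(ps))
post = Ms[i] @ rho @ Ms[i].conj().T / ps[i]
assert np.isclose(np.trace(post).real, 1.0)
```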
2.2 Quantum Queries
Let us assume that we have a black box or oracle for some n-bit string x. The
string x is unknown to us but we can make queries to the oracle by sending an index i
and getting back the bit xi. Our goal is to compute some function f(x) with the minimum
number of queries to the oracle. For example, if we want to compute the parity of x, then
we need to make n queries to the oracle.
In the quantum oracle setting we are allowed to make quantum queries, i.e. ask a
query in superposition. After defining the quantum query formally, we will describe how a
single quantum query can compute the XOR of two bits.
A query to an n-bit string x is commonly formalized as the following unitary
transformation, where j ∈ [n]:
|j〉 ↦ (−1)^xj |j〉.
A quantum computer may apply this to any superposition of queries. An equivalent formulation of a
quantum query is the unitary transformation, where b ∈ {0, 1} is called the target bit:
|j〉|b〉 ↦ |j〉|b ⊕ xj〉.
For example, in order to compute x0 ⊕ x1 we need to make two classical queries.
However, as we saw in Chapter 1, we can solve this problem with a single quantum query.
We query the oracle with the state (1/√2)(|0〉 + |1〉). The oracle maps this state to
|φ〉 = (1/√2)((−1)^x0 |0〉 + (−1)^x1 |1〉).
We then perform a measurement in the basis B = {|b0〉, |b1〉} = {(1/√2)(|0〉 + |1〉), (1/√2)(|0〉 − |1〉)}.
The probability that the outcome of the measurement is the state |b0〉 is
Pr[outcome b0] = |〈φ|b0〉|^2 = (1/4)((−1)^x0 + (−1)^x1)^2.
Similarly,
Pr[outcome b1] = |〈φ|b1〉|^2 = (1/4)((−1)^x0 − (−1)^x1)^2.
Hence, if the outcome of the measurement is |b0〉 then we know that x0 ⊕ x1 = 0,
and if the outcome is |b1〉, then x0 ⊕ x1 = 1.
As we saw, we use a generalization of this algorithm in our results in Communi-
cation Complexity and Locally Decodable Codes.
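The single-query XOR algorithm above can be simulated directly. The sketch below (our illustration, in numpy) prepares the post-query state, measures it in the basis {|b0〉, |b1〉}, and checks that the outcome always equals x0 ⊕ x1:

```python
import numpy as np

def xor_by_one_query(x0, x1):
    """Compute x0 XOR x1 with one quantum query, as in the text."""
    # State after querying (1/sqrt 2)(|0> + |1>): the oracle applies
    # the phases |j> -> (-1)^{x_j} |j>.
    phi = np.array([(-1) ** x0, (-1) ** x1]) / np.sqrt(2)
    # Measurement basis b0 = (|0>+|1>)/sqrt 2, b1 = (|0>-|1>)/sqrt 2.
    b0 = np.array([1, 1]) / np.sqrt(2)
    b1 = np.array([1, -1]) / np.sqrt(2)
    p0 = abs(np.vdot(b0, phi)) ** 2   # = 1 iff x0 = x1
    p1 = abs(np.vdot(b1, phi)) ** 2   # = 1 iff x0 != x1
    assert np.isclose(p0 + p1, 1.0)
    return 0 if np.isclose(p0, 1.0) else 1

# The outcome is deterministic and always equals the parity.
for x0 in (0, 1):
    for x1 in (0, 1):
        assert xor_by_one_query(x0, x1) == x0 ^ x1
```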
2.3 Bipartite states and Impossibility of Bit Commitment
A quantum system is called bipartite if it consists of two subsystems A and B.
We can describe the state of each of these subsystems separately with the reduced density
matrix. For example, if a quantum state has the form |φ〉 = ∑_i √pi |i〉|φi〉, then the state of
a system holding only the second part of |φ〉 is described by the (reduced) density matrix
∑_i pi|φi〉〈φi|.
In general, the reduced density matrix for the system A is defined by
ρA = TrB(ρAB),
where TrB is the partial trace over the system B. The partial trace is defined by
TrB(|a1〉〈a2| ⊗ |b1〉〈b2|) = 〈b1|b2〉|a1〉〈a2|,
where |a1〉 and |a2〉 are any two vectors in the state space of A and |b1〉 and |b2〉 are any
two vectors in the state space of B.
For example, let us consider the state |φ〉 = (1/√2)(|00〉 + |11〉). The density matrix of
this state is
ρ = |φ〉〈φ| = (1/2)(|00〉〈00| + |00〉〈11| + |11〉〈00| + |11〉〈11|).
Let us now trace out the second qubit of this state and consider the reduced density
matrix of the first qubit:
ρ1 = Tr2(ρ)
= (1/2)(Tr2(|00〉〈00|) + Tr2(|00〉〈11|) + Tr2(|11〉〈00|) + Tr2(|11〉〈11|))
= (1/2)(〈0|0〉|0〉〈0| + 〈0|1〉|0〉〈1| + 〈1|0〉|1〉〈0| + 〈1|1〉|1〉〈1|)
= (1/2)(|0〉〈0| + |1〉〈1|) = I/2.
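The partial trace is mechanical to compute: reshape the 4 × 4 density matrix into a 2 × 2 × 2 × 2 tensor and sum over the indices of the traced-out qubit. Here is a small numpy sketch of the example above (the helper name is ours):

```python
import numpy as np

def partial_trace_2nd(rho):
    """Trace out the second qubit of a 2-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)           # indices (a, b, c, d): rows (a,b), cols (c,d)
    return np.einsum('abcb->ac', r)       # sum over the second-qubit indices

# The state |phi> = (|00> + |11>)/sqrt 2 from the example.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

rho1 = partial_trace_2nd(rho)
assert np.allclose(rho1, np.eye(2) / 2)   # reduced state is I/2
```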
Another very useful fact about bipartite quantum systems is the Schmidt decomposition. Suppose |ψ〉 is a pure quantum state of a bipartite system. Then there always
exist bases {|iA〉} for A and {|iB〉} for B such that
|ψ〉 = ∑_i λi|iA〉|iB〉,
where the λi are nonnegative reals and ∑_i λi^2 = 1. If we look at the reduced density
matrices, we have ρA = ∑_i λi^2 |iA〉〈iA| and ρB = ∑_i λi^2 |iB〉〈iB|.
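Numerically, the Schmidt decomposition is just a singular value decomposition: reshaping |ψ〉 into a dA × dB matrix, its singular values are the Schmidt coefficients λi. A short numpy sketch (the example state is ours):

```python
import numpy as np

# Schmidt decomposition via the SVD: reshape |psi> in the tensor space
# into a dA x dB matrix; its singular values are the coefficients lambda_i.
dA = dB = 2
psi = np.array([1, 1, 1, -1]) / 2.0      # a sample 2-qubit pure state
C = psi.reshape(dA, dB)
lam = np.linalg.svd(C, compute_uv=False)

assert np.isclose(np.sum(lam ** 2), 1.0)  # sum_i lambda_i^2 = 1

# The reduced density matrix rho_A = C C* has eigenvalues lambda_i^2.
rhoA = C @ C.conj().T
assert np.allclose(np.sort(np.linalg.eigvalsh(rhoA)), np.sort(lam ** 2))
```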
Impossibility of Bit Commitment
We illustrate these definitions by giving a sketch of the proof of the impossibility
of bit commitment. This is a striking result, in contrast to the fact that key distribution
is possible. In a bit commitment scheme, Alice wants to commit to a bit b ∈ {0, 1} in
such a way that Bob cannot gain any information about which bit she has committed to.
Moreover, when Bob asks her to reveal the bit, she should not be able to change her mind
at all about which bit she had committed to. This is one of the most important primitives
in cryptography, and it is impossible to have an information-theoretically secure bit
commitment scheme in the classical world. We will see that it is also impossible even using
quantum encodings.
Let’s assume that Alice and Bob perform some quantum protocol which is sup-
posed to be a secure bit commitment scheme. Let |ψ^b〉AB be the combined state of Alice’s
and Bob’s qubits after the commitment step, when Alice has committed to the bit b. The
Schmidt decomposition is
|ψ^b〉AB = ∑_i λi^b |iA^b〉|iB^b〉.
If this protocol is secure, then Bob should not be able to gain any information about b; in
other words, his reduced density matrix should be the same whether Alice committed to
a 0 or a 1. Hence, ρB^0 = ρB^1, which implies that λi^0 = λi^1 and that the two bases {|iB^0〉}
and {|iB^1〉} are the same. We can rewrite the two commitment states as
|ψ^b〉AB = ∑_i λi|iA^b〉|iB〉.
Now, it’s easy to see that when Bob asks Alice to reveal her commitment, she is
able to change her commitment from a 0 to a 1 (and vice versa). She can transform the
state |ψ^0〉AB into |ψ^1〉AB because these two states are related by a unitary transformation
that acts only on her part of the state. Namely, she performs the unitary transformation
that maps the basis {|iA^0〉} to {|iA^1〉}. Hence, if Bob gains no information about the bit b that
Alice committed to, then Alice can change her mind at the reveal step. The same proof
also excludes any almost-perfect bit commitment scheme, i.e. if Bob can gain very little
information about b then Alice can cheat almost perfectly.
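The attack can be seen concretely on a toy example (ours; this is not any particular commitment protocol): two bipartite states that give Bob identical reduced density matrices are related by a unitary acting on Alice's qubit alone.

```python
import numpy as np

# Two "commitment" states with identical reduced state on Bob's side
# (his qubit is the second one); the choice of states is a toy example.
psi0 = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt 2
psi1 = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt 2

def bob_reduced(psi):
    """Bob's reduced density matrix: trace out Alice's (first) qubit."""
    r = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.einsum('abad->bd', r)

# Bob cannot distinguish the two commitments...
assert np.allclose(bob_reduced(psi0), bob_reduced(psi1))

# ...so a unitary on Alice's qubit alone maps one onto the other:
# here the bit flip X, applied to the first qubit.
X = np.array([[0, 1], [1, 0]])
assert np.allclose(np.kron(X, np.eye(2)) @ psi0, psi1)
```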
2.4 Quantum Information Theory and Random Access Codes
Classical Information Theory
We briefly review next some useful facts from information theory. We refer the
reader to the textbook of Cover and Thomas [24] for details and proofs.
The distribution of a random variable X is denoted by µX, where µX(x) := Pr[X = x].
The entropy of X (or, equivalently, of µX) is H(X) := ∑_{x∈X} µX(x) log(1/µX(x)),
where X is the domain of X. The entropy of a Bernoulli random variable with probability of
success p is called the binary entropy function of p and is denoted H2(p). The joint entropy
of X and Y is the entropy of the joint distribution µXY of X and Y. The conditional entropy
of X given an event A, denoted H(X|A), is the entropy of the conditional distribution of
µX given A. The conditional entropy of X given Y is H(X|Y) := ∑_{y∈Y} µY(y)H(X|Y = y),
where Y is the domain of Y. The mutual information between X and Y is I(X;Y) :=
H(X) − H(X|Y) = H(Y) − H(Y|X). The conditional mutual information between X and
Y given Z is I(X;Y|Z) = H(X|Z) − H(X|Y,Z) = H(Y|Z) − H(Y|X,Z).
Some basic properties of entropy and mutual information we are using in this thesis
are the following.
Theorem 1 Let X,Y,Z be random variables.
1. H(X) ≤ log |X |, where X is the domain of X. Equality holds iff X is uniform on X .
2. Conditioning reduces entropy: H(X|Y ) ≤ H(X). Equality holds iff X,Y are indepen-
dent.
3. Data processing inequality: For any function f ,
I(X; f(Y )) ≤ I(X;Y ).
4. Chain rule for mutual information:
I(X;Y,Z) = I(X;Y ) + I(X;Z|Y ).
5. I(X;Y ) = 0 iff X,Y are independent.
6. If X,Y are jointly independent of Z, then I(X;Y |Z) = I(X;Y ).
7. For any positive integers n and m ≤ n/2, ∑_{i=0}^{m} (n choose i) ≤ 2^{nH2(m/n)}.
Theorem 2 (Fano’s inequality) Let X be a binary random variable, and let Y be any
random variable on a domain Y. Let f : Y → {0, 1} be a prediction function, which tries to
predict the value of X based on an observation of Y. Let δ := Pr(f(Y) ≠ X) be the error
probability of the prediction function. Then, H2(δ) ≥ H(X|Y).
Theorem 3 Let C ⊆ {0, 1}∗ be a finite prefix-free code (i.e., no codeword in C is a prefix
of any other codeword in C). Let X be a random variable corresponding to a uniformly
chosen codeword in C. Then, H(X) ≤ E(|X|).
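The definitions and identities above can be checked numerically on a small joint distribution. In the following sketch (a toy example of ours; the function names are ours too), we verify I(X;Y) = H(X) − H(X|Y) and that conditioning reduces entropy:

```python
import numpy as np

def H(p):
    """Shannon entropy (base 2) of a distribution; 0 log 0 is taken as 0."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A toy joint distribution mu_XY on {0,1} x {0,1}.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

HX, HY, HXY = H(px), H(py), H(pxy)
I_XY = HX + HY - HXY          # mutual information I(X;Y)
HX_given_Y = HXY - HY         # conditional entropy H(X|Y)

assert np.isclose(I_XY, HX - HX_given_Y)   # I(X;Y) = H(X) - H(X|Y)
assert HX_given_Y <= HX + 1e-12            # conditioning reduces entropy
assert I_XY > 0                            # here X and Y are correlated
```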
Quantum Information Theory
In quantum systems, probability distributions are replaced by density
matrices. Von Neumann defined the entropy of a quantum state ρ as
S(ρ) = −Tr(ρ log ρ),
or equivalently
S(ρ) = −∑_i λi log λi,
where the λi are the eigenvalues of ρ.
Von Neumann entropy has many properties similar to Shannon entropy; for
example, it is non-negative and it is at most log d for a d-dimensional quantum system. We
can also define the conditional entropy of A given B as S(A|B) = S(AB) − S(B) and the
mutual information between A and B as S(A : B) = S(A) + S(B) − S(AB) = S(A) − S(A|B).
One important and highly non-trivial property that we are going to use is the strong
subadditivity of von Neumann entropy. For more details and proofs of the properties of von
Neumann entropy we refer the reader to [58, Chapters 11 and 12].
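Computing S(ρ) from the eigenvalues is straightforward; the sketch below (ours, using base-2 logarithms) checks that a pure state has zero entropy and that the maximally mixed state on d dimensions attains log d:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho) = -sum_i lambda_i log lambda_i (base 2)."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                # 0 log 0 = 0
    return float(-(lam * np.log2(lam)).sum())

# A pure state has entropy 0; the maximally mixed state on d dimensions
# attains the maximum log d.
ket0 = np.array([1.0, 0.0])
assert np.isclose(von_neumann_entropy(np.outer(ket0, ket0)), 0.0)
assert np.isclose(von_neumann_entropy(np.eye(4) / 4), 2.0)   # log 4 = 2
```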
Random Access Codes
We present a result of Nayak [54] about the efficiency of quantum Random Access
Codes which will be central to our results in the following chapters. A quantum random
access code is an encoding x 7→ ρx, such that any bit xi can be recovered with probability
p ≥ 1/2 + ε from ρx. We reprove Nayak’s [54] result, that shows that the length m of such
codes is at least (1−H(p))n.
We define an (n + m)-qubit state XM as follows:
(1/2^n) ∑_{x∈{0,1}^n} |x〉〈x| ⊗ ρx.
We use X to denote the first subsystem, Xi for its individual bits, and M for the second
subsystem. We are going to prove an upper and lower bound on the mutual information
S(X : M) which will give us the result. Since M is an m-qubit state, we know that its
entropy is S(M) ≤ m and so
S(X : M) = S(M) − S(M|X) ≤ S(M) ≤ m.
On the other hand, we know that S(X) = n and also using the strong subadditivity of von
Neumann entropy, we can easily see that S(X|M) ≤ ∑_{i=1}^{n} S(Xi|M). Hence,
S(X : M) = S(X) − S(X|M) ≥ n − ∑_{i=1}^{n} S(Xi|M).
Since we can predict Xi from M with success probability p, Fano’s inequality implies that
H(p) ≥ S(Xi|M), and finally
S(X : M) ≥ n − nH(p) = (1 − H(p))n.
Putting the two bounds together we get that m ≥ (1−H(p))n. Note that this lower bound
is exactly the same as in the case of classical random access codes.
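To get a feel for the bound m ≥ (1 − H(p))n, the following small sketch (ours; the function names are not from the text) evaluates it for a few parameters; note that as p → 1 the bound forces m to approach n, so essentially no compression is possible:

```python
import numpy as np

def h2(p):
    """The binary entropy function H(p)."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def rac_lower_bound(n, p):
    """Nayak's bound: any (quantum) random access code for n bits with
    recovery probability p needs at least (1 - H(p)) * n qubits."""
    return (1 - h2(p)) * n

# Even a modest recovery probability forces the code to be long:
assert rac_lower_bound(1000, 0.85) > 390
# As p -> 1 the bound tends to n, and at p = 1/2 it is vacuous:
assert np.isclose(rac_lower_bound(1000, 1.0), 1000.0)
assert np.isclose(rac_lower_bound(1000, 0.5), 0.0)
```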
Chapter 3
Communication Complexity
In this chapter we define the model of communication complexity and prove our
first result, namely an exponential separation between quantum and classical one-way com-
munication complexity. We describe a problem which can be solved by a quantum one-way
communication protocol exponentially faster than any classical one. No asymptotic gap was
previously known. We prove a similar result in the model of Simultaneous Messages with
public coins.
A communication complexity problem is defined by three sets X, Y, Z and a relation
R ⊆ X × Y × Z. The two players, Alice and Bob, are given inputs x ∈ X and y ∈ Y
respectively. Their goal is to output an answer z ∈ Z, such that (x, y, z) ∈ R. The com-
munication complexity of the problem is the number of bits Alice and Bob must exchange
in the best protocol that outputs such an answer z, for the worst case inputs x, y. The two
players have unlimited computational power. The model of communication complexity for
functions was introduced by Yao [69] and was generalized to relations by Karchmer and
Wigderson [37]. One important special case of the above model is one-way communication
complexity, where Alice is allowed to send a single message to Bob, after which Bob com-
putes the output. Simultaneous Messages (SM) is a variant in which Alice and Bob cannot
communicate directly with each other; instead, each of them sends a single message to a
third party, the “referee”, who announces the output based on the two messages.
Depending on the kind of allowed protocols we can define different measures of
communication complexity for a problem P. The classical deterministic communication
complexity of P is the number of bits exchanged during the execution of the optimal de-
terministic protocol for P. In a bounded-error randomized protocol with error probability
δ > 0, both players have access to public random coins and the output of the protocol is
required to be correct with probability at least 1 − δ. The cost of such a protocol is the
number of bits Alice and Bob exchange on the worst-case choice of inputs and of values for
the random coins. The randomized communication complexity of P (w.r.t. δ) is the cost of
the optimal randomized protocol for P. In a 0-error randomized protocol (a.k.a. Las Vegas
protocol) the players need to output the correct value with probability 1. The cost of such
a protocol is the expected number of bits Alice and Bob exchange on the worst-case choice
of inputs. These complexity measures can also be specialized by restricting the communi-
cation model to be SM or one-way communication. An interesting variant for randomized
protocols in the SM model is when the random coins are restricted to be private.¹
The definition of quantum communication complexity is due to Yao [71]. Similarly
to the classical case, quantum communication complexity, apart from being of interest in
¹The difference in complexity between public and private coins for the other models is only O(log n) [56].
itself, has been used to prove bounds on quantum formulae size, automata, data structures,
etc. (e.g., [71, 43, 61]). In this setting, Alice and Bob hold qubits, some of which are
initialized to the input. In a communication round, each player can perform some arbitrary
unitary operation on his/her part of the qubits and send some of them to the other player.
At the end of the protocol they perform a measurement and decide on an outcome. The
output of the protocol is required to be correct with probability 1− δ, for some δ > 0. The
quantum communication complexity of P is the number of qubits exchanged in the optimal
bounded-error quantum protocol for P. It can be shown that the quantum communication is
at least as powerful as bounded-error randomized communication with private coins,² even
when restricted to variants such as one-way communication and SM. It is a natural and
important question to ask whether quantum channels can significantly reduce the amount
of communication necessary to solve certain problems.
It is known that randomized one-way communication protocols can be much more
efficient than deterministic protocols. For example, the equality function can be solved
by an O(1)-bit randomized one-way protocol, though its deterministic one-way communication
complexity is Ω(n) (cf. [46]). However, the question of whether quantum one-way commu-
nication could be exponentially more efficient than the randomized one remained open. We
resolve this in the affirmative, by exhibiting a problem for which the quantum complexity
is exponentially smaller than the randomized one.
²As noted earlier, the distinction between public and private coins is significant only for the SM model.
3.1 The Hidden Matching Problem
Our main result is the definition and analysis of the communication complexity
of the Hidden Matching Problem. This provides the first exponential separation between
quantum and classical one-way communication complexity.
The Hidden Matching Problem:
Let n be a positive even integer. In the Hidden Matching Problem, denoted HMn,
Alice is given x ∈ {0, 1}^n and Bob is given M ∈ Mn (Mn denotes the family of all possible
perfect matchings on n nodes). Their goal is to output a tuple 〈i, j, b〉 such that the edge
(i, j) belongs to the matching M and b = xi ⊕ xj .
This problem is new and we believe that its definition plays the major role in
obtaining our result. The inspiration came from our work on Locally Decodable Codes
that we will describe in the next chapter. Let us give the intuition why this problem is
hard for communication complexity protocols. Suppose (to make the problem even easier)
that Bob’s matching M is restricted to be one of n fixed edge-disjoint matchings on the n nodes.
Bob’s goal is to find the value of xi ⊕ xj for some (i, j) ∈ M. However, since Alice has
no information about which matching Bob has, her message needs to contain information
about the parity of at least one pair from each matching. Hence, she needs to communicate
parities of Ω(n) different pairs to Bob. It can be shown that such a message must be of
size Ω(√n). In Section 3.4 we turn this intuition into a proof for the randomized one-
way communication complexity of HMn. We also show that our lower bound is tight by
describing a randomized one-way protocol with communication O(√n). In this protocol,
Alice just sends O(√n) random bits of her input. By the birthday paradox, with high
probability, Bob can recover the value of at least one of his matching pairs from Alice’s
message.
Remarkably, this problem remains easy for quantum one-way communication. Al-
ice only needs to send a uniform superposition of her string x, hence communicating only
log n qubits. Bob can perform a measurement on this superposition which depends on the
matching M and then output the parity of some pair in M . In Section 3.3 we describe the
quantum protocol in more detail.
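The protocol can be sketched concretely in numpy (our rendering: Alice's message is the state (1/√n)∑_i (−1)^xi |i〉, and Bob measures in the basis {(|i〉 ± |j〉)/√2 : (i, j) ∈ M}; the function names are ours). The measurement outcome identifies a pair (i, j) ∈ M together with the correct value of xi ⊕ xj:

```python
import numpy as np

def hm_quantum_protocol(x, matching, rng):
    """Simulate the one-message quantum protocol for HM_n."""
    n = len(x)
    # Alice's log(n)-qubit message.
    psi = np.array([(-1) ** xi for xi in x]) / np.sqrt(n)
    # Bob's orthonormal measurement basis, built from his matching:
    # for each edge (i, j), the vectors (|i> + |j>)/sqrt 2 (label b = 0)
    # and (|i> - |j>)/sqrt 2 (label b = 1).
    outcomes, probs = [], []
    for (i, j) in matching:
        for sign, b in ((1, 0), (-1, 1)):
            v = np.zeros(n)
            v[i], v[j] = 1, sign
            v /= np.sqrt(2)
            outcomes.append((i, j, b))
            probs.append(abs(np.vdot(v, psi)) ** 2)
    probs = np.array(probs)
    k = rng.choice(len(outcomes), p=probs / probs.sum())
    return outcomes[k]

rng = np.random.default_rng(1)
x = [1, 0, 0, 1, 1, 0]
M = [(0, 3), (1, 4), (2, 5)]              # a perfect matching on 6 nodes
i, j, b = hm_quantum_protocol(x, M, rng)
assert (i, j) in M and b == x[i] ^ x[j]   # the answer is always correct
```

The amplitude on (|i〉 + |j〉)/√2 is nonzero only when xi = xj, and on (|i〉 − |j〉)/√2 only when xi ≠ xj, so whichever outcome occurs reveals the parity with certainty.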
In Section 3.5 we show that HMn also provides the first exponential separation
between quantum SM and randomized SM with public coins. Previously such a bound was
known only in the private coins model.
Our main result exhibits a separation between quantum and classical one-way
communication complexity for a relation. Ideally, one would like to prove such a separation
for the most basic type of problems—total Boolean functions. The best known separation
between quantum and classical communication complexity (even for an arbitrary number of
rounds) for such functions is only quadratic [22]. In fact, it is very conceivable that for total
functions, the two models are polynomially related. Raz’s result [60] shows an exponential
gap for a partial Boolean function (i.e., a Boolean function that is defined only on a subset
of the domain X × Y) and for two-way communication protocols.
We consider a partial Boolean function induced by the Hidden Matching Problem,
defined below. In the definition we view each matching M ∈ Mn as an n/2 × n edge-vertex
incidence matrix. For two Boolean vectors v, w, we denote by v ⊕ w the vector obtained
by XORing v and w coordinate-wise. For a bit b ∈ {0, 1}, we denote by b̄ the vector all of
whose entries are b.
The Boolean Hidden Matching Problem:
Let n be a positive even integer. In the Boolean Hidden Matching Problem, denoted
BHMn, Alice is given x ∈ {0, 1}^n and Bob is given M ∈ Mn and w ∈ {0, 1}^{n/2},
which satisfy the following promise: either Mx ⊕ w = 1̄ (a Yes instance) or Mx ⊕ w = 0̄
(a No instance).
The same quantum protocol that solves HMn also solves BHMn with O(log n)
qubits. We were unable to extend the randomized lower bound for HMn to a similar
lower bound for BHMn. Yet, we believe that BHMn should also exhibit an exponential
gap in its quantum and classical one-way communication complexity. We give a strong
indication of that with two lower bounds. First, we prove an Ω(n) lower bound on the
0-error randomized one-way communication complexity of BHMn. We then show that a
natural class of randomized bounded-error protocols requires Ω(n^{1/3}) bits of communication
to compute BHMn. The protocols we refer to are linear; that is, Alice and Bob use the
public coins to choose a random matrix A, and Alice’s message on input x is simply Ax.
These protocols are natural for our problem, because what Bob needs to compute is a linear
transformation of Alice’s input. In particular, the O(√n) communication protocol that we
described earlier is trivially a linear protocol. Generalizing this lower bound to the case of
non-linear randomized protocols remains an open problem. These results are described in
Section 3.6.
3.2 The Hidden Matching Problem and complete problems
for one-way communication
Kremer [45] defined a complete problem for quantum one-way communication com-
plexity of Boolean functions. This problem was also considered by Raz [60].
The Problem P0(θ): Alice gets as input a unit vector x ∈ R^n. Bob gets as input two orthogonal vector spaces M0, M1 ⊆ R^n of dimension n/2 each. Bob's goal is to output 0 if x is of distance ≤ θ from M0 and 1 if x is of distance ≤ θ from M1 (and any answer otherwise).
This problem is complete for the class of Boolean functions whose quantum one-way communication complexity is polylog(n).
We generalize this problem for the case of non-Boolean functions f : X×Y → [n].
The Problem Q0(θ): Alice gets as input a unit vector x ∈ R^n. Let M0, M1 ⊆ R^n be two orthogonal vector spaces of dimension n/2 each. Bob gets as input an orthonormal basis B = {b1, ..., bn} of R^n with the property that the basis vectors bi can be partitioned into a basis for M0 and a basis for M1. However, this partition is unknown to Alice and Bob. Bob's goal is to output a value in {i | bi ∈ M0} if x is of distance ≤ θ from M0 and a value in {i | bi ∈ M1} if x is of distance ≤ θ from M1 (and any answer otherwise).
We can prove the following:
Theorem 4 For any 0 ≤ θ < 1/√2, the problem Q0(θ) is complete for the class of relations R ⊆ X × Y × Z whose quantum one-way communication complexity is polylog(n).
Proof. We first show that the problem can be solved efficiently by a quantum one-way protocol. Alice just encodes the unit vector x ∈ R^n in log n qubits and sends these to Bob. Bob measures in the basis B. If x is of distance ≤ θ from M0, the outcome will be in {i | bi ∈ M0} with probability ≥ 1 − θ². If x is of distance ≤ θ from M1, the outcome will be in {i | bi ∈ M1} with probability ≥ 1 − θ².
Next, we want to reduce any problem R ⊆ X × Y × Z with one-way communication d to the problem Q0(θ) with input size n = 2^{O(d)} and 0 < θ < 1/√2. In any protocol, Alice applies some unitary operation U, depending on her input x, to the initial state |0⟩, and Bob measures in some basis B of R^n. Let x′ be the unit vector U|0⟩. Since the communication is d, x′ lies in R^{2^{O(d)}} (see also page 9, first comment in [60]). Then, x′ and the basis B can be used as the inputs in Q0(θ). □
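The measurement step in this proof can be sanity-checked on a toy instance. The sketch below (plain Python; the 4-dimensional example and all names are illustrative, not from the text) places a unit vector at distance exactly θ from M0 and verifies that measuring in Bob's basis yields an index in {i | bi ∈ M0} with probability 1 − θ²:

```python
import math

# Toy instance of Q0(theta) in R^4 (illustrative, not from the text):
# M0 = span{e1, e2}, M1 = span{e3, e4}, and Bob's basis B is the standard basis.
theta = 0.3  # any 0 <= theta < 1/sqrt(2)

# A unit vector at distance exactly theta from M0: its component outside M0 has norm theta.
x = [math.sqrt(1 - theta ** 2), 0.0, theta, 0.0]

basis = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # b1..b4
probs = [sum(xi * bi for xi, bi in zip(x, b)) ** 2 for b in basis]

p_M0 = probs[0] + probs[1]  # probability that the measured index i satisfies b_i in M0
assert abs(sum(probs) - 1.0) < 1e-12
assert p_M0 >= 1 - theta ** 2 - 1e-12
print(p_M0)  # ≈ 0.91 = 1 - theta^2
```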
Hence, the lower bound we obtain for the problem HMn translates into a lower
bound for the complete problem Q0(θ). Also, the lower bounds we prove for the Boolean
Hidden Matching Problem translate into bounds for the complete Boolean problem P0(θ).
3.3 The quantum upper bound
We present a quantum protocol for the Hidden Matching Problem with communication complexity log n qubits. Let x = x1 ... xn be Alice's input and M ∈ Mn be Bob's input.
Quantum protocol for HMn
1. Alice sends the state |ψ⟩ = (1/√n) Σ_{i=1}^n (−1)^{x_i} |i⟩.
2. Bob performs a measurement on the state |ψ⟩ in the orthonormal basis B = {(1/√2)(|k⟩ ± |ℓ⟩) | (k, ℓ) ∈ M}.
The probability that the outcome of the measurement is a basis state (1/√2)(|k⟩ + |ℓ⟩) is
|⟨ψ| (1/√2)(|k⟩ + |ℓ⟩)⟩|² = (1/2n)((−1)^{x_k} + (−1)^{x_ℓ})².
This equals 2/n if x_k ⊕ x_ℓ = 0 and 0 otherwise. Similarly, for the states (1/√2)(|k⟩ − |ℓ⟩) we have that |⟨ψ| (1/√2)(|k⟩ − |ℓ⟩)⟩|² is 0 if x_k ⊕ x_ℓ = 0 and 2/n if x_k ⊕ x_ℓ = 1. Hence, if the outcome of the measurement is a state (1/√2)(|k⟩ + |ℓ⟩), then Bob knows with certainty that x_k ⊕ x_ℓ = 0 and outputs ⟨k, ℓ, 0⟩. If the outcome is a state (1/√2)(|k⟩ − |ℓ⟩), then Bob knows with certainty that x_k ⊕ x_ℓ = 1 and hence outputs ⟨k, ℓ, 1⟩. Note that the measurement depends only on Bob's input and that the algorithm is 0-error.
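The 0-error behavior of this protocol can be simulated classically for a small n; the instance below is illustrative, with amplitudes tracked as a plain real vector:

```python
import math

# Classical simulation of the HM_n protocol on a small illustrative instance (n = 4).
n = 4
x = [1, 0, 1, 1]
matching = [(0, 1), (2, 3)]  # Bob's perfect matching on {0, ..., n-1}

# Alice's state |psi> = (1/sqrt(n)) * sum_i (-1)^{x_i} |i>, stored as a real vector.
psi = [(-1) ** xi / math.sqrt(n) for xi in x]

for (k, l) in matching:
    for sign, b in ((+1, 0), (-1, 1)):  # '+' state decodes x_k XOR x_l = 0, '-' decodes 1
        amp = (psi[k] + sign * psi[l]) / math.sqrt(2)
        prob = amp ** 2
        if prob > 1e-12:                # any outcome that can occur is correctly labeled,
            assert b == x[k] ^ x[l]     # and it occurs with probability exactly 2/n
            assert abs(prob - 2 / n) < 1e-12
print("0-error on this instance")
```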
3.4 The randomized lower bound
Theorem 5 Any one-way randomized protocol for computing HMn with error probability
less than 1/16 requires Ω(√n) bits of communication.
Proof. Using Yao’s Lemma [70], in order to prove the lower bound, it suffices to construct
a “hard” distribution µ over instances of HMn, and prove a distributional lower bound
w.r.t. deterministic one-way protocols. We define µ as follows: let X be a uniformly chosen bitstring in {0, 1}^n; let M be an independent and uniformly chosen perfect matching in M, where M is any set of m = Ω(n) pairwise edge-disjoint matchings. (For example, let m = n/2 and M = {M1, M2, ..., Mm}, where Mi is defined by the edge set {(j, m + ((j + i) mod m)) | j = 0, ..., m − 1}.)
In the proof we use the following simple fact, which is proved using Markov’s
inequality:
Proposition 1 Let X be a random variable with values in the interval [0, 1]. If E[X] ≥
1− ε, then for any t ≥ 1, Pr[X ≥ 1− tε] ≥ 1− 1/t.
Let Π be a deterministic protocol for this problem with (distributional) error
δ < 1/16 with respect to µ.
Define the 2n ×m protocol matrix P whose rows and columns are indexed by the
inputs x to Alice and inputs M to Bob, respectively. The entry at (x,M) is the output
⟨i, j, b⟩ of Π on (x, M). We will assume, without loss of generality, that (i, j) ∈ M; thus an error occurs at (x, M) only when b ≠ x_i ⊕ x_j. When this happens we say that the entry of P at (x, M) is incorrect.
For any possible message τ of Alice, let Sτ denote the set of Alice's inputs on which she sends τ. The weight of τ is defined to be the fraction |Sτ|/2^n. We call τ good if at least a 1 − 2δ fraction of the entries in the submatrix Sτ × M of P are correct. By Proposition 1, the total weight of good τ's is at least 1/2. For any such τ, we will show that its weight is at most 2^{−Ω(√n)}. It follows that the number of good τ's is at least (1/2)/2^{−Ω(√n)} = 2^{Ω(√n)}, and therefore the communication cost of Π has to be at least Ω(√n). For the rest of the proof fix such a τ.
Each entry in P consists of an edge and a bit. Therefore, any row of P defines a graph, obtained by taking the m edges in that row, and a vector in {0, 1}^m, corresponding to the bits in that row. Since Π is a one-round protocol, the output depends only on τ and Bob's input. It follows that the rows of P corresponding to inputs in Sτ are associated with the same graph G = Gτ and the same vector u = uτ.
Recall that since τ is good, the fraction of correct entries in the submatrix Sτ ×M
is at least 1−2δ. By another application of Proposition 1, it follows that for at least half of
the columns in this submatrix, the fraction of correct entries in any such column is at least
1− 4δ. Let G′ denote the set of edges associated with these columns. Thus, |G′| ≥ m/2.
We next show that G′ contains a forest with Ω(√m) = Ω(√n) edges. Let C1, C2, ..., Cs be the connected components of G′, and let b1, b2, ..., bs be the numbers of edges they contain (b1 + b2 + ... + bs = |G′|). Each Ci has at least Ω(√bi) nodes, and thus has a spanning tree with Ω(√bi) edges. Therefore, G′ contains a forest F with at least Σ_i Ω(√bi) = Ω(√m) edges, using the fact that √u + √v ≥ √(u + v).
Consider now the submatrix Sτ × F. Since F is a subgraph of G′, at least a 1 − 4δ fraction of the entries in this submatrix are correct. Thus, by Proposition 1, at least 1/2 of the rows have the property that at least a 1 − 8δ fraction of the entries in each such row are correct. Call these rows S′τ.
Let N denote the n × |F| vertex-edge incidence matrix of F, and let v ∈ {0, 1}^{|F|} denote the projection of u on F. For any x ∈ {0, 1}^n, the vector xN, taken over GF(2), is of length |F|. If the i-th coordinate corresponds to the edge (j, k), then the value of xN at this coordinate equals x_j ⊕ x_k. Any input x ∈ S′τ therefore satisfies h(xN, v) ≤ 8δ, where h(·, ·) denotes the relative Hamming distance. If W = {w : h(w, v) ≤ 8δ}, it follows that S′τ ⊆ ∪_{w∈W} {x : xN = w}.
Since F is a forest, the columns of N are linearly independent over GF(2), implying that the null space {z : zN = 0} has dimension n − |F|. Therefore, for any w ∈ W, the number of solutions x such that xN = w is at most 2^{n−|F|}. By the estimate given in Theorem 1, part (8), |W| ≤ 2^{|F|·H2(8δ)}, where H2(·) denotes the binary entropy function. Thus, |S′τ| ≤ 2^{n−|F|(1−H2(8δ))}.
Recall that |F| = Ω(√n) and that |S′τ| ≥ |Sτ|/2. We conclude that the weight of Sτ, |Sτ|/2^n, is at most 2|S′τ|/2^n ≤ 2^{1−|F|(1−H2(8δ))} = 2^{−Ω(√n)}, as needed. □
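The Hamming-ball estimate invoked above (Theorem 1, part (8)) can be spot-checked numerically; the parameters in this sketch are arbitrary:

```python
import math

def H2(p):
    """Binary entropy function; H2(0) = H2(1) = 0 by convention."""
    if p in (0, 1):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Spot-check of the Hamming-ball estimate |W| <= 2^{F * H2(delta)} for delta <= 1/2
# (the parameter choices below are arbitrary).
for F in (10, 25, 60):
    for delta in (0.05, 0.2, 0.5):
        ball = sum(math.comb(F, i) for i in range(int(delta * F) + 1))
        assert ball <= 2 ** (F * H2(delta)) + 1e-9
print("Hamming-ball bound verified on all tested parameters")
```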
We next describe a public-coin randomized protocol of complexity O(√n) for HMn. Alice uses the shared random string to pick √n locations in [n] and sends the corresponding bits to Bob. A standard calculation shows that these bits include the endpoints of at least one edge of the matching with constant probability. This shows that our lower bound is tight and thus:
Theorem 6 The randomized one-way communication complexity of HMn is Θ(√n).
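The sampling calculation behind this upper bound is a birthday-paradox argument, which a quick Monte Carlo experiment (with illustrative parameters) supports:

```python
import random

# Monte Carlo check (illustrative parameters) of the sampling step in the O(sqrt(n))
# protocol: ~2*sqrt(n) random coordinates capture both endpoints of some matching
# edge with constant probability (a birthday-paradox calculation).
random.seed(0)
n = 400
matching = [(2 * i, 2 * i + 1) for i in range(n // 2)]  # a fixed perfect matching
k = int(2 * n ** 0.5)                                   # sample ~ 2*sqrt(n) positions

trials, hits = 2000, 0
for _ in range(trials):
    S = set(random.sample(range(n), k))
    if any(u in S and v in S for (u, v) in matching):
        hits += 1
print(hits / trials)  # empirically a constant well above 1/2 for these parameters
```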
3.5 An exponential separation for Simultaneous Messages
Recall that in the model of Simultaneous Messages (SM), Alice and Bob both send
a single message to a Referee, after which he computes the output. We prove an exponential
separation in this model between quantum and public-coin randomized communication
complexity. To this end, we use a restricted version of the Hidden Matching problem,
in which Bob's input is not an arbitrary perfect matching on n vertices, but rather one of m = Ω(n) fixed matchings (Alice, Bob, and the Referee know this collection of m matchings a priori). We denote this problem by HM^small_n.
The lower bound we proved for HMn in the model of one-way communication (Theorem 5) is in fact a lower bound for HM^small_n. This lower bound holds also in the SM
model since this model is no more powerful than one-way communication. On the other
hand, the problem is still easy in the quantum case. Bob sends the index of his matching to
the Referee using log n bits and Alice sends a superposition of her input string using log n
qubits, similarly to the one-way protocol. Since the referee knows Bob’s matching, he can
perform the same measurement Bob performed in the one-way protocol and compute the
XOR of some pair in the matching.
3.6 The complexity of Boolean Hidden Matching
The O(log n)-qubit quantum protocol for HMn can be tweaked to solve BHMn as well: after obtaining the value ⟨k, ℓ, c⟩ from that protocol, where (k, ℓ) is the i-th pair in Bob's input matching M, Bob outputs w_i ⊕ c. Note that if c = x_k ⊕ x_ℓ, then w_i ⊕ c equals the desired bit b.
3.6.1 Lower bound for 0-error protocols
In order to prove the lower bound for 0-error protocols, we note the following characterization of the 0-error randomized one-way communication complexity of partial functions. Let f : X × Y → {0, 1, ∗} be a partial Boolean function. We say that the input (x, y) is legal if f(x, y) ≠ ∗. A protocol for f is required to be correct only on legal inputs; it is allowed to output arbitrary answers on illegal inputs. The confusion graph Gf of f is a graph whose vertex set is X; (x, x′) is an edge in Gf if and only if there exists a y such that both (x, y) and (x′, y) are legal inputs and f(x, y) ≠ f(x′, y).
It is known [46] that the deterministic one-way communication complexity of f is
log χ(Gf ) + O(1), where χ(Gf ) is the chromatic number of the graph Gf . We will obtain
a lower bound on the 0-error randomized one-way communication complexity via another
measure on Gf. For any graph G = (V, E), let
θ(G) = max_{W⊆V} |W| / α(G_W),
where G_W is the subgraph of G induced on W and α(G_W) is the independence number of G_W. It is easy to see that χ(G) ≥ θ(G). The following theorem shows that θ(Gf) is a lower bound on the 0-error communication complexity of f.
Theorem 7 The 0-error randomized one-way communication complexity of any partial
Boolean function f is at least log θ(Gf ).
Proof. Let Gf = (V,E) and let W ⊆ V achieve the maximum for θ(Gf ). Define µ to be
the uniform distribution on W .
Suppose Π is a randomized 0-error one-way protocol for f with public randomness
R, and whose cost is c+ 1 (Bob just outputs a bit which is the last bit of the transcript).
Let A(x, R) be the message sent by Alice on input x, and let B(τ, y,R) be the output of
the protocol given by Bob on input y when the message sent by Alice is τ . For any legal
input (x, y), we have E[|A(x,R)|] ≤ c, and Pr[B(A(x,R), y, R) = f(x, y)] = 1.
Let X be a random input for Alice whose distribution is µ. Then E[|A(X,R)|] ≤ c,
where the randomness is now over both X and R. Therefore, there exists a choice r∗ for
R such that E[|A(X, r∗)|] ≤ c. Define a deterministic protocol where A′(x) = A(x, r∗) and
B′(τ, y) = B(τ, y, r∗). Note that this protocol correctly computes f and E[|A′(X)|] ≤ c. Let T be the set of messages sent by Alice in this new protocol. For any message τ ∈ T, define Sτ = {x ∈ W : A′(x) = τ}. By the definition of Gf, it follows that Sτ is an independent set, so |Sτ| ≤ α(G_W). Therefore, the entropy of the random variable A′(X) satisfies:
H(A′(X)) = Σ_{τ∈T} (|Sτ|/|W|) log(|W|/|Sτ|)   (3.1)
≥ Σ_{τ∈T} (|Sτ|/|W|) log(|W|/α(G_W)) = log θ(Gf),
because the Sτ's partition W.
Finally, if we assume that the messages are prefix-free (which can be achieved with a constant-factor blow-up in the communication cost), then E[|A′(X)|] ≥ H(A′(X)) (Theorem 3). It follows from Equation 3.1 that c ≥ log θ(Gf). □
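For intuition, θ(G) can be computed by brute force on small graphs. The sketch below (the 5-cycle is an illustrative example, not from the text) confirms θ(C5) = 2.5, consistent with χ(C5) = 3 ≥ θ(C5):

```python
from itertools import combinations

# Brute-force computation of theta(G) = max_{W} |W| / alpha(G_W) on a small graph.
# Illustrative example (not from the text): the 5-cycle C5, where alpha(C5) = 2.
V = range(5)
E = {frozenset((i, (i + 1) % 5)) for i in V}

def alpha(W):
    """Independence number of the subgraph induced on W, by exhaustive search."""
    for size in range(len(W), 0, -1):
        for S in combinations(W, size):
            if all(frozenset((u, v)) not in E for u, v in combinations(S, 2)):
                return size
    return 0

theta = max(len(W) / alpha(W)
            for size in range(1, 6)
            for W in combinations(V, size))
print(theta)  # 2.5, attained at W = V; consistent with chi(C5) = 3 >= theta(C5)
```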
We use this characterization to prove the lower bound for BHMn:
Theorem 8 Let n = 4p, where p is prime. Then, the 0-error randomized one-way communication complexity of BHMn is Ω(n).
Proof. Let f denote the partial function BHMn. The vertex set of the confusion graph Gf is {0, 1}^n. We next show that (x, x′) is an edge in Gf if and only if the Hamming distance between x and x′ is exactly n/2.
Suppose (x, x′) is an edge in Gf. Then there exist a matching M and a vector w such that Mx ⊕ w = 0 and Mx′ ⊕ w = 1, or vice versa. That means that for every edge (i, j) ∈ M, x_i ⊕ x_j ≠ x′_i ⊕ x′_j, and thus x, x′ agree on one of the positions i, j and disagree on the other. Hence, the Hamming distance between x and x′ is exactly n/2. Conversely, given two strings x, x′ of Hamming distance n/2, let M be any matching between the positions on which x, x′ agree and the positions on which they disagree. Let w = Mx. Clearly, Mx ⊕ w = 0. For each edge (i, j) in M we have x_i ⊕ x_j ≠ x′_i ⊕ x′_j, and therefore Mx′ ⊕ w = 1, implying (x, x′) is an edge in Gf.
If n/2 is odd, Gf is the bipartite graph between the even- and odd-parity vertices. Therefore, Gf is 2-colorable, implying that f has an O(1) protocol (Alice just sends the parity of her input). We will show that the situation changes dramatically when n is a multiple of 4.
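The edge characterization above can be verified exhaustively for the smallest interesting case, n = 4; this brute-force sketch is illustrative, with helper names of our own choosing:

```python
from itertools import product

# Exhaustive check, for n = 4, that (x, x') is an edge of the confusion graph of BHM_n
# exactly when the Hamming distance between x and x' is n/2.
n = 4
matchings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]  # all of M_4

def value(M, w, x):
    """Return the answer bit if (x, (M, w)) is a legal instance of BHM_4, else None."""
    mw = tuple(x[i] ^ x[j] ^ wi for (i, j), wi in zip(M, w))
    if all(v == 1 for v in mw):
        return 1
    if all(v == 0 for v in mw):
        return 0
    return None

def edge(x, xp):
    for M in matchings:
        for w in product((0, 1), repeat=n // 2):
            a, b = value(M, w, x), value(M, w, xp)
            if a is not None and b is not None and a != b:
                return True
    return False

for x in product((0, 1), repeat=n):
    for xp in product((0, 1), repeat=n):
        dist = sum(u != v for u, v in zip(x, xp))
        assert edge(x, xp) == (dist == n // 2)
print("edge characterization verified for n = 4")
```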
Proposition 2 (Frankl and Wilson [30]) Let m = 4p − 1, where p is prime. Define the graph G = (V, E) where V = {A ⊆ [m] : |A| = 2p − 1}, and (A, B) ∈ E if and only if |A ∩ B| = p − 1. Then,
α(G) ≤ Σ_{i=0}^{p−1} C(m, i).
Let m = 4p − 1 and let G be the graph defined in Proposition 2. We claim that G is isomorphic to a vertex-induced subgraph of the confusion graph Gf: for every vertex A in G, the corresponding vertex in Gf is the characteristic vector of the set A ∪ {4p}. Let V denote the vertex set of G; it follows that θ(Gf) ≥ |V|/α(G).
We have |V| = C(m, 2p − 1) ≈ 2^m/√m, and by Proposition 2, α(G) ≤ 2^{mH2(γ)}, where H2 is the binary entropy function and γ = (p − 1)/(4p − 1) ≤ 1/4. The result now follows from Theorem 7. □
3.6.2 Lower bound for linear randomized protocols
Theorem 9 Let n be a positive integer multiple of 4, and let 0 < δ < 1/2 be a constant bounded away from 1/2. Then, any δ-error public-coin one-way linear protocol for BHMn requires Ω((n log n)^{1/3}) bits of communication.
Proof. Using Yao’s Lemma [70], in order to prove the lower bound, it suffices to construct
a “hard” distribution µ over instances of BHMn, and prove a distributional lower bound
w.r.t. deterministic one-way linear protocols. We define µ as follows: let X be a uniformly chosen bitstring in {0, 1}^n; let M be a uniformly chosen perfect matching in Mn; and let B be a uniformly chosen bit. W is a random bitstring in {0, 1}^{n/2}, defined as W := MX ⊕ B (recall that B is the vector all of whose entries are B).
Let Π be any deterministic one-way linear protocol that has error probability of at
most δ when solving BHMn on inputs drawn according to µ. Let c be the communication
cost of Π.
Since Π is deterministic, one-way, and linear, there exists a fixed c × n Boolean matrix A such that the message of Π on any input x is Ax. By adding at most one bit to the communication cost of Π, we can assume 1 is one of the rows of A. We also assume, without loss of generality, that A has full row rank, because otherwise Alice sends redundant information, which Bob can figure out by himself.
We assume c satisfies c³/log c ≤ 3n/4, since otherwise c = Ω((n log n)^{1/3}) and we are done.
For a matrix T, we denote by sp(T) the span of the row vectors of T over the field GF(2). Clearly, for any matrix T, 0 ∈ sp(T). In particular, 0 ∈ sp(M) ∩ sp(A) for any matching M ∈ Mn (recall that we view a matching M as an n/2 × n edge-vertex incidence matrix). By our assumption about A, 1 ∈ sp(A). Since M is a perfect matching, the sum of its rows is 1; thus 1 ∈ sp(M). We conclude that for any M, {0, 1} ⊆ sp(M) ∩ sp(A). Let Z be an indicator random variable of the event sp(M) ∩ sp(A) = {0, 1}, meaning that 0 and 1 are the only vectors in the intersection of the spans.
In the protocol Π, Bob observes the values of the random variables AX, M, and W and uses them to predict the random variable B with error probability δ. Therefore, by Fano's inequality (Theorem 2),
H2(δ) ≥ H(B | AX, M, W).   (3.2)
Since conditioning reduces entropy,
H(B | AX, M, W) ≥ H(B | AX, M, W, Z)
= H(B | AX, M, W, Z = 1) · Pr(Z = 1) + H(B | AX, M, W, Z = 0) · Pr(Z = 0)
≥ H(B | AX, M, W, Z = 1) · Pr(Z = 1).   (3.3)
The following two lemmas bound the two factors in the last expression:
Lemma 1 H(B | AX,M,W, Z = 1) = 1.
Lemma 2 Pr(Z = 1) ≥ 1 − O(c³/(n log c)).
The proofs of Lemmas 1 and 2 are provided below. Let us first show how the two lemmas imply the theorem. Combining Equations 3.2 and 3.3 with Lemmas 1 and 2, we have:
H2(δ) ≥ 1 − O(c³/(n log c)).
Therefore,
c ≥ Ω((n(1 − H2(δ)) · log(n(1 − H2(δ))))^{1/3}) = Ω((n log n)^{1/3}),
since H2(δ) is a constant bounded away from 1. This completes the proof of the theorem. □
Proof. [of Lemma 1] Recall that we assume 1 is one of the rows of A and that A has full row rank. Let A′ be the submatrix of A consisting of all the rows of A except 1. Clearly, sp(A′) ⊆ sp(A) and 1 ∉ sp(A′). It follows that the event sp(M) ∩ sp(A) = {0, 1} is the same as the event sp(M) ∩ sp(A′) = {0}. Thus, from now on we will think of Z as an indicator random variable of the latter.
Observe that since n is a multiple of 4, the parity of the bits of w always equals the parity of the bits of x. It follows that in the pair of random variables (AX, W) the same information (that is, the random variable 1 · X) is repeated twice. We can therefore rewrite H(B | AX, M, W, Z = 1) as H(B | A′X, M, W, Z = 1).
By the definition of mutual information,
H(B | A′X, M, W, Z = 1) = H(B | M, W, Z = 1) − I(B ; A′X | M, W, Z = 1).
The next proposition shows that the random variables B, M, and W are mutually independent given the event Z = 1, implying that H(B | M, W, Z = 1) = H(B | Z = 1) = H(B) = 1. Thus, in order to prove the lemma it suffices to show that I(B ; A′X | M, W, Z = 1) = 0.
Proposition 3 The random variables B,M, and W are mutually independent, given the
event Z = 1.
Proof. We will show the random variables B,M, and W are mutually independent
unconditionally. This independence would then hold even given the event Z = 1, because
this event is a function of M only.
The random variables B and M are independent by definition. Let M be any value of the random variable M, and let b be any value of the random variable B. In order to show the desired independence, we need to prove that for any possible value w of W,
Pr(W = w | M = M, B = b) = Pr(W = w).
Using conditional probability, we can rewrite Pr(W = w | M = M, B = b) as follows:
Pr(W = w | M = M, B = b) = Σ_{x∈{0,1}^n} Pr(W = w | M = M, B = b, X = x) · Pr(X = x | M = M, B = b).
Since X, M, and B are mutually independent by definition, Pr(X = x | M = M, B = b) = Pr(X = x) = 1/2^n. Also, Pr(W = w | M = M, B = b, X = x) = 1 if w = Mx ⊕ b, and 0 otherwise. The number of x's that satisfy this condition is the number of solutions to the linear system Mx = w ⊕ b over Z_2^n. Since M is an n/2 × n matrix with full row rank, this number is 2^{n/2}. Therefore, Pr(W = w | M = M, B = b) = 2^{n/2}/2^n = 1/2^{n/2}.
Consider now the quantity Pr(W = w). Using conditional probability we can rewrite it as:
Pr(W = w) = Σ_{M,b} Pr(W = w | M = M, B = b) · Pr(M = M, B = b).
We already proved that for all M and b, Pr(W = w | M = M, B = b) = 1/2^{n/2}. Therefore, Pr(W = w) = 1/2^{n/2} as well, completing the proof. □
Next we prove I(B ; A′X | M, W, Z = 1) = 0. By the chain rule for mutual information,
I(B, M, W ; A′X | Z = 1) = I(M, W ; A′X | Z = 1) + I(B ; A′X | M, W, Z = 1).
Since mutual information is always a non-negative quantity, it thus suffices to show that I(B, M, W ; A′X | Z = 1) = 0.
The function f(b, M, w) = (b, M, w ⊕ b) is a 1-1 function. Note that f(B, M, W) = (B, M, W ⊕ B) = (B, M, MX). Therefore, by the data processing inequality (applied using both f and f⁻¹), we have:
I(B, M, W ; A′X | Z = 1) = I(B, M, MX ; A′X | Z = 1).
Using again the chain rule for mutual information we have:
I(B, M, MX ; A′X | Z = 1) = I(B, M ; A′X | Z = 1) + I(MX ; A′X | B, M, Z = 1).   (3.4)
We next show that each of the above mutual information quantities is 0. By the definition of the input distribution µ, the random variables B, M, and X are mutually independent. This holds even given the event Z = 1, because the latter is a function of M only. It follows that B, M, and A′X are also mutually independent given the event Z = 1, and thus I(B, M ; A′X | Z = 1) = 0.
As for the second mutual information quantity on the RHS of Equation 3.4, we use again the independence of B, M, and A′X given Z = 1 to derive I(MX ; A′X | B, M, Z = 1) = I(MX ; A′X | M, Z = 1). The following proposition proves that for any matching M satisfying the condition indicated by the event Z = 1, the random variables MX and A′X are independent. It then follows that I(MX ; A′X | M, Z = 1) = 0.
Proposition 4 For any matching M ∈ Mn satisfying the condition sp(M) ∩ sp(A′) = {0}, the random variables MX and A′X are independent.
Proof. Let z be any possible value of the random variable MX and let y be any possible value of the random variable A′X. In order to prove the independence, we need to show that Pr(MX = z | A′X = y) = Pr(MX = z).
M is an n/2 × n Boolean matrix with full row rank. Therefore, the number of solutions to the linear system Mx = z over Z_2^n is exactly 2^{n/2}. Recall that X was chosen uniformly at random from Z_2^n. Therefore, Pr(MX = z) = 2^{n/2}/2^n = 1/2^{n/2}.
By the definition of conditional probability, Pr(MX = z | A′X = y) = Pr(MX = z, A′X = y)/Pr(A′X = y). Since A′ is a (c − 1) × n Boolean matrix with full row rank, the same argument as above shows that Pr(A′X = y) = 2^{n−c+1}/2^n. Let D be the (n/2 + c − 1) × n matrix composed by putting M on top of A′. Since sp(M) ∩ sp(A′) = {0}, D has full row rank. We thus obtain Pr(MX = z, A′X = y) = Pr(DX = (z, y)) = 2^{n/2−c+1}/2^n. Hence, Pr(MX = z | A′X = y) = 2^{n/2−c+1}/2^{n−c+1} = 1/2^{n/2} = Pr(MX = z). The proposition follows. □
This completes the proof of Lemma 1. □
Proof. [of Lemma 2] Denote by E the event sp(M) ∩ sp(A) ≠ {0, 1}. We would like to prove Pr(E) ≤ O(c³/(n log c)). For 0 ≤ k ≤ n, define sp_k(A) to be the set of vectors in sp(A) whose Hamming weight is k. Define E_k to be the event sp(M) ∩ sp_k(A) ≠ ∅. Since sp_0(A) = {0} and sp_n(A) = {1}, the event E can be rewritten as ∨_{k=1}^{n−1} E_k. Thus, using the union bound, we can bound the probability of E as follows:
Pr(E) ≤ Σ_{k=1}^{n−1} Pr(sp(M) ∩ sp_k(A) ≠ ∅).   (3.5)
Let M be any matching in Mn. Any vector v in sp(M), when viewed as a set S_v (i.e., v is the characteristic vector of S_v), is a disjoint union of edges from M. We thus immediately conclude that v has to have an even Hamming weight. This implies that for all odd 1 ≤ k ≤ n − 1,
Pr(sp(M) ∩ sp_k(A) ≠ ∅) = 0.   (3.6)
Consider then an even k, and let v be any vector in sp_k(A). If v belongs to sp(M), then M can be partitioned into two perfect "sub-matchings": a perfect matching on S_v and a perfect matching on [n] \ S_v. We conclude that the number of matchings M in Mn for which v ∈ sp(M) is exactly m_k · m_{n−k}, where m_ℓ is the number of perfect matchings on ℓ nodes. Note that m_ℓ = ℓ!/((ℓ/2)! · 2^{ℓ/2}), and thus,
Pr(v ∈ sp(M)) = m_k · m_{n−k} / m_n = C(n/2, k/2) / C(n, k).
It follows, by the union bound, that for any even k,
Pr(sp(M) ∩ sp_k(A) ≠ ∅) ≤ |sp_k(A)| · C(n/2, k/2) / C(n, k).   (3.7)
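Both the closed form m_ℓ and the probability identity above can be checked by brute-force enumeration for small parameters:

```python
import math

def perfect_matchings(nodes):
    """Enumerate all perfect matchings on an even-sized list of nodes (brute force)."""
    if not nodes:
        return [[]]
    first, rest = nodes[0], nodes[1:]
    out = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        out += [[(first, partner)] + m for m in perfect_matchings(remaining)]
    return out

def m_count(l):
    """Closed form m_l = l! / ((l/2)! * 2^(l/2))."""
    return math.factorial(l) // (math.factorial(l // 2) * 2 ** (l // 2))

# Check the closed form against enumeration ...
for l in (2, 4, 6, 8):
    assert len(perfect_matchings(list(range(l)))) == m_count(l)

# ... and the identity m_k * m_{n-k} / m_n = C(n/2, k/2) / C(n, k) used above.
n = 10
for k in (2, 4, 6, 8):
    lhs = m_count(k) * m_count(n - k) / m_count(n)
    rhs = math.comb(n // 2, k // 2) / math.comb(n, k)
    assert abs(lhs - rhs) < 1e-12
```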
Since 1 ∈ sp(A), we have |sp_k(A)| = |sp_{n−k}(A)| for all 0 ≤ k ≤ n. Combining this with Equations 3.5, 3.6, and 3.7, it thus suffices to prove the following:
Σ_{j=1}^{n/4} |sp_{2j}(A)| · C(n/2, j)/C(n, 2j) ≤ O(c³/(n log c)).   (3.8)
We start by bounding the ratio in each of the terms:
C(n/2, j)/C(n, 2j) = [(n/2)! · (2j)! · (n − 2j)!] / [(n/2 − j)! · j! · n!]
= [(n/2) ⋯ (n/2 − j + 1) · (2j) ⋯ (j + 1)] / [n ⋯ (n − 2j + 1)]
≤ (1/2)^j · (2j/(n − j))^j = (j/(n − j))^j ≤ (4j/(3n))^j.   (3.9)
The last inequality follows from the fact that j ≤ n/4. We next bound |sp_{2j}(A)| for small values of j:
Proposition 5 For every 1 ≤ j ≤ ⌊c/2⌋, |sp_{2j}(A)| ≤ Σ_{i=1}^{2j} C(c, i).
Proof. Using just the elementary row operations of Gaussian elimination, we can transform A into a matrix A′ that has exactly the same span as A and that contains the c × c identity matrix as a submatrix. (Recall that A has full row rank.) It follows that any linear combination of t rows of A′ results in a vector of Hamming weight at least t. Therefore, the only linear combinations that give vectors in sp_{2j}(A) are those that use at most 2j rows of A′. The proposition follows, since the number of the latter is Σ_{i=1}^{2j} C(c, i). □
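Inequality (3.9) above can also be spot-checked numerically for small parameters:

```python
import math

# Numerical spot-check of inequality (3.9):
#   C(n/2, j) / C(n, 2j) <= (4j / (3n))^j   for 1 <= j <= n/4.
for n in (8, 16, 40, 100):
    for j in range(1, n // 4 + 1):
        ratio = math.comb(n // 2, j) / math.comb(n, 2 * j)
        assert ratio <= (4 * j / (3 * n)) ** j + 1e-15
print("inequality (3.9) holds on all tested parameters")
```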
We conclude that for 1 ≤ j ≤ ⌊c/2⌋, |sp_{2j}(A)| ≤ Σ_{i=1}^{2j} c^i = c · (c^{2j} − 1)/(c − 1) ≤ 2c^{2j} (assuming c ≥ 2). On the other hand, for all 1 ≤ j ≤ n/4 we have |sp_{2j}(A)| ≤ |sp(A)| ≤ 2^c. Note that the quantity 2c^{2j} exceeds 2^c when j ≥ (c − 1)/(2 log c). We thus define ℓ := ⌊(c − 1)/(2 log c)⌋ and break the sum on the RHS of Equation 3.8, which we need to bound, into two parts as follows:
Σ_{j=1}^{n/4} |sp_{2j}(A)| · C(n/2, j)/C(n, 2j)
= Σ_{j=1}^{ℓ} |sp_{2j}(A)| · C(n/2, j)/C(n, 2j) + Σ_{j=ℓ+1}^{n/4} |sp_{2j}(A)| · C(n/2, j)/C(n, 2j)
≤ Σ_{j=1}^{ℓ} (2c^{2j}) · (4j/(3n))^j + 2^c · max_{ℓ<j≤n/4} (4j/(3n))^j.   (3.10)
The last inequality follows from Equation 3.9, from Proposition 5, and from the fact that Σ_{j=ℓ+1}^{n/4} |sp_{2j}(A)| ≤ |sp(A)| ≤ 2^c. We bound each of the terms on the RHS of Equation 3.10 separately. We start with the first one:
Σ_{j=1}^{ℓ} (2c^{2j}) · (4j/(3n))^j = 2 · Σ_{j=1}^{ℓ} (4c²j/(3n))^j ≤ 2 · Σ_{j=1}^{ℓ} (4c²ℓ/(3n))^j.
Recall that we assumed c³/log c ≤ 3n/4. Hence, 4c²ℓ/(3n) ≤ 2c³/(3n log c) ≤ 1/2. We can thus bound the geometric series as follows:
2 · Σ_{j=1}^{ℓ} (4c²ℓ/(3n))^j ≤ 2 · (4c²ℓ/(3n)) · 1/(1 − 4c²ℓ/(3n)) ≤ 16c²ℓ/(3n) ≤ 8c³/(3n log c).   (3.11)
We now turn to bounding the second term on the RHS of Equation 3.10.
Proposition 6 The function g(j) = (aj)^j, where a > 0, has a local minimum at j* = 1/(ae) in the interval (0, ∞).
Proof. We rewrite g as follows: g(j) = e^{j ln(aj)}. The derivative of g is:
g′(j) = e^{j ln(aj)} · (ln(aj) + 1).
Thus, g has a local extremum at j* = 1/(ae). We next verify that it is a local minimum. The second derivative of g is:
g′′(j) = g′(j) · (ln(aj) + 1) + g(j) · (1/j) = g(j) · ((ln(aj) + 1)² + 1/j).
Since g is positive in the interval (0, ∞), g′′(j) > 0 for all j in this interval. In particular, g′′(j*) > 0, implying that j* is a local minimum. □
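A numerical sanity check of Proposition 6, with the illustrative choice a = 4/(3n) and n = 1000:

```python
import math

# g(j) = (a*j)^j has a local minimum at j* = 1/(a*e) = 3n/(4e); since n/4 <= j*,
# g is decreasing on [1, n/4], so its maximum there is attained at the left endpoint.
n = 1000
a = 4 / (3 * n)
jstar = 1 / (a * math.e)  # = 3n/(4e), about 275.9 here

g = lambda j: (a * j) ** j
eps = 1e-3
assert g(jstar) < g(jstar - eps) and g(jstar) < g(jstar + eps)  # local minimum

grid = [1 + i * (n / 4 - 1) / 100 for i in range(101)]          # grid on [1, n/4]
assert all(g(grid[i]) >= g(grid[i + 1]) for i in range(100))    # decreasing up to n/4
```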
Proposition 6 shows that the function g(j) = (aj)^j has a local minimum at j* = 1/(ae) in the interval (0, ∞). In our case a = 4/(3n), and thus j* = 3n/(4e) ≥ n/4. Therefore the maximum of (4j/(3n))^j in the interval [ℓ, n/4] is attained at j = ℓ. We conclude that:
2^c · max_{ℓ<j≤n/4} (4j/(3n))^j ≤ 2^c · (4ℓ/(3n))^ℓ ≤ 2^c · (2c/(3n log c))^{(c−1)/(2 log c)}
≤ (4c/(3n log c))^c ≤ (2c/n)^c ≤ c/n ≤ c³/(n log c).   (3.12)
In the next-to-last inequality we used the fact that 2 ≤ c ≤ n/4. Combining Equations 3.10, 3.11, and 3.12, we have
Σ_{j=1}^{n/4} |sp_{2j}(A)| · C(n/2, j)/C(n, 2j) ≤ 8c³/(3n log c) + c³/(n log c) ≤ O(c³/(n log c)).
This completes the proof of Lemma 2. □
3.7 Remarks
The main question in quantum communication complexity is to characterize its
power in relation to classical communication complexity. For partial Boolean functions it
was known that quantum two-way communication complexity could be exponentially lower
than the classical one [60]. Here we prove a similar result even for one-way communication
complexity. The main open question is the relationship between quantum and classical communication complexity for total functions. Are they polynomially related for all total functions? Is this relationship even tighter in the case of one-way communication complexity? Moreover, can we show an exponential separation between quantum one-way communication complexity and randomized two-way communication complexity?
It is also very intriguing to study the connection between quantum one-way com-
munication complexity and quantum advice and proofs. For example, can our result be used
to prove an oracle separation between the classes BQP/poly and BQP/qpoly or between
QMA and QCMA?
Chapter 4
Locally Decodable Codes
In this chapter, we study Locally Decodable Codes, which are central in the area of Probabilistically Checkable Proofs. We resolve a main open question about their efficiency by first reducing the problem to one about quantum codes and then employing tools from quantum information theory to solve the latter. This is the first example of a proof of a classical result in an "inherently quantum" way.
Definition 1 C : {0, 1}^n → {0, 1}^m is a (q, δ, ε)-locally decodable code (LDC) if there is a classical randomized decoding algorithm A such that
1. A makes at most q queries to the m-bit string y, non-adaptively.
2. For all x and i, and all y ∈ {0, 1}^m with Hamming distance d(C(x), y) ≤ δm, we have Pr[A^y(i) = x_i] ≥ 1/2 + ε.
The LDC is called linear if C is a linear function over GF(2) (i.e., C(x + y) = C(x) + C(y)).
By allowing A to be a quantum computer and to make queries in superposition, we can similarly define (q, δ, ε)-locally quantum-decodable codes (LQDCs). It will be convenient to work with non-adaptive queries, as in the above definition, so that the distribution on the queries that A makes is independent of y. However, our main lower bound also holds for adaptive queries; see the first remark at the end of Section 4.1.3.
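For concreteness, here is a sketch of the classic example of a 2-query LDC, the Hadamard code (the parameters and decoder below are illustrative; the code has length m = 2^n, one bit x·z mod 2 per z):

```python
import random
from itertools import product

# Sketch of the Hadamard code as a 2-query LDC (illustrative parameters).
random.seed(1)
n = 6
x = [1, 0, 1, 1, 0, 0]
code = {z: sum(a * b for a, b in zip(x, z)) % 2 for z in product((0, 1), repeat=n)}

# Corruption may be adversarial; here we corrupt a random delta fraction of positions.
delta = 0.05
corrupted = dict(code)
for z in random.sample(list(code), int(delta * len(code))):
    corrupted[z] ^= 1

def decode(i):
    # Two queries: a uniformly random z, and z with the i-th bit flipped.
    # On an uncorrupted word the two answers XOR to exactly x_i.
    z = tuple(random.randint(0, 1) for _ in range(n))
    zp = tuple(b ^ (1 if j == i else 0) for j, b in enumerate(z))
    return corrupted[z] ^ corrupted[zp]

# Each query alone is uniformly distributed, so by a union bound the decoder errs
# with probability at most 2*delta, i.e. epsilon = 1/2 - 2*delta here.
success = sum(decode(2) == x[2] for _ in range(5000)) / 5000
print(success)
```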
The main result of this chapter is an exponential lower bound for general 2-query
LDCs:
Theorem 13 If C : {0, 1}^n → {0, 1}^m is a (2, δ, ε)-locally decodable code, then m ≥ 2^{cn−1}, for c = 3δε²/(98 ln 2).
While Section 4.1 focuses only on codes over the binary alphabet, in Section 4.2.1
we extend our result to the case of larger alphabets, using a classical reduction due to
Trevisan [67]. In Section 4.2.2 we look at LDCs with q ≥ 3 queries and improve the
polynomial lower bounds of Katz and Trevisan [38]. Our bounds are still polynomial and
far from the best known upper bounds. In Section 4.2.3 we observe that our construction
implies the existence of 1-query quantum-decodable codes for all n. The Hadamard code
is an example of this. Here the codewords are still classical, but the decoder is quantum.
As mentioned before, if we only allow one classical query, then LDCs do not exist for n
larger than some constant depending on δ and ε [38]. For larger q, it turns out that the
best known (2q, δ, ε)-LDCs, due to Beimel et al. [17], are actually (q, δ, ε)-LQDCs. Hence, for a fixed number of queries q, we obtain LQDCs that are significantly shorter than the best known LDCs. In particular, Beimel et al. give a 4-query LDC of length m = 2^{O(n^{3/10})} which is a 2-query LQDC. This is significantly shorter than the m = 2^{Θ(n)} that 2-query LDCs need. We summarize the situation in Table 4.1, where our contributions are indicated by boldface.
Queries   Length of LDC     Length of LQDC
q = 1     don't exist       2^{Θ(n)}
q = 2     2^{Θ(n)}          2^{O(n^{3/10})}
q = 3     2^{O(n^{1/2})}    2^{O(n^{1/7})}
q = 4     2^{O(n^{3/10})}   2^{O(n^{1/11})}
Table 4.1: Best known bounds on the length of LDCs and LQDCs with q queries
4.1 Lower Bound for 2-Query Locally Decodable Codes
Our proof has two parts, each with a clear intuition but requiring quite a few technicalities:
1. A 2-query LDC is a 1-query LQDC, because one quantum query can compute the same Boolean functions as two classical queries (albeit with slightly worse error probability).
2. The length m of a 1-query LQDC must be exponential, because a uniform superposition over all indices contains only log m qubits, but induces a quantum random access code for x, for which a linear lower bound is already known [54].
4.1.1 From 2 Classical to 1 Quantum Query
In Chapter 2 we gave the following definition of a quantum query. A query to
an m-bit string x is the following unitary transformation, where j ∈ [m]:

    |j〉 ↦ (−1)^{x_j} |j〉.
A quantum computer may apply this to any superposition. An equivalent formulation,
which we will use here, is:

    |c〉|j〉 ↦ (−1)^{c·x_j} |c〉|j〉.

Here c is a control bit that controls whether the phase (−1)^{x_j} is applied or not. Given some
extra workspace, one query of either type can be simulated exactly by one query of the
other type.
The key to the first step is the following lemma, which is a generalisation of the
algorithm we described in Chapter 2 for computing the XOR of two bits with one quantum
query:
Lemma 3 Let f : {0,1}² → {0,1} and suppose we can make queries to the bits of some
input string a = a_1 a_2 ∈ {0,1}². There exists a quantum algorithm that makes only one
query (one that is independent of f) and outputs f(a) with probability exactly 11/14, and
outputs 1 − f(a) otherwise.
Proof. If we could construct the state

    |ψ_a〉 = (1/2)(|0〉|1〉 + (−1)^{a_1}|1〉|1〉 + (−1)^{a_2}|1〉|2〉 + (−1)^{a_1+a_2}|0〉|2〉)

with one quantum query, then we could determine a with certainty, since the four possible
states |ψ_b〉 (b ∈ {0,1}²) form an orthonormal basis. We can also view these states as the
Hadamard encodings of the strings b ∈ {0,1}². Unfortunately we cannot construct |ψ_a〉
perfectly with one query. Instead, we approximate this state by making the query
    (1/√3)(|0〉|1〉 + |1〉|1〉 + |1〉|2〉),
where the first bit is the control bit, and the appropriate phase (−1)^{a_j} is put in front of |j〉
if the control bit is 1. The result of the query is the state

    |φ〉 = (1/√3)(|0〉|1〉 + (−1)^{a_1}|1〉|1〉 + (−1)^{a_2}|1〉|2〉).
The algorithm then measures this state |φ〉 in the orthonormal basis consisting of the four
states |ψ_b〉. The probability of getting outcome a is |〈φ|ψ_a〉|² = 3/4, and each of the other
3 outcomes has probability 1/12. The algorithm now determines its output based on f and
on the measurement outcome b. We distinguish 3 cases for f:
1. |f^{−1}(1)| = 1 (the case |f^{−1}(1)| = 3 is completely analogous, with 0 and 1 reversed). If
f(b) = 1, then the algorithm outputs 1 with probability 1. If f(b) = 0, then it outputs
0 with probability 6/7 and 1 with probability 1/7. Accordingly, if f(a) = 1, then the
probability of output 1 is Pr[f(b) = 1] · 1 + Pr[f(b) = 0] · 1/7 = 3/4 + 1/28 = 11/14.
If f(a) = 0, then the probability of output 0 is Pr[f(b) = 0] · 6/7 = (11/12) · (6/7) =
11/14.
2. |f^{−1}(1)| = 2. Then Pr[f(a) = f(b)] = 3/4 + 1/12 = 5/6. If the algorithm outputs f(b)
with probability 13/14 and outputs 1 − f(b) with probability 1/14, then its probability
of output f(a) is exactly 11/14.
3. f is constant. In that case the algorithm just outputs that value with probability
11/14.
Thus we always output f(a) with probability 11/14. □
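The case analysis above can be checked mechanically. The sketch below is ours, not the thesis’s: it takes the outcome distribution computed in the proof (probability 3/4 for b = a, 1/12 for each other outcome) and verifies, in exact rational arithmetic, that the post-processing yields success probability exactly 11/14 for every f and every input a.

```python
from fractions import Fraction as F
from itertools import product

INPUTS = list(product((0, 1), repeat=2))

def success_prob(f, a):
    # Measurement outcome distribution from the proof: b = a with prob. 3/4,
    # each of the other three outcomes with prob. 1/12.
    dist = {b: (F(3, 4) if b == a else F(1, 12)) for b in INPUTS}
    ones = sum(f[b] for b in INPUTS)
    def out1(b):  # probability the algorithm outputs 1 on outcome b
        if ones == 0: return F(3, 14)                 # constant 0: output 0 w.p. 11/14
        if ones == 4: return F(11, 14)                # constant 1: output 1 w.p. 11/14
        if ones == 1: return F(1) if f[b] else F(1, 7)
        if ones == 3: return F(6, 7) if f[b] else F(0)
        return F(13, 14) if f[b] else F(1, 14)        # balanced f
    p1 = sum(dist[b] * out1(b) for b in INPUTS)
    return p1 if f[a] else 1 - p1

for fvals in product((0, 1), repeat=4):               # all 16 functions f
    f = dict(zip(INPUTS, fvals))
    for a in INPUTS:
        assert success_prob(f, a) == F(11, 14)
print("every f, every a: success exactly 11/14")
```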
Peter Høyer (personal communication) recently improved the 11/14 in the lemma
to 9/10. We describe his algorithm in Section 4.2.5 and show that this success probability
is best possible if we have only one quantum query.
Using our lemma we can prove:
Theorem 10 A (2, δ, ε)-LDC is a (1, δ, 4ε/7)-LQDC.
Proof. Consider i, x, and y such that d(C(x), y) ≤ δm. The 1-query quantum decoder
will use the same randomness as the 2-query classical decoder. The random string of the
classical decoder determines two indices j, k ∈ [m] and a function f : {0,1}² → {0,1} such that

    Pr[f(y_j, y_k) = x_i] = p ≥ 1/2 + ε,

where the probability is taken over the decoder’s randomness. We now use Lemma 3 to
obtain a 1-query quantum decoder that outputs some bit b such that

    Pr[b = f(y_j, y_k)] = 11/14.
The success probability of this quantum decoder is:¹

    Pr[b = x_i] = Pr[b = f(y_j, y_k)] · Pr[f(y_j, y_k) = x_i] + Pr[b ≠ f(y_j, y_k)] · Pr[f(y_j, y_k) ≠ x_i]
                = (11/14)p + (3/14)(1 − p)
                = 3/14 + (4/7)p
                ≥ 1/2 + 4ε/7,

as promised. □
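This arithmetic can be verified exactly; the sketch below (ours, not the thesis’s) also replays the footnote’s counterexample showing why the “exactly” in Lemma 3 is needed.

```python
from fractions import Fraction as F

# (11/14)p + (3/14)(1-p) = 3/14 + (4/7)p, so p = 1/2 + eps maps to 1/2 + 4*eps/7.
for k in range(51):
    eps = F(k, 100)
    p = F(1, 2) + eps
    succ = F(11, 14) * p + F(3, 14) * (1 - p)
    assert succ == F(3, 14) + F(4, 7) * p == F(1, 2) + F(4, 7) * eps

# Footnote 1's counterexample: a classical decoder that outputs AND(y1,y2) = x_i
# with probability 3/5 and XOR(y3,y4) = 1 - x_i with probability 2/5 is right
# with probability 3/5 > 1/2; but a quantum step that computes AND with
# probability exactly 11/14 and XOR with probability 1 drops below 1/2.
quantum = F(3, 5) * F(11, 14) + F(2, 5) * 0
assert quantum == F(33, 70) and quantum < F(1, 2)
print("reduction arithmetic checks out")
```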
4.1.2 Lower Bound for 1-Query LQDCs
A quantum random access code is an encoding x ↦ ρ_x of n-bit strings x into m-
qubit states ρ_x, possibly mixed, such that any bit x_i can be recovered with some probability
p ≥ 1/2 + ε from ρ_x. The following lower bound is known on the length of such quantum
codes [54] (see Chapter 2, Section 2.4).
Theorem 11 (Nayak) An encoding x ↦ ρ_x of n-bit strings into m-qubit states with re-
covery probability at least p has m ≥ (1 − H(p))n.
This allows us to prove an exponential lower bound for 1-query LQDCs:
¹Here we use the ‘exactly’ part of Lemma 3. To see what could go wrong if the ‘exactly’ were ‘at least’,
suppose the classical decoder outputs AND(y_1, y_2) = x_i with probability 3/5 and XOR(y_3, y_4) = 1 − x_i
with probability 2/5. Then it outputs x_i with probability 3/5 > 1/2. However, if our quantum procedure
computes AND(y_1, y_2) with success probability 11/14 but XOR(y_3, y_4) with success probability 1, then its
recovery probability is (3/5)(11/14) < 1/2.
Theorem 12 If C : {0,1}^n → {0,1}^m is a (1, δ, ε)-LQDC, then

    m ≥ 2^{cn−1},

for c = δε²/(16 ln 2).
Proof. Our goal below is to show that we can recover each x_i with good probability from
a number of copies of the uniform (log(m) + 1)-qubit state

    |U(x)〉 = (1/√(2m)) ∑_{c∈{0,1}, j∈[m]} (−1)^{c·C(x)_j} |c〉|j〉.
The intuitive reason for this is as follows. Since C is an LQDC, it is able to recover
xi even from a codeword that is corrupted in many (up to δm) places. Therefore the
“distribution” of queries of the decoder must be “smooth”, i.e., spread out over almost all
the positions of the codeword—otherwise an adversary could choose the corrupted bits in
a way that makes the recovery probability too low. The uniform distribution provides a
reasonable approximation to such a “smooth” distribution. Since the uniform state |U(x)〉 is
independent of i, we can actually recover any bit xi with good probability, so it constitutes
a quantum random access code for x. Applying Theorem 11 then gives the result.
Let us be more precise. The most general query that the quantum decoder could
make to recover x_i is of the form

    |Q_i〉 = ∑_{c∈{0,1}, j∈[m]} α_{cj} |c〉|j〉|φ_{cj}〉,

where the |φ_{cj}〉 are pure states in the decoder’s workspace and the α_{cj} are non-negative reals
(any phases can be put in the |φ_{cj}〉). This workspace can also incorporate any classical
randomness used. However, the decoder could equivalently add these workspace states after
the query, using the unitary map |c〉|j〉|0〉 ↦ |c〉|j〉|φ_{cj}〉. Hence we can assume without loss
of generality that the actual query is

    |Q_i〉 = ∑_{c∈{0,1}, j∈[m]} α_{cj} |c〉|j〉,
and that the decoder just measures the state resulting from this query. Let D and I−D be
the two measurement operators that the decoder uses for this measurement, corresponding
to outputs 1 and 0, respectively. Its probability of giving output 1 on query-result |R〉 is
p(R) = 〈R|D|R〉 (for clarity we don’t write the |·〉 inside the p(·)).
Inspired by the smoothing technique of [38], we split the amplitudes α_{cj} of the
query |Q_i〉 into small and large ones: A = {(c, j) : α_{cj} ≤ √(1/δm)} and B = {(c, j) : α_{cj} > √(1/δm)}.
Since the query does not affect the |0〉|j〉-states, we can assume without loss of
generality that α_{0j} is the same for all j, so α_{0j} ≤ 1/√m ≤ 1/√(δm) and hence (0, j) ∈ A.
Let a = √(∑_{(c,j)∈A} α_{cj}²) be the norm of the “small-amplitude” part. Since ∑_{(c,j)∈B} α_{cj}² ≤ 1,
we have |B| < δm. Define non-normalized states
    |A(x)〉 = ∑_{(c,j)∈A} (−1)^{c·C(x)_j} α_{cj} |c〉|j〉

    |B〉 = ∑_{(c,j)∈B} α_{cj} |c〉|j〉.
The pure states |A(x)〉 + |B〉 and |A(x)〉 − |B〉 each correspond to a y ∈ {0,1}^m that is
corrupted (compared to C(x)) in at most |B| ≤ δm positions, so the decoder can recover
x_i from each of these states. If x has x_i = 1, then we have:

    p(A(x) + B) ≥ 1/2 + ε
    p(A(x) − B) ≥ 1/2 + ε.

Since p(A ± B) = p(A) + p(B) ± (〈A|D|B〉 + 〈B|D|A〉), averaging the previous two inequalities
gives

    p(A(x)) + p(B) ≥ 1/2 + ε.
Similarly, if x′ has x′_i = 0, then

    p(A(x′)) + p(B) ≤ 1/2 − ε.
Hence, for the normalized states (1/a)|A(x)〉 and (1/a)|A(x′)〉:

    p((1/a)A(x)) − p((1/a)A(x′)) ≥ 2ε/a².
Since this holds for every x, x′ with x_i = 1 and x′_i = 0, there are constants q_1, q_0 ∈ [0, 1],
with q_1 − q_0 ≥ 2ε/a², such that p((1/a)A(x)) ≥ q_1 whenever x_i = 1 and p((1/a)A(x)) ≤ q_0 whenever
x_i = 0.
If we had a copy of the state (1/a)|A(x)〉, then we could run the procedure below to
recover x_i. Here we assume that q_1 ≥ 1/2 + ε/a² (if not, then we must have q_0 ≤ 1/2 − ε/a²
and we can use the same argument with 0 and 1 reversed), and that q_1 + q_0 ≥ 1 (if not,
then q_0 ≤ 1/2 − ε/a² and we’re already done).

    Output 0 with probability q = 1 − 1/(q_1 + q_0), and otherwise output the result
    of the decoder’s 2-outcome measurement on (1/a)|A(x)〉.
If x_i = 1, then the probability that this procedure outputs 1 is

    (1 − q) p((1/a)A(x)) ≥ (1 − q)q_1 = q_1/(q_1 + q_0) = 1/2 + (q_1 − q_0)/(2(q_1 + q_0)) ≥ 1/2 + ε/(2a²).
If x_i = 0, then the probability that the procedure outputs 0 is

    q + (1 − q)(1 − p((1/a)A(x))) ≥ q + (1 − q)(1 − q_0) = 1 − q_0/(q_1 + q_0) = q_1/(q_1 + q_0) ≥ 1/2 + ε/(2a²).
Thus we can recover x_i with good probability if we have the state (1/a)|A(x)〉 (which depends
on i as well as x).
It remains to show how we can obtain (1/a)|A(x)〉 from |U(x)〉 with reasonable prob-
ability. This we do by applying a measurement with operators M†M and I − M†M to
|U(x)〉, where M = √(δm) ∑_{(c,j)∈A} α_{cj} |c, j〉〈c, j|. Both M†M and I − M†M are positive
operators (as required for a measurement) because 0 ≤ √(δm) α_{cj} ≤ 1 for all (c, j) ∈ A. The
measurement gives the first outcome with probability

    〈U(x)|M†M|U(x)〉 = (δm/2m) ∑_{(c,j)∈A} α_{cj}² = δa²/2.
In this case we have obtained the normalized version of M|U(x)〉, which is (1/a)|A(x)〉. Suppose
we have r = 2/(δa²) copies of |U(x)〉 and we perform the measurement separately on each of them.
Then with probability 1 − (1 − δa²/2)^r ≥ 1/2, at least one of these measurements gives the first outcome, in
which case we can predict x_i with probability 1/2 + ε/(2a²). If all measurements give the second
outcome, then we just output a fair coin flip as our guess for x_i. Overall, our recovery
probability is now
    p ≥ (1/2)(1/2 + ε/(2a²)) + (1/2)(1/2) = 1/2 + ε/(4a²).
Accordingly, r copies of the (log(m) + 1)-qubit state |U(x)〉 form a quantum random access
code with recovery probability p. Using Theorem 11, the inequality 1 − H(1/2 + η) ≥ 2η²/ln 2, and a² ≤ 1,
this gives

    r(log(m) + 1) ≥ (1 − H(p))n ≥ ε²n/(8a⁴ ln 2) ≥ ε²n/(8a² ln 2),

hence

    log m ≥ δε²n/(16 ln 2) − 1.
□
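The key identity in this proof, 〈U(x)|M†M|U(x)〉 = δa²/2, can be checked numerically. The sketch below is ours, with arbitrary small dimensions and a random query in place of an actual decoder; it builds the amplitude split into A and B and verifies both |B| < δm and the measurement probability.

```python
import numpy as np

rng = np.random.default_rng(1)
m, delta = 64, 0.1
# A random unit-norm query over basis states (c, j): index k < m means (0, k),
# index k >= m means (1, k - m).
alpha = np.abs(rng.normal(size=2 * m))
alpha /= np.linalg.norm(alpha)
A = alpha <= 1.0 / np.sqrt(delta * m)      # "small" amplitudes
assert (~A).sum() < delta * m              # |B| < delta*m, since sum alpha^2 <= 1
a2 = np.sum(alpha[A] ** 2)                 # a^2, squared norm of the small part
# Uniform state |U(x)> with phases (-1)^(c * C(x)_j) for a random codeword C(x).
Cx = rng.integers(0, 2, size=m)
U = np.concatenate([np.ones(m), (-1.0) ** Cx]) / np.sqrt(2 * m)
# M = sqrt(delta*m) * sum over small (c,j) of alpha_cj |c,j><c,j| (diagonal).
M = np.sqrt(delta * m) * np.diag(np.where(A, alpha, 0.0))
p_first = U @ M.T @ M @ U                  # <U| M^dag M |U>
assert np.isclose(p_first, delta * a2 / 2)
print(p_first)
```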
4.1.3 Lower Bound for 2-Query LDCs
Theorem 13 If C : {0,1}^n → {0,1}^m is a (2, δ, ε)-locally decodable code, then

    m ≥ 2^{cn−1},

for c = 3δε²/(98 ln 2).
Proof. The theorem combines Theorems 10 and 12. Straightforwardly, this would give
a constant of δε²/(49 ln 2). We get the better constant claimed here by observing that
the 1-query LQDC derived from the 2-query LDC actually has 1/3 of the overall squared
amplitude on queries where the control bit c is zero (and all those α_{0j} are in A). Hence,
in the proof of Theorem 12, we can redefine “small amplitude” to mean α_{cj} ≤ √(2/(3δm)), and
still B will have at most δm elements, because ∑_{(c,j)∈B} α_{cj}² ≤ 2/3. This in turn allows us
to make M a factor √(3/2) larger, which improves the probability of obtaining (1/a)|A(x)〉 from
|U(x)〉 to 3δa²/4 and allows us to decrease r to 4/(3δa²). This translates to a lower bound
log m ≥ 3δε²n/(32 ln 2) − 1. Combining that with Theorem 10 (which makes ε a factor 4/7
smaller) gives c = 3δε²/(98 ln 2), as claimed. □
Remarks:
(1) Note that a (2, δ, ε)-LDC with adaptive queries gives a (2, δ, ε/2)-LDC with
non-adaptive queries: if query q_1 would be followed by query q_2^0 or q_2^1 depending on the
outcome of q_1, then we can just guess in advance whether to query q_1 and q_2^0, or q_1 and q_2^1.
With probability 1/2, the second query will be the one we would have made in the adaptive
case and we’re fine; in the other case we just flip a coin, giving overall recovery probability
(1/2)(1/2 + ε) + (1/2)(1/2) = 1/2 + ε/2. Thus we also get slightly weaker but still exponential
lower bounds for adaptive 2-query LDCs.
(2) The constant 3/(98 ln 2) can be optimized a bit further by choosing the number
r of copies a bit larger in the proof of Theorem 12 and by using Peter Høyer’s 9/10-algorithm
(Section 4.2.5) instead of our 11/14-algorithm from Lemma 3. More interesting, however,
is the question whether the quadratic dependence on ε can be improved.
(3) For a (2, δ, ε)-LDC where the decoder’s output is the XOR of its two queries,
we can give a better reduction than in Theorem 10. Now the quantum decoder can query
(1/√2)(|1〉|1〉 + |1〉|2〉), giving

    (1/√2)((−1)^{a_1}|1〉|1〉 + (−1)^{a_2}|1〉|2〉) = (−1)^{a_1}(1/√2)(|1〉|1〉 + (−1)^{a_1⊕a_2}|1〉|2〉),

and extract a_1 ⊕ a_2 from this with certainty. Thus the recovery probability remains 1/2 + ε
instead of going down to 1/2 + 4ε/7. Accordingly, we also get better lower bounds for
2-query LDCs where the output is the XOR of the two queries, with c = δε²/(16 ln 2) in
the exponent.
(4) The second part of our proof is a reduction from a Locally Quantum-Decodable
Code to a “smooth” quantum code and then to a code where the distribution of the queries is
uniform. This reduction is known for classical codes as well (see the next section). Hence, an
alternative way to get the exponential lower bound on m would be first to invoke the result
by Katz and Trevisan that reduces an LDC to a code with a uniform query distribution.
We can reduce further to the case where the decoder outputs the XOR of the q queried bits.
Starting with such a uniformly smooth code, we can then use our reduction from 2 classical
queries to 1 quantum query without any loss in recovery probability (see Remark 3). After
this reduction we immediately end up with a quantum random access code of logm qubits
and we are done. However, this proof would give a worse dependence on δ and ε than our
current result.
(5) The connection to the Hidden Matching Problem in Chapter 3 should be clear.
In both cases we need to compute the XOR of some edge of a matching on the coordinates
of a string. That can be done from a uniform superposition of the bits of the string.
4.2 Extensions
In this section we give various extensions and variations of the lower bound of the
previous section.
4.2.1 Non-Binary Alphabets
Here we extend our lower bounds for binary 2-query LDCs to the case of 2-query
LDCs over larger alphabets. For simplicity we assume the alphabet is Σ = 0, 1`, so
a query to position j now returns an `-bit string C(x)j. The definition of (q, δ, ε)-LDC
carries over immediately, with d(C(x), y) now measuring the Hamming distance between
C(x) ∈ Σm and y ∈ Σm.
We will need the notion of smooth codes and their connection to LDCs as stated
in [38].
Definition 2 C : {0,1}^n → Σ^m is a (q, c, ε)-smooth code if there is a classical randomized
decoding algorithm A such that

1. A makes at most q queries, non-adaptively.

2. For all x and i we have Pr[A^{C(x)}(i) = x_i] ≥ 1/2 + ε.

3. For all x, i, and j, the probability that on input i machine A queries index j is at
most c/m.
Note that smooth codes only require good decoding on codewords C(x), not on
y that are close to C(x). Katz and Trevisan [38, Theorem 1] established the following
connection:
Theorem 14 (Katz & Trevisan) A (q, δ, ε)-LDC C : {0,1}^n → Σ^m is a (q, q/δ, ε)-
smooth code.
A converse to Theorem 14 also holds: a (q, c, ε)-smooth code is a (q, δ, ε−cδ)-LDC,
because the probability that the decoder queries one of δm corrupted positions is at most
(c/m)(δm) = cδ. Hence LDCs and smooth codes are essentially equivalent, for appropriate
choices of the parameters.
To prove the exponential lower bound for LDCs over non-binary alphabet Σ, we
will reduce a smooth code over Σ to a somewhat longer binary smooth code that works well
averaged over x. Then, we will show a lower bound on such average-case binary smooth
codes in a way very similar to the proof of Theorem 13. The following key lemma was
suggested to us by Luca Trevisan [67].
Lemma 4 (Trevisan) Let C : {0,1}^n → Σ^m be a (2, c, ε)-smooth code. Then there exists
a (2, c·2^ℓ, ε/2^{2ℓ})-smooth code C′ : {0,1}^n → {0,1}^{m·2^ℓ} that is good on average, i.e., there
is a decoder A such that for all i ∈ [n]

    (1/2^n) ∑_{x∈{0,1}^n} Pr[A^{C′(x)}(i) = x_i] ≥ 1/2 + ε/2^{2ℓ}.
Proof. We form the new binary code C′ by replacing each symbol C(x)_j ∈ Σ of the old
code by its Hadamard code, which consists of 2^ℓ bits. The length of C′(x) is m·2^ℓ bits.
The new decoding algorithm uses the same randomness as the old one. Let us fix the two
queries j, k ∈ [m] and the output function f : Σ² → {0,1} of the old decoder. We will
describe a new decoding algorithm that is good for an average x and looks only at one bit
of the Hadamard code of each of a = C(x)_j and b = C(x)_k.
First, if for this specific j, k, f we have Pr_x[f(a, b) = x_i] ≤ 1/2, then the new
decoder just outputs a random bit, so in this case it is at least as good as the old one for
an average x. Now consider the case Pr_x[f(a, b) = x_i] = 1/2 + η for some η > 0. Switching
from the {0,1}-notation to the {−1,1}-notation enables us to say that E_x[f(a, b) · x_i] = 2η.
Viewing a and b as two ℓ-bit strings, we can represent f by its Fourier representation (see
e.g. [15]): f(a, b) = ∑_{S,T⊆[ℓ]} f_{S,T} ∏_{s∈S} a_s ∏_{t∈T} b_t, and hence

    ∑_{S,T} f_{S,T} E_x[∏_{s∈S} a_s ∏_{t∈T} b_t · x_i] = E_x[∑_{S,T} f_{S,T} ∏_{s∈S} a_s ∏_{t∈T} b_t · x_i] = E_x[f(a, b) · x_i] = 2η.
Averaging over the 2^{2ℓ} pairs (S, T) and using that |f_{S_0,T_0}| ≤ 1, it follows that there exist subsets S_0, T_0 such that

    |E_x[∏_{s∈S_0} a_s ∏_{t∈T_0} b_t · x_i]| ≥ f_{S_0,T_0} E_x[∏_{s∈S_0} a_s ∏_{t∈T_0} b_t · x_i] ≥ 2η/2^{2ℓ}.
Returning to the {0,1}-notation, we must have either

    Pr_x[(S_0 · a ⊕ T_0 · b) = x_i] ≥ 1/2 + η/2^{2ℓ}

or

    Pr_x[(S_0 · a ⊕ T_0 · b) = x_i] ≤ 1/2 − η/2^{2ℓ},
where S_0 · a and T_0 · b denote inner products mod 2 of ℓ-bit strings. Accordingly, either the
XOR of the two bits S_0 · a and T_0 · b, or its negation, predicts x_i with average probability
≥ 1/2 + η/2^{2ℓ}. Both of these bits are present in the binary code C′(x). The c-smoothness of C
translates into c·2^ℓ-smoothness of C′. Averaging over the classical randomness (i.e. the
choice of j, k, and f) gives the lemma. □
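The two facts the proof leans on, that a ±1-valued function equals its Fourier expansion and that a 2η-correlated function yields a single character pair with correlation at least 2η/2^{2ℓ}, can be illustrated numerically. The sketch below is ours: it uses ℓ = 2 and a toy correlated distribution for (a, b, x_i) in place of an actual code.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
ell = 2
pts = list(product((0, 1), repeat=ell))
subsets = [tuple(i for i in range(ell) if (mask >> i) & 1) for mask in range(2 ** ell)]

def chi(S, u):
    # Character chi_S(u) = prod_{s in S} (-1)^(u_s)
    return (-1) ** sum(u[s] for s in S)

# A random +-1-valued function f(a, b) of two ell-bit strings, and its Fourier
# coefficients f_{S,T} = E[f(a,b) chi_S(a) chi_T(b)] over uniform (a, b).
f = {(a, b): int(rng.choice((-1, 1))) for a in pts for b in pts}
coeff = {(S, T): np.mean([f[(a, b)] * chi(S, a) * chi(T, b)
                          for a in pts for b in pts])
         for S in subsets for T in subsets}
# Sanity check: f equals its Fourier expansion pointwise (so |f_{S,T}| <= 1).
for a in pts:
    for b in pts:
        expansion = sum(coeff[(S, T)] * chi(S, a) * chi(T, b)
                        for S in subsets for T in subsets)
        assert np.isclose(expansion, f[(a, b)])

# Toy joint distribution: z (playing the role of x_i, in +-1 notation) agrees
# with f(a, b) with probability 0.8, so E[f(a,b) z] is about 0.6 = 2*eta.
data = []
for _ in range(4000):
    a, b = pts[rng.integers(len(pts))], pts[rng.integers(len(pts))]
    z = f[(a, b)] if rng.random() < 0.8 else -f[(a, b)]
    data.append((a, b, z))
two_eta = np.mean([f[(a, b)] * z for a, b, z in data])
# Since 2*eta = sum_{S,T} f_{S,T} E[chi_S(a) chi_T(b) z] and |f_{S,T}| <= 1,
# some single character pair has correlation at least 2*eta / 2^(2*ell).
best = max(abs(np.mean([chi(S, a) * chi(T, b) * z for a, b, z in data]))
           for S in subsets for T in subsets)
assert best >= two_eta / 2 ** (2 * ell) - 1e-9
print(two_eta, best)
```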
This lemma enables us to modify our proof of Theorem 13 so that it works for
non-binary alphabets Σ:
Theorem 15 If C : {0,1}^n → Σ^m = ({0,1}^ℓ)^m is a (2, δ, ε)-locally decodable code, then

    m ≥ 2^{cn−ℓ},

for c = Θ(δε²/2^{5ℓ}).
Proof. Using Theorem 14 and Lemma 4, we turn C into a binary (2, 2^{ℓ+1}/δ, ε/2^{2ℓ})-smooth
code C′ that has average recovery probability 1/2 + ε/2^{2ℓ} and length m′ = m·2^ℓ bits. Since
its decoder XORs its two binary queries, we can reduce this to one quantum query without
any loss in the average recovery probability (see the third remark following Theorem 13).

We now reduce this quantum smooth code to a quantum random access code,
by a modified version of the proof of Theorem 13. The smoothness of C′ implies that all
amplitudes α_j (which depend on i) in the one quantum query satisfy α_j ≤ √(2^{ℓ+1}/(δm′)).
Hence there is no need to split the set of j’s into A and B. Also, the control bit c will
always be 1, so we can ignore it.

Consider the states |U(x)〉 = (1/√m′) ∑_{j=1}^{m′} (−1)^{C′(x)_j} |j〉 and |A(x)〉 = ∑_{j=1}^{m′} α_j (−1)^{C′(x)_j} |j〉,
and the 2-outcome measurement with operators M = √(δm′/2^{ℓ+1}) ∑_j α_j |j〉〈j| and I − M.
The probability that the measurement takes us from |U(x)〉 to the renormalized M|U(x)〉
(= |A(x)〉) equals 〈U(x)|M†M|U(x)〉 = δ/2^{ℓ+1}. Hence r = 2^{ℓ+1}/δ copies of |U(x)〉
form a quantum random access code with average success probability
    p ≥ (1/2)(1/2 + ε/2^{2ℓ}) + (1/2)(1/2) = 1/2 + ε/2^{2ℓ+1}.
The (1 − H(p))n lower bound for a quantum random access code holds even if the recovery
probability p is only an average over x, which gives

    r · log(m′) ≥ (1 − H(p))n,

which implies the statement of the theorem. □
Recently, Wehner and de Wolf [68] proved an exponential bound for the case of
LDCs over a large alphabet, where the decoder uses only a constant number of bits from
each queried position of the codeword.
4.2.2 Bounds for More Than 2 Queries
Here we address the case of LDCs over the binary alphabet where the decoder asks
more than 2 queries. There is no obvious way to extend our 2-to-1 reduction to more than 2
classical queries, since a quantum computer needs ⌈q/2⌉ queries to compute the parity of q
bits with any advantage [13, 28]. In particular, it needs 2 quantum queries to compute the
parity of 3 bits, and we don’t have any lower bounds for 2-query LQDCs. Still, for LDCs
with q ≥ 3 queries we were able to improve the polynomial lower bounds m = Ω(n^{1+1/(q−1)})
of Katz and Trevisan [38] somewhat:
Theorem 16 If C : {0,1}^n → {0,1}^m is a (q, δ, ε)-locally decodable code, then

    m = Ω((n/log n)^{1+1/(⌈q/2⌉−1)}),

where the constant under the Ω(·) depends on δ and ε.
Proof. Suppose for simplicity that q is even and m is a multiple of q. By Theorem 14, it
suffices to prove a bound for a (q, c, ε)-smooth code, with c = q/δ. We will use the following
result to make the smooth code uniform.

Fact (Katz & Trevisan [38, discussion in Section 4]): A (q, c, ε)-smooth code is a
(q, q, ε²/(2c))-smooth code that is good on average. For every i, the new q-query decoder has
a fixed partition M_i of [m] into m/q q-tuples; it just picks a random q-tuple (j_1, . . . , j_q) ∈ M_i
and outputs the XOR of the q bits C(x)_{j_1}, . . . , C(x)_{j_q}. For every i, the decoding of x_i will
be correct with probability at least 1/2 + ε²/(2c), averaged over all x.
We will derive a quantum random access code from this uniform smooth code. Let
P_{ij} = |i〉〈i| + |j〉〈j| be the projector on the states |i〉 and |j〉. Suppose (i_1, j_1), . . . , (i_{m/2}, j_{m/2})
is a partition of all the q-tuples in M_i into pairs. By measuring the uniform state

    |U(x)〉 = (1/√m) ∑_{j=1}^m (−1)^{C(x)_j} |j〉
with operators P_{i_1 j_1}, . . . , P_{i_{m/2} j_{m/2}}, we get

    (1/√2)((−1)^{C(x)_{i_ℓ}} |i_ℓ〉 + (−1)^{C(x)_{j_ℓ}} |j_ℓ〉),

for a random 1 ≤ ℓ ≤ m/2. From this we can obtain C(x)_{i_ℓ} ⊕ C(x)_{j_ℓ}, so we can generate
the XOR of a random pair from the partition. In order to recover x_i we need to find q/2
different pairs that come from the same q-tuple.
Each state |U(x)〉 gives us a random pair out of the possible m/2. By the Birthday
Paradox, if we have O(m^{1−2/q}) copies of the (log m)-qubit state |U(x)〉, then with high prob-
ability we will find q/2 different pairs that come from the same q-tuple and hence be able to
recover x_i. In other words, O(m^{1−2/q}) copies of the (log m)-qubit state |U(x)〉 constitute an
(average) random access code. The random access code lower bound (Section 2.4) now
gives

    m^{1−2/q} · log m = Ω(n),

which implies m = Ω((n/log n)^{1+2/(q−2)}). □
For example, for q = 4 queries our lower bound is m = Ω((n/log n)²) while Katz
and Trevisan have m = Ω(n^{4/3}).
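The birthday-paradox step can be illustrated with a small simulation (ours; the parameters are arbitrary): drawing O(m^{1−2/q}) uniformly random pairs from the m/2 pairs of the partition almost always yields q/2 distinct pairs belonging to the same q-tuple.

```python
import random

def finds_complete_tuple(m, q, t, rng):
    # Pairs are numbered 0 .. m/2 - 1; consecutive blocks of q/2 pairs make up
    # one q-tuple (the tuple of pair p is p // (q//2)). Draw t random pairs and
    # report whether some tuple has all q/2 of its pairs drawn.
    seen = {}
    for _ in range(t):
        p = rng.randrange(m // 2)
        tup = p // (q // 2)
        seen.setdefault(tup, set()).add(p)
        if len(seen[tup]) == q // 2:
            return True
    return False

rng = random.Random(3)
m, q = 10000, 4
t = int(3 * m ** (1 - 2 / q))   # O(m^{1-2/q}) samples; ~t^2/m completions expected
succ = sum(finds_complete_tuple(m, q, t, rng) for _ in range(50)) / 50
print(succ)   # close to 1
```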
4.2.3 Locally Quantum-Decodable Codes with Few Queries
The third remark of Section 4.1.3 immediately generalizes to:
Theorem 17 A (2q, δ, ε)-LDC where the decoder’s output is the XOR of the 2q queried
bits, is a (q, δ, ε)-LQDC.
LDCs with q queries can be obtained from q-server PIR schemes with 1-bit an-
swers by concatenating the answers that the servers give to all possible queries of the user.
Beimel et al. [17, Corollary 4.3] recently improved the best known upper bounds on q-
query LDCs, based on their improved PIR construction. They give a general upper bound
m = 2^{n^{O(log log q/(q log q))}} for q-query LDCs, for some constant depending on δ and ε, as well
as more precise estimates for small q. In particular, for q = 4 they construct an LDC of
length m = 2^{O(n^{3/10})}. All their LDCs are of the XOR-type, so we can reduce the number
of queries by half when allowing quantum decoding. For instance, their 4-query LDC is
a 2-query LQDC with length m = 2^{O(n^{3/10})}. In contrast, any 2-query LDC needs length
m = 2^{Ω(n)}, as proved above.
For general LDCs we can do something nearly as good, using van Dam’s result
that a q-bit oracle can be recovered with probability nearly 1 using q/2 +O(√q) quantum
queries [26]:
Theorem 18 A (q, δ, ε)-LDC is a (q/2 +O(√q), δ, ε/2)-LQDC.
4.2.4 Locally Decodable Erasure Codes
Recently, the notion of a Locally Decodable Erasure Code (LDEC) was used in
the construction of extractors [50, Section 3.1]. This is a code where, even if (1 − ε)m of
all positions of the codeword are erased, we can still recover each xi using only q queries to
the remaining positions.
Definition 3 Consider a map C : {0,1}^n → Σ^m. We say that message position i is decod-
able from codeword positions j_1, . . . , j_q if there exists a function f such that
f(C(x)_{j_1}, . . . , C(x)_{j_q}) = x_i for all x. C is a (q, ε)-LDEC if, for every i, every ε-fraction of the positions of the
codeword contains a q-tuple of positions from which i is decodable.
Here we show that LDECs are equivalent to smooth codes, as defined in Sec-
tion 4.2.1, and hence to LDCs. This equivalence shows that our lower bounds also hold
for LDECs. In particular, (2, ε)-LDECs need exponential length.
First consider some LDEC with codewords of length m. Take S to be a set containing an ε-fraction of the positions of
the codeword. By definition, there exists a “good” q-tuple in S, i.e., one from which we can
decode message position i. Remove these q positions of the codeword from S and replace
them by some other q positions. In this new set S′ of positions there must still be a
“good” q-tuple. Remove it and continue. This substitution can be repeated (1 − ε)m/q times.
Therefore, there are Ω(m) disjoint q-tuples that are “good”
for x_i, and so the code is a smooth code: the smooth decoder just picks one of these tuples
at random and queries its positions.
The converse is also true. A smooth code contains Ω(m) disjoint q-tuples, say βm
of them, that are “good” for xi. Hence, in any subset of the positions of the codeword of
size (1 − β)m + 1, there exists a “good” q-tuple and therefore the code is an LDEC with
ε ≈ 1− β.
4.2.5 Optimal 1-Query Quantum Algorithms for 2-Bit Functions
In this section we show that every 2-bit Boolean function f can be computed
with success probability 9/10 using only one quantum query, and that this is optimal for
functions like AND and OR.2 If f is constant or depends on only one of its 2 input bits x1
and x2, then we can obviously compute it with one query. If f is PARITY or its negation,
then it is well known that f can be computed exactly with one quantum query. The only
remaining case is where f is an imbalanced function, i.e. has one 1-input and three 0-inputs,
or vice versa. These 8 possible functions are all equivalent, so we will restrict attention to
the NOR-function, which is 1 iff x = 00.
Peter Høyer discovered the following algorithm for computing the 2-bit NOR with 1
quantum query and error probability ε = 1/10. Using one quantum query we can obtain
the state

    (1/√3)(|0〉 + (−1)^{x_1}|1〉 + (−1)^{x_2}|2〉).
We now use a 2-outcome measurement where the first operator is the projection on the
uniform superposition. We output 1 iff the measurement gives the first outcome. This has
error probability 0 on the x = 00 input (where NOR = 1), and error probability 1/9 on each
of the three other inputs. We can balance this to an algorithm with 2-sided error 1/10, by
producing output 0 with probability 1/10, and running the above 1-query algorithm with
probability 9/10.
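This algorithm is small enough to verify directly. The sketch below (ours, not from the thesis) computes the projection probabilities and checks that the balanced version has two-sided error exactly 1/10 on all four inputs.

```python
import numpy as np

def nor_error(x1, x2):
    # One query produces (|0> + (-1)^x1 |1> + (-1)^x2 |2>)/sqrt(3); project onto
    # the uniform superposition and output 1 on the first outcome.
    phi = np.array([1.0, (-1.0) ** x1, (-1.0) ** x2]) / np.sqrt(3)
    u = np.ones(3) / np.sqrt(3)
    p_out1 = np.dot(u, phi) ** 2               # Pr[raw algorithm outputs 1]
    nor = 1 - (x1 | x2)
    raw = p_out1 if nor == 1 else 1 - p_out1   # raw success probability
    # Balance: output 0 with probability 1/10, else run the raw algorithm.
    balanced = 0.9 * raw + 0.1 * (1.0 if nor == 0 else 0.0)
    return 1 - balanced                        # error probability

errs = {(x1, x2): nor_error(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
# Raw errors: 0 on input 00, 1/9 on the others; balanced: exactly 1/10 everywhere.
assert all(np.isclose(e, 1 / 10) for e in errs.values())
print(errs)
```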
We will now prove that his error ε = 1/10 is optimal. By the analysis of [13], the
amplitudes of the final state of a 1-query quantum algorithm are degree-1 polynomials in
the input variables, so the acceptance probability of the algorithm is a polynomial

    p(x_1, x_2) = ∑_j |a_j + b_j(−1)^{x_1} + c_j(−1)^{x_2}|²,
²Unlike our 11/14 solution in Lemma 3, the query of the optimal 9/10 algorithm will depend on f. This
means that we cannot directly use this algorithm in the PIR context, as the query could leak information
about f (and hence possibly about i) to the server.
where j ranges over all basis states that would yield a 1 as output, and the a_j, b_j, c_j are
complex numbers that are independent of the input. Let a = (a_j) be the vector of a_j’s,
‖a‖ = √(〈a|a〉) its Euclidean norm, and similarly for b and c. If the algorithm has error
probability ≤ ε, then we have the following four conditions, one for each of the possible
inputs:

    (A) 1 − ε ≤ p(0, 0) = ‖a + b + c‖²
    (B) p(0, 1) = ‖a + b − c‖² ≤ ε
    (C) p(1, 0) = ‖a − b + c‖² ≤ ε
    (D) p(1, 1) = ‖a − b − c‖² ≤ ε
Averaging (B) and (C) gives

    ‖a‖² + ‖b − c‖² ≤ ε,  hence  ‖a‖ ≤ √ε.

The triangle inequality and (D) give

    ‖b + c‖ − ‖a‖ ≤ ‖a − b − c‖ ≤ √ε,  hence  ‖b + c‖ ≤ ‖a‖ + √ε ≤ 2√ε.

Subtracting (D) from (A), and using Cauchy–Schwarz, gives

    1 − 2ε ≤ 4|〈a|b + c〉| ≤ 4‖a‖·‖b + c‖ ≤ 4·√ε·2√ε = 8ε,

hence ε ≥ 1/10.
4.3 Remarks
This is the first classical result that is proved using techniques from quantum
computing in an apparently essential way (at least, we don’t know a classical proof of the
same result). Clearly, it would be very interesting to find other such applications. This
would greatly broaden the relevance of quantum computing and make it less conditional on
whether an actual quantum computer will ever be built.
There are also many interesting open questions related to the tradeoffs between
the various parameters in LDCs. In particular, it is still open whether one can achieve
LDCs of length m = poly(n) using only a constant (or even sublogarithmic) number of queries. We
would like to obtain better lower bounds for q > 2 queries and explore the connections of
LDCs to other combinatorial constructions.
Chapter 5
Private Information Retrieval
Quantum computation and information has had a great impact on the area of
modern cryptography. On one hand, Shor’s algorithm for factoring showed that classical
cryptosystems, such as RSA, are not secure against quantum attacks. On the other hand,
unconditionally secure key distribution is possible when the communication is done over
quantum channels. A lot of research in quantum cryptography is focused on finding more
cryptographic primitives that are possible only in the quantum world.
One very interesting and well-studied cryptographic task is private information
retrieval. A Private Information Retrieval scheme allows a user to retrieve some information
from a database without revealing what information she is interested in. For example, we
can imagine a database of stock prices or of real estate prices, where a user wants to learn
the price of some stock or property. However, she wants to do so without revealing her
interest, because that might result in an increase of the price.
Definition 4 A one-round, (1 − δ)-secure, k-server private information retrieval (PIR)
scheme with recovery probability 1/2 + ε, query size t, and answer size a, consists of a
randomized algorithm (the user) and k deterministic algorithms S_1, . . . , S_k (the servers),
such that

1. On input i ∈ [n], the user produces k t-bit queries q_1, . . . , q_k and sends these to the
respective servers. The jth server sends back an a-bit string a_j = S_j(x, q_j). The user
outputs a bit b depending on i, a_1, . . . , a_k, and her randomness.

2. For all x and i, the probability (over the user’s randomness) that b = x_i is at least
1/2 + ε.

3. For all x and j, the distributions on q_j (over the user’s randomness) are δ-close (in
total variation distance) for different i.

The scheme is called linear if, for every j and q_j, the jth server’s answer S_j(x, q_j) is a
linear combination over GF(2) of the bits of x.
We can straightforwardly generalize these definitions to quantum PIR for the case
where δ = 0 (the server’s state after the query should be independent of i). That is the only
case we need here.
All known upper bounds on PIR have one round of communication, ε = 1/2
(perfect recovery) and δ = 0 (the servers get no information whatsoever about i). Below
we will assume one round and δ = 0 without mentioning this further.
If there is only one server (k = 1), then privacy can be maintained by letting the
server send the whole n-bit database to the user. This takes n bits of communication and
is optimal. If the database is replicated over k ≥ 2 servers, then there exist protocols with
significantly less communication. Chor et al. [23] exhibited a 2-server PIR scheme with
communication complexity O(n^{1/3}) and one with O(n^{1/k}) for k > 2. Ambainis [4] improved
the latter to O(n^{1/(2k−1)}). Beimel et al. [17] improved the communication complexity to
O(n^{2 log log k/(k log k)}). Their results improve the previous best bounds for all k ≥ 3 but not
for k = 2.
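For concreteness, here is a sketch (ours, not from the thesis) of the classic XOR-based 2-server scheme that the more efficient constructions refine. It uses n-bit queries and 1-bit answers: the answers are linear over GF(2), recovery is perfect, and each query taken alone is a uniformly random subset of [n], so δ = 0.

```python
import random

def pir_query(n, i, rng):
    # Server 1 receives a uniformly random subset S1 of [n] (an n-bit query);
    # server 2 receives S1 xor {i}. Each query alone is uniform, so neither
    # server learns anything about i.
    S1 = {j for j in range(n) if rng.random() < 0.5}
    return S1, S1 ^ {i}

def server_answer(x, S):
    # Linear over GF(2): the 1-bit answer is the XOR of the selected bits of x.
    ans = 0
    for j in S:
        ans ^= x[j]
    return ans

rng = random.Random(4)
n = 32
x = [rng.randint(0, 1) for _ in range(n)]
for i in range(n):
    S1, S2 = pir_query(n, i, rng)
    # The two query sets differ exactly in position i, so the answers XOR to x_i.
    assert server_answer(x, S1) ^ server_answer(x, S2) == x[i]
print("perfect recovery with n-bit queries and 1-bit answers")
```

The total communication here is Θ(n), which is why the O(n^{1/3}) schemes of Chor et al. require considerably more structure.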
Katz and Trevisan, and Goldreich et al. established a close connection between
locally decodable codes and private information retrieval (PIR) schemes. Roughly, the
queries in an LDC correspond to the servers in a PIR scheme. In fact, the best known
LDCs for constant q are derived from PIR schemes. No general lower bounds better than
Ω(log n) are known for PIRs with k ≥ 2 servers. For the case of 2 servers, the best known
lower bound was 4 log n, due to Mann [51].
A PIR scheme is linear if for every query that the user makes, the answer bits are
linear combinations of the bits of x. Goldreich et al. [35] proved that linear 2-server PIRs
with t-bit queries and a-bit answers where the user looks only at k predetermined positions
in each answer require t = Ω(n/a^k).
The techniques we developed for the study of Locally Decodable Codes allow us to
reduce classical 2-server PIR schemes with 1-bit answers to quantum 1-server PIRs, which
in turn can be reduced to a random access code [54]. Thus we obtain an Ω(n) lower bound
on the communication complexity for all classical 2-server PIRs with 1-bit answers. We
also extend our lower bound to PIR schemes with larger answers. Previously, such
a bound was known only for linear PIRs (first proven in [23, Section 5.2] for 1-bit answers
and extended to constant-length answers in [35]). Furthermore, our results combined with
those of Katz and Trevisan give a 4.4 log n lower bound for the general 2-server PIR. This
is the first, very modest improvement on the bound of Mann [51]. Apart from giving new
lower bounds for classical PIR, we can also use our 2-to-1 reduction to obtain quantum PIR
schemes that beat the best known classical PIRs.
Subsequently to our work, Beigel, Fortnow, and Gasarch [16] found a classical
proof that a 2-server PIR with perfect recovery (ε = 1/2) and 1-bit answers needs query
length ≥ n − 2. However, their proof does not seem to extend to the case ε < 1/2, or to
larger answers.
5.1 Lower Bounds for Binary 2-Server PIR
To get lower bounds for 2-server PIRs with 1-bit answers, we again give a 2-step
proof: a reduction of 2 classical servers to 1 quantum server, combined with a lower bound
for 1-server quantum PIR.
Theorem 19 If there exists a classical 2-server PIR scheme with t-bit queries, 1-bit an-
swers, and recovery probability 1/2 + ε, then there exists a quantum 1-server PIR scheme
with (t+ 2)-qubit queries, (t+ 2)-qubit answers, and recovery probability 1/2 + 4ε/7.
Proof. The proof is analogous to the proof for locally decodable codes (Theorem 10).
If we let the quantum user employ the same randomness as the classical one, the problem
boils down to computing some f(a1, a2), where a1 is the first server’s 1-bit answer to query
q1, and a2 is the second server’s 1-bit answer to query q2. However, in addition we now
have to hide i from the quantum server. This we do by making the quantum user set up
the (4 + t)-qubit state
(1/√3)(|0〉|0, 0^t〉 + |1〉|1, q1〉 + |2〉|2, q2〉),
where 0^t is a string of t zeroes. The user sends everything but the first register to the server.
The state of the server is now a uniform mixture of |0, 0^t〉, |1, q1〉, and |2, q2〉. By the security
of the classical protocol, |1, q1〉 contains no information about i (averaged over the user’s
randomness), and the same holds for |2, q2〉. Hence the server gets no information about i.
The quantum server then puts (−1)^{aj} in front of |j, qj〉 (j ∈ {1, 2}), leaves |0, 0^t〉
alone, and sends everything back. Note that we need to supply the name of the classical
server j ∈ {1, 2} to tell the server in superposition whether it should play the role of server
1 or 2. The user now has
(1/√3)(|0〉|0, 0^t〉 + (−1)^{a1}|1〉|1, q1〉 + (−1)^{a2}|2〉|2, q2〉).
From this we can compute f(a1, a2) with success probability exactly 11/14, giving overall
recovery probability 1/2 + 4ε/7 as in Theorem 10. □
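The privacy part of the proof can be checked numerically. The sketch below (our illustration) instantiates the classical scheme with the folklore XOR-based 2-server PIR (q2 is q1 with bit i flipped) and verifies that the quantum server's reduced state, a uniform mixture of |0, 0^t〉, |1, q1〉 and |2, q2〉 averaged over the user's randomness, is the same for every i.

```python
import numpy as np

n = 3  # toy database length; queries are n-bit strings

def queries(i, r):
    # Underlying classical scheme (assumed here: folklore XOR-based PIR):
    # q1 is a uniformly random subset r, q2 is r with bit i flipped.
    return r, r ^ (1 << i)

def server_state(i):
    # Diagonal of the server's density matrix over basis states (tag, query),
    # averaged over the user's random string r. Off-diagonal terms vanish
    # because the user keeps the register labelling the three branches.
    diag = np.zeros((3, 1 << n))
    for r in range(1 << n):
        q1, q2 = queries(i, r)
        p = 1.0 / (1 << n)
        diag[0, 0] += p / 3   # |0, 0^t>
        diag[1, q1] += p / 3  # |1, q1>
        diag[2, q2] += p / 3  # |2, q2>
    return diag

states = [server_state(i) for i in range(n)]
assert all(np.allclose(states[0], s) for s in states[1:])
```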
Combining the above reduction with the quantum random access code lower
bound, we obtain the first Ω(n) lower bound that holds for all 1-bit-answer 2-server PIRs,
not just for linear ones.
Theorem 20 A classical 2-server PIR scheme with t-bit queries, 1-bit answers, and recov-
ery probability 1/2 + ε, has t ≥ (1−H(1/2 + 4ε/7))n − 2.
Proof. We first reduce the 2 classical servers to 1 quantum server in the way of Theorem 19.
Now consider the state of the quantum PIR scheme after the user sends his (t + 2)-qubit
message |φi〉:

∑_r √(p_r/3) |r〉(|0〉|0, 0^t〉 + |1〉|1, q1(r, i)〉 + |2〉|2, q2(r, i)〉).
Here the pr are the classical probabilities of the user (these depend on i) and qj(r, i) is the
t-bit query that the user sends to server j in the classical 2-server scheme, if he wants xi
and has random string r. Letting B = {0^{t+1}} ∪ ({1, 2} × {0, 1}^t) be the set of the server's basis states,
we can write |φi〉 as:
|φi〉 = ∑_{b∈B} λb |aib〉|b〉.
Here the |aib〉 are pure states that do not depend on x. The coefficients λb are non-negative
reals that do not depend on i, for otherwise a measurement of b would give the server
information about i, contradicting privacy. The server then tags on the appropriate phase
sbx, which is 1 for b = 0^{t+1} and (−1)^{Sj(x,qj)} for b = (j, qj), j ∈ {1, 2}. This gives
|φix〉 = ∑_{b∈B} λb sbx |aib〉|b〉.
Now the following pure state will be a random access code for x:

|ψx〉 = ∑_{b∈B} λb sbx |b〉,
because a user can unitarily map |0〉|b〉 ↦ |aib〉|b〉 to map |0〉|ψx〉 ↦ |φix〉, from which he
can get xi with probability p = 1/2 + 4ε/7 by completing the quantum PIR protocol. The
state |ψx〉 has t + 2 qubits, hence from Theorem 11 we obtain t ≥ (1 − H(p))n − 2. □
For the special case where the classical PIR outputs the XOR of the two answer
bits, we can improve our lower bound to t ≥ (1−H(1/2+ ε))n− 1. In particular, t ≥ n− 1
in case of perfect recovery (ε = 1/2), which is tight.
5.2 Extensions
5.2.1 Lower Bounds for 2-Server PIR with Larger Answers
We can also extend our linear lower bound on 2-server PIR schemes with answer
length a = 1 (Theorem 20) to the case of 2-server PIR with larger answer length. We use the
translation from PIR to smooth codes given by Lemma 7.1 of Goldreich et al. [35]:
Lemma 5 (GKST) If there is a classical 2-server PIR scheme with query length t, answer
length a, and recovery probability 1/2 + ε, then there is a (2, 3, ε)-smooth code C : {0, 1}^n →
Σ^m for Σ = {0, 1}^a and m ≤ 6 · 2^t.
Going through roughly the same steps as for the proof of Theorem 15, we obtain:
Theorem 21 A classical 2-server PIR scheme with t-bit queries, a-bit answers, and recov-
ery probability 1/2 + ε, has t ≥ Ω(nε²/2^{5a}).
5.2.2 Lower Bounds for General 2-Server PIR
The previous lower bounds on the query length of 2-server PIR schemes were signif-
icant only for protocols with short answer length. Here we slightly improve the best known
bound of 4 log n [51] on the overall communication complexity of 2-server PIR schemes, by
combining our Theorem 21 and Theorem 6 of Katz and Trevisan [38]. We restate their
theorem here for the PIR setting. For the remainder of this section, we assume ε to be
some fixed positive constant.
Theorem 22 (Katz & Trevisan) Every 2-server PIR scheme with t-bit queries and a-bit
answers has
t ≥ 2 log(n/a) − O(1).
We now prove the following lower bound on the total communication C = 2(t+a)
of any 2-server PIR scheme with t-bit queries and a-bit answers:
Theorem 23 Every 2-server PIR scheme has total communication
C ≥ (4.4− o(1)) log n.
Proof. We distinguish three cases, depending on the answer length of the scheme. Let
δ = log log n/ log n.
case 1: a ≤ (0.2 − δ) log n. Then from Theorem 21 we get that C ≥ t = Ω(n^{5δ}) =
Ω((log n)^5).
case 2: (0.2 − δ) log n < a < 2.2 log n. Then from Theorem 22 we have
C = 2(t+ a) > 2 (2 log(n/(2.2 log n))−O(1) + (0.2− δ) log n) = (4.4− o(1)) log n.
case 3: a ≥ 2.2 log n. Then obviously C = 2(t+ a) ≥ 4.4 log n.
□
5.2.3 Upper Bounds for Quantum PIR
The best known LDCs are derived from classical PIR schemes with 1-bit answers
where the output is the XOR of the 1-bit answers that the user receives. By allowing
quantum queries, we can reduce the number of queries by half to obtain more efficient
LQDCs. Similarly, we can also turn the underlying classical k-server PIRs directly into
quantum PIRs with k/2 servers.
Most interestingly, there exists a 4-server PIR with 1-bit answers and communica-
tion complexity O(n^{3/10}) [17, Example 4.2]. This gives us a quantum 2-server PIR scheme
with O(n^{3/10}) communication, improving upon the communication required by the best
known classical 2-server PIR scheme, which has been O(n^{1/3}) ever since the introduction of
PIR by Chor et al. [23]. We can similarly give quantum improvements over the best known
k-server PIR schemes for k > 2. However, this does not constitute a true classical-quantum
separation in the PIR setting yet, since no good lower bounds are known for classical PIR.
We summarize the best known bounds for classical and quantum PIR in Table 5.1.
Servers   PIR complexity    QPIR complexity
k = 1     Θ(n)              Θ(n)
k = 2     O(n^{1/3})        O(n^{3/10})
k = 3     O(n^{1/5.25})     O(n^{1/7})
k = 4     O(n^{1/7.87})     O(n^{1/11})

Table 5.1: Best known bounds on the communication complexity of classical and quantum PIR
5.3 Symmetrically Private Information Retrieval
In its standard form, PIR just protects the privacy of the user: the individual
servers learn nothing about i. But now suppose we also want to protect the privacy of
the data. That is, we don’t want the user to learn anything about x beyond the xi that
he asks for. For example, because the user should pay a fee for every xi that he learns
(pay-per-view), or because the database contains very sensitive information. This setting of
Symmetrically-Private Information Retrieval (SPIR) was introduced by Gertner et al. [33]
and is closely related to oblivious transfer.
Definition 5 A symmetrically-private information retrieval (SPIR) scheme is a PIR scheme
with the additional property of data privacy: the user’s “view” (i.e. the concatenation of
his various states during the protocol) does not depend on xj, for all j ≠ i. We distinguish
between private-randomness and shared-randomness SPIR schemes, depending on whether
the servers individually flip coins or have a shared random coin (hidden from the user). We
also distinguish between honest-user and dishonest-user SPIR, depending on whether data
privacy should hold even when the user deviates from the protocol.
Definition 6 We define quantum versions QPIR and QSPIR of PIR and SPIR, respec-
tively, in the obvious way: the user and the servers are quantum computers, and the com-
munication uses quantum bits; user privacy means that the density matrix of each server is
independent of i at all points in the protocol; data privacy means that the concatenation of
the density matrices that the user has at the various points of the protocol, is independent
of xj, for all j ≠ i. For QSPIR, we still have the distinctions of private/shared-randomness
and honest/dishonest-user.
Gertner et al. [33, Appendix A] showed that SPIR is impossible even if the user
is honest (i.e., follows the protocol) and the servers can individually flip coins. This
no-go result holds no matter how many servers and how many bits and
rounds of communication we allow. Therefore they extended the PIR model by allowing
the servers to share a random string that is hidden from the user, and showed how to turn
any PIR scheme into a SPIR scheme with shared randomness among the servers, at a small
extra communication cost. The resulting schemes are information-theoretically secure even
against dishonest users, and use a number of random bits that is of the same order as the
communication.
One might think that these can be turned into SPIR schemes with deterministic
servers as follows: the user picks a random string, sends it to each of the servers (along with
the queries) to establish shared randomness between them, and then erases (or “forgets”)
his copy of the random string. However, this erasing of the random string by the user is
ruled out by the definition, since the user’s view includes the random string he drew. In
fact, Gertner et al. [33, Appendix A] showed that shared randomness between the servers
is necessary for the existence of classical SPIR (even for multi-round protocols):
Fact 1 For every k ≥ 1, there is no k-server private-randomness SPIR scheme.
Intuitively, the reason is that since the servers have no knowledge of i (by user
privacy), their individual messages need to be independent of all bits of x, including xi, to
ensure data privacy. But since they cannot coordinate via shared randomness, their joint
messages will be independent of the whole x as well, so the user cannot learn xi.
The necessity of shared randomness for classical SPIR schemes is a significant
drawback, since information-theoretic security requires new shared randomness for each
application of the scheme. This either requires a lot of extra communication between the
servers (if new shared randomness is generated for each new application) or much memory
on the part of the servers (if randomness is generated once for many applications, each
server needs to store this).
In this thesis, we study the existence and efficiency of SPIR schemes
in the quantum world, where user and servers have quantum computers and can commu-
nicate qubits. Our main result is that honest-user quantum SPIR schemes exist even in
the case where the servers do not share any randomness. Such honest-user SPIRs without
shared randomness are impossible in the classical world. This gives another example of a
cryptographic task that can be performed with information-theoretic security in the quan-
tum world but that is impossible classically (key distribution [19] is the main example of
this). The communication complexity of our k-server QSPIR schemes is of the same order
as that of the best known classical k-server PIR schemes.
At first sight, one might think this trivial: just take a classical scheme, ensure
data privacy using shared randomness among the servers, and then get rid of the shared
randomness by letting the user entangle the messages to the servers. However, this would
violate data privacy, as the user would now have “access” to the servers’ shared randomness.
In actuality we do something quite different, making use of the fact that the servers can
add phases that multiply out to an overall phase. This phase allows the user to extract xi,
but nothing else.
The notion of an honest user is somewhat delicate, because clearly users cannot be
trusted to follow the protocol in all cases. Still, there are scenarios where the assumption
of an honest user is not unreasonable, for example in pay-per-view systems where the user
accesses the system via some box, attached to his TV, that is sealed or otherwise protected
from tampering. In this case the user cannot deviate from the protocol, but he can still be
curious, trying to observe what goes on inside of his box to try to extract more information
about the database. Our honest-user QSPIRs are perfectly secure against such users.
It would be nice to have SPIR schemes that are secure even against dishonest
users. However, we exhibit a large class of PIR schemes (quantum as well as classical) that
can all be cheated by a dishonest quantum user. Our honest-user QSPIRs fall in this class
and hence are not secure against dishonest users. Fortunately, if we are willing to allow
shared randomness between the servers then the best classical SPIRs can easily be made
secure against even dishonest quantum users: if the servers measure the communication in
the computational basis, the scheme is equivalent to the classical scheme, even if the user
is quantum.
5.3.1 Honest-user QSPIRs from PIR schemes
Our honest-user QSPIR schemes work on top of the PIR schemes recently devel-
oped by Beimel et al. [17]. These, as well as all others known, work as follows: the user
picks a random string r, and depending on i and r, picks k queries q1, . . . , qk ∈ {0, 1}^t. He
sends these to the respective servers, who respond with answers a1, . . . , ak ∈ {0, 1}^a. The
user then outputs

∑_{j=1}^k aj · bj = xi,

where b1, . . . , bk ∈ {0, 1}^a are determined by i and r, and everything is modulo 2.
We will now describe the quantum SPIR scheme. As before, the user picks
r, q1, . . . , qk. In addition, he picks k random strings r1, . . . , rk ∈ {0, 1}^a. He defines
r′j = rj + bj and sets up the following (k + 1)-register state

(1/√2)|0〉|q1, r1〉 · · · |qk, rk〉 + (1/√2)|1〉|q1, r′1〉 · · · |qk, r′k〉.
The user keeps the first 1-qubit register to himself, and sends the other k registers to the
respective servers. The jth server sees a random mixture of |qj, rj〉 and |qj , r′j〉. Since qj
gives no information about i (by the user privacy of the classical PIR scheme) and each of
rj and r′j is individually random, the server learns nothing about i. The jth server performs
the following unitary mapping
|qj, r〉 ↦ (−1)^{aj · r}|qj, r〉,
which he can do because aj only depends on qj and x. The servers then send everything
back to the user; the overall communication is 2k(t+ a) qubits, double that of the original
scheme. The user now has the state
(1/√2)|0〉(−1)^{a1·r1}|q1, r1〉 · · · (−1)^{ak·rk}|qk, rk〉 + (1/√2)|1〉(−1)^{a1·r′1}|q1, r′1〉 · · · (−1)^{ak·r′k}|qk, r′k〉.

Up to an insignificant global phase (−1)^{∑j aj·rj}, this is equal to

(1/√2)|0〉|q1, r1〉 · · · |qk, rk〉 + (1/√2)|1〉(−1)^{∑_{j=1}^k aj·bj}|q1, r′1〉 · · · |qk, r′k〉
= (1/√2)|0〉|q1, r1〉 · · · |qk, rk〉 + (1/√2)|1〉(−1)^{xi}|q1, r′1〉 · · · |qk, r′k〉.
The user can learn xi from this by returning everything except the first qubit to 0, and then
applying the Hadamard transform to the first qubit, which maps (1/√2)(|0〉 + (−1)^{xi}|1〉) ↦ |xi〉.
On the other hand, he can learn nothing else, since the various states of the user during
the protocol never depend on any other xj. Accordingly, we have an honest-user QSPIR
scheme with recovery probability 1 and 2k(t+ a) qubits of communication.
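The phase bookkeeping above can be traced in a few lines of code. The sketch below (our illustration; it instantiates the template with the folklore XOR-based 2-server scheme, so k = 2, a = 1 and b1 = b2 = 1) checks that the relative phase between the |0〉- and |1〉-branch is exactly (−1)^{xi}:

```python
import random

def answer(x, q):
    # 1-bit server answer: parity of the database bits selected by q
    return bin(x & q).count("1") % 2

def relative_phase(x, n, i):
    r = random.randrange(1 << n)
    q1, q2 = r, r ^ (1 << i)            # classical queries (folklore scheme)
    b1 = b2 = 1                         # reconstruction: xi = a1 + a2 mod 2
    r1, r2 = random.randint(0, 1), random.randint(0, 1)
    rp1, rp2 = r1 ^ b1, r2 ^ b2         # r'_j = r_j + b_j
    a1, a2 = answer(x, q1), answer(x, q2)
    phase0 = (a1 * r1 + a2 * r2) % 2    # phase on the |0>-branch
    phase1 = (a1 * rp1 + a2 * rp2) % 2  # phase on the |1>-branch
    # modulo the global phase (-1)^phase0, the |1>-branch carries (-1)^(xi),
    # which the final Hadamard on the kept qubit converts into the bit xi
    return phase0 ^ phase1

n, x = 5, 0b10110
assert all(relative_phase(x, n, i) == (x >> i) & 1 for i in range(n))
```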
Note that nowhere in the protocol do the servers have shared randomness: they
do not start with it, the random strings rj, r′j are not correlated between servers, and the
servers do not end with any shared randomness (in fact they end with nothing). Moreover,
there is hardly any entanglement in the state either: tracing out the one qubit that the user
keeps to himself, the state becomes unentangled.
Plugging in the best known classical PIR schemes, due to [17], gives
Theorem 24 For every k ≥ 2, there exists an honest-user QSPIR (without shared random-
ness) with communication complexity n^{O(log log k/(k log k))}.
Slightly better complexities can be obtained for small k, as stated in the first
column of Table 5.1 in the introduction. For k = 1 our scheme communicates 2n qubits
(just start from a 1-server scheme with query length 0, a1 = x and b1 = ei), for k = 2 it
uses O(n^{1/3}) qubits, for k = 3 it uses O(n^{1/5.25}) qubits, etc. Notice that we cannot use the
(slightly better) k-server QPIR schemes from the second column of Table 5.1, since these
reveal more than 1 bit about x.
5.3.2 Honest-user 2-server QSPIR with Bell states
The QSPIR scheme of the previous section requires communication O(n^{1/3}) for
the case of two servers. Here we present a different scheme based on the Bell states. The
scheme is suboptimal since it requires linear communication and moreover a dishonest user
can learn the entire database. However, it makes use of some interesting properties of
the Bell states and it could be easier to implement in the lab and so we include it for
completeness.
Our scheme works for even n = 2m, but for odd n we can just add a dummy bit
to x to make it even. It relies on three of the Bell states:
|B00〉 = (|00〉 + |11〉)/√2,   |B01〉 = (|01〉 + |10〉)/√2,   |B10〉 = (|00〉 − |11〉)/√2,
and the four Pauli matrices

σ00 = ( 1 0 ; 0 1 ),   σ01 = ( 0 1 ; 1 0 ),   σ10 = ( 1 0 ; 0 −1 ),   σ11 = ( 0 −1 ; 1 0 ).
We first describe our scheme for n = 2. If the user wants to know x1, he builds the following
3-qubit state
(1/√2)(|0〉|B00〉 + |1〉|B01〉),

and if he wants to know x2 he builds

(1/√2)(|0〉|B00〉 + |1〉|B10〉).
He sends the second qubit to server 1 and the third to server 2, keeping the first qubit to
himself. It is easy to see that each server always gets a completely mixed qubit, so the
servers learn nothing about i. Both servers will now apply σ_{x1x2} to the qubit they receive.
That is, they will apply a phase flip if x1 = 1 and a bit flip if x2 = 1. The following
properties are easily verified:
(σ_{x1x2} ⊗ σ_{x1x2})|B00〉 = |B00〉
(σ_{x1x2} ⊗ σ_{x1x2})|B01〉 = (−1)^{x1}|B01〉
(σ_{x1x2} ⊗ σ_{x1x2})|B10〉 = (−1)^{x2}|B10〉
The servers then send their qubit back to the user. By the above properties, if the user
wanted to know x1, then he now has
(1/√2)(|0〉|B00〉 + (−1)^{x1}|1〉|B01〉),

and if he wanted x2 he has

(1/√2)(|0〉|B00〉 + (−1)^{x2}|1〉|B10〉).
From this the user can extract the bit xi of his choice (with probability 1)—and nothing else.
Thus we have an honest-user 2-server QSPIR for n = 2 with 4 qubits of communication.
Somewhat paradoxically, by giving the user more power (namely to send entangled states)
we can protect the data privacy.
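The n = 2 scheme is small enough to simulate directly. The numpy sketch below (our illustration) builds the user's 3-qubit state, lets both servers apply σ_{x1x2} (implemented, up to a global sign that cancels in σ ⊗ σ, as Z^{x1} X^{x2}), and checks that the |1〉-branch picks up exactly the phase (−1)^{xi}:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
# sigma_{x1 x2}: phase flip if x1 = 1, bit flip if x2 = 1
sigma = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

def bell(a, b):
    v = np.zeros(4)
    if b == 0:
        v[0], v[3] = 1, (-1) ** a   # |B_a0> = (|00> +/- |11>)/sqrt(2)
    else:
        v[1], v[2] = 1, 1           # |B_01> = (|01> + |10>)/sqrt(2)
    return v / np.sqrt(2)

e = [np.array([1., 0.]), np.array([0., 1.])]
kron3 = lambda a, b, c: np.kron(a, np.kron(b, c))

for x1 in (0, 1):
    for x2 in (0, 1):
        s = sigma[(x1, x2)]
        for xi, tag in ((x1, (0, 1)), (x2, (1, 0))):   # query for x1 resp. x2
            start = (np.kron(e[0], bell(0, 0)) + np.kron(e[1], bell(*tag))) / np.sqrt(2)
            out = kron3(I, s, s) @ start               # both servers act
            want = (np.kron(e[0], bell(0, 0)) +
                    (-1) ** xi * np.kron(e[1], bell(*tag))) / np.sqrt(2)
            assert np.allclose(out, want)
```

To finish the protocol, the user rotates |B01〉 (resp. |B10〉) back to |B00〉 conditioned on his kept qubit, leaving (|0〉 + (−1)^{xi}|1〉)/√2 on that qubit; a Hadamard then reveals xi with certainty.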
To generalize to arbitrary n = 2m, the user can employ a larger state that involves
m Bell states to extract xi. Namely, if i = 2j − 1 (1 ≤ j ≤ m) then he uses
(1/√2)(|0〉|B00〉^{⊗m} + |1〉|B00〉^{⊗(j−1)}|B01〉|B00〉^{⊗(m−j)}),
and if i = 2j then he uses
(1/√2)(|0〉|B00〉^{⊗m} + |1〉|B00〉^{⊗(j−1)}|B10〉|B00〉^{⊗(m−j)}).
The user sends the left qubit of each of the Bell states to server 1, the right qubit of each
Bell state to server 2, and keeps the first qubit to himself. The servers then apply σ_{x2j−1 x2j}
to the jth qubit they receive (for all 1 ≤ j ≤ m) and send back the result. Using the same
properties as before, it can easily be verified that we just get the appropriate phase-factor
(−1)xi in the |1〉-part of the user’s total state and nothing else. Thus we have a scheme that
works for all n and that simultaneously hides i from the servers and x− xi from an honest
user. In total, the scheme uses 2n qubits of communication: m = n/2 to each server, and
m = n/2 back.
5.3.3 Dishonest-user quantum SPIR schemes
The assumption that the user is honest (i.e., follows the protocol) is somewhat
painful, since the servers cannot rely on this. In particular, a dishonest quantum user can
extract about log n bits of information about x from any honest-user QSPIR where the user's
final state is pure, as follows. Consider such a pure QSPIR scheme, with as many servers
and communication as you like. From the user’s high level perspective, this can be viewed
as a unitary that maps
|i〉|0〉 ↦ |i〉|xi〉|φi,xi〉.
Because of data privacy, the state |φi,xi〉 only depends on i and xi. Therefore by one
application of the QSPIR and some unitary post-processing, the user can erase |φi,xi〉,
mapping
|i〉|0〉 ↦ |i〉|xi〉,
for any i or superposition of i's of his choice. That is, one run of the QSPIR can be used to
make one query to x. Van Dam [26] has shown how one quantum query to x can be used
to obtain Ω(log n) bits of information about x (in the information-theoretic sense that is,
not necessarily log n specific database-bits xj). Accordingly, any pure QSPIR that is secure
against an honest user will leak at least Ω(log n) bits of information about x to a cheating
user. This includes our schemes from the previous section. Even worse, the servers cannot
even detect whether the user cheats, because they will have the same state in the honest
scheme as well as in the cheating scheme.
By Holevo’s theorem [36], the information that a dishonest quantum user can
obtain about x is upper bounded by the total communication of the scheme. This is
sublinear for all k ≥ 2, but still quite a lot. How to achieve perfect privacy against dishonest
quantum users? In fact, for k ≥ 2 servers we can just use a classical SPIR that is secure
against dishonest users (of course, this will be a shared-randomness scheme again). If
we require the servers to measure what they receive in the computational basis, then a
dishonest quantum user cannot extract more information than a classical dishonest user—
that is, nothing except one xi.
The case of SPIR with a single server is different. This primitive is equivalent
to 1-out-of-n Oblivious Transfer (OT) and, when we require perfect information-theoretic
privacy against a dishonest server and user, it is impossible both in the classical and in the
quantum world [48]. Crepeau [25] has exhibited a quantum scheme for so-called 1-out-of-2
OT, which suffices to construct 1-out-of-n OT and is perfectly secure against honest users
and a dishonest server. However, a dishonest user can learn all n bits of the database. In
fact, in any OT scheme which is perfectly secure against dishonest servers, a dishonest user
can always learn all n bits of the database [48]. By replicating the database over more than
one server, we can overcome this difficulty.
5.4 Remarks
The main complexity questions about general PIR schemes are still wide open,
even for the 2-server case if we don't restrict the answer size. The O(n^{1/3}) protocol of [23]
has been the best known for a long time for the 2-server case, and it would be very nice
to show that this is close to optimal. Finally, we exhibited 2-server quantum PIR schemes
that are more efficient than the best known classical ones. It would be very interesting to
improve these further, and to prove that QPIR is more efficient than the best (rather than
the best known) classical PIR schemes.
We have also shown that the best known PIR schemes can be turned into quantum
PIR schemes that are symmetrically private with respect to an honest user, i.e., except for
the bit xi that he asks for, the honest user receives no information whatsoever about the
database x. Shared randomness among the servers is necessary for achieving SPIR in the
classical world. Our quantum SPIR schemes don’t need this.
Rather interestingly, the best known quantum PIR schemes use polynomially less
communication than the best known classical schemes (Table 5.1), but our PIR-to-QSPIR
reduction does not seem to work starting from a quantum PIR system. We leave it as an
open question whether the communication complexity of QSPIR schemes can be signifi-
cantly reduced, either based on the QPIR schemes of [40] or via some other method.
Chapter 6
Weak Coin Flipping
In this chapter, we study another very important cryptographic primitive, Coin
Flipping. We describe a weak coin flipping protocol with small bias and we give an
alternative protocol for strong coin flipping that achieves the best known bias and has a
much simpler analysis.
A weak coin flipping protocol with bias ε, is a two-party communication game in the
style of [71], in which the players start with no inputs, and compute a value cA, cB ∈ {0, 1}
respectively or declare that the other player is cheating. The protocol is deemed successful
if Alice and Bob agree on the outcome, i.e. cA = cB . Then, the outcome 0 is identified
with Alice winning, and 1 with Bob winning. The protocol satisfies the following additional
properties:
1. If both players are honest (i.e., follow the protocol), then they agree on the outcome
of the protocol: cA = cB , and the game is fair: Pr(cA = cB = b) = 1/2, for b ∈ {0, 1}.
2. If one of the players is honest (i.e., the other player may deviate arbitrarily from the
protocol in his or her local computation), then the other party wins with probability
at most 1/2 + ε. In other words, if Bob is dishonest, then Pr(cA = cB = 1) ≤ 1/2 + ε,
and if Alice is dishonest, then Pr(cA = cB = 0) ≤ 1/2 + ε.
In a strong coin flipping protocol, the goal is instead to produce a random bit
that a dishonest player cannot bias towards any particular value 0 or 1. Clearly, any strong coin flipping
protocol with bias ε leads to weak coin flipping with the same bias. We may also derive a
strong coin-flipping protocol from a weak one. A simple way to do this is to have the winner
of the game flip the coin. This results in an increase in the bias of the protocol, however:
if, when one player, say Alice, is dishonest and the other (Bob) honest, the probability of
Alice winning is pw ≥ 1/2 and the probability of Bob winning is pℓ, then the coin will have
bias pw + (pℓ − 1)/2.
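The bias computation in this reduction can be checked with a quick Monte Carlo estimate (our illustration; the values of pw and pℓ are arbitrary):

```python
import random

def strong_flip_outcome(pw, pl):
    # Dishonest Alice wins the weak flip with prob. pw and then declares 0;
    # with prob. pl honest Bob wins and flips a fair coin; otherwise abort.
    u = random.random()
    if u < pw:
        return 0
    if u < pw + pl:
        return random.randint(0, 1)
    return None  # abort

pw, pl = 0.7, 0.25
trials = 200_000
zeros = sum(strong_flip_outcome(pw, pl) == 0 for _ in range(trials))
bias = zeros / trials - 0.5
assert abs(bias - (pw + (pl - 1) / 2)) < 0.01   # bias = pw + (pl - 1)/2
```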
The primitive of quantum strong coin flipping has been studied extensively, e.g.
in [49, 52, 2, 5, 64]. The best known protocol, with bias 1/4 = 0.25, is due to Ambainis [5],
also independently proposed by Spekkens and Rudolph [64]. We present a protocol that
demonstrates that weak coin flipping with bias ≈ 0.239, less than 1/4, is possible. Our
protocol is obtained by modifying the protocol of [5] especially so that the winning party is
checked for cheating. We also describe a related strong coin flipping protocol with bias 1/4
that has the advantage over [5] that the analysis is considerably simpler. A similar analysis
for a restricted class of cheating strategies has been given by [64].
Since the discovery of the abovementioned protocol, we have learnt of several
exciting developments. Kitaev [42] has shown that in any protocol for strong coin flipping,
the product of the probabilities with which each of the players can achieve outcome (say) 0,
has to be at least 1/2. Hence protocols with arbitrarily small bias are not possible; the
bias is always at least 1/√2 − 1/2 ≈ 0.207. (Previous lower bounds applied only to certain
kinds of protocol [5, 64, 55].) Furthermore, Ambainis [6] and Spekkens and Rudolph [65]
have constructed a family of protocols for weak coin flipping, where the product of the
winning probabilities is exactly 1/2. By making the winning probabilities equal, they get
protocols in which each player wins with probability at most 1/√2, and hence the bias
is 1/√2 − 1/2 ≈ 0.207. Subsequently, Mochon [53] constructed a weak coin flipping protocol
with bias 0.192, which is less than 1/√2 − 1/2, hence showing that weak coin flipping is
strictly weaker than strong coin flipping. Whether there exists a weak coin flipping protocol with
arbitrarily small bias is still an open question.
6.1 A game with small bias
Below, we describe a weak coin flipping game that has bias less than 1/4. The game
is derived from the protocol for strong coin flipping of [5], which achieves the previously
best known bias of 1/4.
The protocol is parametrised by α ∈ [0, 1], which we will optimise over later.
For x ∈ {0, 1}, define the state |ψx〉 = |ψx(α)〉 in a Hilbert space Hs ⊗ Ht = C³ ⊗ C³ as:

|ψx〉 = √α |xx〉 + √(1 − α) |22〉.    (6.1)
The protocol has the following rounds:
1. Alice picks a ∈R {0, 1}, prepares the state |ψa〉 in Hs ⊗ Ht (i.e., over a pair of qutrits)
and sends Bob the Ht qutrit.
2. Bob picks b ∈R {0, 1} and sends it to Alice.
3. Alice then reveals the bit a to Bob. Let c = a⊕ b.
If c = 0, then cA ← 0 and she sends the other part of the state |ψa〉 (the Hs
qutrit). Bob checks that the qutrit pair he received in the first and the current
rounds are indeed in state |ψa〉 by measuring according to the orthogonal projection
operators Pa = |ψa〉〈ψa| and I−Pa. If the test is passed, Alice wins (cB ← 0 as well),
else Bob concludes that Alice has deviated from the protocol, and aborts.
4. If, on the other hand, c = a ⊕ b = 1, then cB ← 1, and Bob returns the qutrit he
received in round 1. Alice checks that her qutrits are in state |ψa〉 by measuring
according to Pa, I − Pa. If the test is passed, Bob wins the game (cA ← 1), else,
Alice concludes that Bob has tampered with her qutrit to bias the game, and aborts.
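A minimal simulation of an honest run of the protocol (our illustration; the value α = 1/2 is an arbitrary placeholder, since the optimisation of α comes later):

```python
import random
import numpy as np

alpha = 0.5  # protocol parameter in [0, 1]

def psi(a):
    # |psi_a> = sqrt(alpha)|aa> + sqrt(1 - alpha)|22> on two qutrits (C^9)
    v = np.zeros(9)
    v[3 * a + a] = np.sqrt(alpha)
    v[3 * 2 + 2] = np.sqrt(1 - alpha)
    return v

def honest_round():
    a, b = random.randint(0, 1), random.randint(0, 1)
    state = psi(a)
    # the checking party projects the reunited qutrit pair onto |psi_a>
    assert np.isclose((psi(a) @ state) ** 2, 1.0)   # honest parties never abort
    return a ^ b   # c = a xor b: 0 means Alice wins, 1 means Bob wins

wins = [honest_round() for _ in range(20000)]
assert 0.48 < sum(wins) / len(wins) < 0.52   # the honest game is fair
```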
Theorem 25 The abovementioned protocol is a weak coin flipping protocol with bias 0.239.
The theorem follows from Lemmata 6 and 7, which analyse the situations where either Alice
or Bob cheats. Also, if the two players follow the protocol, the game is fair.
Lemma 6 If Bob is honest, then the probability that Alice wins is Pr(cB = 0) ≤ 1 − α/2.
Proof. We assume w.l.o.g. that a dishonest Alice tries to maximize her probability of
winning, and therefore sends a = b (so that c = a⊕b = 0) in round 3. Her cheating strategy
then takes the following form. Alice uses some ancillary space H and prepares some state
|ψ〉 ∈ H ⊗ Hs ⊗ Ht. She keeps the part of the state in H ⊗ Hs and sends the qutrit part
in Ht to Bob. Let σ denote the density matrix of Bob after the first round of the protocol
(i.e., of the Ht qutrit). Let ρa be the density matrix he would have if Alice had prepared
the honest state |ψa〉:
ρa = TrHs |ψa〉〈ψa|
= α |a〉〈a| + (1− α) |2〉〈2|.
In the second round, Bob replies with a random bit b. To win, Alice sends a = b
to Bob and subsequently tries to pass his check. For that, she performs some
unitary operation Ub on her part of the state, and gets |ψb〉 = (Ub ⊗ I)|ψ〉. After that, she
sends the part of the state in Hs to Bob. The final joint state can be written now as
|ψb〉 = ∑i √pi |i〉|ψi,b〉.
As we see, at the end of the protocol Bob has the density matrix σb = ∑i pi |ψi,b〉〈ψi,b|.
The probability that Alice wins the game is equal to the probability that she passes
Bob’s check at the end of the protocol, i.e. that Bob measures his part of the joint state
and gets |ψb〉 as the outcome:
Pr[Alice wins | Bob sends b] = ∑i pi |〈ψb|ψi,b〉|²
= F(σb, |ψb〉〈ψb|)
≤ F(TrHs(σb), TrHs |ψb〉〈ψb|)
= F(σ, ρb),
where F(ω, τ) = ‖√ω√τ‖²tr is the fidelity of two density matrices. Here, we have used the
fact that the fidelity between two states can only increase when we trace out a part of the
states. Note also that the state TrHs(σb) is equal to σ, which is independent of b.
Finally we have,
Pr[Alice wins] ≤ (1/2)[F(σ, ρ0) + F(σ, ρ1)]
≤ (1/2)[1 + √F(ρ0, ρ1)]
= 1 − α/2.
The second inequality is due to [64, Lemma 2]; see also [55, Lemma 3.2]. Moreover, F(ρ0, ρ1) =
(1 − α)². This completes the proof. □
Note that the analysis above is tight in the sense that Alice can cheat with probability
equal to 1 − α/2. She does this by preparing the state |ψ0〉 + |ψ1〉 (normalised) and
sending one qutrit to Bob in the first round. In the third round, she sends a = b, and the
remaining qutrit from the above state.
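This tight cheating strategy is easy to verify numerically. The sketch below (Python/NumPy; helper names are ours) prepares the normalised state |ψ0〉 + |ψ1〉 and confirms that it passes Bob's projective check with average probability exactly 1 − α/2.

```python
import numpy as np

def ket(i, d=3):
    v = np.zeros(d)
    v[i] = 1.0
    return v

def psi(x, alpha):
    # |psi_x(alpha)> = sqrt(alpha)|xx> + sqrt(1-alpha)|22>
    return (np.sqrt(alpha) * np.kron(ket(x), ket(x))
            + np.sqrt(1 - alpha) * np.kron(ket(2), ket(2)))

for alpha in (0.25, 0.5, 0.75):
    # Alice prepares (|psi_0> + |psi_1>), normalised, instead of an honest |psi_a>.
    cheat = psi(0, alpha) + psi(1, alpha)
    cheat /= np.linalg.norm(cheat)
    # After Bob announces b she claims a = b; she wins iff she passes the
    # projective check {P_b, I - P_b}, i.e. with probability |<psi_b|cheat>|^2.
    win = 0.5 * sum(abs(psi(b, alpha) @ cheat) ** 2 for b in (0, 1))
    assert np.isclose(win, 1 - alpha / 2)
```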
If Bob is the dishonest player, we can show the following bound.
Lemma 7 If Alice is honest, then Pr(cA = 1) ≤ ((1 − α)/√2 + α)².
Proof. A cheating Bob tries to infer the value of the bit a that Alice picked from the qutrit
he receives in round 1, so that he can send b = ā = 1 ⊕ a. However, he has to minimize
the disturbance caused to the overall state |ψa〉. Suppose that Bob applies the unitary
transformation U on Ht⊗H⊗C2 to the qutrit he receives from Alice, some ancillary qubits
initialised to |0〉, and a qubit reserved for his reply, and that:
U : |i〉|0〉|0〉 ↦ |φi,0〉|0〉 + |φi,1〉|1〉.    (6.2)
He measures the last qubit, and sends that across in round 2. If he wins, i.e., if the XOR
of the bit he sent and the one that Alice picked is 1 (b = ā), in round 4 he sends one qutrit
(the Ht part) from the above state across to Alice. (Note that any transformation he may
do after learning that he won, i.e., after round 3, may be incorporated into U .)
Assume that Alice had picked the bit a ∈ {0, 1} in round 1. Then, the joint state
under the above cheating strategy before Bob measures his reply for round 2 is:
√α |a〉(|φa,0〉|0〉 + |φa,1〉|1〉) + √(1 − α) |2〉(|φ2,0〉|0〉 + |φ2,1〉|1〉).
The unnormalised residual state when the outcome of his measurement is ā is thus:
√α |a〉|φa,ā〉 + √(1 − α) |2〉|φ2,ā〉.
Then Bob sends to Alice the Ht part of his state (the states |φa,ā〉, |φ2,ā〉 are in Ht ⊗ H).
After round 4 their joint state is in Hs⊗Ht⊗H, where the Hs⊗Ht part is with Alice and
the H part is with Bob.
So Bob’s probability of winning, given that Alice’s bit is a, may be bounded as:
‖(Pa ⊗ I)(√α |a〉|φa,ā〉 + √(1 − α) |2〉|φ2,ā〉)‖²
= ‖α(〈a| ⊗ I)|φa,ā〉 + (1 − α)(〈2| ⊗ I)|φ2,ā〉‖²
≤ (α‖φa,ā‖ + (1 − α)‖φ2,ā‖)²
≤ (α + (1 − α)‖φ2,ā‖)².
Now, consider Pr[Bob wins], which is the average of the above expression over a ∈ {0, 1}.
This is maximised when ‖φ2,0‖ = ‖φ2,1‖ = 1/√2 (recall from equation (6.2)
that ‖φ2,0‖² + ‖φ2,1‖² = 1). Thus, the probability of Bob winning is bounded by
((1 − α)/√2 + α)²,
as claimed. □
There is a cheating strategy for Bob that achieves the above probability of success.
Bob can use the following transformation on the qutrit he receives and an ancillary qubit:
|2〉|0〉 ↦ |2〉 ⊗ (1/√2)(|0〉 + |1〉), and
|x〉|0〉 ↦ |x〉|x̄〉, for x ∈ {0, 1}.
He then measures the ancilla to get the bit b he is supposed to send in the second round.
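The calculation above can be confirmed numerically. In the sketch below (Python/NumPy; helper names are ours), we assume the ancilla records the complement of the qutrit value on the |0〉, |1〉 branches, so that the measured bit can be sent directly as b; the resulting winning probability matches ((1 − α)/√2 + α)².

```python
import numpy as np

def ket(i, d=3):
    """Computational basis state |i> in C^d."""
    v = np.zeros(d)
    v[i] = 1.0
    return v

def bob_win_prob(alpha):
    """Bob's winning probability under the cheating strategy above."""
    p = 0.0
    for a in (0, 1):  # Alice's uniformly random bit
        # Honest joint state: sqrt(alpha)|aa> + sqrt(1-alpha)|22>.
        # Bob maps |x>_t|0> -> |x>_t|xbar> and |2>_t|0> -> |2>_t(|0>+|1>)/sqrt(2),
        # measures the ancilla and sends the outcome as b; he wins iff b = 1 - a.
        # Unnormalised residual on H_s x H_t after outcome b = 1 - a:
        residual = (np.sqrt(alpha) * np.kron(ket(a), ket(a))
                    + np.sqrt((1 - alpha) / 2) * np.kron(ket(2), ket(2)))
        # Alice's final check projects onto |psi_a>:
        psi_a = (np.sqrt(alpha) * np.kron(ket(a), ket(a))
                 + np.sqrt(1 - alpha) * np.kron(ket(2), ket(2)))
        p += 0.5 * abs(psi_a @ residual) ** 2
    return p

for alpha in (0.25, 0.5, 0.75):
    assert np.isclose(bob_win_prob(alpha), ((1 - alpha) / np.sqrt(2) + alpha) ** 2)
```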
Proof. (Theorem 25)
The proof of the theorem follows from Lemmata 6 and 7. Note also that when
the players follow the protocol, the game is fair. As we vary the parameter α from
0 to 1 Alice’s cheating probability decreases from 1 to 1/2 and Bob’s cheating probability
increases from 1/2 to 1. The bias is minimized when the two probabilities are made equal:
1 − α/2 = ((1 − α)/√2 + α)².
By choosing α to satisfy the above equation, we get a protocol in which no player can win
the game with probability greater than 0.739. The bias is then 0.239 < 1/4. □
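The balancing equation can be solved numerically; a minimal bisection sketch (Python; function names are ours):

```python
import math

def alice_cheat(alpha):
    # Alice's optimal cheating probability (Lemma 6): 1 - alpha/2.
    return 1 - alpha / 2

def bob_cheat(alpha):
    # Bob's optimal cheating probability (Lemma 7): ((1-alpha)/sqrt(2) + alpha)^2.
    return ((1 - alpha) / math.sqrt(2) + alpha) ** 2

# bob_cheat - alice_cheat is increasing, negative at alpha = 0 and positive
# at alpha = 1, so bisect for the balancing alpha.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if bob_cheat(mid) < alice_cheat(mid):
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2
# alpha is approximately 0.521, both cheating probabilities approximately 0.739,
# i.e. bias approximately 0.239 < 1/4.
assert abs(alice_cheat(alpha) - bob_cheat(alpha)) < 1e-9
assert alice_cheat(alpha) < 0.75
```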
6.2 A strong coin flipping protocol
Finally, we present a variant of the strong coin flipping protocol of [5], which has
the same bias, but is much simpler to analyse. The idea behind this protocol also occurs
in the “purification protocol” for bit-commitment in [64]. The protocol has the following
three rounds:
1. Alice picks a ∈R {0, 1}, prepares the state |ψa〉 ∈ Hs ⊗ Ht as in equation (6.1) and
sends Bob the qutrit.
2. Bob picks b ∈R {0, 1} and sends it to Alice.
3. Alice then reveals the bit a to Bob and sends the second half of the state |ψa〉. Bob
checks that the qutrit pair he received are indeed in state |ψa〉. If the test is passed,
Bob accepts the outcome c = a⊕ b, else Bob concludes that Alice deviated from the
protocol, and aborts.
Theorem 26 The abovementioned protocol is a strong coin flipping protocol with bias 1/4.
Proof.
The protocol is fair when both players are honest. The analysis for Bob’s cheating
strategy is the same as in [5] and his cheating probability is at most
(1/2) + ‖ρ0 − ρ1‖tr/4 = (1/2)(1 + α).
The analysis for Alice’s cheating strategy is the same as in Lemma 6 above, and
the same bound of 1 − α/2 holds here as well. This analysis is considerably simpler and does
not require the symmetrization in [5] for the state sent in the first round.
By making the two cheating probabilities equal,
1 − α/2 = (1/2)(1 + α),
we achieve the bias of 1/4 for α = 1/2. □
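As a check, the trace-distance computation behind Bob's bound can be verified numerically (Python/NumPy sketch; helper names are ours):

```python
import numpy as np

def rho(a, alpha):
    """Bob's view after round 1 when Alice is honest: Tr_{H_s}|psi_a><psi_a|."""
    m = np.zeros((3, 3))
    m[a, a] = alpha
    m[2, 2] = 1 - alpha
    return m

for alpha in (0.25, 0.5, 0.75):
    # Trace norm = sum of singular values (nuclear norm).
    tn = np.linalg.norm(rho(0, alpha) - rho(1, alpha), ord='nuc')
    assert np.isclose(tn, 2 * alpha)
    # Bob's cheating bound: 1/2 + ||rho_0 - rho_1||_tr / 4 = (1 + alpha)/2.
    assert np.isclose(0.5 + tn / 4, (1 + alpha) / 2)

# Balancing against Alice's bound 1 - alpha/2 gives alpha = 1/2,
# where both cheating probabilities equal 3/4, i.e. bias 1/4.
assert np.isclose(1 - 0.5 / 2, (1 + 0.5) / 2)
```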
6.3 Remarks
The result of Mochon [53] shows that weak coin flipping is a strictly weaker cryptographic
primitive than strong coin flipping. It is still an open question if it is possible to
have a weak coin flipping protocol with arbitrarily small bias. It would be very interesting
to find such a protocol or provide a lower bound on the bias of any weak coin flipping
protocol.
Chapter 7
Conclusions
In this thesis, we have studied the power of quantum encodings. We showed that in
the model of one-way communication complexity, quantum protocols can be exponentially
more efficient than classical ones, resolving a major open question in this area. Moreover,
we studied Locally Decodable Codes, and proved an exponential lower bound on the size of
2-query LDCs via a quantum argument. Our result answers a long-standing open question
and also introduces quantum information theory as a powerful tool in the study of classical
coding and complexity theory. Finally, we studied quantum cryptography, namely the
primitives of Private and Symmetrically-Private Information Retrieval and Coin Flipping.
Quantum computation and information has become a central research area in
theoretical computer science. Since it is a relatively young field, there are many challenges
ahead of us. We have already described some open problems that are closely related to the
work in this thesis. We will briefly sketch some other research directions in connection to
the contents of our research.
It is a long-standing open question to find superlogarithmic lower bounds for the
depth of Boolean circuits. This reduces to finding lower bounds in the communication
complexity model for some relations defined by Karchmer and Wigderson. Proving such
a lower bound has been notoriously hard and all the known techniques have failed. Can
we use quantum arguments to prove such lower bounds, in a way similar to our result on
Locally Decodable Codes?
In cryptography, the study of primitives like bit commitment and oblivious transfer
has led to the notion of Zero Knowledge. A zero-knowledge proof is an interactive proof
in which the verifier learns nothing from the interaction with the prover, other than the
fact that the assertion being proven is true. It is natural to extend this notion to quantum
zero-knowledge proofs. However, the definition of a quantum zero-knowledge proof is not
immediate; in fact, there is still no definition which is fully satisfactory. It is very intriguing
to provide a correct definition for quantum zero-knowledge and prove interesting properties
of this class.
Our result on Locally Decodable Codes is a prime example of a classical result
that depends on quantum techniques. Since then, there have been other quantum-inspired
classical results, e.g. [3, 1], but this is definitely still the beginning of quantum information as
a powerful tool in the study of computation. We would like to further pursue this direction
and bring quantum and classical computation and information closer together.
Bibliography
[1] S. Aaronson. Lower bounds for local search by quantum arguments. In Proceedings of
36th ACM STOC, pages 464–474, 2004. quant-ph/0307149.
[2] D. Aharonov, A. Ta-Shma, U. Vazirani, and A. Yao. Quantum bit escrow. In
Proceedings of 32nd ACM STOC, pages 705–714, 2000. quant-ph/0004017.
[3] Dorit Aharonov and Oded Regev. Lattice problems in NP intersect coNP. In Proc.
45th Annual IEEE Symp. on Foundations of Computer Science (FOCS), 2004.
[4] A. Ambainis. Upper bound on communication complexity of private information re-
trieval. In Proceedings of the 24th ICALP, volume 1256 of Lecture Notes in Computer
Science, pages 401–407, 1997.
[5] A. Ambainis. A new protocol and lower bounds for quantum coin flipping. In Proceed-
ings of 33rd ACM STOC, pages 134–142, 2001.
[6] A. Ambainis. Personal communication, 2001.
[7] A. Ambainis, A. Nayak, A. Ta-Shma, and U. Vazirani. Quantum dense coding and a
lower bound for 1-way quantum finite automata. In Proceedings of 31st ACM STOC,
pages 376–383, 1999. quant-ph/9804043.
[8] A. Ambainis, L. J. Schulman, A. Ta-Shma, U. V. Vazirani, and A. Wigderson. The
quantum communication complexity of sampling. SIAM J. on Computing, 32(6):1570–
1585, 2003.
[9] L. Babai, L. Fortnow, L. Levin, and M. Szegedy. Checking computations in
polylogarithmic time. In Proceedings of 23rd ACM STOC, pages 21–31, 1991.
[10] L. Babai, L. Fortnow, N. Nisan, and A. Wigderson. BPP has subexponential time
simulations unless EXPTIME has publishable proofs. Computational Complexity, 3(4):307–
318, 1993.
[11] L. Babai and P. G. Kimmel. Randomized simultaneous messages: Solution of a problem
of Yao in communication complexity. In Proceedings of the 12th IEEE Conference on
Computational Complexity (CCC), pages 239–246, 1997.
[12] Z. Bar-Yossef, T. S. Jayram, and I. Kerenidis. Exponential separation of quantum
and classical one-way communication complexity. In Proceedings of 36th ACM STOC,
2004.
[13] R. Beals, H. Buhrman, R. Cleve, M. Mosca, and R. de Wolf. Quantum lower bounds
by polynomials. Journal of the ACM, 48(4):778–797, 2001. Earlier version in FOCS’98.
quant-ph/9802049.
[14] D. Beaver and J. Feigenbaum. Hiding instances in multioracle queries. In Proceedings
of 7th Annual Symposium on Theoretical Aspects of Computer Science (STACS’90),
volume 415 of Lecture Notes in Computer Science, pages 37–48. Springer, 1990.
[15] R. Beigel. The polynomial method in circuit complexity. In Proceedings of the 8th
IEEE Structure in Complexity Theory Conference, pages 82–95, 1993.
[16] R. Beigel, L. Fortnow, and W. Gasarch. Nearly tight bounds for private information
retrieval systems. Technical Report 2002-L001N, NEC Laboratories America, October
2002.
[17] A. Beimel, Y. Ishai, E. Kushilevitz, and J. Raymond. Breaking the O(n^{1/(2k−1)}) barrier
for information-theoretic Private Information Retrieval. In Proceedings of 43rd IEEE
FOCS, pages 261–270, 2002.
[18] E. Ben-Sasson, O. Goldreich, P. Harsha, M. Sudan, and S. Vadhan. Robust PCPs
of proximity, shorter PCPs and applications to coding. In Proceedings of 36th ACM
STOC, 2004.
[19] C. H. Bennett and G. Brassard. Quantum cryptography: Public key distribution and
coin tossing. In Proceedings of the IEEE International Conference on Computers,
Systems and Signal Processing, pages 175–179, 1984.
[20] M. Blum. Coin flipping by telephone: A protocol for solving impossible problems. In
Advances in Cryptology: Report on CRYPTO’81, pages 11–15, 1981.
[21] H. Buhrman, R. Cleve, J. Watrous, and R. de Wolf. Quantum fingerprinting. Physical
Review Letters, 87(16), 2001.
[22] H. Buhrman, R. Cleve, and A. Wigderson. Quantum vs. classical communication and
computation. In Proceedings of the 30th ACM Symposium on Theory of Computing
(STOC), pages 63–68, 1998.
[23] B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan. Private information retrieval.
Journal of the ACM, 45(6):965–981, 1998. Earlier version in FOCS’95.
[24] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons,
Inc., 1991.
[25] C. Crepeau. Quantum oblivious transfer. Journal of Modern Optics, 41(12):2455–2466,
1994.
[26] W. van Dam. Quantum oracle interrogation: Getting all information for almost half
the price. In Proceedings of 39th IEEE FOCS, pages 362–367, 1998. quant-ph/9805006.
[27] A. Deshpande, R. Jain, T. Kavitha, S. Lokam, and J. Radhakrishnan. Better lower
bounds for locally decodable codes. In Proceedings of 17th IEEE Conference on
Computational Complexity, pages 184–193, 2002.
[28] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. A limit on the speed of
quantum computation in determining parity. Physical Review Letters, 81:5442–5444, 1998.
quant-ph/9802045.
[29] J. Feigenbaum and L. Fortnow. Random-self-reducibility of complete sets. SIAM
Journal on Computing, 22(5):994–1005, 1993. Earlier version in Structures’91.
[30] P. Frankl and R. M. Wilson. Intersection theorems with geometric consequences.
Combinatorica, 1(4):357–368, 1981.
[31] P. Gemmell, R. Lipton, R. Rubinfeld, M. Sudan, and A. Wigderson. Self-
testing/correcting for polynomials and for approximate functions. In Proceedings of
23rd ACM STOC, pages 32–42, 1991.
[32] P. Gemmell and M. Sudan. Highly resilient correctors for polynomials. Information
Processing Letters, 43(4):169–174, 1992.
[33] Y. Gertner, Y. Ishai, E. Kushilevitz, and T. Malkin. Protecting data privacy in private
information retrieval schemes. Journal of Computer and Systems Sciences, 60(3):592–
629, 2000. Earlier version in STOC 98.
[34] N. Gisin, R. Renner, and S. Wolf. Linking classical and quantum key agreement: Is
there a classical analog to bound entanglement? Algorithmica, 34(4):389–412, 2002.
Earlier version in Crypto’2000.
[35] O. Goldreich, H. Karloff, L. Schulman, and L. Trevisan. Lower bounds for linear
locally decodable codes and private information retrieval. In Proceedings of 17th IEEE
Conference on Computational Complexity, pages 175–183, 2002. Also on ECCC.
[36] A. S. Holevo. Bounds for the quantity of information transmitted by a quantum
communication channel. Problemy Peredachi Informatsii, 9(3):3–11, 1973. English translation
in Problems of Information Transmission, 9:177–183, 1973.
[37] M. Karchmer and A. Wigderson. Monotone circuits for connectivity require
super-logarithmic depth. SIAM J. on Discrete Mathematics, 3(2):255–265, 1990.
[38] J. Katz and L. Trevisan. On the efficiency of local decoding procedures for error-
correcting codes. In Proceedings of 32nd ACM STOC, pages 80–86, 2000.
[39] I. Kerenidis and A. Nayak. Weak coin flipping with small bias. Information Processing
Letters, Volume 89, Issue 3, pages 131–135, 2004.
[40] I. Kerenidis and R. de Wolf. Exponential lower bound for 2-query locally decodable
codes via a quantum argument. Journal of Computer and Systems Sciences, 2004.
Earlier version in STOC’03. quant-ph/0208062.
[41] I. Kerenidis and R. de Wolf. Quantum symmetrically-private information retrieval.
Information Processing Letters, Volume 90, Issue 3, pages 109–114, 2004.
[42] A. Kitaev. Personal communication, 2001.
[43] H. Klauck. On quantum and probabilistic communication: Las Vegas and one-way
protocols. In Proceedings of the 32nd ACM Symposium on Theory of Computing (STOC),
pages 644–651, 2000.
[44] H. Klauck, A. Nayak, A. Ta-Shma, and D. Zuckerman. Interaction in quantum
communication and the complexity of set disjointness. In Proceedings of 33rd ACM STOC,
pages 124–133, 2001.
[45] I. Kremer. Quantum Communication. Master’s Thesis, The Hebrew University of
Jerusalem, 1995.
[46] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press,
1997.
[47] R. Lipton. New directions in testing. In Vol. 2 of Series in Discrete Mathematics and
Theoretical Computer Science, pages 191–202. ACM/AMS, 1991.
[48] H-K. Lo. Insecurity of quantum secure computations. Physical Review A, 56:1154,
1997. quant-ph/9611031.
[49] H-K. Lo and H. F. Chau. Why quantum bit commitment and ideal coin tossing are
impossible. Physical Review Letters, 78:3410, 1997. quant-ph/9603004.
[50] C. Lu, O. Reingold, S. Vadhan, and A. Wigderson. Extractors: Optimal up to constant
factors. In Proceedings of 35th ACM STOC, pages 602–611, 2003.
[51] E. Mann. Private access to distributed information. Master’s thesis, Technion - Israel
Institute of Technology, Haifa, 1998.
[52] D. Mayers. Unconditionally secure quantum bit commitment is impossible. Physical
Review Letters, 78:3414–3417, 1997. quant-ph/9605044.
[53] C. Mochon. Quantum weak coin-flipping with bias of 0.192. In Proceedings of 45th
IEEE FOCS, 2004. quant-ph/0403193.
[54] A. Nayak. Optimal lower bounds for quantum automata and random access codes. In
Proceedings of 40th IEEE FOCS, pages 369–376, 1999. quant-ph/9904093.
[55] A. Nayak and P. Shor. Bit-commitment-based quantum coin flipping. In Physical
Review A, 67, article 012304, 2003.
[56] I. Newman. Private vs. common random bits in communication complexity.
Information Processing Letters, 39:67–71, 1991.
[57] I. Newman and M. Szegedy. Public vs. private coin flips in one round communication
games. In Proceedings of the 28th ACM Symposium on Theory of Computing (STOC),
pages 561–570, 1996.
[58] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information.
Cambridge University Press, 2000.
[59] J. Radhakrishnan, P. Sen, and S. Venkatesh. The quantum complexity of set
membership. In Proceedings of 41st IEEE FOCS, pages 554–562, 2000. quant-ph/0007021.
[60] R. Raz. Exponential separation of quantum and classical communication complexity.
In Proceedings of the 31st ACM Symposium on Theory of Computing (STOC), pages
358–367, 1999.
[61] P. Sen and S. Venkatesh. Lower bounds in the quantum cell probe model. In Proceedings
of the 28th ICALP, volume 2076, pages 358–369, 2001.
[62] P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms
on a quantum computer. SIAM Journal on Computing, 26(5):1484–1509, 1997. Earlier
version in FOCS’94. quant-ph/9508027.
[63] M. Sipser and D. A. Spielman. Expander codes. IEEE Transactions on Information
Theory, 42:1710–1722, 1996. Earlier version in FOCS’94.
[64] R. W. Spekkens and T. Rudolph. Degrees of concealment and bindingness in quantum
bit commitment protocols. In Physical Review A, 65, article 012310, 2002.
[65] R. W. Spekkens and T. Rudolph. A quantum protocol for cheat-sensitive weak coin
flipping. In Physical Review Letters, 89, article 227901, 2002.
[66] M. Sudan, L. Trevisan, and S. Vadhan. Pseudorandom generators without the XOR
lemma. In Proceedings of 31st ACM STOC, pages 537–546, 1999.
[67] L. Trevisan. Personal communication, September 2002.
[68] S. Wehner and R. de Wolf. Improved lower bounds for locally decodable codes and
private information retrieval. quant-ph/0403140, 2004.
[69] A. C-C. Yao. Some complexity questions related to distributive computing. In
Proceedings of the 11th ACM Symposium on Theory of Computing (STOC), pages 209–213,
1979.
[70] A. C-C. Yao. Lower bounds by probabilistic arguments. In Proceedings of the 24th
Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 420–
428, 1983.
[71] A. C-C. Yao. Quantum circuit complexity. In Proceedings of 34th IEEE FOCS, pages
352–360, 1993.