
The Mathematics of Entanglement - Summer 2013 27 May, 2013

Exercise Sheet 1

Exercise 1

How can we realize a POVM (Q_i) as a projective measurement on a larger Hilbert space?

Solution: Let H be a Hilbert space with POVM (Q_i)_{i=1}^n. We can embed H into H^n, the direct sum of n copies of the original Hilbert space, by sending a state ψ to (ψ, . . . , ψ). If we equip H^n with the “inner product”

⟪(φ_1, . . . , φ_n) ∣ (ψ_1, . . . , ψ_n)⟫ = ∑_i ⟨φ_i∣Q_i∣ψ_i⟩

(which is positive semidefinite since the Q_i ≥ 0), then this embedding preserves the norm. Indeed,

⟪(ψ, . . . , ψ) ∣ (ψ, . . . , ψ)⟫ = ∑_i ⟨ψ∣Q_i∣ψ⟩ = ⟨ψ∣ψ⟩

(since ∑_i Q_i = 1). Now consider the projective measurement (P_j), where P_j is the projector onto the j-th summand in H^n. The associated probabilities are

⟪(ψ, . . . , ψ) ∣ P_j ∣ (ψ, . . . , ψ)⟫ = ⟨ψ∣Q_j∣ψ⟩.

Thus we seem to have found a larger Hilbert space on which we can realize the POVM (Q_j) as a projective measurement.

Now, the above reasoning is slightly flawed, since ⟪. . .⟫ does not necessarily define an inner product: there can be nonzero vectors on which it vanishes. However, these vectors form a subspace N = {(φ_1, . . . , φ_n) ∶ ⟪(φ_1, . . . , φ_n) ∣ (φ_1, . . . , φ_n)⟫ = 0}, and we can fix the argument by replacing H^n with the quotient H̃ = H^n/N.
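As a sanity check, here is a minimal numerical sketch of this construction (an illustration, not part of the exercise), assuming a specific three-outcome qubit POVM (the "trine" measurement) chosen purely for convenience. The map ψ ↦ (√Q_1 ψ, . . . , √Q_n ψ) is one concrete way to realize the quotient construction above: the block-wise inner products reproduce ⟨φ_i∣Q_i∣ψ_i⟩, and the block projectors P_j return the POVM probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative three-outcome qubit POVM ("trine"): Q_k = (2/3)|phi_k><phi_k|
# with |phi_k> at angles 0, 60, 120 degrees; the Q_k sum to the identity.
angles = [0.0, np.pi / 3, 2 * np.pi / 3]
Q = [(2 / 3) * np.outer(v, v.conj())
     for v in (np.array([np.cos(a), np.sin(a)], dtype=complex) for a in angles)]
assert np.allclose(sum(Q), np.eye(2))

def psd_sqrt(A):
    """Square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# Isometry H -> H^n representing psi -> (psi, ..., psi) after the quotient:
# psi is mapped to the direct sum (sqrt(Q_1) psi, ..., sqrt(Q_n) psi).
V = np.vstack([psd_sqrt(Qk) for Qk in Q])        # shape (n*d, d), V^dag V = 1
assert np.allclose(V.conj().T @ V, np.eye(2))    # the embedding preserves norms

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
dilated = V @ psi                                # state in the larger space

d = 2
for j, Qj in enumerate(Q):
    block = dilated[j * d:(j + 1) * d]           # P_j applied to the dilated state
    p_projective = np.vdot(block, block).real    # <<..|P_j|..>>
    p_povm = np.vdot(psi, Qj @ psi).real         # <psi|Q_j|psi>
    assert np.isclose(p_projective, p_povm)
print("projective probabilities on H^n match the POVM probabilities on H")
```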

Exercise 2

Let Σ = {1, . . . , ∣Σ∣} be an alphabet, and p(x) a probability distribution on Σ. Let X_1, X_2, . . . be i.i.d. random variables, each with distribution p(x). In the lecture, typical sets were defined by

T_{p,n,δ} = {(x_1, . . . , x_n) ∈ Σ^n ∶ ∣−(1/n) log p^{⊗n}(x_1, . . . , x_n) − H(p)∣ ≤ δ}.

1. Show that P((X_1, . . . , X_n) ∈ T_{p,n,δ}) → 1 as n → ∞.

Hint: Use Chebyshev’s inequality.

Solution:

P((X_1, . . . , X_n) ∈ T_{p,n,δ}) = P(∣−(1/n) log p^{⊗n}(X_1, . . . , X_n) − H(p)∣ ≤ δ) = P(∣Z − H(p)∣ ≤ δ),

where we have defined the random variable Z ∶= −(1/n) ∑_{i=1}^n log p(X_i).


The expectation of the random variable Z is equal to the entropy,

E(Z) = E(− log p(X_1)) = H(p),

because the X_i are all distributed according to the distribution p(x). Moreover, since the X_i are independent, the variance of Z is given by

Var(Z) = (1/n^2) Var(∑_{i=1}^n log p(X_i)) = (1/n) Var(log p(X_1)).

Using Chebyshev’s inequality, we find that

P((X_1, . . . , X_n) ∈ T_{p,n,δ}) = 1 − P(∣Z − H(p)∣ > δ) ≥ 1 − Var(Z)/δ^2 = 1 − (1/n) Var(log p(X_1))/δ^2 = 1 − O(1/n)

as n → ∞ (for fixed p and δ). (One can further show, although it is not necessary here, that Var(log p(X_1)) ≤ log^2(d), where d = ∣Σ∣.)
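To see the convergence concretely, here is a small Monte Carlo sketch (an illustration, not part of the exercise), assuming a particular four-letter distribution and base-2 logarithms; it estimates P((X_1, . . . , X_n) ∈ T_{p,n,δ}) for growing n and compares it with the Chebyshev lower bound 1 − Var(log p(X_1))/(n δ^2).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative source: four symbols, entropy H(p) = 1.75 bits.
p = np.array([0.5, 0.25, 0.125, 0.125])
H = -(p * np.log2(p)).sum()
delta = 0.1
var_logp = (p * np.log2(p) ** 2).sum() - H ** 2   # Var(log p(X_1)), in bits^2

def prob_typical(n, trials=20000):
    """Monte Carlo estimate of P((X_1, ..., X_n) in T_{p,n,delta})."""
    x = rng.choice(len(p), size=(trials, n), p=p)
    # -(1/n) log p tensor n (x_1, ..., x_n) = -(1/n) sum_i log p(x_i)
    empirical_rate = -np.log2(p[x]).mean(axis=1)
    return np.mean(np.abs(empirical_rate - H) <= delta)

for n in [10, 100, 1000]:
    chebyshev = 1 - var_logp / (n * delta ** 2)
    print(f"n={n:5d}  P(typical) ~ {prob_typical(n):.3f}  Chebyshev bound: {chebyshev:.3f}")
```

For small n the Chebyshev bound is vacuous (it can even be negative), but both the estimate and the bound approach 1 as n grows, as the solution above predicts.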

2. Show that the entropy of the source is the optimal compression rate. That is, show that if we compress to nR bits with R < H(p), then the error probability does not go to zero as n → ∞.

Hint: Pretend first that all strings are typical, and that the scheme uses exactly nR bits.

Solution: Suppose that we have a (deterministic) compression scheme that uses nR bits, where R < H(p). Denote by C_n ∶ Σ^n → {1, . . . , 2^{nR}} the compressor, by D_n ∶ {1, . . . , 2^{nR}} → Σ^n the decompressor, and by A_n = {x⃗ ∶ x⃗ = D_n(C_n(x⃗))} the set of strings that are compressed correctly. Note that A_n has no more than 2^{nR} elements. The probability of success of the compression scheme is given by

p_success = P(X⃗ = D_n(C_n(X⃗))) = P(X⃗ ∈ A_n).

Now,

P(X⃗ ∈ A_n) = P(X⃗ ∈ A_n ∩ T_{p,n,δ}) + P(X⃗ ∈ A_n ∩ T^c_{p,n,δ}) ≤ P(X⃗ ∈ A_n ∩ T_{p,n,δ}) + P(X⃗ ∈ T^c_{p,n,δ}).

For any fixed choice of δ, the second summand P(X⃗ ∈ T^c_{p,n,δ}) converges to zero as n → ∞ (by the previous exercise). Now consider the first summand: the set A_n ∩ T_{p,n,δ} has at most 2^{nR} elements, since this is already true for A_n. Moreover, since all of its elements are typical, each satisfies p(x⃗) ≤ 2^{−n(H(p)−δ)}. It follows that

P(X⃗ ∈ A_n ∩ T_{p,n,δ}) ≤ 2^{nR} · 2^{−n(H(p)−δ)} = 2^{n(R−H(p)+δ)},

which converges to zero if we fix δ such that R < H(p) − δ, which is possible since R < H(p). Thus the probability of success of the compression scheme goes to zero as n → ∞.
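For intuition, here is a small numeric sketch of this converse (an illustration with made-up parameters, not part of the exercise), assuming a Bernoulli source with bias q = 0.11, so H(p) ≈ 0.5 bits, and rate R = 0.4 < H(p). Since any fixed-rate scheme can decode at most 2^{nR} strings correctly, its success probability is at most that obtained by keeping the 2^{nR} most probable strings; even this best case decays toward zero as n grows.

```python
import numpy as np
from math import comb

q = 0.11                                           # illustrative Bernoulli source
H = -(q * np.log2(q) + (1 - q) * np.log2(1 - q))   # about 0.50 bits
R = 0.4                                            # compression rate below the entropy

def best_success_prob(n):
    """P(X in A_n) when A_n holds the 2^{nR} most probable length-n strings."""
    budget = 2.0 ** (n * R)          # how many strings the scheme may keep
    total = 0.0
    # All strings with k ones share probability q^k (1-q)^(n-k); for q < 1/2,
    # fewer ones means more probable, so fill the budget in order of increasing k.
    for k in range(n + 1):
        count = comb(n, k)
        take = min(count, budget)
        total += take * (q ** k) * ((1 - q) ** (n - k))
        budget -= take
        if budget <= 0:
            break
    return total

print(f"H(p) = {H:.3f} bits, R = {R}")
for n in [50, 200, 800]:
    print(f"n = {n:4d}:  best possible success probability ~ {best_success_prob(n):.4f}")
```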
