Hardness Amplification within NP against Deterministic Algorithms


Transcript of Hardness Amplification within NP against Deterministic Algorithms

Page 1: Hardness Amplification within NP against Deterministic Algorithms

Hardness Amplification within NP against Deterministic Algorithms

Parikshit Gopalan

U Washington & MSR-SVC

Venkatesan Guruswami

U Washington & IAS

Page 2: Hardness Amplification within NP against Deterministic Algorithms

Why Hardness Amplification

Goal: Show there are hard problems in NP. Lower bounds out of reach. Cryptography, derandomization require average-case hardness.

Revised Goal: Relate various kinds of hardness assumptions.

Hardness Amplification: Start with mild hardness, amplify.

Page 3: Hardness Amplification within NP against Deterministic Algorithms

Hardness Amplification

Generic Amplification Theorem:

If there are problems in class A that are mildly hard for algorithms in Z, then there are problems in A that are very hard for Z.

A: NP, EXP, PSPACE

Z: P/poly, BPP, P

Page 4: Hardness Amplification within NP against Deterministic Algorithms

PSPACE versus P/poly, BPP

Long line of work:

Theorem: If there are problems in PSPACE that are worst-case hard for P/poly (BPP), then there are problems in PSPACE that are ½ + ε hard for P/poly (BPP).

Page 5: Hardness Amplification within NP against Deterministic Algorithms

NP versus P/poly [O'Donnell]

Theorem: If there are problems in NP that are 1 - δ hard for P/poly, then there are problems in NP that are ½ + ε hard.

Starts from an average-case assumption. Improved by Healy-Vadhan-Viola.

Page 6: Hardness Amplification within NP against Deterministic Algorithms

NP versus BPP [Trevisan '03]

Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems in NP that are ¾ + ε hard.

Page 7: Hardness Amplification within NP against Deterministic Algorithms

NP versus BPP [Trevisan '05]

Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems in NP that are ½ + ε hard.

Buresh-Oppenheim-Kabanets-Santhanam: alternate proof via monotone codes. Optimal up to ε.

Page 8: Hardness Amplification within NP against Deterministic Algorithms

Our results: Amplification against P.

Theorem 1: If there is a problem in NP that is 1 - δ hard for P, then there is a problem in NP which is ¾ + ε hard.

Theorem 2: If there is a problem in PSPACE that is 1 - δ hard for P, then there is a problem in PSPACE which is ¾ + ε hard.

Trevisan: 1 - δ hardness to 7/8 + ε for PSPACE. Goldreich-Wigderson: unconditional hardness for EXP against P.

Here δ = 1/n^100 and ε = 1/(log n)^100.

Page 9: Hardness Amplification within NP against Deterministic Algorithms

Outline of This Talk:

1. Amplification via Decoding.

2. Deterministic Local Decoding.

3. Amplification within NP.

Page 10: Hardness Amplification within NP against Deterministic Algorithms

Outline of This Talk:

1. Amplification via Decoding.

2. Deterministic Local Decoding.

3. Amplification within NP.

Page 11: Hardness Amplification within NP against Deterministic Algorithms

Amplification via Decoding [Trevisan, Sudan-Trevisan-Vadhan]

[Diagram: the truth table of f (mildly hard) is encoded into the truth table of g (wildly hard); decoding a word that approximates g recovers f.]

Page 12: Hardness Amplification within NP against Deterministic Algorithms

Amplification via Decoding.

Case Study: PSPACE versus BPP.

[Diagram: the truth table of f (mildly hard) is encoded into the truth table of g (wildly hard).]

• f's table has size 2^n.

• g's table has size 2^(n^2).

• Encoding computable in space n^100, so g is in PSPACE.

Page 13: Hardness Amplification within NP against Deterministic Algorithms

Amplification via Decoding.

Case Study: PSPACE versus BPP.

[Diagram: a word approximating g is decoded back to f by a BPP decoder.]

• Randomized local decoder.

• List-decoding beyond ¼ error.

Page 14: Hardness Amplification within NP against Deterministic Algorithms

Amplification via Decoding.

Case Study: NP versus BPP.

[Diagram: the truth table of f (mildly hard) is encoded into the truth table of g (wildly hard).]

• g is a monotone function M of f.

• M is computable in NTIME(n^100), so g is in NP.

• M needs to be noise-sensitive.

Page 15: Hardness Amplification within NP against Deterministic Algorithms

Amplification via Decoding.

Case Study: NP versus BPP.

[Diagram: a word approximating g is decoded by a BPP decoder to a word that only approximates f.]

• Randomized local decoder.

• Monotone codes are bad codes.

• Can only approximate f.

Page 16: Hardness Amplification within NP against Deterministic Algorithms

Outline of This Talk:

1. Amplification via Decoding.

2. Deterministic Local Decoding.

3. Amplification within NP.

Page 17: Hardness Amplification within NP against Deterministic Algorithms

Deterministic Amplification.

[Diagram: a word approximating g is decoded back to f, now by a decoder in P.]

Deterministic local decoding?

Page 18: Hardness Amplification within NP against Deterministic Algorithms

Deterministic Amplification.

[Diagram: a table of length 2^n · n^100 is decoded back to f's table of length 2^n, in P.]

• An adversary can force an error on any fixed bit.

• Need near-linear length encoding.

• Monotone codes for NP.

Deterministic local decoding?

Page 19: Hardness Amplification within NP against Deterministic Algorithms

Deterministic Local Decoding …

… up to the unique decoding radius. Deterministic local decoding up to 1 - δ agreement, from ¾ + ε agreement. Monotone code construction with similar parameters.

Main tool: ABNNR codes + GMD decoding. [Guruswami-Indyk, Akavia-Venkatesan]

Open Problem: Go beyond Unique Decoding.

Page 20: Hardness Amplification within NP against Deterministic Algorithms

The ABNNR Construction.

Expander graph.

• 2^n vertices.

• Degree n^100.

Page 21: Hardness Amplification within NP against Deterministic Algorithms

The ABNNR Construction.

[Diagram: the left vertices of the expander are labeled with the bits of f's truth table.]

Expander graph.

• 2^n vertices.

• Degree n^100.

Page 22: Hardness Amplification within NP against Deterministic Algorithms

The ABNNR Construction.

[Diagram: each right vertex collects the bits of its left neighbors into one symbol over a larger alphabet.]

• Start with a binary code with small distance.

• Gives a code of large distance over a large alphabet.

Expander graph.

• 2^n vertices.

• Degree n^100.
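
A minimal sketch of this aggregation step, assuming a toy pseudorandom bipartite graph in place of the actual expander (the graph, sizes, and degree below are illustrative, not the talk's parameters):

```python
import random

N = 16  # number of left and right vertices (2^n in the talk)
D = 4   # right degree (n^100 in the talk)

# Stand-in for the expander: a fixed pseudorandom bipartite graph.
_rng = random.Random(0)
NEIGHBORS = [[_rng.randrange(N) for _ in range(D)] for _ in range(N)]

def abnnr_encode(x):
    """ABNNR aggregation: the j-th codeword symbol is the D-tuple of bits of x
    sitting at the left neighbors of right vertex j (a symbol over {0,1}^D)."""
    return [tuple(x[i] for i in NEIGHBORS[j]) for j in range(N)]

if __name__ == "__main__":
    x = [_rng.randrange(2) for _ in range(N)]
    print(abnnr_encode(x)[:3])  # first three aggregated symbols
```

Two aggregated symbols differ as soon as any one of their D left neighbors differ, which is how a small distance in x becomes a large distance over the bigger alphabet.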

Page 23: Hardness Amplification within NP against Deterministic Algorithms

Concatenated ABNNR Codes.

[Diagram: each ABNNR symbol is further encoded by an inner binary code, giving a binary codeword.]

Inner code of distance ½.

• Binary code of distance ½.

• [Guruswami-Indyk]: ¼ error, not local.

• [Trevisan]: 1/8 error, local.
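
A small sketch of the concatenation step. The Hadamard code is used here purely as an example of a binary code of relative distance ½; the talk only requires some such inner code, not this one in particular.

```python
from itertools import product

def hadamard_encode(symbol):
    """Inner encoding of a D-bit symbol: all its inner products mod 2 with
    the patterns in {0,1}^D (a binary code of relative distance 1/2)."""
    return [sum(s * a for s, a in zip(symbol, pattern)) % 2
            for pattern in product([0, 1], repeat=len(symbol))]

def concatenate(outer_symbols):
    """Concatenated codeword: the inner encoding of each outer (ABNNR) symbol."""
    return [hadamard_encode(sym) for sym in outer_symbols]

if __name__ == "__main__":
    print(concatenate([(1, 0, 0), (0, 1, 0)]))
```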

Page 24: Hardness Amplification within NP against Deterministic Algorithms

Decoding ABNNR Codes.

[Diagram: the received word, shown as one (possibly corrupted) inner block per right vertex.]

Page 25: Hardness Amplification within NP against Deterministic Algorithms

Decoding ABNNR Codes.

[Diagram: each received inner block is decoded to a candidate outer symbol.]

Decode the inner codes.

• Works if the block has error < ¼.

• Fails if the block has error > ¼.

Page 26: Hardness Amplification within NP against Deterministic Algorithms

Decoding ABNNR Codes.

[Diagram: each left bit is recovered by a majority vote over the decoded symbols at its right neighbors.]

Majority vote on the LHS.

[Trevisan]: Corrects a 1/8 fraction of errors.
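
A minimal sketch of this two-step local decoder, with the graph and the inner decoder passed in as stand-in callables (the names and signatures are illustrative, not taken from the paper):

```python
def decode_bit(i, received, right_neighbors_of, inner_decode, position_of):
    """Majority-vote local decoding of left bit i.

    received[j]            -- the (possibly corrupted) inner block at right vertex j
    right_neighbors_of(i)  -- the right vertices adjacent to left vertex i
    inner_decode(block)    -- decodes an inner block to an outer symbol (bit tuple);
                              may answer incorrectly if the block has > 1/4 errors
    position_of(i, j)      -- index of left vertex i inside right vertex j's symbol
    """
    votes = [0, 0]
    for j in right_neighbors_of(i):
        symbol = inner_decode(received[j])
        votes[symbol[position_of(i, j)]] += 1
    return 0 if votes[0] >= votes[1] else 1
```

When only a small fraction of inner blocks decode incorrectly, the expander ensures that most left vertices see a majority of correct votes.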

Page 27: Hardness Amplification within NP against Deterministic Algorithms

GMD decoding [Forney’67]

[Diagram: a received inner block is decoded to an inner codeword with a confidence c ∈ [0,1].]

If decoding succeeds, the block's error rate δ lies in [0, ¼].

• If the error is 0, the confidence is 1.

• If the error is ¼, the confidence is 0.

• c = 1 – 4δ.

Could return a wrong answer with high confidence… but this requires the block's error to be close to ½.
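
A small sketch of this confidence weight (the function name and the clipping to [0,1] are illustrative):

```python
def confidence(received_block, decoded_codeword):
    """Forney-style confidence c = 1 - 4*delta, where delta is the fraction of
    positions where the received block disagrees with the decoded codeword."""
    delta = sum(r != d for r, d in zip(received_block, decoded_codeword)) / len(received_block)
    return max(0.0, 1.0 - 4.0 * delta)

print(confidence([1, 0, 0, 1], [1, 0, 0, 1]))  # no errors  -> 1.0
print(confidence([1, 0, 0, 1], [1, 0, 0, 0]))  # 1/4 errors -> 0.0
```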

Page 28: Hardness Amplification within NP against Deterministic Algorithms

GMD Decoding for ABNNR Codes.

[Diagram: the decoded inner symbols, each carrying a confidence c_j.]

GMD decoding: pick a threshold, erase the low-confidence symbols, decode the rest. Non-local.

Our approach: Weighted Majority.

Thm: Corrects ¼ fraction of errors locally.
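
A minimal sketch of the confidence-weighted vote, again with stand-in callables; here inner_decode is assumed to return the decoded symbol together with its confidence:

```python
def gmd_decode_bit(i, received, right_neighbors_of, inner_decode, position_of):
    """Weighted-majority (GMD-style) local decoding of left bit i.

    inner_decode(block) returns (symbol, c): the closest inner codeword's message
    and its confidence c = max(0, 1 - 4*delta).  The other arguments are as in
    the plain majority-vote sketch above.
    """
    weight = [0.0, 0.0]
    for j in right_neighbors_of(i):
        symbol, c = inner_decode(received[j])
        weight[symbol[position_of(i, j)]] += c  # each neighbor votes with its confidence
    return 0 if weight[0] >= weight[1] else 1
```

Low-confidence blocks contribute little to the vote, playing the role of GMD's erasures while keeping the decoder local and deterministic.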

Page 29: Hardness Amplification within NP against Deterministic Algorithms

GMD Decoding for ABNNR Codes.

[Diagram: the left bits are recovered from the decoded symbols and their confidences.]

Thm: GMD decoding corrects a ¼ fraction of errors.

Proof Sketch:

1. Globally, good nodes have more confidence than bad nodes.

2. Locally, this holds for most neighborhoods of vertices on LHS.

Proof similar to Expander Mixing Lemma.

Page 30: Hardness Amplification within NP against Deterministic Algorithms

Outline of This Talk:

1. Amplification via Decoding.

2. Deterministic Local Decoding.

3. Amplification within NP.

• Finding an inner monotone code [BOKS].

• Implementing GMD decoding.

Page 31: Hardness Amplification within NP against Deterministic Algorithms

The BOKS construction.

[Diagram: a message x of length k is encoded as T(x), of length kr.]

• T(x): Sample an r-tuple from x, apply the Tribes function.

• If x, y are balanced and dist(x, y) > δ, then dist(T(x), T(y)) ≈ ½.

• If x, y are very close, so are T(x), T(y).

• Decoding: brute force.
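
A minimal sketch of such an encoding, with the r-tuples sampled pseudorandomly purely for illustration (the actual construction fixes them more carefully, and the parameters below are arbitrary):

```python
import random

def tribes(bits, block_size):
    """The monotone Tribes function: an OR of ANDs over consecutive blocks."""
    return int(any(all(bits[i:i + block_size])
                   for i in range(0, len(bits), block_size)))

def monotone_encode(x, num_output_bits, r, block_size, seed=0):
    """Each output bit applies Tribes to an r-tuple of bits sampled from x."""
    rng = random.Random(seed)
    return [tribes([x[rng.randrange(len(x))] for _ in range(r)], block_size)
            for _ in range(num_output_bits)]

if __name__ == "__main__":
    rng = random.Random(1)
    x = [rng.randrange(2) for _ in range(64)]  # a message (ideally balanced)
    print(monotone_encode(x, num_output_bits=10, r=12, block_size=3))
```

Because Tribes is monotone, the whole map is monotone, which is what keeps the encoded function inside NP.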

Page 32: Hardness Amplification within NP against Deterministic Algorithms

GMD Decoding for Monotone codes.

[Diagram: the concatenated codeword of f, with each inner block decoded and assigned a confidence c_j.]

• Start with a balanced f, apply concatenated ABNNR.

• Inner decoder returns closest balanced message.

• Apply GMD decoding.

Thm: Decoder corrects ¼ fraction of error approximately.

• Analysis becomes harder.
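
A brute-force sketch of the inner decoding step, with the inner monotone encoder passed in as a parameter (the name and interface are illustrative):

```python
from itertools import combinations

def closest_balanced_message(block, k, encode):
    """Return the balanced k-bit message (exactly k/2 ones) whose encoding,
    under the given inner monotone encoder, is closest in Hamming distance
    to the received block.  Brute force over all balanced messages."""
    best, best_dist = None, float("inf")
    for ones in combinations(range(k), k // 2):
        msg = [1 if i in ones else 0 for i in range(k)]
        dist = sum(a != b for a, b in zip(encode(msg), block))
        if dist < best_dist:
            best, best_dist = msg, dist
    return best
```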

Page 33: Hardness Amplification within NP against Deterministic Algorithms

GMD Decoding for Monotone codes.

[Diagram: the inner blocks of the received word are decoded to their closest balanced messages.]

• Inner decoder finds the closest balanced message.

• Even with 0 errors, the decoder need not return the original message.

• Good nodes have few errors, Bad nodes have many.

Thm: Decoder corrects ¼ fraction of error approximately.

Page 34: Hardness Amplification within NP against Deterministic Algorithms

Beyond Unique Decoding…

Deterministic local list-decoder: a set L of machines such that, for any received word, every nearby codeword is computed by some M ∈ L.

Is this possible?