Search to Decision Reductions for Knapsacks and LWE
October 3, 2011
Daniele Micciancio, Petros Mol
UCSD Theory Seminar
Number Theoretic Cryptography

• Number theory: standard source of hard problems
  - Factoring: given N = pq, find p (or q) (p, q: large primes)
  - Discrete Log: given g and g^x (mod p), find x
• Very influential over the years
• Extensively studied (algorithms & constructions)
• Two major concerns
  i. How do we know that a "random" N is hard to factor?
  ii. Algorithmic progress
     - subexponential (classical) algorithms
     - polynomial-time quantum algorithms
Cryptography from Learning Problems

• secret s; integers n, q (public)
• Samples: (a, ⟨a, s⟩ mod q) for random a
• Finding s is easy (linear algebra): just need ~ n samples
New Problem: Learning With Errors (LWE)

• secret s; integers n, q (public)
• Samples: (a, ⟨a, s⟩ + e mod q), with a random and e a small error from a known distribution
• With noise, finding s is possibly much harder
• Compactly: (A, b) with b = A·s + e (mod q), where A is a random m × n matrix, s the secret, and e a small error vector
The many interpretations of LWE

• Algebraic: solving noisy random linear equations
• Geometric: Bounded Distance Decoding in lattices
• Learning theory: learning linear functions over Z_q under random "classification" noise
• Coding theory: decoding random linear q-ary codes
LWE Background

• Introduced by Regev [R05]
• q = 2, Bernoulli noise → Learning Parity with Noise (LPN)
• Extremely successful in cryptography:
  - IND-CPA public-key encryption [Regev05]
  - Injective trapdoor functions / IND-CCA encryption [PW08]
  - Strongly unforgeable signatures [GPV08, CHKP10]
  - (Hierarchical) identity-based encryption [GPV08, CHKP10, ABB10]
  - Circular-secure encryption [ACPS09]
  - Leakage-resilient cryptography [AGV09, DGK+10, GKPV10]
  - (Fully) homomorphic encryption [GHV10, BV11]
Why LWE?

• Worst-case/average-case reduction
  - Solving LWE* (an average-case problem) is at least as hard as solving certain hard lattice problems in the worst case
  (*for a specific error distribution)
• High resistance to algorithmic attacks
  - no quantum attacks known
  - no subexponential algorithms known
• Rich and intuitive structure
  - linear operations over fields (or rings)
  - certain homomorphic properties
• Ability to embed a trapdoor
• Wealth of cryptographic applications
LWE: Search & Decision

Public parameters: n (size of the secret), m (#samples), q (modulus), χ (error distribution)

• Search — Given: (A, b = As + e). Goal: find s (or e).
• Decision — Given: (A, b). Goal: decide whether b = As + e or b is uniform.
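The two instance types above can be generated with a short sketch. This is a toy sampler, not the talk's construction: the parameter names and the uniform-interval noise (a stand-in for the actual error distribution χ) are illustrative assumptions.

```python
import random

def lwe_samples(n, m, q, s, noise_bound):
    """Toy LWE sampler: returns (A, b) with b = A*s + e (mod q),
    where A is a uniformly random m x n matrix and each e_i is
    uniform in {-noise_bound, ..., noise_bound} (a stand-in for chi)."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randrange(-noise_bound, noise_bound + 1) for _ in range(m)]
    b = [(sum(aj * sj for aj, sj in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b
```

A search instance is (A, b) with the task of recovering s; a decision instance replaces b by a uniform vector with probability 1/2.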
S-to-D: Worst-Case vs Average-Case

• Clique (search): given a graph G and an integer k, find a clique of size k in G, or say none exists.
• Clique (decision): given a graph G and an integer k, output YES if G has a clique of size k and NO otherwise.

Theorem: Search ≤ Decision
Proof: assume a decider D.
  for v in V:
    if D(G \ v, k) = YES then G ← G \ v
  return G

Crucial: D is a worst-case (perfect) decider
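The vertex-stripping proof can be run as code. A minimal sketch, with a brute-force decider standing in for the perfect oracle D (all function names are illustrative):

```python
from itertools import combinations

def has_clique(adj, k):
    """Perfect decider D: does the graph contain a clique of size k?
    adj maps each vertex to its set of neighbours. Brute force, for
    illustration only."""
    return any(all(v in adj[u] for u, v in combinations(c, 2))
               for c in combinations(list(adj), k))

def remove_vertex(adj, v):
    """G \\ v: drop vertex v and all edges touching it (fresh copy)."""
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

def find_clique(adj, k):
    """Search-to-decision: delete every vertex whose removal keeps the
    decider's answer YES. Each survivor lies in every remaining k-clique,
    so what survives is exactly one k-clique."""
    assert has_clique(adj, k)
    for v in list(adj):
        smaller = remove_vertex(adj, v)
        if has_clique(smaller, k):
            adj = smaller
    return set(adj)
```

For example, on a triangle {1, 2, 3} with a pendant vertex 4, `find_clique(..., 3)` strips vertex 4 and returns the triangle.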
S-to-D: Worst-Case vs Average-Case

This talk: D is an average-case (imperfect) decider,

Pr[D(A, As + e) = YES] − Pr[D(A, t) = YES] = δ > 1/poly(n)

with the probability taken over sampling the instance and D's randomness.
Search-to-Decision reductions (S-to-D)

Why do we care?
- all LWE-based constructions rely on decisional LWE (the strong indistinguishability flavor of the security definitions)
- the hardness of search problems is better understood

S-to-D reductions let us say: "Primitive Π is ABC-secure assuming search problem P is hard"
Our results

• Powerful and usable criteria to establish search-to-decision equivalence for general classes of knapsack functions
• Use known techniques from Fourier analysis in a new context; ideas potentially useful elsewhere
• Toolset for studying search-to-decision reductions for LWE with polynomially bounded noise
  - subsumes and extends previously known reductions
  - reductions are in addition sample-preserving
Bounded knapsack functions over groups

Parameters:
- integer m
- finite abelian group G
- set S = {0, …, s − 1} of integers, s = poly(m)

(Random) knapsack family:
- Sampling: g = (g_1, …, g_m) chosen uniformly from G^m
- Evaluation: f_g(x) = Σ_i x_i · g_i for x ∈ S^m

Example, (random) modular subset sum: G = Z_N, S = {0, 1}
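The modular subset-sum example can be sketched in a few lines (function names are illustrative, not from the talk):

```python
import random

def sample_knapsack(m, N):
    """Sample a random knapsack function over G = Z_N:
    m uniformly random group elements g_1, ..., g_m."""
    return [random.randrange(N) for _ in range(m)]

def evaluate(g, x, N):
    """f_g(x) = sum_i x_i * g_i (mod N), with each x_i in S = {0, ..., s-1}.
    For S = {0, 1} this is exactly (random) modular subset sum."""
    return sum(xi * gi for xi, gi in zip(x, g)) % N
```

For instance, with weights g = [3, 5, 7] over Z_10 and x = [1, 0, 1], the value is (3 + 7) mod 10 = 0.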
Knapsack functions: Computational problems

Notation: a family of knapsacks over G, with an input distribution over S^m and the description g public.

• Invert (search) — Input: (g, f_g(x)). Goal: find x.
• Distinguish (decision) — Input: samples from either (g, f_g(x)) or (g, u) with u uniform over G. Goal: label the samples.

Glossary:
- if the decision problem is hard, the function is pseudorandom (PsR)
- if the search problem is hard, the function is One-Way
Search-to-Decision: Known results

Decision is as hard as search when…

[Fischer, Stern 96] (syndrome decoding): G a vector group Z_2^k; input distribution uniform over all m-bit vectors with Hamming weight w.

[Impagliazzo, Naor 89] ((random) modular subset sum): G a cyclic group Z_N; input distribution uniform over {0, 1}^m.
Our contribution: S-to-D for general knapsack

Setting: a knapsack family with range G and input distribution over S^m, s = poly(m).

Main Theorem (informal): One-Way ⇔ PsR, provided a related knapsack family (the "+" variant on the slide) is itself PsR.

The extra PsR condition is much less restrictive than it seems:
✔ in most interesting cases it holds in a strong information-theoretic sense
S-to-D for general knapsack: Examples

One-Way ⇔ PsR for:
• any group G and any distribution over …
• any group G with prime exponent and any distribution

Subsumes [IN89, FS96] and more.
And many more, using known information-theoretic tools (LHL, entropy bounds, etc.)
Proof overview

Reminder — Inverter: Input: g, g·x. Goal: find x. Distinguisher: Input: g, b. Goal: distinguish b = g·x from uniform.

Plan: Inverter ≤ Predictor ≤ Distinguisher, where
Predictor — Input: g, g·x, r. Goal: find ⟨x, r⟩ (mod t).

• Approach not (entirely) new [GL89, IN89]
• Making it work in a general setting requires more advanced machinery and new technical ideas
Proof Sketch: Step 1 — Inverter ≤ Predictor

Inverter — Input: f, f(x). Goal: find x.
Predictor — Input: f, f(x), r. Goal: find ⟨x, r⟩ (mod t).

This step holds for any function with domain S^m, not just knapsacks. What is needed are general conditions for inverting given noisy predictions of ⟨x, r⟩ (mod t), for possibly composite t:
- Goldreich–Levin: t = 2
- Goldreich, Rubinfeld, Sudan: t prime
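To see why predictions of ⟨x, r⟩ (mod t) suffice, here is the trivial special case of a *perfect* predictor (a toy sketch, not the reduction itself):

```python
def invert_with_perfect_predictor(predict, m, t):
    """Toy special case: a perfect predictor for <x, r> (mod t) determines x
    via unit-vector queries, provided t >= s (so that x_i mod t = x_i).
    The actual Step-1 reduction must handle *noisy* predictors instead,
    which is where the Goldreich-Levin-style machinery comes in."""
    x = []
    for i in range(m):
        r = [0] * m
        r[i] = 1              # query the i-th unit vector: <x, e_i> = x_i
        x.append(predict(r) % t)
    return x
```

With noise, single queries are useless; the reduction instead recovers x as a heavy Fourier coefficient, as described next.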
Digression: Fourier Transform

- Basis functions: χ_α(x) = ω_t^⟨α, x⟩, where ω_t = e^(2πi/t)
- Fourier transform: ĥ(α) = E_x[h(x) · χ_α(x)*]
- Fourier representation: h(x) = Σ_α ĥ(α) · χ_α(x)
- Energy of a coefficient: |ĥ(α)|²

Digression: Learning heavy Fourier coefficients [AGS03]

LHFC is given query access to h (pairs x_i, h(x_i)) and a threshold τ. It outputs a list L such that:
- if |ĥ(α)|² ≥ τ, then L will most likely include α
- LHFC runs in time poly(m, 1/τ)
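The definitions above can be checked on the one-dimensional case. A brute-force sketch (enumeration stands in for the query-efficient LHFC algorithm; names are illustrative):

```python
import cmath

def fourier_coeffs(h, t):
    """Brute-force Fourier transform of h: Z_t -> C (the m = 1 case):
    hhat(a) = E_x[ h(x) * conj(chi_a(x)) ],  chi_a(x) = omega_t^(a*x)."""
    omega = cmath.exp(2j * cmath.pi / t)
    return [sum(h(x) * (omega ** (a * x)).conjugate() for x in range(t)) / t
            for a in range(t)]

def heavy_coeffs(h, t, tau):
    """All a with energy |hhat(a)|^2 >= tau. LHFC achieves this with only
    query access to h, in time poly(m, 1/tau); here we simply enumerate."""
    return [a for a, c in enumerate(fourier_coeffs(h, t)) if abs(c) ** 2 >= tau]
```

For h(x) = χ_3(x) over Z_8, the only heavy coefficient is α = 3, with energy 1.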
Inverting ≤ Predicting

Inverter — Input: f, f(x). Goal: find x.
Predictor — Input: f, f(x), r. Goal: guess ⟨x, r⟩ (mod t).

Quality of the predictor, via its error distribution e = guess − ⟨x, r⟩ (mod t):
• perfect
• imperfect (but useful)
• imperfect (and useless)
measured by the bias |E[ω_t^e]|.
Inverting ≤ Predicting (main idea)

Theorem: given a predictor with t ≥ s, running time T_P and bias ε, there is an inverter with running time poly(m, 1/ε) · T_P and success probability c · ε.

Proof idea: on input (f, f(x)), the inverter runs LHFC, answering its queries by a small reduction to the predictor. When the predictor's bias is ε, the unknown x has energy roughly ε² in the resulting function h, so LHFC will most likely include x in its list.
Proof Sketch: Step 2 — Predictor ≤ Distinguisher

Predictor — Input: g, g·x, r. Goal: find ⟨x, r⟩ (mod t).
Distinguisher — Input: g, b. Goal: distinguish b = g·x from uniform.

• Reduction specific to knapsack functions
• Proof follows the outline of [IN89], but the technical details are very different
Predicting ≤ Distinguishing

Proof idea:
• The predictor makes an initial guess for ⟨x, r⟩ (mod t)
• It uses the guess to create an input to the distinguisher
• If the distinguisher outputs "knapsack", the predictor outputs the guess
• Otherwise, it revises the guess

Key property:
• Correct guess → the crafted input is distributed like a knapsack sample
• Incorrect guess → the crafted input is "closer" to uniform

• [IN89] used the same idea for subset sum
• For arbitrary groups the proof is significantly more involved
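The guess-and-revise loop can be sketched as follows. The `distinguisher` argument is a hypothetical oracle (the crafting of its input, elided here, is the technically involved part); in the real proof the oracle is imperfect, so the loop must be randomized and repeated:

```python
def predict_with_distinguisher(g, gx, r, t, distinguisher):
    """Sketch of the [IN89]-style predictor: try each candidate value v of
    <x, r> (mod t); build an instance from v (done inside the hypothetical
    `distinguisher` stub here) and keep the v for which the distinguisher
    says the instance looks like a knapsack sample."""
    for v in range(t):
        if distinguisher(g, gx, r, v) == "knapsack":
            return v          # distinguisher accepts: output this guess
    return None               # cannot happen for a perfect distinguisher
```

With a perfect distinguisher, the loop returns the correct inner product; imperfectness is exactly what the key property above is needed to control.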
Our results (recap)

• Powerful and usable criteria to establish search-to-decision equivalence for general classes of knapsack functions
• Use known techniques from Fourier analysis in a new context; ideas potentially useful elsewhere
• Toolset for studying search-to-decision reductions for LWE with polynomially bounded noise
  - subsumes and extends previously known reductions
  - reductions are in addition sample-preserving
What about LWE?

Given an LWE instance (A, b = As + e) with A an m × n matrix:
• Let G be the parity-check matrix for the code generated by A, so that G·A = 0 (mod q)
• Then G·b = G(As + e) = G·e: a knapsack instance with weights g_1, …, g_m (the columns of G) and unknown input the LWE error e
• If A is "random", G is also "random"
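A minimal sketch of this transformation for prime q (pure Python; the Gaussian-elimination helper and all names are illustrative assumptions, not the talk's construction):

```python
def nullspace_mod(M, q):
    """Basis of { v : M v = 0 (mod q) } for prime q, via Gaussian elimination
    to reduced row echelon form."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if M[i][c] % q), None)
        if p is None:
            continue                          # free column
        M[r], M[p] = M[p], M[r]
        inv = pow(M[r][c], q - 2, q)          # inverse mod prime q (Fermat)
        M[r] = [x * inv % q for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % q:
                f = M[i][c]
                M[i] = [(x - f * y) % q for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    basis = []
    for fc in (c for c in range(cols) if c not in pivots):
        v = [0] * cols
        v[fc] = 1
        for i, pc in enumerate(pivots):
            v[pc] = -M[i][fc] % q
        basis.append(v)
    return basis

def matvec(M, v, q):
    return [sum(a * b for a, b in zip(row, v)) % q for row in M]

def lwe_to_knapsack(A, b, q):
    """Rows of G span the left kernel of A (G A = 0 mod q), i.e. G is a
    parity-check matrix for the code generated by A. Then
    G b = G(A s + e) = G e: a knapsack instance with unknown input e."""
    At = [list(col) for col in zip(*A)]       # v^T A = 0  <=>  A^T v = 0
    G = nullspace_mod(At, q)
    return G, matvec(G, b, q)
```

Since m > n, the left kernel is nontrivial, and the secret s drops out entirely: G·b depends only on the error e.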
What about LWE?

The transformation works in the other direction as well. Putting all the pieces together:

(A, As + e)  ≤  (G, Ge)  ≤  (G′, G′e)  ≤  (A′, A′s′ + e)
Search LWE      Search knapsack   Decision knapsack   Decision LWE

with the middle step given by the S-to-D reduction for knapsacks.
LWE Implications

• LWE reductions follow from knapsack reductions over the groups Z_q^k
• All known search-to-decision results for LWE/LPN with bounded error [BFKL93, R05, ACPS09, KSS10] follow as a direct corollary
• Search-to-decision for new instantiations of LWE
LWE: Sample-Preserving S-to-D

All our reductions are in addition sample-preserving: if we can distinguish m LWE samples from uniform with 1/poly(n) advantage, we can find s with 1/poly(n) probability given the same m LWE samples. (Previous reductions needed poly(m) samples for the search problem in order to use an m-sample distinguisher.)

Caveat: the inverting probability goes down (this seems unavoidable).
Why care about #samples?

• LWE-based schemes often expose a certain number of samples, say m
• With a sample-preserving S-to-D we can base their security on the hardness of search LWE with m samples
• Concrete algorithmic attacks against LWE [MR09, AG11] are sensitive to the number of exposed samples
  - for some parameters, LWE is completely broken by [AG11] if the number of given samples is above a certain threshold
Open problems

Sample-preserving reductions for:
1. LWE with unbounded noise
   - used in various settings [Pei09, GKPV10, BV11b, BPR11]
   - reductions are known [Pei09] but are not sample-preserving
2. Ring-LWE
   - samples (a, a·s + e) where a, s, e are drawn from R = Z_q[x]/⟨f(x)⟩
   - non-sample-preserving reductions are known [LPR10]