Transcript of: Probabilistic Computations
Probabilistic Computations
Extend the notion of "efficient computation" beyond polynomial-time Turing machines.
We will still consider only machines whose running time is bounded by a fixed polynomial in the input length.
7.1 Random vs. Non-deterministic computations
In an NDTM, even a single wizard's witness is enough to cause the input to be accepted.
In the randomized model, we require the probability that such a witness is detected to be larger than a certain threshold.
There are two approaches to randomness, online and offline, which we will show to be equivalent.
Online approach
NDTM transition: (<state>, <symbol>, <guess>) → (<state>, <move>, <symbol>)
Randomized TM transition: (<state>, <symbol>, <toss>) → (<state>, <move>, <symbol>)
NDTM accepts if a “good” guess exists - a single accepting computation is enough.
A randomized TM accepts if sufficiently many accepting computations exist.
7.2 Probabilistic Turing Machines
Probabilistic TMs have an “extra” tape: the random tape
M(x) denotes the random variable M(x, r), where x is the content of the input tape and r is the content of the random tape; we study probabilities of the form Pr_r[M(x, r) = …].
[Figure: a "standard" TM has only an input tape; a probabilistic TM has, in addition, a random tape.]
Does It Really Capture the Notion of Randomized Algorithms?
It doesn't matter if you toss all your coins in advance or throughout the computation…
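A toy illustration of this equivalence (the helper names are made up): drawing coins on demand and reading pre-tossed coins from a "random tape" induce exactly the same output distribution, because the machine consumes the bits in the same order either way.

```python
import random

def toss_online(n, rng):
    # Toss coins one at a time, while the computation runs.
    return sum(rng.getrandbits(1) for _ in range(n))

def toss_offline(n, tape):
    # Read coins that were all tossed in advance onto a "random tape".
    return sum(tape[:n])
```

Seeding both sources identically makes the two computations produce the same result run for run.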
Online approach
A random online TM creates a probability space over its possible computations.
The computations can be described as a tree.
We usually consider only binary trees, because any other branching degree can be simulated as closely as needed.
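For instance (a sketch with invented names), a uniform 3-way branch can be approximated with fair binary coins by rejection sampling; each extra round shrinks the leftover bias by a factor of 4, which is the sense in which the simulation is "as close as needed".

```python
import random

def three_way_branch(rng, rounds=30):
    # Toss two fair coins: outcomes 00, 01, 10 select one of three
    # branches; 11 retries. The chance of exhausting all `rounds`
    # retries is 4 ** (-rounds).
    for _ in range(rounds):
        v = 2 * rng.getrandbits(1) + rng.getrandbits(1)
        if v < 3:
            return v
    return 0  # negligible fallback; residual bias below 4 ** (-rounds)
```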
Offline approach
A language L ∈ NP iff there exist a DTM M and a polynomial p(·) s.t. for every x ∈ L there exists y with |y| ≤ p(|x|) and M(x, y) accepts.
The DTM has access to a “witness” string of polynomial length.
Similarly, a random TM has access to a “guess” input string. BUT...
There must be sufficiently many such witnesses.
7.3 The Random Complexity Classes
A language that can be recognized by a polynomial-time probabilistic TM with good probability is considered relatively easy.
We first consider a one-sided error complexity class.
7.4 The Classes RP and coRP
A language L ∈ RP iff a probabilistic polynomial-time TM M exists such that:
x ∈ L ⇒ Prob[M(x)=1] ≥ 1/2
x ∉ L ⇒ Prob[M(x)=1] = 0
A language L ∈ coRP iff a probabilistic polynomial-time TM M exists such that:
x ∈ L ⇒ Prob[M(x)=1] = 1
x ∉ L ⇒ Prob[M(x)=0] ≥ 1/2
These classes complement each other: coRP = { L : L̄ ∈ RP }
Comparing RP with NP
Let L be a language in NP or RP, and let R_L be the relation defining the witness/guess for L for a certain TM. Then:
NP: x ∈ L ⇒ ∃y s.t. (x, y) ∈ R_L;  x ∉ L ⇒ ∀y (x, y) ∉ R_L
RP: x ∈ L ⇒ Prob_r[(x, r) ∈ R_L] ≥ 1/2;  x ∉ L ⇒ ∀r (x, r) ∉ R_L
Obviously, RP ⊆ NP.
7.5 Amplification
The constant 1/2 in the definition of RP is arbitrary. If we have a probabilistic TM that accepts x ∈ L with probability p < 1/2, we can run this TM several times to "amplify" the probability.
If x ∉ L, all runs will return 0. If x ∈ L and we run it n times, the probability that at least one run accepts is
Prob[M^n(x)=1] = 1 − Prob[M^n(x)≠1] = 1 − Prob[M(x)≠1]^n = 1 − (1 − Prob[M(x)=1])^n = 1 − (1−p)^n
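A quick simulation of this amplification; the machine below is a made-up stand-in with one-sided error (it accepts a yes-instance with probability p and never accepts a no-instance).

```python
import random

def rp_machine(x_in_L, rng, p=0.3):
    # Hypothetical one-sided-error machine: accepts a yes-instance
    # with probability p; never accepts a no-instance.
    return x_in_L and rng.random() < p

def amplified(x_in_L, n, rng, p=0.3):
    # Accept iff at least one of n independent runs accepts.
    return any(rp_machine(x_in_L, rng, p) for _ in range(n))
```

Per the formula, the amplified acceptance probability on a yes-instance is 1 − (1−p)^n; for p = 0.3 and n = 10 this is about 0.97, while no-instances are still never accepted.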
7.6 Alternative Definitions for RP
An RP1 machine may err quite often, while an RP2 machine almost never errs:
Define: L ∈ RP1 iff there exist a probabilistic poly-time TM M and a polynomial p(·), s.t.
x ∈ L ⇒ Prob[M(x)=1] ≥ 1/p(|x|)
x ∉ L ⇒ Prob[M(x)=1] = 0
Define: L ∈ RP2 iff there exist a probabilistic poly-time TM M and a polynomial p(·), s.t.
x ∈ L ⇒ Prob[M(x)=1] ≥ 1 − 2^(−p(|x|))
x ∉ L ⇒ Prob[M(x)=1] = 0
Claim: RP1 = RP2
Trivially, RP2 ⊆ RP1: if |x| is big enough, then 1 − 2^(−p(|x|)) ≥ 1/p(|x|).
Suppose L ∈ RP1: there exists M1 s.t. for x ∈ L: Prob[M1(x, r)=1] ≥ 1/p(|x|).
Let M2 run M1 t(|x|) times; then for x ∈ L: Prob[M2(x, r)=0] ≤ (1 − 1/p(|x|))^t(|x|).
Solving (1 − 1/p(|x|))^t(|x|) = 2^(−p(|x|)) gives t(|x|) ≈ p(|x|)^2 / log2(e).
Hence, M2 runs in polynomial time and L ∈ RP2, so RP1 ⊆ RP2.
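The bound on t(|x|) can be checked numerically; below, p plays the role of p(|x|), and the assertion verifies that t = p²/log2(e) repetitions push the failure probability (1 − 1/p)^t below 2^(−p).

```python
import math

def runs_needed(p):
    # t = p^2 / log2(e), the number of repetitions derived above.
    return math.ceil(p * p / math.log2(math.e))

for p in (2, 5, 10, 50):
    t = runs_needed(p)
    # One-sided failure after t independent runs is at most 2^(-p).
    assert (1 - 1 / p) ** t <= 2 ** (-p)
```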
7.7 The class BPP
Define: L ∈ BPP iff there exists a polynomial-time probabilistic TM M such that for every x: Prob[M(x) = L(x)] ≥ 2/3,
where L(x) = 1 if x ∈ L, and L(x) = 0 if x ∉ L.
A BPP machine's success probability is bounded away from its failure probability.
7.8 BPP (Bounded-Probability Polynomial-Time)
Definition: BPP is the class of all languages L that have a probabilistic polynomial-time TM M s.t.
∀x: Pr_r[M(x, r) = L(x)] ≥ 2/3, where L(x) = 1 iff x ∈ L.
Such TMs are called "Atlantic City" machines.
Amplification
Claim: If L ∈ BPP, then there exist a probabilistic polynomial-time TM M′ and a polynomial p(n) s.t.
∀x ∈ {0,1}^n: Pr_{r ∈ {0,1}^p(n)}[M′(x, r) ≠ L(x)] < 1/(3p(n))
We can get better amplification, but this will suffice here...
Relations to P and NP
P ⊆ BPP, since a deterministic polynomial-time machine can simply ignore the random input. Whether BPP ⊆ NP is unknown.
Does BPP ⊆ NP?
We might be tempted to say: "Use the random string as a witness."
Why is that wrong? (For x ∉ L, a BPP machine may still accept on some random strings, so an accepting random string is not a sound NP witness.)
Constant invariance
An alternative definition:
x ∈ L ⇒ Prob[M(x)=1] ≥ p + ε
x ∉ L ⇒ Prob[M(x)=1] ≤ p − ε
We can build a machine M2 that runs M n times and returns the majority of M's results. After a constant number of executions (depending on p and ε but not on x) we get the desired probability.
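M2's majority vote can be sketched as follows; the biased machine in the usage below is invented for demonstration, with p = 1/2 and ε = 0.1.

```python
import random

def majority_vote(machine, x, n):
    # Run the two-sided-error machine n times and output the majority.
    ones = sum(machine(x) for _ in range(n))
    return 1 if 2 * ones > n else 0
```

With acceptance probability 0.6 on yes-instances and 0.4 on no-instances, a majority over n = 101 runs is already correct the vast majority of the time.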
7.9 The weakest possible BPP definition
Define: L ∈ BPP_W iff there exist a polynomial-time probabilistic TM M, a polynomial-time computable function f: N → [0,1], and a polynomial p(·), such that:
x ∈ L ⇒ Prob[M(x)=1] ≥ f(|x|) + 1/p(|x|)
x ∉ L ⇒ Prob[M(x)=1] ≤ f(|x|) − 1/p(|x|)
If we set f(|x|) = 1/2 and p(|x|) = 6 we get the original definition (1/2 + 1/6 = 2/3).
Hence, BPP ⊆ BPP_W.
7.10 Claim: BPP_W = BPP
Let L ∈ BPP_W and let M be its computing TM. Define M′(x):
invoke t_i = M(x) for i = 1…n, compute the sample mean p̄ = (Σ t_i)/n; if p̄ > f(|x|) return YES, else return NO.
Notice p̄ is the mean of a sample of size n of the random variable M(x).
Setting ε = 1/p(|x|) and n = −ln(1/6)/(2ε^2), a Chernoff bound gives the desired probability gap.
The strongest possible BPP definition
Define: L ∈ BPP_S iff there exist a polynomial-time probabilistic TM M and a polynomial p(·), such that for every x: Prob[M(x) = L(x)] ≥ 1 − 2^(−p(|x|)).
If we set p(|x|) = 2 we get the original definition, because 1 − 2^(−2) = 3/4 ≥ 2/3.
Hence, BPP_S ⊆ BPP.
7.11 BPP and other complexity classes
Clearly, RP ⊆ BPP. Is BPP ⊆ NP? Unknown. coBPP = BPP by definition.
7.12 The class PP
The class BPP allowed a small but non-negligible gap between success and failure probabilities.
The class PP allows even smaller gaps; a majority by even a single random toss is enough.
Running a PP machine polynomially many times would not necessarily help.
Define: L ∈ PP iff there exists a polynomial-time probabilistic TM M such that for every x: Prob[M(x) = L(x)] > 1/2.
7.13 Claim: PP ⊆ PSPACE
Let L ∈ PP, M be the TM that recognizes L, and p(·) its time bound.
Define M′(x): run M(x) on each of the 2^p(|x|) possible coin-toss sequences, and decide by majority.
M′ uses polynomial space for each of the (exponentially many) invocations, and only needs a counter for the number of accepting runs.
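M′'s exhaustive majority can be sketched as follows; the machines in the usage example are illustrative stand-ins that read their coin sequence r directly.

```python
from itertools import product

def decide_by_majority(machine, x, coins):
    # Enumerate all 2^coins coin sequences, reusing the same space for
    # every run; only a counter of accepting runs is kept.
    accepts = sum(machine(x, r) for r in product((0, 1), repeat=coins))
    return 1 if 2 * accepts > 2 ** coins else 0
```

For example, a machine that accepts when either of two coins is 1 accepts on 3 of the 4 sequences, so the majority decision is 1; a machine requiring both coins to be 1 accepts on only 1 of 4, so the decision is 0.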
7.14 Other variants of PP
Define: L ∈ PP1 iff a probabilistic polynomial-time TM M exists s.t.:
x ∈ L ⇒ Prob[M(x)=1] > 1/2
x ∉ L ⇒ Prob[M(x)=0] ≥ 1/2
Obviously, PP ⊆ PP1.
Claim: PP = PP1
Let L ∈ PP1, M be the TM that recognizes L, and p(·) its time bound.
M′(x, (a1, a2, ..., a_p(|x|), b1, b2, ..., b_p(|x|))): if a1 = a2 = ... = a_p(|x|) = 1 then return NO, else return M(x, (b1, b2, ..., b_p(|x|))).
M′ behaves exactly like M, except for an additional probability of 2^(−p(|x|)) of returning NO.
7.15 PP = PP1, continued
If x ∈ L: Prob[M(x)=1] > 1/2, and since M's acceptance probability is a multiple of 2^(−p(|x|)), Prob[M(x)=1] ≥ 1/2 + 2^(−p(|x|)); hence Prob[M′(x)=1] ≥ (1/2 + 2^(−p(|x|)))(1 − 2^(−p(|x|))) > 1/2.
If x ∉ L: Prob[M(x)=0] ≥ 1/2, so Prob[M′(x)=0] ≥ (1/2)(1 − 2^(−p(|x|))) + 2^(−p(|x|)) > 1/2.
In either case Prob[M′(x) = L(x)] > 1/2, hence L ∈ PP.
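A numeric sanity check of the two bounds in the PP = PP1 argument, assuming p(|x|) ≥ 2 and writing eps for 2^(−p(|x|)), the probability that the all-ones check fires:

```python
for p in range(2, 30):
    eps = 2.0 ** (-p)
    yes_bound = (0.5 + eps) * (1 - eps)  # lower bound on Prob[M'(x)=1], x in L
    no_bound = 0.5 * (1 - eps) + eps     # lower bound on Prob[M'(x)=0], x not in L
    assert yes_bound > 0.5 and no_bound > 0.5
```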
Claim: NP ⊆ PP
Let L ∈ NP, M be the NDTM that recognizes L, and p(·) its time bound.
M′(x, (b1, b2, ..., b_p(|x|))): if b1 = 1 then return M(x, (b2, ..., b_p(|x|))), else return YES.
If x ∉ L, then Prob[M′(x)=1] = 1/2 (only the runs with b1 = 0 accept), so Prob[M′(x)=0] ≥ 1/2. If x ∈ L, then Prob[M′(x)=1] > 1/2 (a witness exists, so some run with b1 = 1 also accepts).
Hence L ∈ PP1 = PP, so NP ⊆ PP. Also, coNP ⊆ PP by symmetry.
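M′ can be sketched directly; the verifier in the usage example is made up (it accepts exactly the all-ones witness), so a yes-instance is accepted with probability 1/2 + 1/2 · 2^(−witness_len), strictly above 1/2.

```python
import random

def pp_wrapper(verifier, x, witness_len, rng):
    # If the first coin is 0, accept outright; otherwise run the NP
    # verifier on a uniformly random candidate witness.
    if rng.getrandbits(1) == 0:
        return 1
    witness = [rng.getrandbits(1) for _ in range(witness_len)]
    return verifier(x, witness)
```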
7.16 The class ZPP
Define: L ∈ ZPP iff there exists a polynomial-time probabilistic TM M with outputs in {0, 1, ⊥}, such that for every x: Prob[M(x)=⊥] ≤ 1/2 and Prob[M(x)=L(x) or M(x)=⊥] = 1.
Consequently, Prob[M(x)=L(x)] ≥ 1/2. The symbol ⊥ means "I don't know". The value 1/2 is arbitrary and can be replaced by 2^(−p(|x|)) or 1 − 1/p(|x|).
7.17 Claim: ZPP = RP ∩ coRP
Let L ∈ ZPP and M be the probabilistic TM that recognizes L. Define:
M′(x) – let b = M(x); if b = ⊥ then return 0, else return b.
If x ∉ L, M′(x) will never return 1. If x ∈ L, Prob[M′(x)=1] ≥ 1/2, as required. Hence ZPP ⊆ RP. Symmetrically (returning 1 on ⊥), ZPP ⊆ coRP.
Let L ∈ RP ∩ coRP, and let M_RP and M_coRP be probabilistic TMs that recognize L according to the RP and coRP definitions, respectively. Define:
M′(x) – if M_RP(x) = YES return 1; if M_coRP(x) = NO return 0; else return ⊥.
M_RP(x) never returns YES if x ∉ L, and M_coRP(x) never returns NO if x ∈ L. Therefore M′(x) never returns the opposite of L(x).
The probability that M_RP and M_coRP both fail to give a definite answer is ≤ 1/2, so Prob[M′(x)=⊥] ≤ 1/2.
Hence RP ∩ coRP ⊆ ZPP, and together: ZPP = RP ∩ coRP.
The random classes hierarchy
P ⊆ ZPP ⊆ RP ⊆ BPP. It is believed that BPP = P.