Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Mohammad Ali Maddah-Ali, Bell Labs, Alcatel-Lucent
joint work with
Urs Niesen
Allerton October 2013
Video on Demand
[Plot: Normalized Demand vs. Time]
• High temporal traffic variability
• Caching (prefetching) can help to smooth traffic
Caching (Prefetching)
• Placement phase: populate caches
• Demands are not known yet
• Delivery phase: reveal request, deliver content
Problem Setting
[Figure: a server with N files serves K users over a shared link; each user has a cache of size M]
Placement:
• Cache an arbitrary function of the files (linear, nonlinear, …)
Delivery:
• Requests are revealed to the server
• Server sends a function of the files
Question: What is the smallest worst-case rate R(M) needed in the delivery phase? How to choose (1) the caching functions and (2) the delivery functions?
Coded Caching
N Files, K Users, Cache Size M
• Uncoded Caching
• Caches used to deliver content locally
• Local cache size matters
• Coded Caching [Maddah-Ali, Niesen 2012]
• The main gain in caching is global
• Global cache size matters (even though caches are isolated)
Centralized Coded Caching
N=3 Files, K=3 Users, Cache Size M=2
[Figure: each file A, B, C is split into three subfiles of size 1/3, indexed by pairs of users (A12, A13, A23, …); user k caches every subfile whose index contains k; the server sends the single coded message A23⊕B13⊕C12 of size 1/3]
• Multicasting opportunity between three users with different demands
• Approximately optimum [Maddah-Ali, Niesen, 2012]
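The N=3, K=3, M=2 example above can be sketched in code. This is an illustration of the slide's construction, not an implementation from the talk; the file names and data layout are mine.

```python
from itertools import combinations

K = 3                                  # users
files = ["A", "B", "C"]                # N = 3 files
users = [1, 2, 3]

# Placement: each file is split into 3 subfiles of size 1/3, indexed by
# 2-user subsets; user k caches every subfile whose index contains k,
# filling a cache of size 3 files * 2 subfiles * 1/3 = M = 2.
subsets = list(combinations(users, 2))         # (1,2), (1,3), (2,3)
cache = {k: {(f, s) for f in files for s in subsets if k in s} for k in users}

# Delivery for demands (A, B, C): user k is missing exactly one subfile
# of its requested file -- the one indexed by the other two users.
demand = {1: "A", 2: "B", 3: "C"}

def missing_subfile(k):
    (s,) = [s for s in subsets if k not in s]
    return (demand[k], s)

# One coded transmission A23 xor B13 xor C12 (size 1/3) serves all three
# users at once: each user caches the other two summands, cancels them,
# and recovers its own missing subfile.
coded_message = {missing_subfile(k) for k in users}

for k in users:
    assert (coded_message - {missing_subfile(k)}) <= cache[k]
```

Uncoded delivery would send each missing subfile separately (rate 1); the coded transmission needs only rate 1/3.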
Centralized Coded Caching
N=3 Files, K=3 Users, Cache Size M=2
[Figure: the subfile placement from the previous slide, repeated]
• Centralized caching needs the number and identity of the users in advance
• In practice, this is not the case:
• Users may turn off
• Users may be asynchronous
• Topology may be time-varying (wireless)
Question: Can we achieve similar gain without such knowledge?
Decentralized Proposed Scheme
N=3 Files, K=3 Users, Cache Size M=2
[Figure: the bits of each file are labeled by the subset of users that cached them (1, 2, 3, 12, 13, 23, 123, ∅); delivery consists of one coded XOR transmission per subset of users]
Prefetching: Each user caches 2/3 of the bits of each file
• randomly,
• uniformly,
• independently.
Delivery: Greedy linear encoding
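A small simulation can illustrate the decentralized scheme. This is a sketch under my own assumptions (file length, worst-case distinct demands); the greedy delivery follows the subset-by-subset XOR structure the slide describes.

```python
import random
from itertools import combinations

random.seed(0)
N, K, M = 3, 3, 2            # files, users, cache size (in files)
F = 3000                     # bits per file (illustrative)
p = M / N                    # fraction of each file cached per user

# Decentralized prefetching: each user caches every bit of every file
# independently with probability p -- no coordination across users.
cached = {(k, f, b): random.random() < p
          for k in range(K) for f in range(N) for b in range(F)}

demand = [0, 1, 2]           # worst case: all users request distinct files

# Greedy linearly coded delivery: the bits of user k's requested file are
# grouped by the exact set of other users that cached them; for every user
# subset S, one XOR transmission serves all k in S simultaneously, with
# length equal to the longest summand.
rate = 0.0
for size in range(K, 0, -1):
    for S in combinations(range(K), size):
        lengths = []
        for k in S:
            need = [b for b in range(F)
                    if not cached[(k, demand[k], b)]
                    and all(cached[(j, demand[k], b)] == (j in S)
                            for j in range(K) if j != k)]
            lengths.append(len(need))
        rate += max(lengths) / F

# Uncoded delivery would need rate K * (1 - p) = 1.0 here; the coded
# scheme comes in well below that (about 0.48 in expectation).
print(rate)
```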
Decentralized Caching
[Figure: side-by-side comparison of the bit partitions produced by centralized prefetching and decentralized prefetching, each part labeled by the subset of users caching it]
Comparison
N Files, K Users, Cache Size M
[Plot: rate vs. cache size for Uncoded, Coded (Centralized) [Maddah-Ali, Niesen, 2012], and Coded (Decentralized)]
Local Cache Gain (Uncoded):
• Proportional to local cache size
• Offers minor gain
Global Cache Gain:
• Proportional to global cache size
• Offers gain on the order of the number of users
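For reference, the three rates being compared can be written out. This restates the expressions from the two Maddah-Ali–Niesen papers as a sketch; the centralized formula assumes KM/N is an integer.

```python
def uncoded_rate(N, K, M):
    # Local cache gain only: each user still needs a (1 - M/N) fraction
    # of its requested file over the shared link.
    return K * (1 - M / N)

def centralized_rate(N, K, M):
    # [Maddah-Ali, Niesen 2012]: the global gain divides the uncoded
    # rate by 1 + KM/N (for M a multiple of N/K).
    return K * (1 - M / N) / (1 + K * M / N)

def decentralized_rate(N, K, M):
    # Decentralized scheme: slightly above the centralized rate, but
    # needs no coordination during the placement phase.
    return K * (1 - M / N) * (N / (K * M)) * (1 - (1 - M / N) ** K)

# Example: N = K = 10, M = 5.
print(uncoded_rate(10, 10, 5))        # 5.0
print(centralized_rate(10, 10, 5))    # ~0.83
print(decentralized_rate(10, 10, 5))  # ~1.0
```

Both coded rates stay bounded as the number of users grows (for fixed M/N), while the uncoded rate grows linearly in K.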
Can We Do Better?
Theorem: The proposed scheme is optimum within a constant factor in rate.
• Information-theoretic bound
• The constant gap is uniform in the problem parameters
• No significant gains besides local and global
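In symbols, the theorem can be paraphrased as follows, where $R^{\ast}(M)$ denotes the information-theoretically optimal rate; the specific constant is given in the paper.

```latex
\frac{R_{\text{decentralized}}(M)}{R^{\ast}(M)} \;\le\; C
\quad \text{for all } 0 < M \le N,
```

with $C$ a universal constant independent of $N$, $K$, and $M$.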
Asynchronous Delivery
[Figure: three users' requests arriving asynchronously, each file split into Segments 1–3]
Conclusion
• We can achieve within a constant factor of the optimum caching
performance through
• Decentralized and uncoded prefetching
• Greedy and linearly coded delivery
• Significant improvement over uncoded caching schemes
• Reduction in rate by up to the order of the number of users
• Papers available on arXiv:
• Maddah-Ali and Niesen: Fundamental Limits of Caching (Sept. 2012)
• Maddah-Ali and Niesen: Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff (Jan. 2013)
• Niesen and Maddah-Ali: Coded Caching with Nonuniform Demands (Aug. 2013)