
Arithmetic Coding

This material is based on K. Sayood, "Introduction to Data Compression," San Francisco, CA: Morgan Kaufmann, 1996; 3rd edition, 2006. Internet: [email protected]; web site: http://mkp.com. (Morgan Kaufmann has since been acquired by Elsevier: http://www.books.elsevier.com, http://textbooks.elsevier.com.)

Arithmetic coding is useful for sources with small alphabets (e.g., binary: 0 or 1) and with highly skewed (unequal) probabilities (e.g., facsimile, text).

The rate for Huffman coding is within Pmax + 0.086 of the entropy, where Pmax is the probability of the most frequently occurring symbol. For a small alphabet with highly skewed probabilities, Pmax can be very large.

Example: A = (a1, a2, a3)

Huffman code (table and tree below):

Entropy = -Σ_{i=1}^{3} P_i log2(P_i) = 0.335 bits/symbol

Huffman coding gives (.95)(1) + (.02)(2) + (.03)(2) = 1.05 bits/symbol

Redundancy = 1.05 - 0.335 = 0.715 bits/symbol

Group the symbols {a1, a2, a3} in blocks of two: {a1a1, a1a2, a1a3, a2a1, ..., a3a3}. Number of messages = 3^2 = 9.


[Huffman tree: Root (1.0) branches 0 to a1 (0.95) and 1 to an internal node (0.05), which branches 1 to a2 (0.02) and 0 to a3 (0.03).]

Letter   Probability   Code
a1       .95           0
a2       .02           11
a3       .03           10
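As a quick numerical check, here is a minimal Python sketch (not from the slides; the heapq-based construction is one standard way to obtain Huffman codeword lengths) that reproduces the 1.05 bits/symbol rate, the 0.335 bits/symbol entropy, and the 0.715 bits/symbol redundancy:

```python
import heapq
from math import log2

def huffman_lengths(probs):
    # Build a Huffman tree with a min-heap; track codeword lengths only.
    heap = [(p, [sym]) for sym, p in probs.items()]
    lengths = {sym: 0 for sym in probs}
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, syms1 = heapq.heappop(heap)
        p2, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1          # every merge adds one bit to its members
        heapq.heappush(heap, (p1 + p2, syms1 + syms2))
    return lengths

probs = {"a1": 0.95, "a2": 0.02, "a3": 0.03}
L = huffman_lengths(probs)
rate = sum(probs[s] * L[s] for s in probs)
H = -sum(p * log2(p) for p in probs.values())
print(rate, H, rate - H)             # ~1.05, ~0.335, ~0.715
```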


[Huffman tree for two-symbol blocks: each codeword is the sequence of 0/1 branch labels from the root to the leaf.]

Symbol   Probability   Code
a1a1     .9025         0
a1a3     .0285         100
a3a1     .0285         101
a1a2     .0190         110
a2a1     .0190         1110
a3a3     .0009         111100
a2a3     .0006         111101
a3a2     .0006         111110
a2a2     .0004         111111

Average rate = 1.222 bits/message (coding in blocks of 2 symbols)
Average rate = 0.611 bits/symbol of the original alphabet

P(a1) = 0.95, P(a2) = 0.02, P(a3) = 0.03
Average bit rate = 0.611 bits/symbol; entropy = 0.335 bits/symbol


Redundancy = 0.611 - 0.335 = 0.276 bits/symbol.

Group the symbols (a1, a2, a3) in blocks of three: alphabet size = 3^3 = 27, (a1a1a1, a1a1a2, ..., a3a3a2, a3a3a3). The average bit rate of Huffman coding can be reduced further this way, but the alphabet size grows exponentially: grouping in blocks of four gives alphabet size 3^4 = 81.

To assign a code to a particular sequence (group of symbols) of length m, Huffman coding requires developing codewords for all possible sequences of length m. Arithmetic coding assigns a code to a particular sequence of length m without having to generate codes for all sequences of length m.

Procedure:

I. Generate a unique identifier or tag for the sequence of length m to be coded.

II. Assign a unique binary code to this tag.

Generating a tag: alphabet size = 3, A = (a1, a2, a3)

P(a1) = .7, P(a2) = .1, P(a3) = .2

Cumulative distribution function (CDF):

F_X(i) = Σ_{k=1}^{i} P(X = k)

[Figure: nested tag intervals for the input sequence a1 a2 a3 a2. The first symbol a1 restricts the tag to [0, .7); a2 restricts it to [.49, .56); a3 restricts it to [.546, .56); the final a2 restricts it to [.5558, .5572), whose midpoint .5565 is the tag.]

Tag for input sequence a1 a2 a3 a2

a1a1: 0.7 × 0.7 = 0.49
a1a2: 0.7 × 0.1 = 0.07, 0.49 + 0.07 = 0.56
a1a3: 0.7 × 0.2 = 0.14, 0.56 + 0.14 = 0.7
a1a2a1: 0.07 × 0.7 = 0.049, 0.49 + 0.049 = 0.539


a1a2a2: 0.07 × 0.1 = 0.007, 0.539 + 0.007 = 0.546
a1a2a3: 0.07 × 0.2 = 0.014, 0.546 + 0.014 = 0.56

Tag for input sequence (a1, a2, a3.....)

Each new symbol restricts the tag to a subinterval that is disjoint from any other subinterval. The tag interval for (a1, a2, a3, a1) is disjoint from the tag interval for (a1, a2, a3, a2)

Any member of an interval can be used to identify a tag.

1. Lower limit of the interval
2. Midpoint of the interval
3. Upper and lower limits of the interval

Use the midpoint of the interval to identify a tag. Let the alphabet be A = (a1, a2, ..., am). Then

T_X(a_i) = Σ_{k=1}^{i-1} P(X = k) + (1/2) P(X = i) = F_X(i - 1) + (1/2) P(X = i)

Example: roll of a die (1, 2, ..., 6)

P(X = k) = 1/6, k = 1, 2, ..., 6

Assign a tag to a particular sequence x_i:

T_X^(m)(x_i) = Σ_{y < x_i} P(y) + (1/2) P(x_i)

where y < x means y precedes x in the ordering and m is the length of the sequence.
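A brute-force check of this formula (a minimal Python sketch, not from the slides: it enumerates every two-roll sequence in lexicographic order, which is exactly the summation above, and reproduces T_X^(2)(13) = 5/72 computed below):

```python
from fractions import Fraction
from itertools import product

# Tag for a length-m sequence by direct summation: add P(y) for every
# sequence y that precedes x in lexicographic order, plus half of P(x).
def tag_by_enumeration(x, m, p):
    tag = Fraction(0)
    for y in product(range(1, 7), repeat=m):   # all 6^m die sequences, in order
        if y < x:
            tag += p ** m
        elif y == x:
            tag += p ** m / 2
    return tag

print(tag_by_enumeration((1, 3), 2, Fraction(1, 6)))   # 5/72
```

The cost is the problem: the loop visits all 6^m sequences, which is why the recursive computation of the interval limits is needed.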


[Figure: tags for a single die roll (left) and for two-roll sequences beginning with 1 (right), which subdivide the interval [0, 1/6).]

k    F_X(k)   T_X(k)            T_X^(2)(1k)
1    1/6      1/12  = .0833     1/72
2    2/6      3/12  = .25       3/72
3    3/6      5/12  = .4166     5/72
4    4/6      7/12  = .5833     7/72
5    5/6      9/12  = .75       9/72
6    1        11/12 = .9166     11/72

T_X(5) = Σ_{k=1}^{4} P(X = k) + (1/2) P(X = 5) = 4/6 + 1/12 = .75

T_X^(2)(13) = P(X = 11) + P(X = 12) + (1/2) P(X = 13) = 1/36 + 1/36 + (1/2)(1/36) = 5/72

We would have to compute the probability of every sequence that is less than the sequence for which the tag is generated. This is prohibitive. However, the upper and lower limits of the tag interval can be computed recursively. For a sequence X = (x1, x2, ..., xn):

l(n) = l(n-1) + [u(n-1) - l(n-1)] F_X(x_n - 1)

u(n) = l(n-1) + [u(n-1) - l(n-1)] F_X(x_n)

where u(n) and l(n) are the upper and lower limits of the tag interval for X after n symbols. The tag is the midpoint of the final interval:


T_X^(n)(X) = (u(n) + l(n)) / 2

Example: A = {a1, a2, a3}, P(a1) = .8, P(a2) = .02, P(a3) = .18 (small alphabet size and highly skewed probabilities). Encode the sequence 1 3 2 1 (a1 a3 a2 a1).

[Figure: nested tag intervals for 1 3 2 1. After a1: [0, .8); after a1a3: [.656, .8); after a1a3a2: [.7712, .77408); after a1a3a2a1: [.7712, .773504). Tag = .772352.]

F_X(1) = .8, F_X(2) = .82, F_X(3) = 1; F_X(k) = 0 for k ≤ 0 and F_X(k) = 1 for k > 3.

P(a1) = 0.8, P(a2) = 0.02, P(a3) = 0.18

Sequence: 1, 3, 2, 1. Recursion relations:

l(n) = l(n-1) + [u(n-1) - l(n-1)] F_X(x_n - 1)
u(n) = l(n-1) + [u(n-1) - l(n-1)] F_X(x_n)

with l(0) = 0, u(0) = 1.

First element: 1

l(1) = 0 + (1 - 0)(0) = 0

u(1) = 0 + (1 - 0)(.8) = .8

Second element: 3

l(2) = 0 + (.8 - 0) F_X(2) = .656


u(2) = 0 + (.8 - 0) F_X(3) = .8

a1a1: 0.8 × 0.8 = 0.64
a1a2: 0.8 × 0.02 = 0.016, 0.64 + 0.016 = 0.656
a1a3: 0.8 × 0.18 = 0.144, 0.656 + 0.144 = 0.8

Third element: 2

l(3) = .656 + (.8 - .656) .8 = .7712

u(3) = .656 + (.8 - .656) .82 = .77408

Last element: 1

l(4) = .7712 + (.77408 - .7712) F_X(0) = .7712 + 0 = .7712

u(4) = .7712 + (.77408 - .7712) F_X(1) = .7712 + (.00288)(.8) = .773504

Tag:

T_X^(4)(1321) = (.7712 + .773504) / 2 = .772352
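The same computation in code (a minimal Python sketch of the recursion above; symbols are 1-based as in the example, and cdf[k] stores F_X(k)):

```python
# Recursive update of the tag interval; no enumeration of other sequences.
def encode_tag(seq, probs):
    cdf = [0.0]
    for p in probs:
        cdf.append(cdf[-1] + p)      # cdf[k] = F_X(k), with cdf[0] = 0
    low, high = 0.0, 1.0             # l(0) = 0, u(0) = 1
    for x in seq:                    # x in {1, ..., len(probs)}
        width = high - low
        low, high = low + width * cdf[x - 1], low + width * cdf[x]
    return (low + high) / 2          # tag = midpoint of the final interval

probs = [0.8, 0.02, 0.18]            # P(a1), P(a2), P(a3)
print(encode_tag([1, 3, 2, 1], probs))   # ~0.772352
```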

Generating a binary code

Alphabet A = (a1, a2, a3, a4)

P(a1) = 1/2 = 0.5, P(a2) = 1/4 = 0.25, P(a3) = P(a4) = 1/8 = 0.125

[Figure: the unit interval divided into [0, .5), [.5, .75), [.75, .875), [.875, 1) for a1 through a4, with tags at the midpoints: T_X(1) = .25, T_X(2) = .625, T_X(3) = .8125, T_X(4) = .9375.]

Binary code for T_X(x): represent T_X(x) in binary and truncate to l(x) bits, where

l(x) = ⌈log2(1/P(x))⌉ + 1


⌈x⌉ = smallest integer greater than or equal to x

Binary Code

P(x)   Symbol   F_X    T_X      T_X in binary   ⌈log2 1/P(x)⌉ + 1   Code
1/2    1        .5     .25      .010            2                   01
1/4    2        .75    .625     .101            3                   101
1/8    3        .875   .8125    .1101           4                   1101
1/8    4        1.0    .9375    .1111           4                   1111
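The truncation rule in code (a minimal Python sketch; it extracts the first ⌈log2(1/P(x))⌉ + 1 bits of the binary expansion of each tag and reproduces the Code column above):

```python
from math import ceil, log2

# Codeword: binary expansion of the tag T_X(x), truncated to
# l(x) = ceil(log2(1/P(x))) + 1 bits.
def codeword(tag, p):
    n_bits = ceil(log2(1 / p)) + 1
    bits = []
    for _ in range(n_bits):
        tag *= 2                     # shift the next binary digit left of the point
        bits.append("1" if tag >= 1 else "0")
        tag -= int(tag)              # keep only the fractional part
    return "".join(bits)

for sym, p, tag in [(1, 0.5, 0.25), (2, 0.25, 0.625),
                    (3, 0.125, 0.8125), (4, 0.125, 0.9375)]:
    print(sym, codeword(tag, p))     # 01, 101, 1101, 1111
```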

This is a prefix code, i.e., no codeword is a prefix of another codeword. The average length (bits/symbol) for coding groups of symbols of length m satisfies

H(X) ≤ l_A < H(X) + 2/m

H(X) ≤ l_H < H(X) + 1/m

H(X) = entropy (blocking m symbols together)
l_A = average bit length for arithmetic coding
l_H = average bit length for Huffman coding

By increasing m, both Huffman and arithmetic coding can reach close to the entropy.

For an alphabet of size K, the number of possible sequences of length m is K^m, so the codebook size is K^m. Example: K = 4 (a1, a2, a3, a4); m = 3 gives codebook size 4^3 = 64 (a1a1a1, a1a1a2, ..., a4a4a3, a4a4a4); m = 4 gives codebook size 4^4 = 256 (a1a1a1a1, ..., a4a4a4a4). Huffman coding requires building the codes for the entire codebook; arithmetic coding only has to obtain the tag corresponding to the given sequence.

Synchronized rescaling

As n (the number of symbols coded so far) gets larger, l(n) and u(n), the lower and upper limits of the tag interval, get closer and closer, eventually exceeding any fixed arithmetic precision. This is avoided by rescaling


while still preserving the information being transmitted. This is called synchronized rescaling. Whenever the tag interval is confined to the lower half [0, .5) or the upper half [.5, 1.0) of the [0, 1) interval, it is expanded by the mapping

E1: [0, .5) → [0, 1), E1(x) = 2x
E2: [.5, 1) → [0, 1), E2(x) = 2(x - .5)

Incremental Encoding

Generating and transmitting portions of the code as the sequence is being observed, rather than waiting until the end of the sequence.
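A sketch of how the E1/E2 rescaling yields incremental output (illustrative Python only: each rescale emits one already-determined bit of the tag; the case where the interval straddles .5, handled by an additional E3 mapping in complete implementations, is omitted here, so this is not a full encoder):

```python
# Illustrative only: rescale whenever the interval falls entirely in
# [0, .5) or [.5, 1), emitting one bit per rescale. The E3 straddle
# case around 0.5 is omitted for brevity.
def encode_incremental(seq, probs):
    cdf = [0.0]
    for p in probs:
        cdf.append(cdf[-1] + p)
    low, high, out = 0.0, 1.0, []
    for x in seq:                    # x is a 1-based symbol index
        width = high - low
        low, high = low + width * cdf[x - 1], low + width * cdf[x]
        while True:
            if high <= 0.5:          # E1: [0, .5) -> [0, 1), send bit 0
                out.append("0")
                low, high = 2 * low, 2 * high
            elif low >= 0.5:         # E2: [.5, 1) -> [0, 1), send bit 1
                out.append("1")
                low, high = 2 * (low - 0.5), 2 * (high - 0.5)
            else:
                break                # interval straddles .5: stop rescaling
    return "".join(out), (low, high)

bits, interval = encode_incremental([1, 3, 2, 1], [0.8, 0.02, 0.18])
print(bits, interval)    # bits "110001..." match the binary expansion of .772352
```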

Advantages of arithmetic coding

1. Advantageous for small alphabet sizes and highly skewed probabilities.

2. Easy to implement a system with multiple arithmetic codes. Once the computational machinery to implement one arithmetic code is developed, all that is needed to set up multiple arithmetic codes is the availability of more probability tables.

3. Easy to adapt arithmetic codes to changing input statistics. There is no need to generate a tree (as in Huffman coding) a priori. Modeling and coding procedures can be separated.

QM Coder:

This is a modification of an adaptive binary arithmetic coder called the Q coder. The QM coder tracks the lower end of the tag interval, l(n), and the size of the interval, A(n):

A(n) = u(n) - l(n)

where u(n) is the upper end of the tag interval.

Applications of arithmetic coding in standards:

1. JPEG: extended DCT-based process and lossless process.

2. JBIG: 1024 to 4096 'ACs'. QM coder. Also JBIG-2: context-based AC.


3. H.263 optional mode: syntax-based arithmetic coding (SBAC).

4. MPEG-4: context-based arithmetic coding in shape coding (MPEG-4 VM 8.0, July 1997).

5. MPEG-4 still-frame image coding (wavelet based): the lowest subband is DPCM coded, and the prediction errors are coded using an adaptive arithmetic coder; the zerotree wavelet symbols and quantized values are also coded using adaptive arithmetic coding.

6. MPEG-2 AAC (also the MPEG-4 T/F audio coder): scale factors are coded based on bit-sliced arithmetic coding. (AAC: advanced audio coding; T/F: time/frequency.)

7. JPEG-LS Part 2: lossless and near-lossless compression of continuous-tone still images [51].

8. JPEG2000: context-dependent binary arithmetic coding, using up to 9 contexts.

9. H.264/MPEG-4 Part 10: Context based adaptive binary arithmetic coding (CABAC) [63].

JPEG: Joint Photographic Experts Group [20]
JBIG: Joint Bi-level Image Experts Group [19]
H.263: video coding at < 64 kbit/s [23]
MPEG: Moving Picture Experts Group

Adaptive arithmetic coding

Probabilities of source symbols are dynamically estimated based on the changing symbol statistics observed in the message to be encoded, i.e., estimate probabilities on the fly (dynamic modeling).
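One common way to realize this (a minimal Python sketch, not any particular standard's scheme): keep a count per symbol, derive the CDF from the counts, and update the counts after each symbol, so that encoder and decoder stay synchronized without transmitting a probability table:

```python
# Adaptive model sketch: probabilities are re-estimated from running
# counts; encoder and decoder apply the same updates in the same order.
class AdaptiveModel:
    def __init__(self, alphabet_size):
        self.counts = [1] * alphabet_size    # start uniform (add-one prior)

    def cdf(self):
        total = sum(self.counts)
        cum, acc = [0.0], 0
        for c in self.counts:
            acc += c
            cum.append(acc / total)
        return cum                           # cum[k] = current F_X(k)

    def update(self, sym):                   # sym is a 0-based symbol index
        self.counts[sym] += 1

model = AdaptiveModel(3)
for sym in [0, 2, 1, 0, 0]:
    cum = model.cdf()        # tag interval for sym: [cum[sym], cum[sym + 1])
    model.update(sym)
print(model.counts)          # [4, 2, 2]
```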

References:

1. T. C. Bell, J. G. Cleary, and I. H. Witten, "Text Compression," Advanced Reference Series, Englewood Cliffs, NJ: Prentice Hall, 1990.

2. G. G. Langdon, Jr., "An Introduction to Arithmetic Coding," IBM Journal of Research and Development, vol. 28, pp. 135-149, March 1984.

3. J. J. Rissanen and G. G. Langdon, "Universal Modeling and Coding," IEEE Trans. on Information Theory, vol. IT-27, no. 1, pp. 12-22, Jan. 1981.

4. G. G. Langdon and J. J. Rissanen, "Compression of Black-White Images with Arithmetic Coding," IEEE Trans. on Communications, vol. 29, no. 6, pp. 858-867, June 1981.


5. T. C. Bell, I. H. Witten, and J. G. Cleary. Modeling for Text Compression. ACM Computing Survey, vol. 21: pp. 557-591, Dec. 1989.

6. W. B. Pennebaker et al, "An Overview of the Basic Principles of the Q-Coder Adaptive Binary Arithmetic Coder," IBM Journal of Research and Development, vol. 32, pp. 717-726, Nov. 1988.

7. J. L. Mitchell and W. B. Pennebaker. Optimal Hardware and Software Arithmetic Coding Procedures for the Q-Coder. IBM Journal of Research and Development, vol. 32: pp.727-736, Nov. 1988.

8. W. B. Pennebaker and J. L. Mitchell. Probability Estimation for the Q-Coder. IBM Journal of Research and Development, vol. 32: pp.737-752, Nov. 1988.

9. I. H. Witten, R. Neal, and J. G. Cleary, “Arithmetic coding for data compression,” Communications of the Association for Computing Machinery, vol. 30: pp. 520-540, June 1987. (Software)

10. G. G. Langdon, Jr., and J. J. Rissanen. “A Simple General Binary Source Code”. IEEE Trans. on Information Theory, vol. IT-28: pp. 800-803, Sept. 1982.

11. M. Nelson. The Data Compression Book. New York: M&T Books, 1991.

12. K. Sayood, “Introduction to data compression,” San Francisco, CA: Morgan Kaufmann Publishers, 1996. III Edition, 2006.

13. M. R. Nelson, “Arithmetic coding and statistical modeling,” Dr. Dobb’s Journal.

14. J. A. Storer, “Data compression,” Rockville, MD, Computer Science Press, 1988.

15. C. Chamzas and D. Duttweiler, “Probability estimation in arithmetic and adaptive Huffman entropy coders,” IEEE Trans. Image Process, vol. 4, pp. 237-246, March 1995.

16. J. M. Jou, "An on line adaptive data compression chip using arithmetic codes," ISCAS 96, pp. ……, Atlanta, GA, May 1996.

17. R. M. Pelz and B. Jannet, “Error concealment for robust arithmetic decoding in mobile radio environments,” Signal Processing: Image Communication, vol. 8, pp. 411-419, July 1996.

18. F. Mueller and K. Illgner, “Embedded Laplacian pyramid image coding using conditional arithmetic coding,” IEEE ICIP-96, Lausanne, Switzerland, Sept. 1996.

19. H. M. Hang and J. W. Woods (Eds.), “Handbook of visual communications,” Orlando, FL, Academic Press, 1995.


20. W. B. Pennebaker and J. L. Mitchell, “JPEG still image data compression standard,” New York, NY, Van Nostrand Reinhold, 1993.

21. W. Kou, “Digital image compression algorithms and standards,” Norwell, MA, Kluwer Academic, 1995.

22. Z. Xiang, K. Ramachandran and M. T. Orchard, "Efficient arithmetic coding for wavelet image compression," SPIE/IS&T Symp. on Electronic Imaging, vol. 3024, San Jose, CA, Feb. 1997.

23. "Video Coding for narrow telecommunication channels at < 64 Kbits/s," Draft ITU-T Rec. H.263, April 1995.

24. ITU-LBC-97-094, Draft 10 of H.263+, H.263+ Video Group, Nice, France, Feb. 1997.

25. F. Golchin and K. Paliwal, “Quadtree based classification with arithmetic and trellis coded quantization for subband image coding,” ICASSP 97, vol.4, pp. 2921-2924, Munich, Germany, April 1997.

26. Web site: www.icspat.com (go to links, then wavelet sites, then the arithmetic coding package; [email protected]).

27. I.H. Witten, R.M. Neal and J.G. Cleary, “Arithmetic coding for data compression”, Commun. of the ACM, vol. 30, pp. 520-540, June 1987.

28. L. Stuiver and A. Moffat, "Piecewise integer mapping for arithmetic coding," IEEE DCC Conf., March 1998.

29. T. Bell and B. McKenzie, "Compression of sparse matrices by arithmetic coding," IEEE DCC Conf., March 1998.

30. I. Kozintsev, J. Chou and K. Ramachandran, “ Image transmission using arithmetic coding based continuous error detection”, IEEE DCC, March 1998.

31. F. Ling and W. Li, "Dimensional adaptive arithmetic coding for image compression," IEEE ISCAS, Monterey, CA, June 1998.

32. L-S. Wang, "Basics of arithmetic coding," http://dodger.ee.ntu.edu.tw/lswang/arith/adapt.htm, 1996. (www.iscas.nps.navy.mil)

33. L. Labelle and D. Lauzon, "Arithmetic coding of a lossless contour-based representation of label images," IEEE ICIP, pp. MA8-2, Chicago, IL, Oct. 1998.

34. I. Balasingham, J.M. Lervik and T.A. Ramstad, "Lossless image compression using integer coefficient filter banks and class-wise arithmetic coding," ICASSP 98, vol. III, pp. 1349-1352, Seattle, WA, May 1998.

35. Website: http://www.cis.ohio-state.edu/hypertext/faq/usenet/compression-faq/part1/faq-doc-12.html


36. C. Caini, G. Calarco and A. V. Coralli, "A modified arithmetic coder for subband audio compression," IEEE ISPACS, pp. 621-626, Melbourne, Australia, Nov. 1998.

37. J.B. Lee, J.-S. Cho and A. Eleftheriadis, "Optimal shape coding under buffer constraints," IEEE ICIP, pp. MA8-8, Chicago, IL, Oct. 1998.

38. J. Ostermann, "Efficient encoding of binary shapes using MPEG-4," IEEE ICIP, pp. MA8-9, Chicago, IL, Oct. 1998.

39. N. Brady, F. Bossen and N. Murphy, "Context-based arithmetic coding of 2D shape sequences," Special session on shape coding, IEEE ICIP, pp. , Santa Barbara, CA, Oct. 1997.

40. B. Martins and S. Forchhammer, “Lossless, near-lossless and refinement coding of bilevel-images,” IEEE Trans. IP, vol. 8, pp. 601-613, May 1999.

41. R.L. Joshi, V.J. Crump, and T.R. Fischer, "Image subband coding using arithmetic coded trellis coded quantization," IEEE Trans. CSVT, vol. 5, pp. 515-523, Dec. 1995.

42. W.K. Pratt et al, "Combined symbol matching facsimile data compression," Proc. IEEE, vol. 68, pp. 786-796, July 1980.

43. P.W. Moo and X. Wu, “ Resynchronization properties of arithmetic coding”, IEEE ICIP’99, Kobe, Japan, Oct. 1999.

44. MPEG-4 parametric coder HVXC (speech) and HILN (music), LBR: bit-sliced arithmetic coding is applied to spectral coefficients, ISO/IEC JTC1/SC29/WG11, MPEG 99/N2946, Oct. 1999.

45. D. Tzovaras, N.V. Boulgouris and M.G. Strintzis, “ Lossless image compression based on optimal prediction, adaptive lifting and conditional arithmetic coding,” Submitted to IEEE Trans. IP, Dec. 1998.

46. I. Sodagar, B.-B. Chai and J. Wus, “ A new error resilience technique for image compression using arithmetic coding,” IEEE ICASSP 2000, Istanbul, Turkey, June 2000.

47. A. Moffat, R. Neal and I.H. Witten, "Arithmetic coding revisited," DCC 1995, IEEE Data Compression Conf., pp. 202-211, Snowbird, UT, March 1995.

48. R.R. Osorio and J.D. Bruguera, "Architectures for arithmetic coding in image compression," EUSIPCO 2000, Tampere, Finland, Sept. 2000. http://eusipco2000.cs.tut.fi

49. Andra, "A multi-bit arithmetic coding technique," IEEE ICIP, Vancouver, Canada, Sept. 2000.


50. S.A. Martucci, "Reversible compression of HDTV images using median adaptive prediction and arithmetic coding," IEEE ISCAS, pp. 1310-1313, 1990.

51. M.J. Weinberger, G. Seroussi and G. Sapiro, "LOCO-A: an arithmetic coding extension of LOCO-I," ISO/IEC JTC1/SC29/WG1 document N342, June 1996.

52. D. Gong and Y. He, “An Efficient Architecture for Real-time Content-based Arithmetic Coding,” IEEE ISCAS, Geneva, Switzerland, May 2000. http://iscas.epfl.ch

53. R.A. Freking and K.K. Parhi, “Highly parallel arithmetic coding,” IEEE DSP Workshop, Hunt, TX, Oct. 2000.

54. E. Baum, V. Harr and J. Speidel, “ Improvement of H.263 encoding by adaptive arithmetic coding,” IEEE Trans. CSVT, vol. 10, pp. 797-800, Aug. 2000.

55. J-K. Kim, K.H. Yong and C.W. Lee, "Document image compression by nonlinear binary subband decomposition and concatenated arithmetic coding," IEEE Trans. CSVT, vol. 10, pp. 1059-1067, Oct. 2000.

56. D. LeGall and A. Tabatabai, “ Subband coding of digital images using symmetric short kernel filters and arithmetic coding techniques,” IEEE ICASSP, pp. 761-765, New York, NY, 1988.

57. Proposal of the arithmetic coder for JPEG2000. ISO/IEC JTC1/SC29/WG1 N762, March 1998.

58. D.-Y. Chan, J.-F. Yang, and S.-Y. Chen, "Efficient connected-index finite-length arithmetic codes," IEEE Trans. CSVT, vol. 11, pp. 581-593, May 2001.

59. Gonzales et al, "DCT coding of motion video storage using adaptive arithmetic coding," Signal Processing: Image Communication, vol. 2, pp. 145-154, Aug. 1990.

60. D. Mukherjee and S.K. Mitra, “ Arithmetic coded vector SPIHT with classified tree-multistage VQ for color image coding,”

61. M. Ghanbari, "Arithmetic coding with limited past memory," IEE Electronics Letters, vol. 23, no. 13, pp. 1157-1159, June 1991.

62. A. Abu-Hajar and R. Sankar, "Wavelet based lossless image compression using partial SPIHT and bit plane arithmetic coding," IEEE ICASSP, vol. 4, pp. 3497-3500, March 2002.

ftp://ftp.imtc-files.org: all documents related to JVT (H.264 & MPEG-4 Part 10)

63. Y.


64. T. Guionnet and C. Guillemot, "Robust Decoding of Arithmetic Codes for Image Transmission Over Error-Prone Channels," IEEE ICIP, pp. , Barcelona, Spain, 2003.

65. D. Marpe and T. Wiegand, "A Highly Efficient Multiplication-Free Binary Arithmetic Coder and Its Application in Video Coding," IEEE ICIP, pp. , Barcelona, Spain, 2003.

66. K. Sugimoto, et al, "Generalized Motion Compensation and Arithmetic Coding for Matching Pursuit Coder," IEEE ICIP, pp. , Barcelona, Spain, 2003.

67. M. Grangetto, G. Olmo and P. Cosman, "Error Correction by Means of Arithmetic Codes: an Application to Resilient Image Transmission," IEEE ICIP, pp. , Barcelona, Spain, 2003.

68. E. Pasero and A. Montuori, "Neural network based arithmetic coding for real-time audio transmission on the TMS320C6000 DSP platform," IEEE ICASSP, vol. II, pp. , 2003.

69. D. Marpe, H. Schwarz and T. Wiegand, "Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard," IEEE Trans. CSVT, vol. 13, pp. 620-636, July 2003.

70. D. Hong and M. van der Schaar, "Arithmetic coding with adaptive context-tree weighting for the H.264 video coders," SPIE, vol. 5308, pp. , Jan. 2004.

71. B. Valentine and O. Sohm, "Optimizing the JPEG2000 binary arithmetic encoder for VLIW architectures," IEEE ICASSP 2004, pp. , Montreal, Canada, May 2004.

72. T. Guionnet and C. Guillemot, “ Soft and joint source-channel decoding of quasi-arithmetic codes”, Eurasip J. on Applied Signal Processing, vol. 2004, pp. , March 2004.

73. M. Grangetto, E. Magli and G. Olmo, “ Error resilient MQ coder and MAP JPEG 2000 decoding”, IEEE ICIP 2004, Singapore, Oct. 2004.

74. S. Bouchoux et al, “Implementation of JPEG2000 arithmetic decoder using dynamic reconfiguration of FPGA”, IEEE ICIP 2004, Singapore, Oct. 2004.

75. M. Dyer, D. Taubman and S. Nooshabadi, “ Improved throughput arithmetic coder for JPEG2000”, IEEE ICIP 2004, Singapore, Oct. 2004.

76. M. Grangetto, E. Magli and G. Olmo, "Reliable JPEG2000 wireless imaging by means of error-correcting MQ coder," IEEE ICME, pp. , Taipei, Taiwan, June 2004.


77. S.-M. Lei, “On the Finite-Precision Implementation of Arithmetic Codes,” Journal of Visual Communication and Image Representation, Vol. 6, No. 1, pp. 80-88, March 1995.

78. Kun-Bin Lee, Jih-Yiing Lin, and Chein-Wei Jen, "A multisymbol context-based arithmetic coding architecture for MPEG-4 shape coding," IEEE Trans. CSVT, vol. 15, pp. 283-295, Feb. 2005.

79. K-K Ong et al, "A high throughput context-based adaptive arithmetic codec for JPEG2000," IEEE ISCAS 2002, Scottsdale, AZ, May 2002.

80. H. Shojania and S. Sudharsanan, “ A VLSI architecture for high performance CABAC encoding”, SPIE VCIP2005, Beijing, China, July 2005.

81. L. Zhang, R. Zhang and J. Zhou, “ Algorithm of incorporating error detection into H.264 CABAC”, SPIE-VCIP2005, Beijing, China, July 2005.

82. M. Grangetto, P. Cosman and G. Olmo, "Joint source/channel coding and MAP decoding of arithmetic coding," IEEE Trans. Commun., vol. 53, pp. , 2005.

83. A. Březina, J. Polec and M. Hudak, "Context based arithmetic coding of segmented images," 5th Eurasip Conf., EC-SIP-M2005, Smolenice, Slovakia, June-July 2005.

84. M. Dyer, S. Nooshabadi and D. Taubman, “Reduced latency arithmetic decoder for JPEG2000 block decoding”, IEEE ISCAS 2005, vol. , pp , Kobe, Japan, May 2005.

85. D. LeGall and A. Tabatabai, “ Subband coding of digital images using short kernel filters and arithmetic coding techniques”, IEEE ICASSP 1988, pp. 761-764, 1988.

86. T. M. Cover and J. A. Thomas, "Elements of Information Theory," John Wiley & Sons, New York, 1991.

87. E. N. Gilbert and E. F. Moore, "Variable-Length Binary Encodings," Bell Syst. Tech. J., vol. 38, pp. 933-967, 1959.

88. J. J. Rissanen, "Generalized Kraft inequality and arithmetic coding," IBM J. Res. Dev., vol. 20, pp. 198-203, 1976.

89. J. J. Rissanen and G. G. Langdon, "Arithmetic coding," IBM J. Res. Dev., vol. 23, no. 2, pp. 149-162, 1979.

90. F. Rubin, "Arithmetic stream coding using fixed precision registers," IEEE Trans. Inform. Theory, vol. 25, no. 6, pp. 672-675, 1979.

91. R. Pasco, "Source coding algorithms for fast data compression," Ph.D. thesis, Stanford University, 1976.

92. J. Chen et al, "Efficient video coding using legacy algorithmic approaches," IEEE Trans. Multimedia, accepted.


93. G. Côté et al, "H.263+: Video coding at low bit rates," IEEE Trans. CSVT, vol. 8, pp. 849-866, Nov. 1998.

94. K. N. Ngan, D. Chai and A. Mallik, "Very low rate video coding using H.263 codes," IEEE Trans. CSVT, vol. 6, pp. 308-312, June 1996.

95. Lin, Y.J. Wang and T.H. Fan, "Compaction of ordered dithered images using arithmetic coding," IEEE Trans. IP, vol. 10, pp. 797-802, May 2001.

96. ftp://ftp.ucalgary.edu/ [ftp site]

97. D. Marpe, et al., "Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard," IEEE Trans. CSVT, vol. 13, no. 7, pp. 620-636, July 2003.

98. V. Sze and M. Budagavi, "High throughput CABAC entropy coding in HEVC," IEEE Trans. CSVT, vol. 22, no. 12, pp. 1778-1791, Dec. 2012.

99. Q. Yu et al, "High-throughput and low-complexity binary arithmetic decoder based on logarithmic domain," IEEE ICIP, Quebec City, Canada, 27-30 Sept. 2015.

100. V. Sze and D. Marpe, "Entropy coding in HEVC," chapter 8 of V. Sze, M. Budagavi and G. J. Sullivan, "High Efficiency Video Coding (HEVC): Algorithms and Architectures," Springer, 2014.

101. Wikipedia, "Truncated binary encoding," 2014. [Online]. Available: http://en.wikipedia.org/wiki/Truncated_binary_encoding

102. E. Belyaev et al, "An efficient adaptive binary arithmetic coder with low memory requirement," IEEE Journal of Selected Topics in Signal Processing, vol. 7, pp. 1053-1061, Dec. 2013.
