Audio Signal Processing -- Quantization


Transcript of Audio Signal Processing -- Quantization

Page 1: Audio Signal Processing -- Quantization


Audio Signal Processing -- Quantization

Shyh-Kang Jeng
Department of Electrical Engineering/

Graduate Institute of Communication Engineering

Page 2: Audio Signal Processing -- Quantization


Overview

• Audio signals are typically continuous-time and continuous-amplitude in nature

• Sampling allows for a discrete-time representation of audio signals

• Amplitude quantization is also needed to complete the digitization process

• Quantization determines how much distortion is present in the digital signal

Page 3: Audio Signal Processing -- Quantization


Binary Numbers

• Decimal notation
  – Symbols: 0, 1, 2, 3, 4, …, 9
  – e.g., 1999 = 1·10^3 + 9·10^2 + 9·10^1 + 9·10^0

• Binary notation
  – Symbols: 0, 1
  – e.g., [01100100] = 0·2^7 + 1·2^6 + 1·2^5 + 0·2^4 + 0·2^3 + 1·2^2 + 0·2^1 + 0·2^0 = 100
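A quick sanity check of the two expansions above, evaluated directly (a small Python snippet of my own, not part of the slides):

```python
# Evaluate the decimal and binary expansions shown above.
decimal_value = 1*10**3 + 9*10**2 + 9*10**1 + 9*10**0
binary_value = 0*2**7 + 1*2**6 + 1*2**5 + 0*2**4 + 0*2**3 + 1*2**2 + 0*2**1 + 0*2**0

print(decimal_value)        # 1999
print(binary_value)         # 100
print(format(100, '08b'))   # '01100100', the 8-bit pattern above
```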

Page 4: Audio Signal Processing -- Quantization


Negative Numbers

• Folded binary
  – Use the highest order bit as an indicator of sign

• Two's complement
  – Follows the highest positive number with the lowest negative
  – e.g., 3 bits: [011] = 3, [100] = -4

• We use folded binary notation when we need to represent negative numbers
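To make the difference concrete, the sketch below (my own illustration in Python; the helper names are mine) decodes the same 3-bit patterns under both conventions:

```python
R = 3  # word length in bits

def twos_complement(bits):
    """Interpret an R-bit string as a two's-complement integer."""
    value = int(bits, 2)
    return value - (1 << R) if bits[0] == '1' else value

def folded_binary(bits):
    """Interpret an R-bit string as a sign bit followed by a magnitude."""
    magnitude = int(bits[1:], 2)
    return -magnitude if bits[0] == '1' else magnitude

print(twos_complement('011'), twos_complement('100'))  # 3 -4
print(folded_binary('011'), folded_binary('100'))      # 3  0 (a "negative zero" code)
```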

Page 5: Audio Signal Processing -- Quantization


Quantization Mapping

• Quantization, Q(x)
  – Continuous values → binary codes

• Dequantization, Q⁻¹(x)
  – Binary codes → continuous values

Page 6: Audio Signal Processing -- Quantization


Quantization Mapping (cont.)

• Symmetric quantizers
  – Equal number of levels (codes) for positive and negative numbers

• Midrise and midtread quantizers

Page 7: Audio Signal Processing -- Quantization


Uniform Quantization

• Equally sized ranges of input amplitude are mapped onto each code
• Midrise or midtread
• Maximum non-overload input value: x_max
• Size of input range per R-bit code, Δ
  – Midrise: Δ = 2·x_max / 2^R
  – Midtread: Δ = 2·x_max / (2^R - 1)
• Let x_max = 1
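For the R = 2, x_max = 1 case used on the next few slides, the step sizes work out as below (a small Python check; the variable names are mine):

```python
x_max = 1.0
R = 2

delta_midrise = 2 * x_max / 2**R          # 0.5  -> output levels +-1/4, +-3/4
delta_midtread = 2 * x_max / (2**R - 1)   # 2/3  -> output levels -2/3, 0, +2/3
print(delta_midrise, delta_midtread)
```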

Page 8: Audio Signal Processing -- Quantization


2-Bit Uniform Midrise Quantizer

(Figure: input/output staircase of the 2-bit uniform midrise quantizer for inputs in [-1.0, 1.0]; codes 00, 01, 10, 11 map to output levels 1/4, 3/4, -1/4, -3/4.)

Page 9: Audio Signal Processing -- Quantization


Uniform Midrise Quantizer

• Quantize: code(number) = [s][|code|]

• Dequantize: number(code) = sign*|number|

s = 0 if number ≥ 0; s = 1 if number < 0

|code| = 2^(R-1) - 1 when |number| ≥ 1; int(2^(R-1)·|number|) elsewhere

|number| = (|code| + 0.5) / 2^(R-1)

sign = +1 if s = 0; -1 if s = 1
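A direct transcription of these midrise rules into Python may help; this is an illustrative sketch (the function names are mine, and the code word is kept as an (s, |code|) pair rather than packed bits):

```python
def midrise_quantize(number, R):
    s = 0 if number >= 0 else 1
    a = abs(number)
    if a >= 1:
        code = 2**(R - 1) - 1           # overload: clip to the largest code
    else:
        code = int(2**(R - 1) * a)      # round-off region
    return s, code

def midrise_dequantize(s, code, R):
    sign = 1 if s == 0 else -1
    return sign * (code + 0.5) / 2**(R - 1)

# 2-bit example matching the staircase on the previous slide:
for x in (-0.9, -0.3, 0.3, 0.9):
    s, c = midrise_quantize(x, 2)
    print(x, (s, c), midrise_dequantize(s, c, 2))
# -0.9 -> (1, 1) -> -0.75    -0.3 -> (1, 0) -> -0.25
#  0.3 -> (0, 0) ->  0.25     0.9 -> (0, 1) ->  0.75
```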

Page 10: Audio Signal Processing -- Quantization


2-Bit Uniform Midtread Quantizer

(Figure: input/output staircase of the 2-bit uniform midtread quantizer for inputs in [-1.0, 1.0]; code 01 maps to 2/3, codes 00 and 10 both map to 0.0, and code 11 maps to -2/3.)

Page 11: Audio Signal Processing -- Quantization


Uniform Midtread Quantizer

• Quantize: code(number) = [s][|code|]

• Dequantize: number(code) = sign*|number|

s = 0 if number ≥ 0; s = 1 if number < 0

|code| = 2^(R-1) - 1 when |number| ≥ 1; int(((2^R - 1)·|number| + 1) / 2) elsewhere

sign = +1 if s = 0; -1 if s = 1

|number| = 2·|code| / (2^R - 1)
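The midtread rules can be transcribed the same way (again an illustrative sketch with my own function names):

```python
def midtread_quantize(number, R):
    s = 0 if number >= 0 else 1
    a = abs(number)
    if a >= 1:
        code = 2**(R - 1) - 1                   # overload: clip to the largest code
    else:
        code = int(((2**R - 1) * a + 1) / 2)    # round to the nearest level
    return s, code

def midtread_dequantize(s, code, R):
    sign = 1 if s == 0 else -1
    return sign * 2 * code / (2**R - 1)

# 2-bit example: output levels are -2/3, 0, +2/3; code 0 is shared by
# s = 0 and s = 1 (the 00/10 pair on the previous slide).
for x in (-0.8, -0.2, 0.2, 0.8):
    s, c = midtread_quantize(x, 2)
    print(x, (s, c), round(midtread_dequantize(s, c, 2), 3))
```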

Page 12: Audio Signal Processing -- Quantization


Two Quantization Methods

• Uniform quantization
  – Constant limit on absolute round-off error (≤ Δ/2)
  – Poor SNR performance at low input power

• Floating point quantization
  – Some bits for an exponent
  – The rest for a mantissa
  – SNR is determined by the number of mantissa bits and remains roughly constant
  – Gives up accuracy for high signals but gains much greater accuracy for low signals

Page 13: Audio Signal Processing -- Quantization


Floating Point Quantization

• Number of scale factor (exponent) bits: Rs
• Number of mantissa bits: Rm
• Low inputs
  – Roughly equivalent to uniform quantization with R = Rm + 2^Rs - 1
• High inputs
  – Roughly equivalent to uniform quantization with R = Rm + 1
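As a worked case using the Rs = 3, Rm = 5 example on the next slide: low-level inputs see roughly R = 5 + 2^3 - 1 = 12 bits of equivalent uniform quantization, while high-level inputs see only about R = 5 + 1 = 6 bits.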

Page 14: Audio Signal Processing -- Quantization


Floating Point Quantization Example

• Rs = 3, Rm = 5

Input → (scale, mantissa) → dequantized output:

[s0000000abcd] → scale=[000], mant=[sabcd] → [s0000000abcd]
[s0000001abcd] → scale=[001], mant=[sabcd] → [s0000001abcd]
[s000001abcde] → scale=[010], mant=[sabcd] → [s000001abcd1]
[s1abcdefghij] → scale=[111], mant=[sabcd] → [s1abcd100000]
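One way to read the table: the scale factor records how far the leading 1 of the magnitude has moved up, the mantissa keeps the sign plus the next 4 bits, and dequantization replaces the dropped low-order bits with a 1 followed by 0s. The Python sketch below (my own illustration of these bit patterns, handling the 11 magnitude bits with the sign kept separately; it is not any standard's format) mirrors the rows above:

```python
def fp_quantize(mag_bits):
    """mag_bits: 11-bit magnitude string; returns (scale, mantissa) bit strings."""
    leading_zeros = min(7, len(mag_bits) - len(mag_bits.lstrip('0')))
    scale = 7 - leading_zeros
    if scale == 0:
        mant = mag_bits[7:]                                   # no hidden leading 1
    else:
        mant = mag_bits[leading_zeros + 1:leading_zeros + 5]  # the 4 bits after the leading 1
    return format(scale, '03b'), mant

def fp_dequantize(scale_bits, mant):
    scale = int(scale_bits, 2)
    if scale == 0:
        return '0' * 7 + mant
    dropped = scale - 1                          # low-order bits lost at this scale
    fill = '1' + '0' * (dropped - 1) if dropped else ''
    return '0' * (7 - scale) + '1' + mant + fill

print(fp_quantize('00000110110'))    # ('010', '1011')
print(fp_dequantize('010', '1011'))  # '00000110111': the dropped bit comes back as the 1-fill
```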

Page 15: Audio Signal Processing -- Quantization


Quantization Error

• Main source of coder error
• Characterized by ⟨q²⟩, the average error power
• A better measure: SNR = 10·log10(⟨x²⟩ / ⟨q²⟩)
• Does not reflect auditory perception
• Cannot describe how perceivable the errors are
• A satisfactory objective error measure that reflects auditory perception does not exist
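Measured on sample arrays, this definition is a one-liner; a minimal sketch, assuming NumPy:

```python
import numpy as np

def snr_db(x, x_hat):
    """SNR in dB between a signal x and its quantized version x_hat."""
    q = x - x_hat                    # quantization error signal
    return 10 * np.log10(np.mean(x**2) / np.mean(q**2))
```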

Page 16: Audio Signal Processing -- Quantization


Quantization Error (cont.)

• Round-off error

• Overload error

Page 17: Audio Signal Processing -- Quantization


Round-Off Error

• Comes from mapping ranges of input amplitudes onto single codes
• Worse when the range of input amplitudes mapped onto a code is wider
• Assume that the error follows a uniform distribution
• Average error power

  ⟨q²⟩ = (1/Δ) ∫_{-Δ/2}^{+Δ/2} q² dq = Δ²/12

• For a uniform quantizer

  ⟨q²⟩ = x_max² / (3·2^(2R))
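A quick Monte Carlo check of the Δ²/12 result (my own sketch, assuming NumPy):

```python
import numpy as np

delta = 0.5
q = np.random.uniform(-delta / 2, delta / 2, 1_000_000)  # uniform round-off error
print(np.mean(q**2), delta**2 / 12)                       # both close to 0.0208
```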

Page 18: Audio Signal Processing -- Quantization


Round-Off Error (cont.)

SNR = 10·log10(⟨x²⟩ / ⟨q²⟩)
    = 10·log10(⟨x²⟩ / x_max²) + 20·R·log10(2) + 10·log10(3)
    = 10·log10(⟨x²⟩ / x_max²) + 6.021·R + 4.771 (dB)
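As a worked case (not on the slide): a full-scale sine wave has ⟨x²⟩/x_max² = 1/2, so a 16-bit uniform quantizer gives roughly -3.01 + 6.021·16 + 4.771 ≈ 98 dB, i.e. about 6 dB of SNR per bit.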

(Figure: SNR in dB versus input power in dB for 4-bit, 8-bit, and 16-bit uniform quantizers.)

Page 19: Audio Signal Processing -- Quantization


Overload Error

• Comes from signals where |x(t)| > x_max
• Depends on the probability distribution of signal values
• Reduced for high x_max
• High x_max implies wide levels and therefore high round-off error
• Requires a balance between the need to reduce both errors

Page 20: Audio Signal Processing -- Quantization


Entropy

• A measure of the uncertainty about the next code to come out of a coder
• Very low when we are pretty sure what code will come out
• High when we have little idea which symbol is coming
• Entropy = Σ_n p_n·log2(1/p_n)
• Shannon: this entropy equals the lowest possible bits per sample a coder could produce for this signal
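The definition translates directly into Python (my own helper, not from the slides):

```python
import math

def entropy(probabilities):
    """Entropy in bits per symbol; zero-probability symbols contribute nothing."""
    return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))                 # 1.0 bit
print(entropy([0.75, 0.1, 0.075, 0.075]))  # ~1.2 bits, the Huffman example below
```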

Page 21: Audio Signal Processing -- Quantization


Entropy with 2-Code Symbols

• Entropy = p·log2(1/p) + (1 - p)·log2(1/(1 - p))

• When the entropy is less than 1 bit, there exist other lower bit rate ways to encode the codes than just using one bit for each code symbol

(Figure: entropy of a two-symbol source versus p, rising from 0 at p = 0 to a maximum of 1 bit at p = 0.5 and falling back to 0 at p = 1.)

Page 22: Audio Signal Processing -- Quantization


Entropy with N-Code Symbols

• Entropy = Σ_n p_n·log2(1/p_n)
• Equals zero when probability equals 1
• Any symbol with probability zero does not contribute to entropy
• Maximum when all probabilities are equal
• For 2^R equal-probability code symbols

  Entropy = 2^R·(2^(-R)·log2(2^R)) = R

• Optimal coders only allocate bits to differentiate symbols with near-equal probabilities

Page 23: Audio Signal Processing -- Quantization


Huffman Coding

• Create code symbols based on the probability of each symbol's occurrence
• Code length is variable
• Shorter codes for common symbols
• Longer codes for rare symbols
• Shannon: Entropy ≤ R_Huffman ≤ Entropy + 1
• Reduces bits over fixed-bit coding, if the symbols are not evenly distributed

Page 24: Audio Signal Processing -- Quantization


Huffman Coding (cont.)

• Depends on the probabilities of each symbol
• Created by recursively allocating bits to distinguish between the lowest probability symbols until all symbols are accounted for
• To decode, we need to know how the bits were allocated
  – Recreate the allocation given the probabilities
  – Pass the allocation with the data
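The recursive allocation described above is usually implemented with a priority queue; the sketch below is my own compact Python version using heapq (not the slides' notation). It reproduces the code lengths of the 4-symbol example on the next slide, up to a relabeling of the 0/1 branches:

```python
import heapq
from itertools import count

def huffman_code(probabilities):
    """probabilities: dict symbol -> probability; returns dict symbol -> bit string."""
    tie = count()  # tie-breaker so equal probabilities never compare the dicts
    heap = [(p, next(tie), {sym: ''}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # the two lowest-probability subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + b for s, b in c0.items()}
        merged.update({s: '1' + b for s, b in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

codes = huffman_code({'00': 0.75, '01': 0.1, '10': 0.075, '11': 0.075})
print(codes)  # code lengths 1, 2, 3, 3 bits, matching 0 / 10 / 110 / 111
```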

Page 25: Audio Signal Processing -- Quantization


Example of Huffman Coding

• A 4-symbol case
  – Symbol:      00    01    10    11
  – Probability: 0.75  0.1   0.075 0.075

• Results
  – Symbol: 00  01  10   11
  – Code:   0   10  110  111
  – R = 0.75·1 + 0.1·2 + 0.15·3 = 1.4 bits

(Figure: the corresponding Huffman tree, with 0/1 labels on its branches.)

Page 26: Audio Signal Processing -- Quantization


Example (cont.)

• Normally 2 bits/sample for 4 symbols

• Huffman coding requires 1.4 bits/sample on average

• Close to the minimum possible, since Entropy ≈ 1.2 bits

• 0 is a “comma code” here
  – Example: [01101011011110]

Page 27: Audio Signal Processing -- Quantization


Another Example

• A 4-symbol case
  – Symbol:      00    01    10    11
  – Probability: 0.25  0.25  0.25  0.25

• Results
  – Symbol: 00  01  10  11
  – Code:   00  01  10  11

• Adds nothing when symbol probabilities are roughly equal

(Figure: the corresponding balanced Huffman tree.)