SOLUTIONS MANUAL FOR COMPUTER AND COMMUNICATION NETWORKS
Nader F. Mir
Upper Saddle River, NJ . Boston . Indianapolis . San Francisco . New York . Toronto . Montreal . London . Munich . Paris . Madrid . Capetown . Sydney . Tokyo . Singapore . Mexico City
The author and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of anykind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damagesin connection with or arising out of the use of the information or programs contained herein.
Visit us on the Web: www.prenhallprofessional.com
Copyright © 2007 by Pearson Education, Inc.
This work is protected by United States copyright laws and is provided solely for the use of instructors in teaching their courses and assessing student learning. Dissemination or sale of any part of this work (including on the World Wide Web) will destroy the integrity of the work and is not permitted. The work and materials from it should never be made available to students except by instructors using the accompanying text in their classes. All recipients of this work are expected to abide by these restrictions and to honor the intended pedagogical purposes and the needs of other instructors who rely on these materials.
ISBN 0-13-234570-6
First release, March 2007
Contents
Preface ii
0.1 How to Obtain Errata of Text-Book . . . . . . . . . . . . . . iii
0.2 Errors in This Solution Manual . . . . . . . . . . . . . . . . . iii
0.3 How to Obtain an Updated Solution Manual . . . . . . . . . iii
0.4 How to Contact Author . . . . . . . . . . . . . . . . . . . . . iv
About the Author v
I Fundamental Concepts 1
1 Packet-Switched Networks 3
2 Foundation of Networking Protocols 9
3 Networking Devices 15
4 Data Links and Transmission 21
5 Local-Area Networks and Networks of LANs 29
6 Wireless Networks and Mobile IP 35
7 Routing and Inter-Networking 41
8 Transport and End-to-End Protocols 51
9 Applications and Network Management 57
10 Network Security 67
II Advanced Concepts 73
11 Packet Queues and Delay Analysis 75
12 Quality-of-Service and Resource Allocation 89
13 Networks in Switch Fabrics 101
14 Optical Switches and Networks, and WDM 121
15 Multicasting Techniques and Protocols 123
16 VPNs, Tunneling, and Overlay Networks 133
17 Compression of Digital Voice and Video 135
18 VoIP and Multimedia Networking 149
19 Mobile Ad-Hoc Networks 155
20 Wireless Sensor Networks 157
Preface
An updated version of the solution manual is hereby provided to instructors.
For any problem marked with "N/A," the solution will be provided in an
upcoming version. Please check with the publisher or the author from time
to time to obtain the latest version of the solution manual. For effective
educational purposes, instructors are kindly requested not to allow any
student to access this solution manual.
0.1 How to Obtain Errata of Text-Book
The errata for the textbook, Edition 1, are now available. Please contact the
author directly at [email protected] for a copy.
0.2 Errors in This Solution Manual
If you find any error in this solution manual, please inform the author
directly at [email protected].
0.3 How to Obtain an Updated Solution Manual
Please check the web page of the textbook on the Prentice Hall site from
time to time and click on the "Instructors" link to obtain the latest
version of this solution manual.
0.4 How to Contact Author
Please feel free to send any feedback on the textbook to the Department of
Electrical Engineering, San Jose State University, San Jose, CA 95192,
U.S.A., or via email at [email protected]. The preparation of this version
of the solution manual took years, and the manual may still contain some
errors. Please also feel free to send me any feedback on this solution manual.
I would love to hear from you, especially if you have suggestions for
improving this book for its next editions. I will carefully read all review
comments. You can find out more about me at: http://www.engr.sjsu.edu/nmir
I hope that you enjoy the text and that it conveys some of my enthusiasm
for computer communications.
Nader F. Mir
San Jose, California
About the Author
Nader F. Mir received the B.Sc. degree (with honors) in electrical and
computer engineering in 1985, and the M.Sc. and Ph.D. degrees, both in
electrical engineering, from Washington University in St. Louis, MO, in
1990 and 1994, respectively.
He is currently a professor and the department associate chairman of
Electrical Engineering at San Jose State University, California. He is also
the director of the MSE Program in Optical Sensors and Networks for
Lockheed Martin Space Systems. Previously, he was an associate professor at
the same school and an assistant professor at the University of Kentucky in
Lexington. From 1994 to 1996, he was a research scientist at the Advanced
Telecommunications Institute, Stevens Institute of Technology, in New
Jersey, working on the design of advanced telecommunication networks. From
1990 to 1994, he was with the Computer and Communications Research Center
at Washington University in St. Louis, where he worked as a research
assistant on the design and analysis of high-speed switching-systems projects.
His research interests are: analysis of computer communication networks,
design and analysis of switching systems, network design for wireless ad-
hoc and sensor systems, and applications of digital integrated circuits in
computer communications.
He is a senior member of the IEEE and has served as a member of the
Technical Program Committees and Steering Committees of a number of major
IEEE networking conferences, such as WCNC, GLOBECOM, and ICC. Dr. Mir has
published numerous refereed technical journal and conference papers, all in
the field of communications and networking. He has published a book on
video communication engineering, and another textbook published by Prentice
Hall Publishing Co. entitled "Computer & Communication Networks, Design and
Analysis".
Dr. Mir has received a number of prestigious national and university
awards, including the university teaching recognition award and the
research excellence award. He is also the recipient of the 2004 IASTED
Outstanding Tutorial Presentation award.
Currently, he holds several journal editorial positions: Editorial Board
Member of the International Journal of Internet Technology and Secured
Transactions, Editor of the Journal of Computing and Information
Technology, and Associate Editor of IEEE Communications Magazine.
Part I
Fundamental Concepts
Chapter 1
Packet-Switched Networks
1. Total distance = ℓ = 2√(3,000^2 + 10,000^2) = 20,880.61 km.
Speed = c = 2.3 × 10^8 m/s.
(a) Propagation delay = tp = ℓ/c = (20,880.61 km)/(2.3 × 10^8 m/s) = 90.8 ms
(b) Number of bits in transit during the propagation delay
= (90.8 ms) × (100 × 10^6 b/s)
= 9.08 Mb
(c) 10 bytes = 80 bits and
2.5 bytes = 20 bits, then:
total length = 80 + 20 = 100 bits
T = (100 b)/(100 × 10^6 b/s)
= 1 μs
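The arithmetic of Exercise 1 can be sketched in a few lines of Python. This is an illustrative check only; the constants are the ones quoted in the solution, and the helper name is ours:

```python
# Check of Exercise 1: propagation delay, bits in transit, and
# transmission time of a 100-bit frame on a 100 Mb/s link.

def propagation_delay(distance_m: float, speed_mps: float) -> float:
    """Time for a signal to travel distance_m at speed_mps (seconds)."""
    return distance_m / speed_mps

distance = 20_880.61e3   # total distance l, meters
speed = 2.3e8            # propagation speed c, m/s
rate = 100e6             # link rate, b/s

tp = propagation_delay(distance, speed)   # (a) ~90.8 ms
bits_in_transit = tp * rate               # (b) ~9.08 Mb
t_frame = (80 + 20) / rate                # (c) 100-bit frame -> 1 us

print(tp, bits_in_transit, t_frame)
```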
2. Total distance = ℓ = 2√((30/1,000)^2 + 10,000^2) ≈ 20,000 km.
Speed = c = 2.3 × 10^8 m/s.
(a) tp = ℓ/c = (20,000 km)/(2.3 × 10^8 m/s) = 87 ms
(b) 100 Mb/s × 0.087 s = 8.7 Mb
(c) Data: (10 B × 8 b/B)/(100 Mb/s) + tp = 0.8 μs + 0.087 s
Ack: (2.5 B × 8 b/B)/(100 Mb/s) + tp = 0.2 μs + 0.087 s
Total time ≈ 1 μs (transfer) + 0.174 s (propagation)
3. Assuming a propagation speed of 2.3 × 10^8 m/s:
(a) Total delay: D = [np + (nh − 2)]tf + (nh − 1)tp + nh·tr
(b) tp1 = (50 miles × 1,600 m/mile)/(2.3 × 10^8 m/s) = 0.35 ms
tp2 = (400 miles × 1,600 m/mile)/(2.3 × 10^8 m/s) = 2.8 ms
Number of packets = np = 200 MB/10 KB = 20,000
tf = (10,040 B/packet × 8 b/B)/(100 Mb/s) = 0.8 ms/packet
D = [20,000 + (5 − 2)](0.8 ms) + [(3 − 1)(0.35 ms) + (3 − 1)(2.8 ms)] + 5 × (0.2 ms)
≈ 16.6 s
4. Dp = [np + (nh − 2)]tf1 + nh·tr1 + (nh − 1)tp
Dc = 3([1 + (nh − 2)]tf2 + nh·tr2 + (nh − 1)tp)
Dt = Dp + Dc = (np + nh − 2)tf1 + 3(nh − 1)tf2 + nh(tr1 + 3·tr2) + 4(nh − 1)tp
5. Number of packets = np = 200 MB/10 KB = 20,000
Dt = Dp + Dc
Dc = d_conn-req + d_conn-accept + d_conn-release
(a) Here, the problem asks that tr be defined as the processing time
for each packet. Therefore, tr = 20,000 × 0.2 = 4,000 s
d_conn-req = d_conn-accept = d_conn-release
= [np + (nh − 2)]tf + nh·tr + (nh − 1)tp
= [1 + (5 − 2)] × (500 b/packet)/(100 Mb/s) + 3 × 4,000 s + 4.84 ms
≈ 12,000 s
(b) Same as Part (a).
(c) Dt = Dp + Dc = 17 + 3 × 12,000 = 36,017 s
6. s = 10^9 b/s
nh = 10 nodes
tr1 = tr2 = 0.1 s = tr
Data forms two packets:
(9,960 + 40) bytes for packet 1
(2,040 + 40) bytes for packet 2
tf1,packet1 = (10,000 B × 8 b/B)/(10^9 b/s) = 8 × 10^−5 s
tf1,packet2 = (2,080 B × 8 b/B)/(10^9 b/s) = 16.64 × 10^−6 s
tf2 = transfer time for control packets = (500 b)/(10^9 b/s) = 5 × 10^−7 s
tp = ℓ/c = (500 miles × 1.61 × 10^3 m/mile)/(2.3 × 10^8 m/s) = 3.5 × 10^−3 s
(a) Request + accept time:
t1 + t2 = 2([np + (nh − 1)]tf2 + (nh − 1)tp + nh·tr) = 2.06 s
(b) t3 = (1/2)(t1 + t2) = 1.03 s
(c) Dt = Dp + Dc
Dp = Dp,packet1 + Dp,packet2
= [np + (nh − 2)]tf1,packet1 + nh·tr1 + (nh − 1)tp + [np + (nh − 2)]tf1,packet2 + nh·tr1 + (nh − 1)tp
Dc = t1 + t2 + t3
Dt = 4.1 s
7. d + h = 10,000 b,
ρd = 72%,
h/d = 0.04,
s = 100 Mb/s
(a) h = 0.04d, then: d/(d + h) = 0.96.
Since ρd = ρ·d/(h + d), then 0.72 = 0.96ρ
⇒ ρ ≈ 0.74
(b) μ = s/(h + d) = (100 × 10^6 b/s)/(10 × 10^3 b) = 10,000 packets/s
(c) λ = ρμ = 0.74 × 10,000 = 7,400 packets/s
D̄ = 1/(10,000 − 7,400) = 0.38 ms
(d) D̄_opt = (h/s)(√ρd/(1 − √ρd))^2
With d + h = 10,000 and h/d = 0.04 ⇒ h = 384 b
D̄_opt = 0.12 ms
8. s = 100 Mb/s
ρ = 80%
(a) ρ = λ/μ
⇒ 0.8 = 8,000/μ
⇒ μ = 10,000 packets/s
μ = s/(h + d) ⇒ 10,000 = (100 × 10^6)/(h + d)
⇒ h + d = 10,000 b
(b) ρh = 0.008 and ρh = ρ·h/(d + h)
⇒ d + h = 100h
⇒ h = 100 b, and d = 9,900 b.
(c) ρh = 0.008
ρd = ρ − ρh = 0.8 − 0.008
⇒ ρd = 0.792
d_opt = h(√ρd/(1 − √ρd)) = 809 bits
(d + h)_opt = h + d_opt = 100 + 809
⇒ (d + h)_opt = 909 b
(d) D̄_opt = (h/s)(1/(1 − √ρd))^2 ⇒ D̄_opt = 8.2 × 10^−5 s

Figure 1.1: Signaling delay in a connection-oriented packet-switched
environment. (Timing diagram of the Connection Request, Connection Accept,
Data, and Connection Release exchanges among nodes A, B, C, and D; drawing
omitted.)
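The optimal-size expressions in Exercise 8(c)-(d) are easy to verify numerically. A small sketch, using the values quoted in the solution (h = 100 b from part (b), ρd = 0.792 from part (c)):

```python
# Check of Exercise 8(c)-(d): optimal data-field size and minimum
# average delay from the quoted closed-form expressions.
import math

s = 100e6        # link speed, b/s
h = 100.0        # header size, bits
rho_d = 0.792    # data-only utilization

d_opt = h * math.sqrt(rho_d) / (1 - math.sqrt(rho_d))    # optimal data size, bits
packet_opt = h + d_opt                                   # (d + h)_opt
delay_opt = (h / s) * (1 / (1 - math.sqrt(rho_d))) ** 2  # minimum mean delay, s

print(round(d_opt))       # ≈ 809 bits
print(round(packet_opt))  # ≈ 909 bits
print(delay_opt)          # ≈ 8.26e-05 s
```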
9. See Figure 1.1.
10. D = (d + h)/(s[1 − ρd(d + h)/d])
(a) Setting ∂D/∂h = 0
gives: h_opt = d(1 − 2ρd)/(2ρd)
(b) Queueing delay
Chapter 2
Foundation of Networking Protocols
1. (a) Address:
127.156.28.31 =
0111 1111 . 1001 1100 . 0001 1100 . 0001 1111
Mask:
255.255.255.0 =
1111 1111 . 1111 1111 . 1111 1111 . 0000 0000
Class A
Subnet ID: 1001 1100 0001 1100=39964
(b) Address:
150.156.23.14 =
1001 0110 . 1001 1100 . 0001 0111 . 0000 1110
Mask:
255.255.255.128 =
1111 1111 . 1111 1111 . 1111 1111 . 1000 0000
Class B
Subnet ID: 000101110 = 46
(c) Address:
150.18.23.101 =
1001 0110 . 0001 0010 . 0001 0111 . 0110 0101
Mask:
255.255.255.128 =
1111 1111 . 1111 1111 . 1111 1111 . 1000 0000
Class B
Subnet ID: 000101110 = 46
2. (a) IP: 1010 1101 . 1010 1000 . 0001 1100 . 0010 1101
Mask: 1111 1111 . 1111 1111 . 1111 1111 . 0000 0000
Class B
Subnet ID=00011100=28
(b) A packet with IP address 188.145.23.1 using mask pattern 255.255.255.128
IP: 1011 1100 . 1001 0001 . 0001 0111 . 0000 0001
Mask: 1111 1111 . 1111 1111 . 1111 1111 . 1000 0000
Class B
Subnet ID=000101110=46
(c) A packet with IP address 139.189.91.190 using a mask pattern
255.255.255.128
IP: 1000 1011 . 1011 1101 . 0101 1011 . 1011 1110
Mask: 1111 1111 . 1111 1111 . 1111 1111 . 1000 0000
Class B
Subnet ID=010110111=183
3. IP1: 1001 0110 . 0110 0001 . 0001 1100 . 0000 0000
IP2: 1001 0110 . 0110 0001 . 0001 1101 . 0000 0000
IP3: 1001 0110 . 0110 0001 . 0001 1110 . 0000 0000
New IP (CIDR): 150.97.28.0/22
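The aggregation in Exercise 3 can be double-checked with Python's standard `ipaddress` module: shrink the prefix until one block covers all three networks.

```python
# Check of Exercise 3: smallest CIDR block covering the three /24s.
import ipaddress

nets = [ipaddress.ip_network(n) for n in
        ("150.97.28.0/24", "150.97.29.0/24", "150.97.30.0/24")]

# Widen the first network's prefix until the block covers them all.
supernet = nets[0]
while not all(n.subnet_of(supernet) for n in nets):
    supernet = supernet.supernet()

print(supernet)   # 150.97.28.0/22
```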
4. Address:
141.33.11.0/22 = 1000 1101 . 0010 0001 . 0000 1011 . 0000 0000
141.33.12.0/22 = 1000 1101 . 0010 0001 . 0000 1100 . 0000 0000
141.33.13.0/22 = 1000 1101 . 0010 0001 . 0000 1101 . 0000 0000
141.33.8.0/21
5. (a) 191.168.6.0
1011 1111 . 1010 1000 . 0000 0110 . 0000 0000
1111 1111 . 1111 1111 . 1111 1110 . 0000 0000
Result:
1011 1111 . 1010 1000 . 0000 0110 . 0000 0000 = 191.168.6.0/23
(b) 173.168.28.45
1010 1101 . 1010 1000 . 0001 1100 . 0010 1101
1111 1111 . 1111 1111 . 1111 1110 . 0000 0000
Result:
1010 1101 . 1010 1000 . 0001 1100 . 0000 0000 = 173.168.28.0/23
(c) 139.189.91.190
1000 1011 . 1011 1101 . 0101 1011 . 1011 1110
1111 1111 . 1111 1111 . 1111 1110 . 0000 0000
Result:
1000 1011 . 1011 1101 . 0101 1010 . 0000 0000 = 139.189.90.0/23
6. 180.19.18.3:
1011 0100 . 0001 0011 . 0001 0010 . 0000 0011
(a) 180.19.0.0/18:
1011 0100 . 0001 0011 . 0000 0000 . 0000 0000
180.19.3.0/22:
1011 0100 . 0001 0011 . 0000 0011 . 0000 0000
180.19.16.0/20:
1011 0100 . 0001 0011 . 0001 0000 . 0000 0000
(b) The longest match is 180.19.16.0/20.
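Exercise 6's longest-prefix match can be reproduced with the standard `ipaddress` module. One caveat: the entry 180.19.3.0/22 as printed has host bits set, so `strict=False` is needed to parse it as a network:

```python
# Check of Exercise 6: longest-prefix match of 180.19.18.3 against the
# three routing-table entries.
import ipaddress

addr = ipaddress.ip_address("180.19.18.3")
table = ["180.19.0.0/18", "180.19.3.0/22", "180.19.16.0/20"]

# strict=False because 180.19.3.0/22 has host bits set as printed.
matches = [ipaddress.ip_network(p, strict=False) for p in table
           if addr in ipaddress.ip_network(p, strict=False)]
best = max(matches, key=lambda n: n.prefixlen)   # longest matching prefix

print(best)   # 180.19.16.0/20
```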
7. (a) N1 L11: 1100 0011 . 0001 1001 . 0000 0000 . 0000 0000
N2 L13: 1000 0111 . 0000 1011 . 0000 0010 . 0000 0000
N3 L21: 1100 0011 . 0001 1001 . 0001 1000 . 0000 0000
N4 L23: 1100 0011 . 0001 1001 . 0000 1000 . 0000 0000
N5 L31: 0110 1111 . 0000 0101 . 0000 0000 . 0000 0000
N6 L33: 1100 0011 . 0001 1001 . 0001 0000 . 0000 0000
(b) Packet: 1100 0011 . 0001 1001 . 0001 0001 . 0000 0011
L11: 1100 0011 . 0001 1001 . 0000 0000 . 0000 0000
L12: 1100 0011 . 0001 1001 . 0001 0000 . 0000 0000
L12: 1100 0011 . 0001 1001 . 0000 1000 . 0000 0000
L13: 1000 0111 . 0000 1011 . 0000 0010 . 0000 0000
L21: 1100 0011 . 0001 1001 . 0001 1000 . 0000 0000
L22: 1100 0011 . 0001 1001 . 0001 0000 . 0000 0000
L23: 1100 0011 . 0001 1001 . 0000 1000 . 0000 0000
L31: 0110 1111 . 0000 0101 . 0000 0000 . 0000 0000
L33: 1100 0011 . 0001 1001 . 0001 0000 . 0000 0000
Answer=N6
(c) 32-21=11 b
So, 211 = 2,048 users
8. (a) The IPv4 address field has 32 bits.
Total number of IP addresses available = 2^32
Number of IP addresses per person for 620 million people
= 2^32/(620 × 10^6) = 6.9 ≈ 7
(b) The number of bits required to address 620 million people is 30, since
2^29 ≈ 536 million < 620 million < 2^30 ≈ 1,074 million
Thus, CIDR can only have 32 − 30 = 2 bits as the network ID field, as
x.x.x.x/2
(c) The IPv6 address field has 128 bits:
Total number of IP addresses available = 2^128
Number of IP addresses per person for 620 million people
= 2^128/(620 × 10^6) = 5.49 × 10^29
9. (a) 1111:2A52:111:73C2:A123:56F4:1B3C
Binary form (one 16-bit group per hex group):
0001 0001 0001 0001 : 0010 1010 0101 0010 : 0000 0001 0001 0001 :
0111 0011 1100 0010 : 1010 0001 0010 0011 : 0101 0110 1111 0100 :
0001 1011 0011 1100
(b) 2532::::FB58:909A:ABCD:0010
Binary form:
0010 0101 0011 0010 : : : : 1111 1011 0101 1000 : 1001 0000 1001 1010 :
1010 1011 1100 1101 : 0000 0000 0001 0000
(c) 2222:3333:AB01:1010:CD78:290B::1111
Binary form:
0010 0010 0010 0010 : 0011 0011 0011 0011 : 1010 1011 0000 0001 :
0001 0000 0001 0000 : 1100 1101 0111 1000 : 0010 1001 0000 1011 : :
0001 0001 0001 0001
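The group-by-group expansion in Exercise 9 is mechanical: each hex group becomes 16 bits. A small helper (the function name is ours; empty groups from `::` are kept as markers rather than expanded):

```python
# Expand each 16-bit hex group of an IPv6 address into binary, as in
# Exercise 9. "::" produces empty groups, which are left as markers.
def groups_to_binary(address: str) -> str:
    out = []
    for group in address.split(":"):
        out.append(format(int(group, 16), "016b") if group else "::")
    return " : ".join(out)

print(groups_to_binary("2222:3333:AB01:1010"))
```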
10. N/A
11. Connection setup can be greatly simplified because the VP is already
selected. Only the VC has to be chosen, with 2^16 = 64K choices.
12. Retained features:
• connection-oriented network
Lost features:
• shorter processing time
• lower header/data ratio
• ease of multiplexing
Chapter 3
Networking Devices
1. (a) Number of channels = 12 × 5 × 10 = 600
(b) Capacity = 600 × 4 kHz = 2.4 MHz
2. (First, please make a correction: 170 Kb/s must change to 160 Kb/s.)
The total output bit rate of the multiplexer is 160 Kb/s.
(a) Total bit rate from analog links
= (5 kHz + 2 kHz + 1 kHz) × 2 samples/cycle × 5 bits/sample
= 80 Kb/s
(b) Total bit rate from digital links = 160 Kb/s − 80 Kb/s = 80 Kb/s
Total bit rate per digital line = (80 Kb/s)/(4 lines) = 20 Kb/s
Pulse stuffing per digital line = 20 Kb/s − 8 Kb/s = 12 Kb/s
(c) The number of channels of the frame dedicated to each line is
proportional to the data rate of that line. Let's assign one channel
for every 10 Kb/s of data rate. Using this proportional channel
assignment, we need a total of 5 + 2 + 1 = 8 analog channels. As each
digital line requires 8 Kb/s, we can also assign one channel per digital
line. Therefore we need 4 digital channels, and one control channel:
Bits in each frame = (8 + 4 + 1) × 5 b/channel + 1 guard bit
= 66 b/frame
Frame rate = (160 × 10^3 b/s)/(66 b/frame)
= 2,424 frames/s
3. (a) Pulse stuffing = 4,800 b/s × 0.03 = 144 b/s
Number of characters = 1 (sync) + 99 (data) = 100
Synchronization bit rate = (4,800 b/s)/100 ≈ 50 b/s
Number of 150 b/s terminals
= [4,800 − (2 × 600 + 5 × 300 + 50 + 144)] b/s / (150 b/s) = 12.7 ≈ 12 terminals
(b) The number of characters is proportional to the bit rate: since a
600 b/s line needs 12 characters, a 150 b/s terminal needs 3 characters.
The frame format in terms of bits is:
2 × (12 char × 10 b/char) + 5 × (6 char × 10 b/char) + 12 × (3 char × 10 b/char) + 3 × 10 b/char + 1 × 10 b/char
= 940 b/frame
4. (a) #bits/frame (total) = 2 Mb/s × 26 μs/frame = 52 bits/frame
#bits/frame (data) = 52 bits/frame − 10 = 42 bits/frame
#channels = n = (42 bits/frame)/(6 bits/ch) = 7 ch/frame
(b) P[clipping] = Σ_{i=n}^{m−1} C(m − 1, i) p^i (1 − p)^{m−1−i}
m = 10, n = 7, p = 0.9
P[clipping] = 0.947
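The clipping probability in part (b) can be checked numerically. A sketch, with the binomial sum written out using `math.comb` (the function name is ours):

```python
# Check of Exercise 4(b): probability that an active source finds all
# n channels busy, with m - 1 other sources each active with probability p.
from math import comb

def p_clipping(m: int, n: int, p: float) -> float:
    return sum(comb(m - 1, i) * p**i * (1 - p)**(m - 1 - i)
               for i in range(n, m))

print(round(p_clipping(m=10, n=7, p=0.9), 3))   # 0.947
```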
5. (a) ρ = ta/(ta + td) = 2/8 = 25%
(b) Writing C(m, j) for the binomial coefficient:
P_{j=3} = C(m, j)(ta/td)^j / Σ_{i=0}^{n} C(m, i)(ta/td)^i
= C(8, 3)(2/6)^3 / Σ_{i=0}^{4} C(8, i)(2/6)^i ≈ 21%
(c) B = P_{j=n=4} = C(8, 4)(1/3)^4 / Σ_{i=0}^{4} C(8, i)(1/3)^i
= (70/81)/(1 + 8/3 + 28/9 + 56/27 + 70/81) ≈ 9%
(d) E[C] = Σ_{j=1}^{4} j·P_j = 1(0.275) + 2(0.320) + 3(0.213) + 4(0.089) ≈ 1.91
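The state probabilities quoted in parts (b)-(d) can be reproduced numerically. A sketch, assuming the truncated binomial-weighted distribution the solution uses (note the weighted sum in part (d) evaluates to about 1.91):

```python
# Check of Exercise 5(b)-(d): P_j = C(m,j) r^j / sum_{i=0..n} C(m,i) r^i,
# with r = ta/td = 1/3, m = 8 sources, n = 4 channels.
from math import comb

m, n, r = 8, 4, 2 / 6   # r = ta/td

denom = sum(comb(m, i) * r**i for i in range(n + 1))
P = {j: comb(m, j) * r**j / denom for j in range(n + 1)}

blocking = P[n]                                     # (c) B = P_{j=n}
mean_busy = sum(j * P[j] for j in range(1, n + 1))  # (d) E[C]

print(round(P[3], 2))      # ≈ 0.21
print(round(blocking, 3))  # ≈ 0.089
print(round(mean_busy, 2))
```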
6. (a) m = 4
n = 2
Prob[clipping] = Pc = Σ_{i=2}^{3} C(3, i) ρ^i (1 − ρ)^{3−i}
= 10.4% for ρ = 0.2
= 35.2% for ρ = 0.4
= 64.8% for ρ = 0.6
= 89.6% for ρ = 0.8
(b) m = 4
n = 3
Prob[clipping] = Pc = C(3, 3) ρ^3 (1 − ρ)^0 = ρ^3
= 0.8% for ρ = 0.2
= 6.4% for ρ = 0.4
= 21.6% for ρ = 0.6
= 51.2% for ρ = 0.8
(c) n = 4: with as many channels as sources, a channel is always
available, so P4 = 100% (no clipping occurs).
7. ρ = ta/(ta + td) = 0.9
For m = 11 and n = 10:
Pc = the clipping probability
= Σ_{i=n}^{m−1} C(m − 1, i) ρ^i (1 − ρ)^{m−1−i} = C(10, 10) ρ^10 (1 − ρ)^0 = ρ^10 ≈ 0.35
8. (1/μ)(1 − η/(ρm))
9. See Figure 3.1.

Figure 3.1: Line coding. (Natural NRZ, polar NRZ, and Manchester
waveforms for the sample bit stream; waveforms omitted.)
10. See Figure 3.2.

Figure 3.2: Modulation techniques. (ASK, FSK, and PSK waveforms omitted.)
11. (a) Assume a packet incoming at the input port of the IPP has a length
of L bits. Then, T = (d + 50)/r ⇒ setting the partial derivatives of T
with respect to d and r to zero ⇒ d_opt, r_opt.
Therefore, ways to optimize the transmission delay T are the following:
• Increase the transmission rate (r) by reducing the clock cycle time of
the CPU.
• Define the value of d to be equal to the highest-probability packet
length (L).
(b) For example, if the switch fabric has 5 stages of routing in its
internal network, the processing delay mostly depends on the switching
time of the AND gates on the fabric. Assume this switch fabric uses CMOS
transistors, which are among the slowest switching-transistor
technologies. Assuming a 50-80 ns switching time for a CMOS AND gate,
the total propagation delay in this switch fabric = 80 ns × 5 stages =
0.4 μs. On the other hand, the delay in the IPP (D) mostly depends on
the packet fragmentation and encapsulation delay. A typical value of
this delay is about tens or hundreds of milliseconds for a 512-byte
packet.
Therefore, the processing delay in the switch fabric is not significant
compared to the delay in the IPP.
12. N/A
Chapter 4
Data Links and Transmission
1. Tprop = (5,000 × 10^3 m)/(3 × 10^8 m/s) = 16.7 ms
Total size = (500 pages)(1,000 char/page)(8 bits/char) = 4 Mb
(a) T = 16.7 ms + 4 Mb/(64 Kb/s) = 62.51 s
(b) T = 16.7 ms + 4 Mb/(620 Mb/s) = 23.15 ms
(c) With two million volumes of books:
Total size = 4 Mb × 2 × 10^6 = 8,000 Gb
i. T = 16.7 ms + 8,000 Gb/(64 Kb/s) = 1.25 × 10^8 s ≈ 4 years
ii. T = 16.7 ms + 8,000 Gb/(620 Mb/s) = 12,903.2 s ≈ 3.6 hours
2. N/A
3. (a) See Figure 4.1 (a). CRC-12: X^12 + X^11 + X^3 + X^2 + X + 1
Rule of hardware: for each existing term except the first (in this case
X^12), assign an XOR gate followed by a 1-bit register. For each
non-existing term, assign a 1-bit register alone. Once all bits of the
data (D, 0) have moved in completely, the contents of the registers show
the remainder of the division process.
(b) See Figure 4.1 (b). CRC-16: X^16 + X^15 + X^2 + 1

Figure 4.1: Answer to exercise. (Shift-register circuits for (a) CRC-12,
with register stages 0-11, and (b) CRC-16, with register stages 0-15;
drawings omitted.)
4. (a) Dividend = X^10 + X^8 + X^6 + X^5 + X^4
Divisor = X^4 + X
(b) If dividend = X^10 + X^8 + X^6 + X^5 + X^4,
and divisor = X^4 + X,
then quotient = X^6 + X^4 − X^3 + X^2 + 2
and remainder = −X^3 − 2X
5. The hardware is shown in Figure 4.2.

Figure 4.2: Contents of the four shift registers. (A chain of four 1-bit
shift registers, one per power of x from 0 to 3, with the dividend
10101110000 entering serially; drawing omitted.)

If we shift in D, 0 = 1010111,0000 with
G = 10010 ⇒ X^4 + X,
the final contents of the shift registers after the step-by-step
implementation of (D,0)/G in modulo-2 arithmetic show CRC = 0 0 0 1
(MSB at right):

Bits of D, 0 left to shift in    Shift registers' contents
1010111,0000                     0 0 0 0
010111,0000                      1 0 0 0
10111,0000                       0 1 0 0
0111,0000                        1 0 1 0
111,0000                         0 1 0 1
11,0000                          1 1 1 0
1,0000                           1 1 1 1
0000                             1 0 1 1
000                              0 0 0 1
00                               0 1 0 0
0                                0 0 1 0
-                                0 0 0 1

If we shift in D,CRC = 1010111,1000 with
G = 10010 ⇒ X^4 + X,
the final contents of the shift registers after the step-by-step
implementation of (D,CRC)/G in modulo-2 arithmetic show 0 0 0 0,
indicating no errors:

Bits of D,CRC left to shift in   Shift registers' contents
1010111,1000                     0 0 0 0
010111,1000                      1 0 0 0
10111,1000                       0 1 0 0
0111,1000                        1 0 1 0
111,1000                         0 1 0 1
11,1000                          1 1 1 0
1,1000                           1 1 1 1
1000                             1 0 1 1
000                              1 0 0 1
00                               0 0 0 0
0                                0 0 0 0
-                                0 0 0 0
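The register traces above can be reproduced in a few lines. A sketch of the division circuit (register layout and function name are ours; the taps come from G = 10010, i.e. X^4 + X):

```python
# Simulation of Exercise 5's division circuit: a 4-bit shift register
# dividing by G(x) = x^4 + x. Bits enter MSB-first; the feedback (the
# x^4 term, i.e. the bit leaving the x^3 stage) is XORed in at x^1.
def crc_shift_register(bits: str) -> list:
    r = [0, 0, 0, 0]                 # r[k] holds the x^k coefficient
    for b in bits:
        fb = r[3]                    # feedback from the x^3 stage
        r = [int(b), r[0] ^ fb, r[1], r[2]]   # taps of x^4 + x
    return r

# Dividing D,0 = 1010111 0000 leaves remainder x^3, i.e. CRC = 1000.
print(crc_shift_register("10101110000"))   # [0, 0, 0, 1]
# Dividing D,CRC = 1010111 1000 leaves zero: no error detected.
print(crc_shift_register("10101111000"))   # [0, 0, 0, 0]
```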
6. (a) D = 1010 1101 0101 111
G = 1110 10
g = 6, then g − 1 = 5
D,0 = 1010 1101 0101 111,0000 0
CRC = remainder of (D,0)/G in modulo-2 arithmetic:
Dividend = 101011010101111,00000
Divisor = 111010
Quotient = 111011111000010
Remainder = 10100
CRC = 10100
D,CRC = 1010 1101 0101 111,10100
(b) D,CRC = 1010 1101 0101 111,10100
G = 1110 10
Dividing (D,CRC)/G in modulo-2 arithmetic:
Dividend = 101011010101111,10100
Divisor = 111010
Quotient = 111011111000010
Remainder = 0
The data is correct.
7. v = 3 × 10^8 m/s
ℓ = 80 b/frame
r = 10 × 10^6 b/s, tp/tf = 10
(a) E = tf/t = tf/(tf + 2tp) = 1/(1 + 2tp/tf) = 1/(1 + 2(10)) = 0.0476, or 4.76%
(b) Assuming the speed of light to be 3 × 10^8 m/s in the cable:
10 = tp/tf = (d/v)/(ℓ/r) = (d/(3 × 10^8))/(80/(10 × 10^6))
⇒ d(10 × 10^6)/(80 × (3 × 10^8)) = 10
⇒ d = 24 km
(c) See Figure 4.3. tp = d/v = (24 km)/(3 × 10^8 m/s) ⇒ tp = 8 × 10^−5 s
(d) tp/tf = 8: E = 1/(1 + 2(8)) = 0.0588 = 5.88%
tp = 6.4 × 10^−5 s, d = 19.2 km
tp/tf = 6: E = 1/(1 + 2(6)) = 0.0769 = 7.69%
tp = 4.8 × 10^−5 s, d = 14.4 km
tp/tf = 4: E = 1/(1 + 2(4)) = 0.111 = 11.11%
tp = 3.2 × 10^−5 s, d = 9.6 km
tp/tf = 2: E = 1/(1 + 2(2)) = 0.2 = 20%
tp = 1.6 × 10^−5 s, d = 4.8 km
8. tp = 0.2 s
r = 2 Mb/s
f = 800 b
tf = f/r = 800/(2 × 10^6) = 4 × 10^−4 s

Figure 4.3: The efficiency trend: E versus tp/tf, falling from 0.2 at
tp/tf = 2 through 0.111, 0.0769, and 0.0588 to 0.0476 at tp/tf = 10
(see Exercise 7(d); plot omitted).

(a) Stop-and-wait protocol:
E = tf/t = tf/(tf + 2tp) = (4 × 10^−4)/(4 × 10^−4 + 2(0.2)) = 0.0010 ≈ 0.1%
(b) Sliding-window protocol, w = 6:
E = w/(w + 2(tp/tf)) = 6/(6 + 2(0.2/(4 × 10^−4))) = 0.00596 ≈ 0.6%
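The two efficiency formulas of Exercise 8 compared side by side, as a small sketch with the satellite-link numbers above (tp = 0.2 s, tf = 4 × 10^−4 s):

```python
# Check of Exercise 8: stop-and-wait vs sliding-window efficiency.
tp, tf = 0.2, 4e-4
a = tp / tf                         # normalized propagation delay

E_stop_wait = tf / (tf + 2 * tp)    # = 1 / (1 + 2a)
w = 6
E_sliding = w / (w + 2 * a)         # window far smaller than 1 + 2a

print(round(E_stop_wait, 4))   # 0.001
print(round(E_sliding, 4))     # 0.006
```

The sliding window helps, but with a window of 6 frames against a bandwidth-delay product of about 1,000 frames, the link is still almost idle.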
9. Frame = 5,000 b
tp = 1 μs/km
(a) Rate on R2-R3 = 1 Gb/s
Required condition on R3-R4: link R3-R4 must transfer slower, such that:
Rate on R3-R4 = (E_R3-R4/E_R2-R3) × 1 Gb/s
(b) tp = 1,800 km × 1 μs/km = 1,800 μs
tf = (5,000 b/frame)/(1 Gb/s) = 5 μs
E_R2-R3 = w/(w + 2(tp/tf)) = 5/(5 + 2(1,800 μs/5 μs)) = 6.8 × 10^−3
(c) tp = 800 km × 1 μs/km = 800 μs
tf = (5,000 b/frame)/((E_R3-R4/E_R2-R3) × 1 Gb/s)
= 5(E_R2-R3/E_R3-R4) μs = 5(6.8 × 10^−3/E_R3-R4) μs = (0.034/E_R3-R4) μs
E_R3-R4 = 1/(1 + 2(tp/tf)) = 1/(1 + 2(800/(0.034/E_R3-R4)))
⇒ E_R3-R4 = 4.6 × 10^−3
Chapter 5
Local-Area Networks and Networks of LANs
1. (a) 88-bit ACK packet ⇒ data part = 88 − 80 = 8 b;
256-bit data packet ⇒ data part = 256 − 80 = 176 b
Propagation speed = 200 m/μs = 2 × 10^8 m/s
One cycle time = (transmission time + propagation time) for the data packet
+ (transmission time + propagation time) for the ACK packet
= ((256 b)/(10^6 b/s) + (10^3 m)/(2 × 10^8 m/s))
+ ((88 b)/(10^6 b/s) + (10^3 m)/(2 × 10^8 m/s))
= 354 × 10^−6 s
Total time = (one cycle time) × (total bits)/(data size per packet)
= (354 × 10^−6 s) × (8 b/ch × 10^6 ch)/(176 b/packet)
≈ 16 s
(b) One cycle time = (transmission time + propagation time) for the data packet
+ (transmission time + propagation time) for the ACK packet
= ((256 b + (100/2) × 1 b)/(10^6 b/s) + (10^3 m)/(2 × 10^8 m/s))
+ ((50 nodes × 1 b/node)/(10^6 b/s) + (10^3 m)/(2 × 10^8 m/s))
= 366 × 10^−6 s
Total time = (one cycle time) × (total bits)/(data size per packet)
= (366 × 10^−6 s) × (8 b/ch × 10^6 ch)/(176 b/packet)
≈ 16.6 s
Figure 5.1: Answer to exercise. The LAN overview of connections in a
building. (Sketch of the 2nd through 4th floors and ground floor, with
3 m and 5 m cable segments running to the LAN; drawing omitted.)
2. Assuming that the computers and phones are placed at the corners of
rooms, the overview of the LAN connections in a building is shown in
Figure 5.1.
(a) 2nd floor: d = 5 + 3 + 5 = 13 m
3rd floor: d = 5 + 3 + 3 + 5 = 16 m
4th floor: d = 5 + 3 + 3 + 3 + 5 = 19 m
(b) VoIP rate per office = 64 × 2 = 128 Kb/s
Web rate per office = (22 KB/page/s × 160)(2 min) × 8 b/B = 5.86 Kb/s
LAN rate = (128 + 5.86) Kb/s × 12 offices = 1.6 Mb/s
3. (Please make the following correction: 100 m is to be 1 km. Also,
combine Parts (a) and (b) into Part (a), so that Part (c) becomes Part (b).)
Data rate = 100 × 10^6 b/s, Speed = 200 m/μs,
Frame = 1,000 b
(a) Mean distance = 0.375 km
Total time/frame = (transmission time) + (propagation time)
= (10^3 b/frame)/(100 × 10^6 b/s) + (0.375 km)/(2 × 10^8 m/s)
= 11.87 μs
(b) Time in seconds to sense a collision at the midpoint of two users'
distance = total time for a frame to reach the midpoint (causing a
collision) and for the collision to be sensed back
= 0.5(11.87 μs) + 0.5(11.87 μs) = 11.87 μs
Time in bits = 11.87 μs × 10^8 b/s = 1,187 b
4. 100 Mb/s
96 bit times to clear
tp = 180 b
(a) g = 2
Retransmission time = (96 + 512 × 2) × 10^−8 = 1.12 × 10^−5 s
(b) g = 1
Retransmission time = (96 + 512 × 1) × 10^−8 = 6.08 × 10^−6 s
(c) tp = l/c = (180 b)/(100 Mb/s) = 1.8 × 10^−6 s
5. (a) α = tp/T
β = λT
R = U̅/(B̅ + I̅) = (T e^(−λtp))/(T + 2tp − (1 − e^(−λtp))/λ + 1/λ)
= (λT e^(−λtp))/(λ(T + 2tp) + e^(−λtp))
Un = (β e^(−αβ))/(β + 2αβ + e^(−αβ))
(b) Un is in frames per frame time because the throughput R has been
normalized, which makes it easier to use for estimating the system.
β is called the "offered load" since β equals λ (the average arrival
rate) multiplied by T (the frame duration in seconds), which yields
the offered load.
(c) α = {0.001, 0.01, 0.1, 1.0}
Un1 = (β e^(−0.001β))/(β + 2(0.001β) + e^(−0.001β))
Un2 = (β e^(−0.01β))/(β + 2(0.01β) + e^(−0.01β))
Un3 = (β e^(−0.1β))/(β + 2(0.1β) + e^(−0.1β))
Un4 = (β e^(−1β))/(β + 2(1β) + e^(−1β))
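The four curves of part (c) can be tabulated directly. A sketch, assuming the normalized throughput form Un = βe^(−αβ)/(β + 2αβ + e^(−αβ)) used in this exercise (the function name is ours):

```python
# Evaluate the normalized CSMA throughput of Exercise 5 for the four
# values of alpha = tp/T in part (c), at a sample offered load beta.
import math

def csma_throughput(beta: float, alpha: float) -> float:
    e = math.exp(-alpha * beta)
    return beta * e / (beta + 2 * alpha * beta + e)

for alpha in (0.001, 0.01, 0.1, 1.0):
    print(alpha, round(csma_throughput(1.0, alpha), 3))
```

Sweeping beta instead of fixing it at 1.0 reproduces the throughput-versus-offered-load curves: larger α (longer propagation relative to the frame time) pushes the whole curve down.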
6. N/A
7. na = 4
ℓ = 10 m
r = 10 Gb/s
f = 1,500 bytes × 8 b/byte = 12,000 b
(a) tp = ℓ/c = 10/(3 × 10^8) = 3.33 × 10^−8 s
tr = f/r = (12,000 bits)/(10 × 10^9 b/s) = 1.2 × 10^−6 s
(b) u = tr/(tr + tp) = (1.2 × 10^−6 s)/(1.2 × 10^−6 s + 3.33 × 10^−8 s) = 0.973
(c) pc = ((na − 1)/na)^(na−1) = ((4 − 1)/4)^(4−1) = 0.422
(d) na = 7, i = 7
pc = ((na − 1)/na)^(na−1) = ((7 − 1)/7)^(7−1) = 0.397
pi = pc(1 − pc)^i = 0.397(1 − 0.397)^7 = 0.0116
(e) E[c] = (1 − pc)/pc = (1 − 0.397)/0.397 ≈ 1.52
Figure 5.2: Answer to exercise. (Two flowcharts: (a) listen to the
medium; if unavailable, keep listening; if available, transmit; on
collision, wait 512g bit times for a random number g and listen again;
(b) listen to the medium; if available, transmit with a fixed
probability P for a maximum of tp; on collision, listen again.
Drawing omitted.)
8. See Figure 5.3.
Figure 5.3: Answer to exercise. (Building network sketch: hubs and a
repeater connect through a bridge to router R1, with a network analyzer
attached; R1 connects to the Internet and to other buildings. Drawing
omitted.)
Chapter 6
Wireless Networks and Mobile IP
1. N/A
2. N/A
3. N/A
4. N/A
5. N/A
6. (a) The probability of reaching a cell boundary, i.e., the probability
of requiring a handoff, as a function of db is shown in Figures 6.1 and
6.2. Suppose that a vehicle initiates a call in a cell with a 10-mile
radius. The vehicle speed is chosen to be 45 m/h (within a city)
Table 6.1: Probability of having a handoff for Case 4

              Handoff probability (%)
α01   db (miles)   k = 35 m/h   k = 60 m/h
 1        0            50           50
          5            37           43
         10            29           36
         20            17           26
 5        0            50           50
          5            12           22
         10             4            9
         20             1            2
10        0            50           50
          5             4            9
         10             1            2
         20             0            0
and 75 m/h (on highways). In Case 1, since the vehicle rests all the
time with an average speed of 0 m/h, the probability of reaching a cell
boundary is clearly 0 percent. In contrast to Case 1, for a vehicle
moving with an average speed in Case 2, the chance of reaching a cell
boundary is always 100 percent. Thus, when a vehicle is either at rest
or always moving with some speed, the probability of requiring a handoff
is independent of db. From the figure, we see that as α01 increases, the
chance of reaching a cell boundary is lower. Also, with a fixed db, the
handoff probability on the highway is much higher than in the city area.
This is because the higher the speed limit, the higher the probability
of reaching a cell boundary.
Table 6.1 summarizes the results. Assume α01 = 1 and the city area,
where k = 45 m/h: if db is 5 miles, the chance of reaching a cell
boundary, i.e., the chance that a handoff occurs for the cell, is 87
percent. If db is 10 miles, the probability of reaching a cell boundary
is 76 percent. As db increases, the probability of
Figure 6.1: The probability of reaching a cell boundary for Case 4:
(a) within a city. (Plot of the probability of requiring a handoff
versus db, 0-20 miles, for α01 = 1, 5, and 10; curves omitted.)
Figure 6.2: The probability of reaching a cell boundary for Case 4:
(b) on a highway. (Plot of the probability of requiring a handoff
versus db, 0-20 miles, for α01 = 1, 5, and 10; curves omitted.)
Figure 6.3: The probability of reaching a cell boundary in terms of a
vehicle's speed for Case 4. (Plot of the probability of requiring a
handoff versus average vehicle speed, 25-75 mph, for α01 = 1, 5, and 10;
curves omitted.)
reaching a cell boundary within the call holding time decreases. As the
cell size increases, the probability of reaching a boundary decreases in
an exponential manner. The only difference between Case 3 and Case 4 is
that the change of handoff probability for the latter is between 0
percent and 50 percent, while the former has a change of probability
between 0 percent and 100 percent. This is mainly due to the difference
between the two initial state probabilities for the two cases.
(b) The relationship between a vehicle's speed and the chance of
reaching a cell boundary is shown in Figure 6.3. As shown earlier, for
Case 1 and Case 2, the probability of requiring a handoff is independent
of the call holding time and db, and therefore it is also independent of
the vehicle's speed. For Case 1, in which a vehicle rests all the time,
the vehicle will never reach a cell boundary. For Case 2, in which a
vehicle moves all the time with some speed, the chance of reaching a
cell boundary is always 100 percent. The probability of reaching a cell
boundary is proportional to the vehicle's speed. This is because
increasing the speed of a vehicle increases the chance of reaching a
cell boundary. As α01 increases, the probability of requiring a handoff
decreases.
7. N/A
Chapter 7
Routing and Inter-Networking
1. (a) See Figure 7.1 (a).
min = 2
max = 2
H = (min + max)/2 = 2
(b) See Figure 7.1 (b).
min = 3
For max:
n = 4: max = 4,
n = 5: max = 9/2,
n = 6: max = 5
In general, for n, the max is n/2 + 2
H = (min + max)/2 = (3 + (n/2 + 2))/2
(c) See Figure 7.1 (c).
H = 3
(d) See Figure 7.1 (d).
min = 3
max = 4
H = (2(3) + (n − 4)(4))/(n − 2)
Figure 7.1: Answer to exercise. Four different network topologies to
connect two users. (Topology sketches (a)-(d) omitted.)
2. Using Dijkstra's algorithm:

Table 7.1: Solution to problem.

(a)
k               βA,C    βA,D      βA,F     βA,E      βA,B
{A}             AC(5)   ×         AF(9)    ×         ×
{A,F}           AC(5)   AFD(12)   ACF(8)   AFE(10)   AFB(14)
{A,F,C}         AC(5)   ACD(9)    ACF(8)   ACE(7)    AFB(14)
{A,F,C,D}       AC(5)   ACD(9)    ACF(8)   ACE(7)    AFB(14)
{A,F,C,D,E}     AC(5)   ACD(9)    ACF(8)   ACE(7)    ACEB(9)
{A,F,C,D,E,B}   AC(5)   ACD(9)    ACF(8)   ACE(7)    ACEB(9)
(b) See Figure 7.2
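The run in Exercise 2 can be reproduced in code. A sketch using a heap-based Dijkstra; note that the edge weights below are not printed in this manual — they are inferred from the path costs in the table (e.g., AC(5) and ACF(8) imply CF = 3) and should be treated as a hypothetical reconstruction of the textbook figure:

```python
# Dijkstra's algorithm over edge weights inferred from Exercise 2's table.
import heapq

edges = {("A", "C"): 5, ("A", "F"): 9, ("C", "F"): 3, ("C", "E"): 2,
         ("C", "D"): 4, ("E", "B"): 2, ("F", "D"): 3, ("F", "E"): 1,
         ("F", "B"): 5}

graph = {}
for (u, v), w in edges.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

def dijkstra(src):
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra("A"))   # matches the table: C=5, F=8, E=7, D=9, B=9
```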
3. Using the Bellman-Ford algorithm:

(a)
ℓ   βA,C    βA,D     βA,F     βA,E     βA,B
1   AC(5)   ×        AF(9)    ×        ×
2   AC(5)   ACD(9)   ACF(8)   ACE(7)   AFB(14)
3   AC(5)   ACD(9)   ACF(8)   ACE(7)   ACEB(9)
4   AC(5)   ACD(9)   ACF(8)   ACE(7)   ACEB(9)
(b) See Figure 7.3.
4. Using Dijkstra’s Algorithm
(b) See Figure 7.4.
5. Using Dijkstra’s Algorithm
Figure 7.2: Answer to exercise.
Figure 7.3: Answer to exercise.
Figure 7.4: Answer to exercise.
Figure 7.5: Answer to exercise.
(a)
k               βA,C    βA,D      βA,F     βA,E      βA,B
{A}             AC(5)   ×         AF(9)    ×         ×
{A,F}           AC(5)   AFD(12)   ACF(8)   AFE(10)   AFB(14)
{A,F,C}         AC(5)   ACD(9)    ACF(8)   ACE(7)    ACFB(13)
{A,F,C,D}       AC(5)   ACD(9)    ACF(8)   ACE(7)    ACFB(13)
{A,F,C,D,E}     AC(5)   ACED(8)   ACF(8)   ACE(7)    ACEB(9)
{A,F,C,D,E,B}   AC(5)   ACED(8)   ACF(8)   ACE(7)    ACEB(9)
(a)
k β1,2 β1,3 β1,4 β1,5 β1,6 β1,7
{1} 1,2(3) 1,3(3) × × 1,6(9) ×
{1,2} 1,2(3) 1,3(3) 1,2,4(11) 1,2,5(15) 1,6(9) ×
{1,2,3} 1,2(3) 1,3(3) 1,3,4(7) 1,3,5(7) 1,6(9) ×
{1,2,3,4} 1,2(3) 1,3(3) 1,3,4(7) 1,3,5(7) 1,6(9) ×
{1,2,3,4,5} 1,2(3) 1,3(3) 1,3,4(7) 1,3,5(7) 1,3,5,6(8) 1,3,5,7(20)
{1,2,3,4,5,6} 1,2(3) 1,3(3) 1,3,4(7) 1,3,5(7) 1,3,5,6(8) 1,3,5,6,7(16)
(b) See Figure 7.5.
6. Using Bellman-Ford Algorithm
(a)
i β1,2 β1,3 β1,4 β1,5 β1,6 β1,7
1 1,2(3) 1,3(3) × × 1,6(9) ×
2 1,2(3) 1,3(3) 1,3,4(7) 1,3,5(7) 1,6(9) 1,6,7(17)
3 1,2(3) 1,3(3) 1,3,4(7) 1,3,5(7) 1,3,5,6(8) 1,6,7(17)
4 1,2(3) 1,3(3) 1,3,4(7) 1,3,5(7) 1,3,5,6(8) 1,3,5,6,7(16)
(b) See Figure 7.6.
7. From R1 to R4 using Dijkstra’s Algorithm
(b) See Figure 7.7.
Figure 7.6: Answer to exercise.
Figure 7.7: Answer to exercise.
(a)
k β1,2 β1,3 β1,4 β1,5 β1,6 β1,7
{1} 1,2(2) × × × 1,6(2) 1,7(8)
{1,6} 1,2(2) 1,6,3(7) × 1,6,5(7) 1,6(2) 1,6,7(3)
{1,6,5} 1,2(2) 1,6,3(7) 1,6,5,4(11) 1,6,5(7) 1,6(2) 1,6,7(3)
{1,6,5,4} 1,2(2) 1,6,3(7) 1,6,5,4(11) 1,6,5(7) 1,6(2) 1,6,7(3)
{1,6,5,4,3} 1,2(2) 1,6,3(7) 1,6,3,4(9) 1,6,5(7) 1,6(2) 1,6,7(3)
{1,6,5,4,3,2} 1,2(2) 1,2,3(6) 1,2,3,4(8) 1,6,5(7) 1,6(2) 1,6,7(3)
{1,6,5,4,3,2,7} 1,2(2) 1,2,3(6) 1,6,7,4(5) 1,6,5(7) 1,6(2) 1,6,7(3)
8. From R1 to R4 using Bellman-Ford Algorithm
(a)
i β1,2 β1,3 β1,4 β1,5 β1,6 β1,7
1 1,2(2) × × × 1,6(2) 1,7(8)
2 1,2(2) 1,2,3(6) 1,7,4(10) 1,6,5(7) 1,6(2) 1,6,7(3)
3 1,2(2) 1,2,3(6) 1,6,7,4(5) 1,6,7,5(4) 1,6(2) 1,6,7(3)
4 1,2(2) 1,2,3(6) 1,6,7,4(5) 1,6,7,5(4) 1,6(2) 1,6,7(3)
(b) See Figure 7.8.
9. PBC = (0.3)(0.1)(0.7) = 0.021
PCE = (0.3)(0.6) = 0.18
PCDF = 1− (1− 0.5)(1 − 0.8) = 0.9
PCEF = 1− (1− 0.18)(1 − 0.2) = 0.344
PCF = (0.9)(0.344) = 0.31
PBCF = 1− (1− 0.021)(1 − 0.31) = 0.324
PBF = (0.3)(0.324) = 0.097
PAF = 1− (1− 0.4)(1 − 0.097) = 0.458
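The series/parallel reductions used above can be checked numerically. This is a minimal sketch with the link probabilities taken from the solution; links in series multiply, while parallel paths combine as 1 − Π(1 − p).

```python
def series(*probs):
    """Links in series succeed only if every link succeeds."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def parallel(*probs):
    """Parallel paths succeed if at least one path succeeds."""
    fail = 1.0
    for q in probs:
        fail *= (1.0 - q)
    return 1.0 - fail

p_BC  = series(0.3, 0.1, 0.7)   # 0.021
p_CE  = series(0.3, 0.6)        # 0.18
p_CDF = parallel(0.5, 0.8)      # 0.9
p_CEF = parallel(p_CE, 0.2)     # 0.344
p_CF  = series(p_CDF, p_CEF)    # ≈ 0.31
p_BCF = parallel(p_BC, p_CF)    # ≈ 0.324
p_BF  = series(0.3, p_BCF)      # ≈ 0.097
p_AF  = parallel(0.4, p_BF)     # ≈ 0.458
print(round(p_AF, 3))
```

The exact value, 0.4583, rounds to the 0.458 obtained above.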
10. PAB = 0.4
PBC = (0.3)(0.1)(0.7) = 0.021
Figure 7.8: Answer to exercise.
PCE = (0.3)(0.3)(0.6) = 0.054
PCDF = 1− (1− 0.5)(1 − 0.8) = 0.9
PEF = 0.2
PCF = [1− (1− PCE)(1− PEF )]PCDF (0.3)(0.3) = 0.0197
PAC = 1− (1− PAB)(1− PBC) = 0.4126
PAF = 1− (1− PAC)(1− PCF ) = 0.424
Chapter 8
Transport and End-to-EndProtocols
1. Figure 8.1 shows the operation. Each packet size is 1000 bytes since
that’s the MSS of Host B.
For Host A: MSS = 2000, ISN = 2000, File Size = 200 Kb = 25 KB
For Host B: MSS = 1000, ISN = 4000, Packet Size = 1000 B,
where the data per packet is 960 bytes (1000 B minus 40 B of TCP/IP headers).
Three stages in file transfer: Connection establishment, Segment trans-
fer, and Connection termination.
2. (a) The TCP sequence number field is 4 B = 32 b. Thus:
• Maximum number of bytes that can be identified in a connection =
2^32
• One sequence number is consumed for connection setup, seq(i),
and one for connection termination, seq(k).
Since each byte of data is identified by a unique sequence number,
the maximum number of data bytes that can be
identified and transferred in a connection is f = 2^32 − 2 =
4,294,967,294 B
[Figure labels: HOST A, HOST B; 2-way connection established; packet transmission starts; a gap in transmission allows Host A to keep transmitting without waiting for an ACK; last 40 bytes; 2-way connection terminated.]
Figure 8.1: Answer to exercise.
(b) Total size of each segment = 2,000 B. Each segment also carries the
following headers: 20 B link + 20 B IP + 20 B TCP = 60 B.
Thus:
• Maximum size of data in each segment = 2,000 B − 60 B = 1,940 B.
• Maximum number of segments produced in a connection
= 4,294,967,294 B / 1,940 B ≈ 2,213,901
• Total size of all segment headers = 2,213,901 × 60 B = 132,834,060 B
• Total size of all segment headers and data = 4,294,967,294 B + 132,834,060 B = 4,427,801,354 B
• Total time to transfer all segments = (4,427,801,354 B × 8 b/B)/(100 × 10^6 b/s)
= 354.21 s ≈ 5.9 minutes
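The byte and timing arithmetic in Part (b) can be reproduced in a few lines (exact evaluation gives 354.22 s; the 354.21 s above differs only by rounding):

```python
import math

seq_space   = 2**32 - 2            # two sequence numbers consumed by setup/teardown
seg_payload = 2000 - 60            # 2,000 B segments minus 60 B of headers
segments    = math.ceil(seq_space / seg_payload)
header_B    = segments * 60
total_B     = seq_space + header_B
seconds     = total_B * 8 / 100e6  # 100 Mb/s link

print(segments, header_B, total_B, round(seconds, 2))
```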
3. (a) Slow-start congestion control:
Since the number of packets transmitted doubles every round trip, the
number of round trips to reach n is log2 n − 1.
(b) Additive-increase congestion control:
Since the number of packets transmitted increases by 1 every round trip,
the number of round trips to reach n is n − 1.
4. This problem belongs to Chapter 12; please see Problem 12.16.
5. N/A
6. B = 1.2 Gb/s
RTT = 3.3 ms
File size f = 2 MB
Packet size = 1 KB
Hence, the total number of packets to be transmitted = 2 MB / 1 KB = 2000.
(a) With an additive-increase/multiplicative-decrease protocol, the window size increases by one packet every round trip until congestion occurs, at which point the window size is halved. Therefore, the window size, starting at wg = 1 KB, changes as follows:
wg = 1 KB, 2 KB, 3 KB, 4 KB, ..., n KB
Therefore: 1 + 2 + 3 + ... + n = 2000
Thus: n(n + 1)/2 = 2000,
from which we obtain n = 62.74 ≈ 63.
Since the congestion window size of 500 KB is never reached, no
multiplicative decrease takes place. Thus:
The total time = 63 × 3.3 ms ≈ 208 ms
(For comparison, the window size would take 500 × RTT = 500 × 3.3 ms = 1.65 s to reach 500 KB.)
(b) With a slow-start protocol, the window size doubles every round trip
until congestion occurs. Therefore, the window size, starting at wg = 1 KB,
changes as follows:
wg = 1 KB, 2 KB, 4 KB, 8 KB, ..., approximately 1,024 KB
Therefore, we need 11 round trips to transmit the file:
1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 + 256 + 512 + 1024 = 2047
Thus the total time = 11 × RTT = 11 × 3.3 ms = 36.3 ms.
(c) With the additive-increase/multiplicative-decrease protocol, it takes
63 RTTs to transfer the 2 MB file. Therefore,
Δ = 63 × 3.3 ms ≈ 208 ms
r = f/Δ = 2 MB / 208 ms = 76.9 Mb/s
(d) With the additive-increase/multiplicative-decrease protocol,
ρu = r/B = 76.9 Mb/s / 1.2 Gb/s = 64 × 10^−3
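Parts (a)-(d) can be reproduced with a short simulation of the two window-growth rules (small differences from the figures above come only from rounding 207.9 ms to 208 ms):

```python
# Additive increase: window grows 1 KB per RTT until the 2,000 packets are sent.
rtt_ms, total_pkts = 3.3, 2000
sent = n = 0
while sent < total_pkts:
    n += 1
    sent += n                      # a window of n KB carries n packets of 1 KB
aimd_rtts = n                      # 63 round trips

# Slow start: window doubles every RTT.
sent = ss_rtts = 0
w = 1
while sent < total_pkts:
    sent += w
    w *= 2
    ss_rtts += 1                   # 11 round trips (1 + 2 + ... + 1024 = 2047)

delta_s = aimd_rtts * rtt_ms / 1000
r_mbps = 2e6 * 8 / delta_s / 1e6   # effective AIMD throughput in Mb/s
print(aimd_rtts, ss_rtts, round(r_mbps, 1), round(r_mbps / 1200, 3))
```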
7. Round trip time = 0.5 s
Packet transmitted every 50 ms
Let’s assume packet (segment) P-11 is lost. The first acknowledgement,
ACK-10, is received when P-21 is about to be transmitted. No ACK
is received before P-22 as P-11 is lost. We receive the fourth ACK-10
after P-24 is sent. Segment loss is detected. P-11 is transmitted instead
Figure 8.2: Answer to exercise.
of P-25. See Figure 8.2.
(a) In this case, P-11 is transmitted and after 50 ms, P-25 is trans-
mitted, and the cycle continues. Hence, we lose only 50 ms. See
Figure 8.2.
(b) In this case, the sender waits for the acknowledgment of retrans-
mitted P-11. Thus, it has to wait for the complete round trip.
Hence, the time lost here is 0.5 s.
Chapter 9
Applications and NetworkManagement
1. The command for this is "nslookup." At the command prompt, we enter
the name of the website in the format www.name.com.
When this command is entered at the Windows command prompt, we
obtain the server name, IP address, and aliases.
The nslookup command works both ways; that is, given a name it returns
the IP addresses, and vice versa. The snapshot in Figure
9.1 shows the IP addresses of some of the most frequently used websites
with their server names.
2. (a) To obtain a file name on a remote machine, the local DNS server
is requested to contact the remote machine if the record is not held
locally. These requests are carried out by the server either recursively
or iteratively. On the other hand, if the server wants to obtain the
file name from another DNS server, then depending on the type of
information and the file location, the DNS server queries either the
root DNS server or another local DNS server.
(b) When we take the domain name from the DNS server, our query
Figure 9.1: Solution to exercise.
gives us a result which includes all the possible aliases of the par-
ticular domain names and their corresponding IP addresses. On
the other hand, if the query is done using the IP address, then we
get only the particular alias that corresponds to that IP address in
response. This is illustrated in Figure 9.2 using the example
of gmail.com.
(c) As seen in Figure 9.2, hosts on the same subnet need not be
identified by the same DNS server, since different subnets can be
assigned different IP addresses. This is done for various reasons, such as
traffic sharing and having different names (aliases) for the same website.
3. (a) SSH provides far better transmission security than TELNET.
(b) The functionality given by the Rlogin implementation in Telnet includes:
• It passes the terminal type.
• It bypasses the need to enter a username/password.
• No newline processing is applied to transferred data.
• It has better out-of-band data handling.
• It has better flow-control handling.
• It has window-size negotiation.
4. (a) No, FTP does not compute any checksum for its file transfers.
It relies on the underlying TCP layer for error control; the
TCP layer uses a checksum for this purpose.
(b) If the TCP connection is shut down, the browser tries to set up
the connection once. If this attempt fails then the browser quits
the file transfer.
(c) Following is a list of commands that may be used by FTP clients.
Figure 9.2: Solution to exercise.
Command Explanation
ABOR Abort an active file transfer.
ACCT Account information.
ALLO Allocate sufficient disk space to receive a file.
APPE Append to a file.
CDUP Change to Parent Directory.
CLNT Send FTP Client Name to server
CWD Change working directory.
DELE Delete file.
EPSV Enter extended passive mode.
EPRT Specifies an extended address and port to which the server
should connect.
FEAT Get the feature list implemented by the server.
GET Used to download a file from the remote host.
HELP Returns usage documentation on a command if specified,
else a general help document is returned.
LIST Returns information of a file or directory if specified, else
information of the current working directory is returned.
LPSV Enter long passive mode.
LPRT Specifies a long address and port to which the server should
connect.
MDTM Return the last-modified time of a specified file.
MGET Used to download multiple files from the remote host.
MKD Make directory (folder).
MODE Sets the transfer mode.
MPUT Used to upload multiple files to the remote host.
NLST Returns a list of filenames in a specified directory.
NOOP No operation (dummy packet; used mostly on keep alive).
OPTS Select options for a feature.
PASS Authentication password.
PASV Enter passive mode.
PORT Specifies an address and port to which the server should
connect.
PUT Used to upload a file to the remote host.
PWD Print working directory. Returns the current directory of
the host.
QUIT Disconnect.
REIN Reinitializes the connection.
REST Restart transfer from the specified point.
RETR Retrieve a remote file.
RMD Remove a directory.
RNFR Rename from.
RNTO Rename to.
SITE Sends site specific commands to remote server.
SIZE Return the size of a file.
SMNT Mount file structure.
STAT Returns the current status.
STOR Store a file.
STOU Store a file uniquely.
STRU Set file transfer structure.
SYST Return system type.
TYPE Sets the transfer mode (ASCII/Binary).
USER Authentication username.
5. The total file transfer delay is:
(a) On both directions when the network is in its best state of traffic,
the average file transfer delay is 3.5 ms.
(b) On both directions when the network is in its worst state of traffic,
the average file transfer delay is 9 ms.
(c) In one direction, when we try FTP from one computer to itself,
the average file transfer delay is 7.5 ms.
6. All characters of the URL must be from the following:
A-Z, a-z, 0-9 . \ / ∼ % - + & # ? ! = () @
If a URL contains any other character, it must be converted; for
example, ^ must be written as %5e, its hexadecimal ASCII value
preceded by a percent sign. A blank space is converted in the same
way, to %20 (or to a plus sign).
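The conversion rule can be demonstrated with Python's standard urllib; note that quote() emits uppercase hex (%5E rather than %5e), which is equivalent:

```python
from urllib.parse import quote, unquote

# quote() percent-encodes characters outside the unreserved set.
print(quote("^"))      # the caret becomes its hex ASCII value with a percent sign
print(quote("a b"))    # the blank space becomes %20
assert unquote(quote("a b")) == "a b"   # decoding restores the original
```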
7. (a) The purpose of the GET command in HTTP is to request
a representation of the specified resource. The GET method retrieves
whatever information (in the form of an entity) is identified
by the Request-URI. If the Request-URI refers to a data-producing
process, it is the produced data that is returned as the entity in the
response, not the source text of the process, unless that text
happens to be the output of the process.
(b) The purpose of the PUT command in HTTP is to request the
enclosed entity to be stored under the supplied Request-URI. Thus
it basically uploads a representation of the specified resource.
(c) The GET command needs to include the name of the contacted server
because HTTP is a stateless protocol. This means
that the server does not keep state information or live connections to remote
clients. Thus, we connect to the server, get the information we need,
and then disconnect. Therefore, we need to give the name of the
contacted server.
8. (a) The role of ASN.1 in the 7-layer OSI model is shown in Figure 9.3.
ASN.1 is used as the notation in the application layer.
(b) The impact of constructing a grand set of unique global ASN.1
names for MIB systems is that the network management can iden-
tify an object by a sequence of names or numbers from the root
to that object. This enables designers to produce specifications
without undue consideration to the encoding issues.
(c) A US based organization must register under the following:
Root : ISO : company name : dod : internet : MIB
Figure 9.3: Solution to exercise.
9. (a) The network manager can use the SNMP protocol to find the
location of a fault. The task
of SNMP is to transport MIB information among all the managing
centers and the agents executing on their behalf. The most efficient
method for these functions is arguably UDP, as it is
faster and lighter weight.
(b) The pros of letting all the managing centers access the MIB are
better connectivity and better communication. It would greatly
help in the development and servicing of the MIB, resulting in
better utilization of the network and higher efficiency.
On the other hand, the price to pay for this kind of
flexibility is a large impact on the security of the network.
Also, even if the security aspect is taken care of, letting everyone
access the MIB variables increases the complexity of MIB
design and maintenance.
(c) The MIB is the information store that contains managed
objects reflecting the current status of the network. If the
MIB variables were located in router memory, the efficiency and
speed of the process would improve greatly. However, this
brings with it the problem of keeping the MIB up to date. Even when
communication is limited to routers A and C, router B, which is
not involved in that communication, must also be notified of the
change in order to update its MIB. This would
create unnecessary overhead and waste network bandwidth,
on top of increasing router complexity, buffer
size, and a host of other problems. Thus, MIB variables should not
be kept in local router memory.
Chapter 10
Network Security
1. L4 = 4de5635d, R4 = 3412a90e, k5 = be11427e6ac2.
L4 = 0100;1110;1111;0101;0110;0011;0101;1110.
R4 = 0011;0100;0001;0010;1010;0101;0000;1110.
After the expansion stage the right half will become:
R4 = 000110;101000;000010;100101;010100;001010;100001;011100.
k5 = 101111;110001;000101;000010;011111;110110;101011;000010.
R4 Xor k5 gives us:
101001;011001;000111;100111;001011;111100;001010;011110.
Now passing it through the S-Box:
R4 = 0100;0110;1001;0110;0111;1011;0000;0111.
L4 = 0100;1110;1111;0101;0110;0011;0101;1110.
Xor with the left half:
R4 = 0000;1000;0110;0011;0001;1000;0101;1001
After permutation:
R5 = 1011;1010;0100;1001;0010;1000;0000;0100 = ba492804
L5 = 0011;0100;0001;0010;1010;0101;0000;1110 = 3412a50e
2. Key generation:
The key is 0101...01 and is 56 bits long. Thus, the parity bits have
already been discarded.
The key is first divided into two blocks of 28 bits using the standard
permutation block provided by the DES algorithm:
the left block say C0 = 0000000; 0111111; 1100000; 0001111.
the right block say D0= 0000000; 0111111; 1100000; 0001111.
Now, we shift left both C0 and D0 by 1 thus we get C1 and D1 as
follows:
C1 = 0000000; 1111111; 1000000; 0011110.
D1 = 0000000; 1111111; 1000000; 0011110.
ki(left) = 101100;001001;001011;001010.
ki(right) = 010101;010000;001001;010100.
ki = ki(left);ki(right).
Message generation:
The message is all ones: 111...111 (64 bits), with left half (32 bits) 11...11
and right half (32 bits) 11...11. (Here the initial permutation has no effect.)
Converting the 32-bit right half into 48 bits by passing it through the expansion (mangler) stage:
111...11 (32 bits) → 111...111 (48 bits).
Xoring with the key ki:
010011; 110110; 110100; 110101; 101010; 101111; 110110; 101011.
Now, passing it through the S-Box:
0110;0110;0010;0101;1101;1011;1000;1010.
Xor with left half:
1001;1001;1101;1010;0010;0100;0111;0101.
After permutation of the right half we get:
0000;0110;1101;1001;0100;1101;1110;1010 (R1)
1111;1111;1111;1111;1111;1111;1111;1111 (L1).
3. N/A
4. N/A
5. From the textbook: c = m^x mod n and m = c^y mod n. Note that x
and y are mod inverses of each other. Thus,
c = ((c^y)^x) mod n.
Since x and y are inverses of each other, we then get
c = c mod n = c.
6. M = 1010. The two four-bit primes are a = 5 and b = 11. Also x = 3.
To find the keys, we have:
n = ab = (5)(11) = 55
q = (a − 1)(b − 1) = (4)(10) = 40
Thus, xy mod (a − 1)(b − 1) = 1, giving 3y mod 40 = 1, which
implies y = 27, since (3)(27) = 81 and 81 mod 40 = 1. Therefore
the keys are:
The public key = {3, 55}
The private key = {27, 55}
Thus, the ciphertext for the message 1010 (10 in decimal) is 10^3 mod
55 = 1000 mod 55 = 10. Therefore, the ciphertext is 10.
7. m = 13
a = 5
b = 11
x = 7
(a) Encryption:
The public key = {7, 55}
C = 13^7 mod 55 = 62748517 mod 55 = 7 (since 62748517 = (55)(1140882) + 7)
C = 7.
(b) The corresponding y is given as follows:
n = ab = (5)(11) = 55
q = (a − 1)(b − 1) = (4)(10) = 40
Also x = 7. Thus, xy mod (a − 1)(b − 1) = 1, so 7y mod 40 = 1,
which implies y = 23 (since (7)(23) = 161 and 161 mod 40 = 1).
The private key = {23, 55}
(c) The decryption is 7^23 mod 55 = 13.
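Both key pairs can be verified with Python's built-in three-argument pow (modular exponentiation):

```python
# Toy RSA with the exercises' parameters: n = 55, phi = 40.
n, phi = 5 * 11, 4 * 10

# Exercise 6: e = 3, d = 27 (3 * 27 = 81, and 81 mod 40 = 1)
assert (3 * 27) % phi == 1
assert pow(10, 3, n) == 10            # the ciphertext of m = 10 is 10
assert pow(pow(10, 3, n), 27, n) == 10

# Exercise 7: e = 7, d = 23 (7 * 23 = 161, and 161 mod 40 = 1)
assert (7 * 23) % phi == 1
c = pow(13, 7, n)
print(c, pow(c, 23, n))               # encryption then decryption of m = 13
```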
8. (a) When encrypting with small values of m, the (non-modular)
result of m^e may be strictly less than the modulus n. In this
case, ciphertexts can be decrypted easily by taking the eth root
of the ciphertext, without regard to the modulus. For systems that
conventionally use a small value of e, such as 3, a 256-bit AES key
encrypted under this scheme would be insecure: the largest m^3
would be (2^256)^3 = 2^768, which is less than any reasonable modulus,
so such plaintexts could be recovered by simply taking the cube root
of the ciphertext.
Thus, the 256-bit AES key, k, chosen by user 1 is too small to
encrypt securely with RSA having a public key {x, 5}, since
k^5 < x. Thus, k^5 mod x = k^5, and an intruder recovers k simply by
taking the fifth root.
(b) The values m = 0 and m = 1 always produce ciphertexts equal to
0 and 1, respectively, due to the properties of exponentiation. Thus,
keys consisting of all 0s or all 1s can be easily recovered by
an attacker. An example could be {x = 3, y = 7}.
9. To overcome the vulnerability in the above combination, practical RSA
implementations typically embed some form of structured, randomized
padding into the value m before encrypting it. This padding ensures
that m does not fall into the range of insecure plaintexts, and that a
given message, once padded, encrypts to one of a large number of dif-
ferent possible ciphertexts. The latter property can increase the cost
of a dictionary attack beyond the capabilities of a reasonable attacker.
Modern constructions use secure techniques such as optimal asymmet-
ric encryption padding (OAEP) to protect messages.
The intuitive solution to this problem is that user 1 must select a larger
random number for RSA encryption. In this case, both users 1 and 2
use this number to create the key, k.
A second solution is that user 1 pads k with random bits so that the
message has almost the same number of bits as x does.
10. Suppose that user 1 chooses a prime number a, a random number x1,
and a generator g, and creates y1 = g^x1 mod a. We can say:
k1 = y2^x1 mod a = (g^x2 mod a)^x1 mod a = (g^x2)^x1 mod a =
(g^x1)^x2 mod a = (g^x1 mod a)^x2 mod a = y1^x2 mod a = k2
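The key-agreement identity can be demonstrated with small numbers; a, g, x1, and x2 below are hypothetical toy values, far smaller than any secure choice:

```python
# Diffie-Hellman agreement with small illustrative numbers.
a, g = 23, 5            # public prime and generator (toy values)
x1, x2 = 6, 15          # private exponents of users 1 and 2 (toy values)

y1 = pow(g, x1, a)      # user 1 publishes y1
y2 = pow(g, x2, a)      # user 2 publishes y2

k1 = pow(y2, x1, a)     # user 1 computes the shared key
k2 = pow(y1, x2, a)     # user 2 computes the same key
assert k1 == k2
print(k1)
```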
Part II
Advanced Concepts
Chapter 11
Packet Queues and DelayAnalysis
1. Note to Instructors: This problem requires a good understanding of
queueing theory without any use of formulas. Explaining the
objectives of this problem to students in advance will be very helpful.
We can summarize the queueing situation as follows:
• Interarrival time = 20 μs
• Service time
= 0 μs, if there is no packet misordering
= 10 + 30n μs, if there are n packet misorderings in a block
(a) Packet block arrival and departure activities are shown in Table
11.1. If we consider one packet block between arrival time 20 μs,
and departure time 90 μs (for the duration of 70 μs), and then
one packet block between arrival time 40 μs, and departure time
170 μs (for the duration of 130 μs), and continue this trend, the
queuing behaviour can be shown in Figure 11.1.
(b) Mean number of packet blocks
= [Σ(Service Times) × (1 packet block)] / (Duration of System Processing Time)
= (1,200 μs × 1 block)/680 μs = 1.76
Table 11.1: Packet block arrival and departure activities.
Packet Block | Misorderings | Arrival (μs) | Service (μs) | Departure (μs)
1  | 2 | 20  | 70  | 90
2  | 4 | 40  | 130 | 170
3  | 0 | 60  | 10  | 70
4  | 0 | 80  | 10  | 90
5  | 1 | 100 | 40  | 140
6  | 4 | 120 | 130 | 250
7  | 3 | 140 | 100 | 280
8  | 5 | 160 | 160 | 320
9  | 2 | 180 | 70  | 250
10 | 4 | 200 | 130 | 330
11 | 0 | 220 | 10  | 230
12 | 2 | 240 | 70  | 310
13 | 5 | 260 | 160 | 420
14 | 2 | 280 | 70  | 350
15 | 1 | 300 | 40  | 340
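As a quick check, every row of Table 11.1 is consistent with a service time of 10 + 30n μs for n misorderings (note the rows with n = 0 use the 10-μs base time), and the service times sum to the 1,200 μs used in Part (b):

```python
# (misorderings, service time in microseconds), one pair per table row
rows = [(2, 70), (4, 130), (0, 10), (0, 10), (1, 40), (4, 130), (3, 100),
        (5, 160), (2, 70), (4, 130), (0, 10), (2, 70), (5, 160), (2, 70), (1, 40)]

for n, service in rows:
    assert service == 10 + 30 * n   # the service-time rule holds for every row

print(sum(s for _, s in rows))      # total service time in microseconds
```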
[Plot: number of waiting packet blocks in the queue (vertical axis, 0-7) versus processing time (horizontal axis, 0-680 μs).]
Figure 11.1: Solution to exercise. The trend of packets accumulated in the queue over time.
(c) The percentage of time that the buffer is not empty can be read
from Figure 11.1. If we include all the queue activity
described in Part (a), the queue activity stops at around 680 μs.
The fraction of time that the buffer is empty is
= (Time That the Buffer Is Empty)/(Duration of System Processing Time)
= 20 μs/680 μs = 0.029
Percentage of time the buffer is not empty = 1 − 0.029 ≈ 0.97
2. (a) E[Kq(t)] = λE[Tq] = ρ^2/(1 − ρ) = 0.9^2/(1 − 0.9) = 8.1
(b) E[Tq] = E[T] − E[Ts]
E[T] = 1/(μ(1 − ρ))
E[Ts] = 1/μ
⇒ E[Tq] = ρ (1/(μ − λ))
λ/μ = ρ ⇒ μ = λ/ρ = 44.44
E[Tq] = 0.9 (1/(44.44 − 40)) = 0.2 s
(c) P0 = 1 − ρ = 1 − 0.9 = 0.1
λ = 40 packets/s
μ = 44.44 packets/s
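The figures in Parts (a)-(c) follow directly from the M/M/1 formulas; a quick numerical check:

```python
# M/M/1 figures for Exercise 2: rho = 0.9, lambda = 40 packets/s.
rho, lam = 0.9, 40.0
mu = lam / rho                    # service rate, about 44.44 packets/s

E_Kq = rho**2 / (1 - rho)         # mean number waiting in queue
E_T  = 1 / (mu - lam)             # mean time in system
E_Tq = E_T - 1 / mu               # mean waiting time = system time - service time

print(round(mu, 2), round(E_Kq, 1), round(E_Tq, 4))
```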
3. (a) T = min(T1, T2, ..., Ti)
(b) P[T > t] = P[time until the next packet departure > t]
= P[min(T1, T2, ..., Ti) > t]
= P[T1 > t, T2 > t, ..., Ti > t]
= P[T1 > t] P[T2 > t] ... P[Ti > t]
= e^(−μt) e^(−μt) ... e^(−μt)
= e^(−iμt)
4. (a) P[K(t) < k] = 1 − P[K(t) ≥ k]
= 1 − Σ_{i=k}^{∞} Pi = 1 − Σ_{i=k}^{∞} ρ^i (1 − ρ) = 1 − (1 − ρ)ρ^k/(1 − ρ) = 1 − ρ^k
(b) P[K(t) < 20] = 0.9904 with k = 20:
⇒ 0.9904 = 1 − ρ^20
⇒ ρ = 0.7927
ρ = λ/μ = 0.7927 = 300/μ
⇒ μ = 378.45 packets/s
5. (a) P[K(t) ≥ k] = (1 − ρ) Σ_{j=k}^{∞} ρ^j = (1 − ρ) ρ^k/(1 − ρ) = ρ^k
(b) P[K(t) ≥ 60] = ρ^60 = 0.01
⇒ ρ ≈ 0.92
⇒ λ ≈ 0.92μ
6. (a) The Markov chain is similar to that of a regular M/M/1 queue,
except that the arrival rate at any state i is λi = λ/i.
(b) For State 0: p0 λ = p1 μ
⇒ p1 = (λ/μ)p0 ⇒ p1 = ρ p0
For State 1: p1 (λ/2) + p1 μ = p0 λ + p2 μ
⇒ p1 (λ/2) = p2 μ ⇒ p2 = (λ/(2μ))p1 = (1/2)ρ p1 = (1/2)ρ^2 p0
Continuing this trend for the next states, a generic form can be developed:
p(i−1) (λ/i) = pi μ ⇒ pi = (λ/(iμ))p(i−1) = (1/i!)ρ^i p0
Since Σ_{i=0}^{∞} pi = 1 ⇒ p0 = e^(−ρ)
⇒ pi = (1/i!)ρ^i e^(−ρ)
(c) When i → ∞ while ρ < 1, we have pi → 0; thus, the system reaches
a steady state.
(d) The utilization (of the server) at state i is:
ρi = λi/μ = (1/i)(λ/μ) = λ/(iμ)
(e) E[K(t)] = Σ_{i=0}^{∞} i P[K(t) = i] = Σ_{i=0}^{∞} i (ρ^i/i!) e^(−ρ)
Solving this equation using Σ_{i=0}^{∞} ρ^i/i! = e^ρ will result in:
E[K(t)] = ρ
(f) The mean system delay considering any state i is obtained using
Little's law:
E[T]i = E[K(t)]/λi = ρ/λi = iρ/λ
However, since the arrival rate is different in each state, we need
to compute the mean over all states:
E[T] = Σ_{i=0}^{∞} pi E[T]i = Σ_{i=0}^{∞} pi (iρ/λ) = ρ Σ_{i=0}^{∞} ((1/i!)ρ^i e^(−ρ))(i/λ) = ρ^2/λ
Since E[Ts] = 1/μ, then:
E[Tq] = E[T] − E[Ts] = ρ^2/λ − 1/μ
7. a = 2 servers
1/μ = 100 ms/packet
λ = 18 packets/s
(a) Prob[a packet must wait] = Prob[all servers are busy]
ρ1 = λ/μ = (18)(100 × 10^−3) = 1.8
ρ = λ/(aμ) = 18/20 = 0.9
P[i ≥ 2] = Pa/(1 − ρ)
Pa = P2 = (ρ1^2/2!)P0
P0 = 1/[(1 + ρ1) + (ρ1^2/2!)(1/(1 − ρ))] = 1/[(1 + 1.8) + (1.8^2/2!)(1/(1 − 0.9))] = 0.052
P[K(t) = 2] = Pa = (ρ1^2/2!)P0 = (1.8^2/2)(0.0526) = 0.0853
⇒ Prob[Waiting] = P[K(t) ≥ 2] = Σ_{i=a}^{∞} Pi = Pa/(1 − ρ) = 0.0853/(1 − 0.9) = 0.853
(b) E[K(t)] = Pa ρ/(1 − ρ)^2 + ρ1
= (0.0853)(0.9)/(1 − 0.9)^2 + 1.8 = 9.474
(c) E[T] = E[Tq] + E[Ts]
= Pa/(aμ(1 − ρ)^2) + 1/μ = 0.421 + 0.1 = 0.521 s
(d) P[K(t) > 50] = Σ_{i=51}^{∞} Pi = Σ_{i=51}^{∞} ρ^(i−a) Pa = (Pa/ρ^2) Σ_{i=51}^{∞} ρ^i
= (Pa/ρ^2)(ρ^51/(1 − ρ)) = (0.0853/0.9^2)(0.9^51/0.1)
P[K(t) > 50] = 0.00488
8. (a) Prob[blocking a packet] = Prob[all servers are busy]
λ = 100 packets/s
Mean service rate: μ = 20 packets/s; ρ1 = 100/20 = 5
Using the Erlang-B formula with a = 6 servers:
Pa = (ρ1^a/a!) / (Σ_{i=0}^{a} ρ1^i/i!) = (5^6/6!) / (Σ_{i=0}^{6} 5^i/i!) = 0.1935
(b) Pa = 0.1935/2 = 0.0967
By plugging numbers into the equation, we have:
P7 = 0.121
P8 = 0.075
We need two more servers (eight in total) to lower the blocking
probability below 0.0967 compared with the setup in Part (a).
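The blocking probabilities can be recomputed with the Erlang-B formula; exact evaluation gives roughly 0.192, 0.121, and 0.070 for 6, 7, and 8 servers, close to the rounded figures above:

```python
from math import factorial

def erlang_b(rho, c):
    """Blocking probability of an M/M/c/c system with offered load rho."""
    num = rho**c / factorial(c)
    den = sum(rho**i / factorial(i) for i in range(c + 1))
    return num / den

rho = 100 / 20   # offered load = lambda / mu = 5
for c in (6, 7, 8):
    print(c, round(erlang_b(rho, c), 3))
```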
[Diagram: birth-death Markov chain with states 0, 1, 2, ..., j, ...; arrival rate λi on each forward transition and departure rates μi, 2μi, 3μi, ..., jμi, ..., ciμi on the backward transitions.]
Figure 11.2: Solution to exercise, Markov chain for the M/M/c/c system.
9. (a) The handoff process is modeled using an M/M/c/c Markovian system,
in which there is no queueing line, with random call interarrival times
and exponential service times (channel holding times), with c servers
(c channels) as well as c sources (c handoff calls). The Markov
chain of this system is illustrated in Figure 11.2, where in our case:
λi = handoff request rate for traffic type i ∈ {0, 1, ..., k}, following
a Poisson process.
1/μi = mean holding time of a channel, or mean channel exchange
time, for traffic type i, with an exponential distribution.
Let j be the number of busy channels. Thus, handoff calls depart at rate
jμi.
(b) When the number of requested channels reaches the total number
of available channels (ci), i.e., j = ci, then all ci channels are in
use and the channel exchange rate is ciμi. In this case, any newly
arriving handoff call is blocked, since there is no queueing. The
global balance equations are:
λi P0 = μi P1, for j = 0 (11.1)
λi P(j−1) = jμi Pj, for 0 < j ≤ ci (11.2)
where P0 is the probability that no channel exchange is requested
for traffic type i, and Pj is the probability that j channel exchanges
are requested for traffic type i. It then follows that
P1 = ρi P0 (11.3)
Pj = ρi P(j−1)/j (11.4)
where ρi = λi/μi is the offered load of the system. In Equation
(11.4), let j = 2 and 3; then:
P2 = ρi P1/2 = ρi^2 P0/(2 × 1) = ρi^2 P0/2! (11.5)
P3 = ρi P2/3 = ρi^3 P0/(3 × 2!) = ρi^3 P0/3! (11.6)
By induction from Equations (11.5) and (11.6),
Pj = ρi^j P0/j! (11.7)
Knowing that the sum of the probabilities must be one,
1 = Σ_{j=0}^{ci} ρi^j P0/j! ⇒ P0 = 1/(Σ_{j=0}^{ci} ρi^j/j!) (11.8)
Equations (11.7) and (11.8) can be combined as:
Pj = (ρi^j/j!) / (Σ_{j=0}^{ci} ρi^j/j!) (11.9)
When j = ci, all the channels are busy and any handoff call gets
blocked. The handoff blocking probability, denoted Pci, is expressed by
Pci = (ρi^ci/ci!) / (Σ_{j=0}^{ci} ρi^j/j!) (11.10)
[Plot: handoff blocking probability (%) versus handoff request rate (0 to 10,000 calls/s) for 1/μi = 10, 20, and 30 ms, with ci = 50.]
Figure 11.3: Handoff blocking probability (ci = 50).
[Plot: handoff blocking probability (%) versus handoff request rate (0 to 10,000 calls/s) for 1/μi = 10, 20, and 30 ms, with ci = 100.]
Figure 11.4: Handoff blocking probability (ci = 100).
[Plot: handoff blocking probability (%) versus handoff request rate (0 to 10,000 calls/s) for ci = 10, 50, and 100, with 1/μi = 30 ms.]
Figure 11.5: Handoff blocking probability (ci = 50 and 100, 1/μi = 30 ms).
[Plot: handoff blocking probability (%) versus number of channels (0 to 300) for 1/μi = 10, 20, and 30 ms.]
Figure 11.6: Handoff blocking probability (ci ranging from 0 to 300).
10. (a) The handoff blocking probability of the selected handoffs as a
function of the system offered load is shown in Figures 11.3, 11.4, 11.5,
and 11.6.
(b) In Figures 11.3 and 11.4, we assume the total number of available
channels (ci) to be 50 and 100, respectively. The handoff blocking
probabilities are plotted for three different mean holding times of 10,
20, and 30 ms. The times shown in the plots are estimates of the
switched or exchanged channel latencies. The figures show that the
blocking probability is directly proportional to the mean channel
exchange time; as the mean holding time decreases, the performance
approaches its ideal value.
In Figure 11.5, the handoff blocking probability drops as the number
of available channels increases. Finally, we plot blocking probability
versus the number of channels for different values of 1/μi in Figure
11.6. The handoff call request rate is fixed while we vary the number
of channels from 0 to 300. For 1/μi = 30 ms, the blocking probability
decreases dramatically while the number of channels is below 150;
beyond 150, the decrease levels off.
11. (a) λ1 = α + λ2
λ2 = λ3
λ3 = 0.4λ1
λ4 = 0.6λ1
λ1 = α + λ3 = α + 0.4λ1 ⇒ λ1 = α/0.6 = 33.33 packets/ms
λ2 = λ3 = 0.4λ1 = 0.4 × 33.33 = 13.33 packets/ms
λ4 = 0.6 × 33.33 = 20 packets/ms
(b) ρ1 = λ1/μ1 = 0.33
ρ2 = λ2/μ2 = 0.133
ρ3 = λ3/μ3 = 0.666
ρ4 = λ4/μ4 = 0.666
E[K1(t)] = ρ1/(1 − ρ1) = 0.5 packets
E[K2(t)] = 0.133/(1 − 0.133) = 0.15 packets
E[K3(t)] = 0.666/(1 − 0.666) = 2 packets
E[K4(t)] = 0.666/(1 − 0.666) = 2 packets
(c) E[T] = (Σ_{i=1}^{4} E[Ki(t)])/α = (0.5 + 0.15 + 2 + 2)/20 = 0.232 ms
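The traffic equations and Little's law can be checked numerically; α = 20 packets/ms is implied by the solution's numbers, and the E[Ki] values below are taken from Part (b):

```python
# Traffic equations for Exercise 11 (alpha = 20 packets/ms, inferred):
alpha = 20.0
lam1 = alpha / 0.6        # from lambda1 = alpha + 0.4*lambda1
lam2 = lam3 = 0.4 * lam1  # branch carrying 40% of unit 1's output
lam4 = 0.6 * lam1         # branch carrying the remaining 60%

# Mean occupancies from the solution, then Little's law for network delay:
E_K = [0.5, 0.15, 2.0, 2.0]
E_T = sum(E_K) / alpha    # network delay in ms
print(round(lam1, 2), round(lam4, 1), round(E_T, 4))
```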
12. N/A
13. (a) Queueing unit 1: λ1 = 3λ
Queueing unit 2: λ2 = 6λ + 6λ + 3λ + λ = 20λ
Queueing unit 3: λ3 = 6λ
Queueing unit 4: λ4 = 6λ
(b) E[K1] = ρ1/(1 − ρ1); ρ1 = 3λ/μ1
E[K2] = ρ2/(1 − ρ2); ρ2 = 20λ/μ2
E[K3] = ρ3/(1 − ρ3); ρ3 = 6λ/μ3
E[K4] = ρ4/(1 − ρ4); ρ4 = 6λ/μ4
(c) E[T] = E[K1]/(3λ) + E[K2]/(20λ) + E[K3]/(6λ) + E[K4]/(6λ)
14. α = 200 packets/s
μ0 = 100 packets/ms
μi = 10 packets/ms
(a) λ = 0.4λ + α ⇒ λ = α/0.6 = 333.33 packets/s
Thus, the arrival rate to each of the queueing units in parallel is:
λi = (0.4/5) × 333.33 = 26.67 packets/s
(b) E[K0(t)] = ρ0/(1 − ρ0)
Since ρ0 = λ/μ0, then:
E[K0(t)] = (λ/μ0)/(1 − λ/μ0) = λ/(μ0 − λ) = 333.33 packets/s / (100 × 10^3 packets/s − 333.33 packets/s)
= 3.34 × 10^−3 packets
E[Ki(t)] = ρi/(1 − ρi) = λi/(μi − λi) = 26.67 packets/s / (10 × 10^3 packets/s − 26.67 packets/s)
= 2.67 × 10^−3 packets (for 1 ≤ i ≤ 5)
(c) E[T0] = E[K0(t)]/λ = (3.34 × 10^−3 packets)/(333.33 packets/s) = 10^−5 s
E[Ti] = (E[K0(t)] + Σ_{i=1}^{5} Pi E[Ki(t)])/α = (3.34 × 10^−3 + 5(0.2)(2.67 × 10^−3))/200
= 3 × 10^−5 s = 30 μs
15. (a) λ3 = 0.3λ1
λ2 = α + 0.3λ1
λ1 = λ2 + λ3 + λ4
λ4 = 0.3λ1
λ1 = λ2 + λ3 + λ4 = α + 0.3λ1 + 0.3λ1 + 0.3λ1 = α + 0.9λ1
⇒ λ1 = α/0.1 = 10α = 8 packets/ms
λ2 = α + 0.3λ1 = α + 0.3 × 10α = 4α = 3.2 packets/ms
λ3 = 0.3 × 8 = 2.4 packets/ms
λ4 = 0.3λ1 = 0.3 × 8 = 2.4 packets/ms
(b) ρ1 = 8/10 = 0.8; E[K1(t)] = ρ1/(1 − ρ1) = 0.8/(1 − 0.8) = 4
ρ2 = 3.2/12 = 0.267; E[K2(t)] = ρ2/(1 − ρ2) = 0.267/(1 − 0.267) = 0.364
ρ3 = 2.4/14 = 0.171; E[K3(t)] = ρ3/(1 − ρ3) = 0.171/(1 − 0.171) = 0.206
ρ4 = 2.4/16 = 0.15; E[K4(t)] = ρ4/(1 − ρ4) = 0.15/(1 − 0.15) = 0.176
(c) E[T] = E[K(t)]/α = (4 + 0.364 + 0.206 + 0.176)/0.8 = 5.93 ms
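Part (a)'s flow balance can be checked numerically. A minimal sketch, assuming α = 0.8 packets/ms (inferred from λ1 = 8, since the problem statement is not reproduced in this transcript):

```python
# Check of Part (a): substitute the branch equations into
# lam1 = lam2 + lam3 + lam4 to get lam1 = alpha + 0.9*lam1.
# alpha = 0.8 packets/ms is an inferred assumption.
alpha = 0.8

lam1 = alpha / 0.1        # from lam1 = alpha + 0.9*lam1
lam2 = alpha + 0.3 * lam1
lam3 = 0.3 * lam1
lam4 = 0.3 * lam1

print([round(x, 2) for x in (lam1, lam2, lam3, lam4)])  # [8.0, 3.2, 2.4, 2.4]
```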
Chapter 12
Quality-of-Service and Resource Allocation
1. (a) P0 = PX(0)P1 + [PX(0) + PX(1)]P0
P1 = PX(2)P0 + PX(1)P1 + PX(0)P2
P2 = PX(3)P0 + PX(2)P1 + PX(1)P2 + PX(0)P3
P3 = PX(4)P0 + PX(3)P1 + PX(2)P2 + PX(1)P3 + PX(0)P4
P4 = PX(5)P0+PX(4)P1+PX(3)P2+PX(2)P3+PX(1)P4+PX(0)P5
(b) See Figure 12.1.
Figure 12.1: Markov chain.
2. (a) PX(k) = 1/4 for k = 0, 1, 2, 3
PX(k) = 0 otherwise
That is, PX(0) = PX(1) = PX(2) = PX(3) = 1/4.
(b) P0 = [PX(0) + PX(1)]P0 + PX(0)P1 = (1/4 + 1/4)P0 + (1/4)P1 = (1/2)P0 + (1/4)P1
⇒ (1/2)P0 = (1/4)P1 ⇒ P1 = 2P0
P1 = PX(2)P0 + PX(1)P1 + PX(0)P2 = (1/4)P0 + (1/4)P1 + (1/4)P2
= (1/4)(1/2)P1 + (1/4)P1 + (1/4)P2
⇒ (5/2)P1 = P2, so P2 = (5/2) × 2P0 = 5P0
P2 = PX(3)P0 + PX(2)P1 + PX(1)P2 + PX(0)P3
= (1/4)(P0 + P1 + P2 + P3)
= (1/4)((1/5)P2 + (2/5)P2 + P2 + P3)
⇒ P3 = (12/5)P2, so P3 = 12P0
P0 = 0.008
P1 = 2P0 = 0.016
(c) See Figure 12.2.
Figure 12.2: Markov chain.
P00 = PX(0) + PX(1) = 1/4 + 1/4 = 1/2
P01 = PX(2) = 1/4
P02 = PX(3) = 1/4
P10 = PX(0) = 1/4
P11 = PX(1) = 1/4
P12 = PX(2) = 1/4
P20 = 0
P21 = PX(0) = 1/4
P22 = PX(1) = 1/4
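The ratios P1 = 2P0, P2 = 5P0, P3 = 12P0 found in Part (b) can be verified against the balance equations. A minimal check, assuming PX(0..3) = 1/4 as above and working in units of P0:

```python
# Verify that P1 = 2*P0, P2 = 5*P0, P3 = 12*P0 satisfy the first
# three balance equations with PX(0..3) = 1/4.
px = [0.25, 0.25, 0.25, 0.25]
P = [1.0, 2.0, 5.0, 12.0]  # P0, P1, P2, P3 in units of P0

eq0 = (px[0] + px[1]) * P[0] + px[0] * P[1]                      # should equal P0
eq1 = px[2] * P[0] + px[1] * P[1] + px[0] * P[2]                 # should equal P1
eq2 = px[3] * P[0] + px[2] * P[1] + px[1] * P[2] + px[0] * P[3]  # should equal P2

print(eq0, eq1, eq2)  # 1.0 2.0 5.0
```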
3. (a) For a Poisson distribution:
PX(x) = (λt)^x e^(−λt)/x!
For t = 1/g:
PX(x) = (λ/g)^x e^(−λ/g)/x!
With λ = 20 packets/s and g = 30 packets/s:
PX(k) = (2/3)^k e^(−2/3)/k!
(b) P0 = [PX(0) + PX(1)]P0 + PX(0)P1
⇒ P1 = [1 − PX(0) − PX(1)]P0/PX(0)
We know P0 = 0.007. From Part (a), we also know:
PX(0) = e^(−2/3) = 0.513
PX(1) = (2/3)e^(−2/3) = 0.342, thus:
P1 = 0.00197
P1 = PX(2)P0 + PX(1)P1 + PX(0)P2
⇒ P2 = ([1 − PX(1)]P1 − PX(2)P0)/PX(0)
We know PX(2) = (2/3)^2 e^(−2/3)/2! = 0.114, thus:
P2 = 0.249
P2 = PX(3)P0 + PX(2)P1 + PX(1)P2 + PX(0)P3
⇒ P3 = ([1 − PX(1)]P2 − PX(3)P0 − PX(2)P1)/PX(0)
We know PX(3) = (2/3)^3 e^(−2/3)/3! = 0.025, thus:
P3 = 0.318
(c) Transition probabilities:
P00 = PX(0) + PX(1) = 0.855
P01 = PX(2) = 0.114
P02 = PX(3) = 0.025
P03 = PX(4) = 0.004
P10 = PX(0) = 0.513
P11 = PX(1) = 0.342
P12 = PX(2) = 0.114
P13 = PX(3) = 0.025
P20 = 0
P21 = PX(0) = 0.513
P22 = PX(1) = 0.342
P23 = PX(2) = 0.114
P30 = 0
P31 = 0
P32 = PX(0) = 0.513
P33 = PX(1) = 0.342
(d) Sketch from Part (c).
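The Poisson probabilities used in Parts (b) and (c) are easy to regenerate; a small check with mean λ/g = 2/3:

```python
# Regenerate the Poisson probabilities used in Parts (b) and (c),
# with mean lambda/g = 20/30 = 2/3.
from math import exp, factorial

def px(k, m=2/3):
    """Poisson pmf with mean m."""
    return m ** k * exp(-m) / factorial(k)

print(round(px(0), 3))          # 0.513
print(round(px(1), 3))          # 0.342
print(round(px(2), 3))          # 0.114
print(round(px(0) + px(1), 3))  # 0.856 (P00, listed as 0.855 above)
```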
4. (a) b + vTb = zTb
Tb = b/(z − v)
(b) b = 0.5 Mb
z = 100 Mb/s
v = 10 Mb/s
Tb = 0.5 Mb/(100 Mb/s − 10 Mb/s) = 5.56 ms
5. N/A
6. Solving this problem requires a fair amount of mathematical background, which we summarize here. In general, for a continuous random variable X ≥ 0 with mean E[X], second moment E[X^2], and PDF fX(x), the Laplace-Stieltjes transform (LST) of FX(x) is defined by
f̂X(δ) = ∫_0^∞ e^(−δx) fX(x) dx
Hence, for the residual time distribution Rj(t), the LST is given by
r̂j(δ) = (1 − f̂X(δ))/(E[X]δ)
The mean residual time rj can then be derived as
rj = −lim_{δ→0} dr̂j(δ)/dδ = E[X^2]/(2E[X])
7. (a) For a non-preemptive scheduler:
E[T1] = 0.37 s, E[T2] = 0.62 s, E[T3] = 0.25 s
(b) For a preemptive scheduler:
E[T1] = 0.12 s, E[T2] = 0.66 s, E[T3] = 1.91 s
(c) The waiting time E[Ti] for a class-i packet is lower with the preemptive scheduler when i is low, and the relation is reversed for higher values of i.
8. We compare the impact of an increased number of inputs on total delay in a priority scheduler with three flows (n = 3) and with four flows (n = 4).
λi = λ = 0.2 packets/ms
1/μi = 1/μ = 1 ms
ri = r = 0.5 ms
(a) For a non-preemptive scheduler, we know:
E[Tq,i] = Wx + Σ_{j=1}^{i} ρj E[Tq,j] + E[Tq,i] Σ_{j=1}^{i−1} ρj
where Wx = Σ_{j=1}^{n} ρj rj
n = 3, non-preemptive scheduler:
Wx = Σ_{j=1}^{3} ρj rj = (3)(0.2)(0.5) = 0.3
For i = 1:
E[Tq,1] = Wx + ρ1 E[Tq,1] = 0.3 + 0.2 E[Tq,1]
E[Tq,1] = 0.375 ms
For i = 2:
E[Tq,2] = Wx + ρ1 E[Tq,1] + ρ2 E[Tq,2] + E[Tq,2] ρ1
= 0.3 + 0.2 × 0.375 + 0.2 E[Tq,2] + 0.2 E[Tq,2]
E[Tq,2] = 0.625 ms
For i = 3:
E[Tq,3] = Wx + ρ1 E[Tq,1] + ρ2 E[Tq,2] + ρ3 E[Tq,3] + (ρ1 + ρ2) E[Tq,3]
= 0.3 + 0.2 × 0.375 + 0.2 × 0.625 + 0.6 E[Tq,3]
E[Tq,3] = 1.25 ms
E[Ti] = E[Tq,i] + 1/μi
For i = 3 ⇒ E[T3] = E[Tq,3] + 1/μ3 = 1.25 + 1 = 2.25 ms
n = 4, non-preemptive scheduler:
Wx = Σ_{j=1}^{4} ρj rj = (4)(0.2)(0.5) = 0.4
For i = 1:
E[Tq,1] = Wx + ρ1 E[Tq,1] = 0.4 + 0.2 E[Tq,1]
E[Tq,1] = 0.5 ms
For i = 2:
E[Tq,2] = 0.4 + ρ1 E[Tq,1] + ρ2 E[Tq,2] + E[Tq,2] ρ1
= 0.4 + 0.2 × 0.5 + 0.2 E[Tq,2] + 0.2 E[Tq,2]
E[Tq,2] = 0.833 ms
For i = 3:
E[Tq,3] = 0.4 + ρ1 E[Tq,1] + ρ2 E[Tq,2] + ρ3 E[Tq,3] + (ρ1 + ρ2) E[Tq,3]
= 0.4 + 0.2 × 0.5 + 0.2 × 0.833 + 0.6 E[Tq,3]
E[Tq,3] = 1.667 ms
E[Ti] = E[Tq,i] + 1/μi
For i = 3 ⇒ E[T3] = E[Tq,3] + 1/μ3 = 1.667 + 1 = 2.667 ms
(b) For a preemptive scheduler, we know:
E[Ti] = E[Tq,i] + θi
where:
θi = (1/μi)/(1 − Σ_{j=1}^{i−1} ρj)
n = 3, i = 3:
From the non-preemptive case: E[Tq,3] = 1.25 ms
θ3 = (1/μ3)/(1 − Σ_{j=1}^{2} ρj) = 1/(1 − 0.4) = 1.667
E[T3] = 1.25 + 1.667 = 2.92 ms
n = 4, i = 3:
From the non-preemptive case: E[Tq,3] = 1.667 ms
θ3 = (1/μ3)/(1 − Σ_{j=1}^{3} ρj) = 1/(1 − 0.6) = 2.5
E[T3] = 1.667 + 2.5 = 4.167 ms
(c) The total delays obtained with the non-preemptive scheduler for n = 3 and n = 4 are close, whereas with the preemptive scheduler the difference in delay between n = 3 and n = 4 is very large.
This is expected: as the number of flows increases in the preemptive scheduler, a low-priority packet must wait a long time to be processed. In a non-preemptive scheduler this has little impact, because lower-priority packets cannot be interrupted immediately upon the arrival of higher-priority packets.
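The class-by-class computation in Part (a) can be automated. A sketch under the same assumptions (equal loads ρ = 0.2 and equal mean residual times r = 0.5 ms for all classes); waiting_times is a hypothetical helper name:

```python
# Iterative solution of the non-preemptive recursion used in Part (a):
#   E[Tq,i]*(1 - rho_i - sum_{j<i} rho_j) = Wx + sum_{j<i} rho_j*E[Tq,j]
# with Wx = sum_j rho_j*r_j; all classes share rho and r here.
def waiting_times(n, rho=0.2, r=0.5):
    wx = n * rho * r              # mean residual work W_x
    tq = []
    for i in range(n):            # class i+1, highest priority first
        num = wx + sum(rho * t for t in tq)
        tq.append(num / (1 - rho - rho * i))
    return tq

print([round(t, 3) for t in waiting_times(3)])  # [0.375, 0.625, 1.25]
print([round(t, 3) for t in waiting_times(4)])  # [0.5, 0.833, 1.667, 5.0]
```

Adding 1/μ = 1 ms to the last entry of the n = 3 run reproduces E[T3] = 2.25 ms.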
9. (a) 2.1,2.1,4.1,1.1,3.2,4.2,1.2,4.3,1.3,2.2,4.4,1.4,3.3,1.5,2.3,3.4,2.4,3.6,4.5,2.5
(b) 2.1,3.1,4.1,4.2,4.3,4.4,1.1,2.2,2.3,3.2,3.3,3.4,4.5,1.3,2.4,2.5,1.5
10. N/A
11. N/A
Packet No.  Size  Flow  Fi (FQ)  Fi (WQ)
1           110   1     110      1100
2           110   1     220      2200
3           110   1     330      3300
4           100   1     430      4300
5           100   1     530      5300
6           100   1     630      6300
7           100   2     100      500
8           200   2     300      1500
9           200   3     200      666.6
10          240   3     440      1466.6
11          240   3     680      2266.6
12          240   4     240      600
12. Priority Queueing:
(a) With 10% of the bandwidth, the low-priority flow will at least be able to transmit; without the guaranteed bandwidth, a low-priority flow might never transmit.
(b) The high-priority flows will lose 10% of the bandwidth, and this 10% hit will be distributed evenly across all the high-priority flows, so the performance hit will not be noticeable.
13. See the table for the following arrangement:
Flow 1: 110, 110, 110, 100, 100, 100; (Flow 2) 100, 200; (Flow 3) 200, 240, 240; (Flow 4) 240
(a) Packets in fair queueing: 7, 1, 9, 2, 12, 8, 3, 4, 10, 5, 6, 11
(b) Packets in weighted queueing: 7, 12, 9, 1, 10, 8, 2, 11, 3, 4, 5, 6
14. (a) Priority Q.: A1,B1,B2,B3,A2,C1,C2,C3,A3,A4,B4,B5,C4,D1,D2,D3,A5,C5,D4,D5
(b) Fair Q.: A1,B1,D1,B2,C1,D2,A2,B3,C2,D3,A3,B4,C3,D4,A4,B5,C4,D5,A5,C5
(c) Weighted: A1,B1,B2,C1,C2,C3,D1,D2,D3,D4,A2,B3,B4,C4,D5,A3,C5,B5,A4,A5
15. Please make a correction: in Part (c), flow D is 30 percent.
(a) Priority Q.: B1,–,B2,A1,A2,B3,C1,B4,A3,A4,B5,C2,C3,C4,C5,D1,D2,D3,A5,D4,D5
(b) Fair Q.: B1,–,B2,C1,D1,A1,B3,C2,D2,A2,B4,C3,D3,A3,B5,C4,D4,A4,C5,D5,A5
(c) Weighted: B1,–,B2,C1,C2,C3,C4,D1,D2,A1,A2,A3,B3,C5,D3,D4,A4,B4,D5,A5,B5
16. This problem is the result of moving Problem 8.4 here as Problem 12.16.
(a) The fairness index of B1, B2, B3 is given by
σ = (Σ_{i=1}^{n} fi)^2/(n Σ_{i=1}^{n} fi^2)
= (f1 + f2 + f3)^2/(3(f1^2 + f2^2 + f3^2))
= (1 + 1 + 1)^2/(3(1^2 + 1^2 + 1^2)) = 1
(b) For equal throughput rates, the fairness index is
σ = (f + f + f)^2/(3(f^2 + f^2 + f^2)) = (3f)^2/(3 × 3f^2) = 1
We know 0 is for the worst and 1 is for the best allocation of resources. Thus, from the result of Part (a), we can say that we have the best resource allocation when the throughput rates are equal.
(c) The fairness index of B1 through B5 is given by
σ = (f1 + f2 + f3 + f4 + f5)^2/(5(f1^2 + f2^2 + f3^2 + f4^2 + f5^2))
= (1 + 1 + 1 + 1.2 + 16)^2/(5(1 + 1 + 1 + 1.44 + 256)) = 0.313
(d) The result of Part (c) shows that the resource allocation is not the best when the throughput rates are different. This is because the network cannot offer a fair amount of resources to each flow.
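The fairness index used throughout this problem (Jain's index) is a one-line function, which reproduces both numerical results above:

```python
# Jain's fairness index, as used in all parts of this problem:
#   sigma = (sum f_i)^2 / (n * sum f_i^2), with 1 the best allocation.
def fairness(flows):
    n = len(flows)
    return sum(flows) ** 2 / (n * sum(f * f for f in flows))

print(fairness([1, 1, 1]))                     # 1.0   (Part a)
print(round(fairness([1, 1, 1, 1.2, 16]), 3))  # 0.313 (Part c)
```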
Chapter 13
Networks in Switch Fabrics
1. X_crossbar = n^2
X_Delta = nd log_d n

n     Crossbar complexity   Delta network complexity
2^2   16                    16
2^4   256                   128
2^5   1024                  320

The complexity of the crossbar increases dramatically compared to the Delta network as n increases. See Figure 13.1.
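The table above can be regenerated directly from the two complexity formulas; a minimal sketch for d = 2:

```python
# Regenerate the complexity table: crossbar n^2 crosspoints versus
# Delta network n*d*log_d(n) crosspoints, here with d = 2.
from math import log2

d = 2
for n in (4, 16, 32):
    crossbar = n * n
    delta = int(n * d * log2(n))  # log2(n) is exact for powers of two
    print(n, crossbar, delta)
# 4 16 16
# 16 256 128
# 32 1024 320
```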
2. (a) See Figure 13.2.
(b) Complexity:
X(D16,2) = (n/d × d^2) log_d n = nd log_d n = (16)(2) log_2 16 = 128
X(D16,4) = (n/d × d^2) log_d n = (4)(16)(2) = 128
They have the same complexity.
Figure 13.1: Comparison of the complexity of the crossbar and the Delta network.
Communication delay:
The delay for D16,2 is higher than for D16,4 because a packet has to go through more stages. However, D16,4 is nonblocking while D16,2 is blocking.
3. (a) See Figures 13.3 and 13.4.
(b) Complexity:
X(Ω16,2) = (n/d × d^2) log_d n = nd log_d n = (16)(2) log_2 16 = 128
X(Ω16,4) = (n/d × d^2) log_d n = (4)(16)(2) = 128
They have the same complexity.
Communication delay:
The delay for Ω16,2 is higher than for Ω16,4 because a packet has to go through more stages. However, Ω16,4 is nonblocking while Ω16,2 is blocking.
Figure 13.2: (a) D16,2 switch fabric, (b) a D16,4 switch fabric.
4. B = 1− (1− P )3
B = P
5. (a) See Figures 13.5 and 13.6.
(b) B16,2: number of stages → 2 log_2 16 − 1 = 7
B16,4: number of stages → 2 log_4 16 − 1 = 3
Compare in terms of complexity:
In general, the complexity of Bn,d is nd(2 log_d n − 1)
B16,2: 32 × 7 = 224
B16,4: 64 × 3 = 192
Figure 13.3: Ω16,2 switch fabric.
Figure 13.4: Ω16,4 switch fabric.
Figure 13.5: B16,2 switch fabric.
Figure 13.6: B16,4 switch fabric.
Compare in terms of communication delay:
B16,2: 7 × delay per stage
B16,4: 3 × delay per stage
Thus, B16,2 has higher complexity and higher delay.
6. See Figures 13.7 and 13.8.
B16,2:
P2 = 1 − (1 − P1)^2
P3 = (1 − (1 − P1)^2)^2
P4 = 1 − (1 − P1)^2(1 − P3)
P5 = P4
P6 = 1 − (1 − P1)^2(1 − P5)
Pblock = P6^2 = [1 − (1 − P1)^2(1 − P5)]^2
B16,4:
Pblock = (1 − (1 − P1)^2)^4
Pblock(B16,2) < Pblock(B16,4)
7. (a) See Figures 13.9 and 13.10.
(b) The Banyan network is similar to the Delta network. Thus, the routing rule of the Delta network can be applied to a Banyan network.
8. (a) See Figure 13.11.
(b) B9,3:
XB = nd(2 log_d n − 1) = 9(3)(2 log_3 9 − 1) = 81 crosspoints
s = 2 log_d n − 1 = 2 log_3 9 − 1 = 3 stages
Figure 13.7: Lee’s blocking model for B16,2 switch fabric.
Figure 13.8: Lee’s blocking model for B16,4 switch fabric.
Figure 13.9: Y16,2 switch fabric.
Figure 13.10: Y16,4 switch fabric.
Figure 13.11: Comparing two switching networks: (a) B9,3 and (b) Ω9,3.
9. (a) N/A
(b) The complexity of this network is estimated to be
nc/n = d(h + log_d n)
nL/n = 1 + h + log_d n
(c) For the extended Delta network DEn,d,h, the blocking probability is estimated as follows:
Pp(n, d, 0) = 1 − (1 − p)^(k−1)
Pp(n, d, h) = [1 − (1 − p)^2(1 − Pp(n/d, d, h − 1))]^d
10. (a) Architecture 1: choose d = 2
k ≥ 2d − 1 ⇒ k = 3
Complexity = 4(2 × 3) + 3(4 × 4) + 4(3 × 2) = 96
This is optimal in terms of complexity: d_opt = √(n/2) = 2
⇒ k = 3
See Figure 13.12.
Figure 13.12: Clos network, d = 2 and k = 3.
Architecture 2: choose d = 4
k ≥ 2d − 1 ⇒ k = 7
Complexity = 2(4 × 7) + 7(2 × 2) + 2(7 × 4) = 140
See Figure 13.13.
Figure 13.13: Clos network, d = 4 and k = 7.
(b) For reliability and fault tolerance.
11. N/A
12. The Clos network is designed to be nonblocking by providing extra links in the middle stages. Lee's model, however, refers to the possible congestion on each link; therefore, the values calculated using Lee's method refer to the conceptual blocking of each link between nodes.
Summary: with Lee's method, we consider only one path. But for the nonblocking analysis, k ≥ 2d − 1, we consider all paths.
Figure 13.14: Comparing two Clos networks: (a) C6,2,3 and C6,3,5 switching networks; (b) comparing Lee's models.
13. (a) See Figure 13.14. d = √(6/2) ⇒ d = 2
For C6,2,3: k ≥ 2d − 1 = 2 × 2 − 1 ⇒ k = 3
For C6,3,5: k = 2 × 3 − 1 = 5
(b) Lee's model, total blocking probability.
Assuming the probability of delay for all links is p:
C6,2,3 ⇒ B = (1 − (1 − p)^2)^3 = (2p − p^2)^3
C6,3,5 ⇒ B = (1 − (1 − p)^2)^5 = (2p − p^2)^5
(c) C6,3,5 has a lower blocking probability, but it yields higher complexity and higher cost because more crossbars are used. So which network is better really depends on the needs of the network.
14. Five-stage Clos network with n = 8, d = 2.
(a) See Figure 13.15.
(b) See Figure 13.16.
(c) Total blocking probability with p = 0.2.
The probability of blocking for the middle stage is (2p−p2)3. Thus
B = (1− (1− p)× (1− p)× [1− (2p − p2)3])3 = 0.059
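Part (c) can be checked numerically; a sketch that applies Lee's model exactly as written above:

```python
# Check of Part (c): Lee's estimate with p = 0.2. The inner three
# stages block with probability (2p - p^2)^3, and blocking occurs only
# if all k = 3 outer first-stage paths are blocked.
p = 0.2
mid = (2 * p - p * p) ** 3                    # inner-stage blocking
B = (1 - (1 - p) * (1 - p) * (1 - mid)) ** 3  # all 3 paths blocked
print(round(B, 3))  # 0.059
```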
15. (a) See Figure 13.17.
(b) See Figure 13.18.
BXY = [1−(1−p)2(1−(1−p)2)] = [1−(1−0.2)(1−0.2)(1−0.059)] =
0.0629
BZW = [1− (1− 0.2)(1 − 0.0629)(1 − 0.2)]3 = 0.02
16. Consider a five-stage Clos network (similar to Problem 13.14) whose stages use the following crossbar dimensions: 1st stage: d × k, 2nd stage: e × j, 3rd stage: (n/de) × (n/de), 4th stage: j × e, and 5th stage: k × d.
(a)
(b) We find k, d, j, e in terms of n. Nonblocking conditions: j ≥ 2e − 1 and k ≥ 2d − 1.
Xc = dk(n/d) + k[ej((n/d)/e) + (n/de)(n/de)j + ej((n/d)/e)] + dk(n/d)
= kn + k(jn/d + jn^2/(d^2 e^2) + jn/d) + kn
Figure 13.15: Five-stage Clos network with n = 8, d = 2.
118 Chapter 13. Networks in Switch Fabrics
Figure 13.16: Lee’s model for the five-stage Clos network with n = 8, d = 2.
Xc = 2kn + k(2jn/d + jn^2/(d^2 e^2))
With k = 2d − 1 and j = 2e − 1:
Xc^nb = 2n(2d − 1) + (2d − 1)[2n(2e − 1)/e + n^2(2e − 1)/(d^2 e^2)]
To optimize, set dXc^nb/de = 0, which gives
e_opt ≈ √((−4n^2 d + 2n^2)/(−8nd^2 + 4nd))
We know d_opt ≈ √(n/2), so e_opt follows by substituting d_opt into this expression.
(c) Plug d_opt and e_opt into Xc^nb.
Figure 13.17: A Cantor network with three parallel switching planes.
17. (a) m = 512 bytes × 16 = 8192 bytes
(b) rows = (8192 × 8 b/B)/32 = 2048 rows
C = log_2 2048 = 11 bits
(c) RAM segment process time: 512 × 8 × (1/32) × (2 ns + 2 ns + 1 ns) = 640 ns/segment
Transmission time of a segment = 0.4 μs/16 = 25 ns
Bit rate = (512 × 8 bits)/(640 + 25) ns = 6.15 × 10^9 b/s = 6.15 Gb/s
(d) 6.15 Gb/s/(512 × 8 b) = 1.5 Msegments/s per port
Figure 13.18: Lee's blocking probability model for the Cantor network with three parallel switching planes.
Chapter 14
Optical Switches and Networks, and WDM
1. n ≤ ℓ ≤ 2n − 1
8 ≤ ℓ ≤ 15
(a) With a loss of 4 dB per crosspoint:
Minimum loss = 8 × 4 = 32 dB
Maximum loss = 15 × 4 = 60 dB
(b) ℓ_ave ≈ (n + (2n − 1))/2 ≈ (3n − 1)/2 ≈ 12
Average loss = 12 × 4 = 48 dB
2. N/A
3. N/A
4. N/A
5. N/A
6. (a) E[T] = 1/(μi,j − Λi,j)
(b) E[Ts] = 1/(μi,j − si,j Λi,j)
(c) E[Tn] = (1/(n(n − 1))) Σ_{i,j} si,j/(μi,j − si,j Λi,j)
7. N/A
8. N/A
9. N/A
10. N/A
Chapter 15
Multicasting Techniques and Protocols
1. Multicast connection. A multicast protocol such as MOSPF, PIM, or CBT helps keep the traffic down by requiring the source to transmit only one packet. On the downside, a multicast protocol incurs some computational delay associated with the implementation of the protocol in a router. Additionally, a special type of router is needed to implement the multicast protocol. Another major drawback of a multicast protocol is that if the source packet is lost on its way before being copied, no multicast group member receives a copy of the packet.
Unicast connection. In contrast, sending the packet separately to each group member would increase the traffic substantially, but it is more convenient from the network-management standpoint.
2. In the sparse-mode algorithm, a shared-tree technique is used, and a relatively low-cost path is selected. As a result, the sparse-mode approach introduces more delay than the dense-mode approach.
Figure 15.1: MOSPF protocol.
A rendezvous point (RP) router is selected as the shared root of the distribution subtree. The RP router is used to coordinate the forwarding of packets and to prevent the initial flooding of datagrams. However, the RP router can become a hot spot for multicast traffic congestion and a possible point of failure in routing.
Also, as finding a low-cost path is not necessary in the sparse mode, less hardware complexity may be needed.
However, the sparse-mode approach lowers the efficiency of MOSPF, as this protocol employs OSPF unicast routing, which requires that each router in a network be aware of all available links. If the sparse mode is chosen, the use of a shared tree may cause some of the routers involved to select longer paths than they would normally select in the dense mode.
3. (a) See Figure 15.1.
(b)
k β3,2 β3,4 β3,5 β3,6 β3,7 β3,8
{3} 3-2(5) 3-4(7) × × 3-7(10) 3-8(11)
{2,3} 3-2(5) 3-4(7) 3-2-5(12) × 3-7(10) 3-8(11)
{2,3,4} 3-2(5) 3-4(7) 3-2-5(12) × 3-7(10) 3-8(11)
{2,3,4,7} 3-2(5) 3-4(7) 3-2-5(12) 3-7-6(23) 3-7(10) 3-8(11)
{2,3,4,7,8} 3-2(5) 3-4(7) 3-2-5(12) 3-7-6(23) 3-7(10) 3-8(11)
{2,3,4,5,7,8} 3-2(5) 3-4(7) 3-2-5(12) 3-7-6(23) 3-7(10) 3-8(11)
{2,3,4,5,6,7,8} 3-2(5) 3-4(7) 3-2-5(12) 3-7-6(23) 3-7(10) 3-8(11)
The copying router is R5.
4. N/A
5. (a) We choose R2 as the rendezvous point, as this node is at the network edge and has the least cost to the multicast group.
(b) Form a least-cost tree for the multicast action. For LAN 1:
R3 → R2 → R5 → LAN 4, and
R3 → R2 → R5 → R6 → LAN 3.
The copying router is R5, and
The total cost = 26.
6. N/A
7. (a) In the MOSPF deployment, the multicast tree is:
R3 → R8 → LAN2, and
R8 → R6 → LAN3.
The total cost is 2.
(b) In the PIM deployment, we choose R4 as the rendezvous router.
The multicast tree is:
R3 → R7 → R4 → R7 → R6 → LAN3, and
R7 → R8 → LAN2.
The total cost is 5.
8. See Figure 15.2. For the copy network with F = 7, d = 2, and k = 7:
Divide the stages into two halves.
The first half has ⌊7/2⌋ = 3 stages. For stages 1, 2, and 3:
Initialize F1 = 7 and f1 = 1
⇒ route the packet randomly at each stage.
The second half of the network has ⌈7/2⌉ = 4 stages. For stages 4, 5, 6, and 7:
Stage 4: F4 = 7 and f4 = ⌈7/2^(7−4)⌉ = ⌈7/8⌉ = 1 ⇒ make one copy at this stage
Stage 5: F5 = 7 and f5 = ⌈7/2^(7−5)⌉ = ⌈7/4⌉ = 2 ⇒ make two copies at this stage
Stage 6: F6 = ⌈7/2⌉ = 4 and f6 = ⌈4/2^(7−6)⌉ = ⌈4/2⌉ = 2 ⇒ make two copies at this stage
Stage 7: F7 = ⌈4/2⌉ = 2 and f7 = ⌈2/2^(7−7)⌉ = ⌈2/1⌉ = 2 ⇒ make two copies at this stage
At the end of stage 7 we have 8 copies in total; since we only need 7, we discard one of them and send the remaining 7 copies to the interface.
9. (a) We prefer a copy node not to be at the edge of a network:
Figure 15.2: Multicasting with F = 7 copies in a copy network with d = 2 and k = 7.
R12 → R13 → R10, and
R12 → R17 → R15 → R16 → LAN 1, and
R12 → R17 → R15 → R14 → LAN 2.
Total multicast cost is 10.
(b) R12 → R13 → R10 → R20 → R28 → R27 → R26 → R24 → R25 →R23 → LAN 4, and
R12 → R13 → R10 → R20 → R28 → R27 → R26 → R24 → R25 →R21 → LAN 3.
Total multicast cost is 27.
(c) R12 → R13 → R10 → R30 → R37 → R36 → R34 → R33 → R32 →LAN 5.
Total multicast cost is 31.
10. (a) See Figure 15.3.
(b) The complexity of the Boolean splitting multicast algorithm is less than that of the tree-based algorithm, since it does not need two switch fabrics for routing and copying. But with the Boolean splitting multicast algorithm, if the number of packets increases, congestion can occur because copying and routing are performed at the same time. Thus, the performance of the Boolean splitting algorithm is better than that of the tree-based algorithm only when a low number of copies is being made.
11. (a) See Figure 15.4.
(b) See Figure 15.5.
s = k = log_d n = log_2 8 = 3
For the first half, 1 ≤ j ≤ 3: F = 5, fj = 1
Figure 15.3: Multicasting with F = 7 copies in a D16,2 copy network using the Boolean splitting algorithm.
j = 4: F4 = ⌈F3/f3⌉ = ⌈5/1⌉ = 5, f4 = ⌈F4/d^(6−4)⌉ = ⌈5/4⌉ = 2
j = 5: F5 = ⌈F4/f4⌉ = ⌈5/2⌉ = 3, f5 = ⌈F5/d^(6−5)⌉ = ⌈3/2⌉ = 2
j = 6: F6 = ⌈F5/f5⌉ = ⌈3/2⌉ = 2, f6 = ⌈F6/d^(6−6)⌉ = ⌈2/1⌉ = 2
12. See Figure 15.6. To construct a 4 × 4 crossbar switch, we need four 4 × 1 multiplexers. If In0 wants to send packets to Out1 and Out2, we just need to set the control bits C4-C6 to accept packets from In0.
Figure 15.4: Multicasting with F = 5 copies in an Ω8,2 copy network using the Boolean splitting algorithm.
Figure 15.5: Cascading two Omega networks to form a copy network.
Figure 15.6: Internal structure of the multicast switch.
Chapter 16
VPNs, Tunneling, and Overlay Networks
1. (a) The advantage of having egress nodes estimate routing is that the routing computations are distributed throughout the MPLS network and require fewer entries in the forwarding table of each egress node. This could provide faster assignment of labels (and therefore faster transmission) for packets entering the MPLS network, due to the smaller tables.
(b) The advantage of a preassigned router is that convergence to the best routes would be faster and synchronized, as updates to the topology would not need to be sent between all routers in the MPLS network (such as when LSRs go up and down, which initiates topology changes).
2. N/A
3. N/A
4. N/A
5. N/A
6. N/A
Chapter 17
Compression of Digital Voice and Video
1. See Figure 17.1.
g(t) = sin(200πt)
s(t) = Σ_{n=−∞}^{+∞} Π[(t − nTs)/τ] = Σ_{n=−∞}^{+∞} Π[2000t − 2n]
gs(t) = g(t) × s(t) = Σ_{n=−2}^{+2} Π[2000t − 2n] sin(200πt)
G(f) = (j/2)[δ(f + 100) − δ(f − 100)]
S(f) = (1/2) Σ_{n=−∞}^{+∞} sinc[5 × 10^−4(f − 1000n)]
Gs(f) = G(f) ∗ S(f)
= (1/2) Σ_{n=−2}^{+2} (sinc[5 × 10^−4(f − 1000n − 100)] + sinc[5 × 10^−4(f − 1000n + 100)])
2. N/A
3. Mean: E[X] = 0
Rate capacity: R = 4 b/sample
Variance: V[X] = 2
(a) For this source, we know Db = V[X] 2^−2R. With R = 4, we obtain
Figure 17.1: Solution to exercise: the sampling process in the time and frequency domains.
Db = 0.0078
(b) If the tolerable distortion becomes Db = 0.05, by using the same formula we obtain
R = (1/2) log_2(V[X]/Db) = 2.66
Thus, the required transmission capacity = 2.66 b/sample
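Both parts follow from Db = V[X] 2^(−2R) and its inverse; a quick numerical check:

```python
# Check of both parts from Db = V[X] * 2^(-2R) and its inverse
# R = (1/2) * log2(V[X] / Db).
from math import log2

V = 2.0
print(round(V * 2 ** (-2 * 4), 4))     # 0.0078 (Part a, R = 4)
print(round(0.5 * log2(V / 0.05), 2))  # 2.66   (Part b, Db = 0.05)
```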
4. Variance = σ^2 = V[X] = 10
N = 12
(a) Δ/√V[X] = 0.4238
Δ = 0.4238 × √10 = 1.34
(b) ai = −a_{N−i} = −(N/2 − i)Δ
a1 = −a11 = −(6 − 1)(1.34) = −6.7
a2 = −a10 = −(6 − 2)(1.34) = −5.36
a3 = −a9 = −(6 − 3)(1.34) = −4.02
a4 = −a8 = −(6 − 4)(1.34) = −2.68
a5 = −a7 = −(6 − 5)(1.34) = −1.34
a6 = 0
(c) x̂i = −x̂_{N+1−i} = −(N/2 − i + 1/2)Δ
x̂1 = −x̂12 = −(6 − 1 + 1/2)(1.34) = −7.37
x̂2 = −x̂11 = −(6 − 2 + 1/2)(1.34) = −6.03
x̂3 = −x̂10 = −(6 − 3 + 1/2)(1.34) = −4.69
x̂4 = −x̂9 = −(6 − 4 + 1/2)(1.34) = −3.35
x̂5 = −x̂8 = −(6 − 5 + 1/2)(1.34) = −2.01
x̂6 = −x̂7 = −(6 − 6 + 1/2)(1.34) = −0.67
(d) D/V[X] = 0.01885
D = 0.1885
(e) For the source, we have Db = V[X] 2^−2R. Here, we are asked to find Db given R.
N = 12
R = log_2 N ≈ 4
Db = V[X] 2^−2R = 0.039
5. Note to instructors: this problem needs to be corrected to "... repeat Problem 4, this time using a 16-level optimal uniform quantizer ..."
Variance = σ^2 = V[X] = 10
N = 16
(a) Δ/√V[X] = 0.3352
Δ = 0.3352 × √10 = 1.06
(b) ai = −a_{N−i} = −(N/2 − i)Δ
a1 = −a15 = −(8 − 1)(1.06) = −7.42
a2 = −a14 = −(8 − 2)(1.06) = −6.36
a3 = −a13 = −(8 − 3)(1.06) = −5.30
a4 = −a12 = −(8 − 4)(1.06) = −4.24
a5 = −a11 = −(8 − 5)(1.06) = −3.18
a6 = −a10 = −(8 − 6)(1.06) = −2.12
a7 = −a9 = −(8 − 7)(1.06) = −1.06
a8 = 0
(c) x̂i = −x̂_{N+1−i} = −(N/2 − i + 1/2)Δ
x̂1 = −x̂16 = −(8 − 1 + 1/2)(1.06) = −7.95
x̂2 = −x̂15 = −(8 − 2 + 1/2)(1.06) = −6.89
x̂3 = −x̂14 = −(8 − 3 + 1/2)(1.06) = −5.83
x̂4 = −x̂13 = −(8 − 4 + 1/2)(1.06) = −4.77
x̂5 = −x̂12 = −(8 − 5 + 1/2)(1.06) = −3.71
x̂6 = −x̂11 = −(8 − 6 + 1/2)(1.06) = −2.65
x̂7 = −x̂10 = −(8 − 7 + 1/2)(1.06) = −1.59
x̂8 = −x̂9 = −(8 − 8 + 1/2)(1.06) = −0.53
(d) D/V[X] = 0.01154
D = 0.1154
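The boundary and reconstruction-level formulas of Problems 4 and 5 can be generated mechanically. A sketch for the negative half of the symmetric quantizer (the positive half follows by symmetry, and the middle boundary is 0); uniform_quantizer is a hypothetical helper name:

```python
# Generate the boundaries a_i and reconstruction levels x_hat_i of the
# negative half of an N-level symmetric uniform quantizer with step
# size delta: a_i = -(N/2 - i)*delta, x_hat_i = -(N/2 - i + 1/2)*delta.
def uniform_quantizer(N, delta):
    half = N // 2
    bounds = [-(half - i) * delta for i in range(1, half)]           # a_1 .. a_{N/2-1}
    levels = [-(half - i + 0.5) * delta for i in range(1, half + 1)] # x_1 .. x_{N/2}
    return bounds, levels

b, lv = uniform_quantizer(12, 1.34)  # Problem 4 values
print([round(x, 2) for x in b])      # [-6.7, -5.36, -4.02, -2.68, -1.34]
print([round(x, 2) for x in lv])     # [-7.37, -6.03, -4.69, -3.35, -2.01, -0.67]
```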
6. N/A
7. The sampling rate is fs = 80,000, meaning that we take 80,000 samples per second. Each sample is quantized using 16 bits, so the total number of bits per second is 80,000 × 16. For a music piece of duration 60 min = 3,600 s, the resulting number of bits is 80,000 × 16 × 3,600 = 4.6 × 10^9
8. We define Λ(x) as follows:
Λ(x) = x + 1 for −1 ≤ x ≤ 0; −x + 1 for 0 ≤ x ≤ 1; 0 otherwise
We define the PDF as follows:
fX(x) = (1/2)Λ(x/2) = (1/2)(x/2 + 1) = (x + 2)/4 for −2 ≤ x ≤ 0; (1/2)(−x/2 + 1) = (−x + 2)/4 for 0 ≤ x ≤ 2; 0 otherwise
Q(X) is in fact the quantization function denoted by X̃ in the book. Thus X̃ = Q(X). We define the quantization error by a new random variable Y = X − X̃:
For −2 < x ≤ −1 ⇒ x̃1 = −1.5
fX(x1) = (x1 + 2)/4 = ((y1 + x̃1) + 2)/4 = ((y1 − 1.5) + 2)/4 = (y1 + 0.5)/4
For −1 < x ≤ 0 ⇒ x̃2 = −0.5
fX(x2) = (x2 + 2)/4 = ((y2 − 0.5) + 2)/4 = (y2 + 1.5)/4
For 0 < x ≤ 1 ⇒ x̃3 = 0.5
fX(x3) = (−x3 + 2)/4 = (−(y3 + 0.5) + 2)/4 = (−y3 + 1.5)/4
For 1 < x ≤ 2 ⇒ x̃4 = 1.5
fX(x4) = (−x4 + 2)/4 = (−(y4 + 1.5) + 2)/4 = (−y4 + 0.5)/4
To find fY(y), we use the important property of the PDF of a function of a random variable Y = g(X):
fY(y) = Σ_{i=1}^{n} fX(xi)/|dy/dx|
We know that dy/dx = 1. Thus:
fY(y) = Σ_{i=1}^{4} fX(xi) = (y + 0.5)/4 + (y + 1.5)/4 + (−y + 1.5)/4 + (−y + 0.5)/4 = 1
9. See Table 17.1 and Figure 17.2.
Table 17.1: Encoded words.
Input: ABC Output: XYZ
+7 = 111 110
+5 = 110 111
+3 = 101 101
+1 = 100 100
-1 = 011 000
-3 = 010 001
-5 = 001 011
-7 = 000 010
10. (a) See Figure 17.3
(b) 513 Cc − 46, 2 Cc0, 3 -2 Cc0, 2 -2 0 1 0 1 0 -1 Cc0, 4 -1 Cc0, 43
11. HX(x) = −∫_{−1}^{0} (x + 1) ln(x + 1) dx − ∫_{0}^{1} (−x + 1) ln(−x + 1) dx
Figure 17.2: PCM encoder design: Karnaugh maps and the logic circuit.
= 1/2
12. Sample space (alphabet) = {a1, a2, a3, a4, a5}
Corresponding probabilities = {0.23, 0.30, 0.07, 0.28, 0.12}
(a) Entropy H(x) = −Σ_{i=1}^{5} Pi log_2 Pi
= −(0.23 log_2 0.23 + 0.3 log_2 0.3 + 0.07 log_2 0.07 + 0.28 log_2 0.28 + 0.12 log_2 0.12)
= 2.157 b/sample
(b) {a1, a2, a3, a4, a5} with probabilities {1/5, 1/5, 1/5, 1/5, 1/5}
H2(x) = −Σ_{i=1}^{5} Pi log_2 Pi = −5 × (1/5) log_2(1/5) = 2.32 b/sample
513 -46 0 -2 0 -1 0 0
-46 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0
-2 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0
-1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
Figure 17.3: Quantization of a still image to produce matrix Q[i][j], and the order of matrix elements for transmission.
The entropy of a uniformly distributed source is greater than the above entropy of a non-uniformly distributed source.
13. (a) PX(1) = PXY(1, 1) + PXY(1, 2) + PXY(1, 3) = 0.1 + 0.2 + 0.4 = 0.7
PX(2) = PXY(2, 1) + PXY(2, 2) + PXY(2, 3) = 0.1 + 0 + 0.2 = 0.3
PX(3) = PXY(3, 1) + PXY(3, 2) + PXY(3, 3) = 0
H(X) = −(0.7 log 0.7 + 0.3 log 0.3) = 0.881
PY(1) = Σ_{x=1}^{3} PXY(x, 1) = 0.1 + 0.1 = 0.2
PY(2) = Σ_{x=1}^{3} PXY(x, 2) = 0.2
PY(3) = Σ_{x=1}^{3} PXY(x, 3) = 0.4 + 0.2 = 0.6
H(Y) = −(0.2 log 0.2 + 0.2 log 0.2 + 0.6 log 0.6) = 1.371
(b) The marginal entropy shows the average information that we receive from one source on its own, regardless of the other one.
(c) H(X, Y) = −(0.1 log 0.1 + 0.2 log 0.2 + 0.2 log 0.2 + 0.1 log 0.1 +
0.4 log 0.4) = 2.122
(d) The joint entropy shows the average information that we receive from the combination of the two sources.
14. H(X|Y ) = H(X,Y )−H(Y )
H(Y |X) = H(X,Y )−H(X)
15. (a) Sample output a4.
(b) The information content of samples a1 and a5 is:
I = I(P1) + I(P5) = −log_2(0.1) − log_2(0.2) = 5.64 b
(c) Least probable sequence = {a4, a4, a4, a4, a4, a4, a4, a4, a4, a4}
And its probability = (0.05)^10 = 9.76 × 10^−14
No, it is not.
(d) H(X) = −Σ_{i=0}^{7} Pi log_2 Pi = 2.501 b/symbol
F = 50 Hz × 2 = 100 symbols/s
Entropy rate = 100 symbols/s × 2.501 b/symbol ≈ 250 b/s
(e) Number of typical sequences = 2^{nH(X)} = 2^25 = 3.4 × 10^7
Number of non-typical sequences = number of sequences − number of typical sequences
= N^n − 2^{nH(X)} = 28.7 × 10^7 − 3.4 × 10^7
16. Sample space (alphabet) = {a1, a2, a3, a4}, P ∈ {0.15, 0.20, 0.30, 0.35}
(a) H(x) = −Σ_{k=1}^{N} Pk log_2 Pk
= −(0.15 log_2 0.15 + 0.2 log_2 0.2 + 0.3 log_2 0.3 + 0.35 log_2 0.35)
= 1.926
Number of typical sequences = 2^{nH(x)} = 2^{100(1.926)} = 9.6 × 10^57
(b) Total number of sequences = N^n = 4^100 = 1.607 × 10^60
Total number of non-typical sequences = 1.607 × 10^60 − 9.6 × 10^57 ≈ 1.597 × 10^60
Number of typical sequences/Number of non-typical sequences = 9.6 × 10^57/1.597 × 10^60 = 0.006
(c) P[X̄] = 2^{−nH(x)} = 2^{−100(1.926)} = 1.04 × 10^−58
(d) Number of bits to represent a typical sequence = nH(x) = 100(1.926) = 193 b
(e) The most probable sequence is {a4, a4, ..., a4} with probability P4^n = (0.35)^100 = 2.55 × 10^−46
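Parts (a) and (b) can be reproduced numerically:

```python
# Reproduce Parts (a) and (b): source entropy and typical-set counts
# for n = 100 symbols.
from math import log2

P = [0.15, 0.20, 0.30, 0.35]
n = 100
H = -sum(p * log2(p) for p in P)

typical = 2 ** (n * H)  # ~9.6e57 typical sequences
total = 4 ** n          # ~1.607e60 sequences in all

print(round(H, 3))                            # 1.926
print(round(typical / (total - typical), 3))  # 0.006
```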
17. (a) See Figure 17.4. The compressed codes are:
a0 = 0, a1 = 110, a2 = 10100, a3 = 100, a4 = 1011, a5 = 111,
a6 = 10101.
(b) The entropy of the source is computed as
H(X) = −Σ_{k=0}^{6} Pk log_2 Pk = 2.07
The average code length is:
R̄ = Σ_{i=0}^{6} Pi ℓi = 1(0.55) + 3(0.1) + 5(0.05) + 3(0.14) + 4(0.06) + 3(0.08) + 5(0.02) = 2.1
As expected: H(X) ≤ R̄ ≤ H(X) + 1
Figure 17.4: Huffman encoder.
Code efficiency:
η = H(X)/R̄ = 2.07/2.1 = 0.98
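The code lengths and average rate in Part (b) can be reproduced with a standard heap-based Huffman construction; huffman_lengths is a hypothetical helper that returns only codeword lengths (the exact bit patterns in Part (a) depend on labeling choices):

```python
# Heap-based Huffman construction reproducing the code lengths used in
# Part (b); each merge of two subtrees adds one bit to every symbol
# they contain.
import heapq
from math import log2

probs = [0.55, 0.10, 0.05, 0.14, 0.06, 0.08, 0.02]  # a0 .. a6

def huffman_lengths(p):
    # Heap items: (probability, unique id, member symbol indices).
    h = [(pi, i, [i]) for i, pi in enumerate(p)]
    heapq.heapify(h)
    length = [0] * len(p)
    uid = len(p)
    while len(h) > 1:
        p1, _, s1 = heapq.heappop(h)
        p2, _, s2 = heapq.heappop(h)
        for s in s1 + s2:
            length[s] += 1
        heapq.heappush(h, (p1 + p2, uid, s1 + s2))
        uid += 1
    return length

L = huffman_lengths(probs)
H = -sum(p * log2(p) for p in probs)
R = sum(p * l for p, l in zip(probs, L))
print(L)                         # [1, 3, 5, 3, 4, 3, 5]
print(round(H, 2), round(R, 2))  # 2.07 2.1
```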
18. Alphabet = {−3, −2, −1, 0, 2, 3, 5}, P ∈ {0.05, 0.1, 0.1, 0.15, 0.05, 0.25, 0.3}
(a) H(x) = −Σ_{k=0}^{6} Pk log_2 Pk
= −(0.05 log_2 0.05 + 0.1 log_2 0.1 + 0.1 log_2 0.1 + 0.15 log_2 0.15 + 0.05 log_2 0.05 + 0.25 log_2 0.25 + 0.3 log_2 0.3)
= 2.528 bits/sample
(b) fs = 4000
guard = 200
Entropy rate = (2fs + guard)H(x) = (4000 × 2 + 200)H(x) = 20,731 bits/s
(c) See Figure 17.5. The generated codes are:
5(00), 3(01), 0(100), −1(101), −2(110), 2(1110), −3(1111).
Figure 17.5: Huffman encoder.
(d) R̄ = Σ_{i=0}^{6} Pi ℓi = (0.3 + 0.25)(2) + (0.1 + 0.1 + 0.15)(3) + (0.05 + 0.05)(4) = 2.55
Cr = R̄/⌈log_2 N⌉ = 2.55/3 = 0.85
η = H(X)/R̄ = 2.528/2.55 = 0.99
19. The source sequence is
0, 1, 01, 00, 001, 000, 11, 111, 0010, 10, 101, 1111, 010, 0101, 0101
Find the smallest phrases that have not appeared before. The phrases are encoded in Table 17.2.
20. See Table 17.3.
Table 17.2: Lempel-Ziv coding process
Parser Output Location Encoded Output
0 0001 00000
1 0010 00001
01 0011 00011
00 0100 00010
001 0101 01001
000 0110 01000
11 0111 00101
111 1000 01111
0010 1001 01010
10 1010 00100
101 1011 10101
1111 1100 10001
010 1101 00110
0101 1110 11011
01010 1111 11100
Table 17.3: Lempel-Ziv coding process
Parser Output Location Encoded Output
1 00001 000001
11 00010 000011
110 00011 000100
0 00100 000000
01 00101 001001
010 00110 001010
10 00111 000010
101 01000 001111
1101 01001 000111
111 01010 000101
100 01011 001110
0101 01100 001101
01010 01101 011000
00 01110 001000
1111 01111 010101
010100 10000 011010
001 10001 011101
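The incremental (Lempel-Ziv) parsing behind Tables 17.2 and 17.3 can be sketched in a few lines. This is an illustrative implementation, not the book's; `loc_bits` is the width of the location field (4 b in Table 17.2, 5 b in Table 17.3):

```python
def lz_parse(bits):
    # Split the input into the shortest phrases not seen before.
    phrases, seen, cur = [], set(), ""
    for b in bits:
        cur += b
        if cur not in seen:
            seen.add(cur)
            phrases.append(cur)
            cur = ""
    if cur:              # a trailing fragment may repeat an earlier phrase
        phrases.append(cur)
    return phrases

def lz_encode(phrases, loc_bits):
    # Each phrase is encoded as (location of its prefix, new bit);
    # location 0 (all zeros) stands for the empty prefix.
    loc = {p: i + 1 for i, p in enumerate(phrases)}
    return [format(loc.get(p[:-1], 0), f"0{loc_bits}b") + p[-1]
            for p in phrases]

# Problem 19: parsing the concatenated source reproduces the phrase list.
phrases_19 = ["0", "1", "01", "00", "001", "000", "11", "111",
              "0010", "10", "101", "1111", "010", "0101", "01010"]
assert lz_parse("".join(phrases_19)) == phrases_19
print(lz_encode(phrases_19, 4)[:4])  # ['00000', '00001', '00011', '00010']
```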
Chapter 18
VoIP and Multimedia Networking
1. (a) Voice bandwidth for telecommunication = 4 kHz
Number of samples per second = 2 × 4 kHz = 8 ksamples/s
At 8 b/sample: 8 × 8 = 64 kb/s
(b) RTP header = 12 bytes
(c) RTP encapsulation: RTP (12), UDP (8), IP (≥ 20)
Header = 12 + 8 + 20 = 40 bytes
Data = 0.5 s ⇒ 0.5 × 64 kb/s = 32 kb = 4 kbytes
Packet = 4 kbytes + 40 bytes ≈ 4,040 bytes
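The encapsulation arithmetic can be retraced with a short calculation (a sketch of the steps above):

```python
# PCM voice: 4 kHz bandwidth, Nyquist-rate sampling, 8 b/sample.
rate_bps = 2 * 4000 * 8             # 64,000 b/s
header_B = 12 + 8 + 20              # RTP + UDP + minimum IP = 40 B
data_B = int(0.5 * rate_bps / 8)    # 0.5 s of voice = 4,000 B
print(rate_bps, data_B + header_B)  # 64000 4040
```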
2. (a) Assume packets (segments) are 1,500 bytes long, including the IP, UDP, and RTP headers. The RTP header consists of:
• 12 B common RTP header: 1 B (V+P+X+CC), 1 B (M+Payload Type), 2 B (Seq. No.), 4 B (Time Stamp), and 4 B (Sync Source ID).
• 8 B contributing-source RTP header: 4 B (Contributing Source 1 ID) and 4 B (Contributing Source 2 ID).
Thus, the total packet payload size = 1,500 B − [20 B (IP header) + 8 B (UDP header) + 20 B (total RTP headers)] = 1,452 B
(b) Total combined two-source data rate = 2 × 31 kb/s × (1/8) B/b = 7.75 kB/s
Time required for one packet = 1,452 B / 7.75 kB/s ≈ 187.4 ms
Number of packets generated in 5 minutes = (5 × 60 s)/187.4 ms ≈ 1,601 packets.
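A quick check of the payload and packet-count arithmetic (a sketch, taking the standard 20 B minimum IP header, the 8 B UDP header, and the 20 B of RTP headers derived above):

```python
packet_B = 1500
payload_B = packet_B - (20 + 8 + 20)   # strip IP, UDP, and RTP headers
rate_Bps = 2 * 31_000 / 8              # two 31 kb/s sources -> 7,750 B/s
t_packet = payload_B / rate_Bps        # seconds to fill one packet
packets_5min = 5 * 60 / t_packet
print(payload_B, round(t_packet * 1000, 1), int(packets_5min))
# 1452 187.4 1601
```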
3. N/A
4. The image is 1,280 × 1,024 pixel blocks.
1 packet = 0.1 row
1 chunk = 1 pixel block
(a) Sample space ⇒ 77 levels per sample ⇒ each sample (pixel) = 7 b (2^7 = 128 ≥ 77)
A chunk = 1 pixel block = (8 × 8 pixels) × 7 b = 448 b
A chunk + header = 448 b + (4 B) × 8 b/B = 480 b
(b) 1 packet = 0.1 row = (0.1)(1,280) = 128 pixel blocks = 128 chunks = 128 chunks × 480 b/chunk = 61,440 b
A packet + header = 61,440 b + 12 B × 8 b/B = 61,536 b
(c) Video clip = 4 minutes, 1 s = 30 images
Total number of images in 4 min = (4 min) × 60 s/min × 30 images/s = 7,200 images
Total number of packets/image = (10 packets/row) × 1,024 rows = 10,240 packets/image
Total number of packets in 4 min = 7,200 × 10,240 = 73,728,000
Total number of bits in 4 min = 73,728,000 × 61,440 b/packet ≈ 4.53 Tb
Bandwidth = 4.53 Tb / 4 min ≈ 18.9 Gb/s
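The chunk-packet-bandwidth chain can be retraced numerically (a sketch; as in the solution, the 61,440 b data payload is used for the total bit count):

```python
chunk_b = 8 * 8 * 7 + 4 * 8           # 448 b of pixels + 4 B chunk header = 480 b
payload_b = 128 * chunk_b             # 0.1 row = 128 chunks = 61,440 b
packet_b = payload_b + 12 * 8         # + 12 B packet header = 61,536 b
images = 4 * 60 * 30                  # 4 min at 30 images/s = 7,200
packets_per_image = 10 * 1024         # 10 packets/row x 1,024 rows
total_b = images * packets_per_image * payload_b
bw = total_b / (4 * 60)               # b/s over the 4-minute clip
print(packet_b, round(total_b / 1e12, 2), round(bw / 1e9, 1))
# 61536 4.53 18.9
```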
5. Data of a chunk: 1 pixel block × 10 phrases × 5 b/phrase = 50 b
Header of a chunk: 4 B
Chunk = 50 + 4 × 8 = 82 b
(a) Row data = 1,280 pixel blocks × 82 b/pixel block
SCTP header = 12 bytes
Packet = (1,280 × 82)/8 + 12 = 13,132 bytes ≈ 12.8 kbyte
(b) One frame = 12.8 kbyte/row × 1,024 rows = 12.8 Mbyte
Required bandwidth = 12.8 Mbyte/frame × 30 frames/s × 8 b/B ≈ 3 Gb/s
(c) Total data for 2 hours = 3 Gb/s × (2 × 60 × 60 s) × (1/8) B/b = 2,700 GB
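The same arithmetic without intermediate rounding (a sketch; the solution's 3 Gb/s and 2,700 GB follow from rounding the bandwidth before computing the two-hour total):

```python
chunk_b = 10 * 5 + 4 * 8                    # 50 b of data + 4 B header = 82 b
row_B = 1280 * chunk_b / 8 + 12             # one row per packet + 12 B SCTP
frame_B = row_B * 1024                      # ~12.8 Mbyte (binary megabytes)
bw_bps = frame_B * 30 * 8                   # 30 frames/s
total_GB = bw_bps * 2 * 60 * 60 / 8 / 1e9   # two hours of video, in GB
print(int(row_B), round(bw_bps / 1e9, 1), round(total_GB))
# 13132 3.2 2905  (vs. ~2,700 GB when 3 Gb/s is used)
```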
6. (a) F_Y(t)(y) = P[Y(t) ≤ y] = P[X(t) + 2t ≤ y] = P[X(t) ≤ y − 2t]
= F_X(t)(y − 2t) ⇒ f_Y(t)(y) = f_X(t)(y − 2t) = (1/√(2παt)) e^(−(y−2t)²/2αt)
(b) F_Y(t)Y(t+1)(y1, y2) = P[X(t) + 2t ≤ y1, X(t+1) + 2(t+1) ≤ y2]
= F_X(t),X(t+1)(y1 − 2t, y2 − 2(t+1))
⇒ f_Y(t)Y(t+1)(y1, y2) = f_X(t),X(t+1)(y1 − 2t, y2 − 2(t+1))
= f_X(t)(y1 − 2t) f_X(1)(y2 − y1 − 2)
= (e^(−(y1−2t)²/2αt)/√(2παt)) × (e^(−(y2−y1−2)²/2α)/√(2πα))
7. 70 cycles/min
6 pulses/cycle
1 pulse/chunk
152 Chapter 18. VoIP and Multimedia Networking
Q,R,S → 4 samples
P,T,U → 1 sample
(a) We use variable-size chunks.
Assume each sample is encoded with 8 bits.
A packet consists of a 12 B SCTP common header plus 6 chunks, each with a 16 B chunk header.
Total bits of the 3 chunks made by the P, T, U pulses
= 3 × (16 B (chunk header) × 8 b/B + 1 sample/pulse × 8 b/sample) = 408 b
Total bits of the 3 chunks made by the Q, R, S pulses
= 3 × (16 B (chunk header) × 8 b/B + 4 samples/pulse × 8 b/sample) = 480 b
Total SCTP packet size = (408 b + 480 b) + [20 B (IP header) + 12 B (SCTP header)] × 8 b/B = 1,144 b/(SCTP packet)
Bandwidth = 70 packets/min × 1,144 b/packet / 60 s ≈ 1,335 b/s
(b) H = maximum number of heartbeat cycles per minute
G = maximum number of patients
L = maximum link bandwidth
We can choose these parameters freely as long as they satisfy:
L = (H × G × 1,144)/60 s
(c) The second option reduces the transmission (packet) overhead but requires a larger bandwidth.
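The packet-size and bandwidth arithmetic in (a) can be sketched as:

```python
# SCTP packet for one heartbeat cycle: 6 chunks (one per pulse),
# each with a 16 B chunk header; Q,R,S carry 4 samples, P,T,U carry 1.
sample_b = 8
chunk_hdr_b = 16 * 8
ptu_b = 3 * (chunk_hdr_b + 1 * sample_b)   # 408 b
qrs_b = 3 * (chunk_hdr_b + 4 * sample_b)   # 480 b
packet_b = ptu_b + qrs_b + (20 + 12) * 8   # + IP and SCTP headers = 1,144 b
bw = 70 * packet_b / 60                    # 70 cycles/min, one packet each
print(packet_b, round(bw))                 # 1144 1335
```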
8. N/A
9. N/A
10. N/A
Chapter 19
Mobile Ad-Hoc Networks
1. N/A
2. N/A
3. N/A
4. N/A
5. N/A
Chapter 20
Wireless Sensor Networks
1. N/A
2. N/A
3. N/A
4. N/A