1
Implementation/Infrastructure Support for Collaborative
Applications
Prasun Dewan
2
Infrastructure vs. Implementation Techniques
• Implementation techniques are interesting when general
  – They apply to a class of applications
• A coded implementation of such a technique is an infrastructure.
• Some implementation techniques apply to a very narrow set of applications
  – Operation transformation for text editors
• These may not qualify as infrastructures
• We will study implementation techniques applying to small and large application sets.
3
Collaborative Application
[Figure: two users coupled through the collaborative application]
4
Infrastructure-Supported Sharing
[Figure: per-user clients coupled through a sharing infrastructure]
5
Systems: Infrastructures
• NLS (Engelbart '68) – Post Xerox
• Colab (Stefik '85) – Xerox
• VConf (Lantz '86) – Stanford
• Rapport (Ahuja '89) – Bell Labs
• XTV (Abdel-Wahab, Jeffay & Feit '91) – UNC/ODU
• Rendezvous (Patterson '90) – Bellcore
• Suite (Dewan & Choudhary '92) – Purdue
• TeamWorkstation (Ishii '92) – Japan
• Weasel (Graham '95) – Queens
• Habanero (Chabert et al. '98) – U. Illinois
• JCE (Abdel-Wahab '99) – ODU
• Disciple (Marsic '01) – Rutgers
6
Systems: Products
• VNC (Li, Stafford-Fraser, Hopper '01) – AT&T Research
• NetMeeting – Microsoft
• Groove
• Advanced Reality
• LiveMeeting – Microsoft (pay-by-minute service model)
• Webex (service model)
7
Issues/Dimensions
• Architecture
• Session management
• Access control
• Concurrency control
• Firewall traversal
• Interoperability
• Composability
• …
[Figure: collaborative systems mapped to implementations through choices of architecture model, session management, and concurrency control]
8
Infrastructure-Supported Sharing
[Figure: per-user clients coupled through a sharing infrastructure]
9
Architecture?
Infrastructure/client (logical) components
Component (physical) distribution
10
Shared Window Logical Architecture
[Figure: a single Application coupled to each user's Window; near-WYSIWIS]
11
Centralized Physical Architecture
XTV ('88), VConf ('87), Rapport ('88), NetMeeting
[Figure: a single X Client; each user's X Server attached through a pseudo server that relays input and output]
12
Replicated Physical Architecture
Rapport, VConf
[Figure: an X Client replica for each user; pseudo servers broadcast input to all replicas]
13
Relaxing WYSIWIS?
[Figure: the shared window architecture again — a single Application, coupled Windows, near-WYSIWIS]
14
Model-View Logical Architecture
[Figure: a single Model; each user has a View and a Window]
15
Centralized Physical Model
Rendezvous ('90, '95)
[Figure: a central Model serving each user's View and Window]
16
Replicated Physical Model
Sync '96, Groove
[Figure: a Model replica for each user's View and Window; a model infrastructure synchronizes the replicas]
17
Comparing the Architectures
[Figure: the four architectures side by side — centralized and replicated model-based, and centralized and replicated window-based with pseudo servers relaying input or I/O]
Model Architecture Design Space?
18
Architectural Design Space
• Model and View are application-specific
• Text editor model:
  – Character string
  – Insertion point
  – Font
  – Color
• Need to capture these differences in the architecture
19
Single-User Layered Interaction
[Figure: a stack of I/O layers with communication layers between them on the PC; abstraction increases from the physical devices toward the most abstract layer]
20
Single-User Interaction
[Figure: the layer stack collapsed onto a single PC, abstraction increasing toward the top layer]
21
Example I/O Layers
[Figure: from most to least abstract — Model, Widget, Window, Framebuffer]
22
Layered Interaction with an Object
[Figure: the abstraction {"John Smith", 2234.57} successively rendered by each layer; each intermediate layer is an interactor of the layer above and an abstraction of the layer below]
Interactor = Abstraction Representation + Syntactic Sugar
23
Single-User Interaction
[Figure: the layer stack on a single PC, abstraction increasing toward the top layer]
24
Identifying the Shared Layer
[Figure: layers 0..S form the program component and layers S+1..N the user-interface component on each PC; layer S is the shared layer]
Higher layers will also be shared.
Lower layers may diverge.
25
Replicating the UI Component
[Figure: the user-interface layers S+1..N replicated on each user's PC]
26
Centralized Architecture
[Figure: a single instance of layers 0..S serves the user-interface layers S+1..N replicated on each user's PC]
27
Replicated (P2P) Architecture
[Figure: all layers, 0..S and S+1..N, replicated on each user's PC]
28
Implementing the Centralized Architecture
[Figure: the hosting PC runs layers 0..S with a master input relayer and an output broadcaster; each other PC runs layers S+1..N with a slave I/O relayer]
29
Replicated Architecture
[Figure: each PC runs all layers 0..N with an input broadcaster that sends local input to every replica]
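The two relaying schemes can be contrasted in a minimal sketch (class and method names are illustrative, not from any of the systems named; the real infrastructures relay window or model messages rather than strings):

```python
# Minimal sketch of centralized vs. replicated relaying (hypothetical names).

class CentralizedHost:
    """Hosting PC: runs the shared layers; broadcasts output to slaves."""
    def __init__(self):
        self.state = []          # shared layers 0..S live only here
        self.slaves = []         # slave I/O relayers at the other PCs

    def input(self, event):      # master input relayer
        self.state.append(event)           # process input centrally
        output = f"drew {event}"           # output produced by the shared layer
        for slave in self.slaves:          # output broadcaster
            slave.display(output)
        return output                      # local feedback

class Slave:
    """Slave I/O relayer: only displays output received from the host."""
    def __init__(self):
        self.screen = []
    def display(self, output):
        self.screen.append(output)

class Replica:
    """Replicated (P2P): every PC runs all layers; inputs are broadcast."""
    def __init__(self):
        self.state = []
        self.peers = []
    def input(self, event):      # input broadcaster
        self.apply(event)                  # local processing: fast feedback
        for peer in self.peers:
            peer.apply(event)              # peers recompute output themselves
    def apply(self, event):
        self.state.append(event)
```

In the centralized case only output crosses the network; in the replicated case only input does, which is why the bandwidth comparison later in the deck hinges on input vs. output sizes.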
30
Classifying Previous Work
– XTV
– NetMeeting App Sharing
– NetMeeting Whiteboard
– Shared VNC
– Habanero
– JCE
– Suite
– Groove
– LiveMeeting
– Webex
[Figure: systems placed along two dimensions — replicated vs. centralized, and shared layer]
31
Classifying Previous Work
• Shared layer
  – X Windows (XTV)
  – Microsoft Windows (NetMeeting App Sharing)
  – VNC framebuffer (Shared VNC)
  – AWT widget (Habanero, JCE)
  – Model (Suite, Groove, LiveMeeting)
• Replicated vs. centralized
  – Centralized (XTV, Shared VNC, NetMeeting App Sharing, Suite, PlaceWare)
  – Replicated (VConf, Habanero, JCE, Groove, NetMeeting Whiteboard)
32
Service vs. Server vs. Local Communication
• Local: the user's site sends the data
  – VNC, XTV, VConf, regular NetMeeting
• Server: the organization's site, connected by LAN to the user's site, sends the data
  – NetMeeting Enterprise, Sync
• Service: an external site, connected by WAN to the user's site, sends the data
  – LiveMeeting, Webex
33
Push vs. Pull of Data
• Consumer pulls new data by sending a request for it in response to
  – a notification
    • MVC
  – receipt of the previous data
    • VNC
• Producer pushes data to consumers
  – as soon as the data are produced
    • NetMeeting, real-time Sync
  – when the user requests it
    • asynchronous Sync
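The two delivery styles can be sketched minimally (hypothetical classes; VNC's actual protocol pulls framebuffer diffs rather than generic updates):

```python
class Producer:
    """Produces updates; supports both push and pull delivery."""
    def __init__(self):
        self.log = []                 # all updates ever produced
        self.push_targets = []
    def produce(self, update):
        self.log.append(update)
        for consumer in self.push_targets:   # push: send as soon as produced
            consumer.receive(update)
    def pull(self, since):                   # pull: consumer asks for news
        return self.log[since:]

class PushConsumer:
    """Receives every update immediately (NetMeeting-style push)."""
    def __init__(self):
        self.seen = []
    def receive(self, update):
        self.seen.append(update)

class PullConsumer:
    """VNC-style: requests the next batch after consuming the previous one,
    so it receives data at its own consumption rate."""
    def __init__(self, producer):
        self.producer, self.cursor, self.seen = producer, 0, []
    def poll(self):
        batch = self.producer.pull(self.cursor)  # everything since last pull
        self.cursor += len(batch)
        self.seen.extend(batch)
        return batch
```

Note how a slow pull consumer naturally coalesces: several produced updates arrive in one `poll`, which is the flow-control property the VNC slides later rely on.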
34
Dimensions
• Shared layer level
• Replicated vs. centralized
• Local vs. server vs. service broadcast
• Push vs. pull of data
• …
35
Evaluating design space points
• Coupling flexibility
• Automation
• Ease of learning
• Reuse
• Interoperability
• Firewall traversal
• Concurrency and correctness
• Security
• Performance
  – Bandwidth usage
  – Computation load
  – Scaling
  – Join/leave time
  – Response time
  – Task completion time
• Feedback to actor
  – Local
  – Remote
• Feedthrough to observers
  – Local
  – Remote
36
Sharing a Low-Level vs. High-Level Layer
• Sharing a layer nearer the data
  – Greater view independence
  – Less bandwidth usage
    • Though for large data the visualization is sometimes more compact
  – Finer-grained access and concurrency control
    • Shared window systems support only floor control
  – Replication problems are better solved with more app semantics
• Sharing a layer nearer the physical device
  – Referential transparency
    • "The green object" has no meaning if objects are colored differently
  – Higher chance the layer is standard
    • Sync vs. VNC
    • Promotes reusability and interoperability
• Sharing flexibility is limited with a fixed shared layer
• Need to support multiple layers.
37
Centralized vs. Replicated: Distributed Computing vs. CSCW
• CSCW
  – Input immediately delivered without distributed commitment
  – Floor control or operation transformation for correctness
• Distributed computing:
  – More reads (output) favor replicated
  – More writes (input) favor centralized
38
Bandwidth Usage in Replicated vs. Centralized
• Remote I/O bandwidth is only an issue when network bandwidth < 4 Mbps (Nieh et al. 2000)
  – A DSL link = 1 Mbps
• Input in the replicated case is less than output in the centralized case
  – Input is produced by humans
  – Output is produced by faster computers
39
Feedback in Replicated vs. Centralized
• Replicated: computation time on the local computer
• Centralized
  – Local user: computation time on the local (hosting) computer
  – Remote user: computation time on the hosting computer plus round-trip time
    • In the server/service model, an extra LAN/WAN link
40
Influence of Communication Cost
• Window-sharing remote feedback
  – Noticeable in NetMeeting.
  – Intolerable in PlaceWare's service model.
• PowerPoint presentation feedback time
  – Not noticeable in Groove's & Webex's replicated model.
  – Noticeable in NetMeeting for the remote user.
• Not typically noticeable in Sync with a shared model
• Depends on the amount of communication with the remote site
  – Which depends on the shared layer
41
Case Study: Colab. Video Viewing
42
Case Study: Collaborative Video Viewing (Cadiz, Balachandran et al. 2000)
• Two users collaboratively executing media player commands
• Centralized NetMeeting sharing added unacceptable video latency
• A replicated architecture was later created using T.120
• Part of the problem: the centralized system shared video through the window layer
43
Influence of Computation Cost
• Computation-intensive apps
  – Replicated case: the local computer's computation power matters.
  – Centralized case: the central computer's computation power matters.
  – A centralized architecture can give better feedback, especially with a fast network [Chung and Dewan '99]
  – Asymmetric computation power => asymmetric architecture (server/desktop, desktop/PDA)
44
Feedthrough
• Time to show results at the remote site
• Replicated:
  – One-way input communication time to the remote site
  – Computation time on the local replica
• Centralized:
  – One-way input communication time to the central host
  – Computation time on the central host
  – One-way output communication time to the remote site
• The server/service model adds latency
• Less significant than remote feedback:
  – The active user is not affected.
• But must synchronize with audio
  – "Can you see it now?"
45
Task completion time
• Depends on
  – Local feedback
    • Assuming the hosting user inputs
  – Remote feedback
    • Assuming a non-hosting user inputs
    • Not the case in presentations, where centralized is favored
  – Feedthrough
    • If there are interdependencies in the task
    • Not the case in brainstorming, where replicated is favored
  – Sequence of user inputs
• Chung and Dewan '01
  – Used a Mitre log of floor exchanges and assumed interdependent tasks
  – Task completion time usually smaller in the replicated case
  – An asymmetric centralized architecture is good when computing power is asymmetric (or task responsibility is asymmetric?)
46
Scalability and Load
• A centralized architecture with a powerful server is more suitable.
• Need to separate application execution from distribution.
  – PlaceWare
  – Webex
• Related to firewall traversal. More later.
• Many collaborations do not require scaling
  – 2-3 collaborators in joint editing
  – 8-10 collaborators in CAD tools (NetMeeting usage data)
  – Most calls are not conference calls!
• Adapt between replicated and centralized based on the number of collaborators
  – PresenceAR goals
47
Display Consistency
• Not an issue with floor control systems.
• Other systems must ensure that concurrent input appears to all users to be processed in the same (logical) order.
• Automatically supported in the centralized architecture.
• Not so in replicated architectures, as local input is processed without synchronizing with other replicas.
48
Synchronization Problems
[Figure: both replicas start with "abc". User 1 applies its local Insert d,1 (→ "dabc") and then the relayed Insert e,2 (→ "deabc"); User 2 applies its local Insert e,2 (→ "aebc") and then the relayed Insert d,1 (→ "daebc"). The input distributors relay raw operations, so the replicas diverge.]
49
Peer-to-Peer Merger
• Ellis and Gibbs '89, Groove, …
[Figure: a merger at each replica transforms concurrent remote operations. User 1 applies Insert d,1 and the transformed Insert e,3 (→ "daebc"); User 2 applies Insert e,2 and Insert d,1 (→ "daebc"). The replicas converge.]
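The position adjustment the merger performs (the slide shows Insert e,2 arriving at User 1 as Insert e,3, after the local Insert d,1) is the core of operational transformation. A minimal sketch for two concurrent inserts, using the slides' 1-based positions (real mergers such as Ellis and Gibbs's dOPT also handle deletes, multi-site priorities, and operation histories):

```python
def transform(remote_pos, local_pos):
    """Shift a concurrent remote insert past an already-applied local insert.
    1-based positions: a local insert at or before the remote position pushes
    the remote insert one slot right; this simple tie-break favors local."""
    return remote_pos + 1 if local_pos <= remote_pos else remote_pos

def insert(s, ch, pos):
    """Insert ch so that it becomes the pos-th character (1-based)."""
    return s[:pos - 1] + ch + s[pos - 1:]
```

Replaying the slide's scenario: both sites start with "abc"; transforming each remote insert against the already-applied local one makes both sites converge to "daebc".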
50
Local and Remote Merger
• Curtis et al. '95, LiveMeeting, Vidot '02
• Feedthrough via an extra WAN link
• Can recreate state through the central site
[Figure: a merger at the central site as well as at each replica; operations are transformed as in the peer-to-peer case, and both replicas converge to "daebc"]
51
Centralized Merger
• Munson & Dewan '94
• Asynchronous and synchronous
  – Blocking remote merge
• Understands atomic change sets
• Flexible remote merge semantics
  – Modify or delete can win
[Figure: a single merger in front of the central program transforms concurrent inserts; both users converge to "daebc"]
52
Merging vs. Concurrency Control
• Real-time merging is called optimistic concurrency control.
• A misnomer, because it does not support serializability.
53
Reading Centralized Resources
[Figure: each user's input causes the central program to read file "f"]
• Central bottleneck!
• But the read-file operation is executed infrequently
54
Writing Centralized Resources
[Figure: each user's replica issues write "f","c"; the central file receives multiple writes (ab → abcc)]
55
Replicating Resources
• Groove Shared Space & Webex replication
• Pre-fetching
• Incremental (diff-based) replication in Groove
[Figure: file "f" is replicated at each site; each replica applies the write locally]
56
Non-Idempotent Operations
[Figure: in the replicated architecture, each replica executes "mail joe, msg", so the message is sent twice]
57
Separate Program Component
• Groove Bot: a dedicated machine for external access
• Only some users can invite the Bot into a shared space
• Only some users can invoke Bot functionality
• Bot data can be given only to some users
• Similar idea of a special "externality proxy" in Begole '01
[Figure: the idempotent insert d,1 is executed by each replica, while the non-idempotent mail joe, msg is executed once by a separate program component (Program')]
58
Two-Level Program Component
• Dewan & Choudhary '92, Sync, LiveMeeting
• Extra communication hop and centralization
• Easier to implement
[Figure: each user runs a Program replica that executes insert d,1; a central Program++ executes the non-idempotent mail joe, msg once]
59
Classifying Previous Work
• Shared layer
  – X Windows (XTV)
  – Microsoft Windows (NetMeeting App Sharing)
  – VNC framebuffer (Shared VNC)
  – AWT widget (Habanero, JCE)
  – Model (Suite, Groove, PlaceWare)
• Replicated vs. centralized
  – Centralized (XTV, Shared VNC, NetMeeting App Sharing, Suite, PlaceWare)
  – Replicated (VConf, Habanero, JCE, Groove, NetMeeting Whiteboard)
60
Layer-Specific Issues
• So far, a layer-independent discussion
• Now concrete layers to ground the discussion:
  – Screen sharing
  – Window sharing
  – Toolkit sharing
  – Model sharing
61
Centralized Window Architecture
[Figure: a user presses "a"; the I/O relayer translates the input to the client's window (a↑, w1, x, y) and sends it to the central window client; the resulting draw a output is broadcast back and translated to each user's window (w1, w2, w3)]
62
UI Coupling in Centralized Architecture
[Figure: a user moves their window; the move is relayed centrally and re-broadcast as move w1, move w2, move w3 to all users' window servers]
• Existing approach
  – T.120, PlaceWare
• UI coupling need not be supported
  – XTV
63
Distributed Architecture for UI Coupling
[Figure: the I/O relayers exchange window moves directly with one another instead of through the center]
• Need a multicast server at each LAN
• Can be supported by T.120
64
Two Replication Alternatives
• Replicate layer S by
  – S−1 sending input events to all S instances
  – S sending events directly to all peers
• Direct communication allows partial sharing (e.g. of windows)
• But is harder for an infrastructure to implement automatically
[Figure: the two alternatives — events fanned out by S−1 vs. exchanged directly between S peers]
65
Semantic Issue
• Should window positions be coupled?
• Coupling leads to window wars (Stefik et al. '85)
• Can uncouple windows
  – But then users cannot refer to the "upper left" shared window
• Compromise
  – Create a virtual desktop for the physical desktop of a particular user
66
UI Coupling and Virtual Desktop
67
Raw Input with Virtual Desktop
[Figure: the window client knows about the virtual desktop. A user presses "a" at virtual-desktop coordinates (x′, y′); the input is relayed raw and translated centrally to (w1, x, y), and the draw a output is translated back to each user's virtual-desktop coordinates]
68
Translation without Virtual Desktop
[Figure: the output broadcaster also translates. Input a↑ on w2 is mapped to w1 before reaching the window client; draw a on w1 is mapped to w2 and w3 for the other users]
69
Coupled Expose Events: NetMeeting
70
Coupled Exposed Regions
• T.120 (virtual desktop)
[Figure: User 3 brings w3 to the front; the resulting expose w is broadcast, so every user's window server receives the redrawn contents (draw w)]
71
Coupled Expose Events: PlaceWare
72
Uncoupled Expose Events
• XTV (no virtual desktop)
• The expose event is not broadcast, so remote computers do not blacken the region
• Potentially stale data
[Figure: User 3's front w3 generates a local expose w; only User 3's window server receives the resulting draw w]
73
Uncoupled Expose Events
• A centralized collaboration-transparent app draws to the areas of the last user who sent an expose event.
  – It may be sent only local expose events.
• If it redraws the entire window anyway, everyone is coupled.
• If it draws only exposed areas:
  – Send the draw request only to the inputting user.
  – This works as long as unexposed but visible regions are not changing.
  – Assumes a draw request can be associated with an expose event.
• To support this accurately, the system needs to send the app the union of the exposed regions received from the multiple users.
74
Window-Based Coupling
• Couplable properties: size, contents, positions, stacking order, exposed regions
• Mandatory
  – Window sizes
  – Window contents
• Optional
  – Window positions
  – Window stacking order
  – Window exposed regions
• Optional coupling can be done with or without a virtual desktop
  – Remote and local windows could mix, rather than having remote windows embedded in a virtual desktop window.
  – Can lead to "window wars" (Stefik et al. '87)
• In a shared window system, some properties must be coupled and others may be.
75
Example of Minimal Window Coupling
76
Replicated Window Architecture
[Figure: a user presses "a" in their window (w2); the input distributors translate and broadcast the input to each replica (a↑ on w1, w3), and each replica's program draws into its local window]
77
Replicated Window Architecture with UI Coupling
[Figure: a window move is broadcast by the input broadcasters, so each replica's window moves]
78
Replicated Window Architecture with Expose Coupling
[Figure: an expose w caused by move w2 is broadcast by the input distributors; each replica redraws its window (draw w)]
79
Replicated Window System
• Only the centralized architecture has been implemented commercially
  – NetMeeting, PlaceWare, Webex
• Replicated can offer more efficiency and pass through firewalls that limit large traffic
• Must be done carefully to avoid correctness problems
• Harder but possible at the window layer
  – Chung and Dewan '01
  – Assumes floor control, as centralized systems do
  – Also called intelligent app sharing
80
Screen Sharing
• Sharing the screen client means sharing the window system (and all applications running on top of it)
  – Cannot share the windows of a subset of apps
  – Shares complete computer state
  – The lowest layer gives the coarsest sharing granularity.
81
Sharing the (VNC) Framebuffer Layer
82
VNC Centralized Frame Buffer Sharing
[Figure: the window server draws into a central framebuffer; the output broadcaster sends pixmap rectangles (frame diffs) to each user's framebuffer, and key and mouse events are relayed back]
83
Replicated Screen Sharing?
• Replication is hard if not impossible
  – Each computer would run a framebuffer server and share input
  – Requires replication of the entire computer state
    • Either all computers are identical and have received the same input since they were bought
    • Or, at the start of a sharing session, one computer's entire environment is downloaded
• Hence centralization with a virtual desktop
84
Sharing Pixmaps vs. Drawing Operations
• Sharing pixmaps
  – Potentially larger size
  – Obtaining pixmap changes is difficult
    • Do framebuffer diffs
    • Put hooks into the window system
    • Do own translation
  – Single output operation
  – Standard operation
  – No context needed for interpretation
  – Multiple operations can be coalesced into a single pixmap
    • Per-user coalescing and compression
    • Based on network congestion and the user's computation power
  – Pixmaps can be compressed
• Sharing drawing operations
  – Smaller size
  – Obtaining drawing operations is easy
    • Create a proxy that traps them
  – Many output operations
  – Non-standard operations
  – Fonts, colormaps etc. need to be replicated
    • Reliable protocol needed
    • Possibly non-standard operations for distributing state
    • Session initiation takes longer
  – Compression, but not coalescing, possible
85
T.120 Mixed Model
• Send either drawing operations or pixmaps.
• A pixmap is sent when
  – the remote site does not support the operation
  – multiple graphics operations need to be combined into a single pixmap because of network congestion or computation overload
• Pays the feedthrough and fidelity costs of pixmaps only when required
• More complex – needs mechanisms and policies for conversion
86
Pixmap Compression
• Combine pixmap updates to overlapping regions into one update
  – In VNC, diffs of the framebuffer are computed.
  – In T.120, rectangles are computed from updates.
• When the data already exists at the destination, send the x,y of the source (VNC and T.120)
  – Scrolling and moving windows
  – A function of pixmap cache size
• Diffs with previous rows of the pixmap (T.120)
• Single color with pixmap subrectangles (VNC)
  – Background with foreground shapes
• JPEG for still data, MPEG for moving data
• A larger number of operations conflicts with interoperability
• Reduces statelessness
  – Efficiency gain vs. loss
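A toy sketch of the first two ideas, treating the framebuffer as a list of row strings (real implementations such as VNC work on pixel rectangles, track dirty regions, and bound the copy-source search by a cache size):

```python
def frame_diff(old, new):
    """Compute per-row updates between two frames (lists of row strings).
    Unchanged rows send nothing; rows whose content already exists
    somewhere in the old frame are sent as a copy reference (as when
    scrolling); everything else is sent as raw data."""
    updates = []
    for y, row in enumerate(new):
        if y < len(old) and old[y] == row:
            continue                                     # unchanged
        if row in old:
            updates.append(("copy", y, old.index(row)))  # send source y only
        else:
            updates.append(("raw", y, row))              # send the pixels
    return updates

def apply_updates(old, updates):
    """Reconstruct the new frame at the receiver; copies read the old frame."""
    new = list(old)
    for kind, y, payload in updates:
        new[y] = old[payload] if kind == "copy" else payload
    return new
```

Scrolling a frame up by one row thus costs mostly small copy records plus one raw row, which is the saving the slide attributes to sending the x,y of the source.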
87
T.120 Drawing Operation Compression
• Identify operands of previous operations (within some history) rather than sending the new value
  – E.g. the graphics context is often repeated
• Both kinds of compression are useless when bandwidth is abundant
  – And can unduly increase latency.
88
T.120 Pointer Coalescing
• Multiple input pointer updates are combined into one
• Multiple output pointer updates are combined into one
• Reduces the user experience
• But the bandwidth usage of pointer updates is small
• Reduces jitter in variable-latency situations
  – If events are time-stamped
• Consistent with not sending incremental movements and resizing of shapes in whiteboards
89
Flow Control Algorithms
• T.120 push-based approach
  – The sender pushes data to a group of receivers
  – Estimates the end-to-end rate of the slowest receiver by looking at the application queue
  – Works with overlays (firewalls)
  – Adapts compression and coalescing based on this rate
  – Very slow computers leave the collaboration.
• VNC pull-based rate
  – Each client pulls data at its consumption rate
  – Gets diffs since the last pull, with no intermediate points
  – Per-client diffs must be maintained
  – Data might be sent along the same path multiple times
  – Could replicate updates at all LANs (federations) [Chung 01]
90
Experimental Data
• Pull-based vs. push-based flow control
• Sharing pixmaps vs. drawing operations
• Replicated vs. centralized architecture
91
Remote Feedback Experiments
• Nieh et al. 2000: remote single-user access experiments
  – VNC
  – RDP (T.120-based)
• Measured
  – Latency (remote feedback time)
  – Data transferred
• Gives an idea of the performance seen by a remote user in a centralized architecture, comparing
  – sharing of pixmaps vs. drawing operations
  – pull-based vs. no flow control
[Figure: a user's window server connected to the central window client through master and slave I/O distributors]
92
High Bandwidth Experiments
• Letter A
  – Latency: VNC (Linux) 60 ms; RDP (Win2K, T.120-based) 200 ms
  – Data transferred: VNC 0.4 KB; RDP 0.3 KB
  – Previewers send text as bitmaps (Hanrahan)
• Red box fill
  – Latency: VNC (Linux) 100 ms; RDP (Win2K, T.120-based) 220 ms
  – Data transferred: VNC 1.2 KB; RDP 0.5 KB
• Compression reduces data but increases latency
93
Web Page Experiments
• Time to execute a web page script
  – Load 54×2 pages (text and bitmaps)
  – Scroll down 200 pixels
  – Common parts: blue left column, white background, PC Magazine logo
• Load time
  – 4–100 Mbps: < 50 seconds
  – 100 Mbps: RDP 35 s; VNC 24 s
  – 128 Kbps: RDP 297 s; VNC 25 s
• Data transferred
  – 100 Mbps: web browser 2 MB; RDP 12 MB; VNC 4 MB
  – 128 Kbps: RDP 12 MB; VNC 1 MB
• Data loss reduces load time
94
Animation Experiments
• 98 KB Macromedia Flash: 315 frames, 550×400
• Frames per second
  – 100 Mbps: RDP 18; VNC 15
  – 512 Kbps: RDP 8; VNC 15
  – 128 Kbps: RDP 2; VNC 16
• Data transferred
  – 100 Mbps: RDP 3 MB; VNC 2.5 MB
  – 512 Kbps: RDP 2 MB; VNC 1.2 MB
  – 128 Kbps: RDP 2 MB; VNC 0.3 MB
• 18 fps is acceptable, < 8 fps intolerable
• Data loss increases fps
• LAN speed required for tolerable animations
95
Cyclic Animation Experiments
• Wong and Seltzer 1999, RDP on Windows NT
• Animated 468×60 pixel GIF banner: 0.01 Mbps
• Animated scrolling news ticker: 0.01 Mbps
• Bezier screen saver (10 bezier curves repeated): 0.1 Mbps
• GIF banner and scrolling news ticker simultaneously: 1.60 Mbps
• Client-side cache of pixmaps
  – The cache is not big enough to accommodate both animations
  – An LRU policy is not ideal for cyclic animations
• A 10 Mbps network can accommodate only 5 such users
• What load do other UI operations put?
96
Network Loads of UI Ops
• Wong and Seltzer 1999, RDP on Windows NT
• Typing
  – A 75-wpm typist generated 6.26 Kbps
• Mousing
  – Random, continuous: 2 Kbps
  – Usefulness of mouse filtering in T.120?
• Menu navigation
  – Depth-first selection from the Windows Start menu: 1.17 Kbps
  – Alt + right arrow in Word: 39.82 Kbps
  – Office 97 with animation: 48.88 Kbps
• Scrolling
  – Word document, Page Down key held: 60 Kbps
97
Relative Occurrence of Operations
• Danskin and Hanrahan '94, X
• Workloads: two 2D drawing programs, a PostScript previewer, the x11perf benchmark, and 5 grad students doing daily work
• Most output responses are small
  – ~100 bytes
  – TCP/IP adds 50% overhead
• Startup has lots of overhead: ~20 s
• Bytes used, in decreasing order:
  1. Images (53 bytes average size; B&W bitmap rectangles)
  2. Geometry (half of it clearing letter rectangles)
  3. Text
  4. Window enter and leave
  5. Mouse, font, window movement etc. events negligible
• Grad students vs. real people?
98
User Classes vs. Load & Bandwidth Usage
• Terminal services study
• Knowledge worker
  – Creates own work: marketing, authoring
  – Excel, Outlook, IE, Word
  – Keeps apps open all the time
• Structured task worker
  – Claims processing, accounts payable
  – Outlook, Word
  – Uses each app for less time, closing and opening apps
• Data entry worker
  – Transcription, typists, order entry
  – SQL, forms
• Simulation scripts run to measure how many of each class can be supported before a 10% degradation in server response
  – 2× Pentium III Xeon 450 MHz
  – 40 structured task workers
  – 70 knowledge workers
  – 320 data entry workers
  – In a central architecture, perhaps a separate multicaster
• Network utilization
  – Structured task: 1950 bps
  – Knowledge worker: 1200 bps
  – Data entry: 495 bps
• Encryption has little effect
99
Regular vs. Bursty Traffic
• Droms and Dyksen '90, X traffic
• Regular
  – 8-hour systems-programmer usage: 236 bps, 1.58 packets per second
  – Compares well with network file system traffic
• Bursts
  – 40,000 bps, 100 pps
  – Individual apps
    • twm and xwd: > 100,000 bps, 100 pps
    • xdvi: 60,000 bps, 90 pps
  – Comparable to animation loads
• Bandwidth requirements as much as a remote file system
100
Bandwidth in Replicated vs. Centralized
• Input in the replicated case is less data than output in the centralized case
  – Several mouse events could be discarded
  – Output could be buffered.
• X input vs. output (Ahuja '90)
  – Unbuffered: 6 times as many messages sent in centralized
  – Buffered: 3.6 times as many messages sent
  – Average input and output message size: 25 bytes
• RDP: each keystroke message is 116 bytes
• Letter a, box fill, text scroll: < 1 KB
• Bitmap load: 100 KB
101
Generic Shared Layers Considered
• Framebuffer
• Window
102
Shared Widgets
• The layer above the window layer is the toolkit
• Abstractions offered
  – Text
  – Sliders
  – Other "widgets"
103
Sharing the (Swing) Toolkit Layer
• Different window sizes
• Different looks and feels
• Independent scrolling
104
Window Divergence
• Independent scrolling
• Multiuser scrollbar
• Semantic telepointer
105
Shared Toolkit
• Unlike the window system, the toolkit is not a network layer
  – So it is more difficult to intercept I/O
• Input is easier to intercept, by subscribing to events; hence popular replicated implementations for Java AWT & Swing
  – Abdel-Wahab et al. 1994 (JCE), Chabert et al. 1998 (NCSA's Habanero), Begole '01
  – A GlassPane can be used in Swing
    • A frame can be associated with a glass pane whose transparent property is set to true
    • Mouse and keyboard events are then sent to the glass pane
• Centralized sharing done for Java Swing by intercepting output as well as input (Chung '02)
  – Modified the JComponent constructor to turn the debug option on
  – The Graphics object is wrapped in a DebugGraphics object
  – The DebugGraphics class was changed to intercept drawing actions
  – Cannot modify Graphics itself, as it is an abstract class subclassed by platform-dependent classes
106
Shared Toolkit
• Widely available commercial shared toolkits do not exist.
• An intermediate point between model and window sharing.
• Like model sharing
  – Independent window sizes and scrolling
  – Concurrent editing of different widgets
  – Merging of concurrent changes to a replicated text widget
• Like window sharing
  – No new programming model or abstractions
  – Works with existing programs
107
Replicated Widgets
[Figure: both toolkits start with "abc"; Insert w,d,1 on one replica's widget is broadcast by the input distributors, leaving both widgets showing "adbc"]
108
Sharing the Model Layer
• The same model can be bound to different widgets!
• Not possible with toolkit sharing
109
Sharing the Model Layer
[Figure: the layer stack — Model (the program component) above Toolkit, Window, and Framebuffer (the user-interface component)]
110
Sharing the Model Layer
[Figure: the same stack, with a View/Controller between model and toolkit]
• Cost of accessing the remote model
111
Sharing the Model Layer
[Figure: the same stack]
• Send the changed model state in the notification
112
Sharing the Model Layer
[Figure: the same stack]
• No standard protocol
113
Centralized Architecture
[Figure: a central program with an output broadcaster & I/O relayer serving per-user UIs through I/O relayers]
• The output broadcaster and relayers cannot be standard
114
Replicated Architecture
[Figure: per-user program replicas with input broadcasters]
• The input broadcaster cannot be standard
115
Model Collaboration Approaches
• Communication facilities of varying abstraction for manual implementation
• Define standard I/O for MVC
• Replicated types
• Mix these abstractions
116
Unstructured Channel Approach
• T.120 and other multicast approaches
  – Used for data sharing in whiteboards
• Provide byte-stream-based IPC primitives
• Add multicast to the session capability
• The programmer uses these to create relayers and broadcasters
117
RPC
• Communicate programming-language types rather than unstructured byte streams
  – Synchronous or asynchronous
• Use RPC
  – Many Java-based collaboration platforms use RMI
118
M-RPC
• Provide multicast RPC (Greenberg and Marwood '92, Dewan and Choudhary '92) to a subset of the sites participating in a session:
  – processes of a programmer-defined group of users
  – processes of all users in the session
  – processes of users other than the current inputter
  – the current inputter
  – all processes of a specific user
  – a specific process
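The destination-set idea can be sketched minimally (a hypothetical `Session` class; the cited systems dispatch typed RPCs, not strings, and support more selectors than shown):

```python
class Session:
    """Multicast RPC sketch: invoke a call in a chosen subset of the
    session's processes, selected relative to the current inputter."""
    def __init__(self):
        self.processes = {}            # user -> list of process callbacks

    def join(self, user, process):
        self.processes.setdefault(user, []).append(process)

    def mrpc(self, call, to="all", inputter=None):
        for user, procs in self.processes.items():
            if to == "others" and user == inputter:
                continue               # everyone except the current inputter
            if to == "inputter" and user != inputter:
                continue               # only the current inputter
            for proc in procs:
                proc(call)             # deliver the call to the process
```

The `gk_toOthers` call in the GroupKit example on the next slide is exactly the `to="others"` destination set.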
119
GroupKit Example

proc insertIdea {idea} {
    insertColouredIdea blue $idea
    gk_toOthers "insertColouredIdea red $idea"
}
120
Model Collaboration Approaches
• Communication facilities of varying abstraction for manual implementation
• Define standard I/O for MVC
• Replicated types
• Mix these abstractions
121
Sharing the Model Layer
[Figure: the layer stack; define a standard protocol at the model layer]
122
Sharing the Model Layer
[Figure: the layer stack with a View; define a standard protocol between model and view]
123
Standard Model-View Protocol
• Can be in terms of model objects or view elements
• View elements are varied
  – Bar charts, pie charts
• Model elements can be defined by standard types
• Single-user I/O model
  – Output: the model sends its displayed elements, and updates to them, to the view
  – Input: the view sends input updates to displayed model elements
• Dewan & Choudhary '90
[Figure: model and view exchanging displayed elements]
124
IM Model

/* dmc Editable String, IM_History */
typedef struct {
    unsigned num;
    struct String *message_arr;
} IM_History;
IM_History im_history;
String message;

Load () {
    Dm_Submit (&im_history, "IM History", "IM_History");
    Dm_Submit (&message, "Message", "String");
    Dm_Callback ("Message", &updateMessage);
    Dm_Engage ("IM History");
    Dm_Engage ("Message");
}

updateMessage (String variable, String new_message) {
    im_history.message_arr[im_history.num++] = new_message;
    Dm_Insert ("IM History", im_history.num, new_message);
}
Create view of element named “IM History” whose type is “IM_History” and value is at address “&im_history”
Show (a la map) the view of “IM History”
Whenever “Message” is changed by user call updateMessage()
125
Multiuser Model-View Protocol
• Multi-user I/O model
– Output broadcast: Output messages broadcast to all views.
– Input relay: Multiple views send input messages to model.
– Input coupling: Input messages can be sent to other views also
• Dewan & Choudhary ’91
Model
View View
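The three multiuser message flows can be sketched by giving the model a list of views; again all Python names are invented and no real distribution is performed:

```python
# Sketch of the multiuser protocol: output broadcast, input relay, and
# input coupling. All names are invented.

class View:
    def __init__(self, model):
        self.displayed = {}
        self.model = model
        model.views.append(self)

    def show(self, name, value):          # output direction: model -> view
        self.displayed[name] = value

    def user_edits(self, name, value):    # input direction: view -> model
        self.model.input_relay(self, name, value)

class Model:
    def __init__(self):
        self.views = []
        self.elements = {}

    def output_broadcast(self, name, value):
        # Output broadcast: output messages go to all views.
        self.elements[name] = value
        for v in self.views:
            v.show(name, value)

    def input_relay(self, source, name, value):
        # Input relay: any view's input reaches the model...
        self.elements[name] = value
        # ...and input coupling echoes it to the other views.
        for v in self.views:
            if v is not source:
                v.show(name, value)

model = Model()
v1, v2 = View(model), View(model)
model.output_broadcast("IM History", [])
v1.user_edits("Message", "hi")
print(model.elements["Message"], v2.displayed["Message"])   # hi hi
```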
126
IM Model

/* dmc Editable String, IM_History */
typedef struct {
    unsigned num;
    struct String *message_arr;
} IM_History;
IM_History im_history;
String message;

Load () {
    Dm_Submit (&im_history, "IM History", "IM_History");
    Dm_Submit (&message, "Message", "String");
    Dm_Callback ("Message", &updateMessage);
    Dm_Engage ("IM History");
    Dm_Engage ("Message");
}

updateMessage (String variable, String new_message) {
    im_history.message_arr[im_history.num++] = new_message;
    Dm_Insert ("IM History", im_history.num, new_message);
}

Insert to all
Called by any user
127
Replicated Objects in Central Architecture
• Distributed view needs to create local replica of displayed object.
• Can build replication into types
Model
View View
replicas
128
Replicating Popular Types for Central and Replicated Architectures
• Create replicated versions of selected popular types.
• Changes in a type instance automatically made in all of its replicas (in views or models)
– No need for explicit I/O
• Can select which values in a layer replicated
• Architectures
– replicated architecture (Greenberg and Marwood ’92, Groove)
– semi-centralized (Munson & Dewan ’94, PlaceWare)
View
Model
View
Model Model
View View
129
Example Replicated Types
• Popular primitive types: String, int, boolean … (Munson & Dewan ’94, PlaceWare, Groove)
• Records of simple types (Munson & Dewan ’94, Groove)
• Dynamic sequences (Munson & Dewan ’94, Groove, PlaceWare)
• Hashtables (Greenberg & Marwood ’92, Munson & Dewan ’94, Groove)
• Combinations of these types/constructors (Munson & Dewan ’94, PlaceWare, Groove)
130
Kinds of Distributed Objects
• By reference (Java and .NET)
– reference sent to remote site
– remote method invocations result in calls at local site
• By value (Java and .NET)
– deep copy of object sent
– remote method invocations result in calls at remote site
– copies diverge
• Replicated objects
– deep copy of object sent
– remote method invocations result in local and remote calls
– either locks or merging used to detect/fix conflicts
site 1 site 2
site 1 site 2
site 1 site 2
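The three kinds can be contrasted in a small sketch; Counter, Proxy, and ReplicatedCounter are invented illustrations, not the Java or .NET mechanisms themselves:

```python
import copy

class Counter:
    def __init__(self): self.n = 0
    def incr(self): self.n += 1

# 1. By reference: the remote site gets a proxy; invocations run at the
#    home site.
class Proxy:
    def __init__(self, target): self.target = target
    def incr(self): self.target.incr()

home = Counter()
remote = Proxy(home)
remote.incr()
print(home.n)              # 1 -- the call landed on the home object

# 2. By value: a deep copy is shipped; the two copies then diverge.
snapshot = copy.deepcopy(home)
snapshot.incr()
print(home.n, snapshot.n)  # 1 2 -- diverged

# 3. Replicated: an invocation runs locally and on every peer replica.
class ReplicatedCounter(Counter):
    def __init__(self):
        super().__init__()
        self.peers = []
    def incr(self):
        super().incr()
        for p in self.peers:
            Counter.incr(p)    # apply at peers without re-broadcasting

r1, r2 = ReplicatedCounter(), ReplicatedCounter()
r1.peers.append(r2); r2.peers.append(r1)
r1.incr()
print(r1.n, r2.n)          # 1 1 -- replicas stay consistent
```

Real replicated objects would additionally need the locking or merging mentioned above to handle concurrent, conflicting invocations.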
131
Alternative model sharing approaches
1. Stream-based communication2. Regular RPC3. Multicast RPC4. Replicated Objects (/Generic Model View
Protocol)
132
Replicated Objects vs. Communication Facilities
• Higher abstraction
– No notion of other sites
– Just make change
• Cannot use existing types directly
– E.g. in Munson & Dewan ’94, ReplicatedSequence
• Architecture flexibility
– PlaceWare bound to central architecture
– Replicas in client and server of different types, e.g. VectorClient & VectorServer
• Abstraction flexibility
– Set of types whose replication supported by infrastructure automatically
– Programmer-defined types not automatically supported
• Sharing flexibility
– Who and when coupled burnt into shared value
• Use for new apps
133
Replicated Objects vs. Communication Facilities
• PlaceWare has much richer set than WebEx
– Ability to include Polling as a slide in a PowerPoint presentation
– Seating arrangement
• Not as useful for converting existing apps.
– Need to convert standard types to replicated types
– Repartitioning to separate shared and unshared models
134
Stream based vs. Others
• Lowest-level
– Serialize and deserialize objects
– Multiplex and demultiplex operation invocations into and from stream
• Stream-based communication (wire protocol) is language independent
• No need to learn non-standard syntax and compilers
• May be the right abstraction for converting existing apps into collaborative ones.
135
Case Study: Collaborative Video Viewing (Cadiz, Balachandran et al. 2000)
• Replicated architecture created using T 120 multicast layer.
• Exchanged command names
• Implementer said it was easy to learn and use.
136
RPC vs. Others
• Intermediate ease of learning, ease of usage, flexibility
• Use when:
– Overhead of channel usage < overhead of RPC learning
– Appropriate replicated types
• Not available, or
• Who and when coupled, architecture burnt into replicated type, or
• Learning overhead > RPC usage overhead
137
M-RPC vs. RPC
• Higher-level abstraction
• Do not have to know exact site roster
– Others, all, current
• Can be automatically mapped to stream-based multicast
• Use M-RPC when possible
138
Combining Approaches
• System combining benefits of multiple abstractions?
– Flexibility of lower-level and automation of higher-level
• Co-existence
• Migratory path
• New abstractions
139
Coexistence
Support all of these abstractions in one system
• RPC and shared objects (Dewan & Choudhary ’91, Greenberg & Marwood ’92, Munson & Dewan ’94, and PlaceWare)
140
Migratory Path
Problem of simple co-existence
• Low-level abstraction effort not reused.
– E.g. RPC used to build a file directory
• Allow the use of low-level abstraction to create higher-level abstraction
• Framework allowing RPC to be used to create new shared objects (Munson & Dewan ’94, PlaceWare).
– E.g. shared hash table
• Can be difficult to use and learn
• Low-level abstraction still needed when controlling who and when coupled
141
New abstractions: Broadcast Methods
Stefik et al ’85: Mixes shared objects and RPC
• Declare one or more methods of arbitrary class as broadcast
• Method invoked on all corresponding instances in other processes in session
• Arbitrary abstraction flexibility
public class Outline {
    String getTitle();
    broadcast void setTitle(String title);
    Section getSection(int i);
    int getSectionCount();
    broadcast void setSection(int i, Section s);
    broadcast void insertSection(int i, Section s);
    broadcast void removeSection(int i);
}
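Without language support, broadcast methods can be approximated by a wrapper that re-invokes the method on associated instances. This Python sketch (the @broadcast decorator and associates list are inventions, not the Colab mechanism) shows the idea:

```python
# Sketch of broadcast methods: a method marked @broadcast runs on the
# local instance and on its associates in other processes. Illustrative
# only; Stefik et al. used language-level declarations.

def broadcast(method):
    def wrapper(self, *args):
        method(self, *args)                    # local invocation
        for peer in self.associates:
            method(peer, *args)                # corresponding instances
    return wrapper

class Outline:
    def __init__(self):
        self.title = ""
        self.sections = []
        self.associates = []

    def get_title(self):                       # ordinary, non-broadcast
        return self.title

    @broadcast
    def set_title(self, title):
        self.title = title

    @broadcast
    def insert_section(self, i, s):
        self.sections.insert(i, s)

o1, o2 = Outline(), Outline()
o1.associates.append(o2); o2.associates.append(o1)
o1.set_title("Collaboration")
o1.insert_section(0, "Intro")
print(o2.title, o2.sections)   # Collaboration ['Intro']
```

Note that if a broadcast method invoked another broadcast method through such a wrapper, the nested call would be re-broadcast at every associate and duplicate updates, which is exactly the hazard the slides warn about.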
142
Association
Broadcast Methods Usage
Model
View
Window
User 1
Model
View
Window
User 2
bm
Broadcast method
Associates/Replicas
Associates/Replicas
lm
lm
lm
lmlm
143
Problems with Broadcast Methods

public class Outline {
    String getTitle();
    broadcast void setTitle(String title);
    Section getSection(int i);
    int getSectionCount();
    broadcast void setSection(int i, Section s);
    broadcast void insertSection(int i, Section s);
    broadcast void removeSection(int i);
    broadcast void insertAbstract (Section s) {
        insertSection (0, s);
    }
}
• Language support needed– C#?
• Single multicast group– Cannot do subset of participants
• Selecting broadcast methods required much care
– Sharing at method rather than data level
Broadcast method should not call another broadcast method!
144
Method vs. State based Sharing
• Method-based sharing for indirectly sharing state.
• Programmer provides mapping between state and methods that change it.
• With mapping known to infrastructure, replicated types automatically implemented.
• Mapping of internal state and methods not sufficient because of host-dependent data (especially in UI abstractions)
• Need mapping of external (logical) state.
145
Property-based Sharing

public class Outline {
    String getTitle();
    void setTitle(String title);
    Section getSection(int i);
    int getSectionCount();
    void setSection(int i, Section s);
    void insertSection(int i, Section s);
    void removeSection(int i);
    void insertAbstract (Section s) {
        insertSection(0, s);
    }
}
• Roussev & Dewan ’00
• Synchronize external state or properties
• Properties deduced automatically from programming patterns
– Getter and setter for record fields
– Hashtables and sequences
• System keeps properties consistent
– Parameterized coupling model
• Patterns can be programmer-defined
146
Programmer-defined conventions
insert = void insert<PropName> (int, <ElemType>)
remove = void remove<PropName> (int)
lookup = <ElemType> elementAt<PropName> (int)
set = void set<PropName> (int, <ElemType>)
count = int get<PropName>Count ()
getter = <PropType> get<PropName> ()
setter = void set<PropName> (<PropType>)
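Pattern-based deduction can be approximated with reflection over method names. A deliberately simplified Python sketch, covering only the getter/setter pattern (the function name and pattern table are inventions):

```python
import re

# Deduce shared properties from naming patterns, in the spirit of
# Roussev & Dewan '00: a name is a property iff it has both a getter
# and a setter. Only one pattern from the table is modeled here.

def deduce_properties(obj):
    names = [n for n in dir(obj) if callable(getattr(obj, n))]
    getters = {m.group(1) for n in names
               if (m := re.fullmatch(r"get(\w+)", n))}
    setters = {m.group(1) for n in names
               if (m := re.fullmatch(r"set(\w+)", n))}
    return sorted(getters & setters)

class Outline:
    def __init__(self):
        self._title = ""
    def getTitle(self): return self._title
    def setTitle(self, t): self._title = t
    def getSectionCount(self): return 0   # no setter: not deduced

print(deduce_properties(Outline()))   # ['Title']
```

A fuller implementation would recognize the sequence patterns (insert/remove/elementAt/count) and let programmers register their own conventions.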
147
Multi-Layer Sharing with Shared Objects
Story so far:
• Need separate sharing implementation for each layer
– Framebuffer: VNC
– Window: T 120
– Toolkit: GroupKit
• Problem with data layer since no standard protocol
• Create shared objects for this layer
• But objects occur at each layer
– Framebuffer
– Window
– TextArea
• Why not use shared object abstraction for any of these layers?
148
Sharing Various Layers
Framebuffer
Window
Toolkit
Framebuffer
Window
Toolkit
Parameterized CouplerModel
View
Model
View
149
Sharing Various Layers
Framebuffer
Window
Toolkit
Framebuffer
Window
ToolkitParameterized Coupler
Model
View
Model
View
150
Sharing Various Layers
Framebuffer
Window
Toolkit
Framebuffer
Window
ToolkitParameterized Coupler
Model
View
Model
View
151
Experience with Property Based Sharing
• Used for
– Model
– AWT/Swing Toolkit
– Existing Graphics Editor
• Requires well-written code
– Existing code may not be
152
Multi-layer Sharing
• Two ways to implement colab. application
– Distribute I/O
• Input in Replicated
• Output in Centralized
• Different implementations (XTV, NetMeeting) distributed different I/O
– Define replicated objects
• A single implementation used for multiple layers
• Single implementation in Distribute I/O approach?
153
Translator-based Multi-Layer Support for I/O Distribution
• Chung & Dewan ‘01
• Abstract Inter-Layer Communication Protocol
– input (object)
– output (object)
– …
• Translator between specific and abstract protocol
• Adaptive Distributor supporting arbitrary, external mappings between program and UI components
• Bridges gap between
– window sharing (e.g. T 120 app sharing) and higher-level sharing (e.g. T 120 whiteboard sharing)
• Supports both centralized and replicated architectures and dynamic transitions between them.
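The translator idea can be sketched with a toy layer protocol mapped onto abstract input/output of objects. All names (AbstractDistributor, WindowTranslator) are invented; no real T 120 or window protocol is modeled:

```python
# Sketch of translator-based multi-layer support: a layer-specific
# protocol is translated to an abstract input(object)/output(object)
# protocol, so one distributor can share any layer.

class AbstractDistributor:
    def __init__(self):
        self.peers = []               # translators at other sites
    def output(self, obj):            # abstract output -> broadcast
        for peer in self.peers:
            peer.deliver(obj)

class WindowTranslator:
    """Maps a (fake) window protocol to the abstract protocol."""
    def __init__(self, distributor):
        self.distributor = distributor
        self.drawn = []
    def draw_rect(self, x, y, w, h):
        # Layer-specific operation becomes an abstract output object.
        self.distributor.output(("rect", x, y, w, h))
    def deliver(self, obj):
        # Abstract input is mapped back to a layer operation.
        self.drawn.append(obj)

d1, d2 = AbstractDistributor(), AbstractDistributor()
t1, t2 = WindowTranslator(d1), WindowTranslator(d2)
d1.peers.append(t2); d2.peers.append(t1)
t1.draw_rect(0, 0, 10, 10)
print(t2.drawn)    # [('rect', 0, 0, 10, 10)]
```

The point of the indirection is that a toolkit or framebuffer translator could replace WindowTranslator without touching the distributor.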
154
I/O Distrib: Multi-Layer Support
PC
Layer N
Layer N-1
Layer 0
Layer S
Layer N
Layer N-1
Layer 0
Layer S
Layer N
Layer N-1
Layer 0
Layer S
Translator Translator TranslatorAdaptive Distributor Adaptive Distributor Adaptive Distributor
155
Translator Translator Translator
I/O Distrib: Multi-Layer Support
PC
Layer N
Layer S+1
Layer N
Layer S+1
Layer N
Layer S+1
Layer 0
Layer S
Layer 0
Layer S
Layer 0
Layer S
Adaptive Distributor Adaptive Distributor Adaptive Distributor
156
Experience with Translators
• VNC
• X
• Java Swing
• User Interface Generator
• Web Services
• Requires translator code, which can be non-trivial
157
Infrastructure vs. Meta-Infrastructure
Property/Translator-based Distributor/Coupler
Text Editor Outline Editor
Pattern Editor
X JavaBeansJava’s Swing VNC
application application
application
application
application application
application application
Infrastructure Meta-Infrastructure
Checkers
158
The End of Comp 290-063 Material
(Remaining Slides FYI)
159
Using Legacy Code
• Issue: how to add collaboration awareness to single-user layer
– Model
– Toolkit
– Window System
– …
• Goal
– Want as little coupling as possible between existing and new code
160
Adding Collaboration Awareness to Layer
Colab. Transp.
Colab. Aware
Extend Colab-Transp. Class
JCE
Colab. Transp. Colab. Aware
Ad-HocSuite
Colab. Aware
Colab. Transp.
Extend Colab. Aware Class
Sync
Colab. Aware Colab. Transp.
Colab. Aware DelegateRoussev’00
161
Proxy Delegate
XTV
X Server
X Client
Pseudo Server COLA
Called Object
Calling Object
Adapter Object
162
Identifying Replicas
• Manual connection:
– Translators identify peers (Chung and Dewan ’01)
• Automatic:
– Central downloading:
• Central copy linked to downloaded objects (PlaceWare, Suite, Sync)
– Identical programs: Stefik et al ’85
• Assume each site runs the same program and instantiates objects in the same order
• Connect corresponding instances (at same virtual address) automatically.
– Identical instantiation order intercepted
• Connect Nth instantiated object intercepted by system
• E.g. Nth instantiated windows correspond
– External descriptions (Groove)
• Assume an external description describing models and corresponding views
• System instantiates models and automatically connects remote replicas of them.
• Gives programmers events to connect models to local objects (views, controllers).
– No dynamic control over shared objects.
• Semi-manual (Roussev and Dewan ’00)
– Replicas with same GIDs automatically connected.
– Programmer assigns GIDs to top-level objects, system to contained objects
163
Connecting Replicas vs. Layers• Object correspondence
established after containing layer correspondence.
• Only some objects may be linked
• Layer correspondence established by session management
• E.g. Connecting whiteboards vs. shapes in NetMeeting
164
Conference 1
App1 App2
Basic Session Management Operations
User 1
Join/Leave (User 2)
User 2
Create/ Delete (Conference 1)
Add/Delete (App3)
App3
List/Query/ Set/ Notify Properties
165
Basic Firewall
• Limit network communication to and from protected sites
• Do not allow other sites to initiate connections to protected sites.
• Protected sites initiate connection through proxies that can be closed if problems
• Can get back results
– Bidirectional writes
– Call/reply
protected site
communicating site
unprotected proxy
open
send
send
call
reply
166
Protocol-based Firewall
• May be restricted to certain protocols– HTTP– SIP
protected site
communicating site
unprotected proxy
open
open
call
reply
http
sip
sip
167
Firewalls and Service Access
• User/client at protected site.
• Service at unprotected site.
• Communication and dataflow initiated by protected client site
– Can result in transfer of data to client and/or server
• If no restriction on protocol, use regular RPC
• If only HTTP provided, make RPC over HTTP
– Web services/SOAP model
protected user
unprotected service
unprotected proxy
open
call
reply
rpc
http-rpc
168
Firewalls and Collaboration
• Communicating sites may all be protected.
• How do we allow opens to protected user?
protected user
protected user
open
169
Firewalls and collaboration
• Session-based forwarder
• Protected site opens connection to forwarder site outside firewall for session duration
• Communicating site also opens connection to forwarder site.
• Forwarder site relays messages to protected site
• Works well if unrestricted access allowed and used
• What if restricted protocol?
unprotected forwarder
open
open
close
close
protected user
protected user
send
send
170
Restricted Protocol
• If only a restricted protocol is allowed, communication is layered on top of it as in the service solution
• Adds overhead.
171
Restricted protocols and data to protected site
• HTTP does not allow data flow to be initiated by unprotected site
• Polling
– Semi-synchronous collaboration
• Blocked gets (PlaceWare)
– Blocked server calls in general in one-way call model
– Must refresh after timeouts
• SIP for MVC model
– Model sends small notifications via SIP
– Client makes call to get larger data
– RPC over SIP?
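A blocked get can be sketched with a queue whose get blocks until an update is posted or a timeout fires. NotificationServer and its methods are invented, and a real system would carry this over HTTP rather than in-process:

```python
import queue, threading

# Sketch of a "blocked get": the protected client issues a GET that the
# server holds open until an update arrives or a timeout expires, letting
# data reach a site that cannot accept incoming connections.

class NotificationServer:
    def __init__(self):
        self.updates = queue.Queue()

    def post(self, update):            # unprotected side pushes an update
        self.updates.put(update)

    def blocked_get(self, timeout):    # client's GET blocks here
        try:
            return self.updates.get(timeout=timeout)
        except queue.Empty:
            return None                # client must refresh after timeout

server = NotificationServer()
threading.Timer(0.05, server.post, args=["slide 3 changed"]).start()
print(server.blocked_get(timeout=1.0))   # slide 3 changed
print(server.blocked_get(timeout=0.01))  # None (timed out; client re-gets)
```

The timeout-then-refresh loop is the "must refresh after timeouts" cost noted above.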
172
Firewall-unaware clients
• Would like to isolate specific apps from worrying about protocol choice and translation.
• PlaceWare provides RPC
– Can go over HTTP or not
• Groove apps do not communicate directly – just use shared objects and don’t define new ones
– Can go either way
• Groove and PlaceWare try unrestricted first and then HTTP
• UNC system provides standard property-based notifications to programmer and allows them to be delivered as:
– RMI
– Web service
– SIP
– Blocked gets
– Protected site polling
173
Forwarder & Latency
• Adds latency
– Can have multiple forwarders bound to different areas (Webex)
– Adaptive based on firewall detection (Groove)
• try to open directly first
• if fails because of firewall, opens system-provided forwarder
• asymmetric communication possible
– Messages to user go through forwarder
– Messages from user go directly
– Groove is also a service based model!
– PlaceWare always has latency and it shows
unprotected forwarder
protected user
protected user
174
Forwarder & Congestion Control
• Breaks congestion control algorithms
– Congestion between protected site and forwarder, which the algorithms control, may differ from end-to-end congestion
– T 120-like end-to-end congestion control relevant
unprotected forwarder
protected user
protected user
Different congestions
175
Forwarder + Multi-caster
• Forwarder can multicast to other users on behalf of sending user
• Separation of application processing and distribution
– Supported by PlaceWare, Webex
• Reduces messages in link to forwarder
• Separate multicaster useful even if no firewalls
• Forwarder can be much more powerful machine.
– T 120 provides multi-caster without firewall solution
• Forwarder can be connected to higher speed networks
– In Groove, if (possibly unprotected) user connected via slow network, single message sent to forwarder, which is then multicast
• May need hierarchy of multicasters (T 120), especially dumb-bell
unprotected forwarder + multicaster
protected user
protected user
protected user
176
Forwarder + State Loader
• Forwarder can also maintain state in terms of object attributes
• Slow and latecomer sites pull state asynchronously from state loader
– Avoid message from forwarder to protected site containing state
– Alternative to multicast
– Extra message to forwarder for pulling adds latency and traffic
– Each site pulls at its consumption rate
• Works for MVC like I/O models
– VNC: framebuffer rectangles
– PlaceWare: PPT slides
– Chung & Dewan ’98: Log of arbitrary input/output events converted to object state
• Useful even if no firewalls
• Goes against stateless server idea
– State should be an optimization
unprotected forwarder + state loader
protected user
protected user
protected user
read
read
read
177
Forwarder + multicaster + state loader
• Multicaster for rapidly changing information
• State loader for slower changing information
• Solution adopted in PlaceWare– Multicast for window sharing– State loading for PPT slides
• VNC results show pull model works for window sharing
• Greenberg ’02 shows pull model works for video
unprotected forwarder + multicaster + state
loader
protected user
protected user
protected user
read
read
read
send
send
send
178
Interoperability
• Cannot make assumptions about remote sites
• Important in collaboration because one non-conforming site can prevent adoption of collaboration technology
• Devise “standard” protocols for various collaboration aspects to which specific protocols can be translated
179
Examples of Collaboration Aspects
• Codecs in media (SIP)• Window/Frame-based sharing
– Caching capability for bitmaps, colormaps.– Graphics operations supported– Bits per pixel– Virtual desktop size
180
Layer and Standard Protocols
• Easier to agree on lower level layer
• Every computer has a framebuffer with similar properties.
• Windows are less standard
– WinCE and Windows not same
• Toolkits even less so
– Java Swing and AWT
• Data in different languages and types
– Interoperation very difficult
181
Data Standard
• Web Services– Everyone converts to it
• Object properties based on patterns translated to Web services?
• XAF
182
Multiple Standards
• More than one standard can exist
– With different functionality/performance
• How to negotiate?
• Same techniques can be used to negotiate user policies
– E.g. which form of concurrency control or coupling
183
Enumeration/Selection Approach
• One party proposes a series of protocols
– M = audio 4006 RTP/AVP 0 4
– A = rtpmap: 0 PCMU/8000
– A = rtpmap: 4 GSM/8000
• Other party picks one of them
– M = audio 4006 RTP/AVP 0 4
– A = rtpmap: 4 GSM/8000
184
Extending to multiple parties
• One party proposes a series of protocols
• Other responds with subsets supported
• Proposing party picks some value in intersection.
• Multiple rounds of negotiation
185
Single-Round
• Assume
– Alternative protocols can be ordered into levels, where support for protocol at level l indicates support for all protocols at levels less than l
• Broadcast level and pick min of values received
186
Capability Negotiation
• Protocol not named but function of capabilities
– Set of drawing operations supported.
• Increasing levels can represent increasing capability sets.
– Sets of drawing operations
• Increasing levels can represent increasing capability values
– Max virtual desktop size
187
Uniform Local Algorithm
• Apply same local algorithm at all sites to choose level and hence associated “collapsed” capability set
– Min
• Of capability set implies an AND
• Bits per pixel, drawing operation sets
– Max
• Of boolean values implies an OR
• Virtual desktop size
– Something else based on # and identity of sites supporting each level
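The uniform local algorithm collapses announced levels in one round; negotiate is an invented name, and min/max stand for the AND/OR collapses described above:

```python
# Single-round negotiation with a uniform local algorithm: every site
# broadcasts its level for a capability, and each site applies the SAME
# collapsing function to the values it hears, so all sites agree without
# further rounds.

def negotiate(site_levels, collapse=min):
    """site_levels: the level announced by each site in the session."""
    return collapse(site_levels)

# min over ordered capability values acts like an AND over capability sets:
bits_per_pixel = negotiate([8, 24, 16])                  # -> 8
# max over boolean capabilities acts like an OR:
supports_audio = negotiate([False, True], collapse=max)  # -> True
print(bits_per_pixel, supports_audio)
```

Because every site runs the same pure function on the same inputs, no site needs to confirm the outcome with the others.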
188
UI Policy Negotiation
• Can use same mechanism for UI policy negotiation
• Examples
– Unconditional grant floor control: In T 120, each node can say
• Yes
• No
• Ask Floor Controller
• Yes < Ask Floor Controller < No
• Use min of this for least permissive control
– Sharing control: many systems each node can say:
• Share scrollbar
• Not share < share
• Use min for least permissive sharing
189
Office Apps
• Multiple versions of office apps exist• Use similar scheme for negotiating
capabilities of office apps in conference– pdf capability < viewer < full app– Office 10 < Office 11 < Office 12
190
Conversion to Standard Protocols
• May need to convert richer protocol to a “lowest common denominator” protocol with lesser capabilities
• Also may not wish to lose lowest common denominator protocol and do per-site conversion
• Drawing operation to bitmap in T 120
• Fine-grained locks to floor control in Dewan and Sharma ‘91
191
Composability
• Collaboration infrastructure must perform several tasks
– Session management
– Set up (centralized/replicated/hybrid) architecture
– I/O distribution
• Filtering
• Multipoint communication
• Latecomer and slow user accommodation
– Access and concurrency control
– Firewall traversal
• Multiple ways to perform each of these functions
• Implement separate functions in different composable modules
• Difficult because they must work with each other
192
T 120 Composable Architecture
• Multicast layer
– Multicast + tokens
• Session Management
– Session operations + capability negotiation
• Application template
– Standard structure of client of multicast + session management
• Window-based sharing
– Centralized architecture for window sharing
– Uses session management + multicast
• Whiteboard
– Replicated or centralized whiteboard sharing
– Uses session management + multicast
193
T 120 Layers
T.120 Infrastructure Recommendations
Rec. T.126 (SI)
Rec. T.127 (MBFT)
Rec. T.120
Application Protocol Entity
User Application(s)(Using Both Standard and Non-Standard Application Protocols)
User Application(s)(Using Std. Appl. Protocols)
NodeController
User Application(s)(Using Non-Std Protocols)
Application ProtocolRecommendations
Non-Standard ApplicationProtocol Entity
Generic Conference Control (GCC)Rec. T.124
Multipoint Communication Service (MCS)Rec. T.122/T.125
Network Specific Transport ProtocolsRec. T.123
194
Composability Advantages
• Can use needed components– Just the multicast channel
• Can substitute layers– Different multicast implementation
• Orthogonality– Level of sharing not bound to multicast– Architecture not bound to multicast
195
Composability Disadvantages
• May have to do much more work.
• T 120 component model
– Create application protocol entity and relate to actual application
– Create/join multicast channel
• Suite & PlaceWare monolithic model
– Instantiating an application automatically performs above tasks
196
Combining Advantages
• Provide high-level abstractions representing popular ways of interacting with subsets of these components
• e.g. Implementing APE for Java applets
197
Improving T120 Componentization
• Add object abstraction on top of application protocol entity
• Web Service• Object with properties?
198
Improving T120 Componentization
• Separate input and output sharing
• Some nodes will be input only
– E.g. PDAs sharing projected presentation
199
Using Mobile Computer for Input
UI UI
Program
UI
Use mobile computers for input (e.g. polls)
200
Generic Conference Abstraction
• Conference (T 120, PlaceWare)• Room (MUDs, Jupiter)• Space (Groove)• Session (SIP)
– Different from application session• May be persistent and asynchronous
– Space, Room
201
Conference 1
App1 App2
Basic Session Management Operations
User 1
Join/Leave (User 2)
User 2
Create/ Delete (Conference 1)
Add/Delete (App3)
App3
List/Query/ Set/ Notify Properties
202
Advanced Session Management
• Join/Leave subset of (possibly queried) apps (T 120)
• Eject user (T 120)
• Transfer users from one conference to another (T 120)
• Timed conference
– Set conference duration (T 120, PlaceWare)
– Query duration left (T 120)
– Extend duration (T 120)
• Schedule conference and modify schedule (PlaceWare)
• Keep interaction log, and query (PlaceWare)
• Terminate when no active users (PlaceWare, T 120)
• In persistent conferences, in-core version automatically created
– When first user joins (PlaceWare)
– When conference manager launched (Groove)
203
User 2
Centralized Session Management
User 1
UI
Program
Output Broadcaster & I/O Relayer
UI
I/O Relayer
• Add app
– loads and starts program at:
• invoker’s site
– XTV, Suite
• or some other site
– T 120
– joins all existing users
• Join conference
– loads, starts, and binds local UI to central program
204
User 2
Replicated Session Management
User 1
UI UI
Input broadcaster
• Add app
– loads and starts program replica at invoker’s site
– XTV, Groove
– joins all existing users
• Join conference
– loads, starts, and binds replicas
Program
Input broadcaster
Program
205
Architecture Flexibility of Session Management
• Architecture specific
– Groove, PlaceWare, …
• Architecture semi-dependent
– T 120
• Single APE abstraction “started” when user connects
• APE abstraction bound to architecture
• Architecture independent
– Chung and Dewan ’01
• Single “loggable” abstraction connected to central or replicated logger
• Loggable not bound to architecture
• Join operation specifies architecture
206
Architecture-independent Session Management
Chung & Dewan ’01: one app session per conference
(a) creating a new session (b) joining an existing session
207
Application-Session Management Coordination
• Session Management must know about attachment points.
• In centralized architecture:
– POD and Applet – PlaceWare
– X app and X server – XTV
– Generic APE – T 120
– Java “loggable” objects – Chung & Dewan ’98
• In replicated architecture: program replicas and UIs
– model and views (Groove & Sync)
– APE (T 120)
– Java “loggable” objects – Chung & Dewan ’01
• These are registered with session management.
208
Explicit & Implicit Join/Leave
• Explicit– Create, join, and leave operations explicitly
executed• Implicit
– Automatic or side effect of other operations
209
Implicit Join/Leave
Session Joining/Leaving Side Effect of:
• Artifacts being edited
– Editing same object joins them in conference
– Dewan & Choudhary ’91, Edwards
– Important MSFT Office 12 scenario
• Intersection of auras in virtual environment
– Benford and DIVE
– Applications and users have auras
– Join conference result of user’s aura intersecting application aura
• Conference has single application session
– Office 12 fixes this
• Not general.
• No control – with options, semi-implicit
210
Explicit Join/Leave
App2 App3
User 1 User 2
Conference 1
Join
User 2
App1 App2 App3
User 1 User 2
Conference 1
User 2
InviteAccept
Autonomous Joining Invitation-based Joining
T 120 T 120 SIP Groove
App1
211
Autonomous vs. Invitation-based Join
• Less message traffic and per user overhead– No invitations sent
• Needs discovery (e.g. notifications), name resolution, and separate access control mechanism
• Overhead amortized in recurring conferences
• Suitable for large, planned conferences
• Implicit notification• Low overhead to create
small conference.• Raises mobility issues
– User may have multiple devices
– Can register device (SIP, Groove)
– Privacy issues• Raises firewall issues
– invitee must accept connections
212
Examples
• Invitation-based– NetMeeting, Messenger
• Autonomous– PlaceWare, Webex
• Both– T 120– Integrate messenger and PlaceWare?
213
Open vs. Closed Session Management
• Closed Session Management
– Policies bound (PlaceWare)
• Name vs. Invitation
• Implicit vs. explicit
• UI
• Open Session Management
– Multiple policies can be implemented (T 120, SIP, Roseman & Greenberg ’92) using an API
– Defaults may be provided (Roseman & Greenberg ‘92)
214
API for 2-Party Invites
[Sequence diagram: User Agent A, User Agent B; messages: invite A, accept, bye]
• SIP Model
• N-party?
• Name-based?
215
2-Party, Autonomous
[Sequence diagram: User Agent A, User Agent B, Conference Agent; messages: create X, X created, Join X, Join X B OK?, Join X B OK, Joined X B]
• GroupKit
• +create X OK?
• +create X OK
• Delete like Create
• Bye terminates
216
N-Party, Autonomous
[Sequence diagram: User Agent A, User Agent B, User Agent C, Conference Agent; messages: Join X C, Join X C OK?, Join X C OK, Joined X C]
• GroupKit, T 120
• Leave like Join
– Event may not be broadcast as leaver can do so (T 120)
• + LastUserLeft event
217
N-Party, Autonomous and Invitation-based
[Sequence diagram: User Agent A, User Agent B, User Agent C, Conference Agent; messages: Invite X C, Accept X C, Join X C OK?, Join X C OK, Joined X C]
218
Example GroupKit Policies
• Open registration
– Anyone can invite
– Conference persists after last user
• Centrally facilitated
– Only convener can invite
• Room-based session management
– Anyone can join name (room)
219
Performance Problems
• Operations are heavyweight
– Require OK? and success events sent to each user
– Joining expensive in T 120
• Could use publish/subscribe
– Build n-party, name-based on top of SIP publish/subscribe and invite/accept/delete model?
– Mobility supported
– Need extra (conference) argument to invite
220
Improving Programming
[Sequence diagram: User Agent A, User Agent B, User Agent C, Conference Agent; messages: Invite X C, Accept X C, Join X C OK?, Join X C OK, Joined X C]
• Shared data type
• Success events generated on update to it
– Joined X, C
• GroupKit
221
Session-Aware Applications
• Applications may want session events
– To display information
– To create (centralized or replicated) application session possibly involving multicast channels
– To exchange capabilities (interoperation)
• Each app on a site can subscribe directly from conference agent (GroupKit)
– Multiple events sent to a node
• Each app subscribes from user agent (T 120)
– IPC latency
– User agent implements conference agent interface
222
Improving Session Access Control
• Create, delete, leave, join protected through events
• Could also protect add/delete application
– Add/delete app OK?, OK and Success
• Protect discovery of conferences
– Listed attribute in T 120
• Protect query of conference information
– PlaceWare
• “Lock”/“Unlock” conference (T 120)
– Allow/disallow more joins
– Set user limit (PlaceWare)
• Protect how late users can join (PlaceWare)
223
Improving Access Control
• Can support ACLs and passwords
– Password protected attribute and extra join parameter (T 120, PlaceWare)
– ACL parameter (PlaceWare)
– More efficient but earlier binding than interactive OK? events.
• Regular, interactive, and optimistic access control
– Tech fest demo
• Can protect groups of conferences together
– As files in a directory
– PlaceWare place is group of conferences similarly protected
• Can specify groups of users
– PlaceWare
224
Session vs. Application Access Control
Session access control:
• Controls session operations
  – Create, delete conference
  – Join, leave user
  – Add, remove app
  – Query…
• Indirectly provides coarse-grained application access
  – If cannot join, cannot use applications
• May want to prevent joins for performance rather than security reasons

Application access control:
• Controls interaction with applications
  – Presenter vs. audience privileges (PlaceWare & Webex)
  – Telepointer editable only by creator (T.120, PlaceWare, GroupKit, NetMeeting Whiteboard, Webex)
• Access denied for authorization rather than performance reasons
225
Shared Layer & Application Access Control
• Higher-level sharing implies finer-granularity access
  – Screen sharing: protected operation is providing input
  – Window sharing
    • Display window
    • Input in window
    • Add to NetMeeting to support digital rights?
  – PPT sharing
    • Change shared slide vs. change private slide (Webex)
• In many cases screen sharing is enough
  – PlaceWare PPT sharing: audience vs. presenter equivalent to providing input control
• App-specific controls may be needed
226
Operation-specific access control
• Allow each operation to determine who can execute it
  – Dewan and Choudhary '91 and Groove
• Operation can query environment for user
  – PlaceWare
    • Operation is a remote procedure call
    • Caller identity automatically added as an extra argument
    • Integrated with RPC proxy generation
    • Add such a facility to Indigo?
• Dewan and Shen '92
  – Can build app-specific access control without access awareness
  – Extends the notion of generic "file rights" to generic "tree-based collaboration rights"
  – Assumes system intercepts operation before it is executed
  – Would apply to XAF-like tree model
227
Meta Access Control
• Who sets the access privileges?
• Convener
  – PlaceWare
  – T.120
• Group ownership, delegation, etc.
  – Dewan & Shen '96
228
Access vs. Concurrency Control
• Access control
  – Controls whether a user is authorized to execute an operation
• Concurrency control
  – Controls whether authorized users' actions conflict with others' and schedules conflicting actions
• Can share a common mechanism for preventing an operation from being executed
• In T.120 window sharing, the UI can be identical
  – Only one user allowed to enter input
  – UI allows mediator to give application control to users
  – Control passed to a user because of AC or CC
229
Shared Layer & Concurrency Control
• Higher-level sharing implies more concurrency
  – Screen sharing
    • Cannot distinguish between different kinds of input
    • Multiple input events make an operation
    • Must prevent concurrency
  – Window sharing (add to NetMeeting and PlaceWare?)
    • Can allow concurrent input in multiple windows
    • Probably will not conflict
      – Same Word document in multiple windows
  – Whiteboard
    • Can allow concurrent editing of different objects
    • Probably will not conflict
      – Object and connecting line
• App-specific concurrency control may be needed
230
Pessimistic vs. Optimistic CC
• Two alternatives to serializable transactions
• Pessimistic
  – Prevent conflicting operation before it is executed
  – Implies locks and possibly remote checking
• Optimistic
  – Abort conflicting operation after it executes
  – Involves replication, checkpointing/compensating transactions
  – Not actually implemented in collaborative systems
    • Aborting user (vs. programmed) transactions not acceptable
  – Merge and optimistic-locking variations
231
Merging
• Like optimistic
  – Allow operation to execute without local checks
• But no aborts
  – Merge conflicting operations
  – E.g. insert 1,a || insert 2,b = (insert 1,a; insert 3,b) || (insert 2,b; insert 1,a)
• Serializability not guaranteed
  – Strange results possible
  – E.g. concurrent dragging of an object in PlaceWare whiteboard
• App-specific
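The insert example above is the classic operation-transformation case: each site applies its local insert immediately and shifts a concurrent remote insert past it. The functions below, including the site-id tie-break, are an illustrative sketch rather than any system's actual algorithm.

```c
#include <assert.h>
#include <string.h>

/* Shift a concurrent remote insert past a local insert that has already
   been applied locally; equal positions are broken by site id, an
   assumption needed so both sites break the tie the same way. */
int transform_insert(int remote_pos, int local_pos,
                     int remote_site, int local_site) {
    if (local_pos < remote_pos ||
        (local_pos == remote_pos && local_site < remote_site))
        return remote_pos + 1;
    return remote_pos;
}

/* Insert character c at 1-based position pos of s (s must have room). */
void apply_insert(char *s, int pos, char c) {
    memmove(s + pos, s + pos - 1, strlen(s) - pos + 2);
    s[pos - 1] = c;
}
```

Starting from "abc" at both sites, site 1 applies its insert d,1 and the transformed remote insert e,3, while site 2 applies e,2 and then d,1 unchanged; both converge to "daebc", matching the slide's transposed-operation example.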
232
App-Specific Merging
• Text-editor specific
  – Sun '89, …
• Tree-editor specific
  – ECSCW '03
  – Apply to XAF and Office apps?
• Programmer writes merge procedures
  – Per file in Coda (Kistler and Satya '92)
  – Per object in Rover (Joseph et al '95, & PlaceWare)
  – Per relation in Bayou and Longhorn WinFS (Terry et al '95, [email protected])
• Programmer creates merge specifications (Munson & Dewan '94)
  – Object decomposed into properties
  – Properties merged according to merge matrix
  – Less flexible but easier to use
  – Accommodates all existing policies
  – Implement in C# objects?
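A property-level merge in the spirit of merge matrices can be sketched as a three-way merge per property; the policy names and integer-valued properties below are assumptions for illustration, not Munson and Dewan's actual matrix entries.

```c
#include <assert.h>

/* Policy to apply when both sides changed the same property; in a real
   merge matrix there is one cell per (local-change, remote-change) pair. */
typedef enum { KEEP_LOCAL, KEEP_REMOTE, KEEP_BASE } MergePolicy;

/* Three-way merge of one property: a side that did not change the
   property always yields to the side that did; only a double change
   consults the matrix cell. */
int merge_property(int base, int local, int remote, MergePolicy p) {
    if (local == base) return remote;   /* only remote changed (or neither) */
    if (remote == base) return local;   /* only local changed */
    switch (p) {                        /* both changed: use the policy */
        case KEEP_LOCAL:  return local;
        case KEEP_REMOTE: return remote;
        default:          return base;
    }
}
```

Decomposing an object into properties and running this per property is what makes the specification approach less flexible than hand-written merge procedures but much easier to state.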
233
Synchronous vs. Asynchronous Merge
• Synchronous
  – Efficient
  – Less work to destroy
  – Can accommodate simple-minded merge
  – Replicated operation transformation
• Asynchronous
  – Opposite
  – Centralized; merge procedures and matrices
• Faster computers allow complex synchronous merging
  – Centralized merge matrix
• Merging of drawing operations still an issue
234
Merging vs. Locking
• Merging requires replication
  – With its drawbacks and advantages
• Requires high-level local operations
  – Cannot work with replicated window-based systems
• Some conflicts cannot be merged
  – Require an interactive phase
• But: no lock delays, more concurrency, and disconnected (asynchronous) interaction
235
Response time for locks
• Central lock information
  – Well-known site knows who has locks
  – Delay in contacting the site
• Distributed lock information (T.120)
  – Lock information sent to all sites
  – More traffic but less delay
• Still delay in getting lock from current holder
236
Optimistic Locking
• Greenberg et al '94
  – In general, remote checking can take time
  – Allow operation as in optimistic CC until lock response received
  – At that point continue operation or abort
• Abort damage potentially small
• Office 12 scenarios
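The optimistic-locking idea can be sketched with a single integer document and an explicit lock-reply callback; the states and API below are illustrative assumptions, not Greenberg et al.'s actual interface.

```c
#include <assert.h>

/* Lock request lifecycle: editing is permitted while the request is
   PENDING; a denial rolls back to the value saved at request time. */
typedef enum { PENDING, GRANTED, DENIED } LockState;

typedef struct { LockState lock; int value, saved; } Doc;

/* Start editing optimistically: remember the value to restore on abort. */
void begin_optimistic(Doc *d) { d->lock = PENDING; d->saved = d->value; }

/* Tentative edit: allowed unless the lock was denied. Returns 1 if applied. */
int edit(Doc *d, int v) {
    if (d->lock == DENIED) return 0;
    d->value = v;
    return 1;
}

/* Remote lock response arrives: keep the edits or abort them. */
void lock_reply(Doc *d, int granted) {
    d->lock = granted ? GRANTED : DENIED;
    if (!granted) d->value = d->saved;  /* abort: the damage window is small */
}
```

The abort damage stays small because only the edits made between the request and the (usually quick) denial are discarded.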
237
Floor Control
• Host only (T.120)
  – Person hosting app has control
  – Usually convener
• Mediated (T.120)
  – Anyone can request floor
  – One or more of the other users have to agree (especially the current floor holder)
  – Can pass control to another, if the latter accepts
• Facilitated
  – Facilitator distributes floor (PlaceWare)
  – Special case of mediated where floor is passed through the facilitator
• Unconditional grant (T.120)
  – Anyone can take current floor by clicking
  – Special case of mediated where no user has to agree
• End-user negotiation to decide on policy interactively (T.120)
  – Interoperability solution works
• API to set and implement policy programmatically (T.120)
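Two of the policies above, mediated and unconditional grant, can be sketched as a tiny floor-control state machine; the structure and function names are assumptions for illustration, not the T.120 API.

```c
#include <assert.h>

typedef enum { MEDIATED, UNCONDITIONAL } FloorPolicy;

typedef struct { int holder; FloorPolicy policy; } Floor;

/* Attempt to take the floor; returns 1 if the requester now holds it.
   Under MEDIATED the current holder must approve; under UNCONDITIONAL
   anyone may take the floor by asking (the "click" in the slide). */
int request_floor(Floor *f, int requester, int holder_approves) {
    if (f->policy == UNCONDITIONAL || f->holder == requester ||
        holder_approves) {
        f->holder = requester;
        return 1;
    }
    return 0;
}
```

Host-only and facilitated control fall out of the same shape: they restrict who may play the approving role rather than changing the transition itself.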
238
Fine-Grained Concurrency Control
• Provide an API (T.120)
  – Allocate/deallocate token
  – Test
  – Grab exclusively/non-exclusively
  – Release
  – Request/give token
• Munson and Dewan '96
  – Lock hierarchical object properties
  – Associate lock tables with properties
  – Hierarchical locking
• Office 12 scenarios use fine-grained locks
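Hierarchical locking of object properties can be sketched with path-based conflict detection, where an exclusive lock on a node conflicts with locks on its ancestors or descendants; the slash-separated property paths are an assumption for illustration.

```c
#include <assert.h>
#include <string.h>

/* True if path a names b or an ancestor of b, e.g. "/doc" vs "/doc/sec1".
   The component check (b[n] is '\0' or '/') stops "/doc/sec" from
   matching "/doc/sec1". */
int is_prefix(const char *a, const char *b) {
    size_t n = strlen(a);
    return strncmp(a, b, n) == 0 && (b[n] == '\0' || b[n] == '/');
}

/* Two exclusive locks conflict iff one path is an ancestor (or equal)
   of the other; locks on sibling subtrees proceed concurrently. */
int conflicts(const char *held, const char *requested) {
    return is_prefix(held, requested) || is_prefix(requested, held);
}
```

This is what lets two users edit different sections of the same document under fine-grained locks while still blocking a whole-document lock against either.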
239
Interoperability
• Cannot make assumptions about remote sites
• Important in collaboration because one non-conforming site can prevent adoption of collaboration technology
• Devise “standard” protocols for various collaboration aspects to which specific protocols can be translated
240
Examples of Collaboration Aspects
• Codecs in media (SIP)
• Window/frame-based sharing
  – Caching capability for bitmaps, colormaps
  – Graphics operations supported
  – Bits per pixel
  – Virtual desktop size
241
Layer and Standard Protocols
• Easier to agree on a lower-level layer
  – Every computer has a framebuffer with similar properties
• Windows are less standard
  – WinCE and Windows not the same
• Toolkits even less so
  – Java Swing and AWT
• Data in different languages and types
– Interoperation very difficult
242
Data Standard
• Web Services
  – Everyone converts to it
• Object properties based on patterns translated to Web services?
• XAF
243
Multiple Standards
• More than one standard can exist
  – With different functionality/performance
• How to negotiate?
• Same techniques can be used to negotiate user policies
  – E.g. which form of concurrency control or coupling
244
Enumeration/Selection Approach
• One party proposes a series of protocols
  – m=audio 4006 RTP/AVP 0 4
  – a=rtpmap:0 PCMU/8000
  – a=rtpmap:4 GSM/8000
• Other party picks one of them
  – m=audio 4006 RTP/AVP 0 4
  – a=rtpmap:4 GSM/8000
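The offer/answer selection above can be sketched as picking the first offered codec the answerer also supports; the payload-type numbers follow the slide's SDP example, but the function itself is an illustrative assumption.

```c
#include <assert.h>

/* Return the first codec in the offered list (in offer preference order)
   that the answerer also supports, or -1 if there is no common codec. */
int select_codec(const int *offered, int n_off,
                 const int *supported, int n_sup) {
    for (int i = 0; i < n_off; i++)
        for (int j = 0; j < n_sup; j++)
            if (offered[i] == supported[j])
                return offered[i];
    return -1;
}
```

With the offer "0 4" (PCMU, GSM) and an answerer that only supports payload type 4, GSM is selected, as in the slide.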
245
Extending to multiple parties
• One party proposes a series of protocols
• Others respond with subsets supported
• Proposing party picks some value in the intersection
• Multiple rounds of negotiation
246
Single-Round
• Assume alternative protocols can be ordered into levels, where support for a protocol at level l indicates support for all protocols at levels less than l
• Broadcast level and pick min of values received
247
Capability Negotiation
• Protocol not named but a function of capabilities
  – Set of drawing operations supported
• Increasing levels can represent increasing capability sets
  – Sets of drawing operations
• Increasing levels can represent increasing capability values
  – Max virtual desktop size
248
Uniform Local Algorithm
• Apply the same local algorithm at all sites to choose a level and hence the associated "collapsed" capability set
  – Min
    • Of capability sets implies an AND
    • Bits per pixel, drawing operation sets
  – Max
    • Of boolean values implies an OR
    • Virtual desktop size
  – Something else based on number and identity of sites supporting each level
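A minimal sketch of the uniform local algorithm, assuming levels are plain integers: every site computes the same collapse function over the broadcast levels, so all sites pick the same level without further rounds.

```c
#include <assert.h>

/* Min collapse: the AND-like choice, e.g. bits per pixel or a capability
   set where higher levels are supersets. */
int collapse_min(const int *levels, int n) {
    int m = levels[0];
    for (int i = 1; i < n; i++)
        if (levels[i] < m) m = levels[i];
    return m;
}

/* Max collapse: the OR-like choice, e.g. over boolean capabilities or
   virtual desktop size. */
int collapse_max(const int *levels, int n) {
    int m = levels[0];
    for (int i = 1; i < n; i++)
        if (levels[i] > m) m = levels[i];
    return m;
}
```

Because every site sees the same multiset of levels and applies the same deterministic function, agreement is reached in a single broadcast round.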
249
UI Policy Negotiation
• Can use same mechanism for UI policy negotiation
• Examples
  – Unconditional-grant floor control: in T.120, each node can say
    • Yes
    • No
    • Ask Floor Controller
    • Yes < Ask Floor Controller < No
    • Use min of this for least permissive control
  – Sharing control: in many systems each node can say
    • Share scrollbar
    • Not share < share
    • Use min for least permissive sharing
250
Office Apps
• Multiple versions of office apps exist
• Use similar scheme for negotiating capabilities of office apps in a conference
  – pdf capability < viewer < full app
  – Office 10 < Office 11 < Office 12
251
Conversion to Standard Protocols
• May need to convert a richer protocol to a "lowest common denominator" protocol with lesser capabilities
• Also may not wish to lose capabilities to the lowest common denominator, and instead do per-site conversion
• Drawing operation to bitmap in T.120
• Fine-grained locks to floor control in Dewan and Sharma '91
252
Basic Firewall
• Limit network communication to and from protected sites
• Do not allow other sites to initiate connections to protected sites
• Protected sites initiate connections through proxies that can be closed if there are problems
• Can get back results
  – Bidirectional writes
  – Call/reply

[Diagram: a protected site opens a connection through an unprotected proxy to a communicating site; sends and calls flow out, replies flow back.]
253
Protocol-based Firewall
• May be restricted to certain protocols
  – HTTP
  – SIP

[Diagram: a protected site opens HTTP and SIP connections through an unprotected proxy to a communicating site; calls go out and replies return over the restricted protocols.]
254
Firewalls and Service Access
• User/client at protected site
• Service at unprotected site
• Communication and dataflow initiated by protected client site
  – Can result in transfer of data to client and/or server
• If no restriction on protocol, use regular RPC
• If only HTTP provided, make RPC over HTTP
  – Web services/SOAP model

[Diagram: a protected user opens a connection through an unprotected proxy to an unprotected service; calls and replies travel as RPC or RPC over HTTP.]
255
Firewalls and Collaboration
• Communicating sites may all be protected
• How do we allow opens to a protected user?

[Diagram: one protected user attempts to open a connection to another protected user.]
256
Firewalls and collaboration
• Session-based forwarder
  – Protected site opens connection to a forwarder site outside the firewall for the session duration
  – Communicating site also opens connection to the forwarder site
  – Forwarder site relays messages to the protected site
• Works well if unrestricted write/write allowed and used
• How to support RPC and higher-level protocols in both directions?
• What if restricted protocol?

[Diagram: two protected users each open (and later close) a connection to an unprotected forwarder, which relays sends between them.]
257
RPC in both directions
• Would like RPC to be invoked by and on the protected site (via forwarder)
• In two one-way RPCs
  – Proxies generated separately
  – Create independent channels opened by each party
  – Implies forwarder opens connection to protected site
• PlaceWare two-way RPC
  – Proxies generated together
  – Use single channel opened by (client) protected site
258
Restricted Protocol
• If only a restricted protocol is allowed, then communication goes on top of it, as in the service solution
• Adds overhead
• Groove and PlaceWare try unrestricted first and then HTTP
259
Restricted protocols and data to protected site
• HTTP does not allow data flow to be initiated by the unprotected site
• Polling
  – Semi-synchronous collaboration
• Blocked gets (PlaceWare)
  – Blocked server calls in general in one-way call model
  – Must refresh after timeouts
• SIP for MVC model
  – Model sends small notifications via SIP
  – Client makes call to get larger data
  – RPC over SIP?
260
Firewall-unaware clients
• Would like to isolate specific apps from worrying about protocol choice and translation
• PlaceWare provides RPC
  – Can go over HTTP or not
• Groove components use SOAP to communicate
  – Apps do not communicate directly – just use shared objects and don't define new ones
• UNC system provides standard property-based notifications to the programmer and allows them to be delivered as:
  – RMI
  – Web service
  – SIP
  – Blocked gets
  – Protected-site polling
261
Forwarder & Latency
• Adds latency
  – Can have multiple forwarders bound to different areas (Webex)
  – Adaptive based on firewall detection (Groove)
    • Try to open directly first
    • If that fails because of a firewall, open system-provided forwarder
    • Asymmetric communication possible
      – Messages from user go through forwarder
      – Messages to user go directly
  – Groove is also a service-based model!
  – PlaceWare always has latency and it shows

[Diagram: two protected users communicating via an unprotected forwarder.]
262
Forwarder & Congestion Control
• Breaks congestion-control algorithms
  – The congestion the algorithms see between a protected site and the forwarder may differ from the end-to-end congestion
  – T.120-like end-to-end congestion control relevant

[Diagram: two protected users connected through an unprotected forwarder, with different congestion on each leg.]
263
Forwarder + Multicaster
• Forwarder can multicast to other users on behalf of the sending user
• Separation of application processing and distribution
  – Supported by PlaceWare, Webex
• Reduces messages on the link to the forwarder
• Separate multicaster useful even if no firewalls
• Forwarder can be a much more powerful machine
  – T.120 provides a multicaster without the firewall solution
• Forwarder can be connected to higher-speed networks
  – In Groove, if a (possibly unprotected) user is connected via a slow network, a single message is sent to the forwarder, which then multicasts it
• May need a hierarchy of multicasters (T.120), especially for dumb-bell topologies

[Diagram: three protected users connected to an unprotected forwarder + multicaster.]
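The fan-out performed by the forwarder can be sketched as below; the Session structure and single-slot inboxes are simplifying assumptions standing in for real connections.

```c
#include <assert.h>

#define MAX_USERS 8

/* One slot per participant holding the last message delivered to it. */
typedef struct { int inbox[MAX_USERS]; int n; } Session;

/* The sender transmits one copy to the forwarder, which delivers it to
   every other participant; the sender's (possibly slow) link therefore
   carries one message instead of n-1. */
void forward(Session *s, int sender, int msg) {
    for (int i = 0; i < s->n; i++)
        if (i != sender)
            s->inbox[i] = msg;
}
```

A hierarchy of multicasters repeats this step per level, which is how the dumb-bell case avoids sending duplicate copies over the narrow middle link.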
264
Forwarder + State Loader
• Forwarder can also maintain state in terms of object attributes
• Slow and latecomer sites pull state asynchronously from the state loader
  – Avoids message from forwarder to protected site containing state
  – Alternative to multicast
  – Extra message to forwarder for pulling adds latency and traffic
  – Each site pulls at its consumption rate
• Works for MVC-like I/O models
  – VNC: framebuffer rectangles
  – PlaceWare: PPT slides
  – Chung & Dewan '98: log of arbitrary input/output events converted to object state
• Useful even if no firewalls
• Goes against stateless-server idea
  – State should be an optimization

[Diagram: three protected users read state from an unprotected forwarder + state loader.]
265
Forwarder + multicaster + state loader
• Multicaster for rapidly changing information
• State loader for slower-changing information
• Solution adopted in PlaceWare
  – Multicast for window sharing
  – State loading for PPT slides
• VNC results show pull model works for window sharing
• Greenberg '02 shows pull model works for video

[Diagram: three protected users both send to and read state from an unprotected forwarder + multicaster + state loader.]
266
Composability
• Collaboration infrastructure must perform several tasks
  – Session management
  – Set up (centralized/replicated/hybrid) architecture
  – I/O distribution
    • Filtering
    • Multipoint communication
    • Latecomer and slow-user accommodation
  – Access and concurrency control
  – Firewall traversal
• Multiple ways to perform each of these functions
• Implement separate functions in different composable modules
• Difficult because they must work with each other
267
T.120 Composable Architecture
• Multicast layer
  – Multicast + tokens
• Session management
  – Session operations + capability negotiation
• Application template
  – Standard structure of a client of multicast + session management
• Window-based sharing
  – Centralized architecture for window sharing
  – Uses session management + multicast
• Whiteboard
  – Replicated or centralized whiteboard sharing
  – Uses session management + multicast
268
T.120 Layers

[Diagram: the T.120 infrastructure recommendations. User applications (using standard and/or non-standard application protocols) and a Node Controller sit above standard Application Protocol Entities (Rec. T.126 SI, Rec. T.127 MBFT) and non-standard Application Protocol Entities, which in turn use Generic Conference Control (GCC, Rec. T.124), the Multipoint Communication Service (MCS, Rec. T.122/T.125), and network-specific transport protocols (Rec. T.123).]
269
Composability Advantages
• Can use only the needed components
  – Just the multicast channel
• Can substitute layers
  – Different multicast implementation
• Orthogonality
  – Level of sharing not bound to multicast
  – Architecture not bound to multicast
270
Composability Disadvantages
• May have to do much more work
• T.120 component model
  – Create application protocol entity and relate it to the actual application
  – Create/join multicast channel
• Suite & PlaceWare monolithic model
  – Instantiating an application automatically performs the above tasks
271
Combining Advantages
• Provide high-level abstractions representing popular ways of interacting with subsets of these components
• E.g. implementing an APE for Java applets
272
Improving T.120 Componentization
• Add object abstraction on top of application protocol entity
• Web service?
• Object with properties?
273
Improving T.120 Componentization
• Separate input and output sharing
• Some nodes will be input-only
  – E.g. PDAs sharing a projected presentation
274
Using Mobile Computer for Input

[Diagram: one program with a main UI and additional UIs on mobile computers.]

Use mobile computers for input (e.g. polls)
275
Summary
• Multiple policies for
  – Architecture
  – Session management
  – Coupling, concurrency, access control
  – Interoperability
  – Firewalls
  – Componentization
• Existing systems such as Groove, PlaceWare, NetMeeting are not that different, sharing many policies
• Pros and cons of each policy
• Flexible system possible
276
Recommendations: window sharing
• Centralized window sharing
  – Remove expose coupling in window sharing
  – Add window-based access and concurrency control
  – Provide multi-party sharing, through firewalls, without extra latency
• Investigate replicated window sharing
  – Will go through firewalls because of low bandwidth
277
Recommendations: model sharing
• Decouple architecture and data sharing
  – Use delegation-based model
• Provide a replicated type for XAF tree model
• Use property-based sharing to share collaboration-unaware C# objects
278
Recommendations: Multi-Layer Sharing
• Allow users to choose the level of sharing
  – Transparently change system (NetMeeting, PlaceWare)
  – Provide layer-neutral sharing
• Allow users to select the architecture, possibly dynamically
  – From peer-to-peer to server-based to service-based, depending on single collaborator, local multiple collaborators, and remote collaborators
279
Recommendations: experiments
• Need more experimental data
  – Sharing different layers
  – Centralized, replicated, and hybrid architectures
• Need benchmarks
  – MSR usage scenarios?
280
Recommendations: Communication
• Use standard Indigo layer, with modifications
  – Sending data to protected site
    • Use SIP
    • Provide PlaceWare 2-way RPC
  – Access-aware methods
• Add M-RPC
  – Build over multicast
281
Recommendations: Communication and componentization
• Have a separate stream multicast for language neutrality and lightweightness
• Need M-RPC so it can be mapped to above layer
282
Recommendations: Coupling
• Add externally configurable filtering component to determine what, when, and who.
283
Recommendations: Concurrency Control
• Support
  – Various kinds of floor control
  – Fine-grained token-based control
  – Optimistic and regular locks
  – Property-based locking on top
  – Property-based merging of arbitrary C# types
284
Recommendations: Session Management
• Build N-party session management on top of SIP to get mobility
• Support
  – Implicit and explicit
  – Name-based and invitation-based
285
Recommendations: Custom Collaborative Applications
• Model sharing in existing office applications
• Use capability negotiation
• Create shared object type for XAF
286
Recommendations: Composability
• Extend T.120 component model with
  – Replicated types
  – M-RPC
  – SIP features
287
Recommendation
• Lots of research in this area
• Use input from research when deciding on new products
288
THE END (The rest are extra slides)
289
Partial Sharing

[Diagram: two partially shared views, with uncoupled and coupled parts labeled.]
290
Merging vs. Concurrency Control
• Real-time merging is called optimistic concurrency control
  – A misnomer because it does not support serializability
• Related because concurrency control prevents the problem merging tries to fix
  – Collaboration awareness needed
  – User intention may be violated
  – Correctness vs. latency tradeoff
• CC may be
  – Floor control: e.g. NetMeeting app sharing
  – Fine-grained: e.g. NetMeeting whiteboard
    • Selecting an object implicitly locks it
• Approach being used in design of some office apps
291
Evaluating Shared Layer and Architecture
• Mixed centralized-replicated architecture
• Pros and cons of layering choice
• Pros and cons of architecture choice
• Should implement entire space rather than single points
  – Multiple points
    • NetMeeting App Sharing, NetMeeting Whiteboard, PlaceWare, Groove
  – Reusable code
    • T.120
    • Chung and Dewan '01
292

Centralized Architecture

[Diagram: User 1 runs the program and its UI; an output broadcaster & I/O relayer connects to I/O relayers and UIs at Users 2 and 3.]
293

Replicated Architecture

[Diagram: Users 1, 2, and 3 each run a program replica with its own UI; an input broadcaster at each replica distributes input to the others.]
294
Limitations
• In OO system must create new types for sharing
  – No reuse of existing single-user types
  – E.g. in Munson & Dewan '94, ReplicatedVector
• Architecture flexibility
  – PlaceWare bound to central architecture
  – Replicas in client and server of different types, e.g. VectorClient & VectorServer
• Abstraction flexibility
  – Set of types whose replication is supported by infrastructure automatically
  – Programmer-defined types not automatically supported
• Sharing flexibility
  – Who and when coupled burnt into shared value
• Single language assumed
  – Interoperability of structured types very difficult
  – XML-based solution needed
295
Translating Language Calls to SOAP
• Semi-automatic translation from Java & C# exists
• Bean objects automatically translated
• Other objects must be translated manually
• Could use pattern and property-based approach to do the translation (Roussev & Dewan '00)
296
Property-based Notifications
• Assume protected site gets notified (small amount of data) and then pulls data in response, à la MVC
• Provide standard property-based notifications to programmer
• Communicate them using
  – RMI
  – Web service
  – SIP
  – Blocked gets
  – Protected-site polling
• Semi-synchronous collaboration
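The notify-then-pull pattern above can be sketched as follows; the Model and Property types and the function names are assumptions for illustration, not the UNC system's API.

```c
#include <assert.h>
#include <string.h>

typedef struct { char name[16]; int value; } Property;

typedef struct {
    Property props[4];
    int n;
    char last_changed[16];   /* the small notification payload */
} Model;

/* Model update: change the property and record its name, which is all
   the notification carries; the client pulls the value separately. */
void model_set(Model *m, const char *name, int v) {
    for (int i = 0; i < m->n; i++)
        if (strcmp(m->props[i].name, name) == 0) {
            m->props[i].value = v;
            strcpy(m->last_changed, name);
            return;
        }
}

/* Protected client: pull the data named by a notification. */
int client_pull(const Model *m, const char *name) {
    for (int i = 0; i < m->n; i++)
        if (strcmp(m->props[i].name, name) == 0)
            return m->props[i].value;
    return -1;
}
```

Keeping the notification small is what makes it cheap to send over SIP or a blocked get, while the larger pull happens over whatever protocol the firewall permits.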
297
Shared Layer Conclusion
• Infrastructure should support as many shared layers as possible
• NetMeeting/T.120
  – Desktop sharing
  – Window sharing
  – Data sharing (at high cost)
• PlaceWare
  – Data sharing (at low cost)
• Should and can support a larger set of layers at low cost (Chung and Dewan '01)
298
Classifying Previous Work
• Shared layer
  – X Windows (XTV)
  – Microsoft Windows (NetMeeting App Sharing)
  – VNC framebuffer (Shared VNC)
  – AWT widget (Habanero, JCE)
  – Data (Suite, Groove, PlaceWare)
• Replicated vs. centralized
  – Centralized (XTV, Shared VNC, NetMeeting App Sharing, Suite, PlaceWare)
  – Replicated (VConf, Habanero, JCE, Groove, NetMeeting Whiteboard)

[Diagram: systems classified along two axes, replicated vs. centralized and shared layer.]
299
Suite Text Editor
300
Suite Text Editor Type
/*dmc Editable String */
String text = "hello world";

Load () {
  Dm_Submit (&text, "Text", "String");
  Dm_Engage ("Text");
}
301
Multiuser Outline
302
Outline Type
/*dmc Editable Outline */
typedef struct {
  unsigned num;
  struct section *sec_arr;
} SubSection;

typedef struct section {
  String Name;
  String Contents;
  SubSection Subsections;
} Section;

typedef struct {
  unsigned num;
  Section *sec_arr;
} Outline;

Outline outline;

Load () {
  Dm_Submit (&outline, "Outline", "Outline");
  Dm_Engage ("Outline");
}
303
Talk
304
Talk Program

/*dmc Editable String */
String UserA = "", UserB = "";
int talkers = 0;

Load () {
  if (talkers < 2) {
    talkers++;
    Dm_Submit (&UserA, "UserA", "String");
    Dm_Submit (&UserB, "UserB", "String");
    if (talkers == 1)
      Dm_SetAttr ("View: UserB", AttrReadOnly, 1);
    else
      Dm_SetAttr ("View: UserA", AttrReadOnly, 1);
    Dm_Engage_Specific ("UserA", "UserA", "Text");
    Dm_Engage_Specific ("UserB", "UserB", "Text");
  }
}
305
N-User IM

/*dmc Editable Outline */
typedef struct {
  unsigned num;
  struct String *message_arr;
} IM_History;

IM_History im_history;
String message;

Load () {
  Dm_Submit (&im_history, "IM History", "IM_History");
  Dm_SetAttribute ("IM History", "ReadOnly", 1);
  Dm_Engage ("IM History");
  Dm_Submit (&message, "Message", "String");
  Dm_Update ("Message", &updateMessage);
  Dm_Engage ("Message");
}

updateMessage (String variable, String new_message) {
  im_history.message_arr[im_history.num++] = new_message;
}
306
Broadcast Methods

[Diagram: Users 1 and 2 each run a model, view, toolkit, and window stack; a broadcast method (bm) invoked on one model is re-executed at the other replica, while local methods (lm) flow between the layers at each site.]
307
Connecting Applications
• Replicas connected when containing applications connected in (collaborative) sessions.
• Collaborative application session created when application is added to a conference.
• Conference created by a convener to which others can join.
• Management of conference and application sessions called conference/session management.
308
Mobility Issues
• Invitee registers current device(s) with system
• System sends invitation to all current devices
• Supported by Groove and SIP
310

Synchronization in Replicated Architecture

[Diagram: both replicas start with "abc". User 1's pseudo-server broadcasts insert d,1 and User 2's broadcasts insert e,2; each site applies its local insert first ("dabc", "aebc") and the remote one second, yielding "deabc" at one site and "daebc" at the other, so the replicas diverge without synchronization.]
311
Comparing the Architectures

[Diagram: three configurations compared. Shared abstraction = model: a central model with replicated views and windows. Shared abstraction (model + view) centralized: one app on Host 1 with pseudo-servers relaying input and I/O to the other window. Shared abstraction (model + view) replicated: an app per site with pseudo-servers relaying input.]