Chapter Seven - Resource Management for Scalability and Performance
An Information-Centric View of Resources
Optimizing the Communications Protocol
• Packet Compression
• Packet Aggregation
Controlling the Visibility of Data
• Area-of-Interest Filtering Subscriptions
• Multicasting
• Hybrid Multicast Aggregation
Taking Advantage of Perceptual Limitations
• Exploiting Level-of-Detail Perception
• Exploiting Temporal Perception
Enhancing the System Architecture
• Server Clusters
• Peer-Server Systems
Conclusion
References
Basic Networked VEs
• Support a limited number of users
• Demand fast networks connecting participating machines
• Require high processing capacity
Large-Scale Net-VEs
• Military war gaming is a prevalent example.
• The military envisions support for up to 100,000 simultaneous entities.
• To support systems of this size, net-VE designers must use new methods of managing resources such as bandwidth and processor capacity.
Improved Interactive Performance
• Reduced processor load results in quicker processing of user actions at the local machine.
• Excess processor capacity allows higher-quality graphics generation for the user.
• Reduced network load can lower transmission latencies caused by network congestion and ensures that shared state information is exchanged more quickly.
• By demanding fewer resources, net-VE software can coexist with other applications on participants' machines.
Net-VE Designer Goal
• Strive to avoid wasting available resources.
• This chapter provides methods for improving net-VE size and performance by reducing bandwidth and processor demands.
Overview
• Explain the close relationship between resources and information requirements in a net-VE.
• Four broad types of resource management:
  - Communication protocol optimization
  - Data flow restriction
  - Leveraging limited user perception
  - Modifying system architecture
Resource Management
Network bandwidth consumption increases as the number of users increases. This growth occurs for three reasons:
• Each additional user must receive the initial net-VE state as well as the updates that other users are receiving.
• Each additional user introduces new updates to the existing shared state.
• Each new user introduces additional shared state to the net-VE.
Resource Management
As more users enter the net-VE, additional processor cycles are required at each existing user's host:
• Each new user introduces more elements that the processor must render.
• Since each user introduces new shared state to the net-VE, the processor must cache this additional state, receive updates to it, and apply those updates to the cache.
• Because each user introduces additional updates, the processor must be prepared to receive and handle the increased volume of updates while supporting increased interactions with the local user.
Resource Management Summary
• New users increase the amount of shared data and the level of interaction in the environment. More network bandwidth is required to maintain the data and disseminate the interactions.
• As more users enter the net-VE, additional processor cycles are required at each existing user's host.
Networked VE Information Principle
The resource utilization of a net-VE is directly related to the amount of information that must be sent and received by each host and how quickly that information must be delivered by the network.
Networked Virtual Environment Information Principle
• Captures the relationship between networking and processing in net-VEs.
• The key to improving scalability and performance.

Resources = M × H × B × T × P

where
• M is the number of messages exchanged
• H is the average number of destination hosts for each message
• B is the bandwidth required for a message
• T is the timeliness with which the packets must be delivered to each destination
• P is the number of processor cycles required to receive and process each message
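The information principle can be treated as a back-of-the-envelope cost model. A minimal sketch (the numbers are purely illustrative, not from the text) shows how an optimization such as aggregation trades one factor against another: halving M while modestly raising B and P can still lower the product.

```python
def net_ve_resources(M, H, B, T, P):
    """Resource demand per the net-VE information principle:
    Resources = M * H * B * T * P."""
    return M * H * B * T * P

# Illustrative comparison: aggregation halves the message count M,
# but each merged message is larger (B) and costlier to process (P).
baseline = net_ve_resources(M=1000, H=50, B=100, T=1.0, P=1.0)
aggregated = net_ve_resources(M=500, H=50, B=150, T=1.0, P=1.2)
print(aggregated < baseline)  # prints True: the product still shrinks
```

The model's value is in exposing these trade-offs explicitly: each technique in this chapter reduces one or more of the five factors, often by deliberately increasing another.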
Optimizing Communications Protocol
• Every network packet incurs a processing penalty at the sender and receiver, and every packet must carry TCP/IP or UDP/IP header information.
• Network packet optimizations:
  - Reduce packet size
  - Reduce the number of packets
• The question: how can we reduce M (number of messages) and B (bandwidth per message), even at the cost of increasing P (processor requirements per packet)?
Optimizing Communications Protocol
Packet Compression and Aggregation
• Compression may be either lossless or lossy:
  - Lossless shrinks the format used to transmit the data but does not affect the information transmitted.
  - Lossy may eliminate some of the information as part of the compression process.
• Compression can be either internal or external:
  - Internal compresses a packet based solely on its own content, not on what has been transmitted previously.
  - External compresses a packet based on what has been transmitted previously.
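Internal lossless compression is the simplest of these categories to demonstrate: each packet is shrunk using only its own content, so the receiver needs no transmission history to decompress it. A minimal sketch using Python's standard `zlib` (the packet payload shown is invented for illustration):

```python
import zlib

# A hypothetical entity-state payload with repetitive field formatting.
packet = b"entity=42;x=100.0;y=100.0;z=0.0;vx=0.0;vy=0.0;vz=0.0"

# Internal: compression depends only on this packet's own bytes.
compressed = zlib.compress(packet)

# Lossless: the receiver recovers the payload exactly.
assert zlib.decompress(compressed) == packet
```

An external scheme, by contrast, would compress this packet relative to a previously transmitted one, which is the approach PICA takes below.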
• The right choice (or combination) of compression algorithms tends to depend on the particular net-VE application. The decision depends on:
  - Frequency of packet updates
  - Content of packets
  - Communication architecture and protocols used for packet distribution
• The choice is typically derived by analyzing traffic from prior executions of the net-VE to detect data duplication and other inefficiencies.
Optimizing Communications Protocol
Protocol Independent Compression Algorithm (PICA) [Van Hook, Calvin, and Miller 1994]
• A lossless, external compression algorithm.
• Used in early versions of the U.S. military's Strategic Theater of War (STOW) program.
• Eliminates redundant state information from successive update packets.
• Used when data transmission is unreliable.
Protocol Independent Compression Algorithm (PICA)
The PICA Compression Module:
The compression module constructs each update packet by selecting the differences between the current reference "snapshot" and the current entity state. When the number of byte differences exceeds a threshold, the PICA compression module transmits a new reference snapshot.
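The snapshot-and-delta idea can be sketched as follows. This is a simplified illustration, not the real PICA wire format: the function names, the threshold value, and the byte-level state representation are all assumptions for the example (real PICA operates on DIS protocol data units).

```python
def diff_against_snapshot(snapshot: bytes, state: bytes):
    """Return (index, new_byte) pairs where the current entity state
    differs from the reference snapshot (equal lengths assumed)."""
    return [(i, state[i]) for i in range(len(state)) if state[i] != snapshot[i]]

THRESHOLD = 8  # hypothetical: max byte differences before resending a snapshot

def build_update(snapshot: bytes, state: bytes):
    """PICA-style sketch: transmit only the byte differences, or a fresh
    reference snapshot once the delta grows past the threshold.
    Returns (packet, new_reference_snapshot)."""
    delta = diff_against_snapshot(snapshot, state)
    if len(delta) > THRESHOLD:
        return ("snapshot", state), state   # new reference snapshot
    return ("delta", delta), snapshot       # keep the existing reference
```

For example, if only one byte of a 16-byte state changes, the packet carries a single (index, byte) pair instead of the full state; once an entity's state has drifted far from the snapshot, sending a new snapshot becomes cheaper than an ever-growing delta.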
Optimizing Communications Protocol
Localized Compression Using Application Gateways
• Compression and decompression usually occur at the source and destination hosts. However, compression only needs to be applied in the areas of the network that have limited bandwidth availability.
• Application gateways compress data at these limited-bandwidth points.
• The Quiescent Entity Service (QES) eliminates repeated update transmissions over low-bandwidth WANs.
• Application gateways detect and filter inactive entities and generate periodic state updates on behalf of inactive entities located on remote LANs.
Optimizing Communications Protocol
Packet Aggregation
• Reduces the number of packets actually transmitted by merging information from multiple packets into a single packet.
• Merging saves network bandwidth by reducing the number of packet headers sent over the network:
  - Each UDP/IP packet includes a 28-byte header; each TCP/IP packet includes a 40-byte header.
  - When two packets are merged, one of those headers is eliminated.
• Aggregation can eliminate as much as 50% of the bandwidth requirements in a net-VE.
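The header savings are easy to quantify. A short sketch (the payload sizes are illustrative) compares sending each update in its own UDP/IP packet against merging them all behind a single header:

```python
UDP_IP_HEADER = 28  # bytes: 20-byte IP header + 8-byte UDP header

def separate_bytes(payload_sizes, header=UDP_IP_HEADER):
    """Bytes on the wire when each payload travels in its own packet."""
    return sum(header + size for size in payload_sizes)

def aggregated_bytes(payload_sizes, header=UDP_IP_HEADER):
    """Bytes on the wire when all payloads share one packet and header."""
    return header + sum(payload_sizes)

# Ten 28-byte updates: headers alone equal the payload, so merging
# away nine of the ten headers cuts bandwidth by 45%.
updates = [28] * 10
saving = 1 - aggregated_bytes(updates) / separate_bytes(updates)
print(f"{saving:.0%}")  # prints 45%
```

The smaller the individual updates relative to the header, the closer the savings approach the 50% figure cited above.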
Optimizing Communications Protocol
Aggregation Tradeoffs and Strategies
• Aggregation may artificially delay the transmission of update packets by waiting until enough packets are available to merge.
• Receivers must therefore rely on stale information longer than they otherwise would.
• Tradeoff: waiting longer to transmit an aggregated packet offers greater potential bandwidth savings (more packets might get merged) but reduces the value of the data (by delaying its transmission).
Optimizing Communications Protocol
Timeout-Based Transmission Policy
• The packet aggregator collects individual packets and transmits them after waiting a fixed period of time.
• Guarantees an upper bound on the delay introduced for any update packet.
• Drawback: during the timeout period, possibly only one entity generates an update, so no aggregation occurs even though the transmitted packet still paid the delay penalty.
Optimizing Communications Protocol
Quorum-Based Transmission Policy
• The packet aggregator merges individual updates until the aggregated packet contains a certain number (a quorum) of updates.
• Guarantees a particular bandwidth and packet-rate reduction.
• The bandwidth reduction is controlled by the number of updates that are merged.
Optimizing Communications Protocol
Drawbacks to the quorum:
• No limitation on how long the entity update is delayed.
• Could be delayed indefinitely while waiting for more updates to merge.
The quorum-based approach can transmit fewer packets than timeout-based transmission, but its delay characteristics are less predictable.
It is possible to merge these two aggregation strategies to achieve a middle-ground solution.
Hybrid Transmission Approach:
• Can adapt to dynamic entity update rates.
• At slow update rates, the timeout prevails, preventing indefinite waiting.
• At fast update rates, the quorum prevails and achieves the target level of update aggregation.
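The hybrid policy can be sketched as follows. This is an illustrative sketch, not an implementation from the text; the class name, parameters, and default values are assumptions. The aggregator flushes a merged packet as soon as either the quorum is met or the timeout on the oldest buffered update expires:

```python
import time

class HybridAggregator:
    """Merges entity updates until either a quorum of updates has been
    collected or a timeout expires, whichever comes first.
    (Illustrative sketch; names and defaults are assumptions.)"""

    def __init__(self, quorum=4, timeout=0.05, send=None):
        self.quorum = quorum          # target number of merged updates
        self.timeout = timeout        # upper bound on added delay (seconds)
        self.send = send or (lambda pkt: None)
        self.buffer = []
        self.first_arrival = None     # arrival time of oldest buffered update

    def add_update(self, update, now=None):
        now = time.monotonic() if now is None else now
        if not self.buffer:
            self.first_arrival = now
        self.buffer.append(update)
        self._maybe_flush(now)

    def tick(self, now=None):
        """Called periodically so the timeout fires even when no new
        updates arrive (the single-update case)."""
        now = time.monotonic() if now is None else now
        self._maybe_flush(now)

    def _maybe_flush(self, now):
        if not self.buffer:
            return
        quorum_met = len(self.buffer) >= self.quorum
        timed_out = now - self.first_arrival >= self.timeout
        if quorum_met or timed_out:
            self.send(list(self.buffer))   # one aggregated packet
            self.buffer.clear()
```

At fast update rates the quorum branch fires first; at slow rates the timeout branch bounds the delay, matching the two bullets above.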
Optimizing Communications Protocol
Aggregation Servers
Servers that collect updates from multiple source hosts and disseminate aggregated update packets using one of the strategies.
• Good for managing inactive entity updates.
Quiescent Entity Service (QES)
Generates aggregated updates on behalf of entities that are dead or just inactive.
Optimizing Communications Protocol
Advantages of Aggregation Servers
• Reduce the need for a single high-power machine by distributing the workload.
• Improve the fault tolerance of the net-VE should a server fail.
• Use of multiple servers can improve the overall performance of the aggregation process.
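A Quiescent Entity Service might look like the sketch below: source hosts hand off entities that have stopped changing, and the service emits one periodic aggregated packet carrying their last known state. The class, method names, and heartbeat interval are assumptions for illustration, not from the text.

```python
class QuiescentEntityService:
    """Sketch of a QES: takes over updates for entities that have gone
    inactive, sending a single aggregated heartbeat packet on their
    behalf. (Names and the heartbeat interval are assumptions.)"""

    def __init__(self, send, heartbeat=30.0):
        self.send = send              # callback that transmits a packet
        self.heartbeat = heartbeat    # seconds between aggregated heartbeats
        self.quiescent = {}           # entity id -> last known state

    def register(self, entity_id, state):
        """A source host hands off an inactive entity to the QES."""
        self.quiescent[entity_id] = state

    def reactivate(self, entity_id):
        """The entity became active again; its host resumes sending."""
        return self.quiescent.pop(entity_id, None)

    def tick(self):
        """Called once per heartbeat interval: one aggregated packet
        carries the last known state of every quiescent entity."""
        if self.quiescent:
            self.send(dict(self.quiescent))
```

Offloading these heartbeats to a server frees source hosts from generating traffic for entities that are not changing.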
Controlling the Visibility of Data
The goal of the net-VE designer should be to send information only to those hosts that really need to receive it.
Aura-Nimbus Information Model
Aura: data should only be available to those entities that are capable of perceiving that information.
Nimbus: data should only be available to those who are interested in that information.
Aura-nimbus has the disadvantage that it does not scale to large numbers of entities.
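A minimal sketch of the aura-nimbus test, assuming both regions are modeled as circles (a common simplification; the function and field names are assumptions): a source's update is delivered to a receiver only when the source's aura intersects the receiver's nimbus.

```python
import math

def circles_intersect(center_a, radius_a, center_b, radius_b):
    """True when two circular regions overlap (or touch)."""
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    return math.hypot(dx, dy) <= radius_a + radius_b

def interested_hosts(source, receivers):
    """Deliver the source's updates only to receivers whose nimbus
    intersects the source's aura. 'source' and each receiver are
    dicts with illustrative keys: pos, aura / nimbus, host."""
    return [r["host"] for r in receivers
            if circles_intersect(source["pos"], source["aura"],
                                 r["pos"], r["nimbus"])]
```

The scaling problem is visible in the sketch: with N entities, every update potentially requires testing against every other entity's region, an O(N²) cost.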
Categories of Data Flow Management:
• Area-of-Interest Filters are explicit data filters provided by each host, allowing the net-VE to perform fine-grained data management and deliver only the information the host needs.
• Multicasting involves taking advantage of subscription-based network routing protocols to restrict the flow of data.
• Subscription-Based Aggregation represents a hybrid between the two approaches, grouping available data into fine-grained "channels" to which destination hosts may subscribe.
Controlling the Visibility of Data
Joint Precision Strike Demonstration (JPSD)
Military net-VE built to train Army tactical commanders.
- Most of the entities are artificially constructed with no human controller.
- Demonstrated with up to 6,000 participating entities on 80 hosts.
- Each host manages subscriptions for all local entities and sends packets to interested clients.
- Data is transmitted using peer-to-peer unicast.
Controlling the Visibility of Data
Intrinsic and Extrinsic Filtering
Intrinsic: inspecting the content of each packet to determine whether it should be delivered to a particular destination host (area-of-interest filtering).
Extrinsic: filtering packets based on external properties, such as the network address to which the packet was transmitted (multicasting).
Controlling the Visibility of Data
Multicasting
A network protocol technique whereby the application sends each packet to a "multicast group" by supplying a special "multicast address" as the destination for the UDP/IP packet.
- The packets are delivered exclusively to those hosts that have subscribed to the multicast group.
- A host must subscribe (Join) to receive and unsubscribe (Leave) to stop receiving.
Relatively efficient compared to broadcast protocols.
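The Join/Leave operations map directly onto the standard IP multicast socket options (`IP_ADD_MEMBERSHIP` / `IP_DROP_MEMBERSHIP`). A sketch in Python; the group address and port are illustrative examples, not values from the text:

```python
import socket
import struct

MCAST_GRP = "239.192.0.7"    # example administratively-scoped address
MCAST_PORT = 5007            # example port

def make_membership_request(group, iface="0.0.0.0"):
    """Builds the ip_mreq structure expected by IP_ADD_MEMBERSHIP /
    IP_DROP_MEMBERSHIP: 4-byte group address + 4-byte interface."""
    return struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton(iface))

def join_group(sock, group):
    """Subscribe (Join): start receiving packets sent to the group."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))

def leave_group(sock, group):
    """Unsubscribe (Leave): stop receiving packets for the group."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP,
                    make_membership_request(group))

def make_receiver(port=MCAST_PORT):
    """UDP socket bound so it can receive group traffic on `port`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    return sock
```

Senders need no membership at all; they simply address UDP packets to the group, and the routing infrastructure delivers them only to joined hosts.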
Multicasting Approaches:
• The Group-per-Entity approach allows each host to receive information about all hosts that lie within its nimbus. Each host executes its subscription filter locally, based on the entities that exist in the net-VE.
• The Group-per-Region approach partitions the VE world into regions and assigns each region to one or more multicast groups. Each entity transmits its data to the groups corresponding to the regions that cover its current location.
• The Hybrid Multicast Aggregation approach strikes a balance between the two. It aims to partition the net-VE data into fine-grained multicast groups while ensuring the groups are not so fine-grained that the approach degenerates into simple unicast: each group must be broad enough to include multiple source entities.
Controlling the Visibility of Data
Projection-Based Multicast Aggregation
A system whereby information is partitioned based on multiple parameter values. These groups are referred to as "projections."
• Each projection is associated with its own multicast group address.
Projection Example:
Suppose each entity in a net-VE has a type and a location, and each projection is specific to particular entity types and locations. One projection might include tanks located between (10,25) and (35,40), while another includes cars located between (85,70) and (110,85).
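The example above can be sketched as a mapping from (type, location cell) to a multicast group address. Everything here is an assumption for illustration: the grid-cell size, the key scheme, and the address range are not from the text.

```python
def projection_key(entity_type, x, y, cell=25):
    """Maps an update onto a projection: entity type plus the grid
    cell containing its location (cell size is illustrative)."""
    return (entity_type, x // cell, y // cell)

class ProjectionTable:
    """Assigns each projection its own multicast group address.
    Addresses are allocated lazily from an illustrative range."""

    def __init__(self, base="239.193.%d.%d"):
        self.base = base
        self.groups = {}            # projection key -> group address

    def group_for(self, key):
        if key not in self.groups:
            n = len(self.groups)    # next unused address in the range
            self.groups[key] = self.base % (n // 256, n % 256)
        return self.groups[key]
```

A receiver interested only in, say, tanks near its own position subscribes to the handful of groups whose keys match, and never sees traffic for distant cars.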
Controlling the Visibility of Data
Projection Aggregation
Instead of sending data directly to the projection's multicast group, source hosts can send data to a projection aggregation server.
• Each server is responsible for managing one or more projections, collecting data and sending aggregated packets that contain multiple updates.
Advantage: less network bandwidth consumed.
Taking Advantage of Perceptual Limitations
Limitations inherent to a human user
• Delay in human reaction to events that occur.
• Humans cannot discern intricate details about appearance or location if the object is distant.
• Humans are relatively forgiving about small inaccuracies in a rendered scene.
Taking Advantage of Perceptual Limitations
Exploiting LOD Perception
Only the users who are located near the entity in the VE need to receive the high-detail information.
• Distant viewers can tolerate less detail and less information about current structure, position, and orientation.
• Many inaccuracies go undetected on a fine-resolution display.
Taking Advantage of Perceptual Limitations
Wasted Resources
• Transmitting high-resolution updates to distant viewers imposes unnecessary bandwidth burdens on the network and processing burdens on the receiving hosts.
Taking Advantage of Perceptual Limitations
Multiple Channel Architecture
Each entity transmits multiple independent data channels, each with a different LOD and frequency.
Example: the low-resolution channel might provide updates once every 20 seconds and contain only the entity's position. The high-resolution channel might provide updates every three seconds and include the entity's position, orientation, and dynamic structure. Viewers subscribe to the channel that provides their required LOD.
Though more packets are transmitted, bandwidth requirements are reduced across the network, because distant viewers shift to the low-bandwidth channel.
Taking Advantage of Perceptual Limitations
System Reliability
For channels that generate low-frequency updates, a reliable multicast scheme may be used to prevent receivers from operating with stale data.
Channels that generate high-frequency updates will quickly replace lost updates on their own.
Taking Advantage of Perceptual Limitations
Channel Choices Tradeoff
How many channels should a source host provide?
• The more choices, the greater the chance of matching rendering accuracy and computational requirements.
• Too many choices imposes a high cost in terms of computation at the source host and bandwidth on the host's local network links.
Taking Advantage of Perceptual Limitations
Three-Channel Structure
The source provides three channels for each entity.
- Each channel provides an order-of-magnitude difference in structural and positional accuracy, and an order-of-magnitude difference in packet rate.
Three channels:
- rigid-body channel (far range)
- approximate-body channel (mid range)
- full-body channel (near range)
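A viewer's channel choice reduces to a distance test against two thresholds. The sketch below uses illustrative threshold values, not figures from the text:

```python
def choose_channel(distance, near=100.0, mid=1000.0):
    """Selects one of the three LOD channels by viewer-to-entity
    distance. The `near` and `mid` thresholds are illustrative;
    a real net-VE would tune them per entity type and display."""
    if distance <= near:
        return "full-body"          # highest detail, highest cost
    if distance <= mid:
        return "approximate-body"   # rough dynamic structure
    return "rigid-body"             # position/orientation only
```

As an entity moves relative to the viewer, the viewer unsubscribes from one channel and subscribes to the next, so the bulk of the net-VE population sits on the cheap rigid-body channel at any moment.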
Taking Advantage of Perceptual Limitations
Rigid-Body Channel
Source host transmits enough information to allow remote hosts to represent the entity as a rigid body.
- Used when the entity is distant from the local viewer.
Demands the least network bandwidth and processor computation.
Updates position, orientation, and basic structure only.
Taking Advantage of Perceptual Limitations
Approximate-Body Channel
Enables remote hosts to render a rough approximation of the entity's dynamic structure (e.g., appendages or articulated parts).
- Remote hosts subscribe for non-rigid bodies that are close enough to notice structural differences yet far enough away to tolerate inaccuracies in structural representation.
Consumes more bandwidth and computational resources.
Taking Advantage of Perceptual Limitations
Full-Body Channel
Provides the highest level of detail about the entity's dynamic position, orientation, and structure.
- A viewer is restricted to subscribing to a limited number of these channels at any one time.
Required for nearby entities or when the viewer needs to interact with the entity.
Imposes the highest bandwidth and computational requirements.
Summary of Multiple Channels
- The source host must simultaneously support the low, medium, and high fidelity requirements of its entity viewers.
- Multiple channels improve scalability by allowing each viewer to independently determine its rendering accuracy requirements.
- Multiple channels reduce the aggregate traffic and computation throughout the net-VE by shifting most subscriptions toward the lower-frequency, lower-data-volume updates.
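The channel-selection policy above can be sketched as a simple distance test on the viewer side. The channel names match the slides, but the distance thresholds and the `interacting` shortcut are illustrative assumptions, not part of any standard protocol.

```python
# Illustrative sketch: a viewer picks one fidelity channel per entity
# based on its distance to that entity. The distance thresholds are
# assumed example values.

FULL_BODY_RANGE = 50.0      # nearby: full position/orientation/structure
APPROX_BODY_RANGE = 200.0   # mid-range: rough articulated structure

def select_channel(distance, interacting=False):
    """Return the fidelity channel a viewer should subscribe to."""
    if interacting or distance <= FULL_BODY_RANGE:
        return "full-body"
    if distance <= APPROX_BODY_RANGE:
        return "approximate-body"
    return "rigid-body"   # lowest frequency, lowest data volume

# Most entities in a large net-VE are far away, so most subscriptions
# land on the cheap rigid-body channel.
```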
Taking Advantage of Perceptual Limitations
Exploiting Temporal Perception
- Multiple-fidelity channels can provide largely inaccurate representations because their dead reckoning schemes rely on stale or incomplete information; an approach that eliminates this problem is required.
- Exploiting temporal perception alleviates this problem by rendering the entity in an accurate but slightly out-of-date location, as though it were there at some time in the past.
- Used when the local user is not interacting with the rendered entity and the temporal inaccuracies can be hidden inside the rendered net-VE.
- A formalization of the frequent state regeneration techniques.
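The idea can be sketched on the receiver side: instead of dead reckoning forward from the last update, the host renders each remote entity at (now - delay), interpolating between buffered authoritative states. The delay value and the 1-D positions are assumptions for illustration.

```python
# Illustrative sketch: render a remote entity where it truly was a
# moment ago, interpolated from received state updates, rather than
# where a predictor guesses it is now.

import bisect

class DelayedRenderer:
    def __init__(self, delay=0.2):       # render delay (s) is an assumption
        self.delay = delay
        self.times = []                  # sorted update timestamps
        self.positions = []              # matching (1-D) positions

    def record(self, t, pos):
        """Store an authoritative state update."""
        i = bisect.bisect(self.times, t)
        self.times.insert(i, t)
        self.positions.insert(i, pos)

    def render_position(self, now):
        """Position to draw: the interpolated state at (now - delay)."""
        target = now - self.delay
        i = bisect.bisect(self.times, target)
        if i == 0:
            return self.positions[0]
        if i == len(self.times):
            return self.positions[-1]
        t0, t1 = self.times[i - 1], self.times[i]
        p0, p1 = self.positions[i - 1], self.positions[i]
        frac = (target - t0) / (t1 - t0)
        return p0 + frac * (p1 - p0)
```

Because the renderer only ever interpolates between states it has actually received, the drawn position is accurate, merely late, which matches the "accurate but slightly out-of-date" behavior described above.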
Taking Advantage of Perceptual Limitations
Advantages of Exploiting Temporal Perception
• Because packet recipients can explicitly hide the effects of network latency, the net-VE can be safely deployed over a WAN with greater latency.
• Network bandwidth requirements can be reduced by enhanced packet aggregation techniques that artificially delay the transmission of data.
• These techniques can enhance dead reckoning and other prediction techniques by reducing the required prediction time interval and limiting the potential prediction error. When temporal techniques are used, prediction hides the effects of network latency rather than providing an accurate model of the entity's position.
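The second advantage can be sketched on the sender side: because recipients already render slightly in the past, the sender can hold updates briefly and ship them as one aggregate packet, cutting per-packet header overhead. The 50 ms window and 28-byte header cost are assumed example values.

```python
# Illustrative sketch: count how many packets (and header bytes) a
# sender emits when updates arriving within one artificial-delay
# window are aggregated into a single packet.

AGGREGATION_WINDOW = 0.050   # seconds a sender may delay an update (assumed)
HEADER_BYTES = 28            # assumed per-packet header cost (e.g., UDP/IP)

def packets_and_overhead(update_times):
    """Group updates that fall inside one aggregation window."""
    packets = 0
    window_end = None
    for t in sorted(update_times):
        if window_end is None or t > window_end:
            packets += 1
            window_end = t + AGGREGATION_WINDOW
    return packets, packets * HEADER_BYTES

# Ten updates in a 50 ms burst travel as one packet with one header,
# instead of ten packets with ten headers.
```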
Taking Advantage of Perceptual Limitations
Active and Passive Entities
• An Active Entity takes actions on its own; this includes human participants as well as computer-controlled entities.
• A Passive Entity only reacts to events from its environment and does not generate its own actions. These include inanimate objects such as rocks, books, etc.
Because active entities interact with passive entities, the net-VE must render each passive entity according to the network latency of its nearest active entity, so that users perceive the passive entity reacting instantaneously to the actions of the active entity.
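The rule for passive entities can be sketched as a nearest-neighbor lookup: each passive entity inherits the render delay of the closest active entity. The positions and latency figures below are assumed example values.

```python
# Illustrative sketch: a passive entity (a rock) is drawn with the same
# temporal delay as its nearest active entity, so a push or collision
# appears instantaneous to observers.

import math

def nearest_active_latency(passive_pos, active_entities):
    """active_entities: list of (position, observed_latency_seconds)."""
    best = None
    for pos, latency in active_entities:
        d = math.dist(passive_pos, pos)
        if best is None or d < best[0]:
            best = (d, latency)
    return best[1]

# Two active entities with different observed latencies (assumed values):
actives = [((0.0, 0.0), 0.120), ((30.0, 0.0), 0.040)]
rock = (25.0, 0.0)
render_delay = nearest_active_latency(rock, actives)
# The rock inherits the 40 ms delay of the closer active entity.
```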
Enhancing the System Architecture
Optimizing by changing the logical structure of the net-VE system to enable more efficient information dissemination.
Net-VEs are divided into two basic structures:
- client-server
- peer-to-peer
Recent research has explored ways to combine the two to support greater scalability and performance.
Enhancing the System Architecture
Two ways to improve efficiency in large-scale Virtual Environments:
1) The client-server architecture can be expanded to include a cluster of servers that communicate in a peer-to-peer manner.
2) Client-server and peer-to-peer structures are merged to create the peer-server architecture.
Three Types of Server Clusters
• Partitioning clients across multiple servers
• Partitioning the net-VE across multiple servers
• Server hierarchies (hybrid)
Enhancing the System Architecture
Partitioning Clients across Multiple Servers
Ex. A client forwards an update message to its server; that server forwards the message to its interested clients as well as to other servers having clients interested in the information. The other servers then forward the information to their interested clients.
• This requires that the servers themselves communicate using peer-to-peer protocols. Control Messages must be sent between servers containing composite information about the interests of each server's clients. This limits the flow of data to only those servers that have interested clients.
• Disadvantages include greater latency than single-server systems, because information is exchanged through multiple servers. Also, the amount of processing and bandwidth required by the servers is much greater due to the exchange of composite information.
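The forwarding pattern above can be sketched as follows. Each server keeps its own clients' interests plus each peer's composite interest (learned from the control messages); an update travels only where there is interest. The region names and the in-memory "delivery log" are assumptions for illustration.

```python
# Illustrative sketch of client-partitioned servers: updates are
# forwarded to local interested clients and to peer servers whose
# composite interest covers the update's region.

class Server:
    def __init__(self, name):
        self.name = name
        self.client_interest = {}   # client id -> set of regions of interest
        self.peer_interest = {}     # peer Server -> composite region set
        self.delivered = []         # (client, update) log for the demo

    def composite_interest(self):
        """Union of this server's clients' interests (sent as a control message)."""
        if not self.client_interest:
            return set()
        return set().union(*self.client_interest.values())

    def handle_update(self, region, update, from_peer=False):
        # Forward to local clients interested in this region.
        for client, regions in self.client_interest.items():
            if region in regions:
                self.delivered.append((client, update))
        # Forward once to interested peers; peers only serve their own clients.
        if not from_peer:
            for peer, regions in self.peer_interest.items():
                if region in regions:
                    peer.handle_update(region, update, from_peer=True)
```

Note the extra hop: a client on the second server sees the update only after two server traversals, which is exactly the added latency the slide warns about.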
Enhancing the System Architecture
Partitioning the net-VE across Multiple Servers
Each server is responsible for clients located within a particular region of the net-VE, and each client communicates with different servers as it moves through the environment.
Partitioning the net-VE can eliminate almost 95 percent of the information exchange among servers; however, this requires advance configuration work to enable information exchanges among the servers, especially when one region can see into another.
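The region-ownership scheme can be sketched as a static map from world regions to servers, with a handoff check as a client moves. The region size and server names are assumptions; real systems would also handle cross-region visibility, which this sketch omits.

```python
# Illustrative sketch: the world is split into fixed square regions,
# each owned by one server; a moving client re-connects to the owner
# of its new region when it crosses a boundary.

REGION_SIZE = 100.0   # assumed region edge length
REGION_OWNER = {      # assumed static region -> server configuration
    (0, 0): "server-A", (1, 0): "server-B",
    (0, 1): "server-C", (1, 1): "server-D",
}

def region_of(x, y):
    return (int(x // REGION_SIZE), int(y // REGION_SIZE))

def server_for(x, y):
    return REGION_OWNER[region_of(x, y)]

def maybe_handoff(old_xy, new_xy):
    """Return the new server if the move crosses a region boundary, else None."""
    old_server = server_for(*old_xy)
    new_server = server_for(*new_xy)
    return new_server if new_server != old_server else None
```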
Server Hierarchies
• Server hierarchies require that servers themselves act as clients in a client-server relationship with higher-level servers. The higher-level servers communicate information on behalf of the net-VE regions represented by their "client" servers.
• A higher-level server sends updates to its client servers, which forward this information to their interested entities. When a client server forwards updates up to the higher-level server, that server determines which areas require the information and forwards the update to other high-level servers, which then send the information to their interested client servers.
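The upward-then-downward routing can be sketched with a single parent level; class names and area labels are assumptions. A full hierarchy would repeat the same routing step between high-level servers.

```python
# Illustrative sketch of one level of a server hierarchy: a "client"
# server pushes an update to its parent, and the parent forwards it
# only to sibling servers whose represented areas need it.

class ClientServer:
    def __init__(self, areas):
        self.areas = set(areas)   # net-VE areas this server represents
        self.received = []        # (area, update) log for the demo

    def receive(self, area, update):
        self.received.append((area, update))

class HighLevelServer:
    def __init__(self, children):
        self.children = children

    def route_up(self, origin, area, update):
        """Called when child `origin` forwards an update upward."""
        for child in self.children:
            if child is not origin and area in child.areas:
                child.receive(area, update)
```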
Enhancing the System Architecture
Peer-Server Systems
• The hybrid Peer-Server technique merges the best characteristics of traditional peer-to-peer and client-server systems.
• A Forwarding Server subscribes to multicast groups for entities of interest to the destination host, performs aggregation and filtering functions, and forwards updates to the destination hosts.
• A Monitoring Directory Server collects information about the environment and dynamically determines which hosts should receive transmissions from each entity in the environment.
• Reachability Testing determines whether a source host can communicate with a destination host.
• The Peer-Server technique represents an example of an Adaptive net-VE System Architecture that dynamically adjusts to account for network topology, host location, and network dynamics.
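The core peer-server decision can be sketched as a reachability test: a destination that can receive the source's multicast directly participates as a peer, and everything else falls back to a forwarding server. The reachability table and host names are assumptions.

```python
# Illustrative sketch: pick the delivery path per (source, destination)
# pair based on a reachability test result.

# Assumed results of prior reachability tests:
MULTICAST_REACHABLE = {
    ("src1", "dstA"): True,    # dstA can join src1's multicast group
    ("src1", "dstB"): False,   # dstB cannot (e.g., no multicast route)
}

def delivery_path(source, dest):
    if MULTICAST_REACHABLE.get((source, dest), False):
        return "direct-multicast"      # pure peer-to-peer path
    return "forwarding-server"         # server aggregates, filters, forwards

# dstA hears src1 natively; dstB's traffic is routed through the
# forwarding server instead.
```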
Conclusions