Transcript of: Cisco Load Balancing Solutions — 1F0_4553_c1 © 1999, Cisco Systems, Inc.

Cisco Load Balancing Solutions
Agenda
• Problems We Are Solving
• DistributedDirector
• LocalDirector
• MultiNode Load Balancing
Problems We Are Solving

• Efficient, high-performance client access to large server complexes
• Continuous availability of server applications
• Scalable, intelligent load distribution across servers in the complex
• Load distribution based on server capacity to do work and application availability
DistributedDirector

What Is DistributedDirector?
• Two pieces:
  Standalone software/hardware bundle: special Cisco IOS®-based software on Cisco 2501, 2502, and Cisco 4700M hardware platforms (11.1IA release train)
  Cisco IOS software release 11.3(2)T and later on DRP-associated routers in the field
• DistributedDirector is NOT a router
• Dedicated box for DistributedDirector processing
What Does DistributedDirector Do?

• Resolves domain or host names to a specific server (IP address)
• Provides transparent access to the topologically closest Internet/intranet server relative to the client
• Maps a single DNS host name to the server "closest" to the client
• Dynamically binds one of several IP addresses to a single host name
• Eliminates the need for end users to choose from a list of URL/host names to find the "best" server
• The only solution that uses intelligence in the network infrastructure to direct the client to the best server
DNS-Based Distribution

[Figure: a client resolves appl.com through DistributedDirector (DD), which selects among servers APPL1 (1.1.1.1), APPL2 (2.2.2.1), and APPL3 (3.3.3.1); steps numbered 1-4]
• Client connects to appl.com
• appl.com request routed to DistributedDirector
• DistributedDirector uses multiple decision metrics to select the appropriate server destination
• DistributedDirector sends the destination address to the client
• Client connects to the appropriate server
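The flow above ends with the director handing the client the address of one selected server. A minimal sketch of that selection step, with invented server addresses from the figure and a toy "distance" metric standing in for DistributedDirector's real decision logic:

```python
# Toy sketch of the DNS-based selection step. The metric values are
# placeholders, not anything DistributedDirector actually computes.

def select_server(servers, metric):
    """Return the server address with the lowest metric value."""
    return min(servers, key=metric)

# Candidate servers for appl.com (addresses from the figure).
servers = ["1.1.1.1", "2.2.2.1", "3.3.3.1"]

# Pretend-measured distance from the client to each server.
distance = {"1.1.1.1": 3, "2.2.2.1": 1, "3.3.3.1": 5}

best = select_server(servers, lambda addr: distance[addr])
print(best)  # → 2.2.2.1, returned to the client as the DNS answer
```

The client then opens its connection directly to `best`; the director is only in the path for the name lookup.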
How Are DistributedDirector Choices Made?

• Director Response Protocol (DRP): interoperates with remote routers (DRP agents) to determine network topology and the network distance between clients and servers
• Client-to-server link latency (RTT)
• Server availability
• Administrative "cost": take a server out of service for maintenance
• Proportional distribution: for heterogeneous distributed server environments
• Random distribution
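One way to picture how the metrics above combine: filter out unavailable servers, then rank the rest. The field names, the lexicographic ordering, and the data are assumptions for illustration; the real metric ordering is configured on the DistributedDirector.

```python
# Illustrative combination of the decision metrics listed above.
# Field names and the (as_hops, rtt, cost) ordering are assumptions.

def score(server):
    """Lower tuple = better candidate (hops first, then RTT, then cost)."""
    return (server["as_hops"], server["rtt_ms"], server["admin_cost"])

def choose(servers):
    available = [s for s in servers if s["available"]]
    return min(available, key=score)["addr"]

servers = [
    {"addr": "1.1.1.1", "available": True,  "as_hops": 2, "rtt_ms": 40, "admin_cost": 0},
    {"addr": "2.2.2.1", "available": True,  "as_hops": 1, "rtt_ms": 55, "admin_cost": 0},
    {"addr": "3.3.3.1", "available": False, "as_hops": 1, "rtt_ms": 10, "admin_cost": 0},
]
print(choose(servers))  # → 2.2.2.1 (fewest AS hops among available servers)
```

Note how the unavailable 3.3.3.1 is excluded even though its raw metrics are best, mirroring the server-availability check.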
Director Response Protocol (DRP)

[Figure: a client on the Internet reaching Web servers through DRP agents and DistributedDirector]

• Operates with routers in the field to determine:
  Client-to-server network proximity
  Client-to-server link latency
DRP "External" Metric

[Figure: client in AS1; servers with DRP agents in AS2, AS3, and AS4; paths of one and two BGP AS hops]

• Measures distance from DRP agents to the client in BGP AS hop counts
DRP "Round-Trip Time" Metric

[Figure: RTT measurement from DRP-agent servers in AS2, AS3, and AS4 to a client in AS1]

• Measures client-to-DRP-server round-trip times
• Compares link latencies
• Server with the lowest round-trip time is considered "best"
• Maximizes end-to-end server access performance
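A minimal sketch of the RTT metric, using a timed TCP connect as the probe. This is an assumption for illustration (DRP agents have their own measurement mechanism); host and port values are placeholders.

```python
# Sketch of "lowest round-trip time wins". The TCP-connect probe is an
# illustrative stand-in for a DRP agent's actual measurement.
import socket
import time

def measure_rtt(host, port=80, timeout=2.0):
    """Time a TCP connect to (host, port); None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def best_by_rtt(rtts):
    """Given {server: rtt_or_None}, return the reachable server with lowest RTT."""
    reachable = {s: r for s, r in rtts.items() if r is not None}
    return min(reachable, key=reachable.get) if reachable else None

# With pre-measured values (as agents might report them):
print(best_by_rtt({"srv-a": 0.045, "srv-b": 0.012, "srv-c": None}))  # → srv-b
```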
"Portion" Metric

• Proportional load distribution across heterogeneous servers
• Can also be used to enable traditional round-robin DNS

Server                       "Portion" Metric Value   Portion of Connections
Server 1 (SPARCstation)      7                        7/24 = 29.2%
Server 2 (SPARCstation)      8                        8/24 = 33.3%
Server 3 (Pentium 60 MHz)    2                        2/24 = 8.3%
Server 4 (Pentium 60 MHz)    2                        2/24 = 8.3%
Server 5 (Pentium 166 MHz)   5                        5/24 = 20.8%
Total                        24                       24/24 = 100%
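The portion metric is a weighted distribution: each server receives a share of connections proportional to its configured weight. A small sketch using the weights from the table (the selection code is illustrative, not DistributedDirector's implementation):

```python
# "Portion" metric sketch: weights mirror the table above.
import random

def pick(weights, rng=random):
    """Choose a server with probability weight / total."""
    servers = list(weights)
    return rng.choices(servers, weights=[weights[s] for s in servers])[0]

weights = {"server1": 7, "server2": 8, "server3": 2, "server4": 2, "server5": 5}
total = sum(weights.values())          # 24, matching the table's total
share = {s: w / total for s, w in weights.items()}
print(round(share["server2"], 3))      # → 0.333, i.e. 8/24 from the table
```

Setting all weights equal degenerates to the round-robin DNS case mentioned above.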
Server Availability Parameter

• DistributedDirector establishes a TCP connection to the service port on each remote server, verifying that the service is available
• Verification is made at regular intervals
• Port number and connection interval are configurable
• Minimum configurable interval is ten seconds
• Maximizes service availability as seen by clients
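The availability check above amounts to "can I open a TCP connection to the service port?". A minimal sketch (the probe-and-interval design is what the slide describes; the function name and timeout are assumptions):

```python
# TCP availability probe, as described on this slide: a successful
# connect to the service port marks the service available.
import socket

def service_available(host, port, timeout=2.0):
    """True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a listener we start ourselves on localhost.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))         # OS assigns a free port
srv.listen(1)
port = srv.getsockname()[1]
print(service_available("127.0.0.1", port))   # → True: the port is listening
srv.close()
```

In the real product this probe runs on a timer (minimum ten-second interval) and flips the server in or out of the candidate set.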
DistributedDirector—How Does It Work?

• Two configuration modes:
  DNS caching name server authoritative for the www.foo.com subdomain
  HTTP redirector for http://www.foo.com
• Modes configurable on a per-domain basis
DistributedDirector—Redundancy

• DNS mode
  Use multiple DistributedDirectors as several name servers authoritative for a given hostname to provide redundancy
  All DistributedDirectors are considered to be primary DNS servers
• HTTP mode
  Use multiple DistributedDirectors and Cisco's Hot Standby Router Protocol (HSRP) to provide redundancy
LocalDirector
• LocalDirector appliance front-ends the server farm
  Load balances connections to the "best" server
  Failures and changes transparent to end users
  Improves response time
  Simplifies operations and maintenance
• Simultaneously supports different server platforms and operating systems
• Any TCP service (not just Web)

[Figure: a user on the Internet or intranet reaching a data center through LocalDirector]
LocalDirector—Server Management
• Represents multiple servers with a single virtual address
• Easily move servers in and out of service
• Identifies failed servers: takes offline
• Identifies working servers: places in service
• IP address management
• Application-specific servers
• Maximum connections
• Hot-standby server
LocalDirector—Specifications
• 80-Mbps throughput—model 416
• 300-Mbps throughput—model 430 Fast Ethernet channel
• Supports up to 64,000 virtual and real IP addresses
• Up to 16 10/100 Ethernet, 4 FDDI ports
• One million simultaneous TCP connections
• TCP, UDP applications supported
Network Address Translation
• Client traffic destined for the virtual address is distributed across multiple real addresses in the server cluster
• Transparent to client and server
• Network Address Translation (NAT) requires all traffic to pass through LocalDirector
• Virtuals and reals are IP address/port combinations

[Figure: clients reach LocalDirector at virtual address 1.1.1.1; Servers 1-3 in the cluster hold real addresses 2.2.2.1, 2.2.2.2, and 3.3.3.1]
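A toy model of the NAT step described above: the destination of a client packet (the virtual address) is rewritten to a chosen real server, and the mapping is remembered per client so replies can be translated back. The flow-table design and round-robin choice are assumptions, not LocalDirector internals; addresses come from the figure.

```python
# Toy NAT flow table: virtual -> real on the way in, real -> virtual
# on the way out. Per-client state keeps a session on one server.

class Nat:
    def __init__(self, virtual, reals):
        self.virtual, self.reals = virtual, reals
        self.flows, self.next = {}, 0

    def inbound(self, client, dst):
        """Rewrite (client -> virtual) to (client -> real), round-robin."""
        if dst != self.virtual:
            return dst                                # not ours; pass through
        if client not in self.flows:                  # new flow: pick a real
            self.flows[client] = self.reals[self.next % len(self.reals)]
            self.next += 1
        return self.flows[client]

    def outbound(self, client, src):
        """Rewrite the real server's reply source back to the virtual."""
        return self.virtual if self.flows.get(client) == src else src

nat = Nat(virtual="1.1.1.1", reals=["2.2.2.1", "2.2.2.2", "3.3.3.1"])
real = nat.inbound("10.0.0.5", "1.1.1.1")
print(real, nat.outbound("10.0.0.5", real))   # → 2.2.2.1 1.1.1.1
```

Because both directions must consult the flow table, all traffic has to transit the box — exactly the NAT constraint the slide notes.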
Session Distribution Algorithm

• Passive approach
  Least connections
  Weighted
  Fastest
  Linear
  Source IP
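Of the algorithms listed, "least connections" is the easiest to picture: each new session goes to the server currently holding the fewest active sessions. The counter-based model below is an illustration, not LocalDirector's internals.

```python
# "Least connections" sketch: pick the server with the fewest active sessions.

def least_connections(active):
    """active: {server: current connection count}. Returns the chosen server."""
    return min(active, key=active.get)

active = {"server1": 12, "server2": 7, "server3": 9}
chosen = least_connections(active)
active[chosen] += 1                 # the new session is assigned to it
print(chosen, active[chosen])       # → server2 8
```

"Weighted" variants would divide each count by a capacity weight before comparing; "source IP" instead hashes the client address so a client sticks to one server.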
Ideal for Mission-Critical Applications

[Figure: LocalDirector front-ending TAP servers (mail, Web, FTP, and so on) as a high-availability solution]
LocalDirector Strengths

• Network Address Translation (NAT) allows an arbitrary IP topology between LocalDirector and the servers
• Proven market leader with extensive field experience
• Rich set of features to map between virtual and real addresses
• Bridge-like operation allows transparent deployment and gradual migration to NAT
LocalDirector Weaknesses

• NAT requires all traffic to be routed through a single box
• NAT requires that data be scanned and manipulated beyond the TCP/UDP header
• Two interface types supported: FE and FDDI
MultiNode Load Balancing

MultiNode Load Balancing (MNLB)
• Next-generation server load balancing
• Unprecedented high availability: eliminate single points of failure
• Unprecedented scalability: allow immediate incremental or large-scale expansion of application servers
• New dynamic server feedback: balance load according to actual application availability and server workload
MNLB—What Is It?

• Hardware and software solution that distributes IP traffic across server farms
• Cisco IOS router and switch based
• Implementation of Cisco's ContentFlow architecture
• Utilizes a dynamic feedback protocol for balancing decisions
MNLB Features

• Defines a single-system image or "virtual address" for IP applications on multiple servers
• Load balances across multiple servers
• Uses server feedback or statistical algorithms for load-balancing decisions
• Server feedback contains application availability and/or server work capacity
• Algorithms include round robin, least connections, and best performance
MNLB Features

• Session packet forwarding distributed across multiple routers or switches
• Supports any IP application: TCP, UDP, FTP, HTTP, Telnet, and so on
• For IBM OS/390 Parallel Sysplex environments:
  Delivers generic resource capability
  Makes load-balancing decisions based on OS/390 Workload Manager data
MNLB Components

• Services Manager
  Software runs on LocalDirector
  ContentFlow Flow Management Agent
  Makes load-balancing decisions
  Uses MNLB to instruct Forwarding Agents of the correct server destination
  Uses the server feedback protocol to maintain server capacity and application availability info
• Backup Services Manager
  Enables 100% availability for the Services Manager
  No sessions lost due to primary Services Manager failure
MNLB Components

• Forwarding Agent
  Cisco IOS router and switch software
  ContentFlow Flow Delivery Agent
  Uses MNLB to communicate with the Services Manager
  Sends connection requests to the Services Manager
  Receives the server destination from the Services Manager
  Forwards data to the chosen server
• Workload Agents
  Run on either server platforms or management consoles
  Maintain information on server work capacity and application availability
  Communicate with the Services Manager using the server feedback protocol
  For IBM OS/390 systems, deliver OS/390 Workload Manager data
How Does MNLB Work?

[Figure: client, Forwarding Agents, Services Manager, and Workload Agents]

• Initialization:
  Services Manager locates Forwarding Agents
  Instructs each Forwarding Agent to send session requests for defined virtuals to the Services Manager
  Locates Workload Agents and receives server operating and application information
How Does MNLB Work?

• Session packet flow:
  1. Client transmits a connection request to the virtual address
  2. Forwarding Agent transmits the packet to the Services Manager; the Services Manager selects the appropriate destination and tells the Forwarding Agent
  3. Forwarding Agent forwards the packet to the destination
  4. Session data flows through any Forwarding Agent router and switch
  The Services Manager is also notified on session termination
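A toy walk-through of that numbered flow: the first packet of a session is punted to the Services Manager, which picks a real server; the Forwarding Agent then caches the decision so later packets of the session are forwarded directly. Class and method names are invented for illustration; the real components speak the ContentFlow protocols.

```python
# Toy model of the MNLB session flow: first packet consults the
# Services Manager (step 2); later packets hit the cached entry (steps 3-4).

class ServicesManager:
    def __init__(self, reals):
        self.reals, self.i = reals, 0
    def assign(self, flow):
        """Step 2: pick a destination server for a new flow (round-robin here)."""
        server = self.reals[self.i % len(self.reals)]
        self.i += 1
        return server

class ForwardingAgent:
    def __init__(self, manager):
        self.manager, self.table = manager, {}
    def forward(self, flow):
        if flow not in self.table:           # first packet: ask the manager
            self.table[flow] = self.manager.assign(flow)
        return self.table[flow]              # forward directly thereafter

sm = ServicesManager(["srv-a", "srv-b"])
fa = ForwardingAgent(sm)
print(fa.forward("client1:1234"), fa.forward("client1:1234"), fa.forward("client2:5678"))
# → srv-a srv-a srv-b  (repeat packets reuse the cached decision)
```

Because the per-flow cache lives in each Forwarding Agent, the Services Manager is off the data path after the first packet — which is what removes the single-box throughput bottleneck.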
Dispatch Mode of Session Distribution

• Virtual IP address (VIPA) on hosts (alias, loopback)
• Load balancer presents the virtual IP address to the network
• Load balancer forwards packets based on the Layer 2 address
  Uses ARP to obtain the Layer 2 address
  IP header still contains the virtual IP address
• Requires subnet adjacency, since it relies on Layer 2 addressing

[Figure: clients reach LocalDirector at virtual address 1.1.1.1; Servers 1-3 hold real addresses 2.2.2.1, 2.2.2.2, and 3.3.3.1, each with VIPA 1.1.1.1 configured]
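The key point of dispatch mode is that only the Layer 2 (MAC) destination is rewritten; the IP header keeps the virtual address, which every server also has configured as a loopback/alias VIPA. A sketch with a dict standing in for a frame (MAC values are placeholders):

```python
# Dispatch-mode sketch: rewrite the destination MAC only; the IP header
# keeps the virtual address (each server has it as a VIPA alias).

VIPA = "1.1.1.1"
server_macs = ["aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:03"]
_next = 0

def dispatch(frame):
    """Rewrite dst MAC round-robin for VIPA traffic; IP header untouched."""
    global _next
    if frame["dst_ip"] == VIPA:
        frame["dst_mac"] = server_macs[_next % len(server_macs)]
        _next += 1
    return frame

out = dispatch({"dst_ip": "1.1.1.1", "dst_mac": "balancer"})
print(out["dst_mac"], out["dst_ip"])   # → aa:aa:aa:aa:aa:01 1.1.1.1
```

Since forwarding is purely a MAC rewrite, the balancer must be Layer 2 adjacent to the servers — the subnet-adjacency constraint the slide calls out — but server replies can bypass it entirely.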
Dispatch Mode

• Benefits
  No need to scan past the TCP/UDP header, so it may achieve higher performance
  Outbound packets may travel any path
• Issues
  Inbound packets must pass through the load balancer
  Ignoring outbound packets limits the effectiveness of the balancing decisions
  Subnet adjacency can be a real network design problem
MNLB

• Uses either NAT or modified dispatch mode
• NAT
  MNLB architecture creates high availability—no single point of failure
  No throughput bottleneck
• Modified dispatch mode
  Uses a Cisco Tag Switching network to address across multiple subnets
  Inbound and outbound traffic can travel through any path
  Services Manager notified on session termination
Benefits

MNLB: The Next Generation
• Unprecedented high availability
Eliminate single points of failure
• Unprecedented scalability
Allow immediate incremental or large-scale expansion of application servers
• New dynamic server feedback
Balance load according to actual application availability and server workload
Single System Image
• One IP address for the server cluster
• Easy to grow and maintain server cluster without disrupting availability or performing administrative tasks on clients
• Easy to administer clients: only one IP address
• Enhances availability
Server Independence
• MNLB operates independently of the server platform
• Server agents operate in IBM MVS, IBM OS/390, IBM TPF, NT, and UNIX sites
• Application-aware load distribution available in all server sites
• Enables IP load distribution for large IBM Parallel Sysplex complexes
Application-Aware Load Balancing
• Client traffic is distributed across the server cluster to the best server for the request
• Transparent to the client
• Allows agent(s) in servers to provide intelligent feedback to the network as the basis for balancing decisions
• Uses IBM's OS/390 Workload Manager in OS/390 Parallel Sysplex environments
• Application-aware load balancing ensures session completion
Total Redundancy—Ultimate Availability
• No single point of failure for applications, servers, or MNLB
• Multiple Forwarding Agents ensure access to the server complex
• Multiple Services Managers ensure load balancing is maintained through failure
• A single cluster address for multiple servers maintains access to applications in case of server failure or server maintenance
Unbounded Scalability
• Scalability limited only by the number and throughput of Forwarding Agents
• Performance limited only by the number and throughput of Forwarding Agents
• Forwarding Agents can be added at any time with no loss of service
• Servers can be added with no network design changes
• NO throughput bottlenecks
• Scales to the largest of Web sites
Implementation and Road Map

Phase One Implementation
• MNLB components
  Cisco IOS-based Forwarding Agents in Cisco 7500, 7200, 4000, 3600, and Catalyst® 5000R
  Services Manager runs on the LocalDirector chassis
  LocalDirector hot standby for the phase-one backup manager
  Workload Agents for IBM OS/390, IBM TPF, NT, and UNIX
Thank You

Q & A