CONTENT DELIVERY NETWORK
Benefits of CDN design with XCloud Networks

WHITE PAPER


CONTENT DELIVERY NETWORK: INTRODUCTION

Building a Content Delivery Network (CDN) is crucial for companies that need to deliver content at scale while preserving all the benefits of running their own hosted infrastructure.

When rolling out a CDN, IT companies distribute pieces of their infrastructure to remote locations spread across the world. It is still their own infrastructure and it needs to be managed efficiently. This distributed nature brings many benefits but also introduces complexity.

XCloud Networks offers a solution based on open hardware available from seven well-established brands, running Cumulus Linux as the network operating system (NOS) and XCloud Conductor as the software-defined networking platform, while leveraging routing-on-the-host principles using Free Range Routing (FRR), an open-source IP routing suite.


This paper focuses on the key benefits achieved when building a CDN using the XCloud Conductor platform:

• Simple but robust centralized operations and management
• Universal network equipment handling multiple jobs in a single box
• Easy deployment
• Cost efficiency
• Independence from hardware vendors
• Resiliency
• Increased agility in maintenance and deployment
• Load balancing
• Security
• Extended flexibility


CONTENT DELIVERY NETWORK: TOPOLOGY

Let's study an example topology of a CDN node.

(Diagram: CDN node - end users reach the node through an ISP/IXP router; network switches sit behind the router and connect to the servers.)

Servers are connected to the network switches, leveraging routing-on-the-host technology using FreeRangeRouting (FRR) and Cumulus Quagga. The network switches hold /32 routes towards every server, and every server holds a default route towards the network switches. Hashing mechanisms and ECMP (equal-cost multi-path) ensure that all available paths are utilized, providing redundancy, aggregated bandwidth and load balancing.
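
To make the ECMP behaviour concrete, here is a minimal Python sketch of hash-based next-hop selection. It is an illustration only: real switching silicon uses its own hardware hash functions, and the addresses are the illustrative ones from the diagram, not a prescribed scheme.

```python
import hashlib

# Equal-cost next hops a switch could hold for the anycast /32
# (one per server in the CDN node) -- addresses are illustrative.
NEXT_HOPS = ["5.0.0.1", "5.0.0.2", "5.0.0.3", "5.0.0.4"]

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the flow 5-tuple and map it onto one equal-cost path.

    Because the hash input stays constant for the lifetime of a TCP
    session, every packet of that session takes the same path.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return NEXT_HOPS[digest % len(NEXT_HOPS)]

# Two packets of the same flow land on the same server,
# while different flows spread across all four paths.
print(pick_next_hop("198.51.100.10", "1.0.0.1", 51514, 443))
print(pick_next_hop("198.51.100.10", "1.0.0.1", 51514, 443))
print(pick_next_hop("203.0.113.7", "1.0.0.1", 40001, 443))
```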

The network switches are also used for connecting to the ISP that hosts the CDN node, to an IXP, or to multiple ISPs, depending on particular needs. BGP sessions between the CDN node and the ISP/IXP network are handled purely by the network switches. Typically a CDN node advertises at least one /24, which becomes the Anycast subnet (more about anycast later in this paper). That /24 is received by the ISP/IXP and advertised to their peers, so end-user requests are routed by their ISP to the nearest CDN node.

When a CDN node is connected to just a single ISP, the network switches hold only a default route pointing towards that ISP. Traffic destined to end-users is routed towards the ISP's router, and the ISP handles the rest of the forwarding based on its internal routing policies.

When a CDN node is connected to more than one ISP or, more typically, to an IXP, the CDN node should decide which exit point is the best path towards the end-user. The network switches can hold up to 85K prefixes locally, which is enough to properly calculate the best path among several ISPs, or among all prefixes received even from major IXPs. With these techniques the CDN node handles routing, load balancing and ACLs without involving additional equipment.
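
As a rough illustration of that local exit decision, the sketch below performs a longest-prefix-match lookup across prefixes learned from several hypothetical peers. Real BGP best-path selection also weighs local preference, AS-path length, MED and more, so treat this purely as a simplified model with made-up data.

```python
from ipaddress import ip_address, ip_network

# Prefixes learned from each upstream peer (purely illustrative data).
RIB = {
    "isp-a": [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/22")],
    "isp-b": [ip_network("198.51.100.0/24")],
    "ixp-peer-1": [ip_network("192.0.2.0/25")],
}

def best_exit(destination: str):
    """Return (peer, prefix) with the longest prefix covering the destination."""
    dst = ip_address(destination)
    candidates = [
        (prefix.prefixlen, peer, prefix)
        for peer, prefixes in RIB.items()
        for prefix in prefixes
        if dst in prefix
    ]
    if not candidates:
        return None  # in practice, fall back to a default route
    _, peer, prefix = max(candidates)
    return peer, prefix

print(best_exit("198.51.100.42"))  # -> ('isp-b', 198.51.100.0/24): the more specific route wins
```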


CONTENT DELIVERY NETWORK: CONTROLLER

(Diagram: a single controller providing centralized control of all CDN nodes - general health overview, load-balancing stats and logging.)

The network switches of all CDN nodes spread across the world connect to a single XCloud Conductor controller (hosted on premises or in the cloud) through an encrypted tunnel over the Internet. The encrypted tunnel carries only control traffic between the switches and the controller. The XCloud Conductor portal becomes a one-stop shop for all operations related to the network switches of the distributed CDN nodes.

Traffic statistics are collected in the centralized portal, where custom traffic boards can be created to monitor traffic distribution. Load balancer health-checks visualize application health for every CDN node as well as for each individual server within every CDN node. ACL rules can be defined centrally and pushed automatically.

Traffic Statistics example view.


CONTENT DELIVERY NETWORK: LOAD BALANCER & ACL

Load Balancer Health Monitor - 3 nodes, all OK

Load Balancer Health Monitor - one node expanded, showing the members of the node and the configured health-checks

Load Balancer Health Monitor - one node has failed; details are shown in the expanded view

Example dialog for creating an ACL rule.

The Check button searches across all ACLs in the system to determine whether an existing ACL already covers the new rule.

Approval procedures can be enabled for different scenarios.

For temporary ACLs, a deactivation date and time can be set.
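
The coverage check behind such a button can be approximated in a few lines of Python. The rule structure below (source prefix, destination port range, protocol, action) is an assumption for illustration, not the actual XCloud Conductor data model.

```python
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass(frozen=True)
class AclRule:
    src: str          # source prefix, e.g. "0.0.0.0/0"
    ports: range      # destination port range
    proto: str        # "tcp" / "udp"
    action: str       # "permit" / "deny"

def covers(existing: AclRule, new: AclRule) -> bool:
    """True if an existing rule already matches everything the new rule would."""
    return (
        existing.action == new.action
        and existing.proto == new.proto
        and ip_network(new.src).subnet_of(ip_network(existing.src))
        and new.ports.start >= existing.ports.start
        and new.ports.stop <= existing.ports.stop
    )

existing_rules = [
    AclRule("0.0.0.0/0", range(80, 81), "tcp", "permit"),
    AclRule("0.0.0.0/0", range(443, 444), "tcp", "permit"),
]
new_rule = AclRule("203.0.113.0/24", range(443, 444), "tcp", "permit")

print(any(covers(r, new_rule) for r in existing_rules))  # True: already covered
```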


CONTENT DELIVERY NETWORK: ROUTING ON THE HOST

In this example every CDN server within a single CDN node forms BGP sessions with every switch. Every server has two IP addresses configured on its loopback interface: an Anycast address, which is the same for every server (1.0.0.1) and is used for end-user traffic, and a Unicast address (unique for every server), which is used for server-to-server communication. In this scenario every switch holds 4 x /32 routes towards the servers, so every switch load-balances requests coming from end-users across the 4 servers. The switching silicon calculates hashes from the source/destination IP and port, which ensures that a single TCP session always reaches the same server.
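
A minimal sketch of this addressing scheme, using the same illustrative addresses as the diagram below: each server shares the anycast loopback 1.0.0.1/32 and adds its own unicast /32, so a switch that peers with all four servers ends up with four equal-cost next hops for the anycast prefix and exactly one for each unicast prefix.

```python
from collections import defaultdict

ANYCAST = "1.0.0.1/32"                       # same on every server's loopback
SERVERS = {                                  # unicast loopback per server (illustrative)
    "server-1": "5.0.0.1/32",
    "server-2": "5.0.0.2/32",
    "server-3": "5.0.0.3/32",
    "server-4": "5.0.0.4/32",
}

# Each server advertises both loopback /32s over BGP; the switch keeps
# every advertisement, keyed by prefix, with the server as next hop.
switch_rib = defaultdict(set)
for server, unicast in SERVERS.items():
    next_hop = unicast.split("/")[0]
    switch_rib[ANYCAST].add(next_hop)        # 4 equal-cost paths
    switch_rib[unicast].add(next_hop)        # 1 path each

for prefix, next_hops in sorted(switch_rib.items()):
    print(f"{prefix:<12} via {sorted(next_hops)}")
# 1.0.0.1/32 ends up with four next hops, so end-user traffic to the
# anycast address is ECMP-balanced across all servers, while each
# unicast /32 still points at exactly one server.
```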

Health-checks constantly test the reachability of each service running on every individual server in order to detect failures at the application level and re-route end-user requests to another server. All health-check states are collected and visualized in the centralized XCloud Conductor portal. If maintenance needs to be done on a server, that server can be placed in maintenance mode and traffic will be re-routed away from it.
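
A hedged sketch of such a health-check loop is shown below. The check interval, the TCP-connect probe, the port and the maintenance flag are illustrative assumptions; in the real deployment the result determines whether a server's anycast /32 keeps being advertised to the switches.

```python
import socket

ANYCAST_PORT = 443                 # assumed service port for the probe
SERVERS = {                        # unicast addresses used for probing
    "5.0.0.1": {"maintenance": False},
    "5.0.0.2": {"maintenance": False},
    "5.0.0.3": {"maintenance": True},   # operator set maintenance mode
    "5.0.0.4": {"maintenance": False},
}

def service_is_up(address: str, port: int, timeout: float = 2.0) -> bool:
    """TCP-connect probe: a reachable service means the check passes."""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False

def eligible_next_hops() -> list:
    """Servers that should keep receiving end-user traffic."""
    return [
        addr
        for addr, state in SERVERS.items()
        if not state["maintenance"] and service_is_up(addr, ANYCAST_PORT)
    ]

# Servers in maintenance or failing the probe are left out, which in the
# real setup corresponds to withdrawing their anycast /32 from the switches.
print(eligible_next_hops())
```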

ACL rules configured from the portal are applied on the appropriate interfaces to prevent unwanted traffic from reaching the servers. Since a CDN node is by definition Internet-facing, it is necessary to allow only the required traffic and block everything else.

(Diagram: CDN node with BGP sessions between switches and servers, ACL rules facing the users, and health-checks towards the servers. Each of the four servers carries a unique unicast loopback address (U: 5.0.0.1 - 5.0.0.4) and the shared anycast address (A: 1.0.0.1).)

Bare-metal or white-box switches are standard hardware based on industry-standard silicon, running the Cumulus Linux network operating system (NOS) (http://cumulusnetworks.com). The XCloud Conductor agent runs on top of the NOS and communicates with the centralized controller through an encrypted tunnel over the Internet.


Servers can run almost any Linux distribution, or Windows Server starting from 2012 R2. The FRR (Free Range Routing) package, pre-configured with standard settings, is used to establish BGP sessions with the switches. FRR is an open-source IP routing protocol suite for Linux/Unix systems (http://frrouting.org). Windows machines can use the built-in BGP service.
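
As an illustration only, the short sketch below renders a minimal frr.conf fragment of the kind a server could use to advertise its two loopback /32s. The AS numbers, neighbor addresses and exact statements are assumptions made for this sketch; they are not the pre-configured settings shipped with the platform.

```python
# Hypothetical values purely for illustration -- not the shipped defaults.
SERVER_ASN = 65101
SWITCH_ASN = 65001
SWITCH_NEIGHBORS = ["10.1.1.1", "10.1.2.1"]   # one per top-of-rack switch
UNICAST_LOOPBACK = "5.0.0.1/32"
ANYCAST_LOOPBACK = "1.0.0.1/32"

def render_frr_conf() -> str:
    """Render an illustrative frr.conf fragment advertising both loopbacks."""
    lines = [f"router bgp {SERVER_ASN}"]
    for neighbor in SWITCH_NEIGHBORS:
        lines.append(f" neighbor {neighbor} remote-as {SWITCH_ASN}")
    lines += [
        " address-family ipv4 unicast",
        f"  network {UNICAST_LOOPBACK}",
        f"  network {ANYCAST_LOOPBACK}",
        " exit-address-family",
    ]
    return "\n".join(lines)

print(render_frr_conf())
```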


CONTENT DELIVERY NETWORK: THE INTERNET

(Diagram: multiple CDN nodes connected to each other through the Internet.)

As shown in the previous diagram, the CDN node switches learn /32 Anycast routes from every CDN server. Because global routing policies do not allow prefixes longer than /24 to be advertised into the global routing table, the CDN switches aggregate all local /32 prefixes into a single /24, which is advertised to the ISP/IXP.

If all servers of a particular CDN node are down or manually placed in maintenance, the CDN switches stop advertising the aggregate /24 to the ISP/IXP. Traffic is then re-routed globally to other CDN nodes across the world, providing resiliency at global scale.
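
That aggregation and withdrawal logic can be sketched as follows. The prefix values are the illustrative ones used throughout this paper, and in practice the behaviour comes from the switches' BGP aggregation and advertisement configuration rather than application code.

```python
from ipaddress import ip_network

AGGREGATE = ip_network("1.0.0.0/24")       # anycast subnet announced to the ISP/IXP

def advertise_aggregate(installed_anycast_routes: set) -> bool:
    """Announce the /24 only while at least one healthy server still
    contributes an anycast /32 that falls inside the aggregate."""
    return any(
        ip_network(route).subnet_of(AGGREGATE) for route in installed_anycast_routes
    )

print(advertise_aggregate({"1.0.0.1/32"}))  # True  -> keep announcing the /24
print(advertise_aggregate(set()))           # False -> withdraw; traffic shifts to other nodes
```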

XCloud Networks - Infrastructures are FAST, SIMPLE, COST-EFFECTIVE, ELASTIC


CONTENT DELIVERY NETWORK: IT’S EASY

CONTACT US

REQUEST A DEMO

DEPLOY OWN CDN

it’s easy

https://xcloudnetworks.com