8/6/2019 Next Generation Internet_revised
1/97
NEXT GENERATION INTERNET
Contents
Introduction
Working principles of Internet2
Stages of development of Internet2
Background:
This chapter deals with the reasons behind the creation of the next-generation Internet, its genesis, and the various stages involved in creating and deploying a working model of Internet2.
1.1 Introduction
What is Internet2?
Internet2 is a not-for-profit advanced networking consortium led by US higher education
universities, in partnership with government and industry, working together to develop and deploy
advanced network applications and technologies, thereby creating tomorrow's Internet. In 2009,
Internet2 member rolls included over 200 higher education institutions, over 40 members from
industry, over 30 research and education network and connector organizations, and over 50 affiliate
members (the list is provided below). The global scale of the collaboration has led to the physical
interconnection of nearly 30 countries, creating a worldwide community of advanced Internet
development.
Internet2 is built on the following five principles:
1. Address the advanced networking needs and interests of the research and education
community.
Since Internet2 is a university-led organization, it is aware that most scientific
fields, such as genomics, genetics, astronomy and particle physics, need
networking capabilities significantly faster than those available from the
commercial Internet. In addition, high-speed networks are an important
prerequisite for the development of new teaching methods in academia.
Internet2 aims to meet the scientific community's need for high-speed networks
by encouraging their development and deployment.
2. Provide leadership in the evolution of global Internet.
Today's global Internet evolved from the collaboration of scientists at CERN. Similarly,
one of the goals of Internet2 is to serve as a prototype as well as a testing ground for the
future development of the Internet. Just as the first-generation Internet served as a
proof of concept for underlying technologies and protocols like TCP/IP, the WWW, e-mail
and the Domain Name System, so too does the Internet2 community function as a large-scale
model for testing current advanced technologies. In this role Internet2 serves
as an advocate for principles like the advanced end-to-end architecture that is extremely
important for the Internet's continued improvement and growth. End-to-end architecture
is the consistent and uninterrupted ability of any Internet device to connect to another
without intermediaries like firewalls, caches, or network address translators being
inserted in the communications path and interfering with device and application
performance.
3. Implement a systems approach to a scalable and vertically integrated advanced
networking infrastructure.
Today's Internet faces end-to-end security and performance issues whose solution
requires an integrated approach. Internet2, with its collaboration between academia and
industry, is well suited to tackle these issues without compromising its foundational
principles of continued growth and innovation.
4. Create strategic relationships among academia, industry and government.
The current Internet was created as a collaboration between academia, industry and
government. Internet2 builds on this partnership by providing a framework within
which individuals and organizations can work together on new networking technologies
and advanced applications. The Internet, in addition to being a tool for research and
education, has also become an indispensable tool for international commerce and
communication. Internet2 fosters and improves the partnerships that address the
complex interests crucial to the development of the Internet.
5. Catalyze activities that cannot be accomplished by individual organizations.
Internet2 serves as a keystone and framework for increasing the effectiveness of its
members' collective efforts. It performs these functions by supporting working
groups and initiatives, convening workshops and meetings, and offering a base of
operations for projects that serve the entire Internet2 community. As an organization,
Internet2 focuses on deployable, scalable and sustainable technologies and solutions.
1.2 Internet development spiral
Figure 1 - Internet Development Spiral
There are four phases in the Internet development spiral. They are:
1. Research and Development:
This phase is the initial stage of Internet development. It takes place in university,
government and industrial laboratories.
NEXT GENERATION INTERNET
Architecture
Contents
Background
Backbone Architecture
Network Management and Control Plane
Internet2 Subnet Models
Conclusion
Background:
This chapter deals with two of the backbone technologies available in the current scenario: the
infrastructure used and the organizations involved in setting up the backbone.
NEXT GENERATION INTERNET
Architecture
Contents: Backbone
Introduction
The Internet2 Network (Abilene Network)
vBNS (very high-speed Backbone Network Service)
Conclusion
2.2.2 The Internet2 Network (Abilene Network)
Figure 2 - Internet2 network
Abilene is a partnership between Indiana University, Juniper Networks, Cisco Systems, Nortel
Networks and Qwest Communications. As the figure indicates, the Abilene network is a nationwide
high-performance backbone network operated by the Internet2 consortium. In 2007, the name
Abilene Network was retired as the network transitioned to an upgraded infrastructure
utilizing Level 3 Communications' fiber optic cable. The upgraded network is known as the
Internet2 Network.
The backbone connects regional network aggregation points, called gigaPoPs, to support the work
of Internet2 universities as they develop advanced Internet applications. GigaPoPs act as the
neurons in the central nervous system of Internet2: they send information to each other in packet
bursts, and the data is reassembled at the destination into its original form. Internet2 consists
of dozens of these gigaPoPs connected to each other by fiber optics. A gigaPoP is a one-stop
connection point that provides exceedingly cost-effective access to the major national commodity
Internet Service Providers (ISPs), as well as to aggregation pools and mechanisms that ensure
alternate data paths, data paths of especially high quality, end-to-end performance for specific
applications, and links to partners.
2007 Infrastructure Upgrade
Previously, the Abilene project used optical fiber networks provided by Qwest
Communications. In March 2006, Internet2 announced that it was planning to move its
infrastructure to Level 3 Communications. Unlike the previous architecture, Level 3 manages and
operates an Infinera-based DWDM system devoted to Internet2. Internet2 controls and
uses the 40-lambda capacity to provide IP backbone connectivity as well as transport for a new
SONET-based dynamic provisioning network built on the Ciena CoreDirector platform. The IP
network continues to be based on the Juniper Networks T640 routing platform.
When the transition to the new Level 3-based infrastructure was completed in 2007, the name
"Abilene Network" was changed to "Internet2 Network".
2.2.3 vBNS (very high-speed Backbone Network Service)
Figure 3 - vBNS Backbone Network Map
The vBNS is the other major network backbone of Internet2 and is just as capable as the Internet2
Network (Abilene) in aspects such as speed, reliability, and native multicasting.
According to the vBNS website (vBNS, 2000),
"vBNS+ is a network that supports high-performance, high-bandwidth applications. Originating in
1995 as the vBNS, vBNS+ is the product of a five-year cooperative agreement between MCI
Worldcom and the National Science Foundation. Now Business can experience the same
unparalleled speed, performance and reliability enjoyed by the Supercomputer Centers, Research
Organizations and Academic Institutions that were part of the vBNS."
vBNS+ may be the first step toward getting Internet2 technology out to the general population.
Anyone can purchase an OC-3 connection to vBNS+, although the price is still hefty
($21,600/month). It is still used by Internet2, but commercial businesses can now connect to it.
Although it was probably commercialized solely to recover some of its expenses, commercialization
has had the unintentional effect of making vBNS+ a sort of intermediate developmental stage. It
isn't difficult to imagine Abilene remaining the research network in years to come, leaving the
universities their own playground, while vBNS+ becomes the source of high-speed connections
for the ordinary customer. While most people are probably not willing to pay such a large sum of
money each month, even for a product as capable as vBNS+, more people will connect to it as the
price comes down. The vBNS+ map shown in the figure above depicts its connections.
NEXT GENERATION INTERNET
Architecture
Contents: Network Management and Control
Background
Middleware
4D Architecture
Maestro
Conclusion
Background:
Middleware is the software glue that holds the various technologies and applications together. This
chapter deals with the concepts used in setting up the fundamental policies of a middleware
framework.
2.3.2 Middleware
Middleware is the software that binds network applications together: an umbrella term for
the layer of software between applications and the network. It provides services such
as identification, authentication, authorization, directories and security. In today's Internet,
applications are usually required to provide security features themselves, which leads to
competing and incompatible standards. Internet2 encourages standardization and
interoperability in the middleware, making advanced network applications much easier to
handle. The Internet2 Middleware Initiative (I2-MI) is working toward the use of core middleware
services at Internet2 universities. Shibboleth, released in 2003, is one of these initiatives.
Shibboleth is an open source package that supports sharing of web resources between
institutions subject to access controls. It is working on establishing single sign-on technologies
and other ways to authenticate users across the network. Version 1.2 of the software is now
available for use. The following concepts are fundamental to Shibboleth's policy framework:
1. Federated Administration: The origin campus (home to the browser user) provides details
about the user's attributes to the target site. A certain level of trust exists
between campuses, which allows them to identify the user and set a trust level for
that particular user. This trust, or combined set of security policies, is the framework for
the federation. The campuses are spread widely over the network, so a single technical
approach or a centralized solution is not feasible; origin sites are therefore responsible for
the security of their own users and are allowed their own ways of providing it.
2. Access control based on attributes: As mentioned in the previous concept, access-control
decisions are made using the attributes the origin site provides about the user.
These attributes may include identity, but not all sites require identity. Shibboleth has defined
a standard set of attributes. The first set is based on the eduPerson object class, which
includes attributes widely used in higher education.
3. Active management of privacy: The origin site and the browser user control the
information released to the target. The usual default is membership of a particular community,
i.e. the user must be a member of the university as a student or faculty. Individuals can
then manage attribute release via a web-based user interface. This means that users are
no longer dependent on the security policies of the target website.
4. Reliance on standards: Shibboleth uses the industry-standard OpenSAML implementation for
message and assertion formats and protocol bindings, which are based on the Security
Assertion Markup Language (SAML) developed by the OASIS Security Services Technical
Committee.
5. Framework for multiple, scalable trust and policy sets (clubs): Shibboleth uses the
concept of a club to specify a set of parties who have agreed to a common set of policies. A
given site can belong to multiple clubs, giving it more flexibility in operation. This
concept expands the trust framework beyond bilateral arrangements and provides
flexibility when different situations require different policy sets.
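The attribute-release and attribute-based authorization flow of concepts 1-3 can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the class names and the allowed-attributes parameter are invented for the sketch, and only the eduPersonAffiliation attribute name comes from the eduPerson object class mentioned above.

```python
# Hypothetical sketch of Shibboleth-style attribute release and
# attribute-based authorization. Class names and parameters are invented
# for illustration; only "eduPersonAffiliation" comes from the eduPerson
# object class referenced in the text.

class OriginSite:
    """Home campus: authenticates users and releases selected attributes."""
    def __init__(self, users):
        self.users = users  # username -> full attribute set

    def release_attributes(self, username, allowed):
        # Active privacy management: only attributes the user has agreed
        # to release are sent to the target site.
        attrs = self.users[username]
        return {k: v for k, v in attrs.items() if k in allowed}

class TargetSite:
    """Resource provider: authorizes on attributes, not identity."""
    def __init__(self, accepted_affiliations):
        self.accepted_affiliations = accepted_affiliations

    def authorize(self, attributes):
        # Access is granted on membership (student/faculty), without the
        # user's name or other personal details.
        return attributes.get("eduPersonAffiliation") in self.accepted_affiliations

origin = OriginSite({"alice": {
    "displayName": "Alice",
    "eduPersonAffiliation": "student",
    "mail": "alice@example.edu",
}})
target = TargetSite(accepted_affiliations={"student", "faculty"})

# Alice releases only her affiliation, keeping name and mail private.
released = origin.release_attributes("alice", allowed={"eduPersonAffiliation"})
print(released)                    # {'eduPersonAffiliation': 'student'}
print(target.authorize(released))  # True
```

The point of the sketch is the division of responsibility: the origin decides what to release, while the target decides access from whatever attributes arrive.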
Figure 4 - Example of Federated Enterprise
This shared trust environment is illustrated in the figure above. Federation is the basic framework
for higher education in general and for Internet technology in particular. This federated approach
to administration is now gaining widespread acceptance in academia as well as in the corporate
sector. One example is the Liberty Alliance, a consortium of over 150 companies
defining standards for secure and interoperable federations, of which Internet2 is also a member.
Federated administration ultimately benefits the end user, as it allows the user a
uniform single sign-on method to access network applications provided by external partners within
the federation. The user also controls what attributes he or she sends to the
target site. Authorization may be based on membership in a group (student/faculty) rather than on a
person's personal information.
2.3.3 Four Dimensional Architecture:
The 4D approach takes a revolutionary rather than evolutionary path, starting from a very specific
observation about the current Internet architecture: it is box-centric. Routers, switches, and the
management and control planes act as independent boxes that interact with each other. This
box-centric approach has the following disadvantages.
1> Since the boxes are independent, they must be configured manually, but manual
configuration is error-prone. In a large network, manual configuration is bound to
produce errors.
2> Whenever the network topology changes, context-specific manual reconfiguration is
needed.
3> Protocol implementations do not follow a policy language. To make a protocol respond
according to policy, we have to change the input parameters of the protocol.
4> In addition to the above, network troubleshooting is difficult, and isolating failures
is a hard task in large networks.
Due to the lack of sufficient mechanisms and proper interfaces between inter-domain and
intra-domain protocols, the current Internet architecture suffers from instability.
The 4-D architecture was proposed within the FIND research initiative. The four D's in the 4D
architecture are Data, Discovery, Dissemination and Decision. It is a centralized
architecture that enforces control over distributed entities to meet network-level policy
requirements. The planes can be explained as follows:
4-D Architecture
1> Data Plane: handles individual packets, processing them according to state supplied by
the decision plane. This state can include routing tables, the placement of packet
filters, and address translations.
2> Decision Plane: computes that state. Taking the network topology and network-level
policies as input, it computes a decision for each element of the output state, e.g.
packet-filter placement.
3> Discovery Plane: can be imagined as a scout that maintains network views and discovers
router characteristics, neighbors, and link-layer topology.
4> Dissemination Plane: provides a channel linking each network node to the decision
elements, using the discovery plane's findings as input.
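A minimal sketch of how the four planes could divide the work, on a toy three-router topology; the function names and data structures here are invented for illustration and are not part of any actual 4D implementation:

```python
# Toy sketch of the 4D split: the discovery plane reports topology, the
# decision plane centrally computes forwarding state, and the dissemination
# plane pushes that state down to the data-plane elements (routers).
# All names and link costs are invented for illustration.
import heapq

def discovery_plane():
    # Discovered network view: link -> cost (normally learned dynamically).
    return {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5}

def decision_plane(links):
    # Centralized shortest-path computation (Dijkstra) for every router.
    graph = {}
    for (u, v), cost in links.items():
        graph.setdefault(u, []).append((v, cost))
        graph.setdefault(v, []).append((u, cost))
    tables = {}
    for src in graph:
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue
            for nbr, cost in graph[node]:
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    prev[nbr] = node
                    heapq.heappush(heap, (nd, nbr))
        # Walk predecessors back to find the first hop from src to each dest.
        table = {}
        for dest in graph:
            if dest == src or dest not in prev:
                continue
            hop = dest
            while prev[hop] != src:
                hop = prev[hop]
            table[dest] = hop
        tables[src] = table
    return tables

def dissemination_plane(tables, routers):
    # Push the centrally computed state down to each data-plane element.
    for name, router in routers.items():
        router.update(tables[name])

routers = {"A": {}, "B": {}, "C": {}}  # data plane: per-router forwarding state
dissemination_plane(decision_plane(discovery_plane()), routers)
print(routers["A"])  # {'B': 'B', 'C': 'B'} -- A reaches C via B (cost 2 < 5)
```

Note that no router ran a distributed protocol here: the routers hold only the state the decision plane computed for them, which is exactly the inversion 4D argues for.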
The centralized architecture can make decisions based on network topology and
organizational policies. With the help of the dissemination plane, the decision plane can make
decisions that are eventually executed by the data plane. The decision plane can also re-evaluate
its decisions and introduce additional measures to enforce the policies.
The idea of the decision and dissemination planes has also been extended by making network devices
like routers behave more like simple forwarders, controlled and managed by a Routing
Control Platform.
Although the 4-D architecture is quite impressive, it has known scalability issues. For example,
the discovery plane relies on network broadcasts, but in a huge subnet flooding is not a
feasible option. This can, however, be remedied by using a DHT-based network architecture.
Conclusion: The 4D architecture is, at its core, the idea of making the network more knowledgeable
and intelligent. Complexity-Oblivious Network Management (CONMan) is another such architecture
built on the foundations of 4D.
2.3.4 Maestro:
The Maestro architecture takes an operating-system view of network control and management.
Like a standard operating system, which supports scheduling, synchronization, inter-application
communication and resource allocation, Maestro tries to do the same for the network.
Maestro is a clean-slate architecture and, unlike contemporaries such as 4D and CONMan, it
has explicit mechanisms for handling network invariants. This provides buffering against
configuration errors propagating from the higher level to the lower level.
Maestro: Architecture
As seen in the figure, Maestro uses a Meta-Management System (MMS) similar to 4D's dissemination
plane, creating a channel between network devices and decisions. Like 4D, it also has a discovery
mechanism to acquire knowledge of the network topology and information about the network from the
MMS. Based on these inputs, the operating system creates a virtual view for the control
applications running on top of it. Each application, depending on its requirements, is provided
the relevant view of the network.
The major difference between the 4D architecture and Maestro is that 4D is a monolithic
architecture, while Maestro supports multiple functions by using an
operating-system approach, with network-level invariants synchronizing functions and buffering
errors.
NEXT GENERATION INTERNET
Architecture
Contents: Internet2 Subnet models
Background
Content Centric Internet
World Wide Wisdom
Edge based Next Generation model
Virtualization based Next Generation model
Conclusion
CONTENT CENTRIC
NEXT GENERATION INTERNET
Contents: Internet2 Subnet models
Introduction
Principles of content centric Internet
Content Naming
Content Routing
Content Delivery
Content Distribution
Conclusion
Background:
There are several approaches currently being researched. Most of them follow either the clean-slate
way or the evolutionary deployment. The novel way is the approach in which a clean-slate
architecture is deployed. The other makes use of existing concepts such as cognitive
computing and cloud computing; these concepts already exist, but adapting them for the Next
Generation Internet is still a challenge and yet to be deployed. The novel or clean-slate
architecture provides sound reasoning for its success, but since the Internet is very large,
implementing a clean-slate architecture will be a bigger challenge than the evolutionary way.
The content-centric approach discussed in this topic is a clean-slate architecture deployment.
It rests on the argument that the Internet is sought mostly for content; hence the
architecture should be focused on content. The following topic will deal with how this can be
achieved.
2.4.1 CONTENT CENTRIC NEXT GENERATION INTERNET
Introduction:
Next-generation Internet technology is required because the current Internet technology cannot
manage the increasing number of Internet subscribers. The Internet today is heavily used, and the
number of its users is growing in great volumes. The measures taken today, especially the ad-hoc
mechanisms described briefly below, cannot cope with increasing Internet
demands. Thus an altogether different Internet architecture is being proposed.
When Internet usage is observed, it is seen that most Internet traffic involves accessing
the Internet for data, or more specifically for content. The amount of content is increasing
rapidly. Thus one can infer that most Internet traffic is content-centric, and in this way content
delivery becomes a critical part of Internet design today. The traffic is basically HTTP traffic,
plus some traffic used for locating the content and finding a suitable content delivery method.
Coming back to our discussion of the current policies for supporting increasing user demands, we
see that these ad-hoc policies have to violate the current network-architecture rules. For example,
the DNS, which resides in the application layer, has to find out about routing information to
facilitate content delivery. The scenario can be explained as follows: suppose we need to access a
website. If conventional content routing is used, it causes a lot of overhead. The client first
accesses the authoritative name server; then it issues another query for a nearby content server,
costing another round-trip time; and next it may be redirected to yet another web server. All of
this makes the entire content delivery very slow.
So dynamic content delivery is used instead, in which the DNS emulates a router and performs
content routing. We gain a lot of flexibility by violating a basic law of network architecture,
namely that higher layers get services from lower layers: we effectively have routing in both
layers. Another such violation is transparent caching, in which ISPs cache network
traffic without the consent of users and without checking the browser's settings. In a
way it forces users into caching, which may cause a security issue. Thus we see that, in order
to maintain the flexibility of the network, transport-level connections are hijacked.
Lookup delay:
In the address-centric scenario, the client gets the content only after mediation through the
DNS. It has to query the DNS server to find the IP address of the requested site; the DNS then
finds the IP address by searching the domains and their respective IP addresses. Only after the
IP address is acquired do we route and request the content. This results in considerable delay:
even with a faster Internet, the successive queries and request-responses pose a problem. If the
network layer were content-centric, we would not need the DNS server, since the network would
know the content by name and route directly to it, removing the lookup-time overhead.
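The extra mediation step can be caricatured in a few lines. The dictionaries below stand in for a DNS server, an address-based network, and a name-based network; every name and value here is invented for illustration.

```python
# Toy comparison of the two lookup paths (illustrative values only).

def address_centric(name, dns, servers):
    ip = dns[name]      # extra step: resolve the name to an IP address first
    return servers[ip]  # then request the content from that address

def content_centric(name, network):
    return network[name]  # the network routes on the content name directly

dns = {"bob.smith.video": "10.0.0.7"}
servers = {"10.0.0.7": b"video-bytes"}
network = {"bob.smith.video": b"video-bytes"}

# Both paths yield the same content; the address-centric one pays an extra
# resolution round trip before the content request can even be sent.
assert address_centric("bob.smith.video", dns, servers) == \
       content_centric("bob.smith.video", network)
```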
Mobility:
An IP address identifies both a location and an end point. In other words, whenever the location
changes the IP address changes, so we must be redirected at every IP-address change. If we
instead make the network content-centric, we achieve greater mobility, since the content name
won't change with location.
Security:
In the current Internet scenario, for content obtained through the Internet the user has to trust
the content provider rather than the actual content. A simple scenario: if we search for the eBay
website through Google, we trust the search engine, and then when we click the link
provided we trust the DNS server. In this way phishing of a website can take place. In a
content-centric network, the security information comes embedded with the content. Even if the
content is provided by an untrusted server, it will be validated by the customer by checking the
security information in the content. This also gives an advantage over the address-centric
infrastructure by allowing replication of secured content: it not only gives flexibility but also
ensures that replicated content is secure. In the next section we outline the principles of a
content-centric Internet.
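One simple way to embed validation in the content itself is a hash-based, self-certifying name. The sketch below is an illustrative simplification, not the scheme any particular proposal mandates:

```python
# Illustrative sketch of security embedded in the content: a self-certifying
# name binds the name to a digest of the content, so the consumer validates
# the data itself and need not trust the delivering server or cache.
import hashlib

def publish(data: bytes) -> str:
    # The name embeds a cryptographic digest of the content.
    return "sha256:" + hashlib.sha256(data).hexdigest()

def validate(name: str, data: bytes) -> bool:
    # Re-hash whatever was received and compare it against the name.
    return name == "sha256:" + hashlib.sha256(data).hexdigest()

original = b"bob.smith.video frame data"
name = publish(original)

print(validate(name, original))            # True: genuine replica, from any server
print(validate(name, b"phished content"))  # False: a tampered copy is rejected
```

Because validation depends only on the name and the bytes received, any router or cache can serve a replica without weakening security, which is exactly the replication advantage described above.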
Principles of the content-centric network layer:
Since in the content-centric scenario our interest is in the content, we base our principles on
content.
1> Instead of addressing hosts by IP address, we use the content name as an address.
2> For routing, we use the destination content name instead of the destination IP address.
3> Since we use the content name as an identity, security will be embedded in the
content. This also prevents the use of fake content names.
4> The design provides more and more address-less hosts to which we deliver the
contents.
5> Providing caching to achieve efficiency in content delivery.
The above principles can be lucidly explained by the following illustration:
Bob stores bob.smith.video on his device. The device is connected to more than one router.
Alice wishes to view the contents of bob.smith.video and requests it directly at the network
layer. The routers route the request, based on the name bob.smith.video, to the destination
device. The destination device then forwards the content to Alice. In this entire process we have
relied on routers to route the request using the content's name alone. The content-centric
Internet does intend to make use of the network layer for identifying content by its name alone.
It is now important to consider how to name contents so that routing can be effective. The
previous example is only illustrative of how content routing will take place. We deal next with
the issues and approaches for content naming.
Content Naming:
Since the content's name is central to the next-generation Internet, naming the content becomes a
big issue that needs to be resolved first. Since naming is important to the user, the name should
carry the protocol as a parameter. To decide how naming should work, we take the help of Zooko's
Triangle, in which we can have only two of the three corners and must let go of the third.
Figure: Zooko's Triangle, with corners Security, Memorable, and Global/Decentralized.
Security: the name addresses only that content; in other words, the content cannot be duplicated
under the name.
Memorable: the name can be easily remembered and hence, we can say, easily accessed.
Global/Decentralized: names can be chosen at will; they need not be assigned by a central
naming authority.
The issue in naming is that in the current scenario, when we access a website we trust the DNS
server; since naming there is authoritative, we can trust the server. In the content-centric
domain, however, we have to choose two corners of Zooko's triangle.
If we name content based on Memorable and Global and let go of Security, we have a major
issue: since the content's name is not secure, an untrusted router may cache the content of an
original website and pose as the original. Hence it is important to keep security. Now we must
decide between decentralization and memorability. Since both are equally important, the choice
depends on the application: we have either secure-memorable or secure-decentralized names.
For proprietary websites, like the website of a company, we make names memorable-secure.
For content that takes the normal approach, for example status updates, we make names
secure-decentralized. However, secure-memorable and secure-decentralized names do become an
issue when both must coexist in the same network.
Content Routing:
The main essence of the content-centric Internet lies in content routing, i.e. how routing should
take place using the name of the content. The primary objective in content routing is to forward
content based on the requests issued by hosts, routing on the name only. Currently two types of
architecture are being proposed: advertise-based and rendezvous-based. We discuss the basic
concepts of each.
Advertise-based routing:
In advertise-based routing we keep the traditional routing concept, but instead of advertising and
routing by IP address, we route based on the content name. In this context the routing
table won't have IP-address entries; it will have content names as entries.
In advertise-based routing we maintain the same network-topology concepts as OSPF and
BGP; the only change is that instead of IP addresses we use content names. Advertise-based
architectures are a feasible solution in terms of transport, but there are routing and scalability
issues. The number of routing-table entries for content names is considerably large, and
effective mechanisms are needed to compress routing tables. Stability is also a concern, since
convergence delay for content-based routing is worse than for IP-based routing. So the major
tradeoff in advertise-based architecture is scalability and routing, and its greatest advantage
is that it is good for the request-response type.
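A sketch of what an advertise-based forwarding table might look like, with longest-prefix matching done on name components instead of address bits; the content names and interface labels are hypothetical:

```python
# Sketch of advertise-based content routing: the forwarding table is keyed
# by content-name prefixes instead of IP prefixes, matched longest-first.
# Names and interface labels are invented for illustration.

routing_table = {
    "bob.smith":       "if0",  # advertisement covering Bob's whole namespace
    "bob.smith.video": "if1",  # more specific advertisement
    "alice.photos":    "if2",
}

def forward(content_name: str) -> str:
    # Longest-prefix match on name components, as IP routers do on bits.
    parts = content_name.split(".")
    for n in range(len(parts), 0, -1):
        prefix = ".".join(parts[:n])
        if prefix in routing_table:
            return routing_table[prefix]
    raise LookupError("no route advertised for " + content_name)

print(forward("bob.smith.video.part1"))  # if1 (the most specific prefix wins)
print(forward("bob.smith.resume"))       # if0
```

The scalability concern in the text shows up directly here: unlike IP prefixes, content-name prefixes number in the billions, so a real table would need aggressive aggregation and compression.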
Rendezvous-based architecture:
Another type of architecture is the rendezvous-based architecture. A rendezvous is basically a
meeting place for two parties. In this architecture a rendezvous node holds the name of the content
and any other content-related information that is needed. The rendezvous node is located using
standardized functions, and all content requests are first forwarded to it by the network layer. The
rendezvous node thus acts as an intermediary that handles all transactions, dividing the network
path into a user-to-rendezvous segment and a rendezvous-to-content segment. These routing
protocols are inspired by overlay networks.
The rendezvous-based architecture is suitable for publish-subscribe traffic, but it cannot handle
request-response traffic as fast as the advertise-based architecture, since each request is first
forwarded to the rendezvous node before being routed to the content provider.
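The rendezvous mechanism can be sketched as below. The "standardized function" that maps a content name to its rendezvous node is modelled here, purely for illustration, as a hash of the name modulo the number of rendezvous nodes; the node names, content names and directory structure are all invented.

```python
import hashlib

# Invented set of rendezvous nodes for the example.
RENDEZVOUS_NODES = ["rv0", "rv1", "rv2", "rv3"]

def rendezvous_node(content_name):
    """Standardized function: map a content name to its rendezvous node."""
    digest = hashlib.sha256(content_name.encode()).digest()
    return RENDEZVOUS_NODES[digest[0] % len(RENDEZVOUS_NODES)]

def publish(directory, content_name, provider):
    """A provider registers its content at the content's rendezvous node."""
    directory.setdefault(rendezvous_node(content_name), {})[content_name] = provider

def subscribe(directory, content_name):
    """A request is first forwarded to the rendezvous node, which
    resolves the provider so the content can then be routed onward."""
    return directory.get(rendezvous_node(content_name), {}).get(content_name)

directory = {}
publish(directory, "/movies/clip.mp4", "provider-A")
print(subscribe(directory, "/movies/clip.mp4"))  # provider-A
```

The extra hop through the rendezvous node is visible in `subscribe`, which is exactly why this architecture handles request-response traffic more slowly than advertise-based routing.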
Content Delivery:
Content delivery is concerned with how to forward content from its storage point to the host.
Since we no longer keep IP addresses, and the routing tables contain only content names and carry
no information about the host, it becomes a challenge to transfer content back to the host. The
literature therefore takes refuge in a temporary response channel: the response channel interfaces
with the content and takes responsibility for following it from the storage device to the host. This
is accomplished in one of two ways.
The first approach applies the idea of source routing, in which the specific routes or hop sequences
are carried in the header of the data units transporting the content. The second approach is for each
downstream node to store only the next-hop link-layer interface.
Another potential problem with content delivery is that the network is subject to congestion, so
congestion-control mechanisms are introduced into the delivery protocol. Contents are sent and
received through a request-response exchange using fixed-size data units, e.g., 512 bytes. This
strategy prevents loss of data and provides flexibility in controlling congestion.
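The fixed-size request-response delivery described above can be sketched as follows: the receiving host pulls one 512-byte chunk per request, which bounds the data in flight and gives a natural hook for congestion control. The content store, names and the pull loop are invented for the example.

```python
CHUNK_SIZE = 512  # fixed data-unit size, as suggested in the text

def serve_chunk(content, index):
    """Storage node: return the requested chunk, or b'' past the end."""
    start = index * CHUNK_SIZE
    return content[start:start + CHUNK_SIZE]

def fetch(content_store, name):
    """Host: request chunks one by one until the content is exhausted."""
    data, index = b"", 0
    while True:
        chunk = serve_chunk(content_store[name], index)
        if not chunk:          # empty chunk signals end of content
            return data
        data += chunk
        index += 1

store = {"/doc/report.pdf": b"x" * 1300}
print(len(fetch(store, "/doc/report.pdf")))  # 1300
```

Because each chunk is explicitly requested, a receiver could slow its request rate when the network is congested, which is the flexibility the text alludes to.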
Content Distribution:
Content distribution deals with caching content in the network so as to shorten end-to-end paths
and thereby lower latency. This concept is popularly known as in-network caching. The router thus
becomes a device with some amount of memory assigned to storing content. It is therefore also
important to give the content's network data units a structure that makes sequencing and caching
relatively easy; a chunk size of around 256-512 kB is recommended.
In-network caching is performed either in an autonomous way or in a coordinated way. In the
autonomous method, a locally running algorithm caches a data unit at the router closest to the host.
Its greatest disadvantage is that all nearby routers end up caching the same content.
In the coordinated technique, caching algorithms decide where each data unit should be cached.
Content distribution (in-network caching) must also cooperate with content routing, which is
achieved using either the advertise-based or the rendezvous-based architecture. In the
advertise-based architecture a router advertises whenever it caches content; in the
rendezvous-based architecture the routing algorithm forwards user requests toward a rendezvous
node, which knows the content's location and forwards the request to the appropriate router. If an
intermediate router holds the content, it serves the request itself.
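Autonomous in-network caching can be sketched as a router that keeps a small least-recently-used (LRU) store of recently forwarded content and answers from it on a hit, only forwarding the request upstream on a miss. The class, capacity and content names are invented; LRU is just one plausible local policy, not one mandated by the text.

```python
from collections import OrderedDict

class CachingRouter:
    """A router with a small LRU content store (capacity invented)."""

    def __init__(self, capacity, upstream):
        self.cache = OrderedDict()
        self.capacity = capacity
        self.upstream = upstream  # callable: fetch from the next hop

    def request(self, name):
        if name in self.cache:            # cache hit: serve locally
            self.cache.move_to_end(name)
            return self.cache[name]
        data = self.upstream(name)        # cache miss: forward upstream
        self.cache[name] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

origin_hits = []
def origin(name):
    origin_hits.append(name)
    return f"<content of {name}>"

router = CachingRouter(capacity=2, upstream=origin)
router.request("/a")
router.request("/a")
print(len(origin_hits))  # 1 -- the second request was served from the cache
```

The disadvantage noted in the text is visible here too: every router running this logic independently would cache the same popular content, which is what coordinated caching tries to avoid.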
Conclusion
The concept of a content-centric Internet is a novel idea, but it is important to ask whether this
clean-slate approach can satisfy the requirements of Internet users who view the Internet primarily
as a source of content. It is also important to note that this new architecture is meant to replace
TCP/IP in the network architecture, which implies that the future Internet will have software
routers running different network-layer protocols by creating virtual networks.
We now state the general requirements that a CONET (content-centric network) should satisfy:
1> A CONET should have control over the locations where contents, or the links that lead to
them, are stored. This is especially important within a geographical or administrative
domain; we do not want contents to be stored at random nodes.
2> A CONET should advertise its content, although care must be taken to limit this
advertisement to a domain or a definite section of the network.
3> A CONET should support persistent naming (e.g., of a song, movie or book) but should also
support naming by purpose or service, such as a weather service. Content should also be
allowed to change while keeping the same name, e.g., a revised paper.
4> A CONET must be able to delete or update contents. It must also be able to attach an expiry
date to a content so that an old version does not linger in the network alongside its revised
version. It must also let users edit or delete contents, or make them unavailable to the
general public, much as Wikipedia allows its content to be edited or deleted.
5> There should be a way to data-mine contents by version, so that users can access the latest
content under the same content name.
6> A CONET should also support sessions with interactive exchange of data between two
upper-layer entities, e.g., a client and a server. It must do so for data that are unnamed but
important to those entities: CONET does not imply that every piece of content should be
named, since some data are not significant enough to be named and are only used
internally. Thus a CONET should support both content retrieval and traditional services.
7> A CONET should provide built-in caching at each node and at the user terminal, so that
users can get the desired content from anywhere, not necessarily from the original source.
It should also be possible for a user to retrieve content even when disconnected from the
CONET but connected to a node that has the content cached. The CONET must also be
broadly aware of the contents present in the network; this gives network operators more
control and a better handle on network traffic.
The content-centric Internet is a novel approach to the Next Generation Internet, and many
approaches to it have been taken. One approach, discussed next, is an evolutionary one in which
computing and network management are offloaded to the edge routers.
2.4.2 Edge Cloud based Next Generation Internet:
Background:
Structure of the Internet: The Internet can be subdivided into three components: core, edge and
access networks. The core is a backbone of routers supporting multiple telecommunication
interfaces, switching and forwarding at very high rates. The edge routers form the outer concentric
circle around the core network and are closer to the consumer; an edge router may be connected to
more than one core router. The outermost circle is the network of access routers, which connect to
the edge routers. The access routers are mostly concerned with how the consumer uses the
Internet, i.e., the customer's subscription plan, which in turn determines the bandwidth and
data-transfer rates.
Content centric Internet
CDNs are an evolution of the client-server model in which we bring the content closer to the user,
applying an analogy similar to cache memory in a computer. To explain how basic content
delivery takes place, consider the figure given below.
Suppose user A requests content. Since the content is not found at edge router ER1, the request is
routed to the source server and the content is cached at ER1 on its way back. When user B later
requests the same content, instead of fetching it from the source server, it is served directly from
edge router ER1. As seen, not only is the response time improved, but the providing server is
offloaded. Since Internet usage has become mostly content-centric, it is important to deploy a
better CDN in the Next Generation Internet.
The Content Centric Internet approach extends the CDN concept explained above by making
content the main focus. We decouple the location of the content from its identity: content should
be accessible by its identity alone, irrespective of its location. We thus make content distributed
and offload the server. The main goal of this architecture is to make content available to users
while adding services to it. We attempt to provide
intelligence at the edge, transforming content from an unprocessed entity into a value-added
service.
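The two-user scenario above can be sketched in a few lines: user A's request misses at edge router ER1 and goes to the source server (being cached at ER1 on the way back), and user B's identical request is then served from ER1, offloading the server. The cache structure and content names are invented for the example.

```python
edge_cache = {}        # ER1's content cache
server_fetches = 0     # how often the source server is actually hit

def get_content(name):
    """Serve a request via edge router ER1 (illustrative sketch)."""
    global server_fetches
    if name not in edge_cache:          # miss: fetch from the source server
        server_fetches += 1
        edge_cache[name] = f"<{name}>"  # cache at ER1 on the way back
    return edge_cache[name]             # hit, or the freshly cached copy

get_content("/video/v1")   # user A: miss, served via the source server
get_content("/video/v1")   # user B: hit, served directly from ER1
print(server_fetches)      # 1 -- the source server was offloaded for user B
```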
Life at the Edge: So far the content-centric approach has focused on offloading the server and
making servers more available. It is equally advantageous on the client side to offload the required
computation to the edge, allowing a leaner client platform. This can be realized through
virtualization, creating an edge cloud that provides better services. We have thus combined the
content-centric approach, cloud computing and virtualization; by using virtualization we can
merge different overlay platforms into a single infrastructure.
Edge based Next Generation Internet:
The greatest transition from the current Internet architecture to the edge-cloud-based architecture
is that computing is performed only at the edge rather than by the client, making the edge
intelligent. By delegating the computing load to the edge we make the core simpler, reducing its
function to packet forwarding. The cloud is deployed at the edge instead of at the core.
Architecture of the Edge Cloud:
As seen from the figure above, there are three layers: the Access, the Edge and the Core. At the
Access layer, Infrastructure as a Service is provided; the infrastructure offers services such as
storage servers and networks, and this storage service is called the storage cloud. The middle layer
provides Platform as a Service: it builds the platform by virtualizing the underlying infrastructure.
Inside the Edge cloud:
Surrogate:
The term surrogate is defined in RFC 3040 as a gateway co-located with an origin server, or
located at a different point in the network, that is delegated the authority to operate on behalf of
the associated server. Surrogation helps accommodate protocol requirements: the surrogate can
work according to those requirements, removing the constraints from the server.
The surrogate thus supports a wide range of clients, including those with minimal capabilities,
which it achieves through web-based virtualization. From the user's perspective, the surrogate is a
special gateway, as described above, that can provide services such as unified communications and
content-specific services (ads, mash-ups, etc.). The surrogate also provides both computing and
storage.
For example, a web-based GUI shows the available media and the user requests one of them;
delivery is then done either from local storage or from the edge cloud. Since the proposed Internet
is content-centric, live media is captured and streamed using the streaming server. It is also
noteworthy that the surrogate is stateful: it maintains session information. Because the surrogate
hosts the client software, it creates an illusion that service is always continuous even
though at the backend the terminals get disconnected and reconnected, possibly with different
IP addresses.
HTTP:
As seen from the diagram, HTTP is used; we now explain how HTTP is used with the surrogate.
The user interacts through a web-based GUI, which implements the virtual client side on the user
platform (UE) and is supported by the web browser. The edge cloud accordingly implements a
web server that receives the user's input entered through the GUI. The emergence of Google Docs
shows how such a virtual appliance works; a web-browser-based GUI is therefore the most
suitable choice for virtual clients. Since most web pages use markup languages, HTTP is used to
support the transfer.
Content Access:
In this section we give the basic concept of how content access is achieved. Content must be
provided with or without virtualization. This requires content mapping, i.e., associating a server
with a content item, and traffic engineering, i.e., deciding how the content is to be delivered. The
index engine accomplishes the server mapping, while the ISP topology and overlay network
conditions are used to perform the traffic engineering.
The ISP thus provides the physical resources for the edge cloud, though the content provider
should itself have the resources required to provide content, applications and index engines
according to user requests. The edge cloud therefore provides separate interfaces to both the
infrastructure provider and the content provider.
Content Overlays:
Content Distribution Architecture
Depending on the ISP's services, the various edge clouds together form an overlay constituting a
logical content-centric Internet architecture. Overlay networks are networks built on top of other
networks. For successful operation of this model, a control and management plane is fitted
between the overlay network and the infrastructure layer. There are three different roles in this
model, explained as follows:
1> The Infrastructure Provider (InP) provides the infrastructure to the edge cloud. The InP
maintains physical resources such as storage, surrogates and physical links. The InP also
provides an interface to the CP via the Virtual Network Provider. In addition, the InP
transports raw bit streams and processing services to the vendors; in this fashion the ISP is
a potential InP.
2> The Virtual Network Provider (VNP) encapsulates many InPs together and builds a virtual
network on top of them (thereby defining the overlay network); the virtual network is, as
expected, composed of virtual nodes and links. The VNP's function is to provide an
interface to the CPs. The VNP also holds QoS agreements with the InPs for maintaining a
guaranteed level of infrastructure service.
3> The Content Provider's (CP) main function is to maintain the applications of the surrogate
and the storage of the edge cloud. These functions are embedded in the interfaces provided
by the VNP. To facilitate the system, the VNP offers interfaces to the CP at convenient
locations at the edge.
The overlay network described above can thus be summarized: the InPs sit at the bottom layer; the
VNP builds a virtual network over these ISPs; and on top of the VNP sit services such as the
surrogate and storage. In this way multiple VNPs and CPs can exist in parallel.
Issues for Edge based Internet
Several challenges face the implementation of an edge-modeled network. Before implementing
this model we need to address the following issues:
Secured communication between virtual client and surrogate:
The surrogate is located at the edge and not necessarily at the ISP, so security and scalability
become issues. Requiring multiple authentications would resolve this but is not a feasible solution;
the architecture needs a scalable mechanism that provides single sign-on capability. User profiles,
billing and authorization can then also be exchanged between the two parties, yielding more
security.
Secured content management:
Since content in a content-centric network is distributed, securing it becomes difficult to manage.
The proposed architecture must guarantee integrity, authenticity, Digital Rights Management, etc.
Current CDNs provide these services, but the proposed architecture needs self-certifying and
context-based techniques in addition to existing models. In the proposed model, the involvement
of the ISPs allows them to engage in a secured content-delivery model.
Streaming media delivery:
A problem with virtualization is that it provides little support for multimedia applications, as seen
primarily with virtual desktop platforms. In the proposed model HTTP is used as the virtual-client
protocol; the issue is that the HTML5 video and audio tags are protocol
agnostic. One solution is to use the RTP/RTCP protocols, but browsers still do not support
RTP/RTCP-based streaming. Another issue to be resolved concerns the codecs supported by
browsers.
Yet another thing to be resolved is that the application should perform an exchange of capabilities
between the user client and the media server before delivering the content. In our model the
surrogate must know the capabilities of the user terminal before negotiating with the media server.
If the user terminal cannot support the media directly, the content is first delivered to the
surrogate, transcoded there, and then made available to the user terminal.
Performance of the surrogate:
The surrogate is the backbone of the edge-based network: it maintains sessions, connections and
state information for different computations. To improve the performance of the infrastructure we
can follow the grid-computing approach of load balancing, although such a distributed
implementation has issues of its own.
Future benefits of Edge based Internet:
Simple UE: Since computing has been offloaded from the client side, changes no longer need to be
made in the UE (User Equipment); any required changes are made at the edge, which is easy since
the edge is implemented as a cloud.
Works under limited user facilities: Because web-based clients are used, the model works even
where there are organizational restrictions or policing. For example, installing software on college
workstations is often not permitted, yet web-based clients would still work there.
Fixed-mobile convergence: Because a virtual client is used, it works regardless of the type of
network, fixed or mobile.
Future Internet: Multiple implementations of the future Internet can co-exist with this model
through virtualization and service-supported resources.
Reuse of mash-up content: A mash-up, as the name suggests, is an aggregation of data received
from different sources. In this model, data are stored in caches and content repositories, so
mash-up content can be stored as well; the content can be served from the local cache if it is up to
date.
Enhanced security and billing: An end user will have to be authenticated to access the network.
This authentication is done by the ISP using a single sign-on technique. The ISPs are also in a
position to gauge usage for billing: the usage is reported to the content provider and the bill is then
calculated.
computational intelligence, distributed cognitive sensor network and distributed remote control
systems.
VIRTUALIZATION BASED
NEXT GENERATION INTERNET
Contents: Internet2 Subnet models
1. Introduction
2. Components of Virtualization based Architecture
To achieve this, a 4D architecture was proposed. The 4D architecture comprises decision,
dissemination, discovery and data planes; in our scenario we have four planes: control,
management, knowledge and data. So that they do not interfere with each other, and to maintain
and operate them properly, they are implemented as virtual networks. To keep the design from
becoming monolithic, we make it an incremental model.
Virtualization based Network Architecture:
System model and features: The approach to deploying a new network model recognizes that the
current Internet architecture is very large and that changing it is very time consuming; it may take
many years. The solution arrived at is to divide subnets into two parts on the basis of architecture:
the Current Internet (CI) subnet and the Next Generation Internet (NGI) subnet. The CI subnet and
the NGI subnet are logically separated from each other. Because virtualization is used, the CI can
run in parallel while the NGI is deployed and tested. Since the Next Generation Internet is not yet
completely deployed, the user is given the choice between the Next Generation Internet subnet
and the Current Internet.
Virtualization based Architecture:
Coming down to the network topology, the CI (Current Internet) and the NGI (Next Generation
Internet), also named CMMI (Control Manageable and Measurable Internet), have some basic
changes. The physical and MAC layers are kept the same, as are the application, network and data link
layers. In the NGI we add further dimensions: following the 4D architecture we add a knowledge
plane, a control plane and a management plane. The data plane, as seen from the figure, is the
same as in the current Internet architecture. Perception-based network-knowledge processing and
independent network management help solve the issues currently faced in the Internet. These
planes are logically separated from each other. On top of the application layer we add a user
selection plane, which gives the user the option of staying with the current Internet subnet or
switching to the next generation Internet.
User selection plane:
The user selection plane, which resides above the application plane, allows the user to use either
mode: the current Internet subnet mode or the next generation Internet mode. This is easily
implemented by introducing a notification bit called SUB_NET in the IP header, which lets the
underlying architecture understand how to treat the packet. If SUB_NET = 0 is received, the
packet is meant for the current Internet subnet; if it is set to 1, the packet is meant for the next
generation Internet. In the case of SUB_NET = 0, the user selection layer identifies that the packet is meant for the
current Internet subnet and passes it through that subnet. At the receiver side, the selection layer
learns from the SUB_NET field that the packet has come from the current Internet subnet.
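The SUB_NET notification bit can be sketched as follows. The header layout used here (a single flag bit in an 8-bit flags field) is invented for the example; the text only specifies that one bit distinguishes the two subnets.

```python
CI_SUBNET, NGI_SUBNET = 0, 1

def make_header(flags, sub_net):
    """Pack SUB_NET into bit 0 of a hypothetical 8-bit flags field."""
    return (flags & 0xFE) | (sub_net & 0x01)

def classify(header):
    """User selection plane: dispatch a packet on its SUB_NET bit."""
    return "next-generation subnet" if header & 0x01 else "current subnet"

hdr = make_header(0b10100000, NGI_SUBNET)
print(classify(hdr))                          # next-generation subnet
print(classify(make_header(hdr, CI_SUBNET)))  # current subnet
```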
Co-operation among the Data, Knowledge, Management and Control Planes:
As described earlier, four planes are added to the subnet, divided into four subnets by
virtualization. The data plane is responsible for delivering the application data; the others control
and manage the whole network.
1> The data plane manages the transfer and delivery of application data throughout the
network. It does this through an interface, through which users send and receive data.
2> The knowledge plane performs self-analysis and self-learning and also performs network
measurement, thereby providing the network knowledge.
System Modeling and Evaluation:
To implement the above architecture we use Alloy, which provides the required logic and
language. In Alloy, a function entity is used as an abstract entity that undertakes functions related
to the architecture; likewise, layers such as the user selection layer are represented as entities.
These entities, for example the user entity and the selection entity, interact with each other by
means of connectors. The connectors are divided into protocol connectors and service connectors:
the protocol connector is used for horizontal connections and the service connector for vertical
connections. The basic idea is that protocols are exchanges of information between peer nodes and
are hence horizontal, while a layer provides services to the layers above it, hence the
vertical-connection concept for the service connector. After the entities are represented, the Alloy
Analyzer is used to evaluate the model through simulation.
Conclusion:
The virtualization-based architecture approach has been used to solve the control- and
management-plane issues of the current Internet. This future-Internet approach aims to create
individual virtual subnets that deploy the management and control functions. The network
becomes cognitive through the knowledge plane, which measures and collects network status. The
goal is a concrete network architecture that is controllable, manageable and measurable.
The network architecture will be tested and evaluated using Alloy and the Alloy Analyzer, and the
test bed that will be used to implement the architecture is PlanetLab.
A Systems Approach to Internet Architecture:
Internet2 consists of applications, middleware, network devices and the physical network.
Applying a systems approach to Internet2 enables these components to be viewed as a whole. The
systems approach encompasses not just technology but also users and policy, and it allows
improvement in any one area to be leveraged for greater overall gain in user satisfaction. For
instance, the simplicity of the Internet architecture allows users to run applications without any
knowledge of the physical network; if the PC operating system knows how the underlying
network is operating, application performance can be increased further, improving the user
experience as well. As changes occur in the network layer, such as IPv6 and IP multicast, new
applications using these services become available to the user.
This is a continuous cycle for Internet2, in which advanced network facilities create a platform for
better applications and vice versa, as illustrated in the figure. Moreover, end-to-end system
performance and security enhancements cannot be achieved unless all the individual components
are given simultaneous attention.
Contents: Transition from IPv4 to IPv6
1. Introduction
2. Issues with IPv4
3. Features of IPv6
4. Comparison of IPv4 and IPv6
5. Conclusion
3.1 Transition to IPv6 from IPv4
Background:
This chapter deals with the next generation Internet's transition from IPv4 to IPv6. We discuss the
various drawbacks of IPv4 and the advanced features of IPv6.
3.2 Issues with IPv4
An Internet Protocol address is simply a big number that identifies a computer on the Internet.
Packets of data sent across the Internet include the destination address. When we send an e-mail or
watch an online video, any number of computers, switches, routers, and other devices scrutinize
the IP address on these packets and forward them along to their eventual destination. The Internet
currently uses Internet Protocol version 4 (IPv4). IPv4 addresses are 32-bit numbers, meaning that
there are about 4.3 billion possible addresses. This might look like a lot of addresses, but it isn't.
The number has remained the same since 1981: while the available address space has been
constant for 30 years, the number of devices connecting to the Internet has grown exponentially.
Consider an example. Technology giant Apple alone has sold mobile iOS devices (iPad, iPhone,
iPod Touch) in excess of 150 million, not counting all the computers, routers and other
Internet-connected devices sold during the past 30 years. With the pace at which technology is
hurtling along, we have long since run out of IPv4 addresses. At a ceremony in Florida in
February, the last blocks of IPv4 addresses were allocated to the Regional Internet Registries,
whose job it is to further distribute these final addresses to others. The main reason this was not
anticipated is that no one believed so many addresses could ever be exhausted. As Vint Cerf -
Google's chief Internet evangelist, "the father of the Internet," and the person responsible for
choosing 32-bit numbers - said in an interview earlier this year, "Who the hell knew how much
address space we needed?"
Although the lack of address space is the major issue in IPv4, there are several other concerns as
well. IPv4 follows a flat routing infrastructure: individual address prefixes are assigned, and each
address prefix contributes a new entry to the routing tables of the Internet
backbone routers. Configuration is also a major problem in IPv4: IPv4 networks must be
configured either manually or through the Dynamic Host Configuration Protocol (DHCP). Using
DHCP allows the network to be expanded beyond its present capacity, but DHCP itself must also
be configured and managed manually.
Security is another major concern with IPv4. The Internet was originally designed with a friendly
environment in mind, and security was left to the end nodes. For instance, if an application such as
e-mail requires encryption services, it is the responsibility of the e-mail application at the end
nodes to provide them. The original Internet is still transparent, and there is no proper security
framework in place for threats such as Denial of Service (DoS) attacks, malicious code
distribution (worms and viruses), fragmentation attacks and port-scanning attacks. The address
space of a Class C network is so tiny that scanning it to find vulnerable ports to attack takes less
than 4 minutes.
Priority for certain packets, such as special handling for low delay and low delay variance for
voice or video traffic, is possible with IPv4. However, it relies on a new interpretation of the IPv4
Type of Service (ToS) field, which is not supported by all the devices on the network.
Additionally, identification of the packet flow must be done using an upper-layer protocol
identifier such as a TCP or User Datagram Protocol (UDP) port. This additional processing of the
packet by intermediate routers makes forwarding less efficient.
As mentioned above, there has been a proliferation of mobile devices such as phones and music
players that can connect to the Internet from virtually any location. Mobility is a new requirement
for Internet-connected devices: a node should be able to change its address as it changes its
physical attachment to the Internet while still maintaining existing connections. Although there is
a specification for IPv4 mobility, communications with an IPv4 mobile node are inefficient due to
a lack of infrastructure.
These problems are addressed by IPv6. IPv6 is not a superset of IPv4 but a completely new set of
protocols. The various features of IPv6 are described below.
3.3 IPv6 Features
Large Address Space: IPv6 addresses are 128 bits long, creating an address space with
3.4 × 10^38 unique addresses. This is plenty of address space for the foreseeable future, allows all
manner of devices to connect to the Internet without the use of NATs, and lets address space be
allocated internationally in a more equitable manner.
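The address-space figures quoted in this chapter can be checked with two lines of arithmetic: 32-bit IPv4 addresses give about 4.3 billion possibilities, while 128-bit IPv6 addresses give about 3.4 × 10^38.

```python
# Verify the chapter's address-space figures.
ipv4_space = 2 ** 32    # 32-bit IPv4 addresses
ipv6_space = 2 ** 128   # 128-bit IPv6 addresses

print(f"{ipv4_space:,}")    # 4,294,967,296  (~4.3 billion)
print(f"{ipv6_space:.1e}")  # 3.4e+38
```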
Hierarchical Addressing: Global addresses are those IPv6 addresses that are reachable on
the IPv6 portion of the Internet. There is sufficient address space for the hierarchy of
Internet service providers (ISPs) that typically exist between an organization or home and
the backbone of the Internet. Global addresses are designed to be summarizable and
hierarchical, resulting in relatively few routing entries in the routing tables of Internet
backbone routers.
IPv6 defines three addressing modes: unicast, multicast and anycast. A unicast
address is assigned to a single IPv6 node. A multicast address is assigned to multiple
nodes that together form a multicast group; a packet sent to a multicast address must be
delivered to every node in that group. Anycast addressing is similar to multicast, the only
difference being that the packet is delivered to just one node in the group.
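The three modes above are distinguished by address ranges. Python's standard-library `ipaddress` module can classify addresses, as this small sketch shows (note that an anycast address is syntactically indistinguishable from a unicast one, so only multicast can be detected from the address alone):

```python
import ipaddress

# ff02::1 is the well-known "all nodes" link-local multicast group;
# 2001:db8::1 is a unicast address from the documentation prefix.
all_nodes = ipaddress.IPv6Address("ff02::1")
unicast = ipaddress.IPv6Address("2001:db8::1")

print(all_nodes.is_multicast)  # True  -> delivered to every member of the group
print(unicast.is_multicast)    # False -> delivered to exactly one node
```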
Stateless and stateful address configuration: IPv6 allows hosts to acquire IP addresses
either in a stateless or autonomous way or through a controlled mechanism such as
DHCPv6. IPv6 hosts can automatically configure their own IPv6 addresses and other
configuration parameters, even in the absence of an address configuration infrastructure
such as DHCP.
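The classic stateless mechanism derives the host part of the address from the interface's MAC address using the modified EUI-64 procedure, then appends it to a 64-bit prefix advertised by a router. The sketch below illustrates that derivation; the prefix and MAC address are made-up example values, and real stacks today often use privacy addresses instead.

```python
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Derive a modified EUI-64 interface identifier from a MAC address:
    flip the universal/local bit and insert ff:fe in the middle."""
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02  # flip the universal/local bit
    return bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a router-advertised 64-bit prefix with the interface ID."""
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64_interface_id(mac), "big")
    return net[iid]  # network address + interface identifier

# Example prefix and MAC, purely for illustration.
addr = slaac_address("2001:db8:1:2::/64", "00:11:22:33:44:55")
print(addr)  # 2001:db8:1:2:211:22ff:fe33:4455
```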
Quality of Service: The IPv6 packet header contains fields that facilitate the support for
QoS for both differentiated and integrated services.
Better Performance: IPv6 provides significant improvements such as better handling
of packet fragmentation, hierarchical addressing that reduces routing table sizes, and
provisions for header chaining that reduce processing time.
Mobility: IPv6 provides mechanisms that allow mobile nodes to change their locations and
addresses without losing the existing connections through which those nodes are
communicating. This service is supported at the Internet level and therefore is
fully transparent to upper-layer protocols. Rather than attempting to add mobility to an
established protocol with an established infrastructure (as with IPv4), IPv6 can support
mobility more efficiently.
Improvements in IPv6 security
IPv4 has little or no built-in security, leaving it vulnerable to external attacks; the security
improvements in IPv6 therefore make a large practical difference.
1. Prevention of Port Scanning Attacks
As mentioned above, port scanning allows attackers ("black hats") to probe ports that are
known to be vulnerable. In IPv4, port scanning is relatively simple. Most IPv4 segments
belong to Class C networks, which have 8 bits for host addressing. So scanning a typical IPv4
subnet at the rate of one host per second takes
256 Hosts x (1 Sec/1 Host) x (1 Minute/60 Seconds) = 4.27 Minutes.
IPv6 subnets, in contrast, use 64 bits for allocating host addresses.
Scanning a typical IPv6 subnet requires:
2^64 Hosts x (1 Second/1 Host) x (1 Year/31,536,000 Seconds) = 585 Billion Years (approx.)
Scanning such a huge address space is practically impossible, which largely eliminates the
port scanning problem.
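The scan-time arithmetic above can be reproduced in a few lines of Python (assuming, as the text does, one probed host per second and a full 256-host Class C subnet):

```python
# Time to scan a whole subnet at one host per second.
SECONDS_PER_YEAR = 31_536_000

ipv4_hosts = 2 ** 8   # a Class C subnet: 8 bits of host addressing
ipv6_hosts = 2 ** 64  # a standard IPv6 subnet: 64 bits of host addressing

print(f"IPv4 /24 scan: {ipv4_hosts / 60:.2f} minutes")             # 4.27 minutes
print(f"IPv6 /64 scan: {ipv6_hosts / SECONDS_PER_YEAR:.3e} years") # 5.849e+11 years
```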
2. IPSec
IPSec consists of a set of cryptographic protocols that provide secure data communication
and key exchange. IPSec uses two wire-level protocols: Authentication Header (AH) and
Encapsulating Security Payload (ESP). Between them, these two protocols provide authentication,
data integrity and confidentiality. In IPv6, both AH and ESP are defined as
extension headers. Additionally, there is a third suite of protocols, the Internet Key
Exchange (IKE), which is responsible for key exchange and protocol negotiation. IKE
provides the initial information needed to establish and negotiate security parameters between
end devices, and it keeps track of this information to guarantee that communication
remains secure for its duration.
2.1 Authentication Header: As mentioned above, the authentication header prevents IP
packets from being altered or tampered with. In an IPv4 packet, the AH is part of the payload.
The figure below shows an example of an IPv4 packet with an AH in the payload.
Figure 9. Authentication Header in IPv4 Packet
When the AH protocol was implemented there was some concern about how to integrate it
into the new IPv6 format. The problem was that IPv6 extension headers can change in transit
through the network as the information they contain gets updated.
To solve this problem, IPv6 AH was designed with flexibility in mind: the protocol
authenticates and integrity-checks only those fields in the IPv6 packet header that do
not change in transit. Also, in IPv6 packets, the AH is placed at the end of the
header chain, but ahead of any ESP extension header or any higher-level protocol such as
TCP or UDP. A typical sequence of IPv6 extension headers is shown in the figure below.
Figure 10. IPv6 Extension Headers Order
2.2 Encapsulating Security Payload:
In addition to providing the same functionality as the AH protocol (authentication, data
integrity, and replay protection), ESP also provides confidentiality. In the ESP extension header,
the security parameter index (SPI) field identifies the group of security parameters the sender is
using to secure communication. ESP supports any number of encryption mechanisms; however,
the protocol specifies DES-CBC as its default. Also, ESP does not provide the same level of
authentication available with AH. While AH authenticates the whole IP header (more precisely,
those fields that do not change in transit), ESP authenticates only the information that follows it [1].
ESP provides data integrity by implementing an integrity check value (ICV) carried in the
authentication field of the ESP trailer. The ICV is computed once any encryption is complete, and
it covers the whole ESP header and trailer, except, of course, for the authentication field itself.
The ICV uses a hash message authentication code (HMAC), with SHA-1 and MD5 as the
recommended cryptographic hash functions. The figure below shows a typical ESP extension header.
Figure 11. IPv6 Encapsulating Security Payload (Header and Trailer)
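The kind of ICV described above can be illustrated with Python's standard `hmac` and `hashlib` modules. This is only a sketch: the key and payload are made-up placeholders (in real IPSec the key is derived via IKE, and the HMAC output is typically truncated to 96 bits before going on the wire).

```python
import hashlib
import hmac

# An HMAC over the packet contents, keyed with a secret shared by the
# two endpoints. Key and payload are placeholder values.
shared_key = b"key-negotiated-via-IKE"
esp_covered_bytes = b"ESP header plus encrypted payload"

icv_sha1 = hmac.new(shared_key, esp_covered_bytes, hashlib.sha1).digest()
icv_md5 = hmac.new(shared_key, esp_covered_bytes, hashlib.md5).digest()

print(len(icv_sha1))  # 20 -> full HMAC-SHA1 digest is 20 bytes
print(len(icv_md5))   # 16 -> full HMAC-MD5 digest is 16 bytes
```

Any tampering with the covered bytes changes the HMAC, which is how the receiver detects altered packets.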
2.3 Transport and tunnel modes
In IPv4 networks, IPSec provides two modes of securing traffic. The first, called
transport mode, is intended to provide secure communication between endpoints by securing
only the packet's payload. The second, called tunnel mode, is intended to protect the
entire IPv4 packet. In IPv6 networks, however, there is no need for a tunnel mode because, as
mentioned above, the AH and ESP extension headers provide enough functionality to secure IPv6
traffic.
2.4 Protocol negotiation and key exchange management
In addition to AH and ESP, IPSec also specifies functionality for protocol negotiation
and key exchange management. IPSec's encryption capabilities depend on the ability to negotiate
and exchange encryption keys between parties. To accomplish this task, IPSec specifies the
Internet Key Exchange (IKE) protocol. IKE provides the following functionality:
a. Negotiating with peers the protocols, encryption algorithms, and keys to use.
b. Exchanging keys easily, including changing them often.
c. Keeping track of all these agreements.
To keep track of all protocol and encryption algorithm agreements, IPSec uses the SPI field in both
the AH and ESP headers. This field is an arbitrary 32-bit number that identifies a security
association (SA). When communication is negotiated, the receiving node assigns an available SPI
that is not in use, preferably one that has not been used in a while. It then communicates this
SPI to its communication partner, establishing a security association. From then until that SA
expires, whenever a node wishes to communicate with the other using the same SA, it must use the
same SPI to specify it.
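The receiver-side bookkeeping described above amounts to a table mapping each 32-bit SPI to its negotiated parameters. The sketch below is a deliberately simplified illustration of that idea, not a real IPSec implementation; the `SecurityAssociation` fields are placeholders.

```python
import secrets
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    """Simplified stand-in for an IPsec SA: negotiated algorithm
    names plus key material (placeholder fields)."""
    cipher: str
    auth: str
    key: bytes

# The receiver's table mapping each 32-bit SPI to the SA it negotiated.
sa_table: dict[int, SecurityAssociation] = {}

def assign_spi(sa: SecurityAssociation) -> int:
    """Pick an unused 32-bit SPI for a newly negotiated SA and record it."""
    while True:
        spi = secrets.randbits(32)
        if spi not in sa_table:  # avoid SPIs already in use
            sa_table[spi] = sa
            return spi

# The SPI is handed to the peer; later packets carrying it select this SA.
spi = assign_spi(SecurityAssociation("DES-CBC", "HMAC-SHA1", b"\x00" * 16))
print(sa_table[spi].cipher)  # DES-CBC
```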
3. Neighbor discovery and address autoconfiguration
Neighbor discovery (ND) is the mechanism responsible for router and prefix discovery,
duplicate address and network unreachability detection, parameter discovery and link-layer
address resolution. The protocol operates entirely at Layer 3. ND works in tandem with address
autoconfiguration, the mechanism IPv6 nodes use to acquire either stateful or
stateless configuration information. In stateless mode, every node, including a potentially
rogue one, can obtain global configuration information; in stateful mode, configuration information
can be provided selectively, reducing the exposure to rogue nodes. Both ND and address
autoconfiguration contribute to making IPv6 more secure than IPv4: ND messages must carry a
hop limit of 255, which prevents ND packets from being sourced from outside the local link and
helps guard against duplicate addresses.
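The hop-limit rule is worth spelling out: every router decrements the hop limit, so a packet that still carries the maximum value of 255 cannot have been forwarded and must have originated on the local link. A minimal sketch of that validation check:

```python
# ND messages are required to arrive with a hop limit of exactly 255;
# anything lower has crossed at least one router and is rejected.
ND_REQUIRED_HOP_LIMIT = 255

def accept_nd_message(hop_limit: int) -> bool:
    """Return True only for ND packets that originated on the local link."""
    return hop_limit == ND_REQUIRED_HOP_LIMIT

print(accept_nd_message(255))  # True  -> sent by an on-link neighbor
print(accept_nd_message(254))  # False -> forwarded by at least one router
```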
Mobility
Mobility is a completely new feature of IPv6 that was not available in its predecessor. It is a
complex function that raises considerable security concerns. Mobility uses two types of addresses:
the real address, a typical IPv6 address carried in an extension header, and the temporary address,
carried in the IP header. Because of the characteristics of these networks (more complicated still
if we consider wireless mobility), the temporary component of a mobile node's address can be
exposed to spoofing attacks on the home agent. Mobility therefore requires special security
measures, and network administrators must be fully aware of them.
Comparison of IPv4 and IPv6
Deployed: 1981 (IPv4) vs. 1999 (IPv6).
Address size: 32-bit number (IPv4) vs. 128-bit number (IPv6).
Address format: dotted decimal notation, e.g. 192.149.252.76 (IPv4) vs. hexadecimal colon
notation, e.g. 3FFE:091A:19D6:12BC:AF89:AADD:1123:A101 (IPv6).
Prefix notation: 192.149.0.0/24 (IPv4) vs. 3FFE:F200:0234::/48 (IPv6).
Number of addresses: 2^32 (IPv4) vs. 2^128 (IPv6).
Routing infrastructure: flat routing (IPv4) vs. hierarchical routing (IPv6).
Configuration: manual configuration of ports and end devices (IPv4) vs. automatic
configuration of ports and end devices (IPv6).
Security features: dependent on end nodes (IPv4) vs. built-in security (IPv6).
Port scanning: a Class C network can be scanned in about 4 minutes (IPv4) vs. roughly 585
billion years for a standard 64-bit subnet (IPv6).
Mobility: lack of supporting infrastructure (IPv4) vs. good support for mobility (IPv6).
Packet priority: depends on the ToS field (IPv4) vs. a separate field for identifying packet
priority (IPv6).
Quality of service: QoS breaks if the packet payload is encrypted (IPv4) vs. standardized QoS
for both differentiated and integrated services in the packet header itself (IPv6).
Header size: 20 bytes for a 32-bit address space (IPv4) vs. a fixed 40 bytes, only twice as
large, for a 128-bit address space (IPv6).
Forwarding efficiency: less efficient (IPv4) vs. more efficient (IPv6).
Authentication Header: part of the IPv4 packet payload vs. part of the IPv6 extension header
chain.
World IPv6 Day
On 8 June 2011, Google, Facebook, Yahoo!, Akamai and Limelight Networks will be among the
major organizations offering their content over IPv6 for a 24-hour "test drive".
The goal of the Test Drive Day is to motivate organizations across the industry, including Internet
service providers, hardware makers, operating system vendors and web companies, to prepare their
services for IPv6 and ensure a successful transition as IPv4 addresses run out.
Link to test if your network devices are IPv6 ready
http://test-ipv6.com/
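A rough programmatic version of the same check is sketched below: resolve an AAAA record and attempt a TCP connection over IPv6. The hostname is only an example, and the result naturally depends on the machine's network configuration.

```python
import socket

def has_ipv6_route(host: str = "ipv6.google.com", port: int = 80) -> bool:
    """Rough self-test of IPv6 connectivity: look up an IPv6 address for
    the host and try to open a TCP connection to it over IPv6."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.settimeout(5)
            s.connect(infos[0][4])  # sockaddr tuple for AF_INET6
        return True
    except OSError:  # covers DNS failure (gaierror), refusal, timeout
        return False

print("IPv6 reachable:", has_ipv6_route())
```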
Conclusion:
In order to fully deploy and use technologies like IPv6 and Internet2, all major Internet industry
players will need to take action to ensure a successful transition. For example:
Internet service providers need to make IPv6 connectivity available to their users.
Web companies need to offer their services over IPv6.
Operating system makers may need to implement specific software updates.
Backbone providers may need to establish IPv6 peering with each other.
Hardware and home gateway manufacturers may need to update firmware.
EDGE BASED NEXT GENERATION
INTERNET
Contents: Applications of Internet2
Introduction
Public Television's Next-Generation Interconnection Pilot
High-Definition Television
Theater-Quality Film Transmission
The Space Physics and Aeronomy Research Collaboration
The Neptune Project
The California Orthopedic Research Network
Conclusion
4.2 Public Television's Next-Generation Interconnection Pilot
A one-way satellite system is the dominant method used to interconnect the present public
television network. The Public Broadcasting Service (PBS) stations at the University of Wisconsin
and Washington State University, along with the universities in the Internet2 consortium, are using
the Internet2 network to create and deploy advanced applications. Using
broadband IP video connections, the project members are testing station-to-station video quality,
live HD video streaming, video segmentation and search, server-based video-on-demand
broadcast, and collaborative program editing. The goal of this application is to demonstrate how the
television production process can be streamlined, thereby offering better viewing options
for subscribers.
4.3 High-Definition Television
High-definition TV has become the norm in today's Internet-driven media transmission. The
ResearchChannel consortium, based at the University of Washington, is at the forefront of
transmitting high-definition video over advanced networks. The data rates range from high-quality
uncompressed HD video at 1.5 Gbps, through editable studio-quality HD video at 270 Mbps, down
to production-quality HD video at 19.2 Mbps. Another advantage of this approach is that
different formats of data can be delivered simultaneously in a single real-time stream
over Internet2.
4.4 Theater-Quality Film Transmission
Internet2 brings you the entire cinema experience without your leaving the house. In collaboration
with Nippon Telegraph and Telephone Corporation (NTT), the University of Illinois and the
University of Southern California have transmitted real-time theater-quality video over the
Internet2-based network. This partnership transmitted super high definition (SHD) video over the
Abilene network to the Fall 2002 Internet2 Member Meeting. An NTT system at the UIC Electronic
Visualization Laboratory in Chicago sent SHD video to the digital arts center at USC in California.
SHD runs at four times the data rate of the high-definition streams used in today's video-on-demand
services and HD cable television broadcasts. The SHD stream was compressed to 400 Mbps using an
experimental video encoder, stored on the network, sent over the Abilene network to a real-time
NTT decoder, and displayed in a theater via an 8-megapixel projector to an audience of cinema
experts and technologists.
4.5 The Space Physics and Aeronomy Research Collaboration
The most important, and perhaps game-changing, application of Internet2 is the ability to
collaborate over large distances. In education, this can take the form of professors or scientists who
are actively involved in the field while at the same time sharing their knowledge with students
around the world through applications deployed over the Internet2 network. One such instance of
collaboration is the University of Michigan's Space Physics and Aeronomy Research
Collaboration (SPARC). There is now no need to travel to Greenland and other remote locations to
study the earth's upper atmosphere: SPARC tools (like those shown in Figure 5) give scientists
real-time access to their experiments from the comfort of their labs. As a consequence, many
students have developed mentoring relationships with faculty, since they now have full-time
access to the staff as well as to research tools and data.
Figure 5 - Space Physics and Aeronomy Research Collaboration
4.6 The NEPTUNE Project
Beyond university-level collaboration, NEPTUNE is an international, multi-institutional
project that forms part of a global effort to develop regional, coastal and global ocean
observatories. The project comprises a 3,000 km network of sea-floor fiber-optic cable across the
Pacific Ocean. A series of experimental data collection centers set up along the cable gather
data from the tops of the waves down to the core of the earth beneath the ocean floor. Hardwired to
advanced telecommunication and network equipment, this project will help collect real-time
oceanic data from around the world. The data will be sent to classrooms and
laboratories worldwide so that events can be studied as they occur, providing a better
understanding of them. Because the instruments are remotely operated and automatically recharged,
this application can also be a potential lifesaver, avoiding the dangers of the deep ocean floor.
The image below gives a general overview of the NEPTUNE system. Using this system, students
will be able to view data for a specific location over long periods of time, or view data for a large
section of the ocean floor at once, helping them get a better grasp of the ocean
floor.
Figure 6 - The NEPTUNE Project
4.7 The California Orthopedic Research Network
Internet2 can also play an active role in the health care system. There is a dedicated Networ