FIRECOL: A COLLABORATIVE PROTECTION NETWORK FOR THE DETECTION OF FLOODING DDOS ATTACKS
ABSTRACT:
Distributed denial-of-service (DDoS) attacks remain a major security problem,
the mitigation of which is very hard especially when it comes to highly distributed
botnet-based attacks. The early discovery of these attacks, although challenging, is
necessary to protect end-users as well as the expensive network infrastructure
resources.
In this paper, we address the problem of DDoS attacks and present the
theoretical foundation, architecture, and algorithms of FireCol. The core of FireCol
is composed of intrusion prevention systems (IPSs) located at the Internet service
providers (ISPs) level.
The IPSs form virtual protection rings around the hosts they protect and collaborate by exchanging selected traffic information. An evaluation of FireCol using extensive simulations and a real dataset is presented, showing FireCol's effectiveness and low overhead, as well as its support for incremental deployment in real networks.
1. MODULES:
The core of FireCol is composed of intrusion prevention systems (IPSs) located at the Internet service provider (ISP) level. The IPSs form virtual protection rings around the hosts they protect and collaborate by exchanging selected traffic information. In this project, FireCol blocks any user who mounts a DDoS attack against the web application and places that user on a block list.
The modules are as follows:
User Module
Web Admin Module
Firecol System
1.1 MODULE DESCRIPTION:
User Module
This module serves both normal users and expert users. A normal user posts questions; only registered normal users are allowed to post questions in this application. If a normal user overloads the application with parallel requests, this is treated as a DDoS attack and the user is classified as an attacker. An expert user first views the questions posted by normal users and, if the expert knows the answer, posts a solution to the corresponding question.
Web Admin Module
This module is for the website administrator, who developed the application and uploaded it to the server. The admin provides the registration form for users; when an expert user registers, the admin activates the expert's account after verification. If an expert does not register correctly, the admin places the expert's account on hold. The admin also receives full user updates from FireCol, calculates each expert's performance, and keeps the experts' ranking records in the database.
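The admin's expert-ranking step might look like the following sketch. The scoring input (a count of accepted answers per expert) and all names are hypothetical, since the project does not specify the ranking formula.

```python
# Hypothetical sketch of the admin's expert-ranking step: score each
# expert by accepted answers and rank in descending order. The field
# names are illustrative, not taken from the project's schema.
def rank_experts(answer_counts):
    """Map {expert: accepted_answers} to a list of (rank, expert) pairs."""
    ordered = sorted(answer_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [(rank, expert) for rank, (expert, _) in enumerate(ordered, start=1)]
```

A ranking like this could be recomputed whenever an expert's answer is accepted and stored back into the experts' ranking records.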
Firecol System:
The core of FireCol is composed of intrusion prevention systems (IPSs)
located at the Internet service provider (ISPs) level. The IPSs form virtual
protection rings around the hosts to defend and collaborate by exchanging selected
traffic information.
In this module, FireCol traces all user activity. When a normal user posts a question, FireCol checks its rules; if a rule matches, FireCol blocks the user and adds the user to the attacker list. FireCol also monitors per-user traffic: when a normal user generates excessive traffic toward the web application, FireCol identifies that user as an attacker and adds the user to the attacker list.
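The per-user traffic check described above can be sketched as a sliding-window rate limiter. This is an illustrative sketch only, not FireCol's actual rule engine; the threshold and window values (RATE_LIMIT, WINDOW_SECONDS) are assumptions.

```python
# Illustrative sketch of per-user flood detection: count each user's
# requests in a sliding time window and blocklist any user whose rate
# exceeds a configured threshold. Threshold/window values are assumed.
import time
from collections import defaultdict, deque

RATE_LIMIT = 100      # max requests allowed per window (assumed)
WINDOW_SECONDS = 10   # sliding window length in seconds (assumed)

class FloodDetector:
    def __init__(self, rate_limit=RATE_LIMIT, window=WINDOW_SECONDS):
        self.rate_limit = rate_limit
        self.window = window
        self.requests = defaultdict(deque)  # user -> request timestamps
        self.blocklist = set()

    def on_request(self, user, now=None):
        """Record a request; return False if the user is (now) blocked."""
        now = time.monotonic() if now is None else now
        if user in self.blocklist:
            return False
        q = self.requests[user]
        q.append(now)
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) > self.rate_limit:
            self.blocklist.add(user)  # treat this user as an attacker
            return False
        return True
```

A user staying under the limit keeps posting normally; one burst of parallel requests beyond the limit lands the user on the block list.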
2. EXISTING SYSTEM
The existing approach counters DDoS attacks by fighting their underlying vector, which is usually a botnet. A botnet is a large network of compromised machines (bots) controlled by one entity (the master). The master can launch synchronized attacks, such as DDoS, by sending orders to the bots over a command-and-control (C&C) channel.
2.1. DISADVANTAGES OF EXISTING SYSTEM
Distributed denial-of-service (DDoS) attacks remain a major security problem, and implementing complex access-control policies for data access does not solve it.
Allowing huge attack traffic to transit through the Internet and only detecting/blocking it at the host IDS/IPS may severely strain Internet resources.
Mitigation is very hard, especially for highly distributed botnet-based attacks.
3. PROPOSED SYSTEM
Detection is performed as close to the attack sources as possible, providing protection to subscribed customers and saving valuable network resources.
Experiments showed good performance and robustness of FireCol and highlighted good practices for its configuration.
FireCol relies on a distributed architecture composed of multiple IPSs forming overlay networks of protection rings around subscribed customers.
3.1 ADVANTAGES OF PROPOSED SYSTEM
As future work, FireCol can be extended to support different IPS rule structures.
The core of FireCol is composed of intrusion prevention systems (IPSs) located at the Internet service providers (ISPs) level.
The IPSs form virtual protection rings around the hosts to defend and collaborate by exchanging selected traffic information.
4. SYSTEM SPECIFICATION:
HARDWARE REQUIREMENTS:
Processor : Pentium III
Speed : 1.1 GHz
RAM : 1GB
Hard Disk : 20 GB
Floppy Drive : 1.44 MB
Key Board : Standard Windows Keyboard
Mouse : Two or Three Button Mouse
Monitor : SVGA
SOFTWARE REQUIREMENTS
Front End : ASP.NET
Back End : SQL Server 2005
Operating System : Windows XP/7
IDE : Visual Studio 2008
5. Introduction:
DDoS, short for Distributed Denial of Service, is a type
of DoS attack in which multiple compromised systems -- usually
infected with a Trojan -- are used to target a single system,
causing a denial of service. Victims of a DDoS attack
consist of both the end targeted system and all systems
maliciously used and controlled by the hacker in the distributed
attack. In the DDoS attack, the incoming traffic flooding the
victim originates from many different sources – potentially
hundreds of thousands or more. This effectively makes it
impossible to stop the attack simply by blocking a single IP
address; plus, it is very difficult to distinguish legitimate user
traffic from attack traffic when spread across so many points of
origin.
To avoid these issues, this paper focuses on the detection of
DDoS attacks per se, not their underlying vectors. Although
non-distributed denial-of-service attacks usually exploit a
vulnerability by sending a few carefully forged packets to disrupt a
service, DDoS attacks are mainly used to flood a particular
victim with massive traffic, as highlighted in [1]. In fact, the
popularity of these attacks is due to their high effectiveness
against any kind of service since there is no need to identify and
exploit any particular service-specific flaw in the victim. Hence,
this paper focuses exclusively on flooding DDoS attacks. A single
intrusion prevention system (IPS) or intrusion detection system
(IDS) can hardly detect such DDoS attacks, unless they are
located very close to the victim. However, even in that latter
case, the IDS/IPS may crash because it needs to deal with an
overwhelming volume of packets (some flooding attacks reach
10–100 Gb/s). In addition, allowing such huge traffic to transit
through the Internet and only detect/block it at the host IDS/IPS
may severely strain Internet resources.
The core of FireCol is composed of intrusion prevention
systems (IPSs) located at the Internet service providers (ISPs)
level. The IPSs form virtual protection rings around the hosts to
defend and collaborate by exchanging selected traffic
information. The evaluation of FireCol using extensive
simulations and a real dataset is presented, showing FireCol
effectiveness and low overhead, as well as its support for
incremental deployment in real networks.
We use the FireCol system to detect these attacks and to
protect our server from DDoS attacks. FireCol is
designed as a service to which customers can
subscribe. Participating IPSs along the path to a subscribed
customer collaborate (vertical communication) by computing and
exchanging belief scores on potential attacks. The IPSs form
virtual protection rings around the host they protect. The virtual
rings use horizontal communication when the degree of a
potential attack is high. In this way, the threat is measured based
on the overall traffic bandwidth directed to the customer
compared to the maximum bandwidth it supports. In addition to
detecting flooding DDoS attacks, FireCol also helps detect
other flooding scenarios, such as flash crowds and botnet-based
DDoS attacks.
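The threat measurement described above (overall traffic toward a customer compared to the maximum bandwidth it supports) can be illustrated with a small sketch. The scoring formula, threshold, and function names are assumptions for illustration, not FireCol's exact belief computation.

```python
# Illustrative sketch (not FireCol's exact algorithm): each IPS in a
# protection ring reports the bandwidth it currently forwards toward a
# subscribed customer; the ring sums these reports and compares the
# total to the customer's declared capacity to score the threat.
def threat_score(reported_bandwidths, customer_capacity):
    """Return aggregate traffic as a fraction of capacity (>1.0 = flood)."""
    total = sum(reported_bandwidths)
    return total / customer_capacity

def ring_decision(reported_bandwidths, customer_capacity, threshold=1.0):
    """Trigger ring-wide (horizontal) collaboration when the score is high."""
    score = threat_score(reported_bandwidths, customer_capacity)
    return ("alert" if score > threshold else "normal", score)
```

For example, three IPSs each reporting moderate traffic can jointly exceed the customer's capacity even though no single report looks suspicious, which is why the rings exchange these values.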
5.2 SYSTEM IMPLEMENTATION
Implementation is the stage of the project when the
theoretical design is turned into a working system. It can thus
be considered the most critical stage in achieving a
successful new system and in giving the user confidence that the
new system will work and be effective.
The implementation stage involves careful planning,
investigation of the existing system and its constraints on
implementation, design of methods to achieve the changeover, and
evaluation of changeover methods.
Implementation is the process of converting a new system
design into operation. It is the phase that focuses on user
training, site preparation and file conversion for installing a
candidate system. The important factor that should be considered
here is that the conversion should not disrupt the functioning of
the organization.
6. DIAGRAMS:
6.1 FIRECOL ARCHITECTURE:
6.2 Use Case Diagram:
6.3 Data Flow Diagram
6.4 Activity Diagram:
7. LANGUAGE SPECIFICATION
THE .NET FRAMEWORK
The .NET Framework is a new computing platform that
simplifies application development in the highly distributed
environment of the Internet.
7.1 OBJECTIVES OF. NET FRAMEWORK:
1. To provide a consistent object-oriented programming
environment whether object code is stored and executed locally,
executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes
software deployment conflicts and guarantees safe execution of code.
3. To eliminate performance problems.
There are different types of applications, such as Windows-based
applications and Web-based applications.
To support communication in a distributed environment,
code based on the .NET Framework can
integrate with any other code.
7.2 COMPONENTS OF .NET FRAMEWORK
THE COMMON LANGUAGE RUNTIME (CLR):
The common language runtime is the foundation of the .NET
Framework. It manages code at execution time, providing
important services such as memory management, thread
management, and remoting, and it also enforces security and
robustness. The concept of code management is a fundamental
principle of the runtime. Code that targets the runtime is known
as managed code, while code that does not target the runtime is
known as unmanaged code.
THE .NET FRAME WORK CLASS LIBRARY:
It is a comprehensive, object-oriented collection of reusable
types used to develop applications ranging from traditional
command-line or graphical user interface (GUI) applications to
applications based on the latest innovations provided by ASP.NET,
such as Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged
components that load the common language runtime into their
processes and initiate the execution of managed code, thereby
creating a software environment that can exploit both managed
and unmanaged features. The .NET Framework not only provides
several runtime hosts, but also supports the development of third-
party runtime hosts.
Internet Explorer is an example of an unmanaged
application that hosts the runtime (in the form of a MIME type
extension). Using Internet Explorer to host the runtime enables
you to embed managed components or Windows Forms controls in
HTML documents.
FEATURES OF THE COMMON LANGUAGE RUNTIME:
The common language runtime manages memory, thread
execution, code execution, code safety verification, compilation,
and other system services. Its key features include:
Security.
Robustness.
Productivity.
Performance.
SECURITY:
The runtime enforces code access security. The
security features of the runtime thus enable legitimate Internet-
deployed software to be exceptionally feature rich. Managed
components are awarded varying degrees of trust, depending on a
number of factors that include their origin; this trust level
determines whether a component may perform file-access operations,
registry-access operations, or other sensitive functions.
ROBUSTNESS:
The runtime also enforces code robustness by
implementing a strict type- and code-verification infrastructure
called the common type system (CTS). The CTS ensures that all
managed code is self-describing. The managed environment of
the runtime eliminates many common software issues.
PRODUCTIVITY:
The runtime also accelerates developer productivity. For
example, programmers can write applications in their
development language of choice, yet take full advantage of the
runtime, the class library, and components written in other
languages by other developers.
PERFORMANCE:
The runtime is designed to enhance performance. Although the
common language runtime provides many standard runtime
services, managed code is never interpreted. A feature called
just-in-time (JIT) compiling enables all managed code to run in the
native machine language of the system on which it is executing.
Finally, the runtime can be hosted by high-performance, server-
side applications, such as Microsoft® SQL Server™ and Internet
Information Services (IIS).
7.3 FEATURES OF ASP.NET
ASP.NET
ASP.NET is the next version of Active Server Pages (ASP); it
is a unified Web development platform that provides the services
necessary for developers to build enterprise-class Web
applications. While ASP.NET is largely syntax-compatible with ASP,
it also provides a new programming model and infrastructure for more
secure, scalable, and stable applications.
ASP.NET is a compiled, .NET-based environment; we can
author applications in any .NET-compatible language, including
Visual Basic .NET, C#, and JScript .NET. Additionally, the
entire .NET Framework is available to any ASP.NET application.
Developers can easily access the benefits of these technologies,
which include the managed common language runtime
environment (CLR), type safety, inheritance, and so on.
ASP.NET has been designed to work seamlessly with
WYSIWYG HTML editors and other programming tools, including
Microsoft Visual Studio .NET. Not only does this make Web
development easier, but it also provides all the benefits that
these tools have to offer, including a GUI that developers can use
to drop server controls onto a Web page and fully integrated
debugging support. Developers can choose between two
features when creating an ASP.NET application, Web Forms and
Web services, or combine these in any way they see fit. Each is
supported by the same infrastructure that allows you to use
authentication schemes, cache frequently used data, or
customize your application's configuration, to name only a few
possibilities. Web Forms allows us to build powerful forms-based
Web pages. When building these pages, we can use ASP.NET
server controls to create common UI elements, and program them
for common tasks. These controls allow us to rapidly build a Web
Form out of reusable built-in or custom components, simplifying
the code of a page.
An XML Web service provides the means to access server
functionality remotely. Using Web services, businesses can
expose programmatic interfaces to their data or business logic,
which in turn can be obtained and manipulated by client and
server applications. XML Web services enable the exchange of
data in client-server or server-server scenarios, using standards
like HTTP and XML messaging to move data across firewalls. XML
Web services are not tied to a particular component technology or
object-calling convention. As a result, programs written in any
language, using any component model, and running on any
operating system can access XML Web services
Each of these models can take full advantage of all ASP.NET
features, as well as the power of the .NET Framework and .NET
Framework common language runtime. Accessing databases from
ASP.NET applications is an often-used technique for displaying
data to Web site visitors. ASP.NET makes it easier than ever to
access databases for this purpose. It also allows us to manage the
database from our code.
ASP.NET provides a simple model that enables Web
developers to write logic that runs at the application level.
Developers can write this code in the Global.asax text file or in a
compiled class deployed as an assembly. This logic can include
application-level events, but developers can easily extend this
model to suit the needs of their Web application.
ASP.NET provides easy-to-use application and session-state
facilities that are familiar to ASP developers and are readily
compatible with all other .NET Framework APIs. ASP.NET also offers
the IHttpHandler and IHttpModule interfaces. Implementing the
IHttpHandler interface gives you a means of interacting with the
low-level request and response services of the IIS Web server and
provides functionality much like ISAPI extensions, but with a
simpler programming model. Implementing the IHttpModule
interface allows you to include custom events that participate in
every request made to your application.
ASP.NET takes advantage of performance enhancements
found in the .NET Framework and common language runtime.
Additionally, it has been designed to offer significant performance
improvements over ASP and other Web development platforms.
All ASP.NET code is compiled, rather than interpreted, which
allows early binding, strong typing, and just-in-time (JIT)
compilation to native code, to name only a few of its benefits.
ASP.NET is also easily factorable, meaning that developers can
remove modules (a session module, for instance) that are not
relevant to the application they are developing.
ASP.NET provides extensive caching services (both built-in
services and caching APIs). ASP.NET also ships with performance
counters that developers and system administrators can monitor
to test new applications and gather metrics on existing
applications. Writing custom debug statements to your Web page
can help immensely in troubleshooting your application's code.
However, it can cause embarrassment if it is not removed. The
problem is that removing the debug statements from your pages
when your application is ready to be ported to a production server
can require significant effort.
ASP.NET offers the TraceContext class, which allows us to
write custom debug statements to our pages as we develop them.
They appear only when you have enabled tracing for a page or
entire application. Enabling tracing also appends details about a
request to the page, or, if you so specify, to a custom trace
viewer that is stored in the root directory of your application.
The .NET Framework and ASP.NET provide default authorization
and authentication schemes for Web applications. We can easily
remove, add to, or replace these schemes, depending upon the
needs of our application.
ASP.NET configuration settings are stored in XML-based files,
which are human readable and writable. Each of our applications
can have a distinct configuration file and we can extend the
configuration scheme to suit our requirements.
7.4 DATA ACCESS WITH ADO.NET
As you develop applications using ADO.NET, you will have
different requirements for working with data. You might never
need to directly edit an XML file containing data - but it is very
useful to understand the data architecture in ADO.NET.
ADO.NET offers several advantages over previous versions of
ADO:
Interoperability
Maintainability
Programmability
Performance
Scalability
INTEROPERABILITY:
ADO.NET applications can take advantage of the flexibility
and broad acceptance of XML. Because XML is the format for
transmitting datasets across the network, any component that
can read the XML format can process data. The receiving
component need not be an ADO.NET component.
The transmitting component can simply transmit the dataset
to its destination without regard to how the receiving component
is implemented. The destination component might be a Visual
Studio application or any other application implemented with any
tool whatsoever.
The only requirement is that the receiving component be
able to read XML. XML was designed with exactly this kind of
interoperability in mind.
MAINTAINABILITY:
In the life of a deployed system, modest changes are
possible, but substantial architectural changes are rarely
attempted because they are so difficult. As the performance load
on a deployed application server grows, system resources can
become scarce and response time or throughput can suffer.
Faced with this problem, software architects can choose to divide
the server's business-logic processing and user-interface
processing onto separate tiers on separate machines. In effect,
the application server tier is replaced with two tiers, alleviating
the shortage of system resources. If the original application is
implemented in ADO.NET using datasets, this transformation is
made easier.
PROGRAMMABILITY:
ADO.NET data components in Visual Studio encapsulate
data access functionality in various ways that help you program
more quickly and with fewer mistakes.
PERFORMANCE:
ADO.NET datasets offer performance advantages over ADO
disconnected record sets. In ADO.NET, data-type conversion is not
necessary.
SCALABILITY:
ADO.NET accommodates scalability by encouraging
programmers to conserve limited resources. Any ADO.NET
application employs disconnected access to data; it does not
retain database locks or active database connections for long
durations.
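The disconnected-access pattern that ADO.NET encourages (fill an in-memory result set, then release the connection immediately) can be illustrated in Python with the standard sqlite3 module; the table and rows here are made up for illustration.

```python
# Sketch of the disconnected-access pattern: the connection is held
# only long enough to materialize rows into an in-memory result set
# (the analog of an ADO.NET dataset), then closed immediately so no
# locks or active connections are retained for long durations.
import sqlite3

def load_dataset(db_path=":memory:"):
    """Open a connection only long enough to fill an in-memory result set."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS experts(name TEXT, rank INTEGER)")
        conn.executemany("INSERT INTO experts VALUES (?, ?)",
                         [("alice", 1), ("bob", 2)])
        conn.commit()
        # Materialize all rows in memory -- the "dataset".
        dataset = conn.execute(
            "SELECT name, rank FROM experts ORDER BY rank").fetchall()
    finally:
        conn.close()  # release the connection right away
    return dataset
```

The caller then works entirely against the in-memory rows, which is what lets a server scale without holding database resources per user.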
VISUAL STUDIO .NET
Visual Studio .NET is a complete set of
development tools for building ASP Web applications, XML Web
services, desktop applications, and mobile applications. In addition
to building high-performing desktop applications, you can use
Visual Studio's powerful component-based development tools and
other technologies to simplify team-based design, development,
and deployment of enterprise solutions. Visual Basic .NET, Visual
C++ .NET, and Visual C# .NET all use the same integrated
development environment (IDE), which allows them to share tools
and facilitates in the creation of mixed-language solutions.
In addition, these languages leverage the functionality
of the .NET Framework and simplify the development of ASP Web
applications and XML Web services.
Visual Studio supports the .NET Framework,
which provides a common language runtime and unified
programming classes; ASP.NET uses these components to create
ASP Web applications and XML Web services. Visual Studio also
includes the MSDN Library, which contains all the documentation
for these development tools.
7.5 FEATURES OF SQL-SERVER 2005
The OLAP Services feature available in SQL Server version
7.0 is now called SQL Server 2005 Analysis Services. The term
OLAP Services has been replaced with the term Analysis Services.
Analysis Services also includes a new data mining component.
The Repository component available in SQL Server version 7.0 is
now called Microsoft SQL Server 2005 Meta Data Services.
References to the component now use the term Meta Data
Services. The term repository is used only in reference to the
repository engine within Meta Data Services. A SQL Server
database consists of six types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
6. MODULE
TABLE:
A table is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the
table design view, where we can specify what kind of data each
field will hold.
Datasheet View
To add, edit, or analyze the data itself, we work in the
table's datasheet view.
QUERY:
A query is a question asked of the data. Access gathers
the data that answers the question from one or more tables. The
data that makes up the answer is either a dynaset (if you can
edit it) or a snapshot (which cannot be edited). Each time we run
a query, we get the latest information in the dynaset. Access
either displays the dynaset or snapshot for us to view, or
performs an action on it, such as deleting or updating.
FORMS:
A form is used to view and edit information in the database
record by record. A form displays only the information we want to
see, in the way we want to see it. Forms use familiar controls
such as textboxes and checkboxes. This makes viewing and
entering data easy.
Views of Form:
We can work with forms in several views; primarily there are two:
1. Design View
2. Form View
Design View
To build or modify the structure of a form, we work in the
form design view. We can add controls to the form that are bound
to fields in a table or query, including textboxes, option
buttons, graphs, and pictures.
Form View
Form view displays the whole design of the form.
REPORT:
A report is used to view and print information from the
database. A report can group records into many levels and
compute totals and averages by checking values from many
records at once. A report can also be made attractive and
distinctive because we have control over its size and appearance.
MACRO:
A macro is a set of actions, each of which does something,
such as opening a form or printing a report. We write macros to
automate common tasks, making the work easy and saving time.
MODULE:
Modules are units of code written in the Access Basic language.
We can write and use modules to automate and customize the
database in very sophisticated ways. Access is a personal-computer-based
RDBMS that provides most of the features available in
high-end RDBMS products like Oracle, Sybase, and Ingres. VB
keeps Access as its native database. A developer can create a
database for development and then create the tables required to
store data. During the initial development phase, data can be
stored in the Access database; during the implementation phase,
depending on the volume of data, a higher-end database can be used.
8. SYSTEM STUDY
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and
a business proposal is put forth with a very general plan for the
project and some cost estimates. During system analysis, the
feasibility study of the proposed system is carried out. This
is to ensure that the proposed system is not a burden to the
company. For feasibility analysis, some understanding of the
major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are:
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that
the system will have on the organization. The amount of funds that
the company can pour into the research and development of the
system is limited, so the expenditures must be justified. The
developed system is well within the budget, which was achieved
because most of the technologies used are freely available; only
the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility,
that is, the technical requirements of the system. Any system
developed must not place a high demand on the available technical
resources, as this would lead to high demands being placed on the
client. The developed system must therefore have modest
requirements, so that only minimal or no changes are required for
implementing it.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the
system by the user. This includes the process of training the user
to use the system efficiently. The user must not feel threatened
by the system, but must instead accept it as a necessity.
The level of acceptance by the users solely depends on the
methods that are employed to educate the user about the system
and to make him familiar with it. His level of confidence must be
raised so that he is also able to make some constructive criticism,
which is welcomed, as he is the final user of the system.
8.2 SYSTEM TESTING AND MAINTENANCE
Testing is vital to the success of the system. System testing
makes a logical assumption that if all parts of the system are
correct, the goal will be successfully achieved. In the testing
process, we test the actual system in an organization, gather
errors from the new system, and take initiatives to correct them.
All the front-end and back-end connectivity is tested to be sure
that the new system operates at full efficiency as stated. System
testing is the stage of implementation that is aimed at ensuring
that the system works accurately and efficiently.
The main objective of testing is to uncover errors in the
system. To uncover them, we have to give proper input data to the
system, so we must be careful in choosing input data; giving
correct inputs is important for efficient testing.
Testing is done for each module. After testing all the
modules, the modules are integrated, and testing of the final
system is done with test data specially designed to show that
the system will operate successfully under all conditions. Thus
system testing is a confirmation that all is correct and an
opportunity to show the user that the system works. Inadequate
testing or non-testing leads to errors that may appear a few
months later.
This creates two problems: the time delay between the cause
and the appearance of the problem, and the effect of the system
errors on files and records within the system. The purpose of
system testing is to consider all the likely variations to which
the system will be subjected and to push the system to its limits.
The testing process focuses on the logical internals of the
software, ensuring that all statements have been tested, and on
its functional externals, i.e., conducting tests to uncover errors
and ensure that defined inputs produce actual results that
agree with the required results. Testing is done using the
two common steps, unit testing and integration testing. In this
project, system testing was performed as follows:
Procedure-level testing is done first: by giving improper
inputs, the errors that occur are noted and eliminated. The final
step in the system life cycle is to deploy the tested, error-free
system into the real-life environment and make any necessary
changes so that it runs online. System maintenance is then done
every month or year, based on company policies; the system is
checked for errors such as runtime errors and long-run errors,
and other maintenance such as table verification and reports is
performed.
UNIT TESTING
Unit testing focuses verification effort on the smallest unit
of software design, the module; this is also known as module
testing. The modules are tested separately, and this testing is
carried out during the programming stage itself. In this testing
step, each module is confirmed to work satisfactorily with regard
to its expected output.
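Module-level testing as described above can be illustrated with Python's unittest: each module function is exercised in isolation against its expected output. The function under test here is a hypothetical stand-in, not code from this project.

```python
# Minimal illustration of unit (module) testing with Python's unittest:
# a single module-level function is tested in isolation against the
# output expected of it. The function under test is a stand-in example.
import unittest

def is_flood(request_count, limit):
    """Stand-in module function: flag a user whose count exceeds the limit."""
    return request_count > limit

class TestFloodRule(unittest.TestCase):
    def test_under_limit(self):
        self.assertFalse(is_flood(99, 100))

    def test_over_limit(self):
        self.assertTrue(is_flood(101, 100))
```

Each module gets its own test case class like this; once every module passes in isolation, the modules are combined for integration testing.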
INTEGRATION TESTING
Integration testing is a systematic technique for
constructing tests to uncover errors associated with the
interfaces between modules. In this project, all the modules are
combined and the entire program is tested as a whole. In the
integration-testing step, all the errors uncovered are corrected
before the next testing steps.
9. LITERATURE SURVEY
9.1 Title: Worldwide ISP security report
Author: Arbor, Lexington
Description:
Arbor Networks, Inc., in cooperation with the Internet security
operations community, has completed this fourth edition of an
ongoing series of annual operational security surveys. This
survey, covering a 12-month period from August 2007 through
July 2008, is designed to provide data useful to network operators
so that they can make informed decisions about their use of
network security technology to protect their mission-critical
infrastructures. It is also meant to serve as a general resource for
the Internet operations and engineering community, recording
information on trends and employment of various infrastructure
security techniques.
Operational network security issues, the day-to-day aspects of
security in commercial networks, are the primary focus of survey
respondents. As such, the results provided in this survey
represent real-world concerns more accurately than the
theoretical and emerging attack vectors addressed and speculated
about elsewhere.
Key Findings
The ISP security battlefront expands. In the last three surveys,
ISPs reportedly spent most of their available security resources
combating distributed denial-of-service (DDoS) attacks. For the
first time, this year ISPs also describe a far more diversified
security landscape, including significant concerns over domain
name system (DNS) spoofing, Border Gateway Protocol (BGP)
hijacking, and spam. Almost half of the surveyed ISPs now
consider their DNS services vulnerable. Others expressed concern
over related service-delivery infrastructure, including voice
over IP (VoIP), session border controllers (SBCs), and load
balancers.
Attacks now exceed 40 gigabits. From relatively humble megabit
beginnings in 2000, the largest DDoS attacks have now grown a
hundredfold to break the 40-gigabit barrier this year. The growth
in attack size continues to significantly outpace the
corresponding increase in underlying transmission speed and ISP
infrastructure investment. Figure 1 shows the yearly reported
maximum attack size.
9.2 Title: Survey of network-based defense mechanisms countering
the DoS and DDoS problems
Author: T. Peng, C. Leckie, and K. Ramamohanarao
Description:
The Internet was originally designed for openness and
scalability. The infrastructure is certainly working as envisioned
by that yardstick. However, the price of this success has been
poor security. For example, the Internet Protocol (IP) was
designed to support ease of attachment of hosts to networks, and
provides little support for verifying the contents of IP packet
header fields [Clark 1988]. This makes it possible to fake the
source address of packets, and hence difficult to identify the
source of traffic. Moreover, there is no inherent support in the IP
layer to check whether a source is authorized to access a service.
Packets are delivered to their destination, and the server at the
destination must decide whether to accept and service these
packets. While defenses such as firewalls can be added to protect
servers, a key challenge for defense is how to discriminate
legitimate requests for service from malicious access attempts. If
it is easier for sources to generate service requests than it is for a
server to check the validity of those requests, then it is difficult to
protect the server from malicious requests that waste the
resources of the server. This creates the opportunity for a class of
attack known as a denial of service attack.
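The asymmetry described above, where generating requests is far cheaper for a source than validating them is for a server, can be illustrated with a minimal per-source rate check. This is a generic sketch, not the survey's or FireCol's algorithm, and the window and threshold values are arbitrary:

```python
import time
from collections import defaultdict, deque

# Minimal sliding-window rate check: a source that sends more than
# `max_requests` within `window` seconds is treated as suspicious.
class RateLimiter:
    def __init__(self, max_requests=100, window=1.0):
        self.max_requests = max_requests
        self.window = window
        self.history = defaultdict(deque)  # source IP -> request timestamps

    def allow(self, source_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[source_ip]
        # Discard timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return len(q) <= self.max_requests

limiter = RateLimiter(max_requests=3, window=1.0)
results = [limiter.allow("10.0.0.1", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
# The fourth request inside the same one-second window exceeds the threshold.
```

Note that such a check is itself cheap only because it trusts the source address; with spoofed addresses, as the survey points out, per-source accounting can be evaded, which motivates collaborative schemes like FireCol.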
9.3 Title: The zombie roundup: Understanding, detecting, and
disrupting botnets
Author: E. Cooke, F. Jahanian, and D. McPherson
Description:
Global Internet threats are undergoing a profound transformation
from attacks designed solely to disable infrastructure to those
that also target people and organizations. Behind these new
attacks is a large pool of compromised hosts sitting in homes,
schools, businesses, and governments around the world. These
systems are infected with a bot that communicates with a bot
controller and other bots to form what is commonly referred to as
a zombie army or botnet. Botnets are a very real and quickly
evolving problem that is still not well understood or studied. In
this paper we outline the origins and structure of bots and botnets
and use data from the operator community, the Internet Motion
Sensor project, and a honeypot experiment to illustrate the botnet
problem today. We then study the effectiveness of detecting
botnets by directly monitoring IRC communication or other
command-and-control activity, and show that a more comprehensive
approach is required. We conclude by describing a system to
detect botnets that utilize advanced command and control
systems by correlating secondary detection data from multiple
sources. This frightening new class of attacks directly impacts the
day-to-day lives of millions of people and endangers businesses
around the world. For example, new attacks steal personal
information that can be used to damage reputations or lead to
significant financial losses. Current mitigation techniques focus on
the symptoms of the problem, filtering the spam, hardening web
browsers, or building applications that warn against phishing
tricks. While tools such as these are important, it is also critical to
disrupt and dismantle the infrastructure used to perpetrate the
attacks. At the center of these threats is a large pool of
compromised hosts sitting in homes, schools, businesses, and
governments around the world. These systems are infected with a
bot that communicates with a bot controller and other bots to
form what is commonly referred to as a zombie army or botnet. A
bot can be differentiated from other threats by a communication
channel to a controller.
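The direct-monitoring approach the paper evaluates can be caricatured as a naive indicator scan over captured payloads. The keywords and sample lines below are purely illustrative, and real bots evade such matching, which is precisely the paper's argument for a more comprehensive, correlation-based approach:

```python
# Naive IRC command-and-control indicator scan over captured payload
# lines. The indicator strings and the sample capture are hypothetical.
IRC_CC_INDICATORS = ("PRIVMSG", "JOIN #", "NICK ", "TOPIC #")

def flag_irc_like(lines):
    """Return the subset of payload lines containing IRC-style commands."""
    return [ln for ln in lines if any(k in ln for k in IRC_CC_INDICATORS)]

captured = [
    "GET /index.html HTTP/1.1",
    "NICK bot-4412",
    "JOIN #botnet-cc",
    "PRIVMSG #botnet-cc :.ddos start 10.0.0.5",
]
flagged = flag_irc_like(captured)
# flagged holds the three IRC-style lines; the HTTP request passes.
```

A bot that encrypts its channel or uses a non-IRC protocol produces no such lines at all, hence the paper's call to correlate secondary detection data from multiple sources.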
10. CONCLUSION AND FUTURE WORKS
This paper proposed FireCol, a scalable solution for the early
detection of flooding DDoS attacks. Belief scores are shared
within a ring-based overlay network of IPSs, and detection is
performed as close to the attack sources as possible, protecting
subscribed customers and saving valuable network resources.
Experiments showed the good performance and robustness of FireCol
and highlighted good practices for its configuration.
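As a rough illustration of this ring-based sharing of belief scores (the averaging rule and alert threshold below are assumptions for illustration, not FireCol's actual detection algorithm):

```python
# Illustrative aggregation of per-IPS belief scores within one
# protection ring: local scores in [0, 1] are combined, and an alert
# is raised when the ring-level score crosses a threshold. Both the
# averaging rule and the threshold value are assumed for illustration.
ALERT_THRESHOLD = 0.6

def ring_score(belief_scores):
    """Combine the belief scores reported by the IPSs of one ring."""
    return sum(belief_scores) / len(belief_scores)

def ring_alert(belief_scores, threshold=ALERT_THRESHOLD):
    """Alert when the combined ring-level belief crosses the threshold."""
    return ring_score(belief_scores) >= threshold

# Four IPSs on the ring report their local suspicion of an ongoing flood.
scores = [0.9, 0.7, 0.5, 0.8]
alert = ring_alert(scores)
```

The value of the collaboration is visible even in this toy form: no single IPS needs to see the full attack volume for the ring as a whole to cross the alert threshold.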
The analysis of FireCol also demonstrated its light computational
and communication overhead. Since FireCol can be offered as a
value-added service to customers, accounting for it is
straightforward, which represents a good incentive for its
deployment by ISPs. As future work, we plan to extend FireCol to
support different IPS rule structures.
11. REFERENCES:
[1] Arbor Networks, Lexington, MA, “Worldwide ISP security report,” Tech.
Rep., 2010.
[2] T. Peng, C. Leckie, and K. Ramamohanarao, “Survey of network- based
defense mechanisms countering the DoS and DDoS problems,” Comput. Surv.,
vol. 39, Apr. 2007, Article 3.
[3] E. Cooke, F. Jahanian, and D. McPherson, “The zombie roundup:
Understanding, detecting, and disrupting botnets,” in Proc. SRUTI, Jun. 2005, pp.
39–44.
[4] T. Holz, M. Steiner, F. Dahl, E. Biersack, and F. Freiling, “Measurements and
mitigation of peer-to-peer-based botnets: A case study on storm worm,” in Proc.
USENIX LEET, 2008, Article no. 9.
[5] J. François, A. El Atawy, E. Al Shaer, and R. Boutaba, “A collaborative
approach for proactive detection of distributed denial of service attacks,” in Proc.
IEEE MonAM, Toulouse, France, 2007, vol. 11.
[6] A. Feldmann, O. Maennel, Z. M. Mao, A. Berger, and B. Maggs, “Locating
Internet routing instabilities,” Comput. Commun. Rev., vol. 34, no. 4, pp. 205–218,
2004.
[7] A. Basu and J. Riecke, “Stability issues in OSPF routing,” in Proc. ACM
SIGCOMM, 2001, pp. 225–236.
[8] V. Paxson, “End-to-end routing behavior in the Internet,” IEEE/ACM Trans.
Netw., vol. 5, no. 5, pp. 601–615, Oct. 1997.
[9] K. Xu, Z.-L. Zhang, and S. Bhattacharyya, “Internet traffic behavior profiling
for network security monitoring,” IEEE/ACM Trans. Netw., vol. 16, no. 6, pp.
1241–1252, Dec. 2008.
[10] Z. Zhang, M. Zhang, A. Greenberg, Y. C. Hu, R. Mahajan, and B. Christian,
“Optimizing cost and performance in online service provider networks,” in Proc.
USENIX NSDI, 2010, p. 3.
[11] M. Dischinger, A. Mislove, A. Haeberlen, and K. P. Gummadi, “Detecting
bittorrent blocking,” in Proc. ACM SIGCOMM Conf. Internet Meas., 2008, pp. 3–
8.
[12] Y. Zhang, Z. M. Mao, and M. Zhang, “Detecting traffic differentiation in
backbone ISPs with NetPolice,” in Proc. ACM SIGCOMM Conf. Internet Meas.,
2009, pp. 103–115.
[13] G. Shafer, A Mathematical Theory of Evidence. Princeton, NJ: Princeton
Univ. Press, 1976.
[14] T. M. Gil and M. Poletto, “Multops: A data-structure for bandwidth attack
detection,” in Proc. 10th USENIX Security Symp., 2001, pp. 23–38.
[15] T. Peng, C. Leckie, and K. Ramamohanarao, “Protection from distributed
denial of service attacks using history-based IP filtering,” in Proc. IEEE ICC, May
2003, vol. 1, pp. 482–486.
[16] C. Siaterlis and B. Maglaris, “Detecting DDoS attacks with passive
measurement based heuristics,” in Proc. Int. Symp. Comput. Commun., 2004, vol.
1, pp. 339–344.