
BOTNET CONSTRUCTION OF PEER-TO-PEER

NETWORK SCHEMES

A PROJECT REPORT

Submitted by

SHARMILA.B

SUBASHINI.H

KALAISELVI.M

in partial fulfillment for the award of the degree

of

BACHELOR OF TECHNOLOGY

IN

INFORMATION TECHNOLOGY

PANIMALAR ENGINEERING COLLEGE

ANNA UNIVERSITY : CHENNAI 600 025

APRIL 2011


BONAFIDE CERTIFICATE

Certified that this project report titled “BOTNET CONSTRUCTION OF PEER-TO-PEER NETWORK SCHEMES” is the bonafide work of Ms. B. SHARMILA (21007205096), who carried out the research under my supervision.

 

Ms. MUTHULAKSHMI                              Mrs. ANNE PRINCY
Supervisor, Lecturer                          Head of the Department
Dept. of Information Technology               Dept. of Information Technology
Panimalar Engineering College                 Panimalar Engineering College
Chennai-602 103.                              Chennai-602 103.

Submitted for the Project Viva Voce Examination held on ………………………………

Internal Examiner                                                            External Examiner


ABSTRACT

In recent years, Internet malware attacks have evolved into better-organized and more profit-centered endeavors. E-mail spam, extortion through denial-of-service attacks, and click fraud represent a few examples of this emerging trend. “Botnets” are a root cause of these problems. A “botnet” consists of a network of compromised computers controlled by an attacker. Recently, peer-to-peer botnets have become the root cause of many Internet attacks. To be well prepared for future attacks, it is not enough to study how to detect and defend against the botnets that have appeared in the past, yet most current research focuses on existing botnets. In the existing peer-to-peer approach, this is done with command authentication and individualized encryption over limited-sized peer lists.

We present the design of an advanced hybrid P2P botnet. Compared with current peer-to-peer botnets, the proposed one is harder to monitor and much harder to shut down. It provides robust network connectivity, individualized encryption, control-traffic dispersion, and easy monitoring and recovery by its master. To defend against such an advanced botnet, we point out that honeypots may play an important role. We believe this research is meaningful for helping to provide a secure community.


ACKNOWLEDGEMENT

I am grateful to Dr. P. ChinnaDurai, Secretary and Correspondent, Panimalar Engineering College, for his support, which enabled me to do this project.

I am grateful to Dr. Mani, Principal, Panimalar Engineering College, for his kindness, which enabled me to do this project.

I also express my gratitude to Mrs. Anne Princy, Head of the Department of Information Technology, Panimalar Engineering College, for her support throughout the project.

I would like to take this opportunity to express my sincere thanks to my internal guide, Ms. Muthulakshmi, Lecturer, Information Technology, Panimalar Engineering College, for her valuable guidance and technical support in my project work.

I also express my gratitude to the Project Coordinator, Information Technology, Panimalar Engineering College, for their valuable guidance and support throughout my project work.

SHARMILA.B


TABLE OF CONTENTS

CHAPTER NO.   TOPIC                                                    PAGE NO.

              ABSTRACT                                                 iii
              LIST OF FIGURES                                          viii
              LIST OF ABBREVIATIONS                                    ix

1             INTRODUCTION                                             1
              1.1 BOTS                                                 1
              1.2 GENERAL                                              1
              1.3 SUMMARY                                              2

2             LITERATURE SURVEY                                        3
              2.1 GENERAL                                              3
              2.2 LITERATURE SURVEY                                    3
              2.3 SUMMARY                                              6

3             SYSTEM ANALYSIS                                          7
              3.1 EXISTING P2P BOTNETS AND THEIR PROBLEMS              7
              3.2 PROPOSED P2P BOTNETS                                 8
                  3.2.1 Two classes of bots                            8
                  3.2.2 Botnet Command and Control Architecture        9
                  3.2.3 Relationship between Traditional C&C Botnets
                        and the Proposed Botnet                        9
              3.3 SUMMARY                                              9

4             SYSTEM DESIGN                                            10
              4.1 MODULES                                              10
              4.2 KEY GENERATION                                       10
                  4.2.1 Cloud operating system                         10
                  4.2.2 Individualized encryption key                  10
                  4.2.3 Individualized service port                    11
                  4.2.4 Blowfish algorithm                             12
                  4.2.5 Feistel network                                12
                  4.2.6 Description of blowfish algorithm              13
                  4.2.7 DES Algorithm                                  13
              4.3 BOTNET CONSTRUCTION                                  15
                  4.3.1 Basic Construction Procedure                   15
                  4.3.2 Advanced Construction Procedure                17
              4.4 MONITORING                                           18
                  4.4.1 Defense Against the Proposed Hybrid P2P
                        Botnet                                         18
                  4.4.2 Annihilation                                   18
                  4.4.3 Botnet Monitoring Based on Honeypot
                        Techniques                                     20
                  4.4.4 Botnet Monitoring Based on Spying Honeypot     20
              4.5 ARCHITECTURE DIAGRAM                                 23
              4.6 UML DIAGRAMS                                         24
                  4.6.1 Sequence diagram                               24
                  4.6.2 Activity diagram                               25
                  4.6.3 Data flow diagram                              26
              4.7 SUMMARY                                              28

5             SYSTEM REQUIREMENTS                                      29
              5.1 GENERAL                                              29
              5.2 SYSTEM REQUIREMENTS                                  29
              5.3 ABOUT JAVA                                           29
                  5.3.1 Introduction                                   30
                  5.3.2 Why we choose java                             30
              5.4 SUMMARY                                              30

6             IMPLEMENTATION AND RESULTS                               31
              6.1 GENERAL                                              31
                  6.1.1 Botmaster monitoring the bots                  31
                  6.1.2 To enter the IP Address                        32
                  6.1.3 To view the IP Address and Resources files     33
                  6.1.4 Encrypting the command                         34
                  6.1.5 The command encrypted successfully             35
              6.2 SUMMARY                                              35

7             CONCLUSION AND FUTURE ENHANCEMENT                        36
              7.1 CONCLUSION                                           36
              7.2 FUTURE WORK                                          36

              REFERENCES                                               37


LIST OF FIGURES

FIGURE NO. DESCRIPTION PAGE NO.

4.1 FEISTEL NETWORK ARCHITECTURE 15

4.2 DES ALGORITHM 17

4.3 ARCHITECTURE DIAGRAM 21

4.4 SEQUENCE DIAGRAM 22

4.5 ACTIVITY DIAGRAM 23

4.6 LEVEL 0 DFD DIAGRAM 24

4.7 LEVEL 1 DFD DIAGRAM 24

4.8 LEVEL 2 DFD DIAGRAM 25

4.9 LEVEL 3 DFD DIAGRAM 26


LIST OF ABBREVIATIONS

C&C - Command and control

IRC - Internet Relay Chat

DoS - Denial of service

P2P - Peer to Peer

DHCP - Dynamic host configuration protocol

DES - Data Encryption Standard

ACM - Access Control Model

VoIP - Voice over IP

CRF - Conditional Random Fields


CHAPTER 1

INTRODUCTION

1.1 BOTS

Internet bots, also known as web robots, WWW robots or simply bots, are software

applications that run automated tasks over the Internet. Typically, bots perform tasks that

are both simple and structurally repetitive, at a much higher rate than would be possible for

a human alone. The largest use of bots is in web spidering, in which an automated script

fetches, analyzes and files information from web servers at many times the speed of a

human. Each server can have a file called robots.txt, containing rules for the spidering of

that server that the bot is supposed to obey.

In addition to their uses outlined above, bots may also be implemented where a

response speed faster than that of humans is required (e.g., gaming bots and auction-site

robots) or less commonly in situations where the emulation of human activity is required,

for example chat bots.

1.2 GENERAL

In the last several years, Internet malware attacks have evolved into better-organized and more profit-centered endeavors. E-mail spam, extortion through denial-of-service attacks, and click fraud represent a few examples of this emerging trend.

“Botnets” are a root cause of these problems. A “botnet” consists of a network of

compromised computers (“bots”) connected to the Internet that is controlled by a remote

attacker (“botmaster”). Since a botmaster could scatter attack tasks over hundreds or even

tens of thousands of computers distributed across the Internet, the enormous cumulative

bandwidth and large number of attack sources make botnet-based attacks extremely

dangerous and hard to defend against.

Compared to other Internet malware, the unique feature of a botnet lies in its control communication network. Most botnets that have appeared until now have had a common centralized architecture. That is, bots in the botnet connect directly to some special hosts (called “command-and-control” servers, or “C&C” servers). These C&C servers receive commands from their botmaster and forward them to the other bots in the network. From now on, we will call a botnet with such a control communication architecture a “C&C botnet.” Fig. 1 shows the basic control communication architecture for a typical C&C botnet (in reality, a C&C botnet usually has more than two C&C servers). Arrows represent the directions of network connections.

As botnet-based attacks become popular and dangerous, security researchers have

studied how to detect, monitor, and defend against them. Most of the current research has

focused upon the C&C botnets that have appeared in the past, especially Internet Relay Chat (IRC)-based botnets. It is necessary to conduct such research in order to deal with the

threat we are facing today. However, it is equally important to conduct research on

advanced botnet designs that could be developed by attackers in the near future. Otherwise,

we will remain susceptible to the next generation of Internet malware attacks.

From a botmaster’s perspective, the C&C servers are the fundamental weak points

in current botnet architectures. First, a botmaster will lose control of her botnet once the

limited number of C&C servers are shut down by defenders. Second, defenders could

easily obtain the identities (e.g., IP addresses) of all C&C servers based on their service

traffic to a large number of bots, or simply from one single captured bot (which contains

the list of C&C servers). Third, an entire botnet may be exposed once a C&C server in the

botnet is hijacked or captured by defenders. As network security practitioners put more

resources and effort into defending against botnet attacks, hackers will develop and deploy

the next generation of botnets with a different control architecture.

1.3 SUMMARY

This report is organized into chapters, each focusing on a different aspect of the project. Chapter 1 introduces the various aspects of the project. Chapter 2 presents the literature survey. Chapter 3 explains the system analysis. Chapter 4 gives an overview of the system design, the list of modules with their descriptions, and the block diagrams. Chapter 5 addresses the system requirements. The implementation and results are depicted in Chapter 6, with the metrics and performance. Chapter 7 presents the conclusion of the project.

CHAPTER 2

LITERATURE SURVEY

2.1 GENERAL

This chapter gives an overall description of the reference papers, through which we can identify the problems of the existing methodology, as well as the methods to overcome those problems.

2.2 LITERATURE SURVEY

Paper 1. “Botz-4-Sale: Surviving Organized DDoS Attacks That Mimic Flash Crowds,” Proc. Second Symp. Networked Systems Design and Implementation (NSDI ’05).

Recent denial-of-service attacks are mounted by professionals using botnets of tens of thousands of compromised machines. To circumvent detection, attackers are increasingly moving away from bandwidth floods to attacks that mimic the Web browsing behavior of a large number of clients and target expensive higher-layer resources such as CPU, database, and disk bandwidth. The resulting attacks are hard to defend against using standard techniques, as the malicious requests differ from the legitimate ones in intent but not in content. We present the design and implementation of Kill-Bots, a kernel extension to protect Web servers against DDoS attacks that masquerade as flash crowds. Kill-Bots provides authentication using graphical tests but is different from other systems that use graphical tests. First, Kill-Bots uses an intermediate stage to identify the IP addresses that ignore the test and persistently bombard the server with requests despite repeated failures at solving the tests. These machines are bots because their intent is to congest the server. Once these machines are identified, Kill-Bots blocks their requests, turns the graphical tests off, and allows access to legitimate users who are unable or unwilling to solve graphical tests. Second, Kill-Bots sends a test and checks the client’s answer without allowing unauthenticated clients access to sockets, TCBs, and worker processes. Thus, it protects the authentication mechanism from being DDoSed. Third, Kill-Bots combines authentication with admission control. As a result, it improves performance, regardless of whether the server overload is caused by DDoS or a true flash crowd.

Paper 2. “Botnet Tracking: Exploring a Root-Cause Methodology to Prevent Distributed Denial-of-Service Attacks,” Technical Report AIB-2005-07, CS Dept., RWTH Aachen Univ., Apr. 2005.

Denial-of-Service (DoS) attacks pose a significant threat to the Internet today

especially if they are distributed, i.e., launched simultaneously at a large number of

systems. Reactive techniques that try to detect such an attack and throttle down malicious

traffic prevail today but usually require an additional infrastructure to be really effective.

In this paper we show that preventive mechanisms can be as effective with much less

effort: We present an approach to (distributed) DoS attack prevention that is based on the

observation that coordinated automated activity by many hosts needs a mechanism to

remotely control them. To prevent such attacks, it is therefore possible to identify, infiltrate

and analyze this remote control mechanism and to stop it in an automated fashion. We

show that this method can be realized in the Internet by describing how we infiltrated and

tracked IRC-based botnets which are the main DoS technology used by attackers today.

Paper 3. “Modeling Botnet Propagation Using Time Zones,” Proc. 13th Ann. Network and Distributed System Security Symp. (NDSS ’06), pp. 235-249, Feb. 2006.

Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six-month period we observed dozens of botnets representing millions of victims. We noted diurnal properties in botnet activity, which we suspect occur because victims turn their computers off at night. Through binary analysis, we also confirmed that some botnets demonstrated a bias in infecting regional populations. Clearly, computers that are offline are not infectious, and any regional bias in infections will affect the overall growth of the botnet. We therefore created a diurnal propagation model. The model uses diurnal shaping functions to capture regional variations in online vulnerable populations. The diurnal model also lets one compare propagation rates for different botnets and prioritize response. Because of variations in release times and diurnal shaping functions particular to an infection, botnets released later in time may actually surpass other botnets that had an earlier start. Since response times for malware outbreaks are now measured in hours, being able to predict short-term propagation dynamics lets us allocate resources more intelligently. We used empirical data from botnets to evaluate the analytical model.

Paper 4. “The Zombie Roundup: Understanding, Detecting, and Disrupting Botnets,” Proc. USENIX Workshop on Steps to Reducing Unwanted Traffic on the Internet (SRUTI ’05), July 2005.

Global Internet threats are undergoing a profound transformation from attacks

designed solely to disable infrastructure to those that also target people and organizations.

Behind these new attacks is a large pool of compromised hosts sitting in homes, schools,

businesses, and governments around the world. These systems are infected with a bot that

communicates with a bot controller and other bots to form what is commonly referred to as

a zombie army or botnet. Botnets are a very real and quickly evolving problem that is still

not well understood or studied. In this paper we outline the origins and structure of bots

and botnets and use data from the operator community, the Internet Motion Sensor project,

and a honeypot experiment to illustrate the botnet problem today. We then study the

effectiveness of detecting botnets by directly monitoring IRC communication or other

command and control activity and show a more comprehensive approach is required. We

conclude by describing a system to detect botnets that utilize advanced command and

control systems by correlating secondary detection data from multiple sources.

Paper 6. “Army of Botnets,” Proc. 14th Ann. Network and Distributed System Security Symp. (NDSS ’07), Feb. 2007.

The trend toward smaller botnets may be more dangerous than large botnets, in terms of large-scale attacks like distributed denials of service. We examine the possibility of “super-botnets,” networks of independent botnets that can be coordinated for attacks of unprecedented scale. For an adversary, super-botnets would also be extremely versatile and resistant to countermeasures. As such, super-botnets must be examined by the research community, so that defenses against this threat can be developed proactively. Our simulation results shed light on the feasibility and structure of super-botnets and some properties of their command-and-control mechanism. New forms of attack that super-botnets can launch are explored, and possible defenses against the threat of super-botnets are suggested.

2.3 SUMMARY

This chapter explains some of the information present in these papers, which are

used as references for the development of this project.

CHAPTER 3

SYSTEM ANALYSIS

3.1 EXISTING P2P BOTNETS AND THEIR PROBLEMS

Considering the above weaknesses inherent to the centralized architecture of current C&C botnets, it is a natural strategy for attackers to design a peer-to-peer (P2P) control mechanism into their botnets. In the last several years, botnets such as Slapper, Sinit, Phatbot, and Nugache have implemented different kinds of P2P control architectures. They have shown several advanced designs. For example, some of them have removed the “bootstrap” process used in common P2P protocols. Sinit uses public-key cryptography for update authentication. Nugache attempts to thwart detection by implementing an encrypted/obfuscated control channel.

Nevertheless, simply migrating available P2P protocols will not generate a sound botnet, and the P2P designs used by several botnets in the past are not mature and have many weaknesses. To remove the bootstrap procedure, a Sinit bot uses random probing to find other Sinit bots to communicate with. This results in poor connectivity for the constructed botnet and easy detection due to the extensive probing traffic. Phatbot utilizes Gnutella cache servers for its bootstrap process. This botnet can be easily shut down if the security community sets up filters on those Gnutella cache servers, or blocks any traffic to and from those cache servers.

In addition, its underlying WASTE P2P protocol is not scalable across a large

network. Nugache’s weakness lies in its reliance on a seed list of 22 IP addresses during its

bootstrap process. Slapper fails to implement encryption and command authentication,

enabling it to be easily hijacked by others. In addition, its list of known bots contains all

(or almost all) members of the botnet. Thus, one single captured bot would expose the

entire botnet to defenders. Furthermore, its complicated communication mechanism

generates a large amount of traffic, rendering it susceptible to monitoring via network flow

analysis.

3.2 PROPOSED P2P BOTNETS

3.2.1 Two Classes of Bots

The bots in the proposed P2P botnet are classified into two groups. The first group

contains bots that have static, nonprivate IP addresses and are accessible from the global

Internet. Bots in the first group are called servent bots since they behave as both clients and

servers. The second group contains the remaining bots, including 1) bots with dynamically

allocated IP addresses, 2) bots with private IP addresses, and 3) bots behind firewalls such

that they cannot be connected from the global Internet. The second group of bots is called


client bots since they will not accept incoming connections. Only servent bots are

candidates in peer lists. All bots, including both client bots and servent bots, actively

contact the servent bots in their peer lists to retrieve commands. Because servent bots

normally do not change their IP addresses, this design increases the network stability of a

botnet. This bot classification will become more important in the future, as a larger proportion of computers will sit behind firewalls, or use “Dynamic Host Configuration Protocol” (DHCP) or private IP addresses, due to the shortage of IP address space.

A bot could easily determine the type of IP address used by its host machine. For

example, on a Windows machine, a bot could run the command “ipconfig /all.” Not all

bots with static global IP addresses are qualified to be servent bots—some of them may

stay behind a firewall, inaccessible from the global Internet. A botmaster could rely on the

collaboration between bots to determine such bots. For example, a bot runs its server

program and requests the servent bots in its peer list to initiate connections to its service

port. If the bot could receive such test connections, it labels itself as a servent bot.

Otherwise, it labels itself as a client bot.

3.2.2 Botnet Command and Control Architecture

Fig. 2 shows the C&C architecture of the proposed botnet. The illustrative botnet shown in this

figure has five servent bots and three client bots. The peer list size is two (i.e., each bot’s

peer list contains the IP addresses of two servent bots). An arrow from bot A to bot B

represents bot A initiating a connection to bot B. This figure shows that a big cloud of

servent bots interconnect with each other—they form the backbone of the control

communication network of a botnet. A botmaster injects her commands through any bot(s)

in the botnet. Both client and servent bots periodically connect to the servent bots in their

peer lists in order to retrieve commands issued by their botmaster. When a bot receives a

new command that it has never seen before (e.g., each command has a unique ID), it

immediately forwards the command to all servent bots in its peer list. In addition, if it is itself a servent bot, it will also forward the command to any bots connecting to it.


This description of command communication means that, in terms of command

forwarding, the proposed botnet has an undirected graph topology. A botmaster’s

command could pass via the links in both directions. If the size of the botnet peer list is

denoted by M, then this design makes sure that each bot has at least M venues to receive

commands.

3.2.3 Relationship between Traditional C&C Botnets and the Proposed Botnet

Compared to a C&C botnet (see Fig. 1), it is easy to see that the proposed hybrid

P2P botnet shown in Fig. 2 is actually an extension of a C&C botnet. The hybrid P2P

botnet is equivalent to a C&C botnet where servent bots take the role of C&C servers: the

number of C&C servers (servent bots) is greatly enlarged, and they interconnect with each

other. Indeed, the large number of servent bots is the primary reason why the proposed hybrid P2P botnet is very hard to shut down.

3.3 SUMMARY

This chapter explains current botnets and their problems, and describes the existing and proposed work of this project.

CHAPTER 4

SYSTEM DESIGN

4.1 MODULES

1. Key generation
2. Botnet construction
3. Monitoring

4.2 KEY GENERATION

4.2.1 Command Authentication

Compared with a C&C botnet, because bots in the proposed botnet do not receive commands from predefined places, it is especially important to implement strong command authentication. A standard public-key authentication would be sufficient. The master generates a pair of public/private keys. There is no need for key distribution, because the public key is hard-coded into the generated bot program. Later, the command messages sent by the master can be digitally signed with the private key to ensure their authenticity and integrity. This public-key-based authentication could also be readily deployed by current C&C botnets, so botnet hijacking is not a major issue.
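The public-key command authentication described above can be sketched with Java’s standard java.security API. This is a minimal, generic code-signing illustration under our own assumptions, not the project’s actual implementation; the class name, example command string, and method names are ours.

```java
import java.security.*;

public class CommandAuth {
    // Sign a command message with the master's private key.
    static byte[] sign(byte[] command, PrivateKey priv) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(priv);
        s.update(command);
        return s.sign();
    }

    // Any node holding the hard-coded public key can check authenticity and integrity.
    static boolean verify(byte[] command, byte[] sig, PublicKey pub) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(pub);
        s.update(command);
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] cmd = "UPDATE id=42".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] sig = sign(cmd, kp.getPrivate());
        System.out.println(verify(cmd, sig, kp.getPublic()));   // genuine message verifies
        byte[] forged = "UPDATE id=43".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        System.out.println(verify(forged, sig, kp.getPublic())); // tampered message fails
    }
}
```

Note that only the public key needs to be embedded in the receiving program; the signing key never leaves its owner, which is exactly why no key-distribution step is required.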

4.2.2 Individualized Encryption and decryption Key

In the proposed botnet, the server randomly generates its symmetric encryption key. Only a client holding the same key is able to decrypt the contents.

This individualized encryption and decryption guarantees that if defenders capture one bot, they will not be able to decrypt the traffic of other bots unless they also obtain those bots’ keys. Thus, individualized encryption and decryption prevents one captured bot from compromising the remaining systems.
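Since the key-generation module names the Blowfish algorithm, the idea of an individualized symmetric key can be sketched with the JCE’s built-in Blowfish cipher. This is an illustrative sketch only; the class name, key size, and cipher mode are our own choices, not taken from the project code.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class IndividualKey {
    // Each peer link gets its own randomly generated Blowfish key.
    static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("Blowfish");
        kg.init(128);                          // Blowfish accepts 32..448-bit keys
        return kg.generateKey();
    }

    static byte[] crypt(int mode, SecretKey k, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("Blowfish/ECB/PKCS5Padding");
        c.init(mode, k);
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        SecretKey k1 = newKey();
        SecretKey k2 = newKey();               // a second, independent link key
        byte[] ct = crypt(Cipher.ENCRYPT_MODE, k1, "report".getBytes(java.nio.charset.StandardCharsets.UTF_8));
        // Only the holder of k1 recovers the plaintext; decrypting with k2 fails
        // (a padding error) or yields garbage, which is the point of per-link keys.
        System.out.println(new String(crypt(Cipher.DECRYPT_MODE, k1, ct), java.nio.charset.StandardCharsets.UTF_8));
    }
}
```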

4.2.5 Feistel Networks

A Feistel network is a general method of transforming any function (usually called an F-function) into a permutation. It was invented by Horst Feistel and has been used in many block cipher designs. The working of a Feistel network is given below:

Split each block into halves.

The right half becomes the new left half.

The new right half is the result of XORing the old left half with f applied to the old right half and the round key.

Note that each previous round can be derived (i.e., the cipher can be decrypted) even if the function f is not invertible.


Figure 4.1 Feistel network architecture
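The three steps above can be sketched directly. In this toy sketch (round count, keys, and the round function are our own arbitrary choices) the function f is deliberately non-invertible; decryption still works by running the rounds in reverse order, which is the defining property of the Feistel construction.

```java
public class Feistel {
    static final int ROUNDS = 8;
    static final int[] KEYS = {3, 1, 4, 1, 5, 9, 2, 6};   // toy round keys

    // An arbitrary, non-invertible round function F(right, key).
    static int f(int right, int key) {
        return Integer.rotateLeft(right ^ key, 5) * 0x9E3779B9;
    }

    // One round: newLeft = right; newRight = left XOR f(right, key).
    static long encrypt(long block) {
        int left = (int) (block >>> 32), right = (int) block;
        for (int i = 0; i < ROUNDS; i++) {
            int newLeft = right;
            right = left ^ f(right, KEYS[i]);
            left = newLeft;
        }
        return ((long) left << 32) | (right & 0xFFFFFFFFL);
    }

    // Decryption undoes each round with the keys in reverse order;
    // f itself is never inverted, only re-applied.
    static long decrypt(long block) {
        int left = (int) (block >>> 32), right = (int) block;
        for (int i = ROUNDS - 1; i >= 0; i--) {
            int newRight = left;
            left = right ^ f(left, KEYS[i]);
            right = newRight;
        }
        return ((long) left << 32) | (right & 0xFFFFFFFFL);
    }

    public static void main(String[] args) {
        long plain = 0x0123456789ABCDEFL;
        System.out.println(decrypt(encrypt(plain)) == plain);   // prints: true
    }
}
```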

4.2.7 DES Algorithm

Encryption of a block of the message takes place in 16 stages or rounds. From the input key, sixteen 48-bit subkeys are generated, one for each round. In each round, eight so-called S-boxes are used. These S-boxes are fixed in the specification of the standard. Using the S-boxes, groups of six bits are mapped to groups of four bits. The contents of these S-boxes were determined by the U.S. National Security Agency (NSA). The S-boxes appear to be randomly filled, but this is not the case. It has since been discovered that these S-boxes, determined in the 1970s, are resistant against an attack called differential cryptanalysis, which first became publicly known in the 1990s.

The block of the message is divided into two halves. The right half is expanded

from 32 to 48 bits using another fixed table. The result is combined with the subkey for

that round using the XOR operation. Using the S-boxes the 48 resulting bits are then

transformed again to 32 bits, which are subsequently permuted using yet another fixed table. This by now thoroughly shuffled right half is then combined with the left half using the XOR operation. In the next round, this combination is used as the new left half.


Figure 4.2 DES algorithm

The figure should make this process a bit clearer. In the figure, the left and right halves are denoted L0 and R0, and in subsequent rounds L1, R1, L2, R2, and so on. The function f is responsible for all the mappings described above.

4.3 BOTNET CONSTRUCTION

Unlike a traditional C&C-based botnet, the proposed hybrid P2P botnet does not have a pre-fixed communication architecture. Its network connectivity is solely determined

by the peer list in each bot. We will introduce the peer list construction procedure in this

section.


4.3.1 Basic Construction Procedure

A natural way to build peer lists is to construct them as a botnet propagates. To

make sure that a constructed botnet is connected, the initial set of bots should contain some

servent bots whose IP addresses are in the peer list in every initial bot. Suppose the size of

peer list in each bot is configured . As a bot program propagates, the peer list in each bot is

constructed according to the following procedure:

Bot A passes its peer list to a vulnerable host B when compromising it. If A is a

servent bot, B adds A into its peer list (by randomly replacing one entry if its peer list is

full). If A knows that B is a servent bot (A may not be aware of B’s identity, for example,

when B is compromised by an e-mail virus sent from A), A adds B into its peer list in the

same way.

In order to study a constructed botnet topology and its robustness via simulations, we first need to determine simulation settings. Studies of P2P file-sharing systems have observed that around 50 percent of computers change their IP addresses within four to five days, so we expect the fraction of bots with dynamic addresses to be in a similar range. In addition, some other bots are behind firewalls or NAT boxes, so that they cannot accept Internet connections. We could not find a good source for these statistics, so in this paper we assume that 25 percent of bots are servent bots.

Botnets in recent years have dropped their sizes to an average of 20,000, even

though the potential vulnerable population is much larger. Thus, we assume a botnet has a

potential vulnerable population of 500,000, but stops growing after it reaches the size of

20,000. In addition, we assume that the peer list has a size of M = 20 and that there are 21 initial servent hosts to start the spread of the botnet. In this way, the peer list on every bot is always full.


Because scanning and vulnerability exploitation is the dominant infection mechanism used by current botnets, in this paper we simulate the construction of a botnet by assuming that the bot code finds and compromises vulnerable computers in a similar way to a scanning worm. We then examine the degree distribution for servent bots (client bots always have a degree of M, equal to the size of the peer list) after the botnet has accumulated 20,000 members. Because the botnet stops growing when it reaches the size of 20,000, reinfection events rarely happen (only around 600). For this reason, connections to servent bots are extremely unbalanced: more than 80 percent (4,000) of the servent bots have degrees less than 30, while each of the 21 initial servent bots has a degree between 14,000 and 17,500. This is not an ideal botnet: the constructed hybrid P2P botnet is approximately degraded to a C&C botnet where the initial set of servent bots behave as C&C servers.
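The unbalanced degree distribution described above can be reproduced with a small graph simulation. This is an abstract model only (nodes, lists, and random events; no networking), with the population scaled down from the report’s 20,000 for brevity and with our own class and parameter names; the qualitative outcome, the initial nodes hoarding most peer-list entries, is the same.

```java
import java.util.*;

public class PeerListSim {
    // Simulate the basic peer-list construction as a pure graph process.
    static int[] simulate(int finalSize, int init, int M, double serventFrac, long seed) {
        Random rnd = new Random(seed);
        boolean[] servent = new boolean[finalSize];
        List<List<Integer>> peers = new ArrayList<>();
        // Initial servent nodes: each peer list holds the other initial nodes.
        for (int i = 0; i < init; i++) {
            servent[i] = true;
            List<Integer> pl = new ArrayList<>();
            for (int j = 0; j < init && pl.size() < M; j++) if (j != i) pl.add(j);
            peers.add(pl);
        }
        for (int b = init; b < finalSize; b++) {
            int a = rnd.nextInt(b);                       // existing node A reaches new node B
            servent[b] = rnd.nextDouble() < serventFrac;  // ~25% have static global IPs
            peers.add(new ArrayList<>(peers.get(a)));     // B inherits A's peer list
            if (servent[a]) addEntry(peers.get(b), a, M, rnd);
            if (servent[b]) addEntry(peers.get(a), b, M, rnd);
        }
        int[] degree = new int[finalSize];                // appearances in any peer list
        for (List<Integer> pl : peers) for (int v : pl) degree[v]++;
        return degree;
    }

    static void addEntry(List<Integer> pl, int v, int M, Random rnd) {
        if (pl.contains(v)) return;
        if (pl.size() < M) pl.add(v);
        else pl.set(rnd.nextInt(M), v);                   // randomly replace one entry when full
    }

    public static void main(String[] args) {
        int[] deg = simulate(2000, 21, 20, 0.25, 1L);
        long initTotal = 0, allTotal = 0;
        for (int i = 0; i < 21; i++) initTotal += deg[i];
        for (int d : deg) allTotal += d;
        // The 21 initial servent bots absorb the bulk of all peer-list entries.
        System.out.println("initial bots' share of entries: " + (100 * initTotal / allTotal) + "%");
    }
}
```

Running this shows the degeneration the text describes: the initial servent nodes dominate the peer lists, so the topology behaves like a C&C botnet centered on them.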

4.3.2 Advanced Construction Procedure

One intuitive way to improve the network connectivity would be to let bots interact with any number of clients and with the master. The client-master interaction is done with the help of sockets. A socket is one endpoint of a two-way communication link between two programs running on the network. A socket is bound to a port number so that the TCP layer can identify the application to which data is destined to be sent. The master acts as a server as well as a client; it is used both to transfer files and to encrypt and decrypt them. The resources are saved on the server system. To access the server’s resources from the client side, we use the Remote Method Invocation (RMI) technique. Java RMI enables the programmer to create distributed Java-to-Java applications, in which the methods of remote Java objects can be invoked from other Java virtual machines, possibly on different hosts. RMI uses object serialization to marshal and unmarshal parameters and does not truncate types, supporting true object-oriented polymorphism.


4.4 MONITORING

4.4.1 Defense Against The Proposed Hybrid P2P Botnet

In this section, we discuss how defenders might defend against such an advanced

botnet. In addition, we provide simulation studies and mathematical analysis of the

performance of botnet monitoring.

4.4.2 Fortification

First, the proposed hybrid P2P botnet relies on “servent bots” in constructing its communication network. If the botnet is unable to acquire a large number of servent bots, it will be degraded to a traditional C&C botnet (the relationship of these two botnets is discussed in Section 3.2.3), which is much easier to shut down. For this reason, defenders should focus their defense effort on computers with static global IP addresses, preventing them from being compromised, or removing compromised ones quickly.

The second defense method relies on honeypot techniques. If the botmaster cannot detect honeypots, defenders can try to destroy the botnet's communication channel. If defenders let their infected honeypots join the botnet and claim to have static global IP addresses (with the honeypots configured to accept connections from other bots), the honeypots will be treated as servent bots. As a result, they will occupy many positions in the peer lists of many bots, greatly decreasing the number of valid communication channels in the hybrid P2P botnet. With this detailed knowledge of the botnet, defenders can then effectively shut it down by cutting off its remaining fragile communication channels.
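To get a feel for how honeypots posing as servent bots dilute the valid channels, the following is a small Monte Carlo sketch. It is our own back-of-envelope illustration with made-up numbers (not from the report): it estimates how often a bot's entire peer list ends up pointing only at honeypots when peers are sampled uniformly (with replacement, for simplicity).

```java
import java.util.Random;

// Illustrative simulation: if honeypots make up a fraction of the servent
// bots and each bot fills its peer list of size M by sampling servent bots
// uniformly, how often is a bot's list ALL honeypots (so every command it
// relays passes through defenders)?
public class HoneypotOccupancy {
    static double allHoneypotRate(int servents, int honeypots, int listSize,
                                  int trials, long seed) {
        Random rng = new Random(seed);
        int cutOff = 0;
        for (int t = 0; t < trials; t++) {
            boolean all = true;
            for (int i = 0; i < listSize; i++) {
                // Indices below `honeypots` represent honeypot servent bots.
                if (rng.nextInt(servents) >= honeypots) { all = false; break; }
            }
            if (all) cutOff++;
        }
        return (double) cutOff / trials;
    }

    public static void main(String[] args) {
        // Made-up numbers: 5,000 servent bots, 1,000 of them honeypots, peer list of 5.
        double rate = allHoneypotRate(5000, 1000, 5, 1_000_000, 42L);
        System.out.printf("Fraction of bots whose whole peer list is honeypots: %.6f%n", rate);
        // Analytically (with replacement) this is (1000/5000)^5 = 0.2^5 = 3.2e-4.
    }
}
```

Even when few bots are fully cut off, each honeypot entry in a peer list is a channel the defenders observe, which is the effect the paragraph above describes.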

4.4.3 Botnet Monitoring Based on Honeypot Techniques

A honeypot is an effective way to trap and spy on malware and malicious activities. Because the compromised machines in a botnet need to cooperate and work together, honeypot techniques are particularly effective against botnets: a honeypot can act like a server toward the hackers. The second defense method introduced above relies on these honeypot techniques.


4.5 ARCHITECTURE DIAGRAM

Figure 4.3 Architecture diagram


4.6 UML DIAGRAM

4.6.1 Sequence diagram


Figure 4.4 Sequence diagram

4.6.2 Activity diagram

The activity diagram involves three participants: the master, the honeypot, and the client. The master sends its public key using the bot program and encrypts the command with its private key; after the symmetric key is sent, the receiving side decrypts the command and checks it. The command is then encrypted with the symmetric key, and the client decrypts and reads it.


Figure 4.5 Activity diagram
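The command exchange in the activity diagram can be sketched roughly as follows. The report names DES as the symmetric cipher; the use of an RSA pair to wrap the symmetric key, and all class and method names, are our own illustrative assumptions, and DES itself is obsolete outside this kind of sketch.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

// Sketch of the key-exchange flow: a symmetric session key is wrapped with a
// public key, the command is encrypted under the session key, and the client
// unwraps the key and decrypts the command.
public class CommandExchangeSketch {
    public static String roundTrip(String command) throws Exception {
        // Master side: an RSA pair plus a fresh DES session key.
        KeyPair rsa = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        SecretKey sessionKey = KeyGenerator.getInstance("DES").generateKey();

        // "Sending symmetric key": wrap it with the public key.
        Cipher wrap = Cipher.getInstance("RSA");
        wrap.init(Cipher.WRAP_MODE, rsa.getPublic());
        byte[] wrappedKey = wrap.wrap(sessionKey);

        // "Command encryption using symmetric key".
        Cipher enc = Cipher.getInstance("DES");
        enc.init(Cipher.ENCRYPT_MODE, sessionKey);
        byte[] cipherText = enc.doFinal(command.getBytes("UTF-8"));

        // Client side: unwrap the session key, then "decrypt the command".
        Cipher unwrap = Cipher.getInstance("RSA");
        unwrap.init(Cipher.UNWRAP_MODE, rsa.getPrivate());
        SecretKey recovered = (SecretKey) unwrap.unwrap(wrappedKey, "DES", Cipher.SECRET_KEY);

        Cipher dec = Cipher.getInstance("DES");
        dec.init(Cipher.DECRYPT_MODE, recovered);
        return new String(dec.doFinal(cipherText), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("UPDATE peer list"));
    }
}
```

Only a party holding the private key can unwrap the session key, which is what gives the botmaster exclusive control over command decryption in the flow above.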

4.6.3 Data flow diagram


Level 0:

Figure 4.6 Level 0 Data flow diagram

Level 1:

Figure 4.7 Level 1 Data flow diagram

In the Level 0 diagram, hackers launch Internet attacks on the peer-to-peer network. In the Level 1 diagram, the master sends files through a command and control (C&C) server, which is easily affected by hackers; the master therefore takes control over all peers by implementing a hybrid peer-to-peer botnet that is harder to shut down, monitor, and hack.


Level 2:

Figure 4.8 Level 2 Data flow diagram

In the Level 2 diagram, the master monitors the unauthorized and compromised systems, implementing a hybrid peer-to-peer botnet that is harder to shut down, monitor, and hijack.


Level 3:

Figure 4.9 Level 3 Data flow diagram

4.7 SUMMARY

This chapter explains the list of modules with their descriptions. The Blowfish and DES algorithms are described in detail.

In the Level 3 data flow diagram (Figure 4.9), the master generates a key, selects a file to send over the peer-to-peer network, encrypts the file, and stores the encrypted file in the honeypot, all under the master's monitoring. A hacker can obtain only the encrypted form of the file from the server; a client given the correct individualized decryption key decrypts the file and recovers the original.
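The Level 3 flow, where only a client holding the master's key can recover the stored file, can be sketched with the Blowfish cipher mentioned in the summary. The class and method names are illustrative, and the default cipher parameters are used for brevity.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Minimal sketch of the Level 3 flow: the master generates a key and encrypts
// the file bytes; what the honeypot stores (and a hacker can read) is only the
// ciphertext, and the keyed client recovers the original.
public class FileProtectionSketch {
    static byte[] crypt(int mode, SecretKey key, byte[] data) throws Exception {
        Cipher blowfish = Cipher.getInstance("Blowfish"); // default mode/padding
        blowfish.init(mode, key);
        return blowfish.doFinal(data);
    }

    public static String roundTrip(String fileContents) throws Exception {
        SecretKey key = KeyGenerator.getInstance("Blowfish").generateKey(); // master generates the key
        byte[] stored = crypt(Cipher.ENCRYPT_MODE, key, fileContents.getBytes("UTF-8"));
        // A hacker reading the store sees only `stored`; the client holding
        // the key decrypts and gets the original file back.
        return new String(crypt(Cipher.DECRYPT_MODE, key, stored), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("original file contents"));
    }
}
```

Handing each client an individualized key, as the diagram suggests, would mean encrypting (or re-wrapping) the file separately per client so that one captured key does not expose the others.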


CHAPTER 5

SYSTEM REQUIREMENTS

5.1 GENERAL

This chapter describes the hardware and software requirements of the system and the significance of the software languages used in the system design.

5.2 SYSTEM REQUIREMENTS

Hardware Requirements

Processor : Intel Pentium IV and above

Hard Disk : 20 GB

Cache Memory : 512 KB

RAM : 256 MB

Monitor : Color monitor

Keyboard : 104 keys

Cables : Serial cables & physical extensions

Software Requirements

Windows XP and above

JDK 1.6 or above

5.3 ABOUT JAVA

In this system, Java is used for various purposes, mainly for implementing the various algorithms and for computing the metrics; computations can be done in an efficient manner.

5.3.1 Introduction


Java is a computer programming language. It enables programmers to write computer instructions using English-based commands, instead of having to write in numeric codes. It is known as a "high-level" language because it can be read and written easily by humans. Like English, Java has a set of rules that determine how the instructions are written; these rules are known as its "syntax". Once a program has been written, the high-level instructions are translated into numeric codes that computers can understand and execute.

5.3.2 Why we chose Java

Easy to Use: The fundamentals of Java came from a programming language called C++. Although a powerful language, C++ was felt to be too complex in its syntax and inadequate for all of Java's requirements. Java built on and improved the ideas of C++ to provide a programming language that was powerful and simple to use.

Reliability: Java needed to reduce the likelihood of fatal errors from programmer

mistakes. With this in mind, object-oriented programming was introduced. Once data and

its manipulation were packaged together in one place, it increased Java’s robustness.

Secure: As Java originally targeted mobile devices that would exchange data over networks, it was built to include a high level of security, and it is widely regarded as one of the more secure programming languages.

Platform Independent: Programs needed to work regardless of the machine they

were being executed on. Java was written to be a portable language that doesn't care about

the operating system or the hardware of the computer.

5.4 SUMMARY

This chapter deals with the software, hardware requirements of the project.

CHAPTER 6


IMPLEMENTATION AND RESULTS

6.1 GENERAL

The following are the results of Botmaster monitoring and command encryption.

6.1.1 Botmaster monitoring the bots

Figure 6.1 Botmaster monitoring the bots

The botmaster monitors the clients. If hackers enter the P2P network, the botmaster detects them; only authorized users are allowed.

6.1.2 To enter the IP Address



Figure 6.2 To enter the IP Address

The botmaster adds the client IP addresses, then sends commands to the bots and communicates with the clients.

6.1.3 To view the IP Address and Resource Files


Figure 6.3 To view the IP Address and Resource Files

To view all clients' IP addresses and resources, the botmaster sends a command to the particular client.

6.1.4 Encrypting the Command


Figure 6.4 Encrypting the Command

The botmaster's command is encrypted using the DES algorithm and then sent to the particular client.

6.1.5 The Command Encrypted Successfully


Figure 6.5 The Command Encrypted Successfully

The botmaster's command is encrypted successfully.

6.2 SUMMARY

This chapter summarizes the implementation and results of the first module.

CHAPTER 7


CONCLUSION AND FUTURE ENHANCEMENT

7.1 CONCLUSION

In this project, we present the design of an advanced hybrid P2P botnet. The proposed system is harder to monitor and much harder to shut down. It provides robust network connectivity, individualized encryption, controlled traffic dispersion, limited botnet exposure by each captured bot, and easy monitoring and recovery by its botmaster. We point out that honeypots may play an important role in defense; more research should be invested in determining how to deploy honeypots efficiently while avoiding their exposure to botnets and botmasters.

7.2 FUTURE WORK

To be well prepared for future botnet attacks, we should study the advanced botnet attack techniques that botmasters could develop in the near future. From the robustness analysis and the defense discussion, we can see that the proposed hybrid P2P botnet makes a future botnet harder to monitor and, most importantly, much harder to shut down. By replacing a few isolated C&C servers with a significantly larger number of interleaved servent bots, the proposed botnet greatly increases its survivability. Although the proposed hybrid P2P botnet utilizes centralized sensor hosts, this does not make it as weak as a centralized botnet.

The proposed hybrid P2P botnet represents only one specific P2P botnet design; in reality, botmasters may come up with other types of P2P botnet designs. However, we believe this research is still meaningful to the security community. The proposed design is practical and can be implemented by botmasters with little engineering complexity. Botmasters will come up with a similar design sooner or later, and we must be well prepared for such an attack, or a similar one, before it happens. Defenders would achieve better poisoning defense if they had distributed honeypots and a large number of IP addresses.


REFERENCES

[1] A. Ramachandran, N. Feamster, and D. Dagon, “Revealing Botnet Membership Using

DNSBL Counter-Intelligence,” Proc. USENIX Second Workshop Steps to Reducing

Unwanted Traffic on the Internet (SRUTI ’06), June 2006.

[2] B. McCarty, “Botnets: Big and Bigger,” IEEE Security & Privacy Magazine, vol. 1, no.

4, pp. 87-90, July-Aug. 2003.

[3] C.T. News, Expert: Botnets No. 1 Emerging Internet Threat,

http://www.cnn.com/2006/TECH/internet/01/31/furst/, 2006.

[4] D. Dagon, C. Zou, and W. Lee, “Modeling Botnet Propagation Using Time Zones,”

Proc. 13th Ann. Network and Distributed System Security Symp. (NDSS ’06), pp. 235-

249, Feb. 2006.

[5] E. Cooke, F. Jahanian, and D. McPherson, “The Zombie Roundup: Understanding,

Detecting, and Disrupting Botnets,” Proc. USENIX Workshop Steps to Reducing

Unwanted Traffic on the Internet (SRUTI ’05), July 2005.

[6] F. Monrose, “Longitudinal Analysis of Botnet Dynamics,” ARO/DARPA/DHS Special

Workshop Botnet, 2006.

[7] F. Freiling, T. Holz, and G. Wicherski, “Botnet Tracking: Exploring a Root-Cause Methodology to Prevent Distributed Denial-of-Service Attacks,” Technical Report AIB-2005-07, CS Dept., RWTH Aachen Univ., Apr. 2005.

[8] Honeynet Project, Know Your Enemy: Tracking Botnets, http://www.honeynet.org/papers/bots/, 2005.

[9] I. Arce and E. Levy, “An Analysis of the Slapper Worm,” IEEE Security & Privacy

Magazine, vol. 1, no. 1, pp. 82-87, Jan.-Feb. 2003.


[10] J.R. Binkley and S. Singh, “An Algorithm for Anomaly-Based Botnet Detection,”

Proc. USENIX Second Workshop Steps to Reducing Unwanted Traffic on the Internet

(SRUTI ’06), June 2006.

[11] P. Barford and V. Yegneswaran, An Inside Look at Botnets, to appear in Series:

Advances in Information Security. Springer, 2006.

[12] Phatbot Trojan Analysis, http://www.lurhq.com/phatbot.html, 2008.

[13] R. Lemos, Bot Software Looks to Improve Peerage,

http://www.securityfocus.com/news/11390, May 2006.

[14] R. Puri, Bots & Botnet: An Overview, http://www.sans.org/rr/whitepapers/malicious/1299.php, 2003.

[15] Servent, http://en.wikipedia.org/wiki/Servent, 2008.

[16] Sinit P2P Trojan Analysis, http://www.lurhq.com/sinit.html, 2008.

[17] S. Kandula, D. Katabi, M. Jacob, and A. Berger, “Botz-4-Sale: Surviving Organized

DDOS Attacks That Mimic Flash Crowds,” Proc. Second Symp. Networked Systems

Design and Implementation (NSDI ’05), May 2005.

[18] T. Strayer, “Detecting Botnets with Tight Command and Control,”

ARO/DARPA/DHS Special Workshop Botnet, 2006.

[19] Y. Chen, “IRC-Based Botnet Detection on High-Speed Routers,” ARO/DARPA/DHS

Special Workshop Botnet, 2006.