
1. Enabling Worm and Malware Investigation Using Virtualization (Demo and poster this afternoon)
   Dongyan Xu, Xuxian Jiang
   CERIAS and Department of Computer Science, Purdue University

2. The Team
   FRIENDS Lab: Xuxian Jiang (Ph.D. student), Paul Ruth (Ph.D. student), Dongyan Xu (faculty)
   CERIAS: Eugene H. Spafford
   External collaboration: Microsoft Research

3. Our Goal
   In-depth understanding of increasingly sophisticated worm/malware behavior

4. Outline
   Motivation
   An integrated approach
   Front-end: Collapsar (Part I)
   Back-end: vGround (Part II)
   Bringing them together
   On-going work

5. The Big Picture
   [Diagram: worm traffic captured in Domain A and Domain B is diverted via GRE tunnels and proxy ARP to a central site for worm analysis]

6. Part I - Front-End: Collapsar
   Enabling Worm/Malware Capture
   * X. Jiang, D. Xu, "Collapsar: A VM-Based Architecture for Network Attack Detention Center," 13th USENIX Security Symposium (Security '04), 2004.

7. General Approach
   Promise of honeypots:
   Providing insight into intruders' motivations, tactics, and tools
   Highly concentrated datasets with low noise
   Low false-positive and false-negative rates
   Discovering unknown vulnerabilities/exploitations
   Example: CERT advisory CA-2002-01 (Solaris CDE subprocess control daemon dtspcd)

8. Current Honeypot Operation
   Individual honeypots: limited local view of attacks
   Federation of distributed honeypots: deploying honeypots in different networks, exchanging logs and alerts
   Problems:
   Difficulties in distributed management
   Lack of honeypot expertise
   Inconsistency in security and management policies (e.g., log format, sharing policy, exchange frequency)

9. Our Approach: Collapsar
   Based on the HoneyFarm idea of Lance Spitzner
   Achieving two (seemingly) conflicting goals: distributed honeypot presence and centralized honeypot operation
   Key ideas:
   Leveraging unused IP addresses in each network
   Diverting the corresponding traffic to a detention center (transparently)
   Creating VM-based honeypots in the center

10. Collapsar Architecture
   [Diagram: redirectors in multiple production networks divert attack traffic to the Collapsar center's front-end, which dispatches it to VM-based honeypots overseen by a management station and a correlation engine]
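The key ideas on slide 9 (filter for traffic aimed at unused addresses, tunnel it to the detention center) can be sketched minimally. This is an illustrative sketch, not Collapsar's implementation: the address set and packet bytes are hypothetical, and a real redirector would capture live frames and rely on the OS's GRE support rather than hand-built headers.

```python
import struct

# Hypothetical unused addresses in one production network; traffic to
# them carries no legitimate purpose and is diverted to the center.
UNUSED_ADDRESSES = {"10.0.5.17", "10.0.5.42"}

def should_divert(dst_ip: str) -> bool:
    """Redirector policy: divert only traffic aimed at unused addresses."""
    return dst_ip in UNUSED_ADDRESSES

def gre_encapsulate(inner_ip_packet: bytes) -> bytes:
    """Wrap a captured IP packet in a minimal GRE header (RFC 2784):
    flags/version = 0x0000, protocol type = 0x0800 (IPv4). The result
    would ride in an outer IP packet from redirector to front-end."""
    header = struct.pack("!HH", 0x0000, 0x0800)
    return header + inner_ip_packet

packet = b"\x45\x00" + b"\x00" * 18   # stand-in for a captured IPv4 packet
tunneled = gre_encapsulate(packet) if should_divert("10.0.5.17") else packet
```

Because the diversion is transparent, the attacker still believes it is probing an address inside the production network, while the honeypot that answers actually runs off-site in the center.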
11. Comparison with Current Approaches
   Overlay-based approach (e.g., NetBait, Domino overlay)
   Honeypots deployed at different sites
   Logs aggregated from distributed honeypots
   Data mining performed on the aggregated log information
   Key difference: where the attacks take place (on-site vs. off-site)

12. Comparison with Current Approaches
   Sinkhole networking approach (e.g., iSink)
   Dark space to monitor Internet abnormality and commotion (e.g., MSBlaster worms)
   Limited interaction for better scalability
   Key difference: contiguous large address blocks (vs. scattered addresses)

13. Comparison with Current Approaches
   Low-interaction approach (e.g., honeyd, iSink)
   Highly scalable deployment, low security risk
   Key difference: emulated services (vs. the real thing)
   Less effective at revealing unknown vulnerabilities
   Less effective at capturing 0-day worms

14. Collapsar Design
   Functional components: redirector, Collapsar front-end, virtual honeypots
   Assurance modules: logging module, tarpitting module, correlation module

15. Collapsar Deployment
   Deployed in a local environment for a two-month period in 2003
   Traffic redirected from five networks: three wired LANs, one wireless LAN, one DSL network
   ~50 honeypots analyzed so far
   Internet worms (MSBlaster, Enbiei, Nachi)
   Interactive intrusions (Apache, Samba)
   OS: Windows, Linux, Solaris, FreeBSD

16. Incident: Apache Honeypot/VMware
   Vulnerabilities: Vul 1: Apache (CERT CA-2002-17); Vul 2: ptrace (CERT VU-6288429)
   Timeline: deployed 23:44:03, 11/24/03; compromised 09:33:55, 11/25/03
   Attack monitoring, detailed log

17. Incident: Windows XP Honeypot/VMware
   Vulnerability: RPC DCOM (Microsoft Security Bulletin MS03-026)
   Timeline: deployed 22:10:00, 11/26/03; MSBlaster 00:36:47, 11/27/03; Enbiei 01:48:57, 11/27/03; Nachi 07:03:55, 11/27/03
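Because all honeypots run in one center, their logs share a single clock and format, which is what makes the correlation module (slide 14) practical. A minimal sketch of cross-honeypot correlation, with hypothetical log entries and field names (the actual Collapsar log schema is not shown in the deck):

```python
from collections import defaultdict

# Hypothetical entries: (honeypot_id, source_ip, event description).
events = [
    ("apache-hp1", "203.0.113.9",  "GET /cgi-bin/ probe"),
    ("winxp-hp2",  "198.51.100.7", "RPC DCOM exploit attempt"),
    ("apache-hp3", "203.0.113.9",  "ptrace escalation attempt"),
]

def correlate_by_source(log):
    """Group events by attacker address; sources seen on more than one
    honeypot are candidates for cross-network attack correlation."""
    by_src = defaultdict(list)
    for honeypot, src, event in log:
        by_src[src].append((honeypot, event))
    return {src: evs for src, evs in by_src.items() if len(evs) > 1}

multi_site = correlate_by_source(events)
```

Here the source hitting two different honeypots surfaces immediately; a federation of independently run honeypots would first have to reconcile log formats and sharing policies (the slide-8 problems) before the same query could be asked.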
18. Summary (Front-End)
   A novel front-end for worm/malware capture
   Distributed presence and centralized operation of honeypots
   Good potential for attack correlation and log mining
   Unique features:
   Aggregation of scattered unused (dark) IP addresses
   Off-site (relative to participating networks) attack occurrence and monitoring
   Real services for unknown vulnerability revelation

19. Part II - Back-End: vGround
   Enabling Worm/Malware Analysis
   * X. Jiang, D. Xu, H. J. Wang, E. H. Spafford, "Virtual Playgrounds for Worm Behavior Investigation," 8th International Symposium on Recent Advances in Intrusion Detection (RAID '05), 2005.

20. Basic Approach
   A dedicated testbed: Internet-in-a-Box (IBM), Blended Threat Lab (Symantec), DETER
   Goal: understanding worm behavior
   Static analysis / execution traces, reverse engineering (IDA Pro, GDB, ...)
   Worm experiments within a limited scale
   Result: only enabling relatively static analysis at a small scale

21. The Reality
   Worm threats: speed, virulence, and sophistication of worms
   Flash/Warhol worms
   Polymorphic/metamorphic appearances
   Zombie networks (DDoS attacks, spam)
   What we also need: a high-fidelity, large-scale, live but safe worm playground

22. A Worm Playground
   [Picture by Peter Szor, Symantec Corp.]

23. Requirements
   Cost and scalability: how about a topology with 2000+ nodes?
   Confinement: in-house private use?
   Management and user convenience
   Diverse environment requirements
   Recovery from the damage of a worm experiment: re-installation, re-configuration, and reboot

24. Our Approach: vGround
   A virtualization-based approach
   Virtual entities: leveraging current virtual machine techniques, designing new virtual networking techniques
   User configurability: customizing every node (end hosts/routers), enabling flexible experimental topologies

25. An Example Run: Internet Worms
   [Diagram: a virtual worm playground overlaid on a shared physical infrastructure (e.g., PlanetLab)]

26. Key Virtualization Techniques
   Full-system virtualization
   Network virtualization
27. Full-System Virtualization
   Emerging and new VM techniques: VMware, Xen, Denali, UML
   Support for real-world services: DNS, Sendmail, Apache with native vulnerabilities
   Adopted technique: UML
   Deployability
   Convenience / resource efficiency

28. User-Mode Linux (UML)
   System-call virtualization, user-level implementation
   [Diagram: UM user processes 1 and 2 run on a guest OS kernel via ptrace; the guest kernel, MMU emulation, and device drivers sit on the host OS kernel, which runs on the hardware]

29. New Network Virtualization
   Link-layer virtualization, user-level implementation
   [Diagram: virtual nodes 1 and 2 connect through IP-IP tunnels to a virtual switch running on the host OS]

30. User Configurability
   Node customization via system templates:
   End node (BIND, Apache, Sendmail, ...)
   Router (RIP, OSPF, BGP, ...)
   Firewall (iptables)
   Sniffer/IDS (Bro, Snort)
   Topology customization: language (network, node), toolkits

31. Topology Configuration Example
   [Multi-column configuration listing, garbled in this transcript: project Planetlab-Worm defines templates slapper (image slapper.ext2, cow enabled) and router (image router.ext2, routing ospf); switches such as AS1_lan1, AS2_lan1, AS3_lan1 (unix_sock) and AS1_AS2, AS2_AS3 (udp_sock 1500) hosted on PlanetLab machines such as planetlab6.millennium.berkeley.edu; nodes AS1_H1 ... AS3_H2 and routers R1-R3, each declaring network interfaces with a switch, address (e.g., 128.10.1.2/24), and gateway]
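The slide-31 listing is garbled in this transcript; the visible fragments suggest a declarative syntax roughly along the following lines. This is a speculative reconstruction from those fragments, not the actual vGround grammar: field order is guessed, and elided values are left as "...".

```
project Planetlab-Worm

template slapper {
    image slapper.ext2
    cow enabled
    startup { ... }
}

template router {
    image router.ext2
    routing ospf
}

switch AS3_lan1 {
    unix_sock sock/as3_lan1
    host planetlab6.millennium.berkeley.edu
}

switch AS2_AS3 {
    udp_sock 1500
    host ...
}

node AS3_H1 {
    superclass slapper
    network eth0 {
        switch AS3_lan1
        address 128.10.1.2/24
        gateway ...
    }
}

router R2 {
    superclass router
    network eth0 { switch AS2_lan1 ... }
    network eth1 { switch AS2_AS3 ... }
}
```

The apparent design matches slide 30: templates capture per-node software configuration once, and nodes inherit from them via superclass, so a 1500-node topology needs only a few templates plus per-node wiring.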
32. Features
   Scalability: 3000 virtual hosts on 10 physical nodes
   Iterative experiment convenience: virtual node generation time ~60 seconds, boot-strap time ~90 seconds, tear-down time ~10 seconds
   Strict confinement
   High fidelity

33. Evaluation
   Current focus: worm behavior reproduction
   Experiments: probing, exploitation, payloads, and propagation
   Further potential (on-going work): routing worms / stealthy worms, infrastructure security (BGP)

34. Experiment Setup
   Two real-world worms: Lion, Slapper, and their variants
   A vGround topology: 10 virtual networks, 1500 virtual nodes, 10 physical machines in an ITaP cluster

35. Evaluation
   Target host distribution
   Detailed exploitation steps
   Malicious payloads
   Propagation pattern

36. Probing: Target Network Selection
   [Two histograms of the number of probes (total 10^5) against the first octet of the probed IP address; annotated octets: 13 and 243 for the Lion worm, 80 and 81 for the Slapper worm]

37. Exploitation (Lion)
   1: Probing  2: Exploitation!  3: Propagation!

38. Exploita
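The slide-36 histograms plot probe counts against the first octet of the targeted address, i.e., which networks a worm chooses to scan. A toy simulation of random first-octet target selection makes the shape of such a distribution easy to reproduce; this is a sketch only, not actual worm logic, and the reserved-octet set below is illustrative and deliberately incomplete.

```python
import random

random.seed(7)  # reproducible runs

# Illustrative (incomplete) set of first octets a scanner might skip:
# multicast/reserved (224-255), 0, loopback (127), and one private range.
RESERVED = set(range(224, 256)) | {0, 10, 127}

def random_first_octet() -> int:
    """Draw a random first octet for a probe target, skipping RESERVED."""
    while True:
        octet = random.randrange(1, 224)
        if octet not in RESERVED:
            return octet

# Tally 10,000 simulated probes by first octet, as in the slide's plots.
counts = [0] * 256
for _ in range(10_000):
    counts[random_first_octet()] += 1
```

Plotting `counts` gives a roughly flat histogram; the pronounced peaks in the slide (13/243 for Lion, 80/81 for Slapper) indicate that those worms bias or hard-code their target selection rather than drawing octets uniformly.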