AIT 681 Secure Software Engineering · 2020-01-29
AIT 681 Secure Software Engineering
Topic #3. Risk Management
Instructor: Dr. Kun Sun
Outline
1. Risk management
2. Standards on evaluating secure systems
3. Security analysis using security metrics
Reading
• This lecture:
  – McGraw: Chapter 2
    • Cigital Risk Management Framework
  – Security metrics
    • Our MILCOM paper on security metrics
Risk Assessment
[Diagram: RISK at the intersection of Threats, Vulnerabilities, and Consequences]
Real Cost of Cyber Attack
• Damage to the target may not reflect the real amount of damage
• Other services may rely on the attacked service, causing cascading and escalating damage
• Need: support for decision makers to
  – Evaluate the risk and consequences of cyber attacks
  – Support methods to prevent, deter, and mitigate the consequences of attacks
Ransomware in Q4 of 2019
• Defense:
  1. Back up
  2. Be wary of suspicious emails/links
  3. Patch
• Average ransom payment: $84,116
• Data recovery: 98 percent of companies that paid the ransom received a working decryption tool
• Ransomware downtime: average downtime was 16.2 days
• Ransom payments: Bitcoin
• Common types of ransomware: Sodinokibi and Ryuk; Phobos and Dharma focus more on small-enterprise ransomware attacks
• Common ransomware attack vectors: Remote Desktop Protocol (RDP); email phishing for initial compromise
• Ransomware target industries: professional services firms (e.g., regional law firms), public-sector organizations, and specialized service providers within healthcare and industrial control systems (ICS)
Risk Management Framework (Business Context)
Stages:
1. Understand business context
2. Identify business and technical risks
3. Synthesize and rank risks
4. Define risk mitigation strategy
5. Carry out fixes and validate
Measurement and reporting runs across all stages.
Understand the Business Context
• “Who cares?”
• Identify business goals, priorities, and circumstances, e.g.,
  – Increasing revenue
  – Meeting service-level agreements
  – Reducing development cost
  – Generating a high return on investment
• Identify software risks to consider
Identify Business and Technical Risks
• Business risk
  – Direct threat
  – Indirect threat
• Consequences
  – Financial loss
  – Loss of reputation
  – Violation of customer or regulatory constraints
  – Liability
• Technical risk
  – Runs counter to planned design and implementation
• Consequences
  – Unexpected system calls
  – Avoidance of control (audit)
  – Unauthorized data access
  – Needless rework of artifacts
• “Why should the business care?”
  – Tie technical risks to the business context in a meaningful way
Synthesize and Rank the Risks
• “What should be done first?”
• Prioritize identified risks based on business goals
• Allocate resources
• Risk metrics:
  – Risk likelihood
  – Risk impact
  – Risk severity
  – Number of emerging risks
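The slide lists likelihood, impact, and severity as metrics without fixing a formula. A common convention (an assumption here, not stated in the lecture) is severity = likelihood * impact, which directly yields a prioritization order:

```python
# Sketch of risk ranking, assuming the common convention
# severity = likelihood * impact. Risk names and values are illustrative.

def rank_risks(risks):
    """Sort risks by severity (likelihood * impact), highest first."""
    scored = [
        {**r, "severity": r["likelihood"] * r["impact"]}
        for r in risks
    ]
    return sorted(scored, key=lambda r: r["severity"], reverse=True)

risks = [
    {"name": "SQL injection in billing app", "likelihood": 0.6, "impact": 9},
    {"name": "Unpatched mail server",        "likelihood": 0.9, "impact": 4},
    {"name": "Weak admin passwords",         "likelihood": 0.3, "impact": 7},
]

for r in rank_risks(risks):
    print(f"{r['name']}: severity {r['severity']:.1f}")
```

Note how the highly likely but low-impact mail-server risk ranks below the less likely, high-impact injection flaw, which is the point of combining both metrics.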
Define the Risk Mitigation Strategy
• “How do we mitigate risks?”
• Consider available technology and resources
• Constrained by the business context: what the organization can afford, integrate, and understand
• Validation techniques are needed
Carry Out Fixes and Validate
• Perform the actions defined in the previous stage
• Measure “completeness” against the risk mitigation strategy
  – Progress against risks
  – Remaining risks
  – Assurance of mechanisms
• Testing
  – Measure the effectiveness of risk mitigation activities
Measuring and Reporting
• Continuous and consistent identification and storage of risk information over time
• Maintain risk information at all stages of risk management
• Establish measurements, e.g.,
  – Number of risks, severity of risks, cost of mitigation, etc.
Outline
1. Risk management
2. Standards on evaluating secure systems
3. Security analysis using security metrics
Standards on Evaluating Secure System
• Trusted Computer System Evaluation Criteria (TCSEC), also known as the “Orange Book”
• Common Criteria (ISO 15408)
National Computer Security Center
• 1981: National Computer Security Center (NCSC) was established within the NSA
  – To provide technical support and reference for government agencies
  – To define a set of criteria for the evaluation and assessment of security
  – To encourage and perform research in the field of security
  – To develop verification and testing tools
  – To increase security awareness in both the federal and private sectors
• 1985: Trusted Computer System Evaluation Criteria (TCSEC) == Orange Book
• Obsolete; replaced by the Common Criteria
Orange Book
• Orange Book objectives:
  – Guide what security features to build into new products
  – Provide a measurement to evaluate the security of systems
  – Provide a basis for specifying security requirements
• Security features and assurances
• Trusted Computing Base (TCB): the security components of the system (hardware, software, and firmware) plus the reference monitor
Orange Book
• It supplies
  – Users: evaluation metrics to assess the reliability of the security system for the protection of classified or sensitive information, whether in a
    • Commercial product
    • Internally developed system
  – Developers/vendors: a design guide showing the security features to be included in commercial systems
  – Designers: a guide for the specification of security requirements
Orange Book
• A set of criteria and requirements
• Three main categories:
  – Security policy – the protection level offered by the system
  – Accountability – of the users and user operations
  – Assurance – of the reliability of the system
Security Policy
• Concerns the definition of policies regulating user access to information
  – Discretionary Access Control (DAC)
  – Mandatory Access Control (MAC)
  – Labels: for objects and subjects
  – Reuse of objects: basic storage elements must be cleaned before being released to a new user
Accountability
• Identification/authentication
• Audit
• Trusted path: ensures no users are attempting to access the system fraudulently
Assurance
• Reliable hardware/software/firmware components that can be evaluated separately
• Operation reliability
• Development reliability
Operation reliability
• During system operation
  – System architecture: the TCB is isolated from user processes, and the security kernel is isolated from non-security-critical portions of the TCB
  – System integrity: correct operation (use diagnostic software)
  – Covert channel analysis
  – Trusted facility management: separation of duties
  – Trusted recovery: recover security features after TCB failures
Development reliability
• The system must be reliable during the development process; formal methods are used.
  – System testing: security features tested and verified
  – Design specification and verification: correct design and implementation with respect to the security policy; TCB formal specifications proved
  – Configuration management: configuration of the system components and its documentation
  – Trusted distribution: no unauthorized modifications
Documentation
• Defined set of documents
• Minimal set:
  – Trusted facility manual
  – Security features user’s guide
  – Test documentation
  – Design documentation
  – Personnel info: operators, users, developers, maintainers
Orange Book Levels
• Highest security
  – A1: Verified Protection
  – B3: Security Domains
  – B2: Structured Protection
  – B1: Labeled Security Protection
  – C2: Controlled Access Protection
  – C1: Discretionary Security Protection
  – D: Minimal Protection
• No security
Common Criteria (ISO 15408)
• January 1996: Common Criteria
  – Joint work with Canada and Europe
  – Separates functionality from assurance
  – Nine classes of functionality: audit, communications, user data protection, identification and authentication, privacy, protection of trusted functions, resource utilization, establishing user sessions, and trusted path
  – Seven classes of assurance: configuration management, delivery and operation, development, guidance documents, life-cycle support, tests, and vulnerability assessment
Common Criteria
• Evaluation Assurance Levels (EAL), from lowest to highest security:
  – EAL1: functionally tested
  – EAL2: structurally tested
  – EAL3: methodically tested and checked
  – EAL4: methodically designed, tested, and reviewed
  – EAL5: semi-formally designed and tested
  – EAL6: semi-formally verified design and tested
  – EAL7: formally verified design and tested
Outline
1. Risk management
2. Standards on evaluating secure systems
3. Security analysis using security metrics
Introduction
• How to quantitatively measure and demonstrate the amount of security of a computer/network?
  – Meaningful security metrics for networked (e.g., enterprise) systems are significantly more difficult to define, analyze, compose, and use intelligently.
• Challenges
  – What security metrics are meaningful and useful?
  – How to collect security metrics? What to measure?
  – How to compose enterprise-level security metrics?
  – How to present the security metrics in a clean manner?

“Automatic security analysis using security metrics”, Kun Sun et al., MILCOM 2011.
System Architecture
• Develop a toolkit that includes security metrics collection, security metrics analysis, and security metrics visualization.
Step 1: Identify Security Metrics
• Summarize existing security metrics and identify new security metrics.
  – Collect existing security metrics: financial metrics; application security; configuration management; network management; asset management, etc.
  – Identify new security metrics:
    • Patch risk
    • Security score
    • Criticality
    • Time series
Patch Risk
• There is a risk in applying patches to fix vulnerabilities in applications.
  – When an operating system is patched, the software may or may not function properly from that point forward.
  – The patches themselves may contain vulnerabilities that require patching.
  – The risk of a patch may be derived from
    • the trustworthiness of its provider, and
    • how long the patch has been released and verified.
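The slide says patch risk depends on provider trustworthiness and on how long the patch has been out and verified, but gives no formula. A minimal sketch, assuming an equal-weight combination and an exponential aging curve (both our assumptions, not from the lecture):

```python
# Hypothetical patch-risk score combining the two factors the slide names:
# (a) trustworthiness of the patch provider, (b) time since release.
# The 0.5/0.5 weights and the half-life decay are illustrative assumptions.
import math

def patch_risk(provider_trust, days_since_release, half_life_days=30.0):
    """Return a risk score in [0, 1]; lower is safer.

    provider_trust     -- in [0, 1], 1.0 = fully trusted provider
    days_since_release -- days the patch has been public and verified
    """
    # Risk contributed by an untrusted provider.
    trust_risk = 1.0 - provider_trust
    # Risk from immaturity decays as the patch ages in the field.
    age_risk = math.exp(-math.log(2) * days_since_release / half_life_days)
    # Simple equal-weight combination (an assumption).
    return 0.5 * trust_risk + 0.5 * age_risk

# A brand-new patch from a well-trusted vendor is still moderately risky;
# the same patch after 90 days in the field is much less so.
print(round(patch_risk(0.9, 0), 4))   # 0.55
print(round(patch_risk(0.9, 90), 4))  # 0.1125
```

The decay models the slide's intuition that a patch that has survived verification in the field for longer is less likely to break things or carry its own flaws.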
Security Score
• A security score provides an explicit number for evaluating the security of a computer or a network.
• Three types/levels of security scores:
  – Security score for an individual vulnerability (e.g., the CVSS score)
  – Security score for one computer with multiple vulnerabilities
  – Security score for a network with multiple computers
• A “one-shot” security score may not be meaningful or useful for mission-awareness situations, in which different missions rely differently on the available services and applications.
Criticality
• Criticality is a combined metric that evaluates the importance of one computer in the network.
• It depends on
  – Location (intranet, DMZ, internet)
  – Service (HTTP, FTP, SSH)
  – Role (firewall, desktop, router)
  – Asset (database, financial files)
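The slide lists the four factors but not how they combine. A minimal sketch, assuming a weighted sum with illustrative per-factor scores and equal weights (all assumptions):

```python
# Sketch of criticality as a weighted combination of the four factors the
# slide lists (location, service, role, asset). Per-factor scores and the
# equal weights below are illustrative assumptions, not from the paper.

LOCATION = {"internet": 1.0, "DMZ": 0.7, "intranet": 0.4}
SERVICE  = {"HTTP": 0.8, "FTP": 0.5, "SSH": 0.6}
ROLE     = {"Firewall": 1.0, "Router": 0.9, "Desktop": 0.3}
ASSET    = {"database": 1.0, "financial files": 0.9, "none": 0.1}

WEIGHTS = {"location": 0.25, "service": 0.25, "role": 0.25, "asset": 0.25}

def criticality(location, service, role, asset):
    """Combined importance of one computer, in [0, 1]."""
    return (WEIGHTS["location"] * LOCATION[location]
            + WEIGHTS["service"] * SERVICE[service]
            + WEIGHTS["role"] * ROLE[role]
            + WEIGHTS["asset"] * ASSET[asset])

# A DMZ web gateway in front of the database is far more critical than an
# intranet desktop holding no valuable assets.
print(round(criticality("DMZ", "HTTP", "Router", "database"), 4))   # 0.85
print(round(criticality("intranet", "SSH", "Desktop", "none"), 4))  # 0.35
```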
Time Series
• A time series shows the changes in security over a period.
  – It tells whether the security of a computer has improved or has fallen below a pre-determined threshold.
• Security changes can be triggered by many factors:
  – Vulnerability changes over time (CVSS Temporal Metrics)
  – Network configuration
  – Security training
  – Financial problems
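The two checks the slide describes (did security improve, and did it ever fall below a pre-determined threshold) can be sketched directly; the scores and threshold are illustrative, and higher score means more secure in this sketch:

```python
# Minimal sketch of the time-series analysis the slide describes:
# report whether security improved over the period and flag the first
# time the score falls below a pre-determined threshold.

def analyze_series(scores, threshold):
    """Return (improved, index of first score below threshold or None)."""
    below = next((i for i, s in enumerate(scores) if s < threshold), None)
    improved = scores[-1] > scores[0]
    return improved, below

weekly_scores = [7.2, 6.8, 5.9, 6.4, 7.5]   # e.g., patched in week 4
improved, breach = analyze_series(weekly_scores, threshold=6.0)
print(improved, breach)  # True 2  (score dipped below 6.0 in week 3)
```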
Step 2: Collect Security Metrics
• We focus on automatically collecting security metrics about vulnerabilities and network reachability.
  – Scan computers for vulnerabilities using the Nessus scanner.
  – Obtain vulnerability scores based on NVD/CVSS.
  – Obtain firewall rules, network configuration files, and the network topology to derive network reachability information.
NVD/CVSS
• http://nvd.nist.gov/download.cfm#CVE_FEED
• Provides an XML database of CVSS scores and vectors.
XML Parser for CVSS Dataset
• Raw CVSS vulnerability records are in XML format.
• We developed an XML parser to extract security metrics from the database and save them in the Vulnerability Scoring and Description Table (VSDT). Example entry:
  – Entry Identifier: CVE-2009-0022
  – Score: 6.3
  – Severity: Medium
  – Vector: (AV:N/AC:M/Au:S/C:C/I:N/A:N)
  – Vuln_Types: Conf
  – Range: Network
  – Vuln_Soft: Samba 3.2.0, 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5, 3.2.6
  – Description: Samba 3.2.0 through 3.2.6, when registry shares are enabled, allows remote authenticated users to access the root file system via a crafted connection request that specifies a blank share name.
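The vector field above follows the standard CVSS v2 base-vector grammar (AV/AC/Au/C/I/A), so extracting the individual metrics is a small string-processing step; the helper name here is our own, not the paper's:

```python
# Sketch of extracting the individual metrics from a CVSS v2 base vector
# like the one in the VSDT entry above. The vector grammar is the standard
# CVSS v2 format; parse_cvss_vector is a hypothetical helper name.

def parse_cvss_vector(vector):
    """Turn "(AV:N/AC:M/Au:S/C:C/I:N/A:N)" into a metric -> value dict."""
    parts = vector.strip("()").split("/")
    return dict(p.split(":", 1) for p in parts)

v = parse_cvss_vector("(AV:N/AC:M/Au:S/C:C/I:N/A:N)")
print(v["AV"], v["C"], v["A"])  # N C N
# AV:N = network-accessible, Au:S = single authentication required,
# C:C = complete confidentiality impact, I:N / A:N = no integrity or
# availability impact, matching the CVE-2009-0022 record above.
```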
Network Reachability
• Network reachability captures the interactions among all attack possibilities in a network, so it has a direct impact on the security scores of interdependent computers.
• It consists of three components:
  – Network topology
    • Import the network topology from the OPNET network design software.
    • The JANASSURE tool by IAI can automatically obtain network topology information.
  – Router configuration
    • We can import router configuration files from Cisco routers.
  – Firewall rules
    • We can import firewall rules from Cisco routers.
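Once topology and filtering rules have been combined into a set of allowed (source, destination) links, pairwise reachability is a transitive-closure computation over a directed graph. A minimal sketch (host names and rules are illustrative):

```python
# Sketch of deriving reachability from allowed directed links via BFS.
# The links would come from topology + router/firewall configs; the
# hosts and rules below are illustrative assumptions.
from collections import deque

def reachable_from(links, start):
    """All hosts reachable from `start` over allowed directed links."""
    adj = {}
    for src, dst in links:
        adj.setdefault(src, set()).add(dst)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# internet -> DMZ web server allowed; web server -> database allowed;
# direct internet -> database traffic blocked by the firewall (no edge).
links = [("internet", "web"), ("web", "db"), ("admin", "db")]
print(sorted(reachable_from(links, "internet")))  # ['db', 'web']
```

This multi-hop result is exactly why reachability affects the scores of interdependent computers: the database is exposed to the internet only through the web server.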
Composing Security Score
• Use AHP (analytic hierarchy process) to decide different weights for exploitability (access vector, access complexity, authentication) and impact (confidentiality, integrity, availability).
• Resulting weight hierarchy:
  – Vulnerability Score: 1.0
    • Exploitability: 0.4
      – Access Vector: 0.13333
      – Access Complexity: 0.13333
      – Authentication: 0.13333
    • Impact: 0.6
      – Confidentiality: 0.2
      – Integrity: 0.2
      – Availability: 0.2
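The AHP hierarchy fixes the leaf weights (0.4/3 ≈ 0.13333 for each exploitability sub-metric, 0.6/3 = 0.2 for each impact sub-metric), so a composed vulnerability score is a weighted sum of the leaf sub-scores. A sketch, assuming sub-scores on a 0-10 scale (the scale is our assumption):

```python
# Weighted-sum composition using the AHP weights from the slide:
# exploitability (0.4) split evenly over AV/AC/Au, impact (0.6) split
# evenly over C/I/A. Sub-scores are assumed to lie in [0, 10].

WEIGHTS = {
    "access_vector":     0.4 / 3,  # ~0.13333 each
    "access_complexity": 0.4 / 3,
    "authentication":    0.4 / 3,
    "confidentiality":   0.6 / 3,  # 0.2 each
    "integrity":         0.6 / 3,
    "availability":      0.6 / 3,
}

def vulnerability_score(subscores):
    """Weighted sum of leaf sub-scores (each assumed in [0, 10])."""
    return sum(WEIGHTS[name] * value for name, value in subscores.items())

# The weights sum to 1.0, so a worst-case vulnerability scores 10.
worst = {name: 10.0 for name in WEIGHTS}
print(round(vulnerability_score(worst), 4))  # 10.0
```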
Step 4: Visualize Security Metrics
• Use an example bank system to show the security metrics.
• Dashboards for the whole network and for individual computers.
Main Dashboard (screenshot)
Host Dashboard (screenshot)
Time Series (screenshot)
Bar Chart (screenshot)
Limitations
• In Phase I we assume vulnerabilities are independent of each other, which may not be true in the real world.
• What if the vulnerabilities on one computer are correlated with each other?
  – E.g., a user who installs application A (with vulnerability v1) always installs application B (with vulnerability v2).
  – How do we obtain this correlation information?
  – How do we take the correlation into account when calculating the security score?
Limitations
• The summary/average/max/min of the scores of the individual computers is not good enough.
• Combine vulnerability dependency information and network reachability information to measure the security of a network.
  – Assume we know the reachability information of the network from firewall rules and the network configuration.
  – From the NVD, we know that one vulnerability may lead to or facilitate another vulnerability.
    • In simple cases, assume all the vulnerabilities on one computer are independent of all the vulnerabilities on all other computers.
    • If vulnerability Va on computer A is a prerequisite for vulnerability Vb on computer B, how do we change the score?
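One possible answer to the closing question, sketched under assumptions the slides do not make: discount Vb's contribution by the probability that its prerequisite Va is exploited and by whether A can reach B at all. The multiplicative model below is our illustration, not the paper's method:

```python
# Assumed prerequisite-discount model for the slide's question: if
# exploiting Vb on computer B first requires exploiting Va on computer A,
# scale Vb's score by Va's exploit probability, and zero it out when A
# cannot reach B. The multiplicative combination is an assumption.

def effective_score(vb_score, va_exploit_prob, a_reaches_b):
    """Discount Vb's score by its prerequisite's exploit probability."""
    if not a_reaches_b:
        return 0.0              # prerequisite host cannot reach B at all
    return vb_score * va_exploit_prob

# Vb alone scores 9.0, but its prerequisite Va is hard to exploit (p=0.3):
print(round(effective_score(9.0, 0.3, True), 4))   # 2.7
print(round(effective_score(9.0, 0.3, False), 4))  # 0.0
```

This is where the reachability information from Step 2 plugs in: the same vulnerability contributes differently to the network score depending on which prerequisite hosts can reach it.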