Automated Software Engineering Research Group 1

Fix

• 12?: Title should be Limitations (?? Not Challenges)

• Slide 18: Verification -> counterexample collection

Computer Science 2

Mining Likely Properties of Access Control Policies

via Association Rule Mining

JeeHyun Hwang Advisor: Dr. Tao Xie

Preliminary Oral Examination
Department of Computer Science

North Carolina State University, Raleigh

Automated Software Engineering Research Group 3

Access Control Mechanism

• Access control mechanisms control which subjects (such as users or processes) have access to which resources

• Policy defines rules according to which access control must be regulated

[Diagram: Request → Policy Evaluation Engine (Policy) → Response (Permit, Deny)]

Automated Software Engineering Research Group 4

Access Control Mechanism

• Access control mechanisms control which subjects (such as users or processes) have access to which resources

• Policy defines rules according to which access control must be regulated

[Diagram: Request → Policy Evaluation Engine (Policy, now containing Faults) → Response (Permit, Deny)]

Automated Software Engineering Research Group 5

Research Accomplishments

• Quality of Access Control
  • Automated test generation [SRDS 08][SSIRI 09]
  • Likely-property mining [DBSec 10]
  • Property quality assessment [ACSAC 08]
  • (Tool) Access Control Policy Tool (ACPT) [POLICY Demo 10]
• Debugging
  • Fault localization for firewall policies [SRDS 09 SP]
  • Automated fault correction for firewall policies [USENIX LISA 10]
• Performance
  • Efficient policy evaluation engine [Sigmetrics 08]

Automated Software Engineering Research Group 6

Outline

• Motivation
• Our approach
• Future work

Automated Software Engineering Research Group 7

Outline

• Motivation
• Our approach
• Future work

Automated Software Engineering Research Group 8

Motivation

• Access control is used to control access to a large number of resources [1,2]
• Specifying and maintaining correct access control policies is challenging [1,2]
  – Authorized users should have access to the data
  – Unauthorized users should not have access to the data
• Faults in access control policies lead to security problems [1,3]

1. A. Wool. A quantitative study of firewall configuration errors. Computer, 37(6):62–67, 2004.
2. Lujo Bauer, Lorrie Cranor, Robert W. Reeder, Michael K. Reiter, and Kami Vaniea. Real Life Challenges in Access-control Management. CHI 2009
3. Sara Sinclair, Sean W. Smith. What's Wrong with Access Control in the Real World?. IEEE Security and Privacy 2010

Automated Software Engineering Research Group 9

Motivation – cont.

• Need to ensure the correct behaviours of policies
  – Property verification [1,2]
    • Model a policy and verify properties against the policy
    • Check whether properties are satisfied by a policy
    • Violations of a property expose policy faults

[Diagram: Policy + Property → Property Verification → Satisfy? (True, False)]

Example properties:
1. Faculty member is permitted to assign grades [3]
2. Subject (who is not a faculty member) is permitted to enroll in courses [3]

1. Kathi Fisler, Shriram Krishnamurthi, Leo A. Meyerovich, Michael Carl Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005
2. Vladimir Kolovski, James Hendler, Bijan Parsia. Analyzing Web Access Control Policies. WWW 2007
3. Michael Carl Tschantz, Shriram Krishnamurthi. Towards Reasonability Properties for Access-Control Policy Languages. SACMAT 2006

2. Subject (who is not a faculty member) is permitted to enroll in courses [3]

Automated Software Engineering Research Group 10

Problem

• Quality of properties is assessed in terms of fault-detection capability [1]
  • Properties help detect faults
  • Confidence in policy correctness depends on the quality of the specified properties
  • CONTINUE subject [2]: 25% fault-detection capability with its seven properties
• In practice, writing properties of high quality is not trivial

1. Evan Martin, JeeHyun Hwang, Tao Xie, and Vincent C. Hu. Assessing quality of policy properties in verification of access control policies. ACSAC 2008
2. Kathi Fisler, Shriram Krishnamurthi, Leo A. Meyerovich, Michael Carl Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005

Automated Software Engineering Research Group 11

Proposed Solution

• Mine likely properties automatically based on correlations of attribute values (e.g., write and modify)
• High-quality properties (with high fault-detection capability)
• Our assumptions:
  • Policy may include faults
  • Mine likely properties, which are true for all or most of the policy behaviors (>= threshold)

[Diagram: Policy (with Faults) → Mine → Likely Properties → Detect Faults]

Automated Software Engineering Research Group 12

Limitations (?? Not Challenges)

• Policy is domain-specific
  • Mine likely properties within a given policy
• Limited set of decisions
  • Two decisions (Permit or Deny) for any request
• Prioritization
  • Which counterexamples should be inspected first?
• Expressiveness of likely properties
• How to find counterexamples?

Automated Software Engineering Research Group 13

XACML Policy Example

RBAC_schoolpolicy

<Policy PolicySetId="n" PolicyCombiningAlgId="First-Applicable">
  <Target/>

  <!-- Rule 1: if role = Faculty and resource = (ExternalGrade or InternalGrade)
       and action = (View or Assign) then Permit -->
  <Rule RuleId="1" Effect="Permit">
    <Target>
      <Subjects><Subject>Faculty</Subject></Subjects>
      <Resources><Resource>InternalGrade</Resource>
                 <Resource>ExternalGrade</Resource></Resources>
      <Actions><Action>View</Action>
               <Action>Assign</Action></Actions>
    </Target>
  </Rule>

  <!-- Rule 2 -->
  <Rule RuleId="2" Effect="Permit">
    <Target>
      <Subjects><Subject>Student</Subject></Subjects>
      <Resources><Resource>ExternalGrade</Resource></Resources>
      <Actions><Action>Receive</Action></Actions>
    </Target>
  </Rule>

  <!-- Rule 3 -->
  <Rule RuleId="3" Effect="Permit">
    <Target>
      <Subjects><Subject>FacultyFamily</Subject></Subjects>
      <Resources><Resource>ExternalGrade</Resource></Resources>
      <Actions><Action>Receive</Action></Actions>
    </Target>
  </Rule>

Automated Software Engineering Research Group 14

XACML Policy Example – cont.

Rule 3: Jim can change grades or records.

RBAC_schoolpolicy

  <!-- Rule 4 -->
  <Rule RuleId="4" Effect="Permit">
    <Target>
      <Subjects><Subject>Lecturer</Subject></Subjects>
      <Resources><Resource>InternalGrade</Resource>
                 <Resource>ExternalGrade</Resource></Resources>
      <Actions><Action>View</Action>
               <Action>Assign</Action></Actions>
    </Target>
  </Rule>

  <!-- Rule 5 -->
  <Rule RuleId="5" Effect="Permit">
    <Target>
      <Subjects><Subject>TA</Subject></Subjects>
      <Resources><Resource>InternalGrade</Resource></Resources>
      <Actions><Action>View</Action>
               <Action>Assign</Action></Actions>
    </Target>
  </Rule>

  <!-- Rule 6: default deny -->
  <Rule RuleId="6" Effect="Deny"><Target/></Rule>
</Policy>

Automated Software Engineering Research Group 15

XACML Policy Example – cont.

RBAC_schoolpolicy: Rules 4-6 as on the previous slide.

Inject a fault into Rule 5: Receive instead of View.

Incorrect policy behaviors:
1. TA is Denied to View InternalGrade
2. TA is Permitted to Receive InternalGrade

Likely properties mined from the faulty policy:
(View) Permit → (Assign) Permit : Frequency: 4 (100%)
(Assign) Permit → (View) Permit : Frequency: 4 (80%)
(Assign) Permit → (Receive) Deny : Frequency: 4 (80%)

Likely properties mined from the correct policy:
(View) Permit → (Assign) Permit : Frequency: 5 (100%)
(Assign) Permit → (View) Permit : Frequency: 5 (100%)
(Assign) Permit → (Receive) Deny : Frequency: 5 (100%)

Automated Software Engineering Research Group 16

Policy Model

• Role-Based Access Control Policy [1]
  – Permissions are associated with roles
  – Subject (role) is allowed or denied access to certain objects (i.e., resources) in a system
• Subject: role of a person
• Action: command that a subject executes on the resource
• Resource: object
• Environment: any other related constraints (e.g., time, location, etc.)

1. XACML Profile for Role Based Access Control (RBAC), 2004

Automated Software Engineering Research Group 17

Likely-Property Model

• Implication relation
  • Correlates the decision (dec1) for an attribute value (v1) with the decision (dec2) for another attribute value (v2)

    (v1) dec1 → (v2) dec2

• Types
  • Subject attribute: (TA) Permit → (Faculty) Permit
  • Action attribute: (Assign) Permit → (View) Permit
  • Subject-action attribute: (TA & Assign) Permit → (Faculty & View) Permit
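To make the implication relation concrete, the sketch below shows one way such likely properties could be represented in Python; the class and field names (LikelyProperty, v1, dec1, and so on) are ours, not from the authors' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LikelyProperty:
    """(v1) dec1 -> (v2) dec2: if the request with value(s) v1 gets
    decision dec1, the corresponding request with v2 gets dec2."""
    v1: tuple   # antecedent attribute value(s), e.g., ("TA",) or ("TA", "Assign")
    dec1: str   # "Permit" or "Deny"
    v2: tuple   # consequent attribute value(s)
    dec2: str

# The three property types from the slide:
subject_prop = LikelyProperty(("TA",), "Permit", ("Faculty",), "Permit")
action_prop = LikelyProperty(("Assign",), "Permit", ("View",), "Permit")
subject_action_prop = LikelyProperty(
    ("TA", "Assign"), "Permit", ("Faculty", "View"), "Permit")
```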

Automated Software Engineering Research Group 18

Framework

• Our assumptions:
  • Policy may include faults
  • Mine likely properties, which are true for all or most of the policy behaviors (>= threshold)

[Framework pipeline: relation table generation → association rule mining → counterexample collection]

Automated Software Engineering Research Group 19

Relation Table Generation

• Find all possible request-response pairs in a policy
• Generate relation tables (including all request-response pairs) of interest
  • Input for an association rule mining tool

Example request-response pairs:
1. Faculty is Permitted to Assign ExternalGrade
2. Faculty is Permitted to View ExternalGrade
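As an illustration of this step, here is a minimal sketch that pivots request-response pairs into a relation table with one row per (subject, resource) and one decision column per action; the tuple layout and all names are our assumptions, not the actual tool's input format.

```python
from collections import defaultdict

def build_action_relation_table(pairs):
    """Pivot (subject, resource, action, decision) request-response pairs
    into one row per (subject, resource) with a decision column per action."""
    actions = sorted({a for (_, _, a, _) in pairs})
    rows = defaultdict(dict)
    for subj, res, act, dec in pairs:
        rows[(subj, res)][act] = dec
    header = ["Subject", "Resource"] + actions
    table = [[s, r] + [decs.get(a, "NotApplicable") for a in actions]
             for (s, r), decs in sorted(rows.items())]
    return header, table

header, table = build_action_relation_table([
    ("Faculty", "ExternalGrade", "Assign", "Permit"),  # pair 1 from the slide
    ("Faculty", "ExternalGrade", "View", "Permit"),    # pair 2 from the slide
    ("Student", "ExternalGrade", "Receive", "Permit"),
])
print(header)
for row in table:
    print(row)
```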

Automated Software Engineering Research Group 20

Association Rule Mining

1. Agrawal, R., Srikant, R.: Fast algorithms for mining association rules in large databases. VLDB 1994
2. Borgelt, C.: Apriori - Association Rule Induction/Frequent Item Set Mining. 2009

• Given a relation table, find implication relations of attributes via association rule mining [1,2]
• Find three types of likely properties
• Report likely properties with confidence values over a given threshold

Support: Supp(X) = D / T (% of the total number of records)
– T is the total number of rows
– D is the number of rows that include the attribute-decision pair

Example: X = (Assign) Permit, Y = (View) Permit
Supp(X) = 5/10 = 0.5, Supp(Y) = 4/10 = 0.4, Supp(X ∪ Y) = 4/10 = 0.4

Confidence: Confidence(X → Y) = Supp(X ∪ Y) / Supp(X)
* Likelihood of a likely property

Confidence(X → Y) = 4/5 = 0.8
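The support and confidence computations above can be reproduced directly. A minimal sketch, modeling each relation-table row as a set of (attribute value, decision) items; the ten example rows are constructed so that the slide's numbers (0.5, 0.4, and 0.8) fall out.

```python
def support(rows, items):
    """Supp(X) = D / T: fraction of rows containing all items of X."""
    items = set(items)
    return sum(items <= row for row in rows) / len(rows)

def confidence(rows, x, y):
    """Confidence(X -> Y) = Supp(X u Y) / Supp(X)."""
    return support(rows, set(x) | set(y)) / support(rows, x)

# Ten rows: five permit Assign, four of which also permit View.
rows = [frozenset({("Assign", "Permit"), ("View", "Permit")})] * 4 \
     + [frozenset({("Assign", "Permit"), ("View", "Deny")})] \
     + [frozenset({("Assign", "Deny"), ("View", "Deny")})] * 5

X = {("Assign", "Permit")}
Y = {("View", "Permit")}
print(support(rows, X))        # 0.5
print(support(rows, Y))        # 0.4
print(confidence(rows, X, Y))  # 0.8
```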

Automated Software Engineering Research Group 21

Likely Property Verification

• Verify a policy against the given likely properties and find counterexamples

  Counterexample: (v1) dec1 → (v2) ¬dec2

• Inspect counterexamples to determine whether they expose a fault

Rationale: counterexamples (which do not satisfy the likely properties) deviate from the policy's normal behaviors and are special cases for inspection
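A sketch of counterexample collection for one likely property (v1) dec1 → (v2) dec2, assuming the policy evaluation engine is callable as a function; the toy evaluate below stands in for that engine, with a seeded fault so that a counterexample actually appears. All names are ours.

```python
def counterexamples(evaluate, requests, v1, dec1, v2, dec2):
    """Requests violating (v1) dec1 -> (v2) dec2: the v1-variant of the
    request gets dec1, but the v2-variant does not get dec2."""
    found = []
    for req in requests:
        req_v1 = dict(req, subject=v1)
        req_v2 = dict(req, subject=v2)
        if evaluate(req_v1) == dec1 and evaluate(req_v2) != dec2:
            found.append((req_v1, req_v2))
    return found

def evaluate(req):
    """Toy engine: only Faculty may View InternalGrade; the seeded fault
    is that TA's View permission is missing."""
    permitted = {("Faculty", "InternalGrade", "View")}
    key = (req["subject"], req["resource"], req["action"])
    return "Permit" if key in permitted else "Deny"

reqs = [{"subject": None, "resource": "InternalGrade", "action": "View"}]
# (Faculty) Permit -> (TA) Permit is violated, exposing the missing permission:
print(counterexamples(evaluate, reqs, "Faculty", "Permit", "TA", "Permit"))
```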

Automated Software Engineering Research Group 22

Basic and Prioritization Techniques

• Basic technique: inspect counterexamples in no particular order
• Prioritization technique: designed to reduce inspection effort
  • Inspect counterexamples in the order of their fault-detection likelihood
    • Duplicate CEs first
    • Then CEs produced from likely properties with fewer CEs

[Diagram: Likely Properties → Detect CEs → Remove Duplication; CE: counterexample]
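A sketch of the inspection order just described, under the assumption that we know which likely property produced each counterexample: duplicates (counterexamples produced by more than one property) come first, followed by counterexamples from properties that produced fewer counterexamples overall. The tie-breaking details are our guess.

```python
from collections import Counter

def prioritize(ce_by_property):
    """ce_by_property: likely property -> list of counterexamples it produced.
    Returns distinct counterexamples in inspection order."""
    producers = Counter()                    # how many properties yield each CE
    for ces in ce_by_property.values():
        for ce in dict.fromkeys(ces):        # dedupe, preserving order
            producers[ce] += 1
    def smallest_source(ce):                 # size of smallest producing CE set
        return min(len(ces) for ces in ce_by_property.values() if ce in ces)
    order = list(producers)
    order.sort(key=lambda ce: (-producers[ce], smallest_source(ce)))
    return order

print(prioritize({
    "p1": ["ce_a", "ce_b"],
    "p2": ["ce_b"],                 # duplicate -> ce_b is inspected first
    "p3": ["ce_c", "ce_d", "ce_e"], # larger CE set -> inspected last
}))  # ['ce_b', 'ce_a', 'ce_c', 'ce_d', 'ce_e']
```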

Automated Software Engineering Research Group 23

Evaluation

• RQ1: fault-detection capability
  – How much higher a percentage of faults is detected by our approach compared to an existing related approach [1]?
• RQ2: cost
  – How much lower a percentage of distinct counterexamples is generated by our approach compared to the existing approach [1]?
• RQ3: cost
  – For cases where a fault in a faulty policy is detected by our approach, how high a percentage of distinct counterexamples (for inspection) is reduced by our prioritization?

1. Evan Martin and Tao Xie. Inferring Access-Control Policy Properties via Machine Learning. POLICY 2006

Automated Software Engineering Research Group 24

Metrics

• Fault-detection ratio (FR)
• Counterexample count (CC)
• Counterexample-reduction ratio (CRB) for our approach over the existing approach
• Counterexample-reduction ratio (CRP) for the prioritization technique over the basic technique

Automated Software Engineering Research Group 25

Mutation Testing

• Fault-detection capability [1,2]
  – Seed a fault into a policy and generate a mutant (a faulty version)
  – # of detected faults / Total # of faults

[Diagram: the Policy (correct version) yields Expected Decisions, the Mutant (faulty version) yields Decisions, and a counterexample reveals any difference]

1. Evan Martin, JeeHyun Hwang, Tao Xie, and Vincent C. Hu. Assessing quality of policy properties in verification of access control policies. ACSAC 2008

2. Evan Martin and Tao Xie. A Fault Model and Mutation Testing of Access Control Policies. WWW 2007

Automated Software Engineering Research Group 26

Evaluation Setup

• Seed a policy with faults to synthesize faulty policies
  – One fault in each faulty policy, for ease of evaluation
  – Four fault types [1]
    • Change-Rule Effect (CRE)
    • Rule-Target True (RTT)
    • Rule-Target False (RTF)
    • Removal Rule (RMR)
• Compare results of our approach with those of the previous approach based on decision trees (DT) [2]

1. Evan Martin and Tao Xie. A Fault Model and Mutation Testing of Access Control Policies. WWW 2007
2. Evan Martin and Tao Xie. Inferring Access-Control Policy Properties via Machine Learning. POLICY 2006

Automated Software Engineering Research Group 27

4 XACML Policy Subjects

• Real-life access control policies
  – codeD2: modified version of codeD [1]
  – continue-a, continue-b [1]: policies for a conference review system
  – Univ [2]: policies for a university
• The number of rules ranges from 12 to 306

1. Kathi Fisler, Shriram Krishnamurthi, Leo A. Meyerovich, Michael Carl Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005
2. Stoller, S.D., Yang, P., Ramakrishnan, C., Gofman, M.I. Efficient policy analysis for administrative role based access control. CCS 2007

Automated Software Engineering Research Group 28

Evaluation Results (1/2) – CRE Mutants

FR: Fault-detection ratio. CC: Counterexample count.
CRB: Counterexample-reduction ratio for our approach over the DT approach.
CRP: Counterexample-reduction ratio for the prioritization technique over the basic technique.

• Fault-detection ratios: DT (25.9%), Basic (62.3%), Prioritization (62.3%)
• Our approach (including the Basic and Prioritization techniques) outperforms DT in terms of fault-detection capability

Automated Software Engineering Research Group 29

Evaluation Results (1/2) – CRE Mutants

• Our approach reduced the number of counterexamples by 55.5% over DT
• Our approach reduced the number of counterexamples while detecting a higher percentage of faults (addressed in RQ1)
• Prioritization reduced counterexamples (for inspection) by 38.5% on average (Column "% CRP") over Basic

Automated Software Engineering Research Group 30

Evaluation Results (2/2) – Other Mutants

• Prioritization and Basic achieve the highest fault-detection capability for policies with RTT, RTF, or RMR faults

Fault-detection ratios of faulty policies

Automated Software Engineering Research Group 31

Conclusion

• A new approach that mines likely properties characterizing correlations of policy behaviors w.r.t. attribute values
• An evaluation on 4 real-world XACML policies
  – Our approach achieved >30% higher fault-detection capability than the previous related approach based on decision trees
  – Our approach helped reduce the counterexamples for inspection by >50% compared to the previous approach

Automated Software Engineering Research Group 32

Outline

• Motivation
• Our approach
• Future work

Automated Software Engineering Research Group 33

Future Work

Dissertation goal: improving the quality of access control through automated test generation and likely-property mining, plus debugging via fault localization

• Policy combination
• Access Control Policy Tool (ACPT)
• Testing of policies in healthcare systems
  • e.g., interoperability and regulatory compliance (e.g., HIPAA)

Automated Software Engineering Research Group 34

Questions?

Automated Software Engineering Research Group 35

Other Challenges

• Generate properties of high quality
  • Cover a large portion of policy behaviours
• Obligation/Delegation/Environments

Automated Software Engineering Research Group 36

Related Work

• Assessing quality of policy properties in verification of access control policies [Martin et al. ACSAC 2008]

• Inferring access-control policy properties via machine learning [Martin&Xie Policy 2006]

• Detecting and resolving policy misconfigurations in access-control systems [Bauer et al. SACMAT 2008]

Automated Software Engineering Research Group 37

My Other Research Work

Automated Software Engineering Research Group 38

Systematic Structural Testing of Firewall Policies

JeeHyun Hwang1, Tao Xie1, Fei Chen2, and Alex Liu2

North Carolina State University1

Michigan State University2

(SRDS 2008)

Automated Software Engineering Research Group 39

Problem

• Factors for misconfiguration
  – Conflicts among rules
  – Rule-set complexity
  – Mistakes in handling corner cases
• Systematic testing of firewall policies
  – Exhaustive testing is impractical
  – Considering test effort and effectiveness together
  – Complementing firewall verification

How to test a firewall?

Automated Software Engineering Research Group 40

Firewall Policy Structure

• A policy is expressed as a set of rules.

Rule  Src      SPort  Dest         DPort  Protocol  Decision
r1    *        *      192.168.*.*  *      *         accept
r2    1.2.*.*  *      *            *      TCP       discard
r3    *        *      *            *      *         discard

• A rule is represented as <predicate> → <decision>
• <predicate> is a set of <clauses>
  – Src, SPort, Dest, DPort, Protocol
  – Each clause represents an integer range
• <decision> is "accept" or "discard"
• Given a packet (Src, SPort, Dest, DPort, Protocol)
  – Each <clause> can be evaluated to "True" or "False"
  – When <predicate> evaluates to "True", <decision> is returned

Firewall format: Cisco Reflexive ACLs
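A minimal sketch of the first-match semantics described above; IP addresses are encoded as integers and wildcard clauses as full ranges. The encoding and names are ours, not the Cisco ACL format.

```python
FULL = (0, 2**32 - 1)                 # wildcard "*" as a full integer range
FIELDS = ("src", "sport", "dest", "dport", "proto")

RULES = [
    ({"dest": (0xC0A80000, 0xC0A8FFFF)}, "accept"),                   # r1: Dest 192.168.*.*
    ({"src": (0x01020000, 0x0102FFFF), "proto": (6, 6)}, "discard"),  # r2: Src 1.2.*.*, TCP
    ({}, "discard"),                                                  # r3: match all
]

def evaluate(rules, packet):
    """Return the decision of the first rule whose every clause matches."""
    for clauses, decision in rules:
        if all(clauses.get(f, FULL)[0] <= packet[f] <= clauses.get(f, FULL)[1]
               for f in FIELDS):
            return decision

pkt = {"src": 0x0A000001, "sport": 5, "dest": 0xC0A80005, "dport": 10, "proto": 6}
print(evaluate(RULES, pkt))  # accept: dest 192.168.0.5 matches r1 first
```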

Automated Software Engineering Research Group 41

Random Packet Generation

• Given a domain range (e.g., IP addresses in [0, 2^8 - 1]), random packets are generated within the domain.

        Src           SPort  Dest         DPort  Protocol
Domain  *             *      *            *      *
        162.168.12.5  5      192.168.0.5  10     TCP

• Easy to generate packets
• Due to its randomness, difficult to achieve high structural coverage
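A minimal sketch of random packet generation; the field domains below are illustrative stand-ins (32-bit addresses, 16-bit ports, 8-bit protocol), not necessarily the exact domains used in the experiments.

```python
import random

DOMAINS = {"src": 2**32 - 1, "sport": 2**16 - 1,
           "dest": 2**32 - 1, "dport": 2**16 - 1, "proto": 2**8 - 1}

def random_packet(rng=random):
    """Draw each field uniformly from its domain."""
    return {field: rng.randint(0, hi) for field, hi in DOMAINS.items()}

random.seed(0)            # for a reproducible example
print(random_packet())
```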

Automated Software Engineering Research Group 42

Packet Generation based on Local Constraint Solving

• Considering an individual rule, generate packets to evaluate the constraints of its clauses in a specified way
  – For example, every clause is evaluated to true:
    162.168.12.5  5  192.168.0.5  10  TCP   (T T T T T)
  – For example, the Dest field clause is evaluated to false and the remaining clauses to true:
    162.168.12.5  5  168.1.0.5  10  TCP   (T T F T T)

Rule  Src  SPort  Dest         DPort  Protocol  Decision
r1    *    *      192.168.*.*  *      *         accept

• Drawback: conflicts among rules are not considered
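A sketch of local constraint solving for a single rule (hence the drawback noted above): per clause, pick a value inside the range when the clause should evaluate to true and a value just outside when it should evaluate to false, ignoring all other rules; the 32-bit domain bound and all names are our assumptions.

```python
MAX = 2**32 - 1

def solve_local(rule, truth):
    """rule: field -> (lo, hi) range; truth: field -> desired clause outcome.
    Returns a packet evaluating each clause as requested, or None."""
    packet = {}
    for field, (lo, hi) in rule.items():
        if truth[field]:
            packet[field] = lo          # any value inside the range works
        elif hi < MAX:
            packet[field] = hi + 1      # just past the end of the range
        elif lo > 0:
            packet[field] = lo - 1      # just before the start
        else:
            return None                 # clause spans the whole domain
    return packet

r1 = {"src": (0, MAX), "dest": (0xC0A80000, 0xC0A8FFFF)}   # Dest 192.168.*.*
print(solve_local(r1, {"src": True, "dest": True}))    # every clause true
print(solve_local(r1, {"src": True, "dest": False}))   # Dest clause false
```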

Automated Software Engineering Research Group 43

Packet Generation based on Global Constraint Solving

• Considering that preceding rules are not applicable, generate packets to evaluate the constraints of a certain rule's clauses in a specified way
  – Example: a packet applicable to r3 (considering that r1 and r2 are not applicable):
    162.168.12.5  5  1.5.0.5  10  TCP   (all of r3's clauses true; r1 and r2 evaluate to false)

Rule  Src      SPort  Dest         DPort  Protocol  Decision
r1    *        *      192.168.*.*  *      *         accept
r2    1.2.*.*  *      *            *      TCP       discard
r3    *        *      *            *      *         discard

• Resolves conflicts among rules, but requires analysis time to solve such conflicts
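A sketch of global constraint solving. Instead of true symbolic interval subtraction, this illustration enumerates range-boundary candidate values per field and searches for a packet that matches the target rule while evading every preceding rule; that extra search is where the analysis time mentioned above goes. All names are ours.

```python
from itertools import product

FULL = (0, 2**32 - 1)
FIELDS = ("src", "sport", "dest", "dport", "proto")

RULES = [
    ({"dest": (0xC0A80000, 0xC0A8FFFF)}, "accept"),                   # r1
    ({"src": (0x01020000, 0x0102FFFF), "proto": (6, 6)}, "discard"),  # r2
    ({}, "discard"),                                                  # r3
]

def matches(clauses, packet):
    return all(clauses.get(f, FULL)[0] <= packet[f] <= clauses.get(f, FULL)[1]
               for f in FIELDS)

def candidates(rules, field):
    """Boundary values of every rule's range for this field: enough choices
    to land inside or outside each rule's interval."""
    vals = set()
    for clauses, _ in rules:
        lo, hi = clauses.get(field, FULL)
        vals |= {lo, hi, max(lo - 1, 0), min(hi + 1, FULL[1])}
    return sorted(vals)

def solve_global(rules, i):
    """Packet matching rule i but no earlier rule (so first-match applies i)."""
    for combo in product(*(candidates(rules, f) for f in FIELDS)):
        packet = dict(zip(FIELDS, combo))
        if matches(rules[i][0], packet) and \
           not any(matches(c, packet) for c, _ in rules[:i]):
            return packet
    return None

print(solve_global(RULES, 2))  # dest outside r1's range, (src, proto) outside r2's
```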

Automated Software Engineering Research Group 44

Mutation Testing

• Why mutation testing?
  – Measures the quality of a test packet set (i.e., fault-detection capability)
• Seed a fault into a firewall policy and generate a mutant (a faulty version)

[Diagram: Test Packets are evaluated against both the Firewall (correct version), yielding Expected Decisions, and the Mutant (faulty version), yielding Decisions]

• Compare their decisions
  – If the decisions differ, the fault is detected in the mutant (i.e., the mutant is "killed")
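A sketch of the mutant-kill check: run the same test packets through the correct policy and the mutant and flag any decision difference. The mutation shown is CRD (Change Rule Decision), one of the 11 operators listed on the next slide; the rule encoding and names are ours.

```python
import copy

FULL = (0, 2**32 - 1)
FIELDS = ("src", "sport", "dest", "dport", "proto")

def evaluate(rules, packet):
    """First-match evaluation as on the earlier slides."""
    for clauses, decision in rules:
        if all(clauses.get(f, FULL)[0] <= packet[f] <= clauses.get(f, FULL)[1]
               for f in FIELDS):
            return decision

def mutate_crd(rules, i):
    """Change Rule Decision: flip rule i's decision."""
    mutant = copy.deepcopy(rules)
    clauses, decision = mutant[i]
    mutant[i] = (clauses, "discard" if decision == "accept" else "accept")
    return mutant

def killed(rules, mutant, packets):
    """A mutant is killed if any packet's decision differs from the original."""
    return any(evaluate(rules, p) != evaluate(mutant, p) for p in packets)

RULES = [({"dest": (0xC0A80000, 0xC0A8FFFF)}, "accept"), ({}, "discard")]
tests = [{"src": 0, "sport": 0, "dest": 0xC0A80005, "dport": 0, "proto": 6}]
print(killed(RULES, mutate_crd(RULES, 0), tests))  # True: r1's decision flipped
```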

Automated Software Engineering Research Group 45

11 Mutation Operators

Operator  Description
RPT       Rule Predicate True
RPF       Rule Predicate False
RCT       Rule Clause True
RCF       Rule Clause False
CRSV      Change Range Start point Value
CREV      Change Range End point Value
CRSO      Change Range Start point Operator
CREO      Change Range End point Operator
CRO       Change Rule Order
CRD       Change Rule Decision
RMR       Remove Rule

Automated Software Engineering Research Group 46

Experiment

• Given a firewall policy (assumed correct!)
  – Mutants
  – Packet sets (for each technique)
• Investigating the following correlations
  – Packet sets and their achieved structural coverage
  – Structural coverage criteria and fault-detection capability
  – Packet sets and their reduced packet sets in terms of fault-detection capability
• Characteristics of each mutation operator

Automated Software Engineering Research Group 47

Experiment (Contd...)

• Notations
  – Rand: packet set generated by the random packet generation technique
  – Local: packet set generated by the technique based on local constraint solving
  – Global: packet set generated by the technique based on global constraint solving
  – R-Rand, R-Local, and R-Global: their reduced packet sets

Automated Software Engineering Research Group 48

Subjects

• We used 14 firewall policies
• Number of test packets: approximately 2 packets per rule

Table legend: # Rules: number of rules. # Mutants: number of mutants. Gen time (milliseconds): packet generation time (particularly for Global, the technique based on global constraint solving).

Automated Software Engineering Research Group 49

Measuring Rule Coverage

Rand < Local ≤ Global

• Rand achieves the lowest rule coverage

• In general, Global achieves slightly higher rule coverage than Local

Automated Software Engineering Research Group 50

Reducing the number of packet sets

• Reduced packet set (e.g., R-Rand)

– Maintain same level of structural coverage

– R-Rand (5% of Rand), R-Local (66% of Local), and R-Global (60% of Global)

– Compare their fault-detection capabilities

Automated Software Engineering Research Group 51

Fault detection capability by subject policies

R-Rand ≤ Rand < R-Local ≤ Local < R-Global ≤ Global

• Packet set with higher structural coverage has higher fault-detection capability

Automated Software Engineering Research Group 52

Fault detection capability by mutation operators

• Mutant killing ratios vary by mutation operator
  – Above 85%: RPT
  – 30%-40%: RPF, RMR
  – 10%-20%: CRSV, CRSO
  – 0%-10%: RCT, RCF, CREV, CREO, CRO

Automated Software Engineering Research Group 53

Related Work

• Testing of XACML access control policies [Martin et al. ICICS 2006, WWW 2007]

• Specification-based testing of firewalls [Jürjens et al. PSI 2001]
  – State transition model between a firewall and its surrounding network
• Defining policy criteria identified by interactions between rules [El-Atawy et al. Policy 2007]

Automated Software Engineering Research Group 54

Conclusion

• Firewall policy testing helps improve our confidence in firewall policy correctness
• Systematic testing of firewall policies
  – Structural coverage criteria
  – Three automated packet generation techniques
• Measured coverage: Rand < Local ≤ Global
• Mutation testing to show fault-detection capability
  – Generally, a packet set with higher structural coverage has higher fault-detection capability
  – Worthwhile to generate test packet sets that achieve high structural coverage

Automated Software Engineering Research Group 55

Fault Localization for Firewall Policies


JeeHyun Hwang1, Tao Xie1, Fei Chen2, and Alex Liu2

North Carolina State University1

Michigan State University2

Symposium on Reliable Distributed Systems (SRDS 2009)

Automated Software Engineering Research Group 56

Fault Model

• Faults in an attribute in a rule
  – Rule Decision Change (RDC): change the rule decision
    • R1: F1∈[0,10] ∧ F2∈[3,5] → accept
    • R1': F1∈[0,10] ∧ F2∈[3,5] → deny
  – Rule Field Interval Change (RFC): change a selected rule's interval randomly
    • R1: F1∈[0,10] ∧ F2∈[3,5] → accept
    • R1': F1∈[2,7] ∧ F2∈[3,5] → accept

Automated Software Engineering Research Group 57

Overview of Approach

• Input
  – Faulty firewall policy
  – Failed and passed test packets
• Techniques
  – Covered-rule-fault localization
  – Rule reduction technique
  – Rule ranking technique
• Output
  – Set of likely faulty rules (with their ranking)

Automated Software Engineering Research Group 58

Covered-Rule-Fault Localization

R1: F1∈[0,10] ∧ F2∈[3,5] ∧ F3∈[3,5] → accept
R2: F1∈[5,7] ∧ F2∈[0,10] ∧ F3∈[3,5] → discard
R3: F1∈[5,7] ∧ F2∈[0,10] ∧ F3∈[6,7] → accept
R4: F1∈[2,10] ∧ F2∈[0,10] ∧ F3∈[5,10] → discard
R5: F1∈[0,10] ∧ F2∈[0,10] ∧ F3∈[0,10] → discard

Inject a Rule Decision Change fault in R4: "accept" rather than "discard".

Rule  #Failed  #Passed  Selected
R1    0        2
R2    0        2
R3    0        2
R4    2        0        ●
R5    0        2

• Inspect a rule covered by a failed test
• R4 is selected for inspection
• The RDC faulty rule is effectively filtered out
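A sketch of covered-rule fault localization on the example above: evaluate each failed test packet, record the first rule it matches, and select those rules for inspection. The encoding is ours; the rules and the selected rule (R4) follow the slide.

```python
def first_match(rules, packet):
    """Index of the first rule whose every clause contains the packet value."""
    for i, (clauses, _) in enumerate(rules):
        if all(lo <= packet[f] <= hi for f, (lo, hi) in clauses.items()):
            return i

def covered_rules(rules, failed_tests):
    """Rules covered by at least one failed test, selected for inspection."""
    return sorted({first_match(rules, pkt) for pkt in failed_tests})

RULES = [  # R1..R5 from the slide
    ({"F1": (0, 10), "F2": (3, 5), "F3": (3, 5)}, "accept"),
    ({"F1": (5, 7), "F2": (0, 10), "F3": (3, 5)}, "discard"),
    ({"F1": (5, 7), "F2": (0, 10), "F3": (6, 7)}, "accept"),
    ({"F1": (2, 10), "F2": (0, 10), "F3": (5, 10)}, "discard"),  # faulty (RDC)
    ({"F1": (0, 10), "F2": (0, 10), "F3": (0, 10)}, "discard"),
]
failed = [{"F1": 3, "F2": 6, "F3": 8}, {"F1": 9, "F2": 0, "F3": 9}]  # hypothetical
print([f"R{i + 1}" for i in covered_rules(RULES, failed)])  # ['R4']
```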

Automated Software Engineering Research Group 59

Rule Reduction Technique

• Reduce the number of rules for inspection
• The earliest-placed rule covered by failed tests: r' = R4
• Other rules selected with the following criterion:
  • Rules above r': R1, R2, R3
  • Among them, rules with a decision different from that of r': R1, R3

Inject a Field Interval Change fault in R1's F3: F3∈[3,3] rather than F3∈[3,5].

R1: F1∈[0,10] ∧ F2∈[3,5] ∧ F3∈[3,5] → accept
R2: F1∈[5,7] ∧ F2∈[0,10] ∧ F3∈[3,5] → discard
R3: F1∈[5,7] ∧ F2∈[0,10] ∧ F3∈[6,7] → accept
R4: F1∈[2,10] ∧ F2∈[0,10] ∧ F3∈[5,10] → discard
R5: F1∈[0,10] ∧ F2∈[0,10] ∧ F3∈[0,10] → discard

Rule  #Failed  #Passed  Selected
R1    0        2        ●
R2    0        2
R3    0        2        ●
R4    2        1
R5    1        2
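A sketch of the reduction criterion on the example above: starting from r' (the earliest rule covered by a failed test), keep only the rules above r' whose decision differs from that of r'. On the slide's example this selects R1 and R3; function and variable names are ours.

```python
def reduce_rules(decisions, r_prime):
    """decisions: decision per rule, in order; r_prime: index of r'.
    Returns indices of the rules kept for inspection (r' itself, plus the
    rules above it whose decision differs from that of r')."""
    dec_prime = decisions[r_prime]
    above = [i for i in range(r_prime) if decisions[i] != dec_prime]
    return above + [r_prime]

# R1..R5 decisions from the slide; r' = R4 (index 3, decision "discard").
DECISIONS = ["accept", "discard", "accept", "discard", "discard"]
print([f"R{i + 1}" for i in reduce_rules(DECISIONS, 3)])  # ['R1', 'R3', 'R4']
```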

Automated Software Engineering Research Group 60

Rule Ranking Technique

• Rank rules based on their likelihood of being faulty, using clause coverage
  – FC1 <= FC2
    • FC1: # of clauses evaluated to false in a faulty rule
    • FC2: # of clauses evaluated to false in other rules
  – The ranking is calculated from a formula over
    • FF(r): # of clauses evaluated to false
    • FT(r): # of clauses evaluated to true

Automated Software Engineering Research Group 61

Experiments

• 14 firewall policies

Table legend: # Rules: number of rules. # Tests: number of generated test packets. # RDC: number of RDC faulty policies. # RFC: number of RFC faulty policies.

Automated Software Engineering Research Group 62

Results: Covered-Rule-Fault Localization

• 100% of RDC faulty rules are detected
• 69% of RFC faulty rules are detected
• 31% of RFC faulty rules are not covered by any failed test

Automated Software Engineering Research Group 63

Results: Rule Reduction for Inspection

• Rule reduction percentage: % Reduce (30.63% of rules)
• Ranking-based rule reduction percentage: % R-Reduce (66% of rules)

Automated Software Engineering Research Group 64

Conclusion and Future Work

• Our techniques help policy authors locate faults effectively by reducing the number of rules for inspection
  – 100% of RDC faulty rules and 69% of RFC faulty rules can be detected by inspecting covered rules
  – On average, 30.63% of rules are pruned from inspection by our rule reduction technique, and 56.53% by our rule ranking technique
• We plan to investigate fault localization for multiple faults in a firewall policy