
Transcript of: A Model-Checking In-lined Reference Monitoring System for ActionScript

Page 1: A MODEL-CHECKING IN-LINED REFERENCE MONITORING SYSTEM FOR ACTIONSCRIPT

Meera Sridhar and Kevin W. Hamlen

(The University of Texas at Dallas)

Supported in part by grants from AFOSR

Presentation at Adobe Systems Inc.

San Francisco, CA

November 19, 2009

Page 2: BEYOND ACCESS CONTROL

1. Security Policy – A security policy defines the desired "secure" behavior of a system. More formally, consider a computer system to be a finite-state automaton with a set of transition functions that change state. Then a security policy is a statement that partitions the states of the system into a set of authorized, or secure, states and a set of unauthorized, or nonsecure, states.

- Matt Bishop, Computer Security: Art and Science

Page 3: TRUSTWORTHY = SMALL

• 2. Trusted Computing Base (TCB) – a small amount of software and hardware that security depends on and that we distinguish from a much larger amount that can misbehave without affecting security.

- Lampson et al., Authentication in Distributed Systems

• An important goal in computer security is to minimize the TCB.

Page 4: SEPARATION OF CONCERNS

• 3. Code-producer and code-consumer – terminology we will use frequently:
  – code-producer – the party that wrote the code
    • note that this may differ from the code-distributor; for example, in AIR, Adobe is considered the code-distributor, but the code-producer can be any person who chooses to create an AIR application and release it to the public
  – code-consumer – the end-user who will run the application

Page 5: THE NEED FOR MORE SOPHISTICATED SECURITY ENFORCEMENT TOOLS FOR FLASH AND AIR

• Existing security mechanisms for ActionScript and related technologies (including the AVM, Flash, Flex, and AIR) mainly fall into two categories:
  – code-signing
  – sandboxing
• These suffice for certain classes of attacks (mainly access control), BUT:
  – code-signing
    • places the code-producer in the TCB
    • the consumer can either choose to trust the code-producer and run the program, or choose not to trust the code-producer – there is no in-between
  – sandboxing
    • enforces only a small class of coarse-grained access control policies
    • policies are built into the AVM and runtime libraries

Page 6: THE NEED FOR MORE SOPHISTICATED SECURITY ENFORCEMENT TOOLS FOR FLASH AND AIR, CONTD.

• Limitations:
  – cannot enforce system- and application-specific policies
    • e.g. prohibiting write access to files with only certain file extensions
  – cannot enforce finer-grained policies
    • e.g. those that constrain arguments to individual ActionScript instructions (such as arguments to eval)
    • "programs cannot use the file system" is too coarse-grained
  – the policy is encoded in the underlying OS/VM, so
    • the code-producer needs to know the policy
    • the policy cannot be modified or specified by the code-consumer after receiving the code
      – e.g. the Air Force wants to enforce policies that they cannot even talk about
    • changes to the policy would need to be reflected in the OS/VM by modifying it – expensive and inflexible

Page 7: ENTER IRMS: WHAT ARE THEY?

• In-Lined Reference Monitors (IRMs) inject runtime security guards directly into untrusted binaries
  – guards test whether an impending operation constitutes a policy violation
  – if so, some corrective action is taken to prevent the violation, such as premature termination
  – the result is self-monitoring code that can be safely executed without external monitoring

Page 8: IRMS CONTD.

IRMs
• keep track of the history of security-relevant events observed
• enable enforcement of a powerful class of security policies (not precisely enforceable by any purely static analysis)
• do not require any admin/VM-level privileges to enforce the policy

IRMs also
• allow code-consumers to specify or modify the security policy after receiving the code
  – the code-producer need not know the policy, unlike with static analyses, code-signing, and sandboxing
• changes to the policy don't require modifications to the underlying OS/VM

Page 9: A VERY SIMPLE EXAMPLE

Let’s say the desired security policy is the following:

The program should not open more than 3 pop-up windows.

Page 10: A VERY SIMPLE EXAMPLE, CONTD.

public class RUnsafeURL extends Sprite {

    public function RUnsafeURL() {
        //create textfield object here
        var theField:TextField = new TextField();
        theField.text = "Click me!";
        theField.border = true;
        theField.background = true;
        theField.selectable = false;
        addChild(theField);
        //add event listener here
        theField.addEventListener(MouseEvent.CLICK, clickListener);
    } //constructor RUnsafeURL

    private function clickListener(e:MouseEvent):void {
        var url:String = "javascript:window.open('http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos','title','width=800,height=600,toolbar=no,resizable=no,menubar=no,status=no,scrollbars=no');void(0);";
        while (true) {
            navigateToURL(new URLRequest("http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos"));
        }
    } //end method clickListener
} //end class definition

Page 11: A VERY SIMPLE EXAMPLE, CONTD.

public class RUnsafeURL extends Sprite {

    private var security:Number = 0;   //injected security-state variable

    public function RUnsafeURL() {
        //create textfield object here
        var theField:TextField = new TextField();
        theField.text = "Click me!";
        theField.border = true;
        theField.background = true;
        theField.selectable = false;
        addChild(theField);
        //add event listener here
        theField.addEventListener(MouseEvent.CLICK, clickListener);
    } //constructor RUnsafeURL

    private function clickListener(e:MouseEvent):void {
        var url:String = "javascript:window.open('http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos','title','width=800,height=600,toolbar=no,resizable=no,menubar=no,status=no,scrollbars=no');void(0);";
        while (true) {
            if (security >= 3)   //injected guard: stop once 3 pop-ups have been opened
                HALT;
            navigateToURL(new URLRequest("http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos"));
            security++;          //injected update of the security state
        }
    } //end method clickListener
} //end class definition

Page 12: BINARY-LEVEL IRM ARCHITECTURE

[Architecture diagram. Data flow, as recoverable from the box labels: ORIGINAL SWF FILE → ABC EXTRACTOR → ABC FILE → PARSER → ABSTRACT SYNTAX TREE (AST) → REWRITER (guided by the SECURITY POLICY) → INSTRUMENTED AST → CODE GENERATOR → INSTRUMENTED ABC FILE → ABC INJECTOR (recombined with the ORIGINAL SWF DATA) → SAFE SWF FILE.]


Page 13: Erik Jonsson School of Engineering and Computer Science


For more information on what I’ve presented so far, please refer to the following paper:

M. Sridhar and K. W. Hamlen. ActionScript In-Lined Reference Monitoring in Prolog. In Proc. Intl. Symposium on Practical Aspects of Declarative Languages, 2010. To appear.

Page 14: TWO IMPORTANT CONCERNS

1. Does the IRM preserve good code?
  – inserted guards have no visible side effect unless the code is policy-violating
  – depends on how strictly one defines "preserve"
    • not applicable to self-modifying code, for example
  – best determined through testing
  – there is a fairly long history of doing this (though it may not always be known as IRM)
    • the Java compiler, for example, inserts an array-bounds check before any array access
    • the user doesn't notice any difference unless there is an array out-of-bounds violation

2. The more important concern – how do we know that the rewriter prevents policy violations?
  – testing is not good enough, since an attacker is actively trying to find loopholes in the code
  – this needs rigorous proof

Page 15: THE NEED FOR VERIFICATION

• ideally, we would formally verify the rewriter
• we also want to minimize the TCB
  – rewriters are powerful, but therefore complex
  – we generally have a rewriter tailored to a particular security policy class
    • so the combined size of all the rewriters can be large
  – rewriters employ many optimization schemes, so any change to an optimization means a change to the TCB
    • not desirable, since we want to keep the TCB fairly stable

Page 16: THE NEED FOR VERIFICATION, CONTD.

• a verifier (a separate, smaller program) checks the rewritten program against the security policy, and either
  – deems the program to be policy-adherent (accepts), or
  – deems it to be policy-violating (rejects), and produces a counterexample – a sequence of events that led up to the security policy being violated in a certain program state
• this lets us take the rewriter out of the TCB
• typically, a good verifier is general and less subject to change, and the verifier code is the same across security policy classes
• therefore, it is much more desirable to have the verifier rather than the rewriter in the TCB

Page 17: VERIFYING IRM ARCHITECTURE

[Architecture diagram, extending the previous figure. The IRM FRAMEWORK now contains a family of BINARY REWRITERS R1, R2, …, Rn (one per security policy class). The INSTRUMENTED ABC FILE, together with the SECURITY POLICY, is passed to a VERIFIER consisting of a PARSER (producing an AST) and a MODEL-CHECKER + ABSTRACT INTERPRETER, which either ACCEPTs and emits a VERIFIED ABC FILE (injected back into the ORIGINAL SWF DATA to form the SAFE SWF FILE) or REJECTs and emits a COUNTER-EXAMPLE. Only the verifier lies inside the TRUSTED COMPUTING BASE; the rewriters do not.]

Page 18: USING MODEL-CHECKING FOR VERIFICATION

• Model-checking is an extremely powerful software verification paradigm that can verify properties
  – more complex than those typically expressible by type systems
    • ActionScript already has some basic type-checking
    • while type-checking suffices to prove policies such as ensuring that the argument to a jump is indeed a label, it does not prove policies like "do not open more than 3 pop-up windows"
  – more semantically flexible and abstract than those typically encoded by contracts
    • contracts tend to expect strict syntactic adherence
    • not flexible enough for various rewriting strategies

Page 19: WHAT IS MODEL-CHECKING?

• Informally, given a model of a system (in this case, the ActionScript program) and a safety property, a model-checker explores all possible states that the system can be in, checking whether the property is violated in any state.
• If no violations occur, then the model-checker accepts.
• If the property is violated in a particular state, then the model-checker produces the sequence of states that led up to the violating state.
• clearly suited for verification
• the main challenge with model-checking is state-space explosion – since the model-checker explores each possible state, it quickly runs out of memory, especially if there are loops
  – tackling this for IRMs has been one of the main focuses of our research

Page 20: KEY ELEMENTS OF THE VERIFIER

• the security policy is modeled as a security automaton (a minimal sketch follows after this list)
  – the policy-writer uses a high-level policy language
    • this can be anything from check-boxes to Linear Temporal Logic
  – the high-level policy is reduced to an automaton
    • this idea is obtained from the literature
• the verifier abstractly interprets untrusted programs, reasoning about possible event histories
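As a minimal sketch of the security-automaton view (the class name, field names, and the "popup" event label are our own illustration, not the actual policy language or verifier internals), the 3-pop-up policy from the earlier example can be written as an explicit automaton in ActionScript:

class PopupPolicyAutomaton {
    private var state:int = 0;            // automaton state = number of pop-ups observed so far
    private static const MAX:int = 3;     // policy bound: at most 3 pop-ups
    public var violated:Boolean = false;  // set when no authorized transition exists

    // advance the automaton on one security-relevant event
    public function step(event:String):void {
        if (violated) return;             // remain in the violation (sink) state
        if (event == "popup") {
            if (state < MAX) state++;     // authorized transition
            else violated = true;         // a fourth pop-up would violate the policy
        }
        // all other events are policy-irrelevant and simply self-loop
    }
}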

Page 21: KEY ELEMENTS OF THE VERIFIER

• uses memoization and co-induction to reason about fixed points (see the sketch after this list)
  – suppose a security-relevant property is such that no loop iteration changes whether the property is true or false
  – then the infinite number of states can be replaced with a single, representative state
  – this is needed for termination of the model-checker
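A rough ActionScript illustration of why memoization gives termination (the AbstractState interface and the explorer are placeholder names of our own, not the verifier's actual Prolog implementation): exploration stops as soon as an abstract state repeats, as the saturated pop-up counter does inside the example's while(true) loop.

interface AbstractState {
    function key():String;           // canonical encoding of this abstract state
    function successors():Array;     // abstract states reachable in one step
    function isViolating():Boolean;  // does this state violate the policy?
}

class AbstractExplorer {
    private var seen:Object = {};    // memo table: state key -> true

    // returns true if no reachable abstract state violates the policy
    public function explore(s:AbstractState):Boolean {
        var k:String = s.key();
        if (seen[k]) return true;                 // revisited state: stop here (coinductive success)
        seen[k] = true;
        if (s.isViolating()) return false;        // violating (stuck) state found
        for each (var next:AbstractState in s.successors())
            if (!explore(next)) return false;
        return true;
    }
}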

For more information on this aspect specifically, please see:
B. W. DeVries, G. Gupta, K. W. Hamlen, S. Moore, and M. Sridhar. ActionScript Bytecode Verification with Co-logic Programming. In Proc. of the ACM SIGPLAN Workshop on Prog. Languages and Analysis for Security (PLAS), 2009.

Page 22: KEY ELEMENTS OF THE VERIFIER, CONTD.

• another way to combat state-space explosion is to use abstract interpretation (a small example follows below)
  – when certain properties of variable values, rather than their concrete values, suffice for reasoning about policy-adherence, concrete variable values can be replaced with abstract values from an abstract domain
  – this can greatly reduce the state space, since several concrete states can be mapped onto a single abstract state
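For instance, a minimal sketch (our own illustrative abstraction, not the verifier's actual domain) of abstracting the injected security counter from the pop-up example: values below the bound stay precise, while the infinitely many concrete states above the bound collapse into a single abstract state.

// map a concrete counter value into a small finite abstract domain
function abstractCounter(concrete:int):String {
    return (concrete <= 3) ? String(concrete)  // precise below the bound: "0".."3"
                           : "ABOVE_BOUND";    // all larger values share one abstract state
}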

Page 23: KEY ELEMENTS OF THE VERIFIER, CONTD.

Abstract Interpretation, contd.

– the abstract domain is constructed based on knowledge of which properties are crucial for interpretation
– of course, one has to take care that this abstraction preserves soundness, but also,
– the precision of this abstraction determines the power of our analysis
  • it is possible to create an abstraction that is correct but abstracts away too much information – this results in a high conservative rejection rate; that is, no policy-violating programs are accepted, but many policy-adherent programs are conservatively rejected

Page 24: KEY ELEMENTS OF THE VERIFIER, CONTD.

• model-checking rewritten code requires abstract interpretation of
  – not only program variable values, but also
  – the history of security-relevant events
• uses a sophisticated abstract interpretation scheme, built around an abstract interpretation lattice constructed from the security policy automaton (a rough sketch follows below)
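One way to picture this combined abstraction (a simplification of our own, assuming a flat lattice on counter values and a powerset lattice on automaton states; the class and field names are illustrative, not the verifier's actual lattice construction):

class AbstractMachineState {
    public var counter:String;      // abstract value of the injected counter, e.g. "2" or "ABOVE_BOUND"
    public var policyStates:Array;  // security-automaton states the event history may be in, e.g. [0, 1]

    public function AbstractMachineState(counter:String, policyStates:Array) {
        this.counter = counter;
        this.policyStates = policyStates;
    }

    // least upper bound of two abstract states at a control-flow merge point
    public function join(other:AbstractMachineState):AbstractMachineState {
        // flat lattice on counter values: equal values stay precise, otherwise go to UNKNOWN
        var c:String = (counter == other.counter) ? counter : "UNKNOWN";
        // powerset lattice on automaton states: union of the possibilities
        var states:Array = policyStates.concat();
        for each (var s:int in other.policyStates)
            if (states.indexOf(s) < 0) states.push(s);
        return new AbstractMachineState(c, states);
    }
}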

Page 25: KEY ELEMENTS OF THE VERIFIER, CONTD.

• the model-checking step is encoded in the operational semantics of the abstract machine, where policy-violations are modeled as stuck states

• thus, if the abstract machine reaches a stuck state, the program is rejected as policy-violating

Page 26: KEY ELEMENTS OF THE VERIFIER, CONTD.

• We have proved that our method is sound – that is, adopting the proposed abstract interpretation scheme does not result in any policy-violating program being accepted by the model-checker.

• We have shown convergence - that our model-checker will always terminate in polynomial time.


Page 27: Erik Jonsson School of Engineering and Computer Science


• Please refer to the following paper for a detailed treatment:

M. Sridhar and K. W. Hamlen. Model-checking In-lined Reference Monitors. In Proc. Intl. Conf. on Verification, Model-Checking and Abstract Interpretation, 2010. To appear.

Page 28: IMPLEMENTATION AND RESULTS

• The rewriting system is a prototype, not yet a fully production-level system. The model-checking system is deliberately simple – it can verify these properties, and we don't want a very complex model-checker.
• ABC Extractor/Injector written in C
• The rest written in Prolog (Prolog simplifies the implementation, but the system doesn't have to be in Prolog)
  – Definite Clause Grammars greatly simplify parsing
  – the reversible nature of predicates facilitates code-generation as reverse-parsing
  – Prolog already has memoization (tabling) and coinduction
• Lines of code:
  – binary rewriters: ~400 lines of Prolog code per security policy family
  – shared parser/generator: ~900 lines
  – verifier: ~2000 lines of code

Page 29: SAMPLE POLICIES

redir policy:

• prohibits malicious URL-redirections by ABC ad applets

• redirections are implemented at the bytecode level by navigateToURL system calls

• The policy requires that method check_url(s) be called to validate destination s before any redirection to s may occur. Method check_url has a trusted implementation provided by the ad distributor and/or web host, and may incorporate dynamic information such as ad hit counts or webpage context.

• Our IRM enforces this policy by injecting calls to check_url into untrusted applets. For better runtime efficiency, it positions some of these calls early in the program's execution (to pre-validate certain URLs) and injects runtime security state variables that avoid potentially expensive duplicate calls by tracking the history of past calls.
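A hedged source-level sketch of the guard's effect (check_url comes from the policy description above; the wrapper function, the urlValidated cache, and the assumption that check_url returns a Boolean are our own illustration, not literal rewriter output – the actual IRM rewrites ABC bytecode rather than source). It would live inside the applet class:

private var urlValidated:Object = {};          // injected security state: URLs already checked

private function guardedNavigate(s:String):void {
    if (!urlValidated[s]) {
        if (!check_url(s))                     // trusted validator supplied by distributor/host
            throw new SecurityError("redir policy: URL rejected: " + s);
        urlValidated[s] = true;                // remember the validation to avoid duplicate calls
    }
    navigateToURL(new URLRequest(s));          // safe: s has been validated
}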

Page 30: SAMPLE POLICIES, CONTD.

postok policy:

• sanitizes strings entered into message box widgets

• This can be helpful in preventing cross-site scripting attacks, privacy violations, and buffer-overflow exploits that affect older versions of the ActionScript VM.

• We enforced the policy on the Posty AIR application, which allows users to post messages to social networking sites such as Twitter, Jaiku, Tumblr, and Friendfeed.
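A minimal sketch of the idea (the particular sanitization rule, the sanitize/guardedPost names, and post() as a stand-in for Posty's posting routine are our own assumptions, not the actual postok rewriter output):

// strip characters commonly used in script-injection payloads before posting
private function sanitize(input:String):String {
    return input.replace(/[<>"']/g, "");
}

private function guardedPost(message:String):void {
    post(sanitize(message));   // every post is dominated by a sanitization step
}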

Page 31: SAMPLE POLICIES, CONTD.

flimit policy:

• enforces a resource bound that disallows the creation of more than n files on the user's machine

• enforced this policy on the FedEx Desktop AIR application, which continuously monitors a user's shipment status and sends tracking information directly to his or her desktop

• the IRM implements the policy by injecting a counter into the untrusted code that tracks file creations (a rough sketch follows below)
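A rough source-level sketch of the injected counter (the names and the example bound of 10 files are our own; the real IRM instruments ABC bytecode, and the flash.filesystem classes File, FileStream, and FileMode are assumed to be imported in this AIR fragment):

private var filesCreated:int = 0;              // injected security state: files created so far
private const FILE_LIMIT:int = 10;             // policy parameter n (example value)

private function guardedCreate(f:File):void {
    if (filesCreated >= FILE_LIMIT)            // injected guard before the security-relevant event
        throw new SecurityError("flimit policy: file-creation bound reached");
    var stream:FileStream = new FileStream();
    stream.open(f, FileMode.WRITE);            // the file-creation event itself
    stream.close();
    filesCreated++;                            // injected update of the security state
}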

Page 32: EXPERIMENTAL RESULTS


PROGRAM TESTED    POLICY ENFORCED    SIZE BEFORE    SIZE AFTER    REWRITING TIME    VERIFICATION TIME
countdownBadge    redir              1.80 KB        1.95 KB       1.429s            0.532s
navToURL          redir              0.93 KB        1.03 KB       0.863s            0.233s
fiona             redir              58.9 KB        59.3 KB       15.876s           0.891s
calder            redir              58.2 KB        58.6 KB       16.328s           0.880s
posty             postok             112.0 KB       113.0 KB      54.170s           2.443s
fedex             flimit             77.3 KB        78.0 KB       39.648s           1.729s

All tests were performed on an Intel Pentium Core 2 Duo machine running Yap Prolog v5.1.4.

Page 33: Erik Jonsson School of Engineering and Computer Science


Our papers are available at:

www.utdallas.edu/~meera.sridhar

Page 34: BIBLIOGRAPHY

1. B. W. DeVries, G. Gupta, K. W. Hamlen, S. Moore, and M. Sridhar. ActionScript Bytecode Verification with Co-logic Programming. In Proc. of the ACM SIGPLAN Workshop on Prog. Languages and Analysis for Security (PLAS), 2009.

2. K. W. Hamlen, G. Morrisett, and F. B. Schneider. Computability Classes for Enforcement Mechanisms. ACM Trans. Prog. Languages and Systems, 2006.

3. F. B. Schneider. Enforceable Security Policies. ACM Trans. Information and System Security, 3:30–50, 2000.

4. M. Sridhar and K. W. Hamlen. Model-checking In-lined Reference Monitors. In Proc. Intl. Conf. on Verification, Model-Checking and Abstract Interpretation, 2010. To appear.

5. M. Sridhar and K. W. Hamlen. ActionScript In-Lined Reference Monitoring in Prolog. In Proc. Intl. Symposium on Practical Aspects of Declarative Languages, 2010. To appear.

Page 35: ACKNOWLEDGEMENTS

• We thank the Air Force Office of Scientific Research (AFOSR) for funding part of this research.

• We thank Dr. Gopal Gupta, Dr. Feliks Kluzniak, and Dr. R. Chandrasekaran from The University of Texas at Dallas for many helpful discussions and feedback.

• Finally, many thanks to Peleus Uhley here for giving us this opportunity to share our work with you.
