ADJOINING DECLASSIFICATION AND ATTACK MODELS

BY ABSTRACT INTERPRETATION

Roberto Giacobazzi and Isabella Mastroeni

Dipartimento di Informatica

Università di Verona

Italy

Edinburgh, April 8th, 2005

The Problem: Non-Interference

[Figure: a program SW takes a public input L (financial investment) and a secret input H (investment data) and produces a public output L (log files); an external observer of the public output asks: is it secure? NO.]

[Sabelfeld and Sands'01]

Background: Non-Interference

SECURITY PROPERTY: states which classes of objects must not interfere with other classes of objects.

⇓

Confinement problem [Lampson'73]: preventing the results of computations from leaking even partial information about the confidential inputs.

⇓

Non-interference policies require that no change to confidential data can be revealed through the observation of public data.

Many real systems are intended to leak some kind of information.

Even if a system satisfies non-interference, some kinds of tests could reject it as insecure.

Characterizing released information: [Cohen'77], [Zdancewic & Myers'01], [Clark et al.'04], [Lowe'02];

Constraining attackers: [Di Pierro et al.'02], [Laud'01].

Our idea: Abstracting Non-Interference

[Figure: the program SW maps the secret input H and the public input L to a public output; the external observer sees the public data only up to an abstraction ρ, and the secret is observable only up to a property φ(H).]

Abstract domain completeness

Let 〈A, α, γ, C〉 be a Galois insertion [Cousot & Cousot '77,'79], f : C → C, f^a = α ◦ f ◦ γ : A → A (the best correct approximation, b.c.a., of f), and ρ = γ ◦ α.

ρ correct for f: α(f(x)) ≤ f^a(α(x)) for all x

ρ complete for f: α(f(x)) = f^a(α(x)) for all x, i.e. ρ ◦ f ◦ ρ = ρ ◦ f
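These definitions can be replayed on a toy domain. The following sketch (my own illustration, not part of the talk) builds the sign abstraction as a Galois insertion over sets of small integers, derives the b.c.a. f^a = α ◦ f ◦ γ, and tests the completeness equation ρ ◦ f ◦ ρ = ρ ◦ f by enumeration; the universe, the names, and the truncation to a finite range are assumptions of the sketch.

# Toy check of abstract-domain completeness (illustration only).
# Concrete domain C = powerset of a small integer universe,
# abstract domain A = signs, with alpha/gamma a Galois insertion.
from itertools import chain, combinations

UNIVERSE = frozenset(range(-3, 4))

def alpha(xs):
    # Abstraction: the best sign describing a set of integers.
    if not xs:
        return "bot"
    if all(x < 0 for x in xs):
        return "neg"
    if all(x == 0 for x in xs):
        return "zero"
    if all(x > 0 for x in xs):
        return "pos"
    return "top"

def gamma(a):
    # Concretization: the largest set described by a sign.
    return {"bot": frozenset(),
            "neg": frozenset(x for x in UNIVERSE if x < 0),
            "zero": frozenset({0}),
            "pos": frozenset(x for x in UNIVERSE if x > 0),
            "top": UNIVERSE}[a]

def rho(xs):
    # The closure rho = gamma . alpha on the concrete domain.
    return gamma(alpha(xs))

def lift(f):
    # Collecting semantics of f, truncated to keep the universe
    # finite (an assumption of this sketch).
    return lambda xs: frozenset(f(x) for x in xs) & UNIVERSE

def bca(f):
    # Best correct approximation: f^a = alpha . f . gamma.
    return lambda a: alpha(lift(f)(gamma(a)))

def complete(f):
    # Backward completeness: rho(f(rho(X))) = rho(f(X)) for all X.
    F = lift(f)
    subsets = chain.from_iterable(combinations(sorted(UNIVERSE), r)
                                  for r in range(len(UNIVERSE) + 1))
    return all(rho(F(rho(frozenset(s)))) == rho(F(frozenset(s)))
               for s in subsets)

print(bca(lambda x: -x)("pos"))    # 'neg'
print(complete(lambda x: -x))      # True: signs are complete for negation
print(complete(lambda x: x + 1))   # False: incomplete for successor

Negation maps the sign partition onto itself, so completeness holds; the successor function does not, which is the typical situation where the shell and core transformations introduced later are needed.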

Standard non-interference

[Figure: the semantics JPK maps a private input and a public input to a public output.]

∀l : L, ∀h1, h2 : H . JPK(h1, l)_L = JPK(h2, l)_L
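On finite ranges the definition can be checked directly. A minimal sketch (mine, not the authors' code), where a program is modelled by its denotational semantics restricted to the public output; the finite value ranges are an assumption of the sketch:

# Standard non-interference by brute-force enumeration
# (illustration only). P is a function (h, l) -> public output.

H_VALUES = range(0, 4)   # assumed finite range of private inputs
L_VALUES = range(0, 4)   # assumed finite range of public inputs

def non_interference(P):
    # forall l, h1, h2 . P(h1, l)_L = P(h2, l)_L
    return all(P(h1, l) == P(h2, l)
               for l in L_VALUES
               for h1 in H_VALUES for h2 in H_VALUES)

# The loop of Example I below, while h do (l := l + 2; h := h - 1),
# computes l + 2h, so it fails standard non-interference:
print(non_interference(lambda h, l: l + 2 * h))  # False
print(non_interference(lambda h, l: l + 2))      # True: h is ignored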

Abstracting non-interference I: Narrow ANI

[POPL'04]

[Figure: the observer sees the public input through the abstraction η and the public output through the abstraction ρ.]

ρ, η ∈ Abs(℘(V_L)): [η]P(ρ): η(l1) = η(l2) ⇒ ρ(JPK(h1, l1)_L) = ρ(JPK(h2, l2)_L)
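Narrow ANI is likewise checkable by enumeration. A minimal sketch (mine), with abstractions modelled as functions that identify values sharing a property; the value ranges are assumptions, and the two programs anticipate Examples I and II below:

# Narrow abstract non-interference [eta]P(rho) by enumeration
# (illustration only).

H_VALUES = range(0, 5)     # Example I's loop terminates for h >= 0
L_VALUES = range(-4, 5)

ident = lambda x: x                   # id: the attacker sees everything
parity = lambda x: x % 2              # Par: the attacker sees even/odd
sign = lambda x: (x > 0) - (x < 0)    # Sign: the attacker sees -, 0, +

def narrow_ani(P, eta, rho):
    # [eta]P(rho): eta(l1) = eta(l2) ==> rho(P(h1,l1)) = rho(P(h2,l2))
    return all(rho(P(h1, l1)) == rho(P(h2, l2))
               for l1 in L_VALUES for l2 in L_VALUES
               if eta(l1) == eta(l2)
               for h1 in H_VALUES for h2 in H_VALUES)

# Example I: while h do (l := l + 2; h := h - 1) computes l + 2h.
P1 = lambda h, l: l + 2 * h
print(narrow_ani(P1, ident, ident))    # False: standard NI fails
print(narrow_ani(P1, ident, parity))   # True: Par(l) never changes

# Example II: P = l := 2*l*h^2 leaks the sign of l to an observer
# who is only granted the parity of the input:
P2 = lambda h, l: 2 * l * h * h
print(narrow_ani(P2, parity, sign))    # False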

Abstracting non-interference II

[POPL'04]

ρ, η ∈ Abs(℘(V_L)): (η)P(ρ): η(l1) = η(l2) ⇒ ρ(JPK(h1, η(l1))_L) = ρ(JPK(h2, η(l2))_L)

Abstracting non-interference III: ANI

[POPL'04]

ρ, η ∈ Abs(℘(V_L)), φ ∈ Abs(℘(V_H)): (η)P(φ ⇒ ρ): η(l1) = η(l2) ⇒ ρ(JPK(φ(h1), η(l1))_L) = ρ(JPK(φ(h2), η(l2))_L)

Examples

EXAMPLE I: while h do (l := l + 2; h := h − 1).

Standard Non-Interference ≡ [id]P(id)

h = 0, l = 1 ⇝ l = 1
h = 1, l = 1 ⇝ l = 3
h = n, l = 1 ⇝ l = 1 + 2n

⇓ [id]P(Par)

h = 0, l = 1 ⇝ Par(l) = odd
h = 1, l = 1 ⇝ Par(l) = odd
h = n, l = 1 ⇝ Par(l) = odd

EXAMPLE II: P = l := 2 ∗ l ∗ h².

[Par]P(Sign)

h = 1, l = 4 (Par(4) = even) ⇝ Sign(l) = +
h = 1, l = −4 (Par(−4) = even) ⇝ Sign(l) = −

⇓ (Par)P(Sign)

h = −3, Par(l) = even ⇝ Sign(l) = I don't know
h = 1, Par(l) = even ⇝ Sign(l) = I don't know

EXAMPLE III: P = l := l ∗ h².

(id)P(Par)

h = 2, l = 1 ⇝ Par(l) = even
h = 3, l = 1 ⇝ Par(l) = odd
h = n, l = 1 ⇝ Par(l) = Par(n)

⇓ (id)P(Sign ⇒ Par)

Sign(h) = +, l = 1 ⇝ Par(l) = I don't know
Sign(h) = −, l = 1 ⇝ Par(l) = I don't know
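Example III can be replayed mechanically. A minimal sketch (mine) of full ANI, where each abstraction acts through the partition it induces and the program is run on whole blocks (a collecting semantics); restricting to nonzero private values, so that Sign has exactly the two classes used on the slide, is an assumption of the sketch, and (id)P(Par) is encoded as φ = id:

# Full abstract non-interference (eta)P(phi => rho) by enumeration
# (illustration only).

H_VALUES = [-4, -3, -2, -1, 1, 2, 3, 4]   # nonzero by assumption
L_VALUES = range(-4, 5)

ident = lambda x: x
parity = lambda x: x % 2
sign = lambda x: (x > 0) - (x < 0)

def block(abstraction, values):
    # The equivalence class of a value under an abstraction.
    return lambda v: frozenset(u for u in values
                               if abstraction(u) == abstraction(v))

def ani(P, eta, phi, rho):
    # (eta)P(phi => rho): eta(l1) = eta(l2) ==>
    #   rho(JPK(phi(h1), eta(l1))_L) = rho(JPK(phi(h2), eta(l2))_L)
    phi_blk, eta_blk = block(phi, H_VALUES), block(eta, L_VALUES)
    def out(h, l):
        # rho applied to the collecting output on abstract inputs.
        return frozenset(rho(P(hh, ll))
                         for hh in phi_blk(h) for ll in eta_blk(l))
    return all(out(h1, l1) == out(h2, l2)
               for l1 in L_VALUES for l2 in L_VALUES
               if eta(l1) == eta(l2)
               for h1 in H_VALUES for h2 in H_VALUES)

# Example III: P = l := l * h^2.
P3 = lambda h, l: l * h * h
print(ani(P3, ident, ident, parity))  # False: Par(h) flows to Par(l)
print(ani(P3, ident, sign, parity))   # True: knowing only Sign(h),
                                      # Par(l) is "I don't know"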

Deriving output attackers

Abstract interpretation provides advanced methods for designing abstractions (refinement, simplification, compression, ...)

Designing abstractions = designing attackers

Characterize the most concrete ρ such that (η)P(φ ⇒ ρ)

[The most powerful public observer]
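For the narrow variant this derivation can be carried out by enumeration on finite ranges: any two outputs the program can produce from η-equivalent public inputs must be identified, and the transitive closure of this relation is the most concrete safe observer. A minimal sketch (mine; the talk's characterization is stated for full ANI):

# Deriving the most concrete output observer for [eta]P(rho)
# (illustration only). Union-find over the reachable outputs.

H_VALUES = range(0, 4)
L_VALUES = range(0, 4)

def most_concrete_observer(P, eta):
    outputs = {P(h, l) for h in H_VALUES for l in L_VALUES}
    parent = {o: o for o in outputs}

    def find(o):
        while parent[o] != o:
            o = parent[o]
        return o

    def union(a, b):
        parent[find(a)] = find(b)

    # Identify outputs produced from eta-equivalent public inputs.
    for l1 in L_VALUES:
        for l2 in L_VALUES:
            if eta(l1) == eta(l2):
                for h1 in H_VALUES:
                    for h2 in H_VALUES:
                        union(P(h1, l1), P(h2, l2))
    classes = {}
    for o in outputs:
        classes.setdefault(find(o), set()).add(o)
    return sorted(map(sorted, classes.values()))

# For Example I's behavior l + 2h with eta = id, the derived
# observer is exactly parity: it may observe even/odd, nothing more.
print(most_concrete_observer(lambda h, l: l + 2 * h, lambda l: l))
# [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]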

Deriving canonical attackers

Abstract interpretation provides advanced methods for designing abstractions (refinement, simplification, compression, ...)

Transforming abstractions = transforming attackers

Characterize the most concrete δ such that (δ)P(φ ⇒ δ)

[The most powerful canonical public observer]

⇒ This would provide a certificate for security.

Abstract declassification

Consider a program P and its finite computations.

"A passive attacker may be able to learn some information by observing the system but, by assumption, that information leakage is allowed by the security policy." [Zdancewic and Myers 2001]

We want to characterize the most abstract private observable property φ such that (η)P(φ ⇒ ρ)

[The maximal amount of information disclosed]

⇒ This would provide a certificate for disclosed secrets.
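A simple finite reading of the disclosed property (mine, for the narrow variant, not the talk's construction): two private values are indistinguishable when they yield ρ-equal outputs on every η-class of public inputs, and the induced partition of H is the maximal information an observer can learn.

# Deriving the private property actually disclosed
# (illustration only).

H_VALUES = range(-3, 4)
L_VALUES = range(1, 4)

def disclosed(P, eta, rho):
    def view(h):
        # What the observer sees of h: the rho-abstracted outputs,
        # indexed by the eta-class of the public input.
        return tuple(frozenset(rho(P(h, l))
                               for l in L_VALUES if eta(l) == c)
                     for c in sorted({eta(l) for l in L_VALUES}))
    classes = {}
    for h in H_VALUES:
        classes.setdefault(view(h), set()).add(h)
    return sorted(map(sorted, classes.values()))

ident = lambda x: x
# Example III: P = l := l * h^2 with an identity observer disclo-
# ses h^2 (equivalently |h|) of the secret, and nothing more:
print(disclosed(lambda h, l: l * h * h, ident, ident))
# [[-3, 3], [-2, 2], [-1, 1], [0]]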

Observer vs Observable

Consider |= (η)P(φ ⇒ ρ): In order to keep non-interference...

[Figure: two lattices, uco(℘(V_L)) for the observer ρ AND uco(℘(V_H)) for the observable φ, each ordered from more concrete to more abstract.]

ANI: A completeness problem

Recall that [Joshi & Leino'00]:

P is secure iff HH ; P ; HH ≐ P ; HH

(where HH sets the private variable h to an arbitrary value)

Let X = 〈X_H, X_L〉 ⇒ H(X) def= 〈⊤_H, X_L〉 ∈ uco(℘(V))

HH ; P ; HH ≐ P ; HH
⇓
H ◦ JPK ◦ H = H ◦ JPK

⇒ A COMPLETENESS PROBLEM

COMPLETENESS = NON-INTERFERENCE
⇓
Transform H vs Core;
Transform H vs Shell. [Giacobazzi et al.'00]
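This characterization is directly executable on finite state spaces. A minimal sketch (mine): states are (h, l) pairs, H forgets the private component of a set of states, and completeness of H for the collecting semantics of P is checked on all sets of states; the value ranges are assumptions of the sketch.

# Joshi-Leino security as a completeness check (illustration only).
from itertools import chain, combinations

H_VALUES = range(0, 3)
L_VALUES = range(0, 3)
STATES = [(h, l) for h in H_VALUES for l in L_VALUES]

def H(X):
    # H(<X_H, X_L>) = <top_H, X_L>: forget the private part.
    return frozenset((h, l) for h in H_VALUES for (_, l) in X)

def collect(p):
    # The collecting semantics of a state transformer p.
    return lambda X: frozenset(p(h, l) for (h, l) in X)

def secure(p):
    # Completeness H o JPK o H = H o JPK on every set of states.
    P = collect(p)
    subsets = chain.from_iterable(combinations(STATES, r)
                                  for r in range(len(STATES) + 1))
    return all(H(P(H(frozenset(X)))) == H(P(frozenset(X)))
               for X in subsets)

print(secure(lambda h, l: (h, l + 1)))  # True: public effect ignores h
print(secure(lambda h, l: (h, l + h)))  # False: h interferes with l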

Completeness shells and cores

[Giacobazzi et al.'00]

[Figure: when completeness does not hold for a domain A, the shell of A refines it (adding elements) and the core of A simplifies it (removing elements) until completeness holds.]

R_f def= λρ. M(⋃_{y∈ρ} max(f⁻¹(↓y)))

Absolute shell of ρ: R_f(ρ) = gfp_{⊑ρ} λϕ. ρ ⊓ R_f(ϕ);
Relative shell of η relative to ρ: R^ρ_f(η) = η ⊓ R_f(ρ).

C_f def= λρ. {y ∈ C | max(f⁻¹(↓y)) ⊆ ρ}

Absolute core of ρ: C_f(ρ) = lfp_{⊒ρ} λϕ. ρ ⊔ C_f(ϕ);
Relative core of ρ relative to η: C^η_f(ρ) = ρ ⊔ C_f(η).
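The one-step operators can be run on a tiny lattice. A minimal sketch (mine, not the authors' code): C = ℘({−1, 0, 1}), closures represented by their sets of fixpoints (Moore families), f the lifted negation; the example domain and stopping after one step, which here already reaches the fixpoint, are assumptions of the sketch.

# One step of the shell and core operators on a small finite
# lattice (illustration only).
from itertools import combinations

U = frozenset({-1, 0, 1})
C = [frozenset(s) for r in range(len(U) + 1)
     for s in combinations(sorted(U), r)]

def moore(family):
    # M(S): close under intersections and add the top element.
    fam = set(family) | {U}
    while True:
        new = {a & b for a in fam for b in fam} - fam
        if not new:
            return frozenset(fam)
        fam |= new

def f(X):
    # The concrete function: lifted negation (closed on U).
    return frozenset(-x for x in X)

def max_inv(y):
    # max(f^{-1}(down y)): maximal pre-images of elements below y.
    pre = [x for x in C if f(x) <= y]
    return {x for x in pre if not any(x < z for z in pre)}

def shell_step(rho):
    # R_f(rho) = M( union over y in rho of max(f^{-1}(down y)) ).
    return moore(set().union(*(max_inv(y) for y in rho)))

def core_step(rho):
    # C_f(rho) = { y in C | max(f^{-1}(down y)) subset of rho }.
    return frozenset(y for y in C if max_inv(y) <= set(rho))

# The domain {emptyset, {1}, U} ("is the value 1?") is incomplete
# for negation: the shell refines it by adding {-1}, the core
# simplifies it to the trivial domain {emptyset, U}.
rho = {frozenset(), frozenset({1}), U}
show = lambda fam: sorted(map(sorted, fam))
print(show(moore(set(rho) | shell_step(rho))))  # adds [-1]
print(show(set(rho) & core_step(rho)))          # [[], [-1, 0, 1]]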

ANI as completeness

Let ρ ∈ uco(℘(V_L)) ⇒ H_ρ(X) def= 〈⊤_H, ρ(X_L)〉 ∈ uco(℘(V))

Narrow abstract non-interference: H_ρ ◦ JPK ◦ H_η = H_ρ ◦ JPK;

Abstract non-interference: H_ρ ◦ JPK^{η,φ} ◦ H_η = H_ρ ◦ JPK^{η,φ}

⇓

PUBLIC OBSERVER AS COMPLETENESS CORE: C^{H_η}_{JPK^{η,φ}}(H) = (η)JPK(φ ⇒ id)

PRIVATE OBSERVABLE AS COMPLETENESS SHELL: (η)P(R^{H_ρ}_{JPK^{η,id}}(H_η) ⇒ ρ)

ADJOINING ATTACKERS AND DECLASSIFICATION

id ⊏ (η)JPK(id ⇒ id) ⇔ P(⊓_{L∈η} M(Π_P(η, id)|_L)) ⊏ ⊤

[Figure: the adjunction between the most concrete observer and the most abstract observable, relating "Secure" and "Declassification", with id at one extreme and ⊤ at the other.]

A discussion

[Mind-map: Abstract Non-Interference at the centre, connected to:]

Certification of the secrecy degree of programs

Design of a proof system, via completeness

Completeness core → the most concrete public observer

Completeness shell → the most abstract private observable

Adjunction between the two

Timed Abstract Non-Interference (timed semantics)

Generalized Abstract Non-Interference (from denotational to trace semantics)

A discussion: Future works

[The same mind-map, extended with future directions:]

Probabilistic Abstract Non-Interference

Abstract slicing

Active attackers

Non-interference in bio-systems

Code obfuscation (transform semantics, transform protocols)

Proof Carrying Code