Transcript of: Neyman-Pearson Criterion (NPC): A Model Selection Criterion for Asymmetric Binary Classification
Jingyi Jessica Li, ENAR 2019 (slides: jsb.ucla.edu/sites/default/files/031919_NPC_ENAR2019.pdf)

Page 1:

Neyman-Pearson Criterion (NPC): A Model Selection

Criterion for Asymmetric Binary Classification

Jingyi Jessica Li

Department of Statistics

University of California, Los Angeles

http://jsb.ucla.edu

Page 2:

Model selection for binary classification: a motivating example

Automated disease diagnosis is a binary classification problem

• Features/predictors: ∼ 20K human gene expression levels

• Response: binary disease status: 0 (diseased) and 1 (healthy)

Given a classification method (e.g., logistic regression),

• What subset of genes exhibits the highest diagnostic power?

A model selection problem



Page 4:

Two binary classification paradigms: classical vs. Neyman-Pearson

Paradigm         Oracle classifier                   Practical classifier
Classical        φ∗ = arg min_φ R(φ)                 φ̂ = arg min_φ R̂(φ)
Neyman-Pearson   φ∗α = arg min_{R0(φ)≤α} R1(φ)       φ̂α by the NP umbrella algorithm
                 [Rigollet and Tong (2011)]          [Tong, Feng and Li (2018)]

where α is a user-specified upper bound on the type I error

The Neyman-Pearson paradigm is suitable for disease diagnosis

• The two classes have asymmetric importance

• Mispredicting a normal tissue sample as malignant

⇒ patient anxiety & additional medical costs — type II error R1(φ)

• Mispredicting a tumor sample as normal

⇒ loss of life — type I error R0(φ)

Policy makers often like to enforce a pre-specified threshold α on R0(φ)



Page 7:

Two binary classification paradigms: classical vs. Neyman-Pearson

Paradigm         Oracle classifier                   Practical classifier
Classical        φ∗ = arg min_φ R(φ)                 φ̂ = arg min_φ R̂(φ)
Neyman-Pearson   φ∗α = arg min_{R0(φ)≤α} R1(φ)       φ̂α by the NP umbrella algorithm
                 [Rigollet and Tong (2011)]          [Tong, Feng and Li (2018)]

where α is a user-specified upper bound on the type I error

The Neyman-Pearson paradigm is suitable for disease diagnosis

• The two classes have imbalanced sample sizes

R(φ) = IP(Y = 0)R0(φ) + IP(Y = 1)R1(φ)

When IP(Y = 0) = IP(diseased) ≪ IP(Y = 1) = IP(healthy),

• The classical oracle classifier may have an excessively large R0(φ)

• The NP oracle classifier will have R0(φ) ≤ α

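To make the imbalance point concrete, here is a minimal one-dimensional sketch with toy numbers of my own (N(0, 1) vs. N(2, 1), IP(Y = 0) = 0.05, α = 0.05; these are not values from the talk): the classical risk minimizer tolerates a large R0, while the NP threshold holds R0 at α.

```python
from statistics import NormalDist

# Toy class-conditionals (illustrative numbers, not from the slides):
# X | (Y=0) ~ N(0, 1) (diseased), X | (Y=1) ~ N(2, 1) (healthy); predict 1 iff X > t.
f0, f1 = NormalDist(0, 1), NormalDist(2, 1)
p0, p1 = 0.05, 0.95                      # IP(Y=0) << IP(Y=1)

def errors(t):
    r0 = 1 - f0.cdf(t)                   # type I error  R0(t)
    r1 = f1.cdf(t)                       # type II error R1(t)
    return r0, r1, p0 * r0 + p1 * r1     # classical risk R = p0*R0 + p1*R1

# Classical oracle: threshold minimizing the classical risk (coarse grid search).
t_bayes = min((i / 100 for i in range(-400, 600)), key=lambda t: errors(t)[2])
# NP oracle at level alpha: smallest type II error subject to R0 <= alpha.
alpha = 0.05
t_np = f0.inv_cdf(1 - alpha)

r0_b, r1_b, risk_b = errors(t_bayes)
r0_np, r1_np, risk_np = errors(t_np)
print(f"classical: R0 = {r0_b:.3f} (far above alpha)")
print(f"NP:        R0 = {r0_np:.3f} = alpha, R1 = {r1_np:.3f}")
```

Because class 0 carries only 5% of the risk weight, the Bayes threshold sits far to the left and misclassifies most class 0 points; the NP threshold gives up some type II error to cap R0.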


Page 9:

Paper and software of the Neyman-Pearson umbrella algorithm

R package nproc

https://CRAN.R-project.org/package=nproc

Email: [email protected]

Page 10:

Model selection under the Neyman-Pearson paradigm (population)

• Goal: Develop a model selection criterion to compare models (i.e., feature

subsets) under the Neyman-Pearson (NP) paradigm

• Idea: prediction-based model selection

• Compare two feature subsets based on the type II errors of their

corresponding NP oracle classifiers

in contrast to

• Compare two feature subsets based on the risks of their corresponding

classical oracle classifiers

• Question: Will the model selection results be different under the two

paradigms?



Page 13:

Prediction-based model selection under the two paradigms (population)

A linear discriminant analysis (LDA) example

Two features X{1}, X{2} ∈ IR with the following class-conditional distributions:

X{1} | (Y = 0) ∼ N(−5, 2²),  X{1} | (Y = 1) ∼ N(0, 2²),
X{2} | (Y = 0) ∼ N(−5, 2²),  X{2} | (Y = 1) ∼ N(1.5, 3.5²).

We would like to select the better of the two features

• Classical oracle classifiers: R(φ∗{1}) = 0.106 < R(φ∗{2}) = 0.113

  So X{1} is the better feature

• Neyman-Pearson (NP) oracle classifiers: compare R1(φ∗α{1}) vs. R1(φ∗α{2})

  • α = 0.01: R1(φ∗α{1}) = 0.431 > R1(φ∗α{2}) = 0.299 =⇒ X{2} better

  • α = 0.20: R1(φ∗α{1}) = 0.049 < R1(φ∗α{2}) = 0.084 =⇒ X{1} better

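The four numbers above can be reproduced from normal CDFs, assuming each NP oracle acts as an upper-tail threshold on its feature with P(X > t | Y = 0) = α (an assumption that is consistent with the reported values):

```python
from statistics import NormalDist

def np_type2(alpha, mu0, s0, mu1, s1):
    """Type II error of the upper-tail NP classifier at type-I level alpha."""
    t = NormalDist(mu0, s0).inv_cdf(1 - alpha)  # threshold: P(X > t | Y=0) = alpha
    return NormalDist(mu1, s1).cdf(t)           # R1 = P(X <= t | Y=1)

for alpha in (0.01, 0.20):
    r1_x1 = np_type2(alpha, -5, 2, 0.0, 2.0)    # feature X{1}
    r1_x2 = np_type2(alpha, -5, 2, 1.5, 3.5)    # feature X{2}
    print(f"alpha={alpha}: R1(X1)={r1_x1:.3f}, R1(X2)={r1_x2:.3f}")
# alpha=0.01 -> R1(X1)=0.431 > R1(X2)=0.299 (X{2} better)
# alpha=0.20 -> R1(X1)=0.049 < R1(X2)=0.084 (X{1} better)
```

The flip happens because X{2} has a larger class-1 variance: its extra spread hurts little when the threshold is far out in the class-0 tail (small α) but hurts more as the threshold moves inward (larger α).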


Page 15:

Prediction-based model selection under the two paradigms (population)

Special scenarios where prediction-based model selection is the same under the two paradigms

Lemma 1
Suppose

X{1} | (Y = 0) ∼ N(μ⁰₁, (σ⁰₁)²),  X{1} | (Y = 1) ∼ N(μ¹₁, (σ¹₁)²),
X{2} | (Y = 0) ∼ N(μ⁰₂, (σ⁰₂)²),  X{2} | (Y = 1) ∼ N(μ¹₂, (σ¹₂)²).

Let α ∈ (0, 1), let φ∗α{1} and φ∗α{2} be the NP oracle classifiers based on X{1} and X{2} ∈ IR respectively, and let φ∗{1} and φ∗{2} be the corresponding classical oracle classifiers. If

σ⁰₂ / σ¹₂ = σ⁰₁ / σ¹₁,

then

sign(R1(φ∗α{1}) − R1(φ∗α{2})) = sign(R(φ∗{1}) − R(φ∗{2})) for all α.

The reverse statement also holds.

Page 16:

Prediction-based model selection under the two paradigms (population)

Special scenarios where prediction-based model selection is the same under the two paradigms

Lemma 2
Let A1, A2 ⊆ {1, . . . , d} be two index sets. For a random vector X ∈ IR^d, let XA1 and XA2 be the sub-vectors of X comprising the coordinates with indexes in A1 and A2 respectively, and assume they follow the class-conditional distributions:

XA1 | (Y = 0) ∼ N(μ⁰₁, Σ₁),  XA1 | (Y = 1) ∼ N(μ¹₁, Σ₁),
XA2 | (Y = 0) ∼ N(μ⁰₂, Σ₂),  XA2 | (Y = 1) ∼ N(μ¹₂, Σ₂),

where μⁱⱼ ∈ IR^ℓ denotes the mean vector of feature set Aj in class i, and Σⱼ ∈ IR^{ℓ×ℓ} denotes the variance-covariance matrix of feature set Aj, j = 1, 2, i = 0, 1.

Then the selected feature set A∗α = A1 or A2 under the NP paradigm is invariant to α and is consistent with the selected feature set A∗ under the classical paradigm.

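Lemma 2's invariance can be spot-checked in the scalar case (ℓ = 1, with toy parameters of my own): when each feature has a common within-class standard deviation, R1(φ∗α) is Φ(z_{1−α} − SNR) with SNR = (μ¹ − μ⁰)/σ, so the ordering of two features is determined by their SNRs alone and never changes with α.

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal

def np_type2(alpha, mu0, mu1, sigma):
    # NP oracle for N(mu0, sigma^2) vs N(mu1, sigma^2) with mu1 > mu0:
    # R1 = Phi(z_{1-alpha} - (mu1 - mu0)/sigma), a function of the SNR only.
    return Z.cdf(Z.inv_cdf(1 - alpha) - (mu1 - mu0) / sigma)

# Two hypothetical scalar feature sets with different SNRs:
feat_a = (0.0, 2.0, 1.0)   # SNR 2.0
feat_b = (0.0, 3.0, 2.0)   # SNR 1.5
winners = set()
for i in range(1, 100):
    alpha = i / 100
    winners.add("A" if np_type2(alpha, *feat_a) < np_type2(alpha, *feat_b) else "B")
print(winners)  # prints {'A'}: the same feature wins at every alpha
```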

Page 17:

Model selection under the Neyman-Pearson paradigm (sample)

• Goal: Develop a practical prediction-based model selection criterion under

the NP paradigm

• Idea: Compare two feature subsets based on the estimated type II errors

of their corresponding NP classifiers (constructed by our NP umbrella

algorithm)

• Question: How to construct a “good” estimator of the type II error of an

NP classifier?

Leave out some class 1 data!



Page 21:

The model selection criterion: NPC (Neyman-Pearson Criterion)

Statistical formulation

Given α, δ ∈ [0, 1], a classification method, and a feature subset

A ⊆ {1, . . . , p}, a practical NP classifier φ̂αA is constructed by the NP

umbrella algorithm. Then the NPC for model A at level α is defined as

NPCαA := R̂1(φ̂αA)

where R̂1(φ) is the estimated type II error of a classifier φ on left-out class 1 data

• Sample splitting: split the training data into three parts

  • mixed class 0 and class 1 sample  }
  • left-out class 0 sample           } NP umbrella algorithm =⇒ φ̂αA

  • left-out class 1 sample: evaluate φ̂αA =⇒ NPCαA

• Multiple random splits can be used to construct an ensemble estimator

with a smaller variance

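A sketch of how NPCαA could be computed from the splits. The order-statistic threshold follows the violation-probability rule of the NP umbrella algorithm as described above; the scoring function (here the identity on a 1-D feature) and the simulated left-out samples are placeholders standing in for a classifier fitted on the mixed sample.

```python
import math
import random

def umbrella_k(n, alpha, delta):
    """Smallest k such that the k-th order-statistic threshold on n left-out
    class 0 scores has P(type I error > alpha) <= delta."""
    for k in range(1, n + 1):
        viol = sum(math.comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
                   for j in range(k, n + 1))
        if viol <= delta:
            return k
    return None  # left-out class 0 sample too small for this (alpha, delta)

def npc(score, class0_out, class1_out, alpha, delta):
    """NPC: estimated type II error of the NP classifier on left-out class 1 data."""
    s0 = sorted(score(x) for x in class0_out)
    k = umbrella_k(len(s0), alpha, delta)
    t = s0[k - 1]                         # threshold; predict class 1 iff score > t
    return sum(score(x) <= t for x in class1_out) / len(class1_out)

# Toy run with simulated left-out samples (placeholder data):
random.seed(0)
c0 = [random.gauss(0, 1) for _ in range(200)]   # left-out class 0 sample
c1 = [random.gauss(3, 1) for _ in range(200)]   # left-out class 1 sample
print(npc(lambda x: x, c0, c1, alpha=0.05, delta=0.05))
```

Averaging this estimate over multiple random splits gives the ensemble version mentioned above.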

Page 22:

Theoretical properties of NPC

Concentration of R̂1(φ̂αA) at R1(φ∗αA)

Given reasonable conditions on

(1) the two class conditional distributions of X | (Y = 0) and

X | (Y = 1)

(2) the scoring function of the given classification method

(3) the two class sample sizes

we can show that, with probability at least 1 − δ′,

|R̂1(φ̂αA) − R1(φ∗αA)| ≤ C(δ′)

where

• R1(φ∗αA) is the population type II error of the method-specific oracle classifier φ∗αA (the classifier that shares the same scoring function as the 'best' classical classifier constrained by the classification method)

• The deviation upper bound C(δ′) increases as δ′ decreases; also, C(δ′) converges to zero as the sample sizes go to infinity

Page 23:

Real data application: DNA methylation features for cancer diagnosis

A glance at data (Fleischer et al., 2014 Genome Biology):

• 46 normal tissues (class 1) vs. 239 breast cancer tissues (class 0)

• Methylation levels are measured at 468,424 genome locations in every tissue

• After preprocessing and normalization, d = 19,363 ≫ n = 285

Page 24:

Real data application: DNA methylation features for cancer diagnosis

Use L1 penalized logistic regression to generate a solution path

=⇒ 41 nested feature sets

Apply model selection criteria:

[Figure: criterion value vs. feature subset index (0–40) under four criteria: AIC, BIC, the classical criterion, and NPC; the NPC panel marks + ZNF646]

Remove: ZNF646 . . .

Page 25:

Real data application: DNA methylation features for cancer diagnosis

[Figure: the same four criterion panels (AIC, BIC, classical criterion, NPC) recomputed after removing ZNF646; the NPC panel marks + ERAP1]

Remove: ZNF646, ERAP1 . . .

Page 26:

Real data application: DNA methylation features for cancer diagnosis

[Figure: the same four criterion panels recomputed after removing ZNF646 and ERAP1; the NPC panel marks + LOC121952 (METTL21EP)]

Remove: ZNF646, ERAP1, LOC121952 . . .

Page 27:

Real data application: DNA methylation features for cancer diagnosis

[Figure: the same four criterion panels recomputed after removing ZNF646, ERAP1, LOC121952 (METTL21EP); the NPC panel marks + GEMIN4]

Remove: ZNF646, ERAP1, LOC121952, GEMIN4 . . .

Page 28:

Real data application: DNA methylation features for cancer diagnosis

[Figure: the same four criterion panels recomputed after removing ZNF646, ERAP1, LOC121952 (METTL21EP), GEMIN4; the NPC panel marks + BATF]

Remove: ZNF646, ERAP1, LOC121952, GEMIN4, BATF . . .

Page 29:

Real data application: DNA methylation features for cancer diagnosis

[Figure: the same four criterion panels recomputed after removing ZNF646, ERAP1, LOC121952 (METTL21EP), GEMIN4, BATF; the NPC panel marks + MIR21]

Remove: ZNF646, ERAP1, LOC121952, GEMIN4, BATF, MIR21 . . .

Page 30:

Real data application: DNA methylation features for cancer diagnosis

[Figure: the four criterion panels (AIC, BIC, classical criterion, NPC) recomputed after removing ZNF646, ERAP1, LOC121952 (METTL21EP), GEMIN4, BATF, MIR21]

No obvious rise in NPC

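The backward-removal reading of these panels can be mimicked with a toy scan (my own illustration of the "rise in NPC" logic, with a hypothetical `flag_rises` helper and made-up criterion values; not code from the talk):

```python
def flag_rises(npc_values, added_features, jump=0.05):
    """Flag the feature added at step i whenever NPC rises by more than `jump`.

    `npc_values[i]` is the criterion value of the i-th nested subset;
    `added_features[i]` is the feature that subset i adds over subset i-1.
    Both inputs are hypothetical placeholders.
    """
    return [added_features[i]
            for i in range(1, len(npc_values))
            if npc_values[i] - npc_values[i - 1] > jump]

# Toy criterion path: a steady decrease, then a spike when a harmful feature enters.
path = [0.40, 0.20, 0.12, 0.30, 0.10, 0.09]
genes = ["g1", "g2", "g3", "g4", "g5", "g6"]
print(flag_rises(path, genes))  # ['g4']
```

When no step produces an obvious rise, as on this final slide, the scan returns nothing and the removal stops.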

Page 31:

Real data application: DNA methylation features for cancer diagnosis

Among the 41 genes,

             protein-coding   microRNA   pseudogene
  remained   32 (20)          3          0
  removed    4 (2)            1          1

(In parentheses: the number of genes whose proteins are expressed in breast cancer tissues according to the Human Protein Atlas database)

Observations:

• 9 out of 32 genes do not yet have available protein expression data in

breast cancer tissues in the Human Protein Atlas database

• A specificity higher than 90% is achievable with only three gene markers:

HMGB2, MIR195 and SPARCL1. Inclusion of SPARCL1 increases

specificity from ∼ 70% to over 90%


Page 32:

Conclusions

• A new model selection criterion, NPC, tailored for asymmetric binary classification under the NP paradigm

• NPC allows users to select the model that achieves the best specificity among candidate models while maintaining a high sensitivity

• Useful in disease diagnosis and in other applications (network security control, loan screening, and prediction of regional conflicts)

• Flexible in the choice of classification method and in the way NP classifiers are constructed


Page 33:

Acknowledgements

Yiling Chen

(UCLA)

Dr. Xin Tong

(USC)
