Transcript of slides: De-biasing arbitrary convex regularizers and asymptotic normality
Pierre C Bellec, Rutgers University
Mathematical Methods of Modern Statistics 2, June 2020

Page 1

De-biasing arbitrary convex regularizers and asymptotic normality

Pierre C Bellec, Rutgers University

Mathematical Methods of Modern Statistics 2, June 2020

Page 2

Joint work with Cun-Hui Zhang (Rutgers).

- Second order Poincaré inequalities and de-biasing arbitrary convex regularizers. arXiv:1912.11943.
- De-biasing the Lasso with degrees-of-freedom adjustment. arXiv:1902.08885.

Page 3

High-dimensional statistics

- n data points (x_i, Y_i), i = 1, ..., n
- p covariates, x_i ∈ R^p

Regimes: p ≥ n,  p ≥ cn,  p ≥ n^α.

For instance, the linear model Y_i = x_i^T β + ε_i for unknown β.

Page 4

M-estimators and regularization

β̂ = argmin_{b ∈ R^p} { (1/n) Σ_{i=1}^n ℓ(x_i^T b, Y_i) + regularizer(b) }

for some loss ℓ(·,·) and regularization penalty.

Typically in the linear model, with the least-squares loss,

β̂ = argmin_{b ∈ R^p} { ‖y − Xb‖²/(2n) + g(b) }   with g convex.

Examples:
- Lasso, Elastic-Net
- Bridge: g(b) = Σ_{j=1}^p |b_j|^c
- Group-Lasso
- Nuclear norm penalty
- Sorted L1 penalty (SLOPE)
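An estimator of this form can be computed with a short proximal gradient loop; the sketch below implements the Lasso case, g(b) = λ‖b‖₁, in plain NumPy on synthetic data (the dimensions and the tuning parameter are illustrative, not taken from the talk):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso(X, y, lam, n_iter=1000):
    # Proximal gradient (ISTA) for min_b ||y - Xb||^2 / (2n) + lam * ||b||_1
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L with L = ||X||_op^2 / n
    b = np.zeros(p)
    for _ in range(n_iter):
        b = soft_threshold(b - step * X.T @ (X @ b - y) / n, step * lam)
    return b

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.r_[np.ones(5), np.zeros(p - 5)]      # 5-sparse true coefficient vector
y = X @ beta + 0.5 * rng.standard_normal(n)
beta_hat = lasso(X, y, lam=0.2)
```

Swapping `soft_threshold` for the proximal operator of another convex g (group soft-thresholding, singular-value thresholding, ...) gives the other estimators in the list.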

Page 5

Different goals, different scales

β̂ = argmin_{b ∈ R^p} { ‖y − Xb‖²/(2n) + g(b) },   g convex

1. Design of the regularizer g, with intuition about complexity and structure:
   - convex relaxation of an unknown structure (sparsity, low rank)
   - ℓ1 balls are spiky at sparse vectors
2. Upper and lower bounds on the risk of β̂:
   c r_n ≤ ‖β̂ − β‖² ≤ C r_n.
3. Characterization of the risk,
   ‖β̂ − β‖² = r_n (1 + o_P(1)),
   under some asymptotics, e.g., p/n → γ or s log(p/s)/n → 0.
4. Asymptotic distribution in a fixed direction a_0 ∈ R^p (resp. a_0 = e_j), and confidence interval for a_0^T β (resp. β_j):
   √n a_0^T (β̂ − β) →? N(0, V_0),   √n (β̂_j − β_j) →? N(0, V_j).

Page 6

Focus of today: Confidence interval in the linear model

based on convex regularized estimators of the form

β̂ = argmin_{b ∈ R^p} { ‖y − Xb‖²/(2n) + g(b) },   g convex

√n (b_j − β_j) ⇒ N(0, V_j),   β_j the unknown parameter of interest.

Page 7

Confidence interval in the linear model

Design X with iid N(0, Σ) rows, known Σ; noise ε ∼ N(0, σ²I_n);

y = Xβ + ε, and a given initial estimator β̂.

Goal: inference for θ = a_0^T β, the projection in direction a_0.

Examples:
- a_0 = e_j: inference on the j-th coefficient β_j
- a_0 = x_new, where x_new contains the characteristics of a new patient: inference for x_new^T β.

Page 8

De-biasing, confidence intervals for the Lasso

Page 9

Confidence interval in the linear model

Design X with iid N(0, Σ) rows, known Σ; noise ε ∼ N(0, σ²I_n);

y = Xβ + ε, and a given initial estimator β̂.

Goal: inference for θ = a_0^T β, the projection in direction a_0.

Examples:
- a_0 = e_j: inference on the j-th coefficient β_j
- a_0 = x_new, where x_new contains the characteristics of a new patient: inference for x_new^T β.

De-biasing: construct an unbiased estimate in the direction a_0,
i.e., find a correction such that [a_0^T β̂ − correction] is an unbiased estimator of a_0^T β.
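With Gaussian design and known Σ, the correction used later in the talk takes the explicit form a_0^T β̂ + z_0^T X^T (y − Xβ̂)/(n − df) with z_0 = Σ^{-1} a_0. A minimal NumPy sketch (the function name and signature are illustrative):

```python
import numpy as np

def debiased(a0, beta_hat, X, y, Sigma, df):
    # a0^T beta_hat + z0^T X^T (y - X beta_hat) / (n - df),  z0 = Sigma^{-1} a0
    n = X.shape[0]
    z0 = np.linalg.solve(Sigma, a0)
    return a0 @ beta_hat + z0 @ (X.T @ (y - X @ beta_hat)) / (n - df)
```

Sanity check: for unpenalized least squares (g = 0, df = p), the residual y − Xβ̂ is orthogonal to the columns of X, so the correction vanishes and the de-biased estimate coincides with the OLS coordinate.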

Page 10

Existing results

Lasso:
- Zhang and Zhang (2014) (s log(p/s)/n → 0)
- Javanmard and Montanari (2014a; 2014b; 2018) (s log(p/s)/n → 0)
- Van de Geer et al. (2014) (s log(p/s)/n → 0)
- Bayati and Montanari (2012); Miolane and Montanari (2018) (p/n → γ)

Beyond the Lasso?
- Robust M-estimators: El Karoui et al. (2013); Lei, Bickel, and El Karoui (2018); Donoho and Montanari (2016) (p/n → γ)
- Celentano and Montanari (2019): symmetric convex penalty and (Σ = I_p, p/n → γ), using Approximate Message Passing ideas from statistical physics
- Logistic regression: Sur and Candès (2018) (Σ = I_p, p/n → γ)

Page 11

Focus today: General theory for confidence intervals

based on any convex regularized estimator of the form

β̂ = argmin_{b ∈ R^p} { ‖y − Xb‖²/(2n) + g(b) },   g convex.

Little or no constraint on the convex regularizer g.

Page 12

Degrees-of-freedom of the estimator

β̂ = argmin_{b ∈ R^p} { ‖y − Xb‖²/(2n) + g(b) }

- For fixed X, the map y ↦ Xβ̂ is 1-Lipschitz.
- Hence the Jacobian of y ↦ Xβ̂ exists almost everywhere (Rademacher's theorem).

df = trace ∇(y ↦ Xβ̂),   i.e.   df = trace[ X ∂β̂(X, y)/∂y ],

used for instance in Stein's Unbiased Risk Estimate (SURE).

The Jacobian matrix H is also useful; H is always symmetric¹:

H = X ∂β̂(X, y)/∂y ∈ R^{n×n}.

¹ P.C.B. and C.-H. Zhang (2019). Second order Poincaré inequalities and de-biasing arbitrary convex regularizers when p/n → γ.

Page 13

Isotropic design, any g, p/n → γ (B. and Zhang, 2019)

Assumptions:
- sequence of linear regression problems y = Xβ + ε,
- n, p → +∞ with p/n → γ ∈ (0, ∞),
- g: R^p → R a coercive convex penalty, strongly convex if γ ≥ 1,
- rows of X iid N(0, I_p),
- noise ε ∼ N(0, σ²I_n) independent of X.

Page 14

Isotropic design, any penalty g, p/n → γ

Theorem (B. and Zhang, 2019)

β̂ = argmin_{b ∈ R^p} { ‖y − Xb‖²/(2n) + g(b) }

- β_j = ⟨e_j, β⟩ the parameter of interest,
- H = X (∂/∂y) β̂,   df = trace H,
- V(β_j) = ‖y − Xβ̂‖² + trace[(H − I_n)²] (β̂_j − β_j)².

Then there exists a subset J_p ⊂ [p] of size at least p − log log p such that

sup_{j ∈ J_p} | P( [(n − df)(β̂_j − β_j) + e_j^T X^T (y − Xβ̂)] / V(β_j)^{1/2} ≤ t ) − Φ(t) | → 0.

Page 15

Correlated design, any g, p/n → γ

Assumptions:
- sequence of linear regression problems y = Xβ + ε,
- n, p → +∞ with p/n → γ ∈ (0, ∞),
- g: R^p → R a coercive convex penalty, strongly convex if γ ≥ 1,
- rows of X iid N(0, Σ),
- noise ε ∼ N(0, σ²I_n) independent of X.

Page 16

Correlated design, any penalty g, p/n → γ

Theorem (B. and Zhang, 2019)

β̂ = argmin_{b ∈ R^p} { ‖y − Xb‖²/(2n) + g(b) }

- θ = ⟨a_0, β⟩ the parameter of interest,
- H = X (∂/∂y) β̂,   df = trace H,
- V(θ) = ‖y − Xβ̂‖² + trace[(H − I_n)²] (⟨a_0, β̂⟩ − θ)²,
- assume a_0^T Σ a_0 = 1 and set z_0 = Σ^{-1} a_0.

Then there exists a subset S ⊂ S^{p−1} with relative volume |S|/|S^{p−1}| ≥ 1 − 2e^{−p^{0.99}} such that

sup_{a_0 ∈ Σ^{-1/2} S} | P( [(n − df)(⟨β̂, a_0⟩ − θ) + z_0^T X^T (y − Xβ̂)] / V(θ)^{1/2} ≤ t ) − Φ(t) | → 0.

This applies to at least p − φ_cond(Σ) log log p indices j ∈ [p], where φ_cond(Σ) depends on the condition number of Σ.

Page 17

Resulting 0.95 confidence interval

CI = { θ ∈ R : |(n − df)(⟨β̂, a_0⟩ − θ) + z_0^T X^T (y − Xβ̂)| / V(θ)^{1/2} ≤ 1.96 }

Variance approximation
Typically V(θ) ≈ ‖y − Xβ̂‖², and the length of the interval is

2 · 1.96 ‖y − Xβ̂‖ / (n − df).

CI_approx = { θ ∈ R : |(n − df)(⟨β̂, a_0⟩ − θ) + z_0^T X^T (y − Xβ̂)| / ‖y − Xβ̂‖ ≤ 1.96 }.
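CI_approx above is an explicit interval. A minimal NumPy sketch (function name illustrative; it assumes a_0 is normalized so that a_0^T Σ a_0 = 1, as in the theorem):

```python
import numpy as np

def ci_approx(a0, beta_hat, X, y, Sigma, df):
    # Approximate 0.95 CI: center a0^T beta_hat + z0^T X^T r / (n - df),
    # half-width 1.96 * ||r|| / (n - df), with r = y - X beta_hat
    n = X.shape[0]
    z0 = np.linalg.solve(Sigma, a0)
    r = y - X @ beta_hat
    center = a0 @ beta_hat + z0 @ (X.T @ r) / (n - df)
    half = 1.96 * np.linalg.norm(r) / (n - df)
    return center - half, center + half
```

The interval length 2 · 1.96 ‖r‖ / (n − df) matches the display above.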

Page 18

Simulations using the approximation V(θ) ≈ ‖y − Xβ̂‖²

n = 750, p = 500, correlated Σ.
β is the vectorization of a row-sparse matrix of size 25 × 20.
a_0 is a direction that leads to large initial bias.
Estimators: 7 different penalty functions:
- Group Lasso with tuning parameters μ1, μ2
- Lasso with tuning parameters λ1, ..., λ4
- Nuclear norm penalty

Boxplots of the initial errors √n a_0^T (β̂ − β) (biased!).

Page 19

Simulations using the approximation V(θ) ≈ ‖y − Xβ̂‖²

n = 750, p = 500, correlated Σ.
β is the vectorization of a row-sparse matrix of size 25 × 20.
Estimators: 7 different penalty functions:
- Group Lasso with tuning parameters μ1, μ2
- Lasso with tuning parameters λ1, ..., λ4
- Nuclear norm penalty

Boxplots of √n [a_0^T (β̂ − β) + z_0^T X^T (y − Xβ̂)/(n − df)] (after bias correction).

Page 20

Before/after bias correction

Page 21

QQ-plots, Lasso, λ1, λ2, λ3, λ4.

For the Lasso, df = |{j = 1, ..., p : β̂_j ≠ 0}|, the number of nonzero coefficients.

Pivotal quantity when using ‖y − Xβ̂‖² instead of V(θ) for the variance.

- The visible discrepancy in the last plot is fixed when using V(θ) instead.

Page 22

QQ-plots, Group Lasso, µ1, µ2. An explicit formula for df is available.

Page 23

QQ-plot, Nuclear norm penalty

No explicit formula for df is available, although it is possible to compute numerical approximations.
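One way to approximate df numerically when no formula is known is to estimate the trace of y ↦ Xβ̂ by finite differences along random probe directions (a Hutchinson-type trace estimator). A sketch, validated here on ridge regression, where the exact df is available in closed form; `solver` stands in for any black-box M-estimator, e.g. a nuclear-norm-penalized one:

```python
import numpy as np

def numerical_df(solver, X, y, eps=1e-5, n_probe=200, seed=0):
    # Hutchinson estimate of df = trace d(X beta_hat)/dy via finite differences:
    # average of delta^T (f(y + eps*delta) - f(y)) / eps, f(y) = X @ solver(y)
    rng = np.random.default_rng(seed)
    f0 = X @ solver(y)
    acc = 0.0
    for _ in range(n_probe):
        delta = rng.standard_normal(len(y))
        acc += delta @ (X @ solver(y + eps * delta) - f0) / eps
    return acc / n_probe

rng = np.random.default_rng(1)
n, p, lam = 30, 10, 0.5
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
ridge = lambda y_: np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y_ / n)
df_exact = np.trace(X @ np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T / n))
df_mc = numerical_df(ridge, X, y)
```

The Monte Carlo estimate fluctuates around the exact trace; increasing `n_probe` reduces the variance at linear cost in solver calls.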

Page 24

Summary of the main result²

Asymptotic normality result, and valid 1 − α confidence interval by de-biasing any convex regularized M-estimator.

- Asymptotics p/n → γ
- Gaussian design, known covariance matrix Σ
- Strong convexity of the penalty required if γ ≥ 1; otherwise any convex penalty is allowed.

² P.C.B. and C.-H. Zhang (2019). Second order Poincaré inequalities and de-biasing arbitrary convex regularizers when p/n → γ.

Page 25

Time permitting:

1. Necessity of the degrees-of-freedom adjustment
2. Central limit theorems and second order Poincaré inequalities
3. Unknown Σ.

Page 26

1. Necessity of degrees-of-freedom adjustment

The previous de-biasing correction features a "degrees-of-freedom" adjustment in the form of multiplication by

(1 − df/n)

or, depending on the normalization, multiplication by

n − df.

This generalizes, in high dimensions, the classical normalization by n − p that yields unbiased estimates when p ≪ n. The degrees-of-freedom adjustment for the Lasso was initially motivated by statistical physics arguments³.

³ Javanmard and Montanari (2014b). Hypothesis Testing in High-Dimensional Regression under the Gaussian Random Design Model: Asymptotic Theory.
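As a sanity check on this analogy: for unpenalized least squares with p < n, Xβ̂ = Hy with H the usual hat matrix, whose trace is exactly p, so n − df reduces to the classical n − p. A small numerical confirmation (dimensions illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 8
X = rng.standard_normal((n, p))

# Hat matrix of unpenalized least squares; its trace equals rank(X) = p
H = X @ np.linalg.solve(X.T @ X, X.T)
df = np.trace(H)
```
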

Page 27

Initial proposals for de-biasing the Lasso do not include the "degrees-of-freedom" adjustment.

Page 28

1. Necessity of the degrees-of-freedom adjustment

- Sparse linear regression y = Xβ + ε, sparsity s_0 = ‖β‖_0
- X has iid N(0, Σ) rows, noise ε ∼ N(0, σ²I_n)

θ̂_ν: de-biased estimate with adjustment of the form (1 − ν/n); here ν represents a possible degrees-of-freedom adjustment, or the absence thereof (ν = 0).

Boxplots of √n (θ̂_ν − θ) when the initial estimator is the Lasso: the pivotal quantity for ν = 0 (unadjusted) is biased (yellow boxplot). The degrees-of-freedom adjustment exactly repairs this.

For s_0 ≫ n^{2/3}, the absence of the degrees-of-freedom adjustment provably leads to incorrect coverage for certain directions a_0.⁴

⁴ B. and Zhang (2018). De-Biasing the Lasso with Degrees-of-Freedom Adjustment.

Page 29

2. Central limit theorems / second order Poincaré inequalities

If f: R^n → R^n and z_0 ∼ N(0, I_n), then the random variable

z_0^T f(z_0) − div f(z_0)

is close to normal when

E‖∇f(z_0)‖_F² / E‖f(z_0)‖²

is small⁵.

- This leads to the asymptotic normality results when de-biasing convex regularizers.
- Mechanically computing/bounding gradients leads to asymptotic normality results (second order Poincaré inequalities, see Chatterjee (2009)).

⁵ P.C.B. and C.-H. Zhang (2019). Second order Poincaré inequalities and de-biasing arbitrary convex regularizers when p/n → γ.
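A quick Monte Carlo illustration of the display above with a linear map f(z) = Az for a fixed symmetric A, so that div f = trace A and Var(z^T A z) = 2 trace(A²); the standardized samples are close to N(0, 1) for moderate n (the sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 2000
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                      # fixed symmetric matrix
Z = rng.standard_normal((reps, n))     # reps independent draws of z ~ N(0, I_n)

# T = z^T f(z) - div f(z) = z^T A z - trace(A), standardized by its exact sd
T = (np.sum((Z @ A) * Z, axis=1) - np.trace(A)) / np.sqrt(2 * np.trace(A @ A))
```

The empirical mean and variance of `T` are close to 0 and 1, and a QQ-plot of `T` against N(0, 1) is nearly straight.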

Page 30

3. Unknown Σ

The general theory of de-biasing and asymptotic normality for arbitrary regularizers applies to any penalty when Σ is known.

In practice, z_0 = Σ^{-1} a_0 needs to be estimated:
- sample splitting;
- on a case-by-case basis for a given regularizer g;
- e.g., nodewise Lasso; dense and sparse a_0 have to be handled differently⁶;
- this leaves open interesting problems for other regularizers.

⁶ B. and Zhang (2018), Section 2.2. De-Biasing the Lasso with Degrees-of-Freedom Adjustment.

Page 31

Thank you!

Asymptotic normality result, and valid 1 − α confidence interval⁷ by de-biasing any convex regularized M-estimator.

- Asymptotics p/n → γ
- Gaussian design, known covariance matrix Σ
- Strong convexity of the penalty required if γ ≥ 1; otherwise any convex penalty is allowed.

⁷ P.C.B. and C.-H. Zhang (2019). Second order Poincaré inequalities and de-biasing arbitrary convex regularizers when p/n → γ.

Page 32

References I

Bayati, Mohsen, and Andrea Montanari. 2012. "The Lasso Risk for Gaussian Matrices." IEEE Transactions on Information Theory 58 (4): 1997–2017.

Celentano, Michael, and Andrea Montanari. 2019. "Fundamental Barriers to High-Dimensional Regression with Convex Penalties." arXiv preprint arXiv:1903.10603.

Chatterjee, Sourav. 2009. "Fluctuations of Eigenvalues and Second Order Poincaré Inequalities." Probability Theory and Related Fields 143 (1-2): 1–40.

Donoho, David, and Andrea Montanari. 2016. "High Dimensional Robust M-Estimation: Asymptotic Variance via Approximate Message Passing." Probability Theory and Related Fields 166 (3-4): 935–69.

Page 33

References II

El Karoui, Noureddine, Derek Bean, Peter J Bickel, Chinghway Lim, and Bin Yu. 2013. "On Robust Regression with High-Dimensional Predictors." Proceedings of the National Academy of Sciences 110 (36): 14557–62.

Javanmard, Adel, and Andrea Montanari. 2014a. "Confidence Intervals and Hypothesis Testing for High-Dimensional Regression." The Journal of Machine Learning Research 15 (1): 2869–2909.

———. 2014b. "Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory." IEEE Transactions on Information Theory 60 (10): 6522–54.

———. 2018. "Debiasing the Lasso: Optimal Sample Size for Gaussian Designs." The Annals of Statistics 46 (6A): 2593–2622.

Page 34

References III

Lei, Lihua, Peter J Bickel, and Noureddine El Karoui. 2018. "Asymptotics for High Dimensional Regression M-Estimates: Fixed Design Results." Probability Theory and Related Fields 172 (3-4): 983–1079.

Miolane, Léo, and Andrea Montanari. 2018. "The Distribution of the Lasso: Uniform Control over Sparse Balls and Adaptive Parameter Tuning." arXiv preprint arXiv:1811.01212.

Sur, Pragya, and Emmanuel J Candès. 2018. "A Modern Maximum-Likelihood Theory for High-Dimensional Logistic Regression." arXiv preprint arXiv:1803.06964.

Van de Geer, Sara, Peter Bühlmann, Ya'acov Ritov, and Ruben Dezeure. 2014. "On Asymptotically Optimal Confidence Regions and Tests for High-Dimensional Models." The Annals of Statistics 42 (3): 1166–1202.

Page 35

References IV

Zhang, Cun-Hui, and Stephanie S Zhang. 2014. "Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (1): 217–42.