Machine Learning and Automatic Text Classification: What's Next?


Machine Learning and Automatic Text Classification: What’s Next?

Fabrizio Sebastiani (Joint work with Giacomo Berardi and Andrea Esuli)

Istituto di Scienza e Tecnologie dell’Informazione, Consiglio Nazionale delle Ricerche

56124 Pisa, Italy

ASC Methods Conference, Winchester, UK – September 6-7, 2013

Prequel: ML for Automated Verbatim Coding

In the last 10 years we have championed an approach to automatically coding open-ended answers (“verbatims”) based on “machine learning”;

2003: D. Giorgetti, I. Prodanof, and F. Sebastiani. Automatic Coding of Open-ended Questions Using Text Categorization Techniques. In Proceedings of the 4th International Conference of the Association for Survey Computing, Warwick, UK, pp. 173–184.

2007: T. Macer, M. Pearson, and F. Sebastiani. Cracking the Code: What Customers Say, in Their Own Words. In Proceedings of the 50th Annual Conference of the Market Research Society, Brighton, UK. (Best New Thinking Award; shortlisted for Best Paper Award and for ASC/MRS Tech Effectiveness Award)

2010: A. Esuli and F. Sebastiani. Machines that learn how to code open-ended survey data. International Journal of Market Research, 52(6). (Shortlisted for best 2010 IJMR paper)

Fabrizio Sebastiani (ISTI-CNR, Pisa, IT) ML & ATC: What’s Next? ASC Methods Conference 2 / 41

Prequel: ML for Automated Verbatim Coding (cont’d)

Based on these principles we have built a software system, called VCS (“Verbatim Coding System”), which has been variously applied to coding surveys in the social sciences, customer relationship management, and market research.

VCS is based on a “supervised learning” metaphor: the classifier learns (or: is trained), from sample manually classified verbatims, the characteristics a new verbatim should have in order to be attributed a given code.

The human operator who feeds sample manually classified verbatims to the system plays the role of the “supervisor”.


A Verbatim Coding System based on ML

[Screenshots of the VCS user interface, shown on slides 4-7]

What I’ll be talking about today

A talk about the role of humans in the verbatim coding process, and about how best to support their work. I will be looking at scenarios in which

1. automated verbatim coding technology is used ...
2. ... but the level of accuracy that can be obtained from the classifier is not considered sufficient ...
3. ... with the consequence that one or more human coders are asked to inspect (and correct where appropriate) a portion of the classification decisions, with the goal of increasing overall accuracy.

Problem: How can we support / optimize the work of the human coders?


A worked out example

              predicted
               Y        N
true   Y    TP = 4   FN = 4
       N    FP = 3   TN = 9

F1 = 2·TP / (2·TP + FP + FN) = 0.53


A worked out example (cont’d)

              predicted
               Y        N
true   Y    TP = 5   FN = 3
       N    FP = 3   TN = 9

F1 = 2·TP / (2·TP + FP + FN) = 0.63


A worked out example (cont’d)

              predicted
               Y        N
true   Y    TP = 5   FN = 3
       N    FP = 2   TN = 10

F1 = 2·TP / (2·TP + FP + FN) = 0.67


A worked out example (cont’d)

              predicted
               Y        N
true   Y    TP = 6   FN = 2
       N    FP = 2   TN = 10

F1 = 2·TP / (2·TP + FP + FN) = 0.75


A worked out example (cont’d)

              predicted
               Y        N
true   Y    TP = 6   FN = 2
       N    FP = 1   TN = 11

F1 = 2·TP / (2·TP + FP + FN) = 0.80
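The progression above can be reproduced with a few lines of code. This is an illustrative sketch (the helper name `f1` is mine, not part of VCS), computing F1 = 2·TP / (2·TP + FP + FN) after each correction step; each step turns one misclassified verbatim into a correct decision (FN → TP or FP → TN):

```python
# F1 from a contingency table; helper name is illustrative.
def f1(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN); 0.0 when the denominator is zero."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# (TP, FP, FN) after each correction step in the worked example;
# TN does not appear in F1.
steps = [(4, 3, 4), (5, 3, 3), (5, 2, 3), (6, 2, 2), (6, 1, 2)]
for tp, fp, fn in steps:
    print(f"TP={tp} FP={fp} FN={fn} -> F1 = {f1(tp, fp, fn):.2f}")
```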


What I’ll be talking about (cont’d)

We need methods that
- given a desired level of accuracy, minimize the human coders’ effort necessary to achieve it; alternatively,
- given an available amount of human coders’ effort, maximize the accuracy that can be obtained through it

This can be achieved by ranking the automatically classified verbatims in such a way that, by starting the inspection from the top of the ranking, the cost-effectiveness of the human coders’ work is maximized. We call the task of generating such a ranking Semi-Automatic Text Classification (SATC).


What I’ll be talking about (cont’d)

Previous work has addressed SATC via techniques developed for active learning. In both cases, the automatically classified verbatims are ranked with the goal of having the human coder start inspecting/correcting from the top; however

- in active learning the goal is providing new training examples
- in SATC the goal is increasing the overall accuracy of the classified set

We claim that a ranking generated “à la active learning” is suboptimal for SATC¹

¹ G. Berardi, A. Esuli, F. Sebastiani. A Utility-Theoretic Ranking Method for Semi-Automated Text Classification. In Proceedings of the 35th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2012), Portland, US, 2012.


A Verbatim Coding System based on ML

[Screenshots of the VCS user interface, shown on slides 17-18]

Outline of this talk

1. We discuss how to measure “error reduction” (i.e., the increase in accuracy deriving from the human coder’s inspection activity)

2. We discuss a method for maximizing the expected error reduction for a fixed amount of annotation effort

3. We show some promising experimental results


Error Reduction, and How to Measure it

Outline

1 Error Reduction, and How to Measure it

2 Error Reduction, and How to Maximize it

3 Some Experimental Results


Error Reduction, and how to measure it

Assume we have
1. a code c;
2. a classifier h for c;
3. a set of unlabeled verbatims D that we have automatically classified by means of h, so that every verbatim in D is associated
   - with a binary decision (Yes or No)
   - with a confidence score (a positive real number);
4. a measure of accuracy A, ranging on [0,1]


Error Reduction, and how to Measure it (cont’d)

We will assume that A is

F1 = 2·TP / (2·TP + FP + FN)

but any measure of accuracy based on a contingency table may be used.

An amount of error, measured as E = (1 − A), is present in the automatically classified set D.

Human coders inspect-and-correct a portion of D with the goal of reducing the error present in D.



Error Reduction, and how to Measure it (cont’d)

We define error at rank n (noted E(n)) as the error still present in D after the coder has inspected the verbatims at the first n rank positions
- E(0) is the initial error generated by the automated classifier
- E(|D|) is 0

We define error reduction at rank n (noted ER(n)) to be

ER(n) = (E(0) − E(n)) / E(0)

the error reduction obtained by the human coder who inspects the verbatims at the first n rank positions
- ER(n) ∈ [0, 1]
- ER(n) = 0 indicates no reduction
- ER(n) = 1 indicates total elimination of error
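As a concrete sketch of this definition (all names are my illustrative assumptions, with A = F1 on a single code), ER(n) can be computed by simulating the correction of the first n ranked items:

```python
# Items are (predicted, true) boolean pairs for one code c.
def f1(items):
    tp = sum(1 for p, t in items if p and t)
    fp = sum(1 for p, t in items if p and not t)
    fn = sum(1 for p, t in items if t and not p)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

def error_reduction(ranked, n):
    """ER(n) = (E(0) - E(n)) / E(0), where E(k) = 1 - F1 after the first
    k ranked items have been inspected and corrected (predicted := true)."""
    e0 = 1.0 - f1(ranked)
    corrected = [(t, t) for _, t in ranked[:n]] + ranked[n:]
    en = 1.0 - f1(corrected)
    return (e0 - en) / e0 if e0 else 0.0
```

By construction ER(0) = 0, and ER(|D|) = 1 whenever the classifier made at least one error.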


Error Reduction, and how to Measure it (cont’d)

[Plot of error reduction ER(n) as a function of the number of inspected verbatims, shown on slide 24]

Error Reduction, and How to Maximize it

Outline

1 Error Reduction, and How to Measure it

2 Error Reduction, and How to Maximize it

3 Some Experimental Results


Error Reduction, and how to Maximize it

Problem: How should we rank the verbatims in D so as to maximize the expected error reduction?


Error Reduction, and how to Maximize it

Intuition 1: verbatims that have a higher probability of being misclassified should be ranked higher.

Intuition 2: verbatims that, if corrected, bring about a higher gain (i.e., a bigger increase in A) should be ranked higher.

This means that verbatims that have a higher utility (= probability × gain) should be ranked higher.

A false positive and a false negative may have different impacts on A!

While in active learning only the probability of misclassification is relevant, in SATC gains are also relevant.


Error Reduction, and how to Maximize it (cont’d)

Given a set Ω of mutually disjoint events, a utility function is defined as

U(Ω) = Σ_{ω∈Ω} P(ω)·G(ω)

where
- P(ω) is the probability of occurrence of event ω
- G(ω) is the gain obtained if event ω occurs

We can thus estimate the utility, for the aims of increasing A, of manually inspecting a verbatim d as

U(TP, TN, FP, FN) = P(FP)·G(FP) + P(FN)·G(FN)

provided we can estimate
- if d is labelled with code c: P(FP) and G(FP)
- if d is not labelled with code c: P(FN) and G(FN)
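A minimal sketch of the resulting ranking rule (names and numbers are illustrative, not from the actual system). For each verbatim only one of the two terms is nonzero, since a verbatim labelled with c can only be a FP, and one not labelled with c can only be a FN:

```python
# Utility of inspecting one verbatim: P(misclassification) times the gain
# of the correction it would trigger. Values below are made up.
def utility(p_error, assigned, g_fp, g_fn):
    return p_error * (g_fp if assigned else g_fn)

# (id, estimated P(misclassification), was code c assigned?)
verbatims = [("v1", 0.9, True), ("v2", 0.8, False), ("v3", 0.2, True)]
g_fp, g_fn = 0.03, 0.05  # hypothetical differential gains

ranking = sorted(verbatims,
                 key=lambda v: utility(v[1], v[2], g_fp, g_fn),
                 reverse=True)
# With these gains, v2 outranks v1 even though v1 is more likely misclassified.
```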


Error Reduction, and how to Maximize it (cont’d)

Estimating P(FP) and P(FN) (the probability of misclassification) can be done by converting the confidence score returned by the classifier into a probability of correct classification

- Tricky: requires probability “calibration” via a generalized sigmoid function to be optimized via k-fold cross-validation

Gains G(FP) and G(FN) can be defined “differentially”; i.e.,
- the gain obtained by correcting a FN is (A_{FN→TP} − A)
- the gain obtained by correcting a FP is (A_{FP→TN} − A)
- gains need to be estimated by estimating the contingency table on the training set via k-fold cross-validation
- Key observation: in general, G(FP) ≠ G(FN)
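Assuming A = F1, the differential gains can be sketched as follows (the contingency-table counts would, as said above, be estimated via k-fold cross-validation; the code and names here are my illustration, not the authors' implementation):

```python
def f1(tp, fp, fn):
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

def gains(tp, fp, fn):
    """G(FN) = A_(FN->TP) - A ; G(FP) = A_(FP->TN) - A."""
    a = f1(tp, fp, fn)
    g_fn = f1(tp + 1, fp, fn - 1) - a  # a corrected FN becomes a TP
    g_fp = f1(tp, fp - 1, fn) - a      # a corrected FP becomes a TN
    return g_fp, g_fn

# On the contingency table of the worked example the two gains differ:
g_fp, g_fn = gains(tp=4, fp=3, fn=4)
```

Here g_fn > g_fp: under F1, correcting a false negative pays more than correcting a false positive, which is exactly why uniform gains (“à la active learning”) are suboptimal.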


Some Experimental Results

Outline

1 Error Reduction, and How to Measure it

2 Error Reduction, and How to Maximize it

3 Some Experimental Results


Some Experimental Results

Dataset:

                 # Codes   # Training   # Test
Reuters-21578      115        9603       3299

Baseline: ranking by probability of misclassification (“à la active learning”), equivalent to applying our ranking method with G(FP) = G(FN) = 1

[Plot of error reduction (ER) vs. inspection length. Learner: MP-Boost; Dataset: Reuters-21578; Type: Macro. Curves: Random, Baseline, Utility-theoretic, Oracle]


A few side notes

This approach allows the human coder to know, at any stage of the inspection process, what the estimated accuracy is at that stage; obtained by
- estimating accuracy at the beginning of the process, via k-fold cross-validation
- updating after each correction is made

This approach lends itself to having more than one coder working in parallel on the same inspection-and-correction task.

Recent research I have not discussed today:
- a “dynamic” SATC method in which gains are updated after each correction is performed
- “microaveraging”- and “macroaveraging”-oriented methods
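The running accuracy estimate described above can be sketched as a small tracker (an illustrative assumption, not the actual VCS code): start from a contingency table estimated via k-fold cross-validation, then adjust it as the coder confirms or corrects each inspected decision:

```python
class AccuracyTracker:
    """Running F1 estimate over an (estimated) contingency table."""
    def __init__(self, tp, fp, fn, tn):
        self.tp, self.fp, self.fn, self.tn = tp, fp, fn, tn

    def f1(self):
        denom = 2 * self.tp + self.fp + self.fn
        return 2 * self.tp / denom if denom else 1.0

    def correct_fp(self):
        # The coder removed a wrongly assigned code: FP -> TN.
        self.fp -= 1
        self.tn += 1

    def correct_fn(self):
        # The coder added a wrongly missing code: FN -> TP.
        self.fn -= 1
        self.tp += 1
```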


Concluding Remarks

Take-away message: semi-automatic text classification needs to be addressed as a task in its own right.

- Active learning typically makes use of probabilities of misclassification but does not make use of gains ⇒ ranking “à la active learning” is suboptimal for SATC
- The use of utility theory means that the ranking algorithm is optimized for a specific accuracy measure ⇒ choose the accuracy measure that best mirrors your applicative needs (e.g., Fβ with β > 1), and choose it well!
- SATC is important, since in more and more application contexts the accuracy obtainable via completely automatic text classification is not sufficient; more and more frequently humans will need to enter the loop


Thank you!
