
ICSE 2013 Bug Prediction Session

• Does Bug Prediction Support Human Developers? Findings From a Google Case Study

• Transfer Defect Learning

Does Bug Prediction Support Human Developers? Findings From a Google Case Study

Chris Lewis, ZhongPeng Lin, Caitlin Sadowski, Xiaoyan Zhu, Rong Ou, E. James Whitehead Jr.

University of California, Santa Cruz; Google Inc.; Xi'an Jiaotong University

Motivations

Little empirical data validating that areas predicted to be bug-prone match the expectations of expert developers

Little data showing whether the information provided by bug prediction algorithms actually changes developer behavior

Three Questions

Q1: According to expert opinion, given a collection of bug prediction algorithms, how many bug-prone files do they find and which algorithm is preferred?

Q2: What are the desired characteristics a bug prediction algorithm should have?

Q3: Using the knowledge gained from the first two questions to design a suitable algorithm, do developers modify their behavior when presented with bug prediction results?

Algorithm Choice

FixCache
• If a file was recently changed, it is likely to contain faults
• If a file contains a fault, it is likely to contain more faults
• Files that change alongside faulty files are more likely to contain faults
• Maintained as an LRU cache; problem: it flags roughly 10% of files and gives no severity ordering

Modifications:
• Reduce the cache size to 20
• Order the cache by duration (total commits)

Rahman (ranks files by their history of closed bugs; see the sketch below)
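The Rahman baseline that TWR later builds on is simple enough to sketch. Below is a minimal, hypothetical Python illustration of the idea; the (is_bug_fix, files) input shape and the top_n cutoff are assumptions made for illustration, not the paper's data model.

```python
from collections import Counter

def rahman_ranking(commits, top_n=20):
    """Rank files by how many bug-fixing commits have touched them.

    `commits` is an iterable of (is_bug_fix, files) pairs; this input
    shape is purely illustrative.
    """
    counts = Counter()
    for is_bug_fix, files in commits:
        if is_bug_fix:
            counts.update(files)  # one closed-bug credit per touched file
    return [path for path, _ in counts.most_common(top_n)]
```

For example, rahman_ranking([(True, ["a.c", "b.c"]), (True, ["a.c"]), (False, ["c.c"])]) ranks "a.c" first. Every bug-fixing commit counts equally here, regardless of age; that lack of recency weighting is what TWR later addresses.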

Project Choice

User Studies

19 interviewees (A: 9, B: 10)

3 lists of files

Choices:
• Bug-prone
• Not bug-prone
• No strong feelings either way
• No experience with the file

Results

Q2: desirable characteristics

Actionable (take clear steps that will result in the area no longer being flagged)

Obvious reasoning

Bias towards the new (recent bug-fixing activity should weigh more than old activity)

Parallelizable

Effectiveness scaling

Time-Weighted Risk Algorithm (TWR)

A modified Rahman algorithm, with the following variables:

i: a bug-fixing commit

n: the number of bug-fixing commits

t_i: the normalized time of the current bug-fixing commit

w: how strong the decay should be
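Only the variable legend survived in this transcript; the score itself, as published in the ICSE 2013 paper, has the following form (the constant 12 in the exponent follows my reading of the published formula, so verify against the paper):

TWR = \sum_{i=1}^{n} \frac{1}{1 + e^{-12 t_i + w}}

A small Python sketch of the same sum, assuming t_i is normalized so that 0 is the oldest commit and 1 the most recent, and using w = 12 as an illustrative default:

```python
import math

def twr_score(bug_fix_times, w=12.0):
    """Time-Weighted Risk for one file: with w = 12, a recent bug-fixing
    commit (t near 1) contributes close to 0.5, while an old one
    (t near 0) contributes almost nothing, so old fixes decay away."""
    return sum(1.0 / (1.0 + math.exp(-12.0 * t + w)) for t in bug_fix_times)
```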

Experiment

Mondrian (code review software) + lint

Duration: 3 months at Google Inc.

Metrics:
• The average time a review containing a bug-prone file takes from submission to approval
• The average number of comments on a review that contains a bug-prone file
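A hypothetical sketch of how the two metrics could be computed from review records; the field names are invented for illustration and are not Mondrian's API.

```python
def review_metrics(reviews):
    """Average time-to-approval and average comment count over reviews
    that contain at least one bug-prone (flagged) file."""
    flagged = [r for r in reviews if r["has_bug_prone_file"]]
    if not flagged:
        return 0.0, 0.0
    n = len(flagged)
    avg_hours = sum(r["hours_to_approval"] for r in flagged) / n
    avg_comments = sum(r["comment_count"] for r in flagged) / n
    return avg_hours, avg_comments
```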

Results

Conclusion

Developer behavior did not measurably change; the failure is attributed to TWR

• No actionable means of removing the bug-prone flag

Transfer Defect Learning

Jaechang Nam, Sinno Jialin Pan, Sunghun Kim

Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, China
Institute for Infocomm Research, Singapore

Motivations

Cross-project defect prediction performs poorly: projects share the same feature space, but their data distributions differ.

On the basis of transfer learning, the authors propose transfer defect learning, building on an existing method: TCA (Transfer Component Analysis).

TCA is sensitive to normalization

TCA

TCA tries to learn a transformation to map the original data of source and target domains to a latent space where the difference between domains is small and the data variance after transformation is large.
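As a rough illustration of standard TCA (Pan et al.), not the authors' implementation, the linear-kernel sketch below finds a projection in which the source/target mean discrepancy is small while variance is preserved; the dim and mu parameters are illustrative choices.

```python
import numpy as np

def tca(Xs, Xt, dim=5, mu=1.0):
    """Minimal linear-kernel TCA sketch: map source (Xs) and target (Xt)
    instances into a shared latent space."""
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    X = np.vstack([Xs, Xt])
    K = X @ X.T                                    # linear kernel matrix
    # MMD coefficients: 1/ns^2 (src-src), 1/nt^2 (tgt-tgt), -1/(ns*nt) otherwise
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    # Leading eigenvectors of (K L K + mu I)^{-1} K H K span the latent space
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(A)
    W = vecs[:, np.argsort(-vals.real)[:dim]].real
    Z = K @ W                                      # all instances in the latent space
    return Z[:ns], Z[ns:]                          # transformed source, target
```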

TCA+

Choose Normalization options automatically

Rules

Five decision rules select a normalization option based on the similarity of the source and target datasets' characteristics
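The five normalization options the rules choose between can be sketched roughly as below; the option names are descriptive labels, not the paper's, and the rule table itself is not reproduced here.

```python
import numpy as np

def normalize(Xs, Xt, option):
    """Apply one normalization option to source (Xs) and target (Xt)."""
    eps = 1e-12
    if option == "none":                 # no normalization
        return Xs, Xt
    if option == "minmax":               # per-dataset min-max scaling
        mm = lambda X: (X - X.min(0)) / (X.max(0) - X.min(0) + eps)
        return mm(Xs), mm(Xt)
    if option == "zscore":               # per-dataset z-score
        z = lambda X: (X - X.mean(0)) / (X.std(0) + eps)
        return z(Xs), z(Xt)
    if option == "zscore_source":        # z-score using source statistics
        m, s = Xs.mean(0), Xs.std(0) + eps
        return (Xs - m) / s, (Xt - m) / s
    if option == "zscore_target":        # z-score using target statistics
        m, s = Xt.mean(0), Xt.std(0) + eps
        return (Xs - m) / s, (Xt - m) / s
    raise ValueError(f"unknown option: {option}")
```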

Process
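Putting the pieces together, the overall process can be sketched as below, reusing the normalize and tca sketches above. The rule-based option selection is represented by an explicit option argument, and logistic regression is an assumed classifier choice, not necessarily the one used in the paper.

```python
from sklearn.linear_model import LogisticRegression

def tca_plus_predict(Xs, ys, Xt, option):
    """Cross-project prediction: normalize, map both projects into the
    shared TCA latent space, train on the labelled source, and predict
    on the unlabelled target."""
    Xs_n, Xt_n = normalize(Xs, Xt, option)   # option chosen by the 5 rules
    Zs, Zt = tca(Xs_n, Xt_n, dim=5)          # shared latent space
    clf = LogisticRegression(max_iter=1000).fit(Zs, ys)
    return clf.predict(Zt)
```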

Experiment Projects

ReLink: Apache HTTP Server, OpenIntents Safe, ZXing

AEEEM: Equinox, Eclipse JDT Core, Apache Lucene, Mylyn, Eclipse PDE UI

TCA with different normalization options

TCA+

Contributions

First work to show that applying TCA improves cross-project defect prediction performance

Proposed TCA+