
Curating GitHub for Engineered Software Projects

Nuthan Munaiah¹, Steven Kroh¹, Craig Cabrey¹, and Meiyappan Nagappan²

¹Department of Software Engineering, Rochester Institute of Technology, 134 Lomb Memorial Drive, Rochester, NY, USA 14623
²David R. Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada N2L 3G1

Corresponding author:
Nuthan Munaiah and Meiyappan Nagappan
Email address: [email protected], [email protected]

ABSTRACT

Software forges like GitHub host millions of repositories. Software engineering researchers have been able to take advantage of such a large corpus of potential study subjects with the help of tools like GHTorrent and Boa. However, the simplicity in querying comes with a caveat: there are limited means of separating the signal (e.g. repositories containing engineered software projects) from the noise (e.g. repositories containing homework assignments). The proportion of noise in a random sample of repositories could skew the study and may lead researchers to reach unrealistic, potentially inaccurate, conclusions. We argue that it is imperative to have the ability to sieve out the noise in such large repository forges.


We propose a framework, and present a reference implementation of the framework as a tool called reaper, to enable researchers to select GitHub repositories that contain evidence of an engineered software project. We identify software engineering practices (called dimensions) and propose means for validating their existence in a GitHub repository. We used reaper to measure the dimensions of 1,994,977 GitHub repositories. We then used the data set to train classifiers capable of predicting if a given GitHub repository contains an engineered software project. The performance of the classifiers was evaluated using a set of 200 repositories with known ground truth classification. We also compared the performance of the classifiers to other approaches to classification (e.g. number of GitHub Stargazers) and found our classifiers to outperform existing approaches. We found the stargazers-based classifier to exhibit high precision (96%) but low recall (27%). On the other hand, our best classifier exhibited both high precision (82%) and high recall (83%). The stargazers-based criterion offers precision but fails to recall a significant portion of the population.


1 INTRODUCTION

Software repositories contain a wealth of information about the code, people, and processes that go into the development of a software product. Retrospective analysis of these software repositories can yield valuable insights into the evolution and growth of the software products contained within. We can trace such analysis all the way back to the 1970s, when Belady and Lehman (1976) proposed Lehman’s Laws of software evolution. Today, the field is significantly invested in retrospective analysis, with the Boa project (Dyer et al., 2013) receiving more than $1.4 million to support such analysis.¹

The insights gained through retrospective analysis can affect the decision-making process in a project and improve the quality of the software system being developed. An example of this can be seen in the recommendations made by Bird et al. (2011) in their study regarding the effects of code ownership on the quality of software systems. The authors suggest that quality assurance efforts should focus on those components with many minor contributors.

¹ National Science Foundation (NSF) Grant CNS-1513263


The richness of the data and the potential insights that it represents were the enabling factors in the inception of an entire field of research: Mining Software Repositories (MSR). In the early days of MSR, researchers had limited access to software repositories, which were primarily hosted within organizations. However, with the proliferation of open access software repositories such as GitHub, Bitbucket, SourceForge, and CodePlex for source code, and Bugzilla, Mantis, and Trac for bugs, researchers now have an abundance of data from which to mine and draw interesting conclusions.

Every source code commit contains a wealth of information that can be used to gain an understanding of the art of software development. For example, Eick et al. (2001) dived into the rich (fifteen-plus year) commit history of a large telephone switching system in order to explore the idea of code decay. Modern source code repositories provide features that make managing a software project as seamless as possible. While the integration of features provides improved traceability for developers and project managers, it also provides MSR researchers with a single, self-contained, organized, and, more importantly, publicly-accessible source of information from which to mine. However, anyone may create a repository for any purpose at no cost. Therefore, the quality of information contained within the forges may be diminishing with the addition of many noisy repositories, e.g. repositories containing homework assignments, text files, images, or worse, the backup of a desktop computer. Kalliamvakou et al. (2014) identified this noise as one of the nine perils to be aware of when mining GitHub data for software engineering research. The situation is compounded by the sheer volume of repositories contained in these forges. As of June 2016, GitHub alone hosts over 38 million repositories² and this number is rapidly increasing.

² https://github.com/about/press

Researchers have used various criteria to slice the mammoth software forges into data sets manageable for their studies. For example, MSR researchers have leveraged simple filters such as popularity to remove noisy repositories. Filters like popularity (measured as the number of watchers or stargazers on GitHub, for example) are merely proxies and may be neither general-purpose nor representative of an engineered software project. Furthermore, MSR researchers should not have to reinvent filters to eliminate unwanted repositories. There are a few examples of research that take the approach of developing their own filters in order to procure a data set to analyze:

• In a study of the relationship between programming languages and code quality, Ray et al. (2014) selected the 50 most popular (measured by the number of stars) repositories in each of the 19 most popular languages.

• Bissyande et al. (2013) chose the first 100,000 repositories returned by the GitHub API in their study of the popularity, interoperability, and impact of programming languages.

• Allamanis and Sutton (2013) chose 14,807 Java repositories with at least one fork in their study of applying language modeling to mining source code repositories.

The project sites for GHTorrent (GHTorrent, 2016) and Boa (Iowa State University, 2016) list more papers that employ different filtering schemes.

The assumption that one could make is that the repositories sampled in these studies contain engineered software projects. However, source code forges are rife with repositories that do not contain source code, let alone an engineered software project. Kalliamvakou et al. (2014) manually sampled 434 repositories from GitHub and found that only 63.4% (275) of them were for software development; the remaining 159 repositories were used for experimental, storage, or academic purposes, or were empty or no longer accessible. The inclusion of repositories containing such non-software artifacts in studies targeting software projects could lead to conclusions that may not be applicable to software engineering at large. At the same time, selecting a sample by manual investigation is not feasible given the sheer volume of repositories hosted by these source code forges.

The goal of our work is to identify practices that an engineered software project would typically exhibit, with the intention of developing a generalizable framework with which to identify such projects in the real world.

The contributions of our work are:

• A generalizable evaluation framework defined on a set of dimensions that encapsulate typical software engineering practices;


• A reference implementation of the evaluation framework, called reaper, available as an open-source project (Munaiah et al., 2016c);

• A publicly-accessible data set of dimensions obtained from 1,994,977 GitHub repositories (Munaiah et al., 2016b).

The remainder of this paper is organized as follows: we begin by introducing the notion of an engineered software project in Section 2. We then propose an evaluation framework in Section 2.1 that aims to operationalize the definition of an engineered software project along a set of dimensions. We describe the various sources of data used in our study in Section 3. In Section 4, we introduce the eight dimensions used to represent a repository in our study. In Section 5, we propose two variations of the definition of an engineered software project, collect a set of repositories that conform to the definitions, and present approaches to build classifiers capable of identifying other repositories that conform to the definition of an engineered software project. The results from validating the classifiers and using them to identify repositories that conform to a particular definition of an engineered software project from a sample of 1,994,977 GitHub repositories are presented in Section 6. We contrast our study with prior literature in Section 7, discuss prior and potential research scenarios in which the data set and the classifier could be used in Section 8, and discuss nuances of certain repositories in Section 9. We address threats to validity in Section 10 and conclude the paper with Section 11.

2 ENGINEERED SOFTWARE PROJECT

Laplante (2007) defines software engineering as “a systematic approach to the analysis, design, assessment, implementation, test, maintenance and reengineering of software”. A software project may be regarded as “engineered” if there is discernible evidence of the application of software engineering principles such as design, test, maintenance, etc. Along similar lines, we define an engineered software project in Definition 2.1.

Definition 2.1. An engineered software project is a software project that leverages sound software engineering practices in each of its dimensions such as documentation, testing, and project management.

Definition 2.1 is intentionally abstract; the definition may be customized to align with a set of different, yet relevant, concerns. For instance, a study concerned with the extent of testing in software projects could define an engineered software project as a software project that leverages sound software testing practices. In our study, we have customized the definition of an engineered software project in two ways: (a) an engineered software project is similar to the projects contained within repositories owned by popular software engineering organizations such as Amazon, Apache, Microsoft, and Mozilla and (b) an engineered software project is similar to the projects that have a general-purpose utility to users other than the developers themselves. We elaborate on these two definitions in the Implementation Section (§5).

2.1 Evaluation Framework

In order to operationalize Definition 2.1, we need to (a) identify the essential software engineering practices that are employed in the development and maintenance of a typical software project and (b) propose means of quantifying the evidence of their use in a given software project. The evaluation framework is our attempt at achieving this goal.

The evaluation framework, in its simplest form, is a boolean-valued function defined as the piecewise function shown in (1).

f(r) = \begin{cases} \text{true} & \text{if repository } r \text{ contains an engineered software project} \\ \text{false} & \text{otherwise} \end{cases} \qquad (1)

The evaluation framework makes no assumption about the implementation of the boolean-valued function, f(r). In our implementation of the evaluation framework, we have chosen to realize f(r) in two ways: (a) f(r) as a score-based classifier and (b) f(r) as a Random Forest classifier. In both approaches, the implementation of the function, f(r), is achieved by expressing the repository, r, using a set of quantifiable attributes (called dimensions) that we believe are essential in reasoning that a repository contains an engineered software project.


3 DATA SOURCES

In this section, we describe the two primary sources of data used in our study. We note that our study is restricted to publicly-accessible repositories available on GitHub and that the data sources described in the subsections that follow are in the context of GitHub.

3.1 Metadata

GitHub metadata contains a wealth of information with which we could describe several phenomena surrounding a source code repository. For example, some of the important pieces of metadata are the primary language of implementation in a repository and the commits made by developers to a repository. GitHub provides a REST API (GitHub, Inc., 2016a) with which GitHub metadata may be obtained over the Internet. There are several services that capture and publish this metadata in bulk, avoiding the latency of the official API. The GitHub Archive project (GitHub, Inc., 2016b) was created for this purpose. It stores public events from the GitHub timeline and publishes them via Google BigQuery. Google BigQuery is a hosted querying engine that supports SQL-like constructs for querying large data sets. However, accessing the GitHub Archive data set via BigQuery incurs a cost per terabyte of data processed.

Fortunately, Gousios (2013) provides a free solution via the GHTorrent project. The GHTorrent project provides a scalable and queryable offline mirror of all Git and GitHub metadata available through the GitHub REST API. The GHTorrent project is similar to the GitHub Archive project in that both start with GitHub’s public events timeline. While the GitHub Archive project simply records the details of a GitHub event, the GHTorrent project exhaustively retrieves the contents of the event and stores them in a relational database. Furthermore, the GHTorrent data sets are available for download, either as incremental MongoDB dumps or a single MySQL dump, allowing offline access to the metadata. We have chosen to use the MySQL dump, which was downloaded and restored onto a local server. In the remainder of the paper, whenever we use the term database we are referring to the GHTorrent database.

The database dump used in this study was released on April 1, 2015. The database dump contained metadata for 16,331,225 GitHub repositories. In this study, we restrict ourselves to repositories in which the primary language is one of Java, Python, PHP, Ruby, C++, C, or C#. Furthermore, we do not consider repositories that have been marked as deleted or those that are forks of other repositories. Deleted repositories restrict the amount of data available for the analysis, while forked repositories can artificially inflate the results by introducing near duplicates into the sample. With these restrictions applied, the size of our sample is reduced to 2,247,526 repositories.

An inherent limitation of the database is the staleness of data. There may be repositories in the database that no longer exist on GitHub, as they may have been deleted, renamed, made private, or blocked by GitHub.

3.2 Source Code

In addition to the metadata about a repository, the code contained within is an important source of information about the project. Developers typically interact with their repositories using either the git client or the GitHub web interface. Developers may also use the GitHub REST API to programmatically interact with GitHub.

We use GitHub to obtain a copy of the source code for each repository. We cannot use GitHub’s REST API to retrieve repository snapshots, as the API internally uses the git archive command to create those snapshots. As a result, the snapshots may not include files the developers may have marked irrelevant to an end user (such as unit test files). Since we wanted to examine all development files in our analysis, we used the git clone command instead to ensure that all files are downloaded.

As mentioned earlier, the metadata used in this study is current as of April 1, 2015. However, this metadata may not be consistent with a repository cloned after April 1, 2015, as the repository contributors may have made commits after that date. In order to synchronize the repository with the metadata, we reset the state of the repository to a past date. For each evaluated repository in the database, we retrieved the date of the most recent commit to the repository. We then identified the SHA of the last commit made to the repository before the end of the day identified by that date using the command git log -1 --before="{date} 11:59:59". For repositories with no commits recorded in the database, we used the date when the GHTorrent metadata dump was released, i.e. 2015-04-01. With the appropriate commit SHA identified, the state of the cloned repository was reset using the command git reset --hard {SHA}.
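
The following Python sketch mirrors this clone-and-reset procedure; the function itself and the repository URL, working directory, and cutoff date in the usage comment are hypothetical illustrations, not reaper's actual interface.

import subprocess

def clone_and_sync(clone_url, work_dir, cutoff_date):
    """Clone a repository and reset it to the last commit made on or
    before cutoff_date (a 'YYYY-MM-DD' string), mirroring the git
    commands quoted above."""
    subprocess.run(["git", "clone", clone_url, work_dir], check=True)

    # Find the SHA of the last commit before the end of the cutoff day.
    result = subprocess.run(
        ["git", "log", "-1", "--format=%H",
         "--before={} 11:59:59".format(cutoff_date)],
        cwd=work_dir, capture_output=True, text=True, check=True)
    sha = result.stdout.strip()

    # Reset the working tree to that commit.
    subprocess.run(["git", "reset", "--hard", sha], cwd=work_dir, check=True)

# Example (hypothetical repository and date):
# clone_and_sync("https://github.com/user/repo.git", "/tmp/repo", "2015-04-01")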

4 DIMENSIONS

In this section, we describe the dimensions used to represent a repository in the context of the evaluation framework introduced earlier. In our study, a repository is represented using a set of eight dimensions:

1. Architecture, as evidence of code organization.
2. Community, as evidence of collaboration.
3. Continuous integration, as evidence of quality.
4. Documentation, as evidence of maintainability.
5. History, as evidence of sustained evolution.
6. Issues, as evidence of project management.
7. License, as evidence of accountability.
8. Unit testing, as evidence of quality.

In the selection of dimensions, while relevance to software engineering practices was paramount, we also had to consider aspects such as implementation simplicity and measurement accuracy. We needed the algorithm measuring each of the dimensions to be generic enough to account for the plethora of programming languages used in the development of software projects, yet specific enough to produce meaningful results. We acknowledge that this list is subjective and by no means exhaustive; however, the evaluation framework makes no assumption about either the different dimensions or the way in which the dimensions are used in determining if a repository contains an engineered software project.

In the subsections that follow, we describe each of the eight dimensions in greater detail. In each subsection, we describe the attribute of a software project that the dimension represents, propose a metric to quantify the dimension, and describe an approach to collect the metric from a source code repository. The process of collecting a dimension’s metric may require either or both of the sources of data introduced earlier.

We have developed an open-source tool called reaper that is capable of collecting the metric for each of the eight dimensions from a given source code repository. The source code for reaper is available on GitHub at https://github.com/RepoReapers/reaper. In its current version, the capabilities of reaper are subject to the following restrictions:

• The source code repository being analyzed must be publicly-accessible on GitHub; and

• The primary language of the repository must be one of Java, Python, PHP, Ruby, C++, C, or C#. We chose these languages based on their popularity on GitHub as reported by GitHut (Carlo Zapponi, 2016).

reaper was designed with flexibility and extensibility in mind. Extending reaper to add the capability to analyze source code repositories in a new language (say, JavaScript), for instance, is fairly trivial, and the process is detailed in the project wiki.³

³ https://github.com/RepoReapers/reaper/wiki/Extending-reaper

4.1 Architecture

IEEE 1471 defines software architecture as “the fundamental organization of a system embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution” (Maier et al., 2001). In effect, software architecture is the high-level structure of a software system that depicts the relationships between the various components that compose the system.

Software architecture comes in all shapes, sizes, and complexities. There is no one fixed architecture to which all software systems adhere; however, software systems that employ some form of architecture have discernible relationships between the components that compose the system. These components can be as granular as functions or as coarse as entire binaries. For our purposes, any software system that has well-defined relationships between its components can be said to have an architecture. The presence of an architecture indicates that some form of design was employed in the development of the software project. Consequently, the software project may contain further evidence of being an engineered software project.


Metric

We approximate repository architecture as an undirected graph in which nodes represent files in the repository and edges represent relationships between the files. We hypothesize that a well-architected software system, at the very least, will have a majority of its files related to one another. We propose a metric, monolithicity, to quantify the extent to which source files in the repository are related to one another.

Definition 4.1. Monolithicity is the ratio of the number of nodes in the largest connected sub-graph to the number of nodes in the entire architecture graph.

There is a specific case that must be handled when calculating monolithicity: certain projects may have only one source file present in the repository. Zazworka et al. (2011) investigated the impact of using god classes in the architecture of a software system and found that their inclusion has a detrimental effect on that system’s quality. Thus, we consider a repository with a single source file to have no architecture.

Approach

Bowman et al. (1999) designed a methodology to extract the high-level architecture of the Linux kernel. The researchers began by grouping source files into modules based on naming and directory structure. Individual source file relationships were promoted to relationships between the modules to produce an architectural overview of the Linux kernel. We use a similar approach to reverse engineer software architecture from the source code of a software system.

We use a popular syntax highlighter package called Pygments (Georg Brandl and Pygments Contributors, 2016) to identify symbol definition and usage in source files. Although Pygments’ primary use case is as a syntax highlighter, the lexer that powers it internally is suited for our purposes. Furthermore, Pygments supports lexical analysis of over 300 programming languages, ensuring our implementation can scale beyond the languages chosen in this study.

A repository may contain files with source code in multiple programming languages. We use ack (Lester, 2014) to obtain a list of files that contain code in the primary language (as noted in the database) of the repository. We then follow a two-pass approach to construct the architecture graph:

1. Pygments lexically analyzes each source file identified by ack to collect the symbols defined and referenced by the file. The file, along with a list of its symbol definitions and references, is added as a node to the graph.

2. Iterating through all the nodes in the graph, an edge is added between a node A and a node B if at least one symbol referenced by A is defined by B.

Figure 1 shows the architecture graph constructed from an example repository containing three source files: main.py, app.py, and util.py. Each node displays a file name above a list of externally visible symbols. The edges represent cross-file symbol references. The monolithicity of this graph is 2/3 = 0.66, as the largest connected component has two files (main.py and app.py).

[Figure 1. Example view of an extracted architecture. Nodes: main.py (defines main()), app.py (defines start()), util.py (defines log()); an edge connects main.py and app.py through the reference to start(), while util.py is isolated.]
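
A minimal sketch of the monolithicity computation, assuming each file has already been lexed into sets of defined and referenced symbols (the two-pass construction above); the function and data layout are illustrative, not reaper's actual interface.

from itertools import combinations

def monolithicity(files):
    """files: dict mapping file name -> (defined_symbols, referenced_symbols).
    Returns the ratio of the largest connected component to the total
    number of nodes; a single-file repository is considered to have no
    architecture, as described above."""
    names = list(files)
    if len(names) <= 1:
        return 0.0

    # Pass 2: connect A and B if a symbol referenced by one is defined
    # by the other.
    adjacency = {name: set() for name in names}
    for a, b in combinations(names, 2):
        defs_a, refs_a = files[a]
        defs_b, refs_b = files[b]
        if refs_a & defs_b or refs_b & defs_a:
            adjacency[a].add(b)
            adjacency[b].add(a)

    # Find the largest connected component with a depth-first search.
    seen, largest = set(), 0
    for start in names:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for neighbor in adjacency[node] - seen:
                seen.add(neighbor)
                stack.append(neighbor)
        largest = max(largest, size)
    return largest / len(names)

# The Figure 1 example: the 2/3 described above.
example = {
    "main.py": ({"main"}, {"start"}),
    "app.py": ({"start"}, set()),
    "util.py": ({"log"}, set()),
}
print(monolithicity(example))  # 0.666...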

While simple, this approach does have limitations, notably:

• Dynamic Loading: Lack of support for programming languages that support dynamic loading, e.g. JavaScript. JavaScript is a dynamic language, which makes it difficult to determine symbol references among source files purely from analyzing the contents of the source files.


• Over-approximation: Since the approach relies purely on the names of the symbols, the resulting architecture may depict relationships that may not occur in the source code, e.g., when multiple files define symbols with the same name.

• Language Bias: Since the approach only considers files containing source code in the primary language of the repository, a significant portion of the source code in the repository may be excluded. For example, a repository meant for the development of a Django application may be flagged with JavaScript as the primary language if the percentage of JavaScript code is higher than that of Python, perhaps owing to the inclusion of vendor files such as jquery.js or bootstrap.js in the repository.

An alternative to Pygments may have been to use static analysis tools to generate a call graph from which the monolithicity metric may be estimated. Such an approach would involve (a) identifying an appropriate static analysis tool from among the plethora of tools available for each of the seven programming languages considered in our study and (b) developing a parser to transform the output from each of the identified static analysis tools into a graph.

As an exploratory exercise, we computed the difference between the value of the monolithicity metric estimated using the Pygments approach and that estimated using the static analysis approach. In this exercise, we restricted ourselves to the C programming language because a Python script capable of parsing a call graph generated by GNU cflow (a static call graph generation utility for programs written in C⁴) was available as a component in an open-source project called the Attack Surface Meter (Munaiah et al., 2016a). The empirical data set in this exercise was composed of 50 randomly selected repositories in which the primary programming language was C. The median difference in the value of the monolithicity metric was 0.1384 (with a standard deviation of 0.2256), with the Pygments-based approach producing lower values for the monolithicity metric. While estimating monolithicity using the static analysis approach is more accurate, the implementation overhead is substantial, deterring us from pursuing this approach any further.

⁴ http://www.gnu.org/software/cflow/

The Pygments-based approach does have one implementation concern: computational complexity. The algorithm to generate the architecture graph has a computational complexity of O(n³). For some very large projects, the approach could not produce a result in a reasonable amount of time. Therefore, the reference implementation includes a timeout period. If the tool is unable to calculate the monolithicity of a project within the designated time period, the calculation is halted and the repository receives no score for the architecture dimension alone.

4.2 Community

Software engineering is an inherently collaborative discipline. With the advent of the Internet and a plethora of tools that simplify communication, software development is increasingly decentralized. Open source development in particular thrives on decentralization, with globally dispersed developers contributing code, knowledge, and ideas to a multitude of open source projects. Collaboration in open source software development manifests itself as a community of developers.

The presence of a developer community indicates that there is some form of collaboration and cooperation involved in the development of the software system, which is partial evidence for the repository containing an engineered software project.

Metric

Whitehead et al. (2010) have hypothesized that the development of a software system involving more than one developer can be considered an instance of collaborative software engineering. We propose a metric, core contributors, to quantify the community established around a source code repository.

Definition 4.2. Core contributors is the cardinality of the smallest set of contributors whose total number of commits to a source code repository accounts for 80% or more of the total contributions.

Approach

The notion of core contributors is prevalent in open source software, where a set of contributors take ownership of and drive a project towards a common goal. Mockus et al. (2000) have applied this concept in their study of open source software development practices. The definition of core contributors is the same as that of core developers as defined by Syer et al. (2013).

We computed total contributions by counting the number of commits made to a repository as recorded in the database. We then grouped the commits by author and picked the first n authors for which the cumulative number of commits accounted for 80% of the total contributions. The value of n represents the core contributors metric.
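
A minimal sketch of this computation, assuming per-author commit counts have already been aggregated from the database; the helper and the names in the example are illustrative.

def core_contributors(commits_by_author, threshold=0.8):
    """commits_by_author: dict mapping author -> number of commits.
    Returns the size of the smallest set of authors whose commits
    account for at least `threshold` of all commits."""
    total = sum(commits_by_author.values())
    if total == 0:
        return 0
    cumulative, n = 0, 0
    # Greedily take the most prolific authors first.
    for count in sorted(commits_by_author.values(), reverse=True):
        cumulative += count
        n += 1
        if cumulative >= threshold * total:
            break
    return n

# Example: one dominant author and two occasional contributors.
print(core_contributors({"alice": 90, "bob": 7, "carol": 3}))  # 1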

There is one issue in the implementation of this metric in reaper. We use the GHTorrent data to find the unique contributors of a repository. However, GHTorrent has the notion of “fake users” who do not have GitHub accounts (GHTorrent, 2016) but publish their contributions with the help of real GitHub users. For example, a “fake user” makes a commit to a local Git repository, then a “real user” pushes those commits to GitHub using their account. Sometimes, the real and fake users may be the same person; this is the case when a developer with a GitHub account makes commits with a secondary email address. This tends to inflate the core contributors metric for small repositories with only one real contributor, and could be improved in the future by detecting similar email addresses.

4.3 Continuous Integration

Continuous integration (CI) is a software engineering practice in which developers regularly build, run, and test their code combined with code from other developers. CI is done to ensure that the stability of the system as a whole is not impacted by changes. It typically involves compiling the software system, executing automated unit tests, analyzing system quality, and deploying the software system.

With millions of developers contributing to thousands of source code repositories, the practice of continuously integrating changes ensures that the software system contained within these constantly evolving source code repositories is stable for development and/or release. The use of CI is further evidence that the software project might be considered an engineered software project.

Metric

The metric for the continuous integration dimension may be defined as the piecewise function shown below:

M_{ci}(r) = \begin{cases} 1 & \text{if repository } r \text{ uses a CI service} \\ 0 & \text{otherwise} \end{cases}

Approach

The use of a continuous integration service is determined by looking for a configuration file (required by certain CI services) in the source code repository. An inherent limitation of this approach is that it supports the identification of stateless CI services only. Integration with stateful services such as Jenkins, Atlassian Bamboo, and Cloudship cannot be identified, since there may be no trace of the integration in the repository. The continuous integration services currently supported are: Travis CI, Hound, Appveyor, Shippable, MagnumCI, Solano, CircleCI, and Wercker.
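
A minimal sketch of this check; the mapping from CI service to configuration file name below is illustrative and may not match the exact list reaper consults.

import os

# Illustrative mapping from CI service to the configuration file that
# signals its use (assumed file names, not necessarily reaper's list).
CI_CONFIG_FILES = {
    "Travis CI": ".travis.yml",
    "Appveyor": "appveyor.yml",
    "Shippable": "shippable.yml",
    "CircleCI": "circle.yml",
    "Wercker": "wercker.yml",
    "MagnumCI": ".magnum.yml",
    "Hound": ".hound.yml",
    "Solano": "solano.yml",
}

def uses_ci(repo_path):
    """Return 1 if any known CI configuration file exists at the root
    of the repository, 0 otherwise."""
    for config in CI_CONFIG_FILES.values():
        if os.path.isfile(os.path.join(repo_path, config)):
            return 1
    return 0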

4.4 Documentation

Software developers create and maintain various forms of documentation. Some forms are part of the source files, such as code comments, whereas others are external to source files, such as wikis, requirements, and design documents. One purpose of documentation is to aid the comprehension of the software system for maintenance purposes. Among the many forms of documentation, source code comments were found to be the most important, second only to the source code itself (de Souza et al., 2005). The presence of documentation, in sufficient quantity, indicates that the author thought of maintainability; this serves as partial evidence towards a determination that the software system is engineered.

Metric

In this study, we restrict ourselves to documentation in the form of source code comments. We propose a metric, comment ratio, to quantify a repository’s extent of source code documentation.

Definition 4.3. Comment ratio is the ratio of the number of comment lines of code (cloc) to the number of non-blank lines of source code (sloc) in a repository r.

M_d(r) = \frac{cloc}{sloc + cloc} \qquad (2)


Approach

We use a popular Perl utility, cloc (Danial, 2014), to compute source lines of code and comment lines of code. cloc returns blank, comment, and source lines of code grouped by the different programming languages in the repository. We aggregate the values returned by cloc when computing the comment ratio.

We note that the comment ratio only quantifies the extent of source code documentation exhibited by a repository. We do not consider the quality, staleness, or relevancy of the documentation. Furthermore, we have only considered a single source of documentation (source code comments) in quantifying this dimension. We have not considered other (external) sources of documentation such as wikis, design documents, and any associated README files because identifying and quantifying these external sources may not be as straightforward; we may have to leverage natural language processing techniques to analyze these external documentation artifacts.
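
A minimal sketch of the comment ratio computation, assuming a cloc version that supports JSON output; the aggregation relies on cloc's SUM entry, which totals the per-language counts.

import json
import subprocess

def comment_ratio(repo_path):
    """Compute M_d(r) = cloc / (sloc + cloc) from the aggregated
    per-language counts reported by the cloc utility."""
    output = subprocess.run(
        ["cloc", "--json", repo_path],
        capture_output=True, text=True, check=True).stdout
    counts = json.loads(output)
    # The "SUM" entry aggregates comment and code lines across languages.
    comment = counts["SUM"]["comment"]
    source = counts["SUM"]["code"]
    if comment + source == 0:
        return 0.0
    return comment / (comment + source)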

4.5 History

Eick et al. (2001) have shown that source code must undergo continual change to thwart feature starvation and remain marketable. A change could be a bug fix, feature addition, preventive maintenance, vulnerability resolution, etc. The presence of sustained change indicates that the software system is being modified to ensure its viability. This is partial evidence towards a determination that the software system is engineered.

Metric

In the context of a source code repository, a commit is the unit by which change can be quantified. We propose a metric, commit frequency, to quantify the rate at which a repository is undergoing change.

Definition 4.4. Commit frequency is the average number of commits per month.

M_h(r) = \frac{1}{m} \sum_{i=1}^{m} c_i \qquad (3)

where:

• c_i is the number of commits in month i
• m is the number of months between the first and last commit to the repository r

Approach

Each c_i was computed by counting the number of commits recorded in the database for the month i. However, m was computed as the difference, in months, between the date of the first commit and the date of the last commit to the repository. If m was computed to be 0, the value of the metric was set to 0.
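
A minimal sketch, assuming commit dates have already been retrieved from the database; it returns 0 when the first and last commit fall in the same month, as described above.

from datetime import date

def commit_frequency(commit_dates):
    """commit_dates: list of datetime.date objects, one per commit.
    Returns the average number of commits per month between the first
    and last commit."""
    if not commit_dates:
        return 0.0
    first, last = min(commit_dates), max(commit_dates)
    # Number of months between the first and last commit.
    m = (last.year - first.year) * 12 + (last.month - first.month)
    if m == 0:
        return 0.0
    # (1/m) * sum of per-month commit counts is simply total commits / m.
    return len(commit_dates) / m

# Example: 3 commits over a two-month span -> 1.5 commits per month.
print(commit_frequency([date(2015, 1, 5), date(2015, 1, 20), date(2015, 3, 1)]))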

4.6 Issues

Over the years, a plethora of tools have been developed to simplify the management of large software projects. These tools support some of the most important activities in software engineering, such as the management of requirements, schedules, tasks, defects, and releases. We hypothesize that a software project that employs project management tools is representative of an engineered software project. Thus, evidence of the use of project management tools in a source code repository may indicate that the software system contained within is engineered.

There are several commercial enterprise tools available; however, there is no unified way in which these tools integrate with a source code repository. Source code repositories hosted on GitHub can leverage a deceptively named feature of GitHub, GitHub Issues, to potentially manage the entire lifecycle of a software project. We say deceptively named because an “issue” on GitHub may be associated with a variety of customizable labels which could alter the interpretation of the issue. For example, developers could create user stories as GitHub issues and label them as User Story. The richness and flexibility of the GitHub Issues feature has fueled the development of several third-party services such as Codetree (Codetree Studios, 2016), HuBoard (HuBoard Inc., 2016), waffle.io (CA Technologies, 2016), and ZenHub (Zenhub, 2016). These services use GitHub Issues to support lifecycle management of projects.

Metric

In this study, we assume the sustained use of the GitHub Issues feature to be indicative of management in a source code repository. We propose a metric, issue frequency, to quantify the sustained use of GitHub Issues in a repository.

Definition 4.5. Issue frequency is the average number of issue events transpired per month.

M_i(r) = \frac{1}{m} \sum_{i=1}^{m} s_i \qquad (4)

where:

• s_i is the number of issue events in month i
• m is the number of months between the first and last commit to the repository r

Approach

Each s_i was computed by counting the number of issue events recorded in the database for the month i. However, m was computed as the difference, in months, between the date of the first commit and the date of the last commit to the repository. If m was computed to be 0, the value of the metric was set to 0.

An inherent limitation in the approach is that it does not support the discovery of other project management tools. Integration with other project management tools may not be easy to detect because structured links to these tools may not exist in the repository source code.

4.7 License

A user’s right to use, modify, and/or redistribute a piece of software is dictated by the license that accompanies the software. Licenses are especially important in the context of open source projects, as an article by The Software Freedom Law Center (Center, 2012) discusses. The article highlights the need for, and best practices in, licensing open source software.

Software with no accompanying license is typically protected by default copyright laws, which state that the author retains all rights to the source code (GitHub Inc., 2016). Although there is no legal requirement to include a license in a source code repository, it is considered a best practice. Furthermore, the terms of service agreement of source forges such as GitHub may allow publicly-accessible repositories to be forked (copied) by other users. Thus, including a license in the repository explicitly dictates the rights, or lack thereof, of the user making copies of the repository. The presence of a software license is necessary but not sufficient to indicate that a repository contains an engineered software project, according to our definition of the dimension.

Metric

The metric for the license dimension may be defined as the piecewise function shown below:

M_l(r) = \begin{cases} 1 & \text{if repository } r \text{ has a license} \\ 0 & \text{otherwise} \end{cases}

Approach

The presence of a license in a source code repository is assessed using the GitHub License API. The license API identifies the presence of popular open source licenses by analyzing files such as LICENSE and COPYING in the root of the source code repository.

The GitHub License API is limited in its capabilities in that it does not consider license information contained in README.md or in source code files. Furthermore, the API is still in “developer preview” and, as a result, may be unreliable. On the other hand, any improvement in the capabilities of the API is automatically reflected in our approach. In the interim, however, we have overcome some of the limitations by analyzing the files in a source code repository for license information. We identify license information by searching repository files for excerpts from the license text of the 12 most popular open source licenses on GitHub. For example, we search for “The MIT License (MIT)” to detect the presence of The MIT License. The 12 chosen licenses were enumerated by the GitHub License API (https://api.github.com/licenses). We note that the interim solution implemented may have its own side effect in cases where a repository, with no license of its own, includes the source code files of an external library instead of declaring the library as a dependency. If any of the external library source code files contain excerpts of the license text we search for, the license dimension may falsely indicate that the repository contains a license.
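
A minimal sketch of the interim, excerpt-based detection; the excerpt table lists only three licenses (not the full set of 12) and the candidate file names are illustrative, restricted here to a few root-level files for brevity.

import os

# Illustrative excerpts from popular license texts; the actual
# implementation searches excerpts for all 12 licenses enumerated
# by the GitHub License API.
LICENSE_EXCERPTS = {
    "MIT": "The MIT License (MIT)",
    "Apache-2.0": "Apache License, Version 2.0",
    "GPL-3.0": "GNU GENERAL PUBLIC LICENSE",
}

def has_license(repo_path, candidates=("LICENSE", "LICENSE.txt", "COPYING")):
    """Return 1 if a candidate file at the repository root contains a
    known license excerpt, 0 otherwise."""
    for name in candidates:
        path = os.path.join(repo_path, name)
        if not os.path.isfile(path):
            continue
        with open(path, errors="ignore") as f:
            contents = f.read()
        if any(excerpt in contents for excerpt in LICENSE_EXCERPTS.values()):
            return 1
    return 0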

4.8 Unit Testing

An engineered product is assumed to function as designed for the duration of its lifetime. This assumption is supported by the subjection of the product to rigorous testing. An engineered software product is no different in that the guarantee of the product functioning as designed is provided by rigorous testing. Evidence of testing in a software project implies that the developers have spent the time and effort to ensure that the product adheres to its intended behavior. However, the mere presence of testing is not a sufficient measure to conclude that the software project is engineered; the adequacy of the tests must be taken into consideration as well. The adequacy of the tests contained within a software project may be measured in several ways (Zhu et al., 1997). Metrics that quantify test adequacy by measuring the coverage achieved when the tests are executed are commonly used. Essentially, collecting coverage metrics requires the execution of the unit tests, which may in turn require satisfying all the dependencies that the program under test may have. Fortunately, there are means of approximating the adequacy of tests in a software project through static analysis. Nagappan et al. (2005) have used the number of test cases per source line of code and the number of assertions per source line of code in assessing the test quantity in Java projects. Additionally, Zaidman et al. (2008) have shown that test coverage is positively correlated with the percentage of test code in the system.

Metric

We propose a metric, test ratio, to quantify the extent of the unit testing effort.

Definition 4.6. Test ratio is the ratio of the number of source lines of code in test files to the number of source lines of code in all source files.

M_u(r) = \frac{slotc}{sloc} \qquad (5)

where:

• slotc is the number of source lines of code in test files in the repository r
• sloc is the number of source lines of code in all source files in the repository r

Approach

In order to compute slotc, we must first identify the test files. We achieve this by searching for language- and testing-framework-specific patterns in the repository. For example, test files in a Python project that use the native unit testing framework may be identified by searching for the patterns import unittest or from unittest import TestCase.

We used grep to search for and obtain a list of files that contain specific patterns such as those above. We then use the cloc tool to compute sloc from all source files in the repository and slotc from the test files identified. Occasionally, a software project may use multiple unit testing frameworks, e.g. a Django web application project may use Python’s unittest framework and Django’s extension of unittest, django.test. In order to account for this scenario, we accumulate the test files identified using patterns for multiple language-specific unit testing frameworks before computing slotc.

The multitude of unit testing frameworks available for each of the programming languages considered makes the approach limited in its capabilities. We currently support 20 unit testing frameworks: Boost, Catch, googletest, and Stout gtest for C++; clar, GLib Testing, and picotest for C; NUnit, Visual Studio Testing, and xUnit for C#; JUnit and TestNG for Java; PHPUnit for PHP; django.test, nose, and unittest for Python; and minitest, RSpec, and Ruby Unit Testing for Ruby.

In scenarios where we are unable to identify a unit testing framework, we resort to considering all files in directories named test, tests, or spec as test files.
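
A minimal sketch of the test ratio computation for Python repositories only; the patterns cover just the unittest and django.test frameworks mentioned above, and the simple non-blank line counter stands in for the cloc-based counting.

import os
import re

# Illustrative patterns for the Python test frameworks mentioned above.
TEST_PATTERNS = [
    re.compile(r"import\s+unittest"),
    re.compile(r"from\s+unittest\s+import"),
    re.compile(r"from\s+django\.test\s+import"),
]

def count_sloc(path):
    """Count non-blank lines; a stand-in for the cloc-based counting."""
    with open(path, errors="ignore") as f:
        return sum(1 for line in f if line.strip())

def test_ratio(repo_path):
    """Compute M_u(r) = slotc / sloc over all .py files in the repository."""
    sloc = slotc = 0
    for root, _, files in os.walk(repo_path):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(root, name)
            lines = count_sloc(path)
            sloc += lines
            with open(path, errors="ignore") as f:
                contents = f.read()
            # A file matching any framework pattern is counted as a test file.
            if any(p.search(contents) for p in TEST_PATTERNS):
                slotc += lines
    return slotc / sloc if sloc else 0.0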


5 IMPLEMENTATION

In this section, we describe two of the (potentially) many approaches to implementing the boolean-valued function representing the evaluation framework from Equation (1) in Section 2.1. In both approaches, a repository is represented by the eight (quantifiable) dimensions that were introduced in the previous section.

5.1 Training Data Sets

The boolean-valued function representing the evaluation framework is essentially a classifier capable of classifying a repository as containing an engineered software project or not. The classifier is trained using a set of repositories (called the training data set) that have been manually classified as containing engineered software projects according to some specific definition of an engineered software project. In the context of our study, we demonstrate training the classifier using two definitions of an engineered software project:

• Organization: A repository is said to contain an engineered software project if it is similar to repositories owned by popular software engineering organizations.

• Utility: A repository is said to contain an engineered software project if it is similar to repositories that have a fairly general-purpose utility to users other than the developers themselves. For instance, a repository containing a Chrome plug-in is considered to have a general-purpose utility; however, a repository containing a mobile application developed by a student as a course project may not be considered to have a general-purpose utility.

In the subsections that follow, we describe the approach used to manually identify repositories that conform to each of the definitions of an engineered software project from above. These repositories compose the respective (training) data sets. Additionally, the two (training) data sets were appended with negative instances, i.e. repositories that do not conform to either of the definitions of an engineered software project presented above.

5.1.1 Organization Data Set

The process of identifying the repositories that compose the organization data set was fairly trivial. The532

preliminary step was to manually sift through repositories owned by organizations such as Amazon,533

Apache, Facebook, Google, and Microsoft and identify a set of 150 repositories. The task was divided534

such that three of the four authors independently identified 50 repositories each, ensuring that there was535

no overlap between the individual authors. The manual identification of repositories was supported by a536

set of guidelines that were established prior to the sifting process. These guidelines dictated the aspects of537

a repository that were to be considered in deciding whether to include a repository. Some of the guidelines538

used were (a) repository must be licensed under an open-source license, (b) repository uses comments to539

document code, (c) repository uses continuous integration, and (d) repository contains unit tests. With 150540

repositories identified, the next step was for each author to review the 100 repositories identified by the541

other two authors to mitigate any biases that may have been induced by subjectivity. The repositories that542

at least one author marked for review were discussed further. At the end of the discussion, a decision was543

made to either include the repository or replace it with another repository that was unanimously chosen544

during the discussion.545

scrapy/scrapy, phalcon/incubator, JetBrains/FSharper, and owncloud/calendar are some examples of repositories included in the organization data set that are known to contain engineered software projects.

The organization data set is available for download as a CSV file—organization.csv—from GitHub Gist, accessible at https://gist.github.com/nuthanmunaiah/23dba27be17bbd0abc40079411dbf066.

5.1.2 Utility Data Set
Unlike the process of identifying the repositories that compose the organization data set, the process of identifying the repositories that compose the utility data set was non-trivial. The repositories that composed the utility data set were identified by manually evaluating a random sample of the 1,994,977 repositories that were analyzed by reaper. Similar to the process of composing the organization data set, we used a set of guidelines for deciding whether a repository should be included. The guidelines dictated the various aspects that were to be considered in deciding whether a repository has a general-purpose utility. The guidelines used here were more subjective than those used in the process of composing the organization data set. Some of the guidelines used were: (a) the repository contains sufficient documentation to enable the project contained within to be used in a general-purpose setting, (b) the repository contains an application or service that is used by, or has the potential to be used by, people other than the developers, and (c) the repository does not contain cues indicating that the source code contained within may be an assignment. The potential for bias was mitigated by two authors independently evaluating the same random sample of repositories. The first 150 repositories that both authors agreed to include composed the utility data set.

brandur/casseo, stephentu/forwarder, smcameron/opencscad, and apanzerj/zit are some examples of repositories included in the utility data set that are known to contain engineered software projects.

The utility data set is available for download as a CSV file—utility.csv—from GitHub Gist, accessible at https://gist.github.com/nuthanmunaiah/23dba27be17bbd0abc40079411dbf066.

5.1.3 Negative Instances Data Set
The negative instances data set is essentially a collection of 150 repositories that do not conform to either of the two definitions (i.e. organization and utility) of an engineered software project. The repositories that compose the negative instances data set were identified during the process of composing the utility data set: the first 150 repositories (not owned by an organization) that both authors agreed to exclude from the utility data set were considered to be a part of the negative instances data set.

The organization and utility data set files available for download on GitHub Gist also contain repositories from the negative instances data set.

5.1.4 Data Sets Summary
In this section, we summarize the training data sets using visualizations. The repositories that compose the organization and utility data sets are labeled as “project” and the repositories that compose the negative instances data set are labeled as “not project”. Shown in Figures 2 and 3 are the numbers of repositories of each kind (“project” and “not project”) grouped by programming language in the organization and utility data sets, respectively.

Figure 2. Number of repositories in the organization data set grouped by programming language

The distributions of the number of source lines of code (SLOC) in, and the number of stargazers of, repositories of each kind (“project” and “not project”) in the organization and utility data sets are shown in Figures 4 and 5, respectively. Additionally, shown in Figures 6 and 7 are the distributions of all eight dimensions obtained from the repositories in the organization and utility data sets, respectively.

5.2 Approaches
In this section, we introduce two approaches to implementing the boolean-valued function representing the evaluation framework.


Figure 3. Number of repositories in the utility data set grouped by programming language

Figure 4. Distribution of SLOC in and number of stargazers of repositories in the organization data set

Figure 5. Distribution of SLOC in and number of stargazers of repositories in the utility data set

5.2.1 Score-based Classifier
The score-based classifier is a custom approach to implementing the evaluation framework that allows complete control over the classification. In this approach, the boolean-valued function from Equation (1) (Section 2.1) takes the form shown in Equation (6); a minimal code sketch follows the definitions below.

f(r) = \begin{cases} \text{true} & \text{if } score(r) \geq score_{ref} \\ \text{false} & \text{otherwise} \end{cases}    (6)

score(r) = \sum_{d \in D} h_d(M_d, t_d) \times w_d

where:

• r is the repository to classify.

• D is the set of dimensions along which the repository, r, is evaluated. These are analogous to software engineering practices, e.g., unit testing, documenting source code, etc.

• M_d is the metric that quantifies evidence of the repository, r, employing a certain software engineering practice in the dimension, d. For example, the proportion of comment lines to source lines quantifies documentation.

• t_d is a threshold that must be satisfied by the corresponding metric, M_d, for the repository, r, to be considered engineered along the dimension, d. For example, having a sufficiently high proportion of comment lines to source lines may indicate that the project is engineered along the documentation dimension.

• h_d(M_d, t_d) is a heuristic function that evaluates to 1 if the metric value, M_d, satisfies the corresponding threshold requirement, t_d, and 0 otherwise.

• w_d is the weight that specifies the relative importance of each dimension, d.

• score_ref is the reference score, i.e. the minimum score that a repository must attain in order to be considered to contain an engineered software project.
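
The following is a minimal Python sketch of Equation (6). The thresholds and weights are taken from Table 1 (organization column); the dimension names and the satisfaction direction assumed by heuristic() are illustrative, not reaper's actual implementation.

```python
# Minimal sketch of the score-based classifier in Equation (6).
DIMENSIONS = {
    # name: (threshold t_d, weight w_d), from Table 1 (organization column).
    "architecture": (6.4912e-01, 20),
    "community": (2, 20),
    "continuous_integration": (1, 5),
    "documentation": (1.8660e-03, 10),
    "history": (2.0895, 20),
    "issues": (2.2989e-02, 5),
    "license": (1, 10),
    "unit_test": (1.0160e-03, 10),
}

def heuristic(value, threshold):
    """h_d(M_d, t_d): 1 if the metric satisfies its threshold, 0 otherwise.
    The >= direction assumed here may differ per metric in practice."""
    return 1 if value >= threshold else 0

def score(repository):
    """score(r) = sum of h_d(M_d, t_d) * w_d over all dimensions d in D."""
    return sum(heuristic(repository[d], t) * w
               for d, (t, w) in DIMENSIONS.items())

def classify(repository, score_ref=65):
    """f(r): True if score(r) >= score_ref (65 for the organization data set)."""
    return score(repository) >= score_ref
```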

In the case of the score-based classifier, the training data set is used to determine the thresholds, t_d, and to compute the reference score, score_ref. For all repositories in each of the two training data sets (plus the repositories from the negative instances data set), we collected the eight metric values. Outliers were eliminated using the Peirce criterion (Ross, 2003). For the boolean-valued metrics, 1 (i.e. True) is the threshold. For all other metrics, the minimum non-zero metric value was chosen as the corresponding threshold. The threshold values corresponding to each of the eight dimensions, established from the repositories in each of the two training data sets, are shown in Table 1. Also shown in Table 1 are the relative weights that we have used in our score-based classifier. We note that these weights, while subjective, are an acceptable default. Furthermore, we also considered the limitations in collecting the value of the associated metric from a source code repository when deciding the weights. For example, the approach to evaluating a source code repository along the architecture dimension is more robust, and thus its weight is higher than that of the unit testing dimension, where there are inherent limitations owing to our non-exhaustive set of framework signatures.

Table 1. Dimensions and their corresponding weights, metrics, and thresholds (established from the organization and utility training data sets)

Dimension (d)            Metric (M_d)          Weight (w_d)   Threshold (t_d)
                                                              Organization   Utility
Architecture             Monolithicity         20             6.4912E-01     6.2500E-02
Community                Core Contributors     20             2              2
Continuous Integration   Evidence of CI        5              1              1
Documentation            Comment Ratio         10             1.8660E-03     1.1710E-03
History                  Commit Frequency      20             2.0895         1.5000E-01
Issues                   Issue Frequency       5              2.2989E-02     1.1905E-02
License                  Evidence of License   10             1              1
Unit Test                Test Ratio            10             1.0160E-03     1.4200E-04
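
A minimal sketch of this threshold-establishment scheme follows. The two-standard-deviation trim below merely stands in for Peirce's criterion, which we actually used; the helper is an assumption for illustration only.

```python
import statistics

def remove_outliers(values):
    """Stand-in for Peirce's criterion (Ross, 2003): trim values more than
    two standard deviations from the mean. Illustrative only."""
    if len(values) < 2:
        return list(values)
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) <= 2 * sd]

def establish_threshold(values, boolean=False):
    """Boolean-valued metrics use 1 as the threshold; all other metrics use
    the minimum non-zero metric value after outlier elimination."""
    if boolean:
        return 1
    nonzero = [v for v in remove_outliers(values) if v > 0]
    return min(nonzero) if nonzero else 0
```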

The weights and thresholds from Table 1 were used to compute the scores of all repositories in the organization and utility data sets. The distribution of the scores is shown in Figure 8. The reference score in the organization and utility training data sets was found to be 65 and 30, respectively.

Figure 8. Distribution of scores for repositories in the organization and utility data sets

The score-based classifier approach is flexible and enables finer control over the classification. The weights, in particular, enable the implementer to explicitly define the importance of each dimension. In effect, Equation (6) may be tailored to implement a variety of different classifiers using different sets of dimensions, D, and corresponding metrics, thresholds, and weights. For instance, if there is a need to build a classifier that considers gender bias in the acceptance of contributions in the open-source community (as in the work by Kofink (2015)), one could introduce a new dimension, say bias, define a metric to quantify gender bias in a repository, identify an appropriate threshold, and weight the dimension in relation to the other dimensions that may be pertinent to the study.

Figure 6. Distribution of the dimensions of repositories in the organization data set

5.2.2 Random Forest Classifier
The random forest classifier is a tree-based approach to classification in which multiple trees are trained and each tree casts a vote; the votes are then aggregated to produce the final classification (Breiman, 2001). The random forest classifier is simpler to implement, but the simplicity comes at the loss of finer control over the classification. The training data set is the only way to affect the classifier's performance. While the implementer has the option to ignore dimensions when training the classifier, there is no way to express the relative weighting of dimensions as supported in the score-based classifier.
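
As an illustration, a random forest classifier over the eight dimensions can be trained with scikit-learn along the following lines; the CSV layout and column names are assumptions, not reaper's actual training code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

DIMENSIONS = ["architecture", "community", "continuous_integration",
              "documentation", "history", "issues", "license", "unit_test"]

# Assumed layout: one repository per row with the eight dimension values
# and a label column marked "project" or "not project".
training = pd.read_csv("utility.csv")
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(training[DIMENSIONS], training["label"])

# Classify a repository represented by its eight dimension values.
label = model.predict(training[DIMENSIONS].head(1))
```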

6 RESULTS

In this section, we present (a) the results from the validation of the classifiers and (b) the results from applying the classifiers to identify (or predict) engineered software projects in a sample of 1,994,977 GitHub repositories. Since we have two different classifiers (score-based and random forest) trained using two different data sets (organization and utility), the validation and prediction analyses are repeated four times.

6.1 Validation
In this section, we present the approach to and results from the validation of the score-based and random forest classifiers trained with the organization and utility data sets. We considered validation from two perspectives: internal, in which the performance of the classifiers themselves was validated, and external, in which the performance of the classifiers was compared to that of a classification scheme that uses the number of stargazers (Ray et al., 2014) as the criterion. We used false positive rate (FPR), false negative rate (FNR), precision, recall, and F-measure to assess the classification performance. The validation was carried out in the context of a set of 200 repositories, called the validation set, for which the ground truth classification was manually established.
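
For reference, these metrics reduce to simple confusion-matrix arithmetic. The helper below is illustrative (not part of reaper) and also reflects the NA convention used later when no positive classifications are produced.

```python
def performance(tp, fp, tn, fn):
    """Compute the reported metrics from confusion-matrix counts."""
    fpr = fp / (fp + tn)               # false positive rate
    fnr = fn / (fn + tp)               # false negative rate
    # Precision (and hence F-measure) is undefined (NA) when no positive
    # classifications are produced, i.e. tp + fp == 0.
    precision = tp / (tp + fp) if (tp + fp) > 0 else None
    recall = tp / (tp + fn)
    if precision is None or precision + recall == 0:
        f_measure = None
    else:
        f_measure = 2 * precision * recall / (precision + recall)
    return fpr, fnr, precision, recall, f_measure
```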

Figure 7. Distribution of the dimensions of repositories in the utility data set

6.1.1 Establishing the Ground Truth
The performance evaluation of any classifier typically involves using the classifier to classify a set of samples for which the ground truth classification is known. Along similar lines, to evaluate the performance of the score-based and random forest classifiers, we manually composed a set of 200 repositories (100 repositories that have been assessed to contain an engineered software project and 100 repositories that do not). We followed a process similar to the one used to identify the repositories that compose the utility data set. To ensure an unbiased evaluation, the validation set was independently evaluated by two authors, and only those repositories that both authors agreed on were included and appropriately labeled.

Shown in Figure 9 is the distribution of the eight dimensions collected from the repositories in the validation set. As seen in the figure, repositories known to contain engineered software projects tend to have higher median values along almost all dimensions.

Figure 9. Distribution of dimensions of repositories in the validation set

The validation set is available for download as a CSV file—validation.csv—from GitHub Gist, accessible at https://gist.github.com/nuthanmunaiah/23dba27be17bbd0abc40079411dbf066.

6.1.2 Internal Validation
In this validation perspective, the performance of the score-based and random forest classifiers trained using the organization and utility data sets is evaluated in the context of the validation set.

Organization Data Set
The performance of the score-based and random forest classifiers trained with the organization data set is shown in Table 2.

Table 2. Performance of score-based and random forest classifiers trained with the organization data set

Classifier      FPR   FNR   Precision   Recall   F-measure
Score-based     14%   32%   83%         68%      75%
Random forest   4%    59%   91%         41%      57%

Clearly, the score-based classifier performs better than the random forest classifier in terms of F-measure. If a lower false positive rate is desired, the random forest model may be better suited, as it has a considerably lower false positive rate than the score-based classifier. We now present some examples of repositories from the organization data set that were misclassified by both the score-based and random forest classifiers.

1. False Positive - software-engineering-amsterdam/sea-of-ql is the quintessential example of a repository that should not be included in a software engineering study because it is essentially a collaboration space where all students enrolled in a particular course develop their individual software projects. The repository received a score of 95, which is close to the perfect score of 100. Analyzing the dimensions of the repository, we found, quite unsurprisingly, that almost all dimensions satisfied the threshold requirement. The repository was contributed to by 24 developers with an average of over 330 commits made every month. Although the repository does not have a license, a limitation of our implementation of the license dimension seems to have identified a license in library files that may have been included in the source code repository.

2. False Negative - kzoll/ztlogger is a repository that contains PHP scripts to log website traffic information. The repository received a score of 20, which is considerably lower than the reference score of 65. Analyzing the dimensions of the repository, we found that the repository was contributed to by a single developer, had a limited architecture, and did not use continuous integration, issues, or unit testing.

Utility Data Set
The performance of the score-based and random forest classifiers trained with the utility data set is shown in Table 3.

Table 3. Performance of score-based and random forest classifiers trained with the utility data set

Classifier      FPR   FNR   Precision   Recall   F-measure
Score-based     88%   1%    53%         99%      69%
Random forest   18%   17%   82%         83%      83%

Clearly, the random forest model performs better than the score-based model. A particularly surprising outcome of the validation is the large false positive rate of the score-based classifier. The large false positive rate indicates that the classifier may have classified almost all repositories as containing an engineered software project. We now present some examples of repositories from the utility data set that were misclassified by both the score-based and random forest classifiers.

1. False Positive - mer-packages/qtgraphicaleffects is a repository that contains source code for certain visual items that may be used with images or videos. The repository was manually classified as not containing an engineered software project because of its unusual repository organization. Furthermore, looking at other repositories owned by the mer-packages organization, it appears that the repository may actually be a copy of the source code of the Qt Graphical Effects module,5 being ported to the Mer6 mobile platform. The repository received a score of 95. Analyzing the dimensions of the repository, we found that almost all dimensions satisfied the threshold requirement. Although the repository was not manually classified as containing an engineered software project, the inclusion of this particular repository may not be a problem, as the project, possibly a clone from a different repository, has a general-purpose utility.

2. False Negative - EsotericSoftware/dnsmadeeasy is a repository that contains a simple Java tool that periodically updates the IP addresses in the DNS servers maintained by DNS Made Easy.7 Clearly, the tool has a general-purpose utility for any customer using the DNS Made Easy service. However, the repository received a score of 10. Analyzing the source code contained in the repository and its dimensions, we found that the project is too simple: the architecture dimension could not be computed because the repository contains only a single source file, which has no source code comments. The repository does have a license but does not use continuous integration, unit testing, or issues. Nevertheless, the ground truth classification of this repository was that it contains an engineered software project because of the utility of the Java tool.

5 http://doc.qt.io/qt-5/qtgraphicaleffects-index.html
6 http://merproject.org/
7 http://www.dnsmadeeasy.com/

6.1.3 External Validation
In this validation perspective, the performance of the score-based and random forest classifiers is compared to that of the stargazers-based classifier used in prior literature (Ray et al., 2014). We previously noted that the popularity of a repository is one potential criterion in identifying a data set for research studies. The intuition is that popular repositories (i.e. repositories with many “stargazers”) will contain actual software that people like and use (Jarczyk et al., 2014). This intuition has been the basis for several well-received studies. For example, the papers by Ray et al. (2014) on programming languages and code quality (which has over 34 citations) and Guzman et al. (2014) on commit comment sentiment analysis (which has over 15 citations) use the number of stargazers as a way to select projects for their case studies.8 These papers use the top starred projects in various languages, which are bound to be extremely popular. The mongodb/mongo repository used in the data set by Ray et al. (2014), for instance, has over 8,927 stars.

8 Citation counts retrieved from Google Scholar

In Tables 2 and 3 from the previous section, we presented the performance of the score-based and random forest classifiers trained using the organization and utility data sets, respectively. We now use the stargazers-based classifier to classify the repositories from the validation set. In using a stargazers-based classifier, Ray et al. (2014) ordered and picked the top 50 repositories in each of the 19 popular languages. We applied the same filtering scheme to a sample of 1,994,977 GitHub repositories and established the minimum number of stargazers to be 1,123. In other words, a repository is classified as containing an engineered software project (based on popularity) if it has 1,123 or more stars. As an exploratory exercise, we also evaluated other thresholds (500, 50, and 10) for the number of stargazers. The performance metrics from the stargazers-based classifier for varying thresholds of the number of stargazers are shown in Table 4. In cases where the classifier produced no positive classifications (i.e. both true positives and false positives are zero), precision and F-measure cannot be computed and are shown as NA in the table.
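
A sketch of this filtering scheme follows; the repos structure (a list of dicts with "language" and "stargazers" keys) is an assumption for illustration.

```python
from collections import defaultdict

def stargazers_threshold(repos, top_n=50):
    """Keep the top `top_n` repositories per language by stargazer count and
    return the minimum stargazer count among the kept repositories
    (1,123 in our sample of 1,994,977 repositories)."""
    by_language = defaultdict(list)
    for repo in repos:
        by_language[repo["language"]].append(repo["stargazers"])
    kept = []
    for counts in by_language.values():
        kept.extend(sorted(counts, reverse=True)[:top_n])
    return min(kept)

def stargazers_classify(repo, threshold):
    """A repository is classified as containing an engineered software
    project if it meets the stargazer threshold."""
    return repo["stargazers"] >= threshold
```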

Table 4. Performance of the stargazers-based classifier against the ground truth for varying thresholds of the number of stargazers

Threshold   FPR   FNR    Precision   Recall   F-measure
1,123       0%    100%   NA          0%       NA
500         0%    100%   NA          0%       NA
50          0%    89%    100%        11%      20%
10          1%    73%    96%         27%      42%

As seen in Table 4, at the high thresholds (1,123 and 500), the stargazers-based classifier misclassifies all repositories known to contain engineered software projects. As we lower the threshold, the performance improves, albeit marginally. The most striking limitation of the stargazers-based classifier is its low recall. While a repository with a large number of stars is likely to contain an engineered software project, the contrary is not always true.

The validation results indicate that by using the stargazers-based classifier, researchers may be excluding a large set of repositories that contain engineered software projects but may not be popular. In contrast, the score-based and random forest classifiers trained on the organization and utility data sets perform much better in terms of recall while achieving an acceptable level of precision.

As a next level of validation, we compared the performance of the best classifier from our study to the best stargazers-based classifier from Table 4. In terms of F-measure, the random forest classifier trained using the utility data set exhibited the best performance. Similarly, the stargazers-based classifier with 10 as the threshold for the number of stargazers performed the best. We compare these two classifiers not in terms of the performance evaluation metrics but in terms of the actual predicted classifications. In Table 5, we present the percentage of repositories where (a) both classifiers agreed with the ground truth, (b) both classifiers disagreed with the ground truth, (c) the prediction by the random forest classifier matches the ground truth but that of the stargazers-based classifier does not, and (d) the prediction by the stargazers-based classifier matches the ground truth but that of the random forest classifier does not.

Table 5. Comparison of predictions from the random forest classifier trained with the utility data set and the stargazers-based classifier with a threshold of 10 stargazers for validation set repositories with different ground truth labels

Ground Truth   Both Agree   Both Disagree   Random Forest Agrees   Stargazers Agrees
Project        26%          17%             83%                    27%
Not project    82%          1%              82%                    99%

For repositories that we believe to be useful in MSR data sets, both approaches incorrectly classify the repositories as “not project” 17% of the time. However, the random forest classifier classifies 83% of the repositories correctly, whereas the stargazers-based classifier classifies only 27% of them correctly. A prime example of a project that was missed by the stargazers-based classifier but correctly classified by the random forest classifier is jruby/jruby-ldap from the team that maintains JRuby, an implementation of Ruby on the Java Virtual Machine (JVM). The repository contains a Ruby gem for LDAP support in JRuby. Our random forest classifier classifies the repository as a project due to its architecture, commit history, test suite, and documentation, among other factors. However, the repository has only 7 stars. While the stargazers-based approach misses jruby/jruby-ldap, this repository may be a worthy candidate for a software engineering study. We also note that there was only one case in which the stargazers-based classifier predicted a repository to be a project while the random forest classifier did not. Hence, any repository classified as a project by the stargazers-based classifier is highly likely to be classified the same by the random forest classifier as well.

In the case where the ground truth classification is “not project”, both approaches correctly classified repositories as “not project” 82% of the time. In addition, the stargazers-based classifier correctly classified repositories as “not project” 99% of the time, where the random forest classifier did so 82% of the time. Consider the repository liorkesos/drupalcamp, which has sufficient documentation, commit history, and community to be classified as a “project” by the random forest classifier; however, the repository is essentially a collection of static PHP files of a Drupal Camp website, not incredibly useful in a general software engineering study. The stargazers-based classifier predicts liorkesos/drupalcamp as “not project” only for its lack of stars.

Summary
We can make three observations about the suitability of the score-based and random forest classifiers to help researchers generate useful data sets. First, the strict stargazers-based classifier ignores many valid projects but enjoys an almost 0% false positive rate. Second, the random forest classifier trained with the utility data set is able to correctly classify many “unpopular” projects, helping extend the population from which sample data sets may be drawn. Third, the score-based and random forest classifiers have their own imperfections as well. Our classifiers are likely to introduce false positives into research data sets. Perhaps our classifiers could be used as an initial selection criterion, augmented by the stargazers-based classifier. Nevertheless, we have shown that more work can be done to improve the data collection methods in software engineering research.

6.2 Prediction
In this section, we present the results from applying the score-based and random forest classifiers to identify engineered software projects in a sample of 1,994,977 GitHub repositories. Shown in Table 6 are the numbers of repositories classified as containing an engineered software project by the score-based and random forest classifiers. With the exception of the score-based classifier trained using the utility data set, the number of repositories classified as containing an engineered software project is, on average, 12.45% of the total number of repositories analyzed. We can also see from Table 6 that there are far fewer repositories similar to the ones owned by software development organizations than there are repositories similar to the ones that have a general-purpose utility. The number of repositories predicted to be similar to the ones that have a general-purpose utility by the score-based classifier is considerably high. A likely explanation for this unusually high number of repositories is the relatively low reference score of 30 established from the utility data set.

Table 6. Number of repositories classified as containing an engineered software project by the score-based and random forest classifiers trained using the organization and utility data sets

Data Set       Classifier      # Repositories (% Total)
Organization   Score-based     224,064 (11.23%)
               Random forest   118,073 (5.92%)
Utility        Score-based     1,767,435 (88.59%)
               Random forest   402,815 (20.19%)

Shown in Figure 10 is a grouping of the prediction results by programming language.

Figure 10. Number of repositories classified by the score-based and random forest classifiers grouped by programming language

As mentioned in Section 4.1 (Architecture), the computational complexity may prevent the collection of the monolithicity metric for certain large repositories. There were 4,451 such repositories in our data set (a mere 0.22% of the total number of repositories). On average, 1,770 of the 4,451 repositories (39.77%) were classified as containing an engineered software project with the architecture dimension defaulted to zero.

The entire data set may be viewed and downloaded as a CSV file from https://reporeapers.github.io. The data set includes the metric values collected from each repository. The data set available online contains information pertaining to 2,247,526 GitHub repositories, but 252,105 of those repositories were inactive at the time reaper was run and, as a result, their metric values are all NULL.


We hope the data set will help researchers overcome the limitation posed by the arduous task of manually identifying repositories to study, especially in the mining software repositories community.

7 RELATED WORK

An early work by Nagappan (2007) revealed opportunities and challenges in studying open source repositories in an empirical context. Kalliamvakou et al. (2014) described various perils of mining GitHub data; specifically, Peril IV is: “A large portion of repositories are not for software development”. In that work, the researchers manually analyzed a sample of 434 GitHub repositories and found that approximately 37% of them were not used for software development. In our study, the best performing classifier predicted that approximately 20.19% of 1,994,977 GitHub repositories contain engineered software projects. Our prediction results highlight the reality that even though 63% of GitHub repositories are used for software development, only a small percentage of those repositories actually contain projects that software engineering researchers may be interested in studying.

Dyer et al. (2013) and Bissyande et al. (2013) have created domain-specific languages—Boa and Orion, respectively—to help researchers mine data about software repositories. Dyer et al. (2013) have used Boa to curate a sizable number of source code repositories from GitHub and SourceForge; however, only Java repositories are currently available. In contrast, we have curated over 1,994,977 repositories spanning seven programming languages with the intention of simplifying the process of study selection in large-scale source code mining research.

On the open source community front, Ohloh.net (now Black Duck Open Hub) is a publicly-editable directory of free and open source software projects. The directory is curated by Open Hub users, much like a public wiki, resulting in an accurate and up-to-date directory of open source software. The website provides interesting visualizations of the software projects curated by Black Duck Open Hub. Tung et al. (2014) have used Open Hub to search for repositories containing engineered software projects as perceived by Open Hub users.

8 USAGE SCENARIOS

In the validation of the score-based and random forest classifiers, we found that these classifiers recalled a considerably higher number of repositories than a stargazers-based classifier. However, improving the recall of repositories is not as impressive as enabling researchers to exercise finer control over the aspects of repositories that are most pertinent in selecting study subjects for their research. For instance, consider the following studies from prior literature that have all used some ad hoc approach to identify a set of repositories.

• In a study about GitHub issue tracking practices, Bissyande et al. (2013) started with a random sample of the first 100,000 repositories returned by the GitHub API. The repositories that did not have the GitHub Issues feature were then removed. The issues dimension may have helped in identifying only those repositories that have the GitHub Issues feature turned on and in use.

• In a study of the use of continuous integration practices, Vasilescu et al. (2014) used the GHTorrent (Gousios, 2013) database and applied a series of filters to identify a small subset of 223 GitHub repositories to include. The study was restricted to repositories that used Travis CI. The use of the continuous integration dimension may have simplified the process of selecting all repositories that use continuous integration.

• In a study of license usage in Java projects on GitHub, Vendome (2015) retrieved the metadata for all Java repositories and selected a random sample of 16,221 repositories. The license dimension may have been ideal for selecting all Java repositories that are licensed as open source software.

• In a study of testing practices in open source projects, Kochhar et al. (2013) used the GitHub API to select 50,000 repositories, specifically stating that they removed toy projects by manually examining and including only famous projects such as JQuery and Ruby on Rails. The unit testing dimension may have helped by removing the need to manually examine the repositories.

Admittedly, the stargazers-based classifier is much simpler to use, and Occam's razor would suggest using a simpler solution instead of an (unnecessarily) complex one. However, by using the stargazers-based classifier, a large number of potentially relevant repositories may be ignored.


The aforementioned studies from prior literature focused on specific aspects of software development such as licensing, testing, or issue tracking. While the score-based and random forest classifiers may be used to support such studies, the real benefit of the classifiers may only be evident when researchers need access to repositories that simultaneously satisfy a variety of requirements. For instance, consider the following hypothetical studies in which the classifiers would prove useful in identifying a set of repositories to include.

• A study investigating the relationship between collaboration and testing in open source projects could use the community and unit testing dimensions to identify repositories.

• A study investigating the evolution of documentation in open source projects could use the history and documentation dimensions to identify repositories.

We hope that some of these hypothetical studies become a reality and that our data set and classifiers help overcome the barrier to entry.

9 DISCUSSION

In our study, we attempted to identify repositories that contain engineered software projects according to two different definitions of the term. The implementation of one of the definitions involved training two classifiers using repositories in the organization data set. One might assume that the outcome of applying these classifiers can be matched by merely considering all repositories owned by any organization on GitHub as containing an engineered software project. However, not all repositories owned by organizations contain engineered software projects. We reuse the validation set from Section 6.1 here to further explore the nuances of repositories owned by organizations.

The validation set contains 200 repositories, 100 of which are known to contain an engineered software project while the remaining 100 are known not to. 45 of the 200 repositories are owned by organizations.

Shown in Figure 11 is a comparison of the distributions of the eight dimensions collected from repositories owned by organizations but having different manual classification labels. As seen in the figure, the difference in the distributions of the dimensions provides qualitative evidence to support the notion that not all repositories owned by organizations are similar to one another.

Along similar lines, we compared the distributions of the eight dimensions collected from repositories known to contain an engineered software project but owned by organizations and by users. The comparison is shown in Figure 12. As seen in the figure, the medians of most dimensions are comparable between the repositories owned by users and those owned by organizations. This similarity in dimensions is exactly the aspect that our approach aims to take advantage of.

Figure 12. Comparing the distribution of dimensions of repositories known to contain an engineered software project owned by organizations and users

Shown in Table 7 is a breakdown of the prediction results from Section 6.2 into repositories owned by organizations and by users. As seen in the table, a considerable number of repositories that were classified as containing an engineered software project are owned by individual users. On the other hand, a sizable number of repositories that were classified as not containing an engineered software project are owned by organizations. In effect, filtering repositories based solely on the owner being an organization may lead to the exclusion of potentially relevant, user-owned repositories, or the inclusion of repositories that may not contain an engineered software project, or both.

Table 7. Segregation of repositories into those owned by organizations and those owned by individual users, classified by the score-based and random forest classifiers trained with the organization data set

                                 # Repositories
Classifier      Predicted Label  Organization   User
Score-based     project          64,649         159,415
                not project      157,034        1,613,879
Random forest   project          41,747         71,639
                not project      179,936        1,701,655


Figure 11. Comparing the distribution of dimensions of repositories with different manual classification labels but all owned by organizations

10 THREATS TO VALIDITY

10.1 Subjectivity of Dimensions, Thresholds, and Weights
The dimensions used to represent source code repositories in the classification model are subjective; however, we believe that the eight dimensions we have used are an acceptable default. The open-source tool (reaper) developed to measure the dimensions was designed with extensibility in mind. Extending reaper to add or modify dimensions is fairly trivial, and the process is detailed in the README.md file in the reaper GitHub repository (Munaiah et al., 2016c).

In addition to the dimensions, the thresholds and weights used in the score-based classifier are subjective as well. Here again, we consider the weights we have used to be an acceptable default; however, alternative weighting schemes may be used to mitigate the subjectivity to a certain extent. Some of these alternative approaches are using (a) a machine learning algorithm to evaluate the importance of dimensions using repositories in a training data set, (b) a uniform weighting across dimensions, or (c) a voting-based weighting scheme. Researchers using reaper can modify a single file, manifest.json, that contains the list of dimensions with their respective thresholds, weights, and settings (e.g. 80% as the cutoff when measuring core contributors) to define their own version of the score-based classifier.
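
The following is a hypothetical illustration of this kind of single-file configuration; the key names are invented for the sketch, and the actual schema of reaper's manifest.json is documented in its repository.

```python
import json

# Hypothetical manifest; the real schema is defined in the reaper repository.
MANIFEST = json.loads("""
{
  "dimensions": [
    {"name": "community", "threshold": 2, "weight": 20,
     "options": {"core_contributor_cutoff": 0.8}},
    {"name": "unit_test", "threshold": 0.001016, "weight": 10},
    {"name": "license", "threshold": 1, "weight": 10}
  ]
}
""")

# Changing a threshold, weight, or option here redefines the score-based
# classifier without touching any of the measurement code.
for dimension in MANIFEST["dimensions"]:
    print(dimension["name"], dimension["threshold"], dimension["weight"])
```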

As an exploratory exercise, we used the random forest classifiers trained with the organization and utility data sets to assess the relative importance of the variables (dimensions) in the model. Shown in Figure 13 is the outcome of the exercise. We used the relative importance to adjust the weights in both score-based classifiers. When the classifiers with adjusted weights were used to classify repositories in the sample of 1,994,977 GitHub repositories, the number of repositories classified as containing engineered software projects increased by 22% and 2% for the score-based classifiers trained using the organization and utility data sets, respectively. We, however, chose to retain our weighting scheme as an acceptable default.

Figure 13. Relative importance of dimensions in the organization and utility data sets

10.2 reaper-induced Bias
In describing the dimensions measured by reaper in Section 4, we outlined the limitations in our approach to collecting the dimensions' metrics from a repository. These limitations may lead to the induction of bias in the repositories selected. For example, if the goal of a study is to analyze the proliferation of unit testing in the real world, using reaper will inherently bias the repositories selected toward the unit testing frameworks that are currently recognized by reaper. However, the researcher may configure reaper such that the unit testing dimension is ignored in the computation of the score, thereby mitigating the skew in the repositories selected.

10.3 Extensibility
In addition to reaper, the publicly-accessible data set available for download from the project website (https://reporeapers.github.io/), containing the raw values of the eight dimensions for 2,247,526 repositories, is an important contribution of our work. A compute cluster with close to 200 nodes took over a month to analyze these repositories. As an alternative to modifying reaper and rerunning the analysis, researchers can develop a simple script to directly use the raw values with their own thresholds and weights to compute customized scores for the repositories.
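
For example, a script along the following lines could compute customized scores directly from the published CSV. The file and column names are assumptions (consult the data set's actual header), and the thresholds shown simply reuse the organization values from Table 1.

```python
import pandas as pd

# Organization thresholds and weights from Table 1; researchers would
# substitute their own values here.
THRESHOLDS = {"architecture": 6.4912e-01, "community": 2,
              "continuous_integration": 1, "documentation": 1.8660e-03,
              "history": 2.0895, "issues": 2.2989e-02,
              "license": 1, "unit_test": 1.0160e-03}
WEIGHTS = {"architecture": 20, "community": 20, "continuous_integration": 5,
           "documentation": 10, "history": 20, "issues": 5,
           "license": 10, "unit_test": 10}

# File and column names are assumptions for illustration.
repos = pd.read_csv("dataset.csv").dropna()   # inactive repositories have NULL metrics
repos["score"] = sum((repos[d] >= THRESHOLDS[d]).astype(int) * WEIGHTS[d]
                     for d in WEIGHTS)
engineered = repos[repos["score"] >= 65]      # e.g. the organization reference score
```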

11 CONCLUSION

The goal of our work was to understand the elements that constitute an engineered software project. We proposed eight such elements, called dimensions. The dimensions are: architecture, community, continuous integration, documentation, history, issues, license, and unit testing. We developed an open-source tool called reaper that was used to measure the dimensions of 2,247,526 GitHub repositories spanning seven popular programming languages. Two sets of repositories, each corresponding to a different definition of an engineered software project, were composed, and score-based and random forest classifiers were trained.

The classifiers were then used to identify all repositories in the sample of 1,994,977 GitHub repositories that were similar to the ones that conform to the definitions of an engineered software project. Our best performing random forest model predicted that 20.19% of the 1,994,977 GitHub repositories contain engineered software projects.



ACKNOWLEDGMENTS

We thank Nimish Parikh for his contributions during the initial stages of the research. We thank our peers for providing us with their GitHub authentication keys, which helped us attain the API bandwidth required. We also thank the Research Computing Group at Rochester Institute of Technology for providing us the computing resources necessary to execute a project of this scale.

REFERENCES958

Allamanis, M. and Sutton, C. (2013). Mining Source Code Repositories at Massive Scale using Language959

Modeling. In Proceedings of the 10th Working Conference on Mining Software Repositories, pages960

207–216. IEEE.961

Belady, L. A. and Lehman, M. M. (1976). A model of large program development. IBM Systems journal,962

15(3):225–252.963

Bird, C., Nagappan, N., Murphy, B., Gall, H., and Devanbu, P. (2011). Don’t Touch My Code!: Examining964

the Effects of Ownership on Software Quality. In Proceedings of the 19th ACM SIGSOFT symposium965

and the 13th European conference on Foundations of software engineering, pages 4–14. ACM.966

Bissyande, T., Thung, F., Lo, D., Jiang, L., and Reveillere, L. (2013). Orion: A Software Project Search967

Engine with Integrated Diverse Software Artifacts. In Engineering of Complex Computer Systems968

(ICECCS), 2013 18th International Conference on, pages 242–245.969

Bissyande, T. F., Lo, D., Jiang, L., Reveillere, L., Klein, J., and Traon, Y. L. (2013). Got issues? Who cares970

about it? A large scale investigation of issue trackers from GitHub. In 2013 IEEE 24th International971

Symposium on Software Reliability Engineering (ISSRE), pages 188–197.972

Bissyande, T. F., Thung, F., Lo, D., Jiang, L., and Reveillere, L. (2013). Popularity, Interoperability,973

and Impact of Programming Languages in 100,000 Open Source Projects. In Computer Software and974

Applications Conference (COMPSAC), 2013 IEEE 37th Annual, pages 303–312. IEEE.975

Bowman, I. T., Holt, R. C., and Brewster, N. V. (1999). Linux as a case study: Its extracted software976

architecture. In Proceedings of the 21st International Conference on Software Engineering, ICSE ’99,977

pages 555–563, New York, NY, USA. ACM.978

Breiman, L. (2001). Random Forests. Machine Learning, 45(1):5–32.979

CA Technologies (2016). Waffle.io - Work Better on GitHub Issues. https://waffle.io/. Ac-980

cessed: 2016-03-11.981

Carlo Zapponi (2016). GitHut - Programming Languages and GitHub. http://githut.info.982

Accessed: 2016-03-11.983

Center, S. F. L. (2012). Managing copyright information within a free software project.984

Codetree Studios (2016). Codetree - GitHub Issues, Managed. https://codetree.com/. Accessed:985

2016-03-11.986

Danial, A. (2014). CLOC – Count Lines of Code. http://cloc.sourceforge.net/. Accessed:987

2016-03-11, Version: 1.62.988

de Souza, S. C. B., Anquetil, N., and de Oliveira, K. M. (2005). A study of the documentation essential989

27/29

PeerJ Preprints | https://doi.org/10.7287/peerj.preprints.2617v1 | CC BY 4.0 Open Access | rec: 6 Dec 2016, publ:

Page 28: Curating GitHub for engineered software projects1 Curating GitHub for Engineered Software 2 Projects Nuthan Munaiah1, Steven Kroh1, Craig Cabrey1, and Meiyappan 3 Nagappan2 4 1Department

to software maintenance. In Proceedings of the 23rd Annual International Conference on Design of990

Communication: Documenting & Designing for Pervasive Information, SIGDOC ’05, pages 68–75,991

New York, NY, USA. ACM.992

Dyer, R., Nguyen, H. A., Rajan, H., and Nguyen, T. N. (2013). Boa: A language and infrastructure993

for analyzing ultra-large-scale software repositories. In 35th International Conference on Software994

Engineering, ICSE’13, pages 422–431.995

Eick, S. G., Graves, T. L., Karr, A. F., Marron, J. S., and Mockus, A. (2001). Does code decay? assessing996

the evidence from change management data. Software Engineering, IEEE Transactions on, 27(1):1–12.997

Georg Brandl and Pygments Contributors (2016). Pygments: Python Syntax Highlighter. http:998

//pygments.org/. Accessed: 2016-03-11.999

GHTorrent (2016). GHTorrent Hall of Fame. http://ghtorrent.org/halloffame.html.1000

Accessed: 2016-03-11.1001

GHTorrent (2016). GHTorrent: The relational DB schema. http://ghtorrent.org/1002

relational.html. Accessed: 2016-03-11.1003

GitHub, Inc. (2016a). GitHub API v3 — GitHub Developer Guide. https://developer.github.1004

com/v3/. Accessed: 2016-03-11.1005

GitHub, Inc. (2016b). GitHub Archive. https://www.githubarchive.org/. Accessed: 2016-1006

06-19.1007

GitHub Inc. (2016). No License - Choose a License. http://choosealicense.com/no-1008

license/. Accessed: 2016-03-11.1009

Gousios, G. (2013). The GHTorrent Dataset and Tool Suite. In Proceedings of the 10th Working Conference on Mining Software Repositories, MSR '13, pages 233–236, Piscataway, NJ, USA. IEEE Press.

Guzman, E., Azocar, D., and Li, Y. (2014). Sentiment analysis of commit comments in GitHub: An empirical study. In Proceedings of the 11th Working Conference on Mining Software Repositories, pages 352–355. ACM.

HuBoard Inc. (2016). HuBoard - GitHub issues made awesome. https://huboard.com/. Accessed: 2016-03-11.

Iowa State University (2016). Publications Related to Boa - Boa - Iowa State University. http://boa.cs.iastate.edu/papers/. Accessed: 2016-03-11.

Jarczyk, O., Gruszka, B., Jaroszewicz, S., Bukowski, L., and Wierzbicki, A. (2014). GitHub projects. Quality analysis of open-source software. In Social Informatics, pages 80–94. Springer.

Kalliamvakou, E., Gousios, G., Blincoe, K., Singer, L., German, D. M., and Damian, D. (2014). The Promises and Perils of Mining GitHub. In Proceedings of the 11th Working Conference on Mining Software Repositories, MSR 2014, pages 92–101, New York, NY, USA. ACM.

Kochhar, P. S., Bissyande, T. F., Lo, D., and Jiang, L. (2013). Adoption of Software Testing in Open Source Projects–A Preliminary Study on 50,000 Projects. In 2013 17th European Conference on Software Maintenance and Reengineering, pages 353–356.

Kofink, A. (2015). Contributions of the Under-appreciated: Gender Bias in an Open-source Ecology. In Companion Proceedings of the 2015 ACM SIGPLAN International Conference on Systems, Programming, Languages and Applications: Software for Humanity, SPLASH Companion 2015, pages 83–84, New York, NY, USA. ACM.

Laplante, P. (2007). What Every Engineer Should Know about Software Engineering. CRC Press.

Lester, A. (2014). Beyond grep: ack 2.14, a source code search tool for programmers. http://beyondgrep.com/. Accessed: 2016-03-11, Version: 2.14.

Maier, M., Emery, D., and Hilliard, R. (2001). Software architecture: Introducing IEEE Standard 1471. Computer, 34(4):107–109.

Mockus, A., Fielding, R. T., and Herbsleb, J. (2000). A case study of open source software development: the Apache server. In Proceedings of the 22nd International Conference on Software Engineering, pages 263–272. ACM.

Munaiah, N., Gonzalez, K. C., and Meneely, A. (2016a). GitHub - andymeneely/attack-surface-metrics: Scripts for collecting metrics of the attack surface. https://github.com/andymeneely/attack-surface-metrics. Accessed: 2016-10-26.

Munaiah, N., Kroh, S., Cabrey, C., and Nagappan, M. (2016b). Home of the reporeapers. https://reporeapers.github.io. Accessed: 2016-03-1.

Munaiah, N., Kroh, S., Cabrey, C., and Parikh, N. (2016c). reaper - reference implementation. https://github.com/RepoReapers/reaper. Accessed: 2016-03-11.

Nagappan, N. (2007). Potential of open source systems as project repositories for empirical studies working group results. In Basili, V., Rombach, D., Schneider, K., Kitchenham, B., Pfahl, D., and Selby, R., editors, Empirical Software Engineering Issues. Critical Assessment and Future Directions, volume 4336 of Lecture Notes in Computer Science, pages 103–107. Springer Berlin Heidelberg.

Nagappan, N., Williams, L., Osborne, J., Vouk, M., and Abrahamsson, P. (2005). Providing test quality feedback using static source code and automatic test suite metrics. In Software Reliability Engineering, 2005. ISSRE 2005. 16th IEEE International Symposium on, pages 10 pp. IEEE.

Ray, B., Posnett, D., Filkov, V., and Devanbu, P. (2014). A Large Scale Study of Programming Languages and Code Quality in GitHub. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 155–165. ACM.

Ross, S. M. (2003). Peirce's criterion for the elimination of suspect experimental data. Journal of Engineering Technology, 20(2):38–41.

Syer, M. D., Nagappan, M., Hassan, A. E., and Adams, B. (2013). Revisiting prior empirical findings for mobile apps: an empirical case study on the 15 most popular open-source Android apps. In CASCON, pages 283–297.

Tung, Y.-H., Chuang, C.-J., and Shan, H.-L. (2014). A framework of code reuse in open source software. In Network Operations and Management Symposium (APNOMS), 2014 16th Asia-Pacific, pages 1–6. IEEE.

Vasilescu, B., van Schuylenburg, S., Wulms, J., Serebrenik, A., and van den Brand, M. G. J. (2014). Continuous Integration in a Social-Coding World: Empirical Evidence from GitHub. In 2014 IEEE International Conference on Software Maintenance and Evolution, pages 401–405.

Vendome, C. (2015). A Large Scale Study of License Usage on GitHub. In 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, volume 2, pages 772–774.

Whitehead, J., Mistrík, I., Grundy, J., and van der Hoek, A. (2010). Collaborative software engineering: concepts and techniques. In Collaborative Software Engineering, pages 1–30. Springer.

Zaidman, A., Van Rompaey, B., Demeyer, S., and Van Deursen, A. (2008). Mining software repositories to study co-evolution of production & test code. In Software Testing, Verification, and Validation, 2008 1st International Conference on, pages 220–229. IEEE.

Zazworka, N., Shaw, M. A., Shull, F., and Seaman, C. (2011). Investigating the Impact of Design Debt on Software Quality. In Proceedings of the 2nd Workshop on Managing Technical Debt, MTD '11, pages 17–23, New York, NY, USA. ACM.

Zenhub (2016). ZenHub - Project Management for Agile Teams on GitHub. https://www.zenhub.io/. Accessed: 2016-03-11.

Zhu, H., Hall, P. A., and May, J. H. (1997). Software unit test coverage and adequacy. ACM Computing Surveys (CSUR), 29(4):366–427.
