Software evaluation via users’ feedback at runtime


Description

Nada Sherief. Software Evaluation via Users’ Feedback at Runtime. The 18th International Conference on Evaluation and Assessment in Software Engineering (EASE 2014) - Doctoral Symposium. London, UK. 13-14 May 2014.

Transcript of Software evaluation via users’ feedback at runtime

Page 1: Software evaluation via users’ feedback at runtime

EASE'14 | London, UK

Software Evaluation via Users’ Feedback at Runtime

Presented by: Nada Sherief ([email protected])

12 May 2014

Page 2: Software evaluation via users’ feedback at runtime


Research Stage


Page 3: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 4: Software evaluation via users’ feedback at runtime


The stakeholders of software evaluation include the users, who utilize the software to meet their needs and expectations, i.e. their requirements.

Users’ acceptance and efficient use of the software are a main subject of evaluation.

In a dynamic world, users’ acceptance and view of the software are also dynamic and need to be captured throughout the software’s lifetime to stay up to date.

Explicit evaluation feedback from users is a main mechanism for communicating their perception of the role and quality of the software at runtime.

Introduction(1/2)


Page 5: Software evaluation via users’ feedback at runtime


Users’ feedback is meaningful information given by the users of the software, based on their experience of using it.

Feedback can contribute to identifying problems in the software, modifying existing requirements, or requesting new requirements, leading to better user acceptance of the software.

Since the subject of evaluation feedback is the role of the system in meeting requirements, this feedback enables a user-centric, requirements-based evaluation.

Introduction(2/2)


Page 6: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 7: Software evaluation via users’ feedback at runtime


Traditional evaluation methods, in which evaluation is mainly a design-time activity, have a range of limitations, including the following:

◦ Limitations in predicting and simulating the actual context of use.

◦ The unstructured and varied ways in which users provide their feedback, typically in natural language.

◦ Capturing the opinion of only an elite group of users.

What is the Problem?


Page 8: Software evaluation via users’ feedback at runtime


Crowdsourcing is a new form of problem solving, which is typically online and relies on a large number of people.

Crowdsourcing is a flexible business model that requires a less strict recruitment and contracting process.

Crowdsourcing is typically used for non-critical tasks and for tasks which naturally require input from the general public and where the right answer is based on people’s acceptance and dynamics.

Our motivating principle is to use the power of the crowd for software evaluation.

Research Motivation(1/3)


Page 9: Software evaluation via users’ feedback at runtime


Crowd-based evaluation has various potential benefits in comparison to centralized approaches:

◦ An ability to evaluate software while users are using it in practice (i.e. at runtime).

◦ Access to a large crowd, which enables fast and scalable evaluation.

◦ An ability to keep the evaluation knowledge up to date.

◦ Access to a wider and more diverse set of users and contexts of use that are unpredictable by designers at design time.

◦ Enabling users to introduce new quality attributes and requirements.

Research Motivation(2/3)


Page 10: Software evaluation via users’ feedback at runtime


Despite these anticipated benefits, the literature is still limited in providing engineering approaches and foundations for developing crowdsourcing frameworks for software evaluation.

Most of the available evaluation approaches that use crowdsourcing apply the paradigm implicitly and rely on commercial platforms, such as mTurk (https://www.mturk.com/).

We advocate that studying and specifying the structure of crowdsourced feedback has a major impact on making the obtained evaluation knowledge more meaningful and helpful to the software and its developers.

Research Motivation(3/3)


Page 11: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 12: Software evaluation via users’ feedback at runtime


There are several established approaches where the role of users is central, such as user-centered design, user experience, agile methodologies, and usability testing.

These techniques involve users in the software development life cycle, including prototyping and evaluation.

These techniques are expensive and time-consuming when used for highly variable software designed to be used by a large crowd in contexts unpredictable at design time.

Related Work(1/6)


Page 13: Software evaluation via users’ feedback at runtime


Our work on crowd-sourced evaluation shares the motivation of end-user computing: providing end users with the ability to change the system according to their views so that it meets their needs.

In contrast to end-user computing, crowdsourced evaluation relies on the analysis of collective users’ feedback, rather than an individual user’s feedback, to evolve the system or to adapt it by switching between configurations at runtime.

This helps cope with large-scale systems where a large number of variations exist.

Related Work(2/6)


Page 14: Software evaluation via users’ feedback at runtime


We also give particular focus to studying requirements engineering models that support variability.

Models can represent the prominent aspects of the software and, when used formally, enable automated reasoning to derive essential information from the software employing them.

Since we are proposing a crowd-sourced evaluation process, we expect that using these models to represent stakeholders’ goals, software features, and configurations, and relating users’ feedback to them, would be easy for users.

They will also provide systematic assistance to developers in extracting new requirements and reported problems, as the sketch below illustrates.
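
The following is a minimal, illustrative sketch (not the metamodel being developed in this research) of how feedback items could be linked to feature-model elements so that simple automated reasoning, here averaging ratings per feature, becomes possible. All class and field names are assumptions made for illustration.

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    sub_features: list = field(default_factory=list)  # variability: optional/alternative children

@dataclass
class FeedbackItem:
    user_id: str
    feature: str      # name of the feature-model element the feedback refers to
    context: dict     # e.g. {"device": "mobile", "connectivity": "offline"}
    rating: int       # 1 (poor) .. 5 (excellent)
    comment: str = ""

def average_rating_per_feature(items):
    # Trivial automated reasoning enabled by the explicit link to the model.
    totals, counts = defaultdict(int), defaultdict(int)
    for item in items:
        totals[item.feature] += item.rating
        counts[item.feature] += 1
    return {name: totals[name] / counts[name] for name in totals}

messenger = Feature("messenger", [Feature("text chat"), Feature("voice notes")])
feedback = [
    FeedbackItem("u1", "voice notes", {"device": "mobile"}, 2, "recording is slow"),
    FeedbackItem("u2", "voice notes", {"device": "tablet"}, 3),
    FeedbackItem("u1", "text chat", {"device": "mobile"}, 5),
]
print(average_rating_per_feature(feedback))  # {'voice notes': 2.5, 'text chat': 5.0}

Keeping the link between feedback and model elements explicit is what would let a tool trace collective feedback back to specific features and configurations.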

Related Work(3/6)


Page 15: Software evaluation via users’ feedback at runtime


Recently, more work has been directed towards inventing systematic methods for representing and obtaining users’ feedback.

◦ Processes for continuous and context-aware user input were proposed.

◦ An empirical study on users’ involvement for the purpose of software evaluation and evolution has been conducted.

◦ Crowd feedback was also advocated as a means of shaping software adaptation.

Related Work(4/6)


Page 16: Software evaluation via users’ feedback at runtime


In general, when designing an empirical study in software engineering, engaging the right type and number of participants is always a challenge.

Researchers often have to make trade-offs to be able to conduct the study.

The crowdsourcing paradigm and the mTurk platform have been used to evaluate the usability of a school’s website.

◦ The claimed advantages are greater participant involvement, low cost, high speed, and diverse user backgrounds.

◦ The disadvantages include lower-quality feedback, fewer interactions, more spammers, and less focused user groups.

Related Work(5/6)


Page 17: Software evaluation via users’ feedback at runtime


A general observation of the current state of the art is that it treats crowdsourcing as one holistic concept, without addressing its peculiarities and its different configurations.

Aspects like the interaction style and the model of obtained feedback are generally overlooked.

Our work is a first attempt to address that range of challenges.

Related Work(6/6)


Page 18: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 19: Software evaluation via users’ feedback at runtime


To enable users to structure their feedback themselves. This will lead to more effective management of their feedback and will enrich their role as evaluators.

To reuse the crowd-sourced evaluation knowledge to recommend that a user adopt a certain configuration in a certain context, based on the collective evaluation of other or similar users in similar contexts, in a way similar to collaborative filtering (a minimal sketch follows this list).

We plan for our approach to intelligently enhance the system, for example by learning from user interactions, or by learning which features solve which problems in order to resolve similar cases.
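
The fragment below is a minimal sketch of the collaborative-filtering idea, not the recommendation mechanism this research will actually adopt: it recommends an untried configuration to a user based on ratings given by similar users. The users, configurations, and scores are hypothetical, and context matching is omitted for brevity.

import math

# ratings[user][configuration] = score (1..5) that user gave that configuration
ratings = {
    "alice": {"config-A": 5, "config-B": 2},
    "bob":   {"config-A": 4, "config-B": 1, "config-C": 5},
    "carol": {"config-B": 4, "config-C": 2},
}

def similarity(a, b):
    # Cosine similarity over the configurations both users have rated.
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[c] * b[c] for c in common)
    norm_a = math.sqrt(sum(a[c] ** 2 for c in common))
    norm_b = math.sqrt(sum(b[c] ** 2 for c in common))
    return dot / (norm_a * norm_b)

def recommend(user):
    # Score each configuration the user has not tried by a similarity-weighted average.
    scores, weights = {}, {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], other_ratings)
        if sim <= 0:
            continue
        for config, score in other_ratings.items():
            if config in ratings[user]:
                continue
            scores[config] = scores.get(config, 0.0) + sim * score
            weights[config] = weights.get(config, 0.0) + sim
    if not scores:
        return None
    return max(scores, key=lambda c: scores[c] / weights[c])

print(recommend("alice"))  # config-C

In practice the same idea would be restricted to users whose contexts match, so that the recommendation reflects "similar users in similar contexts".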

Research Aims


Page 20: Software evaluation via users’ feedback at runtime


To develop a novel user-driven feedback modelling language that enables users to define their feedback structures and acquisition methods (a sketch of what such a structure could look like follows this list).

To develop a crowdsourced engineering framework designed for both users and engineers to model and correlate collective users’ feedback, context, and software features at runtime for the purpose of software evaluation.

To implement a plug-in tool that can be used alongside a main application to utilize the developed engineering framework. Since our modelling framework is intended to be used by end users, an implementation will add the “look and feel” to the whole story.
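
To make the first objective concrete, the snippet below sketches one possible shape for a user-defined feedback structure. It is an illustrative assumption, not the modelling language under development; all field names and values are hypothetical, although the acquisition channels and the anonymity option echo themes raised in the focus groups.

# One user's self-defined feedback structure, expressed as plain data.
slow_feature_report = {
    "name": "slow-feature-report",
    "acquisition": {
        "method": "pull",              # user opens the feedback panel ("push" = software asks)
        "timing": "real-time",         # captured while the feature is being used
        "channels": ["text", "snapshot", "audio"],
    },
    "fields": [
        {"id": "feature",  "type": "choice",      "source": "feature-model"},
        {"id": "severity", "type": "scale",       "range": [1, 5]},
        {"id": "keywords", "type": "multi-choice",
         "options": ["slow", "confusing", "crashes", "missing option"]},
        {"id": "details",  "type": "free-text",   "optional": True},
    ],
    "visibility": "anonymous",         # privacy theme: users may prefer to stay anonymous
}

def is_well_formed(structure):
    # The kind of trivial structural check a supporting plug-in tool might run.
    return bool(structure.get("name")) and bool(structure.get("fields"))

assert is_well_formed(slow_feature_report)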

Research Objectives


Page 21: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 22: Software evaluation via users’ feedback at runtime


In the following we discuss the research questions (RQ) of this work.

These questions contribute to the bigger picture by addressing the important aspects that can be incorporated into the new user-driven feedback modelling language for the purpose of continuously evaluating the software while it is in use (i.e. at runtime).

◦ RQ1) What is the current support in the literature for the acquisition and structuring of users’ feedback in the context of software evaluation?

◦ RQ2) What are the key feedback qualities that can help developers extract requirements knowledge from the collected end-user feedback?

Research Questions (1/2)


Page 23: Software evaluation via users’ feedback at runtime


◦ RQ3) What methods and mechanisms could be adopted to reuse others’ feedback or feedback structures? How can they be adapted in our framework?

◦ RQ4) What key aspects could be included in our framework to increase users’ willingness to actively participate in this new role as evaluators, and how can software tools support that?

◦ RQ5) What validations must be developed to assess whether the proposed approach enhances the acquisition and analysis of user feedback? How does this help in planning the software’s adaptation and evolution?

Research Questions (2/2)


Page 24: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 25: Software evaluation via users’ feedback at runtime


A multi-session focus group study was conducted; focus groups are a popular qualitative research technique in software engineering.

The main purpose of this focus group was to elicit requirements from various stakeholders in order to understand how crowdsourcing should be practiced in terms of feedback gathering.

It was also used to explore the opportunities for using crowdsourcing mechanisms to obtain user feedback during software development.

Our main aim was to collect some insights from both users and developers that can inform our research questions.

Results(1/6)


Page 26: Software evaluation via users’ feedback at runtime


Both junior and senior software developers were invited to join the first session, whose emphasis was to:

◦ understand how software developers normally gather user feedback

◦ understand how they think good feedback should be structured

◦ explore how they collaborate and communicate with users during development, as this could inform the way we design feedback requests

Results(2/6)


Page 27: Software evaluation via users’ feedback at runtime


The second session was conducted with regular software users who are used to providing feedback.

The emphasis of this session was to:

◦ explore how users would like feedback requests to look

◦ understand what drives them to provide feedback

◦ capture their concerns about not being involved enough, and also about being involved more than they expect

The session was also used to investigate their motivations for taking part in projects and to learn about their experience of that participation.

Results(3/6)


Page 28: Software evaluation via users’ feedback at runtime


Following the approach presented in the literature, five thematic areas were formed.

The five thematic areas are: subject, structure, reusability, engagement, and involvement.

The five thematic areas, together with example quotes from the sessions, are summarized in the following table:

Results(4/6)


Page 29: Software evaluation via users’ feedback at runtime


Thematic Area | Theme | Example
Subject | Method | “Snapshots, Text, or Audio”
Subject | Clarity | “reach a clear problem specification”
Subject | Specificity | “specific to the software’s features”
Structure | Timing | “real-time feedback”
Structure | Level of Detail | “giving detailed feedbacks”
Structure | Measurement | “a group of predefined keywords”, “structured in a specific way”
Structure | Specificity | “give feedback to specific problems”
Reusability | Storage | “There can be a bank of statements”
Reusability | Rated | “users can view feedbacks and rate how much they agree/disagree with the feedback”
Reusability | Statistical | “statistics should occur to represent how much a feedback was meaningful, useful/useless”
Reusability | Variability awareness | “the system can increase user awareness by giving him a list of friends’ experiences with features”

Results(5/6)


Page 30: Software evaluation via users’ feedback at runtime


Results(6/6)


Thematic Area | Theme | Example
Engagement | Recognition | “a friendly confirmation for participation”, “more personalized options for feedback”
Engagement | Value | “it encourages users to give feedback if they can meet with analysts to discuss problems in some ways”
Engagement | Channel | “the interactions should be very simple”
Engagement | Transparency | “it would increase the users’ trust and willingness to give feedback if they know the cycle of how their feedback will be used”
Involvement | Privacy | “would like to stay anonymous”, “it is important if the user can control who is able to see his feedback”
Involvement | Response | “the software’s speed of response to my feedback affects my willingness to give feedback”
Involvement | Support | “there can be videos to explain to the users what they can do (in order to provide feedback)”
Involvement | Rewards | “the system should do some favor to the user who gave useful feedback that helped enhance the system”, “trying new features or versions for free if the user gave good feedback”, “users who gave useful feedback can be accredited to increase their reputations”

Page 31: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 32: Software evaluation via users’ feedback at runtime


We will design a questionnaire to uncover further relationships between the concepts derived from the focus groups and to confirm the results with a larger population, further addressing RQ1.

This will increase the formality and validity of our approach, and help us reach more generalizable conclusions.

We will design and conduct a set of interviews with engineers to gain insights into what makes users’ feedback useful for evolution and maintenance decisions, with a further focus on RQ2.

Data will be analyzed using categorization and thematic analysis.

Research Methods(1/3)


Page 33: Software evaluation via users’ feedback at runtime


To address RQ5, one of the key techniques we may use during our validation phase is Controlled Experiments.

Controlled experiments have several advantages. They will allow us to:

◦ conduct well-defined, focused studies that produce statistically meaningful results;

◦ capture relationships between users’ contexts of use and different usages of the software;

◦ provide good explanations of why results do or do not occur;

◦ capture important variables and the relationships between them, such as how ease of use and/or user incentives affect users’ ability to provide good feedback (a small analysis sketch follows this list).
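
As an illustration only, not the actual study design or data, the snippet below shows the kind of analysis such a controlled experiment enables: comparing the rated quality of feedback from a group using a structured feedback tool against a control group giving free-text feedback, using made-up scores.

from scipy import stats

# Hypothetical feedback-quality ratings (1..5) assigned to each feedback item;
# the two groups differ only in how the feedback was collected.
structured_tool_group = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2, 3.7]
free_text_group       = [3.2, 3.5, 2.9, 3.8, 3.1, 3.4, 3.0, 3.6]

# Welch's t-test: does structuring the feedback change its rated quality?
result = stats.ttest_ind(structured_tool_group, free_text_group, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

Whether a t-test, a non-parametric alternative, or a regression over several variables is appropriate would depend on the final experiment design.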

Research Methods(2/3)


Page 34: Software evaluation via users’ feedback at runtime


Controlled experiments have drawbacks, as does any empirical method in software engineering.

Yet the better the experiment design, the better the results and relationships that can be captured.

Research Methods(3/3)


Page 35: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 36: Software evaluation via users’ feedback at runtime


We summarize our research plan and milestones for the whole research project. The milestones are as follows:

◦ First, we will complete the empirical investigation using questionnaires and interviews, as mentioned earlier.

◦ Second, after analyzing the data from all methods, we will reach a set of conclusions that will be incorporated into the design of our intended crowdsourced engineering framework for software evaluation.

Research Plan(1/2)


Page 37: Software evaluation via users’ feedback at runtime


◦ Third, we will begin designing our prototype tool. It will put the developed framework into practice, and will include both verification of the designed model and an easy-to-use user interface for feedback modelling and acquisition.

◦ Finally, we will apply our approach and prototype in practice for both validation and refinement, and report the results.

Research Plan(2/2)


Page 38: Software evaluation via users’ feedback at runtime

Figure 1. Flowchart for the research plan.

Page 39: Software evaluation via users’ feedback at runtime


Introduction, Problem and Motivation, Related Work, Research Aims and Objectives, Research Questions, Results, Research Method, Research Plan, Acknowledgement

Agenda


Page 40: Software evaluation via users’ feedback at runtime


I would like to thank my supervisors, Dr Raian Ali and Prof. Keith Phalp, for their invaluable feedback and support.

The research was supported by an FP7 Marie Curie CIG grant (the SOCIAD Project) and by Bournemouth University – Fusion Investment Fund and the Graduate School PGR Development Fund.

Acknowledgements


Page 41: Software evaluation via users’ feedback at runtime


Questions?


Page 42: Software evaluation via users’ feedback at runtime


The End
