
Honours Report: Tailoring Feedback in ITSs

Danny Rajendra


Abstract

The most effective way for students to learn is face-to-face with a qualified tutor. Intelligent Tutoring Systems (ITSs) aim to provide a learning experience that approaches the standard of learning with a human tutor. This project examines why and how human tutors are so effective, and how these factors can be implemented in SQL-Tutor. An evaluation of the changes was conducted. The evaluation showed that the changes made no significant contribution to the students' learning.


Contents

1 Introduction
   1.1 Report Structure

2 Background
   2.1 Intelligent Tutoring Systems
   2.2 SQL and SQL-Tutor
   2.3 Relevant Work

3 The Human Tutor

4 The Interviews
   4.1 Possible Improvements
       4.1.1 Factors that affect feedback
       4.1.2 Determining when help is needed
       4.1.3 Population learning

5 Evaluation Study
   5.1 Method
       5.1.1 Subjects
       5.1.2 Design and Procedure
   5.2 Results
       5.2.1 Pre-test
       5.2.2 Student logs
       5.2.3 Post-test
   5.3 Discussion

6 Conclusion


Chapter 1

Introduction

Computer Aided Instruction (CAI) systems were the first systems to attempt to tutor students using computers. The conventional CAI approach is to first determine how a good teacher would respond to each possible student action and then build a branching program with each response explicitly programmed. CAI systems typically deliver their instruction in a rigid and structured manner with no regard for the differences in the way individuals assimilate information [1]. Since CAI systems lack an understanding of how individuals assimilate information, there is frequently a mismatch between the system's internal processes and the student's cognitive processes. In order for a computer-based educational system to achieve the same kind of individualized attention that a student would receive from a human tutor, it must be able to reason about the domain and the learner. This has prompted research in the field of Intelligent Tutoring Systems (ITSs).

Intelligent Tutoring Systems began as an attempt to address the deficiencies of CAI systems by adapting to the knowledge and needs of individual students. ITSs are computer-based training systems that incorporate techniques for communicating and transferring knowledge and skills to students. They provide a means to capture expert knowledge, which the system uses to compose dynamic instructional interactions. The expert can program the knowledge used to make their decisions, rather than simply program the decision itself, devoid of content.

The major advantage that ITSs offer is individualized instruction without the expense of one-to-one tutoring. However, most ITSs are not yet able to do this as well as a human tutor. The objective of this project is to extend the capabilities of SQL-Tutor, an ITS used to tutor students in Structured Query Language (SQL), by trying to individualize feedback to match the student's level of ability.


1.1 Report Structure

Chapter 2 presents an introduction to the background work, describing ITSs related to this project, and SQL.

Chapter 3 identifies the most effective way for students to learn and examines the difficulties of trying to identify how human tutors individualize feedback to students.

Chapter 4 reports on interviews with four experienced tutors, expands on the inferences that can be drawn from them, and describes the changes implemented on the basis of those inferences.

Chapter 5 evaluates the changes to SQL-Tutor, and a discussion of the results follows.

Finally, Chapter 6 presents the conclusion and describes some ideas for further work.


Chapter 2

Background

There are two areas of research associated with this study: ITSs, in particular SQL-Tutor, and SQL.

2.1 Intelligent Tutoring Systems

ITSs originated from the artificial intelligence (AI) movement of the late 1950s and early 1960s. At the time, researchers such as Alan Turing, Marvin Minsky and John McCarthy thought that computers that could “think” as humans were just around the corner, and it seemed reasonable to assume that once “thinking” machines were created, they could perform any task associated with human thought, such as tutoring.

Researchers developed a number of CAI systems designed to present the student with a problem, and to receive and record the student's response. However, these systems did not explicitly address the issue of how people learn and did not enjoy much success. By the early 1970s, many researchers had shifted the focus from knowledge transfer to knowledge communication, and this led to the birth of ITSs. By making use of research in the field of Artificial Intelligence, ITSs were able to employ knowledge representation strategies to model a student's cognitive processes. Using an accurate model of both the student's and the expert's knowledge, ITSs are able to provide instruction at the appropriate pace and level of abstraction for the student.

Although there is no standard architecture for ITSs, four components emerge from the literature as typical of an ITS [2]. For the purpose of conceptualization and design, it is often easier to think of an ITS as consisting of several interdependent modules. A typical architecture [3] consists of: the domain module, the student model, the tutor module and the user interface module, as shown in Figure 2.1. The domain module contains the domain knowledge needed for the generation of problems and/or solving the problems given to the student. The student model stores a model, or road-map, of the student's knowledge and is the most important part of an ITS, since,


in the teaching process, the student has to play a central role. The tutor module determines the way in which the ITS reacts, and, finally, the user interface module allows communication between the student and the ITS. Together, these modules generate and control the interaction between the ITS and the student.

[Figure 2.1: Typical architecture of ITSs — the domain module, student model and tutor module, with a user interface mediating between the system and the student]
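
To make this division of responsibilities concrete, here is a minimal sketch of how the four modules could cooperate in a single tutoring step. All class names, method names and the control flow are illustrative assumptions, not taken from any actual ITS.

    # Hypothetical sketch of the four-module ITS architecture.
    class DomainModule:
        def ideal_solution(self, problem):
            # Domain knowledge: produce a correct solution to the problem.
            return problem["solution"]

    class StudentModel:
        def __init__(self):
            self.history = []

        def record(self, problem, errors):
            # Road-map of the student's knowledge: log what went wrong.
            self.history.append((problem["id"], errors))

    class TutorModule:
        def feedback(self, model, errors):
            # Decide how the ITS reacts, consulting the student model.
            return "Correct!" if not errors else "There are some mistakes."

    def tutoring_step(domain, model, tutor, problem, answer):
        # In a full system the user interface module would collect 'answer'
        # from the student and display the returned feedback.
        errors = [] if answer == domain.ideal_solution(problem) else ["mismatch"]
        model.record(problem, errors)
        return tutor.feedback(model, errors)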

The range of programs that qualify as Intelligent Tutoring Systems is vast, ranging from primitive to sophisticated. In Section 2.3, a discussion of several ITSs is presented.

2.2 SQL and SQL-Tutor

Structured Query Language (SQL) is a language for accessing data in a relational database and was adopted as the industry standard in 1986 [4]. It is a simple, highly structured language. However, students are known to experience difficulties in learning it. The main problems with learning SQL by working with a database management system (DBMS) are that the error messages are limited to syntax only, and that the DBMS is unable to deal with semantic errors [5]. With this in mind, SQL-Tutor has been developed for tutoring upper-level undergraduate students in SQL [6]. It is designed as a practise environment, where students are able to practise solving problems.
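
To illustrate the distinction, consider the two invented queries below (they echo the kind of mistake shown later in Figure 4.1, but are not taken from SQL-Tutor's problem set). A DBMS rejects the first with a syntax error; the second runs without complaint even though it answers the wrong question.

    # Two illustrative queries, embedded as strings for convenience.
    syntax_error = "SELECT lname FORM employee"
    # 'FORM' is a typo for 'FROM'; the DBMS reports a syntax error.

    semantic_error = """SELECT lname
                        FROM employee, department
                        WHERE dname = 'Research'"""
    # This parses and runs, but the join condition between the two tables
    # is missing, so every employee is paired with the Research department
    # row. No DBMS error message flags this semantic mistake.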

SQL-Tutor has a simple architecture, as shown in Figure 2.2. A screenshot of the interface is illustrated in Figure 2.3. Domain knowledge is represented as constraints. Student modelling is undertaken by the constraint-based modelling (CBM) module. The pedagogical module observes every action of the student. The solution entered by the student is sent to the CBM module, which determines the relevant constraints. The violated constraints are recorded in the student model (which is basically a list of violated


constraints). The pedagogical module then generates appropriate feedback. There are currently six levels of feedback implemented in SQL-Tutor, each determining how much information is provided to the student. The levels of feedback, arranged in increasing order of information content, are: positive or negative feedback, error flag, hint, partial solution, complete solution and all errors. Positive or negative feedback tells the student whether the solution is correct or not. An error flag informs the student about the clause in which the error occurred. A hint gives a more detailed description of the error, based on a violated constraint. A partial solution displays the correct solution for the incorrect clause. A complete solution displays the correct solution to the problem. Finally, an all-errors feedback message lists the violated constraints for the problem. Figure 2.4 displays how the feedback level is selected in SQL-Tutor. When a student attempts a problem and does not provide a correct solution, he or she receives positive or negative feedback. If the student attempts the same problem again and gets it wrong, he or she receives an error flag message. Finally, if the student gets it wrong again, a hint is presented. The other feedback levels (partial solution, complete solution and all errors) are available only if the student chooses them.

[Figure 2.2: Architecture of SQL-Tutor — the interface connects the student to the CBM module and the pedagogical module, which draw on the constraints, the student models, and the databases, problems and solutions]

In SQL-Tutor, instruction can be individualized in several ways: by generating feedback dynamically, selecting topics and problems, and fading the scaffolding on the basis of the student model. This project focuses on generating feedback dynamically; in particular, on trying to dynamically select feedback for each student based on his or her level of ability. The benefit of selecting feedback that matches the student's level of ability is that students learn better and faster.


Figure 2.3: The interface of SQL-Tutor


[Figure 2.4: Current feedback selection method in SQL-Tutor — all students receive positive/negative feedback on the first try, an error flag on the second try and a hint on the third try; the partial solution, complete solution and all errors levels are available only if requested]
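
The fixed progression in Figure 2.4 can be summarised in a few lines. The sketch below is a paraphrase of the behaviour described above, not SQL-Tutor's actual code.

    # Paraphrase of the current feedback selection policy (Figure 2.4).
    AUTOMATIC_LEVELS = ["positive/negative", "error flag", "hint"]

    def automatic_feedback(attempt_number):
        # First failed try: positive/negative; second: error flag; third
        # and later: hint. Partial solution, complete solution and all
        # errors are only ever given when the student requests them.
        index = min(attempt_number, len(AUTOMATIC_LEVELS)) - 1
        return AUTOMATIC_LEVELS[index]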

2.3 Relevant Work

Much research has examined issues related to modelling the student's cognitive processes, rather than improving (individualizing) the quality of feedback from the tutoring system. Those that have improved feedback are briefly summarised below:

ANDES is an ITS that tutors students in classical Newtonian physics. It has two modules that it uses to provide feedback and hints tailored to the student's knowledge and goals [7]. The helper module tries to understand which plan or goals the student is pursuing as the student does an activity, and offers help specific to what it perceives to be helpful to the student's current plan. This means that ANDES individualizes feedback with respect to the individual's plan. If it detects an important misconception or a bad learning habit, it may engage the student in extensive multimedia instructional activities. The assessor module uses Bayesian reasoning to maintain a long-term model of the student's level of mastery of concepts and the student's preferred methods of problem solving and learning. The integration of the two modules means that ANDES can adapt its help as the student's level of knowledge changes over time.

RAPITS is an ITS that teaches introductory Pascal loop construction to


students from diverse backgrounds [8]. The system is aimed at novices in Pascal programming who might have a range of computing and cultural backgrounds. The aim of this system is to build a tutoring system with versatile teaching strategies. RAPITS uses a meta-teaching strategy, where it determines the appropriate teaching strategy based on the student's history.

CIRCSIM-TUTOR is a natural language-based intelligent tutoring system to teach first-year medical students about the reflex control of blood pressure [9]. Students solve small problems and are tutored by Socratic dialogue with the computer. CIRCSIM-TUTOR individualizes feedback by searching for clue words in the student's response. For example, if the student does not understand the question and types “I do not think I understand the question”, the system will look up the list of clue words and use the response associated with the particular clue word. CIRCSIM-TUTOR individualizes feedback with respect to the student's response.

MFD is an ITS that teaches arithmetic to fifth and sixth graders [10]. MFD learns the time an individual student requires to solve a problem by training a neural network, and then uses the neural network to make the necessary predictions. Using the predictions, MFD determines whether the student is making insufficient progress on the current problem, and if so generates an easier problem. Besides intelligently selecting a topic on which a student should work, and dynamically constructing problems that are appropriate for the student, MFD also provides hints that match the student's level of ability.

The common link between the four ITSs described above is that all adapt their feedback based on certain aspects of the student's behaviour. Despite years of research and development in the field of ITSs, there are not many usable tutoring systems in use. This should encourage re-examination of the objectives of, and approaches to, developing ITSs.


Chapter 3

The Human Tutor

Empirical studies have demonstrated that the most effective way for students to learn is to work alone, face-to-face with a qualified tutor equipped with instructional material and laboratory equipment.

The main reason why human tutors are so effective is that they are able to tailor feedback to students: respond to students' questions about the subject matter, determine when students need help, and identify the type of help needed. Unfortunately, the need for qualified tutors makes this preferred form of instruction very costly and rare. ITSs aim to provide a learning experience for each student that approaches the standard of learning that he or she would receive in one-to-one tutoring from an expert tutor equipped with all the necessary training aids. The ideal ITS would be one modelled on a human tutor. However, trying to identify how human tutors individualize feedback to students is a complex topic, because the process depends on many different factors that the tutor takes into account on any one occasion. For example, the tutor has several different ways of responding, and chooses an appropriate one depending on:

• What model of the learning process the tutor has in mind for each outcome of the instruction, and what steps the tutor thinks the student needs to follow to reach the outcome.

• Where the tutor thinks the student is in the sequence of steps the student needs to follow.

• The tutor's diagnosis of why the student might have made a mistake (which is based on the model of the student's abilities, background knowledge, and personality that the tutor has in mind).

All this varies with different kinds of subject matter, how easy or difficult the tutor thinks the subject matter is, and how much time and effort the tutor thinks the subject matter deserves. It also varies with the tutor's estimate of the motivational level of the student.


The main goal of ITSs is to provide a learning experience for each student that approaches the standard of learning that he or she would receive in one-to-one tutoring from an expert tutor. One method of achieving this is to provide individualization of instruction (providing instruction that matches the student's level of ability). Individualization of instruction is important because students may have different prior knowledge, learning experience, particular interests, motivation, or personality, and these factors affect how a human tutor interacts with the student. It is proposed that SQL-Tutor be modified so that it better individualizes feedback to different students. In order to gain a better understanding of how human tutors individualize instruction or feedback, and what factors they take into account during the process of adapting it, several tutors were interviewed.


Chapter 4

The Interviews

Four tutors, each with several years of experience tutoring database topics, were interviewed. Only four tutors were interviewed because they had both the tutoring experience and the domain expertise. Unskilled tutors may have domain expertise, but lack the tutoring experience that is often crucial to successful interactions between the tutor and the student. The tutors were given a brief description of the project and its aims before being interviewed. Each interview session lasted approximately fifteen minutes, and a total of thirteen questions were asked. The questions centred around the interactions between the tutor and the student, in particular how the tutor decides what instruction or feedback to provide to the student (Appendix A). An example of a question asked is: What factors do you take into account when deciding how to reply? Section 4.1 reports the findings of the interviews and how they can be applied to SQL-Tutor.

4.1 Possible Improvements

After analysing the answers given by the tutors, several possible improvements to SQL-Tutor emerged.

Using examples

Firstly, tutors usually explain by giving examples and explaining each step of the example. Providing the student with an example that is related to the question the student is trying to solve allows the student to analogically infer an explanation for the new situation. With the tutor explaining each step of the example, students are also able to view the (analogous) solution path. SQL-Tutor does not have such feedback, which motivates introducing this feedback method. The best way to illustrate how this works is with an example. Figure 4.1 illustrates a situation in which problem 1 requires the student to specify a query.


Problem 1: Retrieve the last name and address of all employees who work for the Research department.

Correct solution:

    Select lname, address
    From employee, department
    Where dname='Research' and dnumber=dno

Student solution:

    Select lname
    From employee, department
    Where dname='Research'

Feedback messages provided by SQL-Tutor:

    Positive/Negative feedback: Good try - but there are some mistakes.
    Error flag: Almost there - a few mistakes though. One of them is in the where clause.
    Hint: Almost there - you made 2 mistakes. If there are N tables in the from clause, then there should be N-1 join conditions in the where clause.
    Partial solution: Almost there - you made 2 mistakes. One of them is in the where clause. It should be: where dname='Research' and dnumber=dno
    List all errors: 1. If there are N tables in the from clause, then there should be N-1 join conditions in the where clause. 2. Check the expressions in the select clause!
    Complete solution: The correct solution of this problem is: select lname, address from employee, department where dname='Research' and dnumber=dno

Figure 4.1: Feedback produced by SQL-Tutor for problem 1


When the student enters his/her incorrect solution, the messages generated by SQL-Tutor may not be helpful, especially if the student does not understand certain concepts. In this example, if the student does not fully understand the concept of join conditions, the messages may not be of much use. In this situation, a human tutor would present the student with an example and explain each of its steps. Figure 4.2 is a log of the goals, actions and results that the tutor would perform and look for when explaining the example to the student.

GOAL: Determine if the student knows which tables are relevant to the query.
ACTION: Ask the student to point out the relevant tables. If they cannot, point out to them which tables are relevant and why.
EXPECTED RESULT: Student learns how to select relevant tables.

GOAL: Determine if the student understands the join condition.
ACTION: Sketch the relevant tables and use them to illustrate the join condition.
EXPECTED RESULT: Student understands the join condition.

GOAL: Determine if the student knows how to select the correct attributes.
ACTION: Display the relevant part of the question that relates to the selection of the correct attributes.
EXPECTED RESULT: Student understands how to select the correct attributes.

Figure 4.2: Log of goals, actions and expected results of a human tutor

The advantage of doing it in steps is that it is easier for the student to understand, and it allows the student to acquire the mental process of planning (learning how to decompose problems into subgoals) and implementing the plan. By viewing each step of the example, the student is able to determine at which step he/she might have gone wrong. By drawing the relevant tables and attributes, students are able to visualize them. There are also other advantages to drawing the relevant tables and attributes: people generally prefer pictures, there are no language barriers, and it eases the memory load. This method of using examples could potentially help students learn better; its implementation in SQL-Tutor is considered next.


This method of using examples could be implemented in SQL-Tutor. The process by which the tutor explains the example can be shown as a sequence of slides. Figures 4.3 to 4.6 show the same process as described in Figure 4.2. At the top of each of Figures 4.3 to 4.6, a subgoal is displayed, followed by the keywords to look for, highlighted in red, and finally the sketch of the relevant tables. Figure 4.3 shows the tables in the database and highlights the relevant ones. Figure 4.4 identifies the keywords that would suggest to the student what the relevant tables are. Figure 4.5 displays the join condition between the tables. Finally, Figure 4.6 highlights in red the attributes that need to be retrieved (last name and address).

Figure 4.3: Slide 1 of the example

Examples were developed for the first twenty questions of the company database.


Figure 4.4: Slide 2 of the example


Figure 4.5: Slide 3 of the example


Figure 4.6: Slide 4 of the example


However, complex queries (queries that involve the group by and having clauses) use more attributes, tables and relationships between the tables. A novice might find it difficult to comprehend the slides, and in this situation it would be ideal to have a human tutor present to tutor the student. It was decided that until better representations for examples are found, examples would not be implemented in SQL-Tutor.
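
One plausible way to represent such slide sequences, had they been implemented, is sketched below. The data layout and field names are inventions for illustration; only the four subgoals come from the description of Figures 4.3 to 4.6 above.

    from dataclasses import dataclass

    # Hypothetical representation of an example's slide sequence.
    @dataclass
    class Slide:
        subgoal: str          # displayed at the top of the slide
        keywords: list[str]   # highlighted in red below the subgoal
        sketch: str           # identifier of the table sketch to display

    example_slides = [
        Slide("Identify the relevant tables", [], "tables_highlighted"),
        Slide("Find the keywords that suggest the relevant tables",
              ["employees", "Research department"], "tables_keywords"),
        Slide("Show the join condition between the tables",
              ["dnumber", "dno"], "tables_join"),
        Slide("Highlight the attributes to retrieve",
              ["last name", "address"], "tables_attributes"),
    ]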

4.1.1 Factors that affect feedback

The second finding was that tutors looked at four main factors when deciding what reply or feedback to give. These factors are: the proficiency of the student in the subject matter, the difficulty of the subject matter, the background of the student in the subject matter, and the level of motivation the student has. There are other factors, such as facial expression, gestures and voice tone, that the tutor looks for, but these are not considered as important as the four main factors. The list below explains the four main factors in greater detail.

• Proficiency: The proficiency of the student in the subject matter measures the overall level of understanding that the student has in the subject matter (SQL). If the student has a low level of understanding of the subject matter, tutors tend to provide more complete feedback (perhaps the partial solution instead of a hint), and vice versa. Human tutors usually determine the proficiency of the student based on previous encounters, measuring it by how well the student knows the subject. In SQL-Tutor, the proficiency of the student can be estimated by the number of problems solved over the number of attempts taken to solve them.

• Problem difficulty: The level of difficulty measures how difficult the problem is. Human tutors find that as the problem difficulty increases, they tend to put more time and effort into explaining the solution and the steps needed to derive it. This would suggest that tutors tend to provide vague feedback for easier problems, and feedback with more information content for difficult problems. If the tutor decides that the problem is too difficult, the student will be recommended to try another problem. Human tutors classify problems by an ad-hoc method. In SQL-Tutor, every problem is already assigned a difficulty level, ranging from one to ten, with problems of difficulty level one being the easiest.

• Background: The background of the student in the subject matter refers to the prior experience the student has in the subject matter. Human tutors usually determine the background of the student through informal contact. If the tutor thinks that the student has a good background in the subject matter, the tutor would provide fewer


hints than to a student who does not have a good background. Trying to measure the background of a student may prove to be difficult for SQL-Tutor. As mentioned above, tutors determine the background of the student mainly through informal contact. Currently, SQL-Tutor is not able to do this. An alternative method of gauging the background of the student is to ask the student to enter what he or she thinks their background level is. There are advantages and disadvantages to this method. The advantage is that students usually have better knowledge of their background than tutors, and this allows a more accurate way of identifying the background level. The disadvantage is that it is subjective: students may incorrectly assess their background, and this could affect the type of feedback they receive. Currently, SQL-Tutor requires students to state their background level (selecting one of the following three options: Novice, Familiar or Experienced) before interacting with the system. The background level is given a numeric value ranging from one to three, with novice having a value of one, familiar a value of two and experienced a value of three.

• Motivational level: Motivation is mentioned as one of the learning functions which stimulate learning [11]. The motivational level of a student depends on many factors, such as confidence, curiosity, control, attention, satisfaction and relevance (of the topic to be taught) [12]. Human tutors are able to detect the motivational state of the student and take it into consideration when tailoring feedback. A student with a high motivational level would receive vague feedback that will motivate them to think. The detection of a student's motivational state is constrained by interface limitations (the system is unable to accurately detect confidence, attention and so on). The simplest method of determining the motivational level of a student is by the number of problems the student attempts to solve.

The four factors mentioned above are not all of equal importance. The interviewed tutors view proficiency and problem difficulty as being of equal importance. The background and motivational level of the student are of equal importance to each other, but are secondary when compared to proficiency and problem difficulty. The following list orders the factors starting from the most important.

1. Proficiency in subject matter and Problem difficulty

2. Background and motivational level of student

By combining these four factors into a single variable (called ability) [14], tutors then split students into categories. The ability of a student is defined as:

Page 23: Danny Rajendra - cosc.canterbury.ac.nz · Honours Report: Tailoring Feedback in ITSs Danny Rajendra. Abstract The most e ective way for students to learn is face-to-face with a quali

4.1 Possible Improvements 21

ability = (proficiency × 5) + problem difficulty + background + motivation     (4.1)

The proficiency variable in (4.1) is multiplied by five to normalize it, so that the proficiency variable has a range from zero to ten (the same as the problem difficulty). Equation (4.1) was derived via an ad-hoc attempt. It may be that this equation does not match how human tutors determine the ability of the student, and it should be viewed as a first step towards developing the correct equation (if there is one such equation) that human tutors use to determine the ability of students.
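
For concreteness, here is a direct transcription of equation (4.1) using the proficiency estimate given in the Proficiency item above (problems solved over attempts taken). The function signature and the sample values are a sketch for this report, not a verified extract of SQL-Tutor's implementation.

    # Sketch of equation (4.1); ranges follow the report's text.
    def ability(solved, attempts, problem_difficulty, background, motivation):
        # Proficiency is estimated as problems solved over attempts taken,
        # then multiplied by five as in equation (4.1).
        proficiency = solved / attempts if attempts else 0.0
        return (proficiency * 5
                + problem_difficulty   # difficulty level: 1 (easiest) to 10
                + background           # novice = 1, familiar = 2, experienced = 3
                + motivation)          # e.g. based on problems attempted

    # e.g. 4 problems solved in 10 attempts, a difficulty-3 problem, a
    # 'familiar' background and a motivation score of 2:
    # ability(4, 10, 3, 2, 2) == 9.0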

Students of roughly the same ability are grouped in the same category. Each category receives feedback suitable for that particular category. Using the model developed by Okazaki [13] for classifying the state of a student as a guide, a model mapping the student's level of ability to the appropriate feedback was developed. There are two main criteria for this model. Firstly, students with high ability receive less support through the information content of the feedback messages, while students with low ability receive more support. An example of this would be that a student with high ability would not be able to obtain the complete solution (unless he or she explicitly requests it), whereas a student with low ability would. The other criterion is that the sequence of feedback messages should be given in increasing amounts of information; for example, a student will receive a hint before receiving the complete solution. This ensures that the student is not overwhelmed with information. The current feedback selection method in SQL-Tutor is shown in Figure 2.4. Figure 4.7 illustrates the proposed feedback selection method.

Once the ability of the student is worked out, one can infer the state the student is in and provide appropriate feedback. For example, if a student has an ability of six (state 1), he or she would receive feedback messages in the following order: error flag, hint, example, partial solution and complete solution. The other feedback messages (positive/negative feedback, complete solution) are available if the student requests them.

This model of determining what feedback to provide based on the ability of the student is implemented in SQL-Tutor.
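
The sketch below illustrates the shape of the proposed policy. The score boundaries are placeholders (the exact cut-offs in Figure 4.7 did not survive extraction cleanly), and only the state 1 sequence is quoted from the text; the other sequences follow the two criteria above and should be read as assumptions.

    # Sketch of the proposed selection policy. Boundaries and the state 2-5
    # sequences are placeholders consistent with the report's criteria.
    STATE_SEQUENCES = {
        1: ["error flag", "hint", "example",           # lowest ability:
            "partial solution", "complete solution"],  # most support
        2: ["error flag", "hint", "example", "partial solution"],
        3: ["positive/negative", "error flag", "hint", "example"],
        4: ["positive/negative", "error flag", "hint"],
        5: ["positive/negative", "error flag", "hint"],  # highest ability
    }

    def state_for(ability_score, boundaries=(15, 20, 24, 26)):
        # Placeholder cut-offs: a score below the first boundary maps to
        # state 1, and so on up to state 5.
        for state, bound in enumerate(boundaries, start=1):
            if ability_score < bound:
                return state
        return 5

    def feedback_for(ability_score, attempt_number):
        # Each failed attempt moves one step along the state's sequence,
        # so information content only ever increases (second criterion).
        sequence = STATE_SEQUENCES[state_for(ability_score)]
        return sequence[min(attempt_number, len(sequence)) - 1]

    # e.g. feedback_for(6, 1) == "error flag", matching the state 1
    # example given in the text above.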

4.1.2 Determining when help is needed

Tutors do sometimes initiate discussions with students when they feel that the students require assistance. How tutors know when to offer help to the student is mostly based on physical or verbal cues, such as facial expression, gestures and voice tone. An example of a cue would be a student staring blankly at a screen.


[Figure 4.7: Proposed feedback selection method in SQL-Tutor — students are divided by ability score into five states, from state 1 (lowest score) to state 5 (highest score); lower states receive sequences with more support over successive tries (e.g. error flag, hint, example, partial solution, complete solution), while higher states receive only positive/negative feedback, an error flag and a hint]


By initiating a discussion, tutors hope to correct any misconceptions the student might have, setting them on the correct solution path.

Due to the limitations of the communication channel between the student and the ITS (the only feedback the ITS can receive from the student is what the student types in), it would be difficult to observe the necessary cues. A crude way of estimating when to provide help is to time how long a student takes to complete a question. If a student takes longer than the average amount of time, a message will appear, asking the student if he or she requires assistance. This is a crude way of estimating whether the student requires help and is prone to problems. An example of such a problem is that if a student does not interact with the system for the average amount of time, the system thinks that the student requires assistance and provides it when it is not needed. Another problem with this approach is determining the average amount of time needed to solve a problem: it should take into account the level of difficulty of the particular problem and adjust the average time appropriately.
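
As a sketch, the heuristic amounts to a single comparison. How the difficulty-adjusted average is obtained is left open here, since the report itself identifies that as the hard part; the slack margin is an invented parameter.

    # Sketch of the crude timing heuristic described above.
    def should_offer_help(elapsed_minutes, avg_minutes_for_difficulty,
                          slack=1.5):
        # Prompt the student once they exceed the difficulty-adjusted
        # average solving time by a generous margin.
        return elapsed_minutes > avg_minutes_for_difficulty * slack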

Until a more accurate method of determining when students require help is discovered, this feature will not be implemented in SQL-Tutor.

4.1.3 Population learning

When a group of students encounter the same difficulty with a problem, this usually indicates that the students share one or more common misconceptions. In this situation, a human tutor would give a general rundown of the question to the entire class, making the entire class aware of the possible misconceptions that they may have. A similar method could be implemented in SQL-Tutor: if a certain number of students violate the same set of constraints, all students using the system would be informed of the common error. In SQL-Tutor, each student has an individual student model that keeps track of which constraints the student has mastered or violated. By examining each student model, one could determine whether a certain set of constraints is being violated, and provide appropriate feedback to the entire population. There are problems with this approach, such as: how many students should encounter the same difficulty before the feedback is provided? The population of students using SQL-Tutor is not fixed and can vary, so no specific number of students can be chosen. Due to these problems, this feature will not be implemented in SQL-Tutor.
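
A sketch of the population check is given below. Student models are taken to be lists of violated constraint identifiers, and the threshold is deliberately left as a parameter because, as noted above, no fixed number of students can be chosen.

    from collections import Counter

    # Sketch of the population-learning check described above.
    def common_misconceptions(student_models, threshold):
        counts = Counter()
        for violated in student_models.values():
            counts.update(set(violated))   # count each student at most once
        return [c for c, n in counts.items() if n >= threshold]

    # e.g. flag any constraint violated by at least half the current
    # population: common_misconceptions(models, len(models) / 2)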


Chapter 5

Evaluation Study

After implementing the changes, it is crucial to determine whether they contribute to students' learning. The aim of this evaluation is to assess the contribution the new feedback selection method made, in particular:

• Did the new feedback selection method select appropriate feedback?

• Would the student prefer to select their own feedback or have feedbackautomatically selected by the system?

The rest of this chapter describes the method, which constitutes the design and physical makeup of the experiment. That is followed by the results, and finally the discussion, which interprets the results in the context of the two questions asked.

5.1 Method

5.1.1 Subjects

The subjects were students enrolled in the second-year Introduction to Database (COSC226) course. All students had had several lectures on SQL before they were able to interact with SQL-Tutor. Some subjects may have had previous exposure to SQL. The subjects were randomly assigned to either the control or the experimental group. The subjects in the control group interacted with the current version of SQL-Tutor, while the subjects in the experimental group interacted with the new version. There was a total of 55 subjects across the control and experimental groups.¹ The experiment was conducted in the computer laboratories of the computer science department.

¹ There were more than 55 subjects in the experiment. However, the data collected for several subjects had to be discarded due to inconsistencies.


5.1.2 Design and Procedure

The evaluation consists of several data collection sessions: the pre-test, system interaction and the post-test. Figure 5.1 illustrates the data collection sessions. Before students were able to interact with the system, they had to complete a pre-test. They were then allowed to interact with the system. When the evaluation period ended, the students were asked to complete a post-test. The aim of the pre-test and the post-test is to determine the subjects' knowledge of SQL and also to find out whether there are any significant differences between the control and experimental groups.

[Figure 5.1: Experiment design — all 55 subjects sat the pre-test; the control group (N = 28) then used SQL-Tutor without the changes while the experimental group (N = 27) used SQL-Tutor with the changes; all 55 sat the post-test on the 28th of October]

The pre-test and post-test (included in Appendix B) each consist of three multiple-choice questions. These questions were designed to evaluate the subjects' knowledge of SQL. The marks allocated for the first, second and third questions were 1, 5 and 1 respectively. After answering each question, subjects recorded how confident they were of their answers using a three-point Likert scale. The pre-test was given to the subjects when they logged on to the system for the first time.

After the subjects had submitted the pre-test, they were allowed to interact with their respective version of SQL-Tutor. The subjects were able to choose the problems they wished to solve, when they wished to log out, and so on. The idea behind this is to have data collection take place not in an artificially created environment where subjects are constrained to act a certain way, but rather in a normal environment, where subjects have complete freedom. Every action that the subjects made was recorded automatically by the system in the form of student logs. Since the subjects were able to choose when they wished to interact with the system and how many problems they wished to solve, the data collection session ran from the 4th of September to the 28th of October. The long data collection period ensured that the student logs were of sufficient length to perform analysis on.


On the 28th of October, subjects took the post-test. The post-test was

included as part of the final exam for the course.

5.2 Results

The results were obtained from three separate sources: the pre-test, the student logs and the post-test. The results from each source are discussed in turn.

5.2.1 Pre-test

A total of 55 subjects participated in the pre-test: 28 subjects belonged to the control group, while the rest (27) belonged to the experimental group. The objective of the pre-test is to determine the knowledge level of the subjects. The ideal situation is that there is no significant difference in knowledge level between the groups. As mentioned in Section 5.1.2, the pre-test consists of three questions, worth 1, 5 and 1 marks respectively. Figure 5.2 shows the mean scores for the pre-test. On average, the subjects belonging to the experimental group did slightly better than the subjects in the control group on questions one and three. The mean scores for the control and experimental groups were 3.97 and 4.13 respectively.

                     Control Group   Experimental Group
Scores
  Q1                      0.39              0.44
  Q2                      2.92              2.91
  Q3                      0.66              0.78
  Total                   3.97              4.13
Confidence level
  Q1                      1.35              1.44
  Q2                      0.80              1.16
  Q3                      1.14              1.36
  Total                   3.29              3.96

Figure 5.2: Mean scores for the pre-test

Subjects in both groups were also asked how confident they were of their answers to the questions in the pre-test. Subjects were able to choose their confidence level on a scale from 0 (not confident) to 2 (very confident). The results indicate that subjects in the control group were slightly less confident than subjects in the experimental group: the mean total confidence level of subjects in the control group was 3.29, while that of subjects in the experimental group was 3.96.


To determine whether there is a statistical difference in knowledge level between the two groups, a two-tailed t-test was carried out. The null hypothesis for the t-test is that there is no difference between the means at the 0.05 level of significance; the alternative hypothesis is that there is a difference. The results indicate that there is no significant difference between the mean scores of the two groups (t = 0.35, p > 0.05), and hence the null hypothesis is retained.
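
For reference, this style of comparison can be reproduced with an off-the-shelf two-tailed t-test. The score lists below are stand-ins; the study's per-subject scores are not available, only the means in Figure 5.2.

    from scipy import stats

    control_scores = [4, 3, 5, 4, 4]        # placeholder pre-test totals
    experimental_scores = [4, 5, 3, 4, 5]   # placeholder pre-test totals

    # ttest_ind performs an independent two-sample t-test; the returned
    # p-value is two-tailed by default.
    t, p = stats.ttest_ind(control_scores, experimental_scores)
    print(f"t = {t:.2f}, p = {p:.3f}")      # retain the null hypothesis
                                            # when p > 0.05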

5.2.2 Student logs

Every action performed by a subject during the evaluation period was recorded in a log, along with a time stamp. Figure 5.3 shows a portion of a log file for a subject. It shows a subject logging on to SQL-Tutor for the first time. Problem 59 was the first problem the subject attempted; the subject solved it correctly on the first attempt.

Once analyzed, the log files can be used to answer the questions posed at the beginning of Chapter 5. Figure 5.4 presents a summary of the variables that were analyzed.


The average amount of time the subjects in the control group spent interacting with the system was 55.89 minutes. However, it is difficult to determine whether subjects actually spent all that time attempting to solve problems or whether they were doing something else; for example, a student may open a question but leave for a short break instead of spending the time attempting to solve it. Subjects in the experimental group spent an average of 83.9 minutes interacting with the system. This indicates that subjects in the experimental group were more willing to spend time interacting with the system. Another possible reason why subjects in the experimental group spent more time interacting with the system is that the feedback provided was not what they expected, and they spent time selecting the type of feedback they were expecting.

This issue can be clarified by examining the number of problems attempted by the subjects, the number of solved problems, and the attempts to solve problems that could not be solved on the first attempt. If the subjects in the experimental group attempted and solved more problems than the subjects in the control group, we would know that the system providing inappropriate feedback did not really affect the amount of time the subjects spent interacting with the system. By examining the attempts to solve problems that could not be solved on the first attempt, we are able to determine the usefulness of the feedback: if subjects in the experimental group require more attempts to solve a problem, it suggests that the feedback provided to this group is not particularly useful.

On average, subjects in the control group attempted and solved fewer problems than the subjects in the experimental group. Subjects in the control group attempted an average of 10.5 problems and managed to solve an average of 7.35, while the subjects in the experimental group attempted an average of 15.5 problems and managed to solve an average of 11.73. This suggests that subjects in the experimental group attempted and learnt more about SQL than the control group. However, this conclusion may not hold in certain cases. For example, if subjects receive the complete solution or partial solution as feedback, they may not attempt the problem again, even though they now know the correct solution. Taking this into account, the average number of solved problems increased to 8 for the control group and 13 for the experimental group. Using this new measure of solved problems, the proportions of solved problems over attempted problems for the control and experimental groups are 0.76 and 0.85 respectively.

In order to evaluate how useful the feedback selected by the system is, we should examine the average number of attempts taken to solve problems that could not be solved on the first attempt. This figure is a good measure of the usefulness of the feedback. For example, if a subject requires many attempts before a problem is solved, this would indicate that the feedback provided to the subject is not very useful.


Figure 5.3: Part of a student log


                                     Mean                  Standard Deviation
                              Control  Experimental      Control  Experimental
Total interaction time (mins)  55.89      83.9            68.36      80.59
Number of attempts             10.5       15.5             9.8       13.96
Number of solved problems       7.35      11.73            7.52      12.29
Attempts to solve problems      1.72       3.77            1.04       3.58
that could not be solved on
the first attempt

Figure 5.4: Mean interaction details


The average number of attempts taken to solve problems that could not be solved on the first attempt was 1.72 for the control group and 3.77 for the experimental group. This result indicates that subjects in the experimental group persisted in trying to solve problems that could not initially be solved. Another interpretation of this result could be that the feedback provided by the system to the control group is more useful than the feedback provided to the experimental group; hence, subjects in the control group were able to solve problems in fewer attempts. However, it could just be that students in the control group requested the complete solution, whereas subjects in the experimental group simply accepted the feedback provided by the system instead of requesting specific feedback. The student logs need to be examined in greater detail to determine whether this is so.

After the student logs were examined, it was found that subjects in the control group requested specific feedback less often than subjects in the experimental group. This suggests that the feedback provided to the subjects in the experimental group was not as useful as the feedback provided to the subjects in the control group. To determine whether the changes contributed to the subjects' learning, a post-test was conducted to measure the subjects' knowledge of SQL.

5.2.3 Post-test

The main objective and format of the post-test are identical to the pre-test: to determine the knowledge the subject has of SQL. We expected to find a significant difference between the knowledge levels of subjects in the control and experimental groups, with subjects from the experimental group having a better knowledge of SQL. All 55 subjects who participated in the pre-test completed the post-test. Figure 5.5 shows the mean scores for the post-test. Subjects in the control group scored an average of 4.64 out of a possible 7, while subjects in the experimental group scored an average of 5.07. A two-tailed t-test was carried out to determine whether there is a statistical difference between the two groups. The results of the t-test indicate that there is no significant difference between the mean scores of the two groups (t = 1.387, p > 0.05).

5.3 Discussion

The pre-test showed that there were no significant differences in the average knowledge levels of subjects in the control and experimental groups. Examination of the student logs revealed that subjects in the experimental group attempted and completed more problems. However, the feedback to the experimental group seemed less useful than the feedback provided to the control group. This indicates that the sequences of feedback proposed for


             Control Group   Experimental Group
Scores
  Q1              0.82              0.77
  Q2              3.28              3.4
  Q3              0.53              0.88
  Total           4.64              5.07

Figure 5.5: Mean scores for the post-test

students of different categories of ability were not the ideal sequences. The post-test revealed that, after interacting with the system, both groups had a better knowledge of SQL. However, there was no significant difference in knowledge level between the two groups.

It is evident that the selection of feedback for the experimental group failed to provide the expected individualization. Despite the lack of encouraging results, it should be noted that this is a rough model of how human tutors individualize feedback, and it is possible that with a more sophisticated model of how human tutors adapt feedback, the gains of individualization will be realised.


Chapter 6

Conclusion

Since the late 1950s, computers have been used to tutor students in various domains. These early tutoring systems were not able to effectively adapt instruction to suit individual students, and this led to the development of ITSs. The main goal of ITSs is to provide individualized feedback. However, most ITSs are not able to do this as well as a human tutor.

A series of interviews with tutors took place to determine what factors human tutors take into account when deciding what sort of feedback to provide. When deciding, tutors took note of how proficient the student is in the subject matter, the problem difficulty, the background of the student in the subject matter, and the motivation of the student.

A model of how human tutors decide what sort of feedback to provide was constructed and implemented in SQL-Tutor. An evaluation study of the changes was conducted, and it revealed that the changes did not significantly contribute to the subjects' learning. If a more sophisticated model could be developed, taking into account more factors, it might have a significant effect on the student's learning experience.


Acknowledgements

I would like to thank Dr Antonija Mitrovic for guiding and assisting with this research, Kurt J Hausler for implementing the changes to SQL-Tutor, Jane McKenzie for proof-reading this report, and the tutors who agreed to be interviewed.


Bibliography

[1] Bloom, B. S. The 2 Sigma Problem: The Search for Methods ofGroup Instruction as Effective as One-to-One Tutoring. EducationalResearcher, 13, pp. 3-16. 1948.

[2] Burns, H. L. and Capps, C. G. Foundations of Intelligent TutoringSystems: An Introduction. Foundations of Intelligent Tutoring Systems.Lawrence Erlbaum Associates, Hillsdale, NJ. 1988.

[3] Woolf, B. AI in Education. Encyclopedia of Artificial Intelligence,Shapiro, S., ed., John Wiley & Sons, Inc., New York, pp. 434-444.1992.

[4] Elmasri, R. and Navathe, S. B., Fundamentals of database systems (2nded) Benjamin / Cummings, Redwood, CA. 1994.

[5] Mitrovic, A. SQL-Tutor: a Preliminary Report. Technical Report TR-COSC 08/97, Computer Science Department, University of Canterbury, 1997.

[6] Mitrovic, A. Learning SQL with a Computerised Tutor. Proc. 29th SIGCSE Technical Symposium on Computer Science Education, pp. 307-311, 1998.

[7] Gertner, A., Conati, C. and VanLehn, K. Procedural Help in Andes: Generating Hints Using a Bayesian Network Student Model. Proc. 15th National Conference on Artificial Intelligence, Madison, Wisconsin, 1998.

[8] Woods, P. J. and Warren, J. R. Adapting Teaching Strategies in Intelligent Tutoring Systems. ITS'96 Workshop on Architectures and Methods for Designing Cost-Effective and Reusable ITSs, Montreal, 1996.

[9] Hume, G. D. Using Student Modelling to Determine When and How to Hint in an Intelligent Tutoring System. Ph.D. thesis, Illinois Institute of Technology, 1995.

[10] Beck, J. E. and Woolf, B. P. Using a Learning Agent with a Student Model. ITS'98, San Antonio, USA, 1998.


[11] Shuell, T. J. Designing Instructional Computing Systems for Meaningful Learning. In Foundations and Frontiers in Instructional Computing Systems (in press).

[12] del Soldato, T. Detecting and Reacting to the Learner's Motivational State. ITS'92, Montreal, Canada, 1992.

[13] Okazaki, Y., Watanabe, K. and Kondo, H. An Implementation of the WWW-Based ITS for Guiding Differential Calculations. Proc. of the workshop "Intelligent Educational Systems on the World Wide Web", 8th World Conference of the AIED Society, Kobe, Japan, pp. 18-22, 1997.

[14] Takeuchi, A. and Otsuki, S. EXPITS: an Experimental Environment on ITS. ITS'92, Montreal, Canada, 1992.


Appendix A

Four tutors, selected on the basis that they had at least a few years of tutoring experience, were interviewed. A brief overview of the project was given to the tutors. The entries below summarise each question, the reason the question was asked, the tutors' answers, and a rough analysis of those answers. Note that the tutor responses have been combined and edited where appropriate.


Question: Roughly how long have you been tutoring?

Reason: To gauge how much weight should be placed on their answers.

Answer: All the tutors had several years of experience tutoring database material.

Analysis: Their answers carry roughly the same weight.

Question: Do you find that students have problems with the concepts (what they mean) rather than how to go about doing the question?

Reason: Students are assumed to know the background knowledge and concepts, but is this assumption actually true?

Answer: Students are expected to know the concepts. They usually have problems with concepts rather than how to go about doing the question.

Analysis: Students can roughly be divided into two groups. The first group includes students who have learnt and remember the background material and key concepts; unnecessary material should not be presented to this group. The other group includes students who have never really learnt the background material and concepts, or have forgotten them; this group should be presented with basic information.


Question: Do you initiate any discussions with students?

Reason: To find out if tutors initiate discussions with students, and why.

Answer: Sometimes, depending on whether the class is big or not.

Analysis: In an ITS, the size of the class does not matter. Human tutors are able to determine if a student is lagging behind and approach the student. An ITS suffers from the limited bandwidth of its communication channel: the only feedback it receives is what the student types in. One way of trying to measure whether a student is behind is to time how long he or she takes to complete a question. If the student takes longer than the average time (obtained by averaging the times that all students take to complete that question), SQL-Tutor could perhaps display a “pop-up” message asking the student whether he or she needs help. This is a crude way of estimating the pace of students, and it can easily be “fooled”; a rough sketch follows.
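
A minimal Python sketch of this pacing heuristic, assuming per-question completion times are logged; the function names and the 1.5x slack factor are illustrative assumptions, not part of SQL-Tutor:

# Hypothetical sketch of the pacing heuristic described above.
# completion_times maps a question id to the logged times (in
# seconds) that previous students took to complete it.

def average_time(completion_times, question_id):
    times = completion_times.get(question_id, [])
    return sum(times) / len(times) if times else None

def should_offer_help(completion_times, question_id, elapsed, slack=1.5):
    # Offer a pop-up once the student has spent noticeably longer
    # than the population average on this question.
    avg = average_time(completion_times, question_id)
    if avg is None:      # no data yet for this question; stay quiet
        return False
    return elapsed > slack * avg

# Example: earlier students took 120, 150 and 180 seconds on question 7.
log = {7: [120, 150, 180]}
print(should_offer_help(log, 7, elapsed=300))   # True: 300 > 1.5 * 150

As the analysis notes, a student who walks away from the keyboard would trip this check, which is why it can only ever be an offer of help rather than a diagnosis.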


Question: When students request assistance, what are the typical statements they make?

Reason: To find out if students know what they want help on.

Answer: They usually say something like “Here is what I have done so far. Am I doing it right?” or “I do not know how to do this problem. Could you help me?”

Analysis: SQL-Tutor would have to look at where students have gone wrong and inform the student.

Question: From the statement the student makes, do you look for certain key words that help you determine a particular way to help the student? (e.g. if you hear the word EXPLAIN, do you tend to explain the solution to the student rather than giving them a hint?)

Reason: To find out if students state which method they prefer their feedback in.

Answer: Yes. For example, if they mention that they would like an explanation, the tutor would switch from whatever method of feedback they had planned and give an explanation.

Analysis: SQL-Tutor should have a range of feedback methods available, and students wanting a particular method of feedback should be able to choose it. SQL-Tutor should keep a record of which feedback method a student prefers, so that when they next log on there is no need to ask them which feedback method they prefer. A flexible way for students to change their preferred method should also be provided; a sketch of such a record follows.
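
A minimal Python sketch of persisting a preferred feedback method across sessions; the file name, method names and helper functions are assumptions for illustration, not SQL-Tutor's internals:

import json, os

PREFS_FILE = "feedback_prefs.json"   # assumed storage location

def load_prefs():
    if os.path.exists(PREFS_FILE):
        with open(PREFS_FILE) as f:
            return json.load(f)
    return {}

def save_pref(student_id, method):
    # Called whenever the student picks a method explicitly,
    # so the choice survives to the next session.
    prefs = load_prefs()
    prefs[student_id] = method        # e.g. "hint" or "explanation"
    with open(PREFS_FILE, "w") as f:
        json.dump(prefs, f)

def preferred_method(student_id, default="hint"):
    # On log-in the stored preference is used without asking again;
    # the student can still change it at any time via save_pref.
    return load_prefs().get(student_id, default)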

Question: If the student does not clearly state his or her intention (i.e. does not state whether he or she wishes to know if they are on the right track, or where they might have gone wrong), how do you go about trying to understand what the student wishes to gain from the discussion?

Reason: To find out what a student wants if he or she does not mention it.

Answer: Ask them questions to clarify what they want.

Analysis: SQL-Tutor currently has five levels of feedback, which the student can select. If a student does not select a feedback level, SQL-Tutor automatically selects the first level of feedback (positive or negative), then the second level of feedback (error flag), finally stopping at the third level of feedback (hint). SQL-Tutor could be modified so that it does not automatically select the level of feedback, but decides which feedback level to use based on the student model, as sketched below.
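
A minimal Python sketch of choosing a feedback level from the student model rather than escalating automatically. The first three level names follow the description above; the last two names, the proficiency thresholds and the escalation rule are assumptions:

LEVELS = ["positive/negative", "error flag", "hint",
          "all errors", "full solution"]   # last two names are assumed

def choose_level(proficiency, failed_attempts):
    # Stronger students start at terser feedback; repeated failed
    # attempts on the same problem push the level up.
    if proficiency > 0.8:
        idx = 0
    elif proficiency > 0.5:
        idx = 1
    else:
        idx = 2
    idx = min(idx + failed_attempts, len(LEVELS) - 1)
    return LEVELS[idx]

print(choose_level(proficiency=0.9, failed_attempts=0))  # positive/negative
print(choose_level(proficiency=0.3, failed_attempts=2))  # full solution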


Question: When you approach a student, what factors do you take into account when deciding how to reply?

1. Difficulty level of the problem.

2. Motivation of the student.

3. Pace of the student.

4. How much time you think the problem deserves.

5. The student's level of proficiency with the topic.

6. Other factors.

Reason: To find out what factors human tutors take into account when deciding how to reply.

Answer: All of the above:

1. If the problem has a high level of difficulty, more time would be spent. If the problem is a simple one, leading questions would be asked, hoping to lead the student to the correct solution.

2. Yes. If the student is motivated and wishes to learn more, more time would be spent. The amount of time spent with a student also depends on how large the class is.

3. Yes. If a student is behind, fewer leading questions will be asked and straightforward explanations will be given.

4. When dealing with large classes, tutors tend to move between students quickly.

5. Tutors tend to give students feedback that they are able to use (i.e. feedback that matches their level of proficiency with the topic).

Analysis: The student model in SQL-Tutor could be extended to take note of the level of difficulty of the question, the motivation and pace of a student, and the student's level of proficiency; a sketch of such an extension follows.
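
A minimal Python sketch of the extended student model suggested above; the field names, value ranges and the toy weighting rule are assumptions, not SQL-Tutor's actual representation:

from dataclasses import dataclass, field

@dataclass
class StudentModel:
    proficiency: float = 0.5   # estimated mastery of the topic, in [0, 1]
    motivation: float = 0.5    # e.g. inferred from voluntary extra problems
    pace: float = 1.0          # student's time relative to the class average
    history: dict = field(default_factory=dict)   # per-constraint record

def feedback_detail(model, problem_difficulty):
    # Toy rule: harder problems, slower pace and lower proficiency all
    # argue for more detailed feedback (a larger returned value).
    return problem_difficulty * model.pace * (1.0 - model.proficiency)

weak = StudentModel(proficiency=0.2, pace=1.4)
print(feedback_detail(weak, problem_difficulty=0.9))   # 0.9 * 1.4 * 0.8 = 1.008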


Question: Do you form some sort of plan of how each student is faring?

Reason: To find out if human tutors form and keep a plan of how a student is faring.

Answer: Yes.

Analysis: SQL-Tutor has a student model, which is the equivalent of a human tutor's mental “plan”. However, the student model in SQL-Tutor is not as complicated as a human tutor's plan and should be extended to consider factors such as how well a student retains information over time, how long a student takes to master a concept, etc. The student model also has to take into account how difficult the question is, how motivated the student is, how well the student is faring, and so on.

Question: How do you determine if a student has understood what you have said to him or her?

Reason: Ideally, tutors should stop explaining once the student has understood the concept. This question is to find out how human tutors know when to stop explaining.

Answer: Get the student to solve a similar problem; if he or she is able to do it, that means they understood what you said to them. Another method is to ask them some questions about what you have just told them.

Analysis: SQL-Tutor could determine if a student has understood a concept by asking the student to solve a similar question; a sketch of one way to pick such a question follows.
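
A minimal Python sketch of selecting a “similar” follow-up problem: one exercising much the same sub-skills as the problem just explained. The problem representation and skill tags are assumptions for illustration:

problems = {
    1: {"text": "List all employees.", "skills": {"select", "from"}},
    2: {"text": "List employees in department 5.",
        "skills": {"select", "from", "where"}},
    3: {"text": "Count employees per department.",
        "skills": {"select", "from", "group by"}},
}

def similar_problem(solved_id):
    target = problems[solved_id]["skills"]
    candidates = [(pid, p["skills"]) for pid, p in problems.items()
                  if pid != solved_id]
    # pick the problem whose skill set overlaps most with the one just solved
    pid, _ = max(candidates, key=lambda c: len(c[1] & target))
    return pid

print(similar_problem(1))   # 2 (ties are broken by keeping the first candidate)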

Question: What if a student does not understand what you have said to him or her? Do you

1. Explain in more detail.

2. Use another method.

3. Brush off and recommend another problem.

4. Try another method.

Reason: To find out how a tutor should reply when a student does not understand the reply that the tutor gave.

Answer: If the student does not understand what was said, another method would be used. If that does not work, the answer would be shown to the student, the example closed, and the student asked to work on a similar problem.

Analysis: SQL-Tutor should have several feedback methods, so that if a student does not understand the reply, SQL-Tutor can fall back on another feedback method (sketched below). Also, students tend to respond better if they see a visual representation (as opposed to just a verbal representation) of how the query should be done and how it is processed.
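
A minimal Python sketch of falling back to another feedback method when the student signals that the previous one did not help. The method names are assumptions (“visual walk-through” reflects the tutors' remark above):

METHODS = ["hint", "textual explanation", "visual walk-through",
           "show the solution"]   # names are assumptions

def next_method(current):
    # Return the next untried method; once the solution has been shown,
    # the tutors' strategy above switches to a similar problem instead.
    i = METHODS.index(current)
    return METHODS[i + 1] if i + 1 < len(METHODS) else None

print(next_method("hint"))                # textual explanation
print(next_method("show the solution"))   # None -> pose a similar problem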


Question: If a few students have encountered difficulty with the same problem, what would you do? Do you spend extra time explaining the problem?

Reason: When a few students have difficulty with a question, this usually indicates that something might have gone wrong somewhere; this question is to find out how human tutors deal with this.

Answer: If a few students encountered the same problem with a question, a general run-down of the question would be given to the entire class. Extra time would be spent if students still had difficulty with the problem.

Analysis: If more than a few students have difficulties with a question, SQL-Tutor should recognise this and perhaps give a message to other students attempting the same question, informing them that other students have had difficulties with it; a rough sketch follows.
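
A minimal Python sketch of this population-level warning: count the distinct students who have failed each question and flag questions that many students struggle with. The threshold and helper names are assumptions:

from collections import defaultdict

failed_students = defaultdict(set)   # question id -> students who failed it

def record_failure(question_id, student_id):
    failed_students[question_id].add(student_id)

def is_problematic(question_id, threshold=5):
    # Once several distinct students have failed the question,
    # warn newcomers who attempt it.
    return len(failed_students[question_id]) >= threshold

for student in ["s1", "s2", "s3", "s4", "s5"]:
    record_failure(12, student)
print(is_problematic(12))   # True: five students have struggled with it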

Question: Do you find that students have difficulty with more complex queries (queries that need at least two sub-skills to solve, e.g. a query that requires the use of the GROUP BY clause)?

Reason: To confirm that students have more difficulty with more complex queries.

Answer: Yes. Students tend to have more difficulty with complex queries.

Analysis: This suggests that more attention should be paid to the feedback methods and levels for more complex queries.

Question: A question usually has a concept or point to get across to the student. For complex queries, one has to know certain sub-concepts before attempting the question. Suppose a student attempting a complex query has made two mistakes: one with the concept that the question is trying to get across, and the other with a sub-skill. How do you go about helping the student with the problem? (For this question, a problem dealing with a complex query and an incorrect answer was shown to the tutors, who were asked how they would go about helping a student with that answer.)

Reason: To find out how human tutors deal with multiple errors.

Answer: If a student has problems with a complex query, his or her previous work on that question would be examined. The basic “building blocks” would be examined first, to determine that the student has all the sub-skills to solve the question. After that, the explanation is built on those basic sub-skills, getting progressively closer to the concept the question was trying to get across (i.e. for a query that needed the GROUP BY clause of the SELECT statement, the tutors first explained the basic clauses (SELECT, FROM, WHERE), then went on to the more complex clauses). Trying to visually show how the query is processed is also useful.

Analysis: Human tutors build up the basic “building blocks” and use these “blocks” as a foundation to work on more complex queries. SQL-Tutor should follow this method when dealing with complex queries; a rough sketch of such an explanation ordering follows.
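
A minimal Python sketch of the “building blocks” explanation order: explain the basic clauses first, ending with the clause that carries the question's target concept. The clause ordering and example query are assumptions:

CLAUSE_ORDER = ["SELECT", "FROM", "WHERE", "GROUP BY", "HAVING", "ORDER BY"]

query = """SELECT dept, COUNT(*)
           FROM employees
           WHERE salary > 30000
           GROUP BY dept"""

def explanation_order(sql):
    # Basic clauses come first; the clause carrying the question's
    # target concept (here GROUP BY) is explained last.
    return [clause for clause in CLAUSE_ORDER if clause in sql.upper()]

print(explanation_order(query))
# ['SELECT', 'FROM', 'WHERE', 'GROUP BY']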


Appendix B