SCHOOL OF FINANCE AND ECONOMICS
UTS: BUSINESS
WORKING PAPER NO. 150
November 2006

Reducing the Expectations Gap: Facilitating Improved Student Writing in an Intermediate Macroeconomics Course

Peter Docherty, Harry Tse, Ross Forman, Jo McKenzie

ISSN: 1036-7373
http://www.business.uts.edu.au/finance/


Reducing the Expectations Gap: Facilitating Improved Student Writing in

an Intermediate Macroeconomics Course

Peter Docherty & Harry Tse School of Finance and Economics, University of Technology, Sydney

Ross Forman ELSSA Centre

University of Technology, Sydney

Jo McKenzie Institute for Interactive Media and Learning

University of Technology, Sydney

Abstract This paper reports on the implementation of a pilot program aimed at improving student writing in an intermediate macroeconomics course. The program attempted to reduce what is labelled the expectations gap between student and academic perceptions of what constitutes “good writing”. This was done in two ways. Firstly, a range of resources designed to describe the characteristics of good writing was provided to students who were helped to structure their writing according to these characteristics. A series of academic literacy workshops formed the centrepiece of this strategy. Secondly, markers themselves were briefed on these characteristics and an approach to marking based upon them was negotiated. The impact of this program on student writing was very promising. Students who attended the academic literacy workshops performed better in the first of two written assignments than those who did not, controlling for general ability. These students were less likely to fail and more likely to be awarded a grade at Distinction level or above. The paper also identifies a number of important areas that need to be developed at the next stage of implementation including better integration of published writing guidelines and sample papers into the workshop curriculum, and collection of more qualitative data to supplement the quantitative evaluations the paper offers. JEL Classification Numbers: A20, A22

Keywords: student writing, assessment, expectations, academic literacies, embedded programs.

Thanks to Bill Becker, Gordon Menzies, Ruth French, participants at the Teaching and Learning Forum in the Faculty of Business, University of Technology, Sydney and participants at the 12th Australasian Teaching Economics Conference, University of Western Australia for comments and suggestions without in any way implicating them in the final product. *Correspondence to Peter Docherty, School of Finance and Economics, University of Technology, Sydney. Ph. 61 2 9514-7780; Fax 61 2 9514-7777; email: [email protected]


Introduction Complaints about the inability of undergraduates to write clearly and analytically are now commonplace in universities. Poor use of grammar, failure to correctly judge the kinds of language and style appropriate to a particular writing task, and limited reasoning skills all contribute to this picture of uninspired student writing. The recent Royal Literary Fund Report on Student Writing in Higher Education (2006, vii-viii) raises “grave concerns about shortcomings in student writing skills (in the UK)”, suggesting that many students “are arriving at university without the skills necessary to make the most of their education”.

This is a complaint echoed in the academic teaching literature. Simpson & Carroll (1999, 402), for example, point to employer dissatisfaction with the writing proficiency of economics and business graduates in the US and identify a lack of writing assignments in intermediate and upper level economics programs as a contributing factor to this lack of proficiency. Hansen (1998, 82) argues that “students enter the university with weak writing skills and are required to do little to improve these skills after they enrol”. With increasing emphases on desired graduate profiles shaping the delivery and evaluation of university programs,1 and more intense competition in tertiary sectors around the world, improving the writing proficiency of graduates must be a priority for every university and faculty.

This paper reports on a key aspect of the design, implementation and evaluation of a pilot program to improve the writing skills of students in the Faculty of Business at the University of Technology, Sydney (UTS) in the first semester of 2006. The key aspect was the program’s attempt to address problems caused by differing expectations of what constitutes good student writing in higher education. The program was both collaborative and diagnostic in nature. Staff from two of the university’s specialist units joined forces with staff from the Faculty of Business to provide students with significant resources and direction in a short program of writing, embedded within an intermediate macroeconomics subject. The objective was to test potential strategies and identify points of improvement for a more intensive program of writing development at the next stage of implementation.

The paper firstly reviews the literature on student writing and associated assessment issues, especially within economics and business programs. It secondly outlines the central design features of the UTS program and takes a closer look at the centrepiece of its strategy for overcoming expectational problems: a series of Academic Literacy workshops targeted at the writing tasks assigned within the intermediate macroeconomics course. An evaluation of the program’s attempt to deal with this problem is then offered before some general conclusions and suggestions for future program development are made in the final section.

Literature Review Writing can be an effective instrument both for assessment and for learning itself. As Hansen (1998, 80) argues, “by integrating writing into . . . courses . . . students will not only improve their writing skills but also enhance their learning of the subject”. Ramsden (1992, 183) supports this perspective in so far as writing comprises one dimension of any assessment regime. He examines the role of expectations in determining the effectiveness of such regimes for teaching, suggesting that clear or

1 The Statement of Graduate Attributes at the University of Technology, Sydney (UTS 2000, 2), for example, explicitly lists the ability to communicate effectively within appropriate professional contexts as a key skill UTS graduates should possess.


“unequivocal” expectations play an important role in supporting the contribution that assessment makes to the learning process (Ramsden 1992, 185).

Lea & Street (1998) use the idea of expectations to explore more specifically the nature and potential causes of problems with the quality of student writing in higher education. They argue that three models have traditionally been used to examine these problems and that the characterisation of these problems depends crucially on which model is employed. The first model is what they call the study skills approach. Within this approach, student literacy is constituted by a general set of skills focusing on reading, writing and sentence-level grammar. The student writing “problem” arises because students lack these skills, and the solution is simply to inculcate them. The second model, academic socialisation, defines the term “skills” more broadly to include the following of academic conventions such as correct referencing and the avoidance of plagiarism. According to this approach, the academic’s role is to induct students into the academy, and this is done by teaching them how to follow appropriate writing conventions. The third model is the academic literacies approach. Here the notion of “literacy” is socially constructed rather than purely objective as suggested by the previous two approaches. What constitutes “good writing” depends on power relations within the university. “Good writing” is what academics say it is, and the fact that students are not “good writers” does not necessarily imply an objective problem but may simply signal a disparity between the respective judgments of students and academics as to what good writing entails.

To explore the nature of differing expectations of writing in greater detail, Lea & Street undertake a series of interviews with students and academics at British universities, both old and new. Their specific objective is to ascertain what is meant by terms such as “structure”, “argument” and “clarity”, which are commonly used by academics to characterise good writing. They make a number of discoveries about the use of these terms. Firstly, the terms carry a surprisingly wide semantic range amongst academics; some academics are not even able to provide the most basic definition of these terms as they pertain to student writing (Lea & Street 1998, 162-163). Secondly, students use the very same terms to describe their own perceptions of what constitutes good writing. This is surprising if the reason for poor writing outcomes is that students fail to apprehend the characteristics of good writing. Thirdly, students appear to be aware of the diverse meanings attributed to these terms by academics, and this awareness causes considerable uncertainty as students approach written assessment tasks in unfamiliar courses.

Sander, Stevenson, King and Coates (2000) offer additional evidence for the diversity of expectations among various assessment stakeholders. They surveyed students at the beginning of a range of courses at a number of UK universities to determine their expectations of the type of teaching they would experience and the kinds of assessment they would be asked to complete. Sander et al report two major findings. The first is that students expect to be disappointed in their university learning experiences. In outlining this finding, Sander et al (2000, 309-310) differentiate between three types of student expectations: ideal, normative and predictive. Ideal expectations reflect what students believe should happen in a course with respect to teaching or assessment. Normative expectations reflect students’ experiences over the range of courses already taken, a kind of historical benchmark used to anticipate what they will face in the remainder of their university careers. Predictive expectations are those formulated by students about what will happen in a specific course. These predictive expectations reflect students’ normative expectations, but they also take into account any additional information about the


specific course in question. When Sander et al observe that students expect to be disappointed in their learning experiences, they are highlighting a dissonance between students’ ideal and predictive expectations. For example, students may believe that interactive lectures represent an ideal teaching format, but their normative expectations are of traditional, one-way, information-flow formats.

Sander et al’s second finding is that students value essays and research reports as ideal assessment vehicles, but their normative expectations are of multiple choice exam questions and problem sets.

These findings indicate that the expectations gap identified by Lea & Street is a wider phenomenon in higher education than that represented by differences between academics and students as to what constitutes good writing. They do, however, support the existence of differing expectations among higher education participants and suggest that there are gains to be made from addressing these differences.

Problems are also identified in the literature with the quality of feedback received by students on written assessment. This is not surprising given the range of meaning identified above for terms used by academics to define good writing. Terms such as “clarity” and “structure” often find their way into marginal comments on assignments, and if academics fail to agree on their precise meaning it is not surprising that students fail to understand how these terms are being used to evaluate the quality of their written work. In fact, Lea & Street (1998, 168) and Carless (2006, 221) go further, arguing that marginal and summary comments frequently function simply to reinforce academic power and authority over students rather than to help them understand why their writing has been evaluated in a particular way, even if this effect is unwitting on the part of academics. At best, written comments are frequently perfunctory and formalistic, conveying little information that can help students to genuinely improve in future assignments. Students surveyed by Higgins, Hartley & Skelton (2002, 56), for example, frequently viewed feedback negatively in terms of its usefulness for continued learning because it was either too vague or too impersonal, or because it was simply unreadable. These students also tended to find feedback that was too general of little use in their ongoing development. In contradiction to this, students surveyed by Carless (2006, 225) suggested that comments perceived as too specific to the particular assignment could not really be used for improving future assignments either.
A number of studies point out that the tendency for feedback to be provided towards the end of a course further undermines its value for improving student writing since it obviously cannot be used within that course to improve, and academic expectations are likely to be different in subsequent courses (see Simpson & Carroll 1999, 408; Higgins, Hartley & Skelton 2002, 62; Carless 2006, 225).

Carless (2006, 220-221) specifically applies a version of the expectations gap idea to understanding problems with feedback and suggests a series of measures designed to reduce the gap in this context and enhance the potential of feedback to improve student performance. He builds a framework similar in conception to Lea and Street’s analysis described above, arguing that feedback on student assignments may be understood in terms of the concepts of discourse, power and emotion. Feedback is often couched in the discourse of academic language which effectively constitutes a code that students can find difficult to decipher. It thus often serves only to reinforce the power relations between tutor and student in the way described by Lea & Street. But, because of its frequent connection to the awarding of grades, feedback can also arouse negative emotions in students who therefore lose motivation to interpret and use feedback formatively.


Carless (2006, 230) argues that ‘assessment dialogues’ between tutors and students may be used to defuse these problems. Such dialogues are aimed at helping students to better understand the assessment process: what is expected of them and how they will be graded. He suggests four objectives which may be used to organise such dialogues:

• Helping students to unpack assessment criteria;
• Genuine use of these criteria to grade assessment tasks;
• Greater awareness by tutors of the formative function performed by comments on written assessment;
• Second marking or moderation processes to improve student perceptions of fairness in the grading process, to minimise negative student sentiment and to improve receptivity to feedback.

As suggested above, a number of studies argue that a contributing factor to the poor quality of student writing is the lack of opportunity for students to develop writing skills because academics are increasingly setting multiple choice exams and problem sets rather than essay-type assignments. This is especially true in economics and business studies. Walstad (2001, 283-285) investigates the reasons for these academic assessment choices. He identifies seven key factors that underpin academic selection of assessment tasks:

• Ease of test construction;
• Economy of scoring;
• Coverage of domain;
• Potential for scoring bias;
• Freedom of student response;
• Opportunity for cheating or guessing;
• Cognitive level tested.

He then creates an ordinal ranking of various assessment options including essays, multiple choice exams and problem sets by giving each of the factors in the list above a rating between −2 and +2. Essays score well on ease of construction (+1), freedom of student response (+2) and the levels of cognition tested (+2), but badly on economy of scoring (-2), coverage of domain (-1), which is relatively narrow for a single essay topic, potential for examiner bias (-1), and opportunity for cheating or guessing (-1). Thus if the seven factors are weighted evenly, the overall score for essays would sum to zero compared to -1 for each of multiple choice exams and problem sets.
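This calculus can be sketched in a few lines. Only the essay ratings and the aggregate totals appear in the text; the three-fold weight on scoring economy below is an invented value chosen purely to show the direction of the effect Walstad describes.

```python
# Sketch of Walstad's (2001) assessment calculus under equal weighting.
# The essay ratings below are those reported in the text; the individual
# ratings for multiple choice exams and problem sets are not given there.
essay_ratings = {
    "ease of test construction": +1,
    "economy of scoring": -2,
    "coverage of domain": -1,
    "potential for scoring bias": -1,
    "freedom of student response": +2,
    "opportunity for cheating or guessing": -1,
    "cognitive level tested": +2,
}

def score(ratings, weights=None):
    """Weighted sum of factor ratings; equal weights by default."""
    weights = weights or {factor: 1 for factor in ratings}
    return sum(weights[factor] * rating for factor, rating in ratings.items())

print(score(essay_ratings))  # 0: essays break even when factors weigh equally

# Weighting "economy of scoring" more heavily (an invented weight of 3,
# in the spirit of Walstad's argument) drags the essay score down:
heavy = {f: (3 if f == "economy of scoring" else 1) for f in essay_ratings}
print(score(essay_ratings, heavy))  # -4
```

The second call anticipates the uneven weighting discussed next: once scoring economy dominates, essays fall below the -1 scores of the alternatives.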

However the seven factors are not weighted evenly in this assessment calculus according to Walstad. The incentive structure for academics in North American colleges and universities strongly favours the accumulation of refereed publications over improving the educational quality of undergraduate teaching. This results in a high opportunity cost for academics in devoting time simply to grading essays, let alone providing more extensive feedback to help students develop their writing proficiency. The “economy of scoring” factor in the above list is thus weighted very heavily when academics think about assessment structures, and this drags essays down the ranking of assessment methods, resulting in the much more frequent setting of multiple choice exams and problem sets. This suggests, of course, that part of the solution to improving student writing might be to provide academics with incentives to alter assessment structures in favour of more essays and writing-based assessment


vehicles. But such a profound change to academic culture would require a belief that setting more such tasks would indeed result in improved writing as asserted in the literature. It would also require a belief that the improved ability of undergraduates to write was worth the cost of fewer journal publications at the aggregate level.

While it is beyond the scope of this paper to further investigate the balance between journal papers and undergraduate writing ability, it is within its purview to further consider the potential for improving student writing by devoting more resources to essay-based assessment. Two studies in particular outline programs which have been designed to directly improve student writing outcomes, and these hold up possible approaches for academics and institutions interested in making improvements in this area. Hansen (1998) and Simpson & Carroll (1999) both describe so-called writing intensive (WI) courses offered as part of Writing Across the Curriculum (WAC) programs in some universities which stress the integration of the teaching of writing into disciplinary courses and view writing as an important tool for teaching analytical skills within a range of discipline areas (cf. Cohen & Spencer 1993, 219 & 229). Each of these programs warrants a little attention before we summarise the main features of the literature on writing in higher education and move on to describe the program at UTS.

Hansen (1998) reports on a long established WI course at the University of Wisconsin, Madison in which he emphasises the “critical role of writing in the learning process” (Hansen 1998, 80). This course in elementary economics, with an enrolment of between 50 and 100, embeds a program of writing instruction within an economics course, the objectives of which are linked explicitly to specific graduate proficiencies developed in conjunction with colleagues and employers. Students are given separate but complementary writing instruction, they are required to complete a range of writing tasks of varying length and complexity, and feedback is an essential element in the development of students’ writing skills. Hansen clearly spells out a set of general assessment criteria at the beginning of each semester, and students are exposed to and study a collection of good quality writing in the field of economics which acts as a set of models they can use to shape their own writing. This program embodies some of the key recommendations of Carless’s assessment dialogue approach outlined above and implicitly addresses some of the key problems which generate the expectations gap identified in other parts of the literature.

Simpson and Carroll (1999, 402) report on the implementation of a program designed to reverse the effects of Walstad’s observed assessment calculus and improve the quality of student writing. This program was made up of a set of WI courses introduced into the existing undergraduate economics program at Davidson College in the US. Three key features characterised these courses: firstly, written assessment made up at least 50% of course final grades; secondly, academic support and feedback was provided for the entire writing process within these courses including drafting, editing and revising of drafted material; thirdly, course enrolment was capped at 12. The program’s objective was to encourage students to “become flexible, critical thinkers who were simultaneously knowledgeable in a discipline and able to apply their skills beyond it” (Simpson & Carroll 1999, 408).

The Davidson intensive writing program highlights the important role played by feedback in attempts to improve students’ writing skills. Students learn best which writing strategies work and which do not when strategies can be tested and constructive feedback used to improve subsequent drafts. Carless (2006, 225) also notes that students particularly value tutor comments on draft or outline work because they ‘provided an immediate opportunity to act on advice’. The 50% weighting of


written assessment in the Davidson program was clearly designed to provide students with the incentive to invest in the development of their writing skills. The capping of enrolment at 12 was clearly designed to provide the level of feedback that program directors regarded as optimal for significant improvements to writing outcomes.

Despite its intelligent conceptual underpinnings and design, the Davidson program was not subjected to extensive evaluation and it remains outside the possibility set of most large degree programs which have enrolments numbering in the hundreds. A similar observation applies to Hansen’s program. Scope therefore exists to develop programs which reflect the principles embodied in the Hansen and Simpson & Carroll programs but which may have application to larger classes. Such programs must, however, be carefully evaluated if a thorough case for their effectiveness is to be made.

The literature on student writing in higher education (especially with respect to economics and business programs) thus identifies two issues in particular that might be used to develop programs for the improvement of student writing outcomes. Firstly, there appears to exist what might be called an expectations gap between students and academics about what constitutes good student writing. Bridging this gap would appear to be a potentially useful strategy for equipping students to write better and academics to better evaluate student writing. Secondly, attention paid to the whole writing process, rather than simply to evaluating its final product, is likely to enhance student writing performance. Timely provision of good quality feedback would be an important dimension of this approach.

The following two sections outline a program based on these principles but implemented in the context of a significantly larger course than that described by either Hansen (1998) or Simpson & Carroll (1999). This program also encompasses all of the recommendations in Carless’s “assessment dialogue”. The section after these provides a detailed evaluation of this program.

Design Features of the UTS Program The program implemented within an intermediate macroeconomics subject in the School of Finance and Economics at the University of Technology, Sydney (UTS) can be regarded as a pilot program to improve student writing in a manner similar to the University of Wisconsin and Davidson College programs described in the previous section. More particularly, it was deliberately designed around the two insights drawn from the literature and also discussed in the previous section: that effective programs for improving student writing must address the expectations gap; and that the provision of feedback is essential if students are to improve their writing.

The program was embedded within a subject called Macroeconomics: Theory and Applications, a standard intermediate macroeconomics subject dealing with a range of macroeconomic models including the simple Keynesian model, the IS/LM/BP model, the AD/AS/NX model, the theory of exchange rates, as well as monetary and fiscal policies. This subject has a strong theoretical core but applies its theoretical content to a range of practical and policy issues facing modern macroeconomies. It usually has an on-semester and an off-semester with enrolments in the on-semester being around 500 and in the off-semester being around 280.

The insights identified from the literature were incorporated into the writing program offered as part of Macroeconomics: Theory and Applications by:

• reorganising the assessment structure of the subject and establishing channels for the provision of detailed feedback on student writing;


• providing a series of self-access writing support materials to students online;

• offering a series of voluntary writing workshops which specifically targeted assignments for the subject.

Each of these dimensions of the subject is described in detail below.

(a) Assessment and Feedback

Assignment Design The assessment structure for this subject has traditionally involved a multiple choice mid-semester exam worth 20% of the final grade, an essay worth 20% of the final grade and a final exam worth 60%. This structure was changed in the pilot program to provide incentive and opportunity for students to invest in developing their writing skills. The principal change involved replacing the single essay with two written assignments relating to the same issue. The idea was that students could use feedback on assignment one to improve their performance on assignment two. The relative weights of these assignments in the final grade were designed to provide sufficient incentive for students to invest time and effort in the first assignment, while still allowing them to recover a respectable grade if they performed poorly on it but used the feedback effectively in completing the second assignment. Respective weights of 10% and 20% on the first and second assignments were chosen to achieve this effect. The focus of both assignments on the same topic was intended to assist students to make connections between feedback on their first assignment and writing for the second.
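The recovery logic of the 10%/20% split can be illustrated with a small sketch. The marks below are invented; only the weights come from the assessment structure just described.

```python
# Illustration of the 10%/20% incentive design.  The weights come from
# the pilot program's assessment structure; the student marks are invented.
W1, W2 = 10, 20  # percentage weights on the first and second assignments

def written_contribution(mark1, mark2):
    """Points (out of W1 + W2 = 30) that two marks out of 100 contribute."""
    return (W1 * mark1 + W2 * mark2) / 100

def written_average(mark1, mark2):
    """Weighted average mark across the two assignments."""
    return (W1 * mark1 + W2 * mark2) / (W1 + W2)

# A student who stumbles on assignment one (40/100) but uses the feedback
# to score 85/100 on assignment two still recovers most of the component:
print(written_contribution(40, 85))  # 21.0 of a possible 30 points
print(written_average(40, 85))       # 70.0
```

Because the second assignment carries twice the weight of the first, a weak first attempt costs at most a third of the written component, which is the intended incentive to act on feedback.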

New Assessment Criteria Fifteen new assessment criteria relating to both assignment tasks were developed in consultation with staff from the Institute for Interactive Media and Learning (IML) at UTS, in order to provide students with detailed information about the way in which their writing would be assessed. These criteria were printed on the assignment coversheet, with each criterion rated on a six-point scale; they are reproduced in the Appendix. They deal with issues ranging from accurate grammar to argument structure and progression of logic, and they were made available to students early in the semester on the subject’s website (a Blackboard Version 6-supported platform called UTS Online).

Marking Consistency To reduce the expectations gap from the academic side, examiners were briefed on the content and approach of the academic literacy workshops, the assessment criteria and an assignment marking guide. An approach to marking was agreed after this briefing, and a sample of 10 assignments was cross-marked by each of the three examiners, the workshop leader and the subject coordinator to verify whether the agreed assessment criteria were consistently applied. The marks from this sample were tabulated across each marker, and a second meeting was held to discuss problems and differences and to achieve further convergence. A refined marking approach was thus developed for the remainder of the marking process. This reflects Carless’s (2006, 231) fourth application for assessment dialogues considered above.
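The tabulation step might look as follows. This is a hypothetical sketch: the marks, the mark scale and the tolerance threshold are all invented for illustration; the text specifies only that sample marks were tabulated across markers and divergences discussed at a second meeting.

```python
# Hypothetical tabulation of cross-marked sample papers: each paper
# receives one mark from each of the five markers (three examiners, the
# workshop leader and the subject coordinator).  All numbers are invented;
# three of the ten sample papers are shown.
sample_marks = {  # assignment id -> marks awarded by the five markers
    "A01": [22, 24, 21, 23, 22],
    "A02": [15, 14, 18, 16, 15],
    "A03": [27, 26, 27, 27, 26],
}

TOLERANCE = 2  # largest spread tolerated before discussion at the meeting

for paper, marks in sample_marks.items():
    spread = max(marks) - min(marks)
    flag = "  <- discuss at second meeting" if spread > TOLERANCE else ""
    print(f"{paper}: marks {marks}, spread {spread}{flag}")
```

A simple spread statistic of this kind flags the papers on which markers diverge most, giving the moderation meeting a concrete agenda.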

Feedback to Students Specific feedback was provided to each student once papers were graded. This feedback took the form of a rating (on a six-point scale) for each of the fifteen assessment criteria listed on the cover sheet, marginal comments throughout the paper at the discretion of markers, and a general summary comment also on the coversheet.


In addition to this individual feedback, broader feedback on the strengths and weaknesses of the general student response was made available via UTS Online. These various forms of feedback were designed to help students evaluate the quality of their first piece of writing and to make appropriate adjustments in completing their second writing assignment. This reflects Carless’s (2006, 231) third application for assessment dialogues.

(b) Online Self-Access Resources

Guides to Writing Specifying assessment criteria and attempting to make clear what these criteria mean is an important first step in reducing the expectations gap. Facilitating student achievement of writing that satisfies these criteria requires, however, direction and instruction. The most obvious starting place in providing this direction is to make use of already-published writing guides. The Faculty of Business at UTS publishes A Guide to Writing Assignments which outlines the usual academic conventions for citation and assignment presentation as well as providing guidance about the writing process itself. It examines how to read for an assignment and how to prepare for writing, as well as providing advice about writing styles and helping students to understand the range of business writing genres. This document is, however, some 70 pages in length, and no explicit examination of its material was undertaken in class time; it was simply made available online and students’ attention was drawn to its availability. A more concise and informal guide to writing that one of the authors had been using in his teaching for some time was also made available to students online, as a summary and quick-reference guide. This document dealt with the same range of issues but was somewhat more prescriptive and obviously less detailed than the larger document.

Sample Assignment Paper  As discussed above, publishing assessment criteria is a good first step towards reducing the expectations gap, but key terms used in these criteria may be open to a variety of interpretations. To further narrow this gap, a sample assignment (on another topic from a previous semester) was provided online with annotations linking its strengths to the 15 pre-published assessment criteria. The idea was to provide students with a concrete example of what the assessment criteria were attempting to specify. Students were made aware of this assignment's availability and encouraged to use it, but no explicit instruction regarding it was provided in a classroom context. This reflects Carless's (2006, 231) first application for assessment dialogues.

References  While the ability to search the available literature and to identify books and papers relevant to the question under consideration is a valuable skill, it is also a skill many students lack. Failure to identify appropriate literature adds an additional barrier to the delivery of high quality writing and further widens the expectations gap. Since the focus of the program was the writing process rather than the development of research skills, it was decided to eliminate this barrier by specifying a short list of references that dealt with the issues students needed to consider in their writing. References were thus selected and made available through the University Library's Digital Resources Register (DRR) via UTS Online.


(c) Academic Literacy Workshops

A further vehicle for helping students to understand the assessment criteria and how to structure their writing to meet these criteria was the provision of a series of workshops targeted at the assessment topics specified above. These voluntary, 90 minute workshops were taught by staff from the University’s English Language and Study Skills Assistance (ELSSA) Centre and were run in each of the three weeks preceding the submission date for each assignment. Each workshop was offered three times to maximise student access and workshops were designed to assist students to:

• unpack the assignment question and comprehend the set readings;
• identify the content that a thorough response would require;
• develop and employ the appropriate style of writing for the assignment.

Each of these design features is described in more detail below with respect to the set of workshops for Assignment 1.

Workshop 1: Unpacking the Question and Comprehending the Readings One of the most frequently expressed concerns of university lecturers is that students’ written work focuses too much on description at the expense of analysis; that students tend to focus more upon telling about what happened rather than upon discussing why it happened. On the other hand, a concern frequently expressed by students is that they are not sure exactly what they are expected to produce in their written work. This is precisely the expectations gap identified above. Accordingly, an important function of the workshops was to reduce the expectations gap by helping students to unpack the requirements of each assigned task.

Assignment 1 was structured to lead students from more accessible to more challenging thinking and writing. It thus comprised two parts. The first dealt with telling, and the second with discussing. In order to clarify the distinction between these two types of writing, Worksheet Activity 1 was devised (see the Appendix). This activity was designed to help students understand the intellectual difference between the two parts of the assignment task, to appreciate the different purposes of each part of the task, and to recognise the language features the different styles of writing each part of the task would demand.

The second dimension of Workshop 1 was to assist students in understanding the required readings. Five such texts were set for this task, which provided a range of ideas and genres appropriate to students enrolled in a Business degree. Attention was focused upon what was considered to be a fairly demanding article, but one which gave the clearest overview of the field. The intention was that students’ learning should be scaffolded by instruction and activities located in their Zone of Proximal Development (Vygotsky 1978, 86), the learning zone in which a student can achieve, with support, that which s/he is not yet able to achieve independently. The support provided in this case was threefold: the teacher’s talking through the meaning of the written text prior to students’ reading of it; the provision of a written pro forma which guided students’ taking of notes by paragraph numbers and spacing; and the joint construction of meaning which would result from students verbally explaining their notes to one another.

Workshop 2: Identifying Required Content  A contextualized approach to academic literacy draws upon models of texts, examining 'expert' writing as well as students' own 'apprentice' writing. However, no such resources were available for the first set of workshops. Accordingly, Ross Forman, the workshop leader (with no previous background in economics), developed a model essay in the week between Workshops 1 and 2. This essay was read by the two authors from the School of Finance and Economics and feedback was provided.

The development of the model essay had three outcomes. First, it gave Ross a thorough working knowledge of the content required in the assignment. Secondly, it gave him an awareness of the various stages which students would need to go through in their own assignment preparation and identified problems which students were likely to face. Thirdly, it provided a teaching resource which was used extensively in Workshops 2 and 3.

Limited sections of text from this essay were used in class to:
• demonstrate the generic structure of the whole text;
• model the content and language required at each stage;
• draw attention to specific features of academic writing.

There is, of course, always the danger when using a model text that students will consciously or unconsciously reproduce parts of it in their own work. Thus on a few occasions when it was useful for students to see a longer passage of text, the model was displayed on screen long enough for them to read, but not long enough for them to make notes. Students were also advised that Ross would be amongst the markers of Assignment 1, and would be able to recognise his own words if these appeared in a student’s work.

Workshop 3: Identifying the Appropriate Genre Part of the difficulty experienced by undergraduate students in their written work is, as noted earlier, their understanding of the type of language that the task requires. Identification of the appropriate language style requires identification of the writing genre. The term assignment had been used by lecturers to refer to both pieces of written assessment described in this paper. Thus students were presented with the task of deciding which was the most appropriate of the written genres outlined in the guides to writing discussed above. The two nearest options were the essay and the report. Descriptions of these genres as laid out in Faculty of Business (2006) are summarised in Table 1 below.

                Essay                              Report

a. Purpose      to explore an idea or thesis       to document a process of enquiry

b. Audience     academic: 'the lecturer'           professional: 'the client'

c. Format       - include Abstract                 - include Executive Summary
                - do not use sub-headings          - subheadings encouraged
                - do not use graphics              - graphics encouraged
                - do not use dot points            - dot points acceptable

d. Style        - complex                          - accessible
                - argumentative                    - explanatory

Table 1 – Written Genre Characteristics


The descriptions in Table 1 suggest that Assignment 1 was neither an essay nor a report but a hybrid of the two. This hybrid was composed of essay characteristics (a), (b) and (d), together with report characteristic (c). That is, while the hybrid essay was to be written for the purpose of exploring an idea, for an academic audience, and in a complex, argumentative style, it nevertheless required provision of an Executive Summary, encouraged the use of sub-headings and graphics, and permitted the (sparing) use of dot points.

Students who attended the workshops were thus guided through an exploration of the assignment question’s demands, were challenged to think about how the question could be addressed, and were taught the kind of language appropriate to the genre required. This also reflects Carless’s (2006, 231) first application for assessment dialogues. In conjunction with the pre-published criteria, the sample essay and the two generic writing guides, students had a resource-rich environment in which to undertake this assessment task. Evaluating the effect to which these resources were put is the subject of the following section.

Program Evaluation and Results The ideal way to evaluate a program of this kind is to gauge the extent to which students’ writing has improved as a result of the intervention. This could be done by comparing the characteristics of student writing samples before and after implementation. However, neither writing samples of previous students in this subject nor writing samples of current students from previously undertaken subjects were available for comparison.

A second dimension of evaluation, however, arises in this case because of the pilot or diagnostic nature of the program. As outlined in the introduction, one of the program's objectives was to trial techniques, instruments and methodology for a more comprehensive implementation in the future. In this respect, the aim was to identify the most effective aspects of the program; the kinds of problems that arose in student interaction with various parts of the program; and additional information that could usefully be collected in order to contribute to evaluation.

The best method available to undertake this second kind of evaluation arose from the voluntary nature of the program. Some students attended the workshops while others did not; and some students made use of the online self-access resources while others did not. Thus it was possible to compare the performance of students who made use of resources with the performance of students who did not make use of resources, controlling as best as possible for other factors likely to affect performance. It was also possible to identify patterns of student interaction with online resources by making use of UTS Online’s tracking function.

Data was thus available for student access to online references, assessment criteria, the sample paper and the two guides to writing. This data is described in the first sub-section below. Additional data was also available on student performance in the two assignments and workshop attendance. This data is described in the second sub-section below. The third sub-section below analyses the relative impact of the various resources made available to students for reducing the expectations gap in relation to Assignment 1. It reports the results of a series of regressions of student performance in Assignment 1 on student participation in or accessing of the resources made available for reducing the expectations gap.


(a) Data for Online Self-Accessed Resources

References Figures 1 to 6 present information about student access to the references provided online for Assignment 1. Figure 1, generated by UTS Online itself, shows the number of times each day that students accessed this list in the month or so leading up to the submission deadline. The highest bar on the right hand side of Figure 1 corresponds to the submission date of March 31. This figure seems to indicate that while some students accessed the references up to a month or so prior to submission, the majority of students accessed the references in the few days immediately beforehand.

There is, however, a problem with drawing conclusions of this kind from Figure 1. Since it shows all hits on the UTS Online references page for a given day, it records multiple hits for individual students both on a given day and across the whole period shown in the figure. This involves a total of 1,512 hits across the whole period. A more useful approach would be to show only the number of first hits by students on each day across the period. Since data for hits by day for individual students is provided by the UTS Online tracking function, this statistic was easily calculated using Excel.
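The first-hit calculation described here (performed in Excel at the time) amounts to deduplicating each student's hits down to the earliest one and then counting by day. A minimal Python sketch, with invented student IDs and dates standing in for the actual UTS Online tracking export:

```python
from datetime import date

# Hypothetical tracking export: one (student_id, access_date) row per hit,
# possibly many hits per student across the pre-submission period.
hits = [
    ("s1", date(2006, 3, 7)),
    ("s1", date(2006, 3, 30)),
    ("s1", date(2006, 3, 31)),
    ("s2", date(2006, 3, 30)),
    ("s2", date(2006, 3, 31)),
    ("s3", date(2006, 3, 31)),
]

def first_hits_per_day(hits):
    """Keep only each student's earliest hit, then count first hits by day."""
    earliest = {}
    for student, day in hits:
        if student not in earliest or day < earliest[student]:
            earliest[student] = day
    counts = {}
    for day in earliest.values():
        counts[day] = counts.get(day, 0) + 1
    return counts

counts = first_hits_per_day(hits)
print(counts)  # one first hit each on March 7 (s1), March 30 (s2) and March 31 (s3)
```

Raw daily hit totals (Figure 1) would count s1 three times; the first-hit series (Figure 2) counts each student exactly once.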

Figure 2 shows this pattern of student access to the reference list. It is quite different from the pattern in Figure 1. The total number of hits in Figure 2 is 247, substantially lower than in Figure 1, indicating that students who accessed the reference list (not surprisingly, most of the 269 students enrolled in the subject) returned to the site an average of just over 5 times each. In addition, the number of first hits is spread much more evenly across the whole pre-submission period, although the peak is still the day before submission. One would expect from this pattern that, holding other factors constant, students who accessed the reference list early would have had more time to understand, digest and make use of the literature, and should have performed better in the assessment task compared with students who accessed the references only the day before submission. This proposition will be given further consideration later in the paper. The comparison of total hits with first hits highlights the care that must be taken when interpreting and making use of statistics supplied directly by systems such as UTS Online.

Assessment Criteria  Another resource that one would expect to have had an impact on student performance in Assignment 1 was the pre-published assessment criteria. Students who accessed these criteria early enough to use them in shaping their assignments, other factors held constant, should also have performed better than students who ignored the criteria in preparing their assignments or had little time to shape their papers in response to the criteria. Figure 3 shows the pattern of first hits for the assessment criteria. It must be remembered that these criteria appeared on the compulsory assignment cover page, so the high number of first hits on March 31, and possibly March 30, represents students who were not necessarily using the criteria to frame their assignments but who were simply preparing documentation for submission.
The total number of first hits in Figure 3 was 209 (compared to total hits of 491). Excluding the 56 hits on March 30 and 31 leaves only 153 students who accessed the assessment criteria with more than 2 days to submission.

Sample Paper Figure 4 shows first hits for the annotated sample paper. The pattern indicates considerable activity on the day of and the day prior to submission. This is not


Figure 1 - Access Dates for References: UTS Online Statistics

Figure 2 - Access Dates for References: First Hits Only

Figure 3 - Access Dates for Assessment Criteria: First Hits Only


Figure 4 - Access Dates for Sample Paper: First Hits Only

Figure 5 - Access Dates for Faculty Writing Guidelines: First Hits Only

Figure 6 - Access Dates for Short Writing Guidelines: First Hits Only


surprising given that the reference access data suggests that many students only began working on their assignments within this short period. The total number of first hits for the sample paper was 193 (compared to 393 total hits), indicating that 72% of students made some use of the paper in the preparation of their assignments. Use of this paper might also be expected to have had an impact on student performance in the assignment.

Writing Guides  Figures 5 and 6 show first hits for both types of assignment writing resource made available online. The first of these, the Faculty of Business (2006) Guide to Writing Assignments, was also available in hard copy from the student book store, but no data was available on the degree to which students in this subject made use of the hard copy version. The second, shorter guide was only available via UTS Online. The numbers of first hits for the Faculty and shorter guides were 211 and 232 respectively (compared to respective total hits of 497 and 676). The number of students who looked at either of these guides was 232, and the number who looked at both was 211. Thus while 21 students looked at the shorter guide but did not look at the Faculty guide (at least via UTS Online), no student looked at the Faculty guide who did not also look at the shorter guide. One would expect that use of either of the guides would have had at least a marginal impact on student performance, since some of the assessment criteria pertained to academic conventions outlined in these resources.

(b) Writing Workshops

The various online resources described above provided students with 'any time, any place' access to guidance in their academic writing. A complementary form of support was offered by the provision of the weekly writing workshops.
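The guide-overlap figures reported above reduce to simple set arithmetic over the tracking logs. A small sketch with invented student IDs (the real sets had 211 and 232 members):

```python
# Illustrative (invented) sets of student IDs drawn from the tracking logs.
faculty_viewers = {"s01", "s02", "s03"}
shorter_viewers = {"s01", "s02", "s03", "s04", "s05"}

either = faculty_viewers | shorter_viewers          # viewed at least one guide
both = faculty_viewers & shorter_viewers            # viewed both guides
shorter_only = shorter_viewers - faculty_viewers    # shorter guide only

# In the actual data |either| = 232 and |both| = 211, so 21 students used
# only the shorter guide and every Faculty-guide viewer also used it.
print(len(either), len(both), len(shorter_only))
print(faculty_viewers <= shorter_viewers)  # subset check
```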

Individual attendance data for each workshop in the first series of three was not collected, but registration for the series, which closely matched average attendance numbers, was available. 55 students registered for the workshops, representing 20% of the subject enrolment. Given that the workshops as described in the previous section dealt with material pertaining directly to Assignment 1, there is a strong a priori expectation that workshop attendance should have impacted student performance in this assignment.

               Overall     Workshop    Non-Workshop
No.            245         55          190
Mean mark/10   5.92        6.43        5.78
Variance       2.64        1.89        2.76
Skewness       -0.1303     0.1942      -0.1121
Kurtosis       -0.5574     -0.9366     -0.6342
% Fails        23.67       9.09        27.89
% Distn +      22.45       21.81       12.63

Table 2 – Summary Statistics for Assignment 1
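The statistics reported in Table 2 can be computed directly from a cohort's marks out of 10. A sketch with an invented marks vector; skewness and kurtosis here use the standard sample-moment (excess kurtosis) definitions, which is an assumption about the paper's exact formulas:

```python
def summarise(marks, fail_below=5.0, distinction_at=7.5):
    """Mean, variance, skewness, excess kurtosis and grade shares for marks out of 10."""
    n = len(marks)
    mean = sum(marks) / n
    var = sum((m - mean) ** 2 for m in marks) / n            # population variance
    sd = var ** 0.5
    skew = sum((m - mean) ** 3 for m in marks) / (n * sd ** 3)
    kurt = sum((m - mean) ** 4 for m in marks) / (n * sd ** 4) - 3  # excess kurtosis
    fails = 100 * sum(m < fail_below for m in marks) / n
    distn = 100 * sum(m >= distinction_at for m in marks) / n
    return {"n": n, "mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "% fails": fails, "% distn+": distn}

marks = [3.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 9.0]  # invented cohort
print(summarise(marks))  # mean 6.2, 20% fails, 30% at Distinction or above
```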


Figure 7 - Distribution of Results for Assignment 1: Entire Cohort

Figure 8 - Distribution of Results for Assignment 1: Workshop Cohort

Figure 9 - Distribution of Results for Assignment 1: Non-Workshop Cohort


Statistics summarising the results for Assignment 1 are provided in Table 2. Of the 269 students enrolled in the subject, 245 submitted Assignment 1. The mean for the cohort who submitted the assignment was 5.92 out of 10, with a variance of 2.64. Of these, 23.67% received a Fail grade (less than 5) and 22.45% received a Distinction grade (7.5) or better. A more granular distribution of the results is presented in Figure 7. These overall results were decomposed into two groups corresponding to students who attended the workshops and students who did not. Table 2 indicates that the mean result for those attending the workshops was 0.65 higher than the mean for those not attending, a statistically significant difference at the 5% level.[2] The variance of results for workshop attendees was also lower. Figures 8 and 9 show the respective distributions in more detail. The distribution for the workshop cohort is clearly skewed to the right. This is confirmed by the positive skewness measure in Table 2, the lower failure rate of 9.09% compared with 27.89% for the non-workshop cohort (also shown in Table 2), and a higher proportion of Distinction grades and above for the workshop cohort compared with the non-workshop cohort. Figure 9 further suggests that results for the non-workshop cohort are mildly skewed to the left, and this is confirmed by the negative skewness measure shown in Table 2.

[2] The Z-statistic for the difference between the two population means was 2.94 against a Z-critical value of 1.959.

These results clearly suggest that attendance at the first series of workshops was an important factor affecting student performance in Assignment 1. It must be remembered that markers had no information about which students had attended the workshops and which had not, so that marker bias can be ruled out as underlying this result. The possibility remains, however, that some degree of self-selection was occurring. On the one hand, it may have been the case that capable and motivated students, who would otherwise have performed well in the assignment, attended the workshops. On the other hand, it is also possible, of course, that less capable students took active steps towards improvement by electing to participate in the workshop series.

(c) Relative Impact of Resources

To control for student self-selection, and to identify the most useful dimensions of the program for improving student writing, a series of simple regressions was run. The mark out of 10 awarded for Assignment 1 was used as the dependent variable, and this was regressed against a series of dummy variables for workshop attendance and online access to each of the resources discussed earlier. Where students attended the first set of workshops, the corresponding dummy variable took a value of 1, and 0 otherwise. Where a student accessed one of the online resources (listed below in Table 3), the corresponding dummy variable took a value of 1, and 0 otherwise.

It was argued above that early access to the various online resources could be expected to explain good performance in Assignment 1. As an alternative to the single 0,1 dummies for access to each resource, a series of dummies was therefore constructed for each resource indicating whether it had been accessed in a more specific period of time prior to the assignment submission date. The resources for which these dummies were constructed, and their precise definitions, are shown in Table 4.

In addition to these dummy variables, the mid-semester exam result was used as a proxy for general ability. This exam was made up of 30 purely multiple choice questions on economic theory, and while it could not be argued that literacy plays no role in a student's success in this type of exam, it is true that such assessment items examine rather different skills and knowledge from those examined by the written assignments. Moreover, while this exam was held after the workshops and the submission of Assignment 1, the results for Assignment 1 were not known by students at the time of the exam. There could not, therefore, have been any significant encouragement-discouragement effects running from the assignment mark to the mid-semester result. Thus, while better indicators of general ability than the mid-term result could have been used in the regressions outlined below, this was the best control available and can be justified for the purpose at hand.

The first step taken in attempting to identify which of the resources made available to students was most effective in improving student writing was to regress the mark awarded for Assignment 1 on each of the explanatory variables listed in Table 3 individually. Regressions were thus of the form:

Mark_i = β1 + β2 X_i                                                            (1)

where X_i represents the appropriate variable from Table 3. For each variable apart from the mid-term and the workshop dummies, this regression was run both for the overall dummy variable defined in Table 3 and for the set of granular dummies corresponding to the given resource, as defined in Table 4. At the 5% level of significance, only three of these individual variables or individual sets of granular dummies were significant: the mid-term result, the workshop dummy and the full set of granular reference dummies. The results for these three regressions are shown in Table 5. The R2 values are shown but are essentially meaningless due to the effect of omitted variables.

Resource                      Dummy Name
References                    REFT
Assessment Criteria           CRITT
Faculty Writing Guide         FGT
Short Guidelines              SGT
Sample Paper                  SSAT
Academic Literacy Workshops   W1

Table 3 – Dummy Variables for Resource Access

The second step was to regress the mark for Assignment 1 against all of the variables listed in Table 3. This generated a regression of the following form:

Mark_i = β1 + β2 MT_i + β3 W1_i + β4 REFT_i + β5 CRITT_i + β6 FGT_i + β7 SGT_i + β8 SSAT_i      (2)
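A regression of form (2) can be estimated by ordinary least squares. The sketch below simulates data from a known coefficient vector and recovers it with numpy's least-squares routine; the data and coefficient values are invented for illustration and are not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 245  # number of submitted assignments

# Invented regressors: mid-term mark out of 10 plus 0/1 resource dummies.
mid_term = rng.uniform(0, 10, n)
w1 = rng.integers(0, 2, n)       # workshop attendance dummy
reft = rng.integers(0, 2, n)     # references-accessed dummy

# Simulate marks from a known linear model with noise, then recover the betas.
mark = 2.0 + 0.5 * mid_term + 0.5 * w1 + 0.9 * reft + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), mid_term, w1, reft])
beta, *_ = np.linalg.lstsq(X, mark, rcond=None)
print(np.round(beta, 2))  # estimates close to the true values [2.0, 0.5, 0.5, 0.9]
```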


Resource                 Dummy Name   Period when Resource First Accessed
References               REF1         March 25-31 inclusive
                         REF2         March 18-24 inclusive
                         REF3         March 11-17 inclusive
                         REF4         Before March 11
Assessment Criteria      CRIT1        March 25-31 inclusive
                         CRIT2        March 18-24 inclusive
                         CRIT3        March 11-17 inclusive
                         CRIT4        Before March 11
Faculty Writing Guide    FG1          March 25-31 inclusive
                         FG2          March 18-24 inclusive
                         FG3          March 11-17 inclusive
                         FG4          March 4-10 inclusive
                         FG5          Before March 4
Short Writing Guide      SG1          March 25-31 inclusive
                         SG2          March 18-24 inclusive
                         SG3          March 11-17 inclusive
                         SG4          Before March 11
Sample Paper             SSA1         March 29-31 inclusive
                         SSA2         March 27-28 inclusive
                         SSA3         March 20-26 inclusive
                         SSA4         March 13-19 inclusive
                         SSA5         Before March 13

Table 4 – Granular Dummy Variables for Student Access of Resources
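Each granular dummy in Table 4 is simply an indicator of the period containing a student's first access. A sketch for the References dummies (REF1-REF4), with the period boundaries taken from Table 4; the function name and data layout are illustrative assumptions:

```python
from datetime import date

# Period bins for the References resource (Table 4), most recent first.
REF_PERIODS = [
    ("REF1", date(2006, 3, 25), date(2006, 3, 31)),
    ("REF2", date(2006, 3, 18), date(2006, 3, 24)),
    ("REF3", date(2006, 3, 11), date(2006, 3, 17)),
]

def ref_dummies(first_access):
    """Map a first-access date to the 0/1 granular dummies REF1..REF4.

    Returns all zeros if the resource was never accessed (first_access is None).
    """
    dummies = {name: 0 for name, _, _ in REF_PERIODS}
    dummies["REF4"] = 0  # 'Before March 11'
    if first_access is None:
        return dummies
    for name, start, end in REF_PERIODS:
        if start <= first_access <= end:
            dummies[name] = 1
            return dummies
    if first_access < date(2006, 3, 11):
        dummies["REF4"] = 1
    return dummies

print(ref_dummies(date(2006, 3, 20)))  # falls in the REF2 period
```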


Variable     Constant   Coefficient   t-value   p-value   R2
Mid-term     2.88       0.50          7.57      0.0000    0.1867
Workshop 1   5.77       0.68          2.68      0.0079    0.2453
REF1         4.63       1.09          2.44      0.0156    0.0497
REF2                    1.47          3.11      0.0021
REF3                    1.41          3.09      0.0022
REF4                    2.01          3.78      0.0002

Table 5 – Single Variable Regression Results for Assignment 1 Mark

The results are shown in Table 6. These results indicate that only three of the explanatory variables are significant at the 5% level: the same three variables identified by the single factor regressions. The overall regression explains only 21% of the variation in the mark for Assignment 1, suggesting either poor model specification or omitted variables. A better measure of general ability,[3] and the inclusion of a variable identifying students from non-English language backgrounds, would be prime candidates for inclusion in the model, but neither of these was available for this study. The value of the F-statistic in Table 6, however, indicates that the null hypothesis of a zero underlying R2 can be rejected. The model is likely to suffer from omitted variable bias, but the key issue is whether any of the included variables are likely to be highly correlated with those omitted from the model. If the mid-term result is a poor measure of general ability, one might expect positive correlation between these two variables, but by definition the level of correlation would be low. There is more likely to be positive correlation between non-English language backgrounds and workshop attendance, which would make collection of data on the former a high priority for future program implementation.

Taking into account the above results, a third set of regressions was run including only the mid-term result, the workshop dummy and the set of granular reference dummies as explanatory variables. In addition, the two most distant granular criteria and sample paper dummies (CRIT3, CRIT4, SSA3 and SSA4+SSA5) were included on strong a priori theoretical grounds: it was difficult a priori to reject the idea that utilisation of both the assessment criteria and the sample paper would have had a strong impact on writing quality if accessed sufficiently far in advance. However, only the mid-term and workshop dummy variables were significant in this regression. CRIT3 and SSA3 were dropped, and the granular reference dummies were replaced by the corresponding overall dummy due to the insignificance of some of the granular reference dummies when CRIT4 and SSA4+SSA5 were included. These last two variables were insignificant in this model, leading to a suspicion of collinearity between them, and separate models were estimated dropping each in turn. Only CRIT4 was significant in these models.

[3] Becker (1998, 1363) summarises a range of studies which suggest that "the only consistently significant and meaningful explanatory variables of post-TUCE (Test of Understanding of College Economics) scores are pre-aptitude measures such as pre-TUCE and SAT/ACT scores". This highlights the importance of selecting an appropriate measure of general ability.


Variable                     Coefficient   t-value   p-value
Constant                     1.9001        7.57      0.0000
Mid-term                     0.4779        2.68      0.0079
Workshop                     0.4961        2.44      0.0156
References                   0.9165        3.11      0.0021
Criteria                     0.0316        3.09      0.0022
Faculty Writing Guidelines   0.2599        3.78      0.0002
Short Writing Guidelines     -0.1786       -0.4201   0.3508
Sample Paper                 0.0958        0.4445    0.6570

Adj R2 = 0.2108; F-statistic = 11

Table 6 – Multiple Variable Regression Results for Assignment 1 Mark

Results for the final version of the model, which included the mid-term, the workshop dummy, the overall references dummy and the most distant of the granular criteria dummies, are reported in Table 7. All variables are significant at the 5% level and the model explains 23% of the variation in the mark for Assignment 1. This model suggests a base mark of approximately 2 out of 10, to which is added about one half of a mark for each mark scored out of 10 in the mid-term (reflecting general level of ability), an additional half mark for attendance at the academic literacy workshops, an additional mark if the specified references were consulted, and an additional 0.7 of a mark if the assessment criteria were accessed before March 11, some three weeks before the deadline.

Variable Coefficient

Constant 2.0800 4.02 0.0001 2

Mid-term 0.4633 7.14 0.0000 F-statistic 1

1

9.3860

Workshop 0.5230 2.31 0.0219

References 0.9132 2.33 0.0205

Criteria 4 0.7165 2.01 0.0459

esults for Re ssign t 1 M odel

Tak ree academic literacy workshops had a significant impact on students’ grades for Assignment 1. A student at

Table 7 –R fined A men ark M

en together, these results suggest that the set of th

tending the workshops was less likely to fail and more likely to be awarded a Distinction grade or better compared with a student who did not attend the workshops. Self-selection bias can be ruled out since the covariance of the mid-term and

21

Page 24: SCHOOL OF FINANCE AND ECONOMICS - finance.uts.edu.aufinance.uts.edu.au/research/wpapers/wp150.pdf · the University of Technology, Sydney (UTS) in the first semester of 2006. The

workshop attendance variables was -0.0002. Not surprisingly, a student who did not access the references scored a lower grade than one who did, although the small value of the coefficient is surprising. The significance of CRIT4 may simply reflect an “enthusiasm” effect, but the simplicity of the assessment criteria compared with the complexity of the annotated sample paper suggests a genuine focusing of enthusiasm in a way that made a significant difference to the final result. This is consistent with the fact that little guidance was given to students in the workshops about how to use and interpret the sample paper, which may have made it less effective than a priori impressions might have suggested.
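Read as a point prediction, the Table 7 model can be evaluated directly. The helper below simply hard-codes the coefficients reported in Table 7; the function name and interface are illustrative only, not anything from the study.

```python
# Point prediction of the Assignment 1 mark (out of 10) implied by Table 7.
# Each dummy argument is 1 when the condition holds and 0 otherwise.
def predicted_mark(mid_term, attended_workshops, consulted_references,
                   accessed_criteria_early):
    return (2.0800                               # constant (base mark)
            + 0.4633 * mid_term                  # mid-term mark out of 10
            + 0.5230 * attended_workshops        # workshop attendance dummy
            + 0.9132 * consulted_references      # overall references dummy
            + 0.7165 * accessed_criteria_early)  # CRIT4 dummy

# e.g. a student with 6/10 in the mid-term who used all three supports
# is predicted roughly 7 out of 10.
mark = predicted_mark(6, 1, 1, 1)
```

On these coefficients, the gap between a student who used none of the three supports and one who used all of them is about 2.2 marks out of 10, holding the mid-term result fixed.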

The results also point to aspects of the program that require further development. One might be tempted, on the basis of the regression results reported above, to dispense with the sample paper and the writing guides, which appear to have no significant effect on writing outcomes. But as Becker (1997, 1366) points out, failure to reject the hypothesis of no learning effect does not necessarily imply its acceptance. This applies especially given the limitations of the regressions used in this study. The best use of these regressions is to raise questions about how particular resources are employed and how their employment can be improved. For example, the effectiveness of the sample paper in helping students to see how they could structure their own writing might be improved by integrating this resource more carefully into the writing workshops. A similar point may be made about both kinds of writing guide. A broader question is thus raised about the relative effectiveness of written, non-interactive resources provided online compared with the dialogic scaffolding offered by face-to-face teaching.

The existence of omitted variable bias further suggests that data on non-English speaking background should be obtained and incorporated into future regression models, and that better measures of general ability should at least be tested. The low coefficient of determination for all of the regressions reported above also suggests that qualitative data may be just as useful as quantitative data in evaluating programs of this kind. Interviews, short-period longitudinal studies of students’ writing experiences, and comparisons of writing samples at various stages of the program are likely to be useful in this respect. Each of these is likely to be incorporated into future implementations of the UTS program.

Conclusion

This paper has reported on the implementation of a pilot program aimed at improving student writing in an intermediate macroeconomics course. The program attempted to reduce what has been called the expectations gap between student and academic perceptions of what constitutes “good writing”. This was done in two important ways. Firstly, a range of resources designed to describe the characteristics of good writing was provided to students, who were helped to structure their writing according to these characteristics. A series of academic literacy workshops formed the centrepiece of this strategy. Secondly, markers themselves were briefed on these characteristics and an approach to marking consistent with them was negotiated. The expectations gap was thus reduced by creating a kind of quality axis, negotiated between the coordinator of the economics subject and the literacy expert responsible for the workshops (with additional advice from two other economics- and education-qualified staff in the university), to which both students and markers were oriented.

The impact of this program on student writing, as measured by the mark awarded for Assignment 1, was very promising. Students who attended academic literacy workshops focused on the specific literacies required for writing in economics performed better in the first of two written assignments than those who did not, controlling for general ability. These students were less likely to fail and more likely to be awarded a grade at Distinction level or above. Consulting the assessment criteria early also appears to have had a positive impact on the quality of students’ written work.

The paper also identifies a number of important areas that need to be developed at the next stage of implementation. These include greater integration of published writing guidelines and sample papers into the workshop curriculum; collection of qualitative data from interviews and short-period longitudinal studies of particular students’ experiences as they move through the program; and collection of improved data on general ability and non-English speaking background to act as controls when assessing the impact of the academic literacy workshops and online resources.

Overall, the findings of this study confirm that reducing the expectations gap between students and academics concerning the characteristics of good writing plays an important role in attempting to improve the quality of student writing in higher education.


References

Becker W.E. (1997), “Teaching Economics to Undergraduates”, Journal of Economic Literature, 35, 1347-1373.

Carless D. (2006), “Differing Perceptions in the Feedback Process”, Studies in Higher Education, 31 (2), 219-233.

Cohen A.J. & Spencer J. (1993), “Using Writing Across the Curriculum in Economics: Is Taking the Plunge Worth It?”, Journal of Economic Education, 24 (3), 219-230.

Davies S., Swinburne D. & Williams G. (eds.) (2006), Writing Matters: The Royal Literary Fund Report on Student Writing in Higher Education, London: The Royal Literary Fund.

Faculty of Business (2006), Guide to Writing Assignments, Sydney: University of Technology, Sydney.

Hansen W.L. (1998), “Integrating the Practice of Writing into Economics Education Instruction”, in Becker W.E. & Watts M. (eds.), Teaching Economics to Undergraduates: Alternatives to Chalk and Talk, Cheltenham, UK & Northampton MA, USA: Edward Elgar, 79-118.

Higgins R., Hartley P. & Skelton A. (2002), “The Conscientious Consumer: Reconsidering the Role of Assessment Feedback in Student Learning”, Studies in Higher Education, 27 (1), 53-64.

Lea M.R. & Street B.V. (1998), “Student Writing in Higher Education: An Academic Literacies Approach”, Studies in Higher Education, 23 (2), 157-172.

Ramsden P. (1992), Learning to Teach in Higher Education, New York: Routledge.

Sander P., Stevenson K., King M. & Coates D. (2000), “University Students’ Expectations of Teaching”, Studies in Higher Education, 25 (3), 309-323.

Simpson M.S. & Carroll S.E. (1999), “Assignments for a Writing-Intensive Economics Course”, Journal of Economic Education, 30 (4), 402-410.

University of Technology, Sydney (2000), Statement of UTS Graduate Attributes, Sydney.

Vygotsky L.S. (1978), Mind in Society: The Development of Higher Psychological Processes, Cambridge, Mass: Harvard University Press.

Walstad W.B. (2001), “Improving Assessment in University Economics”, Journal of Economic Education, 32 (3), 281-294.


APPENDIX

SCHOOL OF FINANCE AND ECONOMICS

MACROECONOMICS: THEORY AND APPLICATIONS (25555)

ASSIGNMENT 1 COVER SHEET & ASSESSMENT CRITERIA

STUDENT’S NAME: ____________________________ (Family Name) ____________________________ (Other Names)

STUDENT NUMBER: ________________________

TUTORIAL DAY & TIME: _____________

Each criterion below is rated on a six-point scale: Poor, Marginal, Satisfactory, Good, Very Good, Excellent.

ARGUMENT/CONTENT/STRUCTURE

1. Executive Summary states main features of argument and conclusions

2. Introduction orients reader to the approach taken in the assignment

3. Assignment uses a good range of relevant concepts and ideas

4. Assignment demonstrates understanding of relevant concepts and ideas

5. Explanation/argument is developed in a logical sequence

6. Clear connections are made between points/ideas

7. Assignment critically evaluates arguments and conclusions in the literature

8. Conclusion summarises main points and results

PRESENTATION

9. Clear layout

10. Intelligent use of graphs and diagrams

11. Appropriate use and presentation of referencing and footnote details

STYLE

12. English usage is clear and easy to follow

13. Acceptable spelling and grammar

SOURCES

14. An appropriate range of sources is consulted

15. Adequate acknowledgement of ideas and data used in assignment

GENERAL COMMENT

MARK:

Figure A1: The Assessment Criteria and Coversheet Used in the Pilot Program


Faculty of Business and ELSSA Centre
Macroeconomics 25555, Assignment 1

A. Describe the behaviour of the price of oil since January 1999. Identify the factors that you think account for this behaviour and explain how they have influenced the oil price. Use your analysis to evaluate the proposition that under a new ‘price paradigm’, the price of oil is now permanently higher than it was 10 years ago.

⇓

B. Describe the behaviour of the price of oil since January 1999. Identify the factors that you think account for this behaviour and explain how they have influenced the oil price. Use your analysis to evaluate the proposition that under a new ‘price paradigm’, the price of oil is now permanently higher than it was 10 years ago.

⇓

C. (1) How did the price of oil change from 1999 to now, and why?
   (2) Will the price of oil remain permanently high?

Exercise 1
The following phrases are related either to question (1) or to question (2). Match the phrase with the question in Table (i) overleaf.
a. describe & explain
b. discuss
c. make a judgement
d. evaluate the pros and cons of the proposition
e. be a historian or a reporter
f. tell what happened and show why
g. account for past events
h. be a lawyer

Figure A2: Worksheet Activity 1 from Academic Literacy Workshop 1

(continued below)


(1) How did the price of oil change from 1999 to now, and why?

(2) Will the price of oil remain permanently high?

Table (i)

Exercise 2
Match the following phrases with Q (1) or Q (2) in Table (ii) below.
a. Z happened because…
b. According to T, oil prices will…
c. This has been described as ‘…’ by the Economist (2005, p 5).
d. This appears to be a convincing argument, but it does not take into account…
e. This was followed by X, which was caused by…
f. On the one hand… On the other hand…
g. Y happened as a result of…
h. W has suggested/predicted that…

(1) How did the price of oil change from 1999 to now, and why?
(2) Will the price of oil remain permanently high?

Table (ii)
