EdS Thesis Prospectus Version 2.0


Amanda Wilson

October 3, 2010

EdS Thesis Prospectus

How They Perceive Their Learning

An exploration of student reflections on assessment

Introduction

“Assessment has been stimulated by many external actors over the last quarter century including states, the federal government, accrediting organizations, and various third-party organizations—each with its own specific interests in evidence on institutional and program performance” (Ewell 2009). These “external actors” that Ewell refers to have caused the focus of assessment to become accountability instead of improvement. That is to say, assessment has become primarily a way to justify schools and only secondarily a way to evaluate learning effectively. Traditional testing methods ask students to prove what they have learned by regurgitating memorized information for a grade (assessment for accountability). The idea behind authentic assessment is to give students an opportunity to use higher-level cognitive skills to build something with the information they have learned. The focus is on a created product and the journey of creating it, supported by feedback during the creation (assessment for improvement), not just a grade upon completion.

The traditional system of standardized and summative testing has trained students to pass tests, not necessarily to learn the desired information or skills. Authentic assessment alerts university instructors and their students to strengths and weaknesses in learning before students even finish a project and receive a grade. Whereas students have learned to do well on traditional tests without adequately mastering the topics those assessments are supposed to measure (Brimi 2010), authentic assessments are better able to reveal strengths and weaknesses and to allow room for growth. Instead of failing students only to repeat a system that has not accomplished its goal of educating them, these measures allow instructors and students to discover and address weaknesses in the educational system as they go. This allows students to grow, learn, and reach their goals.

By placing traditional testing measures, which focus on grades, side by side with authentic assessment measures, which focus on the learning experience, I can give students an opportunity to reflect on both and can better understand how they see these assessments in relation to their learning experiences. Their perspectives can help shape and improve the assessments themselves, making them more authentic and more valid. By exploring my own students' reflections on the assessments I use this fall, I will be better informed to improve the way I assess and to help my students be as successful as possible.

Literature Review

In his 2009 article, Ewell points out the missing step in the assessment process: “Far too many institutions have established learning outcomes in response to accreditation requirements and to drive assessments without ensuring that these goals are continuously mapped to, and reinforced by, the teaching and learning process throughout the curriculum as part of a systematic competency-based approach.” In her 2007 article, Banta encourages the use of “the ‘assessment for accountability is coming!’ warning to mobilize colleagues to do their own pioneering work in developing measures of critical thinking, reflective judgment, and deep learning.” However, a link must be made between the student learning outcomes that these accreditation drives are motivating faculty to develop and individual classroom instruction; otherwise the entire exercise becomes one of futility.

Embedding authentic assessments of these student learning outcomes in individual classes is an effective way to make this happen (Allen 2004). Banta et al., in their 2009 article on three alternative assessments, suggest ePortfolios, a system of rubrics, and online assessment communities as methods of accomplishing this. ePortfolios “offer an in-depth, long-term view of student achievement on a range of skills and abilities as opposed to a quick snapshot based on a single sample of learning outcomes,” and the individual pieces of these ePortfolios can easily come from embedded assessments. Rubrics “used to evaluate student writing and depth of learning” can be “combined with faculty learning and team assessments” and “used at multiple institutions.” Finally, “online assessment communities [can] link local faculty members in collaborative work to develop shared norms and teaching capacity, and then link local communities with each other in a growing system of assessment.” Collaboration keeps everyone in the loop and on the same page, which can help faculty show a more unified front to students and help students understand the assessments being used.

Gulikers et al. (2008) point out that it is essential for instructors to understand their students' perceptions of assessment measures and to establish assessments that students perceive as authentically measuring their learning. Otherwise, the assessment simply becomes another test to learn how to take instead of a measure of what students are and are not actually learning. “Authentic and valid assessment approaches must be developed and promoted as viable alternatives to scores on single-sitting, snapshot measures of learning that do not capture the difficult and demanding intellectual skills that are the true aim of a college education” (Banta et al. 2009). These approaches can then serve not only their intended purpose (assessment for improvement) but also as a measure of accountability.

Foreign language learning in the United States is taking a big step toward authentic assessment with tools like LinguaFolio, which has students self-assess their learning abilities and develop a dossier, or portfolio, of evidence to demonstrate those abilities. Many states, including North Carolina, are developing and promoting LinguaFolio to the foreign language teachers in their states as an essential assessment tool (NCSSFL 2010). North Carolina's Department of Public Instruction is currently running a pilot of an online version of this tool, eLinguaFolio, in which I am participating (NCDPI 2010).


Justification

Unfortunately, much of students' experience in the K-12 system is focused on traditional testing for accountability, not improvement (Banta et al. 2009). The freshmen arriving in first-year university courses have been trained to pass tests by reciting information rather than by demonstrating their understanding of the material. Students who are good at multiple-choice tests may simply be good at taking that type of test (Oberg 2009). The very fact that schools focus on test-taking strategies should be a huge warning (Brimi 2010). What matters is not whether students know how to take a test; it is essential to measure what they are really learning.

The culture must change at both the faculty and student levels. Students need to understand what is now expected of them. More than that, they need to perceive the assessments as authentic and not simply as another test to learn to beat (Gulikers et al. 2008). Assessment must be genuine and show competence (Watson & Robbins 2008). It can only do this if students really “have the opportunity to learn the standards and…the opportunity to perform in relation to them” (Carr & Harris 2001). Teachers need to show students what is expected of them via tools like rubrics so they have a fair opportunity to perform. To do this, faculty development is crucial, because while most faculty members are well trained in their disciplines, few are well trained in education and pedagogy (Banta et al. 2009).

Understanding student perceptions of different assessment tools and their effects on learning can help educators develop authentic, performance-based assessment tasks, combined with rubrics that detail valid goals and expectations, that move assessment from a focus on accountability to a focus on improvement (Ewell 2009). We need to know how students understand the way we measure what they know in order to see whether we are really measuring what they know or they are just learning to meet the requirements of the assessment. Additionally, when done right, authentic assessment has the added benefit of being “a strategic tool for enhancing teaching and learning” (Blackburn et al. 2003). There is no reason why a shift toward assessment for improvement would take away the accountability factor; in fact, the shift should bring the two closer together by allowing for both simultaneously.

By conducting my courses with an intentional balance of traditional testing tools, which focus on grades (accountability), and authentic assessment tools, which focus on the learning experience (improvement), I will enable my students to reflect on both forms of assessment when asked to look back on their learning experiences. These reflections will enable me to draw conclusions about, and improve, my own practice of assessing and facilitating student learning, as well as to make suggestions that other instructors could apply in any number of courses.

Research Statement

I will examine student perceptions of authentic assessment tools and traditional testing tools, and how those perceptions affect students' perceived learning experiences, in order to inform the improvement and accountability of classroom practices, curricular development, and program assessment.

Methods

In the early spring semester of 2011, I will solicit interviews with five to seven students from my fall 2010 courses, asking probing questions to get them to think about and reflect on their language learning experiences, the assessments they encountered, and how those assessments may have affected their learning experiences (see Appendix A). Additionally, I plan to administer an anonymous attitude-scale survey to the entire population of students in all three sections of Spanish I taught in the fall of 2010 to obtain basic opinions about the effectiveness of the assessment tools I used, with room for open-ended comments at the end of the survey (see Appendix B). This will allow students to give feedback on the assessments I employed in my Spanish courses and on how those experiences affected their learning.

I anticipate that students will value each assessment for different reasons and will offer important suggestions for improving the way they were assessed and how assessment can help them learn better.

Timeline

January-February 2011

I will solicit and conduct interviews with students from my fall 2010 courses and survey students from the three fall 2010 sections of beginning Spanish I.

April-May 2011

I will present my research and defend my thesis.


Bibliography

Ainsworth, L., & Viegut, D. (2006). Common formative assessments: How to connect standards-based instruction and assessment. Thousand Oaks, CA: Corwin.

Banta, T. W. (2002). Building a scholarship of assessment. San Francisco: Jossey-Bass.

Banta, T. W. (2007). Can Assessment for Accountability Complement Assessment for Improvement? Peer Review, 9(2), 9-12. Retrieved from Academic Search Complete database.

Banta, T. W., Griffin, M., Flateby, T. L., & Kahn, S. (2009, December). Three promising alternatives for assessing college students' knowledge and skills (NILOA Occasional Paper No. 2). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Blackburn, B., Dewalt, M., & Vare, J. (2003). A Case of Authentic Redesign: Collaborating with

National Board Certified Teachers to Revise an Advanced Middle Level Program.

Research in Middle Level Education Online, 26(2), 45-56. Retrieved from Education

Research Complete database.

Brimi, H. (2010). Darkening the Ovals of Education. Clearing House, 83(5), 153-157.

doi:10.1080/00098650903505472.

Brint, S., Proctor, K., Murphy, S., Turk-Bicakci, L., & Hanneman, R. (2009). General Education

Models: Continuity and Change in the U.S. Undergraduate Curriculum, 1975--2000.

Journal of Higher Education, 80(6), 605-642. Retrieved from Education Research

Complete database.


Brown, S. A., & Glasner, A. (1999). Assessment matters in higher education: Choosing and using diverse approaches. Buckingham, England: Society for Research into Higher Education & Open University Press.

Caner, M. (2010). Students' views on using portfolio assessment in EFL writing courses. Anadolu University Journal of Social Sciences, 10(1), 223-235. Retrieved from Academic Search Complete database.

Carr, J. F., & Harris, D. E. (2001). Succeeding with standards: Linking curriculum, assessment, and action planning. Alexandria, VA: Association for Supervision and Curriculum Development.

Cauley, K., & McMillan, J. (2009). Formative Assessment Techniques to Support Student

Motivation and Achievement. Clearing House, 83(1), 1-6. Retrieved from Education

Research Complete database.

Choate, J. S. (1995). Curriculum-based assessment and programming. Boston: Allyn and Bacon.

Davis, N., Kumtepe, E., & Aydeniz, M. (2007). Fostering Continuous Improvement and

Learning Through Peer Assessment: Part of an Integral Model of Assessment.

Educational Assessment, 12(2), 113-135. doi:10.1080/10627190701232720.

Earl, L., & Torrance, N. (2000). Embedding Accountability and Improvement Into Large-Scale Assessment: What Difference Does It Make? Peabody Journal of Education, 75(4), 114-141. Retrieved from Academic Search Complete database.

Ellis, A. K. (2001). Teaching, learning, & assessment together: The reflective classroom. Larchmont, NY: Eye On Education.

Eubanks, D. (2008). Assessing the General Education Elephant. Assessment Update, 20(4), 4-16.

Retrieved from Academic Search Complete database.


Ewell, P. T. (2009, November). Assessment, Accountability, and Improvement: Revisiting the Tension (NILOA Occasional Paper No. 1). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Ewell, P. (2008). Assessment and accountability in America today: Background and context. New Directions for Institutional Research, 2008(S1), 7-17. doi:10.1002/ir.258.

Felner, R., Bolton, N., Seitsinger, A., Brand, S., & Burns, A. (2008). Creating a statewide

educational data system for accountability and improvement: A comprehensive

information and assessment system for making evidence-based change at school, district,

and policy levels. Psychology in the Schools, 45(3), 235-256. Retrieved from Academic

Search Complete database.

Gulikers, J., Bastiaens, T., Kirschner, P., & Kester, L. (2008). Authenticity Is in the Eye of the

Beholder: Student and Teacher Perceptions of Assessment Authenticity. Journal of

Vocational Education and Training, 60(4), 401-412. Retrieved from ERIC database.

Israel, J. (2007). Authenticity and the assessment of modern language learning. Journal of

Research in International Education, 6(2), 195-231. doi:10.1177/1475240907074791.

Kline, L. (2008). Documentation Panel: The "Making Learning Visible" Project. Journal of Early

Childhood Teacher Education, 29(1), 70-80. doi:10.1080/10901020701878685.

NCDPI. (2010). North Carolina e-LinguaFolio Pilot Project (2010-2011).

https://sites.google.com/site/nclfpilot/.

NCSSFL. (2010). LinguaFolio. National Council of State Supervisors for Languages.

http://www.ncssfl.org/links/index.php?linguafolio.

Neagu, Maria-Ionela. (2009). The Influence of the Teacher's Experience and of the Institution

Type in Foreign Language Teaching, Learning and Assessment. Petroleum - Gas

University of Ploiesti Bulletin, Educational Sciences Series, 61(2), 127-132. Retrieved

from Academic Search Complete database.


Oberg, C. (2009). Guiding Classroom Instruction Through Performance Assessment. Journal of

Case Studies in Accreditation & Assessment, 1-11. Retrieved from Education Research

Complete database.

Palmer, S. (2004). Authenticity in assessment: reflecting undergraduate study and professional

practice. European Journal of Engineering Education, 29(2), 193-202. Retrieved from

Education Research Complete database.

Sandoval, P., & Wigle, S. (2006). Building a Unit Assessment System: Creating Quality

Evaluation of Candidate Performance. Education, 126(4), 640-652. Retrieved from ERIC

database.

Scott-Little, C., La Paro, K., & Weisner, A. (2006). Examining Differences in Students' Beliefs

and Attitudes: An Important Element of Performance-Based Assessment Systems for

Teacher Preparation Programs. Journal of Early Childhood Teacher Education, 27(4),

379-390. doi:10.1080/10901020600996273.

Sehlaoui, A. (2008). Language Learning in the United States of America. Language, Culture &

Curriculum, 21(3), 195-200. doi:10.1080/07908310802385873.

Stiggins, R., & DuFour, R. (2009). Maximizing the Power of Formative Assessments. Phi Delta

Kappan, 90(9), 640-644. Retrieved from Education Research Complete database.

Tileston, D. W. (2005). 10 best teaching practices: How brain research, learning styles, and standards define teaching competencies (2nd ed.). Thousand Oaks, CA: Corwin Press.

Van Gog, T., Sluijsmans, D., Joosten-ten Brinke, D., & Prins, F. (2010). Formative Assessment in an Online Learning Environment to Support Flexible On-the-Job Learning in Complex Professional Domains. Educational Technology Research and Development, 58(3), 311-324. Retrieved from ERIC database.

Vaughn, M., & Everhart, B. (2005). A Process of Analysis of Predictors on an Assessment

Continuum of Licensure Candidates' Success in K-12 Classrooms. Research for

Educational Reform, 10(1), 3-15. Retrieved from Education Research Complete database.

Venables, A., & Tan, G. (2009). Realizing Learning in the Workplace in an Undergraduate IT Program. Journal of Information Technology Education, 8, 17-26. Retrieved from ERIC database.

Wang, K., Wang, T., Wang, W., & Huang, S. (2006). Learning styles and formative assessment

strategy: enhancing student achievement in Web-based learning. Journal of Computer

Assisted Learning, 22(3), 207-217. doi:10.1111/j.1365-2729.2006.00166.x.

Watson, D., & Robbins, J. (2008). Closing the chasm: reconciling contemporary understandings

of learning with the need to formally assess and accredit learners through the assessment

of performance. Research Papers in Education, 23(3), 315-331.

doi:10.1080/02671520701755408.

Wikford, E., & Salmon, S. (2003). Chocolate Chip Cookies and Rubrics. Teaching Exceptional

Children, 35(4), 8. Retrieved from Education Research Complete database.


Appendix A: Semi-structured Interview Guide

1. Tell me a little bit about your first experience with Spanish. How old were you? What happened?
2. Why did you decide to take Spanish?
3. What are your goals with regard to Spanish? What do you want to do with the language?
4. Tell me about how you learn. What situations and tools help you learn? What tools and strategies do you seek out and employ?
5. Tell me a bit about your experience in beginning Spanish I this past fall of 2010.
6. There were four chapter tests that included sections on grammar, vocabulary, listening, writing, and speaking. Tell me about your experience taking these tests.
7. The final exam was similar in structure to the chapter tests except it was cumulative, covering all five chapters. Tell me about your experience taking this exam.
8. You were required to create a culture blog this semester that asked you to reflect on cultural artifacts of your choosing, how they related to you personally, and what you were learning in the class. Tell me about your experience in creating this blog.
9. Throughout the semester you were required to keep up with the eLinguaFolio self-assessment project that asked you to reflect on your own learning and provide samples that demonstrated your best efforts. Tell me about your experience working on this project.
10. Considering these four ways in which you were tested during the semester, tell me how you feel they compared to each other. Do you have a favorite?
11. How did completing these assignments affect the way you learned in the course?
12. What suggestions for improvement would you make about any or all of these assignments?
13. Any other comments on testing in this course?


Appendix B: Student Survey

Respond to the following statements about your experience in beginning Spanish I this past fall 2010 with one of the following options:

Strongly disagree, disagree, neutral/no opinion, agree, strongly agree

1. I felt like the four chapter tests were interesting to take.
2. I felt like the four chapter tests accurately demonstrated what I learned.
3. I would recommend the use of similar chapter tests in the future.
4. I would recommend extensive changes to the chapter tests in the future.
5. I felt like the final exam was interesting to take.
6. I felt like the final exam accurately demonstrated what I learned.
7. I would recommend the use of a similar final exam in the future.
8. I would recommend extensive changes to the final exam in the future.
9. I felt like the culture blog was interesting to complete.
10. I felt like the culture blog accurately demonstrated what I learned.
11. I would recommend the use of the culture blog in the future.
12. I would recommend extensive changes to the culture blog in the future.
13. I felt like the eLinguaFolio project was interesting to complete.
14. I felt like the eLinguaFolio project accurately demonstrated what I learned.
15. I would recommend the use of the eLinguaFolio project in the future.
16. I would recommend extensive changes to the eLinguaFolio project in the future.
17. Overall, I felt like the individual grades I received in this course were a fair assessment of my learning.
18. Overall, I felt like my final grade in this course was a fair assessment of my learning.
19. Overall, I felt like I understood the point behind the tests and projects in this course.
20. Please leave any additional comments here. Recommendations for improvement are welcome and appreciated.

The survey will be administered online via this URL: https://spreadsheets.google.com/viewform?formkey=dGphMVFjRnJDWGRaV2N1LW15U2NVcFE6MQ