
Assessment Techniques for Curricular Improvement

Roxanne Canosa, Rajendra K. Raj

Department of Computer Science

Rochester Institute of Technology

Overview

What is Assessment?
– Analytic vs. Holistic Approaches
– Assessment == Grading?

Terminology
– Assessment vs. Accreditation
– Outcomes vs. Objectives

Performance Criteria
– Direct vs. Indirect

Evaluation and Continuous Improvement

What is Assessment?

“Assessment is one or more processes that identify, collect, and prepare data to evaluate the achievement of program outcomes and educational objectives”
– 2006-2007 Criteria for Accrediting Computing Programs, Appendix A (Proposed Changes)
– From Section II.D.1 of the ABET Accreditation Policy and Procedure Manual

Analytic vs. Holistic Approaches

Analytic approach
– All students/courses analyzed to diagnose areas in need of improvement

Holistic approach
– Focus on overall performance of the program
• Input from employers, alumni, advisory board

Develop efficient and effective processes
– “Lean, mean assessment machine”
– Don’t commit “random acts of assessment”
• Gloria Rogers

What is Your Assessment Goal?

Assessing all students or specific groups of students?

Assessing students, department, or program?

Assessing for short-term improvement or long-term effect?

Assessing for formative or summative purposes?

Grading vs. Assessing

Grading
– Measures the extent to which a student meets faculty requirements and expectations for a course
• Can a student’s achievement of an outcome be inferred from grades?
– Factors
• Student knowledge
• Work ethic
• Faculty variance in course content, grading components, beliefs, bias, …

Assessing
– Measures the extent to which a student achieves each course (program) outcome
• Can we leverage grading components for assessment?
– Use rubrics, which are pre-announced performance criteria
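One hypothetical way to leverage grading components for assessment: tag each graded item with the outcome it provides evidence for, then aggregate scores per outcome rather than per student. A minimal Python sketch (the question tags, point values, scores, and names are all invented for illustration):

    # Hypothetical sketch: reusing graded exam items for outcome assessment.
    from collections import defaultdict

    # Map each exam question to the course outcome it evidences (invented).
    question_outcomes = {"q1": "CO1", "q2": "CO1", "q3": "CO2"}
    max_points = {"q1": 10, "q2": 10, "q3": 20}

    # scores[student][question] = points earned (invented data)
    scores = {
        "alice": {"q1": 9, "q2": 7, "q3": 15},
        "bob":   {"q1": 4, "q2": 6, "q3": 18},
    }

    earned = defaultdict(float)
    possible = defaultdict(float)
    for student, answers in scores.items():
        for q, pts in answers.items():
            co = question_outcomes[q]
            earned[co] += pts
            possible[co] += max_points[q]

    # Report per-outcome achievement instead of per-student grades.
    for co in sorted(possible):
        print(f"{co}: {100 * earned[co] / possible[co]:.0f}% of available points earned")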

Assessment vs. Accreditation

Institutional accreditors (Middle States, SACS, etc.) are increasingly requiring direct assessment of program objectives and outcomes

Jargon may be different, but the essential ideas are the same

Terminology (Jargon)

From the ABET perspective:

Objectives: describe expected accomplishments of graduates 3-5 years after graduation (other terms: goals, outcomes, standards)

Outcomes: describe what students are expected to know & be able to do by graduation (other terms: objectives, goals, standards)

Performance criteria: statements to measure performance on outcomes, backed by evidence (other terms: standards, rubrics, metrics)

Educational practices: mapping curriculum (coursework, internships, etc.) to outcomes (other terms: educational strategies)

Assessment: processes to identify, collect, analyze & report data to evaluate achievement (other terms: evaluation)

Evaluation: process to review the results of data collection and analysis to determine the value of findings and future action(s) (other terms: assessment)

Terminology Lessons

Use terminology appropriate for your situation
– Sometimes dictated by institutional accreditation (SACS, Middle States)
– Sometimes dictated by program accreditation (ABET)
– Keep a glossary of terms handy for any external evaluators

Stick to your terminology
– Terms cannot be swapped around interchangeably without causing grief

Proposed Changes to ABET Criteria for Computing

Old criteria
– Intents and Standards

New criteria (2008-2009 cycle)
– General
– Program Specific

New ABET Criteria

8 General Criteria
– Students
– Program Educational Objectives
– Program Outcomes (a) through (i)
– Assessment and Evaluation
– Curriculum
– Faculty
– Facilities
– Support

CS Program Specific Criteria
– Outcomes and Assessment (a) and (b)
– Faculty Qualifications
– Curriculum (a), (b), and (c)

IT/IS Program Specific Criteria

Program Audit Concern

Concern
– “A criterion is currently satisfied; however, potential exists for this situation to change in the near future such that the criterion may not be satisfied. Positive action is required to ensure full compliance with the Criteria.”

Program Audit Weakness

Weakness
– “A criterion is currently satisfied but lacks strength of compliance that assures that the quality of the program will not be compromised prior to the next general review. Remedial action is required to strengthen compliance with the Criteria.”

Program Audit Deficiency

Deficiency
– “A criterion is not satisfied. Therefore, the program is not in compliance with the Criteria and immediate action is required.”

Program Objectives

“Program educational objectives are broad statements that describe the career and professional accomplishments that the program is preparing graduates to achieve.”
– Long-term goals
– Should be distinct to your program
– Should be publicly available
– Must be measurable!

Program Outcomes

“Program outcomes are narrower statements that describe what students are expected to know and be able to do by the time of graduation. These relate to the skills, knowledge, and behaviors that students acquire in their matriculation through the program.”
– Should be publicly available
– Must be measurable!

– Gloria Rogers

Objectives vs. Outcomes

Example objective:
– Graduates will exhibit effective communication skills

Example outcomes:
– By the time of graduation, students will:
• demonstrate effective written communication skills
• demonstrate effective oral communication skills

Performance Criteria

Define and describe progression toward meeting important components of the work being completed, critiqued, or assessed
– Student provides adequate detail to support his/her solution/argument
– Student uses language and word choice appropriate for the audience
– Student work demonstrates an organizational pattern that is logical and conveys completeness
– Student uses the rules of standard English

Provide solid evidence of progression
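Performance criteria become a usable rubric once each criterion is paired with pre-announced achievement levels. A minimal sketch, assuming a four-level scale; the level names, descriptors, and roll-up rule are invented, while the criteria echo the list above:

    # Hypothetical rubric sketch: pre-announced criteria with achievement levels.
    LEVELS = ["Unsatisfactory", "Developing", "Satisfactory", "Exemplary"]

    rubric = {
        "Supporting detail":  "adequate detail supports the solution/argument",
        "Audience awareness": "language and word choice fit the audience",
        "Organization":       "logical pattern that conveys completeness",
        "Mechanics":          "follows the rules of standard English",
    }

    def score_submission(ratings):
        # ratings maps each criterion to an index into LEVELS (0-3)
        for criterion, level in ratings.items():
            print(f"{criterion} ({rubric[criterion]}): {LEVELS[level]}")
        return sum(ratings.values()) / len(ratings)  # mean level, 0.0-3.0

    overall = score_submission({"Supporting detail": 2, "Audience awareness": 3,
                                "Organization": 2, "Mechanics": 1})
    print(f"Overall: {overall:.1f} / 3")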

What is Solid Evidence?

Direct Evidence
– Easier to measure
– Familiar to most faculty: exam or project grades, presentation skills, etc.

Indirect Evidence
– Difficult to measure
– Attitudes or perceptions
– For example, a desired outcome of a course may include “improving students’ appreciation of team work”

Direct vs. Indirect Assessment

The assessment process should include both indirect and direct measurement techniques

A variety of sources should be used
– Employers, students, alumni, etc.
– Converging evidence from multiple sources can reduce the effect of any inherent bias in the data

Direct Assessment

Direct examination or observation of student knowledge or skills using stated, measurable outcomes

Faculty typically assess student learning throughout a course using exams/quizzes, demonstrations, and reports
– Sample what students know or can do
– Provide evidence of student learning

Direct Assessment of PEOs

Employment statistics
Promotions and career advancement of graduates
Job titles, advanced degrees earned, additional course work taken after graduation, etc.

– PEOs must be assessed separately from POs

Direct Assessment of POs

Common final exams
Locally developed exit exams
Standardized regional or national exit exams
External examiner
Co-op reports from employers
Portfolios of student work

Indirect Assessment

Indirect assessment of student learning ascertains the perceived extent or value of learning experiences
– Assesses opinions or thoughts about student knowledge or skills
– Provides information about students’ perception of their learning and how this learning is valued by different constituencies

Indirect Assessment Measures

Exit and other kinds of interviews
Archival data
Focus groups
Written surveys and questionnaires
• Industrial advisory boards
• Employers
• Job fair recruiters
• Faculty at other schools
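Such indirect instruments reduce to simple tallies once collected. A hypothetical sketch (the outcomes, the 5-point Likert scale, and all responses are invented) that computes a mean rating and an agreement rate per outcome:

    # Hypothetical indirect-measure sketch: tallying Likert-scale survey data.
    from statistics import mean

    # 1 = strongly disagree ... 5 = strongly agree, for the prompt
    # "The program prepared me well for <outcome>." (invented data)
    responses = {
        "written communication": [4, 5, 3, 4, 5, 2],
        "team work":             [3, 3, 4, 2, 3, 4],
    }

    for outcome, ratings in responses.items():
        agree = sum(r >= 4 for r in ratings) / len(ratings)
        print(f"{outcome}: mean {mean(ratings):.1f}/5, "
              f"{agree:.0%} agree or strongly agree")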

Survey of Assessment Methods

Direct methods
– Locally designed exams
– Standardized exams
– Oral exams
– External examiner
– Portfolios
– Performance appraisal
– Behavioral observations
– Simulations

Indirect methods
– Written & other surveys
– Focus groups
– Archived records
– Exit & other interviews

Direct and Indirect

Duality of some instruments, e.g., an exit interview
– Indirect
• A survey of opinions about the perceived value of the program components
– Direct
• If the person asking the questions uses the interview to assess a student’s skills (e.g., oral communication), then it is being used as a direct measure of the achievement of that outcome

Evaluation

“Evaluation is one or more processes for interpreting the data and evidence accumulated through assessment practices. Evaluation determines the extent to which program outcomes or program educational objectives are being achieved, and results in decisions and actions to improve the program.”

Continuous Improvement

Accreditation boards are moving towards outcomes-based assessment of CS, IS, and IT programs
– Programs must have an established outcomes-based assessment plan in place (or at least be making progress in that direction)
– Process must be documented
– Process must show continuous improvement (both quantitatively and qualitatively)

Faculty Responsibility

All faculty must have a commitment to and be directly involved in the evaluation of program educational objectives and program outcomes, as well as the process for continuous improvement of the program

Need for Faculty and Staff Buy-In

What makes most academics tick?
– Rewards
• Money?
• Fun?
• Appreciation?
• Recognition?

How to encourage involvement?
– We all resent any extra work!

Where to Begin?

Define your Mission Statement
Define your Program Educational Objectives (PEOs)
Define your Program Outcomes (POs)
Define Course Outcomes (COs)
– Include specific course outcomes on each course syllabus

Make publicly available

Then What?

Show how course outcomes map to program outcomes

Show how program outcomes map to program educational objectives

Choose measurement tools, both direct and indirect

Collect data
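These mappings can be kept as plain data so that coverage gaps surface mechanically. A minimal sketch (all outcome and objective identifiers are invented) that flags any program outcome no course outcome supports:

    # Hypothetical mapping sketch: course outcomes -> program outcomes ->
    # program educational objectives. All identifiers are invented.
    co_to_po = {
        "CS1.CO1": ["PO-a"],
        "CS1.CO2": ["PO-a", "PO-i"],
        "CS2.CO1": ["PO-b"],
    }
    po_to_peo = {
        "PO-a": ["PEO-1"],
        "PO-b": ["PEO-1", "PEO-2"],
        "PO-e": ["PEO-2"],
        "PO-i": ["PEO-3"],
    }

    # Coverage check: every program outcome should be evidenced by at
    # least one course outcome somewhere in the curriculum.
    covered = {po for pos in co_to_po.values() for po in pos}
    uncovered = sorted(set(po_to_peo) - covered)
    if uncovered:
        print("Program outcomes with no supporting course outcome:", uncovered)

The same check applies one level up, from program outcomes to program educational objectives.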

Finally

Present data to faculty in an easily digestible form
– Charts, graphs, tables, etc.

Faculty evaluate the data
– Are students actually learning the material that the faculty believe (and claim) they are learning?

Faculty make recommendations for improvement as necessary
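Even a plain-text chart can make per-outcome results digestible at a faculty meeting. A sketch with invented outcomes, achievement rates, and a hypothetical 70% target:

    # Hypothetical presentation sketch: per-outcome achievement rates as a
    # plain-text bar chart. The outcomes, rates, and target are invented.
    achievement = {
        "PO-a (problem solving)": 0.82,
        "PO-b (analysis/design)": 0.64,
        "PO-i (communication)":   0.91,
    }
    TARGET = 0.70  # invented threshold for "outcome met"

    for outcome, rate in achievement.items():
        bar = "#" * round(rate * 40)
        flag = "" if rate >= TARGET else "  <-- below target"
        print(f"{outcome:<26} {bar} {rate:.0%}{flag}")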

The Big Picture

[Diagram: a continuous-improvement loop. The Mission Statement drives Program Objectives, Program Outcomes, and Course Outcomes; stakeholders (students, alumni, employers, faculty, …), performance criteria, and educational practices/strategies feed the cycle: Assess (collect and analyze evidence), Evaluate (interpret evidence), Take Action, Revise, and back to Assess.]

The Big Picture

Show relationship between mission statement, objectives, and outcomes

Assess and evaluate objectives and outcomes independently

Map program outcomes to program objectives

Map course outcomes to program outcomes

Identify weaknesses and implement focused improvements in targeted areas

Issues

All assessment methods have their limitations and contain some bias

Meaningful analysis requires both direct and indirect measures from a variety of sources
– Students, alumni, faculty, employers, etc.

Multiple assessment methods provide converging evidence of student learning

Assessment Lessons

Cannot do everything at once
– Try an approach for the first round; learn and refine

Having data isn’t all there is to it!
– Easy to generate lots of bad data

One size fits all … NOT!
– Programs, courses, and instructors all differ

Be ready to compromise
– Perfection is neither possible nor desirable

Faculty evaluation and promotion
– Do not tie these to data generated from assessment

Resources

http://www.cs.rit.edu/~rlc/Assessment/
http://www.abet.org/assessment.shtml