
QUICK START

Guidance Document

2014

PART OF THE ASSESSMENT LITERACY SERIES
THE RIA GROUP | 16407 Highland Club Avenue, Baton Rouge, LA 70817


Quick Start© Program Guide

Introduction

The purpose of this document is to provide guidance for developing measures of student performance that will meet the criteria within the Performance Measure Rubric. The rubric is a self-assessment tool used to ascertain the technical quality of locally developed performance measures. The process used to “design”, “build”, and “review” teacher-made performance measures is contained within the Quick Start program. Quick Start delivers a foundational understanding of the procedures necessary to create these performance measures, which teachers may then use to assess their students’ skills, knowledge, and concept mastery of targeted content standards.

Figure 1. Process Components

Design
• Purpose Statement
• Targeted Content Standards
• Test Blueprint

Build
• Items/Tasks
• Scoring Keys & Scoring Rubrics
• Test Forms

Review
• Item/Task Reviews
• Alignment Reviews
• Data Reviews
• Refinements

Contact

For more information:
www.hr.riagroup2013.com
[email protected]


Phase I: Design

1.1 Goal Statement

Understand and apply the techniques used to design measures of student performance.

1.2 Objectives

The professional will successfully:

o Create a “purpose statement” for a specific performance measure;
o Identify an “Enduring Understanding/Key Concept” and its associated content standards for a specific course/area of study; and,
o Develop a test blueprint outlining the performance measure’s structure.

1.3 Guiding Questions

What is the performance measure intended to measure and at what grade?

What are the developmental characteristics of test-takers?

Which areas will be targeted among the various content standards?

How will educators use the results (overall score and “growth” inferences)?

When will the performance measure be administered?

Do the items/tasks capture the content standards within the key concept?

Is the number of items/tasks sufficient so that students at varying levels can demonstrate their knowledge?

What are the time demands for both teachers and students?

How does the design reflect the areas of emphasis in the standards?

1.4 Resources

State Content Standards, National Standards, and Common Core Standards

Teacher-made items/tasks


Department/grade-level projects, experiments, portfolios, performance demonstrations, and writing journals

Textbooks and other ancillary materials

Handout #1 – Designing the Assessment-Examples

Handout #3 – Performance Measure Rubric [Scored Example]

Template #1 – Designing the Assessment

1.5 Procedural Steps

STEP 1. Convene a group of professional educators (e.g., Professional Learning Communities) who are familiar with the content standards and grade level to be assessed. A lead facilitator should be designated and agreed upon within the group.

STEP 2. Decide how participants will reach consensus (100% agreement, majority, or another measure).

STEP 3. Develop a plan and timeline for completing the development and review of assessment measures.

STEP 4. Create a purpose statement for the assessment using the procedural steps below. See Handout #1 for examples. Refer to Template #1 for more information.

Procedural Steps: Create a Purpose Statement

1. Individually create a statement about the performance measure in terms of the content standards it purports to measure.
2. Build consensus by focusing on three components of the statement: What, How, and Why.
3. Draft three (3) sentences reflecting the group’s consensus for each component and review.
4. Merge sentences to create a single paragraph “statement”. Again, review to ensure that the statement reflects the group’s intent.
5. Finalize the statement and double-check for editorial soundness.

Framework: Create a Purpose Statement

What-___________________________________________________________

How-____________________________________________________________

Why-____________________________________________________________


STEP 5. Identify targeted content standards using the procedural steps below. See Handout #1 for examples. Refer to Template #1 for more information.

Procedural Steps: Select Targeted Content Standards

1. Place the course/subject’s name and Enduring Understanding/Key Concept statement above the Targeted Content Standards table.
2. Place the code for each standard/content strand in the Content ID column along with a description for each content standard in the Content Statement column.
3. Have subject matter experts work collaboratively to identify an initial (i.e., draft) set of content standards associated with the Big Idea/Key Concept.
4. Review the list of targeted content standards. Look for gaps and/or redundancies and then finalize the list by placing an “X” in the Final column.
5. Verify that the “final” targeted content standards will be those used to develop the test blueprint.

Framework: Select Targeted Content Standards

Subject/Course: ___________________
Enduring Understanding/Key Concept: _______________________________

Targeted Content Standards

Content ID | Content Statement | Draft | Final
           |                   |       |
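To make the Draft/Final workflow concrete, here is a minimal illustrative sketch; the standard codes and statements are hypothetical placeholders, not drawn from the Quick Start materials. It shows how a group might record draft selections and carry only the finalized standards forward into the test blueprint (Step 5 above).

```python
# Hypothetical example: tracking draft vs. final targeted content standards.
# Codes and statements are invented placeholders.
standards = [
    {"content_id": "M08.A-1", "statement": "Apply ratio reasoning", "draft": True, "final": True},
    {"content_id": "M08.A-2", "statement": "Solve linear equations", "draft": True, "final": True},
    {"content_id": "M08.B-1", "statement": "Interpret scatter plots", "draft": True, "final": False},
]

# Only standards marked "X" in the Final column feed the test blueprint.
final_standards = [s for s in standards if s["final"]]
for s in final_standards:
    print(s["content_id"], "-", s["statement"])
```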

STEP 6. Develop a test blueprint based on the targeted content standards identified in Step 5. Using the procedural steps below, determine the number of items assigned to each targeted content standard and the associated levels of cognitive demand. See Handout #1 for examples. Refer to Template #1 for more information.


Procedural Steps: Develop a Test Blueprint

1. Review the selected targeted content standards.
2. Insert the selected “Big Idea”/Key Concept and targeted content standards (numeric code only) into the test blueprint table.
3. Determine the number of items/tasks across the four (4) cognitive levels.
4. Tally the rows and place the values in the Total column. Tally each cognitive level column and place the resultant values in the Grand Totals row.
5. Report the total number of items/tasks and the total possible points available.

Framework

Subject/Course: __________________________________

                                                             Item/Task Cognitive Level
Enduring Understanding/Key Concept | Targeted Content Standard | Level 1 | Level 2 | Level 3 | Level 4 | Total
                                   |                           |         |         |         |         |
Grand Totals                       |                           |         |         |         |         |

Technical Note: This assessment comprises ___________ items [worth _____ point each], ________ items [worth ____ points each], and _______ extended performance task(s) [worth ______ points] for a total of _____ items/tasks. The maximum score possible on this assessment is ____ points.
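The tallying in steps 4 and 5 is simple arithmetic. The sketch below, with made-up item counts and assumed point values per cognitive level, shows the row totals, the Grand Totals row, and the maximum-score calculation described in the Technical Note.

```python
# Hypothetical blueprint: items per targeted content standard (rows)
# across four cognitive levels (columns). Counts are invented.
blueprint = {
    "M08.A-1": [3, 2, 1, 0],  # Level 1 .. Level 4
    "M08.A-2": [2, 2, 2, 1],
}
points_per_item = {1: 1, 2: 1, 3: 2, 4: 4}  # assumed point values by level

row_totals = {std: sum(levels) for std, levels in blueprint.items()}      # Total column
grand_totals = [sum(col) for col in zip(*blueprint.values())]             # Grand Totals row
total_items = sum(row_totals.values())
max_score = sum(n * points_per_item[lvl + 1]
                for levels in blueprint.values()
                for lvl, n in enumerate(levels))

print("Row totals:", row_totals)
print("Grand totals:", grand_totals)
print(f"{total_items} items/tasks, maximum score {max_score} points")
```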

1.6 Quality Reviews

The Performance Measure Rubric is designed to help the educator review items/tasks, scoring rubrics, and assessment forms to create high-quality performance measures. Applying the criteria within the Performance Measure Rubric allows the educator to further evaluate the quality of the assessment.

Strand 1 of the Performance Measure Rubric evaluates the Design phase of the assessment process (purpose statement, targeted content standards, and test blueprint). Use the following procedural steps to perform a quality review of this phase. Refer to Handout #3 – Performance Measure Rubric [Scored Example] for more information.


Rating Steps: Performance Measure Rubric Strand 1: Design

STEP 1. Review information, data, and documents associated with the development of the selected performance measure.

STEP 2. Assign a value in the “Rating” column for each aspect within a particular strand using the following rating scale:
a. (1) = fully addressed
b. (.5) = partially addressed
c. (0) = not addressed
d. (N/A) = not applicable at this time

STEP 3. Reference supporting information associated with each assigned rating in the “Evidence” column.

STEP 4. In the bottom row, add any additional notations and/or comments that articulate important nuances of the performance measure.

STEP 5. Compile assigned values and place in the “Strand Summary” row.

STRAND 1: DESIGN

Task ID | Descriptor | Rating | Evidence

1.1 | The purpose of the performance measure is explicitly stated (who, what, why).

1.2 | The performance measure has targeted content standards representing a range of knowledge and skills students are expected to know and demonstrate.

1.3 | The performance measure’s design is appropriate for the intended audience and reflects challenging material needed to develop higher-order thinking skills.

1.4 | Specification tables articulate the number of items/tasks, item/task types, passage readability, and other information about the performance measure – OR – blueprints are used to align items/tasks to targeted content standards.

1.5 | Items/tasks are rigorous (designed to measure a range of cognitive demands/higher-order thinking skills at developmentally appropriate levels) and of sufficient quantities to measure the depth and breadth of the targeted content standards.

Strand 1 Summary: ___ out of 5

Additional Comments/Notes
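Because the same five rating steps recur for every strand, the compile step (STEP 5) can be illustrated with a small sketch. The rating values below are invented for illustration, with None standing in for N/A.

```python
# Hypothetical Strand 1 ratings keyed by Task ID; values follow the scale
# above (1 = fully, .5 = partially, 0 = not addressed, None = N/A).
ratings = {"1.1": 1, "1.2": 0.5, "1.3": 1, "1.4": None, "1.5": 1}

# STEP 5: compile assigned values into the Strand Summary row.
summary = sum(v for v in ratings.values() if v is not None)
print(f"Strand 1 Summary: {summary} out of {len(ratings)}")  # -> 3.5 out of 5
```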


Phase II: Build

2.1 Goal Statement

Understand and apply the techniques used to build measures of student performance.

2.2 Objectives

The professional will successfully:

o Create the necessary items/tasks to address the test blueprint;

o Develop scoring keys and/or scoring rubrics; and,

o Organize items/tasks and administration guidelines into a test form.

2.3 Guiding Questions

Are the items aligned with targeted content standards?

Do the selected items/tasks allow students to demonstrate content knowledge by:

o Responding to questions and/or prompts?

o Performing tasks, actions, and/or demonstrations?

Do the items/tasks measure content knowledge, skill, or process and not an external or environmental factor (e.g., guessing)?

Is the number of items/tasks sufficient to sample the targeted content?

Are the items/tasks developmentally appropriate for the intended test-takers?

Are the correct answers and/or expected responses clearly identified?

Do the performance measure’s directions specify:

o What the test-taker should do, read, or analyze?

o Where and how the test-taker should respond or demonstrate the task?

o How many points a correct/complete response is worth towards the overall score?

Are there directions for different item/task types?


2.4 Resources

Teacher-made items/tasks/projects

Textbooks and other ancillary materials

Formative assessment materials, curriculum-based measures

Handout #2 – Building the Assessment-Examples

Handout #3 – Performance Measure Rubric [Scored Example]

Template #2 – Building the Assessment

2.5 Procedural Steps

STEP 1. Develop items/tasks according to the test blueprint, using the procedural steps for Multiple Choice (MC) items, Short Answer (SA) items, Extended Answer (EA) items, and Extended Performance (EP) tasks as listed below. See Handout #2 for examples. Refer to Template #2 for more information.

Procedural Steps: Multiple Choice (MC) Items

1. Review the targeted content standard.

2. Determine which aspects of the standard can be measured objectively.

3. Select the focused aspect and determine the cognitive demand reflected in the standard’s description.
4. Create a question (stem), one correct answer, and plausible (realistic) distractors.

5. Review the item and answer options for grammatical soundness.

Framework: MC

1. ___<Item Stem>______________________________________________

A. __<Answer Option>__

B. __<Answer Option>__

C. __<Answer Option>__

D. __<Answer Option>__ (___<Item Tag>___)


Procedural Steps: SA/EA Items

1. Review the targeted content standard(s).
2. Determine which aspects of the standard(s) can be best measured by having students “construct” a response.
3. Select and list aspects of the targeted content standard(s) to be measured.
4. Create a prompt, select a passage, or develop a scenario for students.
5. Develop a clear statement that articulates specific criteria for the test-taker to provide.

Framework: SA/EA Items

Directions:

__________________________________________________________________

__________________________________________________________________

<Task/Passage/Scenario>

___<Item Stem>____________________________________________________

__<Response Area>_________________________________________________

(___<Item Tag>___)

Procedural Steps: Extended Performance (EP) Tasks

1. Review the targeted content standard(s).
2. Determine which aspects of the standard(s) can be best measured by having students “develop” a complex response, demonstration, or performance over an extended period of time (e.g., two weeks).
3. Select and list all aspects of the targeted content standard(s) to be measured.
4. Create a project, portfolio, or demonstration expectation statement that includes subordinate tasks, which are aligned to the test blueprint.
5. Develop a clear statement for each subordinate task that articulates specific criteria for the test-taker to provide.


Framework: EP Tasks

__________________________________________________________________

__________________________________________________________________

<Project-Portfolio-Demonstration Description>

___<Task __ of __ >_________________________________________________

__<Criteria>_______________________________________________________

(___<Item Tag>___)

STEP 2. Develop scoring keys/rubrics using the procedural steps below. See Handout #2 for examples. Refer to Template #2 for more information.

Procedural Steps: MC Items Score Key

1. Enter the assessment information at the top of the Scoring Key.
2. Record the item number, item tag (optional), item type, and point value.
3. Record the MC answers in the Answer column.
4. Repeat Steps 2-3 until all items on the test blueprint are reflected within the Scoring Key.
5. Validate that each question-to-answer relationship is recorded correctly.

Assessment Name | Grade/Course | Administration | Total Possible Points
                |              |                |

Item # | Item Tag | Item Type | Point Value | Answer
1      |          |           |             |
2      |          |           |             |
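Step 5’s validation (“each question-to-answer relationship is recorded correctly”) can be mimicked with a small script. The scoring-key entries below are illustrative only.

```python
# Hypothetical scoring key rows mirroring the table above.
scoring_key = [
    {"item": 1, "tag": "M08.A-1", "type": "MC", "points": 1, "answer": "C"},
    {"item": 2, "tag": "M08.A-2", "type": "MC", "points": 1, "answer": "B"},
]

def score_mc(responses, key):
    """Award each item's point value when the response matches the keyed answer."""
    return sum(row["points"] for row in key
               if responses.get(row["item"]) == row["answer"])

print(score_mc({1: "C", 2: "A"}, scoring_key))  # -> 1 (item 1 correct, item 2 wrong)
```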

Procedural Steps: SA/EA/EP Scoring Rubrics

1. Review the SA, EA, or EP task and the criteria articulated in the stem/directions.
2. Select a “generic” rubric structure (see Template #2: Building the Assessment) based upon (a) scoring criteria and (b) the number of dimensions being measured.
3. Modify the rubric language using specific criteria expected in the response to award the maximum number of points.


4. Determine how much the response can deviate from “fully correct” in order to earn the next (lower) point value. [Continue until the full range of possible scores is described.]
5. During the review, ensure that the response expectation, scoring rubric, and test blueprint are fully aligned.

Frameworks: SA/EA/EP Scoring Rubrics

Short Answer (SA) – Single Dimension

Item # _____ Sample Response for: _________________________________________
2 points |
1 point  |
0 points |

Extended Answer (EA) – Single Dimension

Item # _____ Sample Response for: ____________________________________________
4 points |
3 points |
2 points |
1 point  |
0 points |

Extended Performance (EP) Task – Multi-dimensional

Item # _____ Sample Response for: ____________________________________________

Dimension | Advanced (4 points) | Proficient (3 points) | Basic (2 points) | Below Basic (1 point)
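One way to see how the single-dimension and multi-dimensional rubrics relate is to represent both as point scales, as in the sketch below. The dimension names and descriptors are placeholders, not prescribed by the Quick Start materials.

```python
# Hypothetical rubric structures. SA/EA rubrics map a single point scale to
# criteria descriptions; an EP rubric holds one scale per dimension.
sa_rubric = {2: "fully correct response", 1: "partially correct", 0: "no credit"}

ep_rubric = {
    "Content accuracy": {4: "Advanced", 3: "Proficient", 2: "Basic", 1: "Below Basic"},
    "Communication":    {4: "Advanced", 3: "Proficient", 2: "Basic", 1: "Below Basic"},
}

# An EP task's score is the sum of the points awarded on each dimension.
awarded = {"Content accuracy": 3, "Communication": 4}
print("EP task score:", sum(awarded.values()))  # -> 7 of a possible 8
```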

STEP 3. Develop administration guidelines using the procedural steps below. See Handout #2 for examples. Refer to Template #2 for more information.

Procedural Steps: Administration Guidelines

1. Create a series of administrative steps for before, during, and after the assessment window.


2. Explain any special requirements and/or equipment necessary, including established testing accommodations. State any ancillary materials (e.g., calculators) that test-takers need or are allowed to use.
3. Identify the approximate time afforded to complete the assessment, including each subtask in an EP task.
4. Include detailed “scripts” articulating exactly what is to be communicated to students, especially when administering performance tasks over a long period of time.
5. Include procedures for scoring, administering make-ups, and handling completed assessments.

Framework: Administration Guidelines

Preparation

STEP 1. ______________________________________________________

STEP 2. ______________________________________________________

STEP 3. ______________________________________________________

Administration

STEP 1. ______________________________________________________

STEP 2. ______________________________________________________

STEP 3. ______________________________________________________

After Testing

STEP 1. ______________________________________________________

STEP 2. ______________________________________________________

STEP 3. ______________________________________________________

STEP 4. Organize test forms to include administration guidelines, items/tasks, and scoring keys/rubrics.

2.6 Quality Reviews

The Performance Measure Rubric is designed to help the educator review items/tasks, scoring rubrics, and assessment forms to create high-quality performance measures. Applying the criteria within the Performance Measure Rubric allows the educator to further evaluate the quality of the assessment.

Strand 2 of the Performance Measure Rubric evaluates the Build phase of the assessment process (items/tasks, scoring keys/rubrics, test forms). Use the following procedural steps to perform a quality review of this phase. Refer to Handout #3 – Performance Measure Rubric [Scored Example] for more information.


Rating Steps: Performance Measure Rubric Strand 2: Build

STEP 1. Review information, data, and documents associated with the development of the selected performance measure.

STEP 2. Assign a value in the “Rating” column for each aspect within a particular strand using the following rating scale:
a. (1) = fully addressed
b. (.5) = partially addressed
c. (0) = not addressed
d. (N/A) = not applicable at this time

STEP 3. Reference supporting information associated with each assigned rating in the “Evidence” column.

STEP 4. In the bottom row, add any additional notations and/or comments that articulate important nuances of the performance measure.

STEP 5. Compile assigned values and place in the “Strand Summary” row.

STRAND 2: BUILD

Task ID | Descriptor | Rating | Evidence

2.1 | Items/tasks and score keys are developed using standardized procedures, including scoring rubrics for human-scored, open-ended questions (e.g., short constructed response, writing prompts, performance tasks, etc.).

2.2 | Items/tasks are created and reviewed in terms of: (a) alignment to the targeted content standards, (b) content accuracy, (c) developmental appropriateness, (d) cognitive demand, and (e) bias, sensitivity, and fairness.

2.3 | Administration guidelines are developed that contain the step-by-step procedures used to administer the performance measure in a consistent manner, including scripts to orally communicate directions to students, day and time constraints, and allowable accommodations/adaptations.

2.4 | Scoring guidelines are developed for human-scored items/tasks to promote score consistency across items/tasks and among different scorers. These guidelines articulate point values for each item/task used to combine results into an overall score.

2.5 | Summary scores are reported using both raw score points and a performance level. Performance levels reflect the range of scores possible on the assessment and use terms or symbols to denote performance levels.

2.6 | The total time to administer the performance measure is developmentally appropriate for the test-taker. Generally, this is 30 minutes or less for young students and up to 60 minutes per session for older students (high school).

Strand 2 Summary: ___ out of 6

Additional Comments/Notes


Phase III: Review

3.1 Goal Statement

Understand and apply the techniques used to review and refine measures of student performance.

3.2 Objectives

The professional will successfully:

o Review developed items/tasks for validity threats, content and cognitive match;
o Examine test alignment to ensure: (a) all items/tasks match the skills, knowledge, and concepts in the targeted content standards; (b) all rubrics match the targeted content standards; and, (c) all items/tasks reflect higher-order thinking.

Note: A supplement to this document will address alignment (Step 8), data reviews (Step 9), and refinement (Step 10), which the presenter addressed in the Orientation presentation.

3.3 Guiding Questions

Does each item/task clearly address the standard?

Are the reading difficulty and vocabulary appropriate?

Is the language clear, consistent, and understandable?

Are charts, tables, graphs, and diagrams clear and understandable?

Is there only one (1) correct answer?

Have the items been reviewed for bias and sensitivity?

o Items provide an equal opportunity for all students to demonstrate their knowledge and skills. The stimulus material (e.g., reading passage, artwork, and diagram) does not raise bias and/or sensitivity concerns that would interfere with the performance of a particular group of students.

Are the items developmentally appropriate for test-takers?

Does the blueprint reflect the test form?


Does the scoring rubric provide detailed scoring information?

Does the assessment have at least two (2) performance levels?

3.4 Resources

Test blueprint

Draft operational form

Draft scoring sheet and rubric(s)

Template #3 – Performance Measure Rubric

3.5 Procedural Steps

STEP 1. Review each item/task for content, bias, fairness, sensitivity, accessibility, and editorial soundness using Strand 3 of the Performance Measure Rubric.

STEP 2. Review each item/task for alignment with the targeted content standards for content match and cognitive demand.

STEP 3. Review the test form for alignment in terms of: (a) content patterns of emphasis; and (b) content range.

STEP 4. Conduct a performance level review of the assessment to identify a preliminary cut score. The assessment should have at least two (2) performance levels.
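Step 4 asks for at least two performance levels and a preliminary cut score. The sketch below, with invented cut points on a hypothetical 20-point assessment, shows the raw-score-to-level mapping this implies.

```python
# Hypothetical cut scores: raw scores at or above each cut fall into that level.
cuts = [("Advanced", 17), ("Proficient", 13), ("Basic", 8)]  # listed highest first

def performance_level(raw_score):
    for level, cut in cuts:
        if raw_score >= cut:
            return level
    return "Below Basic"

print(performance_level(14))  # -> Proficient
```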

3.6 Quality Reviews

The Performance Measure Rubric is designed to help the educator review items/tasks, scoring rubrics, and assessment forms to create high-quality performance measures. Applying the criteria within the Performance Measure Rubric allows the educator to further evaluate the quality of the assessment.

Strand 3 of the Performance Measure Rubric evaluates the Review phase of the assessment process (validity threats, content and cognitive matches of items/tasks, and test alignment). Use the following procedural steps to perform a quality review of this phase. Refer to Handout #3 – Performance Measure Rubric [Scored Example] for more information.

Rating Steps: Performance Measure Rubric Strand 3: Review

STEP 1. Review information, data, and documents associated with the development of the selected performance measure.


STEP 2. Assign a value in the “Rating” column for each aspect within a particular strand using the following rating scale:
a. (1) = fully addressed
b. (.5) = partially addressed
c. (0) = not addressed
d. (N/A) = not applicable at this time

STEP 3. Reference supporting information associated with each assigned rating in the “Evidence” column.

STEP 4. In the bottom row, add any additional notations and/or comments that articulate important nuances of the performance measure.

STEP 5. Compile assigned values and place in the “Strand Summary” row.

STRAND 3: REVIEW

Task ID | Descriptor | Rating | Evidence

3.1 | The performance measures are reviewed in terms of design fidelity –
• Items/tasks are distributed based upon the design properties found within the specification or blueprint documents.
• Item/task and form statistics are used to examine levels of difficulty, complexity, distracter quality, and other properties.
• Items/tasks and forms are rigorous and free of biased, sensitive, or unfair characteristics.

3.2 | The performance measures are reviewed in terms of editorial soundness, while ensuring consistency and accuracy of other documents (e.g., administration) –
• Identifies words, text, reading passages, and/or graphics that require copyright permission or acknowledgements
• Applies Universal Design principles
• Ensures linguistic demands and/or readability are developmentally appropriate

3.3 | The performance measures are reviewed in terms of alignment characteristics –
• Pattern consistency (within specifications and/or blueprints)
• Matching the targeted content standards
• Cognitive demand
• Developmental appropriateness

3.4 | Cut scores are established for each performance level. Performance level descriptors describe the achievement continuum using content-based competencies for each assessed content area.

3.5 | As part of the assessment cycle, post-administration analyses are conducted to examine aspects such as item/task performance, scale functioning, overall score distribution, rater drift, content alignment, etc.

3.6 | The performance measure has score validity evidence demonstrating that item responses were consistent with content specifications. Data suggest the scores represent the intended construct by using an adequate sample of items/tasks within the targeted content standards. Other sources of validity evidence, such as the interrelationship of items/tasks and alignment characteristics of the performance measure, are collected.

3.7 | Reliability coefficients are reported for the performance measure, which includes estimating internal consistency. Standard errors are reported for summary scores. When applicable, other reliability statistics such as classification accuracy, rater reliabilities, and others are calculated and reviewed.

Strand 3 Summary: ___ out of 7

Additional Comments/Notes

Note: A supplement to this document will address Tasks 3.5, 3.6, and 3.7 of the Performance Measure Rubric.
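Task 3.7 calls for internal-consistency estimates. Cronbach’s alpha is the most common such coefficient; a minimal sketch on made-up item scores follows. This illustrates the standard formula only; the forthcoming supplement, not this document, covers the Quick Start procedure itself.

```python
# Cronbach's alpha from a students-by-items score matrix (made-up data).
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

scores = [  # rows: students, columns: items
    [2, 2, 2, 1],
    [1, 1, 1, 0],
    [2, 1, 2, 2],
    [0, 0, 1, 0],
]
k = len(scores[0])
item_vars = [variance(col) for col in zip(*scores)]
total_var = variance([sum(row) for row in scores])
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # -> 0.91 for this sample
```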