Assessment Portfolio: A Case Study By Daniel Grajek...
TABLE OF CONTENTS
SCREENING
    History and Background .......................................... 1
    Previous Test Scores ............................................ 2
GLOBAL MEASURE ...................................................... 3
SPECIFIC MEASURE .................................................... 6
INFORMAL MEASURES ................................................... 7
REPORT / MATERIALS
    Global Measure .................................................. 8
    Specific Measure ................................................ 10
    Informal Measures ............................................... 12
    Planning and Implementing of Instruction ........................ 15
REFERENCES .......................................................... 20
MATERIALS
    Sleeve A: Woodcock-Johnson Psychoeducational Battery-Revised: Tests of Cognitive Ability (WJ-R Cognitive Battery)
    Sleeve B: Woodcock Reading Mastery Test-Revised (WRMT-R)- Form G
    Sleeve C: IEP Notes
    Sleeve D: WJ-R Reliability Data
    Sleeve E: WJ-R Validity Data
    Sleeve F: WJ-R Normative Data
    Sleeve G: WRMT-R Reliability Data
    Sleeve H: WRMT-R Validity Data
    Sleeve I: WRMT-R Normative Data
    Sleeve J: Informal Tests
    Sleeve K: Notes of Observations and Interviews
    Sleeve L: Basals of WJ-R and WRMT-R
SCREENING
History and Background
From February to March 2010, I assessed a student in a local school system that I will
call "Ali." Ten years and three months of age, Ali comes from an Arab family that immigrated from Lebanon about 24 years ago. His household consists of two parents and three sisters (two
other sisters live outside the home). His father’s occupation is unknown except for the fact that
“he works at Ford.” His mother stays at home.
According to his IEP, Ali’s primary language at home is Arabic with English as a second
language. Though his native language is listed as “Arabic,” he is a fluent English speaker.
According to his speech therapist, his ESL status is no longer the critical factor it was when he
was younger. That said, Ali continues to have some trouble expressing himself. Based on
previous assessments, teachers and other school officials have determined that Ali is eligible for
special education under the “Specific Learning Disability” category. Throughout his
development, Ali met major milestones such as crawling, walking and talking at the appropriate
times. His adaptive, social, and behavioral skills are age appropriate as well. His IEP states he is
“polite, helpful, hard-working, friendly, and outgoing.” Competitive outdoor sports and video
games are Ali’s favorite activities. At first glance, he seems to enjoy having fun and joking
around with peers and adults.
Ali’s IEP, tailored to meet special needs, consists of both general education and special
education (general ed: 9 to 11 hrs./wk.; special ed: 23 to 31 hrs./wk.). He also receives half-hour
sessions of speech and language therapy once a week.
Previous Test Scores
Diagnostic tests have determined that Ali has a “significant weakness between his general
abilities and assessed level of achievement.” The IEP cites scores of the following tests (for more
information, see IEP Notes, SLEEVE C):
• WJ-3 Brief Achievement (4/25/2009)
• CELF-4 (speech) (4/28/2009): general language ability
• WISC-IV (5/2006): verbal acquired knowledge, verbal reasoning, verbal comprehension, perceptual reasoning
• WIAT-II (psych) (5/2006): visual-motor integration
To summarize the results, Ali is about 2 years below age level in writing and reading, and
1 year behind in math. Weak areas include a lack of focus in completing assignments, difficulty
completing writing tasks with correct grammar, and not being able to listen to a story and
interpret meaning.
The above information (history and test results) helped me determine what tests and
strategies should be used for my assessment plan. This plan has a solid basis because it precisely
matches the areas of concern cited in the IEP. These areas are the student’s “general intellectual
ability, specific cognitive abilities, scholastic aptitude, oral language, and academic
achievement" (see IEP Notes, SLEEVE C). The tests used in this report are justified as a means of acquiring updated information in areas already studied in previous testing, which measured cognitive abilities such as visual processing, processing speed, and comprehension-knowledge. Further
evidence that this plan is appropriate is the approval and support of Ali’s teacher.
GLOBAL MEASURE
In brief, this assessment plan consists of using the Woodcock-Johnson Psychoeducational
Battery-Revised: Tests of Cognitive Ability (WJ-R Cognitive Battery) (see SLEEVE A) as a
global measure; the Woodcock Reading Mastery Test-Revised (WRMT-R)- Form G (see
SLEEVE B) as a specific measure; and observations, interviews, two informal reading tests, and
a behavioral questionnaire for the informal measure strategy.
This study uses standard scores based on age equivalents from each test's norm tables.
The Woodcock-Johnson Psychoeducational Battery-Revised: Tests of Cognitive Ability (WJ-R) serves as the global measure. (Only the Standard Battery was used, not the Supplemental Battery.) The determination that it has appropriate technical qualities is based on three
variables—normative data, reliability and validity. First of all, its normative data is sound. Test
administrators obtained information from 6,359 subjects in over 100 geographically diverse
communities in the U.S. (3,245 were K-12). The subjects were selected randomly "within a stratified sampling design for 10 specific community and subject variables" (Woodcock, 1990, p.93). The 10 categories include census region, community size, sex, and age. Furthermore,
the data was categorized according to socioeconomic categories (13 of them).
Second, the WJ-R is reliable, based on the data table provided in the test’s manual
(p.100-101) (see SLEEVE D). McLoughlin and Lewis (2008) believe a test must be .80
minimum to be acceptable (.90 for its use to determine individual placement) (p.60), and the WJ-
R meets this criterion. According to the manual, the test-retest coefficients range from .835
(social studies for age 9) to .958 (Writing Samples for age 9) Letter-Word Identification, Allied
Problems, Applied Problems, Dictation, and Writing Samples were above .90 for age 9; and
6
Passage Comprehension, Applied Problems, Science, and Humanities for age 9 level went below
.90, above .83 (p.100-101).
Third, the WJ-R is presumed valid based on studies cited in the manual. The manual addresses three types of validity: content, concurrent, and construct (pp.103-104).
Content validity is “the extent that a test represents the domain of content that it is designed to
measure” (p.103). In this regard, test developers engaged in validity studies and consulted expert
opinion. The authors claim this helped produce a test that is sufficiently “comprehensive,”
“open-ended,” and “real-life” oriented. Also, it considers an appropriate sample of skills (p.103).
Concurrent validity takes into account subject behavior and effective test results. The authors cite work performed by McCullough and Weibe (year of studies not cited) using comparative studies with other achievement measures, such as the Boehm, Bracken, and K-ABC tests. Construct
validity deals with consistency within the test itself, comparing various age levels. Results are too complex to report here; readers are directed to the table on page 105 (see SLEEVE E).
Evidence that the student was adequately represented in the normative sample is found in the charts on pages 95 and 96 of the manual (see SLEEVE F). First of all, I believe the WJ-R appropriately emphasizes socioeconomic status over variables such as region, community size, and race (evidenced by the fact that a separate chart on page 96 is devoted to socioeconomic status). Although I am satisfied that Ali's region, community, sex, and, above all, class are represented, I am disappointed that Ali's ethnicity, Arab-American, is not even mentioned in the sample (page 95) (see SLEEVE F).
The accuracy of the basals and ceilings, and the calculation of raw scores and derived
scores can be gleaned in the attached test booklets (see SLEEVE A & B). Basal levels (from the
actual test books) are found in SLEEVE L.
Anecdotal behaviors observed before and during testing were recorded throughout the process (see Observation and Interview Notes, SLEEVE K). Overall, Ali seemed very
comfortable, even enthusiastic about the process and the test itself. I attribute this to the
excellent rapport established early on. As Ali’s teacher can attest, several hours were spent
simply getting to know the student before the testing sessions. This involved casual conversing
and sharing activities. When it came time to test, Ali typically reacted with a smile and an
excited rush to get ready. He always made sure he brought along a sharpened pencil. The student
obviously perceived the activity as fun.
We met in a relatively quiet place—the school library. Before and during the test, Ali
usually smiled and joked around with the examiner, while remaining serious about taking the
test. He remained highly focused throughout the process, except for brief moments when noises in the library distracted him: sounds from the hall, people using the copy machine, and the phone ringing. One day the lighting was poor, but it did not seem to have an impact.
From time to time, Ali made jocular comments, but he stayed on track for the most part. He was not afraid to ask the examiner to repeat questions. His response time varied: sometimes he answered questions very quickly, other times very slowly. Significant pauses were recorded in the notes on the test booklet.
On Day 2 of the WJ-R, Ali was particularly animated: slouching over (not out of boredom), singing a lot, and moving his arm and hand with his pencil. I do not believe this impacted the test's validity.
SPECIFIC MEASURE
The decision to use the specific measure, the Woodcock Reading Mastery Test-Revised
(WRMT-R) (Form G) was based on the student’s performance on the global measure (WJ-R).
Supporting evidence is Ali's Standard Score of 83 (±3 SEM), or "Low Average" (McLoughlin and Lewis, p.93). Evidence that the student was adequately represented in the normative sample
is found in the charts on pages 95 and 96 of the manual (see SLEEVE I). Like the WJ-R, the WRMT-R considers variables such as region, community size, and race, but places weight on socioeconomic status, an approach I agree with. However, as I expressed regarding the WJ-R, the WRMT-R does not list Ali's ethnicity, "Arab-American," under the "Race" category, which is disappointing (page 95).
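The ±3 SEM attached to the Standard Score of 83 reported above can be read as a confidence band. A minimal sketch, assuming normally distributed measurement error (the standard interpretation of SEM):

```python
# Sketch: the score band implied by a standard score of 83 with SEM = 3.
# Assuming normally distributed error, a +/-1 SEM band captures the
# true score about 68% of the time, and +/-2 SEM about 95% of the time.

standard_score = 83
sem = 3

band_68 = (standard_score - sem, standard_score + sem)          # +/-1 SEM
band_95 = (standard_score - 2 * sem, standard_score + 2 * sem)  # +/-2 SEM

print("68% band:", band_68)  # (80, 86)
print("95% band:", band_95)  # (77, 89)
```

In other words, the obtained score of 83 is best read as "the true score most plausibly lies between 80 and 86," not as an exact value.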
Once again, the accuracy of the basals and ceilings, and the calculation of raw scores and
derived scores can be examined in the attached test booklets (see SLEEVE A & B). Basal levels
(from the actual test books) are found in SLEEVE L.
Anecdotal behaviors observed before and during the specific measure are virtually the
same as those listed in the Global Measure section (see previous page) gleaned from the
Observation and Interview Notes (SLEEVE K). In brief, Ali had a pleasant disposition: he was jocular, conversational, cooperative, and polite. Distractions in the school library setting were minimal.
Evidence for the WRMT-R's reliability can be found in SLEEVE G, and for its validity in SLEEVE H.
INFORMAL MEASURES
Evidence that the informal measures are based on item analysis of the specific measure is found in two informal reading tests and a behavioral questionnaire ("Student Assessment Checklist") in SLEEVE J. It is also found in the "Observation and Interview Notes" (SLEEVE K).
In summary, the informal strategy consists of observation, interviews, and three informal reading tests. The observation of the student involved simply watching him during formal and informal class time. Reading behavior was observed and noted in this report, as well as behavior that may indirectly impact performance in reading and general learning.
I interviewed three individuals regarding Ali—his special education teacher, speech
teacher, and psychologist. In two of these interviews, reading was mentioned (see SLEEVE K).
Two informal tests dealt with reading: specifically, blending consonant-vowel-consonant sounds1, and the ability to rhyme words, identify initial sounds, blend words, segment phonemes, and manipulate phonemes2 (see SLEEVE J). The third
informal test, the behavioral questionnaire, provides general information about influences that may indirectly affect learning.3 The "Observation and Interview Notes" in SLEEVE K also informally document these influences.
_________________
Sources:
1 Mercer and Mercer, 2005, p.279
2 Reading Rockets (2004) “Informal Reading Assessments: Examples”
http://www.readingrockets.org/articles/3411
3 Florida Center for Instructional Technology. “Student Assessment Checklist”
http://fcit.usf.edu/assessment/performance/assessb.html
REPORT
Global Measure
Here is a summary of the assessment data derived from the global-skills test based on Ali's age group (age 10.3). On the Woodcock-Johnson Psychoeducational Battery-Revised (WJ-R) test, Ali scored a Standard Score of 84 (on the borderline of "Average" and "Low Average") in Broad Reading. He scored 92 in Broad Mathematics ("Average"). In Broad Written Language, he scored 61 (Below Average). The scores he received for Broad Knowledge and Broad Skills were 86 and 84, respectively ("Average" to "Low Average"). The margin of error is ±4, except for Broad Reading and Broad Skills, which are ±3.
The WJ-R's Reading section consists of two parts: Letter-Word Identification and Passage Comprehension. On the former, Ali scored 85; on the latter, 84. The WJ-R's Math section consists of two parts: Calculation and Applied Problems. On the former, Ali scored 92; on the latter, 95. The WJ-R's Written Expression section consists of two parts: Dictation and Writing Samples. On the former, Ali scored 86 (Average); on the latter, 53 (Below Average).
In Knowledge areas, Ali received 93 (Average) in Science, 84 (Low Average) in Social Studies, and 86 (Average) in Humanities.
In the final analysis of the global test, the data yielded is consistent with the results of the test administered by Ali's special education teacher last year, the WJ-3 Brief Achievement Test. It confirms the IEP's conclusion that Ali is about 2 years below age level in writing and reading, and 1 year below in math (see IEP notes, SLEEVE C).
(See IEP notes, SLEEVE C.)

                     WJ-3 Brief Achievement   WJ-R Broad Scores
                     (4/25/2009)              (3/18/2010, 3/20/2010)
Reading                        74                       83
Math                           90                       92
Written Language               71                       61
This global data can be interpreted as follows: There has been marked improvement in reading in a year's time (+9 points), a difference well outside the margin of error (4 points). The scores in math are virtually the same, showing only a slight gain (+2 points) within the margin of error. Unfortunately, Ali's Written Language score dropped 10 points in a year's time, a difference outside the margin of error.
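The comparisons above can be checked mechanically. A minimal sketch that recomputes each change against the reported 4-point margin of error, using the scores from the comparison table:

```python
# Sketch: recomputing the 2009-to-2010 score changes against the
# reported margin of error (4 points for most broad scores).

MARGIN_OF_ERROR = 4

scores = {                     # area: (WJ-3, 4/25/2009; WJ-R, 3/2010)
    "Reading": (74, 83),
    "Math": (90, 92),
    "Written Language": (71, 61),
}

for area, (score_2009, score_2010) in scores.items():
    change = score_2010 - score_2009
    verdict = "outside" if abs(change) > MARGIN_OF_ERROR else "within"
    print(f"{area}: {change:+d} points ({verdict} the margin of error)")
```

Only the math change falls inside the margin of error; the reading gain and the written-language drop both exceed it.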
Two observations about Ali’s construction of words: First, I observed signs of dyslexia.
For example, on Test 27, #4, he wrote, “dig dog” instead of “big dog;” and on Test 26, #20, he
wrote “tadle” instead of “table” (the “b” was reversed to a “d” in both cases). Second, he has
not yet mastered using correct plural forms (Test 25, #19 "two man," #22 "two tooths," #27 "two shields"). Since these forms are exceptions to the rules of pluralization (they are not formed by simply adding "-s" at the end of the word), the student needs to be drilled to memorize them.
Ali's highest score on the WJ-R, 95, is in Applied Problems, showing that his strongest area is analyzing and solving practical problems in mathematics (Woodcock, p.14). Ali has the ability to use a mathematical procedure and make computations with arbitrary information.
Although Ali still remains below the mean in math, perhaps his personal strength of
perseverance can help him to keep improving until this ability is a significant asset. However, a
deficiency in reading could stand in the way of his ability to do math, especially in solving story
problems. This was the main reason I decided to assess Ali's reading ability through the
Woodcock Reading Mastery Test-Revised (WRMT-R)- Form G.
Regarding math, it was observed during the administration of the WJ-R that the subject counted on his fingers and doodled on paper. This adaptive behavior may help Ali in the short run, but, if not corrected, it could hinder his performance in higher math.
Ali's lowest area, Written Language, is a concern, particularly the standard score of 53 on his Writing Samples test. As Woodcock (1990) states, "The subject must phrase and present written sentences that are evaluated with respect to the quality of expression" (p.14). Since writing is a critical determinant of success, special attention needs to be given to this area. It is my opinion, however, that reading is the highest priority.
Specific Measure
Here is a summary of the assessment data derived from the specific-skill test based on
Ali’s age group (age 10.3). On the Woodcock Reading Mastery Test-Revised (WRMT-R)- Form
G, Ali received a standard score of 66 (Below Average) on the Readiness Cluster, 77 (Low
Average) in the Basic Skills Cluster, 68 (Below Average) in the Reading Comprehension
Cluster, and 69 (Below Average) in the Total Reading Cluster (a composite of the Basic Skills
and Reading comprehension clusters).
The Readiness Cluster is a composite of two tests: Visual-Auditory Learning and Letter
Identification. Ali scored 102 (Average; slightly above the mean) on the former and 56 (Below Average) on the latter. The Basic Skills Cluster consists of two tests: Word Identification and Word Attack. Ali scored 75 (Low Average) on the former and 56 (Below Average) on the latter. The Reading Comprehension Cluster consists of two tests: Word Comprehension and Passage Comprehension. Ali scored 79 (Low Average) on the former and 62 (Below Average) on the latter.
Analysis and interpretation of specific measure scores proved to be more problematic
than that of the global scores. First of all, the standard scores derived from WRMT-R are
significantly lower than those derived from the WJ-R. (again, standard score of 66 [Below
Average] on the Readiness Cluster; 77 [Low Average] in the Basic Skills Cluster, 68 [Below
Average] in the Reading Comprehension Cluster, and 69 [Below Average] in the Total Reading
Cluster). At first glance, it appears the scores are close to the WJ-3 Brief Achievement Test,
administered last year by his teacher (Total Reading= 72). (See IEP notes, SLEEVE C).
However, a closer look at each of the six individual tests troubled me, particularly Test 2: Letter Identification (score: 59) and Passage Comprehension (score: 62). On seeing how low they were, I decided to recompute the data a second time and then check the reliability coefficients in the WRMT-R Examiner's Manual. In the table organized by grade level (see Reliability of WRMT-R, SLEEVE G), I found three very unusual coefficients in the "Grade 5" row (the grade level corresponding to Ali's age). Coincidentally, all three match Ali's lowest scores (Letter Identification, Passage Comprehension, and the Readiness Cluster)! For example, the coefficient for Letter
Identification is only .34! Of course, it would be difficult to establish a causal link between Ali’s
scores and unreliable normative data (and, unfortunately, the table gives only grade-equivalent data instead of the age-level data I use in this study). However, I feel this is an important issue to consider.
Assuming the data is correct, the standard score on Test 1: Visual-Auditory Learning is striking. Ali scored 102, which is relatively high for him; it is virtually in line with the mean. What exactly does this test measure? This score is included in the Readiness Cluster score because it has to do with skills foundational to reading, "learning-to-read" skills, if you will: learning and remembering visual-auditory associations. What are we to make of this score? This test has an
element of unfamiliarity. In this test “the subject learns a vocabulary of unfamiliar visual
symbols (rebuses) representing familiar words, and then translates sequences of rebuses that
have been used to form sentences" (Woodcock, p.6). Since this test shows Ali has no problem making associations between visual stimuli and oral responses (Woodcock, p.4), it is puzzling that Ali received a relatively low score (59) on Test 2: Letter Identification and on Test 4: Word Identification, which deal with familiar rather than unfamiliar visual symbols, namely words.
One obvious explanation for the low Letter Identification score may be simply his unfamiliarity
with cursive characters presented in the latter part of the test. (I asked Ali’s teacher and she
agreed this is likely the case.) However, if Ali's score is relatively high on Test 1, why the 75 ("Low Average") score on Test 4: Word Identification (again, visual symbols called words)? Perhaps Ali's trouble is not recognizing words as symbols but processing the individual letters to form the words. This is why I decided to use an informal measure that pinpoints how students piece together letters to form words. (In this regard, it should be noted that Word Attack, which involves putting letters together to form words, was Ali's highest score on the WRMT-R [85], though it was nowhere near the Visual-Auditory Learning score on Test 1 [102].)
It is also worth noting that Ali's possible dyslexia appears again on Test 4: Word Attack ("bubs" instead of "duds" and "chab" instead of "chad"). Ali's second-lowest score (62), in Passage Comprehension, shows that he may "either make poor use of passage context clues, or is unable to decode key words in the passage accurately" (p.67).
Informal Measures
As previously mentioned, the informal measures consist of observation, interviews, and three informal reading tests (see SLEEVE J). The observation of the student involved simply watching
the student during formal and informal class time. Regarding reading, I observed Ali during a
five-minute "reading alert" time announced by the principal during science class. I wrote in my notes that the student reads "science book, stops reading and gazes into the distance…. slouches
back and looks around…he is definitely not reading in the book…he rubs eyes staring at the
book.” (See “Notes of Observations and Interviews,” SLEEVE K).
I interviewed three individuals regarding Ali—his special education teacher, speech
teacher, and psychologist. In two of those interviews, reading was mentioned (SLEEVE K). His special education teacher reported he has trouble with reading comprehension and organized sequential thought. She said he probably has dyslexia (though this has not been confirmed by a physician), which would, of course, impact his reading ability. He also has a hard time
maintaining attention. Ali’s speech teacher reported in her interview that she noticed a marked
improvement in reading comprehension in a year’s time.
The two informal tests dealt with reading (see SLEEVE J). The first test, a "probe sheet" (source: Mercer and Mercer [2005], Teaching Students with Learning Problems), focused on blending consonant-vowel-consonant sounds. The second test (source: Reading Rockets [2004]) dealt with the ability to rhyme words, identify initial sounds, blend words, segment phonemes, and manipulate phonemes. In the first test, I asked Ali to
string together letters to form "words." He got 19 out of 50 wrong, mainly because he used the long rather than the short versions of the vowel sounds ("kite" instead of "kit"). However, on the second test, he got virtually all the answers correct. My theory is that this is because the test was entirely verbal, which appears to be Ali's strong suit (based on the results of the Visual-Auditory Learning test of the specific measure). This also confirms the need to read math tests aloud to Ali as an accommodation.
The third informal test, the “Student Assessment Checklist” (source: Florida Center for
Instructional Technology) is a behavioral questionnaire that assesses behavior that may indirectly
impact reading performance and general learning. The results of the checklist essentially rule out
social problems. Ali indicated that he has no problem getting along with people and he is not
negatively impacted by peers, teachers, or family members. There is no sign of “anti-social”
behavior. In fact, the student's admission that he has trouble "saying no" to people's requests
indicates he tries to please people too much. (That said, he also expressed that he “fights back”
when harassed by peers.) His reaction to the school’s physical environment is generally positive.
The checklist also shows Ali’s high interest in sports, music, and art.
Overall comments regarding strengths are found in “Notes of Observations and
Interviews" (SLEEVE K). Ali showed signs of creativity on one assignment. The notes read, "Worked on an assignment for art, coloring in a shamrock for St. Patrick's Day. He did it much
differently than other students in the class—he designed it to look like a stained-glass window—
very pleasant to the eye. When asked if others made the comment that he is good at art, he said
yes.” Other comments were already noted: Ali is focused, polite, and easy to get along with. He
“works very hard.”
My analysis and interpretation of the informal measures is as follows: First, Ali shows visible signs that he loses focus in reading. Second, he may have shown signs of improvement (based on his speech teacher's testimony). Third, he has no trouble recognizing spoken phonemes and stringing them together to form words, but he does have trouble distinguishing short and long vowel sounds in words.
Regarding strengths, he has a determined attitude toward learning, he possesses considerable social skills, and he appears to be creative.
Planning and Implementation of Instruction

My recommendation for additional assessment and follow-up would be to administer another standardized reading assessment. As cited earlier, I have reservations regarding the reliability of the norm data on the Woodcock Reading Mastery Test-Revised (WRMT-R)- Form G, based on "fishy" coefficients in the reliability data for the WRMT-R (see SLEEVE G).
It should be noted that I used older versions of the global and specific tests; therefore, I would recommend using more up-to-date editions. The age of the tests would probably render them invalid, since many of the relevant variables have changed significantly by 2010, such as curriculum content and population composition. The current editions of the tests probably reflect today's lower white population share, 65.6% (estimated 2008). Perhaps the Arab-American population is now included.
Although I question the results of the WRMT-R, the relatively high score of Test 1:
Visual -Auditory Learning intrigued me. Perhaps it would be worthwhile to probe the particular
skills it measures with other assessment tools to identify a possible strength or learning style that
would help tailor instructional strategies.
I have suggested that Ali may be creative. Is there a test that measures creativity? Our class textbook (McLoughlin and Lewis) does not mention one.
Follow-up to this assessment (administered in, say, a couple of months) might zero in on the word-construction difficulties that the specific measure seems to have identified. Comprehension is also an issue. I would shop around for products that target just that.
Here are my recommendations for planning and implementing instruction based on the
assessment data. In keeping with the current thought in education, I would build on Ali’s
strengths. First, his positive qualities outside of academics are considerable (based on data from
the informal measures: observation and interviews). This young man is extremely good-natured,
polite, and hardworking, characteristics any prospective employer would find desirable. I am
very confident that Ali, if he maintains his good attitude, will be successful in life. His social
skills could be a valuable asset in a people-oriented job. Second, I would concentrate on Ali's relative strength in math. (The WJ-R determined he is only a year behind other students his age.) I believe that, with determination (a quality Ali seems to possess) and continued practice, Ali could excel in mathematical problem solving. Until his reading comprehension improves, assignments and tests with story problems should continue to be read aloud to Ali.
Reading instruction should remain a top priority (based on the results of the WJ-R, the WRMT-R, and the informal measures). Mercer and Mercer (2005) offer evidence-based teaching strategies that may suit Ali's particular needs in this area. First of all, I would not address Letter Identification (Test 2), on which Ali scored lowest on the WRMT-R. (Brief description: in the Letter Identification section of the test, the examiner does not provide the names or sounds of the letters, asking only, "What letter is this?") I decided to rule out the results because I am firmly convinced that the low score resulted from Ali's unfamiliarity with cursive letters.
This seems apparent when eyeing Test 2 of the WRMT-R (See SLEEVE B). When asked, his
teacher verified that this is probably the case.
However, I would consider learning strategies to address a concern reflected in Ali’s
second lowest score—Word Identification. In this section, the test-taker must provide the correct
pronunciation of test words, particularly unfamiliar ones. I would recommend an intensive
instructional strategy that focuses on phonological awareness ("identifying and manipulating parts of spoken language including words, syllables, onsets and rimes, and phonemes" [p.283]). Using cut-out letters, magnetic letters, or plastic letters (in one-on-one and small-group sessions), I would have the student practice manipulating various phonemes, blending them together to produce individual sounds and form words (Mercer and Mercer, p.283).
Mercer and Mercer (2005) cite the five recommendations of Simmons, Gunn, et al. (1994): First, focus on the auditory features of words. In other words, have the student regularly
sound out words in various lessons. Second, move from explicit, natural segments of language to
the more implicit and complex. For example, the student could perform exercises that start with
segmenting sentences into words, then words into syllables, and then syllables into phonemes.
Third, use phonological properties and dimensions of words to enhance performance. This could
mean starting the student with words containing fewer phonemes and working toward more
complex words. Fourth, scaffold blending and segmenting through modeling. In other words,
have the student observe the teacher continually modeling the blending and segmenting of
phonemes. Fifth, integrate letter-sound correspondence once learners are proficient with auditory
tasks. Here, blending and segmenting are reinforced through reading, writing, and spelling (p. 284).
The connection between written letters and their spoken sounds can be forged through
repetitive exercises (using worksheets, flash cards, and objects). A regular regimen of
memorizing regular and irregular pronunciations of phonemes would be helpful. Ali has
demonstrated in these assessments that he can memorize effectively. Word-attack assignments,
flash cards, and phonics games (such as “Hooked on Phonics”) can assist in this (p. 285).
Regarding phonics, Mercer and Mercer (2005) mention six approaches suggested by the
National Reading Panel (2000): (1) Analogy-based phonics: Showing students how to decode
unfamiliar words by analogy to words they already know (e.g., reading stump by analogy to
jump); (2) Analytic phonics: Using the sound segments of known words, students can decode
unknown words (using build to decode guild); (3) Embedded phonics: Teaching letter-sound
relationships while reading; (4) Phonics through spelling: Students segment words into
phonemes and write them out; (5) Onset-rime phonics instruction: “teaches students to identify
the sound of the letter(s) before the first vowel (i.e. onset) in a one-syllable word and the sound
of the vowel (i.e. rime) in the remaining part of the word” (p. 285); (6) Synthetic phonics:
Students convert letters into sounds.
“Comprehension is the active process that enables the learner to understand the words
being read” (Mercer and Mercer, 2005, p.290). Besides word comprehension and phonetic
difficulties identified on the tests, Ali struggles with passage comprehension. Mercer and Mercer
(2005) provide the evidence-based model proposed by the National Reading Panel (2000). It has
five components: (1) Comprehension monitoring: When students read the text with “vigilant
awareness” they focus on trying to understand what they are reading. To achieve this, the teacher
can regularly ask the student questions about what they have read, or have the student come up
with questions; (2) Cooperative reading: Students work in small groups to learn comprehension
strategies from other students; (3) Use graphic organizers: Students can “map out” concepts in
the text and how they connect with other concepts using fill-in charts; (4) Recognizing story
structure: Teachers can show how stories are organized into a plot and a sequence of events that
can be mapped out with graphic organizers. This helps students know what to look for in a text;
(5) Summarizing: Students learn to determine the important ideas or concepts in the text, as
well as its central theme (pp. 290-291).
With these five aspects in mind, Mercer and Mercer (2005) proceed to identify systematic
and explicit comprehension strategies. They maintain that good instruction requires ample time
for students to read (silently and aloud), teacher modeling, question generation, and the
summarizing of texts with graphic organizers.
In summary, we used two formal tests and several informal measures to identify the strengths
and weaknesses of a ten-year-old boy named Ali. My conclusion is that Ali possesses strong
social skills and a determination to learn. Academically, he is strongest in math. Since he
received a “low average” score in reading on the WJ-R, we zeroed in on particular weak areas in
reading with the WRMT-R. Based on the results, we recommend using the evidence-based
instruction models presented by Mercer and Mercer (2005), focusing on phonological awareness,
phonics, and text comprehension. Since Ali’s chief learning strategy appears to be verbal
communication, I suggest using it to “work around” his learning disabilities in tests and assignments.
REFERENCES
Florida Center for Instructional Technology. Student assessment checklist.
http://fcit.usf.edu/assessment/performance/assessb.html
McLoughlin, J. R., & Lewis, R. B. (2008). Assessing students with special needs. Columbus,
OH: Pearson.
Mercer, C. D., & Mercer, A. R. (2005). Teaching students with learning problems. Columbus,
OH: Pearson.
Reading Rockets. (2004). Informal reading assessments: Examples.
http://www.readingrockets.org/articles/3411
Woodcock, R. W., & Mather, N. (1989). Woodcock-Johnson tests of achievement (includes
manual). Allen, TX: DLM Teaching Resources.
Woodcock, R. W., & Mather, N. (1987). Woodcock reading mastery tests-revised (includes
manual). Circle Pines, MN: American Guidance Service.