
Article Critique

Leah G. Doughman

University of West Georgia

Spring 2011


Introduction

This paper is written in response to an assignment, but it also analyzes, evaluates, and critiques several articles readily available to the education arena in reading and technology. The paper digs deeper into the meaning of each article; it is not just a summary of them all. It answers the questions of “how?”, “why?”, “what?”, and “how well?” Some of the critiques are negative while many are positive. The paper provides readers with an understanding of how specific evaluations were implemented and of the methods used. It is also the framework for my evaluation project, which helped strengthen my personal understanding of it all.

Article One Summary

The authors of this article analyze and discuss a technology that is affordable, easy to access, and promotes learning. This technology, called WriteToLearn, is a web-based tool that integrates reading and writing to provide assessment and immediate feedback to students (Dooley, Lochbaum, & Landauer, 2009). The article is directed toward teachers who are interested in providing students with immediate feedback, allowing students to work at their own pace, and incorporating reading, writing, and technology. The summative study evaluated the effectiveness, accuracy, and reliability of the program in the classroom.

WriteToLearn includes two components: Summary Street (students read and summarize short passages) and Intelligent Essay Assessor (students write essays and receive computer-generated responses). The authors provide examples of the immediate feedback and evidence of students’ academic growth. They also include teachers’ comments about the program.


Many educators state that they like the immediate feedback the program provides and that it allows them more time to work with students on skills instead of spending that time on assessing (Dooley et al., 2009). In the end, the researchers state that the evaluations completed on the program show that students’ reading and writing data improved each time. Their only recommendation was the need for more studies in this field.

Article One Critique

In my opinion, the article was very well written. The authors present their arguments in an easy-to-read, easy-to-follow format. They explain the program and its components before jumping into their evaluation of the product. A thorough explanation of the program is needed for readers to understand its multiple components. Once an understanding of the program is established, arguments and assessments of the program can be presented; this is what the authors did, and it provides clarity for the reader. After reading the article, I am curious to evaluate the program and try it out myself.

I feel the evaluation of the program was carried out in a professional manner. Time is spent explaining the program, examples taken from the program are provided along with student performance samples, and numerous teacher responses to the program are listed. I feel that the terms have been defined and there is sufficient evidence to support the authors’ arguments and ideas. My only question about the evaluation came when the authors stated “comes from several sources” (Dooley et al., 2009, p. 49). I feel they should have listed those sources.


This would have lent more credibility to their evaluation. Overall, I feel the program is worth looking into and evaluating for school use.

Article Two Summary

This summative article explores and evaluates the effects of an interactive singing software program called Carry-a-Tune (CAT). It was originally designed to improve singing but has been adopted in the reading classroom because of its ability to provide repeated readings, which aid comprehension and fluency. The authors and evaluators felt that this interactive program helped adolescent readers improve their reading skills, gather and gain background knowledge, and increase their self-esteem.

As the authors point out, many adolescent readers no longer engage in reading, and their self-esteem regarding their ability to read is low. They are very reluctant to read, feel discouraged because of the difficulties they have experienced, and believe they will automatically fail (Biggs, Homan, Dedrick, Minick, & Rasinski, 2008). The program provides students with opportunities to feel individual success in a non-threatening environment. Through the use of this program, students’ performances are not compared to those of their peers, friends, or role models (Biggs et al., 2008). Another important aspect of this program is automatic feedback. Students do not have to wait on the teacher to grade their performance(s); the computer program evaluates their performance at the end of each session and provides them with an immediate evaluation.


Article Two Critique

The article was well written and presents its evaluation in an effective manner. The authors’ arguments were logical, well organized, clear, and easy to read. At the beginning of the article, I felt it was slow to present its points, but it picked up by the middle of the paper. It did not explain how the program works up front; instead, it provided some very solid background research in the field this technology covers. The National Reading Panel report was included along with research from well-known authors, such as Allington, Caulkins, Chall, Fry, Caldwell, LaBerge, and Samuels, who are renowned in the field of reading and writing. This background of research-based information set a solid foundation for presenting the evaluation of the computer-based program.

Once the stage was set, the authors went on to explain the “how” of their research. I really like the fact that the authors explained the program’s levels of singing expertise and the tools used. It is broken down into beginner, intermediate, and advanced levels, and the students/singers use a soundproof microphone headset with which they can listen, sing, and record (Biggs et al., 2008). The explanation of the program and the purpose for using it were clear, but the pool of participants was slim. The evaluation included only twenty-four participants. If I had been conducting the research, I would have included a larger number of participants to gain a more accurate, precise evaluation of the technological tool.

An issue that was not addressed, but that crossed my mind several times, was the number of songs available with the program. Did it only have about twenty to fifty, or did it include more? Students could very quickly go through that many songs and get bored. Can teachers add to the song list?


Can this be adopted into classrooms with younger students? I could see trying this program out, but not if it did not apply to my age group. I feel that many questions were left unanswered regarding the program itself, but the evaluation of the program was well organized and well conducted.

Article Three Summary

This formative article evaluates and discusses the “benefits” of using a computer-based program to increase reading strategy training with adolescent readers. The program, or training tool, is called iSTART, which stands for Interactive Strategy Trainer for Active Reading and Thinking. According to this article, there are a growing number of adolescent students who do not understand what they read (McNamara, O’Reilly, Best, & Ozuru, 2006). Students who have a better understanding of reading strategies tend to perform better in comprehension than students who do not. The overall goal of the interactive program is to increase the use of reading strategies among adolescent students through training. The goal of the evaluation was to investigate the effects iSTART had on adolescent students’ comprehension of non-fiction texts, such as science texts.

Article Three Critique

In my opinion, this article was very boring and too wordy. The points the authors were trying to make could have been stated in a more straightforward manner. I felt specific points were thrown into the paper without thought for clarity and understanding (i.e., working memory). Since the data and evidence were not straightforward, I felt as if the authors’ points and meanings were lost in all the wording. I had to look back and forth numerous times to try to gain more clarity on the topic.


Their arguments were not clear, and I thought the article could have been organized in a clearer manner. I also felt that this article would not hold the attention of its intended audience. If I, an educator, were confused by their message, then I am sure that others who are not trained in the education field would be quite lost.

Almost five pages into the article, the goals were finally stated. Why could the goals not be stated up front? It would have made their study and objectives much clearer for the reader. The study also had several focuses; why have so many? I felt they should have stuck with one or two focuses, which would have made their points stronger instead of diluted. The number of participants was rather low; the study consisted of 39 children (11 males, 28 females) (McNamara et al., 2006). After twelve pages of excessive wording, I learned that the students benefitted from the training. After reading this article, I will use it as an example of what not to do when evaluating and writing about a program.

Article Four Summary

This article summatively evaluates and examines the benefits of a CAI (computer-assisted instruction) program called Lexia Strategies for Older Students, or Lexia S.O.S. It is a program designed for older students who did not completely grasp reading concepts when they were introduced or who may have missed out on the benefits of phonics, fluency, and comprehension instruction. The program “contains five levels with twenty-four skill activities and 369 discrete units” (Macaruso & Rodman, 2009, p. 106). It provides teachers with the ability to monitor students’ progress, and it progresses at each student’s individual rate.


Macaruso and Rodman (2009) also state that the tool provides students with the opportunity to

master skills before moving on to the next.

The participants, the procedures and materials of the evaluation, and a description of each subtest were addressed thoroughly and at an adequate pace in the article. These key points help the reader understand the main points of the evaluation. Teachers, administrators, or curriculum specialists (the intended audience) can quickly see and understand the benefits of using the Lexia S.O.S. program with struggling students. Up to this point, this evaluation had the largest number of participants (47) when compared to the others in this paper. The larger sample size aided the evaluation’s validity and reliability, but more students are still needed.

Article Four Critique

I felt this article was well written, comprehensible, and easy for all to follow. The language was straightforward, and all components were clearly explained. Questions about the program did not arise while I was reading the evaluation. The program was thoroughly explained, even for readers who have never used it. The facts, based on the well-documented references, appear to be legitimate and accurate.

With my reading background, I feel that there is solid research presented in the areas of phonics, fluency, phonemic awareness, vocabulary, and comprehension, which provides the data with a solid foundation. The overall findings were positive and indicated that the CAI can provide struggling readers with numerous benefits. The only concern stated was that other CAI programs have not all produced positive results. Details about those results were not thoroughly discussed.


It is recommended that these types of programs be used as a supplement and carefully integrated into the curriculum to get the best results.

Article Five Summary

Article five evaluates two programs that were developed by the Frostig Center Research Department to improve reading and spelling among students who have learning disabilities. The two programs evaluated were a Speech Recognition-Based Program (SRBP) and a text-based Automaticity Program (AP). The purpose of this evaluation was to see how effective the programs were for students with learning disabilities and for the teachers who teach these students. The study was conducted as a summative evaluation with all involved as stakeholders, especially the developers of the computer-based programs.

The Speech Recognition-Based Program (SRBP) and the Automaticity Program (AP) were designed to improve word recognition, reading comprehension, phonological processing, and spelling (Higgins & Raskind, 2004). Training required an extensive amount of time for both the students and the teachers. Researchers monitored the usage of the programs; they were available daily for the first week and then weekly to troubleshoot any problems that may have occurred. In the end, the researchers noted a significant increase in word recognition and reading comprehension, but not in spelling (Higgins & Raskind, 2004).

Article Five Critique

I felt the researchers of this article did a great job of arguing their points. The facts were clearly laid out on the table, and the data were well organized, clear, and easy for any educator to follow.


I was extremely concerned, though, with the amount of time required to train educators and students and with the difficulty of that training. The research stated that “training to proficiency has proved difficult and time-consuming”, that “training for continuous speech programs requires children to read stories aloud into the microphone, which many children with LD are able to do effectively”, and that “the use of speech recognition in the classroom as an assistive technology has proved difficult to implement” (Higgins & Raskind, 2004, p. 366). This was alarming, but very beneficial to know.

At the beginning of the evaluation, all students were grouped together; students were not separated according to age or IQ. The age range was rather large; students’ ages ranged from eight to eighteen. Results would definitely vary across this diverse range of ages and ability levels. The researchers later separated the students accordingly, and the evaluations were much clearer. I also liked the fact that the program provided self-paced activities. Students were not forced to move on to the next topic if they were not ready, so mastery of a concept was achievable.

The other concerns reported by the authors were the small sample size and the lack of statistical verification. Since the evaluation consisted of only forty-four students (twenty-eight of whom were listed as LD), the authors state that this might hinder meaningful analysis of the programs and the evaluation of them. Without statistical verification, the researchers did state that the data should be interpreted with “considerable caution” (Higgins & Raskind, 2004, p. 383). I feel that if statistical data had been gathered, the study would have a more legitimate standing in the field of research.


Article Six Summary

This article builds upon another evaluation that investigated the use and effectiveness of computer-assisted instruction (CAI). It looks more thoroughly into the use of a program called Lexia Early Reading. The program is designed “to supplement classroom instruction in building a foundation for emerging literacy skills” (Macaruso & Walker, 2008, p. 270). The program aids students with sound identification, rhyming, segmenting and blending sounds within words, and applying letters, all with immediate feedback. Training is minimal, and the program, based on the results, has shown an increase in phonological skills among struggling readers.

Throughout the article, the authors explain and clearly lay out the benefits of using computer-assisted instruction in the classroom. The focus was on one particular program, and it was evaluated in a very summative manner. The stakeholders included the authors of the study and the teachers and students who participated in the evaluation. The struggling kindergarten readers were the ones who had the most to gain and/or lose based on the results of the evaluation. Qualitative and quantitative data were gathered, including surveys. When the results were reported in the end, they were positive for the use of the CAI Lexia Early Reading.

Article Six Critique

Overall, the article was well written and clear; I personally like Paul Macaruso’s research, presentation style, and topics. The article was very interesting and easy to read. It is a relevant topic that applies to numerous educators and students and to what is carried out in the classroom with regard to technology.


This evaluation supports computer-assisted instruction (CAI) as a supplement to, not the sole source of, classroom instruction. The evaluation found that all students who used the CAI benefited from the phonics-based reading instruction offered as part of the computer instruction (Macaruso & Walker, 2008).

The authors do an excellent job of targeting their audience, teachers, with promising facts and information about the program. It provides students with immediate feedback and progresses to the next activity once the student has mastered a particular phonics skill. As an educator, I like the fact that the program provides individual support and progresses with the student’s ability. There was also a large number of participants (94, to be exact) in this study, and each child’s sex was also taken into consideration. I felt that the larger number of participants helped to foster accuracy and validity in this evaluation. My only concern was the fact that the researchers used a different tool to evaluate the final results. I feel they should have used the evaluation tool that was provided with the CAI, but no explanation was given as to why they did this.

Article Seven Summary

This summative article evaluates numerous studies and tools that are used as computer-assisted technology. The authors look specifically at three important aspects that are included in all the reviewed articles. The first major aspect evaluated was the impact computerized print versus printed reading material had on comprehension. The second aspect addressed was computers, comprehension, and reading difficulties. The third aspect considers the research on computerized tools, engagement, and the meaning that is gained (Stetter & Hughes, 2010).


The evaluation results showed many positive effects of computer-assisted technology on learners, yet the results were mixed in regard to improved comprehension.

The main point the authors were trying to make was that there are technologies available to aid reading comprehension among all students. They do argue that there is still not enough research in this particular field and that there needs to be more. More research will help educators and others determine whether computers and technology are truly effective in the daily classroom. By drawing on the several different tools described in the reviewed articles, the authors are able to support their main points and arguments with solid, sound facts. On the other hand, Stetter and Hughes (2010) might have some biases, because they feel strongly about the positive effects technology might have on reading comprehension. This is the internal push behind their research and their quest for other supportive data.

Article Seven Critique

The article is well written, and the authors are very lucid about their intended goals. I found myself agreeing with the authors only a few sentences into their evaluation. Stetter and Hughes (2010) quickly state that teachers are faced with new and unique classrooms that are different from those of the past; teachers need to know how to best meet the needs of their unique students. I also feel that students, especially those with learning disabilities, need explicit instruction, and the authors’ evaluations of the literature and the technologies described in it are very useful to the everyday educator. I also feel that we must teach students reading comprehension strategies and monitor for understanding; Stetter and Hughes (2010) try to prove just this.


In reading their evaluations, I was shocked to learn that twenty-three percent of teachers and students reported that they used computers only once a week (Stetter & Hughes, 2010). Most of that was internet use, so, based on the evaluations, no programs were being used regularly. Programs to aid reading comprehension, strategy use, and diverse learners are becoming more and more popular. I feel that it is important to evaluate the programs that school systems are looking to adopt or implement for effectiveness in regard to cost, resources, time, and training. The authors of this article do just that; they evaluate the research and tools that are used as computer-assisted technology and present it in a simple, “up-front” manner.

Conclusion

At the end of the whole critiquing process, I have gathered numerous articles that will aid my own personal research evaluation project. The process provided me with insight into how evaluations are carried out and the processes behind them. It also allowed me to see the “ins and outs” as well as the big picture of what I am doing. I feel as if I now have the framework for my research; I just have to fill in the pieces. I believe that my articles aid my own evaluation since they are all related to reading comprehension and technology. Others who are interested in a wide variety of reading comprehension technologies would find these articles worth their time and applicable to their own research evaluations. The articles provide some valuable resources which can aid or apply to other technologies as well.

In regard to relevancy and my personal practices, these articles have “opened my eyes” to the slew of technologies that are readily available to the education field. Not all are great, but researching provides educators with some insight so they can apply their knowledge and not just blindly lead.


I feel that my collection of articles provides me with some knowledge of, and background in, a few of the available technologies. This knowledge and background will help guide me on my own personal research evaluation journey. I am now better equipped to tackle some of the “unknowns”.


References

Biggs, M., Homan, S., Dedrick, R., Minick, V., & Rasinski, T. (2008). Using an interactive singing software program: A comparative study of struggling middle school readers. Reading Psychology, 29, 195-213. Retrieved from EBSCOhost.

Dooley, S., Lochbaum, K., & Landauer, T. (2009). A new formative assessment technology for reading and writing. Theory Into Practice, 48, 44-52. Retrieved from EBSCOhost.

Higgins, E., & Raskind, M. (2004). Speech recognition-based and automaticity programs to help students with severe reading and spelling problems. Annals of Dyslexia, 54(2), 365-387. Retrieved from EBSCOhost.

Macaruso, P., & Rodman, A. (2009). Benefits of computer-assisted instruction for struggling readers in middle school. European Journal of Special Needs Education, 24(1), 103-113. Retrieved from EBSCOhost.

Macaruso, P., & Walker, A. (2008). The efficacy of computer-assisted instruction for advancing literacy skills in kindergarten children. Reading Psychology, 29, 266-287. Retrieved from EBSCOhost.

McNamara, D., O’Reilly, T., Best, R., & Ozuru, Y. (2006). Improving adolescent students’ reading comprehension with iSTART. Educational Computing Research, 34(2), 147-171. Retrieved from EBSCOhost.


Stetter, M., & Hughes, M. (2010). Computer-assisted instruction to enhance the reading comprehension of struggling readers: A review of the literature. Journal of Special Education Technology, 25(4), 1-16. Retrieved from EBSCOhost.