
Literature review

ENGL 7702 • Lecture

Objectives

– Define the purpose of a literature review
– Explain how to select and evaluate sources
– Explain how to write a literature review for both qualitative and quantitative research studies

Literature review

What is the goal of a literature review?

– Existing gaps in our knowledge

– A level playing field for the reader

– Your credibility

– The validity of your topic

A literature review is not a research paper (like the ones you have written too many times in the past)

Final result

Clear picture of your chosen topic that shows the current state of knowledge.

– What do we know?

– What do we not know?

– What areas are very fuzzy/contradictory?

A frequent definition is that a technical writer takes information that is only understood by a brilliant few and makes it available to the masses, but this is not correct. Your job as a technical writer is to understand the audience expectations and the rhetorical situation and to develop appropriate content. When doing research, the rhetorical situation does not include writing for the masses; you are essentially writing a journal article, and the expectation is that it should read like one.

Research as building a brick wall

Your research works to fill in missing spots. The overall theory discussion defines where along the wall you are working; the literature review considers the surrounding bricks and shows which ones are missing; the original research then shows how it provides a new brick.

Random sources lit review

An unorganized lit review, or one where the author just took the first 10 sources she found, gives a random scattering of bricks. It doesn’t establish the hole or the shape of the literature around that hole.

Literature review

Qualitative and quantitative research integrate the literature into the text differently. With qualitative research, the initial lit review is shorter and the literature is intermixed heavily with the discussion. It can show both how the literature supports/does not support the findings and how the study resolves issues raised in other studies.

In both cases, the goal is to show that the method made sense, that the findings fit within the existing literature, and that the research as a whole makes sense.

Establish that a topic can and should be researched

Can be researched
– Other people have looked at the topic

– Other research methods might establish that it requires more time/money than you have available

Should be researched
– Fits within TC research

– Holes exist that are worth exploring. Previous research found something worthwhile that merits further exploration

Your topic has not been researched
– A research project needs to generate new information: a new twist or a new slant. You can’t repeat already published research.

Contradictions

Oftentimes, you'll find previous research is contradictory; different studies reach what appear to be opposing conclusions. Yet both of these studies are within the area you are researching. Even worse, one of them may not support the hypothesis you're planning for your study.

Good. The lit review needs to help clarify why/how these studies seem to contradict each other and lead up to how your study will help resolve that contradiction.

Secondary vs primary

When you are selecting sources for a literature review, you will have to make a decision between using secondary and primary sources.

Try to use primary sources. But secondary sources are great for finding those primary sources.

Too many secondary sources turn your lit review into a repeat of those sources. You will frequently see sentences such as: “See Martin & Palmer (2007) for a substantial review of …” You are covering 5 areas and Martin & Palmer did an in-depth lit review on one of those areas.

Popular how-to books

I once read a thesis which made extensive use of popular web design books (the kind you find at Barnes & Noble).

The author then found it almost impossible to compare/contrast sources since they all said the same thing. They were all variations on “here’s how to design this part of your web site.”

Literature review as stand-alone

As secondary research, a literature review can provide a valuable contribution to the discipline's literature.

This is what you learned to write in 7701. It works to be an exhaustive search of the topic and to draw broad conclusions. The lit review for a research project is narrow; you only care about the bricks around your hole.

Writing as cyclic process

You start with a research question. Then you consider what you think are the implications (part 3 of a finished paper). Finally, you write a lit review which leads to your chosen implications.

You have to have an idea of where you are going before starting the lit review. There is too much information available and you need a way to filter it.

This is how research often works. You have an idea, collect data, and during the analysis find something more interesting. So you go back and rework the study design as if that was your intention from the beginning.

Writing techniques

Literature review

A typical undergraduate research paper takes one source (perhaps a book on the topic); that first source says “there are five ways to …” The paper has five sections that explain what that source says. You are explaining (data dumping) everything you know about those five ways. The other sources are added to support that first book.

In a literature review, you’ll only say this source says there are 5 ways. You don’t explain them in detail. You then compare those against other sources. There is an assumption that you and the reader are both familiar with the subject and that you are using the literature to position your discussion, not explain the topic.

Finding sources

How do you search for journal articles? How do you daisy-chain articles? What is the problem with search engines?

Get some TOC subscriptions (long term issue)

Direct quotes

What drives the need to insert
– short quotes

– long quotes

Quote when the person says something better than you can say it yourself.

Avoid a “dump quote,” where you quote without introducing it.

Don’t overuse quotes. For the most part, the reader is concerned with the other person’s results, not how he said it. This is different from a literature review in many humanities subjects.

End

Literature Review Examples

Example - poor

Overall, Philbin, Ryan, and Friedel found that the practitioners surveyed--both randomly chosen STC members and graduates of Bowling Green State University’s program in Scientific and Technical Communication--experienced a level of job satisfaction that was in the 35th to 38th percentile of national norms, using the Job Descriptive Index (JDI). (1995) This percentile indicates “…a much greater level of dissatisfaction with the work than is typically found in other occupations.” (1995) Taking into consideration job aspects such as pay, possibility of promotion, supervision, co-workers, and gender differences, the survey concluded that technical writers are “disaffected” and suggested some implications for current educators. Philbin, Ryan, and Friedel focused on training—technical, entrepreneurial, and “reality.”

Example - good

Two other approaches proposed by Corbett and addressed by Miles, case studies and the praxis model, take the information approach to the next step--audience consideration. First, case studies treat knowledge as contextual and negotiable (Corbett 114). Case studies allow for audience analysis and evaluation of document design issues. A great deal of the internationalization research involves case studies. For example, in addition to Schriver, Waka Fukuoka in his article "Illustrations in User Manuals: Preference and Effectiveness with Japanese and American Readers" examines cultural design issues in his study of Japanese and American manual users. Fukuoka's study revealed ……

Example - poor

In the article “Usability Basics for Software Developers, “ a sample of usability benchmarks is examined to access quantitative usability goals. These benchmarks are determined before any design begins. The Merriam Webster Dictionary defines benchmarks as a point of reference for measurement. Jeffery Rubin recommends generating a chronicled record of usability benchmarks for future reference. Hereby, ensuring maintenance or progress in future products (“Handbook” 26). The benchmarks should be an average or maximal time interval for the task to be accomplished. For example, if you were analyzing the usability time of a predetermined E-mail system, you would need to recognize how long it takes users to accurately put in his or her name and address in the E-mail system. If it takes 15 minutes to conclude this task, the design is flawed by most standards. You will need to evaluate the average and maximum time it took users to enter the information correctly (“Handbook” 98).

Example - good

Hirst (1996) is not only an advocate of faculty internships, but also reports on what he learned in his experiences as an intern. In accord with Rehling, he claims that “…a faculty internship is much more of a two-way street [because]…you are expected to make some contributions, but your ‘employers’ know that you are with them on a mission to improve yourself as an educator.” (1996) He not only enumerates the benefits of this internship to himself, but also to the various organizations for whom he worked.

Example - good

Many of these guidelines cite Jakob Neilson’s (1994/1997) observation that only 10% of web users will scroll down a page. In 1997, Neilson declared, “scrolling now allowed.” However, he still lists “scrolling navigation pages” as one of the “Top Ten Mistakes in Web Design” (Neilson, 1999). Additionally, work by Morkes and Nielson (1997), which found higher usability for concise and scannable text, has been used to support the notion that scrolling should be avoided on web pages. Although Neilson is widely regarded as an expert on web usability, much of his work has not been subjected to the scrutiny of peer review.

Example - poor

There is a persistent drive for editing to be done online rather than on print, because the internet has the capacity for multiple users, continual updating of editing methods, and directed goals to create tailored outcomes (Ojala, 2005). As production costs continue to increase, future expenses in health care rise continue to rise, and the availability of huge amounts of online data information is widespread, it is foreseeable that print editing will be obsolete in the future (Ojala, 2005). The challenge now lies in setting out strategies and frameworks for information discovery and content development for getting the most out of editing online (Ojala, 2005).

Specific points to address

Position your study

As was the case for the “time-consuming and detrimental to efficiency” theme, this feedback primarily originated from the users of light or moderate user groups. We conclude, this is due to the same fact that “heavy” users had adapted their behavior accordingly. We will discuss this modified behavior in the later section.

Finally, residents complained that the reminder system lacks guidance in the application of workflow. In contrast to the history and physical examination forms that residents typically use, the interface of CRS appeared to provide little guidance as to a preferred order of data entry.

Position your study

Although these three learning models differ in some respects, they all incorporate at least two common characteristics that may aid efforts to develop better decision support for DDM. First, all three models take into account the need for two forms of learning: explicit (i.e., decision making based on rules of action) and implicit (i.e., decision making based on context-based knowledge and recognition). There is some evidence that individuals who have completed a dynamic task are not always aware of the task structure (i.e., their knowledge is implicit), which suggests that the knowledge they acquired was not in the form of rules about how the system works (Dienes & Fahey, 1995). Often, individuals performing DDM tasks are unable to describe the key elements of the task or verbalize the ways in which they make decisions (Berry & Broadbent, 1987, 1988). Such a lack of awareness both of the key variables involved in performing a task and of their relationships may denote an individual’s dependence on implicit learning (Berry & Broadbent, 1987).

Show general relevance

This is also supported by the broad attention that has been paid to time availability constraints and their consequences (mostly psychological stress experienced because of perceived lack of time or time pressure) in many business disciplines. For example, research in accounting has actively studied time pressure in auditing (Kermis and Mahapatra, 1985; McDaniel, 1990; DeZoort and Lord, 1997; Spilker and Prawitt, 1997; Braun, 2000), marketing research has investigated the effects of time pressure on consumer decisions (Nowlis, 1995; Dhar and Nowlis, 1999; Pieters and Warlop, 1999), and under the broad umbrella of management research, time pressure has been studied in the context of, for example, ethical decision-making (Moberg, 2000), group communication (Kelly et al., 1997; Brown and Miller, 2000), and negotiation (Stuhlmacher et al., 1998).

Show general relevance

Research in human-computer interactions has identified many reasons for the low usage of on-line help. First, users may not be able to formulate queries effectively, i.e. to give precise and differentiating descriptions of things they lack knowledge about (Blair & Maron, 1985; Nickerson, 1999). Users are left alone to navigate through the query results (Hertzum & Frokjaer, 1996; Horvitz, 1999), and often need to re-formulate the queries on a trial-and-error basis. This process can be fruitless and frustrating. Second, much empirical evidence has shown that most users put a lot more emphasis on getting their work done than seeking help to optimize their work (Fisher, Lemke & Schwab, 1985; Carroll & Rosson, 1987; Desmarais, Larochelle & Giroux, 1987; Furman & Spyridakis, 1992). One potential solution to these problems is to make on-line help proactive. However, several major challenges have hindered the effectiveness of this approach, e.g. correctly inferring a user's task and delivering relevant advice at the right time (Furman & Spyridakis, 1992; Beaumont, 1994; Wolfe & Eichmann, 1997; Agah & Tanie, 2000). Moreover, users like predictability and to be in control, but they do not like surprises (Shneiderman, 1998; Hook, 2000), which are associated with system-initiated help.

Show general relevance

Clinical cueing systems (CCS) are a class of clinical decision support systems (CDSS) that send just-in time alerts to clinicians when potential errors or deficiencies in the patient management are detected. Significant research evidence shows that CDSS can enhance the clinical performance in drug dosing, preventive care, and other aspects of medical care [1—9]. However, most evaluations of CDSS emphasize clinical performance and diagnostic accuracy; few studies address user acceptance and adoption of such tools in the ambulatory care practice setting that reflects specific characteristics of the users and/or the environment [8,9]. It remains unclear whether a CDSS shown to be effective in laboratory settings will be effective over time in routine clinical settings with real patients.

Build a path to your research

In addition, earlier research suggests that a transition from paper to computer-based documentation might have other unintended impacts. Nygren and Henriksson [11] showed that the format, layout, and other textural features of the paper record are critical to a physician’s ability to search, read, and assess the relevance of information contained therein. Features such as the ability to manually tabulate pertinent data and mark up abnormal findings may be important to the cognitive processing of clinical information and could be lost with CPD. Indeed, more recently conducted research by Patel et al. [12] found that EHR use was associated with changes in physicians’ cognitive behaviors such as information gathering, organization, and reasoning strategies.

Defining ideas or concepts

Patel et al. [20,21] point out that when processing natural language text, a distinction should be made between the ‘text base’ and ‘situation model’. According to this research, the ‘text base’ represents the core meaning of the text, which is independent of the choice of language with which it is expressed. The reader generates meaning from the text by transforming the written information into some semantic form or conceptual message. The mental model developed by an individual who is reading a text is not limited to the information contained in the text itself but is extended to incorporate the reader’s prior knowledge. In this sense, the reader constructs a ‘situation model’ of the scenario described in the text. As a result, a conceptual representation emerges from the interaction between the text base and the situation model [20].

Defining ideas or concepts

Lerch and Harter (2001) used a real-time DDM task to investigate the effects of outcome feedback and feedforward on performance. In their study, outcome feedback included explicit real-time (i.e., instantaneous) details about task performance. Feedforward involved a what-if computational analysis tool that allowed participants to ‘look into the future’ by observing the effects of possible actions. The results of that study indicate that the effectiveness of the support strategies depended on the presence of outcome feedback. Feedforward alone impeded performance and inhibited learning, but feedforward provided in combination with outcome feedback led to slightly improved performance.

Showing the hole exists

No published research has, however, investigated the effects of time availability limitations on data retrieval tasks, which is an omission worth addressing taking into account the important role these tasks play in organizational life. The main independent variable of interest in this research, time availability limitations, was chosen based on the identification of this gap. The effects of time availability limitations do not, however, exist in a vacuum but have to be evaluated together with other factors that affect human performance in these tasks. Prior research on query writing (Suh and Jenkins, 1992; Chan et al., 1993, 1994, 1998) provides a framework, which is presented in Fig. 1.

Showing a hole exists

The majority of approaches (e.g. HTA and its hybrid forms) have been used traditionally to describe the physical aspect of a task, and the steps that are required to carry it out. They have made significant contributions towards improving productivity in cases where the major elements of the task are observable, but it has been suggested that they are less effective in the analysis of cognitive activities (e.g. Klein, Kaempf, Wolf, Thorsden & Miller, 1997). As a result of this debate and an increased emphasis on cognitive aspects of work, cognitive task analysis or CTA techniques emerged (a thorough description of the evolution of task analysis is provided in Annett, 2000). Cognitive task analysis (CTA) concerns itself with the knowledge that people have, or need to have, in order to complete a task. Its approach is to describe and represent the cognitive elements that underlie decision-making, goal generation, judgments, etc. (Militello & Hutton, 1998).

Support your method choice

Many UCD modeling approaches exist but none explicitly addresses the distinct demands of complex problem solving. Commonly, UCD models vary by degree of granularity. Usage-centered design, for example, produces fine-grained models of users’ unit-level tasks and interrelationships while scenarios and design rationales or elaborated storyboards capture activities at a coarser grain [2, 3]. Contextual design represents consolidated findings from task analysis in five different finely-detailed models -- workflow, task sequence, artifacts, physical layout and workplace culture [4]. At a higher level, application (or sociotechnical) design patterns capture in separate sections the context, problem, influential forces, solutions, a visualized example of work and ecological arrangements, resulting contexts, rationale, related patterns and uses [5]. Participatory design may produce finely grained use cases and abstract diagrams of task and screen objects or broader personas and scenarios [6,7]. Each type of model involves trade-offs.

Support your method choice

I also tested another exemplar decision support in the form of feedforward: I afforded individuals the opportunity to compare their decisions with those made by an expert performer. Research suggests that individuals may be able to improve their performance by comparing their decisions and effects of their decisions with the decisions made by an expert and the effects of those decisions (Sengupta & Abdel-Hamid, 1993). Because it allows individuals to analyze expert’s decisions without having to execute decisions at the same time, this feedforward support removes time constraints. I hypothesized that individuals permitted to review an expert’s decisions without time constraints would exhibit improvements in overall task performance.

Show your method is acceptable

CTA is considered to be appropriate for tasks that are “cognitively complex (requiring an extensive knowledge base, complex inferences and judgment) and which take place in a complex, dynamic, uncertain, real-time environment” (O'Hare, Wiggins, Williams & Wong, 1998). This description would seem to make CTA a highly appropriate choice for inclusion in a design method for decision support systems. It would also seem to be particularly true for the context in which the process described in this paper, crop production, is carried out. Decision-making in crop production can certainly be described as cognitively complex, requiring the manipulation of many variables, as this quotation from Bartlett illustrates:

End