Assessing and validating a learning styles instrument
www.elsevier.com/locate/system
System 33 (2005) 1–16
Andrea DeCapua a,*, Ann C. Wintergerst b,1
a Department of Teaching and Learning, Multilingual and Multicultural Studies, New York University, 239 Greene Street, 6th Floor, New York, NY 10003, USA
b Department of Languages and Literatures, TESOL, St. John's University, 8000 Utopia Parkway, Queens, NY 11439, USA
Received 17 March 2004; received in revised form 27 August 2004; accepted 15 October 2004
Abstract
How can learning styles best be measured? Reid's (1984) Perceptual Learning Styles Preference Questionnaire has been widely used in ESL/EFL research to investigate learning styles. Previous research revealed concerns with the reliability and validity of the PLSPQ, leading the researchers to devise a new Learning Styles Indicator (LSI), based on the PLSPQ, and tested on different populations. From the results of this testing arose additional concerns regarding the construction of the PLSPQ/LSI statements themselves. How then can learning styles instruments best be validated? Interviews of graduate students in a Master's in TESOL degree program revealed that quantitative means alone are insufficient to ascertain the effectiveness and usefulness of a learning styles instrument, particularly in the case of non-native speakers. A triangular approach utilizing a questionnaire, semi-structured oral interviews, and participant observations presents a fuller picture of instrument validation.
© 2004 Elsevier Ltd. All rights reserved.
Keywords: Learning styles; Learning styles instruments; ESL/EFL
0346-251X/$ - see front matter © 2004 Elsevier Ltd. All rights reserved.
doi:10.1016/j.system.2004.10.003
* Corresponding author. Steinhardt School of Education, Department of Teaching and Learning, New York University, 239 Greene Street, 635 East Building, New York, NY 10003. Tel.: +1 212 998 5498 (office)/201 263 9552 (home).
E-mail addresses: [email protected] (A. DeCapua), [email protected] (A.C. Wintergerst).
1 Tel./fax: +1 718 456 3532 (home), Tel.: +1 718 990 5208 (office).
1. Introduction
Learning styles has become a 'buzz' word in the field of ESL/EFL, but what exactly are these learning styles? Learning styles may be defined as the inherent preferences of individuals for how they engage in the learning process (Ehrman and Oxford, 1990; Oxford, 2000). Whether as a result of heredity, educational background, situational requirements, age, or other factors, learners understand and process information differently. While one individual prefers a particular learning style over another, such a preference reflects a personal inclination for how to learn in a particular situation. As personalities change, so too may learning style preferences, particularly after exposure to different teaching/learning situations.
How can learning styles best be measured? In order for instruments to measure the discrete characteristics that they are devised for, it becomes necessary to create learning style scales that are conceptually consistent as well as statistically reliable and orthogonal. Learning styles instruments are questionable in terms of construct validity (DeBello, 1990; Itzen, 1995; Rubles and Stout, 1990). Difficulties exist in validating instruments used in the language learning context and in the usefulness of exploratory factor analysis in assessing the underlying factor structure of the items in these surveys (e.g. Curry, 1987; Purpura, 1998). Such difficulties are compounded when instruments attempt to assess learning style preferences in a language that is not the native language of the students. Nevertheless, constructs do explain certain differences among individuals and how they learn. Determining construct validity involves gathering numerous test results from the instrument on similar populations (Gay and Airasian, 2000).
How then can a learning styles instrument best be validated? The current study focuses on the Wintergerst and DeCapua (1999) learning styles instrument (LSI), how it is a 'revised' version of Reid's (1984) PLSPQ instrument, and why there is a need for validating it via interviews.
2. Review of relevant literature
2.1. Learning styles instruments
Various learning styles instruments for native speakers of English have been developed, including the Learning Style Inventory (Dunn et al., 1975, 1989), the Grasha–Riechmann Student Learning Styles Scales (Riechmann and Grasha, 1974), the Gregorc Learning Style Delineator (Gregorc, 1982), and Kolb's (1976, 1985) Learning Styles Inventory.
For non-native speakers of English, Reid's (1984) Perceptual Learning Style Preference Questionnaire (PLSPQ), O'Brien's (1990) Learning Channel Preference Checklist, and Oxford's (1993) Style Analysis Survey are the better-known learning styles instruments in the ESL/EFL field. The earliest and most widely used of these instruments is Reid's PLSPQ, which is based on the concept of six learning style preferences: visual, auditory, kinesthetic, tactile, group learning and individual learning. There are 30 statements, which participants rate on a five-point Likert scale.
Only Reid's PLSPQ has been normed on non-native speakers of English, with reliability and validity established on high intermediate or advanced ESL classes (Reid, 1987). For this reason, the researchers selected the PLSPQ as their learning styles instrument and also opted to use it in the original language, English, and not in translation or blind back-translation. Eliason (1995) notes that Inclan (1986) and Melton (1990) found no significant differences in how students responded to a questionnaire based on the language of the questionnaire, whether Spanish and English or Chinese and English, respectively.
2.2. Four previous studies
The first study used Reid's PLSPQ as the instrument to determine ESL students' learning styles (Wintergerst and DeCapua, 2001) through an analysis and comparison of participants' responses to three elicitation instruments: the PLSPQ, a background questionnaire, and data from oral interviews. Discrepancies arose in the findings among the three elicitation instruments, thereby raising questions regarding the reliability and validity of the PLSPQ.
In a second study, Wintergerst et al. (2001) examined the difficulties of conceptualizing learning style modalities (Itzen, 1995) and of developing an assessment instrument for ESL students that actually measures what it purports to measure. They examined the validity of the hypothesized factor structure of the PLSPQ through exploratory factor analysis and explored the dimensionality of the PLSPQ, showing that specific survey items did not group into factors conceptually compatible with Reid's learning style model.
The results of both studies pointed to a number of problems associated with the PLSPQ, including statement design problems, a lack of reliability and validity, and alternate statements producing a range of responses. Additionally, six items did not have loadings of 0.35 or greater on any of the conceptualized factors and were therefore deleted from the PLSPQ. A 24-item scale was created and labeled the Learning Styles Indicator (LSI). The original five-point Likert scale was converted to a four-point scale, and different word choices were selected to deter students from choosing the middle or non-committal response.
The outcome of the second study was the formation of the LSI (Wintergerst and DeCapua, 1999). Using the Cronbach Alpha reliability estimate, the internal consistency of these scales was analyzed, and the results of both varimax and oblimin rotations were assessed. An alternative learning style model was then investigated, leading to three new learning styles factor scales, namely group activity orientation (GAO), individual activity orientation (IAO), and project orientation (PO), to provide a conceptually acceptable learning style framework, the LSI. PO refers to a student's preference for learning best when involved in ‘‘hands-on’’ activities or when working with materials in a learning situation. The student may be working individually or with others, showing that project work is not mutually exclusive of individual work or group work. GAO refers to a student's preference for learning best when interacting or working with one or more students in a learning situation. IAO refers to a student's preference for learning best when working alone in a learning situation.
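The Cronbach Alpha estimate used here compares the summed variances of the individual items with the variance of respondents' total scores. As a minimal pure-Python sketch of that computation (the function names and the toy response data below are ours for illustration, not values from the study):

```python
def variance(values):
    """Sample variance with an n - 1 denominator."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondents, each a list of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(responses[0])
    item_variances = sum(variance(list(col)) for col in zip(*responses))
    total_scores = [sum(r) for r in responses]
    return (k / (k - 1)) * (1 - item_variances / variance(total_scores))

# Toy data: three respondents answering two perfectly consistent items,
# so alpha comes out at its maximum of 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [4, 4]]))  # → 1.0
```

In practice a scale is usually judged internally consistent when alpha is high (conventionally 0.70 or above); items whose removal raises alpha are candidates for deletion.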
In the third study, a pilot study, Wintergerst et al. (2002) tested the reliability and validity of the LSI across ESL students, freshman English composition students, and foreign language students. The result of factor analysis of the 24 items in the LSI showed that the students learned English or a foreign language in three conceptualized situations: PO, GAO, and IAO. Question #21, ‘‘I prefer working on projects by myself’’, did not meet the loading of 0.35 on any of the three factor scales and was thus deleted on the new version of the LSI, which now contained 23 final items (see Appendices A and B).
The fourth study was a replication study of the factor structure – and not an investigation of other questions related to the instrument – using the LSI, developed and previously tested for reliability and validity by Wintergerst et al. (2001) and Wintergerst et al. (2002). Replication is essential for obtaining consistent information within and across populations and for improving the accuracy of the instrument used.
The initial step was to factor analyze the questionnaire from the students. Principal component (PC) analysis was the method used to extract maximum variance from the data and to verify the factor structure. Variables with factor loadings of 0.30 and above provide for meaningful correlation and are interpretable (Tabachnick and Fidell, 2001), so the slightly more conservative cutoff point of 0.35 was used. PC analysis extracted the initially designed three PCs: PO, GAO, and IAO, although the distribution of items under each differed.
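The 0.35 retention rule applied in these analyses amounts to keeping any item whose largest absolute loading on some factor clears the cutoff. A short sketch of that filter (the item labels and loading values are invented for illustration and are not taken from the study's factor solution):

```python
def retained_items(loadings, cutoff=0.35):
    """Keep items whose highest absolute loading on any factor meets or
    exceeds the cutoff; items below it on every factor are dropped."""
    return [item for item, factor_loadings in sorted(loadings.items())
            if max(abs(l) for l in factor_loadings) >= cutoff]

# Hypothetical loadings on three factors (e.g. PO, GAO, IAO):
example = {
    "Q1": [0.62, 0.10, 0.05],   # loads clearly on factor 1 -> kept
    "Q2": [0.20, 0.28, 0.15],   # no loading reaches 0.35 -> dropped
    "Q3": [0.05, 0.12, -0.41],  # a strong negative loading still counts -> kept
}
print(retained_items(example))  # → ['Q1', 'Q3']
```

Taking the absolute value matters because a strongly negative loading is just as interpretable as a positive one; only items with weak loadings everywhere are removed.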
This study replicated our previous pilot study and provided further support for
the validity and reliability of the LSI. With the exception of the three problematic statements (#5, #8, and #14), the remaining 20 statements fell within the three general categories. Statement design problems became increasingly evident as the
researchers worked with the instrument.
3. Current study
In working with graduate M.A. TESOL students, the researchers questioned whether continuing to test the LSI, essentially a revised PLSPQ, was indeed the right approach. Was continuing with the LSI the best way to improve research on learning styles in the ESL/EFL classroom? Quantitative research, it was found, did not reveal the full picture with respect to the comprehensibility of the LSI statements. With this in mind, one of the researchers interviewed volunteers in a master's in TESOL degree program to gain better insights into the students' interpretation and understanding of the 23 LSI statements for the purpose of instrument validation.
4. Methodology
Data were collected from two groups of informants. The first group consisted of 24 master's level TESOL students in a graduate department at a major metropolitan university. These students were enrolled in a TESL methodology class in the 2003 spring semester taught by one of the researchers. Six of the graduate students were native speakers of American English and eighteen were non-native speakers, seventeen from Asia and one from Brazil. The non-native speakers were full-time international students who were in the US for the sole purpose of getting their M.A. in TESOL and then returning to their home countries to teach English.
The second group from whom data were collected consisted of 10 non-native speakers of English, currently graduate students in the TESL department. Eight of the students were enrolled in the TESL methodology class. In order to gather a larger pool of informants, a request for volunteers who were non-native speakers of English was posted on the TESOL graduate department listserv. Two additional graduate students who knew the researcher from a previous course volunteered to participate. These two informants were also full-time graduate students and planned on returning to their native countries to teach upon completion of their master's degree. Nine of the informants were Asian (six Korean, two Taiwanese, one Hong Kong) and the one non-Asian informant was Brazilian. All were female.
The reasons these two groups of informants were chosen were twofold. First, the informants could be considered ‘‘informed participants’’. These graduate TESOL students were current or prospective teachers of ESL/EFL, had been in the master's program at least one full semester prior to the data collection, and were familiar with at least the basic principles and practices of teaching and learning a second/foreign language. In addition, the non-native speakers were considered fluent users of English, which added another dimension insofar as none could be regarded as language learners who might have trouble with a questionnaire not in their native language. (Before admission to this graduate program, non-native speakers of English are required to score 600 or higher on the pencil and paper TOEFL test, or 250 or higher on the computer version.)
4.1. Data collection procedures
The data were collected in the graduate TESL methodology course, a four-credit
graduate course meeting once a week for 2 h and 45 min. The data consisted of
instructor field notes taken during the student small group and full class discussions.
The first group of informants, the 24 graduate M.A. TESOL students, were assigned
various articles on learning styles research as part of their requirements for this
course. In weeks three and four of the term, two full class sessions were devoted to learning styles. The first class session centered on a review and discussion of learning styles: what they are, the findings of different researchers (e.g. Felder and Henriques, 1995; Stebbins, 1995) and classroom implications. In the next class session, the
graduate students took the LSI in class and scored it themselves. Once all the students had had the opportunity to take and score the LSI, they were randomly placed into small groups of 4–5. In their small groups, they were instructed to compare and discuss their scores on the LSI, their reactions to the LSI, and their feelings regarding the usefulness of such an instrument. They were also told to reflect upon what they saw as the role of culture versus personality in learning style preferences.
Finally, they were told to choose one person as the ‘‘secretary’’ or ‘‘recorder’’ for
their group to take notes of their main points and to be the primary spokesperson
for later class discussion. The instructions were listed on the blackboard and reviewed before the small group discussions.
During the small group discussions, the instructor circulated around the classroom taking notes as unobtrusively as possible. The small group discussions were
followed by a full class discussion on the insights and comments brought out in
the small groups. Highlights and main points were listed on the blackboard. Every
effort was made to include all salient information. At the end of the class session,
the instructor asked for volunteers who were non-native speakers of English to par-
ticipate in oral interview sessions on learning styles and learning instruments.
Ideally the class discussions would have been tape-recorded or video-taped, but the physical realities of the classroom setup prevented tape recording, and other logistical problems prevented videotaping of the session. The instructor's notes were, however, extensive and provide an adequate record of student discussion and comments.
Data from the second group, the 10 graduate M.A. TESOL volunteers, were
collected through semi-structured interviews. These interviews were conducted
over a three-week span by the researcher who met individually with each volun-
teer at a mutually convenient time. Twice the interview sessions were cancelled because of snow. Eight of the interviews took place in a private cubicle in the M.A. TESOL office, and two were held in an empty classroom in an adjacent building.
The interviews were tape-recorded and subsequently transcribed. The two infor-
mants who were not enrolled in the course and had not yet seen the LSI were gi-
ven the opportunity to take it and score it themselves before the interview
sessions.
Before each interview actually began and the tape-recorder was turned on, the researcher and informant engaged in polite chitchat (e.g. ‘‘How do you like this winter we're having? How is your semester going?’’) for the purpose of putting the informant at ease and establishing rapport. After 2–3 min, the researcher explained very generally the purpose of the interview and asked if the informant had any questions.
The most commonly asked question was when the results of this research would be
published, accompanied by a request to e-mail the informant when the work
appeared.
Once the informant felt comfortable, the researcher explained the procedure of
the interview to the student: First the informant would have the opportunity to read
over the LSI. When ready, the informant would indicate to the researcher her readiness to begin the interview. With the tape-recorder turned on, the researcher would
begin the interview by asking the informant to state her name, native language, how
long she had been studying in the graduate M.A. TESOL program, and her current
and prospective teaching plans. After these preliminaries, the informant would be
asked some questions that she should talk about as long as she liked; there were
no time limits (see Appendix C). The researcher had the interview guide before
her, but it could not be seen by the informant. When the interview was completed,
the researcher turned off the tape recorder and thanked the informant. After the
informant left, the researcher wrote down any significant impressions from the
interview.
Semi-structured interviews offer several advantages. In a previous study conducted by the researchers (Wintergerst et al., 2002), semi-structured interviews were found to provide a rich source of data. With their open-ended questions, semi-structured interviews are well suited to this type of descriptive study: open-ended questions allow the researcher to focus on a particular topic or topics while allowing for flexibility in providing opportunities for two-way communication.
The semi-structured interview permits the researcher to ask more complex and in-
volved questions, allows informants to expand and elaborate upon their answers,
and allows the researcher and the informants to ask for clarification or explanation
when they are unsure or require more detail. Unstructured interviews, which simply ask informants to talk on a topic, were too broad and imprecise given the type of information regarding the LSI the researchers were investigating. Structured interviews, in contrast, which ask informants identical questions with a limited set of possible responses (e.g. very good, good, poor, very poor), were not considered appropriate because they are too restrictive and uninformative for this type of research.
5. Results
5.1. The role of context
Similar issues were raised in both the class discussions and in the interviews.
One major concern raised by both groups of informants was the role of context in responding to the LSI questions. Students argued that in different contexts they would make different choices on the LSI. Several students pointed out that there were a number of statements to which they would change their response, depending upon the learning context. For example, for Question #4, ‘‘I learn more when I make a model of something’’, or for Question #12, ‘‘I learn more by reading textbooks than by listening to lectures’’, these students felt that their responses would change depending on what type of ESL course they were enrolled in, e.g. a speaking/listening versus a grammar course, or academic English versus community English. They also felt that age plays a role; different age groups engage in different types of learning activities, some of which might be more appropriate to some of the LSI questions. For example, the students pointed out that elementary school learners are more likely to make a model of a house and label it to learn the appropriate vocabulary than are university or community students. The students also felt that some of the questions, such as Question #4, did not really apply to language learning but to other types of learning situations. They suggested that if the question referred to a content area course, such as a science course or an introductory course of general knowledge such as history or psychology, their answer would be different from what they would choose if they were enrolled in a language or writing course.
Several of the international students remarked that they would choose different
answers based on whether the statements referred to classes in their country or to
classes at their university in the US. The Asian students from Korea and Taiwan
emphasized that EFL classes are generally large and students must pass standardized
tests. How a student prefers to learn is not as important as passing these tests, which depend largely on memorized learning.
In both the class discussion and in the oral interviews informants noted that a
number of the questions asked whether the respondent enjoyed working with others,
or learned better with others (e.g. Questions #1, #5, #11, and #15). Both the class
discussion and the interviews revealed that informants felt their response to these
types of statements depended on three factors: what they had to learn; with whom they had the option of learning; and finally, the kind of material they needed to learn.
As one of the interview informants states:
Y: A lot of them [the questions] asking if I enjoy working with others or do I learn more or better with others. It's sometimes, it depends on the situation. I will not be absolute about ‘‘always’’ or ‘‘never’’ because sometimes it might be but sometimes it might not be.
Other concerns regarding the wording of several of the LSI questions were voiced in the interviews. Again in Question #4, ‘‘I learn more when I can make a model of something’’, the informants stated they had trouble with this question because they could not see the relevance of this statement to their current educational studies:

M: I have to think twice because I don't build.

H: I couldn't think what you mean by ‘‘make a model’’. I thought this was a little strange if I could make a model.
Both groups of informants experienced a similar problem with Question #7, ‘‘I enjoy learning in class by experiments’’. In both the class discussion and the oral interviews, informants questioned why such a statement was included, arguing that experimentation is typically associated with science classes rather than language learning. As H expressed in the interview:

H: I think experiments more for scientific programs, so in language class how do we experiment? You should gear more toward language teaching, maybe like ‘‘I enjoy learning more in class by giving presentations’’.
During this discussion, the instructor pointed out that the instructions on the LSI indicate that respondents are to circle their answer for each statement based on how they learn or learned English (see Appendix C). This elicited discussion by the students on the importance of reviewing instructions more than once, and a related discussion on actual test-taking practices that interfere with accurate assessment. These practices include not following or forgetting specific instructions, whether in whole or in part, and becoming impatient with the questions and ‘‘just answering to get it done’’.
5.2. Attitudes to questionnaires
The tendency to ‘‘just get it done’’, rather than thoughtfully and carefully completing a questionnaire, was another concern voiced by both groups of informants. One interview informant, M from Taiwan, recounted her experience in another class in a previous semester where the students were asked to complete a two-page survey. She noted that if there are too many questions, she just leaves them blank.
The students felt that the inclination to ‘‘just get it done’’ was also exacerbated by the questions themselves. Both in the class discussions and in the interviews, informants pointed out repeatedly the repetition among the statements. The students were very much aware that the same questions were asked in different ways on the LSI and questioned why. Indeed, in the class discussion several students mentioned that they had thought this was some sort of mistake. Even though the instructor pointed out that stating the same thing in more than one way is a way of ascertaining whether there is consistency in responses, the students still felt that this was a weakness of the LSI. In the class discussion, C from Taiwan pointed out that even though the LSI is a short questionnaire, if students are learners of English, they may find it too difficult and too boring to ‘‘pay attention’’ until the end of the questionnaire, especially if they think, ‘‘What's the point? Always the same question’’. These are considerations which would certainly affect results.
In the interviews, informants echoed similar feelings:
C: Some of the questions are quite similar. It's ok, but if it's getting longer, I will get bored and I don't exactly know what your point in asking me this is.

Y: Some of the questions are repeating themselves; I already answered them but keep asking.
5.3. Linguistic issues
Another issue raised by many of the non-native speaking informants was the use of ‘‘better’’ in Questions #3, #9 and #22. They questioned the use of ‘‘better’’, wondering ‘‘better than what?’’ They felt this was confusing because for them there had to be a comparison with something else. In the small group discussions, and later in the full class discussion, the American graduate students found themselves called upon to clarify the use of ‘‘better’’ in these statements. Adding to the confusion was the use of ‘‘best’’ in Question #11. Several of the non-native speakers were bemused by the use of ‘‘better’’ in some statements and ‘‘best’’ in another, finding no comparison being made with anything. In one of the interviews, A remarks:

A: What's the difference between ‘‘better’’ and ‘‘best?’’ How can we differentiate between ‘‘better’’ and ‘‘best’’? I think you should unify some words like ‘‘prefer’’ and ‘‘learn better’’. It's confusing. What do you mean?
Both the native speakers of English and the non-native speakers felt that a learning styles instrument developed specifically for learners of English would need to address this issue of ‘‘better’’ and ‘‘best’’.
Another important point raised was that it was difficult for them to choose between the four choices offered on the LSI (always, very often, sometimes, never). As mentioned earlier, the researchers had purposely elected to offer four, rather than five, choices in order to avoid the ‘‘middle-of-the-road syndrome’’. In their first study the researchers had noticed a tendency to choose the middle choice; by offering only four, they hoped respondents would evaluate the statements, and their own feelings, more precisely. As it turned out, the informants consistently expressed how unhappy they were with the choices provided. In the class discussion, this was a central theme first raised by all the small groups and then commented on again extensively in the full group discussion. The Asian students in particular felt that the use of never was too strong and too negative for them to choose. As IS from Korea noted, ‘‘I can't make choice to choose ‘never’; maybe always, but ‘never’ is not good’’. Her sentiments were echoed by MJ, another Korean, who added that ‘‘never is so negative and we feel very bad’’. One American, A, expressed a similar reluctance to use ‘never’ and ‘always’ but offered another reason, namely that he did not feel comfortable making an absolute statement when circumstances themselves are not absolute: ‘‘you need room to budge’’. In the class discussion, some students suggested the substitution of ‘‘seldom’’ for ‘‘never’’ and ‘‘much of the time’’ for ‘‘always’’ as these choices are less categorical.
In the oral interviews, the same concerns with respect to answer choices were
raised by many of the informants:
S: I chose few ‘‘always’’ but no ‘‘never’’ because too strong, so I never choose never. I prefer ‘‘very often’’ or ‘‘most of the time’’. It will be more helpful.

J: I never chose ‘‘never’’ because even though I didn't like that because ‘‘never’’ say never. I like better maybe ‘‘hardly ever’’. ‘‘Always’’ too absolute.

V: I think it's hard to mark ‘‘always’’ and ‘‘never’’, very hard. I think I kept going back and forth between ‘‘very often’’ and ‘‘sometimes’’.
Interview informants also pointed out that for ESL students ‘‘very often’’ is a con-
fusing choice because it consists of two adverbs, and that ‘‘usually’’ would be a better
choice. In the oral interviews, M, a Taiwanese informant, states:
M: I feel I have a problem with ‘‘always’’ and ‘‘never’’ because most of the time it's ‘‘sometimes’’. Probably I would have preferred ‘‘rarely’’ because ‘‘never’’ is so absolute.
Along the same lines, H, a Korean informant, states:
H: I think maybe other words good for more people. I have a problem with hardly ever and rarely because from my experience everyone, teachers and students have a hard time to distinguish words between the words rarely and hardly. I think Asian students rarely use the word rarely. Hardly ever is kind of more familiar.
Interestingly, H finds ‘‘rarely’’, the term suggested both in class and during the
course of the interviews, difficult and suggests the synonym ‘‘hardly ever’’ as a better
choice because it is a more familiar term.
6. Discussion and implications
The results of this study on the LSI suggest that quantitative research or sta-
tistical findings alone are insufficient to ascertain the effectiveness and usefulness
of a learning styles instrument, particularly in the case of non-native speakers.
The LSI, like any paper and pencil measurement instrument, is subject to questions of validity (see, e.g. Brown, 2001; Dörnyei, 2003; Keeves, 1988; Oppenheim, 1992).
As the data analysis of the class discussions and interviews reveals, several factors influence the validity and reliability of questionnaires. Two important findings were the respondents' inability to contextualize or apply statements to the current situation and misunderstandings due to wording or poor word choice.
The inability of informants to contextualize questions and their dislike of the answer choices would not reveal themselves in any quantitative data, since these reflect subjective opinions on the nature of the test instrument itself. Even if researchers observe a statistical tendency on the part of respondents to avoid the extreme ends (‘‘always’’ and ‘‘never’’), they do not necessarily know why respondents are avoiding these extremes, nor are they aware of options with which respondents would feel more comfortable.
When informants were unsure or unclear as to the implied context, they felt not only that it was difficult to choose the appropriate response, but also that their responses were not necessarily indicative of how they would answer under different circumstances. The question arises as to how useful the results of a learning styles instrument are for the language classroom if respondents would answer differently in different contexts and situations, or if their responses were based on an imagined context other than language learning.

A third factor was the finding that respondents are often uninterested or bored when completing such a questionnaire. If respondents check answers merely to complete a survey instrument, they are not reflecting upon the questions or indicating their true preferences (see Brown, 2001; Dornyei, 2003; Porte, 2002 for detailed discussions of survey research).
Combining quantitative survey questionnaires with qualitative techniques helps researchers to better understand the quantitative data (Brown, 2001). Quantitative research needs to be followed up with qualitative research methods such as oral interviews in order to reveal some of the threats to reliability found in survey questionnaires. Interviews can help researchers learn which survey items the participants found confusing, or how environmental and personal factors such as noise level, fatigue, or boredom affected their completion of the questionnaires.
Needless to say, interviewing informants has its own limitations, including the difficulty and time commitment of conducting such research. Another drawback is that oral interviews do not guarantee honest answers; informants may choose to provide what they think the researcher wants to hear, or they may be intimidated by the interview process and offer more positive responses than they actually believe (Johnson, 1992; Nunan, 1992). A further problem with interviews is that of failing to elicit expansive answers. At times the informant will provide only a short, uninformative answer, and the researcher must consider how best to elicit a more informative response without leading the informant.
Interviews, however, are useful for investigating informants’ attitudes and experiences in depth, while questionnaires are appropriate when researchers opt for breadth, or responses from a larger number of participants (Wallace, 1998). Both techniques involve asking questions to gather data; however, using the strengths of each technique will ensure more comprehensive data collection (see, e.g. Spradley, 1997 for a detailed discussion of ethnographic interviews).
Another option for supporting quantitative research with qualitative methods is the think-aloud technique, whereby researchers observe informants participating in a task and record the informants’ thoughts as they engage in it. This technique can be very effective for ascertaining the mental processes informants engage in as they complete a task, and it avoids the problem of their forgetting why and how they made a particular choice (short-term memory loss), since informants verbalize their thoughts and complete the task concurrently (see, e.g. Ericsson and Simon, 1993 for a full discussion of the think-aloud technique). While this technique offers instantaneous insights into the thought processes of the informants, it also has drawbacks. These include researcher intervention through the types of questions researchers may ask (Boren, 2000) and the Observer’s Paradox, whereby the very presence of the researchers may bias or skew the informants’ mental processes (Labov, 1972).
Using more than one method of gathering data gives researchers greater opportunity to gain insight into what they are researching. A triangular approach utilizing a questionnaire, oral interviews, and participant observations presents a fuller picture with regard to the comprehensibility of the LSI. Having the teacher take active notes while students are interviewed, and subsequently transcribe the resulting audiotapes, adds another dimension to the findings of the research. Such a fuller picture confirms the importance of the teacher explaining the purpose of the LSI before it is administered to students, including how statements with the same intent appear more than once to confirm understanding, and how thoughtful consideration of the choices gives truer insight into students’ actual learning styles. Although such a process is time-consuming, it is essential if researchers are to provide teachers with a clearer understanding of what learning styles are and how they may be assessed using an instrument such as the LSI.
Appendix A. Learning Styles Indicator (Wintergerst and DeCapua, 1999)
Circle your answer for each statement based on how you learn or learned English. Each statement is answered on a four-point scale: Always, Very often, Sometimes, Never.

(1) I enjoy working on an assignment with two or three classmates.
(2) I learn best in class when I can participate in related activities.
(3) I understand things better in class when I participate in role playing.
(4) I learn more when I can make a model of something.
(5) When I study alone, I remember things better.
(6) I get more work done when I work with others.
(7) I enjoy learning in class by doing experiments.
(8) When I work alone, I learn better.
(9) I understand better when I read instructions.
(10) When I build something, I remember what I have learned better.
(11) In class, I learn best when I work with others.
(12) I learn more by reading textbooks than by listening to lectures.
(13) When I do things in class, I learn better.
(14) I prefer to work by myself.
(15) When someone tells me how to do something in class, I learn better.
(16) I enjoy making something for a class project.
(17) When I read instructions, I remember them better.
(18) I prefer to study with others.
(19) When the teacher tells me the instructions, I understand better.
(20) I learn more when I can make something for a class project.
(21) I learn more when I study with a group.
(22) I learn better by reading than by listening to someone.
(23) I prefer to learn by doing something in class.

Statements drawn from Reid (1984).

Appendix B. Learning styles indicator scales
Learning style one: project orientation (PO)

Q20 I learn more when I can make something for a class project.
Q16 I enjoy making something for a class project.
Q3 I understand things better in class when I participate in role playing.
Q24 I prefer to learn by doing something in class.
Q10 When I build something, I remember what I have learned better.
Q7 I enjoy learning in class by doing experiments.
Q2 I learn best in class when I can participate in related activities.
Q4 I learn more when I can make a model of something.
Q13 When I do things in class, I learn better.
Q19 When the teacher tells me the instructions, I understand better.
Q15 When someone tells me how to do something in class, I learn better.

Learning style two: group activity orientation (GAO)

Q18 I prefer to study with others.
Q22 I learn more when I study with a group.
Q11 In class, I learn best when I work with others.
Q6 I get more work done when I work with others.
Q1 I enjoy working on an assignment with two or three classmates.

Learning style three: individual activity orientation (IAO)

Q14 I prefer to work by myself.
Q8 When I work alone, I learn better.
Q23 I learn better by reading than by listening to someone.
Q12 I learn more by reading textbooks than by listening to lectures.
Q5 When I study alone, I remember things better.
Q17 When I read instructions, I remember them better.
Q9 I understand better when I read instructions.

Statements drawn from Reid (1984).
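The scale groupings listed in Appendix B can be illustrated with a short scoring sketch. This is not part of the original study: the authors do not publish a scoring procedure, so the four-point numeric coding (Always = 4 through Never = 1) and the use of scale means are assumptions made here purely for illustration.

```python
# Illustrative sketch only: scoring LSI responses by scale.
# Scale memberships follow Appendix B; the numeric coding of the
# four response options is an assumption, not the authors' method.
SCALES = {
    "PO": [20, 16, 3, 24, 10, 7, 2, 4, 13, 19, 15],
    "GAO": [18, 22, 11, 6, 1],
    "IAO": [14, 8, 23, 12, 5, 17, 9],
}
CODING = {"Always": 4, "Very often": 3, "Sometimes": 2, "Never": 1}

def score_lsi(responses):
    """responses: dict mapping item number -> response label.
    Returns the mean coded response for each scale (None if no items
    on that scale were answered)."""
    means = {}
    for scale, items in SCALES.items():
        values = [CODING[responses[i]] for i in items if i in responses]
        means[scale] = sum(values) / len(values) if values else None
    return means

# Example: a respondent who answers "Always" to every group-work item
example = {i: "Always" for i in SCALES["GAO"]}
print(score_lsi(example))  # → {'PO': None, 'GAO': 4.0, 'IAO': None}
```

A mean per scale (rather than a sum) keeps scores comparable across scales of different lengths, which matters here since PO has eleven items but GAO only five.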
Appendix C. Interview questions
(1) What do you see as the importance of
(a) knowing your own learning style?
(b) teachers’ knowing students’ learning styles?
(2) Have you changed your learning style preferences since you began studying in the US? How have you adapted to the US classroom, i.e. what changes have you made in your learning style preference(s), if any? Explain.
(a) Have these changes been easy or hard? Explain.
(3) Which learning style do you prefer?
(a) What cultural influences (if any) do you see influencing learning style
preferences?
(4) How effective was the LSI in determining your learning style preferences?
(a) Why or why not?
(b) What would you change/reword?
(c) How would you change/reword this?
References
Boren, M.T., 2000. Thinking aloud: reconciling theory and practice. IEEE Transactions on Professional
Communication 43, 261–278.
Brown, J.D., 2001. Using Surveys in Language Programs. Cambridge University Press, Cambridge.
Curry, L., 1987. Integrating Concepts of Cognitive or Learning Style: A Review with Attention to
Psychometric Standards. Canadian College of Health Service Executives, Ottawa, Ont.
DeBello, T., 1990. Comparison of eleven major learning styles models: variables, appropriate populations,
validity of instrumentation, and the research behind them. Journal of Reading, Writing, and Learning
Disabilities International 6, 203–222.
Dornyei, Z., 2003. Questionnaires in Second Language Research: Construction, Administration and
Processing. Erlbaum, Mahwah, NJ.
Dunn, R., Dunn, K., Price, G., 1975. The Learning Style Inventory. Price Systems, Lawrence, KS.
Dunn, R., Dunn, K., Price, G., 1989. Learning Styles Inventory (LSI): An Inventory for the Identification
of How Individuals in Grades 3 Through 12 Prefer to Learn. Price Systems, Lawrence, KS.
Ehrman, M., Oxford, R., 1990. Adult language learning styles and strategies in an intensive training
setting. The Modern Language Journal 74, 311–327.
Eliason, P., 1995. Difficulties with cross-cultural learning styles assessment. In: Reid, J. (Ed.), Learning
Styles in the ESL/EFL Classroom. Heinle & Heinle, Boston, pp. 19–33.
Ericsson, A., Simon, H., 1993. Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge, MA.
Felder, R., Henriques, E., 1995. Learning and teaching styles in foreign and second language education.
Foreign Language Annals 28, 21–33.
Gay, L., Airasian, P., 2000. Educational Research, sixth ed. Prentice-Hall, Upper Saddle River, NJ.
Gregorc, A., 1982. Gregorc Style Delineator. Gabriel Systems, Maynard, MA.
Inclan, A.F., 1986. The development of the Spanish version of the Myers Briggs type indicator, form G.
Journal of Psychological Type 11, 35–46.
Itzen, R., 1995. The Dimensionality of Learning Structures in the Reid Perceptual Learning Style
Preference Questionnaire. Unpublished doctoral dissertation, University of Illinois, Chicago.
Johnson, D.M., 1992. Approaches to Research in Second Language Learning. Longman, London.
Keeves, J. (Ed.), 1988. Educational Research, Methodology, and Measurement: An International
Handbook. Pergamon Press, Oxford.
Kolb, D., 1976. The Learning Style Inventory: Self-Scoring Test and Interpretation. McBer & Company,
Boston.
Kolb, D., 1985. Learning Style Inventory (Revised Edition). McBer & Company, Boston.
Labov, W., 1972. Some principles of linguistic methodology. Language in Society 1, 97–120.
Melton, C., 1990. Bridging the cultural gap: a study of Chinese students’ learning style preferences. RELC
Journal 21, 29–51.
Nunan, D., 1992. Research Methods in Language Learning. Cambridge University Press, Cambridge.
O’Brien, L., 1990. Learning Channel Preference Checklist (LCPC). Specific Diagnostic Services, Rockville,
MD.
Oxford, R., 1993. Style Analysis Survey (SAS). University of Alabama, Tuscaloosa, AL.
Oxford, R., 2000. Personal communication.
Oppenheim, A.N., 1992. Questionnaire Design, Interviewing and Attitude Measurement, second ed. Pinter
Publications, London.
Porte, K., 2002. Appraising Research in Second Language Learning. John Benjamins, Amsterdam/Philadelphia.
Purpura, J., 1998. The development and construct validation of an instrument designed to investigate
selected cognitive background characteristics of test-takers. In: Kunnan, J. (Ed.), Validation in
Language Assessment. Erlbaum, Mahwah, NJ, pp. 119–139.
Reid, J., 1984. Perceptual Learning Styles Preference Questionnaire. Copyrighted.
Reid, J., 1987. The perceptual learning style preferences of ESL students. TESOL Quarterly 21, 87–111.
Riechmann, S., Grasha, A., 1974. A rational approach to developing and assessing the construct validity
of the Student Learning Style Scales Instrument. Journal of Psychology 87, 213–223.
Ruble, T., Stout, D., 1990. Reliability, construct validity and response set bias of the revised Learning
Style Inventory (LSI-1985). Educational and Psychological Measurement 50, 619–628.
Spradley, J., 1997. The Ethnographic Interview. Thompson International, New York.
Stebbins, C., 1995. Culture-specific perceptual-learning-style preferences of post-secondary students of
English as a second language. In: Reid, J. (Ed.), Learning Styles in the ESL/EFL Classroom. Heinle &
Heinle, Boston, pp. 108–117.
Tabachnick, B., Fidell, L., 2001. Using Multivariate Statistics. Harper & Row, New York.
Wallace, M., 1998. Action Research for Language Teachers. Cambridge University Press, Cambridge.
Wintergerst, A., DeCapua, A., 1999. Learning Styles Indicator (LSI). Copyrighted by Wintergerst and
DeCapua. Available through Ann Wintergerst, Department of Languages and Literatures, St. John’s University, Queens, New York 11439.
Wintergerst, A., DeCapua, A., 2001. Exploring the learning styles of Russian-speaking students of English
as a second language. The CATESOL Journal 13, 23–46.
Wintergerst, A., DeCapua, A., Itzen, R., 2001. The construct validity of one learning styles instrument.
System 29, 385–403.
Wintergerst, A., DeCapua, A., Verna, M., 2002. An analysis of one learning styles instrument for language
students. TESL Canada Journal/Revue TESL du Canada 20, 16–37.