
Running Head: QUANTITATIVE OBSERVATION

In press at the Journal of College Student Development

An Inside View: The Utility of Quantitative Observation in Understanding

College Educational Experiences

Corbin M. Campbell

Teachers College, Columbia University

525 W. 120th St, Box 101

New York, New York 10027

[email protected]

212-531-5182

Acknowledgements: This research is supported by a National Academy of Education/Spencer

Foundation fellowship. Appreciation also to Marisol Jimenez, research assistant for the CEQ research

project, who developed the training protocols detailed in this paper.


Abstract

This paper describes quantitative observation as a method for understanding college educational

experiences. Quantitative observation has been used widely in several fields and in K-12

education, but has had limited application to research in higher education and student affairs to

date. The paper describes the central tenets of quantitative observation, using an example

protocol, the College Educational Quality (CEQ) study, to illustrate its potential application to

higher education and student affairs research. Quantitative observation allows researchers to

witness the educational process as it unfolds, and does so in a systematic way that enables

understanding of patterns across time, groups, and settings.


An Inside View: The Utility of Quantitative Observation in Understanding College

Educational Experiences

The learning outcomes assessment movement has given rise to several new ways to consider

student outcomes, including self-reported learning outcomes, standardized tests, and analysis of

student work (Ewell, 2008; Kuh, 2001; Rhodes, 2012). In this context, the field of higher education

and student affairs continues to play an important role in developing an understanding of

student experiences in educational practices that bring forth these outcomes (Calhoun, 1996; Inkelas

& Weisman, 2003; Kuh, 2009). The field has developed rich theory and applications to practice that

point to the complex dynamics among students' individual, social, and historical contexts, their lived

experiences, and their educational environments that shape student outcomes (e.g. Gurin, Dey,

Hurtado, & Gurin, 2002; Jones, 2009; Renn, 2003; Torres, Jones, & Renn, 2009). The purpose of

this paper is to suggest the utility of quantitative observation as a method of data collection for

understanding college educational experiences as situated in specific student and educational

contexts.

First, due to its use of expert raters, quantitative observation can capture patterns in rich, theoretically driven constructs, which can be applied to understanding student experiences

both in and out of the classroom. For example, quantitative observation could be used to

investigate the proportion of students who show dissonance during inter-group dialogue

discussions, the prevalence of heteronormative values displayed during bystander intervention

programs, or the group dynamics in orientation conversations. Second, quantitative observation is well suited to understanding educational practices because it records data in real time. For

example, a quantitative observational protocol could compare two alcohol education programs by

witnessing how the programs support students in progressing through stages of change.

While the use of quantitative observation is not common in higher education and student

affairs research, a few studies have explored this approach, largely focusing on in-class


experiences. In the early 1980s, Ellner and Barnes (1983) described the use of structured

observation for understanding college teaching. More recently certain studies have used a

quantitative observational protocol to study the complex dynamics among students, faculty, and

subject-matter in higher education (Campbell, 2015; Hora & Ferrare, 2014). In the field of

educational psychology and K-12 research, classroom observation methods have been used

extensively to describe educational practices and to research inequities (Alexander & Winne, 2006;

Anderson & Burns, 1989; Hilberg, Waxman, & Tharp, 2004; Kane, Kerr, & Pianta, 2014).

Although it is not possible to describe all aspects of quantitative observation due to the

variation in this data collection method1, this paper will: 1) describe the central tenets of quantitative

observation and resources for implementation in research on college educational experiences; and 2)

illustrate the potential utility of this approach by discussing an example quantitative observational

protocol—the College Educational Quality project, an observational study of college teaching and

academic rigor in 587 courses across nine colleges and universities in the United States.

Quantitative Observation, Defined

Observational research in education, broadly construed, involves using the five senses in a

systematic way to understand a social phenomenon (Angrosino, 2007; 2012). Observation can be

clinical (conducted in a lab setting with manipulation by the researcher) or naturalistic (conducted in

the setting of the social phenomenon as it naturally occurs); unobtrusive (participants do not know

observation is taking place), reactive (the observer is known to the participants, but does not

participate), or participatory (observers actively participate in the phenomenon); qualitative (open-

ended; themes emerge from data) or quantitative (closed-ended; systematically coded according to

pre-defined theory) (Alexander & Winne, 2006; Angrosino, 2012). This paper focuses on

naturalistic, reactive, quantitative observational protocols due to their potential use in higher education

and student affairs research for understanding trends in educational experiences. I draw from the

1 I refer readers to Anderson & Burns (1989) and Angrosino (2007) for additional information about the method.


field of educational psychology, which understands quantitative observation as “a great range of

tools and techniques, both for generating the basic corpus of raw data and for processing those data

to develop quantitative scores or induce generalizations” (Alexander & Winne, 2006, p. 755).

Quantitative Observation Applied to Higher Education and Student Affairs

Stallings and Mohlman (1988) described several elements common across quantitative

observational protocols. I apply these categorizations to a higher education and student affairs

context in Table 1.

[INSERT TABLE 1 ABOUT HERE]

The College Educational Quality Study—Brief Introduction

To discuss the potential utility of quantitative observation as a method for understanding

college educational experiences, I describe the College Educational Quality study as an example.

The purpose of the College Educational Quality (CEQ) study was to explore a new

conceptualization and a new methodological protocol that allows for deeper insight into college

educational experiences in coursework. CEQ examines the course-based educational practices that

take shape between students and faculty, between students and course content, and among students.

Across these practices, CEQ considers two facets of educational quality: academic rigor and college

teaching. I focus on the cognitive complexity facet of the academic rigor framework in this paper as

an example of an observational protocol. The first phase of the CEQ project took place in the spring

semester of 2013 at two research institutions. The second phase of the CEQ project took place in

the fall semester of 2014 and included seven additional institutional sites: five were liberal arts colleges and

two were regional public institutions. Approximately 350 courses were sampled at each institution,

stratified by class size, discipline, and faculty category. Among faculty with sampled courses, 34%

agreed to participate. Two observers visited one class session, mid-semester, in each of the 587 participating courses.
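
To make the stratified sampling concrete, here is a minimal sketch of a proportional stratified draw in pandas. It is an illustration only: the course roster, column names, and category labels are hypothetical, not the CEQ study's actual sampling frame.

```python
# Hypothetical sketch of stratified course sampling; the data and column
# names are illustrative, not the CEQ sampling frame.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
frame = pd.DataFrame({
    "course_id": range(1200),
    "discipline": rng.choice(["humanities", "STEM", "social science"], 1200),
    "class_size_band": rng.choice(["small", "medium", "large"], 1200),
    "faculty_category": rng.choice(["tenure-line", "full-time NTT", "part-time"], 1200),
})

# Proportional allocation: sample every stratum at the same rate so the
# ~350 sampled courses mirror the institution's mix of strata.
rate = 350 / len(frame)
sample = (frame
          .groupby(["discipline", "class_size_band", "faculty_category"])
          .sample(frac=rate, random_state=0))
print(len(sample), "courses sampled")
```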


CEQ Study Observational Protocol

Below I describe how Stallings and Mohlman's (1988) elements of quantitative

observation apply to the CEQ study.

Conceptual basis for observation. Stallings and Mohlman (1988) describe three elements

of quantitative observation that relate to the conceptual framing of the study: the purpose, the

observational focus, and the operational definitions. The purpose of the CEQ observational study was

to understand college educational quality by viewing academic rigor and college teaching as they take

shape between students and faculty, between course content and students, and among students.

Given this purpose, the focus was on the educational practices in context. Across these practices,

academic rigor was operationalized into two constructs according to the conceptual framework, one of

which was the level of cognitive complexity (the focus of this paper). CEQ defined the level of

cognitive complexity in the coursework according to the revised Bloom's taxonomy, which suggests that

academic work can require students to engage, cognitively, in six increasingly complex levels:

remember, understand, apply, analyze, evaluate, and create (Anderson & Krathwohl, 2001; Bloom,

Engelhart, Furst, Hill, & Krathwohl, 1956; Braxton & Nordvall, 1985). This conceptualization

guided the training of observers and the rubric development.
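
Because the six levels form an ordered continuum, they can be treated as an ordinal scale. The sketch below is an illustration, not the CEQ instrument itself: it encodes the revised taxonomy as an ordered type so that the "average" and "highest" levels marked on the rubric are directly computable.

```python
# A sketch (not the CEQ instrument) of the revised Bloom's taxonomy as an
# ordered scale; the "A" (average) and "H" (highest) rubric marks follow.
from enum import IntEnum

class Bloom(IntEnum):
    REMEMBER = 1
    UNDERSTAND = 2
    APPLY = 3
    ANALYZE = 4
    EVALUATE = 5
    CREATE = 6

# Hypothetical levels coded during one class session:
observed = [Bloom.UNDERSTAND, Bloom.APPLY, Bloom.ANALYZE, Bloom.APPLY]
highest = max(observed)                  # the "H" mark on the rubric
average = sum(observed) / len(observed)  # the "A" mark on the rubric
print(highest.name, round(average, 2))   # ANALYZE 3.0
```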

Observer training. According to Stallings and Mohlman (1988), the training procedures for

observers are a key aspect of structured observation that allows for greater conceptual depth. Observer

training for the CEQ study was approximately 30 hours, and included three elements: observation

procedures, knowledge of the conceptual frameworks, and tuning observer ratings using the rubrics.

The training on procedures addressed the logistics of observing (what to expect before, during, and

after observing) and observer behavior. The training on the conceptual frameworks focused on

ensuring that the observers fully understood the ways in which academic rigor and college teaching were

defined in the study. The tuning training helped observers to map their understanding of the

conceptual frameworks onto rubric scores. To illustrate, I provide excerpts from the training


protocols in Table 2 below. Observers were required to pass a test on each of the three elements.

The tests culminated in observers scoring an entire class session of a college course and passing an

inter-rater certification, comparing observers' scores against the master ratings from the PI.

[INSERT TABLE 2 ABOUT HERE]

Observation procedures. Stallings and Mohlman (1988) delineated four aspects of

structured observation that specify procedures: the setting, observational schedule, unit of time, and

method for recording the data. For the CEQ project, based on the purpose and conceptualization

of the study, the setting was classrooms. To understand trends in educational practices, the team

observed a large number of courses (50-100) per institution, and the courses were as representative as possible by discipline, class size, and faculty category. The CEQ project sought the

“average” class experience, so the schedule was one-week site visits in mid-semester (avoiding exams

or vacations). Two in-person observers2 rated the entirety of one class session for each course (unit

of time). Observers were matched to rate courses in their own disciplinary background whenever

possible. The observers rated in real time; for example, observers would record the highest level of

cognitive complexity in the lecture or class discussions, no matter when that took place.
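
As an illustration of this real-time approach, the sketch below (hypothetical, not the CEQ team's software) logs each rated behavior as it occurs and retains the highest cognitive level reached in each rubric category, whenever in the session it happens.

```python
# Hypothetical sketch of real-time recording: each rated behavior is logged
# as it occurs, and the session keeps the highest Bloom level reached per
# rubric category, regardless of when it happened.
from dataclasses import dataclass, field

BLOOM = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

@dataclass
class SessionLog:
    # rubric category -> index of the highest Bloom level observed so far
    highest: dict = field(default_factory=dict)

    def record(self, category: str, level: str) -> None:
        idx = BLOOM.index(level)
        if idx > self.highest.get(category, -1):
            self.highest[category] = idx

log = SessionLog()
log.record("lecture", "understand")
log.record("class discussion", "analyze")
log.record("lecture", "evaluate")  # later in the session; supersedes "understand"
print({cat: BLOOM[i] for cat, i in log.highest.items()})
# {'lecture': 'evaluate', 'class discussion': 'analyze'}
```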

Methods to process and analyze data. According to Stallings and Mohlman (1988), there

are several ways to process the rich observational data into codes or scores for analysis (e.g.

checklists, evaluative scales, interactive coding schemes). Given that the purpose of the CEQ

project was to measure college educational quality in the aggregate and to provide comparative

information, the research team created an evaluative rubric following our conceptual frameworks. I

provide example rubric items for the cognitive complexity construct in Figure 1, below. While

these items may not be easily understood upon first reading, the training protocols assisted

observers in knowing how to rate reliably and validly according to the conceptual frameworks.

2 There were some classes where, due to scheduling conflicts, only one observer was present.


Raters were trained to code what behaviors were “class activities” versus “class discussion” and how

to distinguish among the levels of Bloom's revised taxonomy, such as "analyze" and "evaluate."

One aspect of quantitative observation that differentiates this method from qualitative

observation is the ability to generate descriptive, comparative, and predictive statistics. Results from

the CEQ study revealed the proportion of courses that achieved higher-order cognitive levels

(analyze, evaluate, create). Results were also used to compare these practices across institutional

contexts and to examine which course characteristics predicted a higher level of cognitive

complexity (Author, 2015).
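
A minimal sketch of this kind of predictive analysis follows, using simulated data and hypothetical variable names rather than the CEQ dataset: a logistic regression of whether a course reached a higher-order level on observable course characteristics.

```python
# Sketch of a predictive analysis on simulated data; variable names are
# hypothetical, not the CEQ dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
courses = pd.DataFrame({
    "class_size": rng.integers(8, 120, size=n),
    "discipline": rng.choice(["humanities", "STEM", "social science"], size=n),
    "highest_bloom": rng.integers(1, 7, size=n),  # 1 = remember ... 6 = create
})
# "Higher order" = analyze (4), evaluate (5), or create (6)
courses["higher_order"] = (courses["highest_bloom"] >= 4).astype(int)

model = smf.logit("higher_order ~ class_size + C(discipline)", data=courses)
print(model.fit(disp=0).summary())
```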

Reliability and Validity. Given the quantitative nature of observation scores, the method

lends itself to statistical validation procedures. Inter-rater reliability of CEQ data was calculated using a

one-way, absolute, average-measure, mixed-effects intra-class correlation (ICC) calculation (i.e. one-

way mixed-effects ANOVA; Hallgren, 2012). The ICC across all rubric items was .705,

demonstrating that inter-rater reliability was good (Cicchetti's [1994] cutoffs: .60-.74 = good).

The inter-rater reliability of the CEQ study demonstrates that theoretically rich constructs can be

rated with strong agreement among expert raters in higher education and student affairs.
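
For readers who want to run this type of reliability check themselves, a minimal sketch with the pingouin library follows. The toy data and column names are hypothetical; ICC1k is pingouin's one-way, absolute-agreement, average-measure coefficient, analogous to the .705 reported above.

```python
# Sketch of an average-measure ICC check; the toy scores are hypothetical.
import pandas as pd
import pingouin as pg

scores = pd.DataFrame({
    "course": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "rater": ["A", "B"] * 6,
    "rating": [3, 3, 5, 4, 2, 2, 6, 5, 4, 4, 1, 2],  # Bloom level, 1-6
})

icc = pg.intraclass_corr(data=scores, targets="course",
                         raters="rater", ratings="rating")
# ICC1k = one-way, absolute-agreement, average-measure coefficient
print(icc.set_index("Type").loc["ICC1k", ["ICC", "CI95%"]])
```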

The CEQ observational study included five scales, one of which was the cognitive

complexity construct. The research team conducted Confirmatory Factor Analyses (CFA) using

MPlus to determine the construct validity and the relationships among the five constructs. Model fit

indices indicated excellent fit of a five-factor intercorrelated model (RMSEA = .049, CI [.041, .057], CFI = .965, TLI = .956, SRMR = .047). The reliability of the constructs was high (Coefficient H: .809-.970). Readers can obtain a full description of the construct validation process and results by

contacting Author (author email).
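
The CEQ team fit its model in MPlus; as an open-source illustration of the same kind of analysis, the sketch below fits a small intercorrelated-factor CFA with the semopy package. The data are simulated, and the two factors and item names are hypothetical stand-ins for the study's five constructs.

```python
# Illustrative CFA with semopy (not the CEQ team's MPlus code); simulated
# data, two correlated latent factors, hypothetical item names.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(2)
n = 500
latent = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
items = pd.DataFrame({
    "cc1": latent[:, 0] + rng.normal(0, 0.6, n),
    "cc2": latent[:, 0] + rng.normal(0, 0.6, n),
    "cc3": latent[:, 0] + rng.normal(0, 0.6, n),
    "st1": latent[:, 1] + rng.normal(0, 0.6, n),
    "st2": latent[:, 1] + rng.normal(0, 0.6, n),
    "st3": latent[:, 1] + rng.normal(0, 0.6, n),
})

desc = """
cognitive =~ cc1 + cc2 + cc3
standards =~ st1 + st2 + st3
cognitive ~~ standards
"""
model = semopy.Model(desc)
model.fit(items)
print(semopy.calc_stats(model)[["RMSEA", "CFI", "TLI"]])
```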

Limitations of Quantitative Observation

While quantitative observation may show promise for research on college educational

experiences, there are several limitations that should be considered. Perhaps the most frequently cited


limitation is that quantitative observation requires more time and funding than some other

quantitative methodologies, such as survey research (Anderson & Burns, 1989; Hilberg, Waxman, &

Tharp, 2004). Although the researcher burden and cost may be higher than in survey methods, the approach has been used successfully at large scale in K-12 education (Kane, Kerr, & Pianta, 2014) and has begun to

receive funding in higher education (Hora & Ferrare, 2014). The CEQ project made a broad-scale

observational protocol more cost-effective by using volunteer observers (60+ graduate students in

higher education and student affairs programs). Further, the colleges involved in the CEQ study

were willing to pay in-kind in order to participate, by housing observers (making the cost

manageable for both the research team and participating institutions). A second limitation is

reactivity: subjects of observation may react differently and alter their behavior when observed (Jacob, Tennenbaum, & Krahn, 1987). For example, students may be more conscious of their behaviors if

an observer is present. There are certain practices that can ameliorate reactivity: being as

unobtrusive as possible, guaranteeing confidentiality, and focusing on aggregate rather than

individualized results (Anderson & Burns, 1989; Jacob, Tennenbaum, & Krahn, 1987).

Finally, the validity of the observations will only be as good as the conceptual framework guiding the study. In essence, the validity of quantitative observational data is only as clear as the glasses of the researcher: the data paint a picture of the classroom as the researcher sees it (assuming good

training protocols). While this is a limitation, this same aspect of quantitative observation means

that the method can closely follow a conceptual framework. If a study does use a strong

conceptualization and has good training procedures for raters, the data captured may be able to

more accurately report patterns of complex phenomena in the classroom (Hilberg, Waxman, &

Tharp, 2004). In particular, specifying behaviors that are highly agreed upon by both practitioners

and scholars (e.g. higher education researchers and teaching faculty) as easily codable, such as in a

structured, interactive rubric coding process, may allay some of these validity critiques (Stallings &

Mohlman, 1988).


Discussion

There are four features of quantitative observation that highlight its potential utility in higher

education and student affairs (Table 3). Although quantitative observation has had limited use in the

body of scholarship on higher education and student affairs to date, the time is ripe for the broader

introduction of this method. In a time when assessment of student outcomes is at the forefront of

higher education and student affairs as a field, quantitative observation (as a supplement to the

existing survey research) may contribute to a deeper understanding of what educational contexts

shape student outcomes. This paper used the CEQ study as an example of how quantitative

observation may apply in higher education research. Using observers who were experts in

understanding Bloom’s taxonomy revealed new insights about the level of cognitive complexity in

the classroom. Yet, this study was limited to in-class observations. Quantitative observation has,

largely, not been applied to examinations of the co-curriculum, to student developmental processes,

or to student affairs administration.

[INSERT TABLE 3 ABOUT HERE]

As the field continues to progress in understanding the complex and situated nature of

educational practices and student identities, quantitative observation may provide a useful additional

methodological tool for the higher education and student affairs researcher. The use of expert

raters may yield more valid codings of certain theoretically rich and contextualized constructs when

compared to a student’s self-report in a survey (Hilberg, Waxman, & Tharp, 2004). For example,

while students may report in a survey how often they interacted with racially diverse others (Gurin,

Dey, Hurtado, & Gurin, 2002), quantitative observation using observers who are expert in inter-

group contact theory (Allport, 1954; Pettigrew, 1998) might reveal whether student experiences with

these diverse others met the “common goals” condition for optimal inter-group contact.


Additionally, the quantitative nature of the scores garnered through structured observation lends itself to

demonstrating patterns of results across contexts. These descriptive and comparative statistics

would not be possible with qualitative observation (although qualitative observation has other

advantages, such as providing rich descriptions of educational processes). In these ways,

quantitative observation may offer a new window for asking and answering different questions that

shed light on patterns of complex educational practices in the field of higher education and student

affairs.

References

Alexander, P. A., & Winne, P. H. (Eds.). (2006). Handbook of educational psychology. Psychology Press.

Allport, G. W. (1954). The nature of prejudice. Cambridge, MA: Addison-Wesley.

Anderson, L. W., & Burns, R. B. (1989). Research in classrooms: The study of teachers, teaching, and instruction. Elmsford, NY: Pergamon Press.

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York, NY: Addison-Wesley Longman.

Angrosino, M. V. (2007). Naturalistic observation. Walnut Creek, CA: Left Coast Press.

Angrosino, M. V. (2012). Observation-based research. In J. Arthur (Ed.), Research methods and methodologies in education (pp. 165-170). Thousand Oaks, CA: Sage.

Astin, A. W. (1993). What matters in college? Four critical years revisited. San Francisco, CA: Jossey-Bass.

Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives, the classification of educational goals. Handbook I: Cognitive domain. New York, NY: McKay.

Braxton, J. M., & Nordvall, R. C. (1985). Selective liberal arts colleges: Higher quality as well as higher prestige? The Journal of Higher Education, 56(5), 538-554.

Calhoun, J. C. (1996). The student learning imperative: Implications for student affairs. Journal of College Student Development, 37(2), 118-122.

Campbell, C. M. (2015). Serving a different master: Assessing college educational quality for the public. Higher Education: Handbook of Theory and Research, 30, 525-579.

Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6(4), 284-290.

Ellner, C. L., & Barnes, C. P. (1983). Studies of college teaching. Lexington, MA: Lexington Books.

Ewell, P. T. (2008). Assessment and accountability in America today: Background and context. New Directions for Institutional Research, 7-17.

Gurin, P., Dey, E., Hurtado, S., & Gurin, G. (2002). Diversity and higher education: Theory and impact on educational outcomes. Harvard Educational Review, 72(3), 330-367.

Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23-34.

Hilberg, R. S., Waxman, H. C., & Tharp, R. G. (2004). Purposes and perspectives on classroom observational research. In H. C. Waxman, R. G. Tharp, & R. S. Hilberg (Eds.), Observational research in U.S. classrooms: New approaches for understanding cultural and linguistic diversity (pp. 1-20). West Nyack, NY: Cambridge University Press.

Hora, M. T., & Ferrare, J. J. (2014). Remeasuring postsecondary teaching: How singular categories of instruction obscure the multiple dimensions of classroom practice. Journal of College Science Teaching, 43(3), 36-41.

Inkelas, K. K., & Weisman, J. L. (2003). Different by design: An examination of student outcomes among participants in three types of living-learning programs. Journal of College Student Development, 44(3), 335-368.

Jacob, T., Tennenbaum, D. L., & Krahn, G. (1987). Factors influencing the reliability and validity of observation data. In T. Jacob (Ed.), Family interaction and psychopathology. New York, NY: Springer.

Jones, S. R. (2009). Constructing identities at the intersections: An autoethnographic exploration of multiple dimensions of identity. Journal of College Student Development, 50(3), 287-304.

Kane, T. J., Kerr, K. A., & Pianta, R. C. (2014). Designing teacher evaluation systems. San Francisco, CA: Jossey-Bass.

Kuh, G. D. (2001). Assessing what really matters to student learning: Inside the National Survey of Student Engagement. Change: The Magazine of Higher Learning, 33(3), 10-17.

Kuh, G. D. (2009). What student affairs professionals need to know about student engagement. Journal of College Student Development, 50(6), 683-706.

Pettigrew, T. F. (1998). Intergroup contact theory. Annual Review of Psychology, 49(1), 65-85.

Renn, K. A. (2003). Understanding the identities of mixed-race college students through a developmental ecology lens. Journal of College Student Development, 44(3), 383-403.

Rhodes, T. L. (2012). Show me the learning: Value, accreditation, and the quality of the degree. Planning for Higher Education, 40(3), 36-42.

Stallings, J. A., & Mohlman, G. G. (1988). Classroom observation techniques. In J. P. Keeves (Ed.), Educational research, methodology, and measurement: An international handbook (pp. 469-474). Elmsford, NY: Pergamon Press.

Torres, V., Jones, S. R., & Renn, K. A. (2009). Identity development theories in student affairs: Origins, current status, and new approaches. Journal of College Student Development, 50(6), 577-596.

Waxman, H. C., Tharp, R. G., & Hilberg, R. S. (Eds.). (2004). Observational research in U.S. classrooms: New approaches for understanding cultural and linguistic diversity. West Nyack, NY: Cambridge University Press.


Table 1

Main Elements of Quantitative Observational Method (Stallings & Mohlman, 1988)

Element / Application to Higher Education and Student Affairs Research

Purpose for the Observation. Observation is ideally suited to understand student, faculty, and administrator behaviors and educational processes as they unfold in their naturalistic setting (Hilberg, Waxman, & Tharp, 2004). For example, it can be used to examine inequities in campus climate or the facilitation of leadership development programs. Observation is less suited to understanding perceptions and outcomes.

Specific Observational Focus. Given that educational processes and behaviors could be viewed in infinite ways, a conceptual framework can delineate, specifically, what aspects of the observational setting are to be examined (Anderson & Burns, 1989). For example, the observation could focus on interactions among students, facilitation skills among staff, group dynamics, or the climate within the setting.

Operational Definitions. Because quantitative observation requires trained observers to code what they see into specific categories of interest to the research questions, there should be very specific operational definitions of all observed behaviors. These operational definitions can be derived from a conceptual framework that undergirds the research and are further specified in a rubric or other coding scheme. For example, an examination of racial climate in the residence hall might use a conceptual framework of sense of belonging and then operationalize how "belonging" would be coded into the rubric during observation.

Training Procedures for Observers. Given that expert observers are a linchpin of good data collection in quantitative observation, the training for observers is key to valid and reliable data (Anderson & Burns, 1989). These training procedures usually detail, in depth, the procedures for observation (e.g. time period, when to observe, what to observe, how to observe) and the conceptual frameworks for the study (to ensure inter-rater reliability). For example, observers of study habits in a student union could be trained to pay attention to the amount of collaboration in study groups.

A Setting. Quantitative observation is an applied method where data collection occurs in a naturalistic setting (Stallings & Mohlman, 1988). Therefore, a quantitative observational study will specify where, specifically, the observations will take place (e.g. classrooms, residence halls, libraries, student unions).

An Observation Schedule. The observation schedule is the amount of time the observation will take and how many observations will take place during the specified time period. For example, will the observation be a week-long site visit? A day in fall and spring semester? A 3-hour course or program?

A Unit of Time. The unit of time refers to how the observation data are recorded within one observation session. For example, a study of service learning could use the time-sample method (recording what happens every 2 minutes during a reflection discussion) or the real-time method (recording all critical behaviors that happened during that reflection discussion, regardless of when those behaviors happened) (Stallings & Mohlman, 1988).

A Method to Record the Data. Will observations be video-taped, audio-recorded with supplemental notes from an in-person observer, or recorded by an in-person observer using paper and pencil, a computer, or online software? For example, a study of social justice emphasis in learning communities might be captured via video, but, perhaps, an in-person observer would be more appropriate in a study of racial climate and norms.

Methods to Process and Analyze Data. According to Stallings and Mohlman (1988), there are three primary ways to code the data in quantitative observation: 1. Checklists (e.g. at certain time intervals, record which pedagogical technique faculty use); 2. Ratings or evaluative scales (e.g. how welcoming was the resident advisor on a scale of 1-5?); 3. Interactive coding schemes or "systematic observation," which uses a simplistic rubric to capture easily identifiable behaviors that do not require the observer to make strong inferences or judgments (Galton, 1988; Hilberg, Waxman, & Tharp, 2004) and could be used, for example, to capture the flow of interactions in student group meetings.

Reliability. According to Alexander and Winne (2006), there are three forms of reliability to consider in a quantitative observational protocol: inter-rater reliability (do observers' ratings match each other?); sampling reliability (would sampling other classes produce different results?); and coding reliability (are the codes reliably distilled in the same manner each time from observed behaviors?).


Table 2

Excerpts from CEQ training scripts.

Training Component

Example Language from Training Script

Procedures Training

DURING THE OBSERVATIONS Please arrive 5-10 minutes before the beginning of the class to introduce yourself to the professor. Please do the following: □ Try to sit in the back, middle of the class so you can observe what is happening in the different parts of the room….However, also scan the room periodically to see whether the patterns of behavior or actions you see around are reflected in more distant areas of the room.

Knowledge of Frameworks Training

In 1956, Benjamin Bloom, along with a group of other educators, devised a classification system of the learning objectives that educators could expect of, or set for, students in a teaching context. The classification system consisted of six categories addressing the cognitive, affective, and psychomotor domains of learning….Bloom et al. (1956) argued that the different categories represented a continuum from simple to increasingly more complex thinking, and that to address objectives at the upper levels of the continuum, teachers needed to have addressed skills and abilities at the lower levels of the continuum…In 2001, cognitive psychologists, testing and assessment experts, as well as other educators revised the taxonomy. The revised taxonomy consists of six categories: Remember, Understand, Apply, Analyze, Evaluate, and Create. …

Tuning Ratings Training

BLOOM’S TAXONOMY: THE ANALYZE LEVEL When students are asked to think at the level of analysis, they are asked to engage in differentiating, distinguishing, deconstructing, and comparing and contrasting. In essence, they are asked to compare attributes of diverse concepts, processes, or events, without rendering an opinion or judgment… For example, an assignment that calls for analysis could ask students to analyze an article in an academic journal. To complete such an assignment, students might be asked to indicate the article’s thesis, the parts of the article that support the thesis, the underlying assumptions of the author’s thesis, the implications of the author’s argument, and the possible implications of the author’s thesis….Tip: To differentiate between apply and analyze remember that applying calls for the use of one’s knowledge in new situations. Analysis calls for the skills to examine and break information into parts. The end goal of applying is using, while the end goal of analyzing is comparing and deconstructing.

Table 3

Utility of Using Quantitative Observation for Research in Higher Education and Student Affairs

Element Utility for Higher Education and Student Affairs Research

Real-time data collection

One of the most prominent features of quantitative observation is its ability to view educational experiences in process: “data are collected on classroom practices as they unfold” (Alexander & Winne, 2006, p. 755). Data collection does not suffer from possible recall biases, which has been noted as a limitation of survey responses (Porter, 2011; 2013).

An Applied Method

Quantitative observation allows researchers to study educational processes in naturalistic settings (Hilberg, Waxman, & Tharp, 2004). In this way, quantitative observation could be used to understand the application of educational interventions, change processes, and the genuine interactions among students.

Allows for Conceptual Training

Because quantitative observation uses trained observers to code data, quantitative observational studies can use more complex conceptual frameworks to examine educational processes (Anderson & Burns, 1989). Quantitative observation allows for “tuning” rubric ratings specifically to the conceptual framework intended in the study.

Understanding Trends in Subtler Educational Processes

Because quantitative observation allows for viewing and collecting data on subtler processes and is a more proximal data collection method than many other quantitative data collection methods, it may allow for more detailed and precise evidence about broader trends in college educational experiences (Anderson & Burns, 1989; Good, 1988).


Figure 1. Excerpt from Cognitive Complexity items in CEQ observation rubric

Remember: Recognize, recall, repeat back (state back the same information; recall information)
Understand: Exemplify, classify, explain, summarize (state back the same information in a different way or with additional examples)
Apply: Execute, implement (apply the same concept to a new setting, to the field, or to a real-world problem)
Analyze: Differentiate, distinguish, deconstruct, compare and contrast (discuss and compare attributes of the idea, without forming an opinion)
Evaluate: Critique, test, judge (discuss strengths and limitations AND form an opinion about an idea; substantiate one's opinion about the idea)
Create: Generate, produce, construct, hypothesize (take this idea and make something new from it; more than applying the same concept, actually creating a new concept different from the learned concept)

Please rate both the average level attained during the lesson (mark A) and highest level attained by the end of the lesson (mark H).

With regard to the class’s subject matter….

[Rating columns: REMEMBER | UNDERSTAND | APPLY | ANALYZE | EVALUATE | CREATE | NA (i.e., the class did not contain lecture, handouts, activities, questions, or discussions)]

The instructor’s lecture reflected what level of cognitive processing?

The level of the handouts or other visual material in class reflected what level of cognitive processing?

The class activities required students to . . .

The questions asked by the instructor required students to…

The class discussions demonstrated students’ ability to. . .

The questions asked by students demonstrated students’ ability to…