ASSESSMENT OF STUDENT LEARNING AS AN EMOTIONAL PRACTICE FOR
FACULTY: A QUALITATIVE STUDY

by

Tracy L. Labadie

This dissertation is submitted in partial fulfillment of the requirements for the degree of

Doctor of Education

Ferris State University

November 2019
ABSTRACT
Assessment of student learning is a necessary component of the work life for educators.
The real promise of student learning assessment depends on faculty embracing the process.
Although formalized assessment has been practiced for several decades, faculty
engagement in the assessment process continues to be the number one barrier to effectively
cultivating a culture of assessment across the campus. Understanding faculty’s emotional journey
while doing assessment of student learning will enhance institutional ability to support faculty
through the process, garner faculty buy-in, and cultivate a culture of assessment on campus.
Continuing to ignore the emotional factors that may impact the assessment process may cause
some faculty to remain on the margins of the institution, a position that provides no opportunity
or incentive to participate in college-wide quality improvement efforts.
This qualitative study contributes to the limited research on the non-transactional
elements of student learning assessment by capturing stories that will increase awareness and
understanding of the faculty emotional landscape that accompanies the assessment experience.
The study’s design provides the higher education community, particularly faculty and
administrators, a better understanding of the emotional barriers that may inhibit faculty
motivation and participation. It will also promote a better understanding of the types of emotions
that may come into play and identify whether any point in the process is a catalyst for faculty's
willingness to engage. This study has the potential to contribute knowledge to support cultural
shifts in higher educational institutions.
KEY WORDS: assessment of student learning, faculty buy-in, culture of assessment, emotions,
emotions in the workplace, emotional landscape
ACKNOWLEDGMENTS
Completion of this dissertation would not have been possible without the support and
guidance from many people. I wish to thank Dr. Gary Wheeler, Dr. Jennifer (Jenny) Schanker,
and Dr. Thomas (Tom) Quinn who comprised my dissertation committee. I would especially like
to thank Dr. Wheeler, whom I consider a confidant and mentor, for his unwavering guidance. I
owe much of what I have accomplished with this research to his continued patience, coaching,
and support. I would be remiss if I did not thank the faculty included in this study for their
participation and support. Their stories were enlightening and powerful.
My heartfelt gratitude goes out to Dr. Sandra (Sandy) Balkema of Ferris State University,
Dr. Tammy Russell of Glen Oaks Community College, Dr. Matthew (Matt) Fall of Lansing
Community College, Dr. Karen Hicks of Lansing Community College, and Rafeeq McGiveron
of Lansing Community College. Their guidance and encouragement were instrumental in my ability and
desire to successfully complete this dissertation. I would also like to thank my colleagues from
Cohort 7 who were always there for me when I needed them most.
I must thank my family. I owe my success to their love, support, and sacrifice. I thank my
parents, Bill and Linda Hasbrouck, for always believing in me. My three biggest joys in life,
Julie, Justin, and Isabelle inspire me every day to be the best person I can be. Finally, I thank my
husband Nick for always being on Team Tracy.
Summary of the Study
Limitations and Delimitations
Recommendations for Future Research
Reflection
APPENDIX C: INTERVIEW PARTICIPATION RECRUITMENT TRANSCRIPT
APPENDIX D: INFORMED CONSENT FORM
APPENDIX E: INTERVIEW GUIDE
APPENDIX F: CONFIDENTIALITY AGREEMENT FORM FOR IRB PROJECTS
LIST OF TABLES

Table 1: Emotion Categorization as adopted by Folkman and Lazarus (1985)
Table 2: Interview Participants
Table 3: Assessment Technique
Table 4: Emotion Categorization as adopted by Folkman and Lazarus (1985)
Table 5: List of Emotions Cited by Each Interviewee
Table 6: Compilation of emotions noted by each participant categorized by Folkman and Lazarus' emotion categorization matrix
LIST OF FIGURES

Figure 1: Emotional categories cited by FTF1 during the PDSA cycle
Figure 2: Emotional categories cited by FTF2 during the PDSA cycle
Figure 3: Emotional categories cited by FTF3 during the PDSA cycle
Figure 4: Emotional categories cited by FTF4 during the PDSA cycle
Figure 5: Emotional categories cited by FTF5 during the PDSA cycle
Figure 6: Emotions noted by each interviewee for each stage of the PDSA cycle
INTRODUCTION
The importance of classroom assessment of student learning outcomes continues to be a
featured element in higher learning. Accreditation bodies, both specialized, such as those for
engineering, nursing, or education, and regional accreditors, such as the Higher Learning
Commission (HLC), have made assessment of student learning a priority to meet their Criteria
for Accreditation (HLC, n.d.). There are numerous references in the literature outlining best practices
for accomplishing assessment of student learning in the classroom, such as assessment design,
techniques, examples, data collection and analysis, and accountability (Banta, 2004a, 2004b;
Lane, Lane, Rich, & Wheeling, 2014; MSCHE, 2007; Nunley, Bers, & Manning, 2011; Serban
& Friedlander, 2004; Standahl, 2008; Walvoord, 2010). Additionally, there is frequent discussion
regarding the importance of faculty participation in the assessment process, along with tips on
how to best foster faculty engagement (Driscoll & Wood, 2007; Guetterman & Mitchell, 2015;
Lane et al., 2014; MacDonald, Williams, Lazowski, Horst, & Barron, 2014; Ndoye & Parker,
2010; Wang & Hurley, 2012). This discussion centers around the notion that faculty must be
properly motivated to participate in assessment of student learning and that this motivation stems
from gaining an understanding of the importance of undertaking such practices. What is missing
from the literature is a discussion regarding the non-transactional elements involved, specifically
the role that faculty emotions have on the assessment process and its efficacy.
This study proposed that assessing student learning is an emotional experience for
instructors. The goal was to gain a better understanding of the faculty experience to enhance
institutional efforts to support quality teaching and learning in the classroom. Emotional factors
and faculty response to them may have an influence on the instructor’s willingness to continue
using a particular assessment technique, impact their motivation to design and implement
experiential learning activities in the classroom, and potentially impact the efficacy of the
assessment process itself.
Background for the Study
Assessment of student learning has evolved over the past century from standardized
testing to outcomes-based evaluation of student learning. The current era of assessment was a
reaction to increased political pressure for accountability of higher educational institutions,
resulting in a need for higher-level assessment of student learning to evaluate institutional
performance and to meet accreditation standards (Shavelson, 2007). Traditional, standardized
evaluations were no longer sufficient. With the understanding that standardized evaluations do
not automatically lead to educational improvement, pressure increased to institute outcomes-based
educational models. Outcomes-based education requires that instructors establish learning
expectations prior to delivering instruction, that students direct their learning efforts to meet
those learning expectations, and that student progress and completion be measured by the level
of achievement toward them (Driscoll & Wood, 2007). While instructors inherently wish for
their students to succeed, formalized assessment of student learning continues to be a challenge.
Faculty remain unconvinced that quantifying assessment of student performance brings value to
the learning experience. Driscoll and Wood (2007, p. 8) report faculty complaints that formalizing
assessment creates inflexible, mechanistic practices that limit innovation and creativity in the
classroom. Kelly (July 2017) explains that common concerns include student changeover each
term leading to irrelevant data collection and analysis, a perception that assessment is just
busywork, the notion that assessment is not part of teaching responsibilities, and the belief that the
standard teaching workload does not leave room for any additional work such as assessment (para. 2).
Further, gauging how students are doing is an inherent part of teaching, and faculty often believe
that assigning final grades is a reflection of their assessment of student performance (Banta,
2004b). Cook-Sather (April 2009) notes that some faculty remain suspicious of administrator
motives for requiring formalized assessment practices, citing concerns that assessment will be
used as an evaluation tool for teacher performance, resulting in punitive action by administrators.
Confusion regarding assessment expectations continues to permeate higher education
campuses to this day. Although there is a plethora of literature and professional development
opportunities available, these resources do not reach all who need them. Instructors who teach in
higher education are typically content experts and have not received training on assessment, thus
limiting the scope of their skills, understanding, and confidence to effectively develop and
employ assessment tools in the classroom (Banta, 2004b; Driscoll & Wood, 2007).
Assessment of student learning in a public community college setting is especially
problematic. The open access model of community colleges results in multiple missions, leading
to multiple outcomes and learning expectations to be assessed. The demands on the time of
community college faculty are also greater than those on their peers at four-year institutions:
faculty at two-year institutions typically carry a higher teaching load than their four-year
colleagues, leaving them with limited availability to focus on student learning assessment
(Caudle & Hammons, 2017).
To meet accreditation requirements, institutions of higher education need to document
student learning performance. This is accomplished by establishing learning outcomes and
expectations for students at all levels of the institution: course, program, and institutional. It is
not enough to claim that students are meeting learning expectations. Accreditors want to see
evidence (HLC, n.d.) that institutions are using assessment results to inform curriculum decisions.
Therefore, when the term “assessment” is spoken on campus, faculty immediately focus on the
data collection and reporting aspect of the process. However, the true nature of assessment is to
gain an understanding of the student's learning experience, and only faculty can provide that
understanding (Hutchings, 2010). Student learning revolves around instructional effectiveness, and faculty are
more effective when using techniques that focus on authentic assessment of student learning
(Driscoll & Wood, 2007). Assessment of student learning goes beyond data collection and
reporting. The value-added component of the process is the analysis of the results collected to
understand and improve student learning. Regardless of the institution’s approach to assessment,
the success of this step in the process hinges on faculty motivation and engagement (MacDonald
et al., 2014).
Lack of faculty buy-in is a common concern cited in the literature and continues to
be reported as the primary barrier to developing a comprehensive culture of assessment. Many
institutional strategies to address this challenge are implemented with a top-down approach in
response to external pressures and tend to fail because leadership does not foster faculty ownership
over the process. Researchers have reported that faculty need to own the process in order to fully
embrace it (Banta, 2004a; Caudle & Hammons, 2017; Guetterman & Mitchell, 2015; Serban &
Friedlander, 2004; Wang & Hurley, 2012). Cultivating faculty buy-in tends to follow two
primary strategies: (1) holding faculty accountable and (2) highlighting the value of assessment.
These strategies may be deployed individually or in conjunction with one another.
In an effort to hold faculty accountable for conducting and reporting on assessment of
student learning, some institutions include assessment as a component of faculty performance
evaluation, promotion, and tenure decisions (Suskie, 2015). Suskie (2015) argues that
accountability measures such as these are necessary to reinforce a culture of quality on campus.
Gardner, Kline, and Bresciani (2014) encourage administrators to allocate resources based on
evidence and results of student learning assessment.
The second strategy focuses on providing training and professional development for
faculty to help them better understand the process, with a goal of increasing their appreciation for
the value that assessment can bring to their classroom. Driscoll and Wood (2007) claim that
participation in the practice of developing student learning outcomes may be enlightening for
faculty and result in them having a greater appreciation for the assessment process. Banta
(2004a) further argues that student success in the classroom hinges on the effectiveness of the
instructor. She reports that the instructors who were formally trained in effective use of
assessment had changed their behavior in the classroom and were more likely to embrace the
value of the assessment process.
Cultivating a culture of assessment on campus is noted to be the most effective way to
garner faculty engagement in the assessment process. Researchers report this can be
accomplished by creating an atmosphere that is receptive, supportive, and enabling. However,
there is a lack of consensus on what this looks like. Some point to a culture of assessment as a
final result that fosters acceptance and engagement in assessment practices, while others deem
the establishment of a culture of assessment as the catalyst for institutional change (Banta, 1997;
Driscoll & Wood, 2007; Guetterman & Mitchell, 2015; Lakos & Phipps, 2004; Lane et al., 2014;
Ndoye & Parker, 2010).
Assessment, by design, goes beyond attempting to capture what students know and may
have learned from particular experiences. It is a practice of instructor self-reflection that may
lead to a sense of vulnerability. This feeling of vulnerability may be unpleasant enough to keep
the instructor from feeling motivated to engage in the assessment process or to accept its
instructional value. Driscoll and Wood (2007) note that a culture of assessment is centered on
the concept of trust between faculty and administration and trust between faculty and students.
To obtain this level of trust there must be openness, mutual respect, responsiveness,
collaboration, enjoyment, and empowerment. Banta (2004b) further explains that barriers to
faculty engagement are oftentimes within the control of institutional leadership, such as the lack
of time needed to devote to the practice, the belief that assessment is already occurring, and the
feeling that the process is being imposed on them. She goes on to explain that institutional
support and commitment from all levels is essential. Recognizing assessment as an emotional
practice can help institutions better serve the pedagogical and assessment needs of instructors, as
well as provide the motivation instructors need to be successful teachers and assessment practitioners.
Problem Statement
An abundance of literature has focused on the need for faculty commitment to the assessment
process to foster quality improvement of student learning experiences. The literature discusses
strategies and best practices that administrators may adopt to encourage a culture of assessment
on campus. Lack of faculty buy-in continues to be the number one factor inhibiting
comprehensive assessment of student learning for most institutions. Determining how to
motivate faculty to actively engage in and embrace the process remains a challenge for
institutional administration.
In his 1956 book The Organization Man, William H. Whyte explained that effective
organizational leaders are logical and reasoned decision makers. Emotions were regarded as
undesirable influences, and showing emotion was considered a sign of weakness (as cited in
Muchinsky, 2000). This concept of professionalism as devoid of emotion has permeated business
and industry for decades (Sharp-Page, August 2018, p. 1). However, bringing emotions into the
workplace allows employees to be their authentic
selves, improves workplace relationships, and increases innovation and productivity (Muchinsky,
2000; Sharp-Page, August 2018; TEDxTalks, March 2015). This authenticity encourages
appreciation, compassion, courage, and vulnerability. Vulnerability is the key driver in human
trust and connection, innovation, change, growth, risk taking, and development (TEDxTalks,
March 2015). Vulnerability is oftentimes perceived as bad. Driscoll and Wood (2007) note that
the level of vulnerability that faculty experience during the assessment process is sometimes
remarkable. Faculty have reported feelings of exposure and fear of risk when evaluating the
results of student learning assessment. Robbins (TEDxTalks, March 2015) argues that vulnerability is
good, and that it should be recognized and celebrated for the positive attribute that it is.
The problem driving this qualitative study is the lack of research regarding the faculty
emotional experience while conducting assessment of student learning. While some literature
made mention of emotional factors such as vulnerability and motivation, the authors danced
around the topic. For example, Driscoll and Wood (2007) discussed the importance of enjoyment
as a key factor in building a strong culture of assessment on campus and Serban and Friedlander
(2004) explained that faculty are likely to embrace the assessment process when they feel
motivated over a period of time. However, the authors did not expand further beyond a brief
mention of emotions. This study begins to establish the framework for understanding the true
experience that is assessment. The phenomenological design of this study captured the faculty
experience to identify whether faculty experience emotions during the assessment process, what
types of emotions they experience, and whether certain points in the process trigger specific
emotions. With these insights, institutional leadership can develop a comprehensive understanding
of faculty needs and foster a culture of assessment across the campus.
Significance of the Study
Assessment of student learning is a necessary component of the work life for educators.
Gone are the days when faculty can step into the classroom, teach, grade, and enjoy a summer
off. The need for formalized assessment of student learning emerged in the late 1970s when
external political pressure increased the need for accountability (Shavelson, 2007). This need for
accountability in higher education has seeped into accreditation requirements; falling out of
compliance could have a significant and detrimental impact on a college.
The real promise of student learning assessment depends on faculty embracing the
process. Although formalized assessment has been practiced for several decades,
faculty engagement in the assessment process continues to be the number one barrier to
effectively cultivating a culture of assessment across the campus (Banta, 2004b). Assessment of
student learning goes beyond collecting data. Data must be evaluated to understand the student
experience and to identify opportunities for pedagogical improvements. Regardless of the
institution’s approach to assessment, the success of this step hinges on faculty motivation and
participation (MacDonald et al., 2014).
Faculty and administrators need to recognize and acknowledge that humans are
emotional creatures and encourage instructors to be authentic in the classroom. Continuing to
ignore the emotional factors that may impact the assessment process may cause some faculty to
remain on the margins of the institution, a position that provides no opportunity or incentive to
participate in college-wide quality improvement efforts. Both faculty and administrators will
benefit from having an understanding of the types of emotions faculty are experiencing and the
points of the process that may be the most challenging. Having an appreciation that assessment is
an emotional practice will allow faculty to anticipate the emotional journey they may embark
upon and develop strategies for dealing with it. Having an appreciation for the emotional journey
that faculty face will allow administrators to focus resource allocations and target support
systems where they are most needed.
This study contributes to the limited research on the non-transactional elements of student
learning assessment by capturing stories that will increase awareness and understanding of the
faculty emotional landscape that accompanies the assessment experience. The study’s design will
help increase faculty and administrator awareness of the emotional barriers that may inhibit
faculty motivation and participation. It will also promote a better understanding of the types of
emotions that may come into play and identify whether any point in the process is a catalyst for
faculty’s willingness to engage. This study has the potential to contribute knowledge to support
cultural shifts in higher educational institutions.
Nature of the Study
Upon review of research methods, the researcher determined that a qualitative approach
is the most appropriate for this study. A phenomenological study is designed to gain a better
understanding of a phenomenon and capture the subjective understanding of the interviewee's
experience (Seidman, 2013). This study used a phenomenological approach with participants
employed as full-time instructional faculty by one of the mid-sized or large public community
colleges in the State of Michigan. Faculty were selected based on two criteria: (1) being full-time
tenured or non-tenured instructional faculty; and (2) having implemented within the past three
academic years (2015–2016, 2016–2017, 2017–2018), or currently implementing an assessment
technique that addresses one of the top three tiers of Bloom's Taxonomy (analyze, evaluate,
and create) (Krathwohl, 2002).
The theoretical framework was based on the literature related to the topic as discussed in
Chapter Two. The framework drew upon concepts, terms, and definitions cited in the literature,
particularly in regard to faculty motivation and willingness to embrace the assessment process.
Years of reliance on course examinations as the assessment tool of choice resulted in a push for
authentic assessment, with directed efforts to capture the student’s true learning experience
(Driscoll & Wood, 2007). The literature discusses strategies and best practices that
administrators may adopt to encourage a culture of authentic assessment on campus. Lack of
faculty buy-in continues to be the number one factor inhibiting comprehensive assessment of
student learning for most institutions. This concern has persisted for decades despite the
countless resources, literature, conferences, and professional development
opportunities offering strategies and best practices to overcome the barrier. The strategies
emphasize the importance of encouraging faculty engagement by fostering an environment that
encourages authentic faculty ownership of the process (Banta, 2004b; Caudle & Hammons,
2017; Guetterman & Mitchell, 2015; Serban & Friedlander, 2004; Wang & Hurley, 2012). The
qualitative data collected through semi-structured interviews with study participants offers
insight into the emotional landscape that may accompany the assessment of student learning
process, shedding light on a potential root cause for faculty resistance that has not been explored.
Research Questions
The researcher was interested in confirming that emotions are indeed experienced during
the assessment of student learning process and discovering both what types of emotions may be
elicited and whether certain points in the process are especially challenging for faculty. Three
fundamental questions guided this research:
Research Question 1: Are faculty experiencing emotions while doing assessment of student learning?
Research Question 2: What emotions are experienced by faculty throughout the implementation process?
Research Question 3: Are different emotions experienced at different points of the implementation process?
Definition of Terms
• Assessment – shorthand for assessment of student learning or educational assessment. It is the systematic process of collecting and documenting data on the knowledge and skill levels of students, to inform decisions that affect student learning (Walvoord, 2010).
• Bloom’s Taxonomy – developed in the 1950s and named after Benjamin Bloom, a set of hierarchical models used to classify educational learning objectives into levels of complexity and specificity. In 2001, the taxonomy was revised to identify cognition levels as remember, understand, apply, analyze, evaluate, and create (Krathwohl, 2002). A diagram of the taxonomy used for this study may be found in Appendix A.
• Plan, Do, Study, Act (PDSA) Model – a four-step method, made popular by W. Edwards Deming, used by organizations for continuous quality improvement of processes, services, and products (Moen, 2010). A diagram of the PDSA model used for this study may be found in Appendix B.
Limitations of the Study
Participants in this study were all full-time faculty teaching at a community college in the
State of Michigan. Their perspectives may not reflect those who are employed as adjunct faculty
or who teach at a different type of postsecondary institution. Thus, the ability to generalize this
study’s findings may be limited. There are six limitations of this study:
1. The study was limited to the number of full-time faculty who were able to be recruited from the identified community college sample in the State of Michigan. Participation was voluntary, and faculty willingness to participate was outside of the researcher’s control.
2. Interviews were conducted in the southeastern region of the State of Michigan. The researcher chose this approach based on ease of access to the study participant sample.
3. The study included full-time faculty who teach in a mid- to large-size urban community college setting. Adjunct faculty were intentionally omitted from the study. While adjunct faculty have become the mainstream teaching workforce in community colleges, “they have less frequent interactions with students and are less integrated into the campus cultures in which they work” (Jolley, Cross, & Bryant, 2013, p. 219). For this reason, it was the belief of the researcher that including adjunct faculty would change the scope and dynamic of the study. Further, faculty from other types of institutions were not included, such as those who teach in four-year institutions, rural community colleges, tribal colleges, and private institutions.
4. The study required faculty to reflect on experiences from the past. It is possible their responses are influenced by time and other experiences.
5. The study was designed to capture the emotional experience of the study participants during the assessment process. The study did not include contextual analysis, such as institutional culture, expectations of instructor workload, union landscape, and the politics of compliance and accountability on campus.
6. The researcher is not a trained psychologist and does not possess the expertise to analyze emotions or the root cause of emotions. The researcher relied on published work of trained psychologists to provide the context needed for study analysis.
Delimitations
This study was confined to interviewing and observing full-time tenured and non-tenured
faculty at a mid-size to large community college in the State of Michigan. Faculty from six
community colleges were targeted for interviews based on the number of full-time faculty
employed at each institution and the location of their institution. Participants were recruited
based on recommendations from assessment directors from each institution. Recommendations
included full-time faculty who within the past three years have implemented an assessment of
student learning technique that measures one of the top three tiers of Bloom’s Taxonomy.
The researcher utilized qualitative methodology. There was no intent to utilize a
quantitative approach. The phenomenological study was designed to gain a better understanding
of the phenomenon and capture the subjective understanding of the interviewee's experience.
SUMMARY
Community colleges experience increased pressure from politicians and accreditors to
demonstrate the value of a postsecondary education (Fain, 2017, March 2, 2018; HLC, June
2019). Being able to demonstrate that students are achieving learning objectives is a major
component of institutional accountability measures. Institutions are being asked to provide
evidence of student performance at the course, program, and institutional level (HLC, n.d.). Only
faculty have the capability to effectively complete these evaluations. The efficacy of student
learning assessment rides on faculty engagement in the process. Recognizing, understanding, and
embracing the emotional journey that assessment is for faculty will enhance administrator and
faculty ability to foster innovation in the classroom, leading to increased quality learning
experiences for students. However, the extensive literature on improving assessment practices
does not explore this phenomenon.
In light of this gap in the literature, the primary goals of this study were to capture faculty
stories to identify whether faculty experience emotions during the assessment of student learning
process, to discover what types of emotions are experienced, and to identify whether certain points
in the implementation process trigger specific types of emotions. The researcher utilized a qualitative
methodology to establish a framework for increased awareness of the emotional barriers that
may inhibit faculty motivation and participation.
Chapter One provided an introduction and overview of the purpose of the study. Chapter
Two presents a review of the related literature exploring the current landscape for assessment of
student learning. Chapter Three outlines the research design and methodology of the study, the
instrument used to gather the data, the procedures followed, and parameters for sample selection.
Chapter Four will present a discussion of the findings and analysis of data collected. Chapter
Five contains a summary, conclusion, reflection, and recommendations for future research.
INTRODUCTION
It is widely understood that the real promise of student learning assessment depends
significantly on faculty embracing the process (Hutchings, 2010). The heart of assessment is
answering the question of whether or not students are learning what is being taught. This is a
question that can only be answered by faculty (Hutchings, 2010). Student learning revolves
around faculty effectiveness, and faculty are more effective when using techniques that foster
authentic assessment of student learning (Driscoll & Wood, 2007). It has been reported that
faculty change their classroom behavior when properly trained to use effective assessment techniques.
However, lack of faculty support of assessment efforts continues to be a barrier for assessment of
student learning (Banta, 2004a). Guetterman and Mitchell (2015) speculate “that the roles
of both senior administrators and faculty leaders are integral to successful assessment practices”
(p. 45).
Assessment of student learning goes beyond collecting data. The most critical aspect of
assessment is evaluating the results of assessment to understand and improve student learning.
Regardless of the institution’s approach to assessment, the success of this step in the process
hinges on faculty motivation and participation (MacDonald et al., 2014).
Teaching is recognized as an emotional practice, and the literature touches on faculty
emotional response to assessment; however, it falls short of making a connection between faculty
emotion and assessment of student learning (Hargreaves, 2000; Steinberg, 2008). Steinberg’s
(2008) review of the literature as it relates to assessment as an emotional practice for secondary
teachers reported that further exploration of the correlation between assessment and teacher
emotional response is necessary to gain a true understanding of the impact that teacher emotion
may have on assessment.
This literature review provides a brief history of assessment of student learning and
demonstrates connections between authors who draw theoretically upon the same ideas in regard
to faculty ownership of the assessment process, efforts to create a culture of assessment on
campus, teacher emotions and vulnerability, and how emotions factor into workplace motivation.
The review sets the foundation for why the study is needed when considering the limited
availability of data and research regarding the emotional experience of faculty during the
assessment process.
Brief History of Assessment of Student Learning
The Carnegie Foundation for the Advancement of Teaching is cited as being the founder
of student learning assessment. The first president of the foundation, Henry Pritchett, was
motivated by his concern for the caliber of higher education and recognized the potential impact
that objective testing may bring to monitoring quality (Shavelson, 2007). The assessment of
student learning evolved through four eras:
• 1900–1933: the origin of standardized tests of learning. Led by the Carnegie Foundation, this era spawned the use of multiple-choice tests to measure learning.
• 1933–1947: the assessment of learning for general education and graduate education. This era was witness to the first attempt to measure personal, social and moral outcomes of general education, in addition to the traditional cognitive measurements. The Graduate Record Examination (GRE) emerged in this era as well.
• 1948–1978: the rise of test providers. Fueled by the increased demand resulting from the G.I. Bill of Rights, this era led to the creation of the Educational Testing Service (ETS) and the American College Testing (ACT) program.
• 1979–Present: the era of external accountability with political pressure taking hold by the end of the 1970s, resulting in a need for higher level assessment of student learning. (Shavelson, 2007, p. 5)
It became apparent in the late 1970s that objective and standardized testing was
incompatible with the way faculty members wanted student learning to be assessed. Instructors
preferred open-ended, holistic, and problem-solving assessments to the highly structured
multiple-choice test. According to Shavelson (2007), “student performance varies considerably
depending upon whether a task is presented as a multiple-choice question, an open-ended
question, or a concrete performance task” (p. 12).
It is this understanding that a culture of evidence does not automatically lead to
educational improvement that has spawned the current era of outcomes-based assessment for
student learning. A true outcomes-based educational model includes three primary
characteristics:
1. Learning expectations that are clearly defined and communicated to students.
2. Students directing their learning efforts to meet outlined expectations.
3. Student progress and completion measured by the level of achievement of learning expectations (Driscoll & Wood, 2007).
While instructors inherently wish for their students to achieve success in the classroom,
assessing student learning continues to be a challenge (Serban & Friedlander, 2004). First,
faculty remain unconvinced that assessment of student learning has a beneficial impact on
student success. Second, instructors who teach in higher education are typically content experts
and have not received formal training on assessment. There is a lack of knowledge regarding
assessment processes, tools, and methodology. Third, effective assessment of student learning
requires faculty across the discipline to come to a consensus about what successful student
achievement looks like, and this consensus is difficult to acquire. Finally, implementing and
sustaining effective assessment practices places additional demands on faculty time
(Serban & Friedlander, 2004).
According to Caudle and Hammons (2017), assessment of student learning in a two-year
open-door institution is especially problematic. The open access model of community colleges
results in multiple missions, leading to multiple outcomes and expectations to be assessed for
student learning. This means that faculty at community colleges require more time to complete
assessment than their peers teaching at four-year institutions. This is significant because faculty
at two-year institutions typically carry a higher teaching load than their colleagues at four-year
institutions, leaving them with limited availability to focus on student learning assessment
(Caudle & Hammons, 2017).
The assessment of student learning process cycles through three steps:
1. Establishing learning goals that clearly define what students should be able to do when they complete their education, at the course level, program level, and institutional level.
2. Collecting data to measure how well students are achieving the goals and what factors are influencing their learning.
3. Using the data collected to inform decisions to improve student learning (Walvoord, 2010).
The assessment process is intended to be circular in nature with step three informing updates and
revisions to learning goals, along with changes in teaching methods, assessment tools, and
curriculum (Walvoord, 2010).
The assessment process mirrors W. Edwards Deming’s Plan, Do, Study, Act (PDSA)
model for quality improvement that has been adopted by business and industry since its original
development in 1950. The original model was an adaptation of the scientific method to fit the
cyclical nature of industry practices. Since then, the model has gone through some evolutions,
landing on the most widely accepted model that was developed in 1986 by Deming after
modifications to the initial model were adapted to fit Japanese manufacturing practices (Moen,
2010). The PDSA model serves as a guide for determining what goals should be accomplished,
what actions should be taken to achieve the goals, and how to determine if the action resulted in
improvements (Moen, 2010).
Faculty Engagement in the Assessment Process
The notion of faculty buy-in is a common theme in the literature, and the lack of it is reported
to be the number one barrier to developing a comprehensive culture of assessment. Many researchers
report that the key to faculty engagement is faculty ownership of the process (Banta, 2004a;
Caudle & Hammons, 2017; Guetterman & Mitchell, 2015; Serban & Friedlander, 2004; Wang &
Hurley, 2012). Many institutional strategies are implemented with a top-down approach, are a
result of external influence, and tend to fail because leadership does not allow time for faculty to
gain ownership of the process (Welsh & Metcalf, 2003). Increased accountability has led to a
new paradigm of assessment that calls for dual responsibility for learning in which both the
student and the instructor collaborate to forge a learning-centered relationship (DeBoy,
Monsilovich, & DeBoy, 2013).
In an attempt to determine how faculty achieve optimal motivation to participate in
effective assessment practices, Guetterman and Mitchell (2015) conducted a study of 26 faculty
leaders teaching general education courses at eight different undergraduate colleges and
universities. Their study included an analysis of assessment attitudes of the participants before
and after a three-step training process. Their training included emphasis on the importance of
assessment, in addition to offering best practices for how to implement and use assessment
results to improve student learning. While their study resulted in increased personal disposition
regarding assessment, it failed to dig deeper into the emotional factors that may have been an
influence (Guetterman & Mitchell, 2015). Further, the Guetterman and Mitchell study was a
survey of motivation and attitude as a result of a professional development workshop, rather than
a study of the assessment experience itself.
Fostering faculty buy-in tends to fall into two primary strategies: holding faculty
accountable by requiring assessment of student learning to serve as a measure of faculty
performance (Gardner et al., 2014; Suskie, 2015) and, secondly, appealing to faculty's passion for
student success, educating them on the value of assessment, and supporting faculty with
professional development (Banta, 2004b; Driscoll & Wood, 2007; Fuller, 2013; Hutchings,
2010; Lane et al., 2014; Serban & Friedlander, 2004; Wang & Hurley, 2012; Welsh & Metcalf,
2003).
Suskie (2015) discusses “five dimensions of quality” for supporting accountability and
accreditation efforts on campus. Dimension five, “a culture of better,” focuses on efforts to
support changes, improvement, and innovations in the classroom, including assessment of
student learning techniques. Her guidance outlined methodology to link assessment of student
learning, supported by evidence, as an accountability measure included in faculty performance
evaluation, promotion, and tenure decisions. She argued that accountability is necessary to
reinforce a culture of quality on campus (Suskie, 2015). To further support a culture of
accountability, it is recommended that leadership collect and analyze evidence that assessment of
student learning is happening on campus; ensure that the institution’s strategic plan supports an
assessment culture; continuously promote the culture (using artifacts such as mottos, taglines,
etc.); make expectations clear during the hiring process; promote professional development for
assessment of student learning; and incorporate accountability by allocating resources based on
evidence and results of student learning assessment (Gardner et al., 2014).
Understanding the Value of Assessment
Institutional efforts for assessment of student learning depend on the assessment and
improvement of student learning at the classroom level. Faculty frequently question the value of
this exercise, citing infringement on academic freedom, limits on their autonomy to choose lesson
plans, and mandated methods for evaluating their students (MacDonald et al., 2014). Faculty
may resist the process when they perceive that it is being imposed on them by administration
or accrediting bodies. At times, faculty may question the motivation behind mandated
assessment, believing that the true purpose is to serve as a tool for scrutinizing their classroom
practices. Finally, faculty may view assessment as just another fad that adds additional workload
onto their already busy schedules (MacDonald et al., 2014). Faculty ownership of assessment
improves the viability of the assessment process and results in a meaningful and lasting process
(MacDonald et al., 2014; Serban & Friedlander, 2004). As faculty ownership takes hold, they
become motivated to be engaged in the process and are more likely to sustain the practice into
the future (Serban & Friedlander, 2004). “A greater voice in the assessment process results in the
faculty taking more of a personal interest in the performance of the students and greater
receptivity towards making changes to improvements in student learning” (Lane et al., 2014, p.
4).
Serban and Friedlander (2004) reported that there continues to be a lack of knowledge
about assessment of student learning techniques, tools, and evaluation. Few faculty have
received formalized training in developing measurable and valid learning outcomes, aligning
their curriculum with outcomes, selecting appropriate tools for measuring outcomes, and
evaluating the results of the assessment in a meaningful way (Driscoll & Wood, 2007;
Hutchings, 2010; Serban & Friedlander, 2004). Further, faculty struggle to determine the level of
ability or knowledge that students should attain to meet outlined learning expectations and to
build a common understanding of what learning outcomes mean (Driscoll & Wood, 2007;
MacDonald et al., 2014; Serban & Friedlander, 2004). Participation in the practice of developing
student learning outcomes is enlightening for faculty and results in a greater appreciation
for the assessment process (Driscoll & Wood, 2007). Banta (2004a) further explains that student
success in the classroom hinges on the effectiveness of the instructor. Her study found that most
instructors who were formally trained in effective use of assessment had changed their behavior
in the classroom and were more likely to embrace the assessment process. Banta (2004b) also
points out that engaging in professional development appeals to the faculty’s natural curiosity to
learn about best practices in the classroom in support of helping their students learn. This is a
concept that is echoed by Driscoll and Wood (2007). They argue that when faculty apply their
intellectual curiosity to answer questions regarding student learning, they are unknowingly
aligning their teaching with assessment expectations. Institutional investment in resources, such
as investing in professional development opportunities for faculty, sends a message that
assessment of student learning is a valued initiative for the college (MacDonald et al., 2014).
A slightly different approach to faculty education focuses on teaching faculty about the
importance and value of assessment, rather than on how to incorporate assessment of student
learning in the classroom. MacDonald et al. (2014) reported that many
faculty do not see the relevance or usefulness of performing assessment of student learning.
Fuller (2013) conducted a survey of faculty questioning what they believe the purpose of
assessment is. He identified five primary responses: to improve student learning, accreditation,
accountability expectations, compliance with government requirements, and tradition. Less than
half of his survey respondents indicated that a focus on student learning was the primary purpose
for assessment (Fuller, 2013). Fuller argued that the message to faculty should focus on the
interconnectedness between assessment of student learning and teaching and learning in the
classroom (Banta, 2004b; Hutchings, 2010). The Middle States Commission on Higher
Education (MSCHE) cautions that an intense focus on assessment should not become a focus on
data collection; the emphasis should remain on teaching and learning (MSCHE, 2007). Embedded assessment
should naturally grow out of the teaching and learning that are happening in the classroom.
When faculty value assessment as a scholarly practice, they are more likely to embrace the
practice and consider it a valuable use of their time (Wang & Hurley, 2012). Additionally,
faculty who see and understand the results of their assessments are more likely to support the
process (Banta, 1997).
Creating a Culture of Assessment
Many researchers note that the most effective way to garner faculty engagement in
assessment is to create an atmosphere that is receptive, supportive, and enabling (Banta, 1997;
Driscoll & Wood, 2007; Guetterman & Mitchell, 2015; Lakos & Phipps, 2004; Lane et al., 2014;
Ndoye & Parker, 2010). There is, however, a lack of consensus about what a culture of assessment looks like.
however. Some researchers point to a culture of assessment as a final result that fosters
acceptance of assessment practices, while others deem the establishment of a culture of
assessment as a catalyst for institutional change (Guetterman & Mitchell, 2015). According to
Lane et al. (2014), a culture of assessment is one in which the concept of assessment is widely
understood by faculty and staff with an intentional focus on continuous quality improvement. It
has been defined as “an organizational environment in which decisions are based on facts,
research, and analysis, and where services are planned and delivered in ways that maximize
positive outcomes and impacts for customers and stakeholders” (Lakos & Phipps, 2004, p. 352).
Common strategies for building a culture of assessment include strong institutional
leadership support, embedded assessment in daily practice, use of assessment data to drive
improvement decisions, and cross communication (Ndoye & Parker, 2010). A culture of
assessment is centered on the concept of trust: trust between faculty and administration, and
trust between faculty and student (Driscoll & Wood, 2007). Other qualities necessary for a strong
campus culture are openness, mutual respect, responsiveness, collaboration, enjoyment, and
empowerment (Driscoll & Wood, 2007).
In a survey on building and sustaining a culture of assessment administered in 2007, Ndoye
and Parker (2010) noted that 76% of institutions reporting a successfully achieved assessment
culture indicated that faculty engagement was paramount (p. 32).
Fostering faculty engagement includes understanding, anticipating, and addressing faculty
concerns early in the process. Oftentimes, the concerns faculty raise are those that administration
has the capacity to address, such as the lack of time to devote to the practice, the feeling that the
process is being imposed on them, the belief that assessment is an administrative responsibility,
and the belief that they are already assessing student learning by assigning grades (Banta,
2004b). Faculty must feel as though assessment is not just another fad the institution is
following. Institutional support and commitment from all levels is essential (Banta, 2004b), with
visible and collaborative partnership between faculty and administration that incorporates strong
faculty leadership. Faculty need to be engaged in establishing institutional structures for
facilitating assessment across the campus (Morse & Santiago Jr., 2000), as faculty tend to trust
their peers and are more likely to engage in the process if they see their colleagues embracing the
practice (Banta, 2004b).
A Sense of Vulnerability
Driscoll and Wood (2007) noted that the level of vulnerability that faculty experience
during the assessment process is stunning. Faculty have reported feelings of exposure and fear
of risk when evaluating assessments of student learning, as well as anxiety over not knowing
how their students are performing in comparison to others (Driscoll & Wood,
2007). Additionally, faculty have voiced concerns that assessment is really a tool used implicitly
to evaluate their performance, to justify existence of their course, program, or department, or to
determine allocation of resources. Faculty who feel that they are being scrutinized, that
assessment is infringing on their academic freedom, or that they need to use assessment to prove
their value as an instructor, will become resistant to practicing assessment in their classrooms
(Banta, 2004b; MacDonald et al., 2014). Further, Garfolo and L’Huillier (2015) note that
assessment is a practice in self-evaluation that is personal and private for faculty, and Matthews
(July 6, 2017) reported that teachers resist change in the classroom for fear of looking stupid in
front of their students.
MacDonald et al. (2014) recognize that instructors need to find value in the practice of
assessment in order to feel motivated to participate in the process. Through their exploration, the
authors discussed how the failure to understand the intrinsic value of assessment may be a
contributing factor to faculty’s lack of motivation. They failed to dig deeper to determine if
assessment may have an emotional impact on the instructor. Their study, however, revealed that
insecurity regarding ability to assess student learning effectively was a leading contributor to
faculty’s lack of motivation (MacDonald et al., 2014). This particular study gives us a glimpse at
a sense of vulnerability that may come into play when assessing student learning in the
classroom.
Understanding Teacher Emotions and Motivation
While research is starting to take a closer look at teacher emotion, the studies are
structured around the interpersonal aspects of the teacher role. Zembylas (2003) stated that “an
important feature of teacher emotional experience is the role of discourse in the classroom/school
setting in which a teacher works, and how this discourse influences the construction of
emotions” (p. 105). Zembylas reported in 2003 that there was little research that studied teacher
emotions, and this assessment remains true 15 years later. He attributes this to three factors:
Western culture has an embedded bias against emotion in the workplace, it is challenging to
study teacher emotion with objective measures, and emotions have primarily been linked with
female employees and subsequently ignored in the overall picture. Zembylas (2003) goes on to
explain that emotions in teaching have come to influence school policy issues and that
administrators and policymakers are beginning to accept that teaching is an emotional practice.
Hargreaves (2000) noted that education reform neglects the emotional dimension of
teaching: “Teaching, learning and leading may not be solely emotional practices, but they are
always irretrievably emotional in character” (p. 812). Hargreaves (2000)
concluded that teaching is an emotional practice and that teacher emotion may raise classroom
standards or lower them. Oftentimes, emotions are acknowledged only as something that should
be managed when leading institutional change or to assist administrators in dealing with teacher
resistance to strategic plan implementation (Hargreaves, 1998).
Teachers tend to formulate pedagogy to accommodate their own emotional needs.
Emotions such as excitement and joy tend to lead pedagogy decisions, along with emotions that
are linked to a sense of creativity and achievement in teaching with students. Oftentimes,
teachers struggle when letting go of familiar practices and comfortable routines (Hargreaves,
1998).
Restructuring of the teacher role leads to increased occupational stress that may result in
“burnout,” defined as physical, emotional, and mental exhaustion (Hargreaves, 1998;
Jepson & Forrest, 2006). Teacher burnout is becoming a
prevalent issue in the profession. Some contributing factors for teacher stress include heavy
teaching loads, overload of roles outside of teaching, classroom management, and anxiety over
evaluations (Jepson & Forrest, 2006). The Jepson and Forrest (2006) study indicated that those
who strive for a higher level of achievement tend to experience a greater level of stress.
According to Steinberg (2008), “assessment decisions are not ‘neutral’ but involve
teacher’s emotions” (p. 42). Steinberg conducted a review of the available research and literature
at the time and concluded that teachers “grapple with emotional complexity” (p. 58), but the field
remains under-researched. While the literature addressing teachers’ emotions is primarily
focused on secondary teachers, it is the belief of this researcher that these findings may be
applied to faculty teaching in higher education as well. Faculty emotions are a critical component
to curriculum development and teaching (Zembylas, 2003), and education policies, reforms, and
faculty professional development should take into account the role of emotions.
Sujitparapitaya (2014) completed a study to determine why some faculty are motivated to
participate in assessment of student learning and some are not. He concluded that faculty are
more likely to engage in assessment if they possess the confidence to be successful and if they
feel that it will be a valuable practice. His study, however, failed to dive deeper into the
emotional factors that may influence faculty motivation. Hargreaves (1998, 2000), Jepson and
Forrest (2006), and Zembylas (2003, 2007) all reported that teaching is an emotional practice.
Their studies focused primarily on the interpersonal interactions with students in the classroom at
the secondary level. Pedagogy is personal in nature, as teaching is a personal craft, and it can be
isolating. As a result, teachers are often hesitant to accept that they may need to change their
approach or revamp the curriculum (Hernandez, 2016).
Driscoll and Wood (2007) discussed the importance of empowerment and enjoyment as
key factors in building a strong culture of assessment across campus. Pink (2009) explained that
traditional reward models for motivating employees are not effective. Rather, employees need to
feel a sense of purpose for the work that they do. The most motivated people couple their efforts
with a cause that is larger than themselves. Having a sense of purpose is an emotional link to the
overarching goal. Sinek (November 2012) explained that humans are social animals and that our
survival depends on our ability to form cultures founded in trust. Trust is an emotion that leads to
a willingness to experiment, fail, and innovate (99U, November 2012), and it emerges when we
surround ourselves with those who believe what we believe. The assessment literature speaks to
this in some respect when the discussion turns to the importance of faculty buy-in to effectively
create a culture of assessment on campus. Serban and Friedlander (2004) summarize this well:
As faculty ownership becomes apparent in the assessment process, faculty are motivated to remain engaged and their interest is more likely to be sustained over the years that follow, even after the accreditation self-study has come and gone. (p. 33)
Many researchers reported that the key to faculty engagement is faculty ownership of the
process (Banta, 2004a; Caudle & Hammons, 2017; Guetterman & Mitchell, 2015; Serban &
Friedlander, 2004; Wang & Hurley, 2012). The literature fell short of demonstrating the notion
that emotions have an integral role in faculty motivation; however, Sinek maintains that people
want to be a part of something bigger than themselves and that experiencing a sense of
fulfillment leads to wanting to do more (99U, November 2012). The notion of understanding
human emotions to understand human behavior was supported by Ford (1992), who
explained that the true essence of motivating behavior is having a sound understanding of
“the role of motivational processes in effective functioning” (p. 201). He goes on to argue,
The strategy of trying to motivate people through direct control of their actions – as opposed to indirect facilitation of their goals, emotions, and personal agency beliefs – should be reserved for situations in which swift attainment of a goal is urgent and no other means are available. (Ford, 1992, p. 203)
SUMMARY
This chapter reviewed current literature related to faculty buy-in of assessment of student
learning, establishing a culture of assessment, and teaching as an emotional practice. As
assessment of student learning has evolved over the years, faculty support of the assessment
process has remained a barrier for effective practice. Researchers offer two different pathways to
gaining faculty buy-in: incorporating accountability measures in faculty performance
evaluations, or helping faculty understand the intrinsic value of the assessment process.
The literature has shown that teaching is recognized as an emotional practice. The
process of teaching tends to be done in isolation and is considered to be a personal craft, making
it challenging for teachers to feel comfortable with external influences that may impact the
curriculum. Researchers reported that the key to faculty engagement is faculty ownership of the
process.
Correlations were found in the literature describing the necessity to understand human
emotions in the workplace and establishing a culture of assessment on campus. People are social
by nature and tend to surround themselves with others who believe what they believe, forming
cultures founded in trust. Trust is an emotion that increases one’s willingness to take risks and to
innovate. The most motivated employees have a sense of purpose for the work they do and
believe that their efforts are supporting a cause that is bigger than themselves. The literature
hinted at the emotional factors that may influence these practices; however, it did not dive
deeper into how emotions influence faculty motivation. Further exploration of
the correlation between assessment and teacher emotional response is necessary to gain a true
understanding of the impact that teacher emotion may have on assessment.
INTRODUCTION
This study was designed to gain a better understanding of the faculty experience of
assessing student learning in the classroom. It investigated what emotions faculty experienced
throughout the design, implementation, and continuation of assessment practices that measure
the top three tiers of Bloom's taxonomy (analysis, evaluation, and creation).
For years, the use of multiple-choice and true/false exams as the assessment tools of
choice has led to a push for what educators call authentic assessment, with directed efforts to
capture alternative evidence of student learning. Further, authentic assessment provides an
opportunity for students to integrate personal experiences with their academic learning (Driscoll
& Wood, 2007). Based on the principle of active learning, assessments should be designed to
appropriately align with Bloom’s taxonomy (Morton & Colbert-Getz, July 2016).
Bloom’s taxonomy was developed in the 1950s to categorize the level of cognition for
student learning goals. The taxonomy moves from basic level of cognition (knowledge) to
highest level of cognition (evaluation), with four levels in between (comprehension, application,
analysis, synthesis). In 2001, the taxonomy was revised to remember, understand, apply,
analyze, evaluate, and create (Krathwohl, 2002). “Teaching methods that provide opportunities
for active learning should increase the student’s ability to solve problems at a higher cognitive
level based on Bloom’s taxonomy” (Morton & Colbert-Getz, July 2016, p. 173). For this reason,
this study was designed to focus on the experience of faculty who have implemented assessment
techniques that measure a higher level of cognition.
Sinek states that “if you don’t understand people, you don’t understand business” (99U,
November 2012). He goes on to explain that humans are social animals and our survival depends
on our ability to form cultures. Trust emerges when we surround ourselves with those who believe
what we believe. Trust is an emotion that leads to a willingness to experiment, fail, and innovate
(99U, November 2012). The assessment literature speaks to this in some respect by emphasizing
the importance of faculty buy-in to effectively create a culture of assessment on campus. Serban
and Friedlander (2004) summarize this well:
As faculty ownership becomes apparent in the assessment process, faculty are motivated to remain engaged and their interest is more likely to be sustained over the years that follow, even after the accreditation self-study has come and gone. (p. 33)
Many researchers reported that the key to faculty engagement is faculty ownership of the
process (Banta, 2004a; Caudle & Hammons, 2017; Guetterman & Mitchell, 2015; Serban &
Friedlander, 2004; Wang & Hurley, 2012). The literature falls short of acknowledging that
emotions play a role in achieving faculty ownership. Sinek states that people want to be a part of
something bigger than themselves and that experiencing a sense of fulfillment leads to the desire
to do more (99U, November 2012). This concept of understanding human emotions to understand
human behavior is supported by Ford (1992), who explained that motivating behavior requires a
sound understanding of “the role of motivational processes in
effective functioning” (p. 201). He goes on to argue,
…the strategy of trying to motivate people through direct control of their actions – as opposed to indirect facilitation of their goals, emotions, and personal agency beliefs – should be reserved for situations in which swift attainment of a goal is urgent and no other means are available. (p. 203)
How emotions may factor into the faculty experience of assessing student learning is the
central interest of this research. It is the hope of the researcher that this study may provide insight into
and an increased awareness of the faculty experience while employing techniques for assessing
student learning in the classroom.
Population Sample
The research population includes full-time tenured and non-tenured faculty who teach at
a mid-sized or large community college in the State of Michigan. While adjunct faculty have
become the mainstream teaching workforce in community colleges, “they have less frequent
interactions with students and are less integrated into the campus cultures in which they work”
(Jolley et al., 2013, p. 219). For this reason, it is believed that inclusion of adjunct faculty would
change the dynamic of the study.
The researcher limited the study to the State of Michigan where the researcher lives,
allowing for greater accessibility to the sample population. Seidman (2013) explains that the
purpose of an interview study is to understand the experience of those who are being
interviewed; it is not to test a hypothesis. Therefore, the findings of this interview study are not
meant to be generalized to a broader population, but rather to present the experience of the
interviewees in compelling detail and sufficient depth to allow readers to connect with that
experience (Seidman, 2013).
Probability and nonprobability sampling are the two most common approaches for
selecting a research sample. Probability sampling is a statistical approach that allows for
generalization of the study results. Since this is not the goal of a qualitative study, nonprobability
sampling is recommended (Merriam & Tisdell, 2016). The researcher used purposive sampling,
the more common form of nonprobability sampling. Purposive sampling is appropriate when the
researcher wishes to gain insight and understanding, and therefore would benefit most from
selecting a sample from which the most may be learned (Merriam & Tisdell, 2016).
Participants
Michigan is home to 28 public community colleges with a collective student enrollment
of 411,764 (Workforce Development Agency, February 2015) and employing 2,672 full-time
instructional faculty (IPEDS, n.d.). The researcher selected participants from seven community
colleges within the State of Michigan that, as of the Fall 2016 academic semester, employed
more than 150 full-time faculty as reported in the Integrated Postsecondary Education Data System
(IPEDS). The researcher collaborated with assessment directors at these institutions to identify
full-time faculty members who met the study participant criteria:
• Full-time tenured or non-tenured faculty teaching at a mid-sized or large public community college in Michigan.
• Have implemented within the past three academic years (2015–2016, 2016–2017, 2017–2018), or are currently implementing, an assessment technique that addresses one of the top three tiers of Bloom’s Taxonomy (analysis, evaluation, and creating).
Each institutional assessment director recommended three to five full-time faculty who
met the criteria. From those recommendations, the researcher invited two individuals from each
institution to participate. The researcher contacted potential participants in person, by phone, or
by email to briefly explain the study, including that each interview would be audio recorded,
outline the research schedule, and determine their interest in participating. A letter of informed
consent was then sent to each individual who agreed to participate.
Once an individual agreed to participate, a mutually agreed-upon time and location for
the interview was scheduled and the informed consent process initiated. The researcher provided
the participant with a copy of the written informed consent form in advance of the interview.
During the informed consent process, the researcher described to each participant the potential
risks and benefits from participation, as well as the measures that were taken to minimize risk
and maximize benefits. The participant was given an opportunity to ask any questions prior to
signing the form. The participant signed, scanned, and emailed the form back to the researcher,
or provided verbal consent. The researcher kept the signed form, indicating agreement to
participate in the study, and emailed each participant a copy of the signed form for their records
prior to the interview. Prior to the start of the interview, the researcher reviewed the important
aspects and procedures of the consent form.
Interview Process
Researchers often choose a qualitative study design because they are interested in insight,
discovery, and interpretation, rather than hypothesis testing. Qualitative in-depth
interviewing is the primary method of data collection when the goal is to capture the essence and
basic structure of an experience (Merriam & Tisdell, 2016). The researcher’s objectives for this
study were to follow a line of inquiry as reflected by the qualitative research design and to ask
questions in an unbiased manner. The design of a phenomenological qualitative study in
particular is well suited for the study of emotions and guides the researcher in capturing a
person’s subjective understanding of his or her experience (Seidman, 2013). Seeking to
reconstruct the participant’s experience from a subjective point of view requires the researcher to
reflect on their own assumptions and biases. It is common practice for researchers to set aside
these influential factors prior to embarking on a study in order to capture the true essence of the
participant’s experience (Merriam & Tisdell, 2016; Seidman, 2013).
The interviews were designed to explore what emotions may have been experienced by
faculty during the assessment process and to inductively build an understanding of the faculty
experience rather than to test concepts and theories of the researcher (Merriam & Tisdell, 2016).
Therefore, the interviews were semi-structured, with many of the questions asked in the
interview developing as the interviewees shared their experiences. Seidman (2013) explains that
interviewing is a relationship and, as such, each interview is individually crafted. The interviews
were designed to be guided conversations to elicit the interviewee’s story. These open-ended
discussions gave participants an opportunity to expound on experiences they believed would
enhance the researcher’s understanding.
One visual aid (Appendix A) portraying Bloom’s Revised Taxonomy was employed to
assist participants with defining the level of assessment utilized in the experience discussed. The
visual aid provided a visual hierarchy for each level of Bloom’s Taxonomy, with supporting
verbs to clarify the learning activities students are expected to exhibit during each level
(MacMeekin, June 5, 2014).
The standard three-step assessment process (establishing goals, collecting data, and
taking action) (Walvoord, 2010) correlates closely with W. Edwards Deming’s Plan, Do, Study,
Act (PDSA) model for continuous quality improvement. The PDSA model has been widely
accepted by business and industry for three decades and serves as a comprehensive guide for
implementing and evaluating quality improvement processes (Moen, 2010). For this reason, the
researcher chose the PDSA model as a guide for the study. A visual aid was used (Appendix B)
to guide interviewees through the PDSA cycle while discussing the implementation of their
assessment tools.
Published literature, consultation with assessment and research experts, and the
researcher’s own experiences informed the architecture of the interview questions. The semi-
structured interviews were conducted by video or in person and digitally recorded. The
researcher asked participants to share their experiences and strategies by responding to a series of
open-ended interview questions related to creating and implementing an assessment of student
learning technique. The researcher asked follow-up questions during the interview to encourage
elaboration and clarification. Interviews were digitally recorded for accuracy and then
professionally transcribed after the interviewee read and signed an informed consent form
(Appendix D). Recordings began when both the researcher and the interviewee were seated, and
the consent form was collected. An overview statement outlining the purpose of the study and
discussion of procedures for the interview were given prior to beginning each interview session
(Appendix C). Interviewees were encouraged to add commentary or ask questions for
clarification at any point. Interviewees were also given the option to stop the interview at any
time.
To prepare the interviews for analysis, the audio files were professionally transcribed by
Peterson Transcription & Editing Services, LLC. The transcriptionist signed a confidentiality
agreement (Appendix F) and was not told the location of the study sites, reinforcing the
confidential nature of the study.
After each transcript was completed, the participant’s name in the text files was replaced with a
pseudonym to ensure confidentiality. These pseudonyms were used for the duration of the study,
and only the researcher knows the identity of each participant.
Each interview transcript was carefully reviewed by the researcher. Inductive analysis
was used to identify patterns, categories, and themes that led to a definitive interpretation, and
the constant comparative method was used throughout the research process. Each interviewee
was given an opportunity to review his or her transcript and offer follow-up thoughts for clarification.
Thorough review and analysis of data using the words of the participants, along with a review of
the researcher’s field notes, allowed additional consideration for the interviewee’s meaning
before filtering the data into emerging themes. The researcher connected and identified the
various themes and common descriptions. Broad themes were refined to eliminate redundancy,
and the ultimate use of a limited number of themes provided a clear description of the
experiences discovered in this study.
Data Analysis
The methodology chosen was a qualitative research design. Qualitative research relies on
words as data. This differs from a quantitative research design that uses numbers as data that are
analyzed with statistical techniques (Merriam & Tisdell, 2016). The collection of data and
analysis of data happen simultaneously in a qualitative study: “The final product is shaped by the
data that are collected and the analysis that accompanies the entire process. Without ongoing
analysis, the data can be unfocused, repetitious, and overwhelming in the sheer volume of
material that needs to be processed” (Merriam & Tisdell, 2016, p. 197).
The theoretical framework is based on the literature related to the topic as discussed in
Chapter Two. The framework draws upon concepts, terms, and definitions cited in the literature,
particularly in regard to faculty motivation and willingness to embrace the assessment process.
For years, the use of course examinations as the assessment tool of choice resulted in a push for
authentic assessment, with directed efforts to capture the students’ true learning experience
(Driscoll & Wood, 2007). The literature discussed strategies and best practices that
administrators may adopt to encourage a culture of authentic assessment on campus. Lack of
faculty buy-in continues to be the number one factor inhibiting comprehensive assessment of
student learning for most institutions. This concern has persisted for decades, despite countless
resources, publications, conferences, and professional development
opportunities offering strategies and best practices to overcome the barrier. The strategies
emphasize the importance of encouraging faculty engagement by fostering an environment that
encourages authentic faculty ownership of the process (Banta, 2004b; Caudle & Hammons,
2017; Guetterman & Mitchell, 2015; Serban & Friedlander, 2004; Wang & Hurley, 2012). The
qualitative data collected through semi-structured interviews with study participants offers
insight into the emotional landscape that may accompany the assessment of student learning
process, shedding light on a potential root cause for faculty resistance that has not been explored.
A phenomenological study is designed to gain a better understanding of a phenomenon and
capture the subjective understanding of the interviewee’s experience (Seidman, 2013). This
study used a phenomenological approach with participants employed as full-time instructional
faculty by one of the public community colleges in the State of Michigan. Faculty were selected
based on two criteria: (1) full-time tenured or non-tenured instructional faculty teaching at a mid-
sized or large public community college in the State of Michigan; and (2) have implemented
within the past three academic years (2015–2016, 2016–2017, 2017–2018) or are currently
implementing, an assessment technique that addresses one of the top three tiers of Bloom’s
Taxonomy (analysis, evaluation, and creating).
Qualitative data analysis is both inductive and comparative. To achieve this, the
researcher is tasked with comparing one unit of information with the next to identify recurring
regularities in the data (Merriam & Tisdell, 2016). This analysis is done by first isolating any
biases or assumptions that the researcher may have in order to reflect on the experience of the
participants and to suspend judgment. This type of analysis focuses on capturing the essence of
the participant’s story. The goal is to arrive at structural descriptions of the experience and the
underlying, precipitating factors that may contribute to what is being experienced (Merriam &
Tisdell, 2016).
Interviewing is a basic mode of inquiry. The primary purpose is not to evaluate but to
understand the lived experience of other people and the meaning they make of that experience
(Seidman, 2013). The researcher used open-ended interview questions to address the following
research questions:
• Research Question 1. Are faculty experiencing emotions while doing assessment of student learning?
• Research Question 2. What emotions are experienced by faculty throughout the implementation process?
• Research Question 3. Are different emotions experienced at different points of the implementation process?
The research samples are similar in some respects but differ in others. Comparative
research may be used as a method for explaining such variations or surfacing tacit knowledge. The focus
of this study was to identify if assessment of student learning generates an emotional experience
for faculty. The interview portion of this study focused on reconstructing the faculty’s perception
of the emotional journey that assessment of student learning generated for them. The study
examined the types of emotions experienced at different stages within the assessment
implementation process.
The interview guide (Appendix E) and visual aids (Appendices A and B) were used as
the primary instruments for collecting data. The researcher’s field notes were supplemental data
collected during the study. The interview guide consisted of five questions to elicit the
participants’ stories. The visual aids were used as a guide to define emotions experienced at each
point of the assessment process and to measure the intensity of those emotions. Visual aids
served as a point of reference for faculty to identify the level of cognitive learning the assessment
technique measured, in accordance with Bloom’s Revised Taxonomy. The interviews were
digitally recorded, and the researcher maintained field notes that included (but were not limited
to) observations, emerging trends, and thoughts to follow up on. The researcher’s field notes
were also assigned a unique number that aligned with the participants’ pseudonyms and the
coding system used.
Following each interview session, the researcher listened to the audio to ensure there
were no problems with the recordings and to determine if any clarification from the interviewee
was necessary. Once the interview files were transcribed, reviewed by the interviewees, and
pseudonyms assigned, the researcher uploaded the transcripts into the software, which allowed
the researcher to apply open coding for the first level of analysis. Open coding is the process of
categorizing the data into themes that are responsive to the purpose of the study (Merriam &
Tisdell, 2016).
The coding process used the interviewees’ own language to define coding classifications,
a practice referred to as in vivo coding (Merriam & Tisdell, 2016). Using the participants’
language and terms to define the codes is a strategy to prevent the bias that could otherwise
result if the researcher substituted his or her own perceptions of participant meaning
during analysis (Merriam & Tisdell, 2016). The coded transcripts were then reviewed
and narrowed further into fewer themes and refined to eliminate any redundancy or overlap in
the themes. The final themes were then correlated to the research questions. The limited number
of themes allowed the true essence of the faculty experience to emerge.
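The open and in vivo coding workflow described above (tagging excerpts with the participants’ own language, then collapsing related codes into a smaller set of themes) can be illustrated with a minimal sketch in Python. All excerpts, code labels, and theme names below are hypothetical and for illustration only; the study itself used dedicated qualitative-analysis software rather than custom code.

```python
# Illustrative sketch of open / in vivo coding and theme refinement.
# Every excerpt, code label, and theme name is a hypothetical example, not study data.
from collections import defaultdict

# Excerpts tagged with in vivo codes: labels drawn from the participants' own words.
coded_excerpts = [
    ("P1", "I was overwhelmed at first", "overwhelmed"),
    ("P2", "honestly it felt overwhelming", "overwhelmed"),
    ("P1", "seeing the results was rewarding", "rewarding"),
    ("P3", "it was rewarding to see growth", "rewarding"),
]

# First-level open coding: group excerpts under each in vivo code.
codes = defaultdict(list)
for participant, excerpt, code in coded_excerpts:
    codes[code].append((participant, excerpt))

# Second pass: collapse related codes into broader themes to eliminate overlap.
themes = {"anxiety": ["overwhelmed"], "fulfillment": ["rewarding"]}
for theme, members in themes.items():
    excerpts = [e for c in members for e in codes[c]]
    participants = {p for p, _ in excerpts}
    print(f"{theme}: {len(excerpts)} excerpts from {len(participants)} participants")
```

The second pass mirrors the narrowing step described above: broad, overlapping in vivo codes are merged under a limited number of themes, which are then mapped back to the research questions.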
Emotions articulated by the participants were categorized using a scale adopted from a
1985 Folkman and Lazarus study. In this study, the researchers evaluated emotions experienced
by students during a college examination.
From the perspective of a cognitive theory of emotion, the quality and intensity of any emotion — anxiety, jealousy, sorrow, joy, relief — is generated by its own particular appraisal. For ex