
Examining the Impact and Detection of the “Urban Legend” of Common Method Bias

Andrew Schwarz Louisiana State University

Tracey Rizzuto Louisiana State University

Colleen Carraher-Wolverton University of Louisiana at Lafayette

José Luis Roldán University of Seville

Ramón Barrera-Barrera University of Seville

Abstract

Common Method Bias (CMB) represents one of the most frequently cited concerns among Information Systems (IS) and social science researchers. Despite the many commentaries underscoring the importance of CMB, most empirical studies have relied upon Monte Carlo simulations, assuming that all sources of bias are homogeneous in their impact. Comparatively analyzing field-based data, we address the following questions: (1) What is the impact of different sources of CMB on measurement and structural models? (2) Do the most commonly utilized approaches for detecting CMB produce similar estimates? Our results provide empirical evidence that the sources of CMB have differential impacts on measurement and structural models, and that many of the detection techniques commonly utilized within the IS field demonstrate inconsistent accuracy in discerning these differences.

Keywords: Common method bias, Experimental design, Structural equation model

ACM Categories: A2, G3

Introduction

Common Method Bias (CMB) represents one of the most frequently cited concerns among information systems (IS) (Malhotra et al. 2006; Straub et al. 2004) and social science researchers (Campbell et al. 1959; Feldman et al. 1998; Podsakoff et al. 2003). The threat of CMB, or the difference between the trait score and the measured score that is attributed to the use of a common method to take more than one measurement of the same or different traits (Burton-Jones 2009; Podsakoff et al. 2003), is that this bias confounds our findings through systematic error. In a standard survey study, an individual subject responds to the items in a particular survey at one point in time using a relatively standardized set of question and response formats. In these instances, researchers argue that the data are susceptible to CMB (Kemery et al. 1982; Lindell et al. 2001) and that CMB only exists to the extent that Common Method Variance (CMV) creates the bias (Ostroff et al. 2002). Since self-report surveys are the most frequently utilized method of data collection in the IS field (Hufnagel et al. 1994), this issue represents an important topic for the research community.

Although it has been recommended to include CMB analysis when reporting both PLS and CBSEM results (Gefen et al. 2011), and many researchers are aware of the potential problems of CMB arising from the use of a single-method study, disagreement about the actual effect of CMB is widespread (Crampton et al. 1998; Lindell et al. 2001; Richardson et al. 2009). We are not alone in our view that additional research is needed to strategically address concerns about CMB (Spector 2006; Spector et al. 2010). While some argue that CMB accounts for one of the primary sources of measurement error (Podsakoff et al. 2003), others believe it is an urban legend whose impact has been overrated (Meade et al. 2007; Spector 2006).

Nonetheless, recent literature within the social sciences has moved beyond the debate about the existence of CMB and is now focusing upon the magnitude and the interaction of these biases (Baumgartner et al. 2012; MacKenzie et al. 2012; Sharma et al. 2009; Viswanathan et al. 2012). For the purposes of this study, we accept the consensus view that CMB exists. Instead, we ask whether our current tools of measurement are capable of detecting these biases when researchers inadvertently interject CMB throughout their research design. In our review of the literature, we found that much of the previous research utilized Monte Carlo data to understand the impact of CMB (e.g., Chin et al. 2012). Such methods involve mathematically injecting a form of statistical bias into the data (e.g., an increase in variance among constructs) to ascertain whether particular tools and techniques would detect the bias. The limitation of this approach is that such manipulations do not reflect the mundane reality of the error sources typically introduced through our data collection approaches (i.e., the biases that researchers are said to inadvertently create).

It is our thesis that Monte Carlo manipulations are not sufficient for estimating the impacts of CMB because they do not reflect the reality in which CMB manifests itself in our structural models. Therefore, we selected an alternative approach: through experimental design, we purposely introduced bias into our experiment. In so doing, our purpose is to simulate a more realistic interjection of CMB, one that a researcher might actually implement. Thus, we examine whether conventional statistical tools are able to accurately detect interjected forms of bias. Furthermore, rather than treat all bias homogeneously, we examine distinct types of bias sources, model them through structural equation modeling, and then employ three commonly utilized approaches that are designed to detect CMB: Harman's One Factor Model, the CFA marker technique, and the unmeasured latent marker construct. Our justification for these three methods derives from the review of prevalent CMB approaches by Chin et al. (2012), who discounted three of the other six prevalent CMB approaches (namely partial correlation, MTMM, and the CFA marker technique). Through comparative analyses of field-based data, we address the following questions: (1) What is the impact of different sources of CMB on measurement and structural models? and (2) Are the three commonly used approaches able to detect the sources of CMB differentially?

Sources of Common Method Bias

In order to determine the sources of CMB to study, we utilized Podsakoff et al.'s (2003) review of the CMB literature, which lists potential causes of bias. Specifically, we were interested in examining sources of CMB that we could (a) manipulate experimentally or (b) assess directly, and thus included only sources that fit either of these criteria. For example, implicit theory was not included, as we could not ascertain a method for manipulating an individual's implicit theory or assessing what theory existed a priori. We examined each source on an individual basis to arrive at our final set of sources. We are not arguing that this is a comprehensive study. Rather, we are focusing on these seven sources in an attempt to quantitatively demonstrate the impact of CMB on our research.

Source 1: Ambiguous or complex items

One source of CMB is creating items that are either ambiguously worded or complex (e.g., a double-barreled item that joins two statements). Examples of ambiguous or complex items proposed by Podsakoff et al. (2003) include double-barreled questions, words with multiple meanings, technical jargon or colloquialisms, and unfamiliar or infrequently used words. Ambiguous or complex items tend to impose on the respondent the need to develop idiosyncratic meaning, which may vary across respondents and thus increase the levels of random and systematic measurement error. As a result, we would expect complex items in construct measures to lower internal consistency, thus reducing construct reliability.
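To make the internal-consistency claim concrete, the following is a minimal Python sketch (our illustration, not part of the original study) of how an ambiguous item that invites idiosyncratic readings would depress Cronbach's alpha; the simulated data and noise levels are hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # per-item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item scale: idiosyncratic readings of one ambiguous item
# behave like extra noise, weakening inter-item correlations.
rng = np.random.default_rng(0)
trait = rng.normal(size=500)
clear = np.column_stack([trait + rng.normal(scale=0.5, size=500) for _ in range(4)])
noisy = clear.copy()
noisy[:, 3] = trait + rng.normal(scale=1.5, size=500)  # ambiguous fourth item
print(cronbach_alpha(clear), cronbach_alpha(noisy))     # alpha drops for the noisy scale
```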

Source 2: Format of the scales and choice of scale anchors

A second source of CMB is relying upon a single format for either the scale or the scale anchors. As Podsakoff et al. (2003) explain, although employing the same anchor for every question (e.g., strongly agree to strongly disagree) makes it easier for a respondent to fill out the survey (due to the standardization), it could lead a respondent to focus upon the scale consistency rather than the individual items themselves. This contributes to CMB, as the responses to all of the items may be systematically biased upwards or downwards depending upon how the scale is formatted.

Source 3: Negatively worded or reverse coded items

The third source of CMB is the inclusion of negatively worded or reverse-coded items among positively worded items. The reverse-coded item is supposed to serve as a "speed bump" that catches the respondent's attention and encourages more careful attention to the item and response (e.g., Baumgartner et al. 2001). A critical assessment of this approach, however, could argue that reverse-coded items are cognitively more complex. Respondents may misinterpret these items or overlook the reverse framing, resulting in systematic error. As a result, we would expect reverse-coded items to exhibit weaker factor loadings, thus reducing construct reliability.

Source 4: Item priming effects

The fourth source of CMB is the inclusion of an introduction informing respondents about what the items are attempting to measure before the respondent views the items, thereby increasing the face validity of the items. Advocates of priming argue that it directs a respondent's attention to the correct construct space and is not cognitively demanding. A critical assessment of this approach could argue that this source produces a systematic, upward bias on the items. If items of different constructs are affected, this will likely reduce construct reliability. If items from the same construct are affected, that construct will exhibit artificially high internal consistency and scale reliability. Further, item priming could produce artificial covariation among variables, thus influencing the structural model.

Source 5: Item embeddedness

The fifth source of CMB is embedding neutrally valenced items within either positively or negatively worded items. The arguments for and against this source of bias parallel those concerning reverse-coded items. Those in favor argue that once a pattern toward positively or negatively worded items has been established, an abrupt switch to a neutrally valenced item re-directs a respondent's attention to the items. Those opposed argue that these neutral items tend to adopt the positive or negative meaning of adjacent items. Similar to negatively worded or reverse-coded items, we would expect neutrally valenced items to load incorrectly on a construct, thus also reducing construct reliability.

Source 6: Positive and negative affectivity traits

The sixth source of CMB is state-dependent. Respondents will respond to items in a style that is consistent with their affectivity trait, rating all items in a questionnaire more negatively or positively (Jackson 1967; Rorer 1965). A researcher might introduce a block of questions in a manner that would lead a respondent to be more positively (or negatively) primed towards a more positive (or negative) set of emotions. Where item priming is designed to trigger positive (or negative) emotions, systematic effects would be expected to inflate (or deflate) item and path scores in a consistent manner.

Source 7: Transient mood state

The seventh source of CMB is the influence of external events on the respondent's mood, which in turn exerts an impact on the responses given. While the sixth source is concerned with the respondent's mood at the time the respondent completes the survey, the seventh source refers to the series of external events that have occurred over a certain period of time (e.g., the death of a spouse or bankruptcy) and create a transient mood state in the respondent. If all respondents are consistently positively or negatively affected by transient mood states, these will exert upward or downward biases on all latent variable scores. The transient mood states of respondents may also produce artificial covariance in self-report measures, because the person responds to questions about both the predictor and the criterion variable while in a particular mood.

Examining the Impact

Methodology

After identifying the seven sources of CMB that could be experimentally manipulated, we next focused on the development of a research model and context to examine how these seven sources materialize. Rather than relying upon Monte Carlo data, we were interested in a real-life study to examine how these sources influence actual research results. Specifically, Monte Carlo studies do not focus upon the source of the bias; they simply interject statistical bias into the data with the assumption that all CMB is equal and that the CMB will manifest itself in a similar pattern within the data. We instead call this assumption into question and argue that the different sources of CMB will manifest themselves differentially depending upon the source. Therefore, as researchers who focus on IT adoption research issues, we chose to contextualize our study within this research domain.


Based upon extant work that indicates that individual characteristics influence beliefs about technology (Agarwal et al. 1998; Lewis et al. 2003), our research model hypothesized that Personal Innovativeness in the domain of information technology (PIIT) exerts a positive (and significant) influence on beliefs about technology (specifically the perceived benefits of the technology and the overall attitude of the individual towards the technology). Theoretically, we are suggesting that individual differences influence technology perceptions and that more innovative individuals will view a technology more positively. We have depicted our research model below in Figure 1.

Figure 1. Research Model

Research Design

To understand the impact of the seven sources of CMB, we first divided the seven sources into two categories: (1) Sources that could be experimentally manipulated (including ambiguous or complex items; format of the scales; negatively worded or reverse coded items; item priming effects; and item embeddedness) versus (2) Sources that could not be experimentally manipulated, but could be directly assessed (including positive and negative affectivity and transient mood state). Our research approach was to include one version of the questionnaire with no manipulation (the baseline questionnaire), which would be compared against all of the seven sources of CMB. Respondents were first randomly assigned into one of two versions of the questionnaire. Version A was not manipulated, while version B was manipulated and developed into the five forms. Subjects were randomly assigned to one of six questionnaire forms: Version A or one of the five forms of Version B. Our design called for us to sub-divide the respondents within Version A to examine the impact of the two non-manipulated sources, assuming no randomization of the constructs or the items. We will expand upon this after discussing the operational definitions of our constructs.
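As a sketch of the assignment step (our illustration; the paper does not describe the randomization mechanism itself), respondents can be assigned to the six questionnaire forms with a uniform random draw:

```python
import numpy as np

rng = np.random.default_rng(42)
forms = ["Baseline", "Ambiguous", "Scale Format",
         "Negative Word", "Item Priming", "Embeddedness"]

# Uniform random assignment of each respondent to one of the six forms;
# the realized cell sizes in the study (Table 1) were not exactly equal.
n_respondents = 1333
assignment = rng.choice(forms, size=n_respondents)
print({form: int((assignment == form).sum()) for form in forms})
```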

Participants

Questionnaire data was collected from 1333 students enrolled in freshman-level undergraduate psychology courses (e.g., Introductory Psychology) at a large southeastern university over a two-year period (2010 and 2011). Inclusion criteria for the study required that participants be 18 years of age or older and enrolled in a course that utilized an electronic course management system (CMS) (e.g., Blackboard or Moodle). Participation in the study was voluntary and compensated with in-class extra credit. The average participant was 19 years of age (standard deviation = 1.19), 70% were in their second year of college (18% in their third year), and 9% were psychology majors. Half were experienced course management system users with 12 months of experience or more.

Operational Construct Definition

The constructs in the research model were operationalized through items validated in prior research studies. We decided to hold two of the three constructs constant, altering only one construct for our design. We selected this approach to isolate the effects of our design and avoid introducing systematic bias that we would be unable to identify through our analysis. Had we introduced bias on both the independent and dependent variables, we would have been unable to determine whether our analysis was detecting CMB or an inflation (or deflation) effect between the variables. By focusing on only one construct, we could potentially determine the unique impact of each source of bias while holding the construct itself constant. We therefore chose PIIT as the construct to manipulate for our study.

Personal Innovativeness in the domain of Information Technology (PIIT). PIIT is defined as the willingness of an individual to try out any new information technology (Agarwal et al. 1998). Items were adapted from Agarwal and Prasad (1998) and are included in Appendix A. In the original scale, PIIT3 was negatively worded ("In general, I am not eager to try out new information technologies."); however, for the baseline survey, this wording was altered so that all four items were positively valenced. A total of 212 subjects completed the baseline survey.

For the manipulated sources of CMB, the following survey manipulations were completed:

Ambiguous questions. For two of the PIIT items (PIIT2 and PIIT4), ambiguity was interjected into the items. For example, PIIT2 was changed from: "Among my peers, I am usually the first to try out new information technologies," to "Among my peers, I am usually the first to play around with, try out, and experiment with new information technologies." Additionally, PIIT4 was changed from: "I like to experiment with new information technologies," to "I like to experiment with, try out, and play around with new information technologies." These manipulations were ambiguous, as a researcher would be unable to distinguish which of the three aspects of experimentation a user was responding to when agreeing or disagreeing with one of the scale items (i.e., play around with, try out, experiment, or a combination of all three). A total of 213 subjects completed the survey manipulation containing ambiguous questions.

Scale Format. Instead of a Likert scale question for the four PIIT items, the PIIT items were converted to Semantic Differential (with scale anchors of -3 to +3). A total of 309 subjects completed the survey manipulation with scale format.

Negative Word. In the baseline survey, we removed the negatively valenced items to ensure that all four items were positively phrased. In the negatively worded manipulation, we interjected the negative items back into the scale. For example, PIIT2 was changed from: “Among my peers, I am usually the first to try out new information technologies,” to “Among my peers, I am usually the last to try out new information technologies.” Additionally, PIIT4 was changed from: “I like to experiment with new information technologies,” to “I do not like to experiment with new information technologies.” A total of 158 subjects completed the survey manipulation with negatively worded items.

Item Priming. For every survey except for the scale format and item priming, the PIIT scale was introduced by the following priming: “For the following questions, please indicate the extent to which you agree with the following statements.” To adapt the scale format to semantic differential, the priming was changed to: “When a new information technology comes out, I….” To manipulate the priming of the scale, the item priming included the following introductory text:

This next section of questions is going to focus specifically on your own level of innovativeness. In other words, how innovative do you think that you are? Answering more positively indicates that you feel that you are more innovative.

The PIIT items were the same as the baseline survey. A total of 223 subjects completed the survey manipulation representing item priming.

Embeddedness. To introduce neutrally valenced concepts within the items, one item was altered. PIIT2 was changed from: "Among my peers, I am usually the first to try out new information technologies," to "Among my peers, I am usually among the first to try out new information." A total of 218 subjects completed the survey manipulation representing embeddedness.

Attitude. Six Semantic Differential Items captured attitude, or the feelings, emotions, or drives associated with an attitude object. The items were introduced with the text: “I would evaluate (CMS) as...” with the following items: (1) Bad/Good; (2) Unfavorable/Favorable; (3) Unpleasant/Pleasant; (4) Dislikeable/Likeable; (5) Unenjoyable/enjoyable; and (6) Distasteful/Delightful. The range on these items was from -3 to +3.

Perceived Benefits. Derived from the work of Dwyer et al. (1991), we conceptualized the perceived educational benefits of using a CMS as the extent to which educational tools were helpful, valuable, beneficial, and aided in school work. Based upon this conceptual definition, we created a concept termed “perceived benefits” with four Likert scale items: (1) (CMS) is helpful to aid us in the completion of our school work; (2) (CMS) is valuable to aid us in the completion of our schoolwork; (3) (CMS) is beneficial to aid us in the completion of our schoolwork; and (4) (CMS) is positive to aid us in the completion of our schoolwork.

Measuring Un-Manipulated Sources of CMB

While the subjects were randomly assigned to conditions, we operationalized the impact of the un-manipulated sources by splitting up the baseline population into two groups.

Measuring positive and negative affectivity. To assess the mood of the student at the time they completed the survey, the respondent also filled out the 20-item Positive and Negative Affect Schedule (PANAS) (Watson et al. 1988). These 20 items contained half positive affects and half negative affects. For each respondent, the positive items were summed and the average for the entire baseline set was computed. Those subjects whose sum was above the average were categorized as "More Positive," while those who were below the average were categorized as "Less Positive." According to this operational definition (summarized below in Table 1), 99 subjects were more positive and 113 were less positive.

Measuring transient mood state. To examine if a transient mood state influenced the respondent, each subject filled out the Major Life Events for Students scale. Each student was asked if a set of events had occurred to them over the past year (Clements et al. 1996). The objective of this approach was to encourage the subject to recall the events of the past year and, by extension, induce the mood state that occurred due to the given life event. We chose this approach in order to replicate the state an individual would be under if these events had randomly occurred while completing the survey. If the student experienced an event, they were assigned a weighted score corresponding to that particular circumstance. Higher scores indicate that more (or more heavily weighted) major life events had occurred to the student; lower scores indicate fewer. The average life events score for the entire baseline set was computed, and those subjects whose sums were above the average were categorized as "High amount of external events," while those below the average were termed "Low amount of external events." As summarized in Table 1 below, 82 students had a high score and 130 had a low score.
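As a rough illustration of the two mean splits described above, here is a short Python sketch (our reconstruction of the scoring logic, not the authors' code); the score arrays are randomly generated stand-ins for the 212 baseline respondents.

```python
import numpy as np

# Hypothetical per-respondent scores for the baseline group (n = 212).
rng = np.random.default_rng(1)
panas_positive = rng.integers(10, 51, size=212)  # sum of the 10 positive PANAS items
life_events = rng.integers(0, 300, size=212)     # weighted Major Life Events score

# Mean splits used to form the four un-manipulated sub-groups.
more_positive = panas_positive > panas_positive.mean()
high_events = life_events > life_events.mean()
print(more_positive.sum(), (~more_positive).sum())  # "More Positive" vs. "Less Positive"
print(high_events.sum(), (~high_events).sum())      # "High" vs. "Low" external events
```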

Table 1. Profile of Assignments

Questionnaire Version  2010  2011  Total

Baseline 128 84 212

Ambiguous 132 81 213

Embeddedness 153 65 218

Item Priming 193 30 223

Negative Word 121 37 158

Scale Format 228 81 309

Total 955 378 1333

Before proceeding with our analysis, we compared the means of the respondents from 2010 and 2011, in order to justify combining the data sets for the analysis. As Table 2 (below) reveals, when comparing the data set for the two years, there were almost no significant differences between the two response sets. PIIT3 in the baseline and ATT4 in the item priming samples represent the only cases of significant differences. We therefore combined the two for our analysis.
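Each cell of Table 2 amounts to an independent-samples t-test on one item across the two years; a minimal sketch with SciPy, using simulated stand-ins for the raw responses (only the summary statistics appear in the paper):

```python
import numpy as np
from scipy import stats

# Simulated stand-ins shaped like ATT1 in the baseline sample (Table 2).
rng = np.random.default_rng(2)
scores_2010 = rng.normal(loc=5.64, scale=1.561, size=128)
scores_2011 = rng.normal(loc=5.80, scale=1.378, size=84)

t, p = stats.ttest_ind(scores_2010, scores_2011)
print(f"t = {t:.3f}, p = {p:.3f}")  # p > 0.05 -> no significant year difference
```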

Data Analysis and Results

We analyzed the data utilizing structural equation modeling. Given our focus on accurate parameter estimation and the parameter testing for analyzing differences between models (Chin 1998; Wold 1985), we chose to use a covariance-based approach (Westland 2010). Specifically, we employed AMOS, version 20. We will begin with a discussion of our measurement model.

Measurement Model

For all of our tests, we created ten separate models: a baseline model; one for each of the five manipulated sources; a model for individuals who had gone through major life events; a model for individuals who had not; a model for low-emotion respondents; and a model for high-emotion respondents. First, we examined the adequacy of the measures to ensure that the items measured the constructs as they were designed to. Table 3 provides the results of the dimensionality, convergent validity, and reliability assessment, reporting the standardized loadings, the composite reliability (CR), and the average variance extracted (AVE). All items load significantly on their respective constructs. The AVE values obtained are above the recommended value of 0.50 in all cases except PIIT in the negative word condition (0.428), indicating that the constructs' items generally have convergent validity. Each construct demonstrated good internal consistency, with composite reliability coefficients that vary between 0.746 and 0.979.
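The CR and AVE figures in Table 3 follow directly from the standardized loadings; a short sketch using the usual composite-reliability and AVE formulas, checked against the baseline PIIT column:

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    error_vars = 1 - loadings ** 2  # error variance of standardized indicators
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_vars.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float((loadings ** 2).mean())

piit_baseline = np.array([0.803, 0.749, 0.853, 0.906])  # loadings from Table 3
print(round(composite_reliability(piit_baseline), 3))       # 0.898, as reported
print(round(average_variance_extracted(piit_baseline), 3))  # 0.689 (~0.688 reported)
```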

Structural Model

Table 4 includes the path coefficients obtained for each of the samples analyzed, together with the variance explained for each construct. Moreover, several common indices of overall fit were included. We established cutoffs for the Comparative Fit Index (CFI) close to 0.95, Tucker-Lewis Index (TLI) close to 0.95, Root Mean Square Error of Approximation (RMSEA) close to 0.06, and Standardized Root Mean Square Residual (SRMR) close to 0.08 (Hu and Bentler, 1999). With regard to RMSEA, we also report 90% confidence intervals (LO90 and HI90), as recommended by Byrne (2009). In all contexts, the model demonstrated a reasonably good fit to the data.
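For reference, these indices can be recovered from the model chi-square; a sketch under the standard formulas (the independence-model chi-square below is hypothetical, as it is not reported in the paper):

```python
import numpy as np

def rmsea(chi2_m: float, df_m: int, n: int) -> float:
    """Root Mean Square Error of Approximation."""
    return float(np.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1))))

def cfi(chi2_m, df_m, chi2_0, df_0):
    """Comparative Fit Index against the independence (null) model."""
    d_m = max(chi2_m - df_m, 0)
    return 1 - d_m / max(chi2_0 - df_0, d_m)

def tli(chi2_m, df_m, chi2_0, df_0):
    """Tucker-Lewis Index."""
    return ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1)

# Baseline sample from Table 4: chi2 = 285.727, df = 75, n = 212.
print(round(rmsea(285.727, 75, 212), 3))  # 0.115, matching Table 4
# The null-model values below are made up purely to show the call signature.
print(round(cfi(285.727, 75, 3050.0, 91), 3), round(tli(285.727, 75, 3050.0, 91), 3))
```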


Table 2. Mean Comparisons (2010 to 2011)

Baseline Mean Comparison

Item  2010 (n=128): Mean / Std. Dev. / Std. Error  2011 (n=84): Mean / Std. Dev. / Std. Error  t  Significance

ATT1 5.64 1.561 0.138 5.80 1.378 0.150 -0.75 0.454

ATT2 5.37 1.659 0.147 5.54 1.452 0.158 -0.759 0.449

ATT3 5.17 1.578 0.139 5.40 1.465 0.160 -1.081 0.281

ATT4 5.26 1.652 0.146 5.48 1.452 0.158 -0.987 0.325

ATT5 4.66 1.579 0.140 4.82 1.364 0.149 -0.748 0.455

ATT6 4.71 1.553 0.137 4.87 1.454 0.159 -0.743 0.458

BENEF1 4.00 0.972 0.086 4.02 0.806 0.088 -0.186 0.852

BENEF2 3.84 1.000 0.088 4.10 0.754 0.082 -1.967 0.051

BENEF3 3.93 0.941 0.083 4.05 0.805 0.088 -0.944 0.346

BENEF4 3.91 0.951 0.084 4.01 0.843 0.092 -0.827 0.409

PIIT1 3.40 0.908 0.080 3.58 0.934 0.102 -1.434 0.153

PIIT2 2.83 0.965 0.085 3.04 1.069 0.117 -1.468 0.144

PIIT3 3.24 1.099 0.097 3.54 0.975 0.106 -1.987 0.048

PIIT4 3.31 0.994 0.088 3.52 0.898 0.098 -1.572 0.117

Ambiguous Mean Comparison

Item  2010 (n=132): Mean / Std. Dev. / Std. Error  2011 (n=81): Mean / Std. Dev. / Std. Error  t  Significance

ATT1 5.48 1.531 0.133 5.81 1.433 0.159 -0.709 0.479

ATT2 5.23 1.639 0.143 5.49 1.501 0.167 -1.600 0.111

ATT3 5.12 1.529 0.133 5.27 1.449 0.161 -1.156 0.249

ATT4 5.17 1.585 0.138 5.38 1.586 0.176 -0.711 0.478

ATT5 4.70 1.507 0.131 4.83 1.412 0.157 -0.932 0.352

ATT6 4.63 1.495 0.130 4.75 1.347 0.150 -0.590 0.556

BENEF1 4.02 0.925 0.080 4.07 0.905 0.101 -0.611 0.542

BENEF2 4.03 0.828 0.072 3.96 0.941 0.105 -0.455 0.649

BENEF3 3.98 0.891 0.078 4.05 0.835 0.093 0.547 0.585

BENEF4 4.04 0.735 0.064 3.89 0.908 0.101 -0.525 0.600

PIIT1 3.43 0.974 0.085 3.53 1.013 0.113 1.311 0.191

PIIT2 2.98 1.011 0.088 2.98 1.072 0.119 0.065 0.948

PIIT3 3.52 0.878 0.076 3.58 0.986 0.110 -0.501 0.617

PIIT4 3.42 1.070 0.093 3.38 1.280 0.142 0.208 0.835

Scale Format Mean Comparison

Item  2010 (n=228): Mean / Std. Dev. / Std. Error  2011 (n=81): Mean / Std. Dev. / Std. Error  t  Significance

ATT1 5.49 1.491 0.099 5.59 1.563 0.174 -0.541 0.589

ATT2 5.24 1.611 0.107 5.21 1.801 0.200 0.125 0.900

ATT3 5.10 1.457 0.096 4.88 1.833 0.204 1.087 0.278

ATT4 5.03 1.613 0.107 5.00 1.768 0.196 0.143 0.886

ATT5 4.77 1.461 0.097 4.63 1.585 0.176 0.713 0.476

ATT6 4.62 1.460 0.097 4.56 1.651 0.183 0.344 0.731

BENEF1 4.00 0.816 0.054 4.07 0.919 0.102 -0.639 0.524

BENEF2 4.01 0.791 0.052 4.07 0.863 0.096 -0.623 0.534

BENEF3 4.04 0.779 0.052 4.06 0.885 0.098 -0.255 0.799

BENEF4 3.98 0.785 0.052 4.01 0.915 0.102 -0.281 0.779

PIIT1 4.82 1.622 0.107 5.05 1.440 0.160 -1.103 0.271

PIIT2 4.23 1.650 0.109 4.20 1.646 0.183 0.143 0.886

PIIT3 4.63 1.627 0.108 4.33 1.949 0.217 1.343 0.180

PIIT4 4.61 1.801 0.119 4.79 1.752 0.195 -0.780 0.436


Negative Word Mean Comparison

Item  2010 (n=121): Mean / Std. Dev. / Std. Error  2011 (n=37): Mean / Std. Dev. / Std. Error  t  Significance

ATT1 5.75 1.422 0.129 5.97 1.166 0.192 -1.588 0.114

ATT2 5.45 1.538 0.140 5.51 1.304 0.214 -0.860 0.391

ATT3 5.11 1.459 0.133 5.32 1.203 0.198 -0.211 0.833

ATT4 5.14 1.690 0.154 5.16 1.590 0.261 -0.822 0.412

ATT5 4.88 1.324 0.120 4.95 1.433 0.236 -0.069 0.945

ATT6 4.87 1.291 0.117 5.00 1.291 0.212 -0.243 0.808

BENEF1 3.94 0.942 0.086 4.08 0.829 0.136 -0.545 0.586

BENEF2 3.91 0.913 0.083 3.92 0.894 0.147 -0.806 0.421

BENEF3 4.00 0.904 0.082 4.08 0.829 0.136 -0.058 0.954

BENEF4 3.97 0.912 0.083 4.16 0.602 0.099 -0.487 0.627

PIIT1 3.44 0.930 0.085 3.73 1.122 0.184 -1.222 0.224

PIIT2 2.40 1.029 0.094 2.35 0.949 0.156 0.239 0.812

PIIT3 3.49 0.818 0.074 3.54 1.095 0.180 -0.317 0.752

PIIT4 2.37 0.923 0.084 2.16 0.800 0.131 1.246 0.215

Item Priming Mean Comparison

Item  2010 (n=193): Mean / Std. Dev. / Std. Error  2011 (n=30): Mean / Std. Dev. / Std. Error  t  Significance

ATT1 5.73 1.437 0.103 5.33 1.768 0.323 1.346 0.180

ATT2 5.32 1.655 0.119 5.10 1.583 0.289 0.685 0.494

ATT3 5.19 1.468 0.106 4.70 1.664 0.304 1.675 0.095

ATT4 5.21 1.572 0.113 4.53 1.833 0.335 2.151 0.033

ATT5 4.72 1.492 0.107 4.20 1.584 0.289 1.745 0.082

ATT6 4.67 1.434 0.103 4.30 1.489 0.272 1.302 0.194

BENEF1 4.05 0.792 0.057 4.23 0.568 0.104 -1.241 0.216

BENEF2 3.99 0.797 0.057 4.17 0.531 0.097 -1.175 0.241

BENEF3 3.98 0.851 0.061 4.20 0.664 0.121 -1.325 0.186

BENEF4 3.97 0.826 0.059 4.13 0.629 0.115 -1.011 0.313

PIIT1 3.48 0.919 0.066 3.53 0.730 0.133 -0.322 0.748

PIIT2 2.90 1.036 0.075 3.07 1.230 0.225 -0.816 0.415

PIIT3 3.43 0.894 0.064 3.43 1.040 0.190 -0.018 0.985

PIIT4 3.40 1.026 0.074 3.57 1.165 0.213 -0.817 0.415

Embeddedness Mean Comparison

Item  2010 (n=153): Mean / Std. Dev. / Std. Error  2011 (n=65): Mean / Std. Dev. / Std. Error  t  Significance

ATT1 5.81 1.202 0.097 5.92 1.361 0.169 -0.608 0.544

ATT2 5.61 1.278 0.103 5.65 1.430 0.177 -0.162 0.871

ATT3 5.36 1.370 0.111 5.26 1.513 0.188 0.468 0.640

ATT4 5.32 1.576 0.127 5.38 1.518 0.188 -0.279 0.781

ATT5 4.80 1.493 0.121 4.98 1.566 0.194 -0.835 0.405

ATT6 4.86 1.328 0.107 4.88 1.505 0.187 -0.069 0.945

BENEF1 4.07 0.889 0.072 4.15 0.755 0.094 -0.650 0.516

BENEF2 4.01 0.925 0.075 4.08 0.714 0.089 -0.497 0.620

BENEF3 4.08 0.839 0.068 4.05 0.837 0.104 0.260 0.795

BENEF4 3.93 0.922 0.075 4.05 0.943 0.117 -0.811 0.418

PIIT1 3.41 0.906 0.073 3.51 0.904 0.112 -0.764 0.446

PIIT2 2.90 1.093 0.088 3.03 1.045 0.130 -0.806 0.421

PIIT3 3.44 0.973 0.079 3.62 0.930 0.115 -1.202 0.231

PIIT4 3.45 0.980 0.079 3.54 1.105 0.137 -0.580 0.562


Table 3. Dimensionality, convergent validity, and reliability assessment

Standardized loadings (columns: Baseline / Ambiguous / Scale Format / Negative Word / Item Priming / Embeddedness / High Emotion / Low Emotion / Major Life Events / Non Major Life Events)

PIIT CR: 0.898 0.887 0.805 0.746 0.869 0.884 0.801 0.927 0.852 0.924
PIIT AVE: 0.688 0.664 0.521 0.428 0.626 0.657 0.610 0.763 0.591 0.754
PIIT1 0.803 0.849 0.460 0.747 0.778 0.755 0.723 0.877 0.738 0.842
PIIT2 0.749 0.821 0.702 0.540 0.709 0.738 0.736 0.769 0.721 0.758
PIIT3 0.853 0.898 0.904 0.733 0.936 0.903 0.792 0.918 0.767 0.923
PIIT4 0.906 0.675 0.750 0.572 0.722 0.836 0.864 0.922 0.845 0.939

BENEFIT CR: 0.951 0.890 0.936 0.950 0.902 0.926 0.923 0.961 0.939 0.955
BENEFIT AVE: 0.828 0.668 0.784 0.827 0.696 0.756 0.750 0.861 0.794 0.844
BENEF1 0.890 0.788 0.882 0.902 0.854 0.823 0.828 0.915 0.882 0.900
BENEF2 0.910 0.860 0.874 0.906 0.862 0.914 0.889 0.920 0.841 0.940
BENEF3 0.905 0.839 0.899 0.957 0.823 0.885 0.868 0.923 0.919 0.899
BENEF4 0.935 0.781 0.887 0.872 0.798 0.855 0.879 0.955 0.922 0.935

ATTITUDE CR: 0.956 0.952 0.928 0.933 0.932 0.932 0.965 0.979 0.971 0.977
ATTITUDE AVE: 0.784 0.766 0.683 0.699 0.694 0.697 0.760 0.849 0.800 0.832
ATT1 0.845 0.874 0.681 0.740 0.802 0.780 0.775 0.871 0.812 0.864
ATT2 0.914 0.877 0.840 0.893 0.807 0.846 0.849 0.953 0.881 0.929
ATT3 0.933 0.941 0.889 0.937 0.886 0.915 0.921 0.948 0.956 0.930
ATT4 0.925 0.895 0.871 0.809 0.874 0.867 0.933 0.912 0.924 0.926
ATT5 0.860 0.860 0.858 0.804 0.848 0.838 0.855 0.845 0.868 0.856
ATT6 0.830 0.801 0.803 0.822 0.778 0.754 0.848 0.801 0.792 0.849

Source: own elaboration. SL = standardized loadings; CR = Composite Reliability; AVE = Average Variance Extracted. All t-values were greater than 2.576 (p < 0.001).


Table 4. Structural Model Estimation

(columns: Baseline / Ambiguous / Scale Format / Negative Word / Item Priming / Embeddedness / High Emotion / Low Emotion / Major Life Events / Non Major Life Events)

PIIT → BENEFITS: 0.363*** 0.194** 0.136** 0.334*** 0.147** 0.352*** 0.366*** 0.352*** 0.243** 0.407***
PIIT → ATTITUDE: 0.376*** 0.165** 0.160** 0.380*** 0.141 (n.s.) 0.187** 0.481*** 0.294*** 0.319*** 0.397***

Variance explained (R²)
BENEFITS: 0.132 0.037 0.019 0.112 0.022 0.124 0.232 0.087 0.101 0.157
ATTITUDE: 0.141 0.027 0.025 0.145 0.020 0.035 0.134 0.124 0.059 0.165

Fit statistics
χ²: 285.727 255.780 292.385 202.522 277.206 359.089 188.395 198.257 140.745 228.941
df: 75 75 75 75 75 75 75 75 75 75
p: 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
CFI: 0.929 0.924 0.929 0.924 0.907 0.884 0.901 0.930 0.933 0.923
TLI: 0.913 0.908 0.913 0.908 0.888 0.859 0.880 0.916 0.918 0.907
SRMR: 0.225 0.186 0.195 0.1678 0.1703 0.1950 0.2092 0.2365 0.2315 0.2253
RMSEA: 0.115 0.107 0.097 0.104 0.110 0.132 0.124 0.121 0.104 0.126
RMSEA LO90-HI90: 0.101-0.130 0.092-0.121 0.085-0.109 0.087-0.121 0.096-0.124 0.119-0.146 0.102-0.146 0.101-0.142 0.077-0.130 0.108-0.145

Note: **p < 0.05; ***p < 0.001; n.s.: not significant.

Figure 2. Configural Model


Comparison of Models

Our next step was to examine whether the differences between the baseline model and the models for each studied source of CMB were significant. Specifically, we examined whether there were significant differences between the groups both for the path coefficients (structural model) and for the loadings (measurement model) using a multi-group analysis. A critical assumption in multi-group analysis is that the instrument measures the same construct(s) in exactly the same way across all groups (i.e., the instrument is measurement and structurally equivalent) (Byrne and van de Vijver 2010). If the equivalence or invariance of an assessment instrument does not hold, the validity of the inferences and interpretations extracted from the data may be erroneous (Byrne 2008), and findings based upon comparisons of the groups cannot be valid.

Measurement invariance is concerned with the extent to which parameters comprising the measurement instrument are similar across groups (Byrne 2008), and it is evaluated at three levels: weak (factor loadings invariance), strong (factor loadings and item intercepts invariance) and strict (factor loadings, item intercepts and error variances, and covariances invariance). In contrast to measurement invariance, structural invariance is concerned with the equality of relationships among the factors. If a researcher is interested in testing for cross-group equivalence related to a full structural equation (or path analytic) model, then it is necessary to test for the equality of structural regression paths between and among the postulated latent constructs (or factors) (Byrne, 2008).

Testing for measurement and structural invariance entails a hierarchical set of steps that typically begins with the determination of a well-fitting multi-group baseline model (configural model - Figure 2). The importance of this model is that it serves as the baseline against which all subsequent tests for equivalence are compared.

The next stage of the analysis is determining if the factor structure is similar across the different groups (test of invariance of the configural model). The parameters are estimated for all groups simultaneously. Given that the configural model fits reasonably well (Table 5), we can conclude that both the number of factors and the patterns of their item loadings are similar. Consequently, the results support the configural invariance of the measurement model and justify the evaluation of more restrictive invariant models.

In testing for measurement and structural invariance, the researcher compares the equality of estimated parameters across different groups. This procedure involves testing the fit of a series of increasingly restrictive models against a baseline model (the configural model, in which no equality constraints are imposed). The models analyzed can be seen as nested models to which the constraints are progressively added. Previous research has employed the likelihood ratio test (also known as the chi-square difference test) for the comparison of nested models. The χ² difference value (∆χ²) is distributed as χ², with degrees of freedom equal to the difference in degrees of freedom (∆df). If this value is statistically significant in the comparison of two nested models, it indicates that the constraints specified in the more restrictive model do not hold (i.e., the two models are not equivalent across groups). However, due to the sensitivity of the χ² statistic to sample size and non-normality (Hair et al. 1999), Cheung and Rensvold (2002) have proposed a more practical criterion, the CFI increment (∆CFI), to determine if the models compared are equivalent. Under this criterion, when there is a change greater than 0.01 in the CFI between two nested models, the least constrained model is accepted and the other rejected; that is, the most restrictive model does not hold. If the change in CFI is equal to or less than 0.01, the specified equality constraints are considered tenable and one proceeds with the next step in the analysis of measurement invariance.
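Both decision rules are simple to compute; a sketch using the Baseline-versus-Ambiguous figures from Table 5:

```python
from scipy.stats import chi2

def chi2_difference_test(chi2_restricted, df_restricted, chi2_free, df_free):
    """Likelihood-ratio (chi-square difference) test for nested models."""
    d_chi2 = chi2_restricted - chi2_free
    d_df = df_restricted - df_free
    return d_chi2, d_df, chi2.sf(d_chi2, d_df)  # survival function = p-value

# M0 (configural) vs. M1 (factor loadings constrained), Baseline vs. Ambiguous.
d_chi2, d_df, p = chi2_difference_test(559.200, 161, 541.507, 150)
print(round(d_chi2, 3), d_df, round(p, 3))  # 17.693, 11, p ~ 0.089: constraints hold

delta_cfi = 0.927 - 0.925                   # CFI(M0) - CFI(M1), from Table 5
print(delta_cfi <= 0.01)                    # True -> metric invariance is tenable
```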

After configural invariance is established, we continue with the testing for measurement and structural invariance. As can be observed in Table 5, when the factor loadings are equally constrained, the ∆CFI between the unconstrained model M0 and the constrained model M1 does not exceed 0.01 for any of the compared groups. These results demonstrate metric invariance, or weak measurement invariance, that is, the equivalence of the factor loadings across the different groups. This ensures equivalent relationships between a latent factor and its indicators (items) in the CFA model. Furthermore, metric invariance is a prerequisite for comparing these groups on regression paths (Dimitrov, 2010). When the factor loadings and item intercepts are equally constrained, the ∆CFI between the unconstrained model M0 and the constrained model M2 does not exceed 0.01 in the following cases: 1) Baseline Versus Ambiguous, 2) Baseline Versus Item Priming, 3) Baseline Versus Embeddedness, 4) High Emotion Versus Low Emotion, and 5) Major Life Events Versus Non Major Life Events. In these cases, the results support strong measurement invariance, which is necessary to compare these groups on factor means (Dimitrov, 2010).


Table 5. Testing for measurement and structural invariance across groups

χ²  df  ∆χ²  ∆df  p  CFI  ∆CFI  TLI  SRMR  RMSEA  RMSEA 90% CI

Baseline Versus Ambiguous

M0 541.507 150 0.927 0.911 0.225 0.079 0.071-0.086

M1 559.2 161 17.693 11 0.089 0.925 0.002 0.916 0.223 0.076 0.07-0.083

M2 571.241 175 29.734 25 0.234 0.926 0.001 0.923 0.223 0.073 0.067-0.080

M3 567.72 163 38.280 27 0.074 0.924 0.003 0.915 0.251 0.077 0.07-0.084

Baseline Versus Scale Format

M0 578.187 150 0.929 0.913 0.225 0.074 0.068-0.081

M1 602.989 161 24.802 11 0.010 0.926 0.003 0.917 0.226 0.073 0.067-0.079

M2 803.916 175 225.729** 25 0.000 0.895 0.034 0.891 0.235 0.083 0.077-0.089

M3 822.525 177 244.338** 27 0.000 0.892 0.037 0.889 0.291 0.084 0.078-0.090

Baseline Versus Negative Word

M0 488.239 150 0.927 0.911 0.225 0.078 0.071-0.086

M1 506.452 161 18.212 11 0.077 0.925 0.002 0.916 0.225 0.076 0.069-0.084

M2 1257.587 175 769.348** 25 0.000 0.766 0.161 0.756 0.238 0.130 0.123-0.136

M3 1170.406 177 682.166** 27 0.000 0.785 0.142 0.779 0.224 0.123 0.117-0.130

Baseline Versus Item Priming

M0 562.935 150 0.920 0.902 0.225 0.080 0.073-0.087

M1 568.317 161 5.381 11 0.911 0.921 -0.001 0.910 0.225 0.076 0.070-0.083

M2 574.975 175 12.039 25 0.986 0.922 -0.002 0.919 0.225 0.073 0.066-0.079

M3 585.751 177 22.816 27 0.695 0.920 0.000 0.918 0.256 0.073 0.067

Baseline Versus Embeddedness

M0 644.811 150 0.908 0.889 0.225 0.088 0.081-0.095

M1 660.376 161 15.565 11 0.158 0.907 0.001 0.895 0.225 0.085 0.078-0.092

M2 675.321 175 30.510 25 0.206 0.907 0.001 0.904 0.225 0.082 0.075-0.088

M3 680.125 177 35.313 27 0.131 0.907 0.001 0.904 0.242 0.081 0.075-0.088

High Emotion Versus Low Emotion

M0 386.662 150 0.919 0.902 0.236 0.087 0.076-0.097

M1 407.308 161 20.646* 11 0.037 0.916 0.003 0.905 0.236 0.085 0.075-0.096

M2 437.652 175 50.990** 25 0.001 0.910 0.009 0.907 0.234 0.085 0.075-0.094

M3 439.236 177 52.574** 27 0.002 0.910 0.009 0.908 0.233 0.084 0.074-0.094

Major Life Events Versus Non Major Life Events

M0 369.677 150 0.926 0.911 0.225 0.084 0.073-0.094

M1 385.009 161 15.332 11 0.167 0.925 0.001 0.915 0.225 0.081 0.071-0.092

M2 407.027 175 37.350 25 0.05 0.922 0.004 0.919 0.226 0.079 0.069-0.09

M3 408.836 177 39.159 27 0.06 0.922 0.004 0.92 0.238 0.079 0.069-0.089

Notes: M0: Unconstrained configural model; M1: First-order factor loadings invariant model; M2: First-order factor loadings and item-intercepts invariant model; M3: First-order factor loadings, item-intercepts and regression paths invariant model; *p<0.05; **p< 0.01; ∆CFI > 0.01 in bold


Table 6. Critical Ratios for Differences Between Groups

(columns: Baseline Versus Ambiguous / Baseline Versus Scale Format / Baseline Versus Negative Word / Baseline Versus Item Priming / Baseline Versus Embeddedness / High Emotions Versus Low Emotions / Major Life Events Versus Non Major Life Events)

PIIT2 -0.111 2.47 -1.819 0.454 1.025 1.208 0.226

PIIT3 -2.228 3.221 -2.224 0.037 0.379 0.663 0.517

PIIT4 -2.149 2.64 -2.739 -0.748 0.527 0.637 -0.103

BENEF2 0.075 -1.207 -0.543 -0.281 1.157 0.479 0.005

BENEF3 0.151 -0.321 0.408 0.523 0.751 0.085 0.892

BENEF4 -2.044 -1.136 -2.117 -0.888 0.902 -0.618 0.961

ATT2 -0.98 1.773 1.264 -0.344 0.002 1.007 1.091

ATT3 -0.821 1.898 1.33 -0.252 1.777 1.854 2.935

ATT4 -0.91 2.046 1.208 0.304 1.962 2.55 1.998

ATT5 -0.69 2.036 0.398 0.546 2.454 3.191 2.116

ATT6 -1.375 1.613 0.404 -0.589 0.633 2.431 1.555

PIIT-BENEFIT -2.232 -2.489 -0.155 -2.445 -0.327 -0.331 -1.372

PIIT-ATTITUDE -2.257 -2.669 -0.635 -2.264 -2.284 0.607 -0.971

Notes: factor loadings of PIIT1, BENEF1 and ATT1 are fixed to 1; significant differences between groups in bold (p < 0.05)

Table 7. Impact on Measurement and Structural Models for Each Source of CMB

(columns: Measurement Model, items manipulated / Measurement Model, items non-manipulated / Structural Model)

Ambiguous Questions: Significant impact upon the loading of PIIT4 | Significant impact upon the loadings of PIIT3 and BENEF4 | Significant reduction in both structural paths

Scale Format: Significant impact upon the loadings of PIIT2, PIIT3, and PIIT4 | Significant impact upon the loadings of ATT4 and ATT5 | Significant reduction in both structural paths

Negative Word: Significant impact upon the loading of PIIT4 | Significant impact upon the loadings of PIIT3 and BENEF4 | No significant impact

Item Priming: No significant impact | No significant impact | Significant reduction in both structural paths

Embeddedness: No significant impact | Significant impact upon the loadings of ATT4 and ATT5 | Significant reduction in one structural path

Emotions: (no items manipulated) | Significant impact upon the loadings of ATT4, ATT5, and ATT6 | No significant impact

Major Life Events: (no items manipulated) | Significant impact upon the loadings of ATT3, ATT4, and ATT5 | No significant impact

Moreover, when the factor loadings, item intercepts, and regression paths are equally constrained, the ∆CFI between the unconstrained model M0 and the constrained model M3 exceeds 0.01 in only two cases: 1) Baseline Versus Scale Format, and 2) Baseline Versus Negative Word. This indicates that the factor loadings, item intercepts, and regression paths are not equivalent across these two comparisons. In contrast, these parameters are equivalent in the rest of the compared groups.

The invariance analysis demonstrates the equivalence of the parameters globally between the compared groups. To compare each parameter between two groups, we utilized AMOS to employ the critical ratio method, as recommended by Byrne (2009). These pairwise comparisons are presented in Table 6. If a critical ratio exceeds 1.96 in absolute value, the parameter differs significantly between the two compared groups at the p < 0.05 level. We have highlighted these cases in bold.
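The critical ratio for a between-group parameter difference is a z statistic; a minimal sketch under the standard formula (the estimates and standard errors below are hypothetical, since AMOS computes these internally):

```python
import math

def critical_ratio(est_1: float, se_1: float, est_2: float, se_2: float) -> float:
    """z statistic for the difference between two group parameter estimates."""
    return (est_1 - est_2) / math.sqrt(se_1 ** 2 + se_2 ** 2)

# Hypothetical unstandardized loadings and standard errors for one item in
# two groups; |CR| > 1.96 flags a significant difference at p < 0.05.
cr = critical_ratio(0.95, 0.08, 1.22, 0.09)
print(round(cr, 3), abs(cr) > 1.96)
```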

Table 6 reveals that the ambiguous manipulation resulted in significant differences for some indicators (PIIT3, PIIT4, and BENEF4), as well as for both path coefficients. The scale format manipulation gave rise to significant differences both in the indicators of PIIT (also in ATT4 and ATT5) and in the standardized regression coefficients. The negative word manipulation led to significant differences in some indicator loadings (PIIT3, PIIT4, and BENEF4), but not for the path coefficients. Item priming manipulation generated significant changes only in path coefficients. The embeddedness manipulation produced significant differences in only two items not directly related to the manipulation (ATT4 and ATT5), and in one of the path coefficients (PIIT-ATTITUDE). For the non-manipulated sources of CMB, the comparison between groups resulted in significant differences in three item loadings of the attitude construct. We have summarized these findings below in Table 7.


Table 8. Harman’s Single Factor Test

Baseline  Ambiguous  Scale Format  Negative Word  Item Priming  Embeddedness

Principal Components – no rotation

Component 1 55.17% 44.80% 43.24% 45.60% 41.35% 45.93%

Component 2 17.53% 20.06% 17.26% 15.83% 19.61% 20.20%

Component 3 9.45% 12.73% 13.72% 13.64% 13.94% 10.76%

Total 82.15% 77.59% 74.22% 75.07% 74.90% 76.89%

Principal Components – varimax rotation

Component 1 34.42% 33.84% 31.17% 31.71% 31.80% 31.53%

Component 2 25.26% 22.40% 25.08% 25.69% 22.58% 23.87%

Component 3 22.47% 21.35% 17.97% 17.66% 20.53% 21.49%

Total 82.15% 77.59% 74.22% 75.06% 74.91% 76.89%

Principal Axis – varimax rotation

Component 1 32.88% 32.12% 29.13% 29.69% 29.68% 29.71%

Component 2 23.96% 20.22% 23.38% 24.39% 20.49% 21.88%

Component 3 20.53% 19.18% 15.15% 14.18% 18.10% 19.25%

Total 77.37% 71.52% 67.66% 68.26% 68.27% 70.84%

High Emotion  Low Emotion  Major Life Events  Non Major Life Events

Principal Components – no rotation

Component 1 44.53% 60.68% 51.92% 57.78%

Component 2 19.41% 15.18% 16.67% 18.53%

Component 3 13.73% 8.24% 9.42% 9.30%

Total 77.67% 84.10% 78.01% 85.61%

Principal Components – varimax rotation

Component 1 33.42% 33.78% 33.04% 35.30%

Component 2 24.82% 25.56% 24.69% 25.72%

Component 3 19.43% 24.76% 20.28% 24.58%

Total 77.67% 84.10% 78.01% 85.60%

Principal Axis – varimax rotation

Component 1 31.76% 32.36% 30.91% 34.08%

Component 2 23.49% 24.25% 23.45% 25.54%

Component 3 16.51% 23.51% 17.66% 23.26%

Total 71.76% 80.12% 72.02% 82.88%

Detecting CMB

The results of our data collection approach revealed patterns of CMB. We next investigated whether the three most popular techniques to detect CMB were able to detect CMB within the sources of CMB that we experimentally created. Chin et al. (2012) outlined the six most commonly utilized approaches to detect CMB (namely, Harman's Single-Factor Test, the Partial Correlation Technique, the Multitrait-Multimethod approach, the CFA Marker Technique, and the Unmeasured Latent Marker Construct). We focused primarily on those techniques that are broadly utilized in the literature and that are conducted post-hoc, as researchers typically assess CMB post-hoc. Based upon these two criteria, we decided to focus upon Harman's single-factor model approach, the CFA marker technique, and the unmeasured latent method construct (ULMC) technique.

Harman’s Single-Factor Test

Harman’s Single-Factor Test specifies that all of the items in the research model be subjected to an exploratory factor analysis. The test holds that the presence of a substantial amount of Common Method Bias will result in a single factor emerging (Podsakoff et al., 2003).¹ We loaded all of the items into three separate analyses: (a) an unrotated principal component factor analysis, (b) a principal component analysis with varimax rotation, and (c) a principal axis analysis with varimax rotation. Our results (see Table 8) indicate the presence of three distinct factors with eigenvalues greater than 1.0, rather than a single factor.

¹ While Podsakoff et al. (2003) is commonly cited to justify Harman’s test, that article actually states that the method is poor and should not be used. However, given the pervasive use of Harman’s test within the IS discipline, we opted to examine whether this claim is correct.



Interestingly, the survey version with the highest amount of variance explained by a single factor was the baseline survey (55%), while none of the versions into which bias was purposely injected produced a pattern that Harman’s test would flag as biased.
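To make the mechanics concrete, the following is a minimal sketch of the unrotated principal-components variant of the test (the first panel of Table 8). The tooling is our assumption, not part of the original analysis: it uses numpy and pandas, with `items` standing in for a data frame holding one column per survey item.

    import numpy as np
    import pandas as pd

    def harman_single_factor_check(items: pd.DataFrame) -> None:
        """Unrotated principal components over all survey items."""
        corr = items.corr().to_numpy()
        eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending
        pct = 100.0 * eigvals / eigvals.sum()      # % of total variance each
        retained = int((eigvals > 1.0).sum())      # Kaiser criterion
        print(f"Components with eigenvalue > 1.0: {retained}")
        for i in range(retained):
            print(f"Component {i + 1}: {pct[i]:.2f}% of variance")
        # Harman's heuristic flags CMB when a single component emerges or
        # when one component carries the majority of the total variance.

Applied to the baseline sample, such a check would report three components with eigenvalues above 1.0, the first accounting for 55.17% of the variance (Table 8).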

CFA Marker Technique

The CFA Marker Technique argues that CMB can be controlled by partialing out the shared variance in correlations. To operationalize this technique, a marker variable that is theoretically unrelated to the nomological network is injected into the analysis (Lindell et al. 2001; Richardson et al. 2009). An individual-level measure of Willingness to Learn (Rizzuto & Park, under development) was employed to assess one’s openness to, and intrinsic enjoyment of, learning new skills and knowledge. We argue that the Willingness to Learn domain, as a personality trait, is more global than, for example, PIIT, and therefore represents a different concept. Specifically, we posit that it is possible to be open to learning new things while simultaneously being unwilling to try new IT (which is PIIT); the two therefore represent general versus domain-specific personality traits. Willingness to Learn was assessed utilizing the following six items:

I enjoy learning new things.

I like being challenged to learn new skills.

I do not enjoy teaching myself how to apply new work processes.

I consider myself a student to life's lessons.

Continuous learning is important to me.

I would rather apply knowledge and skills I already have than learn something new.
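The partialing logic that underlies the marker idea is easiest to see in its original correlational form (Lindell et al. 2001), in which the marker's smallest correlation with the substantive variables, r_M, is subtracted out of each observed substantive correlation. A minimal sketch follows; the numbers in the usage comment are hypothetical, not taken from our data.

    def cmv_adjusted_correlation(r_xy: float, r_m: float) -> float:
        """Lindell & Whitney (2001)-style correction: partial the marker's
        shared variance (r_m) out of an observed correlation (r_xy)."""
        return (r_xy - r_m) / (1.0 - r_m)

    # Hypothetical example: an observed correlation of 0.45, with a marker
    # correlation of 0.10, shrinks to about 0.389 once CMV is partialed out.
    print(round(cmv_adjusted_correlation(0.45, 0.10), 3))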

To test for the presence of method bias, we followed the comprehensive confirmatory factor analysis (CFA) marker technique proposed by Williams et al. (2010). The first model examined, the CFA Model (see Figure 3), allows for a set of correlations among the three substantive latent variables (PIIT, Benefits, and Attitude) and the marker latent variable, called Method. The purpose of this model is to obtain the factor loadings and measurement error variance estimates for the six marker variable indicators (WTL1–WTL6) for use in subsequent models.

The second model evaluated, the Baseline Model shown in Figure 4, allows the three substantive factors to be correlated but specifies an orthogonal marker latent variable whose indicators have both fixed factor loadings and fixed error variances.

Figure 3. Confirmatory Factor Analysis (CFA) Model
[Diagram: correlated substantive latent variables PIIT (PIIT1–PIIT4), BENEFITS (BENEF1–BENEF4), and ATTITUDE (ATT1–ATT6), plus the marker latent variable METHOD (WTL1–WTL6), with each indicator having its own error term.]


Figure 4. Baseline Model

The estimates from the CFA Model were used as fixed values for the factor loadings and error variances for the marker variable indicators in the Baseline Model.

The third model, the Method-C Model (Figure 5), is similar to the Baseline Model in that the method marker variable is assumed to be orthogonal and the measurement parameters associated with its indicators are fixed. However, the Method-C Model adds factor loadings from the marker latent variable to each of the substantive indicators in the model, with these loadings constrained to be equal. The comparison of the Method-C Model with the Baseline Model provides a test of the presence of method variance associated with the marker variable. The fourth model, the Method-U Model, is identical to the Method-C Model except that the factor loadings from the marker latent variable to the substantive indicators are freely estimated. Finally, the fifth model, the Method-R Model, uses the factor correlations among the substantive factors obtained from the Baseline Model as fixed values in either the Method-C or the Method-U Model (depending upon which is supported).
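In equation form (our notation, not taken from the original specification), the Method-C and Method-U Models differ only in the loadings from the marker latent variable onto the substantive indicators:

Method-C:  $x_{ij} = \lambda_{ij}\,\xi_j + \lambda_{M}\,\eta_{marker} + \delta_{ij}$, with a single common $\lambda_{M}$ for every substantive indicator;

Method-U:  $x_{ij} = \lambda_{ij}\,\xi_j + \lambda_{M,ij}\,\eta_{marker} + \delta_{ij}$, with each $\lambda_{M,ij}$ freely estimated;

in both cases $\mathrm{Cov}(\xi_j, \eta_{marker}) = 0$. The Method-R Model then re-estimates whichever of the two is supported, with the correlations among the $\xi_j$ fixed at their Baseline Model values.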

Results of Model Comparisons

The Baseline Model and the Method-C Model were compared, using the chi-square difference test, to test the null hypothesis that the method factor loadings (assumed to be equal) associated with the marker variable are unrelated to each of the substantive indicators. This hypothesis is rejected in three samples (baseline, ambiguous, and embeddedness), which suggests evidence of method bias in these samples (Table 9). In the “item priming” and “scale format” samples, this hypothesis is accepted; therefore, Common Method Bias does not appear in these cases.

Next, a model comparison was conducted between the Method-C and Method-U Models to determine whether the impact of the method marker variable was equal across all of the substantive indicators. Comparison of these two models tested the null hypothesis that the method factor loadings are equal. In the baseline sample, this hypothesis is accepted, while in the ambiguous and embeddedness samples it is rejected (Table 9).
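Each of these nested-model comparisons reduces to a chi-square difference computation. The sketch below is an illustration, not the authors' code; scipy is assumed, and the example values are the baseline-sample figures reported in Table 9.

    from scipy.stats import chi2

    def chi_square_difference(chi2_restricted: float, df_restricted: int,
                              chi2_free: float, df_free: int):
        """Compare nested SEMs: the restricted model has more degrees of
        freedom and a chi-square at least as large as the freer model's."""
        d_chi2 = chi2_restricted - chi2_free
        d_df = df_restricted - df_free
        return d_chi2, d_df, chi2.sf(d_chi2, d_df)  # (delta chi2, delta df, p)

    # Baseline vs. Method-C in the baseline sample (Table 9):
    # 556.459 - 465.206 = 91.253 with 180 - 179 = 1 df, p < 0.001.
    print(chi_square_difference(556.459, 180, 465.206, 179))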



Figure 5. Method-C, Method-U, and Method-R Models

Table 9. Chi-Square, Goodness-of-Fit Values, and Model Comparison Tests (Marker technique)

Model (“Baseline” Sample)           χ2         df    CFI
CFA                                 493.136    165   0.904
Baseline                            556.459    180   0.890
Method-C                            465.206    179   0.916
Method-U                            450.070    166   0.917
Method-R                            477.933    181   0.913
Chi-Square Model Comparison Tests
∆Models                             ∆χ2        ∆df   p
Baseline vs. Method-C               91.253**   1     0.000
Method-C vs. Method-U               15.136     13    0.301
Method-C vs. Method-R               12.727*    2     0.001

Model (“Ambiguous” Sample)          χ2         df    CFI
CFA                                 380.909    165   0.912
Baseline                            388.192    180   0.915
Method-C                            383.052    179   0.917
Method-U                            344.961    166   0.927
Method-R                            380.909    165   0.912
Chi-Square Model Comparison Tests
∆Models                             ∆χ2        ∆df   p
Baseline vs. Method-C               5.140*     1     0.023
Method-C vs. Method-U               38.091**   13    0.000
Method-U vs. Method-R               0.835      3     0.841



Model (“Scale Format” Sample)       χ2         df    CFI
CFA                                 381.887    165   0.929
Baseline                            382.909    180   0.934
Method-C                            382.345    179   0.934
Method-U                            353.836    166   0.939
Method-R
Chi-Square Model Comparison Tests
∆Models                             ∆χ2        ∆df   p
Baseline vs. Method-C               0.564      1     0.454
Method-C vs. Method-U               28.509*    13    0.019
Method-U vs. Method-R               7.762      3     0.052

Model (“Negative Word” Sample)      χ2         df    CFI
CFA, Baseline, Method-C, Method-U, Method-R: Could not be estimated
Chi-Square Model Comparison Tests
∆Models                             ∆χ2        ∆df   p
Baseline vs. Method-C
Method-C vs. Method-U
Method-U vs. Method-R

Model (“Item Priming” Sample)       χ2         df    CFI
CFA                                 395.438    165   0.898
Baseline                            400.625    180   0.902
Method-C                            400.243    179   0.902
Method-U                            381.954    166   0.904
Method-R
Chi-Square Model Comparison Tests
∆Models                             ∆χ2        ∆df   p
Baseline vs. Method-C               0.382      1     0.537
Method-C vs. Method-U               18.289     13    0.150
Method-U vs. Method-R

Model (“Embeddedness” Sample)       χ2         df    CFI
CFA                                 480.379    165   0.875
Baseline                            491.252    180   0.876
Method-C                            474.222    179   0.883
Method-U                            398.739    166   0.907
Method-R                            411.305    168   0.903
Chi-Square Model Comparison Tests
∆Models                             ∆χ2        ∆df   p
Baseline vs. Method-C               17.030**   1     0.000
Method-C vs. Method-U               75.483**   13    0.000
Method-U vs. Method-R               12.566*    2     0.001

Notes: *p<0.05, **p<0.01; accepted models in bold.


Table 10. Factor Loadings (Standardized Solution)

Item       Baseline Sample   Ambiguous Sample   Embeddedness Sample
           (Method-R)        (Method-U)         (Method-R)
PIIT
  PIIT1    0.702b            0.844b             0.772b
  PIIT2    0.661***          0.833***           0.747***
  PIIT3    0.777***          0.881***           0.896***
  PIIT4    0.814***          0.654***           0.822***
BENEFITS
  BENEF1   0.808b            0.717b             0.623b
  BENEF2   0.829***          0.791***           0.679***
  BENEF3   0.817***          0.782***           0.704***
  BENEF4   0.862***          0.714***           0.630***
ATTITUDE
  ATT1     0.814b            0.824b             0.136b
  ATT2     0.884***          0.836***           0.191***
  ATT3     0.905***          0.922***           0.431***
  ATT4     0.894***          0.892***           0.410***
  ATT5     0.825***          0.882***           0.725***
  ATT6     0.794***          0.835***           0.712***
MARKER VARIABLE
  WTL1     0.849a            0.310a             0.302a
  WTL2     0.696a            0.171a             0.054a
  WTL3     0.056a            0.613a             0.555a
  WTL4     0.608a            0.301a             0.289a
  WTL5     0.792a            0.403a             0.470a
  WTL6     0.117a            0.342a             0.182a
MARKER (METHOD) FACTOR LOADINGS ON SUBSTANTIVE INDICATORS
  PIIT1    0.419***          0.091 (n.s.)       0.140 (n.s.)
  PIIT2    0.376***          -0.004 (n.s.)      0.136 (n.s.)
  PIIT3    0.365***          0.181**            0.186**
  PIIT4    0.403***          0.187**            0.205**
  BENEF1   0.406***          0.337***           0.593**
  BENEF2   0.399***          0.347***           0.629***
  BENEF3   0.413***          0.308***           0.572***
  BENEF4   0.394***          0.325***           0.612***
  ATT1     0.256***          0.394***           0.909***
  ATT2     0.243***          0.315***           0.914***
  ATT3     0.247***          0.167 (n.s.)       0.811***
  ATT4     0.244***          0.093 (n.s.)       0.792***
  ATT5     0.257***          -0.022 (n.s.)      0.613***
  ATT6     0.255***          -0.119 (n.s.)      0.521***

Notes: ***p<0.01; **p<0.05; a: factor loadings taken from the baseline model, entered as fixed values; b: factor loadings fixed to 1; n.s.: not significant.

An additional model comparison was completed to assess the marker variable effects on the factor correlation parameter estimates. A Method-R Model was constructed to perform this test. In the baseline sample, the comparison is made between the Method-C Model and the Method-R Model. Specifically, as shown in Table 9, the comparison yields a chi-square difference of 12.727 with two degrees of freedom, which exceeds the 0.05 chi-square critical value for two degrees of freedom of 5.99. This result indicates that the marker variable effects are significant and equal across the substantive indicators, and that the effects of the marker variable significantly bias the factor correlation estimates.

In the ambiguous and embeddedness samples, the comparison is made between the Method-U Model and the Method-R Model. In the ambiguous sample, the results indicate that the Method-U Model is accepted (Table 9). This indicates that the marker variable effects are significant and differ across the substantive indicators, but that the effects of the marker variable do not significantly bias the factor correlation estimates.



In contrast, in the embeddedness sample, the results indicate that the Method-R Model is accepted. In this case, the marker variable effects are significant and differ across the substantive indicators, and the effects of the marker variable also significantly bias the factor correlation estimates.

The completely standardized factor loadings for the Method-R Model (baseline sample), Method-U Model (ambiguous sample), and Method-R Model (embeddedness sample) are shown in Table 10. All substantive indicators load significantly (p < 0.05) on the constructs they were intended to measure. Furthermore, in the baseline sample, all of the substantive items were contaminated by a source of method variance captured by the marker variable (p < 0.01).

Unmeasured Latent Method Construct (ULMC) Technique

According to the ULMC Technique, a latent variable is utilized to represent a “method” effect. This latent variable is created to partial out the bias in the data. To operationalize the ULMC, a researcher creates a method effect construct that loads on all of the manifest variables utilized in the study but has no unique observed indicators of its own (Podsakoff et al. 2003; Richardson et al. 2009).
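Written out (again in our notation, not the authors’), the ULMC specification mirrors the marker models above except that the method latent variable $\eta_M$ has no indicators of its own; each substantive indicator simply receives a second loading:

$x_{ij} = \lambda_{ij}\,\xi_j + \lambda_{M,ij}\,\eta_M + \delta_{ij}$, with $\mathrm{Cov}(\xi_j, \eta_M) = 0$.

The Baseline vs. ULMC comparison then tests whether freeing the $\lambda_{M,ij}$ significantly improves model fit.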

Figure 6. Baseline Model
[Diagram: correlated latent variables PIIT (PIIT1–PIIT4), BENEFITS (BENEF1–BENEF4), and ATTITUDE (ATT1–ATT6), each indicator with its own error term.]

Figure 7. ULMC Model
[Diagram: the same measurement structure as Figure 6, plus a METHOD latent variable.]


Table 11. Chi-Square, Goodness-of-Fit Values, and Model Comparison Tests (ULMC Technique)

Model (“Baseline” Sample)           χ2         df    CFI
Baseline                            307.469    75    0.921
ULMC                                95.477     61    0.988
∆Models                             ∆χ2        ∆df   p
Baseline vs. ULMC                   211.992**  14    0.000

Model (“Ambiguous” Sample)          χ2         df    CFI
Baseline                            259.813    75    0.922
ULMC                                Could not be estimated
∆Models                             ∆χ2        ∆df   p
Baseline vs. ULMC

Model (“Scale Format” Sample)       χ2         df    CFI
Baseline                            296.220    75    0.927
ULMC                                125.195    61    0.979
∆Models                             ∆χ2        ∆df   p
Baseline vs. ULMC                   171.025**  14    0.000

Model (“Negative Word” Sample)      χ2         df    CFI
Baseline                            213.704    75    0.917
ULMC                                Could not be estimated
∆Models                             ∆χ2        ∆df   p
Baseline vs. ULMC

Model (“Item Priming” Sample)       χ2         df    CFI
Baseline                            279.762    75    0.906
ULMC                                106.709    61    0.979
∆Models                             ∆χ2        ∆df   p
Baseline vs. ULMC                   173.053**  14    0.000

Model (“Embeddedness” Sample)       χ2         df    CFI
Baseline                            365.470    75    0.881
ULMC                                121.960    61    0.975
∆Models                             ∆χ2        ∆df   p
Baseline vs. ULMC                   243.510**  14    0.000

Notes: **p<0.01; accepted models in bold.

As with the CFA marker approach, nested models are compared to formally detect CMB. The first model estimated, the Baseline Model, is represented in Figure 6. The second model, the ULMC Model, is identical to the first except that paths from the method construct to all of the independent and dependent construct manifest indicators are added (see Figure 7). The Baseline Model and the ULMC Model were compared to test for the presence of CMV. In the baseline, scale format, item priming, and embeddedness samples, the ULMC Model fit better than the Baseline Model, indicating evidence of bias attributable to CMB (Table 11).
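For instance, the baseline-sample comparison in Table 11 can be reproduced with the same chi-square difference logic sketched earlier (scipy assumed):

    from scipy.stats import chi2

    # Baseline vs. ULMC in the baseline sample (Table 11):
    d_chi2 = 307.469 - 95.477    # = 211.992
    d_df = 75 - 61               # = 14
    print(chi2.sf(d_chi2, d_df))  # effectively zero, matching p = 0.000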


Discussion

This research intended to answer two questions: (1) What is the impact of different sources of CMB on measurement and structural models? and (2) Are the three commonly used approaches able to detect the sources of CMB differentially? Our results indicate that when we manipulated the PIIT construct, there were significant differences in the measurement model between the baseline sample and the samples affected by some kind of manipulation, particularly ambiguous questions, scale format, and negative words. In contrast, we did not observe important changes in the measurement model of the PIIT construct in the item priming and embeddedness samples. In addition, there were significant changes in other, non-manipulated items in most of the CMB source samples. Regarding the structural model, we found evidence of significant impacts on path coefficients in four of the five manipulations of the PIIT construct. Moreover, when we analyzed the two non-manipulated sources of bias, there was significant evidence of change only in the measurement models in both cases. This result is consistent with the findings of Williams and Anderson (1994) regarding negative affectivity (emotionality) as a source of artifactual covariance. Therefore, we can conclude that CMB influences our models differentially, and that the pattern of influence depends upon the error source.

As we compare the approaches to detect CMB across the different sources of bias (Table 12), Harman’s single-factor test was not able to detect any kind of CMB impact. However, both the CFA Marker and the ULMC techniques showed some evidence of CMB in most cases where we had previously detected significant impacts on measurement and structural models (Table 12). Specifically, the Method-U marker model detected the bias of ambiguous questions, and the Method-R marker model detected bias in the baseline sample and the embeddedness manipulation. However, the Marker technique did not detect bias in the cases of item priming and scale format, and in the negative word manipulation the model could not be estimated. Furthermore, the ULMC was able to partial out bias in four of the models (baseline, scale format, item priming, and embeddedness), but not in the other two (ambiguous questions and negative words). Interestingly, both the Marker and ULMC techniques found evidence of CMB in the baseline sample. Most likely, this bias can be attributed to the influence of positive and negative affectivity and to the transient mood states of the survey respondents. In fact, our analysis revealed significant impacts on some loadings (measurement model) when comparing the high and low emotion subsamples, as well as the major and non-major life event subsamples.

Having even greater implications for the IS literature is the finding that the Marker and ULMC techniques could provide evidence of both the intentionally injected CMB error and the influence of a non-manipulated source of bias. From this finding, three potential conclusions can be drawn. First, some of the analytic tools advised in the literature for assessing CMB, particularly the Marker and ULMC techniques, can be useful for estimating different types of bias. Alternatively, previous claims about CMB being an “urban legend” (Meade et al. 2007; Spector 2006) may not be founded. Finally, given the detected impact of CMB using an a posteriori approach, researchers should strive to prevent CMB during the research design phase by using an a priori approach. Additional research is necessary to differentiate the merit of these three potential conclusions.

Table 12. Comparison of potential impact of sources of CMB and results from CMB Techniques

Sources of CMB         Impact of sources of CMB                                            Harman         Marker                    ULMC
Baseline:
  Emotions             Significant impact on measurement model                             Not detected   Detected (Method-R)       Detected
  Major Life Events    Significant impact on measurement model                             Not detected   Detected (Method-R)       Detected
Ambiguous Questions    Significant impact on measurement and structural models             Not detected   Detected (Method-U)       Could not be estimated
Scale Format           Significant impact on measurement and structural models             Not detected   Not detected              Detected
Negative Word          Significant impact on measurement model                             Not detected   Could not be estimated    Could not be estimated
Item Priming           Significant impact on structural model                              Not detected   Not detected              Detected
Embeddedness           Significant impact on measurement model and one path coefficient    Not detected   Detected (Method-R)       Detected


The results of our study have implications for both researchers and reviewers. First, our study marks the end of the debate over whether CMB truly exists: our findings show that bias is implicit in the work that we do as academics, and as such CMB is far from an urban legend. Given the dangers of CMB, we offer a series of reflections that seek to minimize the impact of the bias on us as researchers. While these reflections are not inherently new within the realm of survey development, the demonstrable impact of survey creation upon the resulting bias shows that such reflections have the power not just to change how our respondents read our surveys, but to clarify our results as well.

Reflection #1: Ambiguous questions do not simply impact the construct containing the ambiguous item. As researchers, there is a tendency to believe that an ambiguous item will only influence that particular item and that we can control for it by removing the item from our analysis. However, our results demonstrate that the ambiguous item influenced not only its own construct, but also another construct at the measurement model level, thereby resulting in a significant reduction in structural paths for the model overall. The implication for researchers is that card sorting or other pre-testing methodologies with the target population are critical to ensure that items are not ambiguous. For journal editors and reviewers, this finding highlights the need for full disclosure of our surveys during the publication process. While many of us report on a subset of a survey in our experiments (and collect more data than we report), this finding highlights that there could be constructs beyond those reported that influence the results of our work. Even if the full survey is not included in the published paper, we would encourage review teams to ask for broader disclosure of the entire survey or experiment during the review process.

Reflection #2: Changing a scale format impacts the analysis, both at the measurement and structural levels. While we know that changing scale format mathematically affects SEM, this finding demonstrates the need for consistency not only with regard to the number of scale points, but also the labels. For researchers, we suggest the need for consistency in scale anchors and formats when an author attempts to build upon the work of others. Furthermore, the alteration of an anchor or of the number of points in a scale requires demonstrable proof that there is no impact on the measurement or structural models. For the review process, there are two implications. The first is that review panels need to ensure that any items that draw upon past work use the same anchors and number of points as the originally published work; if not, the burden is upon the authors to demonstrate equivalence. The second is to ensure clarity in the operationalization of scales so that scales are used consistently in the future.

Reflection #3: Reverse and neutrally coded items influence structural results. Traditional survey development encourages researchers to include reverse coded items to ensure that respondents are paying attention to surveys. However, our results call into question the impact of reverse coded items. For researchers, we encourage others to further investigate the impact of reverse coded items and to use caution with this approach. For review teams, we would discourage the use of reverse coded items or require a justification for why they are necessary.

Reflection #4: Item priming does influence results at the structural level. As with our other findings, for both researchers and review teams, this finding highlights the need for disclosure of the operationalization of a survey so that others can limit bias in future survey implementations. We particularly highlight the need to report how each construct was introduced within a survey so that future research can mirror how the construct was presented to the respondent.

Reflection #5: External factors influence our results. In highlighting how emotions and major life events influenced the attitude construct, it would be natural to argue that we need to control for emotions and major life events in all of our surveys. We do not believe this to be true. The counter-argument is that, with a sufficient sample size, these influences will not cause significant disruption in our results. We concur with this assessment, but urge researchers to reflect upon whether any constructs in a survey might be affected by external factors. Furthermore, we encourage future replication work to examine critical constructs within the field to determine whether they have the potential to be influenced by emotions or major life events. We believe that this is a fruitful area for future research.

Reflection #6: Harman’s one-factor test is unable to detect CMB. Our finding that Harman’s test could not detect any of the influences of CMB suggests that this approach should not be utilized in the future to demonstrate a lack of CMB. For both researchers and review teams, we believe that other, better approaches need to be used.

Reflection #7: The use of a single technique to demonstrate a lack of CMB is insufficient. Our results found that the ULMC was able to detect some, but not all, types of CMB, and that the same was true of the marker technique. This has two implications. The first is that all researchers should, a priori, include a marker variable when designing a survey for the purpose of detecting CMB, and that this should be a step in the design process. The second is that utilizing only one approach (either marker or ULMC) is insufficient, and that both approaches should be used and reported in our journals.

This work does not come without limitations. First, Embeddedness (S5) affects only one item, Ambiguous Questions (S1) and Negative Word (S3) impact two items (PIIT2 and PIIT4), and Scale Format (S2) and Item Priming (S4) affect all four items of PIIT; thus, the impact that we see may be due to the particular manipulations that were created. Next, our scales measuring life events and mood states are temporally based and contingent upon the timing of these events relative to our measurement. Third, we manipulated only one construct to examine the impact of CMB; future work could examine the effects of multiple constructs being affected differentially. Fourth, our manipulations were necessarily artificial (e.g., the neutral item in embeddedness), which might have influenced our results. We encourage other researchers to manipulate scale items to highlight how these manipulations influence effect sizes. Finally, an argument could be made that our marker variable could theoretically have an impact upon the constructs in our model and that a different marker variable would be able to detect CMB differentially. We hope other researchers will examine the influence of marker variables upon the detection of CMB.

We urge other researchers to consider these issues as a path towards future research. Nonetheless, we posit that these findings contribute towards our understanding of CMB in interesting ways, and we advise others to consider the impact of CMB on their research findings.

References

Agarwal, R., and Prasad, J. 1998. "A Conceptual and Operational Definition of Personal Innovativeness in the Domain of Information Technology," Information Systems Research (9:2), pp 204-215.

Baumgartner, H., and Steenkamp, J. B. 2001. "Response styles in marketing research: A cross-national investigation," Journal of Marketing Research (38:2), pp 143-156.

Baumgartner, H., and Weijters, B. 2012. "Commentary on “Common Method Bias in Marketing: Causes, Mechanisms, and Procedural Remedies”," Journal of Retailing (88:4), pp 563-566.

Burton-Jones, A. 2009. "Minimizing method bias through programmatic research," MIS Quarterly (33:3), pp 445-471.

Campbell, D. T., and Fiske, D. W. 1959. "Convergent and discriminant validation by the multitrait-multimethod matrix," Psychological Bulletin (56:2), pp 81-105.

Chin, W. W. 1998. "The partial least squares approach for structural equation modeling," in Modern methods for business research, G. A. Marcoulides (ed.), Erlbaum: Mahwah, NJ, pp. 295-336.

Chin, W. W., Thatcher, J. B., and Wright, R. T. 2012. "Assessing Common Method Bias: Problems With the ULMC Technique," MIS Quarterly (36:3), pp 1003-1019.

Clements, K., and Turpin, G. 1996. "The Life Events Scale For Students: Validation For Use With British Samples," Personality and Individual Differences (20:6), pp 747-751.

Crampton, S. M., and Wagner, J. A. 1994. "Percept-percept inflation in microorganizational research: An investigation of prevalence and effect," Journal of Applied Psychology (79:1), pp 67-76.

Dwyer, D., Ringstaff, C., and Sandholtz, J. 1991. "Changes in teachers’ beliefs and practices in technology-rich classrooms," Educational Leadership (48:8), pp 45-52.

Feldman, J. M., and Lynch, J. G. 1998. "Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior," Journal of Applied Psychology (73:3), pp 421-435.

Gefen, D., Rigdon, E. E., and Straub, D. W. 2011. "An Update and Extension to SEM Guidelines for Administrative and Social Science Research," MIS Quarterly (35:2), pp iii-xiv.

Hufnagel, E. M., and Conca, C. 1994. "User response data: The potential for errors and biases," Information Systems Research (5:1), pp 48-73.

Jackson, D. N. 1967. "Acquiescence response styles: Problems of identification and control," in Response set in personality assessment, I. A. Berg (ed.), Aldine: Chicago, IL.

Kemery, E. R., and Dunlap, W. P. 1982. "Partialling factor scores does not control method variance: A reply to Podsakoff and Todor," Journal of Management (12:4), pp 525-544.

Lewis, W., Agarwal, R., and Sambamurthy, V. 2003. "Sources of Influence on Beliefs about Information Technology Use: An Empirical Study of Knowledge Workers," MIS Quarterly (27:4), pp 657-678.

Lindell, M. K., and Whitney, D. J. 2001. "Accounting for common method variance in cross-sectional research designs," Journal of Applied Psychology (86:1), pp 114-121.

MacKenzie, S. B., and Podsakoff, P. M. 2012. "Common Method Bias in Marketing: Causes, Mechanisms, and Procedural Remedies," Journal of Retailing (88:4), pp 542-555.

Malhotra, N. K., Kim, S. S., and Patil, A. 2006. "Common Method Variance in IS Research: A Comparison of Alternative Approaches and a Reanalysis of Past Research," Management Science (52:12), pp 1865-1883.

Meade, A. W., Watson, A. M., and Kroustalis, C. M. 2007. "Assessing common methods bias in organizational research," paper presented at the 22nd Annual Meeting of the Society for Industrial and Organizational Psychology, New York, NY, pp. 1-6.

Ostroff, C., Kinicki, A. J., and Clark, M. A. 2002. "Substantive and operational issues of response bias across levels of analysis: An example of climate-satisfaction relationships," Journal of Applied Psychology (87:2), pp 355-368.

Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., and Podsakoff, N. P. 2003. "Common Method Biases in behavioral research: A critical review of the literature and recommended remedies," Journal of Applied Psychology (88:5), pp 879-903.

Richardson, H. A., Simmering, M. J., and Sturman, M. C. 2009. "A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance," Organizational Research Methods (12:4), pp 762-800.

Rizzuto, T.E. & Park, S. (under development). Predicting training behaviors through the analysis of Willingness to Learn: A construct validation study. Human Resource Development Quarterly.

Rorer, L. G. 1965. "The great response-style myth," Psychological Bulletin (63), pp 129-156.

Sharma, R., Yetton, P., and Crawford, J. 2009. "Estimating the effect of common method variance: the method–method pair technique with an illustration from TAM research," MIS Quarterly (33:3), pp 473-490.

Spector, P. E. 2006. "Method Variance in Organizational Research: Truth or Urban Legend?," Organizational Research Methods (9:2), pp 221-232.

Spector, P. E., and Brannick, M. T. 2010. "Common method issues: An introduction to the feature topic in organizational research methods," Organizational Research Methods (13:3), pp 403-406.

Straub, D., Boudreau, M. C., and Gefen, D. 2004. "Validation guidelines for IS positivist research," Communications of the Association for Information Systems (13), pp 380-427.

Viswanathan, M., and Kayande, U. 2012. "Commentary on 'Common Method Bias in Marketing: Causes, Mechanisms, and Procedural Remedies'," Journal of Retailing (88:4), pp 556-562.

Watson, D., Clark, L. A., and Tellegen, A. 1988. "Development and validation of brief measures of positive and negative affect: the PANAS scales," Journal of Personality and Social Psychology (54:6), pp 1063-1070.

Westland, C. J. 2010. "Lower bounds on sample size in structural equation modeling," Electronic Commerce Research and Applications (9:6), pp 476-487.

Williams, L. J., and Anderson, S. E. 1994. "An alternative approach to method effects by using latent-variable models: Applications in organizational behavior research," Journal of Applied Psychology (79:3), pp 323-331.

Williams, L. J., Hartman, N., and Cavazotte, F. 2010. "Method Variance and Marker Variables: A Review and Comprehensive CFA Marker Technique," Organizational Research Methods (13:3), pp 477-514.

Wold, H. 1985. "Systems analysis by partial least squares," in Measuring the Unmeasurable, P. Nijkamp, Leitner, H., & Wrigley, N. (ed.), Martinus Nijhoff: Dordrecht, The Netherlands, pp. 221-251.

About the Authors

Dr. Andrew Schwarz is a Professor of Information Systems in the E. J. Ourso College of Business at Louisiana State University. His research interests focus on the adoption of new technology, IT-business alignment, and IT outsourcing. Previous work by Dr. Andrew Schwarz has been published in MIS Quarterly, Information Systems Research, the Journal of AIS, the European Journal of Information Systems, and others. Prior to his career in academia, Andrew was a consultant in the market research industry, working on projects for Fortune 100 companies on topics related to market segmentation, brand imaging, and brand awareness.

Dr. Tracey Rizzuto is Associate Director of the Louisiana State University School for Human Resource Education and Workforce Development (SHREWD). She received her PhD from Penn State University in Industrial-Organizational Psychology with a minor concentration in Information Systems and Technology. The overarching focus of her research program is on developing human capital and organizational capacity through technology-mediated processes with the goal of increasing access to the knowledge, expertise, and resources in the workplace. Her research appears in scholarly journals across multiple disciplines including psychology, management, information systems, sociology, and education, and has been featured in popular media outlets including The New York Times, National Public Radio's Marketplace and American Public Radio Works segments, The APA Monitor on Psychology, and TED.com.

The DATA BASE for Advances in Information Systems 117 Volume 48, Number 1, February 2017

Page 26: Examining Abstract - Louisiana State Universityextent that Common Method Variance (or CMV) creates the bias (Ostroff et al. 2002). Since self-report surveys are the most frequently

Colleen Carraher-Wolverton is the Home Bank/BORSF Endowed Assistant Professor at the University of Louisiana at Lafayette. Her research interests include adoption of new technology, IT outsourcing, distance learning, and creativity with IT. Previous work by Dr. Colleen Carraher-Wolverton has been published in the Journal of Information Technology, European Journal of Information Systems, Information & Management, Communications of the AIS, Journal of Organizational and End User Computing, Journal of Information Technology Theory and Application, Journal of Management History, and Small Group Research. She is an Associate Editor at Information & Management and a Senior Editor at The DATA BASE for Advances in Information Systems. Prior to her career in academia, Colleen was an IT professional, working as a project manager for an IT development firm, an IT analyst for a Fortune 50 oil and gas company, and in organizational development in an IT department.

José L. Roldán is a Professor of Management in the Department of Business Administration and Marketing at the Universidad de Sevilla (Spain). His current research interests include technology acceptance models, business intelligence, knowledge management, organizational culture, and partial least squares (PLS). His recent contributions have been published in European Journal of Operational Research, International Journal of Project Management, British Journal of Management, Journal of Business Research, International Business Review, European Journal of Information Systems, Computers in Human Behavior, and Industrial Marketing Management, among others. He has served as Guest Editor of the European Journal of Information Systems (EJIS), Journal of Business Research, and European Management Journal.

Ramón Barrera-Barrera is an Assistant Professor in the Management and Marketing Department at the University of Seville (Spain). He holds a PhD in Services Marketing. His research interests include service quality, customer satisfaction and customer loyalty in electronic markets, international and industrial marketing, and tourist behavior. He has published in Spanish academic journals and several conference proceedings, contributed to various handbooks, and published in international journals such as The Service Industries Journal, Educational Studies, Revista Europea de Dirección y Economía de la Empresa, and International Journal of Retail & Distribution Management, among others.


Appendix A. Manipulation of PIIT

Scale format
- Baseline, Ambiguous, Negatively Worded Item, Embeddedness, and Item Priming conditions: 1-Strongly Disagree, 2-Disagree, 3-Neither Agree Nor Disagree, 4-Agree, 5-Strongly Agree
- Scale Format condition: -3 to +3

Introduction to the section
- Baseline, Ambiguous, Negatively Worded Item, and Embeddedness conditions: "For the following questions, please indicate the extent to which you agree with the following statements."
- Item Priming condition: "This next section of questions is going to focus specifically on your own level of innovativeness. In other words, how innovative do you think that you are? Answering more positively indicates that you feel that you are more innovative."
- Scale Format condition: "When a new information technology comes out, I…"

PIIT1
- Baseline, Ambiguous, Negatively Worded Item, Embeddedness, and Item Priming: "If I heard about a new information technology, I would look for ways to experiment with it."
- Scale Format: "Do not actively look for ways to experiment with it / Actively look for ways to experiment with it"

PIIT2
- Baseline and Item Priming: "Among my peers, I am usually the first to try out new information technologies."
- Ambiguous: "Among my peers, I am usually the first to play around with, try out, and experiment with new information technologies."
- Negatively Worded Item: "Among my peers, I am usually the last to try out new information technologies."
- Embeddedness: "Among my peers, I am usually among the first to try out new information technologies."
- Scale Format: "Am among the last to try out / Am among the first to try out"

PIIT3
- Baseline, Ambiguous, Negatively Worded Item, Embeddedness, and Item Priming: "In general, I am eager to try out new information technologies."
- Scale Format: "Am hesitant to try out / Am eager to try it out"

PIIT4
- Baseline, Embeddedness, and Item Priming: "I like to experiment with new information technologies."
- Ambiguous: "I like to experiment with, try out, and play around with new information technologies."
- Negatively Worded Item: "I do not like to experiment with new information technologies."
- Scale Format: "Do not like to experiment with it / Like to experiment with it"
