Measurement of Variables: Operational Definitions and Scales
Independent and Dependent Variables
Operational Definitions
Evaluating Operational Definitions
Planning the Method Section
What is an independent variable?
Independent and Dependent Variables
An independent variable (IV) is the variable (antecedent condition) an experimenter intentionally manipulates.
Levels of an independent variable are the values of the IV created by the experimenter. An experiment requires at least two levels.
Explain confounding.
An experiment is confounded when the value of an extraneous variable systematically changes along with the independent variable.
For example, we could confound our experiment if we ran experimental subjects in the morning and control subjects at night.
What is a dependent variable?
A dependent variable is the outcome measure the experimenter uses to assess the change in behavior produced by the independent variable.
The dependent variable depends on the value of the independent variable.
What is an operational definition?
Operational Definitions
An operational definition specifies the exact meaning of a variable in an experiment by defining it in terms of observable operations, procedures, and measurements.
What is an operational definition?
An experimental operational definition specifies the exact procedure for creating values of the independent variable.
A measured operational definition specifies the exact procedure for measuring the dependent variable.
What are the properties of a nominal scale?
Evaluating Operational Definitions
A nominal scale assigns items to two or more distinct categories that can be named using a shared feature, but does not measure their magnitude.
Example: you can sort canines into friendly and shy categories.
What are the properties of an ordinal scale?
An ordinal scale measures the magnitude of the dependent variable using ranks, but does not assign precise values.
Example: a runner's place in a marathon tells us the runner's relative speed, but not his or her precise speed.
What are the properties of an interval scale?
An interval scale measures the magnitude of the dependent variable using equal intervals between values with no absolute zero point.
Example: degrees Celsius or Fahrenheit and Sarnoff and Zimbardo’s (1961) 0-100 scale.
What are the properties of a ratio scale?
A ratio scale measures the magnitude of the dependent variable using equal intervals between values and an absolute zero.
This scale allows us to state that 2 meters are twice as long as 1 meter.
Example: distance in meters or time in seconds.
What does reliability mean?
Reliability refers to the consistency of experimental operational definitions and measured operational definitions.
Example: a reliable bathroom scale should display the same weight if you measure yourself three times in the same minute.
Explain interrater reliability.
Interrater reliability is the degree to which observers agree in their measurement of the behavior.
Example: the degree to which three observers agree when scoring the same personal essays for optimism.
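Interrater reliability can be indexed in several ways; a minimal sketch using simple percent agreement between two raters (all ratings are invented for illustration):

```python
# Hypothetical illustration: percent agreement between two raters
# who categorized the same five essays as optimistic or pessimistic.
rater_a = ["optimistic", "pessimistic", "optimistic", "optimistic", "pessimistic"]
rater_b = ["optimistic", "pessimistic", "optimistic", "pessimistic", "pessimistic"]

# Count the essays on which the raters agree, then divide by the total.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(percent_agreement)  # 0.8 (the raters agree on 4 of 5 essays)
```

Percent agreement is the simplest index; more sophisticated measures (e.g., Cohen's kappa) correct for chance agreement.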
Explain test-retest reliability.
Test-retest reliability is the degree to which a person's scores are consistent across two or more administrations of the same measurement procedure.
Example: highly correlated scores on the Wechsler Adult Intelligence Scale-Revised when it is administered twice, 2 weeks apart.
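Test-retest reliability is commonly quantified as the correlation between the two administrations; a minimal sketch of a Pearson correlation on invented scores:

```python
# Hypothetical illustration: test-retest reliability as the Pearson
# correlation between two administrations of the same test.
# All scores are invented for demonstration.

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

time1 = [98, 110, 105, 122, 90, 115]   # first administration
time2 = [100, 108, 107, 120, 92, 113]  # same subjects, two weeks later
print(round(pearson_r(time1, time2), 2))
```

A correlation near 1.0 indicates high test-retest reliability; a correlation near 0 indicates that scores from the two administrations are unrelated.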
Explain interitem reliability.
Interitem reliability measures the degree to which different parts of an instrument (questionnaire or test) that are designed to measure the same variable achieve consistent results.
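One common index of interitem reliability is Cronbach's alpha, which compares the summed item variances to the variance of the total scores; a minimal sketch on invented item scores:

```python
# Hypothetical illustration: Cronbach's alpha as an index of
# interitem reliability. All item scores are invented.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item,
    each list holding one score per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total per respondent
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

items = [
    [4, 5, 3, 5, 2],  # item 1 scores for five respondents
    [4, 4, 3, 5, 2],  # item 2
    [5, 5, 2, 4, 3],  # item 3
]
print(round(cronbach_alpha(items), 2))
```

When the items all measure the same variable, respondents who score high on one item tend to score high on the others, which inflates the total-score variance relative to the item variances and pushes alpha toward 1.0.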
What does validity mean?
Validity is the degree to which the operational definition accurately manipulates the independent variable or measures the dependent variable.
What is face validity?
Face validity is the degree to which the validity of a manipulation or measurement technique is self-evident. This is the least stringent form of validity.
For example, using a ruler to measure pupil size.
What is content validity?
Content validity is the degree to which a measurement procedure accurately samples the content of the dependent variable.
Example: an exam over chapters 1-4 that only contains questions about chapter 2 has poor content validity.
What is predictive validity?
Predictive validity is the degree to which a measurement procedure accurately predicts future performance.
Example: the ACT has predictive validity if its scores are significantly correlated with college GPA.
What is construct validity?
Construct validity is how accurately an operational definition represents a construct.
Example: a construct of abusive parents might include their perception of their neighbors as unfriendly.
Explain internal validity.
Internal validity is the degree to which changes in the dependent variable across treatment conditions were due to the independent variable.
Internal validity establishes a cause-and-effect relationship between the independent and dependent variables.
Explain the problem of confounding.
Confounding occurs when an extraneous variable systematically changes across the experimental conditions.
Example: a study comparing the effects of meditation and prayer on blood pressure would be confounded if one group exercised more.
Explain history threat.
History threat occurs when an event outside the experiment threatens internal validity by changing the dependent variable.
Example: subjects in group A were weighed before lunch while those in group B were weighed after lunch.
Explain maturation threat.
Maturation threat is produced when physical or psychological changes in the subject threaten internal validity by changing the DV.
Example: boredom may increase subject errors on a proofing task (DV).
Explain testing threat.
Testing threat occurs when prior exposure to a measurement procedure affects performance on this measure during the experiment.
Example: experimental subjects used a blood pressure cuff daily, while control subjects only used one during a pretest measurement.
Explain instrumentation threat.
Instrumentation threat occurs when changes in the measurement instrument or measuring procedure threaten internal validity.
Example: reaction time measurements that became less accurate during the experimental conditions than during the control conditions.
Explain statistical regression threat.
Statistical regression threat occurs when (1) subjects are assigned to conditions on the basis of extreme scores, (2) the measurement procedure is not completely reliable, and (3) subjects are retested using the same procedure to measure change on the dependent variable.
Explain selection threat.
Selection threat occurs when individual differences are not balanced across treatment conditions by the assignment procedure.
Example: despite random assignment, subjects in the experimental group were more extroverted than those in the control group.
Explain subject mortality threat.
Subject mortality threat occurs when subjects drop out of experimental conditions at different rates.
Example: even if subjects in each group started out with comparable anxiety scores, differential dropout could produce differences on this variable.
Explain selection interactions.
Selection interactions occur when a selection threat combines with at least one other threat (history, maturation, statistical regression, subject mortality, or testing).
What is the purpose of the Method section of an APA report?
Planning the Method Section
The Method section of an APA research report describes the Participants, Apparatus or Materials, and Procedure of the experiment.
This section provides the reader with sufficient detail (who, what, when, and how) to exactly replicate your study.
When is an Apparatus section needed?
An Apparatus section of an APA research report is appropriate when the equipment used in a study was unique or specialized, or when we need to explain the capabilities of more common equipment so that the reader can better evaluate or replicate the experiment.