Research design fw 2011

Comparing Research Designs
Tiffany Smith, Eric Heidel, & Patrick Barlow
Research and Statistical Design Consultants

Description

A brief overview of epidemiological research designs as well as survey construction.

Transcript of Research design fw 2011

Page 1

Comparing Research Designs
Tiffany Smith, Eric Heidel, & Patrick Barlow
Research and Statistical Design Consultants

Page 2

Comparing Research Designs

• Cohort Studies
• Case-Control Studies
• Cross-Sectional Studies
• Developing Survey Instruments

Page 3

Cohort Studies

A “cohort” is a group of individuals who are followed or traced over a period of time. A cohort study analyzes an exposure/disease relationship within the entire cohort.

Page 4

Cohort Design

Page 5

Prospective versus Retrospective Cohort Studies

Prospective: exposure is assessed at the beginning of the study, and participants are followed into the future for the outcome.

Retrospective: exposure is assessed at some point in the past, and the outcome has already occurred.

Page 6

Advantages and Disadvantages of Cohort Studies

Advantages
• Establish population-based incidence
• Accurate relative risk
• Examine rare exposures
• Temporal relationship inferred
• Time-to-event analysis possible
• Used when randomization is not possible
• Quantifiable risk magnitude
• Reduces biases (selection, information)
• Can study multiple outcomes

Disadvantages
• Lengthy and costly
• May require very large samples
• Not suitable for rare or long-latency diseases
• Unexpected environmental changes
• Nonresponse, migration, and loss to follow-up
• Sampling, ascertainment, and observer biases
• Changes over time in staff/methods
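The “accurate relative risk” advantage above can be made concrete: because a cohort design yields incidence in both exposure groups, the relative risk can be computed directly from the two incidence proportions. A minimal sketch in Python, using hypothetical counts:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk: incidence among the exposed divided by incidence among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 30 of 1,000 exposed and 10 of 1,000 unexposed develop the disease.
rr = relative_risk(30, 1000, 10, 1000)
print(f"Relative risk: {rr:.2f}")  # prints Relative risk: 3.00
```

A relative risk of 3 reads as "the exposed group has three times the incidence of the unexposed group," the quantifiable risk magnitude the slide refers to.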

Page 7

Case-Control Studies

A sample with the disease is selected from a population (cases). A sample without the disease is selected from a population (controls). Various predictor variables are selected.

Page 8

Strategies for Sampling Controls

• Hospital- or clinic-based controls
• Matching
• Using a population-based sample of cases
• Using two or more control groups

Page 9

Advantages and Disadvantages of Case-Control Studies

Advantages
• High information yield with few participants
• Useful for rare outcomes

Disadvantages
• Cannot estimate incidence/prevalence of disease
• Limited outcomes can be studied
• Highly susceptible to biases
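Because a case-control study cannot estimate incidence, the exposure/disease association is usually summarized with an odds ratio rather than a relative risk. A minimal sketch in Python, using hypothetical counts from a 2x2 table:

```python
def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds ratio from the 2x2 table: (a * d) / (b * c)."""
    return (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Hypothetical: 40 exposed / 60 unexposed among cases; 20 exposed / 80 unexposed among controls.
or_estimate = odds_ratio(40, 60, 20, 80)
print(f"Odds ratio: {or_estimate:.2f}")  # prints Odds ratio: 2.67
```

When the outcome is rare, the odds ratio approximates the relative risk, which is one reason the design suits rare outcomes.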

Page 10

For Discussion

“How much does a family history of lung cancer increase the risk of lung cancer?” The PI plans a case-control study to answer this question. How should she pick the cases? How should she pick the controls? What are some potential sources of bias in the sampling of cases and controls?

Page 11

Cross-Sectional Studies

• “Snapshot” of a population.
• Cross-sectional studies include surveys.
• People are studied at a “point” in time, without follow-up.
• Data are collected all at the same time (or within a short time frame).
• Can measure attitudes, beliefs, behaviors, personal or family history, genetic factors, existing or past health conditions, or anything else that does not require follow-up to assess.
• The source of most of what we know about the population.

Page 12

Advantages and Disadvantages of Cross-Sectional Studies

Advantages
• Fast and inexpensive
• No loss to follow-up
• Springboard to expand/inform the research question
• Can target a larger sample size

Disadvantages
• Can’t determine causal relationships
• Impractical for rare diseases
• Risk of nonresponse
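A cross-sectional snapshot yields prevalence (the proportion with the condition at the time of the survey) rather than incidence. A minimal sketch in Python, with hypothetical numbers and a standard normal-approximation (Wald) confidence interval:

```python
import math

def prevalence(cases, n, z=1.96):
    """Point prevalence with a normal-approximation (Wald) confidence interval."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    return p, (p - z * se, p + z * se)

# Hypothetical survey: 150 of 1,000 respondents report the condition.
p, (lo, hi) = prevalence(150, 1000)
print(f"Prevalence: {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Prevalence alone says nothing about whether exposure preceded disease, which is why the design cannot establish causal relationships.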

Page 13

Instrumentation and Results: Junk In = Junk Out

Page 14

Steps in Assembling the Instruments for the Study

1. Identify the purpose and focus of the study
2. Obtain feedback from experts to clarify the purpose and focus
3. Identify the research methodology and type of instrument to use for data collection
4. Begin to formulate questions or items
5. Pretest items and preliminary draft
6. Revise instrument based on feedback
7. Pilot test and revise
8. Administer final instrument and analyze and report results

Page 15

Reliability & Validity (Colton & Covert, 2007)

Validity: The extent to which we measure what we purport to measure (synonym: accuracy). Types: Face, Concurrent, Predictive, Convergent, Discriminant.

Reliability: The extent to which an instrument produces the same information at a given time or over a period of time (synonyms: stable, dependable, repeatable, consistent, constant, regular). If the instrument is reliable, we would expect a patient who receives a high score the first time he/she completes the instrument to receive a high score the next time he/she completes it (all things being equal).
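The test-retest notion of reliability described above is often quantified as the correlation between scores from two administrations of the same instrument: high scorers should stay high scorers. A minimal sketch in Python (a plain Pearson correlation, with hypothetical scores):

```python
def pearson_r(x, y):
    """Pearson correlation between two sets of scores (e.g. test and retest)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores from two administrations of the same instrument.
first = [10, 14, 18, 22, 30]
second = [11, 13, 19, 23, 29]
r = pearson_r(first, second)  # close to 1.0: the instrument ranks patients consistently
```

A correlation near 1 is evidence of stability over time; values much lower suggest the instrument is not producing the same information on repeated administration.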

Page 16

Parts of a Survey

• Title
• Introduction
• Directions or instructions
• Items
• Demographics
• Closing section

Page 17

How to Create Survey Items (Colton & Covert, 2007)

• Literature review
• Use of existing processes
• Brainstorming
• Snowballing or pyramiding
• Delphi technique

Page 18

Strategies for Writing Good Items

• Avoid double-barreled questions
• Appropriate readability and sentence length
• Simple language
• Clear, specific terminology
• Exhaustive response sets
• Provide instructions for questions
• Culturally appropriate wording
• Handle sensitive items with care (social desirability)
• Avoid bias
• Limit negatively worded items
• Don’t include superfluous items

Page 19

Modes of Administration

• Postal mail
• Internet/Email
• Telephone
• Group administration
• One-on-one/Interview

A response rate of 50-60% is often considered an acceptable return rate for survey research.

Page 20

Sources of Error

Sampling Error: A result of measuring a characteristic in some, but not all, of the units or people in the population of interest. Reduced by larger samples.

Coverage Error: The sample drawn fails to contain all the subjects in the population of interest.

Measurement Error: Error in the instrument itself (i.e., it is not valid and/or reliable).

Nonresponse Error: The inability to obtain data for all questionnaire items from a person in the sample population.
• Unit/total nonresponse
• Item nonresponse
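The claim that sampling error is "reduced by larger samples" follows from the standard error of a sample proportion, sqrt(p(1-p)/n): quadrupling the sample size halves the standard error. A minimal sketch in Python, using the worst case p = 0.5:

```python
import math

def sampling_se(p, n):
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5: each quadrupling of n halves the standard error.
for n in (100, 400, 1600):
    print(n, round(sampling_se(0.5, n), 4))  # 0.05, 0.025, 0.0125
```

Note that larger samples address only sampling error; coverage, measurement, and nonresponse error are unaffected by sample size.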

Page 21

How to Increase Response Rate

• Know your population of interest
• Pre-incentives (preferably monetary)
• Post-incentives (raffle/random giveaway)
• Cognitive dissonance
• Electronic format
• Cover letters
• Attending physicians rather than residents/fellows
• Clear informed consent (not coercive, not forced)
• Follow-up emails
• Pre-survey emails

Page 22

Material Learned

• Cohort Studies
• Case-Control Studies
• Cross-Sectional Studies
• Developing Survey Instruments

Questions?