
Questionnaire Design II

Mario Davidson, PhD

Vanderbilt University School of Medicine

Department of Biostatistics

Overview

Formulation

Validity and Reliability

Translation

Preparation

The Pretest and Pilot

Introduction

Purpose

Difficulties

Proper Planning

Formulation

Defining the Domain

Types of Questions

Vocabulary

Formulating the Questions

Defining the Domain: Class Discussion

Proposed Research Study: Graduates of Vanderbilt University School of Medicine (VUSM) are successful.

The Research Question

Literature Search

Existing Questionnaires

Types of Questions: Open-ended

Examples

How would you describe success?

What are the primary tasks you perform at work?

Advantages / Disadvantages

Types of Questions: Closed

Examples

1. What is your annual salary?

2. On a scale of 1-10, where 1 is "dislike" and 10 is "like", rate how well you like your job.

Advantages / Disadvantages

Types of Questions: Semi-closed

Example

1. What role have you taken most often in published research?

− Author
− Reviewer
− Other (specify) ____________
− None
− Don't Know

Defining the Domain: Brainstorming Session for Questions

In small groups, on a sheet of paper, write one open, one closed, and one semi-closed question, along with the answer choices, for graduates of VUSM that would help us understand the proposed research study: Graduates of Vanderbilt University School of Medicine have a strong impact on published research.

Vocabulary

Appropriate Language: Slang, Ethnic, Regional

Familiar Language

Examples: "Wuz up?", "Doggone", "Y'all", "I reckon", "Fixin' to", "Cocodrie", "Props"

Formulating the Questions: Class Discussion

Discuss the following questions, which graduates of Vanderbilt University School of Medicine are asked to answer:

a. Are you an editor of a journal or a reviewer?

b. Have you recently had an article published?

c. Don't you believe receiving grant money is the most important aspect of successful research?

d. Out of the published research that you have contributed to, what was the highest impact factor?

e. How often have you bribed someone in order to be published?

Formulating the Questions: Class Discussion on Biases

a. Are you an editor of a journal or a reviewer? Double-barreled

Solution: Make two separate questions.

b. Have you recently had an article published? Vague words

Solution: Try different words and ask others for feedback.

c. Don't you believe receiving grant money is the most important aspect of successful research? Leading

Solution: Reword the question. On a scale of 1-10...

Formulating the Questions: Class Discussion on Biases

d. Out of the published research that you have contributed to, what was the highest impact factor? Ambiguous/Technical jargon/Relies on memory

Solution: Reword. Try not to rely too much on recall.

e. How often have you bribed someone in order to be published? Sensitive

Solution: If you must ask sensitive questions, try giving a lead sentence such as: Some researchers have admitted buying lunch, sending a gift.....

Formulating the Questions: Other Biases

Complex

Insensitive measure: Scale of 1 – 2 – 3 vs. 1 – 10

Too few categories

Missing and/or overlapping intervals

Horizontal vs. vertical response format

Length of questionnaire

Placement of questions

Skipping questions and branching

Cultural differences

Avoid hypothetical questions

Formulating the Questions: Other Biases (Dillman, 2007)

Do not use check-all-that-apply question formats

Choose question wordings that are comparable to previously collected data

Consider questionnaires as conversations. Do not switch topics often; it gives the appearance of not listening to the respondent.

Choose the first question carefully. It should apply to everyone and be easy and interesting.

Ask one question at a time. Ex: Moral Distress frequency and occurrence

Use spacing

Use bold for questions and light print for answer choices

Group Activity:

Revise your questions

Validity and Reliability

Reliability: Test-Retest Method, Equivalent Form, Internal Consistency

Validity: Internal and External, Face, Content, Criterion, Construct

Reliability and Validity

There are different definitions for the types of reliability and validity. Reliability refers to the consistency of a set of measurements. Validity refers to the degree to which the research reflects the given research problem. A measure may not be valid if it is not reliable.

http://www.experiment-resources.com/research-methodology.html

Reliability and Validity

[Figure: diagram illustrating reliability vs. validity] http://www.idrc.ca/openebooks/069-1/img/designandco_144_la_0.jpg

Reliability: Test-Retest

The same instrument is provided to the same subjects twice.

The instrument should produce similar results.

An issue with this method is finding the appropriate time to re-administer the instrument.

Ex: If a pre-survey is administered regarding knowledge of HIV before an intervention workshop, the post-survey will most likely give very different results. However, if a post-survey is given and the same survey is given again three weeks later, you would expect the results to be similar.
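
As a minimal numerical sketch of this idea (the scores and variable names below are hypothetical, not from the lecture), the two administrations can be compared with a simple correlation:

```python
import numpy as np

# Hypothetical scores from the same 6 subjects on two administrations
# of the same instrument, three weeks apart (illustrative values only).
time_1 = np.array([12, 18, 15, 20, 9, 14])
time_2 = np.array([13, 17, 16, 19, 10, 15])

# Test-retest reliability is often summarized with the Pearson correlation
# between the two administrations; values near 1 suggest stable measurement.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest correlation: {r:.2f}")
```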

Reliability: Equivalent Form

Different versions of the instrument are created

Results should be similar

Reliability: Internal Consistency

Items on the measure should provide consistent scores.

Split-halves Test: Compare the odd-numbered questions to the even-numbered questions; the two halves should correlate well.
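
A minimal sketch of the split-halves approach, assuming item-level responses are available as a small array (all data below are hypothetical): the odd-numbered items are summed and correlated with the even-numbered items, and the Spearman-Brown formula is one common way to estimate full-length reliability from that half-test correlation.

```python
import numpy as np

# Hypothetical responses: 5 subjects (rows) x 6 items (columns), scored 1-5.
scores = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 3, 2, 2, 3, 3],
    [5, 5, 4, 5, 5, 5],
    [3, 2, 3, 3, 2, 3],
    [1, 2, 1, 2, 1, 2],
])

# Split the items into odd- and even-numbered halves and total each half.
odd_half = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5
even_half = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6

# Correlate the two halves; internally consistent items correlate well.
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction estimates the reliability of the full-length test.
spearman_brown = 2 * r_half / (1 + r_half)
print(f"Split-half r = {r_half:.2f}, Spearman-Brown estimate = {spearman_brown:.2f}")
```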

Validity: Internal and External

Internal: Measures the confidence with which a cause-effect relationship can be claimed. "Could there be an alternative cause, or causes, that explain my observations and results?"

External: Can the results be generalized to a larger population? "To what populations, settings, treatment variables, and measurement variables can this effect be generalized?"

http://www.experiment-resources.com/external-validity.html

Validity: Face

At "face value," does the instrument make sense? Does not require expert opinion. Arguably the weakest type of validity. Sometimes it is difficult to measure or to get expert opinion.

Ex of Face Validity: A survey regarding irritable bowel syndrome may be given to patients with the syndrome. They may be asked to comment on whether the relevant questions were asked.

Validity: Content

How well does the instrument represent every characteristic of the construct?

Usually requires expert opinions

Statistical analysis should be considered

Validity: Criterion

Criterion Validity: How well does a measurement estimate or predict certain abilities?

Concurrent

Predictive

The major difference between them is the time of administration.

Validity: Concurrent

Concurrent Validity – How well does a measure correlate with another measure of the same type?

Concurrent suggests that the measures should be given at (approximately) the same time.

Ex: If two different IQ tests are given, they should correlate well.

Validity: Predictive

Predictive Validity – How well does a measure predict a construct in the future?

Ex: A survey may be given to understand whether medical students receiving this lecture will be inclined to create surveys later in their careers. We may then see if there is a correlation with the number of survey designs they participate in once they have worked as physicians for 10 years.

Validity: Construct

Construct Validity – How well does the construct measure what it is supposed to be measuring?

Arguably the most important type of validity and difficult to prove.

Ex: A survey measuring medical students' wellness. Wellness is an abstract, theoretical construct.

If you can prove convergence and/or divergence, this may be enough.

Validity: Convergent and Divergent

Convergent Validity – Does the construct relate to other constructs it should be related to?

Divergent Validity – Is the construct unrelated to constructs believed to be unrelated?

Validity: Convergent

The construct Self-Esteem should relate to the other related constructs: self-worth and confidence should relate to it, and so should social skills and self-appraisal.

http://www.experiment-resources.com/convergent-validity.html
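
As an illustrative sketch (the scale names and scores below are hypothetical), convergent and divergent validity are often examined with a small correlation table: the target construct should correlate strongly with related measures and only weakly with an unrelated one.

```python
import numpy as np

# Hypothetical scale totals for 6 respondents (illustrative values only).
self_esteem = np.array([30, 22, 35, 18, 27, 25])
self_worth  = np.array([28, 21, 34, 17, 26, 24])   # related construct
confidence  = np.array([29, 20, 33, 19, 25, 26])   # related construct
shoe_size   = np.array([11, 11, 10, 10, 8, 9])     # unrelated construct

# Convergent validity: self-esteem should correlate highly with self-worth
# and confidence. Divergent validity: it should not correlate with shoe size.
for name, other in [("self-worth", self_worth),
                    ("confidence", confidence),
                    ("shoe size", shoe_size)]:
    r = np.corrcoef(self_esteem, other)[0, 1]
    print(f"r(self-esteem, {name}) = {r:.2f}")
```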

Translation

“Out of sight, out of mind”

“Invisible and insane”

Translation

Preliminary Translation

Expert Evaluators

Back Translation

Cross-language Equivalence

Preparation

Type of Analysis

Establishment of Codes

Open-ended Questions

Closed Questions

Code Book: Live Document, Precision and Completion, Record of Algorithms
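
A minimal sketch of what a code book entry might look like if kept as a live document next to the analysis scripts (the question names, codes, and helper function below are hypothetical illustrations, not part of the lecture):

```python
# Hypothetical code book entries for two questions, kept as a "live document"
# alongside the analysis code so coding decisions stay recorded and consistent.
codebook = {
    "q1_role": {
        "text": "What role have you taken most often in published research?",
        "type": "semi-closed",
        "codes": {1: "Author", 2: "Reviewer", 3: "Other (specify)",
                  4: "None", 9: "Don't know"},
        "missing": [9],
    },
    "q2_job_rating": {
        "text": "On a scale of 1-10, rate how well you like your job.",
        "type": "closed",
        "codes": "integer 1-10",
        "missing": [],
    },
}

def label(question, value):
    """Translate a numeric code back to its label (a recorded 'algorithm')."""
    codes = codebook[question]["codes"]
    return codes.get(value, value) if isinstance(codes, dict) else value

print(label("q1_role", 2))  # -> "Reviewer"
```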

Preparation

Clearly define the purpose of the study.

Provide clear and concise instructions.

Discuss the policy on confidentiality.

Discuss the person's right to refuse any question(s).

Include identifying data on multi-page questionnaires.

The Pretest

Purpose

Considerations:

Appropriate Wording and Questions

Blank Answers

Right Sample

Skip Patterns

Read Body Language

Similar Interpretations

Length

Request Feedback from Subjects

Listen to Feedback

Software (Zoomerang, Adobe LiveCycle Designer, SurveyMonkey.com, REDCap Survey)

Pilot Test

The pilot test is a formalized, small-scale administration.

Some Questions:

How is the response rate?

Are there questions that are not being answered?

Is relevant information being obtained from open-ended questions?

How much variability is occurring for each item?
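
A minimal sketch of how a few of these pilot questions could be checked from a small pilot data set (the items and values below are hypothetical), looking at per-item response rates and variability:

```python
import numpy as np

# Hypothetical pilot data: 8 respondents x 3 closed items; np.nan = left blank.
pilot = np.array([
    [5, 3, np.nan],
    [4, 3, 1],
    [5, 2, np.nan],
    [3, 3, 1],
    [4, 3, np.nan],
    [5, 3, 1],
    [np.nan, 3, 1],
    [4, 3, np.nan],
])

items = ["item_1", "item_2", "item_3"]
for j, name in enumerate(items):
    col = pilot[:, j]
    answered = ~np.isnan(col)
    response_rate = answered.mean()   # are questions being skipped?
    variability = np.nanstd(col)      # is there any spread in the answers?
    print(f"{name}: response rate {response_rate:.0%}, SD {variability:.2f}")
```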

Conclusion

Start as Soon as Possible (ASAP)!

Don't Reinvent the Wheel!

Pretest and Pilot the Right Population

Review, Revise, and Test

References

Dillman, D.A. (2007). Mail and internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: John Wiley & Sons, Inc.

http://www.idrc.ca/openebooks/069-1/img/designandco_144_la_0.jpg

http://www.experiment-resources.com/research-methodology.html

http://www.experiment-resources.com/external-validity.html

http://www.experiment-resources.com/convergent-validity.html

Questions?