Post on 15-Jan-2017
Research Methods
What makes a science a science?
• Based on empirical methods (information gained from direct observation and experiment)
• Objective (based on fact rather than opinion)
• Falsifiable (if something is unfalsifiable it can neither be proved nor disproved)
• Operates under a single paradigm (Kuhn, 1970)
• Based on testing hypotheses (Karl Popper, 1935, and his hypothetico-deductive model)
Peer review:
• Assessment of scientific research by experts in the field
• Ensures published research is of high quality
• Also used for:
1. Allocation of research funding
2. Testing the quality of university departments
3. Aiding publication of work in journals and books
(Parliamentary Office of Science and Technology, 2002)
Peer review – Evaluation:
• Anonymity used to remove bias
• It isn’t always possible to find an expert in the field
• Publication bias – as journals tend to prefer publishing positive results
• Once a study is published, if fault is later found, it cannot be removed from the public domain
What is an aim?
• A general overview of what you set out to find
What is a theory?
• A well-established principle that has been developed to explain an aspect of the natural world.
• Arises from repeated observation and incorporates facts, laws, predictions and tested hypotheses
• More general than a hypothesis
What are hypotheses?
• Specific, testable statements of prediction; a hypothesis states what the researcher expects to find
• To operationalise the hypotheses, you need to clearly state how the IV will be manipulated, and how the DV will be measured
• IV – Independent Variable (what you manipulate/change)
• DV – Dependent Variable (what you measure – the effect of that change)
Null hypothesis:
• Statement of no difference.
• Example: ‘there will be no significant difference between… blah blah blah’
Directional hypotheses:
• 1-tailed test
• States the direction of the predicted difference
• For example: ‘Participants given pictures will remember significantly more items from a list of 10 than participants given a list of 10 words’
Non-directional hypotheses:
• 2-tailed hypothesis
• States there will be a difference, but not which direction that difference will be in
Correlation hypothesis:
• Similar to null, directional and non-directional hypotheses
• Just look for the word ‘correlation’ in the hypothesis
Looking for significance:
• In psychology, results are conventionally treated as significant at p ≤ 0.05
• This means there is a 5% (or smaller) probability that results this extreme arose by chance alone – so we can be 95% confident the result is significant
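As a quick sketch (the p-values below are invented, not from a real study), the significance decision is just a comparison against the chosen alpha level:

```python
ALPHA = 0.05  # conventional significance level in psychology

def is_significant(p_value, alpha=ALPHA):
    """Reject the null hypothesis when p <= alpha."""
    return p_value <= alpha

print(is_significant(0.03))  # True  – significant at the 5% level
print(is_significant(0.20))  # False – could too easily be chance
```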
Type 1 and type 2 errors (ooh fun…):
Type 1:
• Rejecting the null hypothesis (accepting the alternative) when the null is actually true – a false positive
• Likely to occur if the significance level is TOO LENIENT
• E.g. from using p ≤ 0.10 instead of p ≤ 0.05
Type 2:
• Accepting the null hypothesis (rejecting the alternative) when the alternative is actually true – a false negative
• Likely to occur if the significance level is TOO STRINGENT
• E.g. from using p ≤ 0.01 instead of p ≤ 0.05
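A minimal sketch of that trade-off, using an invented p-value: the same result gets treated differently under a lenient vs. a stringent alpha.

```python
def decision(p_value, alpha):
    """Return the verdict on the null hypothesis at a given alpha."""
    return "reject null" if p_value <= alpha else "accept null"

p_value = 0.07  # hypothetical study result

print(decision(p_value, 0.10))  # reject null -> risks a Type 1 error
print(decision(p_value, 0.05))  # accept null
print(decision(p_value, 0.01))  # accept null -> risks a Type 2 error
```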
Statistical tests:
Content analysis:
• Changing qualitative data into quantitative data
• So it can be statistically analysed
• Used in the past to analyse Kennedy & Nixon’s speeches (Schneidman, 1963)
Strengths: quantitative; objective; if raters agree, inter-rater reliability is easily tested
Weaknesses: reductionist; subjective
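A toy sketch of content analysis – the speech text and coding categories below are invented for illustration, not taken from the Schneidman study:

```python
# Invented transcript and coding categories for illustration only
speech = "we will fight for peace and we will fight for freedom"
categories = {
    "conflict": ["fight", "war"],
    "values": ["peace", "freedom"],
}

words = speech.split()
# Qualitative text becomes quantitative category counts
counts = {cat: sum(words.count(w) for w in kws)
          for cat, kws in categories.items()}
print(counts)  # {'conflict': 2, 'values': 2}
```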
How to draw graphs:
• X-axis: independent variable/categories
• Y-axis: dependent variable (frequency/units)
• Always give the graph a title
When to use what graph:
Chart – When it’s used
Bar chart Nominal data, gaps between bars
Frequency polygon Interval/ordinal data, class intervals represented
Histogram Interval/ordinal data, intervals represented by midpoint, no gaps between bars
Line graphs Show continuous data
Scattergram Relationships between 2 variables
Journals:
Mnemonics to help you remember:
The Alien In My Room Doesn’t Read A lot
The Apple In My Rear Doesn’t Really Ache
Title (The)
Short, but informative about the content of the paper
Abstract (Alien)
Brief summary inc. problem, method, results and conclusions
Introduction (In)
The problem, how it’s being answered, and why it is (or isn’t) important
Method (My)
How you went about your project; subsections: subjects, materials, procedure (subheadings make it easier to read)
Results (Room)
Summary of findings, results of statistical tests, plus graphs & charts
Discussion (Doesn’t)
Begin with a summary of results and what they indicate; say what can and cannot be concluded
References (Read)
List of articles cited, alphabetical; journals listed like “volume, year, page numbers”
Appendix (A lot)
Raw data goes here (all original data), plus any data which was collected but not used
Types of research methods – Laboratory study:
• Internal validity – controlled variables
• Control increases replicability and if consistent results are achieved, reliability
• Demand characteristics (may reduce validity)
• May have reduced external validity as experiments conducted aren’t always like real-life
Field study:
• Experimenter effects/demand characteristics – reduced
• Higher ecological validity, as it’s a natural setting
• Less control over extraneous variables
• Demand characteristics may be present if ppts know they’re being studied
Natural experiment (natural IV):
• Only way to study some things, e.g. effects of privation
• Validity may be reduced – no random allocation
• Low replicability, and therefore possibly low reliability
• Not necessarily generalisable
Correlation:
• Shows relationships
• Can be conducted on a lot of data
• Easily replicated
• No cause/effect can be established
• May lack internal/external validity
Observation:
• Rich data as natural behaviour is observed (especially in covert observations)
• Demand characteristics in overt observations
• Observer bias
• Inter-rater reliability should be used to test
Content analysis (again):
• Inter-rater reliability can be easily tested
• Unobtrusive
• Highly subjective
• Time consuming
• Reductionist
Self-report techniques (interviews and questionnaires):
• Have large samples fairly quickly
• Closed questions used for quantitative data (easily analysed)
• Open questions used for qualitative data
• Social desirability bias
• Leading questions could reduce validity
• Closed questions can reduce validity as it may not allow full response
Types of sampling method – Opportunity:
• Participants selected on who is most easily available
• Easy to conduct
• Easy to get large samples (in theory)
• Biased (selection bias – researcher more likely to engage with smiley people and give non-verbal cues)
• Only allows a small sample of the target population
• May not be representative
Volunteer sampling:
• Participants selected by asking for volunteers, e.g. advertisements or a national newspaper (more representative)
• If it’s a men-only survey, then better off to use a men’s health mag
• Quick
• Reach a wide variety of people
• Those who volunteer may not be representative of the target population as they may be more motivated/outgoing
Random sampling:
• Identify target population
• Make sure all members of the population have an equal chance of being picked
• E.g. putting names in a hat and picking out however many you need
• Or assigning them all numbers and using a random number generator to pick them
• Less biased (everyone has an equal chance of selection)
• Still some bias, as some people may refuse to take part
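The random-number-generator approach can be sketched in a couple of lines (the population of 100 students is invented):

```python
import random

# Invented target population of 100 students
population = [f"student_{i}" for i in range(1, 101)]

# Every member has an equal chance of being picked (no replacement)
sample = random.sample(population, 10)
print(sample)
```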
Ethical issues:
• Informed consent
• Confidentiality and anonymity
• Right to withdraw at any time
• Protection from harm
• Deception
• Debriefing
Dealing with ethical issues:
Alternatives to informed consent:
1) Presumptive consent (assuming the participant would be cool with whatever you’re testing)
2) Prior general consent (participants agree in advance to take part in studies in which they may be slightly misinformed)
Alternatives to deception:
1) Complete information (participants told everything; however, Gallo et al. (1973) found that sometimes this DOES affect the outcome, and sometimes it DOESN’T)
2) Role playing (participants informed about the general nature of the study and asked to role-play; however, this could lead to unreliable findings)
Reliability:
• Whether, when replicated, the findings are consistent
Ways to test…
• Inter-rater or inter-observer reliability – tested by finding a strong correlation between the raters’ results
• Internal reliability – All items measure the same thing. Tested using the split-half method where test is split in two and you need a strong correlation between both halves
• External reliability – Produce the same results on different occasions by different researchers. Tested using the test-retest method on same ppts, however this requires a gap between 1st and 2nd test.
Validity:
• To what extent the research measures what it set out to measure
• Internal validity – How well the method being used measures what you set out to measure e.g. a behaviour
• To ensure internal validity, variables should be well controlled, and you can use triangulation, where research is analysed from multiple perspectives
Testing internal validity – triangulation can combine:
• Naturalistic observation
• Interview
• Laboratory experiment
Internal validity can also be protected by using counterbalancing, which checks that order effects aren’t affecting the outcome
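A tiny counterbalancing sketch (hypothetical two-condition experiment): half the participants do condition A then B, the other half B then A, so order effects cancel out across the group.

```python
# Hypothetical two-condition experiment: alternate the order of
# conditions A and B across participants so order effects cancel out
participants = ["p1", "p2", "p3", "p4"]

orders = {}
for i, p in enumerate(participants):
    orders[p] = ["A", "B"] if i % 2 == 0 else ["B", "A"]
    print(p, "->", " then ".join(orders[p]))
```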
• External validity/ecological validity – How well the research can be applied to the real world, e.g. recalling nonsense words isn’t a real-life task
What happens when you get a design question?
1. Don’t panic
2. If it’s a 12 marker, you gotta include this stuff:
- Hypothesis
- Independent variable
- Dependent variable
- Method
- Design
- Sample
- Procedure/participants
- Ethics
- Control
- (Analysis?)
Mnemonic: High iguanas don’t mind drugs so pick… ecstasy/cannabis?
Handy dandy template for AO2:
1) Start with further evidence (make sure it’s relevant)
2) Methodological criticisms (case studies/small samples/ecological validity)
3) Positive IDA
4) Negative IDA
5) (Any additional research)
6) Conclusion