PhD course in Basic Biostatistics – Day 2 Erik Parner, Department of Biostatistics, Aarhus University©
Log-transformation of continuous data
  Exercise 1.2 + 1.4 + Standard 1-1 (Triglyceride)
  Logarithms and exponentials
Two independent samples from normal distributions
  The model, check of the model, estimation
  Comparing the two means
  Approximate confidence interval and test
  Exact confidence interval and test using the t-distribution
Comparing two populations using a non-parametric test
  The Wilcoxon-Mann-Whitney test
Two independent samples from normal distributions
  Type 1 and type 2 errors
  Statistical power
  Sample size calculations
Overview
Data to analyse   Type of analysis               Unpaired/Paired   Type              Day
Continuous        One sample mean                Irrelevant        Parametric        Day 1
                                                                   Nonparametric     Day 3
                  Two sample mean                Non-paired        Parametric        Day 2
                                                                   Nonparametric     Day 2
                                                 Paired            Parametric        Day 3
                                                                   Nonparametric     Day 3
                  Regression                     Non-paired        Parametric        Day 5
                  Several means                  Non-paired        Parametric        Day 6
                                                                   Nonparametric     Day 6
Binary            One sample mean                Irrelevant        Parametric        Day 4
                  Two sample mean                Non-paired        Parametric        Day 4
                                                 Paired            Parametric        Day 4
                  Regression                     Non-paired        Parametric        Day 7
Time to event     One sample: Cumulative risk    Irrelevant        Nonparametric     Day 8
                  Regression: Rate/hazard ratio  Non-paired        Semi-parametric   Day 8
[Figure: histogram (density) of the triglyceride measurements, showing a long tail to the right.]
Log-transformation of continuous data
Continuous data with a long tail to the right are often log-transformed to obtain an approximate normal distribution.
Recall the triglyceride measurements. Applying a normal-based prediction interval (PI) to the original data gives invalid results: the PI does not have 2.5% of the data below and above its two limits. Here 4.2% of the data fall below the lower limit and 0% above the upper limit.
The logarithm of the triglyceride measurements follows (approximately) a normal distribution:
[Figure: histogram of log-triglyceride with a normal density (left) and a normal Q-Q plot of log-triglyceride against the inverse normal (right).]
We then need to transform the results back to the original scale to obtain useful results on the triglyceride measurements.
The method presented here relies on the fact that percentiles are preserved when creating a transformation of the data.
Logarithmic and exponential functions
Both the logarithm and the exponential function are increasing functions. Thus
exp(X) < exp(A)  ⇔  X < A  ⇔  log(X) < log(A)
[Figure: graph of the logarithm for x in (0, 2) (left) and of the exponential for x in (−2, 2) (right).]
Logarithmic and exponential transformations
Medians and percentiles are preserved when making a transformation of the data:
[Figure: densities on the log scale and the original scale connected by exp and log arrows; the same 50% and 16% of the data lie to the right of the corresponding percentiles on both scales.]
Prediction intervals are given by the 2.5th and 97.5th percentiles.
For a normal distribution the mean is equal to the median = the 50th percentile.
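Since exp is increasing, an observation's rank – and hence the median and other percentiles – carries over directly. A minimal Python check on made-up log-scale values (not the course data; an odd count is used so the median is an actual data point):

```python
import math
import statistics

# Hypothetical log-scale measurements (odd count, so the median is a data point).
log_data = [-1.2, -0.9, -0.7, -0.5, -0.4, -0.2, 0.3]
data = [math.exp(x) for x in log_data]

# The median is preserved under the increasing exp transformation:
median_log = statistics.median(log_data)
median_orig = statistics.median(data)
print(math.isclose(math.exp(median_log), median_orig))  # True
```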
Transforming the results
On the log scale: PI (−1.54; −0.01) and CI for the mean: −0.77 (−0.81; −0.74).
Back-transforming with exp gives, on the original scale: PI (0.21; 0.99) and CI for the median: 0.46 (0.44; 0.48).
[Figure: histogram of log-triglyceride with the intervals marked, and the corresponding histogram of triglyceride after applying exp.]
Summary
Let Y denote the original observation.
If X = log(Y) has a normal distribution with mean = median = µ and standard deviation = σ, then
• a valid 95% CI for µ will transform into a valid 95% CI for the median of Y = exp(X)
• a valid 95% PI for X will transform into a valid 95% PI for Y = exp(X)
The relation between the means and the medians is:
median(Y) = exp(µ)
mean(Y) = exp(µ + 0.5·σ²)
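A quick numerical check of the two relations, using µ̂ ≈ −0.77 from the slides and, as an assumption, σ ≈ 0.39 (roughly implied by the width of the prediction interval on the log scale):

```python
import math

mu = -0.77    # mean of log-triglyceride (from the slides)
sigma = 0.39  # sd of log-triglyceride (assumed here, implied by the PI width)

median_Y = math.exp(mu)                   # median on the original scale
mean_Y = math.exp(mu + 0.5 * sigma ** 2)  # mean on the original scale

print(round(median_Y, 2))  # 0.46, matching the back-transformed CI for the median
```

Note that the mean of a log-normal distribution always lies above its median, since exp(0.5·σ²) > 1.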
It can be shown that
sd(Y) = mean(Y) · √(exp(σ²) − 1)
cv(Y) = sd(Y)/mean(Y) = √(exp(σ²) − 1)
Hence the standard deviation of Y depends on the mean of Y.
For this reason the standard deviation is rarely used as a measure of the spread of the distribution of the original data in this setting.
Instead, the coefficient of variation (cv) is often used as a measure of the spread of the data.
These details are in the video "Log-transformations".
Properties of the logarithm and the exponential function
The basic properties of logarithms and exponentials that we will use throughout the course:

Product/sum:   log(a·b) = log(a) + log(b)      exp(a + b) = exp(a)·exp(b)
Ratio/difference:  log(a/b) = log(a) − log(b)      exp(a − b) = exp(a)/exp(b)
Power:   log(a^b) = b·log(a)      exp(a·b) = (exp(a))^b
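These identities are easy to verify numerically; a short Python check with arbitrary positive numbers:

```python
import math

a, b = 2.5, 1.3  # arbitrary positive numbers

assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
assert math.isclose(math.log(a / b), math.log(a) - math.log(b))
assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))
assert math.isclose(math.exp(a - b), math.exp(a) / math.exp(b))
assert math.isclose(math.log(a ** b), b * math.log(a))
assert math.isclose(math.exp(a * b), math.exp(a) ** b)
```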
Continuous data – two sample mean
Body temperature versus gender
Scientific question: Do the two genders have different normal body temperatures?
Design: 130 participants were randomly sampled, 65 males and 65 females
Data: Measured temperature, gender
Summary of the data (the units are degrees Celsius):
--------------------------------------------------------------
Gender    |  N(tempC)   mean(tempC)   sd(tempC)   med(tempC)
----------+---------------------------------------------------
Male      |        65      36.72615    .3882158        36.7
Female    |        65      36.88923    .4127359        36.9
--------------------------------------------------------------
Body temperature: Plotting the data
The data looks “fine” - a few outliers among females?
[Figure 2.1: box plot and scatter plot of temperature (°C), 35.5–38, by gender (Male, Female).]
Body temperature: Checking the normality in each group
[Figure 2.2: histograms with normal densities and normal Q-Q plots of temperature, for males and females separately.]
Normality looks ok!
Body temperature: The model
A statistical model:
Two independent samples from normal distributions, i.e.
• the two samples are independent
and each are assumed to be a random sample from a normal distribution:
1. The observations are independent (knowing one observation will not alter the distribution of the others)
2. The observations come from the same distribution, e.g. they all have the same mean and variance.
3. This distribution is a normal distribution with unknown mean, µi, and standard deviation, σi: N(µi, σi²).
Body temperature: Checking the assumptions
The first two – think about how the data were collected!
1. Independence between groups – information on different individuals. Independence within groups: data are from different individuals, so the assumption is probably ok.
2. In each group: The observations come from the same distribution. Here we can only speculate. Does the body temperature depend on known factors of interest, for example heart rate, time of day, etc.?
Body temperature: The estimates
The estimates are found like we did day 1:
µ̂M = 36.73 (36.63; 36.82),  σ̂M = 0.388,  se(µ̂M) = 0.048
µ̂F = 36.89 (36.79; 36.99),  σ̂F = 0.413,  se(µ̂F) = 0.051
Observe that the width of the prediction interval is approximately 2·1.96·0.4°C ≈ 1.6°C, so there is a large variation in body temperature between individuals within each of the two groups.
We also see that the average body temperature is higher among women.
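The group estimates can be reproduced from n, the mean and the sd alone; a Python sketch using the summary figures from the slides (1.96 is the approximate normal quantile; the exact analysis would use the t-quantile):

```python
import math

def approx_ci(mean, sd, n):
    """Standard error of the mean and approximate 95% CI: mean +/- 1.96 * sd/sqrt(n)."""
    sem = sd / math.sqrt(n)
    return sem, (mean - 1.96 * sem, mean + 1.96 * sem)

sem_m, ci_m = approx_ci(36.726, 0.388, 65)  # males
sem_f, ci_f = approx_ci(36.889, 0.413, 65)  # females
print(round(sem_m, 3), round(sem_f, 3))  # 0.048 0.051
```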
Body temperature: Estimating the difference
Remember the focus is on the difference between the two groups; that is, we are interested in
δ = µF − µM, the unknown difference in mean body temperature.
This is of course estimated by
δ̂ = µ̂F − µ̂M = 36.89 − 36.73 = 0.16
What about the precision of this estimate? What is the standard error of a difference?
The standard error of a difference
If we have two independent estimates and, as here, calculate the difference, then the standard error of the difference is given as
se(δ̂) = se(µ̂F − µ̂M) = √( se(µ̂F)² + se(µ̂M)² )
Note that the standard error of a difference between two independent estimates is larger than either of the two standard errors.
In the body temperature data we get
se(δ̂) = √(0.048² + 0.051²) = 0.070
and an approximate 95% CI:
δ̂ ± 1.96·se(δ̂) = 0.163 ± 1.96·0.070 = (0.025; 0.301)
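The standard-error formula and the approximate CI can be sketched in a few lines of Python, using the numbers above:

```python
import math

sem_m, sem_f = 0.048, 0.051   # standard errors of the two means (from the slides)
diff = 36.889 - 36.726        # estimated difference in mean body temperature

se_diff = math.sqrt(sem_f ** 2 + sem_m ** 2)          # se of the difference
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)   # approximate 95% CI
print(round(se_diff, 3))  # 0.07
```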
Testing no difference in means
Here we are especially interested in the hypothesis that body temperature is the same for the two genders:
Hypothesis: δ = δ0 = 0
We can make an approximate test similar to day 1:
δ̂ = 0.163 (0.025; 0.301),  se(δ̂) = 0.070
z_obs = (δ̂ − δ0)/se(δ̂) = (0.163 − 0)/0.070 = 2.32
and find the p-value as 2·Pr(standard normal ≥ |z_obs|). We get p = 2.03%.
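The two-sided p-value can be computed from the standard normal CDF; a Python sketch (NormalDist is in the standard library from Python 3.8):

```python
from statistics import NormalDist

delta_hat, se, delta_0 = 0.163, 0.070, 0.0
z_obs = (delta_hat - delta_0) / se                  # about 2.33
p_value = 2 * (1 - NormalDist().cdf(abs(z_obs)))    # two-sided p-value, about 0.02
```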
Exact inference for two independent normal samples
Just like in the one sample setting, it is possible to make exact inference – based on the t-distribution.
And again these are easily made by a computer.
Remember the model: two independent samples from normal distributions, with means and standard deviations µM, σM and µF, σF.
Note that both the means and the standard deviations might differ between the two populations.
If one wants to make exact inference, then one has to make the additional assumption:
4. The standard deviations are the same: σM = σF
Exact inference for two independent normal samples
Testing the hypothesis σM = σF: this is done by considering the ratio between the two estimated standard deviations:
F_obs = (largest observed standard deviation / smallest observed standard deviation)²
A large value of this F-ratio is critical for the hypothesis.
The p-value = the probability of observing an F-ratio at least as large as the one observed – given the hypothesis is true!
The p-value is found using an F-distribution with (n_largest − 1) and (n_smallest − 1) degrees of freedom:
p-value = 2·Pr( F(n_largest − 1, n_smallest − 1) ≥ F_obs )
Exact inference for two independent normal samples
Testing the hypothesis σM = σF. Here we have nF = 65, σ̂F = 0.413 and nM = 65, σ̂M = 0.388, so
F_obs = (0.413/0.388)² = 1.063² = 1.13
The observed variance (sd²) is 13% higher among women.
But could this be explained by sampling variation – what is the p-value?
To find the p-value we consult an F-distribution with 64 = (65 − 1) and 64 = (65 − 1) degrees of freedom.
We get p-value = 63%.
The difference in the observed standard deviations can be explained by sampling variation.
We accept that σM = σF! The fourth assumption is ok!
Exact inference for two independent normal samples
We now have a common standard deviation σ = σF = σM. This is estimated as a "weighted" average:
σ̂ = √( (σ̂F²·(nF − 1) + σ̂M²·(nM − 1)) / ((nF − 1) + (nM − 1)) )
  = √( (0.413²·(65 − 1) + 0.388²·(65 − 1)) / ((65 − 1) + (65 − 1)) ) = 0.401
Based on this we can calculate a revised/updated standard error of the difference:
se(δ̂) = σ̂·√(1/nF + 1/nM) = 0.401·√(1/65 + 1/65) = 0.070
This is not found in the Stata output.
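Since the pooled estimate is not printed by Stata, it is easy to compute by hand; a Python sketch with the figures from the slides:

```python
import math

n_f, sd_f = 65, 0.413   # females
n_m, sd_m = 65, 0.388   # males

# Pooled ("weighted average") estimate of the common standard deviation
sd_pooled = math.sqrt((sd_f ** 2 * (n_f - 1) + sd_m ** 2 * (n_m - 1))
                      / ((n_f - 1) + (n_m - 1)))

# Revised standard error of the difference in means
se_diff = sd_pooled * math.sqrt(1 / n_f + 1 / n_m)
print(round(sd_pooled, 3), round(se_diff, 3))  # 0.401 0.07
```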
Exact inference for two independent normal samples
Exact confidence intervals and p-values are found using a t-distribution with nM + nF − 2 = 65 + 65 − 2 = 128 d.f.
Estimates: δ̂ = 0.163,  se(δ̂) = 0.070
CI: δ̂ ± t0.975·se(δ̂) = 0.163 ± 1.98·0.070 = (0.024; 0.302)
And the exact test:
H0: δ = 0,  t_obs = (δ̂ − 0)/se(δ̂) = 0.163/0.070 = 2.32
and find the p-value as 2·Pr(t-distribution ≥ |t_obs|). We get p = 2.2% (either from a table of the t-distribution, or from Stata).
Stata: two-sample normal analysis
. cd "D:\Teaching\BasalBiostat\Lectures\Day2"
D:\Teaching\BasalBiostat\Lectures\Day2
. use normtemp.dta, clear
. * Checking the normality.
. qnorm tempC if sex==1, title("Male") name(plot2, replace)
. qnorm tempC if sex==2, title("Female") name(plot3, replace)
. graph combine plot2 plot3, name(plotright, replace) col(1)
The F-test and t-test are easily done in Stata (more details can be found in the file day2.do).
. sdtest tempC, by(sex)
Variance ratio test
---------------------------------------------------------------
Group | Obs Mean Std.Err. Std.Dev. [95% Conf.Interval]
--------+------------------------------------------------------
Male | 65 36.72615 .0481522 .3882158 36.62996 36.82235
Female | 65 36.88923 .0511936 .4127359 36.78696 36.9915
--------+------------------------------------------------------
combined|      130    36.80769   .0357326   .4074148   36.73699   36.87839
---------------------------------------------------------------
ratio = sd(Male) / sd(Female)                      f =   0.8847
Ho: ratio = 1 degrees of freedom = 64, 64
Ha: ratio < 1 Ha: ratio != 1 Ha: ratio > 1
Pr(F < f) = 0.3128 2*Pr(F < f)= 0.6256 Pr(F > f)= 0.6872
. ttest tempC, by(sex)
Two-sample t test with equal variances
---------------------------------------------------------------
Group | Obs Mean Std.Err. Std.Dev. [95%Conf.Interval]
-------+-------------------------------------------------------
Male | 65 36.72615 .0481522 .3882158 36.62996 36.82235
Female | 65 36.88923 .0511936 .4127359 36.78696 36.9915
-------+-------------------------------------------------------
combined|      130    36.80769   .0357326   .4074148   36.73699   36.87839
-------+-------------------------------------------------------
diff | -.1630766 .070281 -.3021396 -.0240136
---------------------------------------------------------------
diff = mean(Male) - mean(Female) t = -2.3204
Ho: diff = 0 degrees of freedom = 128
Ha: diff < 0 Ha: diff != 0 Ha: diff > 0
Pr(T < t) = 0.0110 Pr(|T| > |t|)= 0.0219 Pr(T > t)= 0.9890
Exact inference for two independent normal samples
What if you reject the hypothesis of the same sd in the two groups?
1. This indicates that the variation in the two groups differs! Think about why!
2. Often it is because the assumption of normality is not satisfied. Maybe you would do better by making the statistical analysis on another scale, e.g. the log scale.
3. If you still want to compare the means on the original scale, you can make approximate inference based on the t-distribution (e.g. ttest tempC, by(sex) unequal).
4. If you only want to test the hypothesis that the two distributions are located in the same place, then you can use the non-parametric Wilcoxon-Mann-Whitney test – see later.
Body temperature example - formulations
Methods: Data were analyzed as two independent samples from normal distributions based on Student's t. The assumption of normality was checked by a Q-Q plot. Estimates are given with 95% confidence intervals.
Results: The mean body temperature was 36.9 (36.8; 37.0)°C among women compared to 36.7 (36.6; 36.8)°C among men. The mean difference, 0.16 (0.02; 0.30)°C higher for females, was statistically significant (p = 2.2%).
Conclusion: Based on this study we conclude that women have a small, but statistically significant, higher mean body temperature than men.
Example 7.2 Birth weight and heavy smoking
Scientific question: Do the smoking habits of the mother influence the birth weight of the child?
Design and data: (observational) The birth weights (kg) of children born by 14 heavy smokers and 15 non-smokers were recorded.
Summary of the data (the unit is kg):
------------------------------------------------------------------------
Group    |  Obs     Mean   Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------
Non-smok |   15    3.627      .0925       .3584        3.428      3.825
Heavy sm |   14    3.174      .1238       .4631        2.907      3.442
------------------------------------------------------------------------
Already here we observe that the average birth weight is lower among heavy smokers: difference = 452 g.
Example 7.2 Birth weight and heavy smoking
Plot the data!
[Figure: box plot and scatter plot of birth weight (kg), 2.5–4.5, by smoking habits (Non-smoker, Heavy smoker).]
Example 7.2 Birth weight and heavy smoking
[Figure: histograms with normal densities, and normal Q-Q plots of birth weight, for non-smokers and heavy smokers separately.]
Independence, same distribution and normality seems ok.
Example 7.2 Birth weight and heavy smoking – exact inference
Compare the standard deviations (using the computer):
F_obs = (0.4631/0.3584)² = 1.67,  p = 35% from F(13, 14)
We accept that the two standard deviations are identical.
And again by computer we get:
Difference in mean birth weight: 0.452 (0.138; 0.767) kg
Hypothesis: no difference in mean birth weight. p = 0.06%
Conclusion of the test: if there were no difference between the two groups, then it would be almost impossible to observe as large a difference as we have seen – hence the hypothesis cannot be true!
The birth weight example – formulations
Methods – like the body temperature example: Data ……intervals.
Results: The mean birth weight was 3.627 (3.428; 3.825) kg among non-smokers compared to 3.174 (2.907; 3.442) kg among heavy smokers. The difference, 452 (138; 767) g, was statistically significant (p = 0.06%).
Conclusion: Children born by heavy smokers have a birth weight that is statistically significantly smaller than that of children born by non-smokers. The study has only limited information on the precise size of the association.
Furthermore we have not studied the implications of the difference in birth weight, or whether the difference could be explained by other factors, like eating habits……
Non-Parametric test: Wilcoxon-Mann-Whitney test
Until now we have only made statistical inference based on a parametric model.
E.g. we have focused on estimating the difference between two groups and supplying the estimate with a confidence interval.
We have also performed a statistical test of no difference based on the estimate and the standard error – a parametric test.
There are other types of tests – non-parametric tests – that are not based on a parametric model.
These tests are also based on models, but the models are not parametric.
We will here look at the Wilcoxon-Mann-Whitney test, which is the non-parametric analogy to the two sample t-test.
Non-Parametric test: Wilcoxon-Mann-Whitney test
The key feature of all non-parametric tests is that they are based on the ranks of the data and not the actual values.
Heavy smokers          Non-smokers
Birth weight   Rank    Birth weight   Rank
2.340             1    2.710             3
2.380             2    3.310            10
2.740             4    3.360            11
2.860             5    3.410            12
2.900             6    3.510            14
3.180             7    3.540            16
3.230             8    3.600          17.5
3.270             9    3.610            19
3.420            13    3.700            23
3.530            15    3.730            24
3.600          17.5    3.830            25
3.650          20.5    3.890            26
3.650          20.5    3.990            27
3.690            22    4.080            28
                       4.130            29

(2.340 is the smallest observation and gets rank 1; the two observations of 3.600 are numbers 17 and 18 in the ordering and share the rank 17.5.)
Non-Parametric test: Wilcoxon-Mann-Whitney test
We can now add the rank in one of the groups, here the heavy smokers:
Heavy smokers' observed rank sum = 150.5
Hypothesis: The birth weights among heavy smokers and non-smokers follow the same distribution.
Assuming the hypothesis is true, one can calculate the expected rank sum among the heavy smokers and the standard error of the observed rank sum, and from these a test statistic:
z_obs = (observed rank sum − expected rank sum) / se(rank sum) = (150.5 − 210) / 22.91 = −2.60
The p-value is found as 2·Pr(standard normal ≥ |z_obs|).
P-value = 0.9%
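The normal approximation behind the test can be sketched in Python (ignoring the small tie correction that Stata applies to the variance):

```python
import math
from statistics import NormalDist

n_smokers, n_nonsmokers = 14, 15
n_total = n_smokers + n_nonsmokers

rank_sum_obs = 150.5                                        # from the table above
expected = n_smokers * (n_total + 1) / 2                    # 14 * 30 / 2 = 210
variance = n_smokers * n_nonsmokers * (n_total + 1) / 12    # 525 without tie correction

z_obs = (rank_sum_obs - expected) / math.sqrt(variance)     # about -2.60
p_value = 2 * (1 - NormalDist().cdf(abs(z_obs)))            # about 0.009
```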
Non-Parametric test: Wilcoxon-Mann-Whitney test
We saw that the rank sum among heavy smokers was smaller than expected if there were no true difference between the two groups.
So small that we would only observe such a discrepancy in about one out of 100 studies like this (p-value = 0.9%).
We reject the hypothesis!
Conclusion: Children born by heavy smokers have a statistically significantly lower birth weight than children born by non-smokers.
Remember this depends on the sample size, the design, the statistical analysis...
Non-Parametric test: Wilcoxon-Mann-Whitney test
Some comments:
• There are two assumptions behind the test:
1. Independence between and within the groups.
2. Within each group: The observations come from the same distribution, e.g. they all have the same mean and variance.
• The test is designed to detect a shift in location in the two populations and not, for example, a difference in the variation in the two populations.
• You only get a p-value – the possible difference in location is not quantified by an estimate with a confidence interval.
• As a test it is just as valid as the t-test!
Stata: Wilcoxon-Mann-Whitney test
. use bwsmoking.dta,clear
(Birth weight (kg) of 29 babies born to 14 heavy smokers and 15 non-smokers)
. ranksum bw, by(group)
Two-sample Wilcoxon rank-sum (Mann-Whitney) test
group        |      obs    rank sum    expected
-------------+---------------------------------
Non-smoker   |       15       284.5         225
Heavy smoker |       14       150.5         210
-------------+---------------------------------
combined     |       29         435         435

unadjusted variance      525.00
adjustment for ties       -0.26
                     ----------
adjusted variance        524.74

Ho: bw(group==Non-smoker) = bw(group==Heavy smoker)
             z =   2.597
    Prob > |z| =   0.0094
Type 1 and type 2 errors
We will here return to the simple interpretation of a statistical test:
We test a hypothesis: δ = δ0. We will make a
Type 1 error if we reject the hypothesis when it is true.
Type 2 error if we accept the hypothesis when it is false.
If we use a specific significance level, α (typically 5%), then we know:
Pr(reject δ = δ0 given it is true) = Pr(reject δ = δ0 given δ = δ0) = α
The risk of a Type 1 error = α
Type 1 and type 2 errors
What about the risk of a Type 2 error?
β = Pr(accept δ = δ0 given it is not true) = Pr(accept δ = δ0 given δ ≠ δ0) = ?
This will depend on several things:
1. The statistical model and test we are using.
2. The true value of δ.
3. The precision of the estimate – the sample size and the standard deviation.
That is, the risk of a Type 2 error, β, is not constant. Often we consider the statistical power instead:
Power = Pr(reject δ = δ0 given δ ≠ δ0) = 1 − β
Statistical power – planning a study - testing for no difference
Suppose we are planning a new study of fish oil and its possible effect on diastolic blood pressure (DBP).
Assume we want to make a randomized trial with two groups of equal size, in which we will test the hypothesis of no difference. We believe that the true difference between the groups in DBP is 5 mmHg.
Furthermore we believe that the standard deviation of the increase in DBP is 9 mmHg.
We plan to include 40 women in each group and analyze the data using a t-test.
What is the chance that this study will lead to a statistically significant difference between the two groups, given the true difference is 5 mmHg?
[Figure: power (%) against the number of observations in each group (0–100) for sd = 7, 8, 9 and 10; true difference = 5, test for no difference. At n = 40 and sd = 9 the power is 69%.]
Statistical power, when the true difference is 5 and sd = 7, 8, 9 or 10, and we test the hypothesis of no difference.
Statistical power – planning a study
We plan to include 40 women in each group and analyze the data using a t-test; the true difference is 5 mmHg and sd = 9 mmHg.
Power = 69%
That is, there is only a 69% chance that such a study will lead to a statistically significant result – given the assumptions are true.
How many women should we include in each group if we want a power of 90%?
Based on the plot we see that approximately 69 or more women in each group will give a power of 90%.
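The power at a given n can be approximated with the normal distribution; a Python sketch (slightly optimistic relative to the exact t-based calculation):

```python
import math
from statistics import NormalDist

def approx_power(delta, sd, n, alpha=0.05):
    """Normal-approximation power for a two-sample test of no difference
    with n observations and standard deviation sd in each group."""
    se = sd * math.sqrt(2 / n)                      # se of the difference in means
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, 1.96 for alpha=5%
    return 1 - NormalDist().cdf(z_alpha - delta / se)

print(round(approx_power(delta=5, sd=9, n=40), 2))  # 0.7 (slides: 69% with the t-test)
```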
[Figure: the same power curves (true difference = 5, sd = 7, 8, 9 and 10); with sd = 9, a power of 90% is reached at n = 69 per group.]
Statistical power, when the true difference is 5 and sd = 7, 8, 9 or 10, and we test the hypothesis of no difference.
[Figure: power curves for a true difference of 10 and sd = 7, 8, 9 and 10 – test for no difference.]
The power increases as a function of the expected difference between the groups and decreases as a function of the variation (standard deviation) within the groups.
Power for two unpaired normal samples
In general there are five quantities in play:
δ = µ1 − µ2 : the true difference between the groups
σ : the standard deviation within each group
α : the significance level (typically 5%)
β : the risk of a type 2 error = 1 − the power
n : the sample size in each group
If we know four of these, then we can determine the last.
Typically we know the first four and want to know the sample size, or we know δ, σ, α and n and then want to know the power.
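With the normal approximation, solving for the sample size gives a closed-form expression; a Python sketch (the exact t-based answer from Stata is one higher, 70 per group):

```python
import math
from statistics import NormalDist

def sample_size(delta, sd, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for a two-sample test of
    no difference: n = 2 * (sd/delta)^2 * (z_alpha + z_beta)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (sd / delta) ** 2 * (z_alpha + z_beta) ** 2)

print(sample_size(delta=5, sd=9))  # 69, matching the power curve on the slides
```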
Stata: power for two independent normal samples
Power calculations are done using the power command:

. power twomeans 0 5, sd1(9) sd2(9) alpha(0.05) power(0.90)

Performing iteration ...
Estimated sample sizes for a two-sample means test
Satterthwaite's t test assuming unequal variances
Ho: m2 = m1 versus Ha: m2 != m1
Study parameters:
        alpha =     0.0500
        power =     0.9000
        delta =     3.2867
           m1 =     0.0000
           m2 =     5.0000
          sd1 =     9.0000
          sd2 =     9.0000
Estimated sample sizes:
            N =        140
  N per group =         70

* Prior to Stata 13:
* sampsi 0 5, sd1(9) sd2(9) alpha(0.05) power(0.90)
Comments on sample size calculations
• Most often done by computer (in Stata: power).
• There are many different formulas, see Kirkwood & Sterne Table 35.1. We will only look at a few in this course.
• It is in general more relevant to test that the difference is larger than a specified value – a so-called superiority or non-inferiority study.
• Or to plan the study so that it is expected to yield a confidence interval with a certain width.
• You need to know the true difference and you must have an idea of the variation within the groups. The latter you might find based on hospital records or in the literature.
• Sample size calculations after the study has been carried out (post hoc) are nonsense!! The confidence interval will show how much information you have in the study.