(Emerton J)

UPON RESUMING AT 2.15 PM:

.DM:DF:CAT 20/04/17 SC 11B 178 DISCUSSION Tuite

DUNCAN ALEXANDER TAYLOR, recalled:

MR DESMOND: Doctor, still under the guideline 3.2 of SWGDAM

sensitivity and specificity studies, in your validation

study article you say in relation to false exclusion

rates, "We have no realistic way of measuring the false

exclusion rate except to say we have no undiagnosed

instances of false exclusion". Is that still accurate?

---Yes. That's correct.

You go on further to say, "Having looked at some data we term

those low grade false inclusions since the LRs are low

and near neutrality or only slightly to the inclusionary

side. They occur when the false donor has the correct

alleles for inclusion and hence they are a property of

DNA rather than a consequence of the software not

performing". What do you mean there when you're saying

"when a false donor has the correct alleles for

inclusion"?---What I mean there is if you generate a

mixed DNA profile with a number of alleles at each region

and then you compare someone who's not one of those

contributors, but just by chance happens to have alleles

and information that make them look like a contributor,

then they will get a likelihood ratio that favours their

inclusion, not because of some error of software but just

simply because they happen to be unlucky enough to have

those same alleles.

Is that talking about adventitious matches?---Yes, that's what

it's classically called.

Are you able to opine as to the likely number of markers at

which you would see adventitious matches in such a

construct?---What I can say is that if you generate a

likelihood ratio for a contributor to a mixed profile,

and let's say that likelihood ratio is 1,000 favouring

the inclusion, then the probability of picking someone at

random from the population who would give you a

likelihood ratio of at least 1,000 favouring the

inclusion is less than 1 in 1,000.
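
The bound Dr Taylor states here is Turing's expectation: under the defence hypothesis, the probability of a likelihood ratio of at least x is at most 1/x. A minimal simulation sketch (hypothetical figures, not from the evidence) illustrates it for a single-source profile:

```python
import random

# Hypothetical sketch (not STRmix): for a single-source profile, a random
# non-donor matches a genotype of frequency p with probability p, and a
# match yields LR = 1/p.  Turing's expectation then bounds
# P(LR >= x | non-donor) by 1/x.
random.seed(1)

genotype_freq = 0.0008                 # assumed genotype frequency
lr_on_match = 1.0 / genotype_freq      # LR = 1250 for a matching non-donor

trials = 2_000_000
hits = sum(random.random() < genotype_freq for _ in range(trials))
rate = hits / trials                   # empirical P(LR >= 1250 | non-donor)
print(rate)                            # close to 1/1250, the Turing bound
```

With a genotype frequency of 0.0008 a matching non-donor receives an LR of 1,250, and the simulated rate of such matches sits around 1/1,250, mirroring the "at least 1,000 ... less than 1 in 1,000" relationship in the answer above.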

You go on to say: "There are no modelling improvements that

could ever be made which will eliminate all LRs that

falsely favour inclusion"; correct?---Correct.

"This is because the phenomenon causing these results is not a

modelling phenomenon but is due to the available

biological data", correct?---Correct.

You then have a paragraph that's, I think it's still part of

the guideline 3.2 for sensitivity and specificity studies

but your paragraph is 1.2.1 "uncertainty in the number of

contributors". "The determination of the effect of

incorrectly assigning the number of contributors to a

profile on the interpretation is not explicitly a

requirement of developmental validation within the SWGDAM

guidelines, however this is something the STRmix

development team has explored". Now this is determining

the number of contributors based on the examination of

the EPG prior to inputting the data into STRmix, is that

right?---That's right.

"The true number of contributors to a profile is always

unknown", we have been over that before and you adhered

to that?---Yes.

"Analysts are likely to add contributors in the presence of an

artefact, high stutter or forward stutter peak", you

would adhere to that?---Yes.

That's your experience that they're likely to?---That's been

the experience, yeah.

"The assumption of one fewer contributor than that actually

present may be made when contributors are at very low

levels, are affected by peak masking and are dropping out

(or visible below the analytical threshold) and in

profiles where DNA is from individuals with similar

profiles with the same concentrations". Are you happy

with that?---Yes.

A bit further on, "The inclusion of an additional contributor

beyond that present in the profile had the effect of

lowering the LR for trace contributors within the

profile", are you satisfied that's correct?---Yes.

So the issue of diagnosing the number of contributors can

impact upon the LR in a significant way when dealing with

trace contributors, small contributors?---It can lower

the LR for minor contributors if you increase the number

you use.

Yes. Moving on to guideline 3.2.3, precision. Your paragraph

1.3. "There are no two analyses which will be completely

the same using a stochastic system like MCMC. This is

a phenomenon that is relatively new to forensic DNA

interpretation". But you go on to say, "The main cause

of high variability within STRmix is non-convergence

within the MCMC. Strictly, Markov chains do not converge,

they explore the sampling space forever until they are

told to stop. What we mean when we say Markov chains have

reached convergence is that all chains are sampling from

and remain in the same high probability space.

Non-convergence is caused by the MCMC chains not being

run for a sufficient number of accepts". A bit further

on, "At the completion of burn-in, the MCMC progresses to

post burn-in. STRmix is set to run for a user-defined

.DM:DF:CAT 20/04/17 SC 11B 181 TAYLOR XXNTuite

1

1

2

3

4

5

6

7

8

9

10

11

12

13

14

15

16

17

18

19

20

21

22

23

24

25

26

27

28

29

30

31

23

number of burn-in and post burn-in accepts". Now, in

terms of the appropriate point for the number of runs,

burn-in and post burn-in acceptances, which validation

data do you rely on for that?---Well, within STRmix we

have a diagnostic which indicates when the chains have

converged.

Sorry, say that again?---Within the STRmix output, the results

at the end, there is a diagnostic value which tells us if

the chains have converged which then tells us if they

have been run for long enough.

Have I misunderstood, I thought there isn't actual successful

convergence because it would just take too long and you

would have to have too many runs and iterations?---No,

the way that Markov Chain Monte Carlo works is that you

want, as that paragraph just said, all the chains to be

in the same sampling space which is of high probability.

Yes?---And if they are, then this diagnostic that comes out at

the end of STRmix will tell you that they are.

And if you don't impose a limit you would run it until they

actually converge?---Or that they never completely

converge.

Well, until they closer converge, if they are going to?---So

the founders of the Markov Chain Monte Carlo technique,

Metropolis and Hastings are their names, suggested a

certain value for this convergence diagnostic as passing,

since they are the inventors of the technique, I adhere

to what they say and consider that any passing value has

converged sufficiently that there will be no issue with

the results.
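
The transcript does not name the diagnostic, but a widely used convergence check of the kind described is the Gelman-Rubin statistic, which compares within-chain and between-chain variance. A sketch of its form (an assumption for illustration, not STRmix's implementation):

```python
import random
import statistics

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for several chains sampling the same parameter;
    values near 1 indicate the chains occupy the same high-probability
    region (a sketch, not the diagnostic actually coded in STRmix)."""
    m, n = len(chains), len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    grand = statistics.fmean(means)
    between = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    within = statistics.fmean(statistics.variance(c) for c in chains)
    var_hat = (n - 1) / n * within + between / n
    return (var_hat / within) ** 0.5

random.seed(0)
# Chains exploring the same space vs. chains stuck in different regions:
good = [[random.gauss(0, 1) for _ in range(5000)] for _ in range(4)]
bad = [[random.gauss(mu, 1) for _ in range(5000)] for mu in (0, 5, 0, 5)]
print(gelman_rubin(good), gelman_rubin(bad))   # ~1.0 vs well above 1
```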

Okay. "Putting aside non-convergence there will always exist a

level of MCMC run-to-run variability"?---Correct.

Is that just echoing back to "you'll never get exactly the same

LR"?---Yes.

You go on to say, "Variation in LRs produced from STRmix

analysis will depend on both the sample and the run

parameters. Sample-specific factors that affect

precision include: 1, the number of contributors to a DNA

profile, 2 the quality, intensity of the DNA profile, 3

the number of replicates available for analysis, 4 the

probability of the observed data given the genotype of

the POI as a contributor, and 5 the amount of STR

information available in the profile"; is that

correct?---Correct.

You then deal with the heading "number of MCMC chains and

accepts". "Increasing the number of either accepts or moves

and adjusting the step size can reduce but not totally

remove the variation", still correct?---Yes.

But you go on to say, "There is, however, an associated run

time cost, hence a trade-off between reproducibility and

run time", you say in this article, "must be

struck"?---Correct.

What's the run time cost such that you form the view it becomes

prohibitive to increase the number of either accepts or

moves?---Well, the number of iterations that we use has

been scrutinised, and we have had a look at a number of

profiles and the amount of reproducibility for these

complex mixtures when we run the same mixture multiple

times, and we are happy with the amount of run-to-run

variability that we get with our current number of

accepts, so we don't push out those number of accepts.
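
The trade-off described here, more accepts meaning less run-to-run variability at a run-time cost, follows the usual Monte Carlo scaling: spread falls roughly as one over the square root of the number of iterations. A toy illustration (an invented estimator, not STRmix's MCMC):

```python
import random
import statistics

# Sketch of the reproducibility vs run-time trade-off: run-to-run spread
# of a Monte Carlo estimate shrinks roughly as 1/sqrt(iterations), so
# halving the variability costs about four times the run time.
random.seed(3)

def one_run(iterations):
    # one "analysis": Monte Carlo estimate of a known mean (0 here)
    return statistics.fmean(random.gauss(0, 1) for _ in range(iterations))

spread = {}
for n in (1_000, 16_000):
    runs = [one_run(n) for _ in range(50)]   # repeat the same analysis
    spread[n] = statistics.stdev(runs)       # run-to-run variability
print(spread)   # 16x the iterations -> roughly 4x less spread
```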

Well, you have said that before, but I'm just taking you to

this particular statement where you've said there is a

trade-off here between reproducibility and run time that

must be struck", meaning putting cost issues aside, it

could be run for a longer time, presumably to get a

closer reproducibility, is that right or is that

wrong?---It could be. That's correct.

Okay. Are you able to tell us what the run time cost you had

in mind that justifies the trade-off, what sort of

expense are we talking about?---No, I don't really think

it was based on a run time cost specifically for, I am

only talking here about my lab specifically, understand.

Yes?---There wasn't a run time cost for us specifically, it was

more to do with the level of reproducibility, and when we

determined the level of reproducibility, we were happy

with the run time that incurred.

Just pardon me a moment: You authored an article with Messrs

Buckleton and Evett published in FSI Genetics 16 (2015) 165

to 175 entitled "testing likelihood ratios produced from

complex DNA profiles"?---Yes.

The abstract commencing, "The performance of any model used to

analyse DNA profile evidence should be tested using

simulation, large scale validation studies based on

ground truth cases or alignment with trends predicted by

theory". Towards the end of the abstract you say, "A

better diagnostic is the average LR for Hd true which

should be near to one. We tested the performance of a

continuous interpretation model on nine DNA profiles of

varying quality and complexity and verify the theoretical

expectations". In the introduction to the article you

then go on to say, "There is a limit in the type of DNA

profiles that can be assessed by simulation because

calibrating an LR of X requires simulation that has many

more than X elements". Perhaps you'd better put that in

English for us, an LR, calibrating an LR of X requires a

simulation that's got many more than X elements?---Yes,

so that what that means is, if you were randomly to

simulate profiles of non-contributors and compare them to

your STRmix analysis to generate a likelihood ratio, if

your true contributors had likelihood ratios of say tens

of millions then you would have to run more than tens of

millions of these simulations of random profiles in order

to properly cover all the genotypes of interest, and if

the person of interest, or the true contributor, had a

likelihood ratio in the tens of billions you need to run

tens of billions of simulations, that's what that

sentence means.
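
The scaling in this answer can be seen in a toy simulation: an event whose probability is at most 1/X is essentially never observed until the number of random non-contributor trials greatly exceeds X (hypothetical numbers):

```python
import random

# To calibrate an LR of X by simulating random non-contributors you need
# many more than X trials: P(LR >= X | non-donor) <= 1/X, so with fewer
# trials the event is simply never seen.  (Illustrative figures only.)
random.seed(2)

X = 10_000                 # target LR (hypothetical)
p = 1.0 / X                # Turing bound on P(LR >= X | non-donor)

results = {}
for trials in (1_000, 100_000):
    results[trials] = sum(random.random() < p for _ in range(trials))
print(results)             # few or no hits until trials >> X
```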

Okay. "For a complete DNA profile in a modern profiling system

the value of X can be over 20 orders of magnitude which

is well beyond the practical limits of any standard

computer". The passage I want to concentrate on, you

refer to Dorum and others and you give a reference, 11,

which I will take you to. They "outline a method to

overcome this restriction. The fraction of genotypes

equalling or exceeding the LR is calculated exactly and

assuming between locus independence". Now, you adhere to

that?---Yes, I don't, I'm not as familiar with this

article as some of the others you have been talking

about, but, yes, I adhere to that.

I will just read the sentence again because I am going to try

to marry it up, it may be it's on a different issue, but

there's a passage in Chakraborty's report which addresses

the issue of either dependence or non-dependence between

markers?---Okay.

So your sentence was "Dorum and others outline a method to

overcome this restriction, the fraction of genotypes

equalling or exceeding the LR is calculated exactly and

assuming between locus independence"?---Okay.

You might recall, if not I'll remind you of it Chakraborty -

just pardon me a second - you've got his report over

there as I understand it in front of you?---That's right.

Okay. If you have a look at p.11, or my copy, it's p.11 and

it's paragraph 21 commencing with, "Also worthy to

note"?---Yes.

"Also worthy to note that lack of consensus among the methods

of interpreting complex mixed DNA mixture extends to

describe and model allele drop outs across alleles within

a locus or across loci. Since it is almost universal to

recognise that allelic drop outs are events mainly due to

DNA degradation", do you agree with that first?---I would

say that they are mainly due to the amount of DNA in the

sample rather than DNA degradation.

He continues, "It is reasonable to predict that drop out event

would occur more frequently (that is drop out probability

will be higher) for alleles with higher amplicon sizes

within as well as across loci". Do you agree with that

half of the statement?---Yes.

Okay. "Also reasonable to assert is if for a specific

evidentiary item allele or alleles of small amplicon

sizes are observed to have dropped out (generally for

loci on the left-hand side of the EPG). The chances of

alleles towards the right-hand side (those larger

amplicon sizes) would also be missing should occur more

frequently by chance alone". Do you agree with

that?---Yes.

Next paragraph, "In other words, allele drop out probabilities

or their complements cannot be multiplied in developing

formulae for allele determination involving joint

behaviours of allele drop outs", do you agree with

that?---No. And neither does a large body of scientists

that do multiply drop out probabilities in our

calculations going right back to 2000.

Well, do you limit it to 2000?---That's when it first sort of

- - -

Is a recent - - - ?---That's when the first sort of seminal

paper came out which suggested multiplying drop out

probabilities in order to generate a probabilistic

likelihood ratio.
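
The two positions can be made concrete. Dropout is commonly modelled per allele, with degradation making larger amplicons drop out more often, and per-locus probabilities are then multiplied under an independence assumption. A hedged sketch with invented parameters (the decay rate, template amounts and logistic link are illustrative, not STRmix's actual model):

```python
import math

def p_dropout(template_pg, amplicon_bp, decay=0.003):
    # Degradation: expected surviving template falls exponentially with
    # amplicon size, so larger amplicons drop out more often.
    effective = template_pg * math.exp(-decay * amplicon_bp)
    # Logistic link: less surviving template -> higher dropout probability.
    return 1.0 / (1.0 + math.exp(0.08 * (effective - 50)))

# (template pg, amplicon bp) for three loci of increasing amplicon size:
loci = [(60, 120), (60, 250), (60, 400)]
per_locus = [p_dropout(t, bp) for t, bp in loci]

# The contested step: joint probability by multiplication, i.e. assuming
# per-locus dropout events are independent given the degradation model.
joint_all_drop = math.prod(per_locus)
print(per_locus, joint_all_drop)
```

Note the sketch reproduces both points in the exchange: dropout probability rises with amplicon size (Chakraborty's prediction), while the multiplication across loci is the independence assumption in dispute.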

This is a core issue, isn't it, I mean if Chakraborty is

correct here and you're wrong, then really the

whole theory behind STRmix or any probabilistic

genotyping methodology becomes worthless, or meaningless,

largely, that's how fundamental or core this issue is, do

you agree with that at least?---No, I don't think I agree

with the way you're putting it there.

I am not trying to dramatise it, but if he's right, that you are

not allowed to multiply across the loci, and that is

something that you do, your conclusions, ie the LRs,

aren't then worth the paper they're written on, if that

is the breach of a fundamental tenet or a rule of math

in this situation; do you agree with that?---If this is

the case and we're doing something, we're assuming

independence of some event that's not independent, what

you need to do is have a look at the consequence of

that dependence and you make an assumption to say we're

assuming that this dependence is not significant and

then you look at the results that you get, the way the

science works isn't that you fundamentally throw out

everything that's been done for the past 20 years because

you don't agree with some dependence statement.

We learn by mistakes?---What you said is not supported by

scores of papers that have been produced in the last 20

years, and there's probably - - -

And that's - - -

HER HONOUR: Let the witness finish?---There's probably eight

or nine different probabilistic genotyping systems that

do multiply drop out probabilities across profiles, and

there's been studies that look at the amount of drop out

compared to what the models would predict and found that

they have been very closely aligned, and even if we were

to take what Chakraborty is saying which is DNA degrades

across the profile so you expect more drop out at the

high molecular weights than low molecular weights, that is

exactly how STRmix works, so he's misunderstood - firstly

he is not correct in some of his statements here.

Secondly he has misunderstood the way that STRmix works,

making point 21 almost pointless.

It's almost pointless, did you say?---Almost pointless because

of the inaccuracies.

If that's your opinion, so be it, I want to concentrate on - -

- ?---And the opinion of many others.

I am sorry, I didn't hear you then?---And the opinion of many

others.

Well, everyone that comes to court I will ask a question about,

but in the meantime I am restricted to you?---Okay.

If it assists you or gives you some confidence, Professor

Balding is as firm in the opinion you're expressing as to

the correctness of the multiplication rule, he gave

evidence yesterday?---Okay.

Okay? But he effectively said if it turned out we were wrong

on this and Chakraborty is right then the whole system needs

to be re-looked at before we could meaningfully use the

LRs produced. But just going back a step. The

fundamental principle regarding multiplication, have I

understood you to say, it's first been published as

recently as 2000, thereabouts?---That was one of the

initial fundamental papers that talked about drop out

probabilities being incorporated into likelihood ratios.

I am concentrating specifically on this issue as to whether you

can or can't multiply the drop out probabilities within

and across the loci?---Doing so is very common practice

around the world.

But since 2000 or thereabouts, is that where we're at?---Yes.

Okay. Now, the reason why I'm taking you to this part of

Chakraborty's report at this point in the cross-examination was

because, and I might be completely wrong, but that

sentence in the testing likelihood ratios paper that I

was asking you questions about, is that addressing this

issue? So I'll read it again, "Dorum and others outline

a method to overcome this restriction, the fraction of

genotypes equalling or exceeding the LR is calculated

exactly and assuming between locus independence"?---Okay.

It is in part addressing this issue as to whether there's

dependence or independence between, amongst the

loci?---There is, but there are different aspects that

you consider to be dependent or independent between loci,

for example genotype frequencies are considered

independent between loci, you can multiply up these

genotype frequencies to get a profile frequency, but in

STRmix aspects such as DNA amount is not independent

between loci, it's instead independent between

contributors, so for example if you have one contributor

with a lot of DNA and one contributor with a little of

DNA at one locus you expect that to be the same at the

next locus, so there's different factors of independence

between loci, depending on what aspect of the model you

are talking about.

Is this aspect that's being referred to here in the paper, the

testing likelihood ratios paper, addressing the

independence issue being raised by Chakraborty?---It

sounds like the independence issue that that sentence is

talking about is perhaps genotype probabilities.

Well, just for completeness, I will go to the Dorum paper, the

reference was Dorum, Bleka, Gill, Haned, Snipen, Sæbø and

others, "Exact computation of the distribution of

likelihood ratios with forensic applications"?---Okay.

And published in 2014 Volume 9 of FSI Genetics, and part of

that abstract reads, "If complex DNA profiles conditioned

on multiple individuals are evaluated it may be difficult

to assess the strength of the evidence based on the

likelihood ratio. A likelihood ratio does not give

information about the relative weights that are provided

by separate contributors. Alternatively the observed

likelihood ratio can be evaluated with respect to the

distribution of the likelihood ratio under the defence

hypothesis, we present an efficient algorithm to compute

an exact distribution of likelihood ratios that can be

applied to any LR-based model". The particular aspect of

the paper that deals with the issue of independence, the

only reference to it I can find in the paper is at p.95

it falls under the heading of "methods", then it's

subheading "problem formulation". Next subheading "the P

value", next heading "algorithm - step one". Then the

authors say, "The marker loci are assumed to be

independent, which means that the likelihood ratio score

for a genotype profile is the product of the likelihood

ratio scores for the individual loci". Firstly, do you

agree with that statement, or before you say whether you

agree or not, what is that statement saying? I'll read

it again?---Okay.

"The marker loci are assumed to be independent, which means

that the likelihood ratio score for a genotype profile is

the product" - and product is multiplication, it's another

word for saying multiplication - you accept

that?---Correct, yes.

"Is the product of the likelihood ratio scores for the

individual loci", so as I read it that's saying you can

multiply the LRs you obtained at each separate marker,

you just multiply them across to end up with the giant

LR?---Yes, that's how I understand it.

Which - - - ?---That's how I understand it as well.

Which is what your STRmix does, agreed?---Which is what STRmix

does, yes.

That's right, so this is addressing the independence

issue?---Well, this is taking the independence of

genotypes between different regions, which has never

really been in question.

Okay, but it's talking about multiplying the LRs across all

loci to produce the final LR?---Yes.

You can only do that, do you accept that, if there is

independence between the markers?---Well, there's

independence between the genotypes between markers.

There's not necessarily independence, as I said before,

of all mass parameters across the profile.

No, but just on this issue as to whether you can multiply the

LRs, which is what STRmix does?---Yes, you can, you can

multiply the LRs at each region to get the overall

likelihood ratio.
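
The calculation agreed here, the overall LR as the product of per-locus LRs, is also what makes the Dorum et al. approach tractable: under between-locus independence the exact distribution of the overall LR can be built locus by locus rather than by enumerating whole profiles. A toy sketch (invented per-locus distributions, not the paper's algorithm or data):

```python
from collections import defaultdict

# Toy per-locus distributions of the LR score for a random non-donor
# (invented numbers): LR value -> probability under the defence hypothesis.
locus_dists = [
    {2.0: 0.3, 0.5: 0.7},
    {5.0: 0.1, 1.0: 0.9},
    {10.0: 0.05, 0.2: 0.95},
]

# Build the exact distribution of the product locus by locus; between-locus
# independence lets probabilities multiply along with the LRs.
dist = {1.0: 1.0}
for locus in locus_dists:
    nxt = defaultdict(float)
    for lr, p in dist.items():
        for locus_lr, locus_p in locus.items():
            nxt[lr * locus_lr] += p * locus_p
    dist = dict(nxt)

p_exceed = sum(p for lr, p in dist.items() if lr >= 5.0)
print(p_exceed)   # exact P(overall LR >= 5) for a random non-donor
```

With these toy numbers P(LR >= 5) comes out at 0.05, which also respects the 1/x bound discussed earlier in the afternoon.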

Because they are independent?---In that respect they are

independent, yes.

Now what I'm suggesting to you is these authors don't provide

any reference for the statement or assertion the marker

loci are assumed to be independent. Now, I'll ask you to

either, can you provide them now for those that exist,

that provide empirical data supporting the proposition

the loci are independent, either now or at a convenient

time if you can pass the references on to the Crown so

they can be made available to the defence. So you

understand what I'm asking for?---Yes.

HER HONOUR: Wasn't there an earlier answer though to the

effect that this system of multiplying the loci is used

in X number of different systems and has been used in X

number of different systems for X number of years and,

therefore, the validation studies in relation to those

systems validate that methodology, wasn't that the answer

that you gave earlier Dr Taylor?---Yes, it was.

Or something to that effect?---Yes.

MR DESMOND: Well, I understood your answer to be a combination

of the two, but I may have misunderstood it. That is,

the years of practice with this construct is proving what

we're doing, because we say it's accurate, but also there

is empirical data that supports the proposition that it

is appropriate to multiply. So I suppose I'll ask that

question again. Do you say there is any empirical data

that's been peer reviewed that supports the proposition

that the loci for the purposes of multiplication of the

LRs are independent?---I will have to go back and have a

look at literature. I will be going back a fair way

because it's one of those fundamental tenets of biology,

it's similar to looking for a reference for the process

of addition and saying that's valid, it's going to be

quite difficult, but I can perhaps allay some concerns by

telling you that many of the markers in the current

profiling kits exist on different chromosomes, which

means by definition they are independent from one

another. There's a biological reason why that has to be

so.

Well, how many chromosomes are involved in the PP21

amplification?---So with the modern kits there are some

markers that are on different areas of the same

chromosome. Most of them are spread across different

chromosomes, some are on the same chromosome, and for

those pairs of markers there has been some validation

studies that look at their independence and found that

they can be multiplied, as all the other markers can.

Okay. Well, again, can you trouble yourself to find what

studies support that assertion you're making now?---Okay.

That is for the markers on the same chromosome as well as - - -

?---Yes.

- - - obviously the ones on different chromosomes. If

Your Honour would just pardon me.

HER HONOUR: Can I just ask, wasn't the product rule used for

spurs as well, the earlier system, the binary

system?---I'm not really sure of that. It would have

been some form - perhaps I'll step back a second. There

is a term in forensic biology called the product rule

which often refers to carrying out a likelihood ratio

calculation without including any considerations of

co-ancestry within a population but if I put that aside

and we talk about the product rule simply being the

ability to multiply individual locus likelihood ratios to

obtain an overall - - -

That's what I was referring to, yes?---In that case it would

have been because that's the way that likelihood ratios

are calculated.

Right.

MR DESMOND: Towards the end of this paper, the testing

likelihood ratios paper, doctor, you say, "We recognise

here that the reality of case work is that there is

generally a complexity threshold where DNA profiles will

not be analysed"?---Yes.

"This threshold should not be taken as evidence of the

non-reliability of the model's performance but rather a

practical or business decision made by the

analyst"?---Yes.

"It's a natural concern that is sometimes expressed by forensic

practitioners that where profiles contain relatively

small amounts of DNA and consequent appreciable levels of

uncertainty in their measurements and designation, then

the LR from a continuous calculation becomes increasingly

unreliable." Have you pursued or investigated further

this issue of identifying a complexity threshold by way

of providing any criteria or relevant guidelines to the

forensic practitioners?---As I say in the paper there,

it's not really a science decision, it's a business

decision, so it's up to each individual laboratory and

their practices and their resources and their time

pressures to come up with a group of profiles and limits

on the sorts of profiles that they can practically

analyse in the time that they have to complete their

analyses.

I thought you were recognising the phenomenon that despite the

advances in the science that the probabilistic genotyping

provides theoretically, notwithstanding that there are

cases where the complexity is just too high and

STRmix really shouldn't be used to assist further?---Well

if you - I think if you read that sentence again it says

that there's a complexity threshold that's not a product

of any level of reliability of a software but rather a

business decision.

Yes. But I mean one would think that a government funded

forensic science laboratory is not going to produce a

likelihood ratio for some business decision, some

financial concern, if that's what you mean by business.

What do you mean by business decision?---Well, an example

that I can use from forensic science South Australia is

that we don't analyse five person mixtures where we can't

assume anyone's a contributor. That's not because we

have any lack of faith in the ability of STRmix to do so,

in fact we have validated STRmix to do so, it's simply

that the amount of time required to analyse and then

interpret and then review these very very complex

profiles is a business decision and we have decided we

haven't got the resources or the time to do that.

You've published the paper with Messrs Buckleton and Bright,

"Does the use of probabilistic genotyping change the way we

should view sub-threshold data?" - that's in the Australian

Journal of Forensic Science 2015?---Yes.

Apart from other things you have this to say: "This work

evaluates some options for analysts to deal with

sub-threshold information and the risks or benefits

associated with each in the context of analysis within a

continuous DNA interpretation system. We introduce a

novel method for dealing with sub-threshold data

implemented within the STRmix program that allows the

user to specify a prior belief in mixture proportions.

Much of the discussion will be dominated by the topic of

choosing a number of contributors for analysis, which is

where the sub AT peaks will have their biggest impact on

interpretations. There have been various works that look

at the consequences of over-estimation or

under-estimation of the number of contributors". A bit

further on: "This leads to the question of how, if at

all, sub-threshold information should be taken into

account when making the choice of a number of

contributors", and you say, "We, the authors, consider

four broad categories for consideration. 1. Ignore the

presence of sub-threshold peaks when interpreting DNA

profiles. 2. Change the method by which data are

generated (either by lowering the AT or carrying out

replicate PCRs). 3. Use informed priors on mixture

proportion in a probabilistic system. 4. Do not

interpret the DNA profile". Now, from memory you

ultimately concluded that - well, I won't say that unless

I get it wrong. I want to concentrate on change number


2, change the method by which data are generated either

by lowering the AT or carrying out replicates?---Okay.

At paragraph 1.22, "Lowering the AT". "Comparing graphs

vertically in figure 1" - you may or may not recall,

there will be little point me holding it up to the

camera, but there's a - figure 1 is described in writing

as log10 LR versus template per contributor (pg) using

sub-threshold information (experiment 1) or ignoring

sub-threshold information (experiment 2) for a range of

four person profiles"?---Okay.

So that's what the figures show apparently?---Yep.

"Comparing graphs vertically in fig. 1 shows very little

noticeable improvement in the ability to discern true and

false donors. However, comparing rows horizontally in

table 4 suggests that lowering the AT or using

sub-threshold information leads to improved ability to

assign the number of contributors. There is a cost in

expert time in using very low thresholds. Although no

evidence is presented here we assume that at very low

thresholds even the most skilled experts will let through

artefacts occasionally". In the conclusion section of

the document you say, "Continuous systems (at least

STRmix as trialled here) can overcome the issues of

missing low level data with minimal effects on the

outcome of the analysis. The effects of over estimation

of the number of contributors may not be too severe as

long as the system has been reliably validated for this

policy. This situation should not be used to enable a

reduction of valid quality practices such as replication

and careful expert inspection of profiles and cannot be

assumed to be conservative". It goes on a bit further,


"We have discussed strategies to mitigate the effect of

uncertainty in the number of trace contributors present

when sub-threshold information is present in a DNA

profile. We support replication and lowering the AT

whenever practical. The use of sub-threshold data

without lowering the AT may be useful in some cases".

Just on those two statements, is that a different way of

saying the same thing, you either lower the AT so that

you no longer have sub-threshold data, because data that

would otherwise be sub-threshold sits above the lowered

AT or else you just read - - - ?---That's what that

means.

Or else you just read the sub-threshold information that's

below the existing AT?---Yes, that's right.

Yes?---That's right.

What AT do you support or recommend in 2017 for PP21 profiles

that have been amplified and going to be analysed by

STRmix?---Okay. Well it depends largely on the model of

electrophoresis instrument that's being used, so common

thresholds that I've seen laboratories use and validate

for STRmix around the world for 3130 instruments would

have been around 30 to 50 RFU range and for the 3500

instruments I've seen thresholds, analytical thresholds

of around 50 to 150.
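The analytical-threshold logic described in this answer can be sketched as follows - a minimal illustration only, not any laboratory's actual pipeline; the `apply_at` helper and all peak heights are invented:

```python
# Minimal sketch of an analytical threshold (AT): peaks below the AT
# (in RFU) are excluded from interpretation; lowering the AT admits
# more low-level peaks. All names and values here are invented.
def apply_at(peaks, at):
    """Keep only peaks whose height (RFU) meets or exceeds the AT."""
    return {allele: height for allele, height in peaks.items() if height >= at}

locus_peaks = {"12": 820, "13": 95, "14.2": 40}  # allele -> peak height (RFU)

high_at = apply_at(locus_peaks, 100)  # stricter AT keeps only the tall peak
low_at = apply_at(locus_peaks, 50)    # lower AT admits a second peak
print(high_at, low_at)
```

Lowering the AT from 100 to 50 RFU here admits the 95 RFU peak, mirroring the trade-off discussed above between extra information and the extra expert time needed to screen artefacts.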

You don't put any constraint on the general principle you

express there that you would support or recommend either

lowering an AT or simply making use of sub-threshold

information?---No, it's more of a theoretical discussion

on how to deal with this sub-threshold piece.

On either - - - ?---So it's a recommendation made in the

absence of any restrictions of, once again, resource and


time and policy and lab practices and so forth.

But you could understand, and I acknowledge those issues, but

you could understand potentially from any given accused's

viewpoint if sub-threshold information is present or if

the AT is lowered so that there's more information than

what there previously was, it may be that there might be

more peaks that would be information used by STRmix, if

STRmix is the program that was going to be used, that

would get a weighting potentially as to whether these

further peaks are allelic or stutter?---Perhaps I'll

answer it in this way. The validation that labs perform,

actually they all work towards the minimal reasonable

analytical threshold that they can sustain within their

laboratory, and also they have - I'll quote from their

policies around when samples would be replicated with

further PCR amplifications. Now of course there's always

the option to push those boundaries but if you do you

could continually make the same argument. So, for

example, if I did - if we had a policy of doing two

replicate amplifications for each DNA extract, one of

three, one of four, one of five, if you were to lower the

analytical threshold from 100 to 90, why not lower it to

50, why not lower it to 20, why not lower it to 10? The

reality is there's always some limit to which you have to

balance the needs of your lab with the amount of

information that you're getting.

Yes, but what I'm suggesting to you is your comment there is

really, "I acknowledge what the current situation is that

we have, that there are analytical thresholds being used

by these laboratories", and I could - - - ?---Yes.

- - - only assume that you would say that knowing they apply


these analytical thresholds based on their validation

studies and data, they just don't pull it out of thin

air?---Yes, that's right.

But it's in that context you're still supporting there may be

occasions where either lower the AT or make use of the

sub-threshold data if the AT's not to be lowered?---Yes.

It's still in that context of you saying that?---But those are

theoretical solutions to the problem of sub-threshold

peaks that we've suggested or recommended.

They can be practical solutions as well in a given case.

They're not necessarily theoretical. If the theory's

applied it becomes a practical solution to making use of

more data for the biologist to analyse or reject as being

genuine or artifact?---Yes, the theory could be put into

practice.

And the lowering of the threshold makes it a more proper

description that the genotyping is truly becoming fully

continuous in the sense of where artificial ATs are

imposed either by policy, based on validated data or not,

there is a threshold there which is contrary to what a

genuine, fully continuous genotyping system by definition

should be?---Um, I take your point. Any threshold - the

idea is always to remove all thresholds completely from

any evidence interpretation.

Okay. Your Honour, I'm going on to a new point, does

Your Honour wish me to continue or will we have a break

at this stage?

HER HONOUR: Are we going to finish today? What do you think?

MR DESMOND: No, Your Honour.

HER HONOUR: We won't.

MR DESMOND: No.


HER HONOUR: All right. In that case we'll have a ten minute

break I think.

DR ROGERS: Just before you do that, Your Honour, I'm wondering

about whether Mr Desmond might be able to indicate the

length of tomorrow because we've got to go ahead now and

book the video-link for tomorrow.

HER HONOUR: Right.

DR ROGERS: And we need to know really - - -

MR DESMOND: I'll finish by lunchtime, Your Honour.

HER HONOUR: Half a day.

DR ROGERS: So if we do half a day.

MR DESMOND: Yes, half a day's long enough. If it's not long

enough then I've had my chance.

HER HONOUR: All right.

DR ROGERS: All right, thank you.

HER HONOUR: Half a day tomorrow. If you don't mind, Dr

Taylor, we'll take a 10 minute break now. That will give

you a chance to stretch your legs as well?---Thank you.

And we will resume at 20 past 3.

<(THE WITNESS WITHDREW)

(Short adjournment.)

<DUNCAN ALEXANDER TAYLOR, recalled:

MR DESMOND: Doctor, are you familiar with a paper published in

FSI Genetics Supplement Series, 2015, Series 5,

"Development of new peak height models for a continuous

method of mixture interpretation", Messrs Manabe, Hamano,

Kawai, Morimoto and Tamaki?---Not specifically, no.

The abstract: "DNA mixture interpretation based on a continuous

model is an effective strategy for calculating rigorous

likelihood ratios using peak heights and considering

stochastic effects. Such a model would require the


elucidation of various biological parameters affecting

the expected peak heights. In the present study we

estimated the distributions of locus specific

amplification efficiency, heterozygote balance and

stutter ratio in 15 commercially available STR loci using

2, 3, 4 single source DNA samples. Our data suggested

that the locus specific amplification efficiency followed

a normal distribution, whereas the heterozygote balance

followed a log normal distribution for each locus. We

modelled log normal distributions for stutter ratios with

allele specific mean values which exhibited a positive

correlation with allele repeat numbers. However, with

the D8, D21 and D2 loci the log normal distribution did

not fit our data because of the complex repeat structures

involved therefore an alternative model for each of these

three loci will need to be incorporated into a software

program based on a continuous model", that's the end of

the extract and in the results and discussion section of

the paper the authors pretty much repeat the same thing.

I'll just read this. "However LUS values could not be

determined for D8, D21 and D2. For example there are two

types of repeat structures in the D8, in the D8S locus

i.e.", and then - I'll read it out, "(TCTA) to the little a

and (TCTA), then (TCTG) and (TCTA) to the little a

minus 2, for allele a. Therefore for these three loci

an alternative model is required and must be incorporated

into a software program based on a continuous model".

For the purpose of the question assume what these authors

say is correct?---Okay.

That your model would or would not need to be modified to

accommodate these three loci where LUS values couldn't be


determined?---No, I don't think our model would need to

be adjusted and I say that because during the validation

of STRmix in our lab, but also in many labs, we have a look at

the stutter ratios for each allele for each locus and

determine whether or not the regressions that we are

using in STRmix and the longest uninterrupted sequences

we're using in STRmix are applicable.

You say LUS values can be determined for D8, D21 and D2S, is

that your position?---I would suspect so. There's online

databases where you can look up these values.

Well, I'm asking you now. Your best recollection is would it

be - just so we're clear, is it you, you the developers

or is this a matter for internal lab validation studies,

identifying whether LUS values can or cannot be

determined for D8, D21 and D2S?---Well it's up to each

lab to individually determine the stutter ratios for each

locus and how STRmix will model that. Having said that

we have done work or in particular Joanne Bright has done

work to see whether or not there is much variation in

stutter ratios between labs using the same DNA profiling

kits and it turns out that there's not.
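The per-locus stutter-ratio modelling Dr Taylor describes - regressing stutter ratio against the longest uninterrupted sequence (LUS) - can be sketched as a simple least-squares fit. This is an illustration only, with invented data points, not the STRmix implementation:

```python
# Sketch of a stutter-ratio regression against LUS (longest
# uninterrupted sequence). Data points are invented for illustration.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

lus = [8, 10, 12, 14]              # LUS value per allele
ratios = [0.04, 0.06, 0.08, 0.10]  # observed stutter ratios

slope, intercept = fit_line(lus, ratios)
expected_ratio = intercept + slope * 11  # predicted stutter ratio at LUS 11
```

A lab's validation would compare such per-locus fits against its own observed stutter data before relying on them.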

Is your final position even if these authors are right it

doesn't make much difference?---Well, we've modelled

stutter and stutter ratios for all these loci so we know

how it's performing.

For these three loci?---For those loci and many others we've

modelled stutter ratios and are perfectly happy with the way

that we have modelled them for use in STRmix.

But prior to me bringing the article to your attention you

weren't aware of it?--- No.

Are you - on a different issue, on a more general footing - are


you familiar with authors Alan Jamieson and Scott Bader

who have written a book "A Guide to Forensic DNA

Profiling", I believe it's in 2016?---I haven't seen the

book but I know of particularly Alan Jamieson but I

haven't seen that book.

I just want to read to you parts of their paragraphs dealing

with fully continuous probabilistic genotyping

approaches?---Okay.

And ask you whether you agree or disagree. "A significant

challenge to fully continuous probabilistic genotyping is

the lack of consensus within the scientific community

about which of a number of possible solutions is

correct", do you agree with that?---No, not really.

"In those instances where the forensic science community cannot

be persuaded that a particular approach is correct it

will not be possible for a probabilistic genotyping

algorithm that uses that approach to reasonably assert

that that probabilistic genotyping approach itself is

generally accepted", do you agree with that?---No.

"And if a probabilistic genotyping approach professes to have

finally solved the difficult task of distinguishing

between signal and noise in DNA test results it would be

irresponsible for that solution to not be shared in the

peer reviewed literature so that it could be applied by

those using conventional approaches to interpret test

results", do you agree with that?---That statement seems

more reasonable. My general feeling is that any

significant advances to the way that we can interpret DNA

evidence, or really any evidence in any field really have

a responsibility to then publish those results and share

it with the wider community.


The phrase "distinguishing between signal and noise", is that a

reference to distinguishing between peaks and stutter and

other artifacts?---It's hard to say exactly what they're

talking about. It could be stutters and alleles, it

could be alleles and artifacts, it could be baseline

noise inherent in the instruments that generate profiles

and peaks.

Is this correct, there is no biological answer and not likely

to be one in the foreseeable future to the task of

distinguishing between alleles and stutter and other

artifacts if we include them? It's not a biological

solution that's ever - seemingly ever going to be

developed?---Well, that's - - -

It's a simulation exercise, it's a modelling solution that's

being promulgated, is that correct?---There's a few parts

to your question there. No, there isn't any biological

way we're going to be able to distinguish stutters or

peaks that are completely stutter versus peaks that are

partly stutter and partly allelic. The only way that we

can deal with that sort of uncertainty is through the use

of probabilities and probabilistic genotyping. So my

answer to that particular question would be that we

already account for that uncertainty and we have very

strong models for accounting for uncertainty surrounding

allelic or stutter nature of peaks. That's what we do

already. With regards to the second component, which is

identifying artifacts from other peaks, there is work

being done in that area. I'm in fact doing some work in

that area myself at the moment using a particular type of

machine learning known as artificial neural networks and

I've published work to that effect.


I think I've read the paper or one of your papers, is it the

artificial robot or artificial intelligence for the

purposes of calculating the number of possible

contributors?---There is - well, the paper that I've

written is called "Teaching Artificial Intelligence to

Read Profiles" or electropherograms.

Yes?---There is another paper that uses machine learning to

assign number of contributors. I'm not an author on that

particular paper but it does exist.

I'll move on to the next statement by Jamieson and Bader:

"Debates similar to those regarding signal and noise

continue to a greater or to a lesser extent for many of

the features of a test result that a human analyst or a

continuous probabilistic genotyping approach might

consider", do you agree with that?---Can you just read

that again?

"Debates similar to those regarding signal and noise continue

to a greater or to a lesser extent for many of the

features of a test result that a human analyst or a

continuous probabilistic genotyping approach might

consider"?---I don't really know what that sentence is

even saying.

Okay. "Proponents of some fully continuous probabilistic

genotyping approaches such as STRmix and True Allele have

suggested that validation studies that show that their

approaches consistently arrive at correct conclusions are

sufficient to ensure the reliability of their

approaches". Is his statement correct in the sense of

that that's what you, on behalf of STRmix, are

suggesting?---Well, again, there's a couple of components

to what - that sentence there. One is that it talks


about we're saying that we get the correct likelihood

ratio, I believe you said at one point.

I'll read it again for you?---Okay.

"Proponents of some fully continuous probabilistic genotyping

approaches such as STRmix and True Allele have suggested

that validation studies that show that their approaches

consistently arrive at correct conclusions are sufficient

to ensure the reliability of their approaches"?---Okay,

so it says "correct conclusions" so by that we'll take it

that he means inclusionary or exclusionary likelihood

ratio values for contributors or non-contributors, so

still there's a couple of points. One is that we don't

necessarily state that we have consistently correct

conclusions - I guess we're going back to the idea of

adventitious matching here where you can get

non-contributors who give inclusionary values just by

property of the DNA rather than by whatever system you

use to analyse it. However, what I would say is that I

do agree that we say that we've done large numbers of

validation studies and we've looked at a lot of profiles

and a lot of mixtures in a lot of different situations and

we use this as evidence that STRmix produces reliable

results.
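The kind of validation summary being described - checking whether non-contributors ever receive likelihood ratios favouring inclusion (adventitious matches) - can be sketched like this; every LR value below is invented for illustration:

```python
import math

# Sketch of a validation-study tally: count non-contributors whose
# likelihood ratio (LR) nonetheless favours inclusion (log10 LR > 0).
# All LR values below are invented.
true_donor_lrs = [1e9, 5e6, 2e12]             # known contributors
non_contributor_lrs = [1e-6, 3.0, 0.4, 1e-9]  # known non-contributors

false_inclusions = sum(1 for lr in non_contributor_lrs if math.log10(lr) > 0)
print(false_inclusions)  # one low-grade adventitious inclusion (LR = 3.0)
```

As the testimony notes, such low-grade inclusions for non-contributors reflect chance allele sharing rather than a software error.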

You may anticipate it but I'll be asking you questions further

on that issue in relation to the recent PCAST

publication?---Okay.

Just continuing with Jamieson and Bader, they continue that

last sentence that I've just read out?---Okay.

With the next, "That position is at odds, however, with the

long-standing expectation both in forensic science and

other disciplines such as computer science that


validation studies are primarily intended to determine

the boundaries beyond which an approach cannot be

expected to generate reliable results. Validation

results cannot be extrapolated to say that a method can

generate a reliable interpretation for all manner of

samples. No validation study could be performed to

demonstrate that a probabilistic genotyping approach is

'fit for all purposes'. Instead they need to deliver

explicit limits such as while output for up to three

person mixtures and where degradation/inhibition has not

occurred are correct for more than 99 per cent of

samples, this approach is presently not suitable for use

with mixtures of four or more individuals where

degradation/inhibition may have occurred or where close

relatives may have been contributors to a mixed sample."

Was that too much or do you want me to read it again or

did you absorb it?---No, that's all right, ask your

question.

Well, do you agree with that?---I agree with the sentiment that

you need to know the limits of whatever system you're

using and certainly we have limits to what STRmix

analyses.

See, as I read it, he's really saying you're extrapolating your

validation results backwards to say, "Well, these are the

results we got, the results are correct therefore we

validated our system". It's a sort of reverse way of

validation. It may be I am misunderstanding what he's

saying?---No, but that is the standard way of validating

something, creating known mixtures or known results,

testing them within your system, knowing the answers that

they should give, looking at the answers and if the


answers are as you expect, then you've validated your

system or at least for whatever it is that you've tested.

They go on to say, "Among the boundaries that need to be

explored separately and in concert are the number of

contributors to a sample (the larger the number of

contributors the more computationally challenging an

evaluation), the degree of degradation/inhibition of the

DNA from each of the contributors to a sample, the degree

of relatedness of possible contributors to a sample and

the quality of useful information in a test result. Some

samples simply will not have appreciably more information

associated with them than negative controls and reagent

blanks". Do you agree with that?---That seems

reasonable.

He goes on to say, "Validation studies with dozens or even

hundreds of samples cannot suffice given that evidence

samples come in a very wide variety (of number of

contributors, degrees of degradation and mixture ratios)

and are unlikely to be well represented in any validation

work". Do you agree with that?---Not completely, no. I

would say that we've done quite extensive validation on

STRmix to quite a high degree of profile complexity and

that, we would expect, to cover the vast majority of case

work samples we would see and there's no reason that even

if we obtain a particular ratio of contributors in a case

work sample that we haven't tested in a validation sample

that there's any reason we can't use STRmix in those

situations.

After a draft report - Professor Balding confirmed that - in

August last year a final report to the President entitled

"Report to the President: Forensic Science in Criminal


Courts, Ensuring Scientific Validity of Feature

Comparison Methods" was published. Are you aware of

it?---Yes.

Apart from other sciences it addressed, one of which was the

science of probabilistic genotyping?---Yes, that's right.

On p.76 of the report - - - ?---I do actually have this report

with me.

You're welcome to open it up, it will be easier to follow,

follow me as I read parts of it to you. Just for sort of

context, p.76 there's the paragraph, "Subjective

interpretation of complex mixtures"?---Yes.

Then we can go down to the bottom of p.78, "Current efforts to

develop objective methods"?---Yes.

And the authors state, "Given these problems several groups

have launched efforts to develop probabilistic genotyping

computer programs that apply various algorithms to

interpret complex mixtures. As of March 2014 at least

eight probabilistic genotyping software programs had been

developed", and they list them, one of which is STRmix,

"with some being open source software and some being

commercial products. The FBI lab began using the STRmix

program less than a year ago in December 2015 and is

still in the process of publishing its own internal

developmental validation". Just on that, has that been

completed to your knowledge as yet, that FBI

developmental validation?---Yes, that's recently just

come out in Forensic Science International Genetics.

Okay. But is that akin to what FSL in Victoria have done

validating STRmix for their purposes, an internal

validation, or is the FBI one of greater - - -?---Yes, it

would have - - -


It's the same?---Yes, it would have similar components to the

Victorian validation.

It may but is it more widespread, is it more of a generic

validation, or it's solely for the FBI lab type

validation or you're not sure?---This is a validation

that's specifically for the FBI lab but in doing that

validation they have constructed a number of mixtures and

they have done analyses of those mixtures, including

mixtures that are up to five people in complexity and

have minor components that are low, around five to 10

per cent of the profile, so there's a general scientific

interest because they've gone a bit further than some of

the other published validations.

All right. They then continue, "These probabilistic genotyping

software programs clearly represent a major improvement

over purely subjective interpretation. However, they

still require careful scrutiny to determine: One,

whether the methods are scientifically valid, including

defining the limitations on their reliability (that is

the circumstances in which they may yield unreliable

results) and, two, whether the software correctly

implements the methods. This is particularly important

because the programs employ different mathematical

algorithms and can yield different results for the same

mixture profile". Do you agree with that

paragraph?---Yes, both of those things are important to

look at.

Next paragraph: "Appropriate evaluation of the proposed

methods should consist of studies by multiple groups not

associated with the software developers that investigate

the performance and define the limitations of programs by


testing them on a wide range of mixtures with different

properties". Do you agree with that sentence?---I agree

that these sorts of tests should be done. I don't really

see the need to cut out the software developers if

they're the ones that are doing the tests.

So you're saying evaluations both by software developers and

groups not associated with the software

developers?---Yes, both groups.

Okay. "In particular, it is important to address the following

issues: one, how well does the method perform as a

function of the number of contributors to the mixture?

How well does it perform when the number of contributors

to the mixture is unknown?" You would agree with that,

wouldn't you?---Yes, that's reasonable.

Number 2: "How does the method perform as a function of the

number of alleles shared among individuals in the

mixture? Relatedly, how does it perform when the

mixtures include related individuals?" Sounds reasonable

as well, doesn't it?---Sounds reasonable.

Number 3: "How well does the method perform - and how does

accuracy degrade - as a function of the absolute and

relative amounts of DNA from various contributors? For

example, it can be difficult to determine whether a small

peak in the mixture profile represents a true allele from

a minor contributor or a stutter peak from a nearby

allele from a different contributor (notably this issue

underlies a current case that has received considerable

attention)" - and I'll take you to the footnote in a

minute, but do you disagree with any of those sentences

or questions raised in issue 3?---Well the first sentence

there I agree with, it's good to look to see how the


method performs as a function of the absolute and

relative amounts of DNA. The next sentence, for example,

it can be difficult to determine whether a small peak in

the mixture represents a true allele from a minor

component or a stutter peak from a nearby allele from a

different contributor. A lot of that is dealt with

probabilistically these days, so within STRmix. If

there's a small peak in a stutter position and now if

it's also in a forward stutter position STRmix will

consider it as allelic or stutter and weigh those options

up against one another.
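The weighing-up Dr Taylor describes can be sketched probabilistically - comparing how well the observed peak height fits "entirely stutter" versus "stutter plus a minor allele". This is a toy model with invented numbers and a simple log-normal height distribution, not the STRmix model:

```python
import math

# Toy comparison of two explanations for a small peak one repeat below
# a tall parent peak: entirely stutter vs stutter plus a minor allele.
# The log-normal height model and every number here are invented.
def lognormal_pdf(x, mean, sigma):
    """Density of a log-normal centred (in log space) on log(mean)."""
    return (math.exp(-((math.log(x) - math.log(mean)) ** 2) / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

parent, observed = 1000.0, 80.0   # peak heights in RFU
stutter_ratio = 0.06              # expected stutter proportion
minor_allele = 30.0               # putative minor contribution in RFU

p_stutter = lognormal_pdf(observed, stutter_ratio * parent, 0.3)
p_mixed = lognormal_pdf(observed, stutter_ratio * parent + minor_allele, 0.3)

total = p_stutter + p_mixed
weights = (p_stutter / total, p_mixed / total)  # normalised weightings
```

With these invented numbers the observed 80 RFU peak sits closer to the "stutter plus minor allele" expectation than to the pure-stutter one, so the mixed explanation receives the larger weighting.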

Just on that issue, in a given situation where a peak is

weighted on one or more occasions as an allele at a

given locus, and even when it's weighted at a very low

level, but is still weighted as an allele, and it's not a

match for the person of interest, then the reasonable

possibility must exist that the entire sample is not

coming from the person of interest, because you've got

that one allele that's a positive mismatch; do you agree

with that?---No. I mean, if you're talking about mixture

where there's a major component and a minor component and

you are talking about a putative - there might be

stutter, or that might be stutter and an allele of a

minor, so sometimes the program will consider that that

peak in the stutter position is completely stutter, and

sometimes it will consider that it's partly stutter and

partly allelic and - - -

Sorry to interrupt, how does it reflect where it's partly

stutter and partly peak in the one piece of data that's

printed and we can read?---As in an electropherogram or

as in STRmix results?


In the STRmix results, where you have like the top five

genotype page and the DCON?---If you look at the various

weightings for the different genotypes that STRmix

considered, if it's considered that any part of that

peak is allelic, then it will have a weighting, or

appear in the list. But because it's only considered

allelic part of the time, there's no need for a minor

contributor to have that particular peak as one of their

alleles. The other part of the time it's considered entirely

stutter, which means the genotype of the minor

contributor could be something that did include that

stutter peak.

I understand the logic of that and that's why I said to you the

weight of the evidence may be that it's a stutter, but

STRmix is saying there's a reasonable possibility, and

let's use the figure of five per cent for the sake of the

argument, on five per cent of occasions that, in fact,

could be an allele?---Okay, yes.

If that's not a match at that particular locus with the person

of interest then that person can, in fact, then be

excluded on the basis it's reasonably possible he did not

contribute to that DNA?---Well, if, say, five

per cent of the time STRmix considers that to be

allelic and 95 per cent of the time it considers some

other combination of alleles to be the genotype of the minor

component, if the person being compared aligns with the

95 per cent of the time genotype then they will get quite

a high likelihood ratio.

Indeed?---If they align with the five per cent genotype, which

includes some incorporation into that stutter peak, then


they will get a lower weighting and the likelihood ratio

will be lower and if they don't align with any

possibility that STRmix has considered then they will be

excluded.
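The weighting-to-likelihood-ratio logic described in the answers above can be illustrated with a toy sketch. This is only an illustration of the general continuous-genotyping idea, not STRmix's actual code; the genotypes, weights and allele frequencies below are invented for the example.

```python
# Toy sketch of how genotype weightings for a minor contributor feed
# into a likelihood ratio (illustrative only; not STRmix itself).

def likelihood_ratio(weights, genotype_freqs, person_genotype):
    """LR at one locus: the weight STRmix-style deconvolution assigns
    to the person's genotype, divided by the probability a random
    person would have that genotype (hypothetical population freqs)."""
    numerator = weights.get(person_genotype, 0.0)  # 0.0 -> never considered
    denominator = genotype_freqs[person_genotype]
    return numerator / denominator

# Invented weightings at one locus: 95% of the time the minor's genotype
# is (12, 13); 5% of the time the peak in the stutter position is treated
# as allelic, giving genotype (11, 13).
weights = {(12, 13): 0.95, (11, 13): 0.05}
freqs = {(12, 13): 0.02, (11, 13): 0.03, (13, 14): 0.05}

print(likelihood_ratio(weights, freqs, (12, 13)))  # high LR (~47.5)
print(likelihood_ratio(weights, freqs, (11, 13)))  # lower LR (~1.7)
print(likelihood_ratio(weights, freqs, (13, 14)))  # 0.0 -> excluded
```

The sketch mirrors the three outcomes in the testimony: aligning with the 95 per cent genotype gives a high LR, aligning with the 5 per cent genotype gives a lower one, and aligning with no considered genotype gives an LR of zero, i.e. exclusion.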

I just want to make sure we're - I've explained the issue

clearly to you as - - - ?---Okay.

As it may be I will be seeking to raise with you in front of a

jury when we get around to having a trial. There are

occasions where in the STRmix data a given peak will be

deemed allelic for a minor component to the mixture,

perhaps let's say five per cent of the time, 95 per cent

of the other time that peak is being deemed stutter,

okay?---Okay.

But on that scenario where it's allelic five per cent of the

time, at the particular marker, if that is a peak that

does not match with the known reference sample of the

person of interest, then if that in fact is an allele it

cannot be the DNA, the person of interest cannot have

contributed to that component, you agree with that, don't

you? Well, I should say it's a reasonable possibility

that he may not have contributed to that component of the

DNA profile?---If that peak was definitely allelic and he

didn't possess it then he couldn't be the contributor,

but you're saying in your scenario STRmix only considers

allelic five per cent of the time, which is a far cry

from it definitely being allelic.

Indeed it is, but it's still a reasonable possibility, that is

my point?---It's a possibility that has been - it's been

considered by STRmix in the holistic way that it

generates a likelihood ratio.

Okay. Look, sorry, I had digressed on that, but getting back


to this document. Paragraph 4, or issue 4, "under what

circumstances and why", so before we do issue 4, I had

promised to take you to footnote 212?---Yes.

Some programs - sorry. "In this case examiners used two

different DNA software programs, STRmix and True Allele

and obtained different conclusions concerning whether DNA

from the defendant could be said to be included within

the low level DNA mixture profile obtained from a sample

collected from one of the victim's fingernails. The

judge ruled that the DNA evidence implicating the

defendant was inadmissible and it was referred to as the

Potsdam boy's murder case may hinge on minuscule DNA

sample from fingernail", in the New York Times,

apparently, and there's a reference to DNA results

wouldn't be allowed in the Hillary murder trial. Are you

familiar with - that's two cases they're referring to, as

I read it, the Hillary case?---I think it's - - -

Or it's the one - - - ?---I think it's just the one case, just

the Hillary case.

You didn't give evidence or you did give evidence in that

trial, in the pre-trial proceedings?---I wasn't involved

in that trial at all.

Okay. And does that include that you weren't involved in the

prosecution as your services personally hadn't been

retained on the STRmix result for the purpose of seeking

to have it admitted?---Correct, that was all handled by

John Buckleton, one of the co-developers of STRmix.

All right. Let's go to issue 4, "Under what circumstances and

why does the method produce results (random inclusion

probabilities) that differ substantially from those

produced by other methods". Do you agree with


that?---Yes, that's useful information to know.

I'll skip a paragraph. If you think we should go back to it,

tell me, but I will go to the paragraph, "Most

importantly current studies have adequately explored only

a limited range of mixture types (with respect to number

of contributors, ratio of minor contributors and total

amount of DNA)". Do you agree with that sentence?---Um,

yes, but go on.

"The two most widely used methods", is that right, that STRmix

and True Allele to your knowledge are the two most widely

used methods?---Probably out of the continuous genotyping

systems those would be the two most widely used methods,

yes.

"The two most widely used methods STRmix and True Allele appear

to be reliable within a certain range based on the

available evidence and the inherent difficulty of the

problem", and the problem is identified - he refers to a

number of articles, many of which you are joint author

with, so we don't need to repeat all those?---Okay.

But there is the footnote for that sentence. Do you agree with

that sentence?---The point I would make with that

sentence - there are two points that I would make about that

sentence. One is that in compiling the PCAST report, the

people on the board there were only considering published

material, there's a wealth of information in unpublished

laboratory validations which go towards the reliability

of STRmix beyond the range of what these people had

access to.

Yes?---The other point I would make is that since the published

- since this PCAST report was published the FBI

validation material has been published in a scientific


journal and goes well beyond the limits outlined in the

PCAST report, as notably they go to five person mixtures,

and in those five person mixtures the minor component is

less than 10 per cent, so later on in the report when

they say that these limits could be expanded if further

published material is available, well, already further

published material is available, and just even further to

that, we, as in the STRmix development team, are

compiling all the validation material from about 20 or 30

different labs around the world that have validated

STRmix in different ways with different instruments. We

now have mixtures that in total number in the thousands,

from single source profiles to very complex five person

mixtures down to contributors giving as little as half a

per cent of material, for which we are just preparing a draft

report for publication, so very quickly these limits

that the PCAST report are talking about have either

already been exceeded by the FBI validation or will soon

be extraordinarily exceeded by the publication that we

will put out.

That's a big answer to that one sentence, and whether you

agreed with it. You are very, you were very concerned

when this report came out, Dr Taylor, isn't that fair?

This was seen as a roadblock in the, put aside commercial

interests, I am not accusing you of that, but in the

ongoing acceptance, particularly in America either by the

labs, but perhaps more importantly it raising potential

admissibility issues in their courts of law in the United

States, that was the potential roadblock this PCAST

report created, do you agree?---So that was a roadblock

that labs would be facing, although in all likelihood


what they will really be facing is just additional time

in court to explain that the validation reports, or

validation work that they had done internally, addressed

the concerns of the PCAST report, so it wasn't likely to

find STRmix inadmissible in many cases, in my opinion,

because most labs will have already validated beyond the

level of what PCAST was limited to, and this is why I say

it's a shame they didn't have access to all this

unpublished validation material from labs and why I say

that we hope to rectify that by - - -

HER HONOUR: Dr Taylor, could you repeat that, the transcriber

just has had a bit of difficulty because you have been

breaking up?---Certainly.

MR DESMOND: I am happy if he answers it again, Your Honour.

HER HONOUR: Could you give the answer again, please, Dr

Taylor, if you can remember the question?---I think so.

So my opinion was that I didn't really see this report

being a roadblock in that STRmix was going to be found

inadmissible all of a sudden in a number of cases because

labs have done quite extensive validations to show that

STRmix is reliable and can function with these more

complex mixtures that go beyond what PCAST was limited

to, so it is a shame that PCAST didn't have access to

this extra material, the result will be more time in

court for analysts explaining what they have done.

MR DESMOND: Did you not read from the National District

Attorney's Association, marked for immediate release on 2

September 2016, so very quickly it seems after the final

report had been published, and I will just see if I can

find the particular author, but I can't. "National

District Attorney's Association slams President's council


of advisors on science and tech report. The President's

council of advisors on science and tech, PCAST, voted

yesterday to release a report highly critical of

virtually every forensic discipline used in investigation

of prosecution of virtually every crime committed in

America". I will get to the relevant part. "The PCAST

position is that the forensic science discipline

specialising in the examination of bite marks, firearms,

tool marks, complex DNA matches, tyre treads and shoe

prints, each lack scientific foundational support and

should not be permitted for use in the criminal

courtroom". Now I am not saying he's correct, but I'm

saying the issue was publicly being agitated by

prosecuting authorities in America and I'm suggesting to

you that's a concern that you would have had, there were

potential admissibility issues that may arise in America

because of the PCAST report. Do you still adhere to your

previous answer?---Well, I agree with what you are

saying, there is potential admissibility issues as a

result of this PCAST report coming out, but as I say in

my opinion I don't think, I think the practical outcome

would have been a lot of time spent in court by analysts

talking about validation reports, that would have been

the main outcome.

I understand, but there was quite a reaction also from the

scientific community in - - - ?---There was a number of

letters from a number of different groups that responded

to the PCAST report.

Highly critical of the report and the way it was gone about

including some of the issues you've identified, they

ignored all the sort of unpublished validation material?


---Yes.

Let's just go back to the report. Pardon me, I have just to

find where I've got it now. I think I'm up to this

paragraph that commences, "A number of papers have been

published". I don't think I have read that yet?---That

was the one that you - - -

Page 80. That is the next paragraph after issue 4?---That was

the one you skipped and you read the paragraph at the end

of p.80 and got up to footnote 215.

Okay, then I will keep skipping. The next sentence of that

paragraph I was reading, "Specifically", or just to give

the context because I am not sure if Her Honour has got

the paper. "The two most widely used methods STRmix and

True Allele appear to be reliable within a certain range

based on the available evidence and the inherent

difficulty of the problem. Specifically these methods

appear to be reliable for three person mixtures in which

the minor contributor constitutes at least 20 per cent of

the intact DNA in the mixture, and in which the DNA

amount exceeds the minimum level required for the

method", and there's a footnote there, 216 which reads,

"Such three person samples involving similar proportions

are more straightforward to interpret owing to the

limited number of alleles and relatively similar peak

heights. The methods can also be reliably applied to

single source and simple mixture samples provided that in

cases where the two contributions cannot be separated by

differential extraction the proportion of the minor

contributor is not too low, e.g., at least 10 per cent".

Are you able to say where they have got that from, that

footnote?---No, no, I don't know.


PCAST is comprised of a number of eminent scientists, not

so?---I believe it was a group of scientists and lawyers

and judges and statisticians, generally from outside the

forensic community, but a number of them.

I understand. But I think we could take it as granted they

just wouldn't make something up, that would be

unlikely?---That would be unlikely.

Okay. That statement or sentence, "Specifically these methods

appear to be reliable for three person mixtures in which

the minor contributor constitutes at least 20 per cent",

et cetera, do you agree with that sentence?---No, and

I'll say two things about it. One is that in an addendum

put out by the PCAST Group they slightly changed the way

that that sentence read, so that rather than it being in

which the minor contributor constitutes at least 20

per cent, they said in which the person of

interest constitutes at least 20 per cent.

Yes?---Okay, so that's one difference. The second point I

would make is if you look over the page at p.81 the

second paragraph says, "When further studies are

published it will likely be possible to extend the range

in which scientific validity has been established to

include more challenging samples." So already, as I said,

the FBI paper has now been published. It's already

extended those ranges, it's doing exactly what PCAST has

said.

We'll get to that but can I just clarify where - I haven't been

- Professor Balding mentioned this addendum of the change

from minor contributor to person of interest yesterday.


I haven't been able to find it. I'm not accusing him of

making it up. Have you got a web link you could email

the Crown so that I could get it off them or - - -

?---Yes, certainly.

- - - are you able to email, if not the web link, the addendum

itself?---Yes, I will do.

I mean overnight so that I can ask you about it

tomorrow?---I'll do it as soon as I get back to work.

Thanks for that. If it's changed from minor contributor to

person of interest what's the significant change there,

is it that - - -?---There is quite a significance to that

change so you could imagine - let's start off with a nice

strong single source profile with a major contributor or

a single source profile, okay, and then you might have a,

that strong single source profile but now it's got a

second minor contributor present and you're happy to

interpret that and you might be comparing a person of

interest's reference who aligns with a major component of

that mixture of which there's no ambiguity in your

interpretation. Now you might get a third very minor

weak person coming up in that profile and all of a sudden

you've got a three person mixture where one of the minor

contributors is below 20 per cent and even though the

person you are comparing aligns with a major component

and that there's no real ambiguity in your

interpretation, according to the PCAST need for a minor

contributor to be at least 20 per cent, you wouldn't be

able to interpret it but if you now change that to the

person of interest has to be at least 20 per cent now you

can interpret it.

But with the change does the person of interest have to


contribute at least 20 percent to the minor contribution

or to the entire DNA?---To the profile.

All right, so continuing. Paragraph: "For more complex

mixtures, e.g. more contributors or lower proportions

there is relatively little published evidence". Do you

agree with that?---Yes, and if you look at the footnote

they do reference a number of publications of which I am

the author of a number of them which do look at more

complex mixtures but they consider that relatively little

so I'll go with their ruling on that.

"In human molecular genetics, an experimental validation of an

important diagnostic method would typically involve

hundreds of distinct samples." You'd agree with that,

wouldn't you?---That seems reasonable.

"One forensic scientist told PCAST that many more distinct

samples have in fact been analysed but the data have not

yet been collated and published", which I think you'd

agree with because you're saying well that's the case,

aren't you?---That's right.

But they go on to say, "Because empirical evidence is essential

for establishing the foundation or validity of a method

PCAST urges forensic scientists to submit, and leading

scientific journals to publish, high quality

validation studies that properly establish the range of

reliability of methods for the analysis of complex DNA

mixtures". I think you'll agree with that. You'd agree

with that, wouldn't you? We really need to have

empirical data published in peer review journals?---Yes,

that is the ideal. One of, I believe one of the

criticisms of the PCAST report in that particular

recommendation that they make is that forensic journals,


particularly high quality - sorry, leading scientific

journals don't publish validation material because of - -

-

Like forensic science, FSI, those sorts of journals?---Yes,

like FSI, they don't publish validation material because

it's not novel and it's not generally interesting to the

scientific community, so there's this disconnect between

PCAST recommending that validation studies be published

and journals not accepting validation studies for

publication.

Well are you aware of any validation studies dealing with

either STRmix in particular or probabilistic genotyping

generally that have been submitted to any of the leading

publications but have not been accepted?---Um, gosh, I

have anecdotal evidence, I haven't been involved in those

particular papers specifically.

Okay. And then we get to the paragraph I think that you took

me to before when further studies are published and

you've identified the FBI study is now published?---Yes.

And that's, amongst other things, you say that validates for

five component mixtures, does it?---They include five

person mixtures in that study.

In Victoria STRmix is restricted to three still, is that right,

or has it increased in recent times?---I'm not sure.

"When further studies are published it will likely be possible

to extend the range in which scientific validity has been

established to include more challenging samples." You've

not looked at the Tuite - I can't remember now whether I

showed you any of the EPGs or not myself in the earlier

pre-trial but you've never been handed the EPGs by the

Crown to look at?---Yes, I have.


You have, have you?---Yes.

Approximately when?---Well if you look - - -

Do you mean in the lead up for this time around or in the lead

up to last time you gave evidence?---If you have a look

at my report, my 20 page report dated 15 August 2016.

This is where you're addressing Chakraborty and Adams?---Yes,

that's right.

Were you given the EPGs to assist you with preparing this

report?---If you look at p.17, 17 and 18 and 19 are the

results of me re-analysing a number of those profiles in

the latest version of STRmix. So I've had those profiles

and I have re-analysed them and given the likelihood

ratios from a later version of STRmix there.

All right. I'll have a look at that overnight. So what's the

answer, you got them in the lead up to preparing this

report?---Yes.

Okay.

HER HONOUR: So what was the answer, they were given in the

lead up to which report?

MR DESMOND: To prepare this report.

DR ROGERS: My date is 15 August 2016.

HER HONOUR: Right, thank you.

MR DESMOND: That's probably a convenient stage to cease for

today, Your Honour.

HER HONOUR: All right. Very good. Dr Taylor, I'm afraid

you're going to have to come back tomorrow for the

morning?---That's all right.

And no doubt the OPP will arrange the setting up of the

video-link. We might switch you off now?---Okay.

Thank you?---Thank you.

<(THE WITNESS WITHDREW)


MR DESMOND: Your Honour we will be discussing at some stage

tomorrow the timetable?

HER HONOUR: For the trial?

MR DESMOND: Yes.

HER HONOUR: Yes, I was wondering whether you were proposing to

make some kind of application beforehand.

MR DESMOND: I was thinking about it last night, Your Honour.

All I could say is it's possible but I need to review the

transcript.

HER HONOUR: All right.

MR DESMOND: And if I was going to make an application I'd send

an email to see what convenient date could be arranged

for a further pre-trial day or days if that application

is made.

HER HONOUR: All right. As I understand it the parties have

discussed witness availability and so forth and it's

looking like next year.

DR ROGERS: Yes, Your Honour. This year is effectively out for

a number, a range of witnesses unavailable dates.

HER HONOUR: Yes.

DR ROGERS: The road block for early next year is Deborah Scott

who will be on leave for three months.

HER HONOUR: At the beginning of next year?

DR ROGERS: At the beginning of next year. She returns in

April, at the beginning of April.

HER HONOUR: Right.

DR ROGERS: And at the moment everybody else is available from

April onwards. So we were rather hoping, notwithstanding

a possible application before Your Honour in relation to

the termination of the proceedings or otherwise, that we

could obtain the dates next year starting in April.


HER HONOUR: All right.

DR ROGERS: Because it may well be that those dates then change

if we leave it, let it go for another month or so.

HER HONOUR: All right. I'll have a think about that

overnight.

DR ROGERS: Yes, thank you.

HER HONOUR: Now, Mr Tuite's bail was something that was done

administratively yesterday. I believe it needs to be

extended.

MR DESMOND: I don't think it was mentioned yesterday, was it?

I probably should have asked Your Honour.

HER HONOUR: Well, it wasn't, I did it administratively in

chambers.

MR DESMOND: Thank you for that, Your Honour.

DR ROGERS: Perhaps it could be formally extended today until

tomorrow morning.

HER HONOUR: Yes. And can you please then have a look at the

situation. I think it was extended until yesterday

morning at the previous directions hearing or mention.

MR DESMOND: I anticipate it would have been.

DR ROGERS: Yes, and I note that he turned up yesterday and

today.

HER HONOUR: Yes.

MR DESMOND: The issue is going to be if it's adjourned off for

12 months, that's a long time to - - -

DR ROGERS: What, be on bail?

MR DESMOND: Not to be on bail, but it's a long time ahead but

it is what it is, I suppose.

HER HONOUR: Yes, all right.

MR DESMOND: Unless Your Honour forms a different view that you

want a mention in a few months to check he's still


around.

HER HONOUR: We might think about doing something like that.

MR DESMOND: Yes, Your Honour.

HER HONOUR: In the meantime, I will extend Mr Tuite's bail

until 10.30 am tomorrow morning.

DR ROGERS: And, finally, Your Honour, I have a personal

difficulty about being here after midday tomorrow.

HER HONOUR: Yes.

DR ROGERS: But Mr Sonnet will be here tomorrow and will just

step into my shoes, if that's all right with Your Honour.

HER HONOUR: Very good.

DR ROGERS: I'm sorry, I just have this prior commitment.

HER HONOUR: Yes, not a problem.

DR ROGERS: Thank you.

HER HONOUR: On that note the court will adjourn until 10.30

tomorrow morning.

ADJOURNED UNTIL FRIDAY 21 APRIL 2017


WITNESS AND EXHIBIT LIST: PAGE:

DISCUSSION 178

DUNCAN ALEXANDER TAYLOR, RECALLED 179

THE WITNESS WITHDREW 201

DUNCAN ALEXANDER TAYLOR, RECALLED 201

THE WITNESS WITHDREW 227

DISCUSSION 228 -

229

ADJOURNED UNTIL FRIDAY 21 APRIL 2017 229
