Confidence in Visual Decision-Making

Master's Thesis

for the degree 'Master of Science'

in the Neuroscience Program

Max-Planck International School

Georg August University Göttingen

Supervisors: Prof. Dr. Melanie Wilke and Ahmad Nazzal

submitted by

Liao Yi-An

born in

Taipei, Taiwan

2016


First Referee: Prof. Dr. Melanie Wilke

Second Referee: Prof. Dr. Tobias Moser

Supervisors: Prof. Dr. Melanie Wilke and Ahmad Nazzal

Date of Submission of the Master Thesis: 31 March 2016

Date of Written Examination: 24 August 2015

Date of Oral Examination: 26, 27 August 2015


Table of Contents

List of Figures
Acknowledgements
Abstract
Introduction
   Decision-Making
      Overview in Decision-Making
      Perceptual Decision-Making
      Models of Decision-Making
      Animal Studies in Decision-Making
      Human fMRI Studies in Decision-Making
   Confidence in Decision-Making
      Overview in Confidence
      Models of Confidence Formation
      Animal Studies in Confidence
      Human fMRI Studies in Confidence
   Aim of This Study
Materials and Methods
   Participants
   Stimuli
      Visual Stimuli
      Auditory Stimuli
   Procedure
   Experimental Setup Outside the Scanner
   Behavioral Task Outside the Scanner
   Experimental Setup and Behavioral Task Inside the Scanner
   2-by-2 Repeated Measures ANOVA
   fMRI Data Acquisition
   fMRI Data Preprocessing and Analysis
Results
   Visual Task
      Behavior Results of Visual Task Outside the Scanner
   Auditory Task
      Behavior Analysis of Auditory Task Outside the Scanner


   fMRI Results in Visual Task
      Stimulus Presentation Period
      Direction
      Difficulty
      Correctness
      Event related Average in Midbrain
      Event related Average in Right and Left Inferior Frontal Gyrus
      BOLD Signal in Candidate Regions for Confidence and Uncertainty
      BOLD Signal in Midbrain Increases along with the Confidence Rating
Discussion
   Behavior Results Discussion
      Confidence in Auditory Task is Easily Affected
   fMRI Results of Visual Decision-Making Task
      Stimulus Presentation Period
      Direction
      Difficulty
      Correctness
      Confidence
   Midbrain and Confidence Formation
   Dorsolateral Prefrontal Cortex and Uncertainty
Conclusion
References
Appendices


List of Figures

Fig1. Confidence in Signal Detection Theory.

Fig2. Bound Evidence Accumulation Model.

Fig3. Integration-and-Fire Attractor Model.

Fig4. Confidence in Signal Detection Theory.

Fig5. Confidence in Bound Evidence Accumulation Model.

Fig6. Two-Layer Model for Decision-Making and Confidence Read-Out.

Fig7. Neuron Firing Rate Prediction based on Two-Layer Model.

Fig8. BOLD Change Prediction based on Two-Layer Model.

Fig9. BOLD signal in DLPFC and Subgenual Cingulate Cortex.

Fig10. BOLD Signal Change in Different Difficulty.

Fig11. Neuron Firing Rate in Primate Pulvinar in RDM Task.

Fig12. Visual Task inside the Scanner.

Fig13. The Flow Chart for Identification of the Confidence and Uncertainty Encoding Regions.

Fig14. Accuracy and Confidence Outside the Scanner.

Fig15. Accuracy vs. Confidence and Went Trials Outside the Scanner.

Fig16. Number of Trials in Each Predictor Outside the Scanner.

Fig17. Performance Outside the Scanner.

Fig18. Confidence vs. Difference of Stimuli Outside the Scanner.

Fig19. Accuracy and Confidence Inside the Scanner.

Fig20. Accuracy vs. Confidence and Went Trials Inside the Scanner.

Fig21. Number of Trials in Each Predictor Inside the Scanner.

Fig22. Performance Inside the Scanner.

Fig23. Confidence vs. Difference of Stimuli and Confidence vs. Difficulty Inside the Scanner.

Fig24. Repeated Measures ANOVA.

Fig25. Accuracy and Confidence in Auditory Task Outside the Scanner.

Fig26. Accuracy vs. Confidence in Auditory Task and Went Trials Outside the Scanner.


Fig27. Number of Trials in Each Predictor of Auditory Task Outside the Scanner.

Fig28. Performance in Auditory Task Outside the Scanner.

Fig29. Confidence vs. Difficulty in Auditory Task Outside the Scanner.

Fig30. Stimulus Presentation Period.

Fig31. Direction Encoding Regions.

Fig32. Difficult versus Easy in Correct Trials.

Fig33. Correctness Encoding Regions.

Fig34. Event Related Average in Midbrain and Inferior Parietal Lobe.

Fig35. Event Related Average in Right and Left Inferior Frontal Gyrus.

Fig36. BOLD Signal vs. Confidence in Candidate Regions.

Fig37. BOLD Signal in Midbrain in Different Confidence Rating.

Fig38. The Flow Chart for Identification of the Confidence and Uncertainty Encoding Regions.


Acknowledgements

I would like to take this opportunity to thank all the lab members of the Cognitive Neurology group at University Medicine Göttingen. I owe my gratitude to Ahmad Nazzal, who spent time teaching me and helping me when I faced difficulties. I especially have to thank Prof. M. Wilke, who granted me the opportunity to work in the lab as a master's student.


Abstract

Several brain regions have been proposed to show neural correlates of confidence in decision-making. However, it is unclear whether those regions are involved in forming the decision or reflect the subjects' confidence in the decision being formed. We therefore scanned healthy human subjects using a slow event-related functional magnetic resonance imaging (fMRI) design while they performed a visual evidence-accumulation perceptual decision-making task and rated their confidence in each choice. Behaviorally, subjects were more confident in correct trials than in error trials; they were most confident in easy correct trials and least confident in easy error trials. These behavioral findings successfully recapitulated previous non-human animal studies of confidence. The integrate-and-fire attractor model has been able to explain the behavioral and neural signatures of confidence observed in non-human animals. We therefore used the assumptions of the integrate-and-fire attractor model to localize brain regions involved in forming confidence in decision-making. We found that midbrain activity was modulated positively in correct easy trials and negatively in error easy trials, whereas activity in the dorsolateral prefrontal cortex (DLPFC) was modulated positively in error easy trials and negatively in correct easy trials. Moreover, activity in the midbrain and the DLPFC was modulated by the subjects' confidence ratings: midbrain activity was highest when subjects were most confident and lowest when they were least confident, with the opposite pattern in the DLPFC. Taken together with the model predictions and the corresponding subjective confidence ratings, these results indicate that the midbrain might encode confidence and the dorsolateral prefrontal cortex might encode uncertainty.


Introduction

The introduction is divided into three main parts: Decision-Making, Confidence in Decision-Making, and the Aim of This Study.

Decision-Making

Overview in Decision-Making

An important requirement for animals to survive in an environment full of noisy sensory evidence is to make the right decisions. Based on a priori or learned knowledge, animals have to make decisions in almost every situation: identifying food, mating, escaping from predators. All of these are important for the survival of the species. Among animals with social behavior, such as non-human primates and humans, the decision-making process can be even more complicated.

Dorothy Cheney and Robert Seyfarth recorded a young monkey's call and played it to two female monkeys through a hidden loudspeaker. One of the female monkeys was the mother of the young monkey, while the other belonged to the same group. When the recording was played, the mother turned her head toward the loudspeaker, and the other female turned her head toward the mother. The second female was evidently aware of the kinship between the young monkey and its mother and decided to turn her head toward the mother (Cheney and Seyfarth, 1980).

This study shows that a single perceptual stimulus, like the call of a young monkey, can provoke very different decisions based on memory and social contexts.

The published literature shows an increasing interest in this field. Neuroscientists and psychologists study not only the perceptual decision-making process but also the role of decision-making in economic behavior, in social contexts, and in patient groups with psychiatric diseases.

Perceptual Decision-Making

A major field within decision-making is perceptual decision-making. One of the best-developed perceptual decision-making paradigms is the visual decision-making task. Newsome and colleagues investigated perceptual decision-making in primates using a direction discrimination task (deciding whether more dots moved toward the right or toward the left). They found that primates could base their decisions on the activity of a small group of neurons.

Beyond vision, decision-making tasks in other modalities, such as somatosensory, olfactory, gustatory, and auditory tasks, have also been developed over the past few decades. The difficulty in investigating somatosensory decision-making comes from the design itself. In a two-interval vibration comparison study, subjects are asked to judge whether the frequency of the second vibration (stimulus 2, S2) is higher than that of the first vibration (stimulus 1, S1). The problem is that, in such a design, short-term memory plays an important role in maintaining the representation of S1 but is not involved for S2 (Romo et al., 2000).


On the other hand, given the good molecular-level understanding of the olfactory system and the excellent ability of rodents to discriminate different odors, the olfactory decision-making task is one of the more promising approaches in this field. However, it also has drawbacks: odors are hard to control in time and space, and humans are far worse than rodents at discriminating between odors (Shadlen and Kiani, 2013). In this study, we mainly used a visual decision-making task and additionally implemented an auditory decision-making task; the details are described below.

Models of Decision-Making

Signal Detection Theory

Several theories have been introduced to explain decision-making behavior. In this section, the emphasis is put on signal detection theory (SDT), the bounded evidence accumulation (drift-diffusion) model, and the integrate-and-fire attractor model.

In perceptual decision-making, signal detection theory (SDT) is one of the most popular frameworks (Fig1). The term was first introduced in 1954 to explain an experiment in which subjects had to detect a light signal against a uniform light background (Tanner and Swets, 1954).

In this framework, the responses of the subject can be divided into four categories: hits (stimulus detected when a stimulus was presented), misses (stimulus not detected although a stimulus was presented), false alarms (a stimulus reported although none was presented), and correct rejections (no stimulus reported when none was presented). The outcome depends strongly on two factors, namely the difficulty of the task and the strategy subjects use. In a more difficult task, the hit rate decreases while the miss rate increases; and if subjects always give the same response, for example always reporting a stimulus, there will be no miss trials.

The most important notion in SDT is the decision variable (DV). The DV can be understood as a proxy for the perceptual evidence in the brain. The brain compares the generated DV with a predefined threshold. To illustrate the idea, imagine an RDM (random dot motion) task in which the subject has to decide whether more dots moved toward the right or toward the left. Assume there are two neurons, each most sensitive to one of the two directions. A weak rightward signal causes both neurons to fire, but at different rates. Across trials, the firing rates of the two pools form two overlapping normal distributions. In this case, the DV is the difference in firing rate between the two neurons. The DV is then compared to a predefined threshold, allowing the subject to report right or left.

Take the previously mentioned study by Newsome as an example. There is a rightward pool, which fires more when more dots move toward the right side, and a leftward pool, which behaves the other way around (Fig1). The idea is that monkeys compare the firing rates of these two pools and decide the direction based on which pool fires more. If we subtract the firing rates of the two pools (Fig1, lower panel), the criterion on the DV is the point where the difference in firing rate is zero, namely where there is no difference between the two groups.

A great advantage of SDT is that it replaced high-threshold theory, in which errors are thought to result from the failure of some stimuli to reach a fixed threshold. In SDT, the threshold is flexible; errors thus arise from the nature of decision-making itself, namely the overlap of the two decision pools, and not from the weakness of the stimuli.

Fig1. Confidence in Signal Detection Theory. (up) When a rightward stimulus is given, both the leftward pool and the rightward pool respond to the stimulus. The pooled response (sp/s) of the rightward pool is higher than that of the leftward pool. However, the two distributions overlap, and this overlap is where errors occur. (down) Subtracting the firing rate of the leftward pool from that of the rightward pool gives the difference in firing rate between the pools (R-L). The threshold on the DV is then the point where the difference in firing rate is zero (Kiani and Shadlen, 2009).
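To make the two-pool scheme concrete, the following MATLAB sketch simulates it under simple assumptions; the pool means, the noise level, and the zero criterion are illustrative values chosen here, not parameters taken from the cited studies.

rng(1);
nTrials = 10000;
muRight = 22;     % assumed mean pooled response of the rightward pool (sp/s)
muLeft  = 18;     % assumed mean pooled response of the leftward pool (sp/s)
sigma   = 5;      % assumed trial-to-trial variability (sp/s)

rightPool = muRight + sigma * randn(nTrials, 1);   % pooled responses, rightward pool
leftPool  = muLeft  + sigma * randn(nTrials, 1);   % pooled responses, leftward pool
DV = rightPool - leftPool;                         % decision variable (R - L)

choiceRight = DV > 0;                              % criterion placed at DV = 0
fprintf('Proportion of "right" choices for a rightward stimulus: %.3f\n', mean(choiceRight));
% Errors arise from the overlap of the two pooled-response distributions,
% not from stimuli failing to reach a fixed threshold.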

Bound Evidence Accumulation or Drift-Diffusion Model

However, SDT cannot explain the time course of a decision, which is an essential property of the decision-making process. To capture the role of time, another model, the drift-diffusion model, based on bounded evidence accumulation, was introduced to explain the two-alternative forced choice task (Fig2). In this model, the decision variable 'drifts' from a starting point in the middle toward either A+ (right side, red) or A- (left side, green) as it accumulates evidence from the competing inputs. When the decision variable reaches one of the boundaries (choose h1 or choose h2), a decision is made. A strength of this model is that, via the mean drift rate, it also tells us how long it takes to make a decision.

Fig2. Bound Evidence Accumulation Model. In the drift-diffusion model, the starting point lies midway between the two bounds. As time (x-axis) passes, the decision variable drifts toward 'choose h1' or 'choose h2', depending on the accumulated evidence for h1 and h2. The mean drift rate depends on the strength of the evidence (Kiani and Shadlen, 2009).
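The same logic can be sketched as a small MATLAB simulation of the drift-diffusion model; the drift rate, noise level, bound height, and step duration below are arbitrary illustrative values, not fitted parameters.

rng(2);
nTrials = 5000;
drift  = 0.15;    % assumed mean evidence gained per step
noise  = 1.0;     % assumed diffusion noise per step
bound  = 30;      % assumed height of the two decision bounds
maxT   = 3000;    % give up after 3000 steps (1 step = 1 ms, illustrative)

choice = zeros(nTrials, 1);   % +1 = upper bound ("choose h1"), -1 = lower bound ("choose h2")
rt     = nan(nTrials, 1);     % number of steps needed to reach a bound

for tr = 1:nTrials
    dv = 0;                   % decision variable starts midway between the bounds
    for step = 1:maxT
        dv = dv + drift + noise * randn;
        if abs(dv) >= bound
            choice(tr) = sign(dv);
            rt(tr) = step;
            break
        end
    end
end
fprintf('P(choose h1) = %.2f, mean RT = %.0f steps\n', ...
        mean(choice == 1), mean(rt, 'omitnan'));
% A larger drift (stronger evidence) reaches the bound sooner, so the model
% predicts both the choice and the time needed to make it.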

Integrate-and-Fire Attractor Model

The integrate-and-fire attractor model (Fig3) was first introduced to explain short-term memory with competing retrieval cues (Wang, 2002; Wang, 2008).

This model was later applied to perceptual decision-making, describing the decision mechanism for two or more competing inputs. Imagine two competing inputs, λ1 and λ2. Each input biases the whole network toward its corresponding decision attractor (the attractor for λ1 or for λ2). The two stimuli compete inside the network via mutual feedback inhibition. Once a choice is reached and the network has settled into the attractor for λ1 or λ2, it is difficult for the network to move to the other attractor state.

The noise in the network comes from the random spiking of neurons. It is therefore possible that, in some situations, a wrong decision (e.g., a decision for λ1) is made against the given stimulus (e.g., λ2) because of the noise. The decision-making mechanism represented by this model is thus probabilistic, because the noise can influence the decision to a certain degree.


Fig3. Integrate-and-Fire Attractor Model. (up) A number of neurons receive different stimuli (λ1 or λ2 in this network). Different synaptic weights (Wij) may occur, and the overall output firing is computed as a result of the whole network. Several inhibitory feedback connections (not shown) exist in this network, which can also be regarded as the competition between the two choices. (down) The initial state in decision-making is the spontaneous state (S). Once a choice is reached (the red dot, representing the whole network, falls into one of the decision-state attractors, either decision 1 (D1) or decision 2 (D2)), it is harder for the network to move to the other decision-state attractor or back to the spontaneous state. In other words, the potential landscape does not favor such transitions (Rolls and Deco, 2010).

Several characteristics distinguish this model from the previous ones. First, in the integrate-and-fire attractor model, the mechanism for computing the difference between the stimuli is implemented by feedback inhibition. No predefined threshold is needed, whereas in the other models a certain threshold must be reached. The noise in this model is not arbitrary but is produced by the random spiking of each neuron. Because the model contains many recurrent connections, reaching a decision can take a long time. The model also predicts longer response times in incorrect trials than in correct trials, whereas in the bounded evidence accumulation model only the hard trials take longer (Wang, 2008; Deco and Rolls, 2006).

Based on the advantages mentioned above, we decided to apply this model in our study as the model of decision-making.
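As a rough illustration of the attractor idea (not the full spiking network of Wang or Rolls and Deco), the following reduced rate-model sketch shows two pools with self-excitation, mutual inhibition, and noisy inputs settling into one of two states; all parameters are assumptions chosen only for demonstration.

rng(3);
dt = 1e-3; T = 2; steps = round(T / dt);
lambda = [1.05; 1.00];               % assumed external inputs; choice 1 slightly favored
wSelf  = 0.9;  wInh = 1.0;           % assumed self-excitation and mutual inhibition
tau    = 0.05; noiseSD = 0.4;

r = [0.1; 0.1];                      % firing rates of the two decision pools
for k = 1:steps
    input = lambda + wSelf * r - wInh * flipud(r) + noiseSD * randn(2, 1);
    drive = max(input, 0);           % threshold-linear transfer function
    r = r + dt / tau * (-r + drive); % leaky rate dynamics
end
winner = find(r == max(r), 1);
fprintf('Network settled on choice %d (r1 = %.2f, r2 = %.2f)\n', winner, r(1), r(2));
% Because of the noise term, the pool receiving the weaker input occasionally
% wins, so the decision produced by the network is probabilistic.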

Animal Studies in Decision-Making

One of the most inspiring studies of animal decision-making was performed by Newsome, Britten and Movshon. They recorded from MT/V5 neurons of rhesus monkeys while the monkeys performed a direction discrimination task. Their results showed that single-neuron responses could predict the choice. Moreover, the variability of the neuronal responses correlated with the variability in behavior, namely the choices the monkeys made. The key message of this study is that monkeys can decide based on the firing of a group of neurons. However, the question remains: where does the decision take place, or, in other words, where is the DV represented in the brain?

A possible candidate is the lateral intraparietal area (LIP), because it receives information from the visual cortex and the pulvinar, and it is involved neither purely in motor nor purely in sensory processing but in both. This idea was then confirmed by Roitman and Shadlen.

They recorded the firing rates of LIP neurons while monkeys performed an RDM task. They found that the buildup rate of LIP firing is higher when the stimulus is more homogeneous (easier trials) than when it is more heterogeneous (harder trials). Furthermore, the reaction time (the time to make a saccade) is shorter when the buildup rate in LIP is higher, which is compatible with the idea of bounded evidence accumulation (Roitman and Shadlen, 2002). It was later shown that the prefrontal cortex also harbors a neural signal involved in forming decisions (Hunt et al., 2012).

Human fMRI Studies in Decision-Making

Although the perceptual decision-making data described above come from electrophysiological recordings in primates, more and more researchers are using functional magnetic resonance imaging (fMRI) to study decision-making in human subjects.

These studies often focus on visual perceptual decision-making, such as face recognition (Heekeren et al., 2004; Fleming et al., 2010; Banko et al., 2010), object recognition (Noppeney et al., 2010; Bode et al., 2012), face vs. house discrimination (Tosoni et al., 2008), face vs. car discrimination (Philiastides and Sajda, 2007), RDM direction discrimination (Sunaert et al., 2000; Heekeren et al., 2006; Ho et al., 2009; Kayser et al., 2010), and color discrimination (Kayser et al., 2010).

Heekeren and colleagues used a visual perceptual decision-making task in which subjects decided whether they saw a face or a house in the presented image. It was already known that the ventral temporal region is involved in face identification.

Heekeren and colleagues proposed that the decision-encoding regions in this task must fulfill two criteria: 1. the regions should respond more when the images are suprathreshold (easy to discriminate) than when they are perithreshold (hard to discriminate); 2. the regions' activity should correlate with the activity in the ventral temporal region, ensuring that these regions use the information from the sensory evidence (Heekeren et al., 2004).

In their study, they identified the left posterior dorsolateral prefrontal cortex (DLPFC) as fulfilling both criteria. Moreover, the BOLD change in this region could predict task performance. They therefore proposed that the DLPFC might encode the decision in this task.


Confidence in Decision-Making

Overview in Confidence

Psychologists have been interested in human confidence for a long time (Peirce and Jastrow, 1884). Confidence, according to some psychologists, is a belief and a subjective feeling about the correctness of one's performance, thoughts, and decisions. Confidence is also regarded as one of the most typical examples of metacognition. Metacognition can be understood as thinking about one's own thinking. Semantically, this working definition implies two types of cognition: a primary cognition, which refers to memory, perception, and decision, and a secondary cognition, which refers to judgments about the first, such as certainty (Lutteral et al., 2013).

Confidence tells us how sure we are of a previous decision. A valid confidence report reflects the probability that the decision was right or wrong. However, confidence is sometimes heavily biased and cannot reflect the true situation. In such cases, as in pathological gamblers or patients with bipolar disorder, one might make an irrational decision with overconfidence.

Overconfidence seems irrational and can lead to unfavorable outcomes. Surprisingly, at least in humans, it has been preserved throughout evolution. Some researchers argue that such irrational overconfidence can motivate action and boost morale when facing danger or obstacles (Johnson and Fowler, 2011).

Confidence is not only an inner voice to oneself, but it can also have an impact on others.

Take the judicial process, for example: the confidence of an eyewitness not only reflects their subjective feeling about the decision (namely, whether the memory is valid or not) but also has an immense impact on the decision of the jury (whether to believe the given testimony).

It is thus interesting to understand the dynamics of confidence generation and its neuroanatomical basis, namely how and where confidence is generated.


Models of Confidence Formation

Confidence in Signal Detection Theory

SDT can be applied not only to perceptual decision-making but also to confidence formation. Recall that, to reach a decision, the DV has to be compared with a predefined threshold or boundary. Take a color categorization task as an example, in which subjects have to decide whether the presented color is green or blue. The DV is derived from the presented stimulus, a mixture of green and blue, and is compared with an internal boundary.

From the perspective of SDT, the proxy for confidence is then the distance between the stimulus and the predefined threshold or boundary of the decision (Fig4).

Fig4. Confidence in Signal Detection Theory. According to SDT, a perceptual decision-making task can be understood as the comparison of given stimuli with predefined boundaries. Two given stimuli, S1 and S2, are compared with two predefined boundaries, b1 and b2. The comparison between stimulus and boundary gives rise to the decision, and the absolute distance between stimulus and boundary represents the confidence (Kepecs et al., 2008).
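A minimal sketch of this confidence read-out, assuming a single boundary, a fixed stimulus value, and Gaussian perceptual noise (all illustrative), is given below; it shows that the distance-to-boundary proxy is on average larger on correct than on incorrect trials.

rng(4);
boundary = 0.5;                             % assumed category boundary (e.g., green vs. blue)
stimulus = 0.6;                             % assumed true stimulus value (slightly "blue")
percept  = stimulus + 0.1 * randn(1e4, 1);  % noisy internal estimates across trials

decision   = percept > boundary;            % report "blue" whenever the percept exceeds the boundary
confidence = abs(percept - boundary);       % distance to the boundary as a confidence proxy

fprintf('Mean confidence on correct trials:   %.3f\n', mean(confidence(decision)));
fprintf('Mean confidence on incorrect trials: %.3f\n', mean(confidence(~decision)));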

Confidence in Bound Evidence Accumulation Model

As described in the decision-making section, the main difference between SDT and the bounded evidence accumulation models (such as the drift-diffusion model) is the treatment of the time course. If we plot the accumulated evidence on the y-axis, the time needed to reach a decision on the x-axis, and the log odds of making a correct decision as colors from cool to warm, we end up with a heat map representing confidence. In this heat map (Fig5), the cooler the color, the lower the certainty or confidence, and vice versa. Not surprisingly, when it takes more time to decide, confidence is low (blue); the shorter the decision time, the greater the confidence (red) (Gold and Shadlen, 2007; Kiani and Shadlen, 2009).


Fig5. Confidence in Bound Evidence Accumulation Model. This heat map shows the relationship between the accumulated evidence toward each side, the decision time, and the log odds of making a correct decision. The cooler the color, the lower the confidence, and vice versa (Gold and Shadlen, 2007; Kiani and Shadlen, 2009).
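The following sketch computes such a log-odds map in the spirit of Kiani and Shadlen (2009), assuming a small set of equally likely motion strengths and unit-variance diffusion; the motion strengths, the drift scaling factor k, and the plotted ranges are illustrative assumptions.

coh = [0.032 0.064 0.128 0.256 0.512];   % assumed motion strengths, equally likely
k   = 5;                                  % assumed drift scaling (drift = k * coherence)
t   = linspace(0.1, 2, 120);              % elapsed decision time (s)
e   = linspace(0, 8, 120);                % accumulated evidence for the chosen side (a.u.)
[T, E] = meshgrid(t, e);

num = zeros(size(T)); den = zeros(size(T));
for c = k * coh
    sd  = sqrt(T);                                    % diffusion sd grows with time
    num = num + normpdf(E,  c*T, sd);                 % drift consistent with the choice
    den = den + normpdf(E,  c*T, sd) + normpdf(E, -c*T, sd);
end
pCorrect = num ./ den;
pCorrect = min(max(pCorrect, 1e-6), 1 - 1e-6);        % clip for a finite log
logOdds  = log(pCorrect ./ (1 - pCorrect));

imagesc(t, e, logOdds); axis xy; colorbar;
xlabel('Decision time (s)'); ylabel('Accumulated evidence');
title('Log odds of a correct choice (illustrative parameters)');
% For a fixed level of evidence, longer times map onto lower log odds, i.e.
% lower confidence, reproducing the cool-to-warm gradient of Fig5.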

Confidence in the Integrate-and-Fire-Based Two-Layer Decision-Making Model

Following the integrate-and-fire model in decision making (Wang, 2008), Rolls and Deco proposed a two-layer decision-making model (Fig6), implying that confidence is an emergent property of decision-making (Insabato, Pannunzi, Rolls, & Deco, 2010).

Fig6. Two-Layer Model for Decision-Making and Confidence Read-Out. The lower layer, the decision-making network, receives the stimuli λA and λB via pool DA (decision A) and pool DB (decision B). The two pools inhibit each other and give positive feedback to themselves. Pool C (confidence pool) in the upper layer, the confidence network, reads out the sum of the firing rates generated in the first layer. Pool C and pool LC (low-confidence pool) inhibit each other and give positive feedback to themselves. Pool LC also receives λReference from other networks. The firing rate of pool C represents how confidently the decision in the lower layer has been made (Insabato, Pannunzi, Rolls, and Deco, 2010).

According to this model, the whole network is formed by two layers of neurons. In a forced binary task, there are two groups of neurons (group A and group B) in the first layer, responsible for choice A and choice B (decision A and decision B). These two groups of neurons not only receive evidence (λA and λB) from the perceptual layer but also inhibit each other. In addition, each group provides positive feedback to itself. When the firing of one group, for example the group for choice A, is compatible with the presented evidence (evidence for A), the firing rate of the choice-A group builds up strongly and overwhelms the firing rate of the choice-B group.

In the second layer, the confidence-forming layer, there are also two groups of neurons: the confidence pool (pool C) and the low-confidence pool (pool LC). The confidence pool reads out the sum of the firing rates of both the 'winning' and the 'losing' pool in the decision-making layer in a probabilistic manner.

The sum of the firing rates is dominated by the winning group: if one group's firing rate is higher, the sum of both will also be higher. In this case, the winning group exerts more inhibition on the losing group and more positive feedback on itself.

Fig7. Neuron Firing Rate Prediction based on Two-Layer Model. (A) According to the two-layer model, the firing rate of pool C is higher in correct trials than in incorrect trials. In correct trials, the firing rate is higher in easy trials (Δλ=30) than in hard trials (Δλ=10); in incorrect trials, the firing rate is higher in hard trials than in easy trials. (B) The firing rate of pool LC is higher in incorrect trials than in correct trials. In incorrect trials, the firing rate is higher in easy trials than in hard trials; in correct trials, the firing rate is higher in hard trials than in easy trials (Insabato, Pannunzi, Rolls, and Deco, 2010).

It is important to note that confidence is already generated, in the form of firing rates, in the decision-making network; the confidence network only reads out the result.

Simulating the model with different values of λA and λB shows that the confidence pool fires more in correct trials (e.g., choice A wins when λA is larger) than in incorrect trials (e.g., choice A wins when λB is larger) (Fig7). The difference in confidence-pool firing between correct and incorrect trials is larger in easy trials (when the difference between λA and λB is large) than in difficult trials (when the difference between λA and λB is small). This result recapitulates what researchers found in animal studies and sheds light on a plausible mechanism of decision-making and confidence formation.
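The qualitative prediction of Fig7 can be illustrated by extending the reduced rate-model sketch from the decision-making section with a simple read-out of the summed decision-layer firing; the baseline input, the easy and hard input differences, the noise level, and the trial counts are all illustrative assumptions rather than the parameters of the published model.

rng(5);
base = 2.0; deltas = [0.1 0.3];          % assumed hard and easy input differences
wSelf = 0.9; wInh = 1.0; tau = 0.05;
dt = 1e-3; steps = 2000; noiseSD = 1.5; nTrials = 400;

for d = 1:numel(deltas)
    lambda = [base + deltas(d)/2; base - deltas(d)/2];   % pool 1 is the correct choice
    correct = false(nTrials, 1); readout = zeros(nTrials, 1);
    for trial = 1:nTrials
        r = [0.1; 0.1];
        for k = 1:steps
            drive = max(lambda + wSelf*r - wInh*flipud(r) + noiseSD*randn(2, 1), 0);
            r = r + dt/tau * (-r + drive);
        end
        correct(trial) = r(1) > r(2);
        readout(trial) = sum(r);          % what the confidence pool would read out
    end
    fprintf('delta = %.1f: P(correct) = %.2f, read-out correct = %.1f, read-out error = %.1f\n', ...
            deltas(d), mean(correct), mean(readout(correct)), mean(readout(~correct)));
end
% As in Fig7 and Fig8, the read-out is larger on correct than on error trials,
% and this difference is larger for the easier (larger-delta) condition.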


Animal Studies in Confidence

For a long time it was believed that only humans can report confidence, since animals lack verbal language. However, neuroscientists developed opt-out decision-making paradigms that make it possible for animals such as rats or monkeys to report confidence (Kepecs, Uchida, Zariwala, and Mainen, 2008; Komura, Nikkuni, Hirashima, Uetake, and Miyamoto, 2013).

Kepecs and colleagues designed a two-choice odor mixture categorization task. The difficulty, namely the difference in concentration between the two odors, varied from trial to trial. Rats were trained to decide which of the two odors had the higher concentration based on olfactory perception. The researchers recorded the firing rates of neurons in the orbitofrontal cortex (OFC), one of the main regions involved in decisions under uncertainty. They found that OFC neurons fired more in incorrect trials than in correct trials and, counterintuitively, that the difference in firing rate between correct and incorrect trials was larger in easy trials than in hard trials.

Kepecs and colleagues reasoned that this firing pattern might reflect the subjects' uncertainty. They developed a computational model to simulate the pattern of uncertainty (or confidence). In this model, both the perceptual stimulus and the category boundary are encoded as distributions of values. The simulation showed that uncertainty follows a double-U-shaped pattern, which resembles the electrophysiological pattern recorded in OFC.
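A minimal sketch in the spirit of this model, assuming a uniform stimulus range, a fixed category boundary, Gaussian perceptual noise, and distance-to-boundary as the certainty proxy (all illustrative choices), reproduces the folded pattern: on correct trials certainty grows with the stimulus distance from the boundary, while on error trials it shrinks.

rng(6);
nTrials  = 200000;
stimulus = rand(nTrials, 1);                    % mixture value in [0, 1]
boundary = 0.5;                                 % category boundary
percept  = stimulus + 0.15 * randn(nTrials, 1); % noisy internal estimate

choiceA = percept > boundary;                   % categorize relative to the boundary
correct = choiceA == (stimulus > boundary);
conf    = abs(percept - boundary);              % distance to boundary = certainty proxy
                                                % (uncertainty is its mirror image)
edges = 0:0.1:1;                                % bin stimuli by mixture value
bins  = discretize(stimulus, edges);
labels = {'error  ', 'correct'};
for outcome = [false true]
    m = zeros(1, numel(edges) - 1);
    for b = 1:numel(edges) - 1
        sel  = (bins == b) & (correct == outcome);
        m(b) = mean(conf(sel));
    end
    fprintf('%s trials, mean certainty per stimulus bin: %s\n', ...
            labels{outcome + 1}, sprintf('%.2f ', m));
end
% Certainty rises toward the extreme mixtures on correct trials and falls on
% error trials, i.e. uncertainty shows the double-U pattern described above.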

To test whether rodents can behaviorally express this uncertainty, Kepecs and colleagues modified the task so that rats could wait longer at the reward port when confident, or abort the trial and initiate a new one when uncertain.

The pattern of restart probability resembled the firing rates of OFC neurons and the model simulation. Kepecs and colleagues proposed that the OFC is involved in representing uncertainty and that animals are able, to some extent, to report confidence, an ability that had been thought to require metacognition.

Komura and colleagues, on the other hand, designed a visual categorization task with random dot motion to examine whether the visual thalamus, namely the LGN (lateral geniculate nucleus) and the pulvinar, is involved in perceptual decision-making and confidence formation. Monkeys were trained to report the direction of the moving dots of a certain color cued before each trial. Of the 618 neurons recorded (83 LGN neurons and 535 pulvinar neurons), 168 showed a U-shaped firing-rate pattern in the late phase. In short, these neurons fired more when the RDM stimulus was more homogeneous (an easier task) and less when the task was harder. All of these 168 neurons were pulvinar neurons, making it reasonable to speculate that the pulvinar is linked to confidence formation.

Apart from the two already presented options (risky trials), Komura and colleagues added a third, opt-out option that subjects could choose (escape trials). In this behavioral task, subjects obtained a full reward in correct risky trials, a 'beep' sound in incorrect risky trials, and a small but certain reward in escape trials. The idea was that, if monkeys were fully confident, they would take the risk and choose one of the two options; if they were not confident in the decision, the probability of opting out would be higher. Opt-out trials are thus assumed to reflect low confidence (Komura et al., 2013) (Fig11).

Fig11. Neuron Firing Rate in Primate Pulvinar in RDM Task. The firing rate of neurons in pulvinar (n=72) was higher in correct trials (black) than in incorrect trials (purple). In opt-out trials (blue), the firing rate was the lowest. The difference in firing rate was larger in easy trials than in hard trials (Komura et al., 2013).

It turned out that pulvinar neurons fired the least in opt-out (safe) trials and more in risky trials, suggesting that the pulvinar might encode confidence. Moreover, based on the pulvinar firing rate, it was possible to predict the behavior, namely whether the risky or the safe choice would be taken.

Komura and colleagues then inactivated the pulvinar to confirm its role in confidence formation. Inactivation of the pulvinar led to an increase in the opt-out rate, while the binary categorization choice was not affected. They therefore concluded that the pulvinar encodes confidence in this visual categorization task.

The animal studies described here suggest that confidence is formed as the decision is made, because behavior can be predicted purely from the activity of certain brain regions.

Human fMRI Studies in Confidence

As a non-invasive method, fMRI has been used for many years to investigate the functions of particular brain regions. Several regions have been reported to be involved in post-decision confidence in humans, including the dorsolateral prefrontal cortex, ventral striatum, right anterior insula, right posterior parietal cortex, middle frontal gyrus, ventromedial prefrontal cortex, lingual gyrus, fusiform gyrus, calcarine gyrus, and angular gyrus (Heekeren, 2015; Fleming, 2014; Hebscher, 2015).

It is interesting to know whether confidence is a task-dependent or a task-independent property. To address this question, Heekeren and colleagues asked subjects to perform a color-motion-discrimination task. Before each trial, subjects were instructed to attend either to the motion or to the color of the moving dots, and then had to decide on the motion or the color accordingly. In this study, subjects first reported the confidence of the decision they held in mind and only then reported the decision itself. The reasoning was that a region whose BOLD signal depends on the reported confidence but shows no difference between the two tasks (color or motion) is a candidate for confidence encoding (Heekeren, 2015).

They found that the BOLD signal increased with reported confidence in the right lingual gyrus, the calcarine gyrus, and the left angular gyrus. Other regions, such as the left lingual gyrus, right inferior parietal lobule, bilateral DMPFC/SMA, and left postcentral gyrus, showed a negative correlation between the BOLD signal and certainty.

In another study, Fleming and colleagues asked 20 subjects to perform a food choice task in the scanner. The subjects were then asked to report the confidence in their previous decision and afterwards to bid for the food they had chosen, using the Becker-DeGroot-Marschak (BDM) mechanism (Fleming, 2014).

Fleming and colleagues hypothesized that, if confidence is an emergent property of value-based decision-making, the region that encodes the value comparison should also represent subjective confidence. That is, the region's BOLD signal should correlate both with the difference in value (DV) and with subjective confidence. The ventromedial prefrontal cortex (vmPFC) and the precuneus were positively correlated with both the difference in value and subjective certainty. The parametric regressor for the difference in value was then split into high- and low-confidence trials in order to investigate the interaction between the difference in value and subjective confidence.

Since the vmPFC correlated with both the difference in value and subjective confidence, this region was considered the first-order network. The remaining question was how the information generated in vmPFC is read out. Fleming and colleagues proposed the right lateral prefrontal cortex as a good candidate, because its BOLD signal correlated with subjective confidence but not with the difference in value.

Several brain regions have been reported to be involved in confidence read-out. These regions include anteromedial prefrontal cortex (Del Cul et al., 2009), anterior prefrontal cortex and temporal lobe (Fleming et al., 2010), Brodmann area 46 (Lau and Passingham, 2006), ventral striatum (Hebart et al., 2016), pulvinar (Komura et al., 2013), supplementary eye field (Middlebrooks and Sommer, 2012), and lateral intraparietal cortex (Kiani & Shadlen, 2009).

fMRI Studies in Confidence in Human Subjects Based on the Integrate-and-Fire Model

Following the idea of the integrate-and-fire-based two-layer decision-making model, Rolls and colleagues developed an olfactory pleasantness task (Rolls, Grabenhorst, and Deco, 2010).


According to the model prediction, in a confidence-encoding region the BOLD signal should be higher in correct trials than in incorrect trials (Fig8). In correct trials, the BOLD signal should increase as the difference between the stimuli increases, corresponding to the upper curve of the double-U-shaped pattern. In incorrect trials, the BOLD signal should decrease as the difference between the stimuli increases (Fig8).

Fig8. BOLD Change Prediction based on Two-Layer Model. Based on the model, the firing rates of neurons in the perceptual decision-making region can be translated into BOLD signal. (left) The BOLD change is larger in correct trials than in incorrect trials. (right) In correct trials, the BOLD signal change is positively and linearly related to the difference (ΔI); in incorrect trials it is negatively and linearly related (Rolls, Grabenhorst, and Deco, 2010).

Fig9. BOLD signal in DLPFC and Subgenual Cingulate Cortex. In both the DLPFC and the subgenual cingulate cortex, the BOLD signal was higher in correct trials than in incorrect trials, which is the signature of a confidence-encoding region (Rolls, Grabenhorst, and Deco, 2010).

It was found that the BOLD signal change in the dorsolateral prefrontal cortex and the subgenual cingulate cortex was higher in correct trials than in incorrect trials (Fig9).

In addition, in both regions the BOLD signal correlated positively with the stimulus difference in correct trials and negatively with the stimulus difference in incorrect trials (Fig10). However, their study did not include a subjective confidence rating.


Fig10. BOLD Signal Change in Different Difficulty. In both the DLPFC and the subgenual cingulate cortex, the BOLD signal was higher in correct easy trials than in correct hard trials, and higher in incorrect hard trials than in incorrect easy trials (Rolls, Grabenhorst, and Deco, 2010).

Aim of This Study

The aim of this study was to investigate confidence reports during visual perceptual decision-making, and to identify brain regions that carry the neural signature of confidence in decision-making found in non-human animal studies, as proposed and explained by integrate-and-fire attractor models.


Materials and Methods

Participants

A total of 17 right-handed paid volunteers (mean age 26.5 years, 9 females) with normal or corrected-to-normal vision participated in this experiment. An fMRI screening form and a general health questionnaire were completed before participation. The tasks and methods used in this study were explained, and informed consent was obtained before the study. None of the 17 subjects had a history of neurological or psychiatric disease or trauma, and none was on any kind of regular medication. Psychophysics data were obtained from all 17 subjects. One subject dropped out after the psychophysics session and one had difficulties following the instructions. The remaining 15 subjects were invited to the fMRI sessions and fMRI data were collected from each of them; however, 2 subjects showed poor fixation inside the scanner and 1 subject showed excessive head motion. Therefore, fMRI data from 12 subjects were included in the fMRI analysis. The study was approved by the ethics committee of the University of Göttingen and the University Clinics Göttingen, Germany, and was conducted in accordance with the Declaration of Helsinki.

Stimuli

Visual Stimuli

Each train of stereo white flickers lasted 3 seconds. Each train contained 6 flickers per second (#flickers right (F(r)) + #flickers left (F(l)) = 8 per second), and each flicker lasted 16.667 ms. To minimize adaptation, consecutive flickers were set to have a minimum inter-pulse interval of 120 ms (Brunton et al., 2013).

The first and last flickers were stereo (presented on both sides) to prevent a bias toward either side. Stimuli were drawn from a Poisson distribution, while the exact flicker difference (#F(r) - #F(l)) at each difficulty level was controlled. Four difficulty levels were presented outside the scanner: easy (5 flickers difference), medium (3 flickers difference), hard (1 flicker difference), and chance (0 flickers difference).

Only two difficulty levels were presented inside the scanner: hard (1 flicker difference) and easy (3 flickers difference). Note that, for the psychophysics inside the scanner, a 3-flicker difference is termed an 'easy' trial in this thesis. Ten random variations of each stimulus were generated. Stimuli were generated by A. Nazzal, M.D., in MATLAB R2011b using customized scripts from the Brody lab at Princeton University.
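As an illustration only (the actual stimuli were generated with customized scripts from the Brody lab, which are not reproduced here), the following MATLAB sketch draws one flicker train with fixed per-side counts, a minimum inter-pulse interval, and stereo first and last flickers; the counts and the uniform placement rule are assumptions for demonstration.

rng(7);
T      = 3.0;      % duration of the train (s)
minIPI = 0.120;    % minimum inter-pulse interval per side (s)
nRight = 13;       % example per-side counts giving a 3-flicker ("easy") difference
nLeft  = 10;

% Uniform placement with a guaranteed minimum gap: draw sorted uniform points in a
% shortened interval and then spread them out by the minimum inter-pulse interval.
drawSide = @(n) sort(rand(n, 1) * (T - (n - 1) * minIPI)) + (0:n - 1)' * minIPI;
tRight = drawSide(nRight);          % flicker onset times, right side
tLeft  = drawSide(nLeft);           % flicker onset times, left side

% Make the first and last flickers stereo (present on both sides).
tRight = sort([0; tRight; T]);
tLeft  = sort([0; tLeft;  T]);

fprintf('Right flickers: %d, left flickers: %d, difference: %d\n', ...
        numel(tRight), numel(tLeft), numel(tRight) - numel(tLeft));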

Auditory Stimuli

A train of stereo clicks lasting 3 seconds was presented over headphones in each trial. Each train contained 20 clicks per second (#clicks right (CR) + #clicks left (CL) = 20 per second), and each click lasted 3 ms. A minimum inter-pulse interval of 33 ms was added between consecutive clicks to minimize adaptation (Brunton et al., 2013). To prevent subjects from being biased toward one side by the very first and very last clicks, these clicks were set to be stereo. In each stimulus, the distribution of clicks over time and ears was drawn from a Poisson distribution, and the exact difference in clicks (#CR - #CL) in each stimulus was controlled. Difficulty was assigned as follows: easy (40 clicks difference), medium (20 clicks difference), hard (5 clicks difference), and chance (0 clicks difference). The stimuli were generated by A. Nazzal, M.D., in MATLAB R2011b using customized scripts from the Brody lab at Princeton University.

Procedure

Fig12. Visual Task inside the Scanner. In the scanner, subjects first fixated on the red cross for 1 second. The stimuli (flickers in the visual task) were presented for 3 seconds, followed by a delay of 6 to 8 seconds. The red cross then changed to green, indicating the decision period (left or right), and was followed by a yellow cross, indicating the confidence report period (rating 1 to 4). A rest period of 6 to 8 seconds, represented by a blue cross, ended each trial.

Experimental Setup Outside the Scanner

Participants were seated in a small, closed, quiet room with their head position stabilized by a chin rest. They were asked to maintain fixation on the central cross throughout the experiment. A desktop computer running MATLAB R2010b and the Psychophysics Toolbox (www.psychtoolbox.org) was used to control stimulus delivery. In the visual task, flickers were presented on a 27-inch monitor (Dell UltraSharp U2711b) at an eye-screen distance of 57 cm. The spatial resolution of the monitor was 2560 x 1440 pixels, and its refresh rate was 60 Hz. In the auditory task, the stimuli (clicks) were delivered through a Sennheiser headset. Participants responded using a standard keyboard in both the visual and the auditory task.

Behavioral Task Outside the Scanner

In the auditory task, participants were asked to decide whether they heard more clicks from the right side or from the left side of the headset. In the visual task, participants were asked whether there were more flickers to the right or to the left of the cross. Participants were asked to use the information from the whole stimulus period to decide, not only the first or last few stimuli. Each run contained 35 trials and was started by the subject pressing any key on the keyboard. The color of the fixation cross changed over time, and before each session subjects were instructed on the meaning of each color and which part of the task was to be performed under which cross color. All trials started with a red cross in the middle of the screen. Stimuli were presented while the cross was red, in both the visual and the auditory task.


The cross remained red for a delay period of 1-3 seconds after the stimulus presentation stopped. After this short delay, the red cross was replaced by a green cross. At this phase, subjects were asked to decide whether they had seen more stimuli on the right side or on the left side. Subjects reported the decision by pressing a button on the keyboard: 1 meant left and 2 meant right. The green-cross period lasted 2 seconds and was then replaced by a yellow cross. During the yellow-cross period, subjects reported the confidence in the previous decision; they had 2 seconds to do so via the keyboard. The confidence rating used a discrete scale: subjects rated their confidence in 4 categories by pressing buttons 1 to 4, where 1 meant 0% confidence, 2 meant 25% confidence, 3 meant 75% confidence, and 4 meant 100% confidence. Subjects were asked to use the whole range of the rating scale in each run. No feedback was given after completing a trial. A rest period of 1-3 seconds, represented by a blue cross, separated the confidence rating from the beginning of the next trial. After the blue cross, an orange cross was presented, indicating the detection of fixation by the eye tracker; once the eye tracker detected the pupil, the next trial began. Jittering of the delay and rest periods was used to prevent participants from forming a response strategy. Each participant finished 4 runs per modality, 196 trials in total. The performance level was communicated to the subject after each run. An accuracy level of more than 75% was a prerequisite for performing the task in the scanner.

Experimental Setup and Behavioral Task Inside the Scanner

Participants were placed in the MR scanner in a supine position, wearing foam earplugs and a noise-isolating headset to minimize noise during the visual task. Visual stimuli were presented using MR-compatible liquid crystal display goggles (Resonance Technology, Northridge, CA). The spatial resolution was 800 x 600 pixels, covering a visual field of 32 x 24 degrees, and the refresh rate was 60 Hz. Eye position was monitored using an MR-compatible eye tracking system (Arrington Research, Scottsdale, AZ). As in the task outside the scanner, subjects were asked to decide whether there were more flickers to the right or to the left of the central cross. Note that in the visual decision-making task inside the scanner only two difficulty levels were presented: 1 flicker difference (hard) and 3 flickers difference (easy). A slow event-related design with a longer jittered inter-trial interval was implemented. A delay of 6-8 seconds was applied to ensure that the BOLD signal returned to baseline, allowing proper separation of task-related events. Before the fMRI session, participants were again instructed on the whole task, the meaning of the different cross colors, and how to report decision and confidence using the answer box; they were also asked to use the information from the whole stimulus period, not only the information at the beginning or at the end of the stimulus presentation phase. Each run started upon receiving a pulse from the scanner to synchronize the task with the scanner acquisition. A fixation cross appeared and participants were required to fixate for one second in order to initiate the stimulus presentation; trials did not start until participants had fixated for 1 second.


Subjects were asked to report the decision via button press on an answer box: 1 meant left and 2 meant right, and the decision had to be reported during the green-cross period. During the yellow-cross period, subjects reported the confidence in the previous decision; they had 2 seconds to do so via the answer box. The confidence rating used the same discrete scale as outside the scanner: subjects rated their confidence in 4 categories by pressing buttons 1 to 4, where 1 meant 0% confidence, 2 meant 25% confidence, 3 meant 75% confidence, and 4 meant 100% confidence, and they were asked to use the whole range of the scale in each run. No feedback was given after completing a trial. A rest period of 6-8 seconds, represented by a blue cross, separated the confidence rating from the beginning of the next trial. After the blue cross, an orange cross was presented, indicating the detection of fixation by the eye tracker; once the eye tracker detected the pupil, the next trial began. Jittering of the delay and rest periods was used to allow relaxation of the fMRI signal (Fig12).

2-by-2 Repeated Measures ANOVA Percent accuracy and mean confidence were calculated for the visual task at each difficulty, inside and outside the scanner. To investigate whether difficulty or the change of environment (inside versus outside the scanner) affected the subjects' performance, a 2-by-2 repeated measures ANOVA on performance with environment and difficulty as factors was computed (easy/hard; inside the scanner/outside the scanner). To investigate whether difficulty or the change of environment affected the subjects' confidence rating, a second 2-by-2 repeated measures ANOVA with the same factors was computed on the confidence ratings.
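As a pointer to how such an analysis can be set up, the following Python sketch uses statsmodels' AnovaRM on a long-format table; the column names and example values are placeholders, not the thesis data, and the actual analysis was not necessarily run this way.

```python
# Minimal sketch of a 2-by-2 repeated-measures ANOVA with environment and
# difficulty as within-subject factors. Values shown are placeholders.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject":     [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "environment": ["inside", "inside", "outside", "outside"] * 3,
    "difficulty":  ["easy", "hard"] * 6,
    "accuracy":    [0.77, 0.50, 0.87, 0.55, 0.75, 0.54, 0.91, 0.57, 0.80, 0.52, 0.85, 0.56],
})

# For the confidence ANOVA, replace 'accuracy' with the mean confidence rating.
res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["environment", "difficulty"]).fit()
print(res)   # F and p values for the two main effects and their interaction
```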

fMRI Data Acquisition All images were acquired using a 3 Tesla Magnetom TIM Trio scanner (Siemens Healthcare, Erlangen, Germany) with a 12-channel phased-array head coil. First, a high-resolution T1-weighted anatomical scan (three-dimensional (3D) turbo fast low angle shot, echo time (TE): 3.26 ms, repetition time (TR): 2,250 ms, inversion time: 900 ms, flip angle 9°, isotropic resolution of 1 × 1 × 1 mm³) was obtained. All functional data were acquired using T2*-weighted gradient-echo echo-planar imaging (TE: 30 ms, TR: 2,000 ms, flip angle 70°, 33 slices of 3-mm thickness, no gap between slices, at an in-plane resolution of 3 × 3 mm²). Four dummy scans were added at the beginning of each run to allow for T1 equilibrium. A total of 425 whole-brain volumes was acquired in each functional run. All participants were invited to one fMRI session; each session contained 4 runs, and each run contained 30 trials.

fMRI Data Preprocessing and Analysis BrainVoyager QX software version 2.8 (Brain Innovation, Maastricht, The Netherlands) and the Neuroelf 0.9c toolbox for Matlab (retrieved from http://neuroelf.net/) were used for the preprocessing and analysis of the functional data. Standard preprocessing steps including 3D motion correction, slice scan time correction and temporal filtering [linear trend removal and high-pass filtering (2 cycles/run)] were performed. The functional data were co-registered to the anatomical reference scans, transformed into Talairach space and spatially smoothed with a Gaussian kernel (full width at half maximum 8 × 8 × 8 mm).

Further statistical analysis was performed using the general linear model (GLM) implemented in the BrainVoyager software. A first-level GLM was estimated for each subject. For each run, the stimulus presentation period was categorized and modeled with four regressors: correct easy trials, correct hard trials, incorrect easy trials, and incorrect hard trials. The delay period and the response period were each modeled with the same four categories (correct easy, correct hard, incorrect easy, and incorrect hard trials). The rest period was modeled as baseline. The GLM generated in BrainVoyager was then analyzed with the Neuroelf toolbox in Matlab for further figure presentation.
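The sketch below illustrates, in Python, the general idea of such condition regressors (a boxcar per trial period convolved with a haemodynamic response function). It is a schematic stand-in, not the BrainVoyager implementation; the HRF shape, onsets and durations are placeholder assumptions.

```python
# Schematic construction of one first-level GLM regressor: a boxcar over the
# onsets/durations of a condition, convolved with a simple HRF and resampled
# to one value per volume. Illustrative only; not the BrainVoyager code.
import numpy as np
from scipy.stats import gamma

TR, N_VOLS = 2.0, 425            # repetition time (s) and volumes per run (see above)

def hrf(t):
    """Crude double-gamma-like haemodynamic response function (illustrative)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

def make_regressor(onsets, durations, dt=0.1):
    t = np.arange(0, N_VOLS * TR, dt)
    box = np.zeros_like(t)
    for onset, dur in zip(onsets, durations):
        box[(t >= onset) & (t < onset + dur)] = 1.0
    conv = np.convolve(box, hrf(np.arange(0, 32, dt)))[: len(t)]
    return conv[:: int(TR / dt)]                 # one value per volume

# e.g. the stimulus-period regressor for correct easy trials (placeholder onsets):
stim_correct_easy = make_regressor(onsets=[12.0, 40.0, 68.0], durations=[1.0, 1.0, 1.0])
```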

For the group results, a random-effects analysis using the GLM was performed with 12 participants. For all statistical maps, multiple-comparison correction was performed at the cluster level. Maps were thresholded at an initial cluster-forming threshold of p < 0.005 unless stated otherwise. The size of the resulting clusters was assessed for significance using Monte Carlo simulations as implemented in BrainVoyager's cluster-level statistical threshold estimator plugin. Reported clusters are significant at a level of p < 0.05 unless stated otherwise.
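To make the logic of this correction concrete, the following Python sketch estimates a cluster-extent threshold from smoothed Gaussian null maps; it is a simplified, AlphaSim-style stand-in under assumed map dimensions and smoothness, not the BrainVoyager plugin itself.

```python
# Simplified Monte Carlo estimate of the cluster-extent threshold: simulate
# smooth null maps, apply the voxelwise cut-off, and record the largest
# surviving cluster per iteration. Shape and smoothness values are assumptions.
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def cluster_size_threshold(shape=(64, 64, 33), smooth_vox=2.7,
                           p_voxel=0.005, alpha=0.05, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    z_cut = norm.isf(p_voxel)                       # voxelwise cut-off on the null maps
    max_sizes = []
    for _ in range(n_iter):
        null = ndimage.gaussian_filter(rng.standard_normal(shape), smooth_vox)
        null /= null.std()                          # re-standardise after smoothing
        labels, n = ndimage.label(null > z_cut)
        sizes = ndimage.sum(null > z_cut, labels, index=range(1, n + 1)) if n else [0]
        max_sizes.append(max(sizes))
    # the cluster size exceeded by chance in only alpha of the simulations
    return float(np.quantile(max_sizes, 1 - alpha))
```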

Difficulty Map

For the difficulty map, the contrast hard > easy was computed for correct trials and incorrect trials separately. The minimum cluster size was set to 25 voxels, and the positive clusters were assessed with AlphaSim at a significance level of p < 0.05.

Direction Map

For the direction map, the contrasts right easy > rest and left easy > rest were used. For each of these contrasts, a strict initial cluster-forming threshold of p < 0.001 was applied and the cluster extent was corrected with AlphaSim (FWE, p < 0.05; cluster size k for uncorrected p < 0.001). The remaining clusters were then overlaid on the same statistical map.

Correctness Map

For the correctness map, the contrast correct (1) > incorrect (-1) was used. The minimum cluster size was set to 25 voxels, and the positive clusters were assessed with AlphaSim at a significance level of p < 0.05.

Confidence Encoding Region Identification

To identify the confidence encoding regions, the steps shown in Fig13 were followed. The correctness-encoding and incorrectness-encoding regions were first identified as candidates for confidence and uncertainty encoding regions. The ordering of the BOLD signal across difficulty levels in these regions was then checked against the signature proposed by the integrate-and-fire based two-layer decision-making model: in confidence encoding regions, the BOLD signal should be ordered as correct easy > correct hard > incorrect hard > incorrect easy, whereas in uncertainty encoding regions it should follow the order incorrect easy > incorrect hard > correct hard > correct easy. If a region passed this criterion, the final checkpoint was whether its BOLD signal also corresponded to the subjective confidence rating. Only if this last criterion was fulfilled were the confidence pool and the uncertainty pool confirmed.
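A minimal sketch of this selection logic is given below (Python; the dictionary keys and example values are hypothetical): it simply checks whether a candidate region's condition-wise BOLD estimates follow one of the two predicted orderings.

```python
# Minimal sketch of the region-selection logic in Fig13. 'bold' holds mean
# BOLD estimates per condition for one candidate region (hypothetical values).
def classify_region(bold):
    confidence_order = (bold["cor_easy"] > bold["cor_hard"]
                        > bold["incor_hard"] > bold["incor_easy"])
    uncertainty_order = (bold["incor_easy"] > bold["incor_hard"]
                         > bold["cor_hard"] > bold["cor_easy"])
    if confidence_order:
        return "confidence candidate"    # still has to track the subjective ratings
    if uncertainty_order:
        return "uncertainty candidate"   # still has to track the subjective ratings
    return "excluded"

print(classify_region({"cor_easy": 0.42, "cor_hard": 0.31,
                       "incor_hard": 0.18, "incor_easy": 0.05}))  # -> confidence candidate
```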

Fig13. The Flow Chart for Identification of the Confidence and Uncertainty Encoding Regions. In the correctness map, the red regions encode correctness while the blue regions encode incorrectness; each region is a candidate for confidence encoding or uncertainty encoding, respectively. Further investigation requires taking the specific signature across difficulty levels into account. A region encoding confidence should be most active in correct easy (Cor Easy) trials, less active in correct hard (Cor Hard) trials, still less active in incorrect hard (Incor Hard) trials, and least active in incorrect easy (Incor Easy) trials. A region encoding uncertainty should be most active in incorrect easy trials, less active in incorrect hard trials, still less active in correct hard trials, and least active in correct easy trials. Finally, the BOLD signal in a confidence encoding region should vary with the confidence rating: it should be higher in confident trials (Conf) than in unconfident trials (Unconf) for the confidence pool, and higher in unconfident than in confident trials for the uncertainty pool.


Results

Visual Task

Behavior Results of Visual Task Outside the Scanner 17 subjects were invited to the visual decision-making psychophysical experiment. Each subject participated in 10 runs outside the scanner; each run contained 35 trials, including 5 chance trials. Trials with a delayed decision report or a mistaken confidence report were excluded. 5290 trials from 17 subjects were included in the behavioral data analysis (Sub1=280, Sub2=165, Sub3=225, Sub4=315, Sub5=350, Sub6=350, Sub7=315, Sub8=350, Sub9=350, Sub10=350, Sub11=350, Sub12=350, Sub13=315, Sub14=350, Sub15=245, Sub16=315, Sub17=315).

Performance and Confidence Rating of Visual Task Outside the Scanner

Fig14. Accuracy and Confidence Outside the Scanner. (left) The mean accuracy is higher when the difference in stimuli is larger. (right) The mean confidence rating (from 1 to 4) is higher when the difference in stimuli is larger.

The difficulties of the visual task outside the scanner were set to 1, 3 and 5 flickers difference, representing hard, medium and easy trials respectively. The accuracy in the visual task outside the scanner was not biased to one side, and it increased with the difference between the presented stimuli (Fig14, left). (left easy: M=87.2%, SEM=.013; left medium: M=73.6%, SEM=.017; left hard: M=53.5%, SEM=.019; right easy: M=91.4%, SEM=.01; right medium: M=75.3%, SEM=.016; right hard: M=57.5%, SEM=.019)

In the confidence rating, subjects were asked to rate their confidence from 1 to 4, representing 0%, 25%, 75% and 100% confidence. The confidence rating was symmetrical in terms of direction; no preference toward one side was shown. Subjects were more confident when the difference between the stimuli was large, i.e. when the trials were easy, reported lower confidence when the trials were medium, and the lowest confidence when the trials were hard (Fig14, right). (left easy: M=2.82, SEM=.035; left medium: M=2.46, SEM=.034; left hard: M=2.24, SEM=.033; right easy: M=2.78, SEM=.034; right medium: M=2.46, SEM=.035; right hard: M=2.21, SEM=.034)

Fig15. Accuracy vs. Confidence and Went Trials Outside the Scanner. (left) The mean accuracy is higher when the mean confidence is higher. (right) Subjects made more rightward choices when more stimuli were presented on the right side.

The subjects tended to report higher confidence when the accuracy was higher (Fig15, left). (Confidence=1: M=59.9%, SDE=.018; Confidence=2: M=68.5%, SDE=.013; Confidence=3: M=76.7%, SDE=.011; Confidence=4: M=86.6%, SDE=.013)

The probability of a rightward decision outside the scanner increased with the number of stimuli presented on the right side (Fig15, right). (L-R=5: M=12.8%, SDE=.013; L-R=3: M=27.4%, SDE=.017; L-R=1: M=47.7%, SDE=.019; R-L=1: M=57.5%, SDE=.019; R-L=3: M=75.3%, SDE=.016; R-L=5: M=91.4%, SDE=.01)
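This choice behavior can be summarized with a psychometric function; the sketch below fits a logistic curve to the group means quoted above (Python; the fit itself is illustrative and was not part of the thesis analysis).

```python
# Fit a logistic psychometric function to the proportion of rightward choices
# reported above as a function of the signed flicker difference (R minus L).
import numpy as np
from scipy.optimize import curve_fit

signed_diff = np.array([-5, -3, -1, 1, 3, 5], dtype=float)
p_right     = np.array([0.128, 0.274, 0.477, 0.575, 0.753, 0.914])

def logistic(x, bias, slope):
    return 1.0 / (1.0 + np.exp(-(x - bias) / slope))

(bias, slope), _ = curve_fit(logistic, signed_diff, p_right, p0=(0.0, 2.0))
print(f"point of subjective equality: {bias:.2f} flickers, slope parameter: {slope:.2f}")
```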

Fig16. Number of Trials in Each Predictor Outside the Scanner. Visual trials outside the scanner can be categorized into eight predictor categories. Note that the probability of making an incorrect confident decision in easy trials outside the scanner was the lowest. (IncoUnconfEasy = incorrect unconfident easy trials, IncoUnconfHard = incorrect unconfident hard trials, IncoConfEasy = incorrect confident easy trials, IncoConfHard = incorrect confident hard trials, CoUnconfEasy = correct unconfident easy trials, CoUnconfHard = correct unconfident hard trials, CoConfEasy = correct confident easy trials, CoConfHard = correct confident hard trials)


Because the number of runs that could be performed inside the scanner would be considerably lower than outside the scanner, it was important that every predictor category also occurred inside the scanner. Within the 5290 trials outside the scanner, 3169 trials were medium and hard trials, which correspond to the 'easy' and 'hard' trials of the scanner setting. 306 (9.6%) trials were correct confident hard trials; 580 (18.2%) trials were correct confident easy trials; 489 (15.3%) trials were correct unconfident hard trials; 496 (15.5%) trials were correct unconfident easy trials; 252 (7.9%) trials were incorrect confident easy trials; 149 (4.7%) trials were incorrect unconfident easy trials; 401 (12.6%) trials were incorrect unconfident hard trials; 496 (15.5%) trials were incorrect confident hard trials (Fig16).

Fig17. Performance Outside the Scanner. The performance of the subjects was best in easy trials, worse in medium trials and worst in hard trials. Note that even in hard trials, the accuracy was still above chance level.

The performance of the subjects outside the scanner was best in easy trials (M=89.35%, SDE=.008), worse in medium trials (M=73.95%, SDE=.012), and worst in hard trials (M=54.9%, SDE=.013) (Fig17).
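Whether the hard-trial accuracy is reliably above chance can be checked with a simple binomial test, as in the sketch below (Python; the trial count used is a placeholder, not the exact number of hard trials in the data set).

```python
# Illustrative binomial test of hard-trial accuracy against chance (p = 0.5).
from scipy.stats import binomtest

n_hard_trials = 1500                       # placeholder trial count
n_correct = round(0.549 * n_hard_trials)   # 54.9% accuracy reported above
result = binomtest(n_correct, n_hard_trials, p=0.5, alternative="greater")
print(result.pvalue)                       # well below 0.05 for counts of this size
```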


Fig18. Confidence vs. Difference of Stimuli Outside the Scanner. Confidence in correct trials (green) is higher than in incorrect trials (red). The difference in confidence rating is larger in easy trials than in hard trials.

The difference in confidence rating was largest in the easiest trials (5 flickers difference), smaller in the medium trials (3 flickers difference) and smallest in the hard trials (1 flicker difference). This double-U shaped curve (Fig18) is compatible with the animal studies of Kepecs and Komura. (correct easy left trials: M=2.92, SEM=.036; correct medium left trials: M=2.54, SEM=.038; correct hard left trials: M=2.25, SEM=.043; correct hard right trials: M=2.2, SEM=.047; correct medium right trials: M=2.56, SEM=.041; correct easy right trials: M=2.56, SEM=2.85; incorrect easy left trials: M=2.18, SEM=.10; incorrect medium left trials: M=2.26, SEM=.071; incorrect hard left trials: M=2.24, SEM=.05; incorrect hard right trials: M=2.21, SEM=.049; incorrect medium right trials: M=2.16, SEM=.065; incorrect easy right trials: M=2.00, SEM=.10)


Behavior Analysis of Data from Inside the Scanner

15 subjects were invited to the fMRI visual decision-making session, contributing the psychophysical data from inside the scanner. Each subject participated in 1 session; each session contained 4 runs, and each run contained 30 trials.

Fig19. Accuracy and Confidence Inside the Scanner. (left) The mean accuracy is higher when the difference in stimuli is larger. (right) The mean confidence rating (from 1 to 4) is higher when the difference in stimuli is larger.

The difficulties of the visual task inside the scanner were set to 1 and 3 flickers difference, representing hard and easy trials respectively. The accuracy increased with the difference between the presented stimuli on both the left and the right side (Fig19, left). (left easy: M=77.2%, SEM=.02; left hard: M=46.1%, SEM=.025; right easy: M=76.5%, SEM=.023; right hard: M=54.5%, SEM=.023)

In the confidence rating, the same number-scale rating system was used inside the scanner. Subjects were asked to rate the confidence using the answer box. The confidence rating is symmetrical in terms of direction; no preference toward one side is shown. It is clear that the subjects were more confident when the difference of stimuli was large, which also means the trials were easy, while the subjects reported lower confidence, when the trials were hard (Fig19, right). (left easy: M=2.48, SEM=.046; left hard: M=2.27 , SEM=.046; right easy: M=2.56, SEM=.05; right hard: M=2.21, SEM=.045)


Fig20. Accuracy vs. Confidence and Went Trials Inside the Scanner. (left) The mean accuracy is higher when the mean confidence is higher, in each difficulty (easy and hard) respectively. (right) Subjects made more rightward choices when more stimuli were presented on the right side.

As outside the scanner, the confidence rating was positively correlated with accuracy at each difficulty level (Fig20, left). (Easy trials: Confidence=1: M=57.8%, SDE=.041; Confidence=2: M=73.8%, SDE=.028; Confidence=3: M=80.5%, SDE=.023; Confidence=4: M=94.3%, SDE=.020; Hard trials: Confidence=1: M=47.3%, SDE=.034; Confidence=2: M=54.5%, SDE=.03; Confidence=3: M=55.9%, SDE=.03; Confidence=4: M=65.3%, SDE=.055)

The probability of a rightward decision inside the scanner increased with the number of stimuli presented on the right side (Fig20, right). (L-R=3: M=22.8%, SDE=.02; L-R=1: M=46.6%, SDE=.025; R-L=1: M=54.5%, SDE=.023; R-L=3: M=76.5%, SDE=.022)

Fig21. Number of Trials in Each Predictor Inside the Scanner. Visual trials inside the scanner can be categorized into eight predictor categories. Note that the probability of making an incorrect confident decision in easy trials inside the scanner was the lowest. (IncoUnconfEasy = incorrect unconfident easy trials, IncoUnconfHard = incorrect unconfident hard trials, IncoConfEasy = incorrect confident easy trials, IncoConfHard = incorrect confident hard trials, CoUnconfEasy = correct unconfident easy trials, CoUnconfHard = correct unconfident hard trials, CoConfEasy = correct confident easy trials, CoConfHard = correct confident hard trials)


1817 trials in total were collected from 15 subjects inside the scanner. 200 (11.1%) trials were correct confident hard trials; 376 (20.7%) trials were correct confident easy trials; 256 (14.1%) trials were correct unconfident hard trials; 265 (14.6%) trials were incorrect unconfident hard trials; 145 (8.0%) trials were incorrect unconfident easy trials; 67 (3.7%) trials were incorrect unconfident hard trials; 243 (13.4%) trials were incorrect unconfident easy trials; 265 (14.6%) trials were correct confident hard trials (Fig21).

Fig22. Performance Inside the Scanner. The performance inside the scanner was 76.7% in easy trials and 54% in hard trials.

The performance, i.e. the accuracy of the subjects inside the scanner, was higher in easy trials (M=76.7%, SDE=.0146) than in hard trials (M=54.0%, SDE=.0172) (Fig22).

Fig23. Confidence vs. Difference of Stimuli and Confidence vs. Difficulty Inside the Scanner. (left) Confidence in correct trials (green) is higher than in incorrect trials (red). (right) The difference in confidence rating is larger in easy trials than in hard trials.

The double-U shaped curve presented here is compatible with the animal studies of Kepecs and Komura, indicating that subjects had overall higher confidence in correct trials than in incorrect trials (Fig23, left). (correct easy left trials: M=2.59, SEM=.051; correct hard left trials: M=2.29, SEM=.061; correct hard right trials: M=2.34, SEM=.063; correct easy right trials: M=2.75, SEM=.056; incorrect easy left trials: M=2.11, SEM=.094; incorrect hard left trials: M=2.25, SEM=.071; incorrect hard right trials: M=2.05, SEM=.062; incorrect easy right trials: M=2.02, SEM=.086)

The difference in confidence rating was larger in the easy trials (3 flickers difference) and smaller in the hard trials (1 flicker difference) (Fig23, right). (correct easy trials: M=2.66, SEM=.038; correct hard trials: M=2.32, SEM=.044; incorrect hard trials: M=2.14, SEM=.05; incorrect easy trials: M=2.07, SEM=.064)

Fig24. Repeated Measures ANOVA. A repeated measures ANOVA was performed to investigate whether there was a change of strategy in performance and confidence rating between outside the scanner (black box, left) and inside the scanner (right).

Repeated Measure ANOVA Results

To test whether the difference in environment affected performance and confidence report, 2-by-2 repeated measures ANOVAs were performed (Fig24). The confidence rating did not change with the environment (F=3.129, p=0.102), nor did the performance (F=0.054, p=0.820). The performance changed with difficulty (F=65.077, p<0.05), as did the confidence rating (F=131.814, p<0.05). There was no interaction between difficulty and environment for the confidence report (F=0.312, p=0.587), and no interaction between difficulty and environment for performance (F=2.101, p=0.173).


Auditory Task

Behavior Analysis of Auditory Task Outside the Scanner

Fig25. Accuracy and Confidence in the Auditory Task Outside the Scanner. (left) The mean accuracy is higher when the difference in stimuli is larger. (right) The mean confidence rating (from 1 to 4) is higher when the difference in stimuli is larger.

The difficulties of the auditory task outside the scanner were set to 5, 7 and 10 clicks difference, representing hard, medium and easy trials respectively. The accuracy increased with the difference between the presented stimuli on both the left and the right side (Fig25, left). (left easy: M=80.5%, SEM=.017; left medium: M=72.1%, SEM=.019; left hard: M=59.5%, SEM=.022; right easy: M=88.4%, SEM=.013; right medium: M=79.1%, SEM=.017; right hard: M=74.2%, SEM=.020)

In the confidence rating, the same number-scale rating system was used as in the visual tasks. It is clear that the subjects were more confident when the difference of stimuli was large, which also means the trials were easy, while the subjects reported lower confidence, when the trials were hard (Fig25, right). (left easy: M=2.31, SEM=.037; left medium: M=2.30 , SEM=.037; left hard: M=2.10 , SEM=.038; right easy: M=2.56, SEM=.013; right medium: M=2.40, SEM=.040; right hard: M=2.28, SEM=.042)


Fig26. Accuracy vs. Confidence in the Auditory Task and Went Trials Outside the Scanner. (left) The mean accuracy is higher when the mean confidence is higher, in each difficulty (easy = 9 clicks difference = blue; medium = 7 clicks difference = green; hard = 5 clicks difference = red) respectively. (right) Subjects made more rightward choices when more stimuli were presented on the right side.

As in the visual task, the confidence rating was positively correlated with accuracy at each difficulty level (Fig26, left). (Easy trials: Confidence=1: M=63.0%, SDE=.036; Confidence=2: M=83.0%, SDE=.018; Confidence=3: M=91.3%, SDE=.015; Confidence=4: M=98.1%, SDE=.011; Medium trials: Confidence=1: M=60.1%, SDE=.033; Confidence=2: M=73.4%, SDE=.022; Confidence=3: M=82.4%, SDE=.020; Confidence=4: M=91.6%, SDE=.024; Hard trials: Confidence=1: M=53.9%, SDE=.031; Confidence=2: M=65.6%, SDE=.022; Confidence=3: M=69.5%, SDE=.026; Confidence=4: M=77.5%, SDE=.040)

The probability of a rightward decision outside the scanner increased with the number of stimuli presented on the right side (Fig26, right). (L-R=10: M=19.5%, SDE=.017; L-R=7: M=27.9%, SDE=.019; L-R=5: M=42.6%, SDE=.022; R-L=10: M=88.4%, SDE=.013; R-L=7: M=79.1%, SDE=.017; R-L=5: M=74.2%, SDE=.020)


Fig27. Number of Trials in Each Predictor of the Auditory Task Outside the Scanner. Auditory trials outside the scanner can be categorized into eight predictor categories. Note that in the auditory task, the probability of making an incorrect confident decision in easy trials outside the scanner was the lowest. (IncoUnconfEasy = incorrect unconfident easy trials, IncoUnconfHard = incorrect unconfident hard trials, IncoConfEasy = incorrect confident easy trials, IncoConfHard = incorrect confident hard trials, CoUnconfEasy = correct unconfident easy trials, CoUnconfHard = correct unconfident hard trials, CoConfEasy = correct confident easy trials, CoConfHard = correct confident hard trials)

In the auditory task, 2384 trials from 12 subjects were collected. 258 (10.8%) trials were correct confident hard trials; 424 (17.8%) trials were correct confident easy trials; 405 (17.0%) trials were correct unconfident hard trials; 436 (18.3%) trials were correct unconfident easy trials; 96 (4.0%) trials were incorrect confident hard trials; 78 (3.3%) trials were incorrect confident easy trials; 251 (10.5%) trials were incorrect unconfident hard trials; 436 (18.4%) trials were incorrect unconfident easy trials (Fig27).

Fig28. Performance in Auditory Task Outside the Scanner. The performance was 84.6% in easy trials and 75.8% in medium trials and 65.1% in hard trials in auditory task outside the scanner.

The performance of the subjects in the auditory task outside the scanner was best in easy trials (M=84.6%, SDE=.011), worse in medium trials (M=75.8%, SDE=.013), and worst in hard trials (M=65.1%, SDE=.014) (Fig28).


Fig29. Confidence vs. Difficulty in the Auditory Task Outside the Scanner. The difference in confidence rating was largest in the easiest trials (9 clicks difference), smaller in the medium trials (7 clicks difference) and smallest in the hard trials (5 clicks difference).

The difference in confidence rating was largest in the easy trials (9 clicks difference), smaller in the medium trials (7 clicks difference) and smallest in the hard trials (5 clicks difference) (Fig29). (correct easy trials: M=2.55, SEM=.035; correct medium trials: M=2.47, SEM=.031; correct hard trials: M=2.29, SEM=.029; incorrect easy trials: M=1.84, SEM=.047; incorrect medium trials: M=1.99, SEM=.051; incorrect hard trials: M=2.00, SEM=.059)

fMRI Results in Visual Task

Stimulus Presentation Period

Fig30. Stimulus Presentation Period. Cingulate gyrus (CG), precentral gyrus (PreC), precuneus (PC), cerebellum (CB), insula (Ins) and middle occipital gyrus (MO) show a higher BOLD signal during the stimulus presentation period than during the rest period. Cut: x,y,z=(-7,-61,43), rightmost (-30,15,1), k-threshold=26, p<0.005

During the stimulus presentation period, the insula, cingulate gyrus, precentral gyrus, precuneus, cerebellum and middle occipital gyrus showed a higher BOLD signal than during the rest period (Fig30). The contrast shown in this map was Stimulus > Resting State, FWE-corrected at p < 0.05 with a cluster size k of 26 voxels for uncorrected p < 0.005.


Direction

Fig31. Direction Encoding Regions. Right occipital lobe (ROL) shows activation in right trials, while left occipital lobe (LOL) shows activation in left trials. Cut: x,y,z=(-4,-65,4), k-threshold=15, p<0.001 (figure by A. Nazzal)

In right trials (more flickers on the right side), the left occipital lobe showed higher activation, and in left trials (more flickers on the left side), the right occipital lobe showed higher activation (Fig31). The contrasts were Right Presented Trials > Rest and Left Presented Trials > Rest, FWE-corrected at p < 0.05 with a cluster size k of 15 voxels for uncorrected p < 0.001.

Difficulty

Fig32. Difficult versus Easy in Correct Trials. Anterior cingulate (AC) and middle frontal cortex (MFC) show more activation in hard trials than in easy trials. Cut: x,y,z=(5,24,26), k-threshold=45, p<0.05

In correct trials, the middle frontal cortex (MFC; x,y,z=(37,45,26); 207 voxels) and the anterior cingulate (AC; x,y,z=(9,29,26); 189 voxels) showed a higher BOLD signal in difficult trials than in easy trials (Fig32). The contrast shown here was Correct Hard Trials > Correct Easy Trials, FWE-corrected at p < 0.05 with a cluster size k of 45 voxels for uncorrected p < 0.05.


Correctness

Fig33. Correctness Encoding Regions. The midbrain (MB) shows more activation in correct trials (red), while the inferior frontal gyrus (IFG) shows more activation in incorrect trials (blue). Cut: x,y,z=(2,40,1), k-threshold=40, p<0.05

The right inferior frontal gyrus (RIFG; x,y,z=(45,53,1); 66 voxels) and the left inferior frontal gyrus (LIFG; x,y,z=(-48,35,10); 66 voxels) showed more activation (blue) in incorrect trials than in correct trials, while the midbrain (MB; x,y,z=(6,-16,11); 73 voxels) showed more activation in correct trials than in incorrect trials (Fig33). The contrasts were Correct Trials > Incorrect Trials and Incorrect Trials > Correct Trials, FWE-corrected at p < 0.05 with a cluster size k of 40 voxels for uncorrected p < 0.05.

Event related Average in Midbrain

Fig34. Event-Related Average in the Midbrain. In the midbrain, the BOLD signal was highest in correct easy trials, smaller in correct hard trials, smaller still in incorrect hard trials and smallest in incorrect easy trials.

The event-related average (ERA) was examined in the confidence candidates midbrain, inferior parietal lobe and thalamus. In the left midbrain, the BOLD signal was highest in correct easy trials, followed by correct hard trials and incorrect hard trials, and was lowest in incorrect easy trials (Fig34).


Event related Average in Right and Left Inferior Frontal Gyrus

Fig35. Event-Related Average in the Right and Left Inferior Frontal Gyrus. In the right and left dorsolateral prefrontal cortex (inferior frontal gyrus), the BOLD signal was highest in incorrect easy trials, smaller in incorrect hard trials, smaller still in correct hard trials and smallest in correct easy trials.

The event-related average (ERA) was examined in the uncertainty candidates, the right and left dorsolateral prefrontal cortex. In both regions, the BOLD signal was highest in incorrect easy trials, followed by incorrect hard trials, correct hard trials and correct easy trials (Fig35).
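As an illustration of what such an event-related average involves, the sketch below epochs an ROI time course around trial onsets and averages the percent signal change per condition; variable names and window sizes are assumptions, and the actual analysis used the BrainVoyager/Neuroelf tools.

```python
# Sketch of an event-related average for one ROI and one condition: cut out a
# fixed peri-onset window, express it as percent signal change relative to the
# pre-onset baseline, and average across trials.
import numpy as np

def event_related_average(roi_ts, onsets_vol, pre=2, post=10):
    """roi_ts: 1-D ROI time course (one value per volume);
    onsets_vol: trial onsets in volumes; pre/post: window size in volumes."""
    epochs = []
    for onset in onsets_vol:
        baseline = roi_ts[onset - pre: onset].mean()
        segment = roi_ts[onset - pre: onset + post]
        epochs.append(100.0 * (segment - baseline) / baseline)
    return np.mean(epochs, axis=0)        # mean percent signal change over trials
```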

BOLD Signal in Candidate Regions for Confidence and Uncertainty

Fig36. BOLD Signal vs. Confidence in Candidate Regions. (left) In the midbrain, the BOLD signal is higher in confident trials (Conf = ratings 3 and 4) than in unconfident trials (Unconf = ratings 1 and 2). (middle) In the left DLPFC, the BOLD signal is higher in unconfident trials than in confident trials. (right) In the right DLPFC, the BOLD signal is higher in unconfident trials than in confident trials. (figures by A. Nazzal)

In the midbrain, the BOLD signal was higher in confident trials (Conf = ratings 3 and 4) than in unconfident trials (Unconf = ratings 1 and 2); in the right and left DLPFC, the BOLD signal was higher in unconfident trials than in confident trials (Fig36).


BOLD Signal in Midbrain Increases along with the Confidence Rating

Fig37. BOLD Signal in the Midbrain for Different Confidence Ratings. The BOLD signal in the midbrain is highest when subjects chose 100% confidence (rating=4), followed by 75% confidence (rating=3), 25% confidence (rating=2) and 0% confidence (rating=1). (figures by A. Nazzal)

In the midbrain, the BOLD signal was highest when subjects were 100% confident (rating=4), lower at 75% confidence (rating=3), lower still at 25% confidence (rating=2), and lowest at 0% confidence (rating=1) (Fig37).


Discussion

Several regions have been proposed to participate in confidence formation and uncertainty in decision-making. In this study, we managed to recapitulate the double-U shaped confidence curve in the visual decision-making experiment both outside and inside the scanner. We also identified regions encoding confidence (midbrain) and uncertainty (DLPFC) based on the predictions of the integrate-and-fire attractor based two-layer decision-making model.

Behavior Results Discussion In the behavioral experiment, we found that accuracy and confidence rating were both higher when the task was easier (large difference in the number of stimuli) and lower when the task was harder (Fig14 and Fig19). We used a non-continuous scale instead of a continuous scale to rate confidence in this study. The association between accuracy and confidence indicates that this non-continuous scale can reflect subjective confidence.

In both the visual and the auditory task outside the scanner, the confidence report was compatible with the double-U shaped curve published in the previous literature (Kepecs et al., 2008; Komura et al., 2013). The confidence report of the visual task inside the scanner also showed a double-U shaped curve. The double-U shaped curve, as a signature of perceptual decision-making, shows that subjects have higher confidence in correct trials than in incorrect trials, and that the difference in confidence between correct and incorrect trials is larger in easy trials than in hard trials.

Although the two double-U shaped figures from outside and inside the scanner look similar, it is important to note that the delay period was shorter outside the scanner than inside. Moreover, it is essential to know whether there was a difference in performance and confidence report between these two environments (outside and inside the scanner). The repeated measures ANOVA revealed no change in performance or confidence report between the environments, and no interaction between environment and difficulty for either performance or confidence report, indicating that subjects kept the same confidence report strategy throughout the whole study. This also indicates that the confidence pool stops accumulating evidence after the decision is formed.

It is easy to understand that subjects have more confidence in correct easy trials than in correct hard trials. One might wonder, however, why the curve runs the other way round for incorrect trials. This counterintuitive pattern, in which subjects have lower confidence in incorrect easy trials than in incorrect hard trials, can be explained by the probabilistic nature of confidence in the context of perceptual decision-making (Hangya et al., 2015).


The low confidence in incorrect easy trials can also be interpreted as follows: in such easy trials, it is hard for subjects to make a mistake, so if they nevertheless make a mistake, they cannot be very confident (Kepecs et al., 2008).
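This intuition can be made concrete with a toy signal-detection simulation, sketched below in Python in the spirit of the frameworks of Kepecs et al. (2008) and Hangya et al. (2015); the noise level and evidence values are arbitrary illustration parameters, not fitted to the present data.

```python
# Toy simulation of statistical decision confidence: the internal percept is
# the true evidence plus Gaussian noise, the choice follows the sign of the
# percept, and confidence is the percept's distance from the decision boundary.
import numpy as np

rng = np.random.default_rng(1)

def simulate(evidence, n=100_000, noise=2.0):
    percept = evidence + noise * rng.standard_normal(n)
    correct = np.sign(percept) == np.sign(evidence)
    confidence = np.abs(percept)                   # distance from the boundary
    return confidence[correct].mean(), confidence[~correct].mean()

for evidence, label in [(1.0, "hard"), (3.0, "easy")]:
    conf_correct, conf_error = simulate(evidence)
    print(f"{label}: correct {conf_correct:.2f}, error {conf_error:.2f}")
# Errors on easy trials are typically percepts that only just crossed the
# boundary, so their mean confidence is lower than for errors on hard trials,
# while confidence on correct trials grows with evidence strength: a double U.
```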

It is important to note that in the instructions of this study, which were repeated before each session both outside and inside the scanner, the subjects were asked to rate their confidence in the correctness of the previous decision. The confidence rating in all trials therefore represents the subjects' internal estimate of how confident they were that they had made a correct decision. In other words, if a subject thought she had made a mistake, she would have reported low confidence, not high confidence (even if she was highly confident that she had made a mistake).

Confidence in the Auditory Task is Easily Affected The auditory task could easily be influenced by any noise in the environment (both outside and inside the scanner). Interestingly, the noise affected the confidence rating more than the performance, which indicates that even though the subjects were able to perform the decision-making task properly, they could not rate their confidence accordingly. In the psychophysics setting we adjusted the setup to minimize the influence of noise; inside the scanner, however, noise is hard to prevent. It is thus questionable whether the subjects would have been able to decide and report confidence inside the scanner as they did outside, and, more importantly, as in the visual task.

It is interesting as well that such a dissociation between performance and subjective confidence can be easily caused by the environmental noise.

For this reason, we did not invite subjects to perform the auditory task inside the scanner. A more theoretical concern is that, in the auditory task, it is hard to rule out that the scanner noise itself contributes to the stimulus accumulation, i.e. to the first layer of the decision-making model.

fMRI Results of Visual Decision-Making Task Before investigating the confidence and uncertainty encoding regions, it is important to look at the first-order analyses, namely the brain regions associated with correctness, difficulty, and direction.

Stimulus Presentation Period The primary visual cortex, secondary visual cortex, middle temporal regions, posterior parietal cortex, frontal regions, insula and anterior cingulate were active during the stimulus presentation period (Fig30). This result recapitulates previous studies in our lab. Our task was a visual task, so it is not surprising that both the primary and the secondary visual cortex were active during the task. The middle temporal regions and posterior parietal cortex are parts of the so-called dorsal stream pathway. Visual perception can be divided into two main streams: the ventral stream and the dorsal stream. The ventral stream begins in the primary visual cortex and proceeds through the secondary visual cortex and visual area V4 to the inferior temporal cortex. This pathway is often dubbed the 'What Pathway', indicating that it is associated with object recognition and identification.

The dorsal stream also starts in the primary visual cortex and goes through the secondary visual cortex, the dorsomedial area and the middle temporal area to the posterior parietal cortex. The dorsal stream is also called the 'Where Pathway', meaning that it is associated with motion processing and object location. The dorsal stream is thus involved in evidence accumulation over time.

Our data show high activity in dorsal stream areas. This may be due to the nature of the task: the subjects were asked to focus on the quantity of the stimuli (which side has more flickers?), not on the characteristics of the stimuli (what kind of stimuli?).

Direction Each trial in the task was either a right trial or a left trial. In right trials, the left occipital cortex showed larger activation, while in left trials, the right occipital cortex showed larger activation (Fig31). This result is consistent with the primary and secondary visual cortex in the occipital region representing the contralateral visual field. However, the effect is extremely small if we look at the subtraction contrast (Left-Right or Right-Left) of both sides. The reason might lie in the fact that, in this study, the flicker difference is small in both easy (3 flickers difference) and hard (1 flicker difference) trials. Previous studies in the lab with a larger difference in flickers showed a more evident direction-driven activity in the occipital cortex.

Difficulty Since difficulty and correctness are highly interdependent (in difficult trials there are more incorrect trials, and in easy trials more correct trials), we investigated difficulty separately in two situations to prevent this confound: difficulty in correct trials and difficulty in incorrect trials (Fig32).

In correct trials:

The middle frontal cortex and anterior cingulate showed a larger BOLD signal in hard trials than in easy trials. The anterior cingulate, which is also part of the medial frontal cortex, is often active when subjects face high-conflict trials. It was first thought that the anterior cingulate is involved in error detection, since in primate studies the neurons in the anterior cingulate fire more in incorrect trials (Niki and Watanabe, 1979).

In incorrect trials, our data did not show a difference between difficult and easy trials.

The model proposed by Rolls and Deco may explain this negative finding: in both the confidence and the uncertainty pool, the predicted firing rates for incorrect easy trials and incorrect hard trials are close to each other. However, it is unknown whether a difficulty pool, if it exists, would show the same firing-rate behavior as the confidence and uncertainty pools.

Correctness The model proposed by Rolls and Deco is rooted in the understanding that the BOLD signal should behave differently in correct and incorrect trials. In their study, they found that the subgenual cingulate cortex, dorsolateral prefrontal cortex, posterior cingulate cortex and medial area 10 showed more activity in correct trials than in incorrect trials (Rolls, Grabenhorst and Deco, 2010).

In the correctness map of our study, the midbrain showed a higher BOLD signal in correct trials, while the middle temporal gyrus and the DLPFC showed a higher BOLD signal in incorrect trials (Fig33). These regions are thus candidates for confidence encoding and uncertainty encoding regions.

The identification of the uncertainty and confidence regions followed the flow chart shown in Fig38.

Fig38. The Flow Chart for Identification of the Confidence and Uncertainty Encoding Regions. In the correctness map, the red regions encode correctness while the blue regions encode incorrectness; each region is a candidate for confidence encoding or uncertainty encoding, respectively. Further investigation requires taking the specific signature across difficulty levels into account. A region encoding confidence should be most active in correct easy (Cor Easy) trials, less active in correct hard (Cor Hard) trials, still less active in incorrect hard (Incor Hard) trials, and least active in incorrect easy (Incor Easy) trials. A region encoding uncertainty should be most active in incorrect easy trials, less active in incorrect hard trials, still less active in correct hard trials, and least active in correct easy trials. Finally, the BOLD signal in a confidence encoding region should vary with the confidence rating: it should be higher in confident trials (Conf) than in unconfident trials (Unconf) for the confidence pool, and higher in unconfident than in confident trials for the uncertainty pool.


Confidence In the correctness map, the midbrain showed a higher BOLD signal in correct trials, while the middle temporal gyrus and the DLPFC of both sides showed a higher BOLD signal in incorrect trials. According to the integrate-and-fire model and the BOLD signal prediction derived from it, the BOLD signal of brain regions encoding confidence should be larger in correct trials than in incorrect trials, whereas for brain regions encoding uncertainty the BOLD signal should be the other way round: larger in incorrect trials than in correct trials.

The midbrain was therefore a candidate for encoding confidence, while the middle temporal gyrus, right DLPFC and left DLPFC were candidates for encoding uncertainty.

However, the different behavior of the BOLD signal in correct and incorrect trials is only the first criterion for identifying confidence encoding regions. A key signature of confidence, both in psychophysics and electrophysiology, is that the difference in confidence rating (or firing rate) should be larger in easy trials than in hard trials.

Therefore, the BOLD signal of an ideal confidence encoding region should not only be higher in correct than in incorrect trials; according to the prediction of the model, it should follow the order correct easy > correct hard > incorrect hard > incorrect easy. For an ideal uncertainty encoding region, the BOLD signal should not only be higher in incorrect than in correct trials, but should follow the order incorrect easy > incorrect hard > correct hard > correct easy.

Taking this second criterion into account, the midbrain and the inferior parietal lobe remain candidates for confidence encoding, and the right and left DLPFC remain candidates for uncertainty encoding. The middle temporal gyrus was excluded because the order of its BOLD signal across difficulty did not follow the pattern proposed by the model (Fig34, Fig35).

For the confirmation of the confidence pool and the uncertainty pool, the BOLD signal should also follow the subjective ratings: the confidence pool should show a higher BOLD signal when subjects feel more confident (BOLD: confident (3,4) > unconfident (1,2)), and the uncertainty pool should show a higher BOLD signal when subjects feel less confident, i.e. more uncertain (BOLD: unconfident (1,2) > confident (3,4)).

The midbrain showed a higher BOLD signal when subjects rated higher confidence and a lower BOLD signal when subjects rated lower confidence, fulfilling the last criterion for a confidence encoding region. Both the right and the left DLPFC showed a higher BOLD signal when subjects rated lower confidence (higher uncertainty) and a lower BOLD signal when subjects rated higher confidence (lower uncertainty), fulfilling the last criterion for an uncertainty encoding region.


Midbrain and Confidence Formation The midbrain includes superior and inferior colliculi, tegmentum, substantia nigra, red nucleus and other nuclei and fasciculi. The basic function of midbrain includes visual coordination (superior colliculi), auditory coordination (inferior colliculi), motor coordination (substantia nigra) and gait (red nucleus).

Dopaminergic neurons in midbrain are considered to be involved in reward (Schultz et al., 1998), salience (Berridge et al., 2007), novelty (Redgrave and Gurney, 2006), working memory (Williams and Goldman-Rakic, 1995) and learning (Steinberg et al., 2013).

Recent findings also showed that dopaminergic neurons are involved in the certainty and precision of beliefs (Schwartenbeck et al., 2015).

Schwartenbeck et al. (2015) designed an event-related fMRI study in which subjects had to decide whether to accept the current offer or wait for a possibly higher offer, at the risk of losing everything. In this task, precision can be regarded as the confidence that a more valuable option will appear in the future. They found that midbrain activity was associated with the expected precision of beliefs, which can also be understood as confidence of reaching a desired goal.

Although the midbrain, especially the substantia nigra and the ventral tegmental area, is rich in dopaminergic neurons, one must note that many situations can contribute to the hemodynamic BOLD signal in this region. The BOLD signal is highly associated with the local field potential (LFP), and the local field potential is usually, but not necessarily, correlated with neuronal spiking.

A higher BOLD response in this region may be due to the following situations: glutamatergic inputs onto tonically active dopaminergic neurons, LFPs onto silent DA neurons, burst-firing DA neurons, LFPs caused by inhibition of GABAergic inputs onto DA neurons, LFPs caused by GABAergic inputs onto DA neurons, DA release by DA neurons, or DA release by tonically active DA neurons (Düzel et al., 2009).

Our results can be explained by the dopaminergic pathway, but whether the high BOLD signal in this region can truly reflect dopaminergic neurons' activity is still under debate.

Dorsolateral Prefrontal Cortex and Uncertainty The DLPFC is located in the superior and lateral part of the frontal cortex, including BA 9 and 46. The organization of the DLPFC follows a dorsal-ventral axis: the dorsal DLPFC is more involved in working memory and the ventral DLPFC is more involved in retrieving information. In addition, the DLPFC has strong connections to the limbic system as well as to other parts of the brain such as temporal, parietal and occipital regions. Patients suffering from DLPFC lesions have difficulties in planning (Chang et al., 2008), working memory, and cognitive flexibility (Monsell et al., 2003).


The DLPFC is involved in task switching (Hyafil et al., 2009), maintaining attentional demands (MacDonald et al., 2000), top-down attentional control (Milham et al., 2003), handling interfering and conflicting information (Schumacher et al., 2003), moral decisions (Glenn et al., 2011), decision-making based on unexpected stimuli (Paulus et al., 2001), and decision-making under uncertainty (Huettel et al., 2005).

Huettel et al. (2005) designed a task to investigate whether the DLPFC encodes uncertainty. Subjects were presented with a series of circles and triangles in different numbers and then had to report which form they expected to appear in the feedback phase. The task was set up such that, if all stimuli were of one type (circle or triangle), there was only 20% uncertainty; that is, in 80% of cases the type shown in the feedback phase was the same as in the stimulus presentation phase. The uncertainty increased with the variability of the types during the stimulus presentation phase. They found that DLPFC activity increased as the task became more uncertain.

Our study is compatible with these previous findings that the DLPFC is involved in decision-making under uncertainty. Based on the integrate-and-fire model, we further suggest that the DLPFC might encode uncertainty itself or be responsible for the uncertainty readout.


Conclusion In this study, we managed to recapitulate the double-U shaped confidence behavior in both the visual and the auditory decision-making task.

Our study combined the signatures of the confidence pool and the uncertainty pool predicted by the integrate-and-fire based two-layer decision-making model with the subjective confidence reports of the subjects.

Based on this methodology, we managed to identify regions which might encode confidence and uncertainty. The proposed regions are not modulated by difficulty.

The brain region that might be involved in confidence formation is the midbrain, while the right and left dorsolateral prefrontal cortex might be involved in uncertainty encoding.


References

Aitchison L, Bang D, Bahrami B, Latham PE. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making. PLoS Comput Biol. 2015 Oct 30;11(10):e1004519. doi: 10.1371/journal.pcbi.1004519. eCollection 2015 Oct.

Banko, E. M., Gal, V., Kortvelyes, J., Kovacs, G., and Vidnyanszky, Z. (2011). Dissociating the effect of noise on sensory processing and overall decision difficulty. J. Neurosci. 31, 2663–2674. doi:10.1523/JNEUROSCI.2725-10.2011

Bode, S., Bogler, C., Soon, C. S., and Haynes, J.-D. (2012). The neural encoding of guesses in the human brain. Neuroimage 59, 1924–1931. doi:10.1016/j.neuroimage.2011.08.106

Brunton BW, Botvinick MM, Brody CD. Rats and humans can optimally accumulate evidence for decision-making. Science. 2013 Apr 5;340(6128):95-8. doi: 10.1126/science.1233912.

Deco G, Rolls ET. Decision-making and Weber's law: a neurophysiological model. Eur J Neurosci. 2006 Aug;24(3):901-16.

Del Cul A, Dehaene S, Reyes P, Bravo E, Slachevsky A.Causal role of prefrontal cortex in the threshold for access to consciousness. Brain. 2009 Sep;132(Pt 9):2531-40. doi: 10.1093/brain/awp111. Epub 2009 May 11.

DiFeliceantonio AG, Berridge KC. Dorsolateral neostriatum contribution to incentive salience: Opioid or dopamine stimulation makes one reward cue more motivationally attractive than another. Eur J Neurosci. 2016 Feb 29. doi: 10.1111/ejn.13220.

Düzel E, Bunzeck N, Guitart-Masip M, Wittmann B, Schott BH, Tobler PN.Functional imaging of the human dopaminergic midbrain.Trends Neurosci. 2009 Jun;32(6):321-8. doi: 10.1016/j.tins.2009.02.005. Epub 2009 May 14.

Fleming, S. M., Whiteley, L., Hulme, O. J., Sahani, M., and Dolan, R. J. (2010). Effects of category-specific costs on neural systems for perceptual decision-making. J. Neurophysiol. 103, 3238–3247. doi:10.1152/jn.01084.2009

Gold JI, Shadlen MN. The neural basis of decision making. Annu Rev Neurosci. 2007;30:535-74. Review.

Hangya, B, Sanders, JI & Kepecs A (2015) A mathematical framework for statistical decision confidence. BioRXiv

Hebart MN, Schriever Y, Donner TH, Haynes JD. The Relationship between Perceptual Decision Variables and Confidence in the Human Brain. Cereb Cortex. 2016 Jan;26(1):118-30. doi: 10.1093/cercor/bhu181. Epub 2014 Aug 11.


Hebscher M, Barkan-Abramski M, Goldsmith M, Aharon-Peretz J, Gilboa A. Memory, Decision-Making, and the Ventromedial Prefrontal Cortex (vmPFC): The Roles of Subcallosal and Posterior Orbitofrontal Cortices in Monitoring and Control Processes. Cereb Cortex. 2015 Oct 1. pii: bhv220.

Heekeren HR, Marrett S, Bandettini PA, Ungerleider LG. A general mechanism for perceptual decision-making in the human brain. Nature. 2004 Oct 14;431(7010):859-62.

Heekeren, H., Marrett, S., Ruff, D., Bandettini, P., and Ungerleider, L. (2006). Involvement of human left dorsolateral prefrontal cortex in perceptual decision-making is independent of response modality. Proc. Natl. Acad. Sci. U.S.A. 103, 10023. doi:10.1073/pnas.0603949103

Heekeren, H. R., Marrett, S., Bandettini, P. A., and Ungerleider, L. G. (2004). A general mechanism for perceptual decision-making in the human brain. Nature 431, 859–862. doi:10.1038/nature02966

Heekeren, H. R., Marrett, S., and Ungerleider, L. G. (2008). The neural systems that mediate human perceptual decision making. Nat. Rev. Neurosci. 9, 467–479. doi:10.1038/nrn2374

Heereman J, Walter H, Heekeren HR. A task-independent neural representation of subjective certainty in visual perception. Front Hum Neurosci. 2015 Oct 9;9:551. doi: 10.3389/fnhum.2015.00551. eCollection 2015.

Ho, T. C., Brown, S., and Serences, J. T. (2009). Domain general mechanisms of perceptual decision making in human cortex. J. Neurosci. 29, 8675–8687. doi:10.1523/JNEUROSCI.5984-08.2009

Hollerman JR, Schultz W. Dopamine neurons report an error in the temporal prediction of reward during learning. Nat Neurosci. 1998 Aug;1(4):304-9.

Huettel SA, Song AW, McCarthy G. Decisions under uncertainty: probabilistic context influences activation of prefrontal and parietal cortices. J Neurosci. 2005 Mar 30;25(13):3304-11.

Hyafil A, Summerfield C, Koechlin E. Two mechanisms for task switching in the prefrontal cortex. J Neurosci. 2009 Apr 22;29(16):5135-42.

Hunt LT, Kolling N, Soltani A, Woolrich MW, Rushworth MF, Behrens TE. Mechanisms underlying cortical activity during value-guided choice. Nat Neurosci. 2012 Jan 8;15(3):470-6, S1-3. doi: 10.1038/nn.3017.

Johnson DD, Fowler JH. The evolution of overconfidence. Nature. 2011 Sep 14;477(7364):317-20. doi: 10.1038/nature10384.

Kayser, A. S., Erickson, D. T., Buchsbaum, B. R., and D'Esposito, M. (2010b). Neural representations of relevant and irrelevant features in perceptual decision making. J. Neurosci. 30, 15778–15789. doi:10.1523/JNEUROSCI.3163-10.2010


Kepecs A, Uchida N, Zariwala HA, Mainen ZF. Neural correlates, computation and behavioural impact of decision confidence. Nature. 2008 Sep 11;455(7210):227-31. doi: 10.1038/nature07200.

Kiani R, Shadlen MN.Representation of confidence associated with a decision by neurons in the parietal cortex. Science. 2009 May 8;324(5928):759-64. doi: 10.1126/science.1169405.

Komura Y, Nikkuni A, Hirashima N, Uetake T, Miyamoto A. Responses of pulvinar neurons reflect a subject's confidence in visual categorization. Nat Neurosci. 2013 Jun;16(6):749-55. doi: 10.1038/nn.3393. Epub 2013 May 12.

Lau HC, Passingham RE. Relative blindsight in normal observers and the neural correlate of visual consciousness. Proc Natl Acad Sci U S A. 2006 Dec 5;103(49):18763-8. Epub 2006 Nov 21.

MacDonald AW 3rd, Cohen JD, Stenger VA, Carter CS. Dissociating the role of the dorsolateral prefrontal and anterior cingulate cortex in cognitive control. Science. 2000 Jun 9;288(5472):1835-8.

Memoirs of the National Academy of Sciences 3:75-83 (1884)

Middlebrooks PG, Sommer MA. Neuronal correlates of metacognition in primate frontal cortex. Neuron. 2012 Aug 9;75(3):517-30. doi: 10.1016/j.neuron.2012.05.028.

Milham MP, Banich MT, Claus ED, Cohen NJ. Practice-related effects demonstrate complementary roles of anterior cingulate and prefrontal cortices in attentional control. Neuroimage. 2003 Feb;18(2):483-93.

Monsell S (2003). Task switching. Trends in Cognitive Sciences 7(3): 134–140.

Niki H, Watanabe M. Prefrontal and cingulate unit activity during timing behavior in the monkey. Brain Res. 1979 Aug 3;171(2):213-24.

Noppeney, U., Ostwald, D., and Werner, S. (2010). Perceptual decisions formed by accumulation of audiovisual evidence in prefrontal cortex. J. Neurosci. 30, 7434–7446. doi:10.1523/JNEUROSCI.0455-10.2010

Insabato A, Pannunzi M, Rolls ET, Deco G. Confidence-related decision making. J Neurophysiol. 2010 Jul;104(1):539-47. doi: 10.1152/jn.01068.2009. Epub 2010 Apr 14.

Paulus MP, Hozack N, Frank L, Brown GG. Error rate and outcome predictability affect neural activation in prefrontal cortex and anterior cingulate during decision-making. Neuroimage. 2002 Apr;15(4):836-46.

Philiastides, M. G., and Sajda, P. (2007). EEG-informed fMRI reveals spatiotemporal characteristics of perceptual decision making. J. Neurosci. 27, 13082–13091. doi:10.1523/JNEUROSCI.3540-07.2007

Redgrave P, Gurney K. The short-latency dopamine signal: a role in discovering novel actions? Nat Rev Neurosci. 2006 Dec;7(12):967-75. Epub 2006 Nov 8. Review.


Resulaj A, Kiani R, Wolpert DM, Shadlen MN. Changes of mind in decision-making. Nature. 2009 Sep 10;461(7261):263-6. doi: 10.1038/nature08275. Epub 2009 Aug 19.

Roitman JD, Shadlen MN. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci. 2002 Nov 1;22(21):9475-89.

Rolls ET, Deco G. Attention in natural scenes: Neurophysiological and computational bases. Neural Netw. 2006 Nov;19(9):1383-94. Epub 2006 Oct 2.

Rolls ET, Grabenhorst F, Deco G. Decision-making, errors, and confidence in the brain. J Neurophysiol. 2010 Nov;104(5):2359-74. doi: 10.1152/jn.00571.2010. Epub 2010 Sep 1.

Rolls ET, Grabenhorst F, Deco G.Choice, difficulty, and confidence in the brain. Neuroimage. 2010 Nov 1;53(2):694-706. doi: 10.1016/j.neuroimage.2010.06.073. Epub 2010 Jul 6.

Salinas E, Hernandez A, Zainos A, Romo R. Periodicity and firing rate as candidate neural codes for the frequency of vibrotactile stimuli. J Neurosci. 2000 Jul 15;20(14):5503-15.

Schumacher EH, Elston PA, D'Esposito M. Neural evidence for representation-specific response selection. J Cogn Neurosci. 2003 Nov 15;15(8):1111-21.

Schwartenbeck P, FitzGerald TH, Mathys C, Dolan R, Friston K. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes. Cereb Cortex. 2015 Oct;25(10):3434-45. doi: 10.1093/cercor/bhu159. Epub 2014 Jul 23.

Schwartenbeck P, FitzGerald TH, Mathys C, Dolan R, Kronbichler M, Friston K. Evidence for surprise minimization over value maximization in choice behavior. Sci Rep. 2015 Nov 13;5:16575. doi: 10.1038/srep16575.

Seyfarth RM, Cheney DL, Marler P. Monkey responses to three different alarm calls: evidence of predator classification and semantic communication. 1980 Nov 14;210(4471):801-3.

Shadlen MN, Kiani R. Decision making as a window on cognition. Neuron. 2013 Oct 30;80(3):791-806. doi: 10.1016/j.neuron.2013.10.047.

Steinberg EE, Keiflin R, Boivin JR, Witten IB, Deisseroth K, Janak PH. A causal link between prediction errors, dopamine neurons and learning. Nat Neurosci. 2013 Jul;16(7):966-73. doi: 10.1038/nn.3413. Epub 2013 May 26.

Sunaert S, Van Hecke P, Marchal G, Orban GA. Attention to speed of motion, speed discrimination, and task difficulty: an fMRI study. Neuroimage. 2000;11:612-623. doi: 10.1006/nimg.2000.0587.

Tanner WP Jr, Swets JA. A decision-making theory of visual detection. Psychol Rev. 1954 Nov;61(6):401-9

Tosoni A, Galati G, Romani GL, Corbetta M. Sensory-motor mechanisms in human parietal cortex underlie arbitrary visual decisions. Nat Neurosci. 2008;11:1446-1453. doi: 10.1038/nn.2221.

Wang XJ. Decision making in recurrent neuronal circuits. Neuron. 2008 Oct 23;60(2):215-34. doi: 10.1016/j.neuron.2008.09.034.

Williams GV, Goldman-Rakic PS. Modulation of memory fields by dopamine D1 receptors in prefrontal cortex. Nature. 1995 Aug 17;376(6541):572-5.

Appendices

Stimuli Presentation Cluster Table

• Cluster0001_39_35_10_RH_Middle_Frontal_Gyrus_ - 81 voxel(s)
  • SC0001_0001_39_35_10_RH_Inferior_Frontal_Gyrus_ - 11 voxel(s) (local max.)
  • SC0001_0002_33_32_22_RH_Middle_Frontal_Gyrus_ - 44 voxel(s) (local max.)
  • SC0001_0003_36_47_13_RH_Middle_Frontal_Gyrus_ - 17 voxel(s) (local max.)
  • SC0001_0004_30_47_22_RH_Middle_Frontal_Gyrus_ - 9 voxel(s) (local max.)
• Cluster0002_-30_20_1_LH_Insula_ - 122 voxel(s)
  • SC0002_0001_-30_20_1_LH_Insula_ - 110 voxel(s) (local max.)
  • SC0002_0002_-33_11_16_LH_Insula_ - 12 voxel(s) (local max.)
• Cluster0003_15_5_10_RH_Caudate_ - 32 voxel(s)
• Cluster0004_36_-4_40_RH_Cingulate_Gyrus_ - 1284 voxel(s)
  • SC0004_0001_36_-4_40_RH_Precentral_Gyrus_ - 95 voxel(s) (local max.)
  • SC0004_0002_-9_-1_49_LH_Cingulate_Gyrus_ - 48 voxel(s) (local max.)
  • SC0004_0003_-24_-13_46_LH_Middle_Frontal_Gyrus_ - 131 voxel(s) (local max.)
  • SC0004_0004_48_8_28_RH_Inferior_Frontal_Gyrus_ - 166 voxel(s) (local max.)
  • SC0004_0005_24_-1_55_RH_Middle_Frontal_Gyrus_ - 91 voxel(s) (local max.)
  • SC0004_0006_30_23_1_RH_Insula_ - 127 voxel(s) (local max.)
  • SC0004_0007_39_-4_52_RH_Precentral_Gyrus_ - 78 voxel(s) (local max.)
  • SC0004_0008_-42_-4_31_LH_Precentral_Gyrus_ - 71 voxel(s) (local max.)
  • SC0004_0009_6_5_52_RH_Cingulate_Gyrus_ - 58 voxel(s) (local max.)
  • SC0004_0010_45_5_37_RH_Precentral_Gyrus_ - 90 voxel(s) (local max.)
  • SC0004_0011_-9_8_43_LH_Cingulate_Gyrus_ - 28 voxel(s) (local max.)
  • SC0004_0012_6_20_37_RH_Cingulate_Gyrus_ - 87 voxel(s) (local max.)
  • SC0004_0013_51_5_19_RH_Inferior_Frontal_Gyrus_ - 47 voxel(s) (local max.)
  • SC0004_0014_-15_-16_52_LH_Middle_Frontal_Gyrus_ - 24 voxel(s) (local max.)
  • SC0004_0015_30_-7_64_RH_Middle_Frontal_Gyrus_ - 44 voxel(s) (local max.)
  • SC0004_0016_-9_20_34_LH_Cingulate_Gyrus_ - 17 voxel(s) (local max.)
  • SC0004_0017_48_14_10_RH_Precentral_Gyrus_ - 43 voxel(s) (local max.)
  • SC0004_0018_-3_-10_61_LH_Medial_Frontal_Gyrus_ - 5 voxel(s) (local max.)
  • SC0004_0019_-42_-16_49_LH_Precentral_Gyrus_ - 15 voxel(s) (local max.)
  • SC0004_0020_-60_5_34_LH_Precentral_Gyrus_ - 9 voxel(s) (local max.)
  • SC0004_0021_-45_-4_16_LH_Precentral_Gyrus_ - 5 voxel(s) (local max.)
  • SC0004_0022_51_14_46_RH_Middle_Frontal_Gyrus_ - 5 voxel(s) (local max.)
• Cluster0005_9_-16_7_RH_Thalamus - 74 voxel(s)
  • SC0005_0001_9_-16_7_RH_Thalamus_ - 67 voxel(s) (local max.)
  • SC0005_0002_9_-25_7_RH_Thalamus_ - 7 voxel(s) (local max.)
• Cluster0006_-42_-37_43_LH_Inferior_Parietal_Lobule_ - 26 voxel(s)
• Cluster0007_45_-40_7_RH_Middle_Temporal_Gyrus_ - 687 voxel(s)
  • SC0007_0001_45_-40_7_RH_Middle_Temporal_Gyrus_ - 75 voxel(s) (local max.)
  • SC0007_0002_33_-52_-23_RH_Declive - 98 voxel(s) (local max.)
  • SC0007_0003_24_-64_-20_RH_Declive - 57 voxel(s) (local max.)
  • SC0007_0004_45_-49_10_RH_Middle_Temporal_Gyrus_ - 87 voxel(s) (local max.)
  • SC0007_0005_39_-64_7_RH_Middle_Occipital_Gyrus_ - 99 voxel(s) (local max.)
  • SC0007_0006_63_-34_19_RH_Superior_Temporal_Gyrus_ - 54 voxel(s) (local max.)
  • SC0007_0007_42_-43_-8_RH_Parahippocampal_Gyrus_ - 34 voxel(s) (local max.)
  • SC0007_0008_45_-70_-5_RH_Inferior_Temporal_Gyrus_ - 66 voxel(s) (local max.)
  • SC0007_0009_9_-70_-17_RH_Culmen_ - 24 voxel(s) (local max.)
  • SC0007_0010_57_-67_4_RH_Middle_Occipital_Gyrus_ - 10 voxel(s) (local max.)
  • SC0007_0011_30_-70_13_RH_Posterior_Cingulate_ - 31 voxel(s) (local max.)
  • SC0007_0012_39_-40_-26_RH_Fusiform_Gyrus_ - 23 voxel(s) (local max.)
  • SC0007_0013_42_-58_-14_RH_Fusiform_Gyrus_ - 29 voxel(s) (local max.)
• Cluster0008_-45_-43_19_LH_Superior_Temporal_Gyrus_ - 43 voxel(s)

  • SC0008_0001_-45_-43_19_LH_Superior_Temporal_Gyrus_ - 13 voxel(s) (local max.)
  • SC0008_0002_-48_-55_4_LH_Superior_Temporal_Gyrus_ - 21 voxel(s) (local max.)
  • SC0008_0003_-57_-43_16_LH_Superior_Temporal_Gyrus_ - 9 voxel(s) (local max.)
• Cluster0009_27_-58_43_RH_Sub-Gyral_ - 443 voxel(s)
  • SC0009_0001_27_-58_43_RH_Precuneus_ - 101 voxel(s) (local max.)
  • SC0009_0002_24_-55_34_RH_Precuneus_ - 69 voxel(s) (local max.)
  • SC0009_0003_36_-49_40_RH_Inferior_Parietal_Lobule_ - 85 voxel(s) (local max.)
  • SC0009_0004_15_-61_37_RH_Precuneus_ - 71 voxel(s) (local max.)
  • SC0009_0005_39_-37_37_RH_Supramarginal_Gyrus_ - 48 voxel(s) (local max.)
  • SC0009_0006_18_-73_46_RH_Precuneus_ - 47 voxel(s) (local max.)
  • SC0009_0007_24_-76_37_RH_Cuneus_ - 22 voxel(s) (local max.)
• Cluster0010_-27_-61_-26_LH_Lingual_Gyrus_ - 696 voxel(s)
  • SC0010_0001_-27_-61_-26_LH_Declive - 47 voxel(s) (local max.)
  • SC0010_0002_-33_-52_-29_LH_Culmen - 62 voxel(s) (local max.)
  • SC0010_0003_-27_-52_46_LH_Superior_Parietal_Lobule_ - 124 voxel(s) (local max.)
  • SC0010_0004_-48_-67_1_LH_Middle_Occipital_Gyrus_ - 97 voxel(s) (local max.)
  • SC0010_0005_-39_-61_-23_LH_Declive - 58 voxel(s) (local max.)
  • SC0010_0006_-30_-61_-14_LH_Declive_ - 85 voxel(s) (local max.)
  • SC0010_0007_-39_-52_-17_LH_Fusiform_Gyrus_ - 40 voxel(s) (local max.)
  • SC0010_0008_-15_-73_46_LH_Precuneus_ - 65 voxel(s) (local max.)
  • SC0010_0009_-27_-70_16_LH_Posterior_Cingulate_ - 18 voxel(s) (local max.)
  • SC0010_0010_-15_-58_-38_LH_Anterior_Lobe - 11 voxel(s) (local max.)
  • SC0010_0011_-27_-64_25_LH_Middle_Temporal_Gyrus_ - 22 voxel(s) (local max.)
  • SC0010_0012_-15_-67_-41_LH_Pyramis - 10 voxel(s) (local max.)
  • SC0010_0013_-21_-67_37_LH_Precuneus_ - 45 voxel(s) (local max.)
  • SC0010_0014_-30_-82_16_LH_Middle_Occipital_Gyrus_ - 6 voxel(s) (local max.)
  • SC0010_0015_-15_-70_-29_LH_Declive - 6 voxel(s) (local max.)

Difficulty Map Cluster Table

• Cluster0001_33_47_22_RH_Middle_Frontal_Gyrus_ - 158 voxel(s)
  • SC0001_0001_33_47_22_RH_Middle_Frontal_Gyrus_ - 118 voxel(s) (local max.)
  • SC0001_0002_36_62_16_RH_Superior_Frontal_Gyrus_ - 40 voxel(s) (local max.)
• Cluster0002_9_29_31_RH_Medial_Frontal_Gyrus_ - 54 voxel(s)
• Cluster0003_-9_14_22_LH_Lentiform_Nucleus_ - 102 voxel(s)
  • SC0003_0001_-9_14_22_LH_Caudate_ - 28 voxel(s) (local max.)
  • SC0003_0002_-27_-7_7_LH_Lentiform_Nucleus_ - 26 voxel(s) (local max.)
  • SC0003_0003_-18_11_10_LH_Lentiform_Nucleus_ - 21 voxel(s) (local max.)
  • SC0003_0004_-24_-10_19_LH_Lentiform_Nucleus_ - 7 voxel(s) (local max.)
  • SC0003_0005_-27_-4_-2_LH_Lentiform_Nucleus_ - 8 voxel(s) (local max.)
  • SC0003_0006_-18_2_16_LH_Lentiform_Nucleus_ - 12 voxel(s) (local max.)
• Cluster0004_48_5_16_RH_Precentral_Gyrus_ - 48 voxel(s)
  • SC0004_0001_48_5_16_RH_Inferior_Frontal_Gyrus_ - 25 voxel(s) (local max.)
  • SC0004_0002_48_-7_25_RH_Insula_ - 23 voxel(s) (local max.)
• Cluster0005_12_-22_10_RH_Thalamus_ - 57 voxel(s)

Correctness Cluster Table

Correct>Incorrect

• Cluster0001_-21_-10_19_LH_Lentiform_Nucleus_ - 59 voxel(s)
  • SC0001_0001_-21_-10_19_LH_Thalamus_ - 24 voxel(s) (local max.)
  • SC0001_0002_-27_-7_7_LH_Lentiform_Nucleus_ - 21 voxel(s) (local max.)
  • SC0001_0003_-24_-1_16_LH_Lentiform_Nucleus_ - 13 voxel(s) (local max.)

• Cluster0002_6_-16_-11_RH_MidbrainRed_Nucleus - 73 voxel(s)
  • SC0002_0001_6_-16_-11_RH_MidbrainRed_Nucleus_ - 33 voxel(s) (local max.)
  • SC0002_0002_0_-28_-17_LH_MidbrainRed_Nucleus_ - 40 voxel(s) (local max.)
• Cluster0003_-30_-22_37_LH_Sub-Gyral_ - 48 voxel(s)
  • SC0003_0001_-30_-22_37_LH_Postcentral_Gyrus_ - 24 voxel(s) (local max.)
  • SC0003_0002_-21_-19_40_LH_Cingulate_Gyrus_ - 13 voxel(s) (local max.)
  • SC0003_0003_-33_-31_34_LH_Inferior_Parietal_Lobule_ - 11 voxel(s) (local max.)
• Cluster0004_3_-25_1_LH_Thalamus_ - 49 voxel(s)
  • SC0004_0001_3_-25_1_LH_Thalamus_ - 24 voxel(s) (local max.)
  • SC0004_0002_-18_-31_1_LH_Thalamus_ - 25 voxel(s) (local max.)

Incorrect>Correct

• Cluster0001_45_53_1_RH_Inferior_Frontal_Gyrus_ - 66 voxel(s)
  • SC0001_0001_45_53_1_RH_Inferior_Frontal_Gyrus_ - 9 voxel(s) (local max.)
  • SC0001_0002_45_47_10_RH_Inferior_Frontal_Gyrus_ - 43 voxel(s) (local max.)
  • SC0001_0003_36_56_16_RH_Middle_Frontal_Gyrus_ - 14 voxel(s) (local max.)
• Cluster0002_-48_35_10_LH_Inferior_Frontal_Gyrus_ - 66 voxel(s)
  • SC0002_0001_-48_35_10_LH_Inferior_Frontal_Gyrus_ - 25 voxel(s) (local max.)
  • SC0002_0002_-51_38_1_LH_Inferior_Frontal_Gyrus_ - 19 voxel(s) (local max.)
  • SC0002_0003_-42_38_-2_LH_Inferior_Frontal_Gyrus_ - 14 voxel(s) (local max.)
  • SC0002_0004_-51_29_-2_LH_Inferior_Frontal_Gyrus_ - 8 voxel(s) (local max.)
• Cluster0003_48_11_-11_RH_Superior_Temporal_Gyrus_ - 41 voxel(s)
  • SC0003_0001_48_11_-11_RH_Superior_Temporal_Gyrus_ - 33 voxel(s) (local max.)
  • SC0003_0002_42_14_-23_RH_Superior_Temporal_Gyrus_ - 8 voxel(s) (local max.)
• Cluster0004_18_2_55_RH_Middle_Frontal_Gyrus_ - 43 voxel(s)
  • SC0004_0001_18_2_55_RH_Middle_Frontal_Gyrus_ - 34 voxel(s) (local max.)
  • SC0004_0002_21_14_40_RH_Middle_Frontal_Gyrus_ - 9 voxel(s) (local max.)
• Cluster0005_60_-58_7_RH_Superior_Temporal_Gyrus_ - 70 voxel(s)
  • SC0005_0001_60_-58_7_RH_Middle_Temporal_Gyrus_ - 8 voxel(s) (local max.)
  • SC0005_0002_63_-49_19_RH_Superior_Temporal_Gyrus_ - 19 voxel(s) (local max.)
  • SC0005_0003_45_-49_25_RH_Superior_Temporal_Gyrus_ - 16 voxel(s) (local max.)
  • SC0005_0004_39_-46_16_RH_Superior_Temporal_Gyrus_ - 8 voxel(s) (local max.)
  • SC0005_0005_54_-46_28_RH_Supramarginal_Gyrus_ - 19 voxel(s) (local max.)
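
Each entry in the cluster tables above appears to follow a compact, machine-generated naming convention: a cluster or sub-cluster identifier, the peak coordinate triplet (x_y_z), the hemisphere (RH/LH), an anatomical label, the cluster extent in voxels, and an optional '(local max.)' flag marking sub-cluster local maxima. The minimal Python sketch below illustrates one way such entries could be parsed into structured records for further processing; it is not part of the analysis pipeline used in this thesis, and the ClusterEntry type and parse_entry helper are hypothetical names introduced only for illustration.

import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClusterEntry:
    label: str        # e.g. "Cluster0004" or "SC0004_0006" (illustrative field names)
    x: int            # peak coordinates as listed in the table
    y: int
    z: int
    hemisphere: str   # "RH" or "LH"
    region: str       # anatomical label with underscores replaced by spaces
    voxels: int       # cluster extent in voxels
    local_max: bool   # True when the entry is flagged "(local max.)"

# Pattern matching the assumed entry format, e.g.
#   "SC0004_0006_30_23_1_RH_Insula_ - 127 voxel(s) (local max.)"
_ENTRY = re.compile(
    r"(?P<label>(?:Cluster|SC)\d{4}(?:_\d{4})?)_"   # cluster or sub-cluster id
    r"(?P<x>-?\d+)_(?P<y>-?\d+)_(?P<z>-?\d+)_"      # peak coordinates
    r"(?P<hemi>[LR]H)_(?P<region>.+?)_?\s*-\s*"     # hemisphere and region label
    r"(?P<vox>\d+)\s*voxel\(s\)"                    # cluster size
    r"(?P<lm>\s*\(local max\.\))?"                  # optional local-maximum flag
)

def parse_entry(line: str) -> Optional[ClusterEntry]:
    """Parse one bullet of the cluster table; return None if it does not match."""
    m = _ENTRY.search(line)
    if m is None:
        return None
    return ClusterEntry(
        label=m.group("label"),
        x=int(m.group("x")), y=int(m.group("y")), z=int(m.group("z")),
        hemisphere=m.group("hemi"),
        region=m.group("region").replace("_", " "),
        voxels=int(m.group("vox")),
        local_max=m.group("lm") is not None,
    )

# Example:
# parse_entry("SC0004_0006_30_23_1_RH_Insula_ - 127 voxel(s) (local max.)")
# -> ClusterEntry(label='SC0004_0006', x=30, y=23, z=1, hemisphere='RH',
#                 region='Insula', voxels=127, local_max=True)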

I hereby declare that this Master's Thesis, 'Confidence in Visual Decision-Making', is the product of my own independent work and that no sources other than those quoted were used.

Göttingen, 31 March 2016
Liao Yi-An