Ben Cipollini & Garrison Cottrell - Computer...


A Developmental Model of Hemispheric Asymmetry of Spatial Frequencies

Ben Cipollini & Garrison Cottrell, COGSCI 2014, UC San Diego, July 25, 2014

Lateralization is intertwined with human cognition
Manual skill, language, face processing

What causes lateralization? We're not sure, but vision may be tractable.

Talk Outline

• Describe the data & existing models
• Motivate our anatomical prediction
• Define the model
• Show old & new results

Data & Models


Lateralization in vision: Two datasets, two theories

Two datasets:
• Navon figures (local vs. global) & gratings (high vs. low frequency)
• Faces vs. words
(with apologies to an exception: Hsiao et al., 2008)

Two theories:
• Top-down frequency filtering, for Navon figures & frequency gratings (Sergent, 1982; Ivry & Robertson, 1998)
  • No neural mechanism; no developmental story.
• Left & right FFA competition, for faces & words (Plaut & Behrmann, 2011)
  • No statement about neural changes; no connection to frequency filtering.

Neither model accounts for all stimuli showing asymmetry, or predicts how to find or verify a neural asymmetry.

Lateralization in vision: What's in common?

RH specializations: global-level contour; face configuration or contour.

Perhaps contour / shape processing is better in the right hemisphere! (Pitts & Martinez, 2014; Volberg, 2014)

Our Motivation (this is the challenging part)


Lateralization in vision: Long-range lateral connections?

[Figure: flattened cortex]

Good evidence that long-range lateral connections are:
• A key component in contour processing (e.g., Gilbert & Li, 2012)
• More active in situations that elicit greater lateralization (lower stimulus strength, engaging top-down attention)

Lateralization in vision: Following known data

Galuske et al. (2000): wider spacing of interconnected patches in the LH (BA22). RH: narrow; LH: wide.

Hsiao et al. (2008; 2013): the Differential Encoding model.

Our model: The differential encoding model

Differential encoding: Our hypothesis

LH (wide connections): local level, words, high frequencies
vs.
RH (narrow connections): global level, faces, low frequencies, contours

Differential encoding model: Training methods

• Create the network (850 input/output pixels, 850 hidden units; choose σ and the number of connections per unit).
• Train the network on a set of images.
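The slides do not give implementation details, but a minimal sketch of the network-construction step might look like the following. Everything specific here is an assumption: the 34 × 25 image shape (850 pixels), the number of connections per hidden unit, the use of a 2D Gaussian of width σ to pick which pixels each hidden unit sees, and the example σ values for the "wide" LH vs. "narrow" RH networks.

```python
import numpy as np

def sample_connections(img_shape=(34, 25), n_conns=15, sigma=5.0, seed=None):
    """For each hidden unit (one per pixel location), sample `n_conns` input
    pixels with probability falling off as a 2D Gaussian of width `sigma`
    around the unit's location.  Returns a boolean connectivity mask of
    shape (n_hidden, n_pixels)."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:img_shape[0], 0:img_shape[1]]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)   # (n_pixels, 2)
    n_pixels = coords.shape[0]                            # 850 for a 34 x 25 grid
    mask = np.zeros((n_pixels, n_pixels), dtype=bool)
    for h in range(n_pixels):                             # one hidden unit per location
        d2 = ((coords - coords[h]) ** 2).sum(axis=1)
        p = np.exp(-d2 / (2.0 * sigma ** 2))
        p[h] = 0.0                                        # no self-connection
        p /= p.sum()
        chosen = rng.choice(n_pixels, size=n_conns, replace=False, p=p)
        mask[h, chosen] = True
    return mask

# "LH: wide" vs. "RH: narrow" connection spreads (sigma values are illustrative).
lh_mask = sample_connections(sigma=8.0, seed=0)
rh_mask = sample_connections(sigma=4.0, seed=0)
```

The masks would then gate the weights of an otherwise ordinary image-reconstructing (autoencoder-style) network with 850 inputs, 850 hidden units, and 850 outputs, trained on the image set.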

Differential encoding model: Analysis methods

• Create the network (850 input/output pixels, 850 hidden units; choose σ and the number of connections per unit).
• Train the network on a set of images.
• Present an image and compute:
  • the output image (for spatial frequency analysis);
  • the hidden unit activations (used as input to train a separate classification network on a behavioral task).
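The slides describe comparing the spatial frequency content of the output images ("compare power spectrum precision") but do not specify the measure. A common choice is the rotationally averaged power spectrum; the sketch below is only that generic idea, and the radial binning and the log-difference comparison at the end are assumptions.

```python
import numpy as np

def radial_power_spectrum(img):
    """Rotationally averaged power spectrum of a 2D image: a 1D curve of
    mean power per spatial-frequency ring, useful for comparing how much
    low- vs. high-frequency content each hemisphere's output preserves."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    ys, xs = np.indices(power.shape)
    r = np.hypot(ys - cy, xs - cx).astype(int)   # integer frequency ring index
    counts = np.bincount(r.ravel())
    return np.bincount(r.ravel(), weights=power.ravel()) / counts

# e.g. an "RH - LH (vs. original)" style comparison in log units:
# delta = (np.log(radial_power_spectrum(rh_output)) -
#          np.log(radial_power_spectrum(lh_output)))
```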

Previous results


Lateralization in vision: Navon figures in a target detection task

Task: Did you see a target letter? Targets: T, H. Distractors: L, F.

[Figure adapted from Sergent (1982): performance for local vs. global targets presented to the RH (LVF), LH (RVF), and CVF (both hemispheres)]

Differential encoding model: Accounting for human behavior (Sergent, 1982)

Methods (LH: wide; RH: narrow):
• Construct our networks (sample connections from different distributions for LH and RH).
• Train on Navon figures (16 stimuli; T, H, L, F at each level).
• Record hidden unit activities for each image.
• Train separate classification neural networks on Sergent's behavioral task.
(Hsiao et al., 2013)
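As a sketch of that last step: the slides only say that hidden-unit activities feed a separate classification network, without giving an architecture, so the stand-in below is a plain logistic-regression classifier with made-up hyperparameters (learning rate, epoch count), not the authors' network.

```python
import numpy as np

def train_task_classifier(hidden_acts, labels, lr=0.1, epochs=500, seed=None):
    """Map a hemisphere's hidden-unit activations for each Navon stimulus to
    a binary target-present / target-absent decision via logistic regression."""
    rng = np.random.default_rng(seed)
    X = np.asarray(hidden_acts, dtype=float)      # (n_stimuli, n_hidden)
    y = np.asarray(labels, dtype=float)           # (n_stimuli,) of 0/1
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted P(target present)
        grad = p - y                              # gradient of cross-entropy loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# One classifier per hemisphere, trained on that hemisphere's encodings;
# then compare errors for local vs. global targets against Sergent's (1982) pattern.
```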

Differential encoding model: Accounting for human behavior

Extract hidden unit representations, train LH & RH classifiers (not shown).

[Figure: human data (adapted from Sergent, 1982) vs. model data (Hsiao et al., 2013) for local vs. global targets, RH (LVF) vs. LH (RVF)]

Differential encoding model: Spatial frequency biases

Extract output images, compare power spectrum precision.

[Figure: −Δ log(power) for RH − LH (vs. original), plotted from lower to higher spatial frequencies; Hsiao et al., 2013]

Differential encoding model spatial frequency biases

31

Lower Higher

Cipollini et al. (COGSCI 2012)

Differential encoding model spatial frequency biases

31

..and a number of other results(including an interesting departure from previous models)

New results

Differential encoding model: Expt #1: Train on natural images

• Train the network on a set of natural image patches (250).
1. Use log-polar warping of images (sketched below) to simulate "cortical expansion" of the fovea in retinotopic cortex.
  • x-axis: angle (0…2π)
  • y-axis: log(radius)
2. Never re-train the network; reuse the same network for all other sets of images (hidden unit encodings, output images).
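A minimal sketch of such a log-polar warp follows. The output grid size, the radius range, and the nearest-neighbour sampling are assumptions; the original work may have used a different resampling scheme.

```python
import numpy as np

def logpolar_warp(img, out_shape=(34, 25)):
    """Resample an image onto a log-polar grid: rows sweep log(radius) from
    near the center out to the image edge, columns sweep angle (0..2*pi).
    This over-represents the center of the image, a rough stand-in for the
    cortical magnification of the fovea in retinotopic visual cortex."""
    rows, cols = out_shape
    cy, cx = (np.array(img.shape) - 1) / 2.0
    r_max = min(cy, cx)
    radii = np.exp(np.linspace(np.log(1.0), np.log(r_max), rows))  # log-spaced, avoids r = 0
    angles = np.linspace(0.0, 2 * np.pi, cols, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]   # nearest-neighbour sampling
```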

Differential encoding model: Spatial frequency biases

• Train on log-polar natural images, examine spatial frequencies.
• Without retraining the network, present other images and classify.

[Figure: RH − LH power spectra (vs. original), lower to higher frequencies; local vs. global classification]

Differential encoding model: Expt #1: Summary

• Using a more realistically trained network, we are able to replicate some of our previous findings.

Differential encoding model: Expt #2: Developmental model

Validation? Origins? We can address these with a developmental approach!

Previously:
• vary connection distributions
• measure spatial frequencies

Developmental approach:
• vary spatial frequencies
• measure connection distributions

Differential encoding model: Pruning interacts with acuity

During development:
• Visual acuity / contrast sensitivity is poor in infancy, but it improves over time (e.g., Peterzell et al., 1995; Atkinson et al., 1997).
• Patchy connectivity matures via pruning & strengthening of connections due to visual experience (e.g., Katz & Callaway, 1992; Burkhalter et al., 1993).
• The RH begins maturing earlier than the LH (e.g., Geschwind & Galaburda, 1985; Hellige, 1993; Chiron et al., 1997).

So the RH will prune connections under blurrier (lower spatial frequency) input.

Differential encoding model: Vary frequencies, measure connections

Methods (see the sketch after this list):
• Start the RH and LH networks with equivalent connections.
• Train on natural images; the RH receives more blurring of the images than the LH.
  • Blur schedule over training epochs (1-10 … 21-30 … 31-40 … 51-end): images low-pass filtered at ≤3.0 cpd, ≤6.5 cpd, ≤16 cpd, then full fidelity, with the RH more blurred than the LH at each stage.
• While training, remove the weakest connections.

[Figure: RH & LH connection patterns before and after training]
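A rough sketch of that training regime follows. The network interface (`train_epoch`, `weights`, `mask`), the Gaussian-blur stand-in for low-pass filtering in cycles per degree, the blur schedule, and the pruning fraction are all assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def developmental_training(net, images, n_epochs=60, prune_frac=0.05,
                           blur_schedule=lambda epoch: max(0.0, 4.0 - epoch / 10.0)):
    """Present progressively less-blurred images while pruning the weakest
    surviving connections, mimicking improving acuity during development."""
    for epoch in range(n_epochs):
        sigma = blur_schedule(epoch)              # larger sigma = lower acuity
        blurred = [gaussian_filter(img, sigma) if sigma > 0 else img
                   for img in images]
        net.train_epoch(blurred)                  # hypothetical training call
        surviving = np.abs(net.weights[net.mask])
        if surviving.size:
            cutoff = np.quantile(surviving, prune_frac)
            net.mask &= np.abs(net.weights) > cutoff   # drop the weakest links

# The RH network would be trained on a blurrier schedule than the LH network
# (cf. the <=3.0 / <=6.5 / <=16 cpd stages above), e.g.:
# developmental_training(rh_net, images, blur_schedule=lambda e: max(0.0, 6.0 - e / 10.0))
```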

Differential encoding model: Post-training connection distributions

Compile the connection distributions (a sketch follows below).

[Figure: RH (more blurred) distribution minus LH (less blurred) distribution = RH − LH difference]

Same association as in our previous studies!
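"Compiling the connection distributions" could be as simple as histogramming the lengths of each hemisphere's surviving connections and differencing the two curves. The sketch below assumes the hypothetical `rh_mask` / `lh_mask` from the earlier connection-sampling sketch, the same assumed 34 × 25 grid, and illustrative bin edges.

```python
import numpy as np

def connection_lengths(mask, coords):
    """Euclidean lengths of the surviving (unpruned) connections in one
    hemisphere's connectivity mask; `coords[i]` is unit i's (y, x) location."""
    hidden_idx, input_idx = np.nonzero(mask)
    return np.linalg.norm(coords[hidden_idx] - coords[input_idx], axis=1)

ys, xs = np.mgrid[0:34, 0:25]                      # same assumed 34 x 25 grid as above
coords = np.stack([ys.ravel(), xs.ravel()], axis=1)

bins = np.linspace(0.0, 30.0, 21)                  # shared, illustrative bin edges
rh_hist, _ = np.histogram(connection_lengths(rh_mask, coords), bins, density=True)
lh_hist, _ = np.histogram(connection_lengths(lh_mask, coords), bins, density=True)
diff = rh_hist - lh_hist                           # the "RH - LH" curve on the slide
```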

Differential encoding model: Need for interhemispheric competition?

[Figure: original vs. developmental results]

Lateralization is weaker in the developmental model than in previous work; interhemispheric competition can amplify the effects.

Differential encoding model: Post-training changes

The RH is specialized; the LH is not.

Differential encoding model: Expt #2: Summary

• We validated the associations between:
  • shorter connections & lower-frequency encoding;
  • longer connections & higher-frequency encoding.
• We showed that the model RH was changed from the original, while the LH changed much less.

Connectivity asymmetry: Conclusions

• We postulate that the RH has shorter long-range lateral connections in retinotopic visual areas (V4v / LOC).
• This connection asymmetry:
  • can account for many behavioral asymmetries;
  • leads to an RH bias for encoding low spatial frequency information (though we suggest perhaps only contour information);
  • may appear during typical human development.

Connectivity asymmetry: Next steps

• Unify models: replicate all behavioral results in the developmental model.
• Spatial frequency processing: is the RH bias specific to contours and configurations, or general to all LSF information?
• Interhemispheric transfer: what role does it play in development and during central vision?

Thank you!

• Collaborators: Gary Cottrell, Janet Hsiao
• Funding sources: NSF/TDLC, CARTA
• Cognitive Science Society, for the perception/action modeling award
• Robert J. Glushko and Pamela Samuelson Foundation, for the student travel award