On Bubbles and Drifts: Continuous attractor networks in brain models
Thomas Trappenberg
Dalhousie University, Canada
Once upon a time ... (my CANN shortlist)
Wilson & Cowan (1973) Grossberg (1973) Amari (1977) … Sompolinsky & Hansel (1996) Zhang (1997) … Stringer et al (2002)
It’s just a ‘Hopfield’ net …
[Diagram: recurrent network architecture with external input I_ext, recurrent synaptic weights w, nodes x, and output rates r_out]
In mathematical terms …
Updating network states (network dynamics)
Gain function
Weight kernel
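To make these three ingredients concrete, here is a minimal numerical sketch in Python; all names and parameter values are illustrative assumptions, not taken from the talk: a Gaussian weight kernel with global inhibition, a sigmoidal gain function, and a leaky-integrator update of the network states.

    import numpy as np

    N = 100                              # number of nodes on a periodic 1D feature space
    x = np.arange(N)

    # Weight kernel: Gaussian excitation minus global inhibition (periodic distance).
    # Amplitude, width, and inhibition strength are illustrative assumptions.
    d = np.minimum(np.abs(x[:, None] - x[None, :]), N - np.abs(x[:, None] - x[None, :]))
    w = np.exp(-d**2 / (2 * 5.0**2)) - 0.5

    def gain(h, beta=5.0, alpha=0.0):
        """Sigmoidal gain function: firing rate r_i as a function of activity h_i."""
        return 1.0 / (1.0 + np.exp(-2 * beta * (h - alpha)))

    def step(h, I_ext, tau=10.0, dt=1.0):
        """One Euler step of the leaky-integrator network dynamics."""
        r = gain(h)
        return h + dt / tau * (-h + w @ r / N + I_ext)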
Weights describe the effective interaction profile in Superior Colliculus
TT, Dorris, Klein & Munoz, J. Cog. Neuro. 13 (2001)
Network can form bubbles of persistent activity (in Oxford English: activity packets)
[Figure: node index vs. time; a brief external stimulus evolves into a persistent activity packet (end states)]
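Continuing the sketch above, bubble formation can be reproduced qualitatively by driving the network with a brief localized stimulus and then removing it; under suitable parameters the activity packet persists as an end state. The stimulus position and strength below are assumptions.

    # Drive the network with a brief localized stimulus, then remove it
    h = np.zeros(N)
    stim = 2.0 * np.exp(-(np.arange(N) - 50)**2 / (2 * 5.0**2))

    for t in range(100):
        I_ext = stim if t < 20 else 0.0   # stimulus on only for the first 20 steps
        h = step(h, I_ext)

    # After the stimulus is removed, gain(h) should still show a localized
    # packet around node 50 -- the persistent bubble (end state).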
Space is represented with activity packets in the hippocampal system
From Samsonovich & McNaughton, Path integration and cognitive mapping in a continuous attractor neural network model, J. Neurosci. 17 (1997)
There are phase transitions in the weight-parameter space
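These transitions can be probed numerically, continuing the sketch above: sweep the global inhibition strength and classify the end state of the network. The boundaries between decay to zero, a stable packet, and runaway excitation trace out the phase diagram; the parameter grid and classification thresholds are illustrative assumptions.

    # Sweep global inhibition and classify the end state of the network
    for w_inh in [0.1, 0.3, 0.5, 0.7, 0.9]:
        w = np.exp(-d**2 / (2 * 5.0**2)) - w_inh   # rebuild the kernel (step() reads w)
        h = np.zeros(N)
        for t in range(300):
            h = step(h, stim if t < 20 else 0.0)
        r = gain(h)
        # crude classification: dead / localized packet / saturated
        state = "dead" if r.max() < 0.1 else ("runaway" if r.mean() > 0.9 else "packet")
        print(f"w_inh={w_inh:.1f}: {state}")

    w = np.exp(-d**2 / (2 * 5.0**2)) - 0.5         # restore the original kernel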
CANNs work with spiking neurons
Xiao-Jing Wang, Trends in Neurosci. 24 (2001)
Shutting off also works in the rate model
[Figure: node activity over time, showing the packet being switched off]
Various gain functions are used
[Figure: end states for different gain functions]
CANNs can be trained with Hebb
Hebb: $\delta w_{ij} = k\, r_i r_j$
Training pattern: an activity packet centered on each node in turn
Normalization is important for a convergent method:
• Random initial states
• Weight normalization
[Figure: weight profile w(x,50) over training time, and the learned weight matrix w(x,y)]
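A minimal sketch of such a training procedure (the Gaussian packet shape, learning rate, and initialization are assumptions for illustration): present an activity packet centered on each node in turn, apply the Hebb rule, and renormalize the weights after each update so that the method converges.

    import numpy as np

    N, sigma, k = 100, 5.0, 0.01
    x = np.arange(N)
    w_hebb = np.abs(np.random.randn(N, N)) * 0.01   # random initial weights

    for epoch in range(10):
        for c in range(N):                          # training packet centered on node c
            dist = np.minimum(np.abs(x - c), N - np.abs(x - c))
            r = np.exp(-dist**2 / (2 * sigma**2))
            w_hebb += k * np.outer(r, r)            # Hebb: dw_ij = k * r_i * r_j
            w_hebb /= np.linalg.norm(w_hebb, axis=1, keepdims=True)  # weight normalization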
Gradient-descent learning is also possible (Kechen Zhang)
Gradient descent with regularization = Hebb + weight decay
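One way to see this correspondence, sketched for a single linear unit (not necessarily Zhang's exact formulation): minimizing a quadratic error with an $L_2$ penalty gives an update whose first term is Hebbian and whose second term is weight decay,

$$E = \tfrac{1}{2}\Big(t - \sum_j w_j r_j\Big)^2 + \tfrac{\lambda}{2}\sum_j w_j^2
\quad\Rightarrow\quad
\Delta w_j = -\epsilon\,\frac{\partial E}{\partial w_j}
= \epsilon\,\Big(t - \sum_k w_k r_k\Big)\, r_j - \epsilon\lambda\, w_j ,$$

i.e. a product of postsynaptic error and presynaptic activity, plus weight decay.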
CANNs have a continuum of point attractors
Point attractors and basin of attraction
Line of point attractors
Can be mixed: Rolls, Stringer & Trappenberg, A unified model of spatial and episodic memory, Proceedings of the Royal Society B 269:1087-1093 (2002)
Neuroscience applications of CANNs
Persistent activity (memory) and winner-takes-all (competition)
• Working memory (e.g. Compte, Wang, Brunel, etc.)
• Place and head direction cells (e.g. Zhang, Redish, Touretzky, Samsonovich, McNaughton, Skaggs, Stringer et al.)
• Attention (e.g. Olshausen, Salinas & Abbott, etc.)
• Population decoding (e.g. Wu et al., Pouget, Zhang, Deneve, etc.)
• Oculomotor programming (e.g. Kopecz & Schoener, Trappenberg)
• etc
Superior colliculus integrates exogenous and endogenous inputs
[Diagram: oculomotor pathways linking CN, SNpr, Thal, SEF, FEF, LIP, SC, RF, and the Cerebellum]
Superior Colliculus is a CANN
TT, Dorris, Klein & Munoz, J. Cog. Neuro. 13 (2001)
CANN with adaptive input strength explains express saccades
CANNs are great for population decoding (fast pattern-matching implementation)
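Continuing the earlier sketch, a population-decoding recipe might look as follows (an illustration, not the talk's implementation): feed a noisy population response in as external input, let the network relax to the nearest activity packet, and read out the packet position with a population-vector average.

    # Decode a noisy tuning-curve response (true position is an assumption)
    rng = np.random.default_rng(0)
    true_pos = 30
    d0 = np.minimum(np.abs(x - true_pos), N - np.abs(x - true_pos))
    noisy = np.exp(-d0**2 / (2 * 5.0**2)) + 0.5 * rng.standard_normal(N)

    h = np.zeros(N)
    for t in range(100):
        h = step(h, noisy if t < 20 else 0.0)   # relax to the nearest attractor

    # Population-vector readout of the packet position on the ring
    angles = 2 * np.pi * x / N
    r = gain(h)
    decoded = (np.angle(np.sum(r * np.exp(1j * angles))) % (2 * np.pi)) * N / (2 * np.pi)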
CANNs (integrators) are stiff
… and drift and jump, TT, ICONIP'98
A modified CANN solves path integration
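A common form of such a modification, in the spirit of Zhang's head-direction model (the sketch below is an assumed illustration, not the talk's specific network): add a velocity-gated asymmetric component to the weight kernel, so that the packet moves at a speed set by the velocity signal.

    # Velocity-gated asymmetric weights shift the packet (continuing the earlier sketch)
    w_sym = np.exp(-d**2 / (2 * 5.0**2)) - 0.5
    w_asym = np.roll(w_sym, 1, axis=1) - np.roll(w_sym, -1, axis=1)  # odd (asymmetric) part

    def step_pi(h, v, tau=10.0, dt=1.0):
        """Leaky-integrator step whose packet drifts at a speed set by v."""
        r = gain(h)
        return h + dt / tau * (-h + (w_sym + v * w_asym) @ r / N)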
CANNs can learn dynamic motor primitives
Stringer, Rolls, TT, de Araujo, Neural Networks 16 (2003).
Drift is caused by asymmetries
NMDA stabilization
CANNs can support multiple packets
Stringer, Rolls & TT, Neural Networks 17 (2004)
How many activity packets can be stable?
TT, Neural Information Processing - Letters and Reviews, Vol. 1 (2003)
Stabilization can be too strong
TT & Standage, CNS’04
CANNs can discover dimensionality
The model equations:

Continuous dynamics (leaky integrator):
$$\tau \frac{dh_i^{hd}(t)}{dt} = -h_i^{hd}(t) + \frac{\phi_0}{C^{hd}} \sum_j \left(w_{ij} - w^{inh}\right) r_j^{hd}(t) + I_i^{v} + \frac{\phi_1}{C^{hd \times c}} \sum_j w_{ij}^{c}\, r_j^{hd}\, r^{c} + \frac{\phi_1}{C^{hd \times ac}} \sum_j w_{ij}^{ac}\, r_j^{hd}\, r^{ac}$$

Gain function:
$$r_i = \frac{1}{1 + e^{-2\beta (h_i - \alpha)}}$$

NMDA-style stabilization:
$$r_i = \begin{cases} 1 & \text{if } r_i > 0.5 \\ r_i & \text{elsewhere} \end{cases}$$

Hebbian learning:
$$\delta w_{ij} = k\, r_i r_j, \qquad \delta w_{ij}^{c} = k\, r_i^{hd} r_j^{hd} r^{c}, \qquad \delta w_{ij}^{ac} = k\, r_i^{hd} r_j^{hd} r^{ac}$$

$h_i$ : activity of node $i$
$r_i$ : firing rate
$w_{ij}$ : synaptic efficacy matrix
$w^{inh}$ : global inhibition
$I_i^{v}$ : visual input
$\tau$ : time constant
$\phi_0, \phi_1$ : scaling factors
$C$ : #connections per node
$\beta$ : slope
$\alpha$ : threshold
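For concreteness, a direct numerical transcription of these equations might look as follows; parameter values are assumptions, and the idiothetic c/ac rotation-cell terms are omitted for brevity:

    import numpy as np

    def gain_hd(h, beta=1.0, alpha=0.0):
        """r_i = 1 / (1 + exp(-2*beta*(h_i - alpha))): slope beta, threshold alpha."""
        return 1.0 / (1.0 + np.exp(-2 * beta * (h - alpha)))

    def nmda_stabilize(r):
        """NMDA-style stabilization: clamp strongly active nodes to 1."""
        return np.where(r > 0.5, 1.0, r)

    def dh_dt(h, w, w_inh, I_v, phi0=1.0, tau=1.0):
        """tau dh_i/dt = -h_i + (phi0/C) sum_j (w_ij - w_inh) r_j + I_i^v."""
        C = len(h)                       # connections per node (fully connected here)
        r = nmda_stabilize(gain_hd(h))
        return (-h + (phi0 / C) * (w - w_inh) @ r + I_v) / tau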