
Mathematical Neuroscience

Course: Dr. Conor Houghton, 2010. Typeset: Cathal Ormond

May 6, 2011

Contents

1 Introduction
1.1 The Brain
1.2 Pyramidal Neuron
1.3 Signalling
1.4 Connection Between Neurons

2 Electrodynamics
2.1 Introduction
2.2 Equilibrium Potential
2.3 Nernst Equation
2.4 Gates and Transient Channels
2.4.1 Persistent Channels
2.4.2 Transient Channels
2.5 Hodgkin-Huxley Model
2.6 Integrate-and-Fire Models
2.7 Synapses
2.8 Post-Synaptic Conductances

3 Coding
3.1 Spike Trains
3.2 Tuning Curves
3.3 Spike-Triggered Averages
3.4 Linear Models
3.5 Problems with Linear Model
3.6 Rate Based Spiking

Chapter 1

Introduction

1.1 The Brain

The brain consists of Neurons (grey matter) and of Glial Cells (white matter). Neurons participate actively in signalling and in computations. Glial cells offer structural support and have a metabolic and modulating role. We will be dealing mostly with neurons.

1.2 Pyramidal Neuron

Table 1.1: Parts of a Pyramidal Neuron

• The Soma is the cell body. It is the site of metabolic processes and contains the nucleus. This is where the incoming signals are integrated.


• Dendrites carry signals into the soma. They are passive, in the sense that the signals diffuse, and they are quite short (approx. 4 mm).

• Axons carry signals away from the soma, by active signalling. They are quite long (approx. 40 mm).

1.3 Signalling

• dendrites: passive, signal comes in

• soma: sums up the signals by time weighting (see the sketch after this list):

$$\underbrace{\tau\dot V}_{\text{linear relaxation to }0} = -V + \underbrace{\text{signals}}_{\text{voltage changed by incoming signals}}$$

where V is the voltage and τ is a constant.

• axons: actively propagate signals. If the voltage in the soma passes some threshold, a spike (or voltage pulse) is sent down the axon.
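The soma equation above is easy to integrate numerically. Below is a minimal sketch in Python, assuming a forward-Euler scheme; the time step, time constant and square input pulse are illustrative choices of mine, not values from the notes.

```python
import numpy as np

# Forward-Euler integration of tau * dV/dt = -V + signal(t).
dt, tau, T = 0.1, 10.0, 100.0                      # ms (illustrative)
t = np.arange(0.0, T, dt)
signal = np.where((t > 20) & (t < 60), 1.0, 0.0)   # a square input pulse

V = np.zeros_like(t)
for k in range(1, len(t)):
    dV = (-V[k - 1] + signal[k - 1]) / tau   # linear relaxation towards the input
    V[k] = V[k - 1] + dt * dV
# V rises towards the input with time constant tau, then relaxes back to 0.
```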

1.4 Connection Between Neurons

An axon terminates at a Synapse. When a spike arrives at a synapse, the voltage in the dendrite changes:

Table 1.2: A Synapse


Table 1.3: When a spike arrives at a synapse

The chemical gradients involved are sodium (Na+), potassium (K+), calcium (Ca2+) and chloride (Cl−). These gradients are maintained by ion pumps - tiny machines which consume energy while transporting ions. Ion Gates are ion-selective gated (i.e. open or closed) channels. The gate is usually controlled by voltage gradients or chemical signals. There are several types of channels:

• Passive Channels: allow specific ions to leak through

• Pumps: pump some ions in and some ions out, e.g. sodium in, calcium out.

• Gated Channels: can open or close in response to voltage gradients, concentration gradients or chemical signals.

Note: the word gradient is used here, but it is slightly misleading, in that the voltages and concentrations vary discontinuously across the membrane.

Spikes, aka action potentials, are voltage pulses which propagate along the axon. Depolarization is where the current flowing into the cell changes the membrane potential to less negative/positive values. The opposite is Hyperpolarization. If a neuron is depolarized enough to raise the membrane potential above a certain threshold, the neuron generates an action potential, called a Spike, which has an amplitude of ∼ 100mV and lasts ∼ 1ms.

For a few milliseconds after a spike, it may be virtually impossible to have another spike. This is called the Absolute Refractory Period. For a longer interval (∼ 10ms), known as the Relative Refractory Period, it is more difficult - but not impossible - to evoke an action potential. This is important, as action potentials are the only type of membrane potential fluctuation which can propagate over large distances.


Table 1.4: Voltage of a Spike

In the synapse, the voltage transient of the action potential opens ion channels, producing an influx of Ca2+ that prompts the release of a neurotransmitter. This binds to receptors at the postsynaptic (signal receiving) side of the synapse, causing ion-conducting channels to open. When a spike arrives at a synapse, it changes the voltage in the dendrite.

A spike is non-linear. The energy for a spike comes from the energy stored in the membrane by the gradient, so the membrane sustains the spike. Spikes propagate without dissipation. At branches, the spike continues equally down each branch. If the pump shuts off, the cell can still produce ∼ 70,000 spikes.

When a spike arrives at the synapse, the vesicles migrate towards the cleft and some of them burst. This migration is due to an increase in calcium levels. Channels open and ions can pass into or out of the dendrite, causing the change in voltage.


Chapter 2

Electrodynamics

2.1 Introduction

The neuron relies on moving ions around using voltages and dissipation (i.e. Brownian motion of ions and atoms). All particles have thermal energy, and this energy on average is proportional to the temperature; in particular, at temperature T we have

$$\langle E_{ion}\rangle = K_BT$$

where ⟨E_ion⟩ is the average energy per ion and K_B is the Boltzmann constant. We will calculate the typical voltage of a neuron so that the voltage gaps will roughly have this potential energy.

A Mole of something is a specific number of constituent particles, namely Avogadro's number L = 6.022 × 10²³. The thermal energy of a mole is given by RT, where R is the gas constant given by R = LK_B = 8.31 J/(mol·K). We need the thermal energy to be similar to the potential gap due to voltages in the neuron. If you have a potential gap of V_T, then the energy required to move a charge q (the charge of one proton) across the gap is qV_T. Similarly, the energy required to move one mole of charged ions against a potential of V_T is FV_T, where F is Faraday's constant, given by F = qL. Balancing these, we get

$$qV_T = K_BT \implies V_T = \frac{K_BT}{q} = \frac{RT}{F} \approx 27\,\text{mV}$$
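This figure is easy to check numerically. A minimal sketch in Python; the temperature of 310 K (body temperature) is an assumption of mine, since the notes do not state one.

```python
# V_T = k_B * T / q at body temperature (~310 K).
kB = 1.380649e-23   # J/K, Boltzmann constant
q  = 1.602177e-19   # C, charge of one proton
T  = 310.0          # K, assumed body temperature
VT = kB * T / q
print(f"V_T = {VT * 1e3:.1f} mV")   # ~26.7 mV, i.e. the ~27 mV quoted above
```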

The intracellular resistance to current flow can cause substantial differences in the membrane potential measured in different parts of a neuron. Long, narrow stretches of dendritic or axonal cable have a high resistance. Neurons that have few of these may have relatively uniform membrane potentials across their surface. These are called Electrotonically Compact neurons.

Assume that the voltage is the same everywhere in a cell. This is equivalent to saying that the time-scales of dissipation across the cell are small compared to the other time-scales involved. This is a harmless enough assumption, as we are really dealing with a small section of membrane. If we have a voltage across a membrane, the charge is stored on the membrane. The amount of charge stored, Q, depends linearly on the voltage, so

Q = CV


where C is the capacitance, given by C = cA, where c is the specific capacitance per area and A is the area of the membrane. The current through the membrane is given by

$$I = \frac{dQ}{dt} = C\frac{dV}{dt}$$

Ohm’s Law tells us that

$$I = \frac{1}{R}V$$

i.e. the current is linearly proportional to the voltage, where G = 1/R is the conductance and R is the resistance. We also have the specific resistance and specific conductance, given respectively by

$$R = \frac{r}{A}, \qquad G = \frac{g}{A}$$

We may combine the above equations to see the following:

$$[RC] = [T], \qquad [R] = [V][I^{-1}] = [V][Q^{-1}][T], \qquad [C] = [Q][V^{-1}]$$

We wish to have one equation for V, but before we do this we need to think about chemical gradients, i.e. the differences in ion concentrations across the membrane. If there is no voltage gap and high conductivity, then there is a sodium current in the absence of a potential. Ohm's Law can be modified in the presence of concentration differences:

$$V - E_i = IR$$

where E_i is the Reversal Potential that would be required to prevent the net diffusion across the barrier. This value will change as a current changes concentrations. We will ignore this and assume that the current is small.

2.2 Equilibrium Potential

Equilibrium Potential is the voltage gap required to prevent a current in the presence of a chemical gradient. The equilibrium potential is given by the Nernst equation, which we will derive. Imagine ions of charge zq, where q is the charge of a single proton and z = 1 for Na+. These ions will need energy −zqV to cross the barrier. What is the probability that the ion has that energy? The distribution of energy is given by the Boltzmann distribution, i.e.

$$p(\varepsilon) = \frac{1}{Z}\exp\left(-\frac{\varepsilon}{K_BT}\right), \qquad P(\varepsilon_1 < \text{energy of ion} < \varepsilon_2) = \int_{\varepsilon_1}^{\varepsilon_2} \frac{1}{Z}e^{-\varepsilon/K_BT}\,d\varepsilon$$

where Z is a normalization constant (not to be confused with the valence z).


This implies that

$$1 = \int_0^\infty \frac{1}{Z}\exp\left(-\frac{\varepsilon}{K_BT}\right)d\varepsilon = \left[-\frac{K_BT}{Z}\exp\left(-\frac{\varepsilon}{K_BT}\right)\right]_0^\infty = \frac{K_BT}{Z}$$

which gives us that Z = K_BT. We also have

$$P(\varepsilon > -zqV) = \frac{1}{K_BT}\int_{-zqV}^\infty \exp\left(-\frac{\varepsilon}{K_BT}\right)d\varepsilon = \frac{1}{K_BT}\left[-K_BT\exp\left(-\frac{\varepsilon}{K_BT}\right)\right]_{-zqV}^\infty = \exp\left(\frac{zqV}{K_BT}\right) = \exp\left(\frac{zV}{V_T}\right)$$

where V_T = K_BT/q is the typical voltage.

2.3 Nernst Equation

Consider a cell barrier. Inside, only a fraction exp(zE/V_T) of the ions have enough energy to diffuse out to the exterior. Outside, all ions have enough energy to diffuse into the interior. Let p_i and p_e be the concentrations of ions in the interior and exterior respectively. Assume, near equilibrium, that the diffusion flow is proportional to the concentration of energetically available ions. Then:

$$p_i\exp\left(\frac{zE}{V_T}\right) = p_e \;\Rightarrow\; \exp\left(\frac{zE}{V_T}\right) = \frac{p_e}{p_i} \;\Rightarrow\; E = \frac{V_T}{z}\log\left(\frac{p_e}{p_i}\right)$$

The latter is the Nernst Equation. Each ion has a different equilibrium potential; Na+ has a potential of ∼ 70mV. The current for sodium is given by

$$g_{Na}(V - E_{Na})$$


and so the Hodgkin-Huxley Equation is given by:

$$C\frac{dV}{dt} = -i + \frac{I_e}{A}$$

where I_e is an electrode current. This accounts for experimental situations with injected current; in the brain, this is replaced by a synaptic current. The current for each ion is then given by Ohm's law, i_x = (V − E_x)/r_x. We will frequently make use of the conductance g_x = 1/r_x. This gives:

$$i = \underbrace{g_l(V - E_l)}_{\text{leak current}} + g_{Na}(V - E_{Na}) + g_K(V - E_K)$$

g_l is the conductance of all permanently open channels, whereas g_Na, g_K are the conductances through the gated channels: channels that generate a particular conductance that allows only one type of ion to pass through.
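The reversal potentials E_x appearing in these currents come from the Nernst equation of the previous section. A minimal sketch in Python; the ionic concentrations below are illustrative textbook-style values, not numbers taken from the notes.

```python
import math

def nernst(p_e, p_i, z=1, VT=0.027):
    """Nernst equilibrium potential E = (V_T / z) log(p_e / p_i), in volts."""
    return (VT / z) * math.log(p_e / p_i)

# Assumed Na+ concentrations (mM): high outside, low inside.
E_Na = nernst(p_e=145.0, p_i=15.0)   # ~ +0.06 V: Na+ current depolarizes
# Assumed K+ concentrations (mM): low outside, high inside.
E_K = nernst(p_e=4.0, p_i=140.0)     # ~ -0.10 V: K+ current hyperpolarizes
```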

Models that describe the membrane potential of a neuron by just a single variable V are called Single-Compartment Models. The basic equation for all single-compartment models is, as above:

$$c_m\frac{dV}{dt} = -i_m + \frac{I_e}{A}$$

Table 2.1: The Equivalent Circuit of a Neuron

The structure of such a model is the same as an electrical circuit, called an Equivalent Circuit, which consists of a capacitor and a set of variable and non-variable resistors corresponding to the different membrane conductances. The membrane resistance is given by Ohm's law, V = IR_m. Note that we will often use specific resistances and capacitances, denoted by small letters: R_m ≡ r_m/A and C_m ≡ c_mA, where A is the surface area of the neuron.


2.4 Gates and Transient Channels

2.4.1 Persistent Channels

Voltage-dependent channels open and close as a function of the membrane potential. A channel that acts like it has a single type of gate is a Persistent Channel; opening of the gate is called activation of the conductance. We denote the probability that a gate is open by P_k.

Table 2.2: The Equivalent Circuit of a Neuron

The opening of a persistent gate may involve a number of different changes. In general, if k independent, identical events are required for a channel to open, P_k can be written as

$$P_k = n^k$$

where n is the probability that any one of the k independent gating events has occurred. Note that for the Hodgkin-Huxley equation, we have k = 4. The rate at which the open probability for a gate changes is given by

$$\frac{dn}{dt} = \alpha_n(1 - n) - \beta_n n$$

where α_n is the opening rate and β_n is the closing rate. Simplifying, we have

$$\tau_n\frac{dn}{dt} = n_\infty - n, \qquad \tau_n = \frac{1}{\alpha_n + \beta_n}, \quad n_\infty = \frac{\alpha_n}{\alpha_n + \beta_n}$$

We can look at this as an inhomogeneous first-order ODE:

$$\tau\frac{df}{dt} = f_\infty - f$$


Assume that f_∞ and τ are constant. Then solving this, we have

$$f(t) = f_\infty + (f_0 - f_\infty)\exp\left(-\frac{t}{\tau}\right)$$

2.4.2 Transient Channels

Table 2.3: The Equivalent Circuit of a Neuron

The activation is coupled to a voltage sensor, and acts like the gate in a persistent channel. A second gate - the inactivation gate - can block the channel once it is open. Only the middle panel corresponds to an open, ion-conducting state. Since the first gate acts like the one in the persistent channel, we can say that

$$P(\text{gate 1 is open}) = m^k$$

where m is an activation variable similar to n from before and k is an integer. The ball acts as the second gate. We have

$$P(\text{ball does not block the channel pore}) = h$$

h is called the Inactivation Variable. The activation and inactivation variables m and h are distinguished by having opposite voltage dependencies. For the transient channel to conduct, both gates must be open; assuming they act independently, this has probability

$$P_{Na} = m^3h$$


As with the persistent channels, we get

$$\frac{dm}{dt} = \alpha_m(1 - m) - \beta_m m, \qquad \frac{dh}{dt} = \alpha_h(1 - h) - \beta_h h$$

Functions m_∞ and h_∞ describing the steady-state activation and inactivation levels, and voltage-dependent time constants for m and h, can be defined as for persistent channels. To turn on a conductance maximally, it may first be necessary to hyperpolarize the neuron below its resting potential and then depolarize it. Hyperpolarization raises the value of the inactivation variable h; this is also called Deinactivation.

The second step - depolarization - increases the value of m, the activation variable. Only when m and h are both non-zero is the conductance turned on. Note that the conductance can be reduced in magnitude by either decreasing m or h. Decreasing h is called Inactivation and decreasing m is called Deactivation.

2.5 Hodgkin-Huxley Model

The revised Hodgkin-Huxley model is described by

$$i = \bar g_l(V - E_l) + \bar g_{Na}m^3h(V - E_{Na}) + \bar g_Kn^4(V - E_K)$$

where the bar indicates a constant maximal conductance. This is constructed by writing the membrane current as the sum of a leakage current, a delayed-rectifier K+ current and a transient Na+ current. A positive electrode current is injected into the model, causing an initial rise of the membrane potential. When the potential has risen to about −50mV, the m variable that describes the activation of the Na+ conductance suddenly jumps from nearly 0 to nearly 1. Initially, the h variable (expressing the degree of inactivation of the Na+ conductance) is 0.6. Thus, for a brief period, both m and h are significantly different from 0. This causes a large influx of Na+ ions, producing a sharp spike of inward current which causes the membrane potential to rise rapidly to around 50mV, near the Na+ equilibrium potential.

The rapid increase in both V and m is due to a positive feedback effect. Depolarization of the membrane causes m to increase, and the resulting activation of the Na+ conductance makes V increase. This drives h → 0, causing the Na+ current to shut off. The rise in V also activates the K+ conductance by driving n towards 1. This increases the K+ current, which drives the membrane potential back down to negative values. After V has returned to the reset value, the gates return to their reset states, i.e.

$$n_\infty \sim 0, \qquad m_\infty \sim 0, \qquad h_\infty \sim 1$$

so n, m and h relax to these values. This is not instantaneous, and so there is a refractory period.
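The whole model is straightforward to simulate. A minimal sketch in Python using forward Euler; the rate functions and parameter values are the standard textbook ones (e.g. the conventions in Dayan and Abbott), not values from these notes, so treat them as assumptions.

```python
import numpy as np

# Hodgkin-Huxley with textbook parameters; units: mV, ms, mS/cm^2, uA/cm^2.
c_m, g_l, g_Na, g_K = 1.0, 0.3, 120.0, 36.0
E_l, E_Na, E_K = -54.4, 50.0, -77.0

def rates(V):
    """Opening (alpha) and closing (beta) rates for the n, m, h gates."""
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = 0.125 * np.exp(-(V + 65) / 80)
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
    return a_n, b_n, a_m, b_m, a_h, b_h

dt, steps, I_e = 0.01, 5000, 10.0       # time step (ms), steps, injected current
V, n, m, h = -65.0, 0.32, 0.05, 0.6     # approximate resting values
trace = []
for _ in range(steps):
    a_n, b_n, a_m, b_m, a_h, b_h = rates(V)
    n += dt * (a_n * (1 - n) - b_n * n)   # persistent K+ gate, P_K = n^4
    m += dt * (a_m * (1 - m) - b_m * m)   # Na+ activation
    h += dt * (a_h * (1 - h) - b_h * h)   # Na+ inactivation (the "ball")
    i_m = g_l*(V - E_l) + g_Na*m**3*h*(V - E_Na) + g_K*n**4*(V - E_K)
    V += dt * (-i_m + I_e) / c_m          # c_m dV/dt = -i_m + I_e/A
    trace.append(V)
# trace now shows a regular train of ~100 mV action potentials.
```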

The Connor-Stevens Model provides an alternative description of action-potential generation. The membrane current in this model is given by:

$$i_m = \bar g_l(V - E_l) + \bar g_{Na}m^3h(V - E_{Na}) + \bar g_Kn^4(V - E_K) + \bar g_Aa^3b(V - E_A)$$


This model has an additional K+ conductance (called the A-current) which is transient. The A-current causes the firing rate to rise continuously from 0 and to increase roughly linearly for currents over the range shown; this is known as a Type I neuron. If the A-current is switched off, the firing rate is much higher and jumps discontinuously to a non-zero value (Type II). The A-current also delays the occurrence of the first action potential. (The A-current lowers the internal voltage and reduces spiking.)

This model can be extended by including a transient Ca2+ conductance, e.g. in thalamocortical neurons. A transient Ca2+ conductance acts - in many ways - like a slower version of the transient Na+ conductance that generates action potentials. Instead of producing an action potential, a transient Ca2+ conductance generates a slower transient depolarization, sometimes called a Ca2+ spike. This causes the neuron to fire a burst of action potentials, which are Na+ spikes riding on the slower Ca2+ spike.

Neurons can fire action potentials either at a steady rate or in bursts, even without current injection or synaptic input. Periodic bursting gives rise to transient Ca2+ spikes with action potentials riding on them. The Ca2+ current during these bursts causes a dramatic increase in intracellular Ca2+ concentration. This activates a Ca2+-dependent K+ current, which - along with the inactivation of the Ca2+ current - terminates the burst. The interburst interval is determined primarily by the time it takes for the intracellular Ca2+ concentration to return to a low value, which deactivates the Ca2+-dependent K+ current, allowing another burst to be generated.

Membrane potentials can vary considerably over the surface of the cell membrane, especially for neurons with long and narrow processes, or if we consider rapidly changing membrane potentials. The attenuation and delay within a neuron are most severe when electrical signals travel down the long, narrow, cable-like structures of dendritic or axonal branches. For this reason, the mathematical analysis of signal propagation within neurons is called Cable Theory. The voltage drop across a cable segment of length ∆x, radius a and intracellular resistivity r_L is

$$\Delta V = V(x + \Delta x) - V(x) = -R_LI_L$$

where

$$R_L = \frac{r_L\Delta x}{\pi a^2}, \qquad I_L = -\frac{\pi a^2\Delta V}{r_L\Delta x} = -\frac{\pi a^2}{r_L}\frac{\partial V}{\partial x}$$

Many axons are covered with an insulating sheath of myelin, except at certain gaps, called the Nodes of Ranvier, where there is a high density of Na+ channels. There is no spike at the myelinated sections, since the myelin acts as an insulator, and so the signal travels as a passive current. The signal therefore gets weaker but travels faster, and is actively regenerated at the nodes of Ranvier. Action potential propagation is thus sped up.

2.6 Integrate-and-Fire Models

The mechanisms we have described by which K+ and Na+ conductances produce action potentials are well understood and can be modelled quite accurately. However, neuron models can be simplified, and simulation can be drastically accelerated, if these biophysical mechanisms are not explicitly included in the model. Integrate-And-Fire models do this by stating that an action potential


occurs whenever the membrane potential of the model neuron reaches a threshold value V_t. After the spike, the potential is reset to a value V_r, where V_r < V_t. I and F models only model the subthreshold membrane potential dynamics.

In the simplest model, all active membrane conductances are ignored, including synaptic inputs, and the entire membrane conductance is modelled as a single passive leakage term:

$$i_m = g_l(V - E_l)$$

This is known as the Leaky Integrate-and-Fire model. The membrane potential in this model is determined by

$$c_m\frac{dV}{dt} = -g_l(V - E_l) + \frac{I_e}{A}$$

If we multiply across by r_m = 1/g_l and define τ_m = r_mc_m, then

$$\tau_m\frac{dV}{dt} = E_L - V + R_mI_e$$

To generate action potentials in the model, we augment this by the rule that whenever V reaches the threshold value V_t, an action potential is fired and the potential is reset to V_r. When I_e = 0 we have V = E_L, and so E_L is the resting potential. To get the membrane potential, we simply integrate the above equation. The firing rate of an I and F model in response to a constant injected current can be computed analytically:

$$V(t) = E_L + R_mI_e + (V(0) - E_L - R_mI_e)\exp\left(-\frac{t}{\tau_m}\right)$$

This is valid only as long as V(t) < V_t. Suppose that at t = 0 an action potential has just fired, so V(0) = V_r. If t_isi is the time to the next spike, then we have

$$V_t = V(t_{isi}) = E_L + R_mI_e + (V_r - E_L - R_mI_e)\exp\left(-\frac{t_{isi}}{\tau_m}\right)$$

$$\Rightarrow \exp\left(-\frac{t_{isi}}{\tau_m}\right) = \frac{V_t - E_L - R_mI_e}{V_r - E_L - R_mI_e}$$

$$\Rightarrow -\frac{t_{isi}}{\tau_m} = \log\left(\frac{V_t - E_L - R_mI_e}{V_r - E_L - R_mI_e}\right)$$

$$\Rightarrow t_{isi} = \tau_m\log\left(\frac{R_mI_e + E_L - V_r}{R_mI_e + E_L - V_t}\right)$$

whenever R_mI_e > V_t − E_L; otherwise, t_isi = ∞. We call t_isi the Interspike Interval for the constant I_e. Alternatively, we can calculate the interspike-interval firing rate of the neuron, r_isi:

$$r_{isi} = \frac{1}{t_{isi}} = \frac{1}{\tau_m}\left[\log\left(\frac{R_mI_e + E_L - V_r}{R_mI_e + E_L - V_t}\right)\right]^{-1}$$

whenever R_mI_e > V_t − E_L; otherwise, r_isi = 0. For sufficiently large values of I_e (i.e. R_mI_e ≫ V_t − E_L), we can use the linear approximation of the logarithm (log(1 + z) ≃ z) to see that

$$r_{isi} \simeq \frac{R_mI_e + E_L - V_t}{\tau_m(V_t - V_r)}$$


which shows that the firing rate grows linearly with I_e for large I_e. Real neurons exhibit spike-rate adaptation, in that t_isi lengthens over time when a constant current is injected into the cell, before settling to a steady-state value, i.e. stabilizing.
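The threshold-and-reset rule is simple to simulate, and a simulation can be checked against the analytic t_isi above. A minimal sketch in Python; all parameter values are illustrative choices, not values from the notes.

```python
import numpy as np

# Leaky integrate-and-fire with constant input; units: ms and mV.
tau_m, E_L, V_t, V_r = 10.0, -70.0, -54.0, -80.0
RI = 20.0                  # R_m * I_e, chosen so that R_m*I_e > V_t - E_L
dt = 0.01

V, t, spikes = V_r, 0.0, []
while t < 500.0:
    V += dt * (E_L - V + RI) / tau_m     # tau_m dV/dt = E_L - V + R_m I_e
    if V >= V_t:                         # threshold crossing
        spikes.append(t)
        V = V_r                          # reset
    t += dt

t_isi = tau_m * np.log((RI + E_L - V_r) / (RI + E_L - V_t))
print(np.diff(spikes).mean(), "vs analytic", t_isi)   # both ~20.1 ms
```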

So far, our passive I and F model has been based on two separate approximations:

• a highly simplified description of the action potential

• linear approximation for the total membrane current.

We'll keep the first assumption, but we can still model the membrane current in as much detail as necessary. We can model spike-rate adaptation by including an additional current in the model:

$$\tau_m\frac{dV}{dt} = E_L - V - r_mg_{sra}(V - E_K) + \underbrace{R_mI_e}_{\text{inputs}}$$

where E_K ≃ E_L ≃ V_r and g_sra is the spike-rate adaptation conductance. It has been modelled as a K+ conductance, so when activated it hyperpolarizes the neuron, i.e. moves it away from firing. We'll assume that g_sra relaxes exponentially to 0 with a time constant τ_sra, i.e.

$$\tau_{sra}\frac{dg_{sra}}{dt} = -g_{sra}$$

Clearly, a non-zero g_sra changes the equilibrium potential:

$$\tau_m\frac{dV}{dt} = E_L + r_mg_{sra}E_K - (1 + r_mg_{sra})V + \text{inputs}$$

$$\Rightarrow \frac{\tau_m}{1 + r_mg_{sra}}\frac{dV}{dt} = \frac{E_L + r_mg_{sra}E_K}{1 + r_mg_{sra}} - V + \text{reduced inputs}$$

The refractory effect is not included in the basic I and F model. Refractoriness can be incorporated by adding a conductance similar to g_sra described above, but with a much smaller τ and a much larger ∆g (conductance increment).
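Spike-rate adaptation can be bolted onto the integrate-and-fire loop above with a few lines. A minimal sketch in Python; delta_g, tau_sra and the other values are illustrative assumptions. The refractory variant just mentioned is the same loop with a much smaller tau_sra and much larger delta_g.

```python
import numpy as np

# Integrate-and-fire with spike-rate adaptation; units: ms and mV.
tau_m, E_L, V_t, V_r, RI, dt = 10.0, -70.0, -54.0, -80.0, 20.0, 0.01
tau_sra, delta_g, E_K, r_m = 100.0, 0.06, -70.0, 10.0

V, g_sra, t, spikes = V_r, 0.0, 0.0, []
while t < 500.0:
    V += dt * (E_L - V - r_m * g_sra * (V - E_K) + RI) / tau_m
    g_sra += dt * (-g_sra / tau_sra)   # tau_sra dg/dt = -g_sra between spikes
    if V >= V_t:
        spikes.append(t)
        V = V_r
        g_sra += delta_g               # each spike increments the conductance
    t += dt

print(np.diff(spikes))   # interspike intervals lengthen, then settle
```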

2.7 Synapses

Synaptic transmission begins when a spike invades the pre-synaptic terminal and activates Ca2+ channels, leading to a rise in the concentration of Ca2+. Ca2+ enters the bouton, and the vesicles migrate to the cell membrane and fuse. They then burst, releasing neurotransmitter into the cleft. The transmitter diffuses across the cleft and binds to receptors on the post-synaptic neuron, leading to the opening of ion channels that modify the conductance of the post-synaptic neuron. The neurotransmitter is then reabsorbed and the channels close again. We want to model this mathematically.

A Ligand-Gated Channel is one which opens or closes in response to the binding of a neurotransmitter to a receptor. There are two broad classes of synaptic conductances:

• Ionotropic Gates - where the neurotransmitter binds directly to the gate (fast, simple).


• Metabotropic Gates - where the neurotransmitter binds to receptors that are not on the gate, but where the binding initiates a biochemical process that opens the gate and has other effects.

The two major neurotransmitters that are found in the brain are:

• Glutamate: an excitatory transmitter. Its principal ionotropic receptors are AMPA and NMDA.

• GABA (Gamma-aminobutyric acid): an inhibitory transmitter.

A synapse will have either glutamate or GABA, and then a mixture of the corresponding gates. As with a voltage-dependent conductance, a synaptic conductance can be written as a product of a maximal conductance and an open-channel probability:

$$g_s = \bar g_sP$$

where P is the probability that an individual gate is open. P can be expressed as a product of two terms that reflect processes occurring on the pre- and post-synaptic sides of the synapse:

$$P = P_rP_s$$

where P_r is the probability that the transmitter is released by the pre-synaptic terminal following the arrival of an action potential. P_r varies to take account of vesicle depletion; here, we let P_r = 1.

2.8 Post-Synaptic Conductances

In a simple model of a directly activated receptor channel, the transmitter interacts with the channel through a binding reaction in which k transmitter molecules bind to a closed receptor and open it; in the reverse reaction, the transmitter molecules unbind from the receptor and it closes. This is modelled by

$$\frac{dP_s}{dt} = \alpha_s(1 - P_s) - \beta_sP_s$$

where β_s is the constant closing rate and α_s is the opening rate, which depends on the concentration of the transmitter available for binding, i.e. α_s depends on the chance that a neurotransmitter molecule is close enough to a receptor to bind. We'll assume that α_s ≫ β_s, so we can ignore β_s in our initial calculations.

When an action potential invades the pre-synaptic terminal, the transmitter concentration rises and α_s grows rapidly, causing P_s to increase. P_s rises towards 1 with a time-scale τ_α = 1/α_s. Assume that the spike arrives at t = 0, and that t ∈ [0, T]. Then we have

$$P_s(t) = 1 + (P_s(0) - 1)\exp\left(-\frac{t}{\tau_\alpha}\right)$$

If P_s(0) = 0, then

$$P_s(t) = 1 - \exp\left(-\frac{t}{\tau_\alpha}\right)$$


so the largest change in P_s occurs in this case. Following the release of the transmitter, the transmitter concentration reduces rapidly. This sets α_s = 0, and P_s then decays exponentially with timescale τ_β = 1/β_s. Typically τ_β ≫ τ_α. The open probability takes its maximum value at t = T, and then for t ≥ T decays exponentially at a rate determined by β_s:

$$P_s(t) = P_s(T)\exp(-\beta_s(t - T))$$

If P_s(0) = 0 (as it will be if there is no synaptic release immediately before the release at t = 0), the maximum value for P_s is

$$P_{max} = 1 - \exp\left(-\frac{T}{\tau_\alpha}\right)$$

This gives us, from beforehand, that

$$P_s(T) = P_s(0) + P_{max}(1 - P_s(0))$$

i.e. if a spike arrives at a time t + T, then

$$P_s(t + T) = P_s(t) + \Delta P_s \quad\text{where}\quad \Delta P_s = P_{max}(1 - P_s(t))$$

One simple model leaves out the T-scale dynamics:

$$\tau_\beta\frac{dP_s}{dt} = -P_s \qquad (\text{note: } \tau_s = \tau_\beta)$$

The model discussed above, i.e.

$$P_s(t) = \begin{cases} 1 + (P_s(0) - 1)\exp\left(-\frac{t}{\tau_\alpha}\right) & t \in [0, T] \\ P_s(T)\exp(-\beta_s(t - T)) & t \geq T \end{cases}$$

can be used to describe synapses with slower rise times, but there are many other models. One way of describing both the rise and fall of a synaptic conductance is to express P_s as the difference of two exponentials:

$$P_s(t) = \beta P_{max}\left(\exp\left(-\frac{t}{\tau_1}\right) - \exp\left(-\frac{t}{\tau_2}\right)\right)$$

where τ_1 and τ_2 are two time-scales in the response to a spike arriving when P_s ∼ 0, and β is some normalization constant. This model allows for a smooth rise as well as a smooth fall. Another popular synaptic response is given by the α-function:

$$P_s(t) = \frac{P_{max}t}{\tau_s}\exp\left(1 - \frac{t}{\tau_s}\right)$$

This model starts at 0, reaches its peak value at t = τ_s and decays with a time constant τ_s. Again, this is favoured for its simplicity and because it somewhat resembles the actual conductance response, albeit with too slow a rise. As with the previous model, to implement it properly, it should be understood as the solution to a differential equation.
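Both wave-forms are one-liners to evaluate. A minimal sketch in Python; P_max and the time constants are illustrative, and the normalization β is chosen (my assumption) so that the difference-of-exponentials curve peaks at P_max.

```python
import numpy as np

t = np.arange(0.0, 50.0, 0.1)                    # ms after the spike
P_max, tau_s, tau_1, tau_2 = 1.0, 5.0, 5.0, 1.0

# Difference of two exponentials; beta set so the peak value is P_max.
t_peak = tau_1 * tau_2 / (tau_1 - tau_2) * np.log(tau_1 / tau_2)
beta = 1.0 / (np.exp(-t_peak / tau_1) - np.exp(-t_peak / tau_2))
P_diff = beta * P_max * (np.exp(-t / tau_1) - np.exp(-t / tau_2))

# Alpha function: zero at t = 0, peak P_max at t = tau_s, decay constant tau_s.
P_alpha = P_max * (t / tau_s) * np.exp(1 - t / tau_s)
```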


Chapter 3

Coding

3.1 Spike Trains

A Spike Train is a series of spike times, and is the result of extracellular recording. It is believed that the spike times are the information-carrying component of spike trains. If we ignore the brief duration of an action potential, we can just count the spikes. For n spikes, denote these times by t_i for i = 1, ..., n. The trial is taken to start at time 0 and end at time T. The neural response function is then

$$\rho(t) = \sum_{i=1}^n \delta(t - t_i)$$

The spike count n is given by

$$n = \int_0^T \rho(\tau)\,d\tau$$

We denote the spike count rate by r, which is given by

$$r = \frac{n}{T} = \frac{1}{T}\int_0^T \rho(\tau)\,d\tau$$

The next step is to discretize time and produce a histogram, i.e. divide T into subintervals of the form [nδt, (n + 1)δt] and define

$$r(t) = \int_{n\delta t}^{(n+1)\delta t} \rho(\tau)\,d\tau \qquad t \in [n\delta t, (n + 1)\delta t]$$

i.e. r(t) is the number of spikes in the corresponding interval. We repeat over numerous trials to see that the firing rate is the average

$$\langle r(t)\rangle = \left\langle \int_{n\delta t}^{(n+1)\delta t} \rho(\tau)\,d\tau \right\rangle_{\text{trials}}$$

A more sophisticated point of view would be to have a moving window:

$$r(t) = \int_{t-\delta t/2}^{t+\delta t/2} \rho(\tau)\,d\tau$$


which gives rise to a histogram without the rigid discretisation. Again, with multiple trials, the average is

$$\langle r(t)\rangle = \left\langle \frac{1}{\delta t}\int_{t-\delta t/2}^{t+\delta t/2} \rho(\tau)\,d\tau \right\rangle_{\text{trials}}$$

Thus, r(t)δt is the number of spikes in [t − δt/2, t + δt/2], and if you average over trials, ⟨r(t)δt⟩ is the average number of spikes that fall in that interval. We regard a neuron as having a firing rate

$$r(t) = \lim_{\substack{\#\text{trials}\to\infty\\ \delta t\to 0}} \left\langle \frac{1}{\delta t}\int_{t-\delta t/2}^{t+\delta t/2} \rho(\tau)\,d\tau \right\rangle_{\text{trials}}$$

which may be approximated by ⟨r(t)⟩. The firing rate for a set of repeated trials at a resolution ∆t is defined as

$$r(t) = \frac{1}{\Delta t}\int_t^{t+\Delta t} \langle\rho(\tau)\rangle\,d\tau$$

so r(t)∆t is the number of spikes occurring between t and t + ∆t, i.e. the fraction of trials on which a spike occurred in that window.

Note: r is the spike count rate, r(t) is the firing rate, and ⟨r⟩ is the average firing rate, equal to

$$\langle r\rangle = \frac{\langle n\rangle}{T} = \frac{1}{T}\int_0^T \langle\rho(\tau)\rangle\,d\tau$$

In practice, the firing rate is something we calculate from a finite number of trials, and what matters is the usefulness of a given prescription for calculating the firing rate in terms of how well it can be modelled. The basic point is that since spike trains are so variable, they don't give us a good way to describe the response. In this description, the firing rate is a smoothed, averaged quantity which can easily fit into models and be compared to experiments.

A simple way of extracting an estimate of the firing rate from a spike train is to divide time into discrete bins of duration ∆t, count the number of spikes within each bin and divide by ∆t, so

$$r_{app}(t) = \sum_{i=1}^n w(t - t_i)$$

where w is the window function defined by

$$w(t) = \begin{cases} \frac{1}{\Delta t} & \text{if } t \in [-\frac{\Delta t}{2}, \frac{\Delta t}{2}] \\ 0 & \text{otherwise} \end{cases}$$

Alternatively, we have

$$r_{app}(t) = \int_{-\infty}^\infty w(\tau)\rho(t - \tau)\,d\tau = (w * \rho)(t)$$

This integral is called the Linear Filter, and the window function (also called the Filter Kernel) specifies how the neural response function evaluated at time t − τ contributes to the firing rate


approximated at time t. This use of a sliding window avoids the arbitrariness of the bin placement and produces a rate that might appear to have better temporal resolution.

One thing that's commonly done is to replace the filter kernel with a smoother function, like a Gaussian:

$$w(t) = \frac{1}{\sqrt{2\pi}\sigma_w}\exp\left(-\frac{t^2}{2\sigma_w^2}\right)$$

In such a filter calculation, the choice of filter forms part of the prescription. Other choices include

$$w(t) = \begin{cases} \frac{1}{t_0}e^{-t/t_0} & \text{if } t > 0 \\ 0 & \text{otherwise} \end{cases} \qquad\qquad w(t) = \begin{cases} \alpha^2te^{-\alpha t} & \text{if } t > 0 \\ 0 & \text{otherwise} \end{cases}$$

There is no experimental evidence to show that any given filter is better than another, nor is there any derivation from principle. The choice of ∆t or σ_w does matter, and is usually chosen by validating against the data.
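The kernel estimate r_app(t) = Σ_i w(t − t_i) is a few lines of code. A minimal sketch in Python with the Gaussian kernel; the spike times and σ_w are made-up illustrative values.

```python
import numpy as np

spike_times = np.array([12.0, 15.5, 40.2, 41.0, 43.7, 80.1])   # ms, illustrative
sigma_w, dt = 3.0, 0.1
t = np.arange(0.0, 100.0, dt)

def w(u):
    """Gaussian filter kernel of width sigma_w (unit area, so the rate
    comes out in spikes per ms)."""
    return np.exp(-u**2 / (2 * sigma_w**2)) / (np.sqrt(2 * np.pi) * sigma_w)

# Superpose one kernel per spike: r_app(t) = sum_i w(t - t_i).
r_app = sum(w(t - ti) for ti in spike_times)
```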

3.2 Tuning Curves

Neuronal responses typically depend on many different properties of the stimulus. A simple way of characterizing the response of a neuron is to count the number of action potentials fired during the presentation of a stimulus, and then repeat (an infinite number of times) and average. A Tuning Curve is the graph of the average ⟨r⟩ against some experimental parameter.

Response tuning curves characterize the average response of a neuron to a given stimulus. We now consider the complementary procedure of averaging the stimuli that produce a given response. The resulting quantity, called the spike-triggered average stimulus, provides a useful way of characterizing neuronal selectivity. STAs are computed using stimuli characterized by a parameter s(t) that varies over time.

3.3 Spike-Triggered Averages

This is another way of describing the relationship between stimulus and response, and it will help us better understand the linear models. The Spike-Triggered Average Stimulus, denoted C(τ), is the average value of the stimulus at a time interval τ before a spike is fired. It is given by

$$C(\tau) = \left\langle \frac{1}{n}\sum_{i=1}^n s(t_i - \tau) \right\rangle$$


In other words, for a spike occurring at time t_i, we determine s(t_i − τ), sum over all n spikes in a trial and divide the total by n. In addition, we average over trials, so

$$C(\tau) = \left\langle \frac{1}{n}\sum_{i=1}^n s(t_i - \tau)\right\rangle \simeq \frac{1}{\langle n\rangle}\left\langle \sum_{i=1}^n s(t_i - \tau)\right\rangle = \frac{1}{\langle n\rangle}\left\langle \int_0^T \rho(t)s(t - \tau)\,dt\right\rangle = \frac{1}{\langle n\rangle}\int_0^T \langle\rho(t)\rangle s(t - \tau)\,dt = \frac{1}{\langle n\rangle}\int_0^T r(t)s(t - \tau)\,dt$$

which is the stimulus-response correlation.
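Computed from data, C(τ) is just an average of stimulus snippets aligned on spikes. A minimal sketch in Python on synthetic data; the stimulus, the fake spike train and all the sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_lags = 1.0, 50                    # sample spacing and number of lags
s = rng.standard_normal(10_000)         # white-noise stimulus, one sample per dt
spike_idx = np.flatnonzero(rng.random(len(s)) < 0.02)   # fake spike times
spike_idx = spike_idx[spike_idx >= n_lags]              # need a full history

# Average the stimulus over the n_lags samples preceding each spike.
C = np.mean([s[i - n_lags:i + 1][::-1] for i in spike_idx], axis=0)
# C[k] approximates the average stimulus a time k*dt before a spike.
```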

Correlation functions are a useful way of determining how two quantities that vary over time are related to each other. The correlation function of the firing rate and the stimulus is

$$Q_{rs}(\tau) = \frac{1}{T}\int_0^T r(t)s(t + \tau)\,dt$$

From this we can see that

$$C(\tau) = \frac{1}{\langle r\rangle}Q_{rs}(-\tau)$$

Because the argument of this correlation function is −τ, the STA stimulus is often called the reverse correlation function. The STA stimulus is widely used to study and characterize neural responses. Because C(τ) is the average value of the stimulus at a time τ before a spike, larger values of τ represent times further in the past relative to the triggering spike. For this reason, we plot the STA with the time axis going backward compared to the normal convention. This allows the average spike-triggered stimulus to be read off from the plots in the usual left-to-right order.

The results obtained by spike-triggered averaging depend on the particular set of stimuli used during an experiment. There are certain advantages to using a stimulus that is uncorrelated from one time to the next, e.g. a white-noise stimulus. This condition can be expressed using the stimulus-stimulus correlation function:

$$Q_{ss}(\tau) = \frac{1}{T}\int_0^T s(t)s(t + \tau)\,dt$$

If you had a white-noise stimulus, you might expect that for negative values of τ we have C(τ) = 0: negative τ corresponds to the stimulus after the spike, which cannot have influenced it.

3.4 Linear Models

From before, we noted that

$$C(\tau) = \frac{1}{\langle r\rangle}Q_{rs}(-\tau)$$


We can see that C(τ) depends only on Q_rs. However, a better description would take Q_ss into account. In the formula for Q_rs, we cannot be sure if a non-zero value reflects a statistical relationship between s(t) and r(t + τ) or, for example, one between s(t) and s(t + τ) and another between s(t + τ) and r(t + τ). Thus, the problems with the STA are:

1. no accounting for Q_ss

2. it only depends on 2nd order statistics

3. no accounting for spike-spike effects

Linear Models can solve the first problem, but not the other two. We consider:

$$\tilde r(t) = r_0 + \int_0^\infty D(\tau)s(t - \tau)\,d\tau$$

where r_0 is a constant which accounts for any background firing when s = 0. D(τ) is a weighting factor that determines how strongly, and with what sign, the value of s(t − τ) affects the firing rate at time t. The integral in this equation is a linear filter of the same form as those defined before.

In the linear model, a neuron has a kernel associated with it, and the predicted firing rate is the convolution of the kernel and the stimulus. We can think of this equation as being the first two terms in a Volterra Expansion - the functional equivalent of the Taylor series expansion used to generate power series approximations of functions:

$$\tilde r(t) = r_0 + \int_0^\infty D_1(\tau)s(t - \tau)\,d\tau + \int_0^\infty\!\!\int_0^\infty D_2(\tau_1, \tau_2)s(t - \tau_1)s(t - \tau_2)\,d\tau_1\,d\tau_2 + \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty D_3(\tau_1, \tau_2, \tau_3)s(t - \tau_1)s(t - \tau_2)s(t - \tau_3)\,d\tau_1\,d\tau_2\,d\tau_3 + \cdots$$

The question now is what D(τ) is and how to calculate it. The standard method is reverse correlation. Without loss of generality, we'll absorb r_0 into r̃ and r to let r_0 = 0, or simply consider r → r − r_0. We wish to choose the kernel D to minimize the squared difference between the estimated response to a stimulus and the actual measured response, averaged over the duration of the trial (T), i.e.

$$\varepsilon = \frac{1}{T}\int_0^T (r(t) - \tilde r(t))^2\,dt$$

This is called the Objective Function. To optimize this, we would solve $\frac{\partial\varepsilon}{\partial D(\tau)} = 0$ as a problem in the calculus of variations. However, instead, we want to phrase the problem as a simple variation. We send D(τ) to D(τ) + δD(τ) and calculate the corresponding variation in ε. Let ε′ be the new error under this translation:

$$\varepsilon' = \frac{1}{T}\int_0^T \left(r^2 - 2r\tilde r' + (\tilde r')^2\right)dt$$


where the ′ denotes the new estimate, not the derivative. We know that r̃′ is given by:

$$\tilde r'(t) = \int_0^\infty (D(\tau) + \delta D(\tau))s(t - \tau)\,d\tau = \tilde r(t) + \int_0^\infty \delta D(\tau)s(t - \tau)\,d\tau$$

If we let ε′ = ε + δε, we have that

$$\delta\varepsilon = \frac{1}{T}\int_0^T \left(r^2 - 2r\tilde r' + (\tilde r')^2\right)dt - \frac{1}{T}\int_0^T \left(r^2 - 2r\tilde r + \tilde r^2\right)dt$$

$$= \frac{1}{T}\int_0^T \left[2r(\tilde r - \tilde r') + \left((\tilde r')^2 - \tilde r^2\right)\right]dt$$

$$= \frac{1}{T}\int_0^T \left[-2r\int_0^\infty \delta D(\tau)s(t - \tau)\,d\tau + 2\tilde r\int_0^\infty \delta D(\tau)s(t - \tau)\,d\tau\right]dt + O(\delta D^2)$$

$$= \frac{2}{T}\int_0^\infty \delta D(\tau)\int_0^T s(t - \tau)(\tilde r(t) - r(t))\,dt\,d\tau$$

where we change the order of integration. For the optimal D(τ), we want δε = 0 for every variation δD, so we need to have

$$\int_0^T s(t - \tau)(\tilde r(t) - r(t))\,dt = 0$$

which is an integral equation for D(τ). We have that

$$\int_0^T s(t - \tau)r(t)\,dt = \int_0^T s(t - \tau)\tilde r(t)\,dt = \int_0^T s(t - \tau)\int_0^\infty D(\sigma)s(t - \sigma)\,d\sigma\,dt = \int_0^\infty D(\sigma)\int_0^T s(t - \tau)s(t - \sigma)\,dt\,d\sigma$$

Now we consider

$$\int_0^T s(t - \tau)s(t - \sigma)\,dt$$

Letting t′ = t − σ, we have

$$\int_0^T s(t - \tau)s(t - \sigma)\,dt = \int_0^T s(t' + \sigma - \tau)s(t')\,dt' = TQ_{ss}(\sigma - \tau)$$

Recalling that

$$Q_{rs}(-\tau) = \frac{1}{T}\int_0^T r(t)s(t - \tau)\,dt = \frac{1}{T}\int_0^\infty D(\sigma)\int_0^T s(t - \tau)s(t - \sigma)\,dt\,d\sigma$$

we conclude that

$$Q_{rs}(-\tau) = (D * Q_{ss})(\tau)$$

since (it can be shown) Q_ss is an even function of τ. This method is known as reverse correlation because the firing rate-stimulus correlation function is evaluated at −τ in this equation.

What happens if the stimulus is white noise? If knowing s(t) tells you nothing about s(t + τ) for τ ≠ 0, then it can be argued that Q_ss(τ) = σ²δ(τ) for T → ∞, where σ² is the variance of s(t) at a point. Substituting this into the above equation, we have

$$Q_{rs}(-\tau) = (D * Q_{ss})(\tau) = \int_0^\infty Q_{ss}(\tau - \tau')D(\tau')\,d\tau' = \sigma^2\int_0^\infty \delta(\tau - \tau')D(\tau')\,d\tau' = \sigma^2D(\tau)$$

whence we conclude

$$D(\tau) = \frac{1}{\sigma^2}Q_{rs}(-\tau)$$

Previously, we saw that the STA was approximated by

$$C(\tau) \simeq \frac{1}{\langle r\rangle}Q_{rs}(-\tau)$$

so

$$D(\tau) \simeq \frac{\langle r\rangle C(\tau)}{\sigma^2}$$

Thus, the linear kernel is approximately equal to the STA for a white-noise stimulus.
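Continuing the STA sketch from section 3.3, this relation turns the spike-triggered average into a kernel estimate in two lines; the names s, C, spike_idx and dt are the ones assumed in that earlier sketch.

```python
# For a white-noise stimulus, the optimal kernel is the scaled STA:
# D(tau) ~ <r> C(tau) / sigma^2.
r_mean = len(spike_idx) / (len(s) * dt)   # mean firing rate <r>
D = r_mean * C / s.var()                  # s.var() estimates sigma^2
```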

We can also think of the linear kernel as encoding information about how the neuron responds to the stimulus in a way that separates the response from the structure of the stimulus. This makes it useful in situations where we need to use a highly structured stimulus to study the sort of behaviour the neuron has when performing its computational tasks. Calculating D(τ) can be tricky:

• At its simplest, we consider the Fourier Transform, and recall that

$$\mathcal F(f * g) = \mathcal F(f)\,\mathcal F(g)$$

So we have

$$\mathcal F[Q_{rs}(-\tau)] = \mathcal F[Q_{ss}(\tau)]\,\mathcal F[D(\tau)]$$

giving

$$D(\tau) = \mathcal F^{-1}\left[\frac{\mathcal F[Q_{rs}(-\tau)]}{\mathcal F[Q_{ss}(\tau)]}\right]$$

However, this will not always work, as our convolution is not quite correct. Also, F[Q_ss(τ)] is sometimes quite small, so the division can give rise to errors (see the sketch after this list).


• Another approach is to rewrite the equations as a matrix equation by discretizing time:

$$Q_{rs}(-\tau) \mapsto Q_{rs}(-n\delta\tau) = Q^{rs}_n \text{ (a vector)}, \qquad D(\tau) \mapsto D_n = D(n\delta\tau)$$

We then write

$$Q^{ss}_{nn'} = Q_{ss}(n\delta\tau - n'\delta\tau)$$

and we can see that

$$Q^{rs}_n = \underbrace{Q^{ss}_{nn'}D_{n'}}_{Q_{ss}*D}$$

so

$$D_{n'} = (Q^{ss}_{nn'})^{-1}Q^{rs}_n$$

It turns out that Q^{ss}_{nn'} is always invertible, but often its eigenvalues are small, and these dominate the inverse matrix.
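Both routes run into the same small-denominator problem, which is usually handled by regularizing. A minimal sketch of the Fourier route in Python; the ridge term eps is my own assumption for keeping the division stable, not something prescribed in the notes.

```python
import numpy as np

def kernel_via_fft(Q_rs_rev, Q_ss, eps=1e-6):
    """Estimate D from Q_rs(-tau) and Q_ss(tau) sampled on the same grid.

    Divides the Fourier transforms, with a small ridge term eps guarding
    against the near-zero values of F[Q_ss] mentioned above."""
    num = np.fft.fft(Q_rs_rev)
    den = np.fft.fft(Q_ss)
    D_hat = num * np.conj(den) / (np.abs(den)**2 + eps)   # regularized division
    return np.real(np.fft.ifft(D_hat))
```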

3.5 Problems with Linear Model

We have the following failures of the linear model:

1. The objective function ε is not chosen from principle - there is a subtle model dependence in what we did.

2. s(t) introduces more model dependence in real applications.

3. There are no spikes.

Ideally, we would take a stimulus, estimate the resulting spike train based on a given model, and compare it to the experimentally determined spike train. Here, we estimate the firing rate based on our model, and compare it to the firing rate of the spike train determined by experiment. We consider the following questions:

1. Are there better models that produce spikes instead of firing rates?

2. Alternatively, can we supplement a firing rate model with a model that gives spikes?

3. How does a spike model relate to the definition of ε?

3.6 Rate Based Spiking

The idea is that the probability of a spike depends only on the current value of the firing rate. This gives us a Poisson process. For small time intervals ∆t, the probability of a spike at a time t ∈ [t_0, t_0 + ∆t] is

$$P(t_0 < t < t_0 + \Delta t) = r(t_0)\Delta t \quad\text{as } \Delta t \to 0$$

We're interested in the probability of a given spike train, so

$$P(t_1^1 < t_1 < t_1^2,\; t_2^1 < t_2 < t_2^2,\; \ldots,\; t_n^1 < t_n < t_n^2) = \int_{t_1^1}^{t_1^2}\int_{t_2^1}^{t_2^2}\cdots\int_{t_n^1}^{t_n^2} P[t_1, t_2, \ldots, t_n]\,dt_1\,dt_2\cdots dt_n$$


Here, P[t_1, t_2, ..., t_n] is the probability density function. Note that if we have n spikes, the probability distribution for those spikes occurring at times (t_1, t_2, ..., t_n) is the sum of the probabilities of the spikes occurring at (t_{σ(1)}, t_{σ(2)}, ..., t_{σ(n)}), where σ is a permutation of {1, 2, ..., n}. Each spike has a constant distribution over [0, T], so we get

$$P[t_1, t_2, \ldots, t_n] = \frac{n!}{T^n}P[n]$$

where P[n] is the probability that n spikes occur. To calculate P[n], we divide the interval into M subintervals of width ∆t = T/M. We can assume that ∆t is sufficiently small that we never get two spikes within any one subinterval, because at the end of the calculation we take ∆t → 0. The probability of a spike occurring in one specific subinterval is r∆t, and the probability of n spikes occurring in n given subintervals is (r∆t)ⁿ. Similarly, the probability that a spike doesn't occur in a subinterval is (1 − r∆t), so the probability of having the remaining M − n subintervals without spikes is (1 − r∆t)^{M−n}. Finally, the number of ways of putting n spikes into M subintervals is $\binom{M}{n}$. This gives us

$$P[n] = \lim_{M\to\infty}\binom{M}{n}\left(\frac{rT}{M}\right)^n\left(1 - \frac{rT}{M}\right)^{M-n}$$

To take the limit, we note that as ∆t → 0, M grows without bound because M∆t = T. Because n is fixed, we can write M − n ≃ M = T/∆t. Using this approximation and defining ε = −r∆t, we find that

$$\lim_{\Delta t\to 0}(1 - r\Delta t)^{M-n} = \lim_{\varepsilon\to 0}\left[(1 + \varepsilon)^{1/\varepsilon}\right]^{-rT} = e^{-rT}$$

Also, for large enough M, $\frac{M!}{(M - n)!} \simeq M^n$, so we have

$$P[n] = \frac{(rT)^n}{n!}e^{-rT}$$

which is the Poisson Distribution. We can compute the mean and standard deviation of this distribution:

$$\langle n\rangle = \sum_{n=0}^\infty nP[n] = \sum_{n=1}^\infty n\frac{(rT)^n}{n!}e^{-rT} = \left(\sum_{n=1}^\infty \frac{(rT)^{n-1}}{(n-1)!}\right)rTe^{-rT} = \left(\sum_{m=0}^\infty \frac{(rT)^m}{m!}\right)rTe^{-rT} = rT$$


Note that

$$\langle n^2\rangle = \sum_{n=0}^\infty n^2P[n] = \sum_{n=0}^\infty n(n-1)P[n] + \sum_{n=0}^\infty nP[n] = \sum_{n=2}^\infty \frac{n(n-1)(rT)^n}{n!}e^{-rT} + \langle n\rangle = (rT)^2e^{-rT}\sum_{n=2}^\infty \frac{(rT)^{n-2}}{(n-2)!} + rT = (rT)^2 + rT$$

so the variance is given by

$$\sigma^2 = \langle n^2\rangle - \langle n\rangle^2 = rT$$
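This mean-equals-variance property is easy to verify numerically. A minimal sketch in Python; the rate, duration and trial count are illustrative.

```python
import numpy as np

# Spike counts from a homogeneous Poisson process: mean and variance both rT.
rng = np.random.default_rng(0)
r, T, trials = 20.0, 1.0, 100_000      # Hz, seconds, number of trials
n = rng.poisson(r * T, size=trials)
print(n.mean(), n.var())               # both close to rT = 20
```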

In general, the ratio of the variance to the mean is known as the Fano Factor, F = σ²/⟨n⟩. For a homogeneous Poisson process, F = 1. However, even with a homogeneous stimulus, the Fano factor measured for real spike trains is usually greater than 1. We have the following considerations:

• Homogeneous Poisson spiking doesn't describe spike trains.

• More interesting evidence is provided by the distribution of the inter-spike intervals. The probability density of the time interval between adjacent spikes is called the inter-spike interval distribution, and it is a useful statistic for characterizing spiking patterns. Let t_i be the time between spikes. A similar argument to the previous one shows that

$$P(t_i) = re^{-rt_i}$$

• Neuronal spiking is clearly not Poisson. For a start, there is the refractory period.

• Even if it were Poisson, it is unlikely that it would be homogeneous.

In the inhomogeneous Poisson process, the firing rate is not constant, but the probability of getting a spike depends only on the current value of the firing rate r(t). We need a formula for P[t_1, t_2, ..., t_n]. Consider the time between 2 spikes at t_i and t_{i+1}. Divide that into M subintervals, so that

$$P[\text{no spike}] = \prod_{m=1}^M (1 - r(t_m)\Delta t)$$

where r(t_m)∆t is the probability of a spike in the mth subinterval and

$$\Delta t = \frac{t_{i+1} - t_i}{M}$$

The trick is to take logarithms, so

$$\log P[\text{no spike}] = \sum_{m=1}^M \log(1 - r(t_m)\Delta t) \simeq -\sum_{m=1}^M r(t_m)\Delta t$$


recalling that log(1 + z) ≃ z for small enough z. Assuming that r is "nice", we have that

$$\log P[\text{no spike}] = -\int_{t_i}^{t_{i+1}} r(t)\,dt$$

Thus we have

$$P[\text{no spike}] = \exp\left(-\int_{t_i}^{t_{i+1}} r(t)\,dt\right)$$

Combining this, we have

$$P[t_1, t_2, \ldots, t_n] = \prod_{i=1}^n r(t_i)\exp\left(-\int_{t_i}^{t_{i+1}} r(t)\,dt\right) = \exp\left(-\sum_{i=1}^n\int_{t_i}^{t_{i+1}} r(t)\,dt\right)\prod_{i=1}^n r(t_i) = \exp\left(-\int_0^T r(t)\,dt\right)\prod_{i=1}^n r(t_i)$$
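A standard way to draw spike trains with this density is by thinning (rejection sampling): generate candidate spikes at the maximal rate and keep each with probability r(t)/r_max. A minimal sketch in Python; the rate function here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
T, r_max = 10.0, 50.0                                # s, Hz
rate = lambda t: 25.0 * (1 + np.sin(2 * np.pi * t))  # assumed r(t) <= r_max

t, spikes = 0.0, []
while True:
    t += rng.exponential(1.0 / r_max)   # candidate from a homogeneous process
    if t > T:
        break
    if rng.random() < rate(t) / r_max:  # keep with probability r(t)/r_max
        spikes.append(t)
# spikes is a sample from the inhomogeneous Poisson process with rate r(t).
```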
