/blog

Introduction

A collection of short essays about several topics that interest me. I write about my main interest, theoretical neuroscience, and its connections to other topics such as philosophy, biotechnology, biology, society, computers and politics. Besides that, I include short snippets of code and small 'tutorials' on techniques that interest me.




20201215 Neuronal cell assemblies

neuronal cell assemblies; cell assemblies;

Neuronal assemblies are groups of neurons that are active in a coordinated way, and are thought to 'bridge' the single-neuron scale with that of global brain dynamics. But what exactly neuronal assemblies are remains a topic of debate. Some aspects on which a consensus exists are that

  1. a neuronal assembly —as stated before— is a group of transiently coordinated neurons (Hebb, 1949; Legendy, 1967; Varela et al., 2001);
  2. these groups can be associatively linked (Hebb, 1949; Legendy, 1967; Palm, 1982); and
  3. neurons can be part of multiple assemblies, and not all neurons need to participate each time in the activity of an assembly (Legendy, 1967; Palm, 1982).
Clearly, what is lacking from these properties is an account of the relevance of neuronal assemblies in connecting single neurons to cognition and behaviour.

A connectionist (the presently prevalent way to think about the brain) account of neuronal assemblies would have combinations of neurons represent different aspects of a stimulus, action, or thought (such as colour, shape, which muscles to contract in order to move a hand, etc.), and as such assemblies of neurons would represent 'higher-level' concepts (a red ball, a green triangle, a synchronous movement of hand and arm, etc.). In this way a neuronal assembly would function as a meso-scale neuron for meso-scale concepts, not very different from the idea of the grandmother cell (just arguably with a cooler name). I would object, however, that with this conception we throw away much of the potentiality of neuronal assemblies.

I propose a conception of neuronal assemblies as (elements of) transiently self-organised structures that underlie cognition (Varela et al., 2001). In this way there is no necessary mapping between neuronal assemblies and (parts of) stimuli or behaviours. These associations are of course a possibility, but the important part for me is to retain the virtuality of the internal dynamics that makes up a neuronal assembly. In this way an assembly becomes more than a 'sum of neurons'; it is a qualitatively different phenomenon. What needs to be explained now is the internal dynamics of these assemblies, and how these assemblies afford interactions with each other in order to create the structures needed to behave.

References:




20200915 First-passage-time distribution of leaky IF neuron

First-passage-time; Inter-spike-interval; leaky IF neuron

Historically, the firing rate has been used as the primary measure of the functioning of neurons. Especially since it became possible to record the activity of single neurons, many studies have been carried out to quantify the firing rate of single neurons in response to particular stimuli. However, neurons are 'noisy', and so these experiments typically result in a distribution of times between spikes of a neuron, the 'inter-spike-interval histogram'.

A widely used model of a single neuron is the leaky integrate-and-fire neuron, originally introduced by Lapicque in 1907. This abstract model consists of a description of the evolution of the membrane voltage \(v\) of a neuron, driven by an input \(I\): $$ \tau \frac{dv}{dt} = I(t)-v(t). \tag{1} $$ The parameter \(\tau\) is the membrane time-constant and determines how fast the membrane potential follows changes in steady-state inputs, and how fast the potential decays back to its steady state after a perturbation. The second component of the leaky integrate-and-fire neuron is a spike-and-reset rule: if the potential reaches a threshold value \(v \geq v_c\) it is reset to a value \(v_r < v_c\), whereafter its evolution is again governed by (1). The firing rate of the leaky integrate-and-fire neuron can thus be determined from the time between consecutive spike-and-resets. In this post I will look at the expression of the inter-spike-interval distribution of the leaky integrate-and-fire neuron.
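The dynamics of (1) together with the spike-and-reset rule are straightforward to simulate. A minimal forward-Euler sketch (not part of the original model description; the parameter values \(\tau=1\), \(v_c=1\), \(v_r=0\), \(\bar{I}=2\) are chosen purely for illustration):

```python
import numpy as np

def simulate_lif(I, tau=1.0, v_r=0.0, v_c=1.0, v0=0.0, dt=1e-3, t_max=20.0):
    """Forward-Euler integration of the leaky IF neuron (1) with spike-and-reset.

    I: callable t -> input current. Returns (times, voltages, spike_times).
    """
    n = int(t_max / dt)
    t = np.arange(n) * dt
    v = np.empty(n)
    v[0] = v0
    spikes = []
    for i in range(1, n):
        # tau dv/dt = I(t) - v(t)
        v[i] = v[i - 1] + dt / tau * (I(t[i - 1]) - v[i - 1])
        if v[i] >= v_c:          # threshold crossing ...
            spikes.append(t[i])
            v[i] = v_r           # ... triggers a reset
    return t, v, np.array(spikes)

# constant supra-threshold input: a perfectly regular spike train
t, v, spikes = simulate_lif(lambda t: 2.0)
```

With a constant input the inter-spike intervals are all equal, as expected for a deterministic model.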

Deterministic constant input

For constant inputs \(I(t) = \bar{I}\), the membrane potential shows an exponential approach to the value \( v_\infty = \bar{I} \). In the case of a constant supra-threshold input \( \bar{I}>v_c \) it is straightforward to calculate the inter-spike interval, and hence the firing rate, of the model neuron: $$ T_{isi} = -\tau \ln\left(\frac{v_c-\bar{I}}{v_r-\bar{I}}\right). $$
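This expression can be checked against the closed-form solution of (1) for constant input, \(v(t) = \bar{I} + (v_r-\bar{I})e^{-t/\tau}\), which should reach the threshold exactly at \(t = T_{isi}\). A small sketch (parameter values are illustrative):

```python
import numpy as np

def t_isi(I_bar, tau=1.0, v_r=0.0, v_c=1.0):
    """Deterministic inter-spike interval for constant supra-threshold input."""
    assert I_bar > v_c, "only valid for supra-threshold input"
    return -tau * np.log((v_c - I_bar) / (v_r - I_bar))

# v(t) = I_bar + (v_r - I_bar) e^{-t/tau} should hit v_c at t = T_isi
I_bar, tau, v_r, v_c = 2.0, 1.0, 0.0, 1.0
T = t_isi(I_bar, tau, v_r, v_c)
v_at_T = I_bar + (v_r - I_bar) * np.exp(-T / tau)
```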

Stochastic input

However, as stated before, neurons are generally considered to be noisy. This noisiness can be captured by considering an input $$ I(t) = \mu + \sigma \eta(t), $$ in which \(\mu\) is a constant (mean) input, and \(\eta(t)\) is a fluctuating noise input. With this input (1) becomes a stochastic differential equation (SDE). A classical problem for SDEs is to describe the first-passage-time (FPT) distribution: the distribution of the times it takes for such a process to cross a threshold value. In the case that the noise part of the input is taken to be a Gaussian white noise \(\eta(t) = dW_t\), the membrane potential follows an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930; Ricciardi & Sacerdote, 1979). The exact description of the first-passage-time distribution (the 'first-passage-time problem') of Ornstein-Uhlenbeck type processes is, however, still an unsolved problem.
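For intuition, the inter-spike intervals of this noisy neuron can be obtained with a simple Euler-Maruyama scheme (a sketch, not the simulation used for the figure below; the values of \(\mu\) and \(\sigma\) are arbitrary):

```python
import numpy as np

def ou_lif_isis(mu, sigma, tau=1.0, v_r=0.0, v_c=1.0,
                dt=1e-3, n_steps=500_000, seed=0):
    """Euler-Maruyama simulation of tau dv = (mu - v) dt + sigma dW_t
    with the spike-and-reset rule; returns the inter-spike intervals."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_steps)      # pre-drawn white-noise increments
    sdt = np.sqrt(dt)
    v, last_spike, isis = v_r, 0.0, []
    for i in range(n_steps):
        v += dt / tau * (mu - v) + (sigma / tau) * sdt * xi[i]
        if v >= v_c:                       # threshold crossing: spike and reset
            isis.append((i + 1) * dt - last_spike)
            last_spike = (i + 1) * dt
            v = v_r
    return np.array(isis)

isis = ou_lif_isis(mu=1.5, sigma=0.5)
```

The resulting intervals are spread around the deterministic value, which is exactly the inter-spike-interval histogram this post is after.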

It will thus in general not be possible to obtain an exact expression for the first-passage-time distribution of the leaky integrate-and-fire neuron. Gerstein and Mandelbrot (1964) considered the first-passage-time distribution of the perfect integrate-and-fire neuron, which is the leaky integrate-and-fire neuron without the exponential drive back to a resting potential (i.e. (1) without the \(-v\) term on the r.h.s.). In this case the membrane potential describes a (biased) Brownian motion, for which exact descriptions of the first-passage-time distribution are known.
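For the biased Brownian motion the first-passage-time distribution is the inverse Gaussian, whose mean is the threshold distance divided by the drift. A small simulation sketch (drift, noise amplitude and threshold are illustrative) agrees with that mean:

```python
import numpy as np

def brownian_fpt_times(m, sigma, a, dt=1e-3, t_max=20.0,
                       n_trials=1000, seed=1):
    """First-passage times of the biased Brownian motion dx = m dt + sigma dW_t
    (the perfect IF neuron) from x = 0 to the threshold x = a."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    fpts = []
    for _ in range(n_trials):
        x = np.cumsum(m * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
        hit = np.argmax(x >= a)        # first index at/above threshold
        if x[hit] >= a:                # guard: argmax is 0 if never crossed
            fpts.append((hit + 1) * dt)
    return np.array(fpts)

# inverse-Gaussian prediction for the mean first-passage time: a / m
fpts = brownian_fpt_times(m=1.0, sigma=0.5, a=1.0)
```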

However, an important characteristic of neuronal membrane potentials is the relaxation to a resting potential in the absence of input (Gluss, 1967). Steps have been made towards an expression of the first-passage-time distribution of the leaky integrate-and-fire neuron. Exact expressions exist for restricted sets of parameters (Siebert, 1969; Sugiyama, Moore & Perkel, 1970), in Laplace-transformed form (Roy & Smith, 1969; Sugiyama, Moore & Perkel, 1970; Capocelli & Ricciardi, 1971), as well as by approximation (Schwalger & Schimansky-Geier, 2008).

A naive and an-exact FPT distribution

Here however, I would like to present a naive approach to an an-exact expression of the first-passage-time distribution which agrees surprisingly well with numerical simulations. This expression will be exact in the case that the reset potential is the additive inverse of the mean input: \(v_r = -\mu\). Consider a neuron driven by a specifically coloured noise, depending on the membrane time-constant \(\tau\): $$ \eta(t) = W(t) + \tau dW_t, $$ where \(W(t) = \int_0^t dW_t\). Defining \(k = \tau^{-1}\) for notational convenience, equation (1) becomes: $$ dv = k(\mu-v) dt + \sigma ( kW dt + dW_t). \tag{3}$$ Shifting the membrane potential in order to remove the \(\mu\) term (thus \(x := v-\mu\), \(x_c := v_c-\mu\) and \(x_r := v_r-\mu\)), and Laplace transforming (3) leads to: $$ (k+s)\widetilde{x} = \sigma (k+s)\widetilde{W} + x(0). $$ Dividing both sides by \((k+s)\), applying the inverse Laplace transform, and differentiating gives: $$ dx = \sigma dW_t - kx(0) e^{-kt} dt, $$ which describes a Brownian motion (perfect integrate-and-fire neuron) with an exponential driving. Since the leaky integrate-and-fire neuron gets completely reset after a spike occurs, and we are interested in the inter-spike-interval distribution, we can set \(x(0) = x_r\).
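This manipulation can be checked numerically (a sketch with arbitrary parameter values): simulating (3) in the shifted variable \(x = v-\mu\) for an ensemble of neurons without a threshold should give \(x(t) = x_re^{-kt} + \sigma W(t)\), i.e. mean \(x_re^{-kt}\) and variance \(\sigma^2 t\).

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler scheme for the shifted form of eq. (3):
#   dx = -k x dt + sigma (k W dt + dW_t)
k, sigma, x_r = 2.0, 0.5, -3.0
dt, n_steps, n_trials = 1e-3, 2000, 5000

x = np.full(n_trials, x_r)     # ensemble of shifted membrane potentials
W = np.zeros(n_trials)         # integrated noise W(t) per trial
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_trials)
    x += -k * x * dt + sigma * (k * W * dt + dW)
    W += dW
t = n_steps * dt               # final time t = 2.0
```

The ensemble mean and variance match the predicted \(x_re^{-kt}\) and \(\sigma^2 t\) up to discretisation and sampling error.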

Natural diffusion

The evolution of the probability density of the membrane potential \(P(x,t)\) is described by the Fokker-Planck equation $$ \frac{\partial P}{\partial t} = kx_r e^{-kt} \frac{\partial P}{\partial x} + \frac{\sigma^2}{2}\frac{\partial^2 P}{\partial x^2}, \tag{4} $$ with the initial condition \(P(x,0) = \delta(x-x_r) \) and Dirichlet boundary \(P(x_c, t) = 0\). By disregarding for the moment the spike-and-reset mechanism (or equivalently setting \(x_c = \infty\)) we can solve for \(P\): $$ P(x,t) = \frac{1}{\sqrt{2\pi\sigma^2t}} \exp{\left[ -\frac{(x-x_re^{-kt})^2}{2\sigma^2t}\right]}, $$ finding that \(P(x,t)\) is a Gaussian with time-dependent variance \(\sigma^2t\) and mean \(x_re^{-kt}\). Unfortunately, when enforcing the spike-and-reset mechanism (i.e. \(x_c < \infty)\), no solution has (so far) been found for general cases.

Perfect IF neuron

For the perfect integrate-and-fire neuron the Fokker-Planck equation (4) does not have the drift term \(kx_re^{-kt}\frac{\partial P}{\partial x}\), and thus has a natural solution $$ P(x,t) = \frac{1}{\sqrt{2\pi\sigma^2t}}\exp{\left[-\frac{(x-x_r)^2}{2\sigma^2t} \right]}. $$ The absence of the time-dependent mean makes it possible to construct a solution with the boundary \(P(x_c,t)=0\) by using the method of images $$ P_c(x,t) = \frac{1}{\sqrt{2\pi\sigma^2t}}\left(\exp{\left[-\frac{(x-x_r)^2}{2\sigma^2t}\right]} - \exp{\left[-\frac{(x-2x_c+x_r)^2}{2\sigma^2t}\right]} \right), \tag{5}$$ thus a solution by the superposition of the natural solution and a virtual 'sink' outside the considered domain \((-\infty, x_c)\). This solution can be verified by checking that \(P_c(x,t)\rvert_{x=x_c} = 0\), \(P_c(x,t)\rvert_{t=0}=\delta(x-x_r)\) and that \(P_c\) solves the Fokker-Planck equation (4).
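The image solution (5) can also be checked by simulation (a sketch; parameter values are illustrative): integrating (5) over \(x\) up to \(x_c\) gives the survival probability \(\operatorname{erf}\left((x_c-x_r)/\sqrt{2\sigma^2t}\right)\), which should match the fraction of simulated Brownian paths that have not yet reached \(x_c\).

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)

# driftless Brownian motion started at x_r with an absorbing boundary at x_c
sigma, x_r, x_c = 1.0, -1.0, 1.0
dt, n_steps, n_trials = 1e-3, 1000, 20000   # simulate up to t = 1.0

x = np.full(n_trials, x_r)
alive = np.ones(n_trials, dtype=bool)       # paths not yet absorbed
for _ in range(n_steps):
    x[alive] += sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    alive &= x < x_c                        # absorb paths that crossed
t = n_steps * dt

measured = alive.mean()                     # simulated survival probability
predicted = erf((x_c - x_r) / np.sqrt(2 * sigma**2 * t))
```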

Return to the leaky IF neuron

Naively, one would try to apply the method of images also to solve (4), equivalently subtracting a mirror distribution with an inverted mean: $$ \hat{P}_c(x,t) = \frac{1}{\sqrt{2\pi\sigma^2t}}\left( \exp{\left[ -\frac{(x-x_re^{-kt})^2}{2\sigma^2t}\right]} - \exp{\left[ -\frac{(x-2x_c+x_re^{-kt})^2}{2\sigma^2t}\right]} \right), \tag{6}$$ which complies with \(\hat{P}_c(x,t)\rvert_{x=x_c}=0\) and \(\hat{P}_c(x,t)\rvert_{t=0}=\delta(x-x_r)\), for \(x < x_c\), but crucially does not (in general) solve the Fokker-Planck equation (4). In section Special cases with exact expressions I list some cases in which (6) is an exact solution.

FPT distribution

Continuing naively with the membrane-voltage distribution of (6), by integrating over possible values of \(x\) we find the probability that at time \(t\) a neuron has not yet fired: $$ S(t) = \int_{-\infty}^{x_c}\hat{P}_c(x,t)dx = -\operatorname{erf}\left(-\frac{x_c-x_re^{-kt}}{\sqrt{2\sigma^2t}}\right), \tag{7} $$ so that \(1-S(t)\) is the probability that a neuron has fired some time before \(t\). Changes in \(1-S(t)\) with respect to time then relate to the probability of the timings of threshold crossings. The first-passage-time distribution is thus the time-derivative of \(1-S(t)\): $$ f(t) = \frac{\partial}{\partial t} \left[1-S(t)\right] = \left[ \frac{\lvert x_c-x_re^{-kt}\rvert}{\sqrt{2\pi\sigma^2t^3}} - \frac{2kx_re^{-kt}}{\sqrt{2\pi\sigma^2t}} \right] \exp\left(-\frac{(x_c-x_re^{-kt})^2}{2\sigma^2t}\right). \tag{8} $$ Thus the first-passage-time distribution is the superposition of a Lévy distribution and the product of a decaying exponential with a Gaussian.
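As a consistency check (not part of the derivation), (7) and (8) can be implemented directly: (8) is by construction the negative time-derivative of (7), so the integral of \(f\) over any interval must equal the corresponding drop in \(S(t)\). The parameters reuse \(x_r=-3\) and \(x_c=1\) from Fig. 01, with \(k=\sigma=1\) chosen for illustration.

```python
import numpy as np
from math import erf, exp, sqrt

def survival(t, k, sigma, x_r, x_c):
    """S(t) of eq. (7): probability that no spike has occurred before time t."""
    return erf((x_c - x_r * exp(-k * t)) / sqrt(2 * sigma**2 * t))

def fpt_density(t, k, sigma, x_r, x_c):
    """The an-exact first-passage-time density of eq. (8); t may be an array."""
    m = x_r * np.exp(-k * t)                         # time-dependent mean
    gauss = np.exp(-(x_c - m) ** 2 / (2 * sigma**2 * t))
    return (np.abs(x_c - m) / np.sqrt(2 * np.pi * sigma**2 * t**3)
            - 2 * k * m / np.sqrt(2 * np.pi * sigma**2 * t)) * gauss

# the integral of f over [t0, t1] must equal S(t0) - S(t1)
k, sigma, x_r, x_c = 1.0, 1.0, -3.0, 1.0
t = np.linspace(1e-4, 50.0, 500_000)
f = fpt_density(t, k, sigma, x_r, x_c)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))   # trapezoidal rule
```

Note that the mass in \([t_0, t_1]\) does not reach 1 for a finite \(t_1\): the heavy Lévy-like tail of (8) decays only slowly.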

The following figure shows the an-exact first-passage-time distribution and measured histograms for several different values of \(k=\tau^{-1}\), for \(x_r=-3\) and \(x_c = 1\).

fpt_image
Fig.01: First-passage-time distributions and histograms for several values of \(k=\tau^{-1}\). The remaining parameters are \(x_r=-3\) and \(x_c=1\). The measured histograms are the result of numerical simulation of \(10^4\) neurons carried out until \(10^6\) time-steps.

References: