Methods


The Hindmarsh-Rose Model


The Hindmarsh-Rose model is a simple model of neuronal activity that allows for complex behaviors such as bursting and chaotic spiking (Hindmarsh and Rose, 1984). It is described by three coupled first-order differential equations with eight parameters.

\dot{x} = y - ax^3 + bx^2 - z + I

\dot{y} = c - dx^2 - y

\dot{z} = r(s(x - x_0) - z)

All variables are dimensionless. In these equations, x is the membrane potential, y is the recovery variable, and z is the slow adaptation current. The parameters a, b, c, and d model the ion channels that produce action potentials: a and b determine spike shape, while c and d determine spike frequency. The parameter r forces z to adjust slowly relative to x and y, s controls the tendency to burst, and x_0 is the resting potential. Finally, I is a current applied from outside the neuron, such as from a patch clamp.

The HR-neuron model exhibits chaotic characteristics that allow for multiple possible variations on spike times, shape, and quantity. These chaotic dynamics are well studied for many possible parameter combinations (Storace, Linaro, & de Lange, 2008). Researchers have previously demonstrated that both genetic algorithms and MCMC methods are capable of solving chaotic equations (Loskutov, Molkov, Mukhin, & Feigin, 2007; Peng, Liu, Zhang, & Wang, 2009).

We adapted the HR-neuron model to respond to natural stimuli by replacing the applied current, I, with an estimated response function,



\dot{x} = y - ax^3 + bx^2 - z + \hat{r}(t)

\hat{r}(t) = h(t) \star s(t)

where h(t) is a linear filter, s(t) is the stimulus, and \star denotes convolution. This adjustment allows us to fit the HR model to in vivo neuron recording data.
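The model above can be sketched numerically as follows. The parameter values are from a standard bursting regime of the HR model; the exponential filter h, the Gaussian-noise stimulus, the initial condition, and the forward-Euler integration scheme are all illustrative assumptions rather than choices from this study.

```python
import numpy as np

# Fixed HR parameters (a standard regime from the literature).
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x0 = 0.006, 4.0, -1.6

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 10000)
dt = t[1] - t[0]
stim = rng.normal(size=t.size)                 # stimulus s(t): Gaussian noise
h = np.exp(-np.arange(50) / 10.0)              # assumed linear filter h(t)
r_hat = np.convolve(stim, h)[: t.size]         # \hat{r}(t) = h(t) * s(t)

# Forward-Euler integration of the three coupled ODEs.
X = np.array([-1.6, -10.0, 2.0])               # initial (x, y, z)
xs = np.empty(t.size)
for i in range(t.size):
    x, y, z = X
    dx = y - a * x**3 + b * x**2 - z + r_hat[i]
    dy = c - d * x**2 - y
    dz = r * (s * (x - x0) - z)
    X = X + dt * np.array([dx, dy, dz])
    xs[i] = X[0]                               # membrane-potential trace x(t)
```

The cubic term keeps the Euler scheme stable at this step size; a stiff solver would be preferable for production fits.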





Genetic Algorithm


Mathematical models such as the HR neuron provide researchers a framework to understand and predict qualities of a given system of interest. Many methods exist for finding an optimal set of parameters for a mathematical model such that its behavior best matches the observations in a given data set. One such method is the genetic algorithm (Holland, 1973). This class of algorithm is closely analogous to Darwinian evolution: a population of candidate parameter estimates is assessed for fitness, and those with the lowest fitness are dropped while those with the highest remain and breed. That is, suppose a mathematical model f(x) has N parameters. A genetic algorithm first creates a population of size P of individual parameter estimates I, each of length N. For each I in the population, some measure of similarity between f(x) and the observed data is computed; this is the individual's fitness, fit(I). The algorithm then sorts the individuals by fit(I), discards those with the poorest fitness, and retains those with the best. The remaining individuals are bred with one another using some breeding rule to create new individuals and restore the population to its original size. To increase diversity, each element of a new individual has a small chance to mutate, making its parameters differ from those of its parents. This process of assessing fitness, discarding poor individuals, and breeding new ones is repeated until some stopping rule is met.
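The loop described above can be sketched on a toy problem (recovering quadratic coefficients) rather than the full HR model. The population size, averaging breeding rule, and mutation rate here are illustrative choices, not the settings used in this study.

```python
import numpy as np

rng = np.random.default_rng(1)
true_params = np.array([2.0, -3.0, 0.5])       # "observed" system to recover
xgrid = np.linspace(-1.0, 1.0, 50)
target = np.polyval(true_params, xgrid)

def fit(ind):
    # Fitness: negative squared error against the observed data.
    return -np.sum((np.polyval(ind, xgrid) - target) ** 2)

pop = rng.normal(size=(40, 3))                 # population of estimates I
for generation in range(200):
    fits = np.array([fit(ind) for ind in pop])
    survivors = pop[np.argsort(fits)[-20:]]    # keep the fittest half
    # Breed: average randomly chosen parent pairs, then mutate each
    # element of the offspring with small probability.
    parents = survivors[rng.integers(0, 20, size=(20, 2))]
    children = parents.mean(axis=1)
    mutate = rng.random(children.shape) < 0.1
    children = children + mutate * rng.normal(scale=0.1, size=children.shape)
    pop = np.vstack([survivors, children])     # restore population size

best = pop[np.argmax([fit(ind) for ind in pop])]
```

Because survivors are carried over unchanged (elitism), the best fitness is non-decreasing across generations.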


Lynch and Houghton (2015) used a genetic algorithm to estimate parameters of multiple neuron models. Of particular interest, they propose genetic algorithms as a means of solving STRF models. We applied this method as one means of estimating the parameters of an augmented HR-neuron model that includes a sensory filter (DSTRF).


Affine Invariant MCMC


We also estimated neuron and filter parameters using a Markov Chain Monte Carlo (MCMC) technique. MCMC provides some distinct advantages over other parameter estimation methods, such as variational methods. First, MCMC provides an estimate for the full posterior distribution of the parameters rather than just a single value as with genetic algorithms. Having the posterior distribution for the parameters is useful for drawing inferences from the uncertainty of the parameter estimates. Secondly, MCMC allows for a Bayesian approach to estimating the parameters. Prior knowledge or beliefs about the parameters may be used in the estimation procedure and then updated afterwards. Finally, MCMC is simple to run in parallel on multi-core or multi-processor computers, which allows for significant reductions in run time.


To sample from the posterior distribution, we used emcee, a Python package implementing an affine-invariant ensemble sampler (Foreman-Mackey et al., 2013). The affine-invariant sampler is insensitive to covariances between parameters and requires tuning far fewer hyper-parameters than standard MCMC algorithms (Goodman and Weare, 2010). Further, the ensemble method used by the sampler was designed to run in a parallel processing environment. Rather than having one chain randomly sampling the posterior distribution, the ensemble sampler has hundreds of small chains sampling at once. These features should allow the sampler to converge on good parameter estimates more quickly than standard MCMC algorithms.
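The Goodman-Weare "stretch move" underlying the sampler can be sketched directly in NumPy: each walker proposes a move along the line joining it to a randomly chosen partner walker. The toy 2-D Gaussian target, walker count, and step count are illustrative; the stretch parameter a = 2 is emcee's documented default.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_prob(theta):
    # Toy posterior: standard 2-D Gaussian.
    return -0.5 * np.sum(theta ** 2)

nwalkers, ndim, a = 50, 2, 2.0
walkers = rng.normal(size=(nwalkers, ndim))
samples = []
for step in range(500):
    for k in range(nwalkers):
        j = rng.integers(nwalkers - 1)
        j += j >= k                                # partner walker, j != k
        z = (1 + (a - 1) * rng.random()) ** 2 / a  # stretch factor ~ g(z)
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        log_accept = ((ndim - 1) * np.log(z)
                      + log_prob(proposal) - log_prob(walkers[k]))
        if np.log(rng.random()) < log_accept:
            walkers[k] = proposal
    samples.append(walkers.copy())
samples = np.array(samples[100:]).reshape(-1, ndim)  # discard burn-in
```

The proposal depends only on the relative positions of walkers, which is what makes the move affine invariant; emcee additionally splits the ensemble into halves so the walkers can be updated in parallel.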


Evaluating Fitness

We evaluated the fitness of our models using the SPIKE-synchronization metric (Kreuz et al., 2015). This metric quantifies the temporal similarity between two spike trains by counting the number of coincident spikes between them. The metric is calculated as



SYNC = \frac{1}{M}\sum_{k=1}^{M}C_k

C_i^{(n,m)} = \begin{cases} 1 & \text{if } \min_j(|t_i^{(n)} - t_j^{(m)}|) < \tau_{i,j} \\ 0 & \text{otherwise} \end{cases}

where M is the number of possible coincidences, C_k is the coincidence factor for a pair of spikes, and \tau is the coincidence window.
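The coincidence count can be sketched as follows. This simplified version uses a single fixed coincidence window tau rather than the adaptive \tau_{i,j} of the full SPIKE-synchronization metric, so it is an illustration of the counting logic, not the published algorithm.

```python
import numpy as np

def sync(train_a, train_b, tau=0.005):
    """Fraction of spikes in either train with a partner within tau."""
    train_a, train_b = np.asarray(train_a), np.asarray(train_b)
    # For each spike, check whether the nearest spike in the other
    # train falls inside the coincidence window.
    c_a = np.array([np.min(np.abs(train_b - t)) < tau for t in train_a])
    c_b = np.array([np.min(np.abs(train_a - t)) < tau for t in train_b])
    M = train_a.size + train_b.size            # possible coincidences
    return (c_a.sum() + c_b.sum()) / M

train_1 = [0.01, 0.05, 0.09]
train_2 = [0.011, 0.052, 0.2]                  # third spike far from any match
```

Identical trains score 1.0, and each unmatched spike lowers the score by 1/M.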

This metric provides some advantages over other commonly used spike-train similarity metrics, such as the van Rossum distance (van Rossum, 2001). It is time-scale independent, requires no parameter optimization, and is computationally efficient because it requires no convolutions.




Method Validation - Twin Data Analysis


We demonstrate the effectiveness of both MCMC and genetic algorithms for solving DSTRF HR-neuron models by means of a “twin data” analysis (Toth et al., 2011). We set the parameters of an HR-neuron model to known fixed values and integrated the system from some initial condition X(0) = {x(0), y(0), z(0)}, with a known filter h(t) that we convolved with Gaussian noise to produce the response estimate \hat{r}(t). This generated a series of spiking patterns in x(t). Using a height threshold, we generated a spike train from this DSTRF model. We then compared this spike train to spike trains generated in the same manner from MCMC and genetic algorithm estimates of the DSTRF model. When an MCMC- or genetic-algorithm-generated spike train best matches our original spike train according to some fitness function, we inspect and compare the parameter estimates.
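The height-threshold spike-extraction step can be sketched as follows: spike times are taken as upward crossings of a threshold on the membrane-potential trace. The sinusoidal trace and the threshold value here are stand-ins for illustration, not HR-model output or values from this study.

```python
import numpy as np

def spike_times(t, x, threshold=1.0):
    """Times at which x(t) crosses the threshold from below."""
    above = x >= threshold
    # An upward crossing is a sample above threshold whose
    # predecessor was below it.
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return t[crossings]

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)              # toy trace with five peaks
spikes = spike_times(t, x, threshold=0.9)
```

Applying the same threshold to both the twin data and the candidate models keeps the extracted spike trains directly comparable under the fitness metric.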