ANNarchy (Artificial Neural Networks architect)
Neuro-computational models differ from classical neural networks (deep learning) in several respects:
- The complexity of the neurons, whose activity is governed by one or several differential equations instead of a simple weighted sum.
- The complexity and diversity of the learning rules (synaptic plasticity), compared to gradient descent.
- The size of the networks needed to simulate significant parts of the brain.
- The huge diversity of models, architectures, frameworks used by researchers in computational neuroscience.
The increasing size of such networks calls for efficient parallel simulations, using multi-core CPUs (OpenMP), distributed systems (MPI) or GPUs (CUDA). However, computational neuroscientists cannot be expected to also be experts in parallel computing. There is a need for a general-purpose neural simulator with an easy yet flexible interface, allowing a wide variety of models to be defined while remaining internally efficient and providing fast parallel simulations on various hardware.
Over many years, we have developed ANNarchy (Artificial Neural Networks architect), a parallel simulator for distributed rate-coded or spiking neural networks. Models are defined in Python, but the library generates optimized C++ code to actually run the simulation on parallel hardware, using either OpenMP or CUDA. The current stable version is 4.7 and is released under the GNU GPL v2 or later.
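The parallel backend is selected directly in the Python script before compilation. As a minimal sketch (the setup() keywords below, paradigm and num_threads, are assumptions based on the 4.x API and should be checked against the documentation):

from ANNarchy import setup

# Assumed keywords: multi-core simulation with OpenMP using 8 threads
setup(paradigm="openmp", num_threads=8)

# or generation of CUDA kernels for the GPU:
# setup(paradigm="cuda")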
The code is available at:
https://github.com/ANNarchy/ANNarchy
The documentation is available at:
Core principles
ANNarchy separates the description of a neural network from its simulation. The description is declared in a Python script, offering high flexibility and readability of the code, and giving access to the huge ecosystem of scientific libraries available in Python (NumPy, SciPy, Matplotlib…). Using Python furthermore reduces the programming effort to a minimum, letting the modeler concentrate on network design and data analysis.
A neural network is defined as a collection of interconnected populations of neurons. Each population comprises a set of similar artificial neurons (rate-coded or spiking point-neurons), whose activity is governed by one or more ordinary differential equations. The activity of a neuron depends on the activity of other neurons through synapses, whose strength can evolve over time depending on pre- or post-synaptic activity (synaptic plasticity). Populations are interconnected with each other through projections, which contain the synapses between two populations; a sketch of a plastic projection is given below.
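As an illustrative sketch (not part of the original description), a Hebbian-type plasticity rule such as Oja's rule can be attached to a projection between two hypothetical rate-coded populations; the syntax uses the Synapse and Projection classes and should be checked against the documentation:

from ANNarchy import Neuron, Population, Synapse, Projection, Uniform

# Minimal rate-coded neuron whose rate is the weighted sum of excitatory inputs
simple = Neuron(equations="r = sum(exc)")

# Two hypothetical populations to connect
pre_pop = Population(geometry=100, neuron=simple)
post_pop = Population(geometry=50, neuron=simple)

# Oja's rule: the weight w evolves with the pre- and post-synaptic firing rates
Oja = Synapse(
    parameters="""
        tau = 5000.0
        alpha = 8.0
    """,
    equations="""
        tau * dw/dt = pre.r * post.r - alpha * post.r^2 * w : min=0.0
    """
)

# The plastic synapse is passed to the projection connecting the two populations
proj = Projection(pre=pre_pop, post=post_pop, target='exc', synapse=Oja)
proj.connect_all_to_all(weights=Uniform(0.0, 1.0))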
ANNarchy provides a set of classical neuron and synapse models, but also allows the definition of custom models. The ordinary differential equations (ODEs) governing neural or synaptic dynamics have to be specified by the modeler. Unlike other simulators (with the exception of Brian), which require these models to be coded in a low-level language, ANNarchy provides a mathematical equation parser which generates optimized C++ code for the chosen parallel framework. Bindings from C++ to Python are generated with Cython (C-extensions for Python), a static compiler for Python. These bindings allow the Python script to access all data generated by the simulation (neuronal activity, connection weights) as if they were simple Python attributes. However, the simulation itself is independent of Python and its relatively low performance.
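For instance, a simple leaky rate-coded neuron can be defined directly from its ODE written as a string, which the parser translates into C++ at compile time (a sketch with arbitrary parameter values, not taken from the original text):

from ANNarchy import Neuron, Population

# Leaky integrator: the firing rate r follows the weighted sum of excitatory inputs
LeakyIntegrator = Neuron(
    parameters="""
        tau = 10.0
        baseline = 0.0
    """,
    equations="""
        tau * dr/dt + r = sum(exc) + baseline : min=0.0
    """
)

# A 20x20 population using this custom neuron
pop = Population(geometry=(20, 20), neuron=LeakyIntegrator)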
Example of a pulse-coupled network of Izhikevich neurons
To demonstrate the simplicity of ANNarchy’s interface, let’s focus on the “Hello, World!” of spiking networks: the pulse-coupled network of Izhikevich neurons (Izhikevich, 2003). It can be defined in ANNarchy as:
from ANNarchy import *
# Create the excitatory and inhibitory population
pop = Population(geometry=1000, neuron=Izhikevich)
Exc = pop[:800] ; Inh = pop[800:]

# Set the population parameters
re = np.random.random(800) ; ri = np.random.random(200)
Exc.noise = 5.0 ; Inh.noise = 2.0
Exc.a = 0.02 ; Inh.a = 0.02 + 0.08 * ri
Exc.b = 0.2 ; Inh.b = 0.25 - 0.05 * ri
Exc.c = -65.0 + 15.0 * re**2 ; Inh.c = -65.0
Exc.d = 8.0 - 6.0 * re**2 ; Inh.d = 2.0
Exc.v = -65.0 ; Inh.v = -65.0
Exc.u = Exc.v * Exc.b ; Inh.u = Inh.v * Inh.b

# Create the projections
exc_proj = Projection(pre=Exc, post=pop, target='exc')
exc_proj.connect_all_to_all(weights=Uniform(0.0, 0.5))

inh_proj = Projection(pre=Inh, post=pop, target='inh')
inh_proj.connect_all_to_all(weights=Uniform(0.0, 1.0))

# Compile
compile()

# Start recording the spikes in the network to produce the plots
M = Monitor(pop, ['spike', 'v'])

# Simulate 1 second
simulate(1000.0, measure_time=True)

# Retrieve the spike recordings and the membrane potential
spikes = M.get('spike')
v = M.get('v')

# Compute the raster plot
t, n = M.raster_plot(spikes)

# Compute the population firing rate
fr = M.histogram(spikes)

# Plot the results
import matplotlib.pyplot as plt
ax = plt.subplot(3, 1, 1)
ax.plot(t, n, 'b.', markersize=1.0)
ax = plt.subplot(3, 1, 2)
ax.plot(v[:, 15])
ax = plt.subplot(3, 1, 3)
ax.plot(fr)
plt.show()