File: examples-frompapers-computing%20with%20neural%20synchrony-hearing_Fig7A_Jeffress.txt

.. currentmodule:: brian

.. index::
   pair: example usage; NeuronGroup
   pair: example usage; run
   pair: example usage; TimedArray
   pair: example usage; show
   pair: example usage; raster_plot
   pair: example usage; Connection
   pair: example usage; SpikeMonitor
   pair: example usage; linspace
   pair: example usage; StateMonitor

.. _example-frompapers-computing with neural synchrony-hearing_Fig7A_Jeffress:

Example: Fig7A_Jeffress (frompapers/computing with neural synchrony/hearing)
============================================================================

Brette R (2012). Computing with neural synchrony. PLoS Comp Biol. 8(6): e1002561. doi:10.1371/journal.pcbi.1002561
------------------------------------------------------------------------------------------------------------------
Figure 7A. Jeffress model, adapted with spiking neuron models.
A sound source (white noise) is moving around the head.
Delay differences between the two ears are used to determine the azimuth of the source.
Delays are mapped to a neural place code using delay lines (each neuron receives input
from both ears, with different delays).
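
With the parameters used below, each ear sits at plus or minus half the
maximum interaural delay from the centre of the head (measured in time
units), so the interaural time difference (ITD) for a source at azimuth
``theta`` is ``max_delay * sin(theta)``. A minimal standalone sketch of this
geometry (plain NumPy; the function name ``itd`` is my own, not part of the
example):

```python
import numpy as np

# Parameters from the example below, in SI units
sound_speed = 300.0                 # m/s (deliberately slow in the example)
interaural_distance = 0.20          # m ("big head!")
max_delay = interaural_distance / sound_speed   # ~0.667 ms

def itd(theta):
    """Interaural time difference (s) for a source at azimuth theta (rad).

    The ears sit at +/- 0.5*max_delay from the head centre (in time
    units), so the delay difference is max_delay * sin(theta)."""
    return max_delay * np.sin(theta)

for theta_deg in (0, 30, 90):
    theta = np.deg2rad(theta_deg)
    print("azimuth %3d deg -> ITD = %.3f ms" % (theta_deg, itd(theta) * 1e3))
```

A source straight ahead (0 degrees) gives zero ITD; a source at 90 degrees
gives the full ``max_delay``.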

::

    from brian import *
    
    defaultclock.dt = .02 * ms
    dt = defaultclock.dt
    
    # Sound
    sound = TimedArray(10 * randn(50000)) # white noise
    
    # Ears and sound motion around the head (constant angular speed)
    sound_speed = 300 * metre / second
    interaural_distance = 20 * cm # big head!
    max_delay = interaural_distance / sound_speed
    print("Maximum interaural delay: " + str(max_delay))
    angular_speed = 2 * pi * radian / second # 1 turn/second
    tau_ear = 1 * ms
    sigma_ear = .05
    eqs_ears = '''
    dx/dt=(sound(t-delay)-x)/tau_ear+sigma_ear*(2./tau_ear)**.5*xi : 1
    delay=distance*sin(theta) : second
    distance : second # distance to the centre of the head in time units
    dtheta/dt=angular_speed : radian
    '''
    ears = NeuronGroup(2, model=eqs_ears, threshold=1, reset=0, refractory=2.5 * ms)
    ears.distance = [-.5 * max_delay, .5 * max_delay]
    traces = StateMonitor(ears, 'x', record=True)
    
    # Coincidence detectors
    N = 300
    tau = 1 * ms
    sigma = .05
    eqs_neurons = '''
    dv/dt=-v/tau+sigma*(2./tau)**.5*xi : 1
    '''
    neurons = NeuronGroup(N, model=eqs_neurons, threshold=1, reset=0)
    synapses = Connection(ears, neurons, 'v', structure='dense', delay=True, max_delay=1.1 * max_delay)
    synapses.connect_full(ears, neurons, weight=.5)
    synapses.delay[0, :] = linspace(0 * ms, 1.1 * max_delay, N)
    synapses.delay[1, :] = linspace(0 * ms, 1.1 * max_delay, N)[::-1]
    spikes = SpikeMonitor(neurons)
    
    run(1000 * ms)
    raster_plot(spikes)
    show()
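
To read the raster plot as a place code: spikes from the two ears coincide
at neuron ``i`` when the axonal delays compensate the acoustic delays, i.e.
when the ITD equals ``delay[1, i] - delay[0, i]``. Inverting the geometry
above gives each neuron a preferred azimuth. The decoding sketch below is my
own illustration of that mapping (not part of the published example); it only
reuses the delay-line layout defined in the script:

```python
import numpy as np

N = 300
max_delay = 0.20 / 300.0                     # s, as in the example
d0 = np.linspace(0.0, 1.1 * max_delay, N)    # axonal delays from ear 0
d1 = d0[::-1]                                # axonal delays from ear 1

# Coincidence at neuron i requires ear_delay0 + d0[i] == ear_delay1 + d1[i],
# i.e. the ITD (ear_delay0 - ear_delay1) must equal d1[i] - d0[i].
preferred_itd = d1 - d0                      # spans +/- 1.1 * max_delay

# With ear 0 at -0.5*max_delay and ear 1 at +0.5*max_delay, the script's
# delay equation gives ITD = -max_delay * sin(theta); invert it, clipping
# because edge neurons prefer |ITD| > max_delay (unreachable azimuths).
preferred_azimuth = np.arcsin(np.clip(-preferred_itd / max_delay, -1.0, 1.0))

print("neuron   0 prefers azimuth %+.1f deg" % np.degrees(preferred_azimuth[0]))
print("neuron %d prefers azimuth %+.1f deg"
      % (N - 1, np.degrees(preferred_azimuth[-1])))
```

The band of activity in the raster plot therefore sweeps across the neuron
index as the source circles the head at one turn per second.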