File: examples-hears_sound_localisation_model.txt

.. currentmodule:: brian

.. index::
   pair: example usage; xlim
   pair: example usage; show
   pair: example usage; argmax
   pair: example usage; erbspace
   pair: example usage; shape
   pair: example usage; ylim
   pair: example usage; plot
   pair: example usage; Repeat
   pair: example usage; FunctionFilterbank
   pair: example usage; sum
   pair: example usage; FilterbankGroup
   pair: example usage; ylabel
   pair: example usage; tile
   pair: example usage; SpikeCounter
   pair: example usage; Sound
   pair: example usage; run
   pair: example usage; Connection
   pair: example usage; IRCAM_LISTEN
   pair: example usage; amax
   pair: example usage; NeuronGroup
   pair: example usage; Gammatone
   pair: example usage; RestructureFilterbank
   pair: example usage; xlabel
   pair: example usage; whitenoise
   pair: example usage; scatter

.. _example-hears_sound_localisation_model:

Example: sound_localisation_model (hears)
=========================================

Example demonstrating the use of many features of Brian hears, including
HRTFs, restructuring filters and integration with Brian. Implements a
simplified version of the "ideal" sound localisation model from Goodman
and Brette (2010).

The sound is played at a particular spatial location (indicated on the final
plot by a red +). Each location has a corresponding assembly of neurons, whose
summed firing rates give the sizes of the blue circles in the plot. The most
strongly responding assembly is indicated by a green x, which is the model's
estimate of the location.

Reference:

`Goodman DFM, Brette R (2010). Spike-timing-based computation in sound
localization. PLoS Comput. Biol. 6(11) <http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000993>`__.

::

    from brian import *
    from brian.hears import *
    
    # Download the IRCAM database, and replace this filename with the location
    # you downloaded it to
    hrtfdb = IRCAM_LISTEN(r'F:\HRTF\IRCAM')
    subject = 1002
    hrtfset = hrtfdb.load_subject(subject)
    # This gives the number of spatial locations in the set of HRTFs
    num_indices = hrtfset.num_indices
    # Choose a random location for the sound to come from
    index = randint(num_indices)
    # A sound to test the model with
    sound = Sound.whitenoise(500*ms)
    # This is the specific HRTF for the chosen location
    hrtf = hrtfset.hrtf[index]
    # We apply the chosen HRTF to the sound, the output has 2 channels
    hrtf_fb = hrtf.filterbank(sound)
    # We swap these channels (equivalent to swapping the channels in the
    # subsequent filters, but simpler to do it with the inputs)
    swapped_channels = RestructureFilterbank(hrtf_fb, indexmapping=[1, 0])
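    # (indexmapping gives, for each output channel, the index of the input
    # channel it comes from: [1, 0] swaps the two channels, while [0, 0],
    # say, would instead duplicate the first channel.)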
    # Now we apply all of the possible pairs of HRTFs in the set to these
    # swapped channels, which means repeating them num_indices times first
    hrtfset_fb = hrtfset.filterbank(Repeat(swapped_channels, num_indices))
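    # (Repeat repeats each channel in sequence, so the two swapped channels
    # become 2*num_indices channels, num_indices copies of each ear, one
    # copy per location in the HRTF set.)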
    # Now we apply cochlear filtering (logically, this comes before the HRTF
    # filtering, but since convolution is commutative it is more efficient to
    # do the cochlear filtering afterwards)
    cfmin, cfmax, cfN = 150*Hz, 5*kHz, 40
    cf = erbspace(cfmin, cfmax, cfN)
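    # (erbspace returns cfN centre frequencies between cfmin and cfmax,
    # spaced uniformly on the ERB scale, which approximates the spacing of
    # filters along the cochlea.)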
    # We repeat each of the HRTFSet filterbank channels cfN times, so that
    # for each location we will apply each possible cochlear frequency
    gfb = Gammatone(Repeat(hrtfset_fb, cfN),
                    tile(cf, hrtfset_fb.nchannels))
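    # (After this Repeat, channel i*cfN+j of gfb carries HRTF channel i
    # filtered at centre frequency cf[j]; tile(cf, ...) lays out the cf
    # array to match exactly this ordering.)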
    # Half wave rectification and compression
    cochlea = FunctionFilterbank(gfb, lambda x:15*clip(x, 0, Inf)**(1.0/3.0))
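    # (clip(x, 0, Inf) keeps only the positive part of the signal, the 1/3
    # power models the cochlea's compressive nonlinearity, and the factor 15
    # is a gain that brings the output into a range that makes the neurons
    # spike.)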
    # Leaky integrate and fire neuron model
    eqs = '''
    dV/dt = (I-V)/(1*ms)+0.1*xi/(0.5*ms)**.5 : 1
    I : 1
    '''
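    # (xi is Brian's white noise term; the 0.1 coefficient sets how much
    # noise is injected into the membrane potential.)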
    G = FilterbankGroup(cochlea, 'I', eqs, reset=0, threshold=1, refractory=5*ms)
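    # (G has one neuron per channel of the cochlea filterbank, i.e.
    # 2*num_indices*cfN neurons in total, each driven through its
    # variable I.)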
    # The coincidence detector (cd) neurons
    cd = NeuronGroup(num_indices*cfN, eqs, reset=0, threshold=1, clock=G.clock)
    # Each CD neuron receives precisely two inputs, one from the left ear and
    # one from the right, for each location and each cochlear frequency
    C = Connection(G, cd, 'V')
    for i in xrange(num_indices*cfN):
        C[i, i] = 0.5                 # from right ear
        C[i+num_indices*cfN, i] = 0.5 # from left ear
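    # (Each input has weight 0.5 and the threshold is 1, so a cd neuron can
    # only reach threshold when spikes from the two ears arrive within
    # roughly its 1 ms membrane time constant of each other: it fires on
    # coincidences, which are most frequent for the correct location.)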
    # We want to just count the number of CD spikes
    counter = SpikeCounter(cd)
    # Run the simulation, giving a report on how long it will take as we run
    run(sound.duration, report='stderr')
    # We take the array of counts, and reshape them into a 2D array which we sum
    # across frequencies to get the spike count of each location-specific assembly
    count = counter.count
    count.shape = (num_indices, cfN)
    count = sum(count, axis=1)
    count = array(count, dtype=float)/amax(count)
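    # (Normalising by the maximum means the most active assembly gets a
    # circle of size 100 in the scatter plot below.)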
    # Our guess of the location is the index of the strongest firing assembly
    index_guess = argmax(count)
    # Now we plot the output, using the coordinates of the HRTFSet
    coords = hrtfset.coordinates
    azim, elev = coords['azim'], coords['elev']
    scatter(azim, elev, 100*count)
    plot([azim[index]], [elev[index]], '+r', ms=15, mew=2)
    plot([azim[index_guess]], [elev[index_guess]], 'xg', ms=15, mew=2)
    xlabel('Azimuth (deg)')
    ylabel('Elevation (deg)')
    xlim(-5, 350)
    ylim(-50, 95)
    show()
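
To sanity-check the result, you can compare the coordinates of the true and
estimated locations directly. The following is a minimal sketch, not part of
the original example; it assumes the variables ``azim``, ``elev``, ``index``
and ``index_guess`` from the script above are still in scope::

    # Hypothetical follow-up: report how far off the estimate is.
    azim_error = abs(float(azim[index_guess]) - float(azim[index]))
    if azim_error > 180:
        azim_error = 360 - azim_error   # azimuths wrap around at 360 degrees
    elev_error = abs(float(elev[index_guess]) - float(elev[index]))
    print 'Azimuth error: %g deg, elevation error: %g deg' % (azim_error,
                                                              elev_error)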