Contents

1 Introduction and outlook   3
1.1 What is computational neuroscience?   3
1.2 Organization in the brain   5
1.3 …   18
1.4 Is there a brain theory?   21
1.5 A computational theory of the brain   26

2 Scientific programming with Python   32
2.1 The Python programming environment   32
2.2 Basic language elements   33
2.3 Code efficiency and vectorization   40

3 …   43
3.1 Vector and matrix notations   43
3.2 …   45
3.3 …   46
3.4 …   47
3.5 Basic probability theory   53

4 Neurons and conductance-based models   65
4.1 Biological background   65
4.2 Synaptic mechanisms and dendritic processing   70
4.3 The generation of action potentials: Hodgkin–Huxley   77
4.4 FitzHugh–Nagumo model   90
4.5 Neuronal morphologies: compartmental models   92

5 Integrate-and-fire neurons and population models   98
5.1 The leaky integrate-and-fire models   98
5.2 Spike-time variability   108
5.3 Advanced integrate-and-fire models   115
5.4 The neural code and the firing rate hypothesis   117
5.5 Population dynamics: modelling the average behaviour of neurons   121
5.6 Networks with non-classical synapses   129

6 Associators and synaptic plasticity   133
6.1 Associative memory and Hebbian learning   133
6.2 The physiology and biophysics of synaptic plasticity   140
6.3 Mathematical formulation of Hebbian plasticity   146
6.4 Synaptic scaling and weight distributions   153
6.5 Plasticity with pre- and postsynaptic dynamics   163

7 Feed-forward mapping networks   169
7.1 Deep representational learning   169
7.2 …   172
7.3 Convolutional neural networks (CNNs)   189
7.4 Probabilistic interpretation of MLPs   197
7.5 The anticipating brain   205

8 Feature maps and competitive population coding   214
8.1 Competitive feature representations in cortical tissue   214
8.2 …   216
8.3 Dynamic neural field theory   223
8.4 'Path' integration and the Hebbian trace rule   237
8.5 Distributed representation and population coding   241

9 Recurrent associative networks and episodic memory   250
9.1 The auto-associative network and the hippocampus   250
9.2 Point-attractor neural networks (ANN)   255
9.3 Sparse attractor networks and correlated patterns   267
9.4 Chaotic networks: a dynamic systems view   273
9.5 The Boltzmann machine   281
9.6 Re-entry and gated recurrent networks   289

10 Modular networks and complementary systems   303
10.1 Modular mapping networks   303
10.2 Coupled attractor networks   309
10.3 …   314
10.4 Complementary memory systems   316

11 Motor control and reinforcement learning   323
11.1 Motor learning and control   323
11.2 Classical conditioning and reinforcement learning   327
11.3 Formalization of reinforcement learning   329
11.4 Deep reinforcement learning   345

12 …   363
12.1 …   363
12.2 An interconnecting workspace hypothesis   368
12.3 Complementary decision systems   371
12.4 Probabilistic reasoning: causal models and Bayesian networks   374
12.5 Structural causal models and learning causality   382

Index   387