Fundamentals of Computational Neuroscience, 2nd Revised edition [Paperback]

4.21/5 (47 ratings by Goodreads)
Thomas P. Trappenberg (Faculty of Computer Science, Dalhousie University, Canada)
  • Format: Paperback / softback, 416 pages, height x width x depth: 246x189x23 mm, weight: 783 g, numerous figures
  • Publication date: 29-Oct-2009
  • Publisher: Oxford University Press
  • ISBN-10: 0199568413
  • ISBN-13: 9780199568413
"Computational neuroscience is the theoretical study of the brain to uncover the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although not a new area, it is only recently that enough knowledge has been gathered to establish computational neuroscience as a scientific discipline in its own right. Given the complexity of the field, and its increasing importance in progressing our understanding of how the brain works, there has long been a need for an introductory text on what is often assumed to be an impenetrable topic. The new edition of Fundamentals of Computational Neuroscience build on the success and strengths of the first edition. It introduces the theoretical foundations of neuroscience with a focus on the nature of information processing in the brain. The book covers the introduction and motivation of simplified models of neurons that are suitable for exploring information processing in large brain-like networks. Additionally, it introduces several fundamental network architectures and discusses their relevance for information processing in the brain, giving some examples of models of higher-order cognitive functions to demonstrate the advanced insight that can be gained with such studies. Each chapter starts by introducing its topic with experimental facts and conceptual questions related to the study of brain function. An additional feature is the inclusion of simple Matlab programs that can be used to explore many of the mechanisms explained in the book. An accompanying webpage includes programs for download. The book is aimed at those within the brain and cognitive sciences, from graduate level and upwards"--Provided by publisher.



Computational neuroscience is the theoretical study of the brain to uncover the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although not a new area, it is only recently that enough knowledge has been gathered to establish computational neuroscience as a scientific discipline in its own right. Given the complexity of the field, and its increasing importance in progressing our understanding of how the brain works, there has long been a need for an introductory text on what is often assumed to be an impenetrable topic.

The new edition of Fundamentals of Computational Neuroscience builds on the success and strengths of the first edition. It introduces the theoretical foundations of neuroscience with a focus on the nature of information processing in the brain. The book covers the introduction and motivation of simplified models of neurons that are suitable for exploring information processing in large brain-like networks. Additionally, it introduces several fundamental network architectures and discusses their relevance for information processing in the brain, giving some examples of models of higher-order cognitive functions to demonstrate the advanced insight that can be gained with such studies.

Each chapter starts by introducing its topic with experimental facts and conceptual questions related to the study of brain function. An additional feature is the inclusion of simple MATLAB programs that can be used to explore many of the mechanisms explained in the book. An accompanying webpage includes programs for download. The book will be the essential text for anyone in the brain sciences who wants to get to grips with this topic.
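As an illustration of the style of program the book describes (this sketch is not taken from the book or its companion webpage), the following few lines simulate the leaky integrate-and-fire neuron of Section 3.1.1 with the Euler method of Appendix B.3 in MATLAB/Octave; all parameter values are illustrative assumptions.

% Leaky integrate-and-fire neuron driven by a constant current (Euler method)
tau = 10;              % membrane time constant (ms)
E_L = -65;             % resting potential (mV)
theta = -55;           % firing threshold (mV)
v_reset = -75;         % reset potential after a spike (mV)
RI = 12;               % constant input R*I_ext (mV), assumed value
dt = 0.1;              % integration time step (ms)
t = 0:dt:100;          % simulation time (ms)
v = E_L*ones(size(t)); % membrane potential trace
for i = 2:length(t)
    v(i) = v(i-1) + dt/tau*(E_L - v(i-1) + RI); % Euler step of dv/dt
    if v(i) > theta                             % threshold crossed:
        v(i) = v_reset;                         % emit a spike and reset
    end
end
plot(t, v); xlabel('t (ms)'); ylabel('v (mV)');

With these assumed values the steady-state level E_L + RI lies just above threshold, so the trace shows regular firing; varying RI or tau is the kind of exploration the book's programs invite. The book's table of contents follows.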
1 Introduction
1.1 What is computational neuroscience?
1.1.1 Tools and specializations in neuroscience
1.1.2 Levels of organization in the brain
1.2 What is a model?
1.2.1 Phenomenological and explanatory models
1.2.2 Models in computational neuroscience
1.3 Is there a brain theory?
1.3.1 Emergence and adaptation
1.3.2 Levels of analysis
1.4 A computational theory of the brain
1.4.1 Why do we have brains?
1.4.2 The anticipating brain
Exercises
Further reading
I Basic neurons
2 Neurons and conductance-based models
2.1 Biological background
2.1.1 Structural properties
2.1.2 Information-processing mechanisms
2.1.3 Membrane potential
2.1.4 Ion channels
2.2 Basic synaptic mechanisms and dendritic processing
2.2.1 Chemical synapses and neurotransmitters
2.2.2 Excitatory and inhibitory synapses
2.2.3 Modelling synaptic responses
2.2.4 Non-linear superposition of PSPs
2.3 The generation of action potentials: Hodgkin–Huxley equations
2.3.1 The minimal mechanisms
2.3.2 Ion pumps
2.3.3 Hodgkin–Huxley equations
2.3.4 Numerical integration
2.3.5 Refractory period
2.3.6 Propagation of action potentials
2.3.7 Above and beyond the Hodgkin–Huxley neuron: the Wilson model
2.4 Including neuronal morphologies: compartmental models
2.4.1 Cable theory
2.4.2 Physical shape of neurons
2.4.3 Neuron simulators
Exercises
Further reading
3 Simplified neuron and population models
3.1 Basic spiking neurons
3.1.1 The leaky integrate-and-fire neuron
3.1.2 Response of IF neurons to very short and constant input currents
3.1.3 Activation function
3.1.4 The spike-response model
3.1.5 The Izhikevich neuron
3.1.6 The McCulloch–Pitts neuron
3.2 Spike-time variability
3.2.1 Biological irregularities
3.2.2 Noise models for IF neurons
3.2.3 Simulating the variability of real neurons
3.2.4 The activation function depends on input
3.3 The neural code and the firing rate hypothesis
3.3.1 Correlation codes and coincidence detectors
3.3.2 How accurate is spike timing?
3.4 Population dynamics: modelling the average behaviour of neurons
3.4.1 Firing rates and population averages
3.4.2 Population dynamics for slow varying input
3.4.3 Motivations for population dynamics
3.4.4 Rapid response of populations
3.4.5 Common activation functions
3.5 Networks with non-classical synapses: the sigma–pi node
3.5.1 Logical AND and sigma–pi nodes
3.5.2 Divisive inhibition
3.5.3 Further sources of modulatory effects between synaptic inputs
Exercises
Further reading
4 Associators and synaptic plasticity
4.1 Associative memory and Hebbian learning
4.1.1 Hebbian learning
4.1.2 Associations
4.1.3 Hebbian learning in the conditioning framework
4.1.4 Features of associators and Hebbian learning
4.2 The physiology and biophysics of synaptic plasticity
4.2.1 Typical plasticity experiments
4.2.2 Spike timing dependent plasticity
4.2.3 The calcium hypothesis and modelling chemical pathways
4.3 Mathematical formulation of Hebbian plasticity
4.3.1 Spike timing dependent plasticity rules
4.3.2 Hebbian learning in population and rate models
4.3.3 Negative weights and crossing synapses
4.4 Synaptic scaling and weight distributions
4.4.1 Examples of STDP with spiking neurons
4.4.2 Weight distributions in rate models
4.4.3 Competitive synaptic scaling and weight decay
4.4.4 Oja's rule and principal component analysis
Exercises
Further reading
II Basic networks
5 Cortical organization and simple networks
5.1 Organization in the brain
5.1.1 Large-scale brain anatomy
5.1.2 Hierarchical organization of cortex
5.1.3 Rapid data transmission in the brain
5.1.4 The layered structure of neocortex
5.1.5 Columnar organization and cortical modules
5.1.6 Connectivity between neocortical layers
5.1.7 Cortical parameters
5.2 Information transmission in random networks
5.2.1 The simple chain
5.2.2 Diverging–converging chains
5.2.3 Immunity of random networks to spontaneous background activity
5.2.4 Noisy background
5.2.5 Information transmission in large random networks
5.2.6 The spread of activity in small random networks
5.2.7 The expected number of active neurons in netlets
5.2.8 Netlets with inhibition
5.3 More physiological spiking networks
5.3.1 Random network
5.3.2 Networks with STDP and polychrony
Exercises
Further reading
6 Feed-forward mapping networks
6.1 The simple perceptron
6.1.1 Optical character recognition (OCR)
6.1.2 Mapping functions
6.1.3 The population node as perceptron
6.1.4 Boolean functions: the threshold node
6.1.5 Learning: the delta rule
6.2 The multilayer perceptron
6.2.1 The update rule for multilayer perceptrons
6.2.2 Generalization
6.2.3 The generalized delta rules
6.2.4 Biological plausibility of MLPs
6.3 Advanced MLP concepts
6.3.1 Kernel machines and radial-basis function networks
6.3.2 Advanced learning
6.3.3 Batch versus online algorithm
6.3.4 Self-organizing network architectures and genetic algorithms
6.3.5 Mapping networks with context units
6.3.6 Probabilistic mapping networks
6.4 Support vector machines
6.4.1 Large-margin classifiers
6.4.2 Soft-margin classifiers and the kernel trick
Exercises
Further reading
7 Cortical feature maps and competitive population coding
7.1 Competitive feature representations in cortical tissue
7.2 Self-organizing maps
7.2.1 The basic cortical map model
7.2.2 The Kohonen model
7.2.3 Ongoing refinements of cortical maps
7.3 Dynamic neural field theory
7.3.1 The centre-surround interaction kernel
7.3.2 Asymptotic states and the dynamics of neural fields
7.3.3 Examples of competitive representations in the brain
7.3.4 Formal analysis of attractor states
7.4 Path integration and the Hebbian trace rule
7.4.1 Path integration with asymmetrical weight kernels
7.4.2 Self-organization of a rotation network
7.4.3 Updating the network after learning
7.5 Distributed representation and population coding
7.5.1 Sparseness
7.5.2 Probabilistic population coding
7.5.3 Optimal decoding with tuning curves
7.5.4 Implementations of decoding mechanisms
Exercises
Further reading
8 Recurrent associative networks and episodic memory
8.1 The auto-associative network and the hippocampus
8.1.1 Different memory types
8.1.2 The hippocampus and episodic memory
8.1.3 Learning and retrieval phase
8.2 Point-attractor neural networks (ANN)
8.2.1 Network dynamics and training
8.2.2 Signal-to-noise analysis
8.2.3 The phase diagram
8.2.4 Spurious states and the advantage of noise
8.2.5 Noisy weights and diluted attractor networks
8.3 Sparse attractor networks and correlated patterns
8.3.1 Sparse patterns and expansion recoding
8.3.2 Control of sparseness in attractor networks
8.4 Chaotic networks: a dynamic systems view
8.4.1 Attractors
8.4.2 Lyapunov functions
8.4.3 The Cohen–Grossberg theorem
8.4.4 Asymmetrical networks
8.4.5 Non-monotonic networks
Exercises
Further reading
III System-level models
9 Modular networks, motor control, and reinforcement learning
9.1 Modular mapping networks
9.1.1 Mixture of experts
9.1.2 The 'what-and-where' task
9.1.3 Product of experts
9.2 Coupled attractor networks
9.2.1 Imprinted and composite patterns
9.2.2 Signal-to-noise analysis
9.3 Sequence learning
9.4 Complementary memory systems
9.4.1 Distributed model of working memory
9.4.2 Limited capacity of working memory
9.4.3 The spurious synchronization hypothesis
9.4.4 The interacting-reverberating-memory hypothesis
9.5 Motor learning and control
9.5.1 Feedback controller
9.5.2 Forward and inverse model controller
9.5.3 The cerebellum and motor control
9.6 Reinforcement learning
9.6.1 Classical conditioning and the reinforcement learning problem
9.6.2 Temporal delta rule
9.6.3 Temporal difference learning
9.6.4 The actor–critic scheme and the basal ganglia
Further reading
10 The cognitive brain
10.1 Hierarchical maps and attentive vision
10.1.1 Invariant object recognition
10.1.2 Attentive vision
10.1.3 Attentional bias in visual search and object recognition
10.2 An interconnecting workspace hypothesis
10.2.1 The global workspace
10.2.2 Demonstration of the global workspace in the Stroop task
10.3 The anticipating brain
10.3.1 The brain as anticipatory system in a probabilistic framework
10.3.2 The Boltzmann machine
10.3.3 The restricted Boltzmann machine and contrastive Hebbian learning
10.3.4 The Helmholtz machine
10.3.5 Probabilistic reasoning: causal models and Bayesian networks
10.3.6 Expectation maximization
10.4 Adaptive resonance theory
10.4.1 The basic model
10.4.2 ART1 equations
10.4.3 Simplified dynamics for unsupervised letter clustering
10.5 Where to go from here
Further reading
A Some useful mathematics
A.1 Vector and matrix notations
A.2 Distance measures
A.3 The δ-function
B Numerical calculus
B.1 Differences and sums
B.2 Numerical integration of an initial value problem
B.3 Euler method
B.4 Higher-order methods
B.5 Adaptive Runge–Kutta
Further reading
C Basic probability theory
C.1 Random numbers and their probability (density) function
C.2 Moments: mean, variance, etc.
C.3 Examples of probability (density) functions
C.3.1 Bernoulli distribution
C.3.2 Binomial distribution
C.3.3 Chi-square distribution
C.3.4 Exponential distribution
C.3.5 Lognormal distribution
C.3.6 Multinomial distribution
C.3.7 Normal (Gaussian) distribution
C.3.8 Poisson distribution
C.3.9 Uniform distribution
C.4 Cumulative probability (density) function and the Gaussian error function
C.5 Functions of random variables and the central limit theorem
C.6 Measuring the difference between distributions
C.6.1 Marginal, joined, and conditional distributions
Further reading
D Basic information theory
D.1 Communication channel and information gain
D.2 Entropy, the average information gain
D.2.1 Difficulties in measuring entropy
D.2.2 Entropy of a spike train with temporal coding
D.2.3 Entropy of a rate code
D.3 Mutual information and channel capacity
D.4 Information and sparseness in the inferior-temporal cortex
D.4.1 Population information
D.4.2 Sparseness of object representations
D.5 Surprise
Further reading
E A brief introduction to MATLAB
E.1 The MATLAB programming environment
E.1.1 Starting a MATLAB session
E.1.2 Basic variables in MATLAB
E.1.3 Control flow and conditional operations
E.1.4 Creating MATLAB programs
E.1.5 Graphics
E.2 A first project: modelling the world
E.3 Octave
E.4 Scilab
Further reading