
Fundamentals of Computational Neuroscience [Paperback]

4.21/5 (50 ratings by Goodreads)
  • Format: Paperback / softback, 354 pages, height x width: 246x171 mm, weight: 571 g, numerous figures
  • Publication date: 20-Jun-2002
  • Publisher: Oxford University Press Inc
  • ISBN-10: 0198515839
  • ISBN-13: 9780198515838
  • Paperback
  • Price: 88,38 €*
  • * This book is no longer in print. You will be notified of the price of a used copy.
Computational neuroscience is the theoretical study of the brain, aimed at uncovering the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although not a new area, it is only recently that enough knowledge has been gathered to establish computational neuroscience as a scientific discipline in its own right. Given the complexity of the field, and its growing importance for our understanding of how the brain works, there has long been a need for an introductory text on what is often assumed to be an impenetrable topic.

"Fundamentals of Computational Neuroscience" is one of the first introductory books on this topic. It presents the theoretical foundations of neuroscience with a focus on the nature of information processing in the brain. The book introduces and motivates simplified models of neurons that are suitable for exploring information processing in large brain-like networks. It then presents several fundamental network architectures, discusses their relevance for information processing in the brain, and gives examples of models of higher-order cognitive functions to demonstrate the advanced insight that can be gained with such studies.

Each chapter opens with experimental facts and conceptual questions related to the study of brain function. A further feature is the inclusion of simple MATLAB programs that can be used to explore many of the mechanisms explained in the book; an accompanying webpage makes the programs available for download. The book is aimed at readers in the brain and cognitive sciences, from graduate level upwards.
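To give a flavour of the kind of MATLAB program the blurb refers to, here is a minimal sketch (not taken from the book or its webpage) that combines two topics listed in the contents, integrate-and-fire neurons (chapter 3) and the Euler method (appendix C): a leaky integrate-and-fire neuron driven by a constant current. All parameter values below are illustrative assumptions.

    % Minimal sketch (not from the book): leaky integrate-and-fire neuron
    % simulated with Euler integration. All parameter values are illustrative.
    tau_m = 10;     % membrane time constant (ms)
    E_L   = -65;    % resting potential (mV)
    R_m   = 10;     % membrane resistance (MOhm)
    I_ext = 2.0;    % constant input current (nA)
    v_th  = -55;    % firing threshold (mV)
    v_res = -75;    % reset potential (mV)
    dt    = 0.1;    % Euler time step (ms)
    T     = 100;    % simulation length (ms)

    t = 0:dt:T;                 % time grid
    v = zeros(size(t));         % membrane potential trace
    v(1) = E_L;
    spikes = [];                % recorded spike times

    for k = 1:numel(t)-1
        % Euler step for tau_m * dv/dt = -(v - E_L) + R_m * I_ext
        v(k+1) = v(k) + dt * (-(v(k) - E_L) + R_m * I_ext) / tau_m;
        if v(k+1) >= v_th       % threshold crossed: record a spike and reset
            spikes(end+1) = t(k+1);
            v(k+1) = v_res;
        end
    end

    plot(t, v); xlabel('t (ms)'); ylabel('v (mV)');

With these values the steady-state potential (E_L + R_m*I_ext = -45 mV) lies above threshold, so the neuron fires regularly; increasing I_ext raises the firing rate, the basic input-output relation that the integrate-and-fire material in the book explores.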
Table of contents:
Introduction 1(12)
  What is computational neuroscience? 1(2)
  Domains in computational neuroscience 3(3)
  What is a model? 6(3)
  Emergence and adaptation 9(1)
  From exploration to a theory of the brain 10(3)
Neurons and conductance-based models 13(25)
  Modelling biological neurons 13(1)
  Neurons are specialized cells 14(2)
  Basic synaptic mechanisms 16(6)
  The generation of action potentials: Hodgkin-Huxley equations 22(7)
  Dendritic trees, the propagation of action potentials, and compartmental models 29(3)
  Above and beyond the Hodgkin-Huxley neuron: fatigue, bursting, and simplifications 32(6)
Spiking neurons and response variability 38(18)
  Integrate-and-fire neurons 38(4)
  The spike-response model 42(2)
  Spike time variability 44(4)
  Noise models for IF-neurons 48(8)
Neurons in a network 56(33)
  Organizations of neuronal networks 56(9)
  Information transmission in networks 65(7)
  Population dynamics: modelling the average behaviour of neurons 72(7)
  The sigma node 79(5)
  Networks with nonclassical synapses: the sigma-pi node 84(5)
Representations and the neural code 89(31)
  How neurons talk 89(6)
  Information theory 95(5)
  Information in spike trains 100(7)
  Population coding and decoding 107(5)
  Distributed representation 112(8)
Feed-forward mapping networks 120(26)
  Perception, function representation, and look-up tables 120(5)
  The sigma node as perceptron 125(5)
  Multilayer mapping networks 130(4)
  Learning, generalization, and biological interpretations 134(4)
  Self-organizing network architectures and genetic algorithms 138(2)
  Mapping networks with context units 140(2)
  Probabilistic mapping networks 142(4)
Associators and synaptic plasticity 146(28)
  Associative memory and Hebbian learning 146(3)
  An example of learning associations 149(4)
  The biochemical basis of synaptic plasticity 153(1)
  The temporal structure of Hebbian plasticity: LTP and LTD 154(4)
  Mathematical formulation of Hebbian plasticity 158(3)
  Weight distributions 161(4)
  Neuronal response variability, gain control, and scaling 165(5)
  Features of associators and Hebbian learning 170(4)
Auto-associative memory and network dynamics 174(33)
  Short-term memory and reverberating network activity 174(2)
  Long-term memory and auto-associators 176(3)
  Point-attractor networks: the Grossberg-Hopfield model 179(6)
  The phase diagram and the Grossberg-Hopfield model 185(5)
  Sparse attractor neural networks 190
  Chaotic networks: a dynamic systems view 187(15)
  Biologically more realistic variations of attractor networks 202(5)
Continuous attractor and competitive networks 207(26)
  Spatial representations and the sense of direction 207(4)
  Learning with continuous pattern representations 211(4)
  Asymptotic states and the dynamics of neural fields 215(7)
  'Path' integration, Hebbian trace rule, and sequence learning 222(4)
  Competitive networks and self-organizing maps 226(7)
Supervised learning and reward systems 233(21)
  Motor learning and control 233(4)
  The delta rule 237(4)
  Generalized delta rules 241(5)
  Reward learning 246(8)
System level organization and coupled networks 254(30)
  System level anatomy of the brain 254(4)
  Modular mapping networks 258(5)
  Coupled attractor networks 263(5)
  Working memory 268(5)
  Attentive vision 273(6)
  An interconnecting workspace hypothesis 279(5)
A MATLAB guide to computational neuroscience 284(32)
  Introduction to the MATLAB programming environment 284(6)
  Spiking neurons and numerical integration in MATLAB 290(8)
  Associators and Hebbian learning 298(3)
  Recurrent networks and network dynamics 301(5)
  Continuous attractor neural networks 306(5)
  Error-back-propagation network 311(5)
A Some useful mathematics 316(4)
  Vector and matrix notations 316(2)
  Distance measures 318(1)
  The δ-function 319(1)
B Basic probability theory 320(7)
  Random variables and their probability (density) function 320(1)
  Examples of probability (density) functions 320(3)
  Cumulative probability (density) function and the Gaussian error function 323(1)
  Moments: mean and variance 324(1)
  Functions of random variables 325(2)
C Numerical integration 327(6)
  Initial value problem 327(1)
  Euler method 327(1)
  Example 328(1)
  Higher-order methods 328(3)
  Adaptive Runge-Kutta 331(2)
Index 333