
E-book: Fundamentals of Computational Neuroscience: Third Edition

(Faculty of Computer Science, Dalhousie University)
  • Length: 416 pages
  • Publication date: 28-Nov-2022
  • Publisher: Oxford University Press
  • Language: English
  • ISBN-13: 9780192696137
  • Format: EPUB+DRM
  • Price: €55.90*
  • * This is the final price, i.e., no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and no refunds are given for purchased e-books.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you need to install free software to unlock and read it. To read this e-book you must create an Adobe ID. More information is available here. The e-book can be read and downloaded on up to 6 devices (by one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (this is a free app developed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

Computational neuroscience is the theoretical study of the brain to uncover the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although not a new area, it is only recently that enough knowledge has been gathered to establish computational neuroscience as a scientific discipline in its own right. Given the complexity of the field, and its increasing importance in progressing our understanding of how the brain works, there has long been a need for an introductory text on what is often assumed to be an impenetrable topic.

The new edition of Fundamentals of Computational Neuroscience builds on the success and strengths of the previous editions. It introduces the theoretical foundations of neuroscience with a focus on the nature of information processing in the brain. The book introduces and motivates simplified models of neurons that are suitable for exploring information processing in large brain-like networks. It also presents several fundamental network architectures, discusses their relevance for information processing in the brain, and gives examples of models of higher-order cognitive functions to demonstrate the advanced insight that can be gained from such studies.

Each chapter starts by introducing its topic with experimental facts and conceptual questions related to the study of brain function. An additional feature is the inclusion of simple Python programs (the third edition uses Python; see Chapter 2) that can be used to explore many of the mechanisms explained in the book. An accompanying webpage provides the programs for download. The book will be an essential text for anyone in the brain sciences who wants to get to grips with this topic.
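To give a flavour of the kind of short simulation programs the book describes, here is a minimal leaky integrate-and-fire neuron of the sort covered in Chapter 5, written in Python. This is an illustrative sketch, not code from the book: the parameter values and the function name `simulate_lif` are chosen for this example only.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, Euler integration.
# Membrane equation: tau_m * dv/dt = -(v - E_L) + R * I
# When v crosses v_thresh, a spike is recorded and v is reset.

def simulate_lif(I=2.0, tau_m=10.0, E_L=-65.0, R=10.0,
                 v_thresh=-50.0, v_reset=-65.0, dt=0.1, t_max=100.0):
    """Simulate a LIF neuron with constant input current I.

    Units are illustrative: ms for time, mV for voltage.
    Returns the list of spike times in ms.
    """
    v = E_L            # start at the resting potential
    spikes = []
    steps = int(t_max / dt)
    for step in range(steps):
        # forward-Euler update of the membrane potential
        v += dt * (-(v - E_L) + R * I) / tau_m
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time
            v = v_reset               # reset after the spike
    return spikes

spikes = simulate_lif()
print(f"{len(spikes)} spikes in 100 ms")
```

With the constant drive above, the steady-state depolarization (R * I = 20 mV) exceeds the 15 mV threshold distance, so the neuron fires regularly; reducing I below 1.5 would keep it silent.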
Table of Contents

I BACKGROUND
1 Introduction and outlook
1.1 What is computational neuroscience?
1.2 Organization in the brain
1.3 What is a model?
1.4 Is there a brain theory?
1.5 A computational theory of the brain
2 Scientific programming with Python
2.1 The Python programming environment
2.2 Basic language elements
2.3 Code efficiency and vectorization
3 Math and Stats
3.1 Vector and matrix notations
3.2 Distance measures
3.3 The δ-function
3.4 Numerical calculus
3.5 Basic probability theory
II NEURONS
4 Neurons and conductance-based models
4.1 Biological background
4.2 Synaptic mechanisms and dendritic processing
4.3 The generation of action potentials: Hodgkin–Huxley
4.4 FitzHugh–Nagumo model
4.5 Neuronal morphologies: compartmental models
5 Integrate-and-fire neurons and population models
5.1 The leaky integrate-and-fire models
5.2 Spike-time variability
5.3 Advanced integrate-and-fire models
5.4 The neural code and the firing rate hypothesis
5.5 Population dynamics: modelling the average behaviour of neurons
5.6 Networks with non-classical synapses
6 Associators and synaptic plasticity
6.1 Associative memory and Hebbian learning
6.2 The physiology and biophysics of synaptic plasticity
6.3 Mathematical formulation of Hebbian plasticity
6.4 Synaptic scaling and weight distributions
6.5 Plasticity with pre- and postsynaptic dynamics
III NETWORKS
7 Feed-forward mapping networks
7.1 Deep representational learning
7.2 The perceptron
7.3 Convolutional neural networks (CNNs)
7.4 Probabilistic interpretation of MLPs
7.5 The anticipating brain
8 Feature maps and competitive population coding
8.1 Competitive feature representations in cortical tissue
8.2 Self-organizing maps
8.3 Dynamic neural field theory
8.4 'Path' integration and the Hebbian trace rule
8.5 Distributed representation and population coding
9 Recurrent associative networks and episodic memory
9.1 The auto-associative network and the hippocampus
9.2 Point-attractor neural networks (ANN)
9.3 Sparse attractor networks and correlated patterns
9.4 Chaotic networks: a dynamic systems view
9.5 The Boltzmann Machine
9.6 Re-entry and gated recurrent networks
IV SYSTEM-LEVEL MODELS
10 Modular networks and complementary systems
10.1 Modular mapping networks
10.2 Coupled attractor networks
10.3 Sequence learning
10.4 Complementary memory systems
11 Motor Control and Reinforcement Learning
11.1 Motor learning and control
11.2 Classical conditioning and reinforcement learning
11.3 Formalization of reinforcement learning
11.4 Deep reinforcement learning
12 The cognitive brain
12.1 Attentive vision
12.2 An interconnecting workspace hypothesis
12.3 Complementary decision systems
12.4 Probabilistic reasoning: causal models and Bayesian networks
12.5 Structural causal models and learning causality
Index
Thomas P. Trappenberg is a Professor of Computer Science at Dalhousie University. He has also been a Research Fellow at the RIKEN Brain Science Institute and a Research Officer at Oxford University. His previous books include the first and second editions of Fundamentals of Computational Neuroscience, as well as Fundamentals of Machine Learning.