
Computational Neuroscience: A First Course, 2013 ed. [Hardback]

3.50/5 (12 ratings by Goodreads)
  • Format: Hardback, XI + 135 pages, height x width: 235x155 mm, weight: 3495 g
  • Series: Springer Series in Bio-/Neuroinformatics 2
  • Publication date: 05-Jun-2013
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319008609
  • ISBN-13: 9783319008608
Computational Neuroscience - A First Course provides an essential introduction to computational neuroscience and equips readers with a fundamental understanding of modeling the nervous system at the membrane, cellular, and network level. The book, which grew out of a lecture series taught regularly for more than ten years to graduate students in neuroscience with backgrounds in biology, psychology, and medicine, takes its readers on a journey through three fundamental domains of computational neuroscience: membrane biophysics, systems theory, and artificial neural networks. The required mathematical concepts are kept as intuitive and simple as possible throughout the book, making it fully accessible to readers who are less familiar with mathematics. Overall, Computational Neuroscience - A First Course represents an essential reference guide for all neuroscientists who use computational methods in their daily work, as well as for any theoretical scientist approaching the field of computational neuroscience.

This introduction to computational neuroscience equips readers with a solid understanding of techniques for modeling the nervous system at the membrane, cellular and network level. Covers membrane biophysics, systems theory and artificial neural networks.

Reviews

A useful guide for any neuroscientist desiring to incorporate computational methods into their research. The book covers three fundamental areas of computational neuroscience: membrane physics, systems theory, and artificial neural networks. At the end of each chapter, suggested reading materials are listed to help guide the reader to supporting resources which will fill in background material and/or extend the topic presented. (Stanley R. Huddy, Mathematical Reviews, May, 2015)

This book focuses on basic mathematical modeling approaches and computational methods in neuroscience, and it is a significant presentation of some basic aspects of computational neuroscience. I recommend it for students and researchers in neuroscience who are interested in mathematical modeling and computational methods. (Jin Liang, zbMATH 1317.92001, 2015)

This is a good text discussing mathematical neuromodeling, introducing differential calculus, including differential and partial differential equations, as they apply to the temporal-spatial dynamics of the neural code. I recommend this book for all audiences with an interest in neuroscience. (Joseph J. Grenier, Amazon.com, August, 2014)

Table of Contents

1 Excitable Membranes and Neural Conduction
1.1 Membrane Potentials
1.2 The Hodgkin-Huxley Theory
1.2.1 Modeling Conductance Change with Differential Equations
1.2.2 The Potassium Channel
1.2.3 The Sodium Channel
1.2.4 Combining the Conductances in Space Clamp
1.3 An Analytical Approximation: The FitzHugh-Nagumo Equations
1.4 Passive Conduction
1.5 Propagating Action Potentials
1.6 Summary and Outlook
1.7 Suggested Reading
2 Receptive Fields and the Specificity of Neuronal Firing
2.1 Spatial Summation
2.1.1 Correlation and Linear Spatial Summation
2.1.2 Lateral Inhibition: Convolution
2.1.3 Correlation and Convolution
2.1.4 Spatio-Temporal Summation
2.1.5 Peri-Stimulus Time Histogram (PSTH) and Tuning Curves
2.2 Functional Descriptions of Receptive Fields
2.2.1 Isotropic Profiles: Gaussians
2.2.2 Orientation: Gabor Functions
2.2.3 Spatio-Temporal Gabor Functions
2.2.4 Why Gaussians?
2.3 Non-linearities in Receptive Fields
2.3.1 Linearity Defined: The Superposition Principle
2.3.2 Static Non-linearity
2.3.3 Non-linearity as Interaction: Volterra Kernels
2.3.4 Energy-Type Non-linearity
2.3.5 Summary: Receptive Fields in the Primary Visual Pathway
2.4 Motion Detection
2.4.1 Motion and Flicker
2.4.2 Coincidence Detector
2.4.3 Correlation Detector
2.4.4 Motion as Orientation in Space-Time
2.5 Suggested Reading
3 Fourier Analysis for Neuroscientists
3.1 Examples
3.1.1 Light Spectra
3.1.2 Acoustics
3.1.3 Vision
3.1.4 Magnetic Resonance Tomography
3.2 Why Are Sinusoidals Special?
3.2.1 The Eigenfunctions of Convolution: Real Notation
3.2.2 Complex Numbers
3.2.3 The Eigenfunctions of Convolution: Complex Notation
3.2.4 Gaussian Convolution Kernels
3.3 Fourier Decomposition: Basic Theory
3.3.1 Periodic Functions
3.3.2 The Convolution Theorem; Low-Pass and High-Pass
3.3.3 Finding the Coefficients
3.4 Fourier Decomposition: Generalizations
3.4.1 Non-periodic Functions
3.4.2 Fourier Transforms in Two and More Dimensions
3.5 Summary: Facts on Fourier Transforms
3.6 Suggested Reading
4 Artificial Neural Networks
4.1 Elements of Neural Networks
4.1.1 Activity and the States of a Neural Network
4.1.2 Activation Function and Synaptic Weights
4.1.3 The Dot Product
4.1.4 Matrix Operations
4.1.5 Weight Dynamics ("Learning Rules")
4.2 Classification
4.2.1 The Perceptron
4.2.2 Linear Classification
4.2.3 Limitations
4.2.4 Supervised Learning and Error Minimization
4.2.5 Support Vector Machines
4.3 Associative Memory
4.3.1 Topology: The Feed-Forward Associator
4.3.2 Example: A 2 x 3 Associator
4.3.3 Associative Memory and Covariance Matrices
4.3.4 General Least Square Solution
4.3.5 Applications
4.4 Self-organization and Competitive Learning
4.4.1 The Oja Learning Rule
4.4.2 Self-organizing Feature Map (Kohonen Map)
4.5 Suggested Reading
5 Coding and Representation
5.1 Population Code
5.1.1 Types of Neural Codes
5.1.2 Information Content of Population Codes
5.1.3 Reading a Population Code: The Center of Gravity Estimator
5.1.4 Examples, and Further Properties
5.1.5 Summary
5.2 Retinotopic Mapping
5.2.1 Areal Magnification
5.2.2 Conformal Maps
5.2.3 Log-Polar Mapping
5.3 Suggested Reading
References
Index
Hanspeter Mallot received his PhD from the Faculty of Biology, University of Mainz, Germany, in 1986. In the following years, he held postdoctoral and research positions at the Massachusetts Institute of Technology, the Ruhr-University Bochum, the Max-Planck-Institute for Biological Cybernetics in Tübingen, and the Institute for Advanced Study, Berlin. In 2000, he was appointed Professor of Cognitive Neuroscience at the Eberhard-Karls-University, Tübingen. His research focuses on spatial cognition in rats, humans, and robots, using behavioral experiments in virtual reality, eye-movement recordings, and simulated agents in hardware and software.

Hanspeter Mallot is currently Professor of Cognitive Neuroscience. He is a member of the editorial board of the journal "Spatial Cognition and Computation". In the past, he served as president of the European Neural Network Society (ENNS), as president of the German Society for Cognitive Science (GK), and as a member of the Neuroscience review panel of the Deutsche Forschungsgemeinschaft.