E-book: Visual Cortex and Deep Networks: Learning Invariant Representations

4.20/5 (10 ratings by Goodreads)
Tomaso A. Poggio (Massachusetts Institute of Technology), Fabio Anselmi (Massachusetts Institute of Technology)
  • Format: PDF+DRM
  • Price: 70.13 €*
  • * This is the final price, i.e., no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and payments for purchased e-books are not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that you must install free software in order to unlock and read it. To read this e-book, you need to create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android).

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

The ventral visual stream is believed to underlie object recognition in primates. Over the past fifty years, researchers have developed a series of quantitative models that are increasingly faithful to the biological architecture. Recently, deep convolutional learning networks -- which do not reflect several important features of ventral stream architecture and physiology -- have been trained on extremely large datasets, yielding model neurons that mimic object recognition but do not explain the nature of the computations carried out in the ventral stream. This book develops a mathematical framework that describes learning of invariant representations in the ventral stream and is particularly relevant to deep convolutional learning networks.

The authors propose a theory based on the hypothesis that the main computational goal of the ventral stream is to compute neural representations of images that are invariant to transformations commonly encountered in the visual environment and that are learned from unsupervised experience. They describe a general framework for a computational theory of invariance (with details and proofs offered in appendixes) and then review the application of the theory to the feedforward path of the ventral stream in the primate visual cortex.
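
To make the core idea concrete, here is a minimal sketch (not taken from the book; all function and variable names are invented for illustration) of the one-layer HW module that i-theory describes: "simple cell" responses are inner products of an image with stored, transformed templates, and a "complex cell" pools those responses into a signature that is invariant to the transformation. The sketch assumes 1-D signals and circular shifts as the transformation group.

    import numpy as np

    rng = np.random.default_rng(0)

    def signature(image, templates):
        """Translation-invariant signature of a 1-D signal.

        For each template, compute inner products of the image with every
        circular shift of the template (simple-cell responses), then pool
        over shifts with a few statistics (complex-cell stage). Shifting
        the image merely permutes the responses, so the pooled values are
        invariant to circular translation.
        """
        n = image.size
        feats = []
        for t in templates:
            responses = np.array([image @ np.roll(t, s) for s in range(n)])
            feats += [responses.mean(), responses.std(), np.abs(responses).max()]
        return np.array(feats)

    n = 64
    templates = [rng.standard_normal(n) for _ in range(5)]  # "unsupervised experience"
    img = rng.standard_normal(n)
    shifted = np.roll(img, 17)        # the same "object", translated
    other = rng.standard_normal(n)    # a different "object"

    print(np.allclose(signature(img, templates), signature(shifted, templates)))  # True
    print(np.allclose(signature(img, templates), signature(other, templates)))    # False

With enough templates, signatures of this kind also remain selective: different images generically yield different pooled response distributions, which is the Cramér-Wold argument developed in appendix A.2 of the book.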

Series Foreword ix
Preface xi
1 Invariant Representations: Mathematics of Invariance 1(22)
1.1 Introduction and Motivation 1(3)
1.2 Invariance Reduces Sample Complexity of Learning 4(1)
1.3 Unsupervised Learning and Computation of an Invariant Signature (One-Layer Architecture) 5(4)
1.4 Partially Observable Groups 9(1)
1.5 Optimal Templates for Scale and Position Invariance Are Gabor Functions 10(1)
1.6 Quasi Invariance to Nongroup Transformations Requires Class-Specific Templates 11(3)
1.7 Two Stages in the Computation of an Invariant Signature: Extension of the HW Module to Hierarchical Architectures 14(7)
1.8 Deep Networks and i-Theory 21(2)
2 Biophysical Mechanisms of Invariance: Unsupervised Learning, Tuning, and Pooling 23(6)
2.1 A Single-Cell Model of Simple and Complex Cells 23(1)
2.2 Learning the Wiring in the Single-Cell Model 24(1)
2.3 Hebb Synapses and Principal Components 24(2)
2.4 Spectral Theory and Pooling 26(2)
2.5 Tuning of Simple Cells 28(1)
3 Retinotopic Areas: V1, V2, V4 29(16)
3.1 V1 29(6)
3.1.1 A New Model from i-Theory for Eccentricity Dependence of Receptive Fields 29(3)
3.1.2 Fovea and Foveola 32(2)
3.1.3 Scale and Position Invariance in V1 34(1)
3.1.4 Tuning of Cells in V1 35(1)
3.2 V2 and V4 35(10)
3.2.1 Multistage Pooling 35(1)
3.2.2 Predictions of Crowding Properties in the Foveola and Outside It (Bouma's Law) 36(3)
3.2.3 Scale and Shift Invariance Dictates the Architecture of the Retina and the Retinotopic Cortex 39(3)
3.2.4 Tuning of Simple Cells in V2 and V4 42(3)
4 Class-Specific Approximate Invariance in Inferior Temporal Cortex 45(8)
4.1 From Generic Templates to Class-Specific Tuning 45(1)
4.2 Development of Class-Specific and Object-Specific Modules 45(3)
4.3 Domain-Specific Regions in the Ventral Stream 48(1)
4.4 Tuning in the Inferior Temporal Cortex 49(1)
4.5 Mirror-Symmetric Tuning in the Face Patches and Pooling over Principal Components 49(4)
5 Discussion 53(12)
5.1 i-Theory: Main Ideas 53(1)
5.2 i-Theory, Deep Learning Networks, and the Visual Cortex 54(1)
5.3 Predictions and Explanations 55(1)
5.4 Remarks 56(5)
5.5 Ideas for Research 61(4)
Appendix 65(44)
A.1 Invariant Representations and Bounds on Learning Rates: Sample Complexity 65(3)
A.1.1 Translation Group 66(1)
A.1.2 Scale and Translation: 1-D Affine Group 67(1)
A.2 One-Layer Architecture: Invariance and Selectivity 68(12)
A.2.1 Equivalence between Orbits and Probability Distributions 68(1)
A.2.2 Cramér-Wold Theorem and Random Projections for Probability Distributions 69(1)
A.2.3 Finite Random Projections Almost Discriminate among Different Probability Distributions 70(2)
A.2.4 Number of Templates Depends on Pooling Size 72(3)
A.2.5 Partially Observable Groups 75(4)
A.2.6 Approximate Invariance to Nongroup Transformations 79(1)
A.3 Multilayer Architecture: Invariance, Covariance, and Selectivity 80(6)
A.3.1 Recursive Definition of Simple and Complex Responses 80(2)
A.3.2 Inheriting Transformations: Covariance 82(4)
A.4 Complex Cells: Wiring and Invariance 86(1)
A.5 Mirror-Symmetric Templates Lead to Odd-Even Covariance Eigenfunctions 86(1)
A.6 Gabor-like Shapes from Translation Group 87(5)
A.6.1 Spectral Properties of Template Transformations Covariance Operator: Cortical Equation 87(2)
A.6.2 Cortical Equation: Derivation and Solution 89(3)
A.7 Gabor-like Wavelets 92(5)
A.7.1 Derivation from an Invariance Argument for 1-D Affine Group 92(2)
A.7.2 Wavelet Templates from Best Invariance and Heisenberg Principle 94(3)
A.8 Transformations with Lie Group Local Structure: HW Module 97(1)
A.9 Factorization of Invariances 98(3)
A.10 Invariant Representations and Their Cost in Number of Orbit Elements 101(5)
A.10.1 Discriminability Cost for Hierarchical versus Single-Layer Architecture 102(2)
A.10.2 Invariance Cost for Hierarchical versus Single-Layer Architecture 104(2)
A.11 Nonlinearities Are Key in Gaining Compression from Repeated Random Projections 106(3)
A.11.1 Hierarchical (Linear) Johnson-Lindenstrauss Lemma 106(1)
A.11.2 Example of Nonlinear Extension of Johnson-Lindenstrauss Lemma 107(2)
References 109(8)
Index 117