Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence, 2018 ed. [Paperback]

3.78/5 (36 ratings by Goodreads)
  • Format: Paperback / softback, XIII + 191 pages, height x width: 235x155 mm, weight: 454 g, 38 black-and-white illustrations
  • Series: Undergraduate Topics in Computer Science
  • Publication date: 15-Feb-2018
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319730037
  • ISBN-13: 9783319730035
  • Paperback
  • Price: 46,91 €*
  • * This is the final price, i.e., no additional discounts apply
  • List price: 55,19 €
  • Save 15%
  • Delivery time is 3-4 weeks if the book is in stock at the publisher's warehouse. If the publisher needs to print a new run, delivery may take longer.
This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state of the art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately on an accompanying website.

Topics and features:
  • Introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning
  • Discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network
  • Examines convolutional neural networks, and the recurrent connections to a feed-forward neural network
  • Describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning
  • Presents a brief history of artificial intelligence and neural networks, and reviews interesting open research problems in deep learning and connectionism

This clearly written and lively primer on deep learning is essential reading for graduate and advanced undergraduate students of computer science, cognitive science and mathematics, as well as students of fields such as linguistics, logic, philosophy, and psychology.
1 From Logic to Cognitive Science  1-16
1.1 The Beginnings of Artificial Neural Networks  1-4
1.2 The XOR Problem  5-7
1.3 From Cognitive Science to Deep Learning  8-10
1.4 Neural Networks in the General AI Landscape  11
1.5 Philosophical and Cognitive Aspects  12-16
References  15-16
2 Mathematical and Computational Prerequisites  17-50
2.1 Derivations and Function Minimization  17-24
2.2 Vectors, Matrices and Linear Programming  25-31
2.3 Probability Distributions  32-38
2.4 Logic and Turing Machines  39-40
2.5 Writing Python Code  41-42
2.6 A Brief Overview of Python Programming  43-50
References  49-50
3 Machine Learning Basics  51-78
3.1 Elementary Classification Problem  51-56
3.2 Evaluating Classification Results  57-58
3.3 A Simple Classifier: Naive Bayes  59-60
3.4 A Simple Neural Network: Logistic Regression  61-67
3.5 Introducing the MNIST Dataset  68-69
3.6 Learning Without Labels: K-Means  70-71
3.7 Learning Different Representations: PCA  72-74
3.8 Learning Language: The Bag of Words Representation  75-78
References  77-78
4 Feedforward Neural Networks  79-106
4.1 Basic Concepts and Terminology for Neural Networks  79-81
4.2 Representing Network Components with Vectors and Matrices  82-83
4.3 The Perceptron Rule  84-86
4.4 The Delta Rule  87-88
4.5 From the Logistic Neuron to Backpropagation  89-92
4.6 Backpropagation  93-101
4.7 A Complete Feedforward Neural Network  102-106
References  105-106
5 Modifications and Extensions to a Feed-Forward Neural Network  107-120
5.1 The Idea of Regularization  107-108
5.2 L1 and L2 Regularization  109-110
5.3 Learning Rate, Momentum and Dropout  111-115
5.4 Stochastic Gradient Descent and Online Learning  116-117
5.5 Problems for Multiple Hidden Layers: Vanishing and Exploding Gradients  118-120
References  119-120
6 Convolutional Neural Networks  121-134
6.1 A Third Visit to Logistic Regression  121-124
6.2 Feature Maps and Pooling  125-126
6.3 A Complete Convolutional Network  127-129
6.4 Using a Convolutional Network to Classify Text  130-134
References  132-134
7 Recurrent Neural Networks  135-152
7.1 Sequences of Unequal Length  135
7.2 The Three Settings of Learning with Recurrent Neural Networks  136-138
7.3 Adding Feedback Loops and Unfolding a Neural Network  139
7.4 Elman Networks  140-141
7.5 Long Short-Term Memory  142-144
7.6 Using a Recurrent Neural Network for Predicting Following Words  145-152
References  152
8 Autoencoders  153-164
8.1 Learning Representations  153-155
8.2 Different Autoencoder Architectures  156-157
8.3 Stacking Autoencoders  158-160
8.4 Recreating the Cat Paper  161-164
References  163-164
9 Neural Language Models  165-174
9.1 Word Embeddings and Word Analogies  165
9.2 CBOW and Word2vec  166-167
9.3 Word2vec in Code  168-170
9.4 Walking Through the Word-Space: An Idea That Has Eluded Symbolic AI  171-174
References  173-174
10 An Overview of Different Neural Network Architectures  175-184
10.1 Energy-Based Models  175-177
10.2 Memory-Based Models  178-180
10.3 The Kernel of General Connectionist Intelligence: The bAbI Dataset  181-184
References  182-184
11 Conclusion  185-188
11.1 An Incomplete Overview of Open Research Questions  185
11.2 The Spirit of Connectionism and Philosophical Ties  186-188
Reference  187-188
Index  189
Dr. Sandro Skansi is an Assistant Professor of Logic at the University of Zagreb and Lecturer in Data Science at University College Algebra, Zagreb, Croatia.