E-book: C++ Template Metaprogramming in Practice: A Deep Learning Framework

(Birkbeck, University of London, UK)
  • Length: 338 pages
  • Publication date: 01-Dec-2020
  • Publisher: CRC Press
  • ISBN-13: 9781000219777
  • Format: EPUB+DRM
  • Price: 125,22 €*
  • * This is the final price, i.e., no additional discounts are applied.
  • This e-book is intended for personal use only. E-books cannot be returned, and payments for purchased e-books are not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that you need to install free software in order to unlock and read it. To read this e-book, you must create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

"Using the implementation of a deep learning framework as an example, C++ Template Metaprogramming in Practice: A Deep Learning Framework explains the application of metaprogramming in a relatively large project and emphasizes ways to optimize systems performance. The book is suitable for developers with a basic knowledge of C++. Developers familiar with mainstream deep learning frameworks can also refer to this book to compare the differences between the deep learning framework implemented with metaprogramming and compile-time computing with deep learning frameworks using object-oriented methods. Consisting of eight chapters, the book starts with two chapters discussing basic techniques of metaprogramming and compile-time computing. The rest of the book's chapters focus on the practical application of metaprogramming in a deep learning framework. It examines rich types and t systems, expression templates, and writing complex meta-functions, as well as such topics as : Heterogeneous dictionaries and policy templates, an introduction to deep learning, type system and basic data types, operations and expression templates, basic layers composite and recurrent layers, and evaluation and its optimization. Metaprogramming can construct flexible and efficient code for C++ developers who are familiar with object-oriented programming ; the main difficulty in learning and mastering C++ metaprogramming is establishing the thinking mode of functional programming. The meta-programming approach involved at compile time is functional, which means that the intermediate results of the construction cannot be changed, and the impact may be greater than expected. This book enables C++ programmers to develop a functional mindset and metaprogramming skills. The book also discusses the development cost and use cost of metaprogramming and provides workarounds for minimizing these costs"--

Using the implementation of a deep learning framework as an example, C++ Template Metaprogramming in Practice: A Deep Learning Framework explains the application of metaprogramming in a relatively large project and emphasizes ways to optimize system performance. The book is suitable for developers with a basic knowledge of C++. Developers familiar with mainstream deep learning frameworks can also refer to this book to compare a deep learning framework implemented with metaprogramming and compile-time computing against deep learning frameworks built with object-oriented methods.

Consisting of eight chapters, the book starts with two chapters discussing basic techniques of metaprogramming and compile-time computing. The remaining chapters focus on the practical application of metaprogramming in a deep learning framework. It examines rich types and type systems, expression templates (sketched briefly after the list below), and writing complex metafunctions, as well as such topics as:

  • Heterogeneous dictionaries and policy templates
  • An introduction to deep learning
  • Type system and basic data types
  • Operations and expression templates
  • Basic layers
  • Composite and recurrent layers
  • Evaluation and its optimization
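
To give a flavor of the "Operations and expression templates" topic, here is a minimal, self-contained sketch of the expression-template idea; the names Vec and VecAdd are illustrative only and are not MetaNN's actual API. Addition computes nothing immediately: it returns a lightweight expression object, and the whole expression is evaluated lazily in a single loop on assignment.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Adding two vector-like objects builds a lightweight expression object;
    // no arithmetic happens until an element is read.
    template <typename L, typename R>
    struct VecAdd {
        const L& lhs;
        const R& rhs;
        double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
        std::size_t size() const { return lhs.size(); }
    };

    struct Vec {
        std::vector<double> data;
        double operator[](std::size_t i) const { return data[i]; }
        std::size_t size() const { return data.size(); }

        // Evaluation happens here: one pass over the whole expression tree,
        // with no intermediate vectors allocated.
        template <typename Expr>
        Vec& operator=(const Expr& e) {
            data.resize(e.size());
            for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
            return *this;
        }
    };

    template <typename L, typename R>
    VecAdd<L, R> operator+(const L& l, const R& r) { return {l, r}; }

    int main() {
        Vec a{{1, 2, 3}}, b{{4, 5, 6}}, c{{7, 8, 9}}, out;
        out = a + b + c;  // builds VecAdd<VecAdd<Vec, Vec>, Vec>, then one loop
        std::cout << out[0] << " " << out[1] << " " << out[2] << "\n";  // 12 15 18
    }

The payoff is that a + b + c allocates no temporary vectors and each element of the result is computed exactly once, which is the general property that operation templates in a framework like MetaNN build on.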

Metaprogramming can construct flexible and efficient code. For C++ developers who are familiar with object-oriented programming, the main difficulty in learning and mastering C++ metaprogramming is developing the mindset of functional programming. Metaprogramming performed at compile time is functional in nature, which means that intermediate results of a construction cannot be changed, and the impact of this constraint may be greater than expected (see the sketch below). This book enables C++ programmers to develop a functional mindset and metaprogramming skills. It also discusses the development and usage costs of metaprogramming and provides workarounds for minimizing them.
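
As a minimal sketch of that functional style (illustrative code, not taken from the book): a template parameter is a compile-time constant that cannot be reassigned, so iteration becomes recursion, each instantiation producing a new constant instead of mutating an accumulator, and branching is expressed through specialization.

    #include <iostream>

    // Compile-time factorial as a metafunction. N cannot be mutated, so there
    // is no loop with a running accumulator; each recursive instantiation
    // produces a new constant instead.
    template <unsigned N>
    struct Factorial {
        static constexpr unsigned value = N * Factorial<N - 1>::value;
    };

    // The "branch" that terminates the recursion is a specialization.
    template <>
    struct Factorial<0> {
        static constexpr unsigned value = 1;
    };

    int main() {
        // Computed entirely during compilation; only the constant 120 remains.
        static_assert(Factorial<5>::value == 120, "evaluated at compile time");
        std::cout << Factorial<5>::value << "\n";
    }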

Preface
Acknowledgment

PART I INTRODUCTION

1 Basic Tips
1.1 Metafunction and type_traits
1.1.1 Introduction to Metafunctions
1.1.2 Type Metafunction
1.1.3 Various Metafunctions
1.1.4 type_traits
1.1.5 Metafunctions and Macros
1.1.6 The Nominating Method of Metafunctions in This Book
1.2 Template Template Parameters and Container Templates
1.2.1 Templates as the Input of Metafunctions
1.2.2 Templates as the Output of Metafunctions
1.2.3 Container Templates
1.3 Writing of Sequences, Branches, and Looping Codes
1.3.1 Codes Executed in Sequence Order
1.3.2 Codes Executed in Branches
1.3.2.1 Implementing Branches Using std::conditional and std::conditional_t
1.3.2.2 Implementing Branches with (Partial) Specialization
1.3.2.3 Implementing Branches Using std::enable_if and std::enable_if_t
1.3.2.4 Compile-time Branches with Different Return Types
1.3.2.5 Simplify Codes with if constexpr
1.3.3 Codes Executed in Loops
1.3.4 Caution: Instantiation Explosion and Compilation Crash
1.3.5 Branch Selection and Short Circuit Logic
1.4 Curiously Recurring Template Pattern (CRTP)
1.5 Summary
1.6 Exercises

2 Heterogeneous Dictionaries and Policy Templates
2.1 Introduction to Named Arguments
2.2 Heterogeneous Dictionaries
2.2.1 How to Use the Module
2.2.2 The Representation of the Keys
2.2.3 Implementation of Heterogeneous Dictionaries
2.2.3.1 External Framework
2.2.3.2 Implementation of the Function Create
2.2.3.3 The Main Frame of Values
2.2.3.4 Logic Analysis of NewTupleType
2.2.4 A Brief Analysis of VarTypeDict's Performance
2.2.5 std::tuple as the Cache
2.3 Policy Templates
2.3.1 Introduction to Policies
2.3.1.1 Policy Objects
2.3.1.2 Policy Object Templates
2.3.2 Defining Policies and Policy Objects (Templates)
2.3.2.1 Policy Grouping
2.3.2.2 Declarations of Macros and Policy Objects (Templates)
2.3.3 Using Policies
2.3.4 Background Knowledge: Dominance and Virtual Inheritance
2.3.5 Policy Objects and Policy Dominance Structures
2.3.6 Policy Selection Metafunction
2.3.6.1 Main Frame
2.3.6.2 The Metafunction MinorCheck_
2.3.6.3 Construct the Final Return Type
2.3.7 Simplifying Declarations of Policy Objects with Macros
2.4 Summary
2.5 Exercises

PART II THE DEEP LEARNING FRAMEWORK

3 A Brief Introduction to Deep Learning
3.1 Introduction to Deep Learning
3.1.1 From Machine Learning to Deep Learning
3.1.2 A Wide Variety of Artificial Neural Networks
3.1.2.1 Artificial Neural Networks and Matrix Operations
3.1.2.2 Deep Neural Network
3.1.2.3 Recurrent Neural Networks
3.1.2.4 Convolutional Neural Networks
3.1.2.5 Components of Neural Networks
3.1.3 Organization and Training of Deep Learning Systems
3.1.3.1 Network Structure and Loss Function
3.1.3.2 Model Training
3.1.3.3 Predictions with Models
3.2 The Framework Achieved in This Book: MetaNN
3.2.1 From Computing Tools of Matrices to Deep Learning Frameworks
3.2.2 Introduction to MetaNN
3.2.3 What We Will Discuss
3.2.3.1 Data Representation
3.2.3.2 Matrix Operations
3.2.3.3 Layers and Automatic Derivation
3.2.3.4 Evaluation and Performance Optimization
3.2.4 Topics Not Covered in This Book
3.3 Summary

4 Type System and Basic Data Types
4.1 The Type System
4.1.1 Introduction to the Type System
4.1.2 Classification Systems of Iterators
4.1.3 Use Tags as Template Parameters
4.1.4 The Type System of MetaNN
4.1.5 Metafunctions Related to the Type System
4.1.5.1 Metafunction IsXXX
4.1.5.2 Metafunction DataCategory
4.2 Design Concepts
4.2.1 Support for Different Computing Devices and Computing Units
4.2.2 Allocation and Maintenance of Storage Space
4.2.2.1 Class Template Allocator
4.2.2.2 Class Template ContinuousMemory
4.2.3 Shallow Copy and Detection of Write Operations
4.2.3.1 Data Types without Requirements of Support for Element-level Reading and Writing
4.2.3.2 Element-level Writing and Shallow Copy
4.2.4 Expansion of Underlying Interfaces
4.2.5 Type Conversion and Evaluation
4.2.6 Data Interface Specifications
4.3 Scalars
4.3.1 Declaration of Class Templates
4.3.2 A Specialized Version Based on CPU
4.3.2.1 Type Definitions and Data Members
4.3.2.2 Construction, Assignment, and Movement
4.3.2.3 Reading and Writing Elements
4.3.2.4 Evaluating Related Interfaces
4.3.3 The Principal Type of Scalars
4.4 Matrix
4.4.1 Class Template Matrix
4.4.1.1 Declarations and Interfaces
4.4.1.2 Dimensional Information and Element-level Reading and Writing
4.4.1.3 Submatrix
4.4.1.4 Underlying Access Interfaces of Matrix
4.4.2 Special Matrices: Trivial Matrix, Zero Matrix, and One-hot Vector
4.4.2.1 Trivial Matrix
4.4.2.2 Zero Matrix
4.4.2.3 One-hot Vector
4.4.3 Introducing a New Matrix Class
4.5 List
4.5.1 Template Batch
4.5.2 Template Array
4.5.2.1 Introduction of the Template Array
4.5.2.2 Class Template ArrayImp
4.5.2.3 Metafunction IsIterator
4.5.2.4 Construction of Array Objects
4.5.3 Duplication and Template Duplicate
4.5.3.1 Introduction of the Template Duplicate
4.5.3.2 Class Template DuplicateImp
4.5.3.3 Construction of the Duplicate Object
4.6 Summary
4.7 Exercises

5 Operations and Expression Templates
5.1 Introduction to Expression Templates
5.2 The Design Principles of Operation Templates in MetaNN
5.2.1 Problems of the Operation Template Add
5.2.2 Behavior Analysis of Operation Templates
5.2.2.1 Validation and Derivation of Types
5.2.2.2 Division of Object Interfaces
5.2.2.3 Auxiliary Class Templates
5.3 Classification of Operations
5.4 Auxiliary Templates
5.4.1 The Auxiliary Class Template OperElementType_/OperDeviceType_
5.4.2 The Auxiliary Class Template OperXXX_
5.4.3 The Auxiliary Class Template OperCateCal
5.4.4 The Auxiliary Class Template OperOrganizer
5.4.4.1 Specialization for Scalars
5.4.4.2 Specialization for Lists of Scalars
5.4.4.3 Other Specialized Versions
5.4.5 The Auxiliary Class Template OperSeq
5.5 Framework for Operation Templates
5.5.1 Category Tags for Operation Templates
5.5.2 Definition of UnaryOp
5.6 Examples of Operation Implementations
5.6.1 Sigmoid Operation
5.6.1.1 Function Interface
5.6.1.2 Template OperSigmoid_
5.6.1.3 User Calls
5.6.2 Operation Add
5.6.2.1 Function Interface
5.6.2.2 The Implementation Framework of OperAdd_
5.6.2.3 The Implementation of OperAdd_::Eval
5.6.3 Operation Transpose
5.6.4 Operation Collapse
5.7 The List of Operations Supported by MetaNN
5.7.1 Unary Operations
5.7.2 Binary Operations
5.7.3 Ternary Operations
5.8 The Trade-off and Limitations of Operations
5.8.1 The Trade-off of Operations
5.8.2 Limitations of Operations
5.9 Summary
5.10 Exercises

6 Basic Layers
6.1 Design Principles of Layers
6.1.1 Introduction to Layers
6.1.2 Construction of Layer Objects
6.1.2.1 Information Delivered through Constructors
6.1.2.2 Information Specified through Template Parameters
6.1.3 Initialization and Loading of Parameter Matrices
6.1.4 Forward Propagation
6.1.5 Preservation of Intermediate Results
6.1.6 Backward Propagation
6.1.7 Update of Parameter Matrices
6.1.8 Acquisition of Parameter Matrices
6.1.9 Neutral Detection of Layers
6.2 Auxiliary Logic for Layers
6.2.1 Initializing the Module
6.2.1.1 Using the Initialization Module
6.2.1.2 MakeInitializer
6.2.1.3 Class Template ParamInitializer
6.2.1.4 Class Template Initializer
6.2.2 Class Template DynamicData
6.2.2.1 Base Class Template DynamicCategory
6.2.2.2 Derived Class Template DynamicWrapper
6.2.2.3 Encapsulating Behaviors of Pointers with DynamicData
6.2.2.4 Category Tags
6.2.2.5 Auxiliary Functions and Auxiliary Metafunctions
6.2.2.6 DynamicData and Dynamic Type System
6.2.3 Common Policy Objects for Layers
6.2.3.1 Parameters Relevant to Update and Backward Propagation
6.2.3.2 Parameters Relevant to Input
6.2.3.3 Parameters Relevant to Operations
6.2.4 Metafunction InjectPolicy
6.2.5 Universal I/O Structure
6.2.6 Universal Operation Functions
6.3 Specific Implementations of Layers
6.3.1 AddLayer
6.3.2 ElementMulLayer
6.3.2.1 Recording Intermediate Results
6.3.2.2 Forward and Backward Propagation
6.3.2.3 Neutral Detection
6.3.3 BiasLayer
6.3.3.1 Basic Framework
6.3.3.2 Initialization and Loading of Parameters
6.3.3.3 Obtaining Parameters
6.3.3.4 Forward and Backward Propagation
6.3.3.5 Collecting Parameter Gradients
6.3.3.6 Neutral Detection
6.4 Basic Layers Achieved in MetaNN
6.5 Summary
6.6 Exercises

7 Composite and Recurrent Layers
7.1 Interfaces and Design Principles of Composite Layers
7.1.1 Basic Structure
7.1.2 Syntax for Structural Description
7.1.3 The Inheritance Relationship of Policies
7.1.4 Correction of Policies
7.1.5 Constructors of Composite Layers
7.1.6 An Example of Complete Construction of a Composite Layer
7.2 Implementation of Policy Inheritance and Correction
7.2.1 Implementation of Policy Inheritance
7.2.1.1 The Container SubPolicyContainer and the Functions Related
7.2.1.2 The Implementation of PlainPolicy
7.2.1.3 The Metafunction SubPolicyPicker
7.2.2 Implementation of Policy Correction
7.3 The Implementation of ComposeTopology
7.3.1 Features
7.3.2 Introduction to the Topological Sorting Algorithm
7.3.3 Main Steps Included in ComposeTopology
7.3.4 Clauses for Structural Description and Their Classification
7.3.5 Examination of Structural Validity
7.3.6 Implementation of Topological Sorting
7.3.6.1 Pre-processing of Topological Sorting
7.3.6.2 Main Logic
7.3.6.3 Post-processing of Topological Sorting
7.3.7 The Metafunction to Instantiate Sublayers
7.3.7.1 Calculation of Policies for Each Sublayer
7.3.7.2 Examination of the Output Gradient
7.3.7.3 Policy Correction
7.3.7.4 Instantiations of Sublayers
7.4 The Implementation of ComposeKernel
7.4.1 Declarations of Class Templates
7.4.2 Management of Sublayer Objects
7.4.3 Parameter Acquisition, Gradient Collection, and Neutrality Detection
7.4.4 Initialization and Loading of Parameters
7.4.5 Forward Propagation
7.4.5.1 The Interface ComposeKernel::FeedForward
7.4.5.2 Saving the Calculation Results of Sublayers
7.4.5.3 FeedForwardFun
7.4.5.4 Constructing Input Containers of Sublayers
7.4.5.5 The Implementation Logic of Input from InConnect
7.4.5.6 The Implementation Logic of Input from InternalConnect
7.4.5.7 Forward Propagation and Filling in Results of Output
7.4.6 Backward Propagation
7.5 Examples of Composite Layer Implementations
7.6 Recurrent Layers
7.6.1 GruStep
7.6.2 Building a RecurrentLayer Class Template
7.6.2.1 Main Definitions of RecurrentLayer
7.6.2.2 How to Use RecurrentLayer
7.6.2.3 Implementations of Functions Such as SaveWeights
7.6.2.4 Forward Propagation
7.6.2.5 Backward Propagation
7.6.2.6 The Function FeedStepBackward
7.6.3 The Use of RecurrentLayer
7.7 Summary
7.8 Exercises

8 Evaluation and Its Optimization
8.1 Evaluation Models of MetaNN
8.1.1 Hierarchy of Operations
8.1.2 Module Division of Evaluation Subsystems
8.1.2.1 Overview of an Evaluation Process
8.1.2.2 EvalPlan
8.1.2.3 EvalPool
8.1.2.4 EvalUnit
8.1.2.5 EvalGroup
8.1.2.6 EvalHandle
8.1.2.7 EvalBuffer
8.1.2.8 The Auxiliary Function Evaluate
8.2 Basic Evaluation Logic
8.2.1 Evaluation Interface of Principal Types
8.2.2 Evaluation of Non-principal Basic Data Types
8.2.3 Evaluation of Operation Templates
8.2.4 DynamicData and Evaluation
8.3 Optimization of Evaluation Processes
8.3.1 Avoiding Repetitive Computations
8.3.2 Merging Similar Computations
8.3.3 Multi-operation Co-optimization
8.3.3.1 Background
8.3.3.2 MetaNN Solutions
8.3.3.3 Matching Evaluation Structures at Compile Time
8.3.3.4 Equality Judgment of Objects in MetaNN
8.3.3.5 Auto-trigger Optimization
8.4 Summary
8.5 Exercises

Postscript
Index

Li Wei graduated from Tsinghua University in 2011 and worked on the development and maintenance of the online prediction section of the deep learning machine translation system in Baidu's Natural Language Processing Department. He currently works at the Microsoft Advanced Technology Center and has more than ten years of relevant development experience, with a strong interest in C++ template metaprogramming and compile-time computing.