E-book: Parallel Computing in Quantum Chemistry

  • Length: 232 pages
  • Publication date: 09-Apr-2008
  • Publisher: CRC Press Inc
  • Language: English
  • ISBN-13: 9781040209448
  • Format: EPUB+DRM
  • Price: 77,63 €*
  • * This is the final price, i.e., no additional discounts are applied.
  • This e-book is intended for personal use only. E-books cannot be returned, and the money paid for purchased e-books is not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that you need to install free software in order to unlock and read it. To read this e-book you must create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app developed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

An In-Depth View of Hardware Issues, Programming Practices, and Implementation of Key Methods

Exploring the challenges of parallel programming from the perspective of quantum chemists, Parallel Computing in Quantum Chemistry thoroughly covers topics relevant to designing and implementing parallel quantum chemistry programs.

Focusing on good parallel program design and performance analysis, the first part of the book deals with parallel computer architectures and parallel computing concepts and terminology. The authors discuss trends in hardware, methods, and algorithms; parallel computer architectures and the overall system view of a parallel computer; message-passing; parallelization via multi-threading; measures for predicting and assessing the performance of parallel algorithms; and fundamental issues of designing and implementing parallel programs.
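
In practice, the message-passing model covered in the first part is most often expressed with MPI (the book includes a brief MPI introduction in Appendix A). Purely as an illustrative sketch, assuming a working MPI installation and at least two processes, a blocking point-to-point exchange looks roughly like this; it is not code taken from the book:

    /* Minimal MPI point-to-point sketch: rank 0 sends one value to rank 1.
     * Illustrative only; not code from the book.
     * Build: mpicc send_recv.c -o send_recv   Run: mpirun -np 2 ./send_recv */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double value = 0.0;
        if (rank == 0 && size > 1) {
            value = 3.14;
            /* Blocking send: returns once the send buffer may be reused. */
            MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Blocking receive: returns once the message has arrived. */
            MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %f from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }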

The second part contains detailed discussions and performance analyses of parallel algorithms for a number of important and widely used quantum chemistry procedures and methods. The book presents schemes for the parallel computation of two-electron integrals, details the Hartree–Fock procedure, considers the parallel computation of second-order Møller–Plesset energies, and examines the difficulties of parallelizing local correlation methods.
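
As a rough illustration of what a statically load-balanced scheme of this kind can look like, the sketch below distributes shell-quartet indices round-robin over MPI ranks and combines the partial results with an all-reduce. The task count nquartets and the placeholder compute_quartet() are invented for this example; the book's actual integral evaluation, screening, and load-balance analysis are considerably more involved:

    /* Sketch of static (round-robin) distribution of shell-quartet tasks
     * across MPI ranks, with partial results combined by an all-reduce.
     * compute_quartet() is a hypothetical placeholder, not the book's code. */
    #include <mpi.h>
    #include <stdio.h>

    /* Stand-in for the per-quartet work (e.g. integral evaluation plus
     * Fock-matrix contributions). */
    static double compute_quartet(long q)
    {
        return (double)(q % 7); /* dummy work */
    }

    int main(int argc, char **argv)
    {
        int rank, nproc;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);

        const long nquartets = 100000;   /* assumed total task count */
        double local = 0.0, total = 0.0;

        /* Round-robin: rank r handles quartets r, r+nproc, r+2*nproc, ... */
        for (long q = rank; q < nquartets; q += nproc)
            local += compute_quartet(q);

        /* Sum the partial results so every rank holds the total. */
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %f\n", total);

        MPI_Finalize();
        return 0;
    }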

Through a solid assessment of parallel computing hardware issues, parallel programming practices, and implementation of key methods, this invaluable book enables readers to develop efficient quantum chemistry software capable of utilizing large-scale parallel computers.
I Parallel Computing Concepts and Terminology
1 Introduction 3
1.1 Parallel Computing in Quantum Chemistry: Past and Present 4
1.2 Trends in Hardware Development 5
1.2.1 Moore's Law 5
1.2.2 Clock Speed and Performance 6
1.2.3 Bandwidth and Latency 7
1.2.4 Supercomputer Performance 8
1.3 Trends in Parallel Software Development 10
1.3.1 Responding to Changes in Hardware 10
1.3.2 New Algorithms and Methods 10
1.3.3 New Programming Models 12
References 13
2 Parallel Computer Architectures 17
2.1 Flynn's Classification Scheme 17
2.1.1 Single-Instruction, Single-Data 17
2.1.2 Single-Instruction, Multiple-Data 18
2.1.3 Multiple-Instruction, Multiple-Data 18
2.2 Network Architecture 19
2.2.1 Direct and Indirect Networks 19
2.2.2 Routing 20
2.2.3 Network Performance 23
2.2.4 Network Topology 25
2.2.4.1 Crossbar 26
2.2.4.2 Ring 27
2.2.4.3 Mesh and Torus 27
2.2.4.4 Hypercube 28
2.2.4.5 Fat Tree 28
2.2.4.6 Bus 30
2.2.4.7 Ad Hoc Grid 31
2.3 Node Architecture 31
2.4 MIMD System Architecture 34
2.4.1 Memory Hierarchy 35
2.4.2 Persistent Storage 35
2.4.2.1 Local Storage 37
2.4.2.2 Network Storage 37
2.4.2.3 Trends in Storage 38
2.4.3 Reliability 38
2.4.4 Homogeneity and Heterogeneity 39
2.4.5 Commodity versus Custom Computers 40
2.5 Further Reading 42
References 43
3 Communication via Message-Passing 45
3.1 Point-to-Point Communication Operations 46
3.1.1 Blocking Point-to-Point Operations 46
3.1.2 Non-Blocking Point-to-Point Operations 47
3.2 Collective Communication Operations 49
3.2.1 One-to-All Broadcast 50
3.2.2 All-to-All Broadcast 51
3.2.3 All-to-One Reduction and All-Reduce 54
3.3 One-Sided Communication Operations 55
3.4 Further Reading 56
References 56
4 Multi-Threading 59
4.1 Pitfalls of Multi-Threading 61
4.2 Thread-Safety 64
4.3 Comparison of Multi-Threading and Message-Passing 65
4.4 Hybrid Programming 66
4.5 Further Reading 69
References 70
5 Parallel Performance Evaluation 71
5.1 Network Performance Characteristics 71
5.2 Performance Measures for Parallel Programs 74
5.2.1 Speedup and Efficiency 74
5.2.2 Scalability 79
5.3 Performance Modeling 80
5.3.1 Modeling the Execution Time 80
5.3.2 Performance Model Example: Matrix-Vector Multiplication 83
5.4 Presenting and Evaluating Performance Data: A Few Caveats 86
5.5 Further Reading 90
References 90
6 Parallel Program Design 93
6.1 Distribution of Work 94
6.1.1 Static Task Distribution 95
6.1.1.1 Round-Robin and Recursive Task Distributions 96
6.1.2 Dynamic Task Distribution 99
6.1.2.1 Manager-Worker Model 99
6.1.2.2 Decentralized Task Distribution 101
6.2 Distribution of Data 101
6.3 Designing a Communication Scheme 104
6.3.1 Using Collective Communication 104
6.3.2 Using Point-to-Point Communication 105
6.4 Design Example: Matrix-Vector Multiplication 107
6.4.1 Using a Row-Distributed Matrix 108
6.4.2 Using a Block-Distributed Matrix 109
6.5 Summary of Key Points of Parallel Program Design 112
6.6 Further Reading 114
References 114
II Applications of Parallel Programming in Quantum Chemistry
7 Two-Electron Integral Evaluation 117
7.1 Basics of Integral Computation 117
7.2 Parallel Implementation Using Static Load Balancing 119
7.2.1 Parallel Algorithms Distributing Shell Quartets and Pairs 119
7.2.2 Performance Analysis 121
7.2.2.1 Determination of the Load Imbalance Factor k(p) 122
7.2.2.2 Determination of μ and σ for Integral Computation 123
7.2.2.3 Predicted and Measured Efficiencies 124
7.3 Parallel Implementation Using Dynamic Load Balancing 125
7.3.1 Parallel Algorithm Distributing Shell Pairs 126
7.3.2 Performance Analysis 128
7.3.2.1 Load Imbalance 128
7.3.2.2 Communication Time 128
7.3.2.3 Predicted and Measured Efficiencies 129
References 130
8 The Hartree–Fock Method 131
8.1 The Hartree–Fock Equations 131
8.2 The Hartree–Fock Procedure 133
8.3 Parallel Fock Matrix Formation with Replicated Data 135
8.4 Parallel Fock Matrix Formation with Distributed Data 138
8.5 Further Reading 145
References 146
9 Second-Order Møller–Plesset Perturbation Theory 147
9.1 The Canonical MP2 Equations 147
9.2 A Scalar Direct MP2 Algorithm 149
9.3 Parallelization with Minimal Modifications 151
9.4 High-Performance Parallelization 154
9.5 Performance of the Parallel Algorithms 158
9.6 Further Reading 164
References 164
10 Local Møller–Plesset Perturbation Theory 167
10.1 The LMP2 Equations 167
10.2 A Scalar LMP2 Algorithm 169
10.3 Parallel LMP2 170
10.3.1 Two-Electron Integral Transformation 171
10.3.2 Computation of the Residual 173
10.3.3 Parallel Performance 174
References 177
Appendices
A A Brief Introduction to MPI 181
B Pthreads: Explicit Use of Threads 189
C OpenMP: Compiler Extensions for Multi-Threading 195
Index 205
Authors: Janssen, Curtis L.; Nielsen, Ida M. B.