
Introduction to Computational Linear Algebra [Hardback]

Nabil Nassif (American University of Beirut, Lebanon), Jocelyne Erhel (INRIA, Rennes, France), Bernard Philippe (INRIA, Rennes Cedex, France)
  • Format: Hardback, 262 pages, height x width: 234x156 mm, weight: 800 g, 16 tables, black and white; 9 illustrations, black and white
  • Publication date: 26-Jun-2015
  • Publisher: Chapman & Hall/CRC
  • ISBN-10: 1482258692
  • ISBN-13: 9781482258691
  • Price: 119,73 €

Teach Your Students Both the Mathematics of Numerical Methods and the Art of Computer Programming

Introduction to Computational Linear Algebra presents classroom-tested material on computational linear algebra and its application to numerical solutions of partial and ordinary differential equations. The book is designed for senior undergraduate students in mathematics and engineering as well as first-year graduate students in engineering and computational science.

The text first introduces BLAS operations of types 1, 2, and 3 adapted to a scientific computer environment, specifically MATLAB®. It next covers the basic mathematical tools needed in numerical linear algebra and discusses classical material on Gauss decompositions as well as LU and Cholesky’s factorizations of matrices. The text then shows how to solve linear least squares problems, provides a detailed numerical treatment of the algebraic eigenvalue problem, and discusses (indirect) iterative methods to solve a system of linear equations. The final chapter illustrates how to solve discretized sparse systems of linear equations. Each chapter ends with exercises and computer projects.
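As a flavor of the topics covered (LU solves, least squares via QR, and conjugate gradient iterations), here is a minimal Python/NumPy sketch. It is not taken from the book, which works in MATLAB; the matrices and tolerances are illustrative assumptions only.

```python
import numpy as np

# Assemble a small symmetric positive definite (SPD) test system Ax = b.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)      # SPD by construction
b = rng.standard_normal(5)

# Direct solve (LAPACK LU with partial pivoting under the hood).
x_lu = np.linalg.solve(A, b)

# Least squares via QR: minimize ||Cy - d|| for a tall matrix C.
C = rng.standard_normal((8, 3))
d = rng.standard_normal(8)
Q, R = np.linalg.qr(C)
y = np.linalg.solve(R, Q.T @ d)

# Unpreconditioned conjugate gradient for the SPD system.
x = np.zeros_like(b)
r = b - A @ x
p = r.copy()
for _ in range(50):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
```

On this well-conditioned 5x5 system, the CG iterate agrees with the direct LU solution to high accuracy, which is the kind of comparison the book's computer exercises ask for.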

Preface xiii
About the Authors xvii
List of Figures xix
List of Tables xxi
List of Algorithms xxiii
1 Basic Linear Algebra Subprograms-BLAS 1(26)
1.1 An Introductory Example 1(2)
1.2 Matrix Notations 3(1)
1.3 IEEE Floating Point Systems and Computer Arithmetic 4(1)
1.4 Vector-Vector Operations: Level-1 BLAS 5(3)
1.5 Matrix-Vector Operations: Level-2 BLAS 8(3)
1.6 Matrix-Matrix Operations: Level-3 BLAS 11(4)
1.6.1 Matrix Multiplication Using GAXPYs 12(1)
1.6.2 Matrix Multiplication Using Scalar Products 13(1)
1.6.3 Matrix Multiplication Using External Products 13(1)
1.6.4 Block Multiplications 13(1)
1.6.5 An Efficient Data Management 14(1)
1.7 Sparse Matrices: Storage and Associated Operations 15(4)
1.8 Exercises 19(6)
1.9 Computer Project: Strassen Algorithm 25(2)
2 Basic Concepts for Matrix Computations 27(30)
2.1 Vector Norms 27(1)
2.2 Complements on Square Matrices 28(14)
2.2.1 Definition of Important Square Matrices 29(1)
2.2.2 Use of Orthonormal Bases 29(1)
2.2.3 Gram-Schmidt Process 30(3)
2.2.4 Determinants 33(1)
2.2.5 Eigenvalue-Eigenvector and Characteristic Polynomial 34(2)
2.2.6 Schur's Decomposition 36(3)
2.2.7 Orthogonal Decomposition of Symmetric Real and Complex Hermitian Matrices 39(2)
2.2.7.1 A Real and Symmetric: A = A^T 39(1)
2.2.7.2 A Complex Hermitian: A = A* 40(1)
2.2.8 Symmetric Positive Definite and Positive Semi-Definite Matrices 41(1)
2.3 Rectangular Matrices: Ranks and Singular Values 42(5)
2.3.1 Singular Values of a Matrix 43(1)
2.3.2 Singular Value Decomposition 44(3)
2.4 Matrix Norms 47(4)
2.5 Exercises 51(2)
2.6 Computer Exercises 53(4)
3 Gauss Elimination and LU Decompositions of Matrices 57(22)
3.1 Special Matrices for LU Decomposition 57(3)
3.1.1 Triangular Matrices 57(1)
3.1.2 Permutation Matrices 58(2)
3.2 Gauss Transforms 60(2)
3.2.1 Preliminaries for Gauss Transforms 60(1)
3.2.2 Definition of Gauss Transforms 61(1)
3.3 Naive LU Decomposition for a Square Matrix with Principal Minor Property (pmp) 62(7)
3.3.1 Algorithm and Operations Count 65(1)
3.3.2 LDL^T Decomposition of a Matrix Having the Principal Minor Property (pmp) 66(1)
3.3.3 The Case of Symmetric and Positive Definite Matrices: Cholesky Decomposition 66(2)
3.3.4 Diagonally Dominant Matrices 68(1)
3.4 PLU Decompositions with Partial Pivoting Strategy 69(4)
3.4.1 Unscaled Partial Pivoting 69(2)
3.4.2 Scaled Partial Pivoting 71(1)
3.4.3 Solving a System Ax = b Using the LU Decomposition 72(1)
3.5 MATLAB Commands Related to the LU Decomposition 73(1)
3.6 Condition Number of a Square Matrix 73(2)
3.7 Exercises 75(2)
3.8 Computer Exercises 77(2)
4 Orthogonal Factorizations and Linear Least Squares Problems 79(26)
4.1 Formulation of Least Squares Problems: Regression Analysis 79(1)
4.1.1 Least Squares and Regression Analysis 79(1)
4.1.2 Matrix Formulation of Regression Problems 80(1)
4.2 Existence of Solutions Using Quadratic Forms 80(3)
4.2.1 Full Rank Cases: Application to Regression Analysis 82(1)
4.3 Existence of Solutions through Matrix Pseudo-Inverse 83(4)
4.3.1 Obtaining Matrix Pseudo-Inverse through Singular Value Decomposition 85(2)
4.4 The QR Factorization Theorem 87(7)
4.4.1 Householder Transforms 87(4)
4.4.2 Steps of the QR Decomposition of a Matrix 91(1)
4.4.3 Particularizing When m > n 92(1)
4.4.4 Givens Rotations 93(1)
4.5 Gram-Schmidt Orthogonalization 94(4)
4.6 Least Squares Problem and QR Decomposition 98(1)
4.7 Householder QR with Column Pivoting 99(1)
4.8 MATLAB Implementations 99(2)
4.8.1 Use of the Backslash Operator 99(1)
4.8.2 QR Decompositions 99(2)
4.9 Exercises 101(1)
4.10 Computer Exercises 102(3)
5 Algorithms for the Eigenvalue Problem 105(44)
5.1 Basic Principles 105(10)
5.1.1 Why Compute the Eigenvalues of a Square Matrix? 105(2)
5.1.2 Spectral Decomposition of a Matrix 107(5)
5.1.3 The Power Method and its By-Products 112(3)
5.2 QR Method for a Non-Symmetric Matrix 115(6)
5.2.1 Reduction to an Upper Hessenberg Matrix 115(2)
5.2.2 QR Algorithm for an Upper Hessenberg Matrix 117(2)
5.2.3 Convergence of the QR Method 119(2)
5.3 Algorithms for Symmetric Matrices 121(3)
5.3.1 Reduction to a Tridiagonal Matrix 121(1)
5.3.2 Algorithms for Tridiagonal Symmetric Matrices 122(2)
5.4 Methods for Large Size Matrices 124(10)
5.4.1 Rayleigh-Ritz Projection 125(1)
5.4.2 Arnoldi Procedure 126(2)
5.4.3 The Arnoldi Method for Computing Eigenvalues of a Large Matrix 128(2)
5.4.4 Arnoldi Method for Computing an Eigenpair 130(1)
5.4.5 Symmetric Case: Lanczos Algorithm 131(3)
5.5 Singular Value Decomposition 134(4)
5.5.1 Full SVD 134(2)
5.5.2 Singular Triplets for Large Matrices 136(2)
5.6 Exercises 138(3)
5.7 Computer Exercises 141(8)
6 Iterative Methods for Systems of Linear Equations 149(28)
6.1 Stationary Methods 150(2)
6.1.1 Splitting 150(1)
6.1.2 Classical Stationary Methods 150(2)
6.2 Krylov Methods 152(4)
6.2.1 Krylov Properties 152(1)
6.2.2 Subspace Condition 153(1)
6.2.3 Minimization Property for spd Matrices 154(1)
6.2.4 Minimization Property for General Matrices 155(1)
6.3 Method of Steepest Descent for spd Matrices 156(2)
6.3.1 Convergence Properties of the Steepest Descent Method 157(1)
6.3.2 Preconditioned Steepest Descent Algorithm 157(1)
6.4 Conjugate Gradient Method (CG) for spd Matrices 158(7)
6.4.1 Krylov Basis Properties 159(2)
6.4.2 CG Algorithm 161(1)
6.4.3 Convergence of CG 162(1)
6.4.4 Preconditioned Conjugate Gradient 163(1)
6.4.5 Memory and CPU Requirements in PCG 164(1)
6.4.6 Relation with the Lanczos Method 164(1)
6.4.7 Case of Symmetric Indefinite Systems: SYMMLQ Method 165(1)
6.5 The Generalized Minimal Residual Method 165(4)
6.5.1 Krylov Basis Computation 166(1)
6.5.2 GMRES Algorithm 166(1)
6.5.3 Convergence of GMRES 167(1)
6.5.4 Preconditioned GMRES 168(1)
6.5.5 Restarted GMRES 169(1)
6.5.6 MINRES Algorithm 169(1)
6.6 The Bi-Conjugate Gradient Method 169(5)
6.6.1 Orthogonality Properties in BiCG 170(2)
6.6.2 BiCG Algorithm 172(1)
6.6.3 Convergence of BiCG 172(1)
6.6.4 Breakdowns and Near-Breakdowns in BiCG 173(1)
6.6.5 Complexity of BiCG and Variants of BiCG 173(1)
6.6.6 Preconditioned BiCG 173(1)
6.7 Preconditioning Issues 174(1)
6.8 Exercises 175(2)
7 Sparse Systems to Solve Poisson Differential Equations 177(50)
7.1 Poisson Differential Equations 177(2)
7.2 The Path to Poisson Solvers 179(1)
7.3 Finite Differences for Poisson-Dirichlet Problems 179(16)
7.3.1 One-Dimensional Dirichlet-Poisson 180(7)
7.3.2 Two-Dimensional Poisson-Dirichlet on a Rectangle 187(5)
7.3.3 Complexity for Direct Methods: Zero-Fill Phenomenon 192(3)
7.4 Variational Formulations 195(6)
7.4.1 Integration by Parts and Green's Formula 195(2)
7.4.2 Variational Formulation to One-Dimensional Poisson Problems 197(1)
7.4.3 Variational Formulations to Two-Dimensional Poisson Problems 198(2)
7.4.4 Petrov-Galerkin Approximations 200(1)
7.5 One-Dimensional Finite-Element Discretizations 201(13)
7.5.1 The P1 Finite-Element Spaces 202(1)
7.5.2 Finite-Element Approximation Using S1(Π) 203(2)
7.5.3 Implementation of the Method 205(6)
7.5.4 One-Dimensional P2 Finite-Elements 211(3)
7.6 Exercises 214(1)
7.7 Computer Exercises 215(12)
Bibliography 227(6)
Index 233
Nabil Nassif is affiliated with the Department of Mathematics at the American University of Beirut, where he teaches and conducts research in mathematical modeling, numerical analysis, and scientific computing. He earned a PhD in applied mathematics from Harvard University under the supervision of Professor Garrett Birkhoff.

Jocelyne Erhel is a senior research scientist and scientific leader of the Sage team at INRIA in Rennes, France. She earned a PhD from the University of Paris. Her research interests include sparse linear algebra and high performance scientific computing applied to geophysics, mainly groundwater models.

Bernard Philippe was a senior research scientist at INRIA in Rennes, France, until 2015 when he retired. He earned a PhD from the University of Rennes. His research interests include matrix computing with a special emphasis on large-sized eigenvalue problems.