Course Materials
Introduction:
Why is this course important?
The problem solving process and the role of this class
Some model problems
Computer number system (normalized floating-point numbers)
Machine epsilon/precision/constant
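Machine epsilon can also be found experimentally. A minimal Python sketch (illustrative only, not part of the original materials): halve a candidate until adding it to 1.0 no longer changes the result.

```python
def machine_epsilon():
    # halve eps until 1.0 + eps/2 rounds back to 1.0;
    # the last eps that still perturbs 1.0 is the machine epsilon
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

print(machine_epsilon())  # 2^-52, about 2.22e-16, for IEEE double precision
```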
Errors: absolute and relative errors
Input error fl(x)
Round off error bounds, fl(x o y)
When can loss of accuracy happen?
How to minimize the round off errors?
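Loss of accuracy typically comes from subtracting nearly equal numbers. A small Python sketch (an illustration, not from the notes): for x^2 - 2bx + c = 0 with b much larger than c, the textbook formula cancels badly for the small root, while using the product of the roots (x1 * x2 = c) avoids the subtraction.

```python
import math

def roots_naive(b, c):
    # roots of x^2 - 2bx + c = 0 straight from the formula
    d = math.sqrt(b * b - c)
    return b + d, b - d

def roots_stable(b, c):
    # compute the large root safely (no cancellation), then get the
    # small root from the product of the roots: x1 * x2 = c
    d = math.sqrt(b * b - c)
    x1 = b + d if b >= 0.0 else b - d
    return x1, c / x1

print(roots_naive(1e8, 1.0))   # small root ruined by cancellation
print(roots_stable(1e8, 1.0))  # small root close to 5e-9, accurate
```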
Definition of vector norms
||x||_p for p = 1, 2, and infinity
Cauchy-Schwarz inequality and application
Which norm is bigger/biggest/smaller/smallest?
Equivalence of norms in finite-dimensional spaces
Subordinate/associated/natural matrix norms
||A||_p for p = 1, 2, and infinity
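The p = 1 and infinity matrix norms have simple closed forms: maximum absolute column sum and maximum absolute row sum. A quick Python check (the example matrix is made up):

```python
def norm_1(A):
    # ||A||_1 = maximum absolute column sum
    n, m = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(m))

def norm_inf(A):
    # ||A||_inf = maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

A = [[1.0, -2.0],
     [3.0,  4.0]]
print(norm_1(A))    # 6.0: column sums are 4 and 6
print(norm_inf(A))  # 7.0: row sums are 3 and 7
```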
Direct methods for solving linear systems of equations
Backward substitution for a triangular system of equations
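Backward substitution fits in a few lines of Python (a sketch, not the course's code): solve the last equation first, then walk upward substituting the known values.

```python
def back_substitute(U, b):
    # solve Ux = b for upper triangular U with nonzero diagonal,
    # working from the last row up
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 4.0]]
print(back_substitute(U, [7.0, 9.0, 12.0]))  # [1.0, 2.0, 3.0]
```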
Gaussian elimination to reduce a general system of equations to a triangular system
Elementary row transform matrices and their properties, the relation to Gaussian elimination
Augmented matrix and Gaussian elimination
Gaussian elimination and LU decomposition, forward and backward substitution
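Gaussian elimination produces the LU decomposition when the multipliers are saved; forward then backward substitution finishes the solve, and the product of U's diagonal gives the determinant. A compact Python sketch without pivoting (illustrative only; the test matrix is made up):

```python
def lu_decompose(A):
    # Gaussian elimination on a copy of A; the multipliers l_ik
    # (the unit lower triangular L) are stored below U's diagonal
    n = len(A)
    M = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            M[i][k] /= M[k][k]            # multiplier l_ik
            for j in range(k + 1, n):
                M[i][j] -= M[i][k] * M[k][j]
    return M

def lu_solve(M, b):
    # forward substitution Ly = b (L has unit diagonal),
    # then backward substitution Ux = y
    n = len(b)
    y = b[:]
    for i in range(n):
        for j in range(i):
            y[i] -= M[i][j] * y[j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / M[i][i]
    return x

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
M = lu_decompose(A)
print(lu_solve(M, [4.0, 10.0, 24.0]))  # [1.0, 1.0, 1.0]
print(M[0][0] * M[1][1] * M[2][2])     # det(A) = product of U's diagonal = 4.0
```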
Find the determinant of the matrix using Gaussian elimination
Operation counts of Gaussian elimination and substitution: O(n^3/3), O(n^2/2)
Partial row pivoting: Why and how?
How to find the inverse matrix of A? What is the computation cost?
How to evaluate y = A^{-1}b or A^{-k}B? What is the operation count?
Error analysis
Condition number of A
Growth factor and upper bounds
Banach's Lemma
Perturbation analysis of (A + E) x = b + e
Residual vector and its relation to the relative error
Gaussian elimination for special matrices
Symmetric positive/negative definite matrices: how to tell, Cholesky decomposition and LDL^T decomposition, storage and operation counts
Banded matrices: tridiagonal, penta-diagonal
Strictly column diagonally dominant matrices
Basic iterative methods for solving linear systems of equations
What is an iterative method? Why and when do we want to use iterative methods?
What is a basic iterative method?
Jacobi method (solve for the diagonals) and the iteration matrix D^{-1}(L+U)
Gauss-Seidel method (solve for the diagonal and use whatever is available), the iteration matrix (D+L)^{-1}U
SOR(w) iterative method: x^{(k+1)} = (1-w) x^{(k)} + w x_GS^{(k+1)}
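Both basic methods fit in a few lines of Python (a sketch, not the course's code; the test system is made up and strictly row diagonally dominant, so both iterations converge):

```python
def jacobi(A, b, x0, iters):
    # solve each equation for its diagonal unknown, using only
    # values from the previous iterate x^(k)
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, x0, iters):
    # same update, but use each new component as soon as it is available
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [6.0, 7.0]                        # exact solution is x = [1, 2]
print(jacobi(A, b, [0.0, 0.0], 50))
print(gauss_seidel(A, b, [0.0, 0.0], 50))
```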
Convergence of iterative methods
Sufficient conditions on the iteration matrix:
There is one associated matrix norm such that ||R|| < 1
Sufficient conditions on the original matrix A:
Strictly row diagonally dominant
Weakly diagonally dominant and irreducible
Symmetric positive definite matrices
Sufficient AND necessary condition for convergence: the spectral radius ρ(R) < 1
Convergence speed and the minimum number of iterations needed
Algebraic eigenvalue problems
Review of eigenvalues and eigenvectors, Jordan canonical forms
The power method for the dominant eigenvalue and its variations, convergence order, conditions for convergence, proof
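A minimal power-method sketch in Python (illustrative only; the symmetric test matrix is made up and has eigenvalues 3 and 1, so the error shrinks at rate |λ2/λ1| = 1/3 per step):

```python
def power_method(A, x0, iters):
    # repeatedly apply A: the component along the dominant eigenvector
    # grows fastest.  Normalize by the entry of largest magnitude,
    # which also serves as the eigenvalue estimate.
    n = len(A)
    x = x0[:]
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)
        x = [yi / lam for yi in y]
    return lam, x

A = [[2.0, 1.0],
     [1.0, 2.0]]          # eigenvalues 3 (eigenvector [1, 1]) and 1
lam, v = power_method(A, [1.0, 0.0], 60)
print(lam)                # close to 3.0
print(v)                  # close to [1.0, 1.0]
```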
Gerschgorin circle theorem for eigenvalues
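The row version of the theorem is easy to compute: every eigenvalue lies in at least one disc centered at a_ii with radius equal to the off-diagonal absolute row sum. A Python sketch (the test matrix is made up):

```python
def gerschgorin_discs(A):
    # return one (center, radius) pair per row:
    # center a_ii, radius = sum of |a_ij| over j != i
    discs = []
    for i, row in enumerate(A):
        radius = sum(abs(a) for j, a in enumerate(row) if j != i)
        discs.append((row[i], radius))
    return discs

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 10.0]]
print(gerschgorin_discs(A))  # [(4.0, 1.0), (3.0, 2.0), (10.0, 1.0)]
```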