Numerical Methods: Jacobi, Gauss, Doolittle, and Newton
Jacobi Method
The Jacobi method is an iterative algorithm for solving a system of linear equations; it is guaranteed to converge when the coefficient matrix is strictly diagonally dominant. Each equation is solved for its diagonal unknown, the current approximation is substituted into the remaining terms, and the process is repeated until the iterates converge.
function [x, iter] = jacobi(A, b, x0, tol, maxiter)
  [dim, dim] = size(A);
  x = x0;
  xnext = zeros(dim, 1);
  r = norm(A * x - b);
  normb = norm(b);
  iter = 0;
  % Iterate until the relative residual drops below tol
  while (r > tol * normb) && (iter < maxiter)
    iter++;
    for j = 1:dim
      % Solve equation j for x(j), using the previous iterate everywhere else
      xnext(j) = (b(j) - A(j, [1:j - 1, j + 1:dim]) * x([1:j - 1, j + 1:dim])) / A(j, j);
    end
    x = xnext;
    r = norm(A * x - b);
  end
end
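As a quick check, the routine can be run on a small strictly diagonally dominant system; the matrix, tolerance, and iteration cap below are illustrative choices, not values from the notes.

% Illustrative strictly diagonally dominant system
A = [4 -1 0; -1 4 -1; 0 -1 4];
b = [2; 4; 10];
x0 = zeros(3, 1);
[x, iter] = jacobi(A, b, x0, 1e-10, 100);
disp(x)     % converges to the same solution as A \ b
disp(iter)  % sweeps needed to reach the tolerance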
Gaussian Elimination with Partial Pivoting
Gaussian elimination is a method for solving a system of linear equations. The system’s augmented matrix is transformed into an upper triangular matrix through a series of elementary row operations. Partial pivoting swaps rows at each step so that the entry of largest absolute value in the current column becomes the pivot, which improves numerical stability.
function [x, vecperm] = gausspiv(A, b)
  n = rows(A);
  c = columns(b);
  Ab = [A b];      % augmented matrix; b may have several columns
  vecperm = 1:n;   % records the row permutation
  for k = 1:n - 1
    % Partial pivoting: pick the entry of largest absolute value in column k
    [Y, j] = max(abs(Ab(k:n, k)));
    i = j + k - 1;
    if i > k
      Ab([k, i], :) = Ab([i, k], :);
      vecperm([k, i]) = vecperm([i, k]);
    end
    % Eliminate column k below the pivot
    for i = k + 1:n
      m = -Ab(i, k) / Ab(k, k);
      Ab(i, k:n + c) = Ab(i, k:n + c) + m * Ab(k, k:n + c);
    end
  end
  % Back-substitute on the triangularized system (sustregr is defined below)
  x = sustregr(Ab(1:n, 1:n), Ab(1:n, n + 1:n + c));
end
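A usage sketch follows; the 3-by-3 system is made up for illustration, and sustregr from the Backward Substitution section must be available.

A = [2 1 -1; -3 -1 2; -2 1 2];
b = [8; -11; -3];
[x, vecperm] = gausspiv(A, b);
disp(x)        % this system has the exact solution [2; 3; -1]
disp(vecperm)  % row order after partial pivoting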
Doolittle LU Decomposition
Doolittle’s method decomposes a square matrix into the product of a unit lower triangular matrix L (ones on its diagonal) and an upper triangular matrix U. The decomposition is useful for solving systems of linear equations and for computing determinants.
Doolittle
n = Dimensions[A][[1]];
L = IdentityMatrix[n];
U = Table[0, {i, n}, {j, n}];
(* First row of U and first column of L *)
For[j = 1, j <= n, j++,
  U[[1, j]] = A[[1, j]];
];
For[i = 2, i <= n, i++,
  L[[i, 1]] = A[[i, 1]] / U[[1, 1]];
];
(* Alternate between row k of U and column k of L *)
For[k = 2, k <= n - 1, k++,
  U[[k, k]] = A[[k, k]] - Sum[L[[k, s]] * U[[s, k]], {s, k - 1}];
  For[j = k + 1, j <= n, j++,
    U[[k, j]] = A[[k, j]] - Sum[L[[k, s]] * U[[s, j]], {s, k - 1}];
  ];
  For[i = k + 1, i <= n, i++,
    L[[i, k]] = (A[[i, k]] - Sum[L[[i, s]] * U[[s, k]], {s, k - 1}]) / U[[k, k]];
  ];
];
U[[n, n]] = A[[n, n]] - Sum[L[[n, s]] * U[[s, n]], {s, n - 1}];
MatrixForm[L]
MatrixForm[U]
(* Forward substitution: solve L.y == b *)
y = Table[0, {i, n}];
For[i = 1, i <= n, i++,
  y[[i]] = (b[[i]] - Sum[L[[i, k]] * y[[k]], {k, i - 1}]) / L[[i, i]];
];
(* Backward substitution: solve U.x == y *)
x = Table[0, {i, n}];
For[i = n, i >= 1, i--,
  x[[i]] = (y[[i]] - Sum[U[[i, k]] * x[[k]], {k, i + 1, n}]) / U[[i, i]];
];
x
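For readers following the Octave routines in these notes, here is a minimal Octave sketch of the same factorization; the function name doolittle and the vectorized updates are my own, not from the notes.

function [L, U] = doolittle(A)
  % Doolittle factorization: A = L * U with ones on the diagonal of L
  n = rows(A);
  L = eye(n);
  U = zeros(n);
  for k = 1:n
    % Row k of U, subtracting what L and U already account for
    U(k, k:n) = A(k, k:n) - L(k, 1:k - 1) * U(1:k - 1, k:n);
    % Column k of L below the diagonal
    L(k + 1:n, k) = (A(k + 1:n, k) - L(k + 1:n, 1:k - 1) * U(1:k - 1, k)) / U(k, k);
  end
end

The factors can then be fed to the substitution routines below, as in y = sustprogr(L, b) followed by x = sustregr(U, y).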
Newton’s Divided Differences Interpolation
Newton’s divided-difference method constructs a polynomial that interpolates a given set of data points. The coefficients are divided differences, built recursively from the data, so the polynomial can be extended with a new point without recomputing the earlier terms.
Divided Differences
n = ?;  (* number of data points minus 1 *)
x = ?;  (* list of n + 1 abscissas *)
y = ?;  (* list of n + 1 ordinates *)
(* Column 1 of A holds y; column i holds the (i - 1)-th divided differences *)
A = Table[y, {i, n + 1}];
A = Transpose[A];
For[i = 2, i <= n + 1, i++,
  For[j = i, j <= n + 1, j++,
    A[[j, i]] = (A[[j, i - 1]] - A[[j - 1, i - 1]]) / (x[[j]] - x[[j - i + 1]]);
  ];
];
(* The diagonal entries A[[i, i]] are the Newton coefficients *)
pNewton[t_] := Sum[A[[i, i]] * Product[(t - x[[j]]), {j, i - 1}], {i, n + 1}];
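The same table translates naturally to Octave. In this compact sketch (the name newton_eval and the in-place layout are my own), the vector c ends up holding the diagonal coefficients, and the Newton form is evaluated with Horner’s scheme.

function p = newton_eval(x, y, t)
  n = length(x);
  c = y(:);  % c(i) becomes the divided difference f[x(1), ..., x(i)]
  for i = 2:n
    for j = n:-1:i
      c(j) = (c(j) - c(j - 1)) / (x(j) - x(j - i + 1));
    end
  end
  % Horner-style evaluation of the Newton form at t
  p = c(n);
  for i = n - 1:-1:1
    p = p .* (t - x(i)) + c(i);
  end
end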
Backward Substitution
function [x] = sustregr(A, b)
  % Backward substitution for upper triangular A; b may have several columns
  n = rows(A);
  c = columns(b);
  x = zeros(n, c);
  for k = n:-1:1
    % x(1:k, :) is still zero, so A(k, :) * x uses only the entries already solved
    x(k, :) = (b(k, :) - A(k, :) * x) ./ A(k, k);
  end
end
Forward Substitution
% Given A = L * U, solve L * y = b and then U * x = y
y = sustprogr(L, b);
x = sustregr(U, y);
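sustprogr is called above but never defined in these notes; a minimal sketch, mirroring sustregr, would be:

function [x] = sustprogr(A, b)
  % Forward substitution for lower triangular A; b may have several columns
  n = rows(A);
  c = columns(b);
  x = zeros(n, c);
  for k = 1:n
    % x(k:n, :) is still zero, so A(k, :) * x uses only the entries already solved
    x(k, :) = (b(k, :) - A(k, :) * x) ./ A(k, k);
  end
end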