@misc{Knott14gaussianelimination,
  author = {Gary D. Knott},
  title  = {Gaussian Elimination and {LU}-Decomposition},
  year   = {2014}
}


Abstract

Solving a set of linear equations arises in many contexts in applied mathematics. At least until recently, it could be claimed that solving sets of linear equations (generally as a component of dealing with larger problems such as partial-differential-equation solving or optimization) consumes more computer time than any other computational procedure. (Distant competitors would be the Gram-Schmidt process and the fast Fourier transform computation; the Gram-Schmidt process is a first cousin of the Gaussian elimination computation, since both may be used to solve systems of linear equations, and both are based on forming particular linear combinations of a given sequence of vectors.) Indeed, the invention of the electronic digital computer was largely motivated by the desire to find a labor-saving means of solving systems of linear equations [Smi10]. Often the subject of linear algebra is approached by starting with the topic of solving sets of linear equations, and Gaussian elimination methodology is elaborated to introduce matrix inverses, rank, nullspaces, etc. We have seen above that computing a preimage vector x ∈ R^n of a vector v ∈ R^k with respect to the n × k matrix A consists of finding a solution (x_1, ..., x_n) to the k linear equations:

A_{11}x_1 + A_{21}x_2 + · · · + A_{n1}x_n = v_1
A_{12}x_1 + A_{22}x_2 + · · · + A_{n2}x_n = v_2
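The kind of computation the abstract describes can be sketched in a few lines. The following is a minimal, illustrative Gaussian elimination with partial pivoting, not code from the paper; the function name `gauss_solve` and the example system are assumptions made here for demonstration.

```python
def gauss_solve(A, b):
    """Solve A x = b for a square matrix A via forward elimination
    with partial pivoting, followed by back substitution.
    Illustrative sketch only; a production code would use a library
    routine such as numpy.linalg.solve."""
    n = len(A)
    # Build the augmented matrix [A | b] from copies, so the
    # caller's data is left untouched.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: bring the row with the largest
        # pivot magnitude into the pivot position.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# A small 3x3 example system (chosen here for illustration).
A = [[ 2.0,  1.0, -1.0],
     [-3.0, -1.0,  2.0],
     [-2.0,  1.0,  2.0]]
b = [8.0, -11.0, -3.0]
print(gauss_solve(A, b))  # solution x = (2, 3, -1)
```

The partial-pivoting step is what makes the procedure numerically robust when a pivot entry is small or zero; without it, elimination can divide by zero or amplify rounding error.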