In numerical analysis, the Lax equivalence theorem is a fundamental theorem in the analysis of finite difference methods for the numerical solution of partial differential equations. It states that for a consistent finite difference method for a well-posed linear initial value problem, the method is convergent if and only if it is stable. [1]
The importance of the theorem is that while the convergence of the solution of the finite difference method to the solution of the partial differential equation is what is desired, it is ordinarily difficult to establish because the numerical method is defined by a recurrence relation while the differential equation involves a differentiable function. However, consistency, the requirement that the finite difference method approximates the correct partial differential equation, is straightforward to verify, and stability is typically much easier to show than convergence (and would be needed in any event to show that round-off error will not destroy the computation). Hence convergence is usually shown via the Lax equivalence theorem.
Stability in this context means that a matrix norm of the matrix used in the iteration is at most unity, called (practical) Lax–Richtmyer stability. [2] Often a von Neumann stability analysis is substituted for convenience, although von Neumann stability only implies Lax–Richtmyer stability in certain cases.
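As an illustration (not drawn from the sources cited here), a von Neumann analysis can be carried out for the standard forward-time centered-space (FTCS) scheme for the heat equation u_t = α u_xx. Substituting a Fourier mode into the recurrence gives the amplification factor g(θ) = 1 − 4r sin²(θ/2), where r = αΔt/Δx² and θ is the phase angle; the scheme is von Neumann stable exactly when |g(θ)| ≤ 1 for all θ, i.e. when r ≤ 1/2. The following sketch checks this numerically:

```python
import numpy as np

def max_amplification(r, n=1000):
    """Maximum |g| of the FTCS amplification factor
    g(theta) = 1 - 4*r*sin^2(theta/2) over phase angles theta in [0, pi]."""
    theta = np.linspace(0.0, np.pi, n)
    g = 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2
    return np.max(np.abs(g))

# r = alpha * dt / dx**2; FTCS is von Neumann stable iff r <= 1/2
print(max_amplification(0.4))  # 1.0 -> amplification bounded by unity: stable
print(max_amplification(0.6))  # 1.4 -> some modes grow each step: unstable
```

Halving Δx while keeping Δt fixed quadruples r, which is why an unstable choice of step sizes cannot be rescued by refining the spatial grid alone.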
This theorem is due to Peter Lax. It is sometimes called the Lax–Richtmyer theorem, after Peter Lax and Robert D. Richtmyer. [3]