53. Finite difference methods for two-point boundary value problems#
Anne Kværnø, André Massing
Date: April 19, 2021
53.1. Introduction#
In this note the finite difference method for solving boundary value problems and partial differential equations (PDEs) will be presented.
Roughly speaking, a finite difference method consists of the following steps:
Discretize the domain on which the equation is defined.
On each grid point, replace the derivatives with an approximation, using the values in neighbouring grid points.
Replace the exact solutions by their approximations.
Solve the resulting system of equations.
We will first see how to find approximations to the derivative of a function, and then how these can be used to solve boundary value problems like

$$u''(x) + p(x)u'(x) + q(x)u(x) = r(x), \qquad u(a) = u_a, \quad u(b) = u_b,$$

and time dependent partial differential equations like the heat equation

$$u_t = u_{xx}.$$
The technique described here is, however, applicable to several other PDEs, and it is therefore important to try to understand the underlying idea.
53.2. Numerical differentiation#
This is the main tool for finite difference methods.
Given a sufficiently smooth function $u$, the derivative of $u$ at the point $x$ is defined by

$$u'(x) = \lim_{h \rightarrow 0} \frac{u(x+h) - u(x)}{h}.$$

Given a sufficiently small value of $h > 0$, the following are all natural approximations to $u'(x)$:

$$u'(x) \approx \frac{u(x+h)-u(x)}{h}, \qquad u'(x) \approx \frac{u(x)-u(x-h)}{h}, \qquad u'(x) \approx \frac{u(x+h)-u(x-h)}{2h}.$$

The first one is taken directly from the definition, and so is the second, and the third is just the mean of the first two.

A common approximation to the second derivative is

$$u''(x) \approx \frac{u(x+h)-2u(x)+u(x-h)}{h^2}.$$
Numerical example 1:
Test the method on the function $u(x) = \sin x$ at the point $x = \pi/4$, for which $u'(x) = \cos x$ and $u''(x) = -\sin x$.
import matplotlib.pyplot as plt
import numpy as np
# Import linear algebra module
import scipy.linalg as la
from matplotlib import cm
from matplotlib.pyplot import *
from mpl_toolkits.mplot3d import Axes3D # For 3-d plot
from numpy import *
from scipy.linalg import solve # Solve linear systems
from scipy.sparse import diags # Create diagonal matrices
%matplotlib inline
newparams = {
"figure.figsize": (8.0, 4.0),
"axes.grid": True,
"lines.markersize": 8,
"lines.linewidth": 2,
"font.size": 14,
}
rcParams.update(newparams)
# Numerical differentiation
# Forward difference
def diff_forward(u, x, h=0.1):
return (u(x + h) - u(x)) / h
# Backward difference
def diff_backward(u, x, h=0.1):
return (u(x) - u(x - h)) / h
# Central difference for f'(x):
def diff_central(u, x, h=0.1):
return (u(x + h) - u(x - h)) / (2 * h)
# end of diff_central
# Central difference for f''(x):
def diff2_central(u, x, h=0.1):
return (u(x + h) - 2 * u(x) + u(x - h)) / h**2
# end of diff2_central
# Numerical example 1
x = pi / 4
du_exact = cos(x)
ddu_exact = -sin(x)
for h in [0.1, 0.01, 0.001]:
u = sin
du = diff_forward(u, x, h)
print(f"Approximations to the first derivative for h = {h}")
print(f"Forward difference: du = {du:12.8f}, Error = {du_exact-du:10.3e}")
du = diff_backward(u, x, h)
print(f"Backward difference: du = {du:12.8f}, Error = {du_exact-du:10.3e}")
du = diff_central(u, x, h)
print(f"Central difference: du = {du:12.8f}, Error = {du_exact-du:10.3e}")
print("Approximation to the second derivative")
ddu = diff2_central(u, x, h)
print(f"Central difference: ddu = {ddu:12.8f}, Error = {ddu_exact-ddu:10.3e}")
Approximations to the first derivative for h = 0.1
Forward difference: du = 0.67060297, Error = 3.650e-02
Backward difference: du = 0.74125475, Error = -3.415e-02
Central difference: du = 0.70592886, Error = 1.178e-03
Approximation to the second derivative
Central difference: ddu = -0.70651772, Error = -5.891e-04
Approximations to the first derivative for h = 0.01
Forward difference: du = 0.70355949, Error = 3.547e-03
Backward difference: du = 0.71063050, Error = -3.524e-03
Central difference: du = 0.70709500, Error = 1.179e-05
Approximation to the second derivative
Central difference: ddu = -0.70710089, Error = -5.893e-06
Approximations to the first derivative for h = 0.001
Forward difference: du = 0.70675311, Error = 3.537e-04
Backward difference: du = 0.70746022, Error = -3.534e-04
Central difference: du = 0.70710666, Error = 1.179e-07
Approximation to the second derivative
Central difference: ddu = -0.70710672, Error = -5.901e-08
53.3. Error analysis#
In this case the error analysis is quite simple: perform a Taylor expansion of the error around $x$.
The expansion for the error of the forward difference is:

\begin{align*}
e(x;h) &= u'(x) - \frac{u(x+h)-u(x)}{h} \\
&= u'(x) - \frac{\big(u(x)+u'(x)h + \frac{1}{2}u''(\xi)h^2\big) - u(x)}{h} \\
&= -\frac{1}{2}u''(\xi)h
\end{align*}

for some $\xi \in (x, x+h)$.
The expansion for the error of the central difference is slightly more complicated:

\begin{align*}
e(x; h) &= u'(x) - \frac{u(x+h)-u(x-h)}{2h} \\
&= u'(x) - \frac{\big(u(x)+u'(x)h + \frac{1}{2} u''(x)h^2 + \frac{1}{6} u'''(\xi_1)h^3 \big) - \big(u(x)-u'(x)h + \frac{1}{2} u''(x)h^2 - \frac{1}{6} u'''(\xi_2)h^3\big)}{2h} \\
&= -\frac{1}{12}\big(u'''(\xi_1) + u'''(\xi_2)\big)h^2 \\
&= -\frac{1}{6}u'''(\eta)h^2, \qquad \text{ for some } \eta \in (x-h, x+h).
\end{align*}

In the last step, the two remainder terms have been combined by the intermediate value theorem. The error for the approximation of the second order derivative can be found in a similar manner.
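As a quick cross-check of these leading error terms, the difference quotients can also be expanded symbolically. Here is a small sketch using sympy (which is not otherwise used in this note, so the import is our addition):

import sympy as sp

x_, h_ = sp.symbols("x h")
u_ = sp.Function("u")
# Expand the forward difference quotient in powers of h: the leading term
# is u'(x), and the next term exhibits the O(h) error.
fwd = (u_(x_ + h_) - u_(x_)) / h_
print(sp.series(fwd, h_, 0, 2))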
Problem 1: Use Taylor expansion to show that

$$u''(x) - \frac{u(x+h)-2u(x)+u(x-h)}{h^2} = -\frac{h^2}{12}u^{(4)}(\xi)$$

for some $\xi \in (x-h, x+h)$.

Reminder. The order of an approximation is $p$ if there is a constant $C$, independent of $h$, such that

$$|e(x;h)| \leq C h^p$$

for all sufficiently small $h > 0$.

In practice, it is sufficient to show that the power expansion of the error satisfies

$$e(x;h) = C_p h^p + C_{p+1}h^{p+1} + \dotsm.$$

The forward and backward approximations are of order 1, the central differences of order 2.
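Before moving on, here is a minimal numerical check of these orders, reusing the difference functions and the test point from Numerical example 1. The slope between successive step sizes on a log-log scale estimates the order $p$ in $e(h) \approx Ch^p$:

hs = np.array([0.1, 0.01, 0.001])
err_fwd = np.array([abs(du_exact - diff_forward(sin, pi / 4, h)) for h in hs])
err_cen = np.array([abs(du_exact - diff_central(sin, pi / 4, h)) for h in hs])
# Estimated order: log(e1/e2) / log(h1/h2) for successive step sizes
print("forward:", np.log(err_fwd[:-1] / err_fwd[1:]) / np.log(hs[:-1] / hs[1:]))
print("central:", np.log(err_cen[:-1] / err_cen[1:]) / np.log(hs[:-1] / hs[1:]))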
We are going to use these formulas a lot in the sequel, so let us just summarize the results, including the error terms:
Difference formulas for derivatives:

\begin{align*}
u'(x) &= \left\{
\begin{array}{ll}
\displaystyle \underbrace{\frac{u(x+h)-u(x)}{h}}_{\partial^+u(x)} - \frac{h}{2}u''(\xi), & \text{Forward difference} \\[4mm]
\displaystyle \underbrace{\frac{u(x)-u(x-h)}{h}}_{\partial^-u(x)} + \frac{h}{2}u''(\xi), & \text{Backward difference} \\[4mm]
\displaystyle \underbrace{\frac{u(x+h)-u(x-h)}{2h}}_{\partial^{\circ}u(x)} - \frac{h^2}{6}u'''(\xi), \qquad & \text{Central difference}
\end{array} \right. \\[4mm]
u''(x) &= \underbrace{\frac{u(x+h)-2u(x)+u(x-h)}{h^2}}_{\partial^+\partial^-u(x)} - \frac{h^2}{12}u^{(4)}(\xi). \qquad \text{Central difference}
\end{align*}
53.4. Finite difference method for the 1d Poisson problem#
We start our journey with the following problem.
Let $f$ be a given continuous function on $[a,b]$. Find $u$ such that

\begin{gather}
-u''(x) = f(x) \quad \forall x \in (a,b), \tag{1a} \\
u(a) = u_a, \quad u(b) = u_b, \tag{1b}
\end{gather}

where $u_a$ and $u_b$ are given boundary values.

For the rest of this section we will assume that $a = 0$ and $b = 1$ to simplify the notation.
Note that this is a second order ordinary differential equation, and it thus requires two additional values to determine a solution completely, here one at each boundary point.
This is in contrast to the second order initial value problem
\begin{align*}
u'' &= f(t, u(t), u'(t)) \quad \forall\; t \in (0,1), \\
u(0) &= u_0, \\
u'(0) &= u_1,
\end{align*}
where we have two conditions on the left end point.
The main difference is that in the former case, information about the solution is given at both end points, so we cannot simply step through the interval starting from a single point, as we do for initial value problems.

Instead of trying to compute the solution $u(x)$ at every point $x \in [a,b]$, we seek an approximation $U_i \approx u(x_i)$ at a finite number of grid points. To this end we divide the interval $[a,b]$ into $N$ subintervals with grid points $x_i = a + ih$ for $i = 0, \ldots, N$, where $h = (b-a)/N$ is the grid size.

Then the second order derivative at each interior grid point $x_i$ can be approximated by the central difference
\begin{align}
\partial^+ \partial^- u(x_i) := \dfrac{u(x_i+h) - 2 u(x_i) + u(x_i-h)}{h^2} \approx u''(x_i) = -f(x_i).
\end{align}
Now the idea is to solve the two-point boundary value problem numerically by associating an unknown variable $U_i \approx u(x_i)$ with each grid point $x_i$ and requiring that

\begin{align}
-\partial^+\partial^- U_i = \dfrac{-U_{i+1} + 2 U_i - U_{i-1}}{h^2} = f(x_i) \quad \text{for } i = 1,\ldots, N-1.
\end{align}
Note that at the grid points $x_1$ and $x_{N-1}$ these difference equations also involve the boundary values $U_0$ and $U_N$. Collecting all $N-1$ equations in matrix form gives

\begin{align}
\dfrac{1}{h^2}
\underbrace{
\begin{bmatrix}
-1 & 2 & -1 & & & \\
& -1 & 2 & -1 & & \\
& & \ddots & \ddots & \ddots & \\
& & & -1 & 2 & -1
\end{bmatrix}
}_{\widetilde{A}}
\underbrace{
\begin{bmatrix}
U_0 \\ U_1 \\ U_2 \\ \vdots \\ U_{N}
\end{bmatrix}
}_{\widetilde{U}}
=
\underbrace{
\begin{bmatrix}
f(x_1) \\ f(x_2) \\ \vdots \\ f(x_{N-1})
\end{bmatrix}
}_{\widetilde{F}}
\end{align}
Out of reflex we immediately ask whether this system
is always solvable and has a unique solution.
Unfortunately, the answer is no: the matrix $\widetilde{A}$ is not even square, as it has $N-1$ rows but $N+1$ columns, so the system is underdetermined.

So to close the system, let's incorporate the boundary conditions by adding the two equations $U_0 = u_a$ and $U_N = u_b$, which leads to the following complete system:

Find $U = (U_0, \ldots, U_N)$ such that

\begin{align}
\dfrac{1}{h^2}
\underbrace{
\begin{bmatrix}
h^2 & & & & \\
-1 & 2 & -1 & & \\
& \ddots & \ddots & \ddots & \\
& & -1 & 2 & -1 \\
& & & & h^2
\end{bmatrix}
}_{A}
\underbrace{
\begin{bmatrix}
U_0 \\ U_1 \\ \vdots \\ U_{N-1} \\ U_{N}
\end{bmatrix}
}_{U}
=
\underbrace{
\begin{bmatrix}
u_a \\ f(x_1) \\ \vdots \\ f(x_{N-1}) \\ u_b
\end{bmatrix}
}_{F}
\end{align}
53.5. Implementation#
def fdm_poisson1d_matrix(N):
r"""Computes the finite difference matrix for the Poisson problem in 1D
Parameters:
N (int): Number of grid points :math:`\{x_i\}_{i=0}^N` counting from 0.
Returns:
A (ndarray): Finite difference matrix
"""
# Gridsize
h = 1.0 / N
# Define zero matrix A of right size and insert
A = np.zeros((N + 1, N + 1))
# Define tridiagonal part of A
hh = h * h
for i in range(1, N):
A[i, i - 1] = -1 / hh
A[i, i] = 2 / hh
A[i, i + 1] = -1 / hh
# Set a_00 = a_NN to 1 to incorporate boundary condition
A[0, 0] = 1
A[N, N] = 1
return A
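The note switches to sparse solvers only when we reach 2D problems, but for larger $N$ most entries of $A$ are zero, so a sparse format already pays off here. As a sketch (the function name is ours), the same matrix can be assembled with scipy.sparse.diags, which was imported at the top:

def fdm_poisson1d_matrix_sparse(N):
    """Sparse version of fdm_poisson1d_matrix, assembled with scipy.sparse.diags."""
    h = 1.0 / N
    hh = h * h
    # Tridiagonal interior stencil (-1, 2, -1)/h^2 on an (N+1) x (N+1) matrix
    A = diags([-1 / hh, 2 / hh, -1 / hh], [-1, 0, 1], shape=(N + 1, N + 1), format="lil")
    # Overwrite first and last row with the boundary equations U_0 = u_a, U_N = u_b
    A[0, :] = 0.0
    A[0, 0] = 1.0
    A[N, :] = 0.0
    A[N, N] = 1.0
    return A.tocsr()

The resulting matrix can be passed to scipy.sparse.linalg.spsolve in place of la.solve below.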
def apply_bcs(F, bcs):
"Incorporate boundary condition into rhs vector"
F[0], F[-1] = bcs[0], bcs[1]
# Analytical reference solution
u = lambda x: np.sin(2 * np.pi * x)
# Corresponding right hand side f = -u''
f = lambda x: (2 * np.pi) ** 2 * np.sin(2 * np.pi * x)
N = 10
# Define grid points
x = np.linspace(0, 1, N + 1)
xfine = np.linspace(0, 1, 10 * N)
# Compute A and F
A = fdm_poisson1d_matrix(N)
F = f(x)
# print(A)
# print(F)
# Incorporate bcs
bcs = [u(0), u(1)]
apply_bcs(F, bcs)
# Solve AU = F
# (We will introduce a sparse solver when we look at 2D problems)
U = la.solve(A, F)
# Plot discrete solution on chosen discretization grid
plt.plot(x, U, "or", label="U_i")
plt.plot(xfine, u(xfine), "--b", label=r"$u_{\mathrm{ex}}$")
plt.legend()
# Show figure (for non inline plotting)
plt.show()
Problem 2:
Apply the method of manufactured solutions: Take a known function $u$, insert it into the left hand side of the Poisson equation (1a) to compute the matching right hand side $f = -u''$, and read off the boundary values from $u$ itself. Then $u$ is the exact solution of the resulting boundary value problem, and you can compare it with the numerical solution.
For example, you can pick any smooth function whose second derivative is easy to compute by hand, such as a polynomial or a combination of trigonometric functions.
After having picked your favorite (smooth) manufactured solution,
you can compute the error

$$E_N = \max_{0 \leq i \leq N} |u(x_i) - U_i|.$$

For a sequence of refined grids, e.g. $N = 4, 8, 16, 32, 64$:
compute and plot the numerical solution and the analytical solution,
record the resulting error $E_N$.
At the end of your experiment, compute the experimental order of convergence

$$\mathrm{EOC} = \frac{\log(E_N/E_{2N})}{\log 2}$$

for each refinement level.
What order of convergence do you expect and what do you get?
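A minimal sketch of such a convergence study, reusing the manufactured solution $u(x) = \sin(2\pi x)$ and the helper functions defined above (the particular sequence of grid sizes is our choice):

Ns = [4, 8, 16, 32, 64]
errors = []
for N in Ns:
    x = np.linspace(0, 1, N + 1)
    A = fdm_poisson1d_matrix(N)
    F = f(x)
    apply_bcs(F, [u(0), u(1)])
    U = la.solve(A, F)
    # Maximum error over the grid points
    errors.append(np.abs(u(x) - U).max())
for i in range(1, len(Ns)):
    eoc = np.log(errors[i - 1] / errors[i]) / np.log(2)
    print(f"N = {Ns[i]:3d}: E_N = {errors[i]:.3e}, EOC = {eoc:.2f}")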
53.6. General two-point boundary value problems (BVP)#
In the following, we will discuss the numerical solution of a two-point boundary value problem of the form

\begin{align*}
u''(x) + p(x)u'(x) + q(x)u(x) &= r(x), \qquad a < x < b, \\
u(a) = u_a, \quad u(b) &= u_b,
\end{align*}

where the coefficient functions $p$, $q$, and $r$ as well as the boundary values $u_a$ and $u_b$ are given.
As before, a finite difference method for this problem is constructed with the following steps:
Step 1:
Given the interval $[a,b]$, choose $N$, let $h = (b-a)/N$, and define the grid points $x_i = a + ih$ for $i = 0, \ldots, N$.
Step 2:
For each interior grid point $x_i$, replace the derivatives with the difference approximations summarized above. By the error formulas, the exact solution satisfies

\begin{align*}
\frac{u(x_{i+1}) - 2u(x_i) + u(x_{i-1})}{h^2} + p(x_i)\frac{u(x_{i+1}) - u(x_{i-1})}{2h} + q(x_i)u(x_i) = r(x_i) + \mathcal{O}(h^2)
\end{align*}

for each $i = 1, \ldots, N-1$.
Step 3: Ignore the error term, and replace the exact solution values $u(x_i)$ by their approximations $U_i$:

\begin{align*}
\frac{U_{i+1} - 2U_i + U_{i-1}}{h^2} + p(x_i)\frac{U_{i+1} - U_{i-1}}{2h} + q(x_i)U_i = r(x_i), \qquad i = 1, \ldots, N-1.
\end{align*}

This is the discretization of the BVP.
If we now include the two boundary values as the equations $U_0 = u_a$ and $U_N = u_b$,
the discretization is a linear system of equations

$$AU = b,$$

where $A$ is a tridiagonal $(N+1)\times(N+1)$ matrix and $U = (U_0, U_1, \ldots, U_N)^T$ is the vector of unknowns.
The right hand side $b$ contains the boundary values and the values of $r$ at the interior grid points.
Obviously, the first and last equations are trivial to solve, and are therefore often included in the right hand side.
Step 4: Solve the system $AU = b$ with respect to $U$.
Example 1: We consider the equation

\begin{align*}
u'' + 2u' - 3u &= 9x, \qquad 0 < x < 1, \\
u(0) = 1, \quad u(1) &= e^{-3} + 2e - 5 \approx 0.486351,
\end{align*}

with exact solution $u(x) = e^{-3x} + 2e^{x} - 3x - 2$.
Choose $N = 4$, so that $h = 0.25$ and the grid points are $x_i = 0.25\,i$ for $i = 0, \ldots, 4$.
Let $U_i \approx u(x_i)$. Inserting $p = 2$, $q = -3$, and $r(x) = 9x$ into the difference equations and multiplying by $h^2$ yields the system

\begin{align*}
U_0 &= 1, \\
(1-h)U_{i-1} + (-2-3h^2)U_i + (1+h)U_{i+1} &= 9x_ih^2, \qquad i=1, \ldots, N-1, \\
U_N &= 0.486351.
\end{align*}

To be even more concrete, for $h = 0.25$ this is the linear system

\begin{align*}
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0.75 & -2.1875 & 1.25 & 0 & 0 \\
0 & 0.75 & -2.1875 & 1.25 & 0 \\
0 & 0 & 0.75 & -2.1875 & 1.25 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
U_0 \\ U_1 \\ U_2 \\ U_3 \\ U_4
\end{bmatrix}
=
\begin{bmatrix}
1 \\ 0.140625 \\ 0.281250 \\ 0.421875 \\ 0.486351
\end{bmatrix}.
\end{align*}

The first and the last equation is trivial to solve, so in practice you have a system of 3 equations in 3 unknowns,
with the solution

$$U_1 = 0.293176, \qquad U_2 = 0.025557, \qquad U_3 = 0.093820.$$

For comparison, the analytic solution in these points is

$$u(x_1) = 0.290417, \qquad u(x_2) = 0.020573, \qquad u(x_3) = 0.089399.$$
53.7. Implementation#
For simplicity, the implementation below is only done for BVPs with constant
coefficients, that is, for problems where $p$ and $q$ are constants, while the right hand side $r = r(x)$ may still depend on $x$.
An extra function is included to construct tridiagonal matrices,
that is, matrices where all entries outside the diagonal, sub-diagonal,
and super-diagonal are equal to zero.
The implementation consists of:

1. Choose $N$, let $h = (b-a)/N$ and $x_i = a + ih$ for $i = 0, \ldots, N$.
2. Construct the matrix $A$. The matrix is tridiagonal, and except from the first and last row, has the elements $v = 1 - \frac{h}{2}p$ below the diagonal, $d = -2 + h^2 q$ as diagonal elements, and $w = 1 + \frac{h}{2}p$ above the diagonal.
3. Construct the vector $b$ with elements $b_i = h^2 r(x_i)$ for $i = 1, \ldots, N-1$.
4. Modify the first and the last row of the matrix $A$ and the first and last element of the vector $b$, depending on the boundary conditions.
5. Solve the system $AU = b$.
def tridiag(v, d, w, N):
"""
Help function
Returns a tridiagonal matrix A=tridiag(v, d, w) of dimension N x N.
"""
e = ones(N) # array [1,1,...,1] of length N
A = v * diag(e[1:], -1) + d * diag(e) + w * diag(e[1:], 1)
return A
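A quick sanity check of the helper:

# A 4 x 4 tridiagonal matrix with -2 on the diagonal and 1 on the off-diagonals
print(tridiag(1, -2, 1, 4))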
# Example 1, BVP
# Define the equation
# u'' + p*u' + q*u = r(x) on the interval [a,b]
# Boundary condition: u(a)=ua and u(b)=ub
p = 2
q = -3
def r(x):
return 9 * x
a, b = 0, 1
ua, ub = 1, exp(-3) + 2 * exp(1) - 5
# The exact solution (if known)
def u_eksakt(x):
return exp(-3 * x) + 2 * exp(x) - 3 * x - 2
# Set up the discrete system
N = 4 # Number of intervals
# Start the discretization
h = (b - a) / N # Stepsize
x = linspace(a, b, N + 1) # The gridpoints x_0=a, x_1=a+h, .... , x_N=b
# Make a draft of the A-matrix (first and last row have to be adjusted)
v = 1 - 0.5 * h * p # Subdiagonal
d = -2 + h**2 * q # Diagonal
w = 1 + 0.5 * h * p # Superdiagonal
A = tridiag(v, d, w, N + 1)
# Make a draft of the b-vector
b = h**2 * r(x)
# Modify the first equation (left boundary)
A[0, 0] = 1
A[0, 1] = 0
b[0] = ua
# Modify the last equation (right boundary)
A[N, N] = 1
A[N, N - 1] = 0
b[N] = ub
U = solve(A, b) # Solve the equation
To verify the calculations done above, print the matrix $A$, the right hand side $b$, and the numerical and exact solutions:
# Print the matrix A, the right hand side b the numerical and exact solution
print("A =\n", A)
print("\nb =\n ", b)
print("\nU =\n ", U)
print("\nu(x)=\n", u_eksakt(x))
A =
[[ 1. 0. 0. 0. 0. ]
[ 0.75 -2.1875 1.25 0. 0. ]
[ 0. 0.75 -2.1875 1.25 0. ]
[ 0. 0. 0.75 -2.1875 1.25 ]
[ 0. 0. 0. 0. 1. ]]
b =
[1. 0.140625 0.28125 0.421875 0.48635073]
U =
[1. 0.29317568 0.02555744 0.09382011 0.48635073]
u(x)=
[1. 0.29041739 0.0205727 0.08939926 0.48635073]
# Plot the solution of the BVP
xe = linspace(0, 1, 101)
plot(x, U, ".-")
plot(xe, u_eksakt(xe), ":")
xlabel("x")
ylabel("u")
legend(["Numerical", "Exact"])
title("Solution of a two-point BVP");
53.8. Boundary conditions#
To get a unique solution of a BVP (or a PDE), some information about the solution, usually given on the boundaries has to be known. The most common boundary conditions are:
Dirichlet condition: The solution is known at the boundary.
Neumann condition: The derivative is known at the boundary.
Robin (or mixed) condition: A combination of those.
In the example above, Dirichlet conditions were used. We will now see how to handle Neumann conditions. Robin conditions can be treated similarly.
Given the BVP with a Neumann condition at the left boundary:

\begin{align*}
u'' + p(x)u' + q(x)u &= r(x), \qquad a < x < b, \\
u'(a) = u'_a, \quad u(b) &= u_b.
\end{align*}

Here, the derivative $u'(a) = u'_a$ is prescribed at the left end point. One simple option is to approximate the condition by a forward difference,

$$\frac{U_1 - U_0}{h} = u'_a.$$

But this is only a first order approximation, and thus lower accuracy is to be
expected. We could also use a second order one-sided approximation using the values in the
grid points $x_0$, $x_1$, and $x_2$.
Instead, use the idea of a false boundary:
Assume that the solution can be stretched outside the boundary $x = a$, and introduce a fictitious grid point $x_{-1} = a - h$ with corresponding solution value $U_{-1}$. The difference equations for the BVP and for the boundary condition at $x_0 = a$ then read

\begin{align*}
\frac{U_{1}-2U_0+U_{-1}}{h^2} + p(x_0)\frac{U_{1}-U_{-1}}{2h} + q(x_0) U_0 & = r(x_0), \\
\frac{U_1 - U_{-1}}{2h} &= u'_a.
\end{align*}

We can now solve the second equation with respect to $U_{-1}$, giving $U_{-1} = U_1 - 2hu'_a$, and insert this into the first equation, which becomes

$$\frac{2U_1 - 2U_0}{h^2} + q(x_0)U_0 = r(x_0) + \left(\frac{2}{h} - p(x_0)\right)u'_a.$$
So the only thing that has changed is the first equation. And since central differences have been used both for the BVP and the boundary condition, the overall order of this approximation can be proved to be 2.
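In code, only the first row of the draft system from the implementation above needs to change. A minimal sketch for constant $p$ and $q$; the function name and the variable dua (for $u'_a$) are ours:

def apply_neumann_left(A, b, h, p, q, dua):
    """Impose u'(a) = dua via the false boundary (second order) approach."""
    # Multiplying the modified first equation by h^2 gives
    # (-2 + q*h^2) U_0 + 2 U_1 = h^2 r(x_0) + (2h - p*h^2) dua
    A[0, 0] = -2 + h**2 * q
    A[0, 1] = 2
    # b[0] already holds h^2 * r(x_0) from the draft right hand side
    b[0] = b[0] + (2 * h - p * h**2) * dua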
Example 2:
We consider the same equation as before, but now with a Neumann condition at the left boundary:

\begin{align*}
u'' + 2u' - 3u &= 9x, \qquad 0 < x < 1, \\
u'(0) = u'_a, \quad u(1) &= u_b,
\end{align*}

where we choose $u'_a = -4$ and $u_b = e^{-3} + 2e - 5$, so that the problem has the same analytic solution

$$u(x) = e^{-3x} + 2e^{x} - 3x - 2$$

as before (note that $u'(0) = -3 + 2 - 3 = -4$).
The modified difference equation at the left boundary point $x_0 = 0$ is obtained by eliminating $U_{-1}$ as described above.
We multiply this equation by $h^2$, as for the interior equations, and obtain the complete system

\begin{align*}
(-2-3h^2)U_0 + 2U_1 &= (2h-2h^2)u'_a, \\
(1-h)U_{i-1} + (-2-3h^2)U_i + (1+h)U_{i+1} &= 9 h^2 x_i, \qquad i=1, \ldots, N-1, \\
U_N &= u_b,
\end{align*}
which, for $N = 4$ and $h = 0.25$, is again a $5 \times 5$ linear system for $U_0, \ldots, U_4$.
The solution of this system can be compared with the analytic solution at the grid points, just as in Example 1.
Problem 3:
Now consider the two-point boundary value problem from Example 2; that is,

\begin{align*}
u'' + 2u' - 3u &= 9x, \qquad 0 < x < 1, \\
u'(0) = u'_a, \quad u(1) &= u_b,
\end{align*}

which, for the boundary values chosen in Example 2, has the analytic solution $u(x) = e^{-3x} + 2e^{x} - 3x - 2$.
Here we want you to test the two possible implementations of the Neumann boundary condition discussed above.
a) Modify the matrix $A$ and the right hand side vector $b$ so that the Neumann condition is imposed by the first order forward difference $\frac{U_1 - U_0}{h} = u'_a$. Solve the system and compare with the analytic solution.
b) Now modify the matrix $A$ and the vector $b$ using the second order false boundary approach instead, and check whether the accuracy improves.
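As a starting point for a), here is a sketch of the corresponding row modification, applied to the draft matrix from the implementation above (dua again denotes the given value $u'_a$):

# First order Neumann condition at the left boundary:
# (U_1 - U_0)/h = dua  =>  -U_0 + U_1 = h * dua
A[0, 0] = -1
A[0, 1] = 1
b[0] = h * dua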