51. Numerical solution of ordinary differential equations: High order Runge-Kutta methods#
André Massing
Date: March 17, 2021
If you want to have a nicer theme for your jupyter notebook, download the cascading style sheet (CSS) file tma4125.css and execute the next cell:
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from numpy import pi
from numpy.linalg import norm, solve
def css_styling():
    try:
        with open("tma4125.css") as f:
            styles = f.read()
        return HTML(styles)
    except FileNotFoundError:
        pass  # Do nothing
# Comment out next line and execute this cell to restore the default notebook style
css_styling()
As always, we start by importing some important Python modules.
%matplotlib inline
# Use a funny plotting style
plt.xkcd()
newparams = {
    "figure.figsize": (6.0, 6.0),
    "axes.grid": True,
    "lines.markersize": 8,
    "lines.linewidth": 2,
    "font.size": 14,
}
plt.rcParams.update(newparams)
52. Runge-Kutta Methods#
In the previous lectures we introduced Euler’s method and Heun’s method as particular instances of one-step methods, and we presented the general error theory for one-step methods.
In this lecture, we introduce a large family of one-step methods which go under the name Runge-Kutta methods (RKM). We will see that Euler’s method and Heun’s method are instances of RKMs.
52.1. Derivation of Runge-Kutta Methods#
For a given time interval $I_i = [t_i, t_{i+1}]$ we want to compute $y_{i+1}$, assuming that $y_i$ is given. Integrating the ODE $y'(t) = f(t, y(t))$ over $I_i$ gives the exact representation

$$
y(t_{i+1}) - y(t_i) = \int_{t_i}^{t_{i+1}} f(t, y(t)) \,\mathrm{d}t.
$$

The idea is now to approximate the integral by some quadrature
rule with nodes $\xi_j = t_i + c_j \tau$ and weights $b_j$, $j = 1, \ldots, s$:

$$
\int_{t_i}^{t_{i+1}} f(t, y(t)) \,\mathrm{d}t \approx \tau \sum_{j=1}^{s} b_j f\bigl(\xi_j, y(\xi_j)\bigr).
$$

Now we can define the next approximation by

$$
y_{i+1} := y_i + \tau \sum_{j=1}^{s} b_j f\bigl(\xi_j, y(\xi_j)\bigr).
$$
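To see the quadrature idea in isolation, here is a small illustration (our own addition, not part of the original notes). It uses the nodes and weights of a 3-stage example table that reappears later in this notebook and approximates the integral for a problem whose exact solution is known, namely $y' = y$ with $y(t) = e^t$:

import numpy as np

# Quadrature weights b_j and nodes c_j of a 3-stage example table (used again below)
b = np.array([1.0/4.0, 0.0, 3.0/4.0])
c = np.array([0.0, 1.0/3.0, 2.0/3.0])

t_i, tau = 0.0, 0.5
y_exact = np.exp            # exact solution of y' = y, y(0) = 1
f = lambda t, y: y          # right-hand side

# tau * sum_j b_j f(xi_j, y(xi_j)) with xi_j = t_i + c_j * tau
xi = t_i + c * tau
quad = tau * np.sum(b * f(xi, y_exact(xi)))

exact_increment = y_exact(t_i + tau) - y_exact(t_i)
print(quad, exact_increment)  # approx. 0.6484 vs. 0.6487

Of course, in an actual ODE solve the values $y(\xi_j)$ are not available, which is exactly the closure problem discussed next.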
52.2. Exercise 1: A first condition on the weights $b_j$#
Question: What value do you expect for the sum of the quadrature weights, $\sum_{j=1}^{s} b_j$?
Choice A:
Choice B:
Choice C:
In contrast to pure numerical integration, we don’t know the values
of $y(\xi_j)$ at the quadrature nodes, so they have to be approximated as well.
But then again we get a closure problem if we choose new quadrature points.
The idea is now to not introduce even more new quadrature points but to
use the same nodes $\xi_l = t_i + c_l \tau$ to approximate $y(\xi_j)$.
This leads us to the stage values

$$
Y_j := y_i + \tau \sum_{l=1}^{s} a_{jl} f\bigl(\xi_l, Y_l\bigr), \qquad j = 1, \ldots, s,
$$

where we set $Y_j \approx y(\xi_j)$ and where the coefficients $a_{jl}$ collect the corresponding quadrature weights.
52.3. Exercise 2: A first condition on the coefficients $a_{jl}$#
Question: What value do you expect for the row sums $\sum_{l=1}^{s} a_{jl}$?
Choice A:
Choice B:
Choice C:
Choice D:
52.4. Definition 1: Runge-Kutta methods#
Given $s \in \mathbb{N}$, coefficients $\{a_{jl}\}_{j,l=1}^{s}$, weights $\{b_j\}_{j=1}^{s}$ and nodes $\{c_j\}_{j=1}^{s}$, the corresponding $s$-stage Runge-Kutta method is defined by

$$
Y_j = y_i + \tau \sum_{l=1}^{s} a_{jl} f\bigl(t_i + c_l \tau, Y_l\bigr), \qquad j = 1, \ldots, s,
$$

$$
y_{i+1} = y_i + \tau \sum_{j=1}^{s} b_j f\bigl(t_i + c_j \tau, Y_j\bigr).
$$
Runge-Kutta schemes are often specified in the form of a Butcher table:
$$
\begin{array}{c|ccc}
c_1 & a_{11} & \cdots & a_{1s} \\
\vdots & \vdots & & \vdots \\
c_s & a_{s1} & \cdots & a_{ss} \\
\hline
 & b_1 & \cdots & b_s
\end{array}
$$

If $a_{jl} = 0$ for $l \geqslant j$, the coefficient matrix is strictly lower triangular and the stages $Y_1, \ldots, Y_s$ can be computed one after another; the method is then called explicit. Otherwise it is called implicit.
Note that in the final step, all the function evaluations we need
to perform have already been performed when computing the stages $Y_j$.
Therefore one often rewrites the scheme by introducing the stage derivatives

$$
k_j = f\Bigl(t_i + c_j \tau,\; y_i + \tau \sum_{l=1}^{s} a_{jl} k_l\Bigr), \qquad j = 1, \ldots, s,
$$

so the resulting scheme will be

$$
y_{i+1} = y_i + \tau \sum_{j=1}^{s} b_j k_j.
$$
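To make the stage-derivative formulation concrete, here is a minimal sketch of a single explicit Runge-Kutta step written directly from the two formulas above; the helper name explicit_rk_step and the test step are our own, and the full solver class used in the rest of the notebook follows below.

import numpy as np

def explicit_rk_step(f, t, y, tau, a, b, c):
    # Stage derivatives: k_j = f(t + c_j*tau, y + tau * sum_{l<j} a_{jl} k_l)
    s = len(b)
    ks = []
    for j in range(s):
        dY = sum(a[j, l] * ks[l] for l in range(j))  # only already computed stages enter
        ks.append(f(t + c[j] * tau, y + tau * dY))
    # Update: y_{i+1} = y_i + tau * sum_j b_j k_j
    return y + tau * sum(b[j] * ks[j] for j in range(s))

# One step of a 3-stage example method (the same table reappears below) for y' = y, y(0) = 1
a = np.array([[0, 0, 0], [1.0/3.0, 0, 0], [0, 2.0/3.0, 0]])
b = np.array([1.0/4.0, 0, 3.0/4.0])
c = np.array([0, 1.0/3.0, 2.0/3.0])
print(explicit_rk_step(lambda t, y: y, 0.0, 1.0, 0.1, a, b, c))  # close to exp(0.1) = 1.10517...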
52.5. Exercise 3: Butcher table for the explicit Euler#
Write down the Butcher table for the explicit Euler.
Solution.
Euler’s method reads $y_{i+1} = y_i + \tau f(t_i, y_i)$. Define $k_1 = f(t_i, y_i)$; then $y_{i+1} = y_i + \tau k_1$, so it is a Runge-Kutta method with $s = 1$, $c_1 = 0$, $a_{11} = 0$ and $b_1 = 1$, and the Butcher table is

$$
\begin{array}{c|c}
0 & 0 \\
\hline
 & 1
\end{array}
$$
52.6. Exercise 4: The improved explicit Euler method#
We formally derive the explicit midpoint rule or improved explicit Euler method. Applying the midpoint rule to our integral representation yields

$$
y(t_{i+1}) - y(t_i) = \int_{t_i}^{t_{i+1}} f(t, y(t)) \,\mathrm{d}t \approx \tau f\Bigl(t_i + \tfrac{\tau}{2},\, y\bigl(t_i + \tfrac{\tau}{2}\bigr)\Bigr).
$$

Since we cannot determine the value $y\bigl(t_i + \tfrac{\tau}{2}\bigr)$, we approximate it by half an explicit Euler step,

$$
y\bigl(t_i + \tfrac{\tau}{2}\bigr) \approx y_i + \tfrac{\tau}{2} f(t_i, y_i),
$$

leading to the scheme

$$
y_{i+1} := y_i + \tau f\Bigl(t_i + \tfrac{\tau}{2},\, y_i + \tfrac{\tau}{2} f(t_i, y_i)\Bigr).
$$
a)
Is this a one-step method? Can you define the increment function $\Phi$?
Solution. Yes it is, and its increment function is given by

$$
\Phi(t_i, y_i, \tau) = f\Bigl(t_i + \tfrac{\tau}{2},\, y_i + \tfrac{\tau}{2} f(t_i, y_i)\Bigr).
$$
b) Can you rewrite this as a Runge-Kutta method? If so, determine its Butcher table.
Solution.
Define

$$
k_1 := f(t_i, y_i), \qquad k_2 := f\Bigl(t_i + \tfrac{\tau}{2},\, y_i + \tfrac{\tau}{2} k_1\Bigr).
$$

Then

\begin{equation}
y_{i+1} = y_i + \tau k_2
\label{ode:eq:improved_euler_k2} \tag{19}
\end{equation}

Thus the Butcher table is given by

$$
\begin{array}{c|cc}
0 & 0 & 0 \\
\tfrac{1}{2} & \tfrac{1}{2} & 0 \\
\hline
 & 0 & 1
\end{array}
$$
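As a quick numerical sanity check (our own addition, not part of the exercise), the following snippet compares one step of the improved Euler formula with the two-stage computation via $k_1$, $k_2$; the test problem and step size are chosen only for illustration.

import numpy as np

f = lambda t, y: -2.0 * y          # illustrative right-hand side
t_i, y_i, tau = 0.0, 1.0, 0.1

# Improved Euler written directly
y_direct = y_i + tau * f(t_i + tau / 2, y_i + tau / 2 * f(t_i, y_i))

# The same step via the stage derivatives k_1, k_2
k1 = f(t_i, y_i)
k2 = f(t_i + tau / 2, y_i + tau / 2 * k1)
y_stages = y_i + tau * k2

print(np.isclose(y_direct, y_stages))  # True: both formulations give the same update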
52.7. Implementation of explicit Runge-Kutta methods#
Below you will find the implementation of a general
solver class
Explicit_Runge_Kutta,
which takes in a Butcher table at its initialization and provides a __call__
function
def __call__(self, y0, t0, T, f, Nmax):
and can be used like this
# Define Butcher table
a = np.array([[0, 0, 0],
[1.0/3.0, 0, 0],
[0, 2.0/3.0, 0]])
b = np.array([1.0/4.0, 0, 3.0/4.0])
c = np.array([0, 1.0/3.0, 2.0/3.0])
# Create solver
rk3 = Explicit_Runge_Kutta(a, b, c)
# Solve problem (applies __call__ function)
ts, ys = rk3(y0, t0, T, f, Nmax)
The complete implementation is given here:
class Explicit_Runge_Kutta:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def __call__(self, y0, t0, T, f, Nmax):
        # Extract Butcher table
        a, b, c = self.a, self.b, self.c

        # Number of stages
        s = len(b)
        ks = [np.zeros_like(y0, dtype=np.double) for _ in range(s)]

        # Start time-stepping
        ys = [y0]
        ts = [t0]
        dt = (T - t0) / Nmax

        while ts[-1] < T:
            t, y = ts[-1], ys[-1]

            # Compute stage derivatives k_j
            for j in range(s):
                t_j = t + c[j] * dt
                dY_j = np.zeros_like(y, dtype=np.double)
                for l in range(j):
                    dY_j += dt * a[j, l] * ks[l]
                ks[j] = f(t_j, y + dY_j)

            # Compute next time step
            dy = np.zeros_like(y, dtype=np.double)
            for j in range(s):
                dy += dt * b[j] * ks[j]

            ys.append(y + dy)
            ts.append(t + dt)

        return (np.array(ts), np.array(ys))
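As a quick sanity check of the implementation (our own addition), we can feed it the one-stage Butcher table from Exercise 3 and verify that we recover the explicit Euler method for $y' = y$, $y(0) = 1$ on $[0, 1]$:

# Butcher table for the explicit Euler method (a single stage)
a = np.array([[0.0]])
b = np.array([1.0])
c = np.array([0.0])

euler = Explicit_Runge_Kutta(a, b, c)

# Solve y' = y, y(0) = 1 on [0, 1] with 64 steps
ts_euler, ys_euler = euler(1.0, 0.0, 1.0, lambda t, y: y, 64)
print(ys_euler[-1], np.exp(1.0))  # about 2.697 vs. 2.718: first-order accuracy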
52.8. Example 1: Implementation and testing of the improved Euler method#
We implement the improved explicit Euler method from above, compare the numerical solution against the analytical one, and determine the experimental order of convergence.
def compute_eoc(y0, t0, T, f, Nmax_list, solver, y_ex):
    errs = []
    for Nmax in Nmax_list:
        ts, ys = solver(y0, t0, T, f, Nmax)
        ys_ex = y_ex(ts)
        errs.append(np.abs(ys - ys_ex).max())
        print(f"For Nmax = {Nmax:3}, max ||y(t_i) - y_i||= {errs[-1]:.3e}")

    errs = np.array(errs)
    Nmax_list = np.array(Nmax_list)
    dts = (T - t0) / Nmax_list

    eocs = np.log(errs[1:] / errs[:-1]) / np.log(dts[1:] / dts[:-1])

    return errs, eocs
# Define Butcher table for improved Euler
a = np.array([[0, 0], [0.5, 0]])
b = np.array([0, 1])
c = np.array([0, 0.5])
# Define rk2
rk2 = Explicit_Runge_Kutta(a, b, c)
t0, T = 0, 1
y0 = 1
lam = 1
# rhs of IVP
f = lambda t, y: lam * y
# Exact solution to compare against
y_ex = lambda t: y0 * np.exp(lam * (t - t0))
# EOC test
Nmax_list = [4, 8, 16, 32, 64, 128]
errs, eocs = compute_eoc(y0, t0, T, f, Nmax_list, rk2, y_ex)
print(errs)
print(eocs)
For Nmax = 4, max ||y(t_i) - y_i||= 2.343e-02
For Nmax = 8, max ||y(t_i) - y_i||= 6.441e-03
For Nmax = 16, max ||y(t_i) - y_i||= 1.688e-03
For Nmax = 32, max ||y(t_i) - y_i||= 4.322e-04
For Nmax = 64, max ||y(t_i) - y_i||= 1.093e-04
For Nmax = 128, max ||y(t_i) - y_i||= 2.749e-05
[2.34261385e-02 6.44058991e-03 1.68830598e-03 4.32154479e-04
1.09316895e-04 2.74901378e-05]
[1.86285442 1.93161644 1.96595738 1.98303072 1.99153035]
52.9. Exercise 5#
If you have time, determine the experimental order of convergence (EOC) for the following methods:
Heun’s method with 3 stages
$$
\begin{array}{c|ccc}
0 & 0 & 0 & 0 \\
\tfrac{1}{3} & \tfrac{1}{3} & 0 & 0 \\
\tfrac{2}{3} & 0 & \tfrac{2}{3} & 0 \\
\hline
 & \tfrac{1}{4} & 0 & \tfrac{3}{4}
\end{array}
$$
# Insert code here
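One possible way to fill in this cell (a sketch, not the official solution), reusing the Explicit_Runge_Kutta class, the compute_eoc function, and the test problem y0, t0, T, f, Nmax_list, y_ex defined above:

# Butcher table for Heun's method with 3 stages
a = np.array([[0, 0, 0],
              [1.0/3.0, 0, 0],
              [0, 2.0/3.0, 0]])
b = np.array([1.0/4.0, 0, 3.0/4.0])
c = np.array([0, 1.0/3.0, 2.0/3.0])

heun3 = Explicit_Runge_Kutta(a, b, c)

errs, eocs = compute_eoc(y0, t0, T, f, Nmax_list, heun3, y_ex)
print(eocs)  # expected to approach 3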
The classical Runge-Kutta method with 4 stages
$$
\begin{array}{c|cccc}
0 & 0 & 0 & 0 & 0 \\
\tfrac{1}{2} & \tfrac{1}{2} & 0 & 0 & 0 \\
\tfrac{1}{2} & 0 & \tfrac{1}{2} & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
\hline
 & \tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{6}
\end{array}
$$
# Insert code here
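Analogously, a possible sketch for this cell, again reusing the objects defined above:

# Butcher table for the classical 4-stage Runge-Kutta method
a = np.array([[0, 0, 0, 0],
              [0.5, 0, 0, 0],
              [0, 0.5, 0, 0],
              [0, 0, 1.0, 0]])
b = np.array([1.0/6.0, 1.0/3.0, 1.0/3.0, 1.0/6.0])
c = np.array([0, 0.5, 0.5, 1.0])

rk4 = Explicit_Runge_Kutta(a, b, c)

errs, eocs = compute_eoc(y0, t0, T, f, Nmax_list, rk4, y_ex)
print(eocs)  # expected to approach 4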
52.10. Order conditions for Runge-Kutta Methods#
The convergence theorem for one-step methods
gave us sufficient conditions to guarantee
that a method is convergent of order $p$:

“consistency order $p$” + “increment function satisfies a Lipschitz condition” $\Rightarrow$ “convergence order $p$”.

To put it in slightly different words:

“local truncation error behaves like $\mathcal{O}(\tau^{p+1})$” + “increment function satisfies a Lipschitz condition” $\Rightarrow$ “global truncation error behaves like $\mathcal{O}(\tau^{p})$”.

It turns out that for Runge-Kutta methods, the consistency order $p$ can be characterized by purely algebraic conditions on the coefficients $a_{jl}$, the weights $b_j$ and the nodes $c_j$.
52.11. Theorem 1: Order conditions for Runge-Kutta methods#
A Runge-Kutta method has consistency order $p$ if and only if all the conditions up to and including $p$ in the table below are satisfied, where the sums are taken over all indices from $1$ to $s$.

$$
\begin{array}{c|l}
p & \text{conditions} \\
\hline
1 & \sum_j b_j = 1 \\[4pt]
2 & \sum_j b_j c_j = \tfrac{1}{2} \\[4pt]
3 & \sum_j b_j c_j^2 = \tfrac{1}{3}, \quad \sum_{j,l} b_j a_{jl} c_l = \tfrac{1}{6} \\[4pt]
4 & \sum_j b_j c_j^3 = \tfrac{1}{4}, \quad \sum_{j,l} b_j c_j a_{jl} c_l = \tfrac{1}{8}, \quad \sum_{j,l} b_j a_{jl} c_l^2 = \tfrac{1}{12}, \quad \sum_{j,l,m} b_j a_{jl} a_{lm} c_m = \tfrac{1}{24}
\end{array}
$$
Proof. Without proof.
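The conditions above are easy to check numerically. Below is a small sketch (our own helper check_order_conditions, not from the lecture) that tests the conditions up to order 3 for a given Butcher table; applied to the improved Euler table it confirms consistency order 2 but not 3.

import numpy as np

def check_order_conditions(a, b, c, tol=1e-12):
    """Return the largest order p <= 3 whose conditions all hold (0 if none)."""
    conds = {
        1: [abs(b.sum() - 1.0)],
        2: [abs(b @ c - 0.5)],
        3: [abs(b @ c**2 - 1.0 / 3.0), abs(b @ (a @ c) - 1.0 / 6.0)],
    }
    p = 0
    for order in (1, 2, 3):
        if all(err < tol for err in conds[order]):
            p = order
        else:
            break
    return p

# Improved Euler: satisfies the conditions for p = 1, 2 but not for p = 3
a2 = np.array([[0.0, 0.0], [0.5, 0.0]])
b2 = np.array([0.0, 1.0])
c2 = np.array([0.0, 0.5])
print(check_order_conditions(a2, b2, c2))  # 2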