%%javascript
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
import numpy as np
import matplotlib.pyplot as plt
Continuous | Discrete |
---|---|
$x$ | $x_i = i\cdot\Delta x$ |
$f(x)$ | $f_i = f(x_i) = f(i\cdot\Delta x)$ |
$\frac{d}{dx}$ | $\mathbf{D}^+$ |
$\int$ | $\sum$ |
In classical calculus, continuous variables are used:
\begin{equation} x \in \mathbb{R} \end{equation}
A discretization amounts to choosing a finite resolution on $\mathbb{R}$, so that
\begin{equation} x \rightarrow x_i := i\cdot\Delta x, \quad i \in \mathbb{Z} \end{equation}
This is often the first step when preparing a problem for numerical solution.
With discrete variables, it follows that functions, too, are now only defined at discrete points. Where in classical calculus we have
\begin{equation} f(x) \end{equation}
we now instead have
\begin{equation} f(x) \rightarrow f_i = f(x_i) = f(i\cdot\Delta x) \end{equation}
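For instance, sampling a function on such a grid (a minimal sketch; the resolution $\Delta x$ and the function $f = \sin$ are arbitrary choices):

```python
import numpy as np

dx = 0.1                 # chosen resolution
i = np.arange(5)         # integer indices i
x_i = i * dx             # grid points x_i = i*dx
f_i = np.sin(x_i)        # sampled values f_i = f(x_i)
```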
In classical calculus, the derivative is defined as
$$ \frac{d}{dx} f(x) := \lim_{\Delta x \rightarrow 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$$
In discrete calculus, the "forward difference" is one of several possible approximations to the derivative
$$ \mathbf{D}^+ f_i := \frac{f_{i+1} - f_i}{\Delta x}$$
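We can verify numerically that the forward difference converges to the derivative as $\Delta x \rightarrow 0$ (a sketch; $f = \sin$ and the point $x = 1$ are arbitrary choices):

```python
import numpy as np

f = np.sin               # test function; exact derivative is cos
x = 1.0
errors = []
for dx in (0.1, 0.01, 0.001):
    Dplus = (f(x + dx) - f(x)) / dx        # forward difference
    errors.append(abs(Dplus - np.cos(x)))
    print(f"dx = {dx}: error = {errors[-1]:.2e}")
```

The error shrinks by roughly a factor of 10 for each factor-10 reduction of $\Delta x$: the forward difference is first-order accurate.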
Expression | Name | Operator |
---|---|---|
$\mathbf{D}^+ f(x) := \frac{f(x + \Delta x) - f(x)}{\Delta x}$ | "Forward difference" | $\mathbf{D}^+$ |
$\mathbf{D} f(x) := \frac{f(x+ \Delta x) - f(x-\Delta x)}{2 \Delta x}$ | "Central difference" | $\mathbf{D}$ |
$\mathbf{D}^- f(x) := \frac{f(x) - f(x-\Delta x)}{\Delta x}$ | "Backward difference" | $\mathbf{D}^-$ |
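The three differences are not equally accurate: the central difference is second-order accurate, the one-sided differences first-order. A quick comparison (a sketch; the function and point are arbitrary choices):

```python
import numpy as np

f, x = np.sin, 1.0
exact = np.cos(x)                              # exact derivative
for dx in (0.1, 0.01):
    fwd = (f(x + dx) - f(x)) / dx              # D^+
    ctr = (f(x + dx) - f(x - dx)) / (2 * dx)   # D
    bwd = (f(x) - f(x - dx)) / dx              # D^-
    print(f"dx = {dx}: |err| fwd = {abs(fwd - exact):.1e}, "
          f"ctr = {abs(ctr - exact):.1e}, bwd = {abs(bwd - exact):.1e}")
```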
A first-order ordinary differential equation can be expressed as:
\begin{equation} f \left( u(x), x \right) = \frac{d}{dx} u(x) . \end{equation}
Some different notations for the first derivative:
\begin{equation} \frac{d}{dx} u(x) = u'(x) = \dot{u}(x) \end{equation}
In Euler's method, the derivative in a differential equation is approximated by the forward difference
\begin{equation} f \left( u(x), x \right) = \frac{d}{dx} u(x) \approx \mathbf{D}^+ u(x) = \frac{u(x + \Delta x) - u(x)}{\Delta x} \end{equation}
The method is called explicit, because we can rearrange the resulting equations so that the value $u(x + \Delta x)$ is given explicitly by preceding values:
\begin{equation} u(x + \Delta x) = f \left( u(x), x \right) \Delta x + u(x) \end{equation}
Note, however, that we need an initial value $u(0)$ to get started.
Given
\begin{equation} \frac{d}{dx} u(x) = -C u(x) , \quad \quad u(x) = u(0) e^{-Cx} \end{equation}
$\rightarrow$ Implementation
Nsteps = 20 # number of steps
x0, x1 = 0, 2
C = 1.0 # coefficient
u = np.zeros(Nsteps, dtype = float) #Array to contain solution
u[0] = 1.0 #initial condition
x = np.linspace(x0,x1, Nsteps) #Array containing discretized x_i
dx = x[1]-x[0] # stepsize
for i in np.arange(Nsteps-1):
u[i+1] = (1 - C*dx)*u[i] # FE step
plt.figure(1, figsize = (8,4)) # Figure to plot solutions
plt.plot(x,u[0]*np.exp(-C*x), "-", linewidth = 6, alpha = .5, label = "Exact")
plt.plot(x,u, ".-", linewidth = 3, alpha = .5, label = "FE", color = (0,0,0))
plt.xlabel("x")
plt.ylabel("u(x)")
plt.legend()
plt.show()
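The accuracy of the scheme above can be checked by refining the grid (a sketch reusing the same test problem): doubling the number of steps should roughly halve the maximum error, since forward Euler is first-order accurate.

```python
import numpy as np

C, x1 = 1.0, 2.0
errors = []
for Nsteps in (20, 40, 80):
    x = np.linspace(0, x1, Nsteps)
    dx = x[1] - x[0]
    u = np.zeros(Nsteps)
    u[0] = 1.0                          # initial condition
    for i in range(Nsteps - 1):
        u[i+1] = (1 - C*dx)*u[i]        # FE step
    errors.append(np.max(np.abs(u - np.exp(-C*x))))
    print(f"Nsteps = {Nsteps}: max error = {errors[-1]:.2e}")
```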
Initial value problem:
$$\frac{d}{dt} y(t) = f(y(t), t)$$ $$y(0) = a$$
The following approach will always work in this course, as well as in many other cases:
$\rightarrow$ Live code example: reaction kinetics with coupled reactions.
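The live example itself is not reproduced here, but a minimal sketch of what it could look like: a hypothetical chain of coupled reactions $A \rightarrow B \rightarrow C$ (the reactions and rate constants are assumptions for illustration), integrated with forward Euler.

```python
import numpy as np

k1, k2 = 1.0, 0.5                   # assumed rate constants
Nsteps, t1 = 1000, 10.0
t = np.linspace(0, t1, Nsteps)
dt = t[1] - t[0]

def f(y):
    A, B, C = y
    return np.array([-k1*A,         # dA/dt: A consumed
                     k1*A - k2*B,   # dB/dt: produced from A, consumed to C
                     k2*B])         # dC/dt: produced from B

y = np.zeros((Nsteps, 3))
y[0] = (1.0, 0.0, 0.0)              # initial condition: pure A
for n in range(Nsteps - 1):
    y[n+1] = y[n] + f(y[n]) * dt    # FE step
```

Since the three rates sum to zero, the total $A + B + C$ is conserved exactly by the scheme, which makes a convenient sanity check.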
In "backward Euler", the derivative is approximated by the backward difference
\begin{equation} f \left( u(x), x \right) = \frac{d}{dx} u(x) \approx \mathbf{D}^- u(x) = \frac{u(x) - u(x- \Delta x)}{\Delta x} \end{equation}
The method is called implicit, because it generally results in a (typically nonlinear) relation for the unknown
\begin{equation} u(x) - f \left( u(x), x \right) \Delta x = u(x- \Delta x), \end{equation}
that is, an equation that must be solved.
Given
\begin{equation} \frac{d}{dx} u(x) = \sin( u(x) ) , \quad \quad u(x) = 2 \tan^{-1}(e^{x}) \end{equation}
$\rightarrow$ What now?
We rewrite the expression for BE:
\begin{equation} u_{i} - \sin(u_i)\Delta x - u_{i-1} = 0 \end{equation}
For a given $u_{i-1}$, the problem now consists in finding the roots of the expression above. This is perhaps easier to see if we use slightly different names
\begin{equation} F(x) = x - B \sin(x) - A = 0 \quad \quad \frac{d}{dx} F(x) = 1 - B \cos(x) \end{equation}
$\rightarrow$ Picard iterations, Newton's method, line search, etc.
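As a concrete illustration, Newton's method applied to this scalar root problem (a sketch; the values of $A$ and $B$ are arbitrary, standing in for $u_{i-1}$ and $\Delta x$):

```python
import numpy as np

A, B = 1.0, 0.1                       # stand-ins for u_{i-1} and dx
F    = lambda x: x - B*np.sin(x) - A
dFdx = lambda x: 1 - B*np.cos(x)      # derivative of F

x = A                                 # initial guess
for _ in range(50):
    step = F(x) / dFdx(x)             # Newton step
    x -= step
    if abs(step) < 1e-12:
        break
print(f"root = {x:.6f}, residual = {F(x):.1e}")
```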
def initial_guess1():
plt.figure(1, figsize = (10,6))
N = 5
i = 1
x = np.linspace(.5,2,N)
u = np.sin(x)
plt.plot(x,u, "o-", color = (.4,0,0), linewidth = 5, alpha = .4)
plt.plot(x[i+1], u[i], "o", color = (0,0,0))
plt.plot([x[i+1], x[i+1]], [u[i], u[i+1]] , "--", color = (0,0,0), alpha = .5)
plt.plot([x[i+1], x[i+1]-.05], [u[i+1], u[i+1]-.05], color = (0,0,0), alpha = .5 )
plt.plot([x[i+1], x[i+1]+.05], [u[i+1], u[i+1]-.05], color = (0,0,0), alpha = .5 )
plt.text(x[i+1]+.03, 0.85, "Newton Iterations:")
plt.text(x[i+1]+.03, 0.82," $u_{i,n} = u_{i,n-1} - f(u_{i,n-1}) f'(u_{i,n-1})^{-1}$")
plt.text(x[i+1]+.03, 0.75, "Initial guess: $u_{i, 0} = u_{i-1}$")
plt.show()
def initial_guess2():
plt.figure(1, figsize = (10,6))
N = 5
i = 1
x = np.linspace(.5,2,N)
u = np.sin(x)
plt.plot(x,u, "o-", color = (.4,0,0), linewidth = 5, alpha = .4)
plt.plot(x[i+1], u[i+1]+.1, "o", color = (0,0,0))
plt.plot([x[i], x[i+1]], [u[i], u[i+1]+.1] , "--", color = (0,0,0), alpha = .5)
plt.plot([x[i+1], x[i+1]], [u[i+1]+.1, u[i+1]] , "--", color = (0,0,0), alpha = .5)
plt.plot([x[i+1], x[i+1]-.03], [u[i+1], u[i+1]+.03], color = (0,0,0), alpha = .5 )
plt.plot([x[i+1], x[i+1]+.03], [u[i+1], u[i+1]+.03], color = (0,0,0), alpha = .5 )
plt.text(x[i+1]+.03, 1.0, "Newton Iterations:")
plt.text(x[i+1]+.03, 0.97," $u_{i,n} = u_{i,n-1} - f(u_{i,n-1}) f'(u_{i,n-1})^{-1}$")
    plt.text(x[i+1]+.03, 1.05, r"Initial guess: $u_{i, 0} = u_{i-1} + f(u_{i-1}) \Delta x$ (FE)")
plt.show()
Nsteps = 40
x0,x1 = 0, 5.0
u0 = np.pi/2.0
u = np.zeros(Nsteps, dtype = float)
u[0] = u0
x = np.linspace(x0,x1,Nsteps)
dx = x[1]-x[0]
F = lambda u_i, u_i1, dx : u_i - np.sin(u_i)*dx - u_i1
dFdu = lambda u_i, dx : 1 - np.cos(u_i)*dx
J = 0 #count iterations total
for i in np.arange(1,Nsteps):
# Determine u[i] so that
# F(u[i]) = u[i] - sin( u[i] )*dx - u[i-1] = 0
    # Use: dF/du[i] = 1 - cos(u[i])*dx
    # Newton iterations; initial guess is u[i] = 0 (from np.zeros)
for j in np.arange(100):
        du = F(u[i], u[i-1], dx) / dFdu(u[i], dx)
u[i] = u[i] - du
if np.abs(du)<=1e-8:
J += j
break
plt.figure(1, figsize = (8,4)) # Figure to plot solutions
plt.title("BE solver, %i Newton iterations total." % J)
plt.plot(x, 2*np.arctan(np.exp(x)), linewidth = 7, alpha = .2, label = "Exact")
plt.plot(x,u, ".-", linewidth = 2, alpha = .5, label = "BE", color = (0,0,0))
plt.xlabel("x")
plt.ylabel("u(x)")
plt.legend()
plt.show()
Nsteps = 40
x0,x1 = 0, 5.0
u0 = np.pi/2.0
u = np.zeros(Nsteps, dtype = float)
u[0] = u0
x = np.linspace(x0,x1,Nsteps)
dx = x[1]-x[0]
F = lambda u_i, u_i1, dx : u_i - np.sin(u_i)*dx - u_i1
dFdu = lambda u_i, dx : 1 - np.cos(u_i)*dx
J = 0 #count iterations total
for i in np.arange(1,Nsteps):
# Determine u[i] so that
# F(u[i]) = u[i] - sin( u[i] )*dx - u[i-1] = 0
    # Use: dF/du[i] = 1 - cos(u[i])*dx
# Newton step, initial guess:
u[i] = u[i-1]
for j in np.arange(100):
        du = F(u[i], u[i-1], dx) / dFdu(u[i], dx)
u[i] = u[i] - du
if np.abs(du)<=1e-8:
J += j
break
initial_guess1()
plt.figure(1, figsize = (8,4)) # Figure to plot solutions
plt.title("BE solver, %i Newton iterations total." % J)
plt.plot(x, 2*np.arctan(np.exp(x)), linewidth = 7, alpha = .2, label = "Exact")
plt.plot(x,u, ".-", linewidth = 2, alpha = .5, label = "BE", color = (0,0,0))
plt.xlabel("x")
plt.ylabel("u(x)")
plt.legend()
plt.show()
Nsteps = 40
x0,x1 = 0, 5.0
u0 = np.pi/2.0
u = np.zeros(Nsteps, dtype = float)
u[0] = u0
x = np.linspace(x0,x1,Nsteps)
dx = x[1]-x[0]
F = lambda u_i, u_i1, dx : u_i - np.sin(u_i)*dx - u_i1
dFdu = lambda u_i, dx : 1 - np.cos(u_i)*dx
J = 0 #count iterations total
for i in np.arange(1,Nsteps):
# Determine u[i] so that
# F(u[i]) = u[i] - sin( u[i] )*dx - u[i-1] = 0
    # Use: dF/du[i] = 1 - cos(u[i])*dx
# Newton step, FE initial guess:
u[i] = np.sin(u[i-1])*dx + u[i-1]
for j in np.arange(100):
        du = F(u[i], u[i-1], dx) / dFdu(u[i], dx)
u[i] = u[i] - du
if np.abs(du)<=1e-8:
J += j
break
plt.figure(1, figsize = (8,4)) # Figure to plot solutions
plt.title("BE solver, %i Newton iterations total." % J)
plt.plot(x, 2*np.arctan(np.exp(x)), linewidth = 7, alpha = .2, label = "Exact")
plt.plot(x,u, ".-", linewidth = 2, alpha = .5, label = "BE", color = (0,0,0))
plt.xlabel("x")
plt.ylabel("u(x)")
plt.legend()
plt.show()
initial_guess2()
Second-order ODE:
\begin{equation} \frac{d^2}{dx^2} u(x) = f \left( u(x), x \right) \end{equation}
Approximation (FE):
\begin{equation} \frac{d^2}{dx^2} u(x) \approx \mathbf{D}^+ \mathbf{D}^+ u(x) = \mathbf{D}^+ \left( \frac{u(x + \Delta x) - u(x)}{\Delta x} \right) = \frac{1}{\Delta x^2} \left(u(x + 2\Delta x) - 2 u(x+ \Delta x) + u(x) \right) \end{equation}
because
\begin{equation} \mathbf{D}^+ \frac{u(x + \Delta x)}{\Delta x} = \frac{u(x + 2\Delta x) - u(x + \Delta x)}{\Delta x^2} \end{equation}
Partial differential equation (PDE):
\begin{equation} \frac{\partial^2}{\partial t^2} u(x,t) = - C \frac{\partial^2}{\partial x^2} u(x,t) \end{equation}
Approximation (FE):
\begin{equation} \mathbf{D}_t^+ \mathbf{D}_t^+ u(x,t) = - C \mathbf{D}_x^+ \mathbf{D}_x^+ u(x,t) \end{equation}
$\rightarrow$ Solve algebraically for the unknowns.
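A quick numerical check of the $\mathbf{D}^+ \mathbf{D}^+$ formula for the second derivative (a sketch; the test function $u = \sin$ and the point are arbitrary choices):

```python
import numpy as np

u, d2u = np.sin, lambda x: -np.sin(x)             # u'' = -sin
x, dx = 1.0, 0.01
D2 = (u(x + 2*dx) - 2*u(x + dx) + u(x)) / dx**2   # D+ D+ u
print(abs(D2 - d2u(x)))                           # small: approximates u''
```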
!jupyter nbconvert FDSchemes_Intro.ipynb --to slides --post serve