# Time Discretization

This section describes the various approaches used to discretize the governing equations in the temporal domain, such as two-step, multistep, and multistage methods.

To determine a numerical solution of the governing differential equations, the temporal domain must be discretized in addition to the spatial domain. The time coordinate influences the solution only in the future direction; therefore, all solution methods for time-dependent problems advance in time from given initial data.

The vast majority of methods used for temporal discretization are linear in nature: the time-dependent variable is updated using a linear combination of the variable and its time derivatives. Linear approaches can be broadly categorized by the number of steps, stages, and derivatives used in the discretization.

Some of the widely used time discretization approaches are described below.

## Generalized Two Step Methods

Two-step methods involve function values at two instants in time, generally the current time step, at which the solution is known, and the next time step, at which the solution is to be computed.

Consider a first-order ordinary differential equation for a dependent variable $\phi$ expressed as:

$\frac{d\phi }{dt}=f\left(t,\phi \right)$

with an initial condition $\phi \left({t}_{0}\right)={\phi }^{0}$ .

A generalized scheme using a weighted average for the approximation of the function value at the $(n+1)$th time step is expressed as:

${\phi }^{n+1}={\phi }^{n}+\Delta t\left[\theta {f}^{n+1}+\left(1-\theta \right){f}^{n}\right]$

where ${f}^{n}$ represents the function value at the nth time step and $\theta$ represents the weight.

The nature and stability of the temporal discretization scheme depend on the choice of the weight: $\theta =0$, $\frac{1}{2}$, and $1$ yield the forward Euler, Crank-Nicolson, and backward Euler schemes, respectively.
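As an illustrative sketch (the function name and test problem are ours, not from the text), the weighted-average scheme can be implemented for the linear model problem $\frac{d\phi}{dt}=\lambda\phi$, for which the implicit part of the update can be solved in closed form:

```python
import math

def theta_scheme(lam, phi0, dt, n_steps, theta):
    """Advance d(phi)/dt = lam*phi with the generalized two-step
    theta scheme.  For this linear right-hand side the implicit
    update solves in closed form:
        phi^{n+1} = phi^n * (1 + (1-theta)*lam*dt) / (1 - theta*lam*dt)
    theta = 0   -> forward Euler (explicit)
    theta = 1/2 -> Crank-Nicolson
    theta = 1   -> backward Euler (implicit)
    """
    phi = phi0
    for _ in range(n_steps):
        phi *= (1.0 + (1.0 - theta) * lam * dt) / (1.0 - theta * lam * dt)
    return phi

# Decay problem d(phi)/dt = -phi, phi(0) = 1; exact solution exp(-t)
exact = math.exp(-1.0)
for theta in (0.0, 0.5, 1.0):
    approx = theta_scheme(lam=-1.0, phi0=1.0, dt=0.01, n_steps=100, theta=theta)
    print(f"theta = {theta}: error = {abs(approx - exact):.2e}")
```

Running the comparison shows the expected orders of accuracy: Crank-Nicolson ($\theta=\frac{1}{2}$) is second-order accurate, while the two Euler variants are only first-order.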

## Multistep Methods

Multistep methods involve function values at more than two instances of time. These methods are generally derived by fitting a polynomial to the temporal derivative of the dependent variable, that is, $f\left(t,\phi \right)$ .

These methods include the Adams-Bashforth and Adams-Moulton families. The order of the method depends on the number of time points at which the polynomial fit is used. The third-order accurate Adams-Moulton method is expressed as:

${\phi }^{n+1}={\phi }^{n}+\frac{\Delta t}{12}\left[5{f}^{n+1}+8{f}^{n}-{f}^{n-1}\right]$

Because these methods require initial data at several previous steps, they are not self-starting.
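A minimal sketch of the third-order Adams-Moulton scheme for the linear model problem $\frac{d\phi}{dt}=\lambda\phi$ (the function name and the choice of a trapezoidal bootstrap step are ours) illustrates both the update formula and the non-self-starting nature of the method:

```python
import math

def adams_moulton3(lam, phi0, dt, n_steps):
    """Third-order Adams-Moulton for d(phi)/dt = lam*phi.
    Not self-starting: the first step is bootstrapped with one
    Crank-Nicolson (trapezoidal) step to supply phi^{n-1}.
    For the linear RHS the implicit update solves in closed form:
        phi^{n+1} = (phi^n + dt/12*(8*f^n - f^{n-1})) / (1 - 5*lam*dt/12)
    """
    phi_prev = phi0
    # Bootstrap step (trapezoidal rule) to obtain the second point
    phi = phi_prev * (1 + 0.5 * lam * dt) / (1 - 0.5 * lam * dt)
    for _ in range(n_steps - 1):
        f_n, f_prev = lam * phi, lam * phi_prev
        phi_prev, phi = phi, (phi + dt / 12.0 * (8 * f_n - f_prev)) \
                             / (1 - 5 * lam * dt / 12.0)
    return phi

# Decay problem to t = 1; exact solution exp(-1)
print(abs(adams_moulton3(-1.0, 1.0, 0.01, 100) - math.exp(-1.0)))
```

Any second-order (or better) one-step method works for the bootstrap; using forward Euler instead would degrade the overall accuracy of the first step.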

## Multistage Methods

Multistage methods compute the function value multiple times within the same time step. They generally involve predictor and corrector steps to compute the values at the $(n+1)$th time step.

Numerical solution schemes are often referred to as being explicit or implicit. When a direct computation of the dependent variables can be made in terms of known quantities, the computation is said to be explicit. When the dependent variables are defined by a coupled set of equations, and either a matrix or an iterative technique is needed to obtain the solution, the numerical method is said to be implicit.

Explicit methods are easy to program but only conditionally stable, whereas implicit methods offer better stability at a higher computational cost. Predictor-corrector methods offer a compromise between these choices. A variety of methods exist, depending on the choice of base method and the time instants used in the predictor and corrector steps.

The most popular methods in this category are the Runge-Kutta methods. A fourth-order Runge-Kutta method is constructed as follows:
• Explicit Euler Predictor: ${\phi }_{*}^{n+\frac{1}{2}}={\phi }^{n}+\frac{\Delta t}{2}{f}^{n}$
• Implicit Euler Corrector: ${\phi }_{**}^{n+\frac{1}{2}}={\phi }^{n}+\frac{\Delta t}{2}{f}_{*}^{n+\frac{1}{2}}$
• Mid-point rule Predictor: ${\phi }_{*}^{n+1}={\phi }^{n}+\Delta t{f}_{**}^{n+\frac{1}{2}}$
• Simpson's rule Corrector: ${\phi }^{n+1}={\phi }^{n}+\frac{\Delta t}{6}\left[{f}^{n}+2{f}_{*}^{n+\frac{1}{2}}+2{f}_{**}^{n+\frac{1}{2}}+{f}_{*}^{n+1}\right]$
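The four stages above map directly onto code. Here is a sketch for a general right-hand side $f(t,\phi)$ (the function names are ours; the implicit Euler corrector is realized as a single fixed-point pass, which is how the classical explicit method is obtained):

```python
import math

def rk4_step(f, t, phi, dt):
    """One step of the classical fourth-order Runge-Kutta method,
    written as the predictor-corrector sequence of four stages."""
    f_n = f(t, phi)
    # Explicit Euler predictor to the half step
    phi_half_1 = phi + 0.5 * dt * f_n
    f_half_1 = f(t + 0.5 * dt, phi_half_1)
    # Implicit Euler corrector at the half step (one fixed-point pass)
    phi_half_2 = phi + 0.5 * dt * f_half_1
    f_half_2 = f(t + 0.5 * dt, phi_half_2)
    # Mid-point rule predictor to the full step
    phi_star = phi + dt * f_half_2
    f_star = f(t + dt, phi_star)
    # Simpson's rule corrector combining all four slopes
    return phi + dt / 6.0 * (f_n + 2 * f_half_1 + 2 * f_half_2 + f_star)

# Decay problem d(phi)/dt = -phi, phi(0) = 1, integrated to t = 1
phi, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    phi = rk4_step(lambda t, p: -p, t, phi, dt)
    t += dt
print(abs(phi - math.exp(-1.0)))
```

Even with the coarse step $\Delta t = 0.1$, the fourth-order accuracy keeps the error at $t=1$ below $10^{-6}$ for this problem.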

## Generalized-$\alpha$ Method

The generalized-$\alpha$ method is an implicit time-integration method that achieves high-frequency numerical dissipation while minimizing unwanted low-frequency dissipation, and it offers unconditional stability for linear problems. It is a variant of the generalized two-step theta scheme discussed above, in which the first temporal derivatives are treated as additional variables.

For a linear system defined by:

$\stackrel{˙}{\phi }=\frac{d\phi }{dt}=\lambda \phi$

The generalized-$\alpha$ method for integration from time step ${t}_{n}$ to ${t}_{n+1}$ is constructed as follows:
• ${\stackrel{˙}{\phi }}_{n+{\alpha }_{m}}=\lambda {\phi }_{n+{\alpha }_{f}}$
• ${\phi }_{n+1}={\phi }_{n}+\Delta t{\stackrel{˙}{\phi }}_{n}+\Delta t\gamma \left({\stackrel{˙}{\phi }}_{n+1}-{\stackrel{˙}{\phi }}_{n}\right)$
• ${\stackrel{˙}{\phi }}_{n+{\alpha }_{m}}={\stackrel{˙}{\phi }}_{n}+{\alpha }_{m}\left({\stackrel{˙}{\phi }}_{n+1}-{\stackrel{˙}{\phi }}_{n}\right)$
• ${\phi }_{n+{\alpha }_{f}}={\phi }_{n}+{\alpha }_{f}\left({\phi }_{n+1}-{\phi }_{n}\right)$

where $\Delta t$ is the time step size $\left(\Delta t={t}_{n+1}-{t}_{n}\right)$ and ${\alpha }_{m},{\alpha }_{f},\gamma$ are free parameters.

The above four equations combine to yield the update:

${\Phi }_{n+1}=\mathbf{A}{\Phi }_{n}$

where the solution vector ${\Phi }_{n}$ at ${t}_{n}$ is defined as ${\Phi }_{n}={\left\{{\phi }_{n},\Delta t{\stackrel{˙}{\phi }}_{n}\right\}}^{T}$ and $\mathbf{A}$ is the $2\times 2$ amplification matrix.
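To make the stage equations concrete, here is a minimal Python sketch for the model problem $\dot{\phi}=\lambda\phi$. The parameterization of $\alpha_m$, $\alpha_f$, $\gamma$ in terms of a spectral radius $\rho_\infty$ follows one common choice from the literature on first-order systems (Jansen, Whiting, and Hulbert); the text above leaves them as free parameters, so this is an assumption of the example, as is the function name:

```python
import math

def generalized_alpha(lam, phi0, dt, n_steps, rho_inf=0.5):
    """Generalized-alpha integration of d(phi)/dt = lam*phi.
    One common parameterization for first-order systems:
        alpha_m = (3 - rho_inf) / (2 * (1 + rho_inf))
        alpha_f = 1 / (1 + rho_inf)
        gamma   = 1/2 + alpha_m - alpha_f   (second-order accuracy)
    where rho_inf in [0, 1] sets the high-frequency dissipation.
    """
    am = (3.0 - rho_inf) / (2.0 * (1.0 + rho_inf))
    af = 1.0 / (1.0 + rho_inf)
    gamma = 0.5 + am - af
    phi, dphi = phi0, lam * phi0  # consistent initial rate
    for _ in range(n_steps):
        # Substituting the interpolation rules for phi_{n+alpha_f} and
        # dphi_{n+alpha_m} into dphi_{n+alpha_m} = lam * phi_{n+alpha_f}
        # gives a single linear equation for dphi_{n+1}:
        dphi_new = (lam * phi + (lam * af * dt * (1 - gamma) - (1 - am)) * dphi) \
                   / (am - lam * af * dt * gamma)
        # Update formula: phi_{n+1} = phi_n + dt*[(1-gamma)*dphi_n + gamma*dphi_{n+1}]
        phi = phi + dt * ((1 - gamma) * dphi + gamma * dphi_new)
        dphi = dphi_new
    return phi

# Decay problem to t = 1; exact solution exp(-1)
print(abs(generalized_alpha(-1.0, 1.0, 0.01, 100) - math.exp(-1.0)))
```

Because the model problem is linear, the implicit stage reduces to a single division; for a nonlinear right-hand side, the same stage equations would be solved iteratively (e.g., by Newton's method) at each step.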