We have looked at some basic iterative methods: the Jacobi method, the Gauss-Seidel method and the successive over-relaxation (SOR) method. These are classical methods, and there are also combinations of them, such as the red-black ordering. Each of them is characterized not only by its domain of convergence, that is, the conditions under which it converges, but also by its rate of convergence.
Some aspects that we will not be discussing in this course also have a bearing on which method we use. For example, the Jacobi method may be useful for parallel computing because it does not require values computed in the current iteration. So, there are special features, such as the vectorization and parallelization properties of the methods, all of which go into the selection of a particular method when we are looking at an advanced implementation of a CFD method. Apart from this, the general requirement for an iterative method is that it should converge fast, and we have seen that the convergence behaviour of a given iterative scheme is determined by its iteration matrix. In the classical methods, there is an initial phase in which the variation of the residual depends on the full spectrum of eigenvalues, and only as we go to a larger and larger number of iterations do we reach the asymptotic convergence rate.
So, if there are ways in which the convergence can be changed dynamically, that would help; or, if we can use some other methods to increase the rate of convergence, that would also be helpful. As an example of a dynamic method, the SOR scheme or the Gauss-Seidel scheme can be converted into a non-stationary scheme, as in the Chebyshev method, in which some of the parameters associated with the solution are changed dynamically from iteration to iteration; within the context of the Gauss-Seidel or SOR method, this affects the rate of convergence. Such accelerations are specific to certain cases, and generalization may not be so easy. In particular, when we have symmetric positive definite matrices, those methods work very well, and with Chebyshev iteration one can reach the convergence rate of an optimal SOR.
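As an illustration of such a non-stationary scheme, here is a minimal sketch (not from the lecture) of Chebyshev-accelerated iteration, assuming a symmetric positive definite matrix A and known bounds [lam_min, lam_max] on its eigenvalue spectrum; the function name and interface are chosen for this example:

```python
import numpy as np

def chebyshev_iteration(A, b, lam_min, lam_max, iters=200):
    """Chebyshev-accelerated iteration for A x = b, assuming A is symmetric
    positive definite with all eigenvalues in [lam_min, lam_max].
    Unlike Jacobi/SOR, the step parameters alpha and beta change every
    iteration, which is what makes the scheme non-stationary."""
    d = 0.5 * (lam_max + lam_min)      # centre of the spectrum
    c = 0.5 * (lam_max - lam_min)      # half-width of the spectrum
    x = np.zeros_like(b)
    r = b - A @ x
    p = np.zeros_like(b)
    alpha = 0.0
    for k in range(iters):
        if k == 0:
            p = r.copy()
            alpha = 1.0 / d
        else:
            beta = (0.5 * c * alpha) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = r - alpha * (A @ p)        # keep the residual consistent with x
    return x
```

Applied to the discrete 1-D Laplacian as a model problem, with the exact eigenvalue bounds supplied, the residual decays at a rate comparable to optimally tuned SOR.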
When we have non-symmetric matrices, like the ones we get for a general steady-state convection-diffusion type of equation, where symmetry of the coefficient matrix A is not guaranteed, these methods may not be applicable. We can still take advantage of the nature of the problem to introduce efficient direct methods into the overall solution scheme. We have seen that the efficient direct solution methods are specific variations of the Gaussian elimination method or the LU decomposition method for banded matrices. We have seen the tridiagonal matrix algorithm (TDMA), for which the number of mathematical operations required for the solution varies linearly with the number of variables N, whereas for the classical iterative methods in the asymptotic limit it varies as N squared, and for the standard Gaussian elimination method as N cubed.
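As a reference point, the Thomas algorithm can be sketched as follows; this is the standard textbook formulation, with array names chosen for this example:

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side.  Cost is O(N): one forward
    elimination pass and one back-substitution pass."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For a diagonally dominant system, the algorithm is stable without pivoting, which is exactly the situation for the discretized diffusion equations discussed here.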
So, if we can introduce the direct method associated with the TDMA, the tridiagonal matrix algorithm, into a Gauss-Seidel method, we can hope to achieve better results. That is one way of combining the classical iterative methods with a direct method. We will look at this particular case, and we will also look at other cases which have the same goal in mind: to somehow increase the rate of convergence, perhaps by spending more time per iteration, but improving the overall rate of convergence. So, we are looking at advanced iterative methods.
At this stage, it is probably a good point to recollect why the emphasis is on iterative methods. We have seen that in a general CFD solution, we have to solve at least four equations for a three-dimensional problem: the continuity equation and the three momentum equations. We know that each of these equations can be converted into a system A phi = b, but we also know that the coefficient matrix A contains assumed values of other variables which are not yet determined. For example, when we discretize the u-momentum equation, we assume the values of v and w, and maybe the pressure or related quantities, at each point. So, the coefficient matrix is an approximation based on current estimates of undetermined variables coming from the other equations. In such a case, there is no point in solving A phi = b to death, that is, to the utmost numerical accuracy. If we can make some calculations and arrive at an improved solution, then we can update the values in the coefficient matrix itself.
It is in that sense that, for a CFD problem, iterative methods are the most suitable when we are looking at the overall solution of a three-dimensional or two-dimensional case. That is why we want to use iterative methods, which allow us to improve upon the coefficients of the matrix A corresponding to each discretized equation.
The first possibility is a classical method known as SLOR, successive over-relaxation applied along lines (successive line over-relaxation). The basic idea is this: consider a grid point (i, j) and its four neighbours (i-1, j), (i+1, j), (i, j-1) and (i, j+1). For a Laplace equation or a Poisson equation, the value at point (i, j) is coupled to these four neighbouring points, and these are the five points that appear in the row of A phi = b corresponding to that point. In a standard Gauss-Seidel method, we evaluate the value at (i, j) at iteration k+1 in terms of the (k+1)-level values of the neighbours already visited in the sweep and the k-level values of the others. In the Jacobi method, we evaluate the (k+1)-level value at (i, j) based on the k-level values of all four neighbouring points. In the successive over-relaxation method, we additionally apply a relaxation factor w in the overall evaluation. In each case, the value at (i, j) is explicitly determined in terms of weighted values of the four neighbouring points. So, we compute one point, then the next, and finish the row before moving on: we do a point-by-point evaluation, with the usual computation going from left to right along a row, all the way to the end, and then on to the next row, and so on.
So, one can say that when we are at a point (i, j) in such a sweep, the values in row j+1 are not yet known, while the values in row j-1 are already updated, because that is the lower row and the sweep has already passed it. If, instead of the point-by-point evaluation, we take the values in rows j-1 and j+1 as known, then the value at (i, j) is given by its two neighbours in the same row, and similarly for the next point, and the next. So, once the values in the rows above and below are assumed, the value at the centre is given only in terms of the left neighbour and the right neighbour. That gives us a tridiagonal matrix: the value at each point is represented only in terms of three successive, adjacent points in its own row. Under this assumption, we can convert the equations of row j into A phi = b, where A is tridiagonal and all the remaining entries are zero.
The unknowns are phi(i-1, j), phi(i, j) and phi(i+1, j); the values at j-1 and j+1 do not appear as unknowns, because they are assumed and go to the right-hand side. So, b contains the values of phi(i, j-1) and phi(i, j+1), while A couples phi(i-1, j), phi(i, j) and phi(i+1, j): left, centre and right. Those are the three adjacent diagonals, including the main diagonal. So, this becomes a tridiagonal system for all the values in the j-th row.
This phi now contains all the phi's in row j. We can solve it using the TDMA scheme, or the Thomas algorithm, because it is effectively the discretized equation of a one-dimensional Laplace equation, and we know that it satisfies the weak form of the diagonal dominance condition. Therefore, we can apply the Thomas algorithm and solve for all these points efficiently: if the number of points in a row is M, the number of computations required will be of the order of thirteen times M or so. So, we can make use of the efficient TDMA scheme to get all the values in a row in one go, with assumed values for the neighbouring rows. Once we determine this row, we go to the next row, and then the next higher row, and so on. We thus do a line-by-line solution of the overall system of grid points, and depending on the underlying point scheme we can have line Gauss-Seidel, line Jacobi and line SOR. These are combinations of the Jacobi method plus TDMA, the Gauss-Seidel method plus TDMA and the SOR method plus TDMA. In this particular case, we are not doing a point-by-point explicit evaluation; we are solving for all the values in a particular row using the tridiagonal matrix algorithm, which we know is an efficient method for the solution of tridiagonal systems. So, this particular method is widely applicable.
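A single sweep of this kind can be sketched as below; this is an illustrative implementation, not the lecture's code, assuming a uniform grid for the 2-D Laplace equation with fixed Dirichlet boundary values stored in the same array, and with SciPy's banded solver standing in for a hand-written TDMA:

```python
import numpy as np
from scipy.linalg import solve_banded

def line_gauss_seidel_sweep(phi):
    """One row-by-row line-Gauss-Seidel sweep for the 2-D Laplace equation
    on a uniform grid.  phi includes the (fixed) boundary values.
    For each interior row j, the equation
        4*phi[j,i] - phi[j,i-1] - phi[j,i+1] = phi[j-1,i] + phi[j+1,i]
    is solved as one tridiagonal system; row j-1 already holds updated
    values (Gauss-Seidel), row j+1 still holds old values."""
    ny, nx = phi.shape
    m = nx - 2                            # interior points per row
    ab = np.zeros((3, m))                 # banded storage of tridiag(-1, 4, -1)
    ab[0, 1:] = -1.0                      # super-diagonal
    ab[1, :] = 4.0                        # main diagonal
    ab[2, :-1] = -1.0                     # sub-diagonal
    for j in range(1, ny - 1):
        rhs = phi[j - 1, 1:-1] + phi[j + 1, 1:-1]
        rhs[0] += phi[j, 0]               # left boundary value
        rhs[-1] += phi[j, -1]             # right boundary value
        phi[j, 1:-1] = solve_banded((1, 1), ab, rhs)
    return phi
```

Replacing the row loop's right-hand side with over-relaxed values would give line SOR; freezing all rows at the old iterate would give line Jacobi.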
How good is it compared to the standard Gauss-Seidel method? It will definitely have some advantage, because a typical Laplace-type equation is a boundary value problem, which means that the information of the values at the boundaries has to be propagated to all interior points. In a point-by-point evaluation, the information from the boundary travels only one step per iteration: the information of what the value should be at the left boundary is propagated once, twice, thrice, one point at a time. In a line evaluation, the information of the left and right boundaries is essentially instantaneously known to all the points in the row, and that improves the overall rate of convergence of line SOR, line Gauss-Seidel or line Jacobi as compared to the standard Jacobi, Gauss-Seidel or SOR methods, because the constraint of the boundary condition is transmitted faster in the case of line evaluation.
In this particular case, the information of the left and right boundary conditions is transmitted instantaneously along each row, but the top and bottom boundary conditions still propagate step by step. So, one sweep can be done along horizontal lines, and the next round of evaluation can use vertical columns. You can have a combination of line SOR applied along horizontal lines followed by line SOR along vertical lines, that is, HSOR plus VSOR, and again HSOR, VSOR, alternating between the two. That type of thing can also be done, and it is a variation of the same kind of method. So, this is one simple extension: the incorporation of the efficient TDMA scheme for one-dimensional diffusion equations into the overall scheme.
The same idea is formalized in what is called the ADI, or Alternating Direction Implicit, method, proposed by Peaceman and Rachford in the mid-1950s. It incorporates essentially the same observation: it is efficient to solve a one-dimensional equation using the TDMA scheme, but for a two-dimensional problem we cannot use the TDMA directly, because the discretization gives a penta-diagonal matrix with non-adjacent diagonals.
So, that problem is converted into two tridiagonal problems. Let us see this with the help of an example: the unsteady diffusion equation du/dt = alpha (d2u/dx2 + d2u/dy2), written symbolically as du/dt = (Sx + Sy) u, where Sx is the discretized operator for the x-derivative term and Sy the discretized operator for the y-derivative term. For example, (Sx u) at point (i, j) is (alpha/dx^2)(phi(i-1, j) - 2 phi(i, j) + phi(i+1, j)). Here u contains all the phi values appearing in the discretization. Using, for example, first-order differencing in time, we can write u^{n+1} - u^n = dt (Sx + Sy) u.
As usual, we have the choice of making the method explicit or implicit by choosing to evaluate the right-hand side at level n or n+1. If we make it explicit, evaluating the right-hand side at n, then no iteration is needed and we can go from point to point, but we may have severe constraints on the dt that we can use. If we make it implicit, then we have to convert this into a system A phi = b, and A is penta-diagonal, because at a given j we have contributions from i-1, i and i+1, and similarly from j-1, j and j+1. A penta-diagonal system cannot be solved using the TDMA method.
So, for the implicit scheme we bring the u^{n+1} terms to the left-hand side and write (1 - dt (Sx + Sy)) u^{n+1} = u^n. What we will now try to do is factor this operator: consider the product (1 - dt Sx)(1 - dt Sy) u^{n+1}. If we expand it, this is equal to (1 - dt Sx - dt Sy + dt^2 Sx Sy) u^{n+1}. The cross term carries dt squared, so compared to the dt terms it is a higher-order term, and if we neglect it, we get back (1 - dt (Sx + Sy)) u^{n+1}. That means the implicit operator can be approximately written as the product (1 - dt Sx)(1 - dt Sy) acting on u^{n+1}. This factorization allows us to come up with an overall scheme in which we can use the TDMA. Making this approximation, we can put the update into a two-step process, with an intermediate field ū:
(1 - dt Sx) ū = (1 + dt Sy) u^n, and
(1 - dt Sy) u^{n+1} = ū - dt Sy u^n.
So, Peaceman and Rachford, making use of this equivalence, converted the scheme into a two-step process. Now, is this equivalent to the original? It is approximately equal, because we have neglected the second-order terms. How can we show this?
Since we are dealing with operators, we pre-multiply the second equation of the two-step process by (1 - dt Sx), writing ū for the intermediate field u^{n+1}-bar:
(1 - dt Sx)(1 - dt Sy) u^{n+1} = (1 - dt Sx) ū - (1 - dt Sx) dt Sy u^n.
From the first step, (1 - dt Sx) ū is equal to (1 + dt Sy) u^n. Substituting, the right-hand side becomes (1 + dt Sy) u^n - dt Sy u^n + dt^2 Sx Sy u^n; the dt Sy terms cancel out, leaving u^n plus a second-order term that we can neglect. And the left-hand side, as we saw, equals (1 - dt (Sx + Sy)) u^{n+1} up to another second-order term. So, by substituting for ū, we recover the original implicit equation, leaving out only second-order terms which can be neglected.
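The two-step process can be sketched for the 2-D heat equation as follows; this is an illustrative implementation, assuming zero Dirichlet boundaries and equal grid spacing in x and y, with the intermediate field written as u_bar:

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_step(u, s):
    """One ADI step for the 2-D heat equation u_t = alpha*(u_xx + u_yy) with
    zero Dirichlet boundaries on a uniform grid; s = alpha*dt/dx**2.
    Splitting as in the lecture:
        (1 - dt*Sx) u_bar   = (1 + dt*Sy) u^n
        (1 - dt*Sy) u^{n+1} = u_bar - dt*Sy u^n
    Each half step is a family of independent tridiagonal solves."""
    ny, nx = u.shape
    n, m = ny - 2, nx - 2                          # interior sizes
    # dt*Sy applied to u^n on the interior (boundary rows are zero)
    dtSy_u = s * (u[:-2, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[2:, 1:-1])

    # x sweep: unknowns along each row; operator is tridiag(-s, 1+2s, -s)
    abx = np.zeros((3, m))
    abx[0, 1:] = -s; abx[1, :] = 1.0 + 2.0 * s; abx[2, :-1] = -s
    u_bar = solve_banded((1, 1), abx, (u[1:-1, 1:-1] + dtSy_u).T).T

    # y sweep: unknowns along each column
    aby = np.zeros((3, n))
    aby[0, 1:] = -s; aby[1, :] = 1.0 + 2.0 * s; aby[2, :-1] = -s
    u_new = np.zeros_like(u)                       # boundaries stay at zero
    u_new[1:-1, 1:-1] = solve_banded((1, 1), aby, u_bar - dtSy_u)
    return u_new
```

Both sweeps reuse the same banded factorization pattern, so the cost per time step scales linearly with the number of grid points.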
So, we recover the original scheme. Now, what is the advantage of splitting the calculation in this way? Each of the two steps gives us an A phi = b system in which A is tridiagonal. The first step is A ū = b, and the second is A u^{n+1} = b', where b' is a function of the intermediate field ū and of u^n. We can apply the TDMA scheme because the first system contains only Sx, which couples i-1, i and i+1, and the second contains only Sy, which couples j-1, j and j+1. This enables us to go from n to n+1 in a sequential, alternating way: first we solve the x-direction system to get ū at all the points, and then we use the y-direction system to get u^{n+1}. That completes the step from n to n+1 in this two-step process.
So, we make use of two TDMA solves to go from n to n+1. What is the advantage of this? It is like the line-SOR type of approach, but applied in both directions, for the row-wise calculation as well as the column-wise calculation, together with the advantage of an implicit formulation: the propagation of information from the boundaries to the interior points at, effectively, infinite speed is what we are actually getting here.
So, this method is called, obviously, Alternating Direction Implicit. Why implicit? Because the right-hand side is treated in an implicit way, with the resulting two-dimensional expression converted into a two-step process. We can easily extend this to a three-dimensional process as well. To add slightly more complexity, consider the equation du/dt = S u + Q, where the operator S now contains contributions not only from x and y but also from z, and Q contains the discretized source terms and other terms. One can write the time discretization as u^{n+1} - u^n = dt [theta H^{n+1} + (1 - theta) H^n], where H = S u + Q is the discretized right-hand side: H^{n+1} is evaluated using n+1-level values of whatever is relevant, and H^n using n-level values. If theta equals 1, the scheme becomes fully implicit; if theta equals 0, it becomes fully explicit; and if theta is somewhere in between, you have that much explicitness or implicitness included. For the specific case of theta equal to half, you get the Crank-Nicolson scheme, which is widely used for its second-order accuracy in time: whereas the other choices give first-order accuracy in time, theta equal to half gives second-order accuracy.
So, this is a generic discretization, with H containing derivatives in the x, y and z directions, coming for example from the diffusion approximation and maybe also from the convection terms. In the general case, with a second-order accurate discretization of diffusion and a first-order upwinding scheme for the convection term, we can expect a seven-diagonal matrix. So, this can be converted into A phi = b, where A has seven diagonals.
Obviously, we cannot use the TDMA scheme on such a system directly. We could use the Gauss-Seidel scheme and go ahead with the solution, but we can instead split the update into a three-step process, each step of which involves only one direction and therefore only tridiagonal coupling. Introducing a parameter tau, which we will explain shortly, the factored form reads
(1 - theta tau dt Sx)(1 - theta tau dt Sy)(1 - theta tau dt Sz) u^{n+1}, approximately equal to tau dt Q + (1 + tau dt (1 - theta) Sx)(1 + tau dt (1 - theta) Sy)(1 + tau dt (1 - theta) Sz) u^n,
that is, the n+1 level is written in terms of the source term plus products of one-dimensional operators acting on u^n. When we do not need a time-accurate solution of the equation, we can use the parameter tau to take a bigger step and increase the rate at which the solution reaches a steady state; if we want a time-accurate solution, tau must be equal to 1, and theta is the mix of implicitness and explicitness as before. Using this approximate factorization, we can break the discretized equation into three steps, one tridiagonal solve for each of Sx, Sy and Sz, with intermediate fields u-bar and u-double-bar: first solve (1 - theta tau dt Sx) u-bar = RHS, where RHS is the whole right-hand side above; then (1 - theta tau dt Sy) u-double-bar = u-bar; and finally (1 - theta tau dt Sz) u^{n+1} = u-double-bar, which is what we want.
So, these three steps together are equal to the single factored step. The unsplit equation gives us A phi = b with a multi-diagonal A, whereas each of the three steps gives us A phi = b with A tridiagonal. So, we can make use of the tridiagonal matrix algorithm three times, once for each direction, to get the overall solution. How do we know that the three steps are the same as the factored equation? Pre-multiply the third step by (1 - theta tau dt Sy) and substitute from the second step, then pre-multiply again by (1 - theta tau dt Sx) and substitute from the first step, to show that the three together are equal to the factored equation.
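This recombination argument can be written out compactly. The following is a reconstruction of the spoken equations; the placement of the full right-hand side in the first sub-step is one consistent choice:

```latex
\begin{align*}
(1-\theta\tau\,\Delta t\,S_x)\,\bar u &= \tau\,\Delta t\,Q
   \;+\;\prod_{d\in\{x,y,z\}}\bigl(1+\tau\,\Delta t\,(1-\theta)\,S_d\bigr)\,u^{\,n},\\
(1-\theta\tau\,\Delta t\,S_y)\,\bar{\bar u} &= \bar u,\\
(1-\theta\tau\,\Delta t\,S_z)\,u^{\,n+1} &= \bar{\bar u}.
\end{align*}
```

Pre-multiplying the last equation by $(1-\theta\tau\Delta t\,S_y)$ and substituting the second, then pre-multiplying by $(1-\theta\tau\Delta t\,S_x)$ and substituting the first, gives back

```latex
\[
(1-\theta\tau\Delta t\,S_x)(1-\theta\tau\Delta t\,S_y)(1-\theta\tau\Delta t\,S_z)\,u^{\,n+1}
  = \tau\Delta t\,Q+\prod_{d\in\{x,y,z\}}\bigl(1+\tau\Delta t(1-\theta)S_d\bigr)\,u^{\,n},
\]
```

which agrees with the unfactored scheme up to the neglected higher-order cross terms.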
So, you go from n to n+1 in three sub-steps, and there is always the possibility of this splitting not being stable, because each sub-step has an explicit character on its right-hand side: the first sub-step expresses u-bar in terms of the already-known u^n; the second expresses u-double-bar in terms of u-bar, which is known from the first; and the third expresses u^{n+1} in terms of the known u-double-bar. Although each sub-step is an A phi = b type of solve, the right-hand sides are built from known values, so in that sense these are three explicit steps. There is therefore a possibility that the scheme is unstable, and one can do a von Neumann type of stability analysis to see whether or not it will converge. For the specific case of the Crank-Nicolson scheme, where theta is equal to half, and putting tau equal to 1 for a time-accurate solution, the overall scheme takes a much simpler form, with factors such as (1 - (dt/2) Sy) in each direction.
So, this is a simplification of the same generalized expression, with theta equal to half making it Crank-Nicolson and tau equal to 1 giving a time-accurate solution; one can readily see its relation to the simple two-dimensional form we had earlier. Notice also that the first sub-step, for u-bar, includes the information of Q, but the sub-steps for u-double-bar and u^{n+1} do not, because Q is in a way already added in the first evaluation, and that first evaluation feeds into the rest. We can have other ways of doing this coupling, but we have to evaluate the stability of the overall scheme using a von Neumann type of stability analysis. When we are dealing with just a Poisson equation, a Laplace equation or an unsteady diffusion equation, where only the diffusion term comes in, this three-step process is stable; but if you also have the advection term, then the stability may be compromised, and one has to evaluate that properly.
So, the idea of the ADI scheme is to convert a three-dimensional or two-dimensional problem into a series of one-dimensional problems, and thereby gain an advantage. What is the advantage here? Each one-dimensional solve takes only a constant factor times N operations using the Thomas algorithm. If we say that the TDMA scheme requires K times N operations, where K is of the order of 12 and N is the number of variables, then each of the three sub-steps requires KN operations, so a total of 3KN operations is required, and the total number of operations varies linearly with N. In the regular case, solving A phi = b using a dense direct method would require of the order of N cubed operations.
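As a rough worked comparison (the constant K ~ 12 is the lecture's figure for the TDMA; the other constants, including the K factor on the iterative count and the 2N^3/3 leading term for dense elimination, are illustrative assumptions):

```python
# Illustrative operation counts for advancing N unknowns by one step.
def operation_counts(N, K=12):
    return {
        "ADI (three tridiagonal sweeps)": 3 * K * N,      # linear in N
        "classical iterative, asymptotic": K * N * N,     # quadratic in N
        "dense Gaussian elimination": 2 * N**3 // 3,      # cubic in N
    }

for N in (1_000, 100_000):
    print(N, operation_counts(N))
```

Even at modest N, the linear-in-N count of the three tridiagonal sweeps is orders of magnitude below the alternatives, which is the whole point of the splitting.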
Although the fact that it is a seven-diagonal sparse matrix may reduce the number of operations a bit if we are careful, we also have to do this repeatedly: each step from n to n+1 requires that many operations again. So, the advantage is that the overall number of mathematical operations required to go from step n to n+1 is of the order of 3KN, varying linearly with the number of points. And although we have described this for an unsteady equation, the ADI method can also be used for steady-state equations, in which case the time-derivative term would not be there, but you still have Sx, Sy and Sz. In such a case, the problem is again converted into A phi = b, with diagonals coming from all three directions, and we can again split it up into systems whose diagonals come only from the x-derivative, only from the y-derivative and only from the z-derivative.
So, this method can also be applied to steady-type equations. We have seen that the pressure-correction equation is one example where the ADI method can be used effectively, because it is a Poisson-type equation without any convection terms; therefore the stability of the method is guaranteed. In such cases, we can make use of this. So, this is one advanced iterative method, which incorporates the TDMA scheme in alternating directions to take account of all the variables appearing in one row, one column, or along the z-direction.
There are other methods which make the solution more implicit. In the line methods, there is row-wise implicitness; in the ADI method, row-wise implicitness is followed by column-wise implicitness, so both are present. The general idea is that the more implicit you make your solution scheme, the faster the rate of convergence will be. So, you would like to solve A phi = b in an iterative way with as much implicitness as possible. If you make it fully implicit, you are using a direct method, which we know is very time-consuming. So, the key is to make the iterative scheme more implicit, so that it converges faster, while at the same time the implicitness should not involve too much evaluation cost.
So, we have to balance the cost per iteration against the number of iterations: the rate of convergence determines the number of iterations we have to make to reduce the residual to some low value, and the cost per iteration is the number of mathematical operations we have to do to go from iteration k to k+1. We have seen that with the basic iterative methods, you go through an initial period in which the residual may not be converging very fast, and only finally does it go into the asymptotic convergence rate. If we can improve that rate of convergence by bringing in more implicitness into the solution, because implicitness means that the boundary condition information travels faster, then we can get faster overall convergence.