In today's lecture, number 29, we adopt a totally different approach. So far, we have been talking about solution methods based on mathematical classification. From this lecture onwards, we are going to highlight the scientific and high performance computing aspects alone. In the process, we basically try to develop high accuracy methods which are by themselves non-dissipative, non-dispersive and which have very high spectral accuracy. One of the candidate methods is the Hermitian implicit method, also known as the Pade approximation, which gives rise to what is called the general compact scheme.
So, we are going to talk about compact schemes from now onwards. What do we do in a compact scheme? We set up auxiliary relations in addition to the differential equations, and we obtain the derivatives simultaneously at many points together. We begin this discussion by talking about derivation methods for first derivatives, adopting stencils which are essentially symmetric or central.
By Taylor series analysis we can set up methods which are called high order compact schemes. The essential idea is to set up implicit relations between the functions and the derivatives; by equating the terms order by order, we can get very high order compact schemes. We then perform a global analysis of the compact scheme in the wave number plane. We notice that many of the older compact schemes had problems with the boundary stencil; these problems can be traced to instability or over-stability at either end of the domain. We notice that when we have a periodic problem this does not cause any difficulty; it is essentially a problem with non periodic problems. As an example, we talk about a sixth order scheme due to Lele. We also talk in general about the basis for choosing a compact scheme, because we said these are auxiliary relations, so there must be some basis; this relates to what we call a consistency condition. There was another method, investigated by Haras and Ta'asan, who treated minimizing the error as an optimization problem: they look at the problem in wave number space and try to develop a compact scheme from there. Let us get into the real core of the subject.
When we talk about scientific computing, our intention is to develop methods which will give you accurate solutions; people do talk about high performance computing. If you noticed, my first lecture was a sort of ridicule of that claim: what is high performance? We talked about what people used to do in the 1980s under the banner of supercomputing, or high performance computing as it is called. We have more computing power today, but that does not mean that those who were leading research in those days are doing the same today; there is a mismatch all along. So, instead of talking about high performance computing, we should focus our attention on accuracy, because that is something you can define; it is not a fashion statement. See, tomorrow Intel will announce a new chipset, and then whatever we did yesterday will be called not high performance computing; accuracy is not like that.
We try to define everything with, I would say, sobriety, to talk about what it is that we require: basically high accuracy, and that is what you notice. The first point mentioned here is that if you want accuracy, you will of course have to resolve all relevant spatial and temporal scales; that is what we decide to do. When we do that, we would also like to develop numerical methods which do not introduce additional layers of sources of error.
For example, we now know from our analysis that the numerical scheme must be neutrally stable; we cannot afford overly dissipative solutions, which may look good and make us feel comfortable, but the solutions are nonetheless erroneous. So that is not the way we would like to go; we would like completely non-dissipative schemes, so that we follow the physics of the problem. The governing differential equations themselves will dictate the accuracy; you do not have to do something extra numerically. Although, you may have noticed in yesterday's question paper the last question that I asked: solving the convection equation by a first order upwind scheme and by a uniformly second order scheme, which is better? The answer is the first order scheme, because there you could choose the CFL number equal to 1 and then you would get the most accurate solution. That is something that happens once in a while, for a model equation with model parameters. A priori we do not know the solution, so we do not know how to select those parameters. We should not really aim at either of these; what we have seen is not working anyway.
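The point about the first order upwind scheme being exact at CFL number equal to 1 can be seen in a few lines. This is a minimal sketch for the model convection equation on a periodic grid; the grid size and initial condition are arbitrary choices:

```python
import numpy as np

# First-order upwind for u_t + c u_x = 0 (c > 0) on a periodic grid.
# With CFL = c*dt/h = 1 the update u_j^{n+1} = u_j - CFL*(u_j - u_{j-1})
# reduces to a pure shift, i.e. the exact solution of the model equation.
N, c = 64, 1.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
h = x[1] - x[0]
dt = h / c                      # CFL = 1
u = np.sin(2 * np.pi * x)       # initial condition

nsteps = 10
for _ in range(nsteps):
    u = u - (c * dt / h) * (u - np.roll(u, 1))   # upwind update

exact = np.sin(2 * np.pi * (x - c * nsteps * dt))
err = np.max(np.abs(u - exact))
print(err)
```

The error is at machine precision only for this special parameter choice; any other CFL number brings back the first order dissipation.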
The third point is a very contemporary part of the discussion; it is a global phenomenon that people inadvertently compute and get solutions, which is possible, but which are inherently wrong. This is especially true in fluid mechanics when you are looking at high Reynolds number and high Rayleigh number flows, where you see that you have to resolve all those wave numbers. High wave numbers, as we have seen, can invoke dispersion error. In the absence of a standard solution, we often have the tendency to view these spurious solutions as representing turbulence, and that is one of the diseases of the modern day. We should refrain from doing that. So the third point is nevertheless still very important; we now need to talk about spurious dispersion.
Once I have said that you need a high accuracy solution, basically what you need to do is reduce your discretization error. One of the major sources of discretization error is truncation, so people try to derive higher order schemes. Higher order schemes, by now we know, have smaller truncation error, so we always try to develop them; but, as we propose to show in this module of the course, higher order is not always synonymous with higher accuracy. What should be attempted is to classify methods based on their behavior in k space, because that is what we are trying to do: we are trying to resolve different wave numbers k. So we look at the schemes in k space and find out how they are doing. There we will notice, as we move on, that classification based on the order of the truncation error is truly inadequate; we need to do better. Why and how do we do it? That is the name of the game in this part. We are going to talk about extremely accurate schemes; these are called the compact schemes.
They are based on what is called the Pade approximation. One attribute of this method is that you get almost spectral accuracy, while keeping the scheme as compact as possible. So that is what we will try to do. Essentially, these compact schemes are implicit in the sense that they relate the derivative to the function in an implicit manner, as shown here in equation 1. For example, let us say we are trying to derive the nth derivative, shown here on the left hand side, with the superscript within the bracket indicating the order of the derivative. The nth derivatives are written in an implicit manner; they are not explicitly evaluated point-wise. They are all indicated by the node index j plus k, where k goes from minus N l to N r, and j is the point on which we are focusing our attention. So, what you notice is that for the nth derivative on the right hand side, we prefer to write it this way, with h to the power n invoked in that fashion, so that the multiplicative coefficients b k on the right hand side, multiplying the function values, are pure numbers; they do not depend on the grid spacing. Whatever grid dependence there is comes through this side; you have already noted that for first derivatives we always divide by h, for second derivatives by h squared, and so on.
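The generic relation being described can be written out as follows; this is a reconstruction of equation 1 from the narration, with the symbols a k, b k, N l, N r as used in the lecture:

```latex
\sum_{k=-N_l}^{N_r} a_k \, u^{(n)}_{j+k}
  \;=\; \frac{1}{h^{n}} \sum_{k=-N_l}^{N_r} b_k \, u_{j+k}
```

Here the $a_k$ and $b_k$ are pure numbers; all grid dependence sits in the $1/h^n$ factor, so first derivatives carry a division by $h$, second derivatives by $h^2$, and so on, exactly as stated above.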
This is the general pattern we would like to follow. To tell you that what we are doing here is not just a pedagogic exercise, it has rather relevant practical application. Let me tell you that although we are writing this generic equation for a grid with uniform spacing h, you can actually use it directly for a non-uniform grid; of course, what you will have to do is transform your governing equation, so there is an additional step first. Whatever governing differential equation you have in the physical plane, we transform it into a sort of computational plane; in the computational plane, we always have uniform spacing. Once we decide to do that, you realize that the scope of such an exercise expands enormously; you can perhaps do almost all possible problems that you have. In fact, I have told you that I am teaching you this method because even today the flow field around the space shuttle is calculated by this method, not by finite element or finite volume methods; it has extreme applications and it has delivered very satisfactorily so far.
Let us look at the bandwidth of such a scheme. Bandwidth is determined by how many points across which the derivatives are related, shown by N l and N r on the left hand side. In the same way, we also notice that the bandwidth of the scheme is determined by the function values on the right hand side.
So, what are we trying to do? Let me give you a flavor of the thing that we are trying to do. Let us say we are trying to solve a space-time dependent problem, which is what we always get; this is a very generic problem that we come across. I am writing a vector equation for fluid mechanics; this is what is called the Navier-Stokes equation, with which most of you are familiar. What you notice is that we are talking here of spatial discretization; we are essentially talking about evaluating terms of this kind. In the compact scheme, what do you do? Earlier, if we had a term like this, we would have written it as, say, u x del del x plus u y del del y; let us say that is how we would do it in a 2D problem. Here we write the vector, so if I do it for the x component, I will write u x; if I do it for the y component, I will write u y.
Earlier, in the explicit method, we specifically wrote those derivatives, del u x and so on; but here, instead of going through that route, we propose to use this kind of functional relationship in evaluating the derivatives. So I could set up methods by which I could calculate this kind of derivative by using the equation given here; or, in the same way, I could use a similar sort of scheme by putting n equal to 2, and then I will be getting the second derivative terms. You can do all of this. So basically, the compact scheme means we are trying to evaluate these derivatives that we have written here using this equation. We do it because it delivers more accuracy; but you can realize that, apart from solving the differential equation, you are imposing some additional task on yourself by writing such auxiliary equations. These are not part of the problem; you are doing it to evaluate those derivatives more accurately. Once you evaluate those derivatives more accurately, you will see the amount of benefit that you derive. So basically, what we need to do is, for each of these derivatives that we have written here, write out a similar equation. Once we have evaluated the derivatives from those auxiliary equations, they will go into these equations; we plug them in there, and that is the whole strategy. Now you understand how and in what way we are going to proceed.
That is why the bandwidth is important; we need to find out how many points are involved in evaluating these derivatives. Why are these auxiliary equations written the way they are? If I had written a corresponding explicit scheme, I would have probably just written it like this, looking at the jth node and trying to evaluate the nth derivative there; the bandwidth defines how far we have to go. We also notice that the right hand side has a property in terms of k: if the coefficients b k are placed symmetrically about the node, then we get a symmetric stencil. What I am trying to say will become very clear if represented for the first derivative, so let us see the example. Recall that we have done it like this: this is your second order scheme and, if I am not wrong, you can actually convince yourself that what I am writing is correct by doing the Taylor series expansion and checking that this is a second order scheme and that is a fourth order scheme. What I am saying, essentially, is that about the jth node the coefficients are symmetric in modulus and the signs are opposite. What does that do? Of course, it removes all the even derivative terms.
Removing all the even derivatives is one of the things we want, because if we retained those even derivatives, they would give rise to numerical dissipation, and we do not want that. So what we already know from the explicit scheme I have transferred to what we do with this implicit scheme also. On the left hand side, we require the coefficients a k to be symmetric about a naught across the bandwidth N; if I had an additional term here, it would remain unbalanced, it would give us all kinds of derivatives and that would not give us a central scheme. In the same way, in the compact scheme also, the centrality of the scheme is determined by the condition that the a k's have to be symmetric about a naught; the number of points has to be the same on either side of the node under consideration. Now, what needs to be done is to fix these coefficients a k and b k. How do we go about doing this? Of course, what we do is write the Taylor series and equate the coefficients. For example, here you can find out what b 1 is; b 1 is 1 by 2, right? b plus 1 and b minus 1 are the same in magnitude: one is 1 by 2, the other is minus 1 by 2.
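These explicit central stencils, the second order one with coefficients plus and minus 1 by 2 and the fourth order one mentioned next, can be checked numerically by halving the grid spacing and watching the error ratio. A minimal sketch; the test function sin and the step sizes are arbitrary choices:

```python
import numpy as np

# Verify the order of the standard central differences for u'(x):
#   second order: (u_{j+1} - u_{j-1}) / (2h)
#   fourth order: (-u_{j+2} + 8 u_{j+1} - 8 u_{j-1} + u_{j-2}) / (12h)
def errors(h, f=np.sin, fp=np.cos, x=1.0):
    d2 = (f(x + h) - f(x - h)) / (2 * h)
    d4 = (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)
    return abs(d2 - fp(x)), abs(d4 - fp(x))

e2a, e4a = errors(1e-2)
e2b, e4b = errors(5e-3)
# Halving h should reduce the errors by about 2^2 and 2^4 respectively.
print(e2a / e2b, e4a / e4b)
```

The first ratio comes out close to 4 and the second close to 16, which is exactly the order-of-accuracy statement made by the Taylor series analysis.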
For the fourth order scheme, you can see 1 by 12 and then two thirds, two thirds and 1 by 12 again. This is the way we have done it for the explicit scheme; we can do the same thing here for the compact scheme too. So it is better to go and take a look at a specific example: deriving the first derivative in terms of the function values. What do we do? Take a case where, let us say, the first derivative is estimated by taking a bandwidth of 2; that is, if we are looking at u j prime, we take 2 points to the left and 2 points to the right on the left hand side. When we try to relate the function values, what we have written down here involves 7 points, from plus 3 to minus 3 including u j; that will be 7 points.
So this is one of many possible ways in which one can approach this problem. View this as a kind of generic equation. So what happens is, in addition to the actual solution of the differential equation, we first have to do this homework: given the function values on the right hand side, we are trying to find out what those first derivatives are. That is the auxiliary step we will have to go through in this exercise.
As I said, we have to relate these coefficients by Taylor series expansion. What do we do? We write the Taylor series expansions, and we will have expressions of various orders on either side. We try to match those coefficients, and the first unmatched coefficient that remains determines the order of accuracy of the representation. For example, flip back to the previous page: if I look at the coefficient of u j prime, it will be 1 from here, 2 alpha from this, and another 2 beta from this end. So the coefficient of u j prime on the left hand side is 1 plus 2 alpha plus 2 beta. In the same way, we can expand the individual groups and find the coefficient of u j prime on the right hand side. If we satisfy this equation, that means we are exact up to the first derivative; what remains unsatisfied is the next order part. So an equation where we evaluate the coefficients by satisfying equation 3 leads to a second order method, because what will the next term be? The next term is the third derivative. When I match the third derivatives, I again get the next higher order scheme, the fourth order scheme. You can actually work out the coefficients: the third derivative on the left hand side invokes 6 times alpha plus 4 beta, and on the right hand side you get this. You can very clearly see the pattern: this is 2 squared, this is 2 cubed, and so on; you can relate that very easily to the specific way equation 2 is composed.
You can go through this exercise, and then you can see that if you match the coefficients up to the fifth derivative, by invoking the previous two equations plus this one, you end up with a sixth order scheme. If you additionally match the seventh derivative terms, then you will get an eighth order scheme, and so on. Now, the question is how many unknowns we have. We have 5 unknowns: a, b, c, alpha and beta; those are what we are trying to find out. So at the most, what could we do? We could satisfy all 5 of these equations that we have written so far: 5 equations, 5 unknowns, and we are done; in that sense we have a unique tenth order scheme out of this. However, if you decide, let us say, to go for a sixth order scheme, then what would you be solving? You would be solving these three equations, the ones coming from the first derivative, the third derivative and the fifth derivative; that is matching up to the fifth derivative. So we have three equations to solve; what additional things we then have to do, we will come to later.
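As a sketch of this coefficient matching: if we also set beta and c equal to 0 (the choice taken up in the concrete example later), the three matching conditions reduce to a small linear system whose solution is the classical sixth order tridiagonal scheme. The exact numerical factors below are the standard relations for this stencil (as in Lele's paper); treat them as an assumption to be checked against your own Taylor expansion:

```python
import numpy as np

# With beta = c = 0, the compact first-derivative scheme
#   alpha*u'_{j-1} + u'_j + alpha*u'_{j+1}
#     = a*(u_{j+1}-u_{j-1})/(2h) + b*(u_{j+2}-u_{j-2})/(4h)
# must satisfy the Taylor-matching relations
#   a +  b = 1 + 2*alpha      (consistency: coefficient of u')
#   a + 4b = 6*alpha          (third-derivative terms)
#   a +16b = 10*alpha         (fifth-derivative terms)
# Unknowns ordered (alpha, a, b):
M = np.array([[-2.0,  1.0,  1.0],
              [-6.0,  1.0,  4.0],
              [-10.0, 1.0, 16.0]])
rhs = np.array([1.0, 0.0, 0.0])
alpha, a, b = np.linalg.solve(M, rhs)
print(alpha, a, b)   # 1/3, 14/9, 1/9: the classical sixth order scheme
```

Matching one more order (the seventh derivative) with the full five unknowns would, in the same fashion, lead toward the unique tenth order scheme mentioned above.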
But let us try to see what we do operationally; let us not just talk about it in an analytical mode. What do we actually do in computing? In computing, as you can see, on the left hand side you have the unknown derivatives, which I can write as a vector, and that is multiplied by a matrix which we are calling A. As you have seen, the entries happen to turn out to be constant coefficients: the matrix A holds the coefficients for the derivatives and B the coefficients of the function values; that is what you will have to set up. So over the computational domain, you would be setting up an equation of this kind, and you would be evaluating each of those derivatives; I am purposely avoiding writing which derivative it is. Let us say you are solving a 2D problem: you will have 2 components of velocity, and each one will have 2 first derivatives. So we will have to solve 4 such sets of equations, for del u x del x, del u x del y, del u y del x and del u y del y; we will have to execute this step for four such derivatives.
Now, some may have difficulty in the beginning in appreciating what we mean by a periodic problem and a non periodic problem, despite our various exposures before. Let us say we take a domain of this kind and discretize it by uniformly spaced points, with the points going like this. A periodic problem is actually repeating the domain on the outside. This is something again not written explicitly in the book but, whenever you do any computing, you choose a domain and you are trying to extract knowledge within this domain; how does that knowledge relate to what is happening outside? This is a question that has been sort of bypassed, and people probably do not appreciate it: whatever may be the original nature of the problem, whenever we decide on a domain and do this, we have actually assumed a periodic extension of the same problem over the rest of the universe. That is why, even though your original problem is non periodic, if you solve the problem in a limited domain and then increase the domain size, you happen to see quite a bit of difference in the solution.

What is a periodic problem? Periodicity means, let us say, that the solution u at x equal to 0 is u at x equal to 1; this is the physical condition of periodicity of the problem, so the physical variables are periodic. What I just now said is that in computation we always have a periodic extension of the computational domain; that is a different aspect, and that is what your numerics does. Whether you like it or not, your numerics always does that; this is something you must keep in the back of your mind. If this condition is satisfied, then it is a periodic problem; this is what we are talking about as physical periodicity. If u at x equal to 0 is not equal to u at x equal to 1, then this is what we call a non periodic problem. So that is what we are stating here also: if we have a periodic problem, as I have explained to you many times before, this A matrix and the B matrix will be periodic.
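For a periodic problem the same interior stencil applies at every node, so A and B become circulant matrices and no boundary closures are needed at all. A small sketch, using the sixth order coefficients alpha = 1/3, a = 14/9, b = 1/9 with beta and c zero (the scheme worked out later in this lecture); the grid size and test function are arbitrary choices:

```python
import numpy as np

# Periodic compact first derivative: both A (tridiagonal) and B
# (pentadiagonal) wrap around, i.e. they are circulant.
N = 32
x = np.linspace(0.0, 2*np.pi, N, endpoint=False)
h = x[1] - x[0]
alpha, a, b = 1/3, 14/9, 1/9

A = np.zeros((N, N))
B = np.zeros((N, N))
for j in range(N):
    A[j, j] = 1.0
    A[j, (j - 1) % N] = A[j, (j + 1) % N] = alpha
    B[j, (j + 1) % N] += a / (2*h)
    B[j, (j - 1) % N] -= a / (2*h)
    B[j, (j + 2) % N] += b / (4*h)
    B[j, (j - 2) % N] -= b / (4*h)

u = np.sin(x)
du = np.linalg.solve(A, B @ u)       # compact derivative: A u' = B u
err = np.max(np.abs(du - np.cos(x)))
print(err)
```

Even on this coarse grid the error is tiny, which is the near-spectral accuracy being claimed; a dense solve is used here only for clarity, since in practice the tridiagonal A would be handled by the Thomas algorithm.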
We have also seen, if I go back to this generating equation 2, that if I want to obtain the derivatives for such a system, you can very clearly see where j can start from. Here j equal to 4 works, because you can see that if I try to apply this at j equal to 3, I do not have the information; I am falling outside the domain. So you can see that the choice of the scheme originates in our ability to handle such equations in the most generic fashion. So there is this problem with the compact scheme: although it promises to give you high accuracy, you have to see what it takes to set up an equation like this. You will have to apply such a generic equation only at the interior nodes, and the interior nodes start from j equal to 4 here and end, if I have n points, at n minus 3; that is the whole idea. So what happens many times is that, to avoid such difficulties, you purposely set this c equal to 0. Then you can start the stencil that we have written here from j equal to 3 up to n minus 2. You can realize that there are so many variations possible, but it also tells you that if we adopt such a scheme, we will have to do something special for the left over points. Suppose I keep the full stencil; then I will have to write similar equations for j equal to 1, j equal to 2 and j equal to 3, and from j equal to 4 onwards we will be using this.
In the same way, we will have to write out equations for j equal to n, j equal to n minus 1 and j equal to n minus 2, because this equation will be valid only from 4 to n minus 3. You have to realize that the compact schemes have this problem: if you have a non periodic problem, then you will have to add additional near boundary stencils; that is what I was explaining to you. You have to write additional such equations, and those equations are constrained because, as we are saying, we are restricting our computation to a domain; we have knowledge only about what is happening inside. We do not know what the states of the variables are outside.
So, if I want to write the near boundary stencil for j equal to 1 or 2 or 3, and so on, I have to write those derivatives only in terms of what we know; we cannot get information from outside. Then what happens? See, any derivative that I am evaluating here near the left boundary gets its information only from the interior of the domain on this side; and what about here, near the right boundary? That information also comes only from inside. So when we write down these near boundary stencils, we are actually imposing some numerical conditioning on the information propagation: on the left hand side it goes from right to left, and on the right side it goes from left to right. Suppose I am solving a simple equation like this; that is why we will be looking at this equation very often. It is such a nice equation that it actually sets the gold standard for all computing.
If you cannot do anything with this equation, just go home and do not waste your time; if you cannot solve this equation correctly, do not hope that life will be easier with any other fancy method. What is interesting about this is that if c is positive, what is happening? Information is propagating from left to right. Now, if I use what we are suggesting, the near boundary stencil at the left end, the information there is numerically propagating the other way, and the numerical conditioning imposed violates the physics. You would be assured, you will be happy to note, that when you violate physics, your numerical error actually grows: locally, we are opposing the propagation of the signal in the correct direction, and that will immediately hurt you in terms of numerical instability there, locally. But having said that, there is something you must realize: in many such exercises, why does this not create a catastrophe?
Let us say we are trying to solve this equation in a domain like this, and we are setting up near boundary stencils for, say, 3 additional nodes on the left and 3 on this side. Then what will happen? Here we are sending the signals this way, and here we are sending them from there. What we are getting here is a non-physical direction in the discretization, whereas this side aids the actual physical propagation of the signal. We can see that it aids; but we also know from our exam paper yesterday that, with upwinding, if you do not choose the parameters correctly, you can overdo it. So what happens here? This side leads to numerical instability, and this side is physical, and I will put it in quotes: that "physical" discretization does not violate the directionality but, if you are not careful, you may end up with a lot of numerical dissipation. At the most you can hope to do it without additional numerical dissipation; at least it will not destabilize. Now, you may ask me a question: then how come people still do it and get away with it? Well, the answer is right in front of you, because what happens is that, apart from the signal itself, the error is also governed by an equation like this, but with some right hand side, as we have
established very clearly. Now what happens? That means the error also has a directionality, and it goes in the same direction as the signal. What happens is that the error starts to grow here but it is also convecting; after a few time steps, the error which was here will be inside, beyond this third point. Once inside, it is no longer violating the physical direction. This is an actual tragedy of computing, because people do this and then go back and say, look, I am doing direct numerical simulation. What has happened? You are artificially creating a source of error because of the numerical discretization, and that error actually goes inside your computing domain. Once the errors are past these bad nodes, they are no more violating the physical directionality, because there your scheme is perfectly central; that is why you were looking at a central scheme, as we do not want to impose additional dissipation. So from j equal to 4 onwards you are in safe territory. So what happens in a problem like this, if I am solving it, is that I am always, in a spurious sense, exciting the system from the left hand side. In the same way, when I am coming to these last 3 nodes, if I am not careful, I am adding numerical dissipation, so the error is decaying there also. What happens is that, without your knowledge or intent, you are exciting the system spuriously on the left and damping it on the right.
So what happens is that if you are solving a problem where you would not require any disturbance, you would still see the flow as if it has been excited. That is why I said this is the tragedy: people do not have a proper quantification of what this additional excitation is that we are creating. See, the numerical excitation and the physical excitation in a real physical problem are not necessarily going to be the same, right? If I take a different numerical method, I will excite the problem in a different fashion. In the physical world, the background disturbances are probably sometimes not known; you know my favorite example of tossing a coin. We do not know what the source of uncertainty is there. That is why we will probably never be able to write out a governing evolution equation such that, if I give you an initial condition, a coin facing head upwards at this time, at this height, tossed with this much force, you can tell me what the outcome will be when it comes back to the ground. That is the simple problem of tossing a coin, right? It is Newtonian mechanics but we still have not seen the end of the problem; we do not know whether we are modeling all possible background disturbances, and we will never be able to know.
Suppose you give it to some people involved in mathematical and numerical modeling; they will always come out with some results. A paper will be announced in Nature saying that if you toss the coin with your left hand tucked in your pocket, the coin will come up heads. So there will be lots of such exciting papers and commentaries by experts on how things are changing now that we have supercomputing ability. So that is the story; but the point remains that if we are solving a problem, any problem, we cannot characterize the background disturbances. We have not been able, even today, to definitively establish the causality. Speaking of causality, I just read two days ago that a new theory has been proposed about the large collider. Apparently, the people behind it are afraid that it will not work, so they have said that if it does not work, it will prove that God exists; it is for the rest of you to figure out what that means. I do not quite follow what they are talking about; they say that some particles called Higgs bosons would be created, the gods do not like it, so God will not allow this collider to function. That is the end of the story: you prove two things, that the Higgs boson exists and that God exists, by a non-functional collider in Geneva. That is the wonderful research that we do. Anyway, coming back to this: what has happened here is that we create some kind of numerical disturbance on the left hand side and we expect that it is going to mimic the physical universe. That is a bit of optimism, I suppose; people are optimistic, so we cannot fault them for that. But let me assure you that we do not know the exact relationship between the physical disturbance environment and the numerical one. That all numerical simulations will eventually lead to the same physical results has yet to be proven. This is something you must realize: many of the papers that you would be looking at in various journals will be saying that we are doing direct simulation, meaning that we do not have to characterize the background disturbance and we will still get the result. Well, I suppose that is yet to be proven, and if somebody does prove it, that would really deserve credit as a proof.
So, look at one side of that equation (2) we have written here: if we decide to set beta equal to 0, then on the left hand side we have a tridiagonal matrix structure; A is a tridiagonal matrix. We already know how to handle that, because we have the Thomas algorithm, and that should allow us to solve it.
So what happens is that, in solving this auxiliary equation, you will have to be using the Thomas algorithm. We know that if the size of the matrix is n, the operation count grows only linearly with n, with the exact count depending on whether you have a periodic or a non-periodic problem.
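The Thomas algorithm mentioned above is short enough to sketch in full. The following is a minimal Python version of my own for the plain (non-periodic) tridiagonal case, not code from the lecture; the forward sweep and back substitution each cost O(n) operations, which is why the count grows only linearly with the matrix size.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d.

    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n).
    """
    n = len(b)
    cp = [0.0] * n          # modified super-diagonal from the forward sweep
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n           # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check on a 4x4 system whose exact solution is [1, 2, 3, 4]
x = thomas([-1.0, -1.0, -1.0], [2.0, 2.0, 2.0, 2.0],
           [-1.0, -1.0, -1.0], [0.0, 0.0, 0.0, 5.0])
print(x)
```

A periodic problem turns the matrix into a cyclic tridiagonal one, which needs a small modification (for instance the Sherman-Morrison correction) on top of this routine.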
So let us do a concrete example; it is better that way. To begin with, we set beta and c equal to 0: on the left hand side we have a tridiagonal structure, but on the right hand side we have a pentadiagonal structure, going from j minus 2 to j plus 2. What you do is write down the Taylor series and equate the coefficients of u prime, u triple prime and the fifth derivative of u; then you get three equations for the three unknowns. You solve them to get unique values; this is a unique prescription of the problem.
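The coefficient matching just described can be carried out explicitly. Assuming the standard one-parameter stencil alpha*u'(j-1) + u'(j) + alpha*u'(j+1) = a*(u(j+1) - u(j-1))/(2h) + b*(u(j+2) - u(j-2))/(4h), equating the Taylor coefficients of u', u''' and the fifth derivative gives three linear equations in a, b and alpha; a small exact-arithmetic solve (my own sketch, not lecture code) recovers the familiar sixth order values.

```python
from fractions import Fraction as F

# Each row is [coeff of a, coeff of b, coeff of alpha | right-hand side],
# obtained by Taylor-expanding both sides of the stencil:
#   u'    :  a +   b -  2*alpha = 1    (this is the consistency condition)
#   u'''  :  a +  4b -  6*alpha = 0
#   u^(5) :  a + 16b - 10*alpha = 0
rows = [
    [F(1), F(1),  F(-2),  F(1)],
    [F(1), F(4),  F(-6),  F(0)],
    [F(1), F(16), F(-10), F(0)],
]

# Gauss-Jordan elimination in exact rational arithmetic.
for i in range(3):
    piv = rows[i][i]
    rows[i] = [x / piv for x in rows[i]]
    for j in range(3):
        if j != i:
            f = rows[j][i]
            rows[j] = [x - f * y for x, y in zip(rows[j], rows[i])]

a, b, alpha = rows[0][3], rows[1][3], rows[2][3]
print(a, b, alpha)   # -> 14/9 1/9 1/3
```

These are exactly the coefficients of the classical sixth order tridiagonal compact scheme.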
However, you realize that we can only use this scheme from j equal to 3 to n minus 2. For j equal to 1 and 2 this equation will not work, and the same holds for j equal to n minus 1 and n. What are you noticing here? I would like to draw your attention to what I call the consistency condition: we are in the business of evaluating the first derivative.
So, whatever we do, we will have to satisfy this equation for sure, because that is our basic business, to evaluate the first derivative correctly. Satisfaction of this equation is a must, and that is why we call it a consistency condition. So please do realize that if you are evaluating a first derivative, the coefficient of u prime gives you the consistency condition, which you cannot give up.
So this is a clear example of how these derivatives can be worked out. You also realize that if you have a periodic problem, then you can use this same stencil at all points, because you have knowledge of what lies outside the domain, so you do not have any problem there. That is why, many times, when you are setting up some new method, you should look at a periodic problem first, because a periodic problem helps you avoid these kinds of numerically generated issues.
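The point about periodic problems is easy to verify numerically. The sketch below is my own illustration, assuming the sixth order interior coefficients alpha = 1/3, a = 14/9, b = 1/9: it applies the same stencil at every point of a periodic grid, solves the resulting cyclic tridiagonal system with a dense solver (for simplicity only), and compares against the exact derivative of sin x.

```python
import numpy as np

n = 64
h = 2.0 * np.pi / n
x = np.arange(n) * h
u = np.sin(x)

alpha, a, b = 1.0 / 3.0, 14.0 / 9.0, 1.0 / 9.0

# Cyclic tridiagonal left-hand side: alpha, 1, alpha on each row, wrapping around.
I = np.eye(n)
A = I + alpha * (np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1))

# Right-hand side uses periodic shifts: np.roll(u, -1)[j] == u[(j+1) % n].
rhs = a * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h) \
    + b * (np.roll(u, -2) - np.roll(u, 2)) / (4.0 * h)

du = np.linalg.solve(A, rhs)
err = float(np.max(np.abs(du - np.cos(x))))
print(err)   # tiny for this smooth periodic function, far below 1e-8
```

Because the domain wraps around, no boundary stencil is needed anywhere, which is exactly why periodic test problems isolate the scheme's interior behavior from boundary-closure issues.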
Although that becomes a little academic as an exercise, it still assures you that you are on the right track, that you are doing it correctly. If I now look at the previous exercise, the sixth order scheme, the term that drops out is proportional to h to the power 6, and it is associated with the seventh derivative.
Now, what did Lele do? He tried to write it in terms of a parameter. See, you have three equations. If we decide to solve two of the equations and keep one of the coefficients as a parameter, let us call alpha that parameter, then basically what we are doing is solving the first two equations to get the values of a and b; that will be a kind of fourth order scheme. So, for the fourth order scheme, a is given in terms of alpha, and b is also given in terms of alpha. We can choose different values of alpha, evaluate the corresponding a and b, and we will have developed a fourth order scheme.
Now, if I look at this, there are a couple of issues that come to the fore: for instance, how do we choose the value of alpha in this case here?
We had seen that if we choose the value of alpha as one-third, then of course we get sixth order accuracy. But suppose instead of taking one-third I take 0.3; then what happens? It is surely not a sixth order scheme, it is a fourth order scheme, but how good is it compared to, let us say, choosing alpha equal to 0.25, or some value like 0.29?
Does it mean that these coefficients are some kind of magic attractor fixed at these points? What I am saying is this: suppose I try to write down the error committed. Say I plot the error versus alpha, and I mark the value alpha equal to one-third; I know that if I choose this value, then the scheme becomes sixth order accurate.
My question is: what happens to the error in the neighborhood of this point? Is it a continuous function or a discontinuous function? The obsession with order becomes so strong that people will tell you that if you go just a little bit to the left or a little bit to the right, you have lost the sixth order accuracy and jumped down to a fourth order scheme, so you may have committed a huge amount of error.
But is that true? That is the first question we are asking. Secondly, if I look at these discrete derivatives, are they discontinuous functions of the coefficients? This is something we must keep in mind. You may also like to know why we should choose any particular value: is there any physical basis or not?
Now, that is the question we are posing here at the end: what happens if we perturb these coefficients away from the obtained values while still satisfying the consistency condition? Whatever perturbation we give should still satisfy the consistency condition.
You recall that 1 plus 2 alpha should be equal to a plus b; that is what we have written: 1 + 2 alpha = a + b. So, what I am suggesting is that we perturb alpha, a and b in such a way that this equation remains valid. If I do that, what do I get? Do I get good accuracy, or is it going to be poor? This is something we will be talking about.
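One way to probe this question is to look at the leading truncation term directly. For the fourth order family (with a and b expressed in terms of alpha through the first two matching conditions), the coefficient of the h^4 u^(5) term in the matching residual works out to (1 - 3*alpha)/30: a linear, hence perfectly continuous, function of alpha that vanishes at alpha = 1/3. This little check is my own sketch, not a formula quoted in the lecture.

```python
# Leading O(h^4) residual coefficient of the fourth order one-parameter family.
def c4(alpha):
    a = 2.0 * (alpha + 2.0) / 3.0      # from matching the u' (consistency) and u''' terms
    b = (4.0 * alpha - 1.0) / 3.0
    # alpha/12 comes from the left-hand side Taylor expansion;
    # a/120 and 16*b/120 come from the two right-hand-side differences.
    return alpha / 12.0 - a / 120.0 - 16.0 * b / 120.0   # simplifies to (1 - 3*alpha)/30

for al in (0.25, 0.29, 0.30, 1.0 / 3.0, 0.35):
    print(al, c4(al))
```

Sweeping alpha shows the error coefficient shrinking smoothly toward zero at one-third and growing smoothly on either side; there is no jump anywhere.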
Well, there is of course no prize for guessing that this discrete approximation of any derivative is essentially a continuous function of the coefficients; it is not discontinuous, as diehard supporters of higher order schemes would have you believe, claiming that if you do not choose alpha equal to one-third the error leaps from one value at this point to something completely different right next to it. The error does not behave like that; it has to be some continuous function. It cannot simply take one value at one point and an unrelated value at the next; it does not happen that way.
This issue was addressed by two scientists from Israel, Haras and Taasan, who actually obtained optimal values of alpha, a and b by figuring out how the numerically evaluated derivative departs from a very accurate evaluation. What we are basically talking about is this: let us say I have this equation, with the entire spatial derivative embedded in an exact operator acting on u. The corresponding discrete operation on a grid of spacing h I will write as L_h u_h.
So what Haras and Taasan did was set up an objective function on the error: they looked at, say, the exact operator minus the discrete operator. This difference is going to be a function of all the parameters that we have: alpha, a and b. Now we take that objective function and minimize the error with respect to alpha, a and b, and those minimizations will give me additional equations.
How many such equations can we generate? Two at the most, right? Because we still have to satisfy the consistency condition, and that takes away one degree of freedom; you can at most impose the remaining two. That is what Lele was talking about with the two parameter family. So suppose I fix my alpha; whatever a and b I get must satisfy the consistency condition, and then I minimize the error with respect to, let us say, a and b.
Or, equivalently, minimize the error with respect to, let us say, alpha. So basically the fourth order scheme will have only one condition originating from this objective function. Well, this is something you may have done in your high school, right? Optimization: by looking at the first derivative and the second derivative, we can find the minimum. So that two parameter family Lele was talking about, sorry, single parameter family, essentially gave a fourth order scheme. Everything, including the error, is written as a function of alpha, and we try to minimize that error with respect to alpha; that closes the system, and you get an optimized fourth order scheme.
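The minimization itself is easy to prototype. The sketch below is my own crude version, not Haras and Taasan's actual weighted functional: it parameterizes the fourth order family by alpha alone (so the consistency condition is built in), forms the modified wavenumber k_eq*h = [a*sin(kh) + (b/2)*sin(2kh)] / (1 + 2*alpha*cos(kh)), and grid-searches alpha to minimize the mean squared departure of k_eq*h from kh over the resolved range.

```python
import numpy as np

kh = np.linspace(0.01, np.pi - 0.01, 400)   # sampled wavenumber range

def spectral_error(alpha):
    # Fourth order family with consistency (1 + 2*alpha = a + b) built in.
    a = 2.0 * (alpha + 2.0) / 3.0
    b = (4.0 * alpha - 1.0) / 3.0
    # Modified wavenumber of the compact stencil for a Fourier mode exp(i*k*x).
    keq = (a * np.sin(kh) + 0.5 * b * np.sin(2.0 * kh)) / (1.0 + 2.0 * alpha * np.cos(kh))
    return float(np.mean((keq - kh) ** 2))

alphas = np.linspace(0.20, 0.45, 501)
errors = [spectral_error(al) for al in alphas]
best = alphas[int(np.argmin(errors))]
# The spectrally optimal alpha need not coincide with the formally
# sixth order choice alpha = 1/3.
print(best, spectral_error(best), spectral_error(1.0 / 3.0))
```

The grid search stands in for the calculus-based minimization described above; the qualitative conclusion is the same, namely that the alpha minimizing the integrated spectral error can do at least as well as, and generally better than, the choice dictated by formal order alone.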
What Lele did notice, and that was a landmark paper he wrote in 1992, was that this optimized fourth order scheme actually provides accuracy better than an explicit tenth order scheme. By the way, he is incidentally an alumnus of this institute; you should be happy to know he is now a faculty member at Stanford. That is the motive for which we adopt compact schemes: they give you unprecedented access to accuracy.
You can see this as follows: if I were to construct a tenth order scheme explicitly, how many points would the stencil have to take? You have seen that for a central explicit scheme it is always the order plus 1: second order requires 3 points, fourth order requires 5, and so on, so an explicit tenth order method needs 10 plus 1 points. And yet the accuracy that you are getting from an optimized fourth order compact scheme is better than that of a tenth order accurate scheme obtained in an explicit manner.
So we basically come to this observation: the formal accuracy indicated by the truncation error is really not the criterion; a better depiction is obtained by looking at the scheme in the spectral plane. Now you realize why we have invested so much time and effort in learning about waves and spectral analysis; without that, we would not be able to see all this clearly. We will do that systematically as we go along.