In today's fourth lecture, we are going to talk only about the explicit Runge-Kutta method,
because it has been found to be one of the most accurate methods.
As we go along in this course, we will find out why it is preferred, and we will also
talk about another classification of differential equations - time dependent differential equations.
This is related to the non-autonomous versus autonomous classification: when we express
the equation as an evolution equation and the right hand side does not explicitly involve
time, we have what is called an autonomous system. In this course, we will mostly be
focusing our attention on autonomous systems, because in many of the applications we will
see that time does not appear explicitly. And coming back to the various Runge-Kutta methods
that we talk about, we begin with the two stage Runge-Kutta method, which we have shown
here as the RK 2 method. This will be followed by the RK 3 and RK 4 methods. However, we
will not be discussing the RK 3 method in great detail, because it will be given to you
as the first assignment for this course.
Having exhausted the discussion on Euler and Runge-Kutta methods, we introduce multi-step
methods by talking about three time level methods. Here we will be talking about the
Adams-Bashforth scheme and also the leapfrog scheme. We will find out their pluses and
minuses, and why we may not want to use them. Finally, talking about ordinary differential
equation solution methods, we need to talk about what are called stiff differential equations.
If multiple length or time scales are involved in the solution of the ordinary differential
equation, then we encounter a specific difficulty characterized by a parameter called the
stiffness ratio; such problems are called stiff differential equations. Then we talk about
a classical method of solution of stiff differential equations, the trapezoidal method.
This will be the final topic of today's talk.
Here is the problem. We noted that you can adopt a single step method; from the physics
point of view, that ensures you do not get a numerical or spurious mode. But within a
single step you could have multiple stages, and the first milestone on that road is a two
stage method. We have talked about some of those; we have talked about the Euler-Cauchy
method. We have seen that, in marching the solution from t n to t n plus 1, you have to
evaluate two additional quantities, which we called K 1 and K 2. Runge-Kutta methods are
formed in that same class: they are single step, multi-stage methods.
We noted that, if we are trying to solve an equation of this kind and we are going from
one time step to the next one, then the Euler method requires that you take the slope here
and extrapolate, and that is where you land, which could amount to some error; or you
could draw the tangent here, parallel to this one, and arrive at this point; that was your
backward Euler method, as we discussed.
So, here, in the multi-stage method, you do not generate all your information from these
terminal points alone; instead, you also look for some intermediate points from which you
keep gathering information, use it, and improve the accuracy of the method.
So, that is the whole idea: you gather information from multiple stages in between the
two time levels t n and t n plus 1, and eventually you get the solution as a weighted
average of all the information that you have culled from the terminal points plus those
intermediate stages.
So, that gives you enormous leverage and freedom in choosing methods. For example, if you
restrict your attention to explicit methods, then you would keep calculating slopes at,
let us say, m such points; m is at your disposal; you decide upon how many stages you
would like. So, we are suggesting that we choose m such stages, and we choose these
intermediate stages in such a way that each K i can be obtained from the information you
have collected up to that point.
So, for example, we have information at t n; so, we could immediately calculate K 1.
Having obtained K 1, what we could do is we could obtain the solution at a next stage
which is in between t n and t n plus h so that this c 2 is less than 1. So, you have
some intermediate point which is c 2 h from t n, and then, there, we also calculate the
dependent variable as some kind of a weighting of what we have just now calculated in the
previous level K 1; that means, in evaluating K 2, you need K 1. In evaluating K 3, you
would require K 1 and K 2, and so on, so forth. So, this is essentially a recipe for explicit
methods. So, that is why we said we will keep our attention focused on explicit methods.
We will obtain the slopes at m such stages. Each K i will be obtained from its cumulative
predecessors. That is the distinguishing feature of all explicit methods, and eventually
you get the final accepted solution as the previous time step solution plus the weighting
factors times these K i's.
So, these K i's are nothing but something like your delta u's, and W 1, W 2, up to W m
are the weights that you associate with the method.
method. So, essentially, now if you look back, you
have to decide upon what are these coefficients - c 2, c 3, all the way up to c m, and here
you have a 2 1, a 3 1, and a 3 2, and so on so forth, and finally, these weights are also
our unknowns.
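The general recipe above can be sketched in code. This is a minimal illustration of my own, not from the lecture's slides: the arrays c, a, w hold the parameters just listed, and each K i uses only the previously computed K's, which is exactly what makes the method explicit.

```python
def explicit_rk_step(f, t, u, h, c, a, w):
    """One step of a generic m-stage explicit Runge-Kutta method.

    c : stage fractions of h (c[0] = 0 for the starting point t_n)
    a : lower-triangular coefficients a[i][j], j < i
    w : the weights W_1 ... W_m, which must add up to 1
    Each increment K_i = h*f(t + c_i*h, u + sum_j a_ij*K_j) uses only
    the K's already computed -- the hallmark of an explicit method.
    """
    m = len(w)
    K = []
    for i in range(m):
        du = sum(a[i][j] * K[j] for j in range(i))
        K.append(h * f(t + c[i] * h, u + du))
    # weighted average of the increments culled from all the stages
    return u + sum(w[i] * K[i] for i in range(m))
```

With m = 1, c = [0], a = [[]], w = [1] this reduces to the forward Euler method discussed earlier; the two stage methods that follow are the case m = 2.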
So, these are basically the parameters of the Runge-Kutta method, and they have to be
determined. One of the easiest ways is to work out the Taylor series expansion of the
function u n plus 1 in terms of u n; the number of terms that you match against the
theoretical Taylor series expansion then defines the order of the method.
So, what we mean by the theoretical Taylor series expansion is this: we write u of t n
plus 1 in terms of u of t n. Since our governing equation is d u d t equal to f, we can
keep on adding whatever information we have: with delta t, which we will be calling h,
we get h times u prime, plus h squared by two times u double prime, and so on, all
evaluated at t n.
So, this is your analytical expression. The numerical solution, on the other hand, we
have denoted so far with a simple superscript, like this, and we have said that our
method would be something like u n plus h times phi of t n, t n plus 1, u n, u n plus 1,
where phi is probably a function of h also.
So, what you essentially do is - you obtain
two Taylor series - one from the analytical expression; one from the numerical method.
You keep matching them, and the number of terms that you match from either side would
define your order of the method. So, if I go up to say h squared, that is a second order
method; if I decide to go up to h cubed, that will be our third order method, and so
on and so forth. So, this is the essential idea in a nutshell.
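This notion of order can also be observed numerically. Here is a minimal sketch, my own illustration rather than anything from the lecture, using the model problem u prime equal to u: for a first-order method such as forward Euler, halving h roughly halves the error, because the first unmatched Taylor term is of order h squared per step.

```python
import math

def euler_solve(f, u0, t_end, n):
    """Integrate u' = f(u) from 0 to t_end with n forward Euler steps."""
    h = t_end / n
    u = u0
    for _ in range(n):
        u = u + h * f(u)
    return u

# Errors against the exact solution e of u' = u on [0, 1], u(0) = 1.
err_coarse = abs(euler_solve(lambda u: u, 1.0, 1.0, 100) - math.e)
err_fine = abs(euler_solve(lambda u: u, 1.0, 1.0, 200) - math.e)
ratio = err_coarse / err_fine   # tends to 2 for a first-order method
```

For a second order method the same halving of h would cut the error by roughly four, and so on for higher orders.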
So, you can now just work it out. One of the simplest possible methods would be an
explicit second order Runge-Kutta method. Here we are of course fixing m equal to 2, and
we first evaluate K 1; that would be the delta u at the starting point: h times f
evaluated at t n, u n. Next we evaluate K 2, with the independent variable incremented by
c 2 h and the corresponding dependent variable incremented by a 2 1 K 1, so that the
accepted solution at the n plus 1th level is the predecessor plus the weighting of these
two increments K 1 and K 2.
So, very easily, you can see that 1,2,3,4 - we have four unknowns. So, for this second
order method, we will have to work out some ways and means of evaluating these four unknowns.
So, what we do, as I told you, is write down the theoretical estimate in terms of the
Taylor series. As you can see, u prime as given here is f, so the first term is h times
f. For u double prime, you have to realize that f is a function of u and t; so, if you
take u double prime, that gives you the partial of f with respect to t plus f times f u,
and the following term - the third order term - would have many more such combinations:
f tt plus 2 f f tu plus f squared f uu, plus this, and so on and so forth.
So, we could keep on writing it all down. The way we have written down K 1, it was simply
h times f evaluated at the previous time level, and K 2, we said, would be h times f
evaluated at t n plus c 2 h and u n plus a 2 1 K 1. So, that is the expression for that.
If I expand K 2 by h, that is basically a Taylor series expansion about t n and u n, and
that gets you there.
So, basically, you have this K 2 by h, and then, finally, what you need to do is write
u n plus 1 as u n plus W 1 K 1 plus W 2 K 2. You have these expressions for K 1 and K 2,
and you keep substituting that information in terms of u and f, and this is what you get:
u n plus 1 should be u n plus W 1 times h f plus W 2 times h times the expansion we have
written down; the Taylor series expansion of that about t n gives you this whole set of
terms. So, basically, for a second order method, all you are interested in is matching
the coefficients on either side at order h and order h squared. At order h, you can very
clearly see that the coefficient in the analytical expression is 1 itself, whereas here
you see that it should be W 1 plus W 2; so, W 1 plus W 2 equal to 1 is your order h
condition. Similarly, at order h squared: from here you can see that W 2 h into h c 2 f t
is one such term, and then there is another term coming from there - W 2 h into h a 2 1,
multiplied by f f u. If you look at the corresponding analytical expression, it is
basically u double prime here, which is nothing but this quantity - half times f t plus
f f u.
So, what you notice is that equation 51 a is a direct equation that you can work with.
What about this one? This equation apparently depends on your problem, because the
problem defines your function f. So, what happens is - for a general methodology, you
should not be tied to any particular function; for an arbitrary f, the coefficient of f t
and the coefficient of f f u must match independently, because that is the most general
way of doing it, and that actually yields two conditions.
So, basically, you have one condition from order h and two conditions from order h
squared; that gives you three relations, but you have four unknowns. So, the easiest way
is to solve it parametrically: you choose any one of them as a parameter; let that be
c 2. Then you can immediately see from these equations that you would get a 2 1 equal to
c 2, W 2 you can work out from here as 1 by 2 c 2, and W 1 is 1 minus W 2. So, that gives
you these conditions.
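These relations translate directly into code. The following is a minimal sketch of my own, not from the lecture's slides, with c 2 as the free parameter and the other three coefficients following from the conditions above:

```python
def rk2_step(f, t, u, h, c2):
    """One step of the one-parameter family of explicit two-stage RK methods.

    The order conditions give a21 = c2, W2 = 1/(2*c2), W1 = 1 - W2,
    so choosing c2 fixes the whole scheme.
    """
    a21 = c2
    w2 = 1.0 / (2.0 * c2)
    w1 = 1.0 - w2
    k1 = h * f(t, u)                      # increment at the starting point
    k2 = h * f(t + c2 * h, u + a21 * k1)  # increment at the intermediate stage
    return u + w1 * k1 + w2 * k2          # weighted average of the increments
```

For the linear problem u prime equal to u, any admissible c 2 reproduces the Taylor expansion up to h squared, so one step from u = 1 with h = 0.1 gives 1.105 regardless of the c 2 chosen; the choices of c 2 discussed next differ only in the higher order error.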
So, basically, we have solved it in terms of the parameter c 2. We still have to decide
upon how we go about fixing c 2, but let me draw your attention to what we had done. We
had taken t n and then we added a fraction of the time step to go to the intermediate
stage. Whenever you have explicit dependence on t,
you call such systems non-autonomous. In contrast, you can have an autonomous system,
where t does not appear explicitly; you would not have this part, and if you do not have
that part, that is what we call an autonomous system. So, that means we are talking about
something like c 2 equal to 0, but then you may say this looks difficult because c 2
appears in the denominator. It is not that difficult; you can work from first principles
and develop it. We will actually be doing one such example of an autonomous system,
working it out for a higher order method, and then you will know what I am talking about.
So, if I now write down everything in terms of c 2, then u n plus 1 would be u n plus
these two weighting functions times the increments K 1 and K 2. One way of fixing c 2
would be to look at what we have omitted while working out this strategy. That would be
the next term - the order h cubed term - which is nothing but this, and since W 2 c 2 is
one half, we can club it in there, and then we can see that the leading truncation error
term, the h cubed term, works out like this. So, we have the leading truncation error
term written in terms of the right hand side of the equation that we are trying to solve
plus our choice of c 2.
Now, there are various ways of looking at how to fix c 2. There is something called
Lotkin's bound, which lets you work out an estimate for the quantities that you see in
your truncation error; those are the mixed derivatives del i plus j f by del t i del u j.
This mixed derivative is bounded by Lotkin's prescription as L to the power i plus j
times M to the power 1 minus j; your i and j would of course run over 0, 1, 2, etcetera.
So, if you do that, you can immediately see what you get. Suppose I choose i equal to j
equal to 0; that gives me an estimate for f: mod f corresponds to i equal to j equal to
0, and putting that in gives the bound that mod f should be bounded by M.
Now, if I look at this partial derivative, f u, that implies i equal to 0, j equal to 1,
and it is bounded by L. Here L and M are some finite bounds - finite numbers that we do
not need to know in general, but for a given specific function we can work them out.
Similarly, f t corresponds to i equal to 1, j equal to 0, and that gives you L M. You can
work out the other relations similarly: f t t, which is i equal to 2 and j equal to 0,
is bounded by L squared M; f t u corresponds to i equal to 1, j equal to 1, and is
bounded by L squared; and f u u is bounded by L squared by M.
So, if I use the information from these bounds, then you can see that the truncation
error could be bounded by M L squared times h cubed, with the coefficient 3 times the
modulus of one-sixth minus c 2 by 4, plus one-third. This is one way of looking at the
problem: we are saying that the leading truncation error term is bounded like this. So,
basically, in essence, we have substituted these bounds into that last equation 55; that
yields this coefficient as 3, and this one gives you one-third.
So, basically, you can immediately see that this will be a minimum if I choose c 2 in
such a way that this quantity here is equal to 0. That is one way of doing it; however,
we need not have gone through all this. If you are smart enough, you can reason it out
directly: for an arbitrary choice of f, we have no control over these factors, so the
only control you can exercise in reducing the truncation error is to set this first set
of terms equal to 0 - that is, to make this coefficient zero. So, without even bothering
to work out all these bounds and so on, common sense immediately tells you that if you
put the quantity in the first bracket equal to 0, then you are done. That is equivalent
to choosing c 2 equal to two-thirds.
So, this is one way of developing a Runge-Kutta method. So, you choose c 2 equal to two-thirds;
that will immediately give you a 2 1 equal to two-thirds and you can work out W 1 and
W 2 like this. So, this is essentially the step you would code in a program: you write
K 1 and K 2 in terms of these, and eventually W 1 and W 2 assist you in fixing the
solution at the next step. So, that is one way, which ensures reduction of the error;
that is the choice.
Well, you could also see that if I choose c 2 equal to half, then W 1 becomes equal to 0,
and then all I need to obtain is simply W 2. I can then get the solution in terms of K 2
alone, because W 1 is equal to 0. So, this is what is called
as modified Euler-Cauchy method. So, it is a multi-stage method, but again, what you
would be doing is you would be evaluating K 1, but you do not need to store it; instead,
you can directly obtain K 2 and obtain the solution at the next step in terms of K 2
alone. So, this is the way that these methods are worked out.
So, if I want to go a little higher - for example, to a fourth order method, which I will
call the RK 4 method, the Runge-Kutta four stage method - let us make our job simpler by
considering an autonomous system. Although it may appear to be a big restriction, most of
the practical problems that you would encounter are of this class. Most of the problems
that we come across in computing belong to the autonomous class, so giving up the
dependence on t is not a very serious restriction, and the methods that we are going to
talk about are going to be quite useful.
So, since we are talking about a four stage method, we would be evaluating all four
increment functions: K 1, K 2, K 3, and K 4, and you can see the pattern - there is no
dependence on t, only on u. So, in K 2 the argument is incremented by a 2 1 times K 1;
in K 3, by a 3 1 K 1 plus a 3 2 K 2; and in K 4, by the previous three increments K 1,
K 2, K 3. The final solution is the weighting of all these increments. So, all that needs
to be done is essentially
the same step that we have worked out just now. We will work those expressions out via
the Taylor series. If I look at K 2, consider K 2 by h; we can expand it, noticing that
K 1 itself is h times f, so you can plug that in. And what are you going to see? K 2 will
be h times f plus - this gives you one h, and there is another h there - the second term,
h squared a 2 1 f f u. The next term would come from here. That will
be h cubed by factorial 2, with an a 2 1 squared, an f squared coming from here, and this
f u u remaining as it is, and the next term would give you this. By now, you have
realized that, having chosen the order of the method, you need to keep terms only up to
that order; so that is what we are doing - we are retaining terms up to h 4, because it
is a four stage method. So, that is your expansion for K 2. Similarly, K 3 works out like
this, and you can notice that you have to use all the information on K 1 and K 2 in K 3.
It is a little involved, and you get this whole set of terms when you retain terms up to
h 4 alone. You realize that, if I were doing this exercise for a non-autonomous system,
it would be a lot tougher, because there would be f t and mixed derivatives coming in;
that is why I have tried to keep it controlled and to look at the way these methods are
developed for the autonomous system. The non-autonomous case is a little more involved.
So, the last thing that you need to do is find out this K 4 quantity, K 4 by h; that
gives you this, and you can notice that what you have here is a 4 1 K 1, a 4 2 K 2, and
a 4 3 K 3, each of which was itself a series of terms. In that series, I keep terms only
up to h cubed, because there is an h sitting outside; that makes the overall series on
the right hand side go up to h 4, and that is what we need.
So, that is what we do. We do a little bit of algebra and this is what appears as the
intermediate solution for this increment K 4. Having obtained these expressions for K
1, K 2, K 3, and K 4, you go back and plug it in that expression where we have written
down u n plus 1 in terms of u n, and this increments K 1 to K 4.
Then you would have obtained a series where we have retained terms up to h 4, and what
you do is compare that series with the Taylor series expansion of u at t n plus 1, going
up to h to the power 4 - that is what the last line of that slide tells you - and equate
term by term. Since you have access to this material, I am not bothering you with all the
derivation on the board; you can convince yourself that these are the equations that will
appear. So, the lowest order term tells you that W 1 to W 4 add up to 1, and that is what
you would expect, because we are talking about weightages. All the weights should add up
to 1, and that is what you are seeing here.
Look at the order h squared term; that will give you this, and there is no problem there.
However, if you look at the order h cubed term, you would once again see that there are
terms with coefficients multiplying f f u squared, and terms multiplying f squared f u u.
So, if we equate the Taylor series with this algorithm, and then equate the coefficients
of these two sets of terms as we did before, order h cubed will generate two equations,
and that is what we have written down; 62 and 63 are those equations, corresponding to
f f u squared and f squared f u u. Look at order h 4, and there you would have terms
which are multipliers of f cubed f u u u, f squared f u f u u, and f f u cubed. So, if
we match these three sets of terms, that will give us three more equations.
So, what do we have now? From order h, one equation; from order h squared, one equation;
from order h cubed, two equations; from order h 4, three equations. Altogether, we have
seven equations. How many unknowns do we have? We have the four weightages W 1 to W 4,
and then a 2 1, a 3 1, a 3 2, and a 4 1, a 4 2, a 4 3. So, there are ten unknowns. That
gives you some room to maneuver: you could fix any three of them according to your
convenience, and convenience means you would like to put some of them equal to 0, because
that would mean a lesser number of computations. If I set some coefficients equal to 0,
that gives me the liberty of not evaluating
those sets of terms. So, we look for such opportunities. But if you look at this
equation, it should ring a warning bell: look, be careful; do not try to put a 4 3,
a 3 2, a 2 1 equal to 0, because then you would be violating the condition; that is why
I have written down that you should really not choose them equal to 0. What could we do
instead? We could pick the other three - a 3 1, a 4 1, a 4 2 - equal to 0, because those
are what appear in the increment functions. So, if I set these three to 0, then we gain
some advantage of computing less.
And I wanted to bring one fact to your attention: I have looked at some textbooks - very
old ones and good ones, Ralston, and there are some books by Indian authors too.
Unfortunately, I could not reconcile the fact that they talk about eight equations, while
what we are seeing here is that we get seven equations. So, what happens is - working
with 8 equations for 10 unknowns, they give you some kind of solution.
So, we get these weightages as one-sixth, one-third, one-third and one-sixth, and
accordingly some set of conditions is given. I asked one of my PG students from Maths to
really see what is going on here, whether we are making some mistake; the good news is,
we are not. Apparently, those books are reporting something inconsistent, because, as we
can see, we cannot have more than seven equations if I am going up to fourth order.
So, we decided to check out their solutions. We adopt all the weights that they have
chosen and then work out the rest. This is what we get: two of the equations give a 3 2
either as 1 or as one half. We decided to take one half, because in an intermediate step
you do not want to go all the way to the terminal point, so 1 would be a non-admissible
solution. So, we take a 3 2 as half; if we do that, then from two sets of equations we
get a 4 3 equal to 1; if we take another set, we get two possibilities - 1 or 1 by root 2
- and if I take another two equations, I again get a 4 3 equal to 1.
All this tells you that the 1 by root 2 is the odd man out here, so it should not be a
valid solution. Instead, if we choose a 4 3 equal to 1, the values seem to satisfy 61,
62, 63 and 65; 64 is already satisfied. So, all you need to check is whether equation 66
is satisfied. Most of the combinations that we get do not satisfy this last equation,
except the combination a 2 1 equal to half, a 3 2 equal to half, and a 4 3 equal to one.
This is a big relief, because this is exactly the solution reported in those books. It is
a bit of a mystery at this point: they have done something inconsistent, yet they come to
such solutions, and that solution seems to be quite okay. Even if you correct those
mistakes, you still get the same set of solutions.
Well, this looks somewhat inelegant; we have been checking a solution that was obtained
by someone else. The better thing would be to choose a method where we minimize the next
higher order truncation error term, like we did for the RK 2 method - just now we looked
at the truncation error term, the order h cubed term, minimized it, and that is how we
fixed c 2. So, we should be able to do a similar exercise here. It looks like a bit of
work, and that kind of possibility always tells you - with all apologies to PG students -
that PG students have a bit of time to work those details out; so I have given it to my
young friend there to work out.
to work it out. So, that is the way. So, you basically look at order h 5 term and
minimize that truncation error term. So, what we are talking about? We are talking about
fixing three quantities in that; exercise is arbitrary; then you have 7 equations. And
try to look at the next higher order term in that three dimensional space and find out
where the minimum is, and that is how you do it. So, this is all about Runge-Kutta methods.
Let me assure you that Runge-Kutta methods are among the most elegant, physically
correct, and efficient methods. That is why I would not ask you to look for anything
beyond Runge-Kutta methods for all your time advancement requirements, and we will come
back to this topic again when we come to discussing PDEs.
Now, just for the sake of completeness, I will briefly touch upon multi-step methods.
You ought to know them, because a lot of work gets done using multi-step methods. For
example, a generic equation that defines a three time level method is written here in
equation 67: you have the solution at the advanced time level in terms of what you have
at the current time level plus what you have stored at the previous time level.
So, that is how the three time levels get into the picture, and of course, in solving the
equation d u d t equal to f, you need to evaluate those right hand sides; once again, we
are talking about an autonomous system. Now, you have to fix the coefficients alpha 1,
alpha 2, beta 1, beta 2. So, what you do is write down the Taylor series expansions: you
write u n plus 1 in terms of u n; this term, of course, remains as it is; and u n minus 1
you also write in terms of u n. You equate term by term and get these relations: alpha 1
becomes 1 minus alpha 2, beta 1 works out like this, and beta 2 works out like this.
So, you can once again treat alpha 2 as a
free parameter. So, if you choose alpha 2 equal to 1, you immediately see that alpha
1 becomes 0, and beta 2 becomes 0, and the method that you get is called a Leapfrog Method,
and this is what all the weather prediction people have been doing ever since they started
their business. They always use this Leapfrog Method.
I will come to discussing this particular method after having said that multi-step
methods have to be treated with caution; still, it so happens that if you are solving an
inviscid equation, this method happens to be one of the most accurate, and it is for good
reason that atmospheric science people have been using it for decades. Now, we will talk
about it. So, this is one of the methods, you see. The
three level method works out fine because, for the inviscid equation, you get two modes
and both modes become coincident, and that happens to be the physical mode. So, that is
the beauty of this method. Unfortunately, this works only if you are looking at an
inviscid equation; the moment you add diffusion terms, you will notice that this method
blows up in your face. So, that is a major issue: you should not use a method a priori
without knowing its pros and cons. It is a warning to you that appropriate analysis must
be performed before adopting a method.
This last one I have purposely added. It was advanced by Adams and Bashforth, two
scientists, I suppose, in Cambridge in the 19th century. They were solving a problem of
surface tension, worked it out so that they could do it by hand, and came out with this
method. What it does is advance the solution in terms of the current solution, with the
right hand side here evaluated at the current time level and the information obtained at
the previous time level. The bad news about this method is that it is overly glorified;
lots and lots of people use it. In fact, if you go to any search engine, you will find
lots of people claiming that they are doing the most accurate calculations using this
Adams-Bashforth method, which, we will see, is not really a good thing to adopt.
I purposely mention that when Adams and Bashforth developed it, they were using their
brain power to work it out by hand. So, they did not take very many time steps, and that
is probably why they did not realize the pitfall of the method, which we did about a few
years ago; we reported it in a couple of papers and warned people, but it looks like the
warning has not registered. People still continue to use it.
So, having come through a bit of a discussion about ODEs and their solution methods, I
will try to bring in a slightly advanced topic: what is called a stiff differential
equation. It may not appear to be such an oddity for people coming from chemistry and
chemical engineering. What happens with stiff equations is that, if you try to integrate,
you see that there are components of the solution which vary completely differently.
So, if I talk about a set of equations, what we have written here is a sort of vector
equation: u prime means u 1 prime, u 2 prime, written in a columnar fashion, and the
right hand sides are functions of time plus all those u's. You are provided with a
solution at the starting gate, at t 0.
What you are going to see is that this is a set of coupled equations, and you could
write u prime equal to some A u, where A comes from the partial derivatives - what we
call the Jacobian. It is nothing but the partials del f i del u j, and so on and so
forth. So, you can fill up a matrix, and since the order of the system is m, that matrix
will have m eigenvalues.
What happens is, you can work out a ratio, which we will call the stiffness ratio; it is
nothing but the quotient of the maximum of the modulus of the real part of lambda i and
the minimum of it. That basically tells you that, in the system, something is changing at
this kind of rate with time - the generic solution goes as e to the power lambda t. So,
the maximum corresponds to the fastest growth or decay - because we are talking about the
modulus, it could be a decay rate too - and the complementary part sits at the other end
of the spectrum. If this ratio shows that the two extremes are too far apart, then you
have all kinds of problems.
Why? Because your numerical requirements will be tied to both of these extremes. It is almost like a poor man's blanket: you try to cover this part, and that part gets exposed, and vice versa. So, what happens is, you have to be really conscious about it. As I mentioned, an example from chemistry could be a simple chemical reaction: you have a forward part of the reaction and a reverse reaction, and in most stable reactions the forward part goes at a very high rate compared to the reverse reaction. That gives rise to a regular stiffness problem, and that is what I thought I would explain to you. There is an equation that we have been looking at recently which relates to the stability of physical systems. It was proposed originally by Landau, some work was done by Stuart, and later, Eckhaus contributed to it.
Eckhaus' contribution comes through a coupling term. So, suppose we have a two-degree-of-freedom system given by a 1 and a 2, and we are looking at its time rate of change. You notice that the first term corresponds to the linear growth rate: suppose I remove the second and third terms on the right-hand side, then what do I get? d a 1 d t is alpha 1 a 1, which I can immediately integrate to see that a 1 goes as e to the power alpha 1 t; in the same way, a 2 goes as e to the power alpha 2 t.
Now, this second set of terms was a bit of a mystery. Landau suggested it but never showed a proof, and he did not completely tell us how he got it; yet it seemed to work out very well for some systems, and you would be surprised to see that people use it across many disciplines, not only in fluid mechanics, for which he intended it, but even in quantum mechanics; the Landau-Ginzburg equations that people use are a variation of this equation.
So, this second term that you are seeing here and the third term here give rise to what is called nonlinear saturation. The first term, as we have seen, tells us that a 1 goes as e to the power alpha 1 t; so, if the real part of alpha 1 is positive and I plot a 1, then from the present condition it starts growing exponentially. This second term, the one with beta 1 1, has the tendency to saturate that growth, and this is very often seen in physical systems; so that is basically a nonlinear saturation term.
That eventually gives you an envelope which goes like this, and you get a stable system description. In fact, those of you who have seen or attended any course on fluid mechanics may have seen that if you keep a bluff body in a flow, you get vortex shedding, alternately from each side, in a nice manner; that is very well captured by this Stuart-Landau equation. However, for some situations, for some parameters, these equations do not work very well, and that provoked Eckhaus to add the coupling term, which was studied by one of our students recently. We did look at it, and what we found is this: suppose you have a predominantly single-degree-of-freedom system in a 1, but somehow another mode also comes into play, which you cannot neglect for all time. Then you need to worry about this third term here and this second term here.
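To make the structure concrete, here is a minimal numerical sketch of such coupled amplitude equations, with linear growth, nonlinear saturation, and the Eckhaus-type cross-coupling. All the coefficient values here are illustrative assumptions, not the ones from the lecture, and the integrator is a standard RK4 step:

```python
import numpy as np

# Sketch of Landau-Eckhaus-type amplitude equations (coefficients are
# illustrative assumptions):
#   da1/dt = alpha1*a1 - beta11*|a1|^2*a1 - beta12*|a2|^2*a1
#   da2/dt = alpha2*a2 - beta21*|a1|^2*a2 - beta22*|a2|^2*a2
alpha1, alpha2 = 1.0, 0.1
beta11, beta12, beta21, beta22 = 1.0, 0.5, 0.5, 1.0

def rhs(a):
    a1, a2 = a
    return np.array([
        alpha1*a1 - beta11*abs(a1)**2*a1 - beta12*abs(a2)**2*a1,
        alpha2*a2 - beta21*abs(a1)**2*a2 - beta22*abs(a2)**2*a2,
    ])

def rk4_step(a, dt):
    # One classical fourth-order Runge-Kutta step
    k1 = rhs(a)
    k2 = rhs(a + 0.5*dt*k1)
    k3 = rhs(a + 0.5*dt*k2)
    k4 = rhs(a + dt*k3)
    return a + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

a = np.array([1e-3, 1e-3])    # small initial disturbances
dt = 0.01
for _ in range(5000):         # integrate to t = 50
    a = rk4_step(a, dt)
print(a)
```

With these assumed coefficients, a 1 first grows exponentially, then saturates near sqrt(alpha1/beta11) = 1, while the cross-coupling suppresses a 2, illustrating the envelope behaviour described above.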
However, now what happens? You see, a 1 may grow at a higher rate and a 2 may contribute very little, but its presence still has to be there. It is exactly like your forward and reverse reactions: if you want to talk about the equilibrium, you have to keep both. Here there is a similar scenario; to maintain the equilibrium, you need to have these cross-coupling terms. And what happens is that this coupling is rather weak, and the moment the coupling becomes weak, you are looking at a large value of the stiffness ratio.
So, you find that when the dynamics of a multi-modal system is not dominated by a single mode, or when you are looking at higher-order differential equations, you will often face stiffness of the differential equations, and this is not imaginary; it keeps happening. To illustrate it, we have chosen a couple of ODEs which look a little odd in terms of their coefficients, but if you solve them, the solution looks like this.
This is a synthetic problem. You can see we have purposely concocted the solution to bring out two fundamental modes: one decays with time as e to the power minus 2000 t; the other is much gentler in its decay, going as e to the power minus 2 t. So, one decays rapidly compared to the other.
So, you might as well say: why should we even bother? The fast mode is going to disappear in a short time anyway, so we do not really need to worry about e to the power minus 2000 t. But life is not always that simple; you will not guess the solution in advance. You would be handed a couple of coupled equations and have to solve them, and if you do, you will find that the time step requirement that comes out of numerical instability considerations, which we will talk about later, puts a very restrictive condition on the amount of time increment that you can
choose. So, for this equation, you can see the stiffness ratio is about 1000, and the time step is restricted to about 0.0014.
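To see this restriction concretely, here is a small experiment with the explicit Euler scheme (not the method from the lecture's table, but the simplest scheme to reason about). The matrix below is an assumed stand-in chosen so that its eigenvalues are exactly -2 and -2000, matching the two decay rates above; Euler's stability limit for the fast mode is then delta t < 2/2000 = 0.001:

```python
import numpy as np

# u' = A u; A is an assumed stand-in whose eigenvalues are exactly
# -2 and -2000, matching the two decay rates of the synthetic problem.
A = np.array([[-1001.,  999.],
              [  999., -1001.]])

def euler(u0, dt, steps):
    """Explicit Euler: u_{n+1} = u_n + dt * (A u_n)."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * (A @ u)
    return u

u0 = np.array([1.0, 0.0])
safe   = euler(u0, 0.0005, 200)   # dt below the limit: solution decays properly
unsafe = euler(u0, 0.0020, 50)    # dt above the limit: the fast mode explodes
print(safe, np.abs(unsafe).max())
```

Even though e to the power minus 2000 t is physically negligible after a very short time, the larger step amplifies that mode at every iteration and the computation blows up, exactly the peril described above.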
So, even for such simple-looking linear equations, you have to spend a lot of effort. You cannot simply say, "I will not worry about it"; if you decide to take any step larger than this, you will see to your peril that the solution process breaks down. This value of delta t essentially tells you that you are capturing the fast mode; without capturing it, you cannot go ahead.
Now, mathematically speaking, this is about the coupling of the equations. We have two modes, and when we compute, we also have what we call round-off error; you cannot compute without round-off error. And round-off error will always be there in your set of equations for both modes: although the fast mode by itself does not announce its importance, when you are trying to obtain a numerical solution, you have to do the same thing that you do analytically. You cannot simply get a solution without both modes being present.
So, what we need is to keep solutions in which all these modes retain their linear independence. If one part of the solution becomes dependent on the other, then you have numerical contamination; the solution breaks down. That is what this stability limit is telling us: if I do not take small steps, I will not be able to retain both modes, and I will lose linear independence. So, what happens is that the parasitic error, seeded by small sources of error such as round-off, rides on what seems to be a benign, unmodelled part of the solution and does not appear to affect it, but eventually the solution blows up; that is what it says.
So, classically, what people used to do is use a method called the trapezoidal method, to solve at least this class of problems given here.
So, if I talk about the system of two equations here, d y 1 d t equal to f 1 and d y 2 d t equal to f 2, then the trapezoidal method involves evaluating the right-hand side as an equal weighting of the function evaluated at the current time level and the function evaluated at the next time level. So, basically, this is how you go about it: if the equations are linear, it works fine, but if there are nonlinearities, we will have to work a little harder, and in some cases, even for linear equations, this does not work out. That is what we are going to talk about in tomorrow's class.
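As a sketch of this classical remedy for the linear case, applied to the same kind of assumed two-mode stiff system as above (the matrix is my illustrative stand-in with eigenvalues -2 and -2000, not the lecture's exact coefficients): for u' = A u, weighting the right-hand side equally between the two time levels gives (I - dt/2 A) u_{n+1} = (I + dt/2 A) u_n, so each step is a linear solve:

```python
import numpy as np

# An assumed matrix whose eigenvalues are -2 and -2000, matching the
# decay rates of the synthetic stiff example discussed earlier.
A = np.array([[-1001.,  999.],
              [  999., -1001.]])

def trapezoidal(u0, dt, steps):
    """Trapezoidal rule for u' = A u:
       (I - dt/2 A) u_{n+1} = (I + dt/2 A) u_n."""
    I = np.eye(len(u0))
    # Precompute the step matrix once, since A is constant
    step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)
    u = u0.copy()
    for _ in range(steps):
        u = step @ u
    return u

u0 = np.array([1.0, 0.0])
# dt = 0.005 is five times the explicit Euler limit, yet the method stays stable
u = trapezoidal(u0, 0.005, 20)   # integrate to t = 0.1
print(u)                         # each component close to 0.5*exp(-0.2)
```

Because the trapezoidal rule is implicit, the step size is no longer dictated by the fastest-decaying mode, which is exactly why it was the classical choice for stiff problems; the nonlinear case, which needs more work, is the subject of the next lecture.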