Good afternoon. This is Doctor Rudra Pradhan. Welcome to this NPTEL course on Econometric Modelling. Today we will continue with the topic of bivariate econometric modelling. In the last class, we discussed the structure of bivariate econometric modelling in detail. The starting point of bivariate econometric modelling is that we must have two variables in the system. Let me briefly recall our last discussion.
For two variables X and Y, the bivariate model is represented as Y = α + βX + u. This is the basic format of bivariate econometric modelling. There are three ways we can represent this structure, corresponding to three different data setups: cross-sectional analysis, time series analysis, and panel data analysis. Panel data is the combination of cross-sectional and time series data.
Briefly, if we go through the three different structures of bivariate econometric modelling, the three-way representation is as follows: Y_i = α + βX_i + u_i is the cross-sectional model; Y_t = α + βX_t + u_t is the time series model; and Y_it = α + βX_it + u_it is the panel data model.
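Collected in display form (the notation follows the lecture; i indexes cross-sectional units and t indexes time periods):

$$
\begin{aligned}
\text{Cross-section:} \quad & Y_i = \alpha + \beta X_i + u_i \\
\text{Time series:} \quad & Y_t = \alpha + \beta X_t + u_t \\
\text{Panel data:} \quad & Y_{it} = \alpha + \beta X_{it} + u_{it}
\end{aligned}
$$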
We are not in a position to discuss all of these simultaneously, so we start with the basic framework of bivariate modelling in the cross-sectional setting only. For cross-sectional analysis, we can represent the simple model either as Y = α + βX + u or as Y_i = α + βX_i + u_i.
Now have a look here. The entire structure of bivariate econometric modelling is represented in this equation, and it is divided into three parts: the intercept, which we call α; the slope β; and the residual or error term u. The idea is that we have Y = (Y_1, Y_2, ..., Y_n), X = (X_1, X_2, ..., X_n), and u = (u_1, u_2, ..., u_n). We discussed the detailed components of this bivariate model in the last class. We are assuming that there are n observations, and one important point of bivariate econometric modelling is that both variables must have the same number of observations. If there is any shortfall, the bivariate model cannot be fitted.
So we assume there are n observations on the Y variable and the X variable, with corresponding u values. Here Y is the dependent variable, X is the independent variable, and u is the error term: whatever is not captured in the system is represented in the form of u. Now, we assume that there is an estimated model, Ŷ = α̂ + β̂X. Let us put this in a slightly different way.
Our starting point is Y = α + βX + u, and alongside it we have a second line, the estimated regression line: let us assume Ŷ = α̂ + β̂X. Obviously, Y = Ŷ + e, which implies e = Y − Ŷ. What is this structure? Let us see the entire setup. The horizontal axis carries the X series and the vertical axis the Y series; the component where the fitted line meets the vertical axis is α̂, and the movement of Ŷ follows the line Ŷ = α̂ + β̂X. There are the original data points, and there is the estimated line; the vertical differences between the original points and the line are e_1, e_2, e_3, e_4, e_5, and so on. These e's represent the error terms. That means when we fit a line, it differs from the true points, and the gap between a true point and the estimated line gives the signal of the error term.
Elaborating this equation further: e = Y − Ŷ, and since Ŷ = α̂ + β̂X, we get e = Y − α̂ − β̂X. Here we have two specific objectives. What are they? The first objective is to get the actual values of α̂ and β̂; the second objective is to find the error components. Once we have the estimated equation, our aim is simple: we want to know the exact value of α̂ and the exact value of β̂, and with their help we evaluate the error term. So how do we go about that?
Since the error is a residual term, the part that is not explained for the dependent variable, our objective must be to minimize that error component. That means we should specify a model in which the relevant variables are properly identified, so that most of the variation in the dependent variable is explained. If that explained percentage is very low, the model's accuracy will be very poor. So we have to fit the model in such a way that most of the relevant variables are included in the system, and design our structure, or system, accordingly. The entire structure is e = Y − α̂ − β̂X, and our objective is to get α̂ and β̂; with the help of α̂ and β̂ we then observe the e component. Let us see how.
Now, e = Y − Ŷ, which is nothing but Y − α̂ − β̂X. To get α̂ and β̂, we have to minimize the error terms; by minimizing the error term we obtain the best value of α̂ and the best value of β̂. How do we go about that? There are several methods by which the errors can be minimized: the ordinary least squares method, the generalized least squares method, the weighted least squares method, maximum likelihood estimation, and so on. Many methods are available for minimizing the error sum. It is not possible to go through every method simultaneously, so we will take one particular method and minimize the error sum through it.
The easiest method for this is the ordinary least squares method, popularly known as the OLS technique. What is the OLS technique all about? Its objective is to minimize the error sum of squares. Our agenda is first to write down the error: e is nothing but Y − α̂ − β̂X. We then calculate the error sum of squares, which means summing e_i² over i = 1 to n; this equals the sum of (Y_i − α̂ − β̂X_i)². The error is the difference between the actual Y and the expected Y; squaring each difference and summing up gives the error sum of squares.
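In display form, the error sum of squares the lecture describes is:

$$
\sum_{i=1}^{n} e_i^{2} \;=\; \sum_{i=1}^{n} \left( Y_i - \hat{\alpha} - \hat{\beta} X_i \right)^{2}
$$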
This is the component we have to minimize. Let us see the starting procedure of the system. Our objective is to get α̂ and β̂; that is why we minimize the error sum of squares, and since we want the values of α̂ and β̂, we minimize it with respect to α̂ and β̂. How do we minimize the system? This is a typical optimization exercise, and optimization has two different structures: one is called the minimization technique and the other the maximization technique. Here we are in the process of minimization, and there are two standard conditions. The first step is to set d(Σe²)/dα̂ = 0 and d(Σe²)/dβ̂ = 0; call these f_1 and f_2. These are otherwise known as the first-order necessary conditions. The second-order sufficient condition is that the determinant of the matrix of second derivatives, with entries f_11, f_12, f_21, f_22, must be greater than 0, together with f_11 > 0 and f_22 > 0, where f_11 = d²(Σe²)/dα̂², f_22 = d²(Σe²)/dβ̂², and f_12 = d²(Σe²)/dα̂dβ̂. We are not going to discuss this mathematical setup in detail; we can get the answer through the first-order necessary conditions alone. What we have to do is simply minimize the sum of squares.
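The lecture skips the second-order algebra; for completeness, a quick check (standard calculus, not worked in the lecture) confirms the stationary point is a minimum:

$$
f_{11} = 2n, \qquad f_{22} = 2\sum X^2, \qquad f_{12} = f_{21} = 2\sum X,
$$
$$
f_{11}f_{22} - f_{12}f_{21} = 4\left( n\sum X^2 - \left(\sum X\right)^2 \right) > 0,
$$

which is positive whenever the X values are not all identical, and f_11 = 2n > 0, so solving the first-order conditions indeed minimizes the error sum of squares.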
What is d(Σe²)/dα̂? It is 2Σ(Y − α̂ − β̂X) multiplied by −1, the derivative of the bracket with respect to α̂, and this must equal 0. Simplifying, we get ΣY = nα̂ + β̂ΣX. Let us call this equation number one. Similarly, we calculate d(Σe²)/dβ̂, which is 2Σ(Y − α̂ − β̂X) multiplied by −X, since −X is the extra term that enters when differentiating with respect to β̂. Setting this equal to 0 and simplifying properly, we get ΣXY = α̂ΣX + β̂ΣX². Let us call this equation number two.
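Written out as a pair (the same steps, in display form):

$$
\frac{\partial \sum e^2}{\partial \hat{\alpha}} = -2\sum\left(Y - \hat{\alpha} - \hat{\beta}X\right) = 0 \;\;\Rightarrow\;\; \sum Y = n\hat{\alpha} + \hat{\beta}\sum X \qquad (1)
$$
$$
\frac{\partial \sum e^2}{\partial \hat{\beta}} = -2\sum X\left(Y - \hat{\alpha} - \hat{\beta}X\right) = 0 \;\;\Rightarrow\;\; \sum XY = \hat{\alpha}\sum X + \hat{\beta}\sum X^2 \qquad (2)
$$

These are the normal equations of least squares.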
If we club these two equations together, the system is: ΣY = nα̂ + β̂ΣX, and ΣXY = α̂ΣX + β̂ΣX². What is our objective here? To get α̂ and β̂; set aside the second objective, the error component, for the moment. We have derived these two equations just to learn the exact value of α̂ and the exact value of β̂. We have two unknowns to find and two equations, so the system is exactly determined; it has a unique solution and can be solved.
What I would like to do here is put this concept into matrix format. The system is nothing but Y = Xβ, where

Y = | ΣY  |      X = | n    ΣX  |      β = | α̂ |
    | ΣXY |          | ΣX   ΣX² |          | β̂ |

So the whole system is represented as Y = Xβ; let us call this equation number three. Now multiply both sides by X⁻¹. What happens? X⁻¹Y = X⁻¹Xβ. And what is X⁻¹X? It is nothing but the identity matrix, so multiplying by it leaves β unchanged. That implies β = X⁻¹Y. The question now is: what is X⁻¹Y? We already know what X is.
X is the matrix with first row (n, ΣX) and second row (ΣX, ΣX²), and we have to find X⁻¹. Now, X⁻¹ = adj(X) / |X|. Put in explicit form,

X⁻¹ = (1 / (nΣX² − (ΣX)²)) × | ΣX²   −ΣX |
                              | −ΣX    n  |

where the determinant is |X| = nΣX² − (ΣX)². That is the entire value of X⁻¹. There is a rule for obtaining an inverse; I am not going into the details of that explanation here, so you should know it yourself. The point is that if X is available and it is square, we can manage to get X⁻¹; our system is 2×2, a square matrix of order two, so it is not difficult to get the inverse matrix.
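For reference, the general rule being used (a standard result, not derived in the lecture): for any 2×2 matrix,

$$
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad
A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix},
$$

and applying it with a = n, b = c = ΣX, d = ΣX² gives exactly the inverse above.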
So that is X⁻¹. Now we want X⁻¹Y, which again calls for a matrix multiplication:

X⁻¹Y = (1 / (nΣX² − (ΣX)²)) × | ΣX²   −ΣX | × | ΣY  |
                               | −ΣX    n  |   | ΣXY |

where Y = [ΣY, ΣXY]′. This is X⁻¹Y, and β = X⁻¹Y, where β is nothing but the pair (α̂, β̂). If we simplify this equation by carrying out the matrix multiplication, we get

α̂ = (ΣY·ΣX² − ΣX·ΣXY) / (nΣX² − (ΣX)²).

This is the α̂ component. Similarly, we get β̂:

β̂ = (nΣXY − ΣX·ΣY) / (nΣX² − (ΣX)²).

This is the β̂ component.
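As a quick sketch, the same 2×2 system can be solved numerically. This is illustrative only (the lecture works by hand); the function name is my own, and Python with numpy is assumed:

```python
import numpy as np

def ols_via_normal_equations(x, y):
    """Solve the normal equations X beta = Y for (alpha_hat, beta_hat)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # Coefficient matrix of the normal equations: [[n, sum X], [sum X, sum X^2]]
    A = np.array([[n, x.sum()],
                  [x.sum(), (x ** 2).sum()]])
    # Right-hand side: [sum Y, sum XY]
    b = np.array([y.sum(), (x * y).sum()])
    # Equivalent to multiplying b by the 2x2 inverse derived above
    alpha_hat, beta_hat = np.linalg.solve(A, b)
    return alpha_hat, beta_hat
```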
If we simplify further, β̂ can also be represented as Σxy / Σx², where the lower-case letters denote deviations from the means: x = X − X̄ and y = Y − Ȳ. I will explain how the formula transfers into this format; there is a trick for solving such problems. Since our objective was to get α̂ and β̂, you are now in a position to know the value of α̂ and the value of β̂.
This is our starting point of bivariate econometric modelling. The moment you get α̂ and β̂, the game plan changes. The basic idea of bivariate econometric modelling is that we have to fit a best line, otherwise called the best fitted line. How do we get the best fitted line? It depends upon the values of α̂ and β̂. Now, α̂ and β̂ need not be constant or unique; they can differ across different setups or structures, because the moment we obtain a particular estimated equation Ŷ = α̂ + β̂X, that model has to be identified properly. That is what we call the reliability of the model; the detailed testing structure was discussed some time back, in the first one or two lectures. When we have an estimated model, we must first go through the reliability part, which is nothing but a diagnostic check. Once you have that, and if the model passes the diagnostic check, that is, it is reliable, then you can use the model and say it is perfectly okay, a best fitted model. If not, you have to modify it in various ways: redesign the model, redesign the system, redesign the data setup, or redesign the technique. In this way you arrive, in the end, at the best model for the particular analysis.
Once you have α̂ and β̂, your estimated equation is Ŷ = α̂ + β̂X, with α̂ and β̂ given by the formulas above. Now, there is actually a trick here. From an exam point of view in particular, it is very difficult to go through so much derivation and analysis, and there is a trick for getting the solution very quickly. What is our starting point? The normal equations: ΣY = nα̂ + β̂ΣX, and ΣXY = α̂ΣX + β̂ΣX². This is how we started our journey. So what do we have to do?
Take the first equation: ΣY = nα̂ + β̂ΣX. What I will do is divide both sides by n: ΣY/n = nα̂/n + β̂ΣX/n. Now ΣY/n is nothing but Ȳ; we discussed this in detail in the univariate data setup. So Ȳ, after the n's cancel, equals α̂ + β̂X̄, since ΣX/n is nothing but X̄. Our objective is to get α̂ and β̂, and α̂ is the only single unknown here, so obviously α̂ = Ȳ − β̂X̄. Technically there is no need to derive α̂ separately or run after its value; we get it automatically, because we know the Y information and the X information, and from them we can obtain Ȳ and X̄. That is not a difficult task. So what is the difficult task here? The unknown factor is β̂. Once you get β̂, everything else is already available to you, so you have to calculate β̂ first rather than α̂; once you get β̂, with its help you are able to get α̂. And what is β̂? It is the formula we have already mentioned: β̂ = (nΣXY − ΣX·ΣY) / (nΣX² − (ΣX)²).
Once you get β̂, α̂ can be obtained through it. Now, before we go to a particular example, let me highlight one issue about this structure: the slope formula is otherwise the covariance of X and Y divided by the variance of X. The covariance of X and Y is simply Σ(X − X̄)(Y − Ȳ)/n, and the variance of X is Σ(X − X̄)(X − X̄)/n. When we take the ratio, the n's cancel, so

β̂ = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)² = Cov(X, Y) / Var(X).

So α̂ is as above and β̂ is as above. Put the other way, β̂ = Σxy/Σx², where x = X − X̄ and y = Y − Ȳ, so that Σxy is Σ(X − X̄)(Y − Ȳ) and Σx² is Σ(X − X̄)². If you simplify, you get exactly this equation. So now we have α̂ and we have β̂, and we will see how they are evaluated in practice. Take an example.
We take an X series and a Y series here, with the sample points numbered 1 through 9:

Sample:   1    2    3    4    5    6    7    8    9
X:       51   60   65   71   39   32   81   76   66
Y:      187  210  137  136  241  262  110  143  152

So there are 9 sample points; these are the sample points, this is the X series, and this is the Y series. Since X has nine sample points and Y has nine sample points, the system is okay and the model can be estimated. Now, what is the idea behind this model? We will assume that Y and X are related in a linear way. Our assumption is Y = α + βX, and adding the error term, Y = α + βX + u. We assume the estimated model is Ŷ = α̂ + β̂X, where α̂ = Ȳ − β̂X̄ and β̂ = (nΣXY − ΣX·ΣY) / (nΣX² − (ΣX)²). Since we have the X and Y series, what is the essential point here? For X and Y, we first need the means μ_X and μ_Y, and then σ_XX, σ_XY, σ_YX, and σ_YY: the first two are the mean of X and the mean of Y, and the matrix of the σ terms is called the variance-covariance matrix. Within the given setup, all of these items can be obtained separately. To solve the equations, the essential requirements are: ΣX, ΣY, ΣXY, ΣX², ΣY², and finally the sample size n.
In fact, I have already calculated these items: ΣX = 541, ΣY = 1578, ΣXY = 88291, ΣX² = 34705, ΣY² = 298712, and n = 9. Now we want α̂ = Ȳ − β̂X̄, so let us start with β̂. β̂ = nΣXY minus ΣX times ΣY, divided by nΣX² minus (ΣX)²; that is, (9 × 88291 − 541 × 1578) / (9 × 34705 − 541²). This is what the β̂ value is all about.
If we simplify this expression, we get β̂ = −3.004; all the information needed is available above. Then α̂ = Ȳ − β̂X̄ = Ȳ + 3.004 × X̄, which simplifies to 355.93. That means the final equation is Ŷ = 355.93 − 3.004X. This is what we call the estimated model.
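As a cross-check (illustrative, not part of the lecture; assumes Python with numpy), the same numbers can be reproduced directly from the data:

```python
import numpy as np

# The nine sample points used in the example.
X = np.array([51, 60, 65, 71, 39, 32, 81, 76, 66], dtype=float)
Y = np.array([187, 210, 137, 136, 241, 262, 110, 143, 152], dtype=float)

n = len(X)
Sx, Sy = X.sum(), Y.sum()        # 541.0, 1578.0
Sxy = (X * Y).sum()              # 88291.0
Sxx = (X ** 2).sum()             # 34705.0

beta_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
alpha_hat = Y.mean() - beta_hat * X.mean()

print(round(beta_hat, 3))        # -3.004
print(round(alpha_hat, 2))       # 355.93
```

Running this confirms the fitted line Ŷ = 355.93 − 3.004X.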
Now let us summarize what we have done so far. The starting point is Y = α + βX + u. This is the original format, where u is the error term, β is the slope, α is the intercept, Y is the dependent variable, and X is the independent variable; the α + βX part is the explained item and u is the unexplained item. By the process above we arrive at Ŷ = α̂ + β̂X, with α̂ = Ȳ − β̂X̄ and β̂ = Σxy/Σx², where x = X − X̄ and y = Y − Ȳ, so that Σxy is Σ(X − X̄)(Y − Ȳ) and Σx² is Σ(X − X̄)². This is the final equation we have obtained, called the line of best fit. You see, the original structure is that we start with Y and X only, and along the way we obtain the u component, the error. In the sample format, with observations 1, 2, 3, up to 9, every item must have an observation. How do we set up the series? Our original starting point is the Y information and the X information; we assume that Y and X have a relationship, with Y the dependent variable and X the independent variable. Now we have to fit the relationship in such a way that we get a best fitted line, also called the best related equation.
To get the best related equation we have to apply some technique; here we are using the ordinary least squares method, so that we get the best fitted line, Ŷ = α̂ + β̂X. With α̂ and β̂ in hand, put the values in: for every sample there is an X value, so put X_1 into the equation and you get the fitted value Ŷ_1; put X_2 in and you get Ŷ_2; and similarly up to Ŷ_9. Now, how do you get u? The residual is nothing but Y minus Ŷ, and it is denoted e_1, e_2, e_3, up to e_9; these are the error items. We then want to see what the contribution of the error is, and what the contribution of X is, to Y. This is our basic agenda behind this topic.
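A small sketch of this step (again illustrative; assumes Python with numpy and the rounded estimates from above):

```python
import numpy as np

X = np.array([51, 60, 65, 71, 39, 32, 81, 76, 66], dtype=float)
Y = np.array([187, 210, 137, 136, 241, 262, 110, 143, 152], dtype=float)

alpha_hat, beta_hat = 355.93, -3.004   # estimates obtained above
Y_hat = alpha_hat + beta_hat * X       # fitted values Y_hat_1 ... Y_hat_9
e = Y - Y_hat                          # residuals e_1 ... e_9

for i, (fit, res) in enumerate(zip(Y_hat, e), start=1):
    print(f"sample {i}: Y_hat = {fit:7.2f}, e = {res:7.2f}")

# With an intercept in the model, the residuals sum to zero
# (here only approximately, because the coefficients are rounded).
print("sum of residuals:", round(e.sum(), 2))
```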
Now, there is a certain problem here. What is the problem? When we fit the model Y = α + βX + u and obtain Ŷ = α̂ + β̂X, we have applied the OLS technique for this transformation. Of course, there are several techniques we could use to get the estimated α̂ and β̂X, but OLS is the standard technique, very easy to understand and simple to apply; that is why we start with the OLS technique. When we go deeper into econometric modelling, we can apply maximum likelihood estimation, generalized least squares, or weighted least squares; some problems in econometric modelling can be solved only with those methods, and in such situations OLS may not be appropriate for getting the best fitted line. There are rules for how, when, and in what situations to apply the GLS technique, the WLS technique, or the maximum likelihood technique. Here we start first at the basic level, and then move into the more complex scenarios.

Now, when we apply the OLS technique, the equation transforms into Ŷ = α̂ + β̂X. OLS is of course the standard technique, easy to understand and easy to apply, but it has certain limitations, and those limitations are tied to its assumptions. We need certain assumptions before applying the OLS technique to get the estimated line, and at a later point each of these assumptions becomes a potential problem for econometric modelling that has to be investigated. We will discuss in detail what the exact assumptions are and how the corresponding problems can arise in the system. These problems are complex, and very interesting as well.
So what are these assumptions related to the OLS technique? Without them, OLS cannot be applied and you cannot get the best fitted model, Ŷ = α̂ + β̂X. Yes, theoretically we simply write Ŷ = α̂ + β̂X, but getting α̂ and β̂ is not so easy: there is a complex process, a complex structure, through which we obtained them, as you have just seen in the derivation of the α̂ and β̂ values. Since we are applying OLS, we have to proceed with certain assumptions, because without them it is very difficult to minimize the error sum of squares by means of the OLS technique. These assumptions divide into three parts: one part relates to the error term, another to the independent variable, and a third to the dependent variable and other items in the system, items that relate to the statistics only, nothing else. Now let us see what these assumptions are.
The first assumption is that the model must be linear in parameters. We keep using Y = α + βX + u, which is linear with respect to both the variables and the parameters. In more complex problems, the variables may be non-linear, or the parameters may be non-linear. As far as the OLS technique is concerned, we must assume that all the parameters are linear in nature, but the variables need not be. That means a quadratic, cubic, or logarithmic equation is possible: Y can enter as log Y or as Y², X can enter as log X or as X², or we can simply fit, say, Y = α + βX² + γX and obtain the values of α, β, and γ. That is not a difficult task; the standard assumption is only that whatever parameters are used in the setup must enter linearly. For the bivariate model there are obviously just two parameters in the system: one is the supporting component, the intercept, and the other is the slope, which indicates the response of the dependent variable to the independent variable. So this is the first assumption behind the technique: the model parameters must be linear.
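To make the distinction concrete (the first two examples follow the lecture; the counterexample is a standard one, not from the lecture):

$$
\text{Linear in parameters (OLS applies):} \quad Y = \alpha + \beta X^2 + \gamma X + u, \qquad \log Y = \alpha + \beta \log X + u
$$
$$
\text{Non-linear in parameters (OLS does not apply directly):} \quad Y = \alpha + X^{\beta} + u
$$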
The second assumption is that X should be non-stochastic, that is, fixed rather than random; as discussed in the last class, the random element in the system sits with the error term, and some probability is involved in the process because what we estimate is an expected relationship, an expected equation. Since we use the term expectation, it points to the future: the whole idea behind the estimated model is forecasting, asking what should happen in the future. Within the original setup we first build the structure through which we can predict or forecast; that is why we are doing all these jobs. So we assume the X variable is non-stochastic; otherwise it is very difficult to observe it or to plan with it. This is the second assumption behind the OLS technique.
The third assumption is that the mean of the error term should be equal to 0; that is, E(u) = 0. When we consider a mean, some items lie above it and some below, as we learned in the standard univariate data setup: the mean divides the data into two roughly equal parts, about 50 percent above and 50 percent below. If that is the setup, the system is balanced. Now, once we get Ŷ, to get the error component e we subtract: e = Y − Ŷ. We have a series of items: Y_1 with Ŷ_1, Y_2 with Ŷ_2, and so on up to Y_n with Ŷ_n. For every item there is an error component: e_1 for the first, e_2 for the second, continuing up to the nth item. Since we are talking about an average, sometimes the difference is positive and sometimes it is negative; but at the end, when we take the sum, the plus items and the minus items should cancel out. If that is the case, the system is perfectly okay; otherwise there is some kind of systematic error.
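Formally, the assumption is E(u_i) = 0 for all i. It is worth noting (a direct consequence of equation (1) above, not spelled out in the lecture) that the fitted residuals always satisfy this in the sample:

$$
\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} \left( Y_i - \hat{\alpha} - \hat{\beta} X_i \right) = 0,
$$

since this is exactly the first normal equation. So whenever an intercept is included, the residuals sum to zero by construction.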
This is the third assumption behind the OLS technique. The fourth assumption is that the variance of the error term should be constant. What does that mean? We are discussing the error term u; call it u_i, and take another error term u_j, so there are two in play. Start with the covariance: Cov(u_i, u_j) = E(u_i u_j). This covariance equals the variance of u precisely when i = j. So when we say the variance of the error term should be constant, or unique, we mean that Var(u_i), the i = j case, is the same constant for every i. This particular setup is called homoscedasticity; it means that the error variances are all equal.
Through one error term u you can generate all the pairwise products. In a more generalized format, take u_1, u_2, ..., u_n along one side and u_1, u_2, ..., u_n along the other. Then we have the variance-covariance matrix:

| σ_11  σ_12  ...  σ_1n |
| σ_21  σ_22  ...  σ_2n |
| ...   ...   ...  ...  |
| σ_n1  σ_n2  ...  σ_nn |

This structure is divided into two parts: the diagonal elements and the off-diagonal elements.
When we say that the variances of the error terms are equal, we mean the diagonal elements, which are the variances; the off-diagonal elements are the covariances. The requirement is that the variances, the diagonal elements, are exactly the same. If that is the case, the setup is called the homoscedasticity principle, and the OLS technique assumes that the error variances are equal, that is, homoscedasticity. If the situation is the reverse, that is, the error variances are not equal and vary across sample points in a cross-section or over time in a time series, then we are in a different format, and that format is called the heteroscedasticity problem. So we have two different games: one is called homoscedasticity and the other heteroscedasticity. Homoscedasticity is consistent with the OLS technique; one of the standard assumptions of OLS is that the error variances are equal. That is what we call homoscedasticity; if that is not the case, it is called heteroscedasticity.
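In compact notation (consistent with the matrix above):

$$
\text{Homoscedasticity:} \quad \operatorname{Var}(u_i) = \sigma^2 \;\; \text{for all } i; \qquad
\text{Heteroscedasticity:} \quad \operatorname{Var}(u_i) = \sigma_i^2, \;\; \text{varying with } i.
$$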
When there is a heteroscedasticity problem, then under the application of OLS the model cannot be treated as a best fitted model. In that context we have to redesign the setup so that the heteroscedasticity problem is removed; then we get the homoscedastic structure, and the model can be used for forecasting. With this, we can close the subject for today. In the next class we will take up the assumptions of bivariate modelling under the OLS technique in more detail. Thank you very much. Have a nice day.