We have been reviewing topics in probability and random processes. This will be the last
lecture of this review. We will quickly recall what we did in the previous lecture: we
discussed properties of Gaussian random processes, the Poisson process, the simple random
walk, the Wiener process (Brownian motion), the random pulse process and the Gaussian white
noise process. We will have opportunities to return to some of these processes later, when
we use them to model loads and responses. We also introduced the notion of the mean square
derivative, which was based on convergence of random variables in the mean square sense.
I briefly touched upon the topic of two random processes in the previous lecture; we will
continue with that.
So, let us consider two random processes U(t) and V(t). The description of U(t), to
recapitulate, is in terms of the first order probability distribution function, the second
order probability distribution function, or in general the n-th order probability
distribution function. Associated with these we also have the n-th order probability
density function. We can define expectations: the mean, the covariance and so forth. This
completes the description of a random process U(t), and we say that U(t) is completely
specified if I am able to specify the n-th order joint density function for any choice of
the time instants t1, t2, t3, ..., tn and for any choice of n. In a similar way we can
completely describe the random process V(t) through its n-th order density function,
expectations, and so forth.
Now, when it comes to the question of joint description of U(t) and V(t), we can consider
two time instants t = t1 and t = t2, and select one random variable U(t1) from the ensemble
of U(t) and another random variable V(t2) from the ensemble of V(t). This can be viewed as
the joint probability distribution function of the random processes U(t) and V(t) at two
time instants. This can be generalized to n time instants from the ensemble of U(t) and
some other m time instants from the ensemble of V(t), and if I am able to specify this kind
of joint probability distribution function, or the associated probability density function,
we say that the two random processes U(t) and V(t) are completely specified.

We can also define moments with respect to these joint density functions. For instance, I
can define what is known as the cross covariance function C_UV(t1, t2). If the means are
zero, that is, if I assume the mean of U(t) is 0 and the mean of V(t) is 0, this is nothing
but the expectation of U(t1) V(t2); if the means are not zero, it is the expectation of
[U(t1) - m_U(t1)][V(t2) - m_V(t2)], evaluated with the joint density function, as shown
here. This double integral gives the cross covariance function between U and V. The word
"cross" originates from the fact that the random variable U(t1) emanates from the ensemble
of U(t) while the random variable V(t2) emanates from the ensemble of V(t).
So, this is in contrast to the definition of the auto covariance function, which we
introduced previously, where I talked about two random variables originating from the same
ensemble. Although both functions are covariances, we add the prefixes "auto" and "cross"
to indicate the choice of the underlying random variables.

The notion of stationarity can be generalized to stationarity of two processes. Suppose we
ask: when do we say that U(t) and V(t) are jointly stationary? First of all, U(t) must be
stationary and V(t) must be stationary, each in the chosen sense of stationarity. Suppose
we are talking about wide sense stationarity: U(t) should be wide sense stationary in its
own right, and V(t) should be wide sense stationary in its own right. That would mean the
mean of U is a constant, the mean of V is a constant, and the auto covariance of U(t),
that is C_UU(t, t + tau), is a function of only the time difference; therefore
C_UU(t, t + tau) = C_UU(tau). Similarly, C_VV(t, t + tau) = C_VV(tau). The additional
requirement for joint wide sense stationarity of the two random processes is that the cross
covariance function should also satisfy this: C_UV(t, t + tau) must be a function of the
time difference tau alone, in the same sense. We can also discuss strong sense stationarity
in terms of probability density functions: I take the joint density function between random
variables originating from U(t) and V(t), and, based on the notion of stationarity that we
considered for U(t), we can extend the definitions and talk about strong sense stationarity
of two random processes.
Now, let us look at what exactly is meant by C_UV(t1, t2), the expectation of U(t1) V(t2).
Suppose you have an ensemble of U(t) and an ensemble of V(t); on each, mark the time
instant t and another time instant t + tau. So there are basically four random variables
here: two from U(t) and two from V(t). When I say C_UV(t1, t2), with t1 = t and
t2 = t + tau, it is U(t1) from the first ensemble and V(t2) from the second ensemble that
are involved. This is clearly different from the covariance between the other pair of
random variables. This expectation is also equal to the expectation of V(t2) U(t1), that
is, of V(t + tau) and U(t), so it can be written as C_VU(t2, t1). The point that I am
making is that this is not a symmetric function: if you simply consider C_VU(t1, t2), you
will be looking at the covariance between two other random variables, namely V(t1) and
U(t2), and there is no reason why that covariance should be the same as the covariance
between the first pair. So it is not symmetric, because there are four random variables we
are talking about.
So, we can organize this covariance information in matrix form. C(t1, t2) has the entries
C_UU(t1, t2), the auto covariance of U(t); C_UV(t1, t2), the cross covariance between
U(t1) and V(t2); C_VU(t1, t2), the cross covariance between V(t1) and U(t2); and
C_VV(t1, t2), the auto covariance between V(t1) and V(t2). This matrix is not symmetric,
because we are not talking about the covariance between the same two random variables;
there are four random variables here, and that is why the asymmetry arises. Although this
is a covariance matrix, it should be understood that it is not in the sense of the
covariance matrix of a single pair of random variables. If the processes are jointly
stationary, with t1 = t and t2 = t + tau, the covariance matrix becomes C(tau). So, this
is what I am trying to say here: C_UV(tau) is the expectation of U(t) V(t + tau).
Multiplication is commutative, so I can write this the other way around as V(t + tau) U(t),
which is nothing but C_VU evaluated at the time difference t - (t + tau) = -tau. Therefore
C_UV(tau) = C_VU(-tau). So, in some sense there is a kind of anti-symmetry that we can see
here. With this understood, what is the corresponding definition of the power spectral
density functions?
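The relation C_UV(tau) = C_VU(-tau) can be verified with sample estimates. Below is a
minimal sketch, assuming zero-mean records and a hypothetical pair of processes (V is a
delayed, noisy copy of U); the identity holds exactly at the estimator level, because both
sides sum precisely the same products.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_cov(x, y, lag):
    """Sample estimate of E[x(t) y(t+lag)] for zero-mean records."""
    n = len(x)
    if lag >= 0:
        return np.mean(x[:n - lag] * y[lag:])
    return np.mean(x[-lag:] * y[:n + lag])

# Hypothetical example: V is U delayed by 5 steps plus small noise.
u = rng.standard_normal(10_000)
v = np.roll(u, 5) + 0.1 * rng.standard_normal(10_000)

tau = 3
c_uv = cross_cov(u, v, tau)    # C_UV(tau)
c_vu = cross_cov(v, u, -tau)   # C_VU(-tau)
assert abs(c_uv - c_vu) < 1e-12   # the anti-symmetry relation
```

Note that swapping both the order of the processes and the sign of the lag leaves the set of summed products unchanged, which is exactly the content of the anti-symmetry relation.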
So, we define the power spectral density matrix of power spectral density functions.
S_UU(omega) is the auto power spectral density function of the process U(t); U(t) is taken
to be stationary, and we will also assume that it has zero mean. S_UV(omega) is known as
the cross power spectral density function between U and V; similarly we have S_VU(omega)
and S_VV(omega). Recall how we defined S_UU(omega): it was the limit of 1/T times the
expectation of U_T(omega) into its conjugate. (I think there is a mistake on the slide
here: the X in this definition should read U.) Now, if you replace omega by minus omega,
what happens? U_T(-omega) is nothing but U_T*(omega), and U_T*(-omega) is nothing but
U_T(omega). Therefore S_UU(omega) = S_UU(-omega).

Now, if you consider S_UV(omega), this is the expectation of U_T(omega) V_T*(omega). If I
replace omega by minus omega, U_T(-omega) is U_T*(omega) and V_T*(-omega) is V_T(omega);
therefore this is S_VU(omega). If you now take the conjugate of S_UV(omega), it will be
the expectation of U_T*(omega) V_T(omega), because you are taking the conjugate of the
product, and this is nothing but S_UV(-omega). The point that you have to notice is that
S_UV(omega), the cross power spectral density function, is a complex valued function: it
has a nonzero real part and a nonzero imaginary part. You can show that it indeed is the
Fourier transform of the cross covariance function; we are assuming the means to be zero,
therefore there is no distinction between the cross covariance and cross correlation
functions.
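These symmetries can be checked bin by bin with a discrete Fourier transform. The sketch
below uses the raw cross periodogram U(omega_k) V*(omega_k) as a finite-record stand-in
for S_UV; the records u and v are hypothetical, and the identities hold exactly for
real-valued records.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical real records with some shared content.
n = 256
u = rng.standard_normal(n)
v = rng.standard_normal(n) + 0.5 * u

U = np.fft.fft(u)
V = np.fft.fft(v)
# Raw cross periodogram, following the convention U(omega) V*(omega).
S_uv = U * np.conj(V)
S_vu = V * np.conj(U)

k = np.arange(1, n // 2)   # positive-frequency bins; -k indexes -omega_k
assert np.allclose(S_uv[-k], np.conj(S_uv[k]))   # S_UV(-w) = conj(S_UV(w))
assert np.allclose(S_uv[-k], S_vu[k])            # S_UV(-w) = S_VU(w)
# The cross spectrum is genuinely complex (nonzero quadrature spectrum):
assert np.abs(S_uv.imag).max() > 0.0
```

The same check applied to S_uu = U conj(U) shows a purely real, even function, in line with the auto-spectrum result above.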
So, this is the cross correlation function, because the mean is zero, and S_UV(omega) and
R_UV(tau) constitute a Fourier transform pair in this sense. R_UV(tau) is not a symmetric
function, and S_UV(omega) has a nonzero real part and imaginary part; we can write it in
polar form, as an amplitude and a phase. This amplitude is called the amplitude of the
cross PSD function, and the phase is the phase spectrum. The real part of S_UV(omega) is
called the co-spectrum, and the imaginary part is known as the quadrature spectrum. This
nomenclature will be useful later on: when we model earthquake loads using multiple random
processes, we will have occasion to use some of it. We can also introduce what is known as
the complex coherency function, which is S_UV(omega) divided by the square root of
S_UU(omega) S_VV(omega). This again is a complex function, and the modulus of this complex
function is known as the coherency.
You can show that this coherency takes values between 0 and 1: if U and V are independent,
the coherency will be 0, and if U and V share a linear relationship between them, the
coherency will be 1. So this again is somewhat analogous to the cross correlation function
that we were discussing earlier. We will consider a simple example, where I define two
random processes U(t) and V(t) in terms of a parent process S(t): U(t) = S(t) and
V(t) = S(t + alpha). Let S(t) be a stationary Gaussian random process with zero mean. What
is the coherency, or the cross power spectral density function, between U and V? That is
the question. C_UV(tau) can be expressed as the expectation of U(t) V(t + tau); the means
of U and V are 0 because S is stationary with zero mean.

Now, U(t) = S(t) and V(t + tau) = S(t + alpha + tau). Therefore C_UV(tau) =
C_SS(alpha + tau), since the time difference here is (t + alpha + tau) - t = alpha + tau.
If you now take the Fourier transform of this, we get the cross power spectral density
function between U and V: the integral of C_UV(tau) exp(-i omega tau) d tau. Using the
relation for C_UV above, we can make the substitution beta = alpha + tau; the limits of
integration remain unchanged, C_SS(alpha + tau) becomes C_SS(beta), and tau becomes
beta - alpha. If you use that, we get S_UV(omega) = exp(i omega alpha) S_SS(omega), which
means the complex coherency between U and V is exp(i omega alpha). We can consider a
similar problem: again there is a parent process S(t) and another process W(t), with
U(t) = S(t) and V(t) = S(t) + W(t). S(t) is taken to be a stationary Gaussian random
process with zero mean, and W(t) is a zero mean Gaussian white noise process which is
independent of S(t). The question is: what is the coherency between U and V? This is an
exercise that you can try.
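The two coherency limits discussed above can be sketched numerically with a
segment-averaged estimator (the segment count, shift and record lengths below are
illustrative assumptions): a pure time shift of the parent process leaves the coherency
near 1 at all frequencies, while an independent process gives coherency near 0. Averaging
over segments is essential here; a single-record estimate of |S_UV|/sqrt(S_UU S_VV) is
identically 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def coherency(x, y, nseg=64):
    """Segment-averaged estimate of |S_UV| / sqrt(S_UU S_VV)."""
    L = len(x) // nseg
    Xs = np.fft.rfft(x[:nseg * L].reshape(nseg, L), axis=1)
    Ys = np.fft.rfft(y[:nseg * L].reshape(nseg, L), axis=1)
    Suu = np.mean(np.abs(Xs) ** 2, axis=0)
    Svv = np.mean(np.abs(Ys) ** 2, axis=0)
    Suv = np.mean(np.conj(Xs) * Ys, axis=0)
    return np.abs(Suv) / np.sqrt(Suu * Svv)

n = 64 * 256
s = rng.standard_normal(n)
u = s
v_shift = np.roll(s, 4)             # a pure time shift of S (alpha = -4 here)
v_indep = rng.standard_normal(n)    # an independent process

g_shift = coherency(u, v_shift)
g_indep = coherency(u, v_indep)
assert g_shift.mean() > 0.9    # time shift: coherency close to 1
assert g_indep.mean() < 0.3    # independence: coherency close to 0
```

This mirrors the analytical result: for the shifted pair the complex coherency is exp(i omega alpha), whose modulus is exactly 1.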
We now briefly touch upon the question of how to deal with non-stationarity. One of the
simplest techniques is to express a non-stationary process X(t) as the product of a
stationary process S(t) and a deterministic envelope function, or modulating function,
e(t). For the purpose of illustration we will take S(t) to be a zero mean stationary
Gaussian random process. X(t) would then also be Gaussian, because this is a linear
transformation: a deterministic quantity multiplying a Gaussian quantity. In modeling
earthquake ground acceleration, this is one of the popular models that we will be using.
We can first see how it looks: the black line that you are seeing here is X(t), and the
red line is the envelope e(t). The envelope multiplies the sample of a stationary random
process, and what you get is the black line. So I am talking about modeling of X(t) where
the non-stationarity has this kind of transient behavior; this is typically how an
earthquake record might look. It lasts for about 30 seconds, with a strong motion phase
and a gentle decay. We are interested in characterizing the behavior of the structure
during the strong motion phase; that is what we are typically interested in.
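The modulation model X(t) = e(t) S(t) can be sketched numerically. The envelope below is
a hypothetical double-exponential shape (the constants 4.0, 0.2 and 0.5 are illustrative,
not from the lecture), and independent Gaussian draws stand in for samples of S(t) at each
time instant; the ensemble standard deviation then tracks e(t) sigma_S, anticipating the
variance result derived below.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical earthquake-type envelope e(t) = a (exp(-b1 t) - exp(-b2 t)).
t = np.linspace(0.0, 30.0, 301)
e = 4.0 * (np.exp(-0.2 * t) - np.exp(-0.5 * t))

sigma_s = 1.0
n_samples = 4000
# Independent N(0, sigma_s^2) draws stand in for samples of the stationary
# process S(t) at each time instant (marginal behaviour only).
S = rng.normal(0.0, sigma_s, size=(n_samples, t.size))
X = e * S                      # X(t) = e(t) S(t), sample by sample

# The ensemble standard deviation should track e(t) * sigma_S.
sd = X.std(axis=0)
assert np.allclose(sd, e * sigma_s, atol=0.1)
assert sd[0] == 0.0            # envelope starts at zero, so X(0) = 0
```

The time-varying standard deviation is exactly the sense in which X(t) fails to be wide sense stationary.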
So, a brief description of this random process can be obtained by considering its mean and
covariance function. Since X(t) is Gaussian, the mean and covariance function completely
specify X(t). What is the expected value of X(t)? It is the expected value of e(t) S(t),
which is zero because the expected value of S(t) is 0. Then the expected value of
X(t) X(t + tau) is nothing but the expectation of e(t) S(t) e(t + tau) S(t + tau), which
is e(t) e(t + tau) R_SS(tau). So the covariance of X(t) depends on both t and t + tau:
R_SS(tau) is the stationary part, but X(t), by virtue of the presence of the two envelope
terms, continues to be non-stationary.

If you consider, for example, tau equal to 0, this expectation is nothing but the variance
of X(t), and you can see here that the variance is e^2(t) sigma_S^2. That would mean the
variance, or the standard deviation, is a function of time; so the process is not wide
sense stationary, and since it is Gaussian even the first order density function will be
changing with time. Therefore, it is not stationary.

There is one property which will be very useful in modeling, known as the Markov property.
We use the term Markov random processes; it should be noted that the Markov property does
not describe a distribution function. For instance, "Gaussian random process" refers to
the fact that at a given time t the random variable X(t) is Gaussian distributed, any two
of its random variables are jointly Gaussian, and so forth. When I talk about Markov
processes, there is no underlying probability distribution that I am talking about; for
example, a Gaussian random process could have the Markov property. So, what is this
Markov property? Let us consider
a random process X(t); let us assume that it has a continuous state and that the time
parameter t is also continuous. We will consider n time instants t1, t2, ..., tn, ordered
in this particular manner. This defines n random variables X(t1), X(t2), ..., X(tn). To
characterize the Markov property, we need the conditional probability distribution of
X(tn): I consider the probability that X(tn) is less than or equal to xn, conditioned on
the facts that X(t_{n-1}) is less than or equal to x_{n-1}, X(t_{n-2}) is less than or
equal to x_{n-2}, and so forth. If this is equal to the probability that X(tn) is less
than or equal to xn conditioned only on the value of X(t_{n-1}), the previous time
instant, we say that the process is Markov. This should be true for any choice of n and
any choice of t1, ..., tn. That means, if you imagine that the n-th time instant defines
the future and the (n-1)-th the present, what the Markov property says is that the future
depends upon the present and not on the past. How the present was arrived at has no
bearing on what happens tomorrow. The process has a so-called one-step memory. You recall,
when I talked about white noise or a sequence of independent random variables, they have
no memory at all.
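The one-step memory can be illustrated with a discrete-time example. Below is a minimal
sketch, assuming an AR(1) recursion x[n] = a x[n-1] + w[n], a standard Markov sequence
(the value a = 0.8 is illustrative): conditioned on the present value, the residual is
uncorrelated with the past, while x[n] itself is not.

```python
import numpy as np

rng = np.random.default_rng(4)

# AR(1) sequence: x[n] = a x[n-1] + w[n], with w a white Gaussian sequence.
a = 0.8
n = 200_000
w = rng.standard_normal(n)
x = np.empty(n)
x[0] = w[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + w[i]

# Given the present x[n-1], the residual x[n] - a x[n-1] is just w[n], so it
# should be uncorrelated with the past value x[n-2]:
resid = x[2:] - a * x[1:-1]
past = x[:-2]
corr = np.corrcoef(resid, past)[0, 1]
assert abs(corr) < 0.02
# By contrast, x[n] itself is strongly correlated with the past (about a^2):
assert np.corrcoef(x[2:], past)[0, 1] > 0.5
```

Once the present is known, the past adds nothing: that is the one-step memory in action.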
With this one-step memory, in terms of distribution functions the Markov property is
described as follows: the probability distribution function of (xn, tn), conditioned on
(x_{n-1}, t_{n-1}), (x_{n-2}, t_{n-2}) and so forth, is equal to the probability
distribution of (xn, tn) conditioned on (x_{n-1}, t_{n-1}) alone. All the earlier values
have no bearing on this function. Associated with this, we can write the n-th order
probability density function; it displays the same trait, that the knowledge of what was
happening from t1 to t_{n-2} has no bearing on this function. So, again you can see there
is a one-step memory.

So, how do we describe a Markov random process? We can start with the first order
probability distribution function: one random variable X(t1) is completely described in
terms of a density function. For two random variables, I write either a joint density
function, or a conditional density function times the first order density function. How
about the third order probability density function? This can be defined in terms of a
third order joint density function, or two conditional density functions and one first
order density function. If X(t) is Markov, the conditional density function conditioned
on both earlier instants is equal to the one conditioned on the latest instant alone,
because what happens at t3 depends only on what is happening at t2 and not on what
happened at t1; the density function becomes independent of (x1, t1).

So, here we need two conditional probability density functions and one first order
probability density function. As you can see, one conditional density describes the
transition from t2 to t3, and the other the transition from t1 to t2; this conditional
probability density function is known as the transition probability density function of
the Markov process X(t). If you now consider an n-th order joint density function, by
using this argument we can show that it reduces to the first order density function
p(x1, t1) and a product of n - 1 conditional density functions; these, as I said, are the
transition probability density functions.
So, this transition probability density function, abbreviated TPDF, is given as
p(x_n, t_n | x_{n-1}, t_{n-1}). A complete description of a Markov process is therefore
through a first order density function and this transition probability density function.
There is a further requirement on this transition density function, known as the
compatibility requirement or consistency requirement, which can be explained as follows.
Consider two time instants t = t1 and t = t2, and an intermediate time instant t = tau.

Now, let us consider the joint density function between X(t1) and X(t2), p(x2, t2; x1, t1).
This can be written as the probability density function of (x2, t2) conditioned on
(x1, t1), times the probability density of (x1, t1). This joint density function can also
be interpreted as a marginal density function of a third order probability density
function: that means I consider three random variables, one at t1, one at tau and one at
t2, and integrate with respect to the states of the intermediate random variable. So the
second order density function is a marginal density function of a third order density
function. Now, let us expand using conditional densities.
Since t1, tau and t2 are ordered in this manner, I can write this as the integral over
x(tau) of p(x2, t2 | x(tau), tau; x1, t1) times p(x(tau), tau | x1, t1) times p(x1, t1).
Now, the process is Markov; therefore the first conditional density function simplifies:
it is the same as p(x2, t2 | x(tau), tau). Now, see what this equality is: there is a
p(x1, t1) on the left hand side and a p(x1, t1) on the right hand side, and they can be
cancelled. So I am left with p(x2, t2 | x1, t1) equal to this integral.

Having invoked the Markov property, this is the expression I get on the right hand side.
What it says, basically, is that in moving from t1 to t2 the conditional density function
should satisfy this internal requirement. So this is the requirement on the transition
probability density function of a Markov random process: the consistency condition for the
process to be Markov. This is the so-called Chapman-Kolmogorov-Smoluchowski relation. We
will return to this relation later in this course, where we will use the Markov property
of responses to characterize the probability distribution of structural response to random
loads; that will come slightly later in the course.
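The consistency requirement has an exact discrete analogue: for a Markov chain, the
two-step transition probabilities are obtained by summing the product of one-step
transition probabilities over the intermediate state. A minimal sketch, with a
hypothetical 3-state transition matrix:

```python
import numpy as np

# Discrete-state, discrete-time analogue of the Chapman-Kolmogorov-
# Smoluchowski relation: summing over the intermediate state is matrix
# multiplication of the one-step transition matrix with itself.
# The 3-state chain below is a hypothetical example.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

# p(x2, t2 | x1, t1) = sum over tau of p(x2 | x_tau) p(x_tau | x1):
P2_ck = np.einsum('ik,kj->ij', P, P)   # explicit sum over intermediate state
P2 = np.linalg.matrix_power(P, 2)      # same thing as a matrix power

assert np.allclose(P2_ck, P2)
assert np.allclose(P2.sum(axis=1), 1.0)   # rows remain valid distributions
```

In the continuous-state case the sum over the intermediate state becomes the integral over x(tau) in the relation above.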
Now, with this we have completed the review of topics in probability and random processes,
and we are ready to start discussing stochastic structural dynamics, or random vibration
problems. As a prelude to that, we will begin by quickly reviewing the basic principles of
dynamics of single degree of freedom systems. This is the familiar equation of motion for
a single degree of freedom system: m is the mass, c is the damping, k is the stiffness and
f(t) is the excitation; x0 and x0-dot are the initial conditions, where the dot represents
the derivative with respect to time.

So, this is a linear second order ordinary differential equation; t is the independent
variable and x is the dependent variable. This is an initial value problem, and we need to
solve it to analyze the response of a structure which can be modeled as a single degree of
freedom system. I will touch upon certain descriptors of the single degree of freedom
system, in terms of its initial response, impulse response function, frequency response
function, and the concept of resonance, which will serve as a starting point to
understand, for example, the notion of a steady state when f(t) is modeled as a random
process. If f(t) is a random process, what it means
is that there is an ensemble of f(t), apart from the fact that at any time t it is a
random variable and we have this family of random variables, and so forth. The implication
for the equation of equilibrium is that it is not a single differential equation: because
f(t) is itself an ensemble of time histories, this equation of motion represents an
ensemble of equations of motion. We could take it sample by sample and solve the problem,
but that is not what we are going to do. The question we would ask is: if I know the
complete description of f(t) as a random process, which means the definition of the n-th
order probability density function for any n, what is the corresponding description of
x(t)? Can I get the complete description of the random process x(t)? That would be the
first question. In other words, how does the uncertainty in specifying f(t) propagate
through the system and manifest as uncertainty in x(t)?

If f(t) is a random process, x(t) is also a random process. So the basic problem in random
vibration analysis of a single degree of freedom system lies in answering this question:
given the complete description of f(t), what is the complete description of x(t)? We will
take up this question shortly, but before we do that we will restrict our attention to
f(t) being deterministic. We begin by considering f(t) to be harmonic, say P cos(lambda t).
A point to be noted in our discussion is the value of the damping c, as there are three
possible ranges of this damping value: the so-called underdamped, critically damped and
overdamped systems. We focus our attention in this course on underdamped systems; unless I
state otherwise, the default model for damping is that the system is underdamped. By that,
what is meant is that in free vibration the structure oscillates and the vibrations decay
exponentially, as shown here. That means there is an oscillatory decay to the state of
rest in free vibration, that is, when f(t) is 0. This is an underlying assumption of all
our analysis in this part of the course. I am again restricting attention to only linear
systems; if the system is nonlinear or has a different damping model, I will explicitly
make that exception when I describe the problem.
So, in standard notation, we divide by m and write the equation as x-double-dot plus
2 eta omega x-dot plus omega^2 x equal to (P/m) cos(lambda t), where eta is the ratio of
the damping to the critical damping, together with the initial conditions. The solution
x(t) consists of a complementary function and a particular integral. The complementary
function for this case can be shown to be a sum of exponentially decaying harmonics, that
is, exp(-eta omega t) times [A cos(omega_d t) + B sin(omega_d t)]. For the particular
integral, we assume a harmonic form, a superposition of cosine and sine functions; the
constants C and D have to be obtained by performing a harmonic balance of the equation of
motion with the right hand side P cos(lambda t), whereas the constants A and B are the
arbitrary constants of integration, which have to be evaluated based on the initial
conditions of the system.

So, if you do this harmonic balancing, I get for C this quantity and for D this quantity.
If I now substitute those values of C and D, I get the displacement in this form, where A
and B are still constants yet to be determined from the initial conditions; if you
actually do that, you can show that A and B depend on x0 and x0-dot through this pair of
equations. So I have determined A and B in terms of the system parameters and the given
initial conditions, and this is the particular integral; this is the complete solution.
If you now look at the nature of this solution, we can immediately see that x(t) is
aperiodic, because there is an exponential exp(-eta omega t) which decays in time, while
the second component itself is periodic with period 2 pi / lambda, which is actually the
period of the excitation.
Now, as time becomes large, exp(-eta omega t) decays to 0, in which case the first part of
the solution goes to 0. And consequently, what happens? The effect of the initial
conditions x0 and x0-dot is encapsulated in the constants A and B, and the second part is
independent of A and B. So, as t tends to infinity, the entire first part goes to 0, and
the effect of the initial conditions consequently vanishes from the response.

When that happens, we say that x(t) has reached the harmonic steady state. In that
situation, the response is periodic at the driving frequency and is independent of the
initial conditions. The question that we will be asking shortly is: is there a counterpart
of the harmonic steady state when f(t) is a random process? That is a question that we
have to answer. Before that, we will run through some of the details. A few remarks: the
response is aperiodic for small t; as t becomes large, the response becomes periodic, in
fact harmonic at the driving frequency. So there is a transient phase and a steady phase.
The transient phase is characterized by aperiodic response, in which the influence of the
initial conditions is still present; the steady phase is when the response becomes
periodic and the effect of the initial conditions vanishes. Does such a thing happen if
f(t) is stationary? That is a question we have to bear in mind.

So, in the steady state, for large t, the response becomes harmonic and the influence of
the initial conditions vanishes. The system indeed takes some time to reach the steady
state, and that is governed by the factor eta omega. How does this originate? There is a
term exp(-eta omega t), and the steady state is reached when this term becomes negligible.
The rate at which it goes to 0 depends on the value of eta omega: the larger the value of
eta omega, the sooner the steady state is reached. If eta is 0, the system will never
reach a steady state; for eta equal to 0, the effect of the initial conditions never dies,
and the response consists of a sum of two harmonics. A sum of two harmonics can be
periodic, but in general it is not.
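The transient-to-steady-state behaviour can be checked numerically. Below is a minimal
sketch with illustrative parameter values (m, k, eta, P and lambda are assumptions, not
from the lecture): integrate the equation of motion from rest with a fixed-step RK4 scheme
and compare the late-time amplitude against the harmonic steady-state formula.

```python
import numpy as np

# Integrate m x'' + c x' + k x = P cos(lambda t) from rest and check that,
# for large t, the amplitude approaches the harmonic steady-state value
#   |x| = (P/k) / sqrt((1 - r^2)^2 + (2 eta r)^2),  r = lambda / omega.
m, k, eta = 1.0, 4.0, 0.05          # illustrative values
omega = np.sqrt(k / m)
c = 2.0 * eta * omega * m
P, lam = 1.0, 1.0                   # driving amplitude and frequency
r = lam / omega

def f(t, y):
    x, v = y
    return np.array([v, (P * np.cos(lam * t) - c * v - k * x) / m])

dt, T = 0.005, 400.0                # T >> 1/(eta omega), well past transient
ts = np.arange(0.0, T, dt)
y = np.array([0.0, 0.0])            # at-rest initial conditions
xs = np.empty(ts.size)
for i, t in enumerate(ts):          # classical fixed-step RK4
    xs[i] = y[0]
    k1 = f(t, y); k2 = f(t + dt/2, y + dt/2*k1)
    k3 = f(t + dt/2, y + dt/2*k2); k4 = f(t + dt, y + dt*k3)
    y = y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

x_num = np.abs(xs[ts > 0.9 * T]).max()            # late-time amplitude
x_ss = (P / k) / np.sqrt((1 - r**2)**2 + (2*eta*r)**2)
assert abs(x_num - x_ss) / x_ss < 0.01
```

With eta = 0 in the same sketch, the late-time record never settles to a single amplitude, in line with the remark that an undamped system has no steady state.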
So, this is the graphical display of the solution. The thin red line that we are seeing is
the total response, that is, the transient phase plus the steady phase; this is actually
x(t). On this, if I now plot only the terms which are functions of the initial conditions,
you can see that they oscillate and go to 0. So initially, if you follow the full lines,
you will see that the response is aperiodic; after some time the amplitudes become
constant, and if you inspect the phases (that takes some effort), they are also constant,
and we say that the system has reached the steady state. The notion of the harmonic steady
state, we will see later, is related to the notion of stationarity.

So, what does steady state mean? Steady state does not mean that x(t) becomes constant.
Some properties of x(t), namely its amplitude and its phase with respect to the forcing
function, become constant, while the function itself continues to be a function of time;
it varies with time. If damping is absent, as I was saying, there is no reason why the
effect of the initial conditions should decay. Here the blue line is the total response
and the red line is the component due to the initial conditions; the total response has
two harmonics, it goes on oscillating forever, and the effect of the initial conditions
is felt throughout, at all values of time.
Now, let us see a few details about the nature of this steady state. Suppose, in the
expression for x(t), we allow t to go to infinity: the terms that are multiplied by
exp(-eta omega t) go to 0, and what remains is this function. We can slightly reorganize
these terms (the details can be worked out), and if you actually do that we can define a
non-dimensional quantity: the amplitude of the response x(t) divided by P/k. P/k is the
static response: if I were to apply a static force P, the displacement of the system would
be P/k. This ratio is known as the dynamic magnification factor, and it is given by this
expression, where r is the ratio of the driving frequency to the natural frequency of the
system, and theta, the phase, is given by tan inverse of 2 eta r divided by 1 minus r
squared.
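The dynamic magnification factor and phase just quoted can be sketched directly. The
checks below use an illustrative eta = 0.05: the static limit at r = 0, the value
1/(2 eta) at r = 1, the peak location sqrt(1 - 2 eta^2) slightly below r = 1, and the
phase passing through pi/2 at r = 1.

```python
import numpy as np

# Dynamic magnification factor and phase for a harmonically driven SDOF:
#   DMF(r)   = 1 / sqrt((1 - r^2)^2 + (2 eta r)^2)
#   theta(r) = atan2(2 eta r, 1 - r^2)
def dmf(r, eta):
    return 1.0 / np.sqrt((1.0 - r**2)**2 + (2.0 * eta * r)**2)

def phase(r, eta):
    return np.arctan2(2.0 * eta * r, 1.0 - r**2)

eta = 0.05                              # illustrative damping ratio
r = np.linspace(0.0, 3.0, 30001)
H = dmf(r, eta)

assert np.isclose(H[0], 1.0)                     # r -> 0: static behaviour
assert np.isclose(dmf(1.0, eta), 1 / (2 * eta))  # at r = 1: 1/(2 eta) = 10
# The peak is not exactly at r = 1, but at r = sqrt(1 - 2 eta^2):
r_peak = r[np.argmax(H)]
assert np.isclose(r_peak, np.sqrt(1 - 2 * eta**2), atol=1e-3)
# The phase sweeps from 0 through pi/2 at r = 1 towards pi at large r:
assert np.isclose(phase(1.0, eta), np.pi / 2)
assert phase(3.0, eta) > 2.5
```

Evaluating `H` for several eta values reproduces the family of curves described next: smaller damping, higher and sharper peak.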
So, this dynamic magnification factor tells us what the magnification over the static
response is, because the system is vibrating under dynamic loads. If I now plot the
dynamic magnification factor as a function of r and eta, we see this familiar family of
curves: eta changes across the blue, green and red lines, and the frequency ratio r varies
along the x-axis. As r goes to 0, that means the driving is P cos(lambda t) with lambda
going to 0, and the dynamic magnification factor I get is one: there is no dynamic
magnification, and this is the point of static behavior. As the frequency increases, we
see that in the neighborhood of lambda equal to omega the dynamic magnification factor
climbs up; it can be as high as 20, 30 or 50, depending upon the value of damping. With
further increase in r, the dynamic magnification factor goes on reducing, and beyond a
point it in fact becomes less than the static response. We can divide this graph into
three regions. One is the region of low values of r, where the stiffness of the system
dominates: the response is nearly static and is dominated by the stiffness
characteristics. In the region of r close to one, the behavior is damping dominated. As r
becomes large, the behavior is dominated by inertial effects: the load keeps changing its
sign so rapidly that the system fails to respond to it, which means inertial effects
dominate the structural behavior.
So, this is the mass-controlled region, this is the damping-controlled region, and this is the stiffness-controlled
region. Similarly, if you look at the phase plot, in the region of r equal to 1 the
phase rapidly changes and goes through a transition, through a value of pi. A rapid change in
phase is what characterizes the so-called resonant condition. In the neighborhood of r equal to
1, where there is significant dynamic magnification, we say that the structure is under resonance. The
precise value of r at which this reaches its highest value is not r equal to
one; it is slightly different, and that is actually the point of resonance. In the phase diagram,
the manifestation of resonance is through a rapid change in phase angle.
If we now look at this dynamic magnification factor, you see that as damping is reduced,
the amplification factor climbs up. So, the question would arise: if damping indeed
goes to 0, what will be the magnification? It appears from this graph that the dynamic
magnification factor becomes unbounded. So what really happens if you apply a harmonic
load on an undamped system near resonance? If you want to analyze
that, we can consider the equilibrium equation with no damping; here this is the complementary
function and this is the particular integral. Let us assume the system starts from
rest; we can show that if you impose these initial conditions, we can evaluate A and B,
and if you were to do that, I get these expressions. In this, if I now simulate the resonant
condition, namely the limit of lambda going to omega, what happens
is that cos lambda t minus cos omega t goes to 0, and omega squared minus lambda squared
also goes to 0; it is an indeterminate form. So, we have to use L'Hopital's rule
and get the limit as lambda goes to omega; that is, we have to actually differentiate with
respect to lambda. If you do that, I get this function: P t into sin omega t divided
by 2 m omega. So, as time becomes large, this is not a periodic function; although the system
is driven harmonically and the system is linear, the response is not harmonic, it
is not even a periodic function, because as t becomes large this amplitude becomes
large, and in the limit of t tending to infinity the response becomes unbounded.
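The resonant limit just derived, x(t) = P t sin(omega t) / (2 m omega), can be checked with a short numerical sketch. The parameter values (P = 1, m = 1, omega = 2 pi) are illustrative assumptions, not values from the lecture.

```python
import math

def x_resonant(t, P=1.0, m=1.0, omega=2.0 * math.pi):
    """Undamped response at resonance, from the L'Hopital limit
    lambda -> omega: x(t) = P * t * sin(omega * t) / (2 * m * omega)."""
    return P * t * math.sin(omega * t) / (2.0 * m * omega)

# With omega = 2*pi the period is 1 s, and sin(omega*t) = 1 at t = n + 0.25.
# Sampling successive peaks shows the linear growth of the envelope
# P*t/(2*m*omega): the response is unbounded only as t -> infinity, not
# instantaneously.
peaks = [abs(x_resonant(n + 0.25)) for n in range(1, 5)]
print(peaks)  # a strictly increasing sequence
```

This is why, as described below, an undamped structure at resonance survives some seconds of shaking before the amplitude crosses a failure threshold.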
So, if you plot the time history, you see that there is a linear growth of amplitude
as time passes, and if there is a critical limit on, say, the force in the spring, which
is k into x, that may be at this point. The structure would simply break, but it would
still survive these 12 seconds of shaking before it actually breaks - that means in an undamped
system, resonance does not cause instantaneous failure. The amplitude keeps growing, and at
some time, when the amplitude crosses a threshold, the structure fails.
In the description of dynamical systems, we introduce what is known as the indicial response
and the impulse response. The indicial response is the response of the system to the unit step
function. A step function, if you recall, we define as: U of t minus a is 0 for t less than
a, and it is 1 for t greater than a, as shown here. Here a is 3 seconds,
so till t reaches the value of 3 it is 0, and at three it jumps. So, if you actually want the value
of this function at 3 seconds, you can take the average of the left hand limit
and the right hand limit, which are 0 and 1, and call it half; it is a matter of convention.
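The step function definition above, including the half-value convention at the jump, is small enough to write out directly; the choice a = 3 seconds follows the lecture's example.

```python
def unit_step(t, a=3.0):
    """Heaviside step U(t - a): 0 for t < a, 1 for t > a, and 1/2 at
    t = a (the average of the left and right hand limits - a matter
    of convention, as noted in the lecture)."""
    if t < a:
        return 0.0
    if t > a:
        return 1.0
    return 0.5

print(unit_step(2.9), unit_step(3.0), unit_step(3.1))  # 0.0 0.5 1.0
```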
Now, if we consider the response of the system to a suddenly applied load, we can model
the load as a unit step function. So, U of t means there is a suddenly applied load
at t equal to 0. Suppose the system is at rest and you apply a suddenly applied constant
load; we can analyze this problem. Again, the solution will have a complementary
function and a particular integral, and if you now impose
the initial conditions x of 0 is 0, x dot of 0 is 0, we can evaluate A and B, and I get this
response. We denote this by G of t, and this is known as the indicial response
of the system. This is one of the important descriptors of a linear time-invariant system.
How does the indicial response look for an undamped system? You can see here, if damping is 0,
this will be 1 and this part is 0; I get 1 minus cos omega t. That is depicted here:
the function is oscillating about a value of 1, not about 0, because it is 1 minus
cos omega t, and this is the undamped indicial response. A damped indicial response will also oscillate
about one - this is one - so it will respond and it will decay to a constant value as t
becomes large. This is nothing but the static response of the system under a unit
load; as t becomes large, the response of a damped system is nothing but the static response,
because the damping kills the transient.
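The indicial response just described can be sketched for an underdamped SDOF system; the closed form G(t) = (1/k) [1 - e^(-eta omega t) (cos omega_d t + (eta omega / omega_d) sin omega_d t)] is a standard result consistent with the lecture's description, and the parameter values (m = 1, k = 1, eta = 0.05) are illustrative assumptions.

```python
import math

def indicial(t, m=1.0, k=1.0, eta=0.05):
    """Indicial (unit-step) response G(t) of an underdamped SDOF system,
    starting from rest: G(0) = 0, and G(t) -> 1/k (the static response)
    as t becomes large."""
    w = math.sqrt(k / m)                 # natural frequency
    wd = w * math.sqrt(1.0 - eta**2)     # damped natural frequency
    e = math.exp(-eta * w * t)
    return (1.0 / k) * (1.0 - e * (math.cos(wd * t)
                                   + (eta * w / wd) * math.sin(wd * t)))

print(indicial(0.0))    # 0.0: the system starts from rest
print(indicial(500.0))  # ~1.0 = 1/k: decays to the static response
```

Setting eta = 0 in the closed form recovers the undamped case G(t) = (1/k)(1 - cos omega t), which oscillates about the value 1/k forever, exactly as the lecture describes.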
Now, we can consider the response of the system to a box kind of load, as shown here.
This can be modeled in terms of two Heaviside step functions; the amplitude F
I have taken as one in this equation, so this is U of t minus U of t minus a, which is
a box of width a. So, I can analyze this response: the response will be G of t for t less than
or equal to a, and for t greater than a it will be the superposition of the responses due
to U of t and U of t minus a, which is G of t minus G of t minus a. That is, I am
expressing the force as a superposition of two step functions, and I am superposing the responses
corresponding to these two forces.
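The superposition described above is mechanical to code up. This sketch reuses the underdamped indicial response with the same assumed illustrative parameters (m = 1, k = 1, eta = 0.05); the box width a = 5 is also just an assumption.

```python
import math

def indicial(t, m=1.0, k=1.0, eta=0.05):
    # Underdamped unit-step response (assumed parameter values).
    w = math.sqrt(k / m)
    wd = w * math.sqrt(1.0 - eta**2)
    e = math.exp(-eta * w * t)
    return (1.0 / k) * (1.0 - e * (math.cos(wd * t)
                                   + (eta * w / wd) * math.sin(wd * t)))

def box_response(t, a=5.0):
    """Response to a unit box load U(t) - U(t - a), by superposition:
    G(t) while the load acts, and G(t) - G(t - a) after it is removed."""
    if t <= a:
        return indicial(t)
    return indicial(t) - indicial(t - a)

print(box_response(2.0))    # forced phase: oscillating about 1/k
print(box_response(500.0))  # ~0: after removal the system comes to rest
```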
So, for a specific case, we can see here this is the response. This is the time at which
the box excitation ends; as long as there is the load, it will be oscillating,
and when the load is suddenly removed, it will oscillate and come to rest. This is the response
of the system to a box input. Now, what we will do is shrink this box: suppose
there is a box function with amplitude F and duration t c. Now, I will allow this t c to
go to 0, and at the same time I will increase F, so that F t c goes to unity.
This we have discussed earlier, when I talked about discrete random variables
and their relation to continuous random variables: in the limit that
I am mentioning, we get what is known as the Dirac delta function.
So, the Dirac delta function is defined as: Dirac delta of t minus a is equal to 0 when t is not
equal to a, and the area under this function is one. At t equal to a itself, we do not define
the value of the Dirac delta function; the Dirac delta function is defined through an integral -
if it multiplies a function f of t inside an integral, the integral takes the value f of a. This Dirac
delta function can be used to model impulsive loads on the structure. So, how do we get
the so-called response of the system to an impulse? I will model f of t as a box
function, and the response will accordingly be F into G of t for t less than or equal to
T c, and this expression for t greater than T c.
Now, I will rewrite f of t as F T c divided by T c. In the limit that I am talking about,
with T c going to 0 and F T c going to 1, the forcing function goes to d U by d t, which
is the Dirac delta function. Under the same limiting condition, what happens to x of t? It will
be F T c into G of t minus G of t minus T c, divided by T c; under this limiting condition, you
see that this quantity is nothing but d G by d t. This h of t is therefore the derivative
of the indicial response. We call it the unit impulse response function, and
the expression for that is displayed here: this is G of t and this is the derivative
of G of t. For an undamped system, when eta is 0, omega d will be omega, and this is 1
by m omega sine omega t; that is the sinusoidal function shown here. For a damped
system, it is an exponentially decaying sinusoidal function. So, in the next class
we will see how the impulse response function can be used to construct the response of the
structure to an arbitrary load f of t. The lecture ends with this.
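As a closing sketch, the claim that h(t) is the derivative of the indicial response can be verified numerically: h(t) = e^(-eta omega t) sin(omega_d t) / (m omega_d) is the closed form consistent with the lecture (reducing to sin(omega t)/(m omega) when eta = 0), and the parameter values (m = 1, k = 1, eta = 0.05) are illustrative assumptions.

```python
import math

def impulse_response(t, m=1.0, k=1.0, eta=0.05):
    """Unit impulse response h(t) = dG/dt of an underdamped SDOF system:
    h(t) = e^(-eta*w*t) * sin(wd*t) / (m * wd)."""
    w = math.sqrt(k / m)
    wd = w * math.sqrt(1.0 - eta**2)
    return math.exp(-eta * w * t) * math.sin(wd * t) / (m * wd)

def indicial(t, m=1.0, k=1.0, eta=0.05):
    # Underdamped unit-step response, for comparison.
    w = math.sqrt(k / m)
    wd = w * math.sqrt(1.0 - eta**2)
    e = math.exp(-eta * w * t)
    return (1.0 / k) * (1.0 - e * (math.cos(wd * t)
                                   + (eta * w / wd) * math.sin(wd * t)))

# Central-difference derivative of G(t) should match h(t) closely.
h_num = (indicial(2.0 + 1e-6) - indicial(2.0 - 1e-6)) / 2e-6
print(abs(h_num - impulse_response(2.0)))  # ~0: h(t) = dG/dt
```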