Quality of Service. A huge topic, all focused on the answer to
one simple question. Who goes first? I mean that is it. That
is quality of service in a nutshell. Which of these packets is
going to go first? If you think about it, you've got a router,
it is connected to, we'll say, your LAN using a gigabit per second
connection. You've got your servers, your PCs and all that, and
then it is connected to a WAN link which maybe bridges to another
office over here. Now, the WAN link is, we will just say, a T1, 1.5
megabits per second or so. So you have got a thousand megabits
per second on the LAN coming in to one and a half on the
T1 line. So naturally all your packets are going to start bottle-necking
up. They are all standing in line, and the router now has to say,
"Okay, which of you guys do I send first? Which of you moves to
the front of the line?" Quality of service answers that question.
And I will say by default the answer is simple. Whichever got
there first gets to go first. Seems fair, but it does not work
in today's world where there is some traffic that is more valuable
than other traffic. So in this nugget we are going to look at
quality of service in all its glory, looking at fighting the
quality of service villains. I think there are about four evil
villains we will talk about that you are going to fight. We will
then get into the quality of service models and services and
the different queuing methods that you have available.
Well, when we are setting up quality of service, the first thing
we have to know is who are we fighting? I cannot even say that
anymore. I have two little girls, a four-year-old and a
two-year-old, and a little boy who is so young he does not count
yet, but they watch this VeggieTales David and Goliath episode,
and this big giant cucumber walks out, who is Goliath and he
goes, "Who will fight me?" So now every time I come out of my
office, I see my girls and I go, "Who will fight me?" and they
go "No daddy!" and they run away because I am going to tickle
them. So anyway, these are the villains screaming out "Who will
fight us!" on the network, to get our voice traffic going across
and the first one, I will say first off, quality of service is
not a resolution to this: if you have got, you know, say,
a T1 line with one and a half megabits of bandwidth, and you have
got, simply put, you know, five megabits per second of traffic
that you have to send, voice, video, data, all that kind of stuff
across the network, quality of service does not suddenly magically
make your low-bandwidth links faster. As a matter of fact, it
will probably make things worse by the
time it is said and done. The way I say it is that you have got
a sinking ship, you are on the Titanic, your WAN link is sinking,
and all you can do is rearrange the deck chairs, I mean, you
are moving this guy over here but the whole ship is going down.
Quality of service is not meant to resolve an overall, big-picture
lack of bandwidth. What quality of service is meant to address
is temporary times of congestion, like you have got enough bandwidth
for your network but at times you are doing a backup across the
WAN and you need to make sure the voice still gets through without
destroying the voice quality. You know, that is the kind of thing
quality of service is about. So let me just mark that off:
quality of service is not a solution for that. However, these
three pieces deal directly with voice; these are
what quality of service is meant to address. First off, packet
loss. Packet loss happens when, you know, you have got a router,
again we have got our slow WAN link. As packets start coming in here
and there is not enough bandwidth, they start bottling up, they
back up in the memory queue. Well, as soon as that memory queue
hits its maximum to where it cannot hold anymore traffic, it
begins doing something called tail drop, meaning stuff coming
in here does not even get processed, like "Oh, I'm sorry we're
full, doors are closed," drop, drop, drop, it does not get to
evaluate what those packets are. So quality of service is meant
to make sure the queue does not ever fill up all the
way, so that the router at least has the chance to look at
the traffic that is coming in and make some good
decisions on what it should drop. You know, whatever you do,
if this guy is your voice traffic, do not drop him, move him
to the front of the line and maybe drop that guy, which is somebody
surfing the web. So packet loss, that is the first one. Second
is delay. That is a big one that a lot of people do not think
about. They think, "Okay, if we've got the bandwidth we're good,
right?" Not necessarily. Your goal is to get voice traffic from
point A, here is point A, across the world to the other point,
we'll put point B, in 150 milliseconds or less.
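Before going further on delay, the tail-drop behavior from the packet-loss discussion can be made concrete. Here is a minimal Python sketch (the queue size and packet names are made up): once the fixed-size queue is full, arriving packets are dropped without the router ever looking at what they are.

```python
from collections import deque

class FifoQueue:
    """Fixed-capacity FIFO; enqueue fails (tail drop) once the queue is full."""
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.dropped += 1      # tail drop: never even evaluated
            return False
        self.q.append(pkt)
        return True

q = FifoQueue(capacity=3)
for pkt in ["voice-1", "web-1", "web-2", "voice-2", "web-3"]:
    q.enqueue(pkt)

print(list(q.q))      # ['voice-1', 'web-1', 'web-2']
print(q.dropped)      # 2 -- voice-2 was dropped as blindly as web-3
```

Notice the queue cannot tell voice from web surfing; QoS tools exist so the drop decision is smarter than this.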
I will put less than 150 milliseconds. Now that is Cisco's guideline.
If you look at some of the industry standard documentation they
will say less than 200 milliseconds. But the closer you get to
200 and then a little beyond, the more unnatural the conversation
begins to be. You will say something, and the other person
hears it with enough delay that they will say, "Are you there?"
right as what you said arrives, and it is like, "Aha!" You just
start failing to communicate.
And on the really far end, once you start getting
into the 500-plus milliseconds, I mean, you have seen news reporters
on the news that are in other countries, right? You see like
the host of the show going, "So Bob, what's it like?" you know,
and then they flash some guy in a bomb zone somewhere in the
world and he is just staring at the screen or going "Why is he
not talking?" and then it suddenly goes "Wow! It's really scary
here Bob." And he starts talking, it is just because there is
a huge satellite delay in communicating between those two people.
It is not that they did not have the bandwidth, you know, or
that they had packet loss to where they could not understand each
other; it is just that the delay was too great. So you've got to
make sure that not only does your voice get the bandwidth that it
needs, it needs the first bandwidth from the link. If you have got
the option, nothing else should go ahead
of it, because this guy is not only drop-sensitive,
it is very delay-sensitive, because it has to get to its destination in
a certain amount of time, or else things just begin to sound
unnatural. Now the last one, I would say, is probably one of the
most confusing to people; it is hard to put into words, so let
me just describe it. Jitter is where you have variable delays;
you know, delay is just flat-out delay, like it takes, you know,
we will say 100 milliseconds to get from point A to point
B, that is just flat delay. What jitter is, is when you have variable
delays. Meaning, maybe you sent the first packet from point
A to point B and it took 100 milliseconds. The second packet took
110 milliseconds. Third packet dropped down to 90 milliseconds,
fourth packet shot up to 150 milliseconds of delay on that packet.
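Using those example delays, jitter can be sketched as the variation between consecutive packet delays, not the delay itself:

```python
# Per-packet one-way delays from the example above, in milliseconds.
delays_ms = [100, 110, 90, 150]

# Jitter is the change between consecutive delays.
jitter_ms = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]

print(jitter_ms)        # [10, 20, 60]
print(max(jitter_ms))   # 60 -- the spike the de-jitter buffer has to absorb
```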
So what you get is, you know, the receiving device, let us
say it is a phone over here, he gets one packet, you know, really
quick and then there is a little gap right here between the
next packet and a little gap between the next packet, and
a little gap and all that. Just think about the way voice works.
Can there be gaps in the stream
of voice? I mean, think of it right now, if I am having jitter
and I am talking to you. There are little gaps between my voice
packets, it is not going to work. I mean, "I, I, i, i, i," I am
trying to simulate it, the audio would just be kind of clipped all
the way through. So Cisco knows you can never have, you know, perfect delivery,
like every single packet is always getting there, so what they
have is inside of the router that is receiving that, and also inside of the
IP phones if you are going direct to an IP phone. They have something
called a de-jitter buffer. And the best way to understand that
is like, you remember when CD players first came out? I am
talking about the portable CD players. I had
one, the kind that comes with the belt clip, and you clip this massive
big old box to the side of your waistband so it is like pulling
down your shorts while you are mowing the lawn, and you are pretending
like you are okay with that. Well,
when they first came out, I mean, you had to like baby these
things because you bump it and it is going to skip, and your
song skips. Well, they came out, I think it was Sony that came
out with one first. Sony came out with a CD player with, they
called it ESP, extended skip protection. And what it would do
is read ahead on the CD like 60 seconds so that you could bump
it, shake it, and the CD would have read some of the audio onto
memory so that it would not cause an outage, unless you consistently
shook it for 60 seconds and it ate up the buffer. Did I say
outage? I am too network-focused; I mean it won't cause
the audio to skip, right? Same thing here: the router, the phone,
has a de-jitter buffer because it knows it cannot have these
little gaps between the packets. So it has to collect them, kind
of like reading ahead on the audio a little bit, so that
it gets a smooth playout and you can hear the person smoothly
instead of getting these little dut, dit, dit, dit, do, dit.
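Here is a toy sketch of the de-jitter-buffer idea (the buffer size, packet interval, and arrival times are all invented numbers): packets are held briefly and played out on a fixed schedule, and a packet that arrives after its playout slot is treated as lost.

```python
# A toy de-jitter buffer: packets play out on a fixed 20 ms schedule,
# delayed by a small buffer. A packet that arrives after its playout
# slot has to be discarded -- jitter turned into packet loss.
def playout(arrivals_ms, buffer_ms=30, interval_ms=20):
    played, lost = [], []
    for i, arrival in enumerate(arrivals_ms):
        slot = arrivals_ms[0] + buffer_ms + i * interval_ms
        (played if arrival <= slot else lost).append(i)
    return played, lost

# Packet 3 hits a big jitter spike and misses its slot.
arrivals = [100, 120, 140, 220, 180]
played, lost = playout(arrivals)
print(played)  # [0, 1, 2, 4]
print(lost)    # [3]
```

A bigger `buffer_ms` would save packet 3, but at the cost of added delay on every packet, which is exactly the trade-off the router is dynamically tuning.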
These little gaps between the audio. Now, obviously, the de-jitter buffer
cannot be really big. I mean, you cannot have it like the Sony
ESP where it reads ahead 60 seconds or something like that, because
then you have got a 60-second delay from when the person
has talked until you hear them. So what the router does is do
as small of a read-ahead as it can. It dynamically adjusts, trying
to find the smallest de-jitter buffer it can have that delays
the packets just enough to get
a smooth playout. Well, that being
said, if you've got a bunch of variations you know, it is
trying to make this de-jitter buffer as tight as it can and so
maybe this is 1 millisecond of jitter, 2 milliseconds, 1 millisecond,
2 milliseconds, so the router is going, okay, so my de-jitter
buffer maybe is like 5 milliseconds, to where it is not going to
really impact that 150-millisecond delay but I am still kind
of getting that smooth playout. And then all of a sudden you get
this packet who is like 60 milliseconds delayed, it has got this
jitter. Well that totally would exceed the buffer, the read-ahead
on the router. So maybe it is receiving packet one, two, three,
four, five, and then six has that big
delay; maybe packet seven actually got there first because
for some reason six got really delayed.
Well, that is going to totally exceed the de-jitter buffer, and
so what the router does is just say, "Well, I am going to translate
that straight into packet loss. I am going to drop that
guy; he is out of order and way out
of bounds, outside of my jitter buffer." So not only
do we have to make sure it gets there in time (that is the delay
factor) and make sure it gets there at all (that is packet loss),
it has also got to get
there smooth: all the packets have to arrive at
about the same spacing, or else you run into
all kinds of jitter issues. Now you understand the enemy that
you are fighting against. The rest of this nugget is dedicated to knowing
the solutions that you have. Now, keep in mind that I said
there is an entire certification exam dedicated to quality of
service. Cisco does not want you to have to know all of that to
pass CCNA voice. They just want you to know this is the problem
and then here are the overview, an idea of the tools that
you have. Because, I mean, frankly Cisco knows that if you are
a network admin, and you are on a network that is running voice
over IP and people are coming to you, and they are saying "Hey,
this call is terrible, I'm getting clipped audio and dropped calls and things
like that on the voice-over IP system." They know if you have
an idea of the tools that are at your disposal, and you are put
in a high-pressure situation, you'll figure it out. Now you'll
call somebody, you'll get on the web, you'll Google the answer,
you'll figure out how to implement quality of service. You just
have to know that there is a solution and here are the tools available
to you. So the first thing that you will see is that there
are three current models of how you can design quality of service.
Three of them that exist, and I would say one of them that everybody
prefers nowadays. The first one, I would say, is not really quality
of service. It's best effort. Best effort is the default state
of every router. And I would say it is kind of like the post office.
When you take a letter to the post office and mail it, they are
just like, actually this happened to me. I had an eBay item that
I forgot to mail and the person emailed me back and they go "Hey,
it's been about a week, are you going to, I haven't received
this item, can you update me on the status?" I am like, "Oh!"
because, you know, on eBay everything is about your feedback rating;
you do not want somebody to slam you. So I go to the
post office and I am like, I need a next day shipping on this
item. And they are like, okay, you pay extra and stuff like that.
And so I emailed the person back, I am like, "Oh, it should be
there tomorrow based on the tracking that I see." Of course,
I don't give them the tracking because then they'd know that
I just mailed it. And so the person e-mails me again the next
day and they go, "Hey, we still haven't received it, what's up?"
So I go to call the post office, I am like "Hey, I paid for next
day shipping," and they go, "Oh, well, actually, we tried
to get it there the next day but it's not guaranteed." You should
have seen the stunned look on my face. I said,
"So you're saying when I paid for next-day shipping you were just
like, 'Okay, we'll give it a shot, man, we will give it our best,
but no guarantees'?" I am just staring at them, and they go, "Well, no,
the only one that we guarantee is our guaranteed mail prices."
And I am saving you the rest of that conversation because
it just got ugly after that. But that is the Best Effort
model of quality of service. It's like the router is trying to
get it there, and if it gets there great, if not well, I am sorry,
I tried. That is what every router does by default. Integrated
Services is like hiring a private jet. This is where you get to
say "This traffic always goes first." Or this traffic literally
gets a reservation of bandwidth on the line. So let us say you
get a voice call and you deploy the Integrated Service model.
Now keep in mind this is a model, not a method. There are many
methods that kind of go under each one of these models, but it
is a model. If you go with this model and you make a call across
the network, all the routers that you are going across literally
talk to each other and say, "Somebody is making a voice call;
everybody carve out, we'll say, 100 kilobits per second of bandwidth,
and it is now untouchable." This is the opposite of
Best Effort, right? That call is guaranteed. Even if that call
is not using any bandwidth, the routers are saying, "That's okay, that
100K per second is still dedicated to your phone call." Maybe
both people stop talking and a feature called VAD
kicks in that stops sending packets; the routers don't
care. They say, "We've reserved that bandwidth because you are our
priority." So if you want guaranteed, that's Integrated Services.
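As a rough sketch of why reservations behave this way, here is a toy admission-control loop (the link size and per-call figure are just assumed numbers): every call carves out a fixed slice and keeps it, so eventually there is nothing left to reserve.

```python
# Toy IntServ-style admission control: each call reserves a fixed slice
# of the link and holds it whether or not anyone is talking.
LINK_KBPS = 1536          # a T1's usable bandwidth, roughly
PER_CALL_KBPS = 100       # assumed per-call reservation

reserved = 0
admitted, refused = 0, 0
for call in range(20):
    if reserved + PER_CALL_KBPS <= LINK_KBPS:
        reserved += PER_CALL_KBPS
        admitted += 1
    else:
        refused += 1      # nothing left to reserve -- the scaling problem

print(admitted, refused, LINK_KBPS - reserved)  # 15 5 36
```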
The problem is, it doesn't scale. The bigger you get and the
more phone calls, eventually the routers reserve all the bandwidth
and nothing else can work except the stuff that you have
reserved bandwidth for. So the last one, and you can
guess it is the most popular, is Differentiated Services. What
that is, is the routers mark the packets and
treat them with different levels of service, to where I can say, "Okay,
I am not reserving the bandwidth." I am not saying that nothing
else can touch this bandwidth because there is a voice call going
on but what I will say is if I get a voice packet it is going
to the front of the line. All this other stuff kind of gets bumped
aside. They always say Differentiated Services is "almost certainly
guaranteed" traffic, because there is still a chance that, if worst
comes to worst, you might lose some packets of voice
in some very congested network
situations, but it is the most scalable and it is the most granular
with how you are able to design your Quality of Service.
So when you are thinking about the models that I just talked
about, Best Effort, Integrated Services and Differentiated Services,
those are more of a mindset which will boil down to the tools
that you can implement but it is more of a mindset of "Am I running
my network in a Best Effort kind of way where there really is
no Quality of Service or do I want to deploy this to where voice
gets reserved bandwidth?" One of the tools you actually have
for Integrated Services is a protocol called RSVP, where
literally when a voice call, or whatever priority traffic you
have, comes into the first router, it sends a message saying, "Hey,
RSVP: everybody reserve this much bandwidth on your link for
this traffic that's about to come through." And it will not let
the traffic go through until the whole reservation, all the way
through is complete, so you are absolutely guaranteed. So that
is a mindset. Do you want to use that kind of mindset or do you
want to have a Differentiated Services mindset to where you can
define different levels of priority that every router along the
way can use and then move things around. So again, I don't want
you to get stuck on this. I'm a very practical
guy, you know; if I go through a slide I think,
"Okay, what do you do from a command prompt that relates to that?"
From the last slide there's not really any command that you
type; it is more of a mindset of how you approach it. So now
let us get into the tools themselves that you have when you're
using those mindsets. The first two, and I am only going to talk
about the tools relevant to the CCNA voice exam and this Quality
of Service Nugget because there are a ton of tools that you have,
but again I am just focusing it down on these few. First one
is Classification and Marking, probably the most essential tool
of them all, because without this you have no way of differentiating
between different types of network traffic that exist out there.
Now you might look at that picture and go, "Well, what's that
all about?" So this is within a school. You have all been to school,
and you can immediately identify the two groups of people right
here. You've got this guy, who you immediately would say, "Well,
that guy is a bully." He is threatening, he is taking their lunch
money, he is beating them up, all that kind of stuff. And then
this little kid in the little yellow shirt here, he's sensitive.
I'll just put "the sensitive child." He looks scared, he is like,
"Please don't beat me down, I just want to run to my science class
and do some experiments." So he is the sensitive kid, and
immediately just by looking at this you can identify two different
kinds of school kids. And that is just from a quick glance. I
mean you can even expound on more on the details of that. But
now take that and apply that to your network. If you were to
say I've got two different kinds of network traffic. I have bullies
on my network and I have sensitive applications. Which would
you identify as those? Just thinking off the top of my head, I
would say, "Okay, the bullies of my network are, number one, web
surfing." People just surf the web, wasting their lives (did I
say that?), wasting time surfing the web, downloading
music, all of that. It comes out immediately: that
is the majority of all of the traffic using the WAN link and
the internet connection. Then I start thinking further and
I go, "Okay, peer-to-peer file sharing," people exchanging
those perfectly legal applications and music files that are fully
un-copyrighted; that would be another bully of the network, consuming
a lot of bandwidth. So I would go down the list and identify
my bullies. Then I say, "Okay, what is my sensitive network traffic?"
Answer is it depends on your network. For instance, if you run
Citrix on your network, that's sensitive. Telnet traffic, SSH
traffic, that's sensitive, to where every character is a packet,
and if you start dropping packets, you can't type anymore on
your command prompt. That is a sensitive application. Voice over
IP, absolutely, very sensitive. So are you seeing the mindset
I am already identifying? Classification means that you identify and you group the
different kinds of traffic. So again, me being a practical guy,
how do you do that? Well, there are a lot of different ways. You
could say if it comes in this interface on your router, it's sensitive
or it's a bully; or go by port number: if it's
TCP port 80, that's a bully; if it's using UDP as a transport
protocol, that's a sensitive traffic type. You can
say if it comes from this IP address it's sensitive, or from this
subnet, it's a bully. You can go through and identify the
different kinds of traffic in all kinds of different
ways, and that is considered
Classifying. So then you move down
to Marking. Marking colors the packet so it can be quickly recognized.
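Here is a hypothetical classify-then-mark pass in Python. The port rules are made-up examples of classification criteria; the DSCP value 46 (Expedited Forwarding) is the standard voice marking, and it sits in the top six bits of the IP header's Type of Service byte.

```python
# Hypothetical classifier: sort traffic by protocol/port, return a DSCP value.
# DSCP 46 = EF (voice), 18 = AF21 (interactive), 0 = best effort.
def classify(pkt):
    if pkt["proto"] == "udp" and 16384 <= pkt["port"] <= 32767:
        return 46      # assumed RTP voice port range
    if pkt["port"] in (22, 23):
        return 18      # SSH/Telnet: sensitive, every keystroke is a packet
    return 0           # everything else: best effort

# Marker: stamp the DSCP into the ToS byte so downstream routers
# read the label instead of re-inspecting the packet.
def mark(pkt):
    pkt["tos"] = classify(pkt) << 2   # DSCP occupies the top 6 bits
    return pkt

pkt = mark({"proto": "udp", "port": 16500, "tos": 0})
print(hex(pkt["tos"]))   # 0xb8 -- the classic EF voice marking
```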
So for example you might have a router and initially, to classify,
you use an access list to classify. That is one of the tools
you can use. And as traffic comes in, the router literally has
to look at the layer-three and layer-four header information,
so it is kind of doing a little deeper packet inspection
to see what that is. If I were to compare it to shipping, it would
be like a shipping company having to open the box at
every stop they came to, just to figure
out where it is going. Well, what marking does is allow you
to put a label on the outside of the box. So maybe the first
router identifies, "Whoa, that is sensitive traffic, that is voice
or video or something like that," and then colors
it in the layer-two header or puts a little tag in the layer-three
header. As a matter of fact, the actual tags are considered the
Class of Service tags at layer two and Type of Service tags
at layer three. You remember when you first got into networking,
they put up that big old chart and they go "Hey, in the header
of an IP packet there is, blah, blah, blah," they
put all these, it has always got the squares, you know, here
is all the squares, here is what is in the header of an IP packet,
and nobody remembers all of those things other than source and
destination IP address. But one of those fields was a little
one-byte field called Type of Service. And that is where you
can color the packet. So this router might say "Okay, it took
me a few processor cycles to do it but I've now identified that
as voice traffic." So I am going to put a little shade, let me
grab a color here, put a little shade in that Type of Service
byte, change a few bits up there, so that I have marked this
as a red packet, so all the future routers down the way, they
do not have to look deep inside of the packet to figure out what
it is, they look on the outside of the box, they look at the
Type of Service or a switch, a layer-two switch, might look at
the Class of Service Field and go "Oh, that's really sensitive
traffic, let's move it to the front of the line." So now, I just
went to another tool right there. The initial tools that you
have is Classification and Marking. They don't actually implement
Quality of Service, it is just an identification method that
all the other Quality of Service tools rely on. All right, it's
all downhill from here. The last thing I want to talk about in
this Quality of Service overview is the queuing tools that you
have at your disposal. So I just talked about Classification
and Marking which identifies and labels the traffic with what
it is. Now the queuing tools, is how do you respond to those
labels or those classifications of traffic that you have. For
instance when you see something considered sensitive, how do
you deal with it? Well the first one I want to talk about is
more of a generic queuing method. It is called Weighted Fair
Queuing. The rule here is essentially that the low talkers
in the network get priority over the high talkers. And let me
back up before I go into what that means:
this is actually the default on a lot of serial connections on
Cisco devices. Just out of the box, they are going to run Weighted
Fair Queuing. What it means is if you have an application or
a user or a computer, or something that is sending a whole bunch
of traffic, it is kind of like that meeting room that you have
right there. You have somebody who talks a lot, and if you have
ever been to a meeting, you know that your mind kind of goes
numb to them. You are listening and they just keep talking on
and on, and on, and on. You just kind of, after a while, you
are thinking about snow cones, and small dogs, and things like
that. And all of a sudden they stop and someone who never says
anything goes, "I have a thought," and immediately all the snow
cones and small dogs and whatever else you were thinking about,
you are like "Huh? What? Oh, Bob said something" and all eyes
turn to Bob and everybody is engaged to hear what he has to say,
because he never says anything. So that's Weighted Fair Queuing.
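The low-talker-first behavior can be sketched like this. Real WFQ computes weighted virtual finish times per flow; this simplified version just services whichever flow has sent the fewest bytes so far, which captures the same effect.

```python
from collections import defaultdict

def wfq_order(packets):
    """Service the flow that has sent the fewest bytes so far (low talkers first)."""
    sent = defaultdict(int)          # bytes serviced per flow so far
    pending = list(packets)
    order = []
    while pending:
        nxt = min(pending, key=lambda p: sent[p["flow"]])
        pending.remove(nxt)
        sent[nxt["flow"]] += nxt["bytes"]
        order.append(nxt["flow"])
    return order

# Three big FTP packets are queued ahead of one tiny Telnet packet...
pkts = [{"flow": "ftp", "bytes": 1500}] * 3 + [{"flow": "telnet", "bytes": 64}]
order = wfq_order(pkts)
print(order)   # ['ftp', 'telnet', 'ftp', 'ftp'] -- the quiet flow jumps the line
```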
As your router hears kind of all these applications droning on
and on, they were always sending traffic, but when a low-bandwidth
sender comes in, the router goes, "Oh, that guy never talks,
let's move him to the front of the line, let's give him a little
more priority than these high-bandwidth or constant talkers that
are always blabbing on, on the network." So that way, and that
is where the name comes from, it's kind of just Weighted Fair.
If you do not send that much you should have a little more priority
than stuff that sends all the time. I was just thinking, between the two slides,
I was thinking, yeah, well, Weighted Fair Queuing is probably
like you listening to me right now, to where I am just droning
on and on and on. If somebody were to walk into the room right
now and be like, "Hey... oh, sorry, I didn't know you were busy."
You would be like, "Nope, man, let me
hit the pause button. That's just Jeremy, he's
just talking about some conference table or something, I don't
know what he's talking about. What do you need?" You would give
the priority to them because they are not just talking on and on,
and on. So anyway, next up in the list is Class-Based Weighted
Fair Queuing, CBWFQ. This is powerful because it now allows
you to specify a little priority value, if you will. Let me put
a little more thought around that: Weighted
Fair Queuing just kind of happens, like the router just does
that for you by itself; you do not really have much control over it
at all. Whereas with Class-Based Weighted Fair Queuing, you have some
control: you can specify the levels of service that you
want and things like that. And here is what I mean; that is why
there are a couple of doughnuts on the screen. I love Krispy
Kreme doughnuts. And if you ever go to a Krispy Kreme doughnut
restaurant, they actually have a conveyor belt of doughnuts that is
just rolling out, and they are so tasty. So you go up there, and
let's say you want the glazed doughnuts because they are really
good but you like some of the other stuff. So you go up and
you say, "Hey, I want a dozen doughnuts: I want eight glazed, I
want two of the ones with the pink frosting, and then one chocolate,
and one twisted doughnut." And you go back to your table
and you eat them all, and then you're like, "Oh man, I want that
again." So you go back up and you go, "Okay, I want a dozen
doughnuts, but again, I want eight glazed, I want two pink frosted,
one chocolate," is that what I said? Something like that, one
chocolate and then one twisted doughnut. You go back and you
eat it all. So when you are doing that you are giving priority
to the glazed doughnut, like you are consuming more glazed doughnuts
but you are still giving some level of priority to the others
and it's the same way with Class-Based Weighted Fair Queuing.
I can specify how I want to divide things up; it is actually
in values of bytes, so you can say, like, 10,000
bytes. But let us just say I want to send 50 packets of
web for every 20 packets of FTP, for every 10 packets of we'll
say Telnet, and I am using horrible applications for this because
you would never want to prioritize these this way. So we'll say
for every 10 packets of Telnet, for every 2 packets of instant
messenger traffic. And so with Class based Weighted Fair Queuing
what it's going to do is the router will have its queue now and
it is going to say, "Okay, I'll send 50 of these guys, okay stop,
send 20 of these guys, stop, send 10 of these guys, stop," So
it kind of maintains different queues, you can see it supports
up to 256 different classes of traffic. That's insane, you would
never use that many. But it allows you to specify start and stop.
This is awesome for data applications but horrible for voice.
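Here is a toy illustration of that quota problem (the class names and quotas are invented): once voice exhausts its quota for a round, it sits idle while the other classes drain, and that idle stretch is exactly the gap you hear on the call.

```python
# Sketch of CBWFQ's round-robin quotas: each class sends up to its quota
# per round, then waits for the scheduler to come back around.
def cbwfq_rounds(quotas, rounds=2):
    schedule = []
    for _ in range(rounds):
        for cls, quota in quotas:
            schedule.extend([cls] * quota)
    return schedule

schedule = cbwfq_rounds([("voice", 5), ("web", 3), ("ftp", 2)])

# Gap (in send slots) between the last voice packet of round one and the
# first voice packet of round two: voice just sits while web and ftp drain.
first_round_last = max(i for i, c in enumerate(schedule[:10]) if c == "voice")
second_round_first = min(i for i, c in enumerate(schedule) if c == "voice" and i >= 10)
print(second_round_first - first_round_last)  # 6 slots of dead air for voice
```

Scale those quotas up and the absolute gap only grows, which is why no ratio of doughnuts fixes the delay problem for voice.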
I mean, even if you say, "I want to give voice
1,000 packets for every 50, for every 20" (this is not to scale
anymore), and you are like, "Man,
I'm really giving voice the priority," and you are,
right? It is getting a lion's share of the bandwidth, but here
is the problem. Once it fulfills its quota of 1,000 voice packets,
think about it, what happens? It goes stop, now you've got voice
packets that are starting to fill up this queue, while it's sending
50 of these, 20 of these, 10 of these, 2 of these, it is kind
of going down eating the other doughnuts, if you will. Meanwhile,
these guys are bottling up. What is happening to these guys?
They're sitting there and the delay is going up and up, and up,
and up, and up, as they just sit there and that now, granted
once it loops back around, it's like "Ah, the floodgates open"
and all the voices like "Ah, let's go guys" and they start running
but the problem is who knows how long they have been delayed.
Somebody on the phone hears them talk: "How are
you doing?... Hey, Fred," and then you get these big gaps in
there while it's servicing all the other queues and these guys are just
sitting. So Class-Based Weighted Fair Queuing is awesome for
data but man we need something with a little more priority for
voice, and that is what brings us around to Low-Latency
Queuing, or LLQ. Low-Latency Queuing is actually a
happily concise name, because the full name of LLQ is PQ-CBWFQ.
You can type that into Google and you will see it links straight over to
Low-Latency Queuing. Low-Latency Queuing is Class-Based Weighted
Fair Queuing with a priority queue added. I should have filled
this in before: I mentioned that we've
got all of our doughnuts that we are eating. The doughnuts represent
the classes that you've specified: send this many packets of this
and this many packets of this. Well, CBWFQ says, "Okay, if you're
not specified, then you just get treated by the Weighted
Fair Queuing algorithm," meaning that if you are a low-bandwidth
sender, you get more priority than a high-bandwidth sender. But
otherwise it's just kind of like, let us just send the rest. So
fulfill the doughnut quotas and then send whatever other traffic
you do not specify using the Weighted Fair Queuing algorithm.
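As a rough sketch, plain CBWFQ looks something like this in Cisco IOS MQC syntax. The class names and bandwidth numbers here are invented for illustration; your classes would match your own traffic.

```
! Classify traffic (hypothetical classes)
class-map match-any FTP-TRAFFIC
 match protocol ftp
class-map match-any WEB-TRAFFIC
 match protocol http
! CBWFQ: a bandwidth guarantee per class during congestion
policy-map CBWFQ-EXAMPLE
 class FTP-TRAFFIC
  bandwidth 256
 class WEB-TRAFFIC
  bandwidth 128
 class class-default
  fair-queue
! Apply outbound on the slow WAN link
interface Serial0/0
 service-policy output CBWFQ-EXAMPLE
```

The `bandwidth` values are in kbps. Anything you did not classify falls into class-default, where `fair-queue` hands it to the Weighted Fair Queuing algorithm.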
So, the lawnmower has entered the scene, right? You are just
going, "How far do these pictures go?" This is it. So the lawnmower
represents buying a lawnmower, right, it's simple. Let's say
you go to the store, I actually just bought a lawnmower and you
go to Home Depot or Lowes like I did and you walk in and you go
"Hey, what's the best lawnmower?" And they're like, "Hey, tell
you what, we actually have..." Now, this did not happen, but I wish
it did. You go to Home Depot, and Home Depot says, "Hey, we actually
have this killer lawnmower with all the features on special
for a dollar right now." You are like, "Whoa!"
You know, immediately you are like, "I'll take them all. I'll buy
all of the lawnmowers that you have," because you're thinking, "I'll
give these to my friends and family. They're going to love
me. I'll give them a great lawnmower." So, you
buy them up. You buy out the stock, and Home Depot says, "Hey,
I also know that there is a deal going on at Lowes. Lowes has
them for two dollars." "I'm going there, I'll buy out their stock
too." But you get to Lowes and, now, I guess a dollar and two
dollars are so insignificant that you would just keep buying;
anyway, let me explain my analogy. My analogy is breaking down.
You go to Lowes, right? You buy your first one for two dollars
and you are like, "Man, Home Depot was just such a great price."
Okay, let us say Lowes was 20 dollars, still a great price, right?
You would buy one of them and say, "Well, you know what? Home Depot
was such a great price, I am driving back there. I want to see if
they have got any more in stock since I went to Lowes." You get back
there, and, "Nope, nope, still out of stock." So you go, "Okay, I'll
go buy one more from Lowes," and that kind of thing. So, the point is
this, the analogy is this: you're always going to
be sending your high-priority traffic ahead of everything else, and
that's what Low-Latency Queuing does. It adds that one-dollar-lawnmower
queue. You've specified all your doughnut queues, your Class-Based
Weighted Fair Queuing classes: send 20 packets of FTP for every 10
of HTTP, for every so many of something else. But you designate one
queue as your priority queue, to where, if anything shows up inside
of that queue, it's like, stop the train, folks. That is the priority.
We're moving that to the front of the line, above all the doughnuts,
above all the talking conference tables, above all the other queuing
methods. That's going first. And you probably know what's going in
that queue: Voice over IP. Voice over IP gets that queue; it gets
the priority. And, you can see, there is a delay guarantee: you
get guaranteed bandwidth if you use the priority queue. But you
have got to be careful, because there's only one of them.
There's only one priority queue in Low-Latency Queuing.
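A minimal sketch of that in Cisco IOS MQC syntax (again, the class names and kbps numbers are made up for illustration): the single class configured with `priority` becomes the low-latency queue, while the `bandwidth` classes remain ordinary CBWFQ.

```
! LLQ: one strict-priority queue for voice, CBWFQ for the rest
class-map match-any VOICE
 match ip dscp ef
class-map match-any FTP-TRAFFIC
 match protocol ftp
policy-map LLQ-EXAMPLE
 class VOICE
  ! the low-latency queue, policed to 128 kbps during congestion
  priority 128
 class FTP-TRAFFIC
  bandwidth 256
 class class-default
  fair-queue
interface Serial0/0
 service-policy output LLQ-EXAMPLE
```

During congestion, packets in the VOICE class are dequeued ahead of every other class, and the `priority` command also polices that class so it cannot starve the rest of the link.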
So you want to use it sparingly, I mean, use it for your voice,
don't share it with anything else. Don't say, "Oh, well, Citrix
is pretty important too, let us throw it in there." Now you have got
Citrix competing with voice, because if you put multiple things
in that high-priority queue, it uses kind of a first-in, first-out
method, where if Citrix is there first, let's send that one.
The queue gets priority over everything else, but you don't want
things competing in that priority queue, if that makes sense. So this is,
I would say, the queuing algorithm that we use for almost everything
nowadays because it gives us the priority queue we need for our
voice, it gives us the doughnuts, the ability to classify the
different kinds of other traffic and saying, "Well, this has
priority over that, over that, over that," and it kind of goes in
a round-robin effect servicing those queues. And then, for everything
else, it just uses the Weighted Fair Queuing algorithm, which says,
"Well, you know, the low-bandwidth senders should have priority
over the high-bandwidth senders." And that all boils down into
the queuing algorithm that you can use. The last thing I'll leave
you with is this. Your goal with Quality of Service is that you
should be able to deploy, well say voice-over IP, or any kind
of network traffic and it should not affect the user experience
for other things. Meaning, people are used to surfing the
web at a certain level, things showing up in a certain amount
of time, accessing the database and getting their response in
a second or two, or even at a millisecond level. And then you roll
out Voice over IP with Low-Latency Queuing, and you've got so
much voice traffic that nothing else really has a chance to survive,
and so you have got people looking like this guy, who is like, "Man,
this used to work fast, what's going on?" But, wow, my phone system
sure works well. Well, people do not think that way. People have
their expectations of how things should work, and if you add voice
and impact everything so significantly that the rest of the business
is now not working the way it should, you really have to reconsider
and evaluate that.
Management may come at you and say, "We want to do Voice over IP;
we do not want to pay to upgrade any of our links." You really
have to do an analysis on that and say, "Do we have the bandwidth
to do that without causing people to look like this guy? Can
we do this without severely impacting our network performance?"
Man, I am holding myself back, there is so much more to say about
Quality of Service. so many more tools, so much more thought
process that goes into it, but that's all we have time for
at the CCNA Voice level, and all they are really looking for you
to understand. But I would encourage you, if you are like, "Man,
this is cool, I like this capability," and you have that CBT Nuggets
streaming access to whatever you want, check out the Quality of
Service series. It is actually not that long, because once you get
the core of Quality of Service, there is not much more to it that
you have to know, just understanding the syntax and the tools and
that kind of thing. It really is an interesting topic, but for now,
I hope this has been informative for you, and I'd like to thank you
for viewing.