>> [music] Good day viewers. In this segment I'll talk about ACK clocking, a behavior used by TCP. Sliding windows have a self-clocking kind of behavior, called ACK clocking. In this segment, I'll explain what ACK clocking is and why it's beneficial. First of all, just recall the basic features of a sliding window, and you'll see what I mean by an ACK clock. In a sliding window, every time we get the next ACK, normally the next in-order ACK, it advances the sliding window and allows a new packet, or a new segment, to be sent into the network. So in effect, the ACKs are clocking out the data segments.
This is what we mean by an ACK clock. You can see here packets going across regularly, and ACKs coming back with the same spacing. Every time an ACK comes back, it's a strobe, or a clock, which allows the sender to release a new packet into the network, assuming there's no [UNKNOWN].
You might wonder what all the fuss is about.
What I've described so far is simply how sliding windows work, after all.
Well, let's look at an example to illustrate the benefits of ACK clocking.
So consider what would happen in this network. I have two hosts that are connected to one another over a network with two routers. Note that there's a fast link on either side, and a slow link somewhere in the middle. This slow link is going to be the bottleneck. The hosts can't see this directly, but somewhere in the network there will be a bottleneck, which will slow their traffic down. The sender on the left sends a burst of packets into the network; you can see there are five packets which are ejected in rapid succession. What will happen?
Well, the segments were sent in a burst over the fast link, but as they make their way through the network, they can't go out as quickly over the slow link. This means that some of them will be buffered; here they are inside the buffer. And as they go over the slow link, they'll effectively be spaced out. Why is that? Because if you had a fast link that could send a packet every microsecond, say, then on a link a thousand times slower you could only send the same packet, the same number of bytes, every millisecond. So the packets will have to sit around and wait for a lot longer, and they will be more spread out in time.
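The microsecond-versus-millisecond point above is just serialization delay. As a sketch, with hypothetical link speeds chosen to match the thousand-fold ratio in the example (the lecture doesn't give real numbers):

```python
# Hypothetical link speeds, chosen only to mirror the 1000x example above.
FAST_BPS = 10_000_000_000   # 10 Gb/s fast link
SLOW_BPS = 10_000_000       # 10 Mb/s bottleneck link (1000x slower)
PKT_BITS = 12_000           # one 1500-byte packet

# Time to serialize (clock out) one packet on each link:
fast_gap = PKT_BITS / FAST_BPS   # 1.2 microseconds
slow_gap = PKT_BITS / SLOW_BPS   # 1.2 milliseconds

print(f"fast link: {fast_gap * 1e6:.1f} us per packet")
print(f"slow link: {slow_gap * 1e6:.1f} us per packet")
```

The slow link can only release a packet every `slow_gap` seconds, so packets that arrived back-to-back leave with at least that much space between them.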
So, as we imagine these packets going further through the network, let's think about what happens. They've already gone from a burst to being spread out. That's what the slow link does: it spreads things out in time. Now, as they go back onto a fast link, this timing will not necessarily change; it can stay the same. There might be one packet in the buffer here, but the other ones are going to be spread out in time with the same spacing. And every time a packet arrives at the receiver, an ACK is sent back to the other side. The ACKs that are coming back are also going to have this same kind of spacing.
Here's a cleaner version of that. The key point is that the ACKs are maintaining this spread, or spacing, of the packets, and the spacing is coming from the slow link, the bottleneck link. These ACKs are going to go back through the network, and as they go over the slow link, they will maintain their spacing. The slow link spreads packets apart until they are going slowly enough, but if they are already going slowly, they don't get spread apart any more. It's like a speed limit: on the fast links you could drive fast, but when you hit a slow link you get slowed down to that speed limit. And if you keep that slow speed when you come back around, you don't get slowed down any more. So these ACKs are going to continue at this nice slow pace through the network to the other side. As they get to the other side, they will gate out new packets. But these new packets won't go out in a burst; they will come out spread with the same timing.
Let's look at a cleaned-up picture of that. You can see here the ACKs arrive back at the sender with this timing. At that timing, each ACK acts as a clock tick which gates out a new packet, and these new packets, or segments, are still spread with the same timing. Now, the amazing thing about this is that the segments are now being sent at just the right rate to go over the bottleneck link. They're going to fit, because the timing came from the bottleneck link itself. So a new packet should arrive at the gateway, if there's not a lot of interference from other flows, at roughly the right time that it can be sent out onto that link.
So we now expect that there will be no big queue building here, and the packets will simply go out onto the slow link, because they'll be arriving at about the right times. In effect, the sender is now sending at about the bottleneck rate, so there will be no queuing, without anyone actually having told it what the capacity of the bottleneck link was.
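The whole behavior can be sketched as a toy first-in-first-out queue (a simplification, not real TCP): a burst arrives at the bottleneck back-to-back, departs spaced by the bottleneck's service time, and the ACKs carry that spacing back to the sender, whose next segments inherit it.

```python
# Toy sketch of ACK clocking; time units and service time are made up.
SLOW_GAP = 10                 # bottleneck needs 10 time units per packet
arrivals = [0, 0, 0, 0, 0]    # five packets sent as one burst at t=0

def bottleneck_departures(arrival_times, service_time):
    """FIFO queue: each packet departs service_time after the later of
    its own arrival and the previous packet's departure."""
    departures = []
    free_at = 0
    for t in arrival_times:
        start = max(t, free_at)       # wait in the buffer if link is busy
        free_at = start + service_time
        departures.append(free_at)
    return departures

deps = bottleneck_departures(arrivals, SLOW_GAP)
print(deps)    # [10, 20, 30, 40, 50] -- burst now spaced by SLOW_GAP

# ACK clocking: each departure triggers an ACK, and each returning ACK
# gates out the sender's next segment (propagation delay omitted for
# clarity). The new segments inherit the bottleneck spacing, so each one
# arrives just as the bottleneck frees up and no queue builds.
next_sends = deps                     # one new segment per ACK
gaps = [b - a for a, b in zip(next_sends, next_sends[1:])]
print(gaps)    # [10, 10, 10, 10]
```

The sender never learns `SLOW_GAP` explicitly; the spacing of the ACKs carries that information implicitly, which is exactly the point of the example.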
Well, this is the benefit of ACK clocking. ACK clocking effectively helps the network run with low levels of loss and low levels of delay, and it works as follows. When we sent a small burst, the network smoothed out that burst of segments so that they were spread at about the bottleneck rate. The ACK clock mechanism, from the receiver all the way back to the sender, transferred that same timing from the bottleneck link back to the sender. The sender then, because of the sliding window property, used that ACK timing to send out data segments that weren't in one big burst; they were spread out at about just the right rate to go over the bottleneck link without a lot of queuing. So, by using the ACK clock, we have nice smooth traffic that's well matched to the capacity of the network. That's what's amazing about ACK clocking. TCP uses a sliding window, and because of that, it gets ACK clocking.
Because of the value of this scheme, as I've said in a previous segment, TCP is going to use a sliding window to control how much traffic is in the network. TCP's version of this dynamic window is called a congestion window. Unlike the flow control window, which is controlled by the receiver, the congestion window is adjusted by the sender. The sender adjusts it so that there won't be too many packets in the network; it matches the sender's rate to the network's capacity, and ACK clocking is going to help us do that.
You might wonder how a window controls the rate of what's in the network, since we're talking about a congestion window. Here's a key observation for you. The rate at which you send is related to the size of the congestion window, or the sliding window we're using in this case. With a sliding window of size W, you can send W segments every round-trip time, assuming there's no loss. Well, our sliding window is of size cwnd, so we get to send cwnd segments every round-trip time. If you divide cwnd by the round-trip time, you'll find roughly the rate that's being supported by that window size.
So TCP is going to send using only small bursts of segments, which the network will smooth out. This is how we'll keep our traffic nice and smooth and keep queues from building and congestion from forming. TCP is going to use these small bursts, the network is going to smooth them out, and we'll use ACK clocking to keep everything smooth. And we'll do this by adjusting the window, using packet loss as the signal. Anyhow, that's ACK clocking; you now know what it is, and we'll see how we use it in subsequent segments.