So this is gonna start with our first technical topic for the class
which is, again as I mentioned we're gonna start with the switching exam
to go through all of the layer 2 ethernet switching
features and functionalities and then most likely on Wednesday
we'll get to routing The first topic then is gonna be based on
the overall general network design for layer 2 ethernet networks
We sometimes call this the campus network design or the
Hierarchical Campus design Where the idea behind this
is that Cisco separates their layer 2 switching design
into three levels of hierarchy which basically is gonna
give us smaller building blocks of the network that we can combine together in order to
to scale the network and to make it a little bit more manageable
The three categories, the three levels of hierarchy that they separated it into
are the Access Layer, the Distribution Layer, and the Core Layer
Where sometimes the distribution is also known as the Aggregation Layer
and sometimes the Core Layer is also known as the backbone
Basically what this means is that your end hosts
which would be like your PCs or your phones or your printers, whatever the
devices that are network-attached they're gonna be connecting to the Access
Layer From the Access Layer, we're gonna have
then these access switches that are typically connected to the distribution
which is aggregating not only the number of ports
but also the bandwidth in the network And then it's ultimately gonna be sent
to the Core Layer of the network which is gonna be the main
interconnection between different either geographic regions
or different like functional units of the network
So if we were to visualize this the idea is that the port density
for connecting to the end host this is what's happening at the Access Layer
Then typically we're gonna have one or more connections up to the Distribution Layer
This is where we would be running typically like our layer 3 routing
or maybe redundancy protocols like HSRP or VRRP
Or maybe any type of traffic filtering Usually the Access Layer is just for basic
attachment Like where our VLAN assignments would be
or where the spanning tree protocol domain would start
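As a rough sketch of what that access-layer VLAN assignment might look like on a Catalyst switch (the interface, VLAN number, and name here are just made-up examples):

```
vlan 10
 name USERS
!
interface GigabitEthernet1/0/1
 description Access port for an end host
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
```

Portfast lets the edge port skip the normal spanning tree listening and learning states, since no other switch should be attached there.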
Where the core of the network this is where we're trying to just
basically do forwarding between different segments
So we don't want to bog the core down with additional features like
application level traffic classification or any type of filtering
or any real advanced features This is just trying to get traffic
from point A to point B like we could be attaching to the WAN
or we could be attaching maybe to the data center
where typically the core is gonna be to aggregate the
different physical points of the network together Now the reason we want to do this to begin
with is that the main advantage
it's gonna allow us to scale the network So if I wanna add a new
portion of the network, I don't have to redesign the entire thing in order to do
this Cause the only thing I need to do is just
add a small Access Layer or add a
small new distribution block So it's gonna make the network design more
scalable We'll also see that this is gonna
impact what's known as the failure domain of the network
Or if one of the devices goes down let's say there's a physical
link cut or one of the switches crashes
by separating them into these different layers, just because one portion of the network
fails doesn't necessarily mean it's gonna affect everything else in the topology
So when we get into some of the interaction between the layer 2 LAN
and the layer 3 routed network this is one of the big
design considerations that if my network is built wrong
I could have a host that has something wrong with its NIC card
and maybe it's sending like a broadcast storm out to the network
I wanna make sure that that's not gonna take my whole data center down
because one end point is broken The other is that it's gonna help us in
in troubleshooting and trying to manage the network
So not only management from a physical layer 1 point of view
like how is the cable plant installed but then when we look at
like aggregating the device's configurations or adding new changes to the network
When you build it in these levels of hierarchy it makes the overall network design
a little bit easier to manage There's a couple documents I wanna mention
here that go along with
with Cisco's general design theory for this If you search for
the design zone and the design zone is what they used to call
the solutions reference network design guides or the SRNDs
Sometimes this is also called the Cisco Validated Design or the CVD
Basically what this is is that before they take some new technology
and try to deploy it in a large scale There's all sorts of testing that goes on behind
the scenes and they try to, you know, build like
scale models of the network to see what the end result of it is
and what's the best way to build the network together
and the best way to configure it So under design zone, if you were to go
to borderless networks, which is what they call this now, this is
basically the enterprise network and under campus,
this is generally what's talking about the layer 2 switch network, the layer 2 LAN
or also layer 2 wireless access So like our WiFi interacting with our ethernet
LANs And under here, there's a
couple good design documents One of them is the
borderless campus design guide
which basically is just a big PDF that talks about
different portions of the network and how they fit together
Where the network design principles and models
this is gonna talk about like the different layers of the network
or if we look at the multi-tier it talks about the core layer versus
the distribution layer versus the access layer But also what's nice about this
when you actually go to deploy these technologies they give you a lot
of configuration templates that you can use to figure out what would be
the appropriate configuration of your layer 2 access switches
as opposed to your layer 3 distribution switches and then your
routing in the core So we'll come back and reference this
as we get to different portions of the network
like how is EIGRP routing gonna play into
the layer 2 LAN or how could we use OSPF routing instead
Also high availability this is gonna be for stuff like our HSRP
or VRRP or GLBP designs then also deploying
QoS for application performance This would be dealing like with
our IP telephony or our voice over IP systems or maybe some sort of
video conferencing or multicast video that we have going over the
LAN This is kind of just the overall
picture of what they're trying to say These are the different products that we have
and this is how it fits in to our design model
There's also another one here that is the high availability campus network design guide
We'll talk about this in more detail when we get to some of the layer 3 routing
protocols But this one here is really good
the Campus Recovery Analysis design guide This shows how you could
figure out based on how your network is built If one portion fails,
what is gonna be the result of that? And what are the individual
protocols that come into this design that are ultimately gonna control
if there's a problem in the network, what's the result of it
Like it talks about you could have a case where you're running
HSRP, you're routing with EIGRP and you're doing per VLAN spanning tree protocol
If a failure happens what's the result of this
from a restoration point of view or what's sometimes
called the network reconvergence process So basically when the network topology changes
how long does it take the network to self-heal around that?
or how long does it take to figure out the change happened
and then to change layer 2 switching or to change layer 3 routing in order to get
the new correct agreement on what the topology
looks like So depending on what the layer 3 protocols
are and how these interact with our layer 2 protocols
like we'll talk about the differences between rapid spanning tree versus
the legacy per VLAN spanning tree or the legacy common spanning tree
Also when would you wanna run HSRP versus GLBP
Also how could layer 2 looping come into
the design So some of this stuff may be above and beyond
the scope of the switch exam or the route exam
but from a practical point of view, this is where
you can get a lot of help for the actual deployments of these networks
So when you're in the design stage and when you're in your implementation
stage how has Cisco
recommended to do it over the years and then what's the result that they've
come up with for this type of testing So if we look at some more specifics of
the individual pieces that are making up this model
again the last portion of the three level design or the
three levels of hierarchy is the access layer
And this is gonna be the entry point for the application
So our desktops, our IP phones, our printers Also like the wireless access points
Generally, these are gonna be connected at the access layer
Historically, this has been generally made up
of layer 2 ethernet switches but in today's networks, a lot of times
it's also layer 3 switching as well So we'll look at some design cases when we
get later to routing with like EIGRP and OSPF
There's cases where you could pretty much do away with layer 2
switching almost completely except for the devices in the same
VLAN on the access layer and then use routing protocols in order
to do the path selection and to do any of
the convergence in the rest of the network The main idea behind this
is that once we go to send traffic to other portions of the network
we need some additional layer of hierarchy
to kind of aggregate our bandwidth together or aggregate our connections together
And this is what the next level is the distribution layer
So typically we're gonna have multiple connections from access to distribution
so that if there's a physical link cut or one of the physical switches goes down
we're able to rebuild the network or to reheal or reconverge around it
By far, the most common feature of the access layer
is gonna be the broadcast domain segmentation or what we call the
virtual local area network or simply the VLAN We'll see that the VLAN ultimately
is gonna control on the ethernet LAN
who are the hosts that you can directly talk to without having to go to the router
So if I were to send an address resolution protocol request or the ARP request
the broadcast domain would control who actually gets that request when I actually
send it The other thing is gonna be for the
quality of service marking or what we sometimes call the
classification and in today's enterprise network
this is gonna be a big feature that we need to make sure is
properly enforced Because if I have my phones
the voice traffic now is going over the ethernet LAN
I wanna make sure that someone doing like an
FTP download or doing web browsing is not gonna cause someone else's phone call
to drop So as the traffic actually enters the access
layer this is one of our big considerations for
switching We need to make sure that
I'm able to tell the difference between the different applications in the network
and the way that we do this is with what's known as the
packet marking or the packet classification This is what's sometimes defined as your trust
boundary So whether the layer 2
switches or layer 3 switches when they allow the traffic in
are they going to leave the classification up to the application?
Like the actual phone itself or whether they're gonna try to enforce
this themselves like change different types of packet marking
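On the classic Catalyst platforms, that trust boundary decision might be sketched roughly like this (the interface number is hypothetical, and the exact commands vary by platform and IOS version):

```
mls qos
!
interface GigabitEthernet1/0/10
 description Port with an IP phone attached
 mls qos trust device cisco-phone
 mls qos trust cos
```

With this setup the switch only honors the phone's CoS markings when it actually detects a Cisco phone via CDP; otherwise the markings are rewritten at the edge.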
Another key feature is gonna be the security of the network
In a lot of networks nowadays, this is where we run
a protocol that is known as 802.1x which basically means that when I connect
my wired PC or my wired whatever to the LAN
it's gonna ask me for authentication information Could be based on
simply username and password It could be something like
a one time password, like an RSA SecurID
or you have those physical dongles that you plug into your machine and then
that allows you to get on to the network This is mainly what the job of
802.1x is To take our authentication databases if we're
running like Windows active directory or some sort of other
authentication mechanism we need to get that to talk to the access
switches to see who can actually
get on to the network to begin with In the case of the wireless LAN
This is where we would be doing our layer 2 wireless security
like are we running WPA2, are we running the personal
version which is just a shared password Are we running the enterprise version
which can use like the SecurIDs But the key is that, this is happening on
the first step of the network
If there's someone that I don't want to get access to the network
I wanna try to do that as early as possible in the topology
so down to the access layer We'll also see that there's a bunch of other
common layer 2 security holes in ethernet
LANs that we need to deal with This is where features like dynamic ARP inspection
would run or the DHCP snooping feature
or the IP Source Guard we'll talk about those in more detail when
we get to layer 2 security But basically this is preventing someone from
plugging into your network and then doing some sort of attack
where they can redirect traffic to themselves or they could knock someone else off from
the network We wanna make sure that we can
enforce this right in the entry point of the network, right at the access layer
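A minimal 802.1x port sketch could look something like the following, assuming an external RADIUS server (the addresses, key, and interface here are all hypothetical, and command names differ between IOS versions):

```
aaa new-model
radius-server host 10.0.0.5 key ExampleKey
aaa authentication dot1x default group radius
dot1x system-auth-control
!
interface GigabitEthernet1/0/10
 switchport mode access
 dot1x port-control auto
```

The port stays closed to normal traffic until the attached device successfully authenticates against the RADIUS server.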
Another key thing would be for multicast traffic management
if we're running some multicast application like let's say that you're running
like Norton Ghost to do different images for your desktops
or you're running something like a video application
that your end hosts are watching I wanna make sure that when that
traffic actually gets down to the last
segment down to the access layer then it's only going to the hosts that actually
want it Because otherwise if I don't
manage my traffic with multicast multicast basically becomes broadcast
and this kind of defeats the purpose of running a multicast application to begin with
that I want a case that I have one sender but many receivers
A one to many type of routing
paradigm or traffic paradigm as we sometimes call it
as opposed to a unicast communication which would be one to one
or in the case of broadcast which would be one to all
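On most Catalyst switches the feature that does this pruning is IGMP snooping, which is typically on by default; shown explicitly here for an example VLAN:

```
ip igmp snooping
ip igmp snooping vlan 10
```

With snooping enabled, the switch watches the IGMP join messages and only forwards the multicast stream out ports where a receiver actually asked for it, instead of flooding it like broadcast.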
Last but not least, this is where also your inline power would run
So for your IP phones or for your access points or
other network equipment they can get their electricity from
the network cable itself this is what the access layer is gonna be
in charge of This is known as inline power
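On a PoE-capable access switch, inline power is usually negotiated automatically, but it can be sketched explicitly per port (the interface number is just an example):

```
interface GigabitEthernet1/0/2
 description IP phone or access point
 power inline auto
```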
Next level of hierarchy is the distribution layer
or sometimes called the aggregation layer Where the idea is that we're taking all these
connections that go down to the end points and we're combining them together
into higher bandwidth links So if I have a thousand endpoints in the network
I don't need a thousand cables that literally run to the core of the network
So we're trying to combine them together into a smaller number of links
but generally higher bandwidth links Or a lot of times this is gonna be where our
layer 3 operations are gonna run
Where the end host when it looks at its default gateway
it could be pointing at the distribution layer So this is gonna be
our separation or what we sometimes call the demarcation
between the layer 2 network and the layer 3 routing network
Typically you'd see that you have multiple connections that go up to the core
and also multiple connections that go down to the access layer
So if there's a physical link failure I'm able to reroute
around it or make a layer 2 decision in order to use
like spanning tree, to choose one link over the other
Or likewise if one of my core switches goes down
then it doesn't take out the entire network Because I've got multiple levels of hierarchy
here Some of the key features here would
be the redundancy of the network So as I mentioned our
what's sometimes called the First Hop Redundancy Protocols
or the FHRPs We'll see there's different variations of this
that have different features like HSRP is a
Cisco specific implementation where VRRP is an open standard
So there may be some designs where you wanna use one versus the other
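As a sketch of HSRP on one of the two distribution switches (the SVI, addresses, group number, and priority are all hypothetical):

```
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
```

The end hosts point their default gateway at the virtual address 10.1.10.1; if this switch fails, the standby peer takes over that address and the hosts never notice.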
But the overall idea behind this is that if one of my distribution switches
goes down from the end host perspective, they're not
gonna know that They're able to still send traffic into the
network They're able to still make their phone calls
and it's not really going to affect them that one of the physical portions of the network
is down Another key point is gonna be
the aggregation of the bandwidth So if I have multiple
fast ethernet links or I have multiple gigabit ethernet links
going down to the access layer when I start to add all the bandwidths up
if I have the same connections in the distribution it means you're gonna have a lot of
oversubscription of the network Or basically
not everyone can send at 100% link utilization at the same time
because the bandwidths don't add up together So this is where our protocol that's known
as EtherChannel or the standardized version of this which
is defined by 802.3ad is used
which allows us to take multiple physical links
and then combine the bandwidths of them together but from the network point of view, it's gonna
look like one logical link or one
link for sending either the layer 2 or the layer 3 bandwidth
Where a lot of times in today's designs what you may see like in
the case of like a data center network is that at your access layer
when I'm going down to my end host this connection here may be like gigabit ethernet
Where if the hosts are now gigE attached
which means they can send at 1000 megabits per second
or 1 gigabit per second If I have 48 ports that are
facing downstream towards the actual end host when I connect up to my distribution switches
let's say we have distribution switch 1 and distribution switch 2
If I have potentially 48 gigs of bandwidth going down to the access layer
it doesn't make sense that these uplinks would be FastE
or maybe even that they would be single gigE So a lot of times in newer equipment
you may find that these uplinks are going to be 10 gigE
and there's new standards that are available to get us to 40 gigE
and then also the very newest ones that are coming out go up to 100 gigE
where we're trying to combine the bandwidths of the end hosts
and then send them upstream towards the distribution
Where again in this case, if this physical switch failed
I should be able to transparently reroute my traffic
to the other switch and then the network or the end hosts
are not actually gonna know that that failure occurred
so again with the last point with the bandwidth aggregation
if you don't have these huge bandwidth links
40 gigE or 100 gigE because these are still really really expensive
you may be taking smaller ones like let's say we have single gigE
but now I have multiple attachments Where I have 2 ports that are going
from access to distribution but I'm combining these together
into a 2 gigabit per second link This is what we do with either the
EtherChannel feature or the standard version of it which again
is called 802.3ad
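A rough EtherChannel sketch using LACP, the 802.3ad negotiation protocol (port and channel numbers are examples):

```
interface range GigabitEthernet1/0/49 - 50
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
```

Here mode active tells LACP to actively negotiate the bundle; the two physical gig links then show up to spanning tree and routing as one logical 2 gigabit per second interface.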
Sometimes this is also called NIC teaming
where your end servers could also run it like your VMware machines or your
Windows or Linux boxes when they attach to the network, they could have
multiple physical links
but they combine that together to look like one logical circuit
And this is the advantage of 802.3ad Cause it's an open standard, anybody
can implement it if they want to Another is gonna be load balancing
that if we have that same type of design where there's multiple access switches
let's say access 1, access 2 3 and 4
and these are then connecting to distribution level switches
let's say distribution 1 and 2
and I have all sorts of measures of connectivity to try to avoid
the case where if one switch goes down I can reroute around this
I may want some of my traffic to go this direction
and some of my traffic to go this direction which is sometimes called either load sharing
or load balancing or load distribution
Where some of the traffic goes one physical path of the network
some of the traffic goes the other physical path
there's different types of protocols that we could use to accomplish
this whether we're dealing with like layer 2 spanning
tree or layer 3 routing like with
EIGRP or OSPF there's different considerations we'll see
from not only a design point of view but also the actual implementation point of
view how we need to account for those
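One common way to get that load sharing at layer 2 is with per-VLAN spanning tree root placement, splitting the VLANs between the two distribution switches (the VLAN numbers and priority values here are just examples):

```
! On distribution switch 1: make it the root for VLANs 10 and 30
spanning-tree vlan 10,30 priority 4096
spanning-tree vlan 20,40 priority 8192
!
! On distribution switch 2 the priorities would be mirrored,
! so traffic in VLANs 20 and 40 forwards through the other path
```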
This is also gonna be used at the distribution layer
for what's known as topology summarization which is sometimes also called aggregation
and topology summarization generally is gonna be talking about
your IPv4 addressing Where down to the access layer
typically on a VLAN to subnet basis or
an IP subnet basis we're gonna have a one to one relationship or if hosts are in VLAN 10
they're gonna be in the same subnet and if hosts are in VLAN 20
they're gonna be in a different segment So as these IP networks
start to get advertised up to the layer 3 network
one of the things that we can do is try to hide the details
of the network topology to make the devices in the core of the network
have a much simpler decision of how I route traffic from point A to point
B Where maybe I have 10,000
hosts that are attaching to the network and I have a thousand different VLANs
which means I have a thousand different subnets maybe in the core of the network, I
could try to minimize this number
or maybe I can get it down to a 100 subnets or
50 subnets or something smaller so that when the core makes the forwarding
decision they don't have to look at 100,000 routes
in the routing table or look at some huge layer 2 MAC address table
in order to figure out how do I actually switch the traffic
Where the bandwidth aggregation is gonna be
a physical function of the network, a layer 1 aggregation
the topology summarization usually is a logical layer 3 summarization
So as we get to our details of EIGRP BGP and OSPF
we'll look at the different ways that we can
hide the details of the network and then end up with a less complicated decision
then lastly we have the core of the network This is what's also sometimes known as the
backbone where the idea behind this
is that we wanted to be able to forward just as quickly as possible
and be as reliable as possible without having to run tons of software features
Most of the time this is gonna be our hardware accelerated layer 3 switches
like the Catalyst 6500 or 7600 series routers
or like the Nexus 5000 or 7000 switches something that has lots of bandwidth capability
and can just switch as quickly as possible between
the different segments of the network Idea behind this is that we generally want
to get line speed or wire speed forwarding That if I had physical links
that are 10 gigabits per second I wanna make sure that I can
actually get my links to be like 95% utilized or potentially 100% utilized
Where there's not a difference between how fast I can forward in software
versus what the hardware of the interfaces can do Also a big key behind this,
is that if a failure happens of a physical link
or a physical device like another switch or another router
We wanna try to reconverge around this as fast as possible or basically
reheal the network A lot of times, this is gonna be based on
your layer 3 network design with things like EIGRP
or OSPF or a feature that is known as
MPLS traffic engineering and the fast reroute feature
where typically this is historically a service provider feature
but you can also implement this in the enterprise to avoid the case where one of my
switches goes down or one of my core routers goes down
I can immediately make a new decision and then try
to reroute around this where in certain cases you can get traffic
engineering to heal the network in about 50 milliseconds
for EIGRP and OSPF you can get somewhere around 1 second or maybe
less than that It really depends on what your application
needs are If you're dealing with some sort of financial
network that is actually accounting
for packets like down to the microsecond then this is really gonna be a big deal
But if you're just an enterprise network and you wanna make sure that if someone
is on a phone call, the phone call is not gonna disconnect
then usually that's a lot easier to solve than some of these really extreme examples
So in our case, we're mainly gonna be focusing on
how do we do this with the layer 3 routing protocol
or how do we do this with like layer 2 spanning tree
another is gonna be the bandwidth utilization
where one of the big problems in the core of the network historically
is that when you start to add up all the connection speeds of the
access layer or the distribution then if you don't have enough bandwidth in
the core you're not actually gonna be able
to use the full capability of the network which is sometimes also called oversubscription
which basically means that we're gonna try to guess for cases where
not every single host is gonna send 100% of their link speeds at the same time
Or if someone is making a phone call they may only need like 64 kilobits per second
of bandwidth I don't really need to offer them gigabit
ethernet Where they're gonna send at a thousand megabits
per second But this is
one of the big design considerations we need to take for the core
of the network is how are we actually gonna
distribute the bandwidth between different portions of the topology
Any questions so far? It's with label switching
it's kind of built in at the same time Basically the way that it works
this is not gonna be in the scope of the exam but
it's kind of a cool feature to know that there's a lot of interesting
problems that you can solve with it but let's say that you have
like a core router 1 that is attached
to router 2, to router 3 and to router 4
and I'm trying to get my traffic from point A to point B
So I have 2 different ways that I can go into the network
I can either go from 1 to 2 to 4
or I could go from 1 to 3 to 4 What you can do with
the traffic engineering and specifically the fast reroute
which is sometimes called link and node protection
which basically means that from point A to point B, I'm gonna
try to send my traffic this direction but before I send the traffic
I know that ultimately I'm gonna be leaving the network via
device 4 or via router 4 So I can precalculate a backup path
that goes another direction So if this device goes down or this link goes down
I can immediately switch over to the precalculated path
as opposed to waiting for OSPF or waiting for EIGRP to make a new decision
So you end up in this design where you try to take almost
every possible failure into account and you say if this link goes down,
then you're gonna go that way If this router goes down, then you're gonna
need to go through these other 2 hops in order
to get there but the key is that, this device
basically has precalculated those failure scenarios
so if it happens it can switch over really really fast
So next thing we would then need to talk about is what are the different devices
in a network and how do they fit into these different categories,
the access, distribution and core layers where in the case of ethernet LANs
this is typically where we're gonna be talking about
some of the legacy devices like our hubs or our repeaters or multiport
repeaters This is also
traditionally where our layer 2 bridges would sit
where today that role has been taken over by our layer 2 switches
Also we have our regular layer 3 routers that would be running like IP routing
or IPv6 routing And then also a combination of these which
could be like layer 3 switches or layer 3 or layer 4 switches
that as the layers of the OSI model go up it basically means the device is making a decision
based on more complicated inputs or has more visibility of like
is this a TCP session between web servers versus a phone call between a source and a
destination and you'll see that for a lot of devices nowadays
they can make their decision not only on the layer 3
like the IP destination but also by the layer 4 information
like the TCP port or the UDP port
And we'll see that the routers actually do this by default
We'll talk about that more when we get into layer 3 routing
So the first of which, the hubs and the repeaters this is what we consider to work at the
first layer of the OSI model which is the physical layer
Where the idea behind a hub or a repeater is that it's gonna take the physical
electrical signal that it's receiving and as it comes in, it's just gonna amplify
it out all of the other ports or basically
repeat it out all of the other ports Where a hub is sometimes known as a multiport
repeater Now the disadvantage of this
is that these devices are typically unintelligent and unmanaged
which means that they don't accept any configuration from us or don't accept management from us
but are basically just a layer 1 physical device
It would be kind of like if you're trying to amplify your
your TV cable signal If you plug in an amplifier
it's basically a single port repeater It's taking the electrical signal as it
comes in and then repeating and/or regenerating it again as it's going out
The main disadvantage of this in terms of ethernet networks
is that it doesn't make a decision on how it's gonna send the traffic
Just every type of traffic that's coming in is going out in all
the other directions What this means is that devices connected
to a hub are in what's known as the same collision
domain And this has to do with a
physical layer 1 and layer 2 contention mechanism of ethernet
that says if I wanna send a frame on to the layer 1 network
I have to check to make sure no one else is sending first
which in the case of ethernet, this is called our
Carrier Sense Multiple Access with Collision Detection
or CSMA/CD This is only needed
in half duplex ethernet networks because in full duplex networks
which we'll see with our switches it means that all devices are in
different collision domains but if you were to use a hub
or what we'll talk about later like in the case of wireless with WiFi
Everyone in the wireless channel is in the same collision domain
And they all have to contend for the same bandwidth
Or try to share the bandwidth between them This also means that everyone is in the same
broadcast domain which we'll see the broadcast domain is ultimately
gonna control when I send an ethernet
frame or a layer 2 packet out, who could possibly receive it?
So in the case of IPv4 over ethernet We need to do our IPv4 address
to our layer 2 MAC address mapping which is gonna be done by what protocol?
It's gonna be done by ARP the Address Resolution Protocol
So the broadcast domain is basically gonna control
when you send out an ARP, who's actually gonna get it?
Only devices in your same broadcast domain will be able to see that
because from an ethernet transmission point of view
We're not gonna be forwarding broadcasts between different VLANs
Or routers are not gonna forward broadcasts between different routed segments
So in the case of the hubs and the repeaters Not only are they sharing the same
physical wire which means they're in the same collision domain
They're also a part of the same logical layer 2
network or a part of the same broadcast domain
Next one up from there would then be the layer 2 bridges
or layer 2 switches Where the idea behind this is that
we want some more intelligence going down to the
to the access layer and we're gonna be forwarding our frames
based on layer 2 addressing or essentially based on our layer 2 MAC addresses
as opposed to just regenerating the
the electrical signal out all of the interfaces We'll see when we actually get to the layer
2 switches the way that they do this is by using a table
that is known as the CAM table or the content addressable memory
This is basically a layer 2 switching table
that's gonna hold all the MAC addresses of the network
So the CAM table or the MAC address table is ultimately gonna be how the switches
decide how they send their traffic or how they forward their traffic
It means the same thing. We'll see that when we talk about
some of the WAN media like frame relay or ATM
There are also switches that use the same type of logic
that as a packet comes in or a frame comes in
I'm gonna make the decision based on the layer 2 address
of where it's supposed to go Where in the case of frame relay
it would do this based on the data link connection identifier or the DLCI address
In the case of ATM, it uses what's known as the virtual
path identifier and the virtual channel identifier or the VPI/VCI
These 2 are basically the equivalent of MAC addresses
but for a different layer 2 protocol So a bridge or a switch, it doesn't necessarily
mean ethernet but it's the same type of logic
that we're making the decision of how traffic is sent
based on these different types of layer 2 addresses or
what we sometimes call the hardware address Now the key is that when the layer 2 switches
make their forwarding decision and then when they actually send the traffic
they don't actually change anything in the packets or in the layer 2 frames
This is why ethernet switching is also known as transparent bridging
Where when we compare the layer 3 routers versus the layer 2 switches
a layer 2 switch doesn't change anything when traffic is moving from point A to point
B Where a router is gonna be changing the traffic
every time it moves between interfaces It's gonna be building a new layer 2 header
Like if I'm going from ethernet to frame relay or I'm going from wireless to ethernet
the router's gonna have to change the frame format
which is sometimes known as the layer 2 packet rewrite
But layer 2 switches don't do this The only thing they need to know
is where's the packet going, where's the destination of it
and then it's gonna move it out a different interface
Based on this MAC address table or the CAM table.
We'd also typically see that the layer 2 switches are gonna be hardware accelerated
which use ASICs or basically a hardware implementation
at the chip level of the forwarding logic
to move the packets between the interfaces
A lot of times, this is why you'll see that a layer 2 switch would
be able to do forwarding at line rate like at a gigabit per second
or potentially 10 gigs per second But when you start to get into layer 3 routing
even though you may have a gigabit ethernet link on a router
most of the time, it's not gonna be able to send that fast
and it's because it has to make a more complicated decision as it actually sends
the traffic between the interfaces
Now when we're connecting to our layer 2 switches or our layer 2 bridges
we would consider the devices to be in the same broadcast domain
And we'll see in more detail a little bit later that we can change
this, like by changing VLAN assignments
or in the case of like frame relay or ATM we would have different virtual circuits
but from the most simplistic point of view if you take like an unmanaged switch
and plug all of your devices into it they should be able to send layer 2
broadcast to each other or be able to do like that
IP address to MAC address resolution with ARP
cause they're gonna be in the same broadcast domain
The key difference though is that they're not
in the same collision domain So in the case of switches, we can run the
ethernet LAN or the individual ports in what we consider full duplex mode
which means that there's a different wire that I use to send my traffic
than I use to receive my traffic So I can send and receive at the same time
because from a layer 1 physical point of view, it's using different
cables Whether it's a copper link
if it's using different twisted pairs or whether it's a fiber link that uses a different
send fiber versus a receive fiber
Full duplex just means you can send and receive at the same time
When we talk about wireless networks with WiFi
they always have to run in half duplex because your radio can either
transmit or receive but not both at the same time
And this is one of the big limitations of WiFi
that it's a half duplex media, you have to contend
with all of the other devices for the bandwidth in the air
So as I mention the layer 2 switches like the ethernet switches
what they are mainly gonna define is what we consider the broadcast domain
or basically who can directly talk to each other over ethernet
From an ethernet low level point of view this is what's controlled
by the specific destination address that is all Fs
So in the ethernet frame we have a source
address and we have a destination address so where's the packet coming from?
where's the packet going to? Just like we do in IP networking
where we could have a one to one communication which would be unicast
So from one source to one destination we could have a one to many
which would be a multicast We could have a one to all
which would be a broadcast communication Both layer 2 networks and layer 3 networks
have this notion, where in the case of of ethernet LANs
there are unicast MAC addresses which would be normally your MAC address
that's assigned to your NIC card There's also multicast MAC addresses
that we would use if we're trying to listen to a video feed
or we'll see like when we try to run routing protocols
like OSPF or EIGRP, those use multicast Where a broadcast is gonna be
from 1 source to all destinations at the same time
In the case of a layer 3 broadcast like with IP
this would be the all 255 address In layer 2 networking, this corresponds
to the all Fs address So when I send an ARP request
or when I send some other type of layer 3 broadcast
it means that the layer 2 switches are gonna be changing it into this destination
this layer 2 MAC address broadcast address
So the idea behind this transmission is that when the switch receives it
it always looks at the destination MAC address and based on that it's gonna
figure out where it actually wants to send this
If the destination is the all Fs, the broadcast the frame is gonna go out all of the ports
that it has in that broadcast domain
So it's gonna go out all ports except the one that it came in on
And this is what we consider the flooding procedure
or MAC address flooding, or layer 2 flooding Where the idea behind this, or the ultimate
goal is that this is how the ethernet switches
figure out what the topology looks like So where the hosts are actually located
and when I send traffic from host A to host B
what port it's supposed to come in on
and what port it's supposed to go out on
And it does this based on this automatic broadcast flooding
Now in the case of unmanaged switches which basically means that you don't configure anything,
you just plug it in and it automatically works
all of the ports are always gonna be in the same
broadcast domain But in our cases we're gonna be dealing with
managed switches which means that we can log in to the switch
make configuration changes change all sorts of different options and
administration We then have control over what are the broadcast
domain And this is what the switches define as the
virtual local area network, so the VLAN
So in the flooding procedure, when the switch receives this layer
2 frame that's going to all Fs
it's gonna go out every port except the one that it came in on
but only out ports that belong to that same VLAN and when we look at the difference between
the access layer and the distribution and the core of the network
this is one way that we can ensure that traffic on this portion of the
network isn't necessarily going to affect traffic
in another portion So by changing not only the
the collision domains upgrading from hubs to switches
we can manually define what are the broadcast domains
and eventually control who can talk to each other directly with layer 2 ethernet
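The learn-and-flood behavior described in this section can be condensed into a small sketch. The class name and the dictionary-based CAM table are illustrative choices, not how switch hardware actually stores this, but the decisions match the text: learn the source address, flood broadcasts and unknown unicasts within the VLAN only, and never send a frame back out its ingress port:

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class TransparentBridge:
    """Minimal sketch of VLAN-aware transparent bridging."""

    def __init__(self, ports):
        self.ports = ports   # port number -> VLAN, e.g. {1: 10, 2: 10, 4: 20}
        self.cam = {}        # (vlan, mac) -> port, learned from source addresses

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        vlan = self.ports[in_port]
        self.cam[(vlan, src_mac)] = in_port          # learn the source
        out = self.cam.get((vlan, dst_mac))
        if dst_mac == BROADCAST or out is None:
            # Flood: every port in the same VLAN except the ingress port
            return [p for p, v in self.ports.items() if v == vlan and p != in_port]
        return [out]                                 # known unicast: one port

sw = TransparentBridge({1: 10, 2: 10, 3: 10, 4: 20})
print(sw.receive(1, "aa", BROADCAST))   # [2, 3] - port 4 is in another VLAN
print(sw.receive(2, "bb", "aa"))        # [1] - CAM hit from the learning above
```

Note that the frame itself is never modified, which is exactly the "transparent" part of transparent bridging.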
We'll see the key difference between the the layer 2 switches and the layer 3 routers
is that frames that are in the same broadcast domain
or in the same VLAN those are gonna use layer 2 switching to reach
each other But for destinations that are in different
broadcast domains or different VLANs those are gonna need to use layer 3 routing
We'll see cases where the switch has both layer 2 VLANs assigned
but then also has layer 3 routed interfaces Or if I were to have a case where
I have some hosts A and B that are in VLAN
VLAN 1 and they're connected to a switch If they want to reach other hosts
that are in VLAN 2 the switch is gonna have to make a layer 3
routing decision in order to move traffic from one broadcast
domain to another Where if these were in the same VLAN
it would be using just its layer 2 switching or in other words the MAC address table
in order to make its decision So which one it's ultimately using
is gonna affect different portions of the design
It's going to affect, if there's a failure in the network
how long does it take me to figure that out and heal around it
because the layer 2 protocols are gonna use different convergence technique
than the layer 3 protocols are Correct.
So when we actually get to the configuration of this, the layer 3 routing
Sometimes this is called multilayer switching or MLS
This is kind of a legacy term that refers to basically
I have a VLAN interface that has an IP address assigned
and then I could use that to route packets between different VLANs
Sometimes these VLAN interfaces are called SVIs or Switch Virtual Interfaces
So when someone says interface VLAN that's what they're talking about,
it's a layer 3 routing interface that the switch has assigned
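As a rough sketch of that layer 2 versus layer 3 decision (the function and its inputs are invented for illustration; a real multilayer switch also consults its routing table, ARP cache, and so on):

```python
def forwarding_path(src_vlan, dst_vlan, svi_vlans):
    """Sketch of how a multilayer switch picks layer 2 vs layer 3.

    svi_vlans: the set of VLANs that have an SVI (interface VLAN)
    with an IP address assigned.
    """
    if src_vlan == dst_vlan:
        # Same broadcast domain: a MAC address table lookup is enough
        return "layer 2 switched"
    if src_vlan in svi_vlans and dst_vlan in svi_vlans:
        # Different broadcast domains: routed between the two SVIs
        return "layer 3 routed between SVIs"
    return "no path: a VLAN is missing its SVI"

print(forwarding_path(1, 1, {1, 2}))   # layer 2 switched
print(forwarding_path(1, 2, {1, 2}))   # layer 3 routed between SVIs
```

The point is just that the VLAN comparison happens first; routing only enters the picture when the broadcast domains differ.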
But we're gonna get into more details about
this So the question is, if there's
two hosts that have the same IP address but they're in different VLANs
they're gonna be in different broadcast domains So when we look at the specifics behind this
the assignment of who can talk to each other directly at ethernet
or directly over the broadcast domain this is always based on the VLAN number
So there are actually valid cases where you can have two hosts
with the same IP address but as long as they're not in the same layer
2 network that's gonna be fine
But we'll see like when we get to the router's configuration
if you assign it an address that someone else already
has on that ethernet LAN, it's gonna tell you
It'll say, "I heard an ARP reply from someone else with
the same address" So it's gonna cause the switches
not to be able to send the traffic between them
There's also some interesting cases we'll see a little bit later when we get to
security that this could be
a way someone could do what's known as a
man-in-the-middle attack or MITM attack To try to redirect traffic like
from your default gateway to me and then I could look inside the
data, from the source to the destination Like if I'm trying to steal someone's passwords
or whatever data that's in the network And there's different ways that we can prevent
this at layer 2, we'll get into more details
I'm sorry, can you still maintain.. What? It depends on the platform.
So some of the higher level platforms you can
It also depends on what type of features that you run at the same time
So the more features that you add like when we talk about QoS
if you have a very complicated quality of service policy
that says I want my voice traffic to do this versus my data traffic to do that
For every layer that the device has to look into, it takes more work
and we can consider this in the case of
the OSI model So with the OSI model, we have these 7 layers
At layer 1, your physical layer This is where your physical cables would be
or like wireless would also run here when you're
looking like at your actual radios At layer 2, this is where we
would have our ethernet switches or where we would have frame relay
or PPP or ATM like some of our WAN protocols
At layer 3, at the network layer, this is where we have IPv4
and IPv6 At layer 4 of transport,
this is where we would have TCP and UDP
Then the other ones, layers 5 through 7, this is gonna be the application
Now what you need to think about is that every time a router or
switch makes a decision it has all these potential layers
that it could make its decision on It could be something as
simple as the physical layer Like in the case of a hub
just takes your electrical signal and just repeats it out another interface
Where a layer 2 ethernet switch would normally look at
the source MAC address and the destination MAC address
and then figure out where I'm gonna send the frame based on the destination MAC
A layer 3 router is gonna have to look in the IP header
So look at what is the source, what is the destination
We could also include the TCP or the UDP ports
We could also include the application level things like an HTTP get
or an HTTP post Which you would do with like your
application caching engines and stuff like that
But when you look at this from like a programmatic point of view
it's gonna take more resources to look deeper into the frame
for every feature that I want to run So you can actually tell the router
to make its decision based on this information at layer 7
but the problem is it takes much more processing power to do that
because I have to make my decision at layer 2
then I have to make my decision at layer 3 and 4 and so on and so forth
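A toy parser makes the cost argument visible: each extra layer of classification means decoding more of the frame. The offsets follow the standard Ethernet II and fixed 20-byte IPv4 layouts, but this sketch ignores VLAN tags, IP options, and everything else a real forwarding path has to handle:

```python
import struct

def classify(frame: bytes, depth: int):
    """Decode one more header per layer of inspection requested."""
    keys = {"dst_mac": frame[0:6].hex(), "src_mac": frame[6:12].hex()}
    if depth >= 3:
        ip = frame[14:34]   # fixed 20-byte IPv4 header after 14 bytes of ethernet
        keys["src_ip"] = ".".join(str(b) for b in ip[12:16])
        keys["dst_ip"] = ".".join(str(b) for b in ip[16:20])
    if depth >= 4:
        # TCP/UDP source and destination ports sit right after the IP header
        keys["ports"] = struct.unpack("!HH", frame[34:38])
    return keys

# Hand-built example frame: ethernet + minimal IPv4 header + TCP ports
eth = bytes.fromhex("ffffffffffff" "001122334455" "0800")
ip  = b"\x45\x00" + b"\x00" * 10 + bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2])
tcp = struct.pack("!HH", 12345, 80)
print(classify(eth + ip + tcp, depth=4)["ports"])   # (12345, 80)
```

A depth-2 call touches 12 bytes; depth 4 touches 38 and does more parsing, which is the software analogue of why deeper classification costs forwarding performance.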
Where you can technically do it but the more features you add
the slower the forwarding is gonna be So this is typically why you would use some
sort of appliance, like Cisco has the ACE
module It's used for layer 7 forwarding
or, if you wanted to do something really advanced at your layer 4 decision
you'd wanna use something maybe like your 7600
or like your ASR, some really powerful routers that can make that decision
It's really gonna depend on the platform But anytime we get above layer 3 routing,
this is where the like the enterprise platforms
like the ISRs or ISR G2s You can make this decision
but it's gonna come at an extreme performance cost. Exactly, yeah.
And Cisco used to publish numbers on this
they don't really anymore but there's kind of a useful document you
could use If you look for Cisco router performance
in the portable product sheets, it says that
this document shows the raw switching
numbers for 64-byte IP packets and what it also says is that
this is with no services enabled So if you add access lists, if you're adding
encryption, compression, etc. performance will
decline significantly from the given numbers Basically what this allows you to do is
look at the different router numbers and then see where they fit in
of how fast you needed to forward Because if we look at like a low level
platform, let's say like an 1811
has ISR 1 It says the packets per second that you can
switch or the equivalent of megabits per second
even though you may have a fast ethernet interface It says the
best case forwarding is gonna be 35 megs Because the CPU
is too slow and it doesn't do a hardware implementation you can't get line rate when you use this
platform It's really only when you get into the real
high level ones Like the last one that they show here is
the 7600 with Sup720 It says that you can get an aggregate of
15 gigs where when you look at
the service provider platforms like the the GSR
at the engine 4 line card it says that you can get 10 gigs
because for an engine 4 card, the fastest speed is either 10 gigE or OC-192
So you get 100% line rate because it's hardware accelerated to do that
But the problem is, when you look at the enterprise, you're not actually using this
platform So you're gonna be using something else mid-range
like maybe 2900 or 3900
where when you look at these you take a huge performance hit when you try
to run those applications So with a 3845, let's say,
it says best case scenario you're gonna get an aggregate forwarding of
256 megs but that's with no features
So if you turn on network address translation, it's gonna go down
If you add an access list to the interface, it's gonna go down
If you do something like NBAR for layer 7 classification
it's gonna take a huge huge hit This is aggregate across all interfaces
All interfaces. So this is just how fast the CPU can forward
And the problem is these aren't really practical numbers
because this is for 64-byte packets which is just the smallest size
So when you look at like your web browsing
usually, a normal day to day application is gonna try to use as large of a frame size as possible
So the numbers are gonna even go down from here
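The 64-byte math itself is easy to reproduce. This is just the raw payload-bits conversion; it ignores the ethernet preamble and inter-frame gap, which is presumably also how the product-sheet numbers are stated:

```python
def pps_to_mbps(pps, packet_bytes=64):
    """Convert a packets-per-second rating to megabits per second."""
    return pps * packet_bytes * 8 / 1_000_000

def mbps_to_pps(mbps, packet_bytes=64):
    """The reverse: how many packets of a given size fit in a given rate."""
    return mbps * 1_000_000 / (packet_bytes * 8)

# A 256 Mbps aggregate rating at 64-byte packets is half a million pps
print(round(mbps_to_pps(256)))   # 500000
# And a 35 Mbps best case is only about 68k pps
print(round(mbps_to_pps(35)))    # 68359
```

The same pps rating translates to a much higher Mbps figure at 1500-byte frames, which is why the 64-byte numbers are a worst case rather than a typical one.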
Right. Aggregate between all of them. And that's what are the main differences between
the layer 2 switches versus the layer 3 routers
If you can keep your network making the decision at
as low a level in the frame as possible the faster you're gonna be able to forward
So if you can decide just based on the ethernet frame
99% of platforms are gonna support wire speed for this
Like you can actually get a 10 gig link that will run in a 100% utilization
But as soon as you have to start looking at not only the MAC address
but also the destination IP address It takes more processing power to do that
And then if you look at layer 4 information and on and on and on.