
1.1.a Describe basic software architecture differences between IOS and IOS XE
http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ios-xe-3sg/QA_C67622903.html
In IOS XE, IOS (15.0) runs as a single daemon on top of a Linux operating system, while other system functions run as discrete, separate processes in the host OS. Besides the stability benefits, this allows load balancing across multi-core CPUs.

1.1.a (i) Control plane and Forwarding plane


The control and data planes can be further separated, as the drivers for the ASICs can sit outside the IOS process. A standard set of APIs is made available to the control plane processes by the Forwarding and Feature Manager (FFM). In turn, the FFM programs the data plane via the Forwarding Engine Driver (FED).

1.1.a (ii) Impact to troubleshooting and performances


Separate processes allow for better fault isolation and reliability: one process dying won't necessarily kill the box.
Supports multithreading and multi-core CPUs.
Provides the same look and feel as IOS.
Wireshark and Mediatrace are included.

1.1.a (iii) Excluding specific platforms architecture


IOS XE allows the platform-dependent code to be abstracted from a single image. Because the drivers are outside of IOS, the IOS process itself is more platform independent.
Non-IOS applications can be either tightly integrated with or run alongside IOS on the same platform.
Service Points are available for integration with IOS.

1.1.b Identify Cisco express forwarding concepts


CEF maintains its tables in memory to facilitate the forwarding of packets without per-packet process switching. If CEF cannot handle a packet, it punts the packet to a slower switching path (ultimately process switching) for handling. Examples of things which require punting are:

IP header options;
The outgoing interface is not on a supported media type;
The packet is destined for the router itself;
The router has to send a reply (e.g. ICMP destination unreachable).
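To see how much traffic is being punted and why, a couple of show commands help (availability varies by IOS release; this is a pointer of mine, not something from the notes above):

! Packets handled outside the CEF path, with reasons (older classic IOS)
show cef not-cef-switched
! Per-path switching statistics, including punts (newer releases)
show ip cef switching statistics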

1.1.b (i) RIB, FIB, LFIB, Adjacency table


RIB - Routing Information Base. The normal routing table, viewed with show ip route. There can be multiple RIBs on a router (one per VRF).
FIB - Forwarding Information Base. Used by CEF. It is more optimised and much faster to parse. There is a FIB per VRF. Viewed with show ip cef.
LFIB - Label Forwarding Information Base. The MPLS version of the FIB - a faster-to-parse version of the LIB.
Adjacency Table - Maintains the layer 2 forwarding information for each FIB entry, meaning no ARP is needed at forwarding time. Viewed with show adjacency <interface> detail. Adjacent means reachable by a single layer 2 hop.
Adjacency Types
Adjacency Types

Cache Adjacency - The correct outbound interface and correct MAC address for the FIB entry; the MAC is either that of the next hop, or of the end host if it is on the same subnet;
Receive Adjacency - Packets destined for the router itself (including broadcasts and multicasts);
Null Adjacency - Packets to be sent to Null0 and dropped;
Punt Adjacency - Packets which cannot be CEF switched and must be punted to a higher switching process;
Glean Adjacency - Like a cache adjacency, but before the ARP has completed: the router knows the next hop, or knows the destination is directly connected, but does not yet have a MAC address. Glean adjacencies trigger an ARP request.
Discard Adjacency - No layer 2 mapping exists so the packet is dropped. No ICMP Unreachable response is sent.
Drop Adjacency - No layer 2 mapping exists so the packet is dropped. An ICMP Unreachable IS sent.
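To tie these tables together on a live box, the following verification commands can be used (the prefix and interface below are purely illustrative):

! RIB entry for the prefix
show ip route 10.1.1.0
! FIB entry for the same prefix, including load-sharing detail
show ip cef 10.1.1.0 255.255.255.0 detail
! Layer 2 rewrite information for each adjacency on an interface
show adjacency GigabitEthernet0/0 detail
! LFIB (MPLS forwarding)
show mpls forwarding-table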

1.1.b (ii) Load balancing Hash


Definitions

Prefix - Describes a destination IP network.
Path - A valid route to reach a destination. Each path has a cost.
Session - A unidirectional communication flow between two IP nodes. All packets in one session use the same source and destination addresses.

The Load Share Table contains 16 hash buckets which point to the paths. For equal cost paths, the
buckets are split evenly (for 2 paths, 8 buckets each; for 3 paths, 5 buckets each + 1 disabled bucket).
For unequal cost each path gets a different number of buckets according to the load sharing ratio.
Types

Per-destination (or per-session) - Original mode creates a 4-bit hash of the source and destination IP addresses, which controls bucket assignment. Universal (default) mode adds a router-local ID to the hash - this randomises the bucket assignments between routers along the path. Tunnel mode is for environments where tunnels are used, which means there are very few source/destination pairs.

ip cef load-sharing algorithm original

ip cef load-sharing algorithm tunnel

ip cef load-sharing algorithm universal

Per-packet - Round-robins each packet through the buckets. Not recommended, as it causes packet reordering, which means more overhead for TCP and potential data loss for UDP.

ip load-sharing per-packet

Per-port - Adds the layer 4 source and/or destination ports into the 4-bit hashing function to create a more even distribution.

ip cef load-sharing algorithm include-ports destination

ip cef load-sharing algorithm include-ports source

ip cef load-sharing algorithm include-ports source destination
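To check which path a particular flow will hash to, show ip cef exact-route is useful; the addresses below are examples only (port keywords can also be supplied when an include-ports algorithm is configured):

show ip cef exact-route 10.1.1.10 10.2.2.20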

1.1.b (iii) Polarization concept and avoidance


If all routers make the same decision based on a source/destination hash, then when the first router allocates two streams to link 1, every subsequent router will also allocate those same two streams to link 1. This causes some links to be permanently under-utilised, and can end up causing congestion on over-utilised links.
Avoidance

Use different load balancing algorithms across different routers in the network so that each router makes an independent decision.
Alternate between an even and odd number of links between each network layer - if every layer is linked by two paths then distribution could be polarised; if the number of paths differs then the CEF bucket allocation will change.
Use the universal algorithm - this adds a unique local ID into the hash algorithm, meaning each router will make an independent decision (see the sketch below).
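As a small sketch of that last point: the universal algorithm takes an optional seed ID, so two routers can be forced to hash differently even if their auto-generated IDs happened to match. The IDs below are arbitrary examples.

! R1
ip cef load-sharing algorithm universal 11111111
! R2
ip cef load-sharing algorithm universal 22222222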

1.1.c (i) Unicast flooding


One of the main causes is asymmetric routing, which is covered in 1.1.c (iii). Useful document here: http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6000-series-switches/23563-143.html

The primary impact of this is that all hosts connected in that VLAN receive the traffic. Suppose two 10 Gbps servers are communicating and asymmetric routing is taking place; if there is a 100 Mbps host on the same switch, it is going to receive ALL traffic from the server, effectively saturating its link.
STP TCNs (topology change notifications) cause forwarding tables to age out more quickly than their normal timers. If there is a flapping link causing STP reconvergence, this can cause excessive unicast flooding. Configuring PortFast on all edge interfaces limits TCNs.
CAM Overflow is another cause. It is unlikely to naturally occur in modern switches, as there is
usually sufficient memory to facilitate the needs of most networks. However, CAM overflow attacks
can be caused maliciously. When the MAC address table grows so large that it exceeds the size of the
Content Addressable Memory, then no new MAC addresses can be learned, which causes unicast
flooding. This can be protected against using port-security.
Selected ports can be blocked from unicast flooding using switchport block unicast. This may be desirable in highly secured networks and where PVLANs are used. A configuration sketch of these mitigations follows.
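A minimal sketch of the mitigations mentioned in this section, applied to an access port (the interface name and port-security limits are illustrative, not from the notes above):

interface GigabitEthernet0/10
 switchport mode access
 ! Edge port - avoid generating TCNs on link flaps
 spanning-tree portfast
 ! Protect the CAM table against MAC flooding
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict
 ! Optionally stop unknown unicast being flooded out of this port
 switchport block unicast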

1.1.c (ii) Out of order packets


Time-sensitive UDP applications do not buffer packets for very long, so they do not cope well with reordered packets.
Excessive packet reordering in TCP can cause the receiver to send duplicate ACKs to trigger fast
retransmit. This causes excessive overhead in both CPU and bandwidth, as well as causing the
sender to reduce its window size. The receiver also has to buffer and reorder packets; this takes time,
memory and CPU cycles.

1.1.c (iii) Asymmetric routing

Asymmetric routing is when the return traffic takes a different path through the network than the
forward path. This can cause issues with NAT and firewalls among other things. If one link is highly
saturated, or higher delay (one Ethernet, one sat link for example), then asymmetric routing can cause
major problems with delay and jitter. It also causes unicast flooding, as described above.

1.1.c (iv) Impact of micro burst


Microbursts are small periods of time during which the traffic load is exceptionally high. They can cause buffer queues to fill and overflow, causing tail drop and packet loss (seen as overrun / no-buffer drops). They can be compensated for by traffic shaping. They can be difficult to diagnose, as the 1-minute utilisation of the link could be fairly low; a sketch of one detection aid follows.
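One small aid (my own suggestion, not from the notes above) is to shorten the interface load averaging interval so bursts stand out, and then watch the overrun / no-buffer counters:

interface GigabitEthernet0/1
 load-interval 30
! Then check the input/output rates and the drop counters
show interfaces GigabitEthernet0/1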

1.1.d Explain IP operations



1.1.d (i) ICMP unreachable, redirect


ICMP Unreachable
Generated by a host or gateway to indicate that a packet was discarded because the destination is unreachable. It will not be generated for multicast traffic. It is sub-divided into 16 codes (0-15) as follows:

Code 0 - Network Unreachable: The datagram could not be delivered to the network specified in the network ID portion of the IP address. Usually means a problem with routing, but could also be caused by a bad address.
Code 1 - Host Unreachable: The datagram was delivered to the network specified in the network ID portion of the IP address, but could not be sent to the specific host indicated in the address. Again, this usually implies a routing issue.
Code 2 - Protocol Unreachable: The protocol specified in the Protocol field was invalid for the host to which the datagram was delivered.
Code 3 - Port Unreachable: The destination port specified in the UDP or TCP header was invalid.
Code 4 - Fragmentation Needed and DF Set: The MTU is smaller than the packet size, and the router is not allowed to fragment the packet. This message type is most often used in a clever way, by intentionally sending messages of increasing size to discover the maximum transmission size that a link can handle. This process is called path MTU discovery.
Code 5 - Source Route Failed: Generated if a source route was specified for the datagram in an option, but a router could not forward the datagram to the next step in the route.
Code 6 - Destination Network Unknown: Not used; Code 0 is used instead.
Code 7 - Destination Host Unknown: The host specified is not known. This is usually generated by a router local to the destination host and usually means a bad address.
Code 8 - Source Host Isolated: Obsolete, no longer used.
Code 9 - Communication with Destination Network is Administratively Prohibited: The source device is not allowed to send to the network where the destination device is located.
Code 10 - Communication with Destination Host is Administratively Prohibited: The source device is allowed to send to the network where the destination device is located, but not to that particular device.
Code 11 - Destination Network Unreachable for Type of Service: The network specified in the IP address cannot be reached due to an inability to provide the service specified in the Type of Service field of the datagram header.
Code 12 - Destination Host Unreachable for Type of Service: The destination host specified in the IP address cannot be reached due to an inability to provide the service specified in the datagram's Type of Service field.
Code 13 - Communication Administratively Prohibited: The datagram could not be forwarded due to filtering that blocks the message based on its contents.
Code 14 - Host Precedence Violation: Sent by a first-hop router (the first router to handle a sent datagram) when the Precedence value in the Type of Service field is not permitted.
Code 15 - Precedence Cutoff In Effect: Sent by a router when receiving a datagram whose Precedence value (priority) is lower than the minimum allowed for the network at that time.

ICMP Redirect
Used to notify a host that a better next hop is available for exiting the network. If two routers on a segment share routing information and only one of them is connected to the external network, it makes little sense for a host's traffic to take two hops to exit the network, so the first router will send an ICMP redirect back to the host to tell it to use the other router.
Cisco routers send ICMP redirects when all of these conditions are met:

The interface on which the packet comes into the router is the same interface on
which the packet gets routed out.

The subnet or network of the source IP address is on the same subnet or network
of the next-hop IP address of the routed packet.

The datagram is not source-routed.

The kernel is configured to send redirects. (By default, Cisco routers send ICMP
redirects. The interface subcommand no ip redirects can be used to disable ICMP
redirects.)
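For example, redirects are commonly disabled on interfaces running first-hop redundancy protocols or facing untrusted hosts (the interface name is illustrative):

interface GigabitEthernet0/0
 no ip redirects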

1.1.d (ii) IPv4 options, IPv6 extension headers


IPv4 Options - primarily used for network testing / debugging.

Record Route - Each router on the route records its address in the header. The destination then returns this information to the originator. It is limited to 9 hops, because that is all the header can hold.

Source Route - The sender specifies the route through the network. Uses the same format as record route, only the sender pre-populates the IPs in the header. Can be Strict (the path has to be exactly as specified, hop by hop) or Loose (multiple hops are allowed between the addresses in the list).

Timestamp - The same as record route, but each router also adds a timestamp.

IPv6 Extension Headers:

Hop-by-Hop EH - used for the support of jumbograms and, via the Router Alert option [3], is an integral part of the operation of IPv6 multicast through Multicast Listener Discovery (MLD) and of RSVP for IPv6.

Destination EH - used in IPv6 Mobility as well as in support of certain applications.

Routing EH - used in IPv6 Mobility and in source routing. It may be necessary to disable IPv6 source routing on routers to protect against DDoS (a sketch follows this list).

Fragmentation EH - critical in support of communication using fragmented packets (in IPv6, the traffic source must do the fragmentation; routers do not fragment the packets they forward).

Mobility EH - used in support of the Mobile IPv6 service.

Authentication EH - similar in format and use to the IPv4 authentication header defined in RFC 2402 [4].

Encapsulating Security Payload EH - similar in format and use to the IPv4 ESP header defined in RFC 2406 [5]. All information following the Encapsulating Security Header (ESH) is encrypted and, for that reason, is inaccessible to intermediary network devices. The ESH can be followed by an additional Destination Options EH and the upper layer datagram.
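As mentioned for the Routing EH, source routing is often disabled as a hardening step. A minimal sketch (global configuration):

! Ignore IPv4 source-route options
no ip source-route
! Do not process IPv6 source routing (Routing Header type 0)
no ipv6 source-route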

1.1.d (iii) IPv4 and IPv6 fragmentation

IPv4
When a router receives a packet and the MTU of the output interface is smaller than the size of the packet, the router will fragment the packet if the DF bit is not set. The MF (more fragments) bit is set on all fragments except the last one, and the fragment offset field is set to facilitate reassembly. If the DF bit is set and the packet requires fragmentation, an ICMP destination unreachable (fragmentation needed but DF set) message is sent back to the originator and the packet is dropped. Reassembly is performed by the end receiver.
IPv6
IPv6 routers do not perform fragmentation. Any packets which are too large for the MTU of the outgoing interface are dropped, and an ICMPv6 type 2 (Packet Too Big) message is sent to the originator. All headers up to and including the Routing EH (the unfragmentable part) are included in every fragment. The offset and more-fragments fields are used in the same way as in IPv4. All fragments must be received within 60 seconds.

1.1.d (iv) TTL


Time To Live (TTL) is an 8-bit field in an IP packet. The initial value is set by the sender (defaults differ per operating system). Every layer 3 hop within a network decrements the TTL by 1. If the value reaches 0, the packet is dropped and an ICMP Time Exceeded message is returned to the originator.
The primary function of this is to prevent traffic looping indefinitely around a layer 3 network. Note that this is in the IP packet, and therefore is not examined at the switch level, so it does nothing to help with layer 2 loops.
TTL is used by traceroute (ICMP, TCP or UDP). A packet is sent to the end destination with TTL=1, and the originator of the Time Exceeded message is the first hop. A second packet is sent to the same end destination but with TTL=2. This continues until the end destination is reached.

1.1.d (v) IP MTU


Maximum Transmission Unit (MTU) is the largest size of a packet that can be transmitted
out of an interface without fragmentation.

Optimum MTU depends on the network traffic; a large MTU causes a longer serialization
delay which may be unacceptable for voice traffic. However, a smaller MTU can be less
efficient when large volumes of data are being moved.
TCP - I thought I'd glance over this section. Turns out there was some stuff I'd never heard of, such as the bandwidth delay product.

1.1.e (i) IPv4 and IPv6 PMTU


Path MTU Discovery is the process of sending packets with the DF bit set and adjusting their size based on the ICMP responses received - ICMP Destination Unreachable (fragmentation needed but DF set) for IPv4, or Packet Too Big for IPv6. The largest size that does not trigger such a message is the MTU for the path. Note that this relies on ICMP traffic being permitted through the network.
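On IOS itself, PMTUD for TCP sessions that the router originates (BGP, for example) is disabled by default and can be enabled globally; a quick sketch:

ip tcp path-mtu-discovery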

1.1.e (ii) MSS


The Maximum Segment Size is the maximum amount of data, in bytes, that can be received in a single TCP segment, excluding the TCP and IP headers. This is separate from the MTU - a large TCP segment can be fragmented across multiple IP packets; the MSS refers to the reassembled segment size.
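Where PMTUD is broken (typically over tunnels), a common workaround is to clamp the MSS in transiting TCP SYNs. The value below assumes a 1500-byte MTU minus tunnel and TCP/IP overhead and is only an example:

interface Tunnel0
 ip tcp adjust-mss 1360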

1.1.e (iii) Latency


Latency is the time it takes to get from end to end. This can be affected by congestion, serialization,
queueing, propagation delay, and many other things. Good document
here: http://www.o3bnetworks.com/media/40980/white%20paper_latency%20matters.pdf

1.1.e (iv) Windowing


The window size is the amount of unacknowledged data that can be in transit at a given time. This is negotiated between the two hosts. While connectivity is reliable, all packets are being received, and upper layer protocols are accepting the packets and keeping the buffers empty, hosts will attempt to increase the window size. In the event of missing packets, filling buffers, etc., the hosts will reduce the window size.

1.1.e (v) Bandwidth delay product


The bandwidth multiplied by the round trip time gives a value for how much data should be in transit in the network. This would be the optimum window size: the amount of data to send before you should reasonably expect an acknowledgement.
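A quick worked example (numbers picked purely for illustration): on a 100 Mbps path with a 40 ms RTT, BDP = 100,000,000 bits/s x 0.04 s = 4,000,000 bits = 500,000 bytes, so a window of roughly 500 KB is needed to keep the pipe full. With the classic 64 KB maximum window (no window scaling), the same path would cap out at about 65,535 bytes / 0.04 s, roughly 1.6 MB/s or about 13 Mbps.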

1.1.e (vi) Global synchronization

During congestion, TCP senders reduce their window sizes, backing off the amount of bandwidth they are using. All TCP streams behave the same way, so eventually they become synchronised, ramping up to cause congestion and backing off at roughly the same rate. This causes the familiar sawtooth bandwidth utilisation graphs. RED and WRED can help alleviate this; a sketch follows.
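A minimal WRED sketch on an egress policy (the policy name and interface are illustrative; the exact options available vary by platform):

policy-map WAN-EGRESS
 class class-default
  fair-queue
  random-detect dscp-based
interface GigabitEthernet0/1
 service-policy output WAN-EGRESS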

1.1.e (vii) Options

Maximum Segment Size - only used in the SYN and SYN/ACK phases to negotiate the MSS for the session.
Window Scaling - an addition to the window size field in the header to facilitate windows larger than 64 KB.
Selective Acknowledgements - SACKs can acknowledge specific parts of the stream, so that only the missing bytes are retransmitted in the event of errors. Traditional ACKs are cumulative and acknowledge only the highest contiguous byte received, so if packets arrive out of order with an earlier segment missing, a SACK allows just that segment to be requested.
Timestamps - used so that TCP can measure delay. The original reference timestamp is negotiated during the SYN and SYN/ACK phase.
NOP - No Operation. Used to pad and separate the different options.

This topic made me think about the starvation stuff. I suppose it is pretty obvious that UDP wouldn't back off if WRED were employed, but it's something I never really thought about.
I found a few good videos on YouTube which gave some good RTP/RTCP overviews.

1.1.f (i) Starvation


TCP Starvation / UDP Dominance is experienced in times of congestion when UDP and TCP streams are assigned to the same class. Because UDP has no flow control to make it back off in the event of congestion, but TCP does, TCP ends up backing off and ceding ever more bandwidth to the UDP streams, to the point where UDP takes over completely. WRED does not help here, as the drops caused by WRED do not make UDP streams slow down.
The best way to resolve this is to classify UDP and TCP streams into separate classes as much as possible; a sketch follows.
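A rough sketch of splitting them (the class names, ACLs and bandwidth figures are all made up for illustration):

ip access-list extended UDP-TRAFFIC
 permit udp any any
ip access-list extended TCP-TRAFFIC
 permit tcp any any
class-map match-all UDP-CLASS
 match access-group name UDP-TRAFFIC
class-map match-all TCP-CLASS
 match access-group name TCP-TRAFFIC
policy-map SEPARATE-TCP-UDP
 class UDP-CLASS
  bandwidth percent 30
 class TCP-CLASS
  bandwidth percent 60
  random-detect
interface GigabitEthernet0/1
 service-policy output SEPARATE-TCP-UDP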

1.1.f (ii) Latency


Latency is end-to-end delay. Since UDP is connectionless, the only real effect of latency on UDP streams is that there will be a greater delay between sender and receiver. Jitter is the variance in latency; this is what causes problems for UDP streams. Jitter is smoothed by buffering.

1.1.f (iii) RTP/RTCP concepts


RTP - Real-time Transport Protocol. Encapsulation in UDP keeps it fast and real-time - it can't afford to wait for retransmits, packet reordering, etc.
RTCP - Real-time Transport Control Protocol. Provides feedback on the quality of the RTP stream: packet counts, jitter, RTT, etc.
As the blueprint goes, this is, in my opinion, the vaguest topic to write about. It depends on an understanding of the underlying topics and how the changes will impact the existing network. I have skimmed through this, with the intention of covering the topics in their actual topic sections. I am pretty used to evaluating impact - I seem to spend my entire life writing change orders and determining disruptiveness.

1.2.a Evaluate proposed changes to a network


This is a difficult section to write a paragraph about, as it is based on the understanding of the core
topics, analysing the proposed changes and deciding how they will impact / affect the existing network
infrastructure. These will be covered in more detail in their specific sections.

1.2.a (i) Changes to routing protocol parameters


Could include things like metrics, additional routes, redistribution. How these changes will impact
existing services, etc.

1.2.a (ii) Migrate parts of a network to IPv6


Involves looking at IPv6 transition mechanisms: 6to4 tunnels, Teredo, ISATAP, dual stack, etc. Impact on existing services, interoperability, etc.

1.2.a (iii) Routing protocol migration


Manipulating metrics, redistribution, administrative distances, etc.

1.2.a (iv) Adding multicast support


Selecting the right place for the RP - bottlenecks, bandwidth, optimisation.

1.2.a (v) Migrate spanning tree protocol


Interoperability between legacy and rapid spanning-trees, MST regions, etc.

1.2.a (vi) Evaluate impact of new traffic on existing QoS design

Evaluating existing utilisation, correct classification and marking, etc.

1.3.a Use IOS troubleshooting tools


1.3.a (i) debug, conditional debug
Debugs can be used on a wide range of functions (debug ?). Some debugs can be very noisy. Debug conditions can be set to filter out some of the noise - for example, debug condition interface fa0/0 will limit the debug output to traffic using that interface. undebug all does not remove conditions; they must be specifically removed with the undebug condition command. Debugs can be quite processor intensive, so it is wise to check whether the device can handle it, and cancel it when it isn't required. A sketch of the workflow follows.
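A sketch of the workflow (the interface name is an example; the exact syntax for removing a condition varies slightly between releases):

! Limit debug output to a single interface
debug condition interface FastEthernet0/0
debug ip packet
! Stop the debugs...
undebug all
! ...then remove the condition separately (by its condition number)
undebug condition 1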

1.3.a (ii) ping, traceroute with extended options


The extended ping / traceroute allow the use of specific IP headers to test different network scenarios.
Options are:
The fields below are those prompted for by extended traceroute (extended ping prompts for a similar set):

Protocol [ip]: - Prompts for a supported protocol. Enter appletalk, clns, ip, novell, apollo, vines, decnet, or xns. The default is ip.
Target IP address: - You must enter a host name or an IP address. There is no default.
Source address: - The interface or IP address of the router to use as a source address for the probes. The router normally picks the IP address of the outbound interface to use.
Numeric display [n]: - The default is to have both a symbolic and numeric display; however, you can suppress the symbolic display.
Timeout in seconds [3]: - The number of seconds to wait for a response to a probe packet. The default is 3 seconds.
Probe count [3]: - The number of probes to be sent at each TTL level. The default count is 3.
Minimum Time to Live [1]: - The TTL value for the first probes. The default is 1, but it can be set to a higher value to suppress the display of known hops.
Maximum Time to Live [30]: - The largest TTL value that can be used. The default is 30. The traceroute command terminates when the destination is reached or when this value is reached.
Port Number [33434]: - The destination port used by the UDP probe messages. The default is 33434.
Loose, Strict, Record, Timestamp, Verbose [none]: - IP header options. You can specify any combination. The traceroute command issues prompts for the required fields. Note that the traceroute command will place the requested options in each probe; however, there is no guarantee that all routers (or end nodes) will process the options.

1.3.a (iii) Embedded packet capture


Embedded Packet Capture can be used to monitor packets flowing to, through and from the device. Captures can be analysed on the device, or exported to a PCAP file for opening in Wireshark. The full command reference is here: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/epc/configuration/15-mt/epc-15-mt-book/nm-packet-capture.html
1. Set a buffer: monitor capture buffer MYCAPTURE size 256 max-size 100. Size is the size of the buffer, and max-size is the maximum size per element. Access lists, packet limits etc. can be included in this command.
2. Set a capture point: monitor capture point ip cef MYPOINT fa0/1 both.
3. Associate the capture point and buffer: monitor capture point associate MYPOINT MYCAPTURE.
4. Start the capture: monitor capture point start MYPOINT.
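To finish up, stop the capture and either review it on-box or export it for Wireshark (the TFTP URL is only a placeholder):

monitor capture point stop MYPOINT
show monitor capture buffer MYCAPTURE dump
monitor capture buffer MYCAPTURE export tftp://192.0.2.10/MYCAPTURE.pcap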

1.3.a (iv) Performance monitor


Configured in a similar way to NetFlow, using flow records and flow monitors, Cisco Performance Monitor can be used to monitor packet loss, delay, jitter, etc. It is able to export these records and generate SNMP alerts based on thresholds.
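A rough medianet-style sketch (the record, monitor, class and interface names are illustrative, and the exact match/collect fields available vary by release):

flow record type performance-monitor PERF-REC
 match ipv4 protocol
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 collect transport packets lost counter
 collect transport round-trip-time
 collect transport rtp jitter mean
flow monitor type performance-monitor PERF-MON
 record PERF-REC
class-map match-all VOICE-RTP
 match protocol rtp
policy-map type performance-monitor PERF-POL
 class VOICE-RTP
  flow monitor PERF-MON
interface GigabitEthernet0/1
 service-policy type performance-monitor input PERF-POL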
This is another difficult section in the blueprint to write about. I find troubleshooting techniques and methodologies to be quite personal; no two people's brains work the same way. I guess this is based on how I do things and some tips I've received from a few people over the years.

1.3.b (i) Diagnose the root cause of networking issue


(analyze symptoms, identify and describe root cause)

Read the information in the support ticket very carefully, and take into consideration all of the symptoms. Take particular note of anything that may have changed around the time the symptoms started. This should give you a rough area to begin in: L2, L3, a specific routing protocol, etc. Verify that the fault is as described. Either start hop by hop, or use a split-half approach, to try to isolate the problem.

1.3.b (ii) Design and implement valid solutions according to constraints
Within the guidelines of what is permitted within the scope of the network design (or exam question!), draft a solution. Write it down if needed for clarity. Review the solution in your head before implementing. Think outside the box - is the solution you are proposing going to have any knock-on effects on other services? Implement. If it doesn't work, don't jump in head first and start changing things. Step back, reassess, and start the process again. Otherwise you end up changing so many things you don't know what you did.

1.3.b (iii) Verify and monitor resolution


Use the appropriate show commands to verify that everything has worked as expected. Test end to
end connectivity.
This is a very short section! I didn't see the point in harping on about Wireshark; I use it most days at work. And the IOS embedded packet capture was discussed at length further up the blueprint (i.e. in a previous blog post).

1.3.c Interpret packet capture


1.3.c (i) Using Wireshark trace analyzer
Packet capture can be obtained using a hub, or more commonly a SPAN / RSPAN port. Functionality includes filtering, tracing sessions, reassembling conversations, etc. Knowing the protocols, and therefore what to expect to see, is key. Actually using Wireshark is a whole other video series!

1.3.c (ii) Using IOS embedded packet capture


As described above in 1.3.a (iii) Embedded packet capture. In my experience it is almost always better to save the capture as a PCAP, export it and open it in Wireshark. If needed, show monitor capture can provide the information on-box.
