CCIE Bootcamp Review

It’s been almost 3 months since I sat for my CCIE Bootcamp with Marko Milivojevic, and I wanted to take the time to write a review of the class to help others who are considering a bootcamp before their CCIE lab attempts.

First, I want to give you a little background on how I was studying for my CCIE before I sat in on the bootcamp. I was using a particular vendor’s training solutions, including their workbooks, their training videos, and their rack tokens, and all seemed to be going well. I was doing relatively well with the workbook material and I was able to finish tasks in reasonable amounts of time. Nothing that I was tracking with any accuracy, though. Just working through training videos and workbooks and doing my best to keep pace with the timeline I had set for myself for my CCIE lab date. I’d read the blog posts about speed and accuracy and the other requirements for passing, and I felt confident that if I stayed the course, I’d have a healthy chance at passing. Maybe not the first time, but the second or third for sure. So, after passing my CCIE Written at Cisco Live last year, I started my usual routine and targeted this February to sit for the real thing.

From June through November, I’d put in around 200-250 lab hours and worked a decent way through some workbooks, labs and videos as I’d said, and I was fairly confident when I flew out to attend the Bootcamp in RTP. On the flight I’d decided to read up on some BGP design as I was working through some design challenges for work. And I specifically remember reading up on conditional injection and some path manipulations methods. I landed, got my rental, and made it to my Dad’s house – who conveniently lives about 30 minutes from Cisco’s RTP campus. We went out to dinner that Sunday night, and caught up on our lives as we hadn’t seen each other in a few years.

The next day came and I have to say I was excited to see what the class had in store. I was up a bit early and out on my way to the class. We had the benefit of having the bootcamp on the RTP campus, so I was able to find out where I would be going to take the lab, and take that unknown away. Once I got to the campus, I was quickly able to find the building that the bootcamp was going to be in. I walked in and was met by about 10 other people who were there to sit for the bootcamp as well. We all got our name tags from the greeter and we waited patiently for Marko to arrive.

Once Marko arrived he greeted us with a smile and a handshake, and we found our way to the conference room that we would be working in for the next two weeks. There was some back and forth banter for about 20-30 minutes as we first arrived, and then Marko dove right in. We did a round-the-room introduction and all learned a little about each other, helping to break what was a pretty thick slab of ice in the room. We were also asked to identify what we thought were our weak points in theory, so Marko could shape the class toward what would benefit us all the most.

He was able to gear the class toward our concerns and help us identify the gap between theoretical and experiential knowledge. And I can NOT stress this enough: that gap was far larger than I’d thought. I vividly remember him putting an example on the board that consisted of 3 routers (none of his examples consisted of much more than this, until we got into the BGP examples later in the week), and after some questions from Marko and a lot of uneasy quiet in the room, we weren’t able to figure out the problem in any reasonable amount of time. A room full of 15 network engineers, all striving to obtain one of the industry’s most recognized certifications for doing just that, network engineering, had just been shown that we didn’t know nearly enough about a protocol we all work with on a, probably, regular basis. Now, how much of that was “I don’t want to look like an idiot in front of my peers” syndrome? I can only speak for myself, and I say it was a lot. And I would’ve looked like an idiot most of the time if I had decided to speak up.

However, I have to stress, this was not Marko’s intention. He wasn’t trying to make us feel bad about how much we knew about a particular protocol, or if we knew how that protocol would’ve reacted in a specific scenario or topology. He wanted to show us what to strive to become. What it would take to be considered a CCIE. It was on that first day that I’d realized I wasn’t doing NEARLY enough in my studies and I needed to up my game drastically.

Fast forward to Friday and 4 more days of realizing how much I needed to increase my efforts, and we wrapped up the week of theory. During that week I kept thinking to myself, “I don’t think I’ll ever be able to pass this exam”, on more than a few occasions. A few of us from class decided to get some dinner that night and I had a conversation with Marko. I remember telling him that of everything I’d taken away from his class so far, one thing had hit home the most: I realized how much I have to rely on myself and no one and nothing else. When you’re in the heat of the moment, it’s the trust you have in yourself that lets you methodically walk through a problem and figure out exactly what needs to be done to fix it. There is no Google, there is no co-worker to collaborate with. You have yourself and some horribly organized, and sometimes poorly written, Cisco documentation, and that’s it.

We had the weekend to recover and started in on the labs Monday morning. All I have to say about those labs is that they are pure evil. Pure, calculated, evil. They are designed to push you to your absolute limits in terms of mental stamina, and when you think you’ve got it figured out, you don’t. Go back and try again. I thought to myself, and even texted my wife a few times, “I don’t want to / can’t get this certification and I don’t know why I am wasting my time”. She would text me back and encourage me to stick with it and that everything would be OK. I even thought about leaving the class, that I was wasting my time and I didn’t want the stupid certification. What was supposed to be a 2 hour troubleshooting section took me the better part of 12 hours, and don’t even get me started on the configuration!

That said, the labs were tough, but they were fair. Nothing in the labs was anything we shouldn’t have expected to see. It was my failure to close the gap between theoretical and experiential knowledge that was breaking me. Circling back to the reliance on myself: it was me that was failing me. Nothing more. It was then that I realized a lot of what I’d read online about time-saving techniques for typing and the like was just a sham. After all was said and done, I took that one overarching theme back to my studies outside of the class. I needed to up my personal game and investment.

Fast forward to today and I’m writing this post in my basement office at 0045 after a 3 hour QoS lab, and I’m feeling much more confident about where I am in terms of the lab. I hope to pass in a single attempt, but will be completely happy if it takes me multiple tries. Reason being, in the time between the bootcamp and now, I’ve slowed down and tried to listen to what my peers have been telling me from the beginning. Obtaining your CCIE is much less about the number and far more about the journey.

To sum this up in a few words, go take the bootcamp with Marko. I regret nothing about the time I spent in his class and I believe it made me a better person, both personally and professionally. So much so, in fact, that if he offers an SP class when I decide to go for that track, he will be the first instructor I turn to with my time and money.

Thank you Marko for helping me take myself to that next level. The level required to really call yourself a CCIE.


10G Auto-negotiation

Working through a circuit turn-up, I stumbled into a situation where an ISP tech was telling me I needed to enable auto-negotiation on my end of the circuit for it to come up, to match how they’d provisioned their end.

I was fairly certain the 10GBASE-LR interface we were working with didn’t support such a feature, but I was starting to question myself. I decided to double-check against the 802.3-2008 spec that I keep handy in my Dropbox folder at all times.

Sure enough, the research proved worthwhile. Within 802.3-2008 Section 4, Clause 44, titled “Introduction to 10 Gb/s baseband network”, there is a handy diagram showing which features and functionality each flavor of 10G supports from a PHY perspective.

[Figure: 802.3-2008 Clause 44 diagram of the 10G PHY sublayer stacks]

The interface I was dealing with was 10GBASE-LR, the ‘L’ meaning long wavelength (1310nm) / Long Reach and the ‘R’ meaning scrambled encoding, 64B/66B.

Checking the diagram and finding the 10GBASE-R encoding, we can see that no Auto-negotiation sublayer has been added to that stack. Therefore, auto-negotiation is not supported on 10GBASE-R links.

Checking again, we do see that 10GBASE-T links DO support auto-negotiation. So any 10G copper interface that you turn up will be able to support it. Otherwise, none of the optical standards incorporate the sublayer.
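For quick reference, the takeaway from that Clause 44 diagram can be captured in a small lookup table. This is my own illustrative summary, not the spec itself, and the PHY list is abbreviated:

```python
# Which 10G PHYs include the Auto-negotiation sublayer, per the
# IEEE 802.3-2008 stack diagrams (Clause 44 for the optical PHYs,
# Clause 55 for 10GBASE-T). Abbreviated list for illustration.
AUTONEG_SUPPORT = {
    "10GBASE-SR":  False,  # short reach optical, 64B/66B
    "10GBASE-LR":  False,  # long reach optical, 64B/66B
    "10GBASE-ER":  False,  # extended reach optical, 64B/66B
    "10GBASE-LX4": False,  # 4-lane WDM optical, 8B/10B
    "10GBASE-T":   True,   # twisted-pair copper -- the exception
}

def supports_autoneg(phy: str) -> bool:
    """Default to False for unknown PHYs rather than guessing."""
    return AUTONEG_SUPPORT.get(phy, False)
```

So when a tech asks for auto-negotiation on a 10GBASE-LR handoff, the table answers the question before the truck roll does.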

Case in point, knowing where to find the information is almost always the largest part of the fight.

BGP 4-byte ASNs

BGP ASN Overview

According to RFC1930, ASes are the unit of routing policy in the modern world of exterior routing; the classical definition of an AS is a set of routers that all exist within a single technical administrative domain. When it comes to BGP, the ASN is the numerical identifier for an organization’s unique presence on the Internet.

The diagram below gives a good logical idea of how ASNs are utilized to provide unique presence within the Internet.


Traditional 2-byte ASN

This ASN is represented as a 16-bit unsigned integer, which limits it to 65536 possible values (0 through 65535). As with RFC1918 private IPv4 address space, there is a reserved range of ASNs: 64512 through 65535 are reserved for private use and are not globally routable, meaning they cannot be advertised across the Internet to the peers your organization connects to.

New 4-byte ASN

Just as IPv4 public addressing has left us at a choke point, it has been decided that the roughly 64512 public AS numbers will not be enough and will eventually run out. So the powers that be decided to address this problem now and nip it in the bud, instead of waiting for the same pain we’re now having with the IPv4-to-IPv6 conversion.

Just as the 2-byte AS number is written in decimal, so is the new 4-byte ASN. However, the notation is known as ASDOT notation. Specifically, the 32 bits are split into two ‘words’ of 16 bits apiece and written with a ‘.’ or dot in the middle, for example 65000.65000. Should the high-order 16 bits be decimal 0, traditional 2-byte notation can be used instead.
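The notation rule is mechanical enough to express in a few lines of Python (a sketch of the conversion; the function names are mine):

```python
def to_asdot(asn: int) -> str:
    """Render a 32-bit ASN in ASDOT notation: values that fit in
    16 bits keep the traditional 2-byte form; larger values are
    split into <high 16 bits>.<low 16 bits>."""
    if not 0 <= asn <= 0xFFFFFFFF:
        raise ValueError("ASN must fit in 32 bits")
    if asn < 0x10000:
        return str(asn)
    return f"{asn >> 16}.{asn & 0xFFFF}"

def from_asdot(text: str) -> int:
    """Parse either form back into a plain 32-bit integer."""
    if "." in text:
        high, low = (int(part) for part in text.split("."))
        return (high << 16) | low
    return int(text)
```

For example, `to_asdot(4259905000)` yields `"65000.65000"`, and `from_asdot` reverses it.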

With the advent of this idea, RFC4893 was created. This RFC defines the two new attributes added to BGP to support incremental migration from a 2-byte to a 4-byte ASN structure: AS4_PATH, which carries the new 32-bit ASNs, and AS4_AGGREGATOR. In this post we’ll be focusing on the AS4_PATH attribute.

It also reserved a special AS number, AS_TRANS, which is substituted for a 4-byte ASN when peering with a BGP speaker that has only 2-byte ASN support. More on that below.

4-byte BGP Peering

When BGP starts the process of forming an adjacency between two speakers, an OPEN message is sent between the two devices. Within this OPEN message, the “My Autonomous System” field carries the administratively defined ASN. The OPEN message format can be seen below :

        0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+
       |    Version    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |     My Autonomous System      |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |           Hold Time           |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                         BGP Identifier                        |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Opt Parm Len  |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                                                               |
       |             Optional Parameters (variable)                    |
       |                                                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

As shown above, the traditional BGP “My AS” field is 16 bits, or 2-bytes.

Before we get into the specifics of how a BGP speaker advertises its 4-byte ASN to another speaker, we have to address a limitation built into BGP from its inception: BGP was designed to terminate the peering with a neighbor if it receives an OPEN message containing an Optional Parameter it doesn’t support. This seriously inhibits the introduction of new capabilities into the protocol, limiting its future extensibility.

To address this, RFC2842 defined a new “Capabilities” Optional Parameter to be advertised in OPEN messages, allowing some extensibility to be built into BGP. Peers list the Optional Capabilities they support and peer accordingly. The 4-byte ASN capability has been assigned code 65 by the IANA; the rest of the capability codes can be found in the IANA Capability Codes registry. With this new capability code, we’re able to use 4-byte ASNs within a BGP topology.

We are then able to peer just as traditional 2-byte BGP peers would, simply configuring the neighbor addressing and the AS each neighbor is a part of.

2 and 4-byte Support and Interconnection

Things get interesting when having to deploy both 2-byte and 4-byte ASNs. RFC4893 provides the explanation for how this process takes place.

As defined within RFC4893, for the remainder of this post we’re going to refer to BGP speakers that only support 2-byte ASNs as OLD BGP speakers, and devices configured with 4-byte support as NEW BGP speakers.

RFC4893 states that when a NEW BGP speaker is peered with an OLD BGP speaker, it is to advertise the AS_PATH attribute in the old 2-byte form. This is where the AS_TRANS substitution occurs: the NEW BGP speaker knows the limitation of the OLD BGP speaker and swaps any 4-byte ASNs in the advertisement with AS_TRANS, the reserved ASN 23456 defined in RFC4893. The NEW BGP speaker is also required to send the AS4_PATH attribute at the same time; this attribute carries the real 4-byte ASNs advertised from upstream NEW BGP speakers.

NEW BGP speakers must also be prepared to receive both an AS_PATH and an AS4_PATH attribute from an OLD BGP speaker. Upon receiving both, the NEW BGP speaker merges the two to reconstruct the full path.

This process allows OLD BGP speakers to coexist with NEW BGP speakers while still preserving the 4-byte ASNs between NEW BGP speakers.
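The substitution and merge logic can be sketched in a few lines of Python. This is a simplification of RFC4893’s rules (it ignores AS_SET segments and treats paths as flat lists), so treat it as illustrative only:

```python
AS_TRANS = 23456  # reserved 2-byte ASN from RFC4893

def as_path_for_old_peer(as_path):
    """What a NEW speaker puts in AS_PATH when talking to an OLD
    peer: any ASN that doesn't fit in 16 bits becomes AS_TRANS."""
    return [asn if asn < 0x10000 else AS_TRANS for asn in as_path]

def reconstruct_path(as_path, as4_path):
    """What a NEW speaker does when AS_PATH and AS4_PATH both
    arrive: the trailing len(AS4_PATH) entries of AS_PATH are
    replaced by AS4_PATH, restoring the real 4-byte ASNs."""
    if len(as4_path) > len(as_path):
        return list(as_path)  # malformed; fall back to AS_PATH
    return list(as_path[:len(as_path) - len(as4_path)]) + list(as4_path)
```

For example, a path of [4259905000, 65001] is advertised to an OLD peer as [23456, 65001], with [4259905000, 65001] riding alongside in AS4_PATH for NEW speakers further downstream.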

An example of the process can be seen below :

[Figure: BGP 4-byte ASN example — AS_TRANS substitution and AS4_PATH between NEW and OLD speakers]

LDP Extended Discovery

Label Distribution Protocol (LDP) is a protocol used within MPLS to exchange label bindings with other MPLS routers within the AS. These labels are then used to build LSPs throughout the network.

LDP discovers adjacencies by sending LDP Hello packets, UDP port 646, to the all-routers multicast address ( Once an adjacency is discovered, one of the LSRs takes the active role and the other the passive role, with the passive LSR waiting for the active one to initiate the connection. The roles are decided by comparing unsigned integers, but I’m not going to get into the details behind that; you’re welcome to read RFC3036 if you’re interested.

If the routers are not directly connected, as with TE tunnels sometimes spanning non-adjacent LSRs within the MPLS domain, there is the ability to use the LDP Extended Discovery mechanism, also defined within RFC3036. This allows an LSR to send a Targeted Hello toward a defined target LSR: instead of using the all-routers ( address, the router sends a unicast discovery packet to the target LSR.

As the RFC states, the basic LDP discovery process is symmetrical, with both LSRs discovering each other and negotiating their roles in the establishment of the LDP session. With LDP Extended Discovery, the process is asymmetrical: one LSR initiates the connection and the targeted LSR decides whether it wants to respond to or ignore the request to establish an LDP session. Should the targeted LSR decide to respond, it does so by sending periodic Targeted Hellos in return.

We’ll start with the following topology :


Routers PE1 and PE2 are not directly connected, therefore they will not establish an LDP adjacency, as seen here in the show mpls ldp neighbor output from PE1.


PE1 has successfully established LDP adjacencies with P1 and P2, but not with PE2. If we want to establish LDP adjacency between these two PE routers, the following process has to be followed :

First, we need to define a targeted neighbor within the LDP process on one of the PE routers. For this example, we’ll define PE1 as the active LSR by issuing the mpls ldp neighbor neighbor-ID targeted ldp command :


By enabling debug mpls ldp targeted-neighbors, we can see the LDP process kick up and start attempting to discover the defined target LSR, on PE1 :


Once PE1 has been set up to target PE2, we can then configure PE2 to accept targeted LDP messages by issuing the mpls ldp discovery targeted-hello accept command :


Once PE2 has been configured to respond to the targeted hellos it is receiving, we can see the adjacency establish through the same debugs :


We can also verify the adjacency by issuing the show mpls ldp neighbor command again – and here we see that PE2 is now listed as a valid adjacency :


MPLS TTL Propagation

Working through my MPLS labs for my IE studies, I came across this little tidbit of information. I like to keep track of these one-liners (or generally small sections of information) in all of the documentation that I read, because they seem to be incredibly helpful once committed to memory.

This particular nugget has to do with TTL propagation within an MPLS cloud, from a provider’s perspective. I trust anyone with enough interest in reading this post has experience with the TTL field and how the traceroute command utilizes this header information within a network.

We’ll start with how MPLS handles TTL propagation for traffic being forwarded within the MPLS cloud. MPLS needs a loop prevention mechanism just like any other forwarding protocol. Instead of having to modify the IP header of every packet that passes through an interface on an LSR within the cloud, the ingress LSR copies the IP packet’s TTL into the new label being pushed onto the packet as it enters the MPLS cloud, thus avoiding having to touch the IP header information on ingress.

As the packet traverses the MPLS cloud, each LSR decrements the TTL within the MPLS header, just like in a typical IP network. When the packet reaches the egress E-LSR, the E-LSR pops the label, decrements the TTL one final time for the egress hop, and writes that value back into the header of the IP packet that will be forwarded on.
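As a toy model (my own sketch, not IOS source, and ignoring the edge cases around the TTL comparison on egress), the visible effect on the IP TTL looks like this:

```python
def egress_ip_ttl(ingress_ip_ttl: int, lsr_hops: int, propagate: bool) -> int:
    """IP TTL of a packet leaving an MPLS cloud of `lsr_hops`
    label-switched hops. With propagation on, the IP TTL is copied
    into the label at ingress, decremented per hop, and written
    back at egress -- so every core hop 'costs' one TTL and
    traceroute sees each LSR. With propagation off, the label
    starts at 255 and is never copied back, so the whole cloud
    looks like a single hop to the customer."""
    if propagate:
        return ingress_ip_ttl - lsr_hops
    return ingress_ip_ttl - 1
```

This is exactly why the traceroute outputs later in this post shrink from four hops to two once propagation is turned off.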

That being said, Service Providers are usually providing some sort of L3 WAN service, and their customers sit outside of the MPLS network. With MPLS’s default configuration, the result of the traceroute command lets customers see every LSR in the label-switched path their packets traverse. This can cause some headache for them, since they don’t need to know what the provider’s topology consists of, and it also poses potential security issues for the SP itself.

The output below shows what it looks like for a customer to run the traceroute command with the default TTL copy behavior of MPLS.

 1  44 msec 12 msec 24 msec
 2  [MPLS: Labels 16/16 Exp 0] 80 msec 496 msec 88 msec
 3  [MPLS: Label 16 Exp 0] 48 msec 60 msec 48 msec
 4  56 msec 60 msec *

As you can see, the provider’s internal LSRs are present in the output. Again, it may be that the SP doesn’t want the customer to have that kind of insight into its network, or the customer doesn’t want to see the provider’s hops while attempting to troubleshoot issues.

Thankfully, Cisco grants us the ability to suppress the TTL propagation function for traffic within the MPLS network. We can use the no mpls ip propagate-ttl [local | forwarded] command to suppress the copying of the TTL from the IP packet’s header. When this command is issued, the ingress E-LSR assigns a generic TTL value of 255 to the label instead of copying the IP packet’s current TTL.

As a result, the entire MPLS network will look as though it is just a single hop to the customer. As seen below :

 1  28 msec 36 msec 16 msec
 2  108 msec 108 msec *

Now, let’s step back to the command for a minute. I referenced the local and forwarded options at the end of the command. These allow TTL propagation to be disabled for locally originated traffic, for forwarded traffic, or for both (when neither keyword is given). The local option refers to traffic originated by the LSR itself, while the forwarded option pertains to traffic originated external to the MPLS cloud, usually within the customer sites. Issuing the command with only the forwarded keyword means that when an SP engineer logs into a router and runs traceroute, the locally originated traffic still carries the copied TTL, preserving the engineer’s insight into the LSRs within the MPLS cloud while hiding them from customers.

IPv6 Address Formatting

So, I’ve come to the in-depth IPv6 studies for my CCNP. I figured I’d take some notes on my blog to help others out who take this path.

I know, I should’ve gotten some exposure to this in my CCNA studies, and I did, but not enough to totally absorb it.

OK, Take a deep breath. I know some people, including myself, were a little intimidated by this – but it CAN be done!

So, let’s get started :

With IPv6, the standard IPv4 32-bit addressing scheme goes out the window. Now a 128-bit addressing scheme is adopted.

IPv4 provided a total of 4,294,967,296 addresses. Ready for this one? IPv6 now provides this many addresses :

340,282,366,920,938,463,463,374,607,431,768,211,456  –  340 Undecillion addresses.

I’ve read somewhere that this is actually enough addresses to give every atom on the Earth’s surface an address, and then do the same for 100 more Earths thereafter. You can check me on that, but I am writing this from memory. Seems like we will NEVER run out, eh? Well, this brings up one of my favorite XKCD comics.

To prevent confusion and over/under/random use, and to help with public allocations, the RFC states that we will leave about 85% of the IPv6 space unused until the standard is revised. When will that be? I don’t think anyone knows, but there was a time when we thought IPv4 would provide enough addresses for everything….

As you can see, this would become a bit less than ideal to try and manage. That being said, the powers that be decided to divide the address into 8 groups of 4 hexadecimal characters each. The IP address in v6 land no longer consists of 4 octets of 8 bits; now it consists of 8 groups of 16 bits, and since each group is written in hexadecimal, each character can range from 0 through F. If you do the math, this comes out to 2^128 combinations, which adds up to that crazy number listed above.

An example being :

2001:0050:0000:0000:0000:0AB4:1E2B:98AA  –  A far cry from a dotted-quad IPv4 address, I would say.

Again, to make this all a bit more manageable, the powers that be allowed us some leeway in how we can write and handle the addresses. You are able to collapse a run of consecutive all-zero groups into a double colon (::). But, here’s the kicker, this can only be done ONCE per address.

If we were to apply this to the address listed above, it would become :

2001:0050::0AB4:1E2B:98AA
Now, that’s still a bit unruly to have to type into a cmd prompt to ping something, so there is another rule we can apply: we are allowed to drop the leading zeros within each group. Again, if we were to apply this to the address above, it would become :

2001:50::AB4:1E2B:98AA
This brings it down to something, though not exactly EASY to remember or type – but a hell of a lot more manageable than the first number we started with.
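Python’s standard ipaddress module applies both rules for you, which also makes it a handy way to check your shorthand:

```python
import ipaddress

full = "2001:0050:0000:0000:0000:0AB4:1E2B:98AA"
addr = ipaddress.IPv6Address(full)

# .compressed applies both rules: leading zeros are dropped from
# every group, and the single longest run of all-zero groups is
# collapsed into "::" (only once, per the standard).
print(addr.compressed)  # 2001:50::ab4:1e2b:98aa

# .exploded goes the other way, back to the full 8-group form.
print(addr.exploded)    # 2001:0050:0000:0000:0000:0ab4:1e2b:98aa
```

Note that the module normalizes to lowercase; both cases are valid on the wire.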

That sums up the formatting of the addresses at a very high level – I just wanted to point out the 2 rules that can be applied to “short-handing” an address to make it a little less stressful on your brain. As I know I needed.

As always, thoughts and insights are greatly appreciated.

Until next time.

EIGRP variance

Cisco’s proprietary routing protocol, EIGRP, offers an interesting tidbit of functionality to any network that runs entirely on IOS-based routers. This little tidbit is known as the EIGRP variance command. What this function does is allow for unequal-cost load balancing on a router.

Please see Diagram below:

You’ll notice that the HQ router is connected to both remote offices via some sort of serial-based medium, one interface at 128Kbps and one at 256Kbps (I know, not a whole hell of a lot of bandwidth, but this is just for ease of example). Along with that, the remote offices are connected to each other over FastEthernet at 100Mbps.

You’ll notice that Remote Office – 1 has a network connected to it. Now, if all routers in the diagram are running EIGRP and are fully converged, the HQ router will have a route installed in its routing table for that network. Due to the low bandwidth of the 128Kbps link directly to Remote Office – 1, router HQ is going to install the HQ <-> RO2 <-> Switch <-> RO1 path into its routing table, because traversing the 128Kbps link would be far more costly, in the time it takes traffic to reach its destination, than taking the 256Kbps link and then the 100Mbps link between the two remote sites.

That being said, we all know that in the IT industry we’re always looking for newer and faster ways to get data from point A to point B. And we all know a little about what load balancing is: utilizing more than one medium to transport traffic from A to B at the same time, spreading the traffic across multiple links.

Well, Cisco decided that the whole “only equal-cost load balancing” model was a little too restrictive. So they took it upon themselves to not only create their own routing protocol, but to add a few little tidbits of functionality that truly make it their own. This is where the variance command was born.

The variance command allows you to load balance the traffic across unequal cost paths, as opposed to the traditional load balancing across only equal cost paths.

If we refer back to the diagram above, we can now issue the variance command for that particular instance of EIGRP on the HQ router. The command takes a multiplier, which we’ll call (n) for the sake of the following example. To keep the math simple, if we entered variance 2 on the HQ router, the router would then include routes with a metric of less than 2 times the minimum-metric route for that destination. What that means is, once this command is issued, the router will also install paths to the network whose metrics fall within that factor of 2 of the best path (e.g., a 128Kbps link carries roughly 2 times the bandwidth-derived cost of a 256Kbps link). Note that a candidate path must also satisfy EIGRP’s feasibility condition to be eligible.
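The arithmetic can be sketched with the classic EIGRP composite metric using the default K values, where only minimum bandwidth and cumulative delay count. This is a simplification (the function names are mine, and the feasibility condition is not modeled):

```python
def eigrp_metric(min_bw_kbps: int, total_delay_usec: int) -> int:
    """Classic (non-wide) EIGRP metric with default K values:
    256 * (10^7 / min_bandwidth_kbps + cumulative_delay_usec / 10),
    using integer math as IOS does."""
    return 256 * (10**7 // min_bw_kbps + total_delay_usec // 10)

def variance_eligible(candidate: int, best: int, variance: int) -> bool:
    """A route is installed for unequal-cost load balancing when
    its metric is less than variance * the best metric (and it
    passes the feasibility condition, not modeled here)."""
    return candidate < variance * best

# Bandwidth-only comparison of the two paths in the example:
slow = eigrp_metric(128, 0)  # direct 128Kbps link
fast = eigrp_metric(256, 0)  # 256Kbps leg dominates the fast path
```

Because of integer truncation and the delay term, the real ratio between two metrics usually isn’t exactly the bandwidth ratio, so check show ip eigrp topology for the actual numbers before choosing a multiplier.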

A little tricky at first, but once you actually sit and think about it, it makes sense. Just make sure that you have your math right before you enable the command, then watch the previously unused routes come to life and bring even more optimization to your network. 🙂