May 25, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Video: SPB Fabric Use Cases

As part of his “how does Avaya implement data center fabrics” presentation, Roger Lapuh talked about use cases for SPB in data center fabrics.

I have no idea what Extreme decided to do with the numerous data center fabric solutions they bought in the last few years, so the video might have just a historic value at this point… but it’s still nice to see what you can do with smart engineering.

by Ivan Pepelnjak at May 25, 2018 06:52 AM

The Networking Nerd

The Voice of SD-WAN

SD-WAN is about migrating your legacy hardware away from silos like MPLS and policy-based routing and instead integrating everything under one dashboard, one central location to make changes and see the impacts those changes have. But there’s one thing that SD-WAN can’t really do yet. And that’s prepare us for the end of TDM voice.

Can You Hear Me Now?

Voice is a way of life for some people. Cisco spent years upon years selling CallManager into every office they could, from small two-line shops to global organizations with multiple PRIs and TEHO configured everywhere. It was a Cisco staple for years, which had Avaya quickly following along to get in on the act too.

Today’s voice world is a little less clear. Millennials hate talking on the phone. Video is an oddity when it comes to communications. Asynchronous chat programs like WhatsApp or Slack rule the day. People would rather communicate via text than voice. We all have mobile devices, and the phone may be one of the least used apps on them.

Where does that leave traditional voice services? Not in a good place, for sure. We still need phone lines for service-focused businesses or when we need to call a hotline for support. But the office phone system isn’t getting any new features anytime soon. The phone system is like the fax machine in the corner: a feature-complete system used only when it has to be, by people who are unhappily forced to use it.

Voice systems are going to stay where they are by virtue of their ubiquity. They exist because TDM technology hasn’t really advanced in the past 20 years. We still have twisted pair connections to deliver FXO lines. We still have the most basic system in place to offer services to our potential customers and users. I know this personally because when I finally traded out my home phone setup for a “VoIP” offering from my cable provider, it was really just an FXS port on the back of a residential cable modem. That’s as high-tech as it gets. TDM is a solved problem.

Call If You WANt To

So, how does SD-WAN play into this? Well, as it turns out, SD-WAN is replacing the edge router very quickly. Devices that used to be Cisco ISRs are now becoming SD-WAN edge devices. They aggregate WAN connections and balance between them. They take MPLS and broadband and LTE instead of serial and other long-forgotten connection methods.

But you know what SD-WAN appliances can’t aggregate? TDM lines. They don’t have cards that can accept FXO, FXS, or even PRI lines. They don’t have a way to provide for DSP add-in cards or even come with onboard transcoding resources. There is no way for an SD-WAN edge appliance to function as anything other than a very advanced packet router.

This is a good thing for SD-WAN companies. It means that they have a focused, purpose built device that has more software features than hardware muscle. SD-WAN should be all about data packets. It’s not a multitool box. Even the SD-WAN vendors that ship their appliances with LTE cards aren’t trying to turn them into voice routers. They’re just easing the transition for people that want LTE backup for data paths.

Voice devices were moved out of the TDM station and shelf and into data routers as Cisco and other companies tried to champion voice over IP. We’re seeing the fallout from those decisions today. As data routing devices become more specialized and focused on the software side of the technology, the hardware pieces the ISR platform specialized in have become a yoke holding the platform back and keeping it from evolving.

I can remember when I was first thinking about studying for my CCIE Voice lab back in 2007-2008. At the time, the voice lab still had a Catalyst 6500 switch in it that needed to be configured. It had a single T1 interface on a line card that you had to get up and running in CallManager. The catch? That line card would only work with a certain Supervisor engine that only ran CatOS. So, you had to be intimately familiar with CatOS in order to run that lab. I decided right then and there that it wasn’t for me.

Hardware can hold the software back. ISRs can’t operate voice interfaces in SD-WAN mode. You can’t get all the advanced features of the software until you pare the hardware down to the bare minimum needed to route data packets. If you need to have the router function as a TDM aggregator or an SBC/IPIPGW you realize that the router really should be dedicated to that purpose. Because it’s functioning more as a TDM platform than a packet router at that point.

Tom’s Take

The world of voice that I lived in five or six years ago is gone. It’s been replaced with texting and Slack/Spark/WebEx Teams. Voice is dying. Cell phones connect us more than ever before, and yet we don’t want to talk to each other. That means the rows and rows of desk phones we used to use are falling by the wayside, and so too are the routers that used to power them. Now, we’re replacing those routers with SD-WAN devices. And when the time finally comes for us to replace those TDM devices, what will we use? That future is very murky indeed.


by networkingnerd at May 25, 2018 03:26 AM

XKCD Comics

May 24, 2018

ipSpace.net Blog (Ivan Pepelnjak)

ONIE and the Hammer of Thor

Someone left a comment on my Zero-Touch Provisioning post claiming that Big Switch Networks solved the ZTP challenge using just IPv6 Link-Local Addresses and Neighbor Discovery instead of the complicated DHCP/TFTP/whatever sequence.

Here’s what he wrote:

Read more ...

by Ivan Pepelnjak at May 24, 2018 06:44 AM

May 23, 2018

Configuring SSL for gRPC on Junos

This is a short article on creating a self-signed root certificate which can be used to self-sign certificates for the purposes of treating our telemetry and instrumentation exploration with the security love it deserves. I also cover configuration of mutual SSL for gRPC on Junos. An article of dual purposes!

One of the things I see far too often is clear-text transport being used in demonstrations, labs and even production. This isn’t acceptable. We live in a world where security has to be woven in from the ground up. How do you really know your system works if you leave out all of the security controls?

I hear your teeth grinding. Why do you want to do this? First of all, even though we can bypass security on gRPC with Junos by going for insecure connectivity with clear-text, we shouldn’t. The world we live in is all about the data and the smallest amount of it can give the ‘bad guys’ a lead.

Now we’re done with the why, we need to deal with the how. There are three approaches to PKI that are common:

  1. Run around with your hair on fire rambling nonsense
  2. Create your own Certificate Authority (CA) system
  3. Use an existing CA

Number 1 is kind of exciting and 3 comes with built-in help (usually someone or a team of people in your enterprise), so we’ll focus on number 2 in the simplest of ways. No automatic signing will happen here. This post exercises the manual steps required to make this happen. Fear not. It will be fun!

Creating a Root Self-Signed Certificate

This will give you the power to create self-signed certificates for use in your network. You can also get your root CA cert signed by an outside trusted provider so they’re recognized by other systems. PKI is all about trust and the chain of it. However, for lab purposes, what we’re about to do is just fine.

The outcome of our forthcoming exercise will be to create a self-signed root certificate (ca.crt) which will then allow us to create signed certs for both Junos devices and for our client applications.

Please note, you are required to have the openssl toolchain installed (the latest version, from a security perspective).

Let’s begin.

Create the Self-Signed Root Certificate and Key-Pair

First, we generate a new signing key. If you intend to run this in production, protect this key as if your life depended on it.

This will generate a 2048-bit RSA key.

openssl genrsa -out ca.key 2048

The next step is to actually create the self-signed root certificate. Be sure to enter the correct details as you are questioned for inputs.

openssl req -new -x509 -key ca.key -out ca.crt

The output will be two files: the CA signing key (ca.key) and the self-signed root certificate (ca.crt).


Certificate Signing Request and Certificates

Next, we will create one device certificate and one application certificate. It’s not uncommon for me to use a privately owned domain for demonstration purposes. Do not use IP addresses here. Even if you only use this domain internally and don’t actually own it, avoid IP addresses else face a more complex generation sequence.

Device ID: vmx01.example.net
App ID: client01.example.net

(These names are placeholders; substitute a domain you control.)

Let’s generate the keys and certificate signing requests (CSRs).

openssl genrsa -out vmx01.example.net.key 2048
openssl req -new -key vmx01.example.net.key -out vmx01.example.net.csr
openssl genrsa -out client01.example.net.key 2048
openssl req -new -key client01.example.net.key -out client01.example.net.csr

Now we have two additional keys and two CSRs but we still do not have certificates.

Sign the CSRs

Let’s sign those CSRs using our previously created CA certificate and key.

openssl x509 -req -in vmx01.example.net.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out vmx01.example.net.crt
openssl x509 -req -in client01.example.net.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client01.example.net.crt

If you’re curious about what the certificate looks like (at least the metadata), you can run this command and check it.

openssl x509 -in vmx01.example.net.crt -noout -text

For Junos, we’ll also create a PEM file that contains both the certificate and private key, which allows us to load the certificate and the signing key in one hit. This is easily done on the command line.

cat vmx01.example.net.crt vmx01.example.net.key > vmx01.example.net.pem

So far so good. We’ve now got our CA certificate and signing key, a host certificate, key and PEM. We’ve also got the application certificate and key. Next step, let’s configure Junos.

├── ca.crt
├── ca.key
├── ca.srl
├── client01.example.net.crt
├── client01.example.net.csr
├── client01.example.net.key
├── vmx01.example.net.crt
├── vmx01.example.net.csr
├── vmx01.example.net.key
└── vmx01.example.net.pem
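For reference, the whole certificate-generation sequence above can be run non-interactively as a single script. The -subj strings stand in for the interactive prompts, and the hostnames are placeholders in the reserved example.net domain (substitute your own):

```shell
# Non-interactive version of the openssl steps above.
# Hostnames are placeholders; substitute a domain you control.
set -e

# Root CA key and self-signed root certificate
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -out ca.crt -days 365 \
  -subj "/CN=Example Lab Root CA"

# Device and application keys, CSRs and CA-signed certificates
for host in vmx01.example.net client01.example.net; do
  openssl genrsa -out "$host.key" 2048
  openssl req -new -key "$host.key" -out "$host.csr" -subj "/CN=$host"
  openssl x509 -req -in "$host.csr" -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out "$host.crt"
done

# Combined PEM (certificate + key) for loading onto Junos in one hit
cat vmx01.example.net.crt vmx01.example.net.key > vmx01.example.net.pem

# Sanity check: both certificates should verify against the root CA
openssl verify -CAfile ca.crt vmx01.example.net.crt client01.example.net.crt
```

If you intend to keep the CA around, store ca.key somewhere safe; everything else can be regenerated from it.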

Configuring Junos

I always start with the CA first. Exactly as we built out the CA structure, I replicate the same order of configuration on Junos.

Here are the steps required.

  1. Copy the ca.crt file, the vmx certificate and the vmx key across to /var/tmp. Repeat the command below for each file.
  2. scp <source> <user>@<vmx-host>:/var/tmp/

  3. Configure the PKI service with basic attributes. I use JTI as a basic identifier primarily because the purpose of this for my personal use is the telemetry interface.
  4. set security pki ca-profile JTI ca-identity JTI

    We also need to run an operational command to load up the CA cert.

    > request security pki ca-certificate load ca-profile JTI filename /var/tmp/ca.crt

    This loads the certificate file.

  5. Configure the certificate certification-authority.
  6. set security certificates certification-authority JTI ca-name JTI

  7. Load in the device specific PEM certificate.
  8. set security certificates local vmx01.example.net load-key-file /var/tmp/vmx01.example.net.pem

  9. Configure gRPC and SSL
  10. set system services extension-service request-response grpc ssl port 32767
    set system services extension-service request-response grpc ssl local-certificate vmx01.example.net
    set system services extension-service request-response grpc ssl mutual-authentication certificate-authority JTI
    set system services extension-service request-response grpc ssl mutual-authentication client-certificate-request require-certificate

Once you’ve committed those changes and no errors have been returned, you can move to the test phase.
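If you want to double-check before testing, the loaded CA certificate can be inspected from operational mode (JTI being the ca-profile configured above; exact output varies by release):

```
> show security pki ca-certificate ca-profile JTI
```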


Note, I did this on an 18.1 vMX. Check your release for compatibility with gNMI over gRPC.

They always say the proof of the pudding is in the eating, so let’s give this a whirl and make sure our configuration works!

I’m going to use one of my testlet applications which is written in Go. This particular testlet does a “get” over gRPC using gNMI.

You can follow along here. Open a directory of your choosing and clone the test client. There are three binaries prebuilt. One for Linux, one for OSX and one for Junos itself. I’m running on OSX, so will use the OSX binary.

For ease, also create a directory where you will place the ca.crt, client.crt and client.key. In our case, client.crt and client.key are the renamed application certificate and key (client01.example.net.crt and client01.example.net.key in our placeholder naming). I copied them and changed their names. See below to confirm naming.

# Copy these files in to this directory
# ca.crt, client01.example.net.crt, client01.example.net.key

git clone https://github.com/arsonistgopher/junos-gnmi-testclient.git
mkdir PKI

# Change the name of the files respectively
mv client01.example.net.crt client.crt
mv client01.example.net.key client.key
mv ca.crt client.crt client.key PKI

# Next let's fire the app up with user JET and our directory with certificates.
./junos-gnmi-testclient-osx-0.1 -certdir PKI -host vmx01.example.net -port 32767 -resource /system -user jet

If everything went well, this is what you should expect to see.

2018/05/23 17:27:31 -----------------------------------
2018/05/23 17:27:31 Junos gNMI Configuration Test Tool
2018/05/23 17:27:31 -----------------------------------
2018/05/23 17:27:31 Run the app with -h for options

Enter Password:
----- VERSION -----
----- GET DATA -----
{"root-authentication": {"encrypted-password": "$6$B7o0BPac$bt7vWsuxLa9BF9z2g3k6SS07KlWbT09nFCfHUeeGt18fXLOfJIGd9Cu1LQbNbFJ1RhEsDYhKPKQDc7Pjyn/RX0"}, "syslog": {"user": [{"name": "*", "contents": [{"name": "any", "emergency": [null]}]}], "file": [{"name": "messages", "contents": [{"name": "any", "notice": [null]}, {"name": "authorization", "info": [null]}]}, {"name": "interactive-commands", "contents": [{"name": "interactive-commands", "any": [null]}]}]}, "login": {"user": [{"authentication": {"encrypted-password": "$6$aPG7qJg6$WpN9gk2FUtdKd.U3RA..gQDB7kZsgaQQgZBDDxQmcter/hfu0bvhiLDtWMrAJlloiz9eqKtmSCIbTGr1Lsn.T1"}, "name": "jet", "uid": 2000, "class": "super-user"}]}, "services": {"ssh": {"root-login": "allow"}, "extension-service": {"request-response": {"grpc": {"ssl": {"port": 32767, "mutual-authentication": {"certificate-authority": "JTI", "client-certificate-request": "require-certificate"}, "local-certificate": [""]}}}, "notification": {"port": 1883, "allow-clients": {"address": [""]}}}}, "processes": {"dhcp-service": {"traceoptions": {"file": {"filename": "dhcp_logfile", "size": "10m"}, "level": "all", "flag": [{"name": "all"}]}}}}


So there you have it. We’ve created a self-signed CA cert, issued certificates to both a vMX and an application and tested it.

Sure, there are much more complex ways of doing certificate management like a full blown PKI system. Junos is reasonably feature rich for this kind of thing and it’s only a short Google away.

Hope this was useful. If you spot any errata or have comments, please leave them below.


by David Gee at May 23, 2018 05:12 PM

ipSpace.net Blog (Ivan Pepelnjak)

Why is Network Automation So Hard?

This blog post was initially sent to the subscribers of my SDN and Network Automation mailing list. Subscribe here.

Every now and then someone asks me “Why are we making so little progress on network automation? Why does it seem so hard?”

There are some obvious reasons:

However, there’s a bigger elephant in the room: every network is a unique snowflake.

Read more ...

by Ivan Pepelnjak at May 23, 2018 06:36 AM

XKCD Comics

May 22, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Dissecting IBGP+EBGP Junos Configuration

Networking engineers familiar with Junos love to tell me how easy it is to configure and operate IBGP EVPN overlay on top of EBGP IP underlay. Krzysztof Szarkowicz was kind enough to send me the (probably) simplest possible configuration (here’s another one by Alexander Grigorenko).

To learn more about EVPN technology and its use in data center fabrics, watch the EVPN Technical Deep Dive webinar.

Read more ...

by Ivan Pepelnjak at May 22, 2018 06:31 AM

May 21, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Response: Vendors Pushing Stretched Layer-2

Got this response to my Stretched Layer-2 Revisited blog post. It’s too good not to turn it into a blog post ;)

Recently I feel like it's really vendors pushing layer 2 solutions, rather than us (enterprise customer) demanding it.

I had that feeling for years. Yes, there are environments with legacy challenges (running COBOL applications on OS/370 with emulated TN3270 terminals comes to mind), but in most cases it’s the vendors trying to peddle unique high-priced non-interoperable warez.

Read more ...

by Ivan Pepelnjak at May 21, 2018 06:27 AM

XKCD Comics

May 19, 2018

Potaroo blog

What Drives IPv6 Deployment?

It's been six years since World IPv6 Launch day on the 6th June 2012. In those six years we've managed to place ever increasing pressure on the dwindling pools of available IPv4 addresses, but we have still been unable to complete the transition to an all-IPv6 Internet.

May 19, 2018 07:00 AM

May 18, 2018

The Networking Nerd

Is Training The Enemy of Progress?

Peyton Maynard-Koran was the keynote speaker at InteropITX this year. If you want to catch the video, it’s available on YouTube.

Readers of my blog may remember that Peyton and I don’t see eye-to-eye on a few things. Last year I even wrote up some thoughts about vendors and VARs that were a direct counterpoint to many of the things that have been said. It has even gone further with a post from Greg Ferro (@EtherealMind) about the intelligence level of the average enterprise IT customer. I want to take a few moments and explore one piece of this puzzle that keeps being brought up: You.

Protein Robots

You are a critical piece of the IT puzzle. Why? You’re a thinking person. You can intuit facts and extrapolate cause from nothing. You are NI – natural intelligence. There’s an entire industry of programmers chasing what you have. They are trying to build it into everything that blinks or runs code. The first time that any company has a real breakthrough in true artificial intelligence (AI) beyond complicated regression models will be a watershed day for us all.

However, you are also the problem. You have requirements. You need a salary. You need vacation time. You need benefits and work/life balance to keep your loved ones happy. You sometimes don’t pick up the phone at 3am when the data center blinks out of existence. It’s a challenge to get you what you need while still extracting the value that is needed from you.

Another awesome friend, Andrew von Nagy (@RevolutionWiFi), coined the term “protein robots”. That’s basically what we are. Meatbags. Walking brains that solve the problems that cause BGP to fall over when presented with the wrong routing info, or that cause wireless signals to dissipate into thin air. We’re a necessary part of the IT equation.

Sure, people are trying to replace us with automation and orchestration. It’s the most common complaint about SDN that I’ve heard to date: automation is going to steal my job. I’ve railed against that for years in my talks. Automation isn’t going to steal your job, but it will get you a better one. It will get you a place in the organization to direct and delegate and not debug and destroy. In the end, automation isn’t coming for your job as long as you’re trying to get a better one.

All Aboard The Train!

The unseen force opposing upward mobility is training. In order to get a better job, you need to be properly trained to do it. Maybe that training is years of experience running a network. Perhaps it’s a class you get to attend or a presentation you can watch online. No matter the source, you need new skills to handle new job responsibilities. Even if you’re breaking new ground in something like AI development, you’re still acquiring new skills along the way. Hopefully, if you’re in a new field, you’re writing them all down for people to follow in your footsteps one day.

However, training is in opposition to what your employer wants for you. It sounds silly, doesn’t it? Yet, we still hear the quote attributed to W. Edwards Deming – “What happens if we train our people and they leave? What happens if we don’t and they stay?” Remember, as we said above you are a protein robot that needs things like time off and benefits. All of those things are seen as an investment in you.

Training is another investment that companies like to tout. When I worked at a VAR, we considered ourselves some of the most highly trained people around. The owner told me when I started that he was “going to put half a million dollars into training me.” When I asked him about that number after five years he told me it felt like he put a kid through college. And that was before my CCIE. The more trained people you have, the easier your job becomes.

But an investment in training can also backfire. Professionals can take that training and put it to use elsewhere. They can go to a different job and take more money. They can refuse to do a job until they are properly trained. The investments that people make in training are often unrealized relative to the amount of money that it costs to make it happen.

It doesn’t help that training prices have skyrocketed. It used to be that I just needed to go down to the local bookstore and find a copy of a CCNA study guide to get certified. I knew I’d reached a new point in my career when I couldn’t buy my books at the bookstore. Instead, I had to seek out the knowledge that I needed elsewhere. And yes, sometimes that knowledge came in the form of bootcamps that cost thousands of dollars. Lucky for me that my employer at the time looked at that investment and said that it was something they would pick up. But I know plenty of folks that can’t get that kind of consideration. Especially when the training budget for the whole department won’t cover one VMware ICM class.

Employers don’t want employees to be too trained. Because you have legs. Because you can get fed up. Because you can leave. The nice thing about making investments in hardware and software is that they’re stuck at a location. A switch is an asset. A license for a feature can’t be transferred. Objects are tied to the company, and their investments can be realized and recovered. Through depreciation or listing as an asset with competitive advantage, companies can recover the value they put into a physical thing.

Professionals, on the other hand, aren’t as easy to deal with. Sure, you can list a CCIE as an important part of your business. But what happens if they leave? What happens when they decide they need a raise? Or, worse yet, when they need to spend six months studying for a recertification? The time taken to learn things is often not factored into the equation when we discuss how much training costs. Some of my old coworkers outright refused to get certified if they had to study on their own time. They didn’t want their free non-work time taken up by reading over MCSE books or CCNA guides. Likewise, the company didn’t want to waste billable hours from someone not providing value. It was a huge catch-22.

Running In Place

Your value is in your skillset at the company you work for. They derive that value from you. So, they want you to stay where you are. They want you trained just enough to do your job. They don’t want you to have your own skillset that could be valuable before they get their value from you. And they definitely don’t want you to take your skillset to a competitor. How can you fix that?

  1. Don’t rely on your company to pay for your training. There are a lot of great resources out there that you can use to learn without needing to drop big bucks for a bootcamp. Use bootcamps to solidify your learning after the fact. Honestly, if you’re in a bootcamp to learn something, you’re in the wrong class. Read blogs. Buy books from Amazon. Get your skills the old-fashioned way.
  2. Be ready to invest time. Your company doesn’t want you using their billable time for learning. So that means you are going to make an investment instead. The best part is that even an hour of studying instead of binge watching another episode of House is time well-spent on getting another important skill. And if it happened on your own time, you’re not going to have to pay that back.
  3. Be ready to be uncomfortable. It’s going to happen. You’re going to feel lost when you learn something new. You’re going to make mistakes while you’re practicing it. And, honestly, you’re going to feel uncomfortable going to your boss to ask for more money once you’re really good at what you’re doing. If you’re totally comfortable when learning something new, you’re doing it wrong.

Tom’s Take

Companies want protein robots. They want workers that give 125% at all times and can offer a wide variety of skills. They want compliant workers that never want to go anywhere else. Don’t be that robot. Push back. Learn what you can on your time. Be an asset but not an immobile one. You need to know more to escape the SDN Langoliers that are trying to eat your job. That means you need to be on the move to get where you need to be. And if you sit still you risk becoming a permanent asset. Just like the hardware you work on.

by networkingnerd at May 18, 2018 03:05 PM

ipSpace.net Blog (Ivan Pepelnjak)

Automation Example: Deploy MPLS/VPN Services

Steve Krause created a full-blown network services deployment solution, including post-deployment validation of OSPF and BGP routing, while attending the Building Network Automation Solutions online course (I prefer course attendees working on real-life problems instead of artificial ones).

Hope you’ll enjoy exploring it ;)

by Ivan Pepelnjak at May 18, 2018 09:57 AM

XKCD Comics

May 17, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Get Familiar with Leaf-and-Spine Fabrics

An attendee of my Building Next-Generation Data Center online course asked me what the best learning path might be for a total (data center) beginner who has to design and install a small leaf-and-spine fabric in the near future.

This blog post was written for subscribers who want to get the most out of ipSpace.net content. If you’re only interested in free stuff, you might feel it’s a waste of your time. You’ve been warned ;)

Read more ...

by Ivan Pepelnjak at May 17, 2018 09:56 AM

May 16, 2018

My Etherealmind

Video: Two Beer Networking – What’s Wrong With Network Diagrams?

I think that the time for Network Diagrams is coming to a close. It takes large amounts of time (and thus money) to produce diagrams. Maintaining diagrams is difficult, costly and something that should be automated. Networks are not static today. Overlays, IPsec Tunnels, VMs, virtual appliances. How can a diagram stay up to […]

by Greg Ferro at May 16, 2018 02:22 PM

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Manual Work Is a Bug

This blog post was initially sent to the subscribers of my SDN and Network Automation mailing list. Subscribe here.

Tom Limoncelli wrote a great article about starting an automation journey from sysadmin perspective. Not surprisingly, his recommendations aren’t that far off from what I’m telling networking engineers in my network automation presentations, Network Automation 101 webinar, and introductory part of Building Network Automation Solutions online course:

Read more ...

by Ivan Pepelnjak at May 16, 2018 09:54 AM

XKCD Comics

May 15, 2018

Networking Now (Juniper Blog)

GDPR Considerations for Companies Outside the European Union

On May 25th, 2018, the General Data Protection Regulation (GDPR) becomes enforceable under law in the European Union (EU). It fundamentally changes how businesses (and the public sector) must handle information relating to their customers, giving greater protection to individuals and harmonising the laws for data-handling across the EU.

by lpitt at May 15, 2018 10:57 PM

My Etherealmind

ipSpace.net Blog (Ivan Pepelnjak)

Is OSPF or IS-IS Good Enough for My Data Center?

Our good friend Mr. Anonymous has too many buzzwords and opinions in his repertoire, at least based on this comment he left on my Using 4-byte AS Numbers with EVPN blog post:

But IGPs don't scale well (as you might have heard) except for RIFT and Openfabric. The others are trying to do ECMP based on BGP.

Should you be worried about OSPF or IS-IS scalability when building your data center fabric? Short answer: most probably not. Before diving into a lengthy explanation let's give our dear friend some homework.

Read more ...

by Ivan Pepelnjak at May 15, 2018 09:45 AM

May 14, 2018

Ethan Banks on Technology

Don’t Reply To Everything

I recently came across a simple idea that is having a positive impact on productivity. That idea is to not reply to everything. While this can be applied to social media broadly, I’m focused on email management here.

For me, not replying is more difficult than it sounds. I am a personality type that doesn’t like loose ends. I like to meet others’ expectations, and have them think cuddly, happy thoughts about what a swell person I am. I know that when I send an email, I hope to get a response. Therefore, when I receive an e-mail, my natural inclination is to respond.

(Image: Too cuddly?)

Now, I don’t feel I overly waste time on replying to email. I’ve improved my response technique over the years. I bring an e-mail thread to a conclusion as rapidly as possible by anticipating and proactively answering questions. That’s more time-consuming than a quick, lazy “back to you” response, but saves time in the long run.

However, an advance on the proactive reply is never replying at all. Not responding is the ultimate way to bring an email thread to a conclusion.

You’re So Rude

On the surface, ignoring inbox messages seems rude. However, I don’t think that’s necessarily the case.

  1. People who email you are asking for your undivided attention. What is your relationship to that person? Do they merit such attention from you? Family, co-workers, customers, managers? Potentially, they do. Someone who might only know you because they stumbled across you through Facebook or the wiki server at work? Not really.
  2. Time spent answering email is time spent not doing actual work. Most email is not actual work, even though it seems like work. Distinguishing between time-wasting email and actual work is admittedly hard, as email is woven into the fabric of many organizations. But like project status meetings, email is too often a poor substitute for getting actual work done.
  3. Email almost never results in a product. Email is not a tool used to make anything. Design doesn’t happen in email. Creation doesn’t happen in email. In fact, email often doesn’t result in much of anything but time ticking off the clock.

Let’s Not Get Too Crazy

You might be pondering your specific company, and just how much of your workflow is tied to email. Obviously, it’s not possible to completely ignore all email. But how many email threads that you get CC’ed on really require your input? How many silly things are sent to you in an email simply to get an LOL? How many inbox messages are, in actuality, pointless?

As my quest for email control continues, I’ve realized that most messages that land in my inbox do not require a reply from me. I like to reply to messages and instinctively want to, but much of the time I don’t actually need to.

When I choose not to reply, I’ve removed a distraction from my day.

How To Decide What To Reply To

Choosing which messages to reply to is a personal matter. I make my decisions in the context of time.

  • I do not respond to messages that are ultimately a waste of time.
  • I do respond to messages that directly benefit the things I need to get done. Those messages are not a waste of time, because they are tied closely to something I am trying to achieve.

by Ethan Banks at May 14, 2018 10:57 PM

Dyn Research (Was Renesys Blog)

Tracking CDN Usage Through Historical DNS Data

With Mother’s Day having just passed, some e-commerce sites likely saw an associated boost in traffic. While not as significant as the increased traffic levels seen around Black Friday and Cyber Monday, these additional visitors can potentially impact the site’s performance if it has not planned appropriately.  Some sites have extra infrastructure headroom and can absorb increased traffic without issue, but others turn to CDN providers to ensure that their sites remain fast and available, especially during holiday shopping periods.

To that end, I thought that it would be interesting to use historical Internet Intelligence data (going back to 2010) collected from Oracle Dyn’s Internet Guide recursive DNS service, to examine CDN usage. As a sample set, I chose the top 50 “shopping” sites listed on Alexa, and looked at which sites are being delivered through CDNs, which CDN providers are most popular, and whether sites change or add providers over time. Although not all of the listed sites would commonly be considered “shopping” sites, as a free and publicly available list from a well-known source, it was acceptable for the purposes of this post.

The historical research was done on the www hostname of the listed sites, with the exception of The site was considered to be using a given CDN provider if the hostname was CNAMEd to the provider’s namespace, or if associated A records belonged to the provider. For time periods before listed sites began using a CDN provider for whole site delivery as shown in the charts below, it is possible that they were delivering embedded content through a CDN, but that is not captured here. ( is also on the Alexa list, but is not included below because it is not a shopping site, and because it is delivered directly from Google’s own infrastructure.)
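The detection rule described above is simple enough to sketch in code. Below is a minimal, illustrative Python version; the suffix-to-provider map is a small sample I’ve assumed for the example (the real analysis also matched A records against provider address space, which isn’t shown here):

```python
# Classify a hostname's CDN provider from its CNAME chain.
# The suffix-to-provider map is illustrative and incomplete.
CDN_SUFFIXES = {
    ".edgekey.net": "Akamai",
    ".akamaiedge.net": "Akamai",
    ".cloudfront.net": "Amazon CloudFront",
    ".fastly.net": "Fastly",
    ".edgecastcdn.net": "Verizon/EdgeCast",
    ".cdngc.net": "CDNetworks",
}

def classify_cdn(cname_chain):
    """Return the CDN provider for the first CNAME target that falls
    inside a known provider namespace, or None if none match."""
    for target in cname_chain:
        name = target.rstrip(".").lower()
        for suffix, provider in CDN_SUFFIXES.items():
            if name.endswith(suffix):
                return provider
    return None

print(classify_cdn(["www.example.com.edgekey.net."]))  # Akamai
print(classify_cdn(["www.example.com."]))              # None
```

With recursive-DNS history, the same check can be run per day per hostname, which is what makes the provider-transition timelines in the charts below possible.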

In the interest of making the analyzed data easier to review here, the list of sites is broken out into several categories:

  • Brick & Mortar: the online presence of well-known retailers with physical stores
  • Online Native: retailers that primarily exist online
  • Automotive: car shopping sites
  • Video: media providers
  • Publishing: focusing on academic & research content
  • Nutrition: vitamin & supplement providers
  • Amazon: the .com and properties, along with Zappos

Brick & Mortar

Looking at the chart below, it is clear that Akamai has an extremely strong presence within this set of sites, as a long-term CDN provider across all of them. Although Bed, Bath & Beyond and Forever21 previously used AT&T’s CDN solution, they eventually transitioned to Akamai, likely as a result of the strategic alliance between the two providers. Among this group, Walgreens is the only site not currently being delivered by Akamai, having transitioned to Instart Logic in mid-2016.

Online Native

As businesses born on the Internet, one would think that they recognize the value of delivering their sites through a CDN, incorporating these services into their architecture from the start. However, in contrast to the online presence of brick & mortar retailers reviewed above, many of the “online native” sites have only come to rely on CDN providers in the last five years. (Except for one – iFixit is on the Alexa list, but not included here because we found no evidence of it being delivered through a CDN provider in our historical data set. However, it does appear to be using Amazon’s Cloudfront CDN service to deliver embedded page content.) Akamai has a strong presence among this set of sites as well, with a few exceptions. As part of its ongoing efforts to optimize site performance, Etsy moved to a multi-CDN configuration in late 2012, adding Verizon Digital Media Services/EdgeCast alongside Akamai, with Fastly joining the set of providers in early 2013. Humble Bundle has been delivered from Google since late 2010, although it is not using the Google Cloud CDN solution. Among this set of sites, Redbubble was the last to begin delivering its site through a CDN, waiting until early 2016 to integrate Cloudflare.

Automotive

Looking at AutoTrader, we see that its site has been delivered by Akamai since late 2011. CarGurus turned up CDN services from Verizon Digital Media Services/EdgeCast in early 2013, and shifted to a dual-vendor strategy with Fastly and Verizon in late 2015, before moving to use Fastly exclusively in early 2017. held out much longer than its counterparts did, relying on origin infrastructure until activating Akamai’s services at the end of 2015.


Video

Sky and DirecTV have both been long-time Akamai customers, with Sky integrating the CDN services before the start of 2010, and DirecTV coming on board in late 2012. Netflix is well known as an Amazon Web Services customer, and its site has historically been hosted on Amazon’s EC2 service. Although not a CDN service, it appears Netflix used the cloud provider’s Elastic Load Balancing solution for a three-year period between 2012 and 2015.

Publishing

The Oxford University Press site is found on the Alexa list, but is not included here because we found no evidence of it being delivered through a CDN provider in our historical data set. (The www hostname simply redirects users to, but there is no evidence of CDN usage, either for whole site or embedded content delivery, on that site either.) Wiley’s site was also historically served directly from origin, before shifting to delivery through Amazon Cloudfront in late 2017. However, Cambridge University Press has had a longer-term reliance on CDN providers, delivering through Akamai for three years, before shifting to CDNetworks for two years, and then to Cloudflare for the last two years.

Nutrition

Both sites in this category have had a long-term reliance on Akamai for delivery. However, iHerb briefly tested Amazon Cloudfront in early 2014. It also pursued a multi-CDN strategy with Akamai and Cloudflare starting in late 2015, before moving exclusively to Cloudflare two years later.

Amazon

Zappos is included in this grouping, along with Amazon’s US and UK sites, because it has been owned by Amazon since 2009. As seen in the chart, it has relied on Akamai’s CDN services since that time as well. In contrast, Amazon’s native sites have only been served through a CDN since late 2016, with and being balanced between Akamai and Amazon’s own Cloudfront CDN solution.


In short, our Internet Guide data shows that Akamai has an extremely strong presence among top shopping sites, and has, for the most part, held that position for a number of years. It also exposes the fact that more than sixteen years after CDN providers launched basic whole-site delivery services, and more than thirteen years after dynamic site acceleration services were launched, there is still a set of shopping sites not taking advantage of those capabilities for performance and availability improvement, to say nothing of the security benefits of the WAF services these providers also offer.

If you have other historical recursive DNS data research ideas for future blog posts, please comment on this post or e-mail us at

by David Belson at May 14, 2018 02:45 PM

ipSpace.net Blog (Ivan Pepelnjak)

What Is EVPN?

EVPN might be the next big thing in networking… or at least all the major networking vendors think so. It’s also a pretty complex technology that is still facing interoperability challenges (I love to call it the SIP of networking).

To make matters worse, EVPN can easily get even more confusing if you follow some convoluted designs propagated on the ‘net… and the best antidote to that is to invest time into understanding the fundamentals, and to slowly work through more complex scenarios after mastering the basics.

Read more ...

by Ivan Pepelnjak ( at May 14, 2018 08:50 AM

XKCD Comics

May 12, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Cognitive Dissonance

I always wondered why it’s so hard to accept that someone might not find your preferred solution beautiful but would call it complex or even harmful (or from the other side, why someone could not possibly appreciate the beauty of your design)… and then stumbled upon this blog post by Scott Adams describing cognitive dissonance (the actual topic they’re discussing in the mentioned video doesn’t matter – look for the irrational behavior).

You might say “but we could politely agree to disagree,” but unfortunately Aumann’s Agreement Theorem implies that if we do, at least one of us is not being fully rational.

by Ivan Pepelnjak ( at May 12, 2018 05:55 AM

May 11, 2018

The Networking Nerd

Time To Get Back To Basics?

I’ve had some fascinating networking discussions over the past couple of weeks at Dell Technologies World, Interop, and the spring ONUG meeting. But two of them have hit on some things that I think need to be addressed in the industry. Both Russ White and Ignas Bagdonas of the IETF have come to me and talked about how they feel networking professionals have lost sight of the basics.

How Stuff Works

If you walk up to any network engineer and ask them to explain how TCP works, you will probably get a variety of answers. Some will try to explain it to you in basic terms to avoid getting too in depth. Others will swamp you with a technical discussion that would make the protocol inventors proud. But still others will just shrug their shoulders and admit they don’t really understand the protocol.

It’s a common problem when a technology gets to the point of being mature and ubiquitous. One of my favorite examples is the fuel system on an internal combustion engine. On older cars or small engines, the carburetor is responsible for creating the correct fuel-air mixture used to power the cylinders. Getting that mix right is half science and half black magic. For the people who know how to do it properly, it’s an easy way to wring maximum performance from an engine. For those who have tried it and failed, it’s something best left alone to run at defaults.

The modern engine uses fuel injection. It’s a black box. It can be reprogrammed or tinkered with but it’s something that is tuned in different ways from the traditional carburetor. It’s not something that’s designed to be played around with unless you really know what you’re doing. The technology has reached the point where it’s ubiquitous and easy to use, but very difficult to repair without a specialized skill set.

Most regular car drivers will look under the hood and quickly realize they know nothing about what’s going on. Some technical folks will be able to figure out what certain parts do by observing their behavior. But if you ask those same people how a fuel injection system or carburetor works they’ll probably give you a blank stare.

That’s the issue we find in modern networking. We’ve been creating VLANs and BGP route maps for years. Some people have spent their entire careers tuning multicast or optimizing layer 2 interconnects. But if you corner them and ask them how the protocol works or how best to design an architecture that takes advantage of their life’s work they can’t tell you aside from referencing some old blog post or some vendor’s validated design on their hardware.

Russ and Ignas each touch on something important. In the good old days before there were a hundred certification guides and a thousand SRNDs people had to do real work to find the best solution for a problem. They had to put pencil to paper and sort out the mess before they could make something work. That’s where the engineering side of the network comes from.

Today, it’s more “plug and play”. You drop in pieces of a solution and they should work together. In practice, that usually means all the pieces have to be from the same vendor or from approved partner sources. And anything that goes awry will need a team of experts and many, many consulting hours to figure out.

Imagine if we could only install networks without understanding how they work. Could you see a world where everything we install from a networking perspective is a black box like a fuel injector? That’s the case to a certain degree with cloud networking today. We don’t see what’s going on under the surface. We can only see what the interface exposes to us. That’s fine as long as the applications we are using support the things we’re trying to do with them. But when it comes to being able to fix the network at the level we’re used to seeing it could be difficult if not downright impossible.

Learning The Ropes

But, moreover, are the networking professionals configuring these networks even capable of making those changes? Does anyone other than Narbik really understand how EIGRP works? Facebook seems to think that lightweight messaging for routing protocols is outdated, so they used ZeroMQ without understanding why that’s a bad idea on slow-speed links. They may understand how a routing protocol works in theory, but they don’t completely understand how it’s supposed to behave in extreme cases.

Can we teach people the fundamentals of protocols and design that they need in order to produce proper designs without leaning on vendor reference documents? It’s a tall order for sure. Most blog posts are designed to talk about features or solve problems. Most literature from creators is designed to make their new widget work correctly. Very little documentation exists about integration or design. And a good portion of what does exist is a bit outmoded and needs to be spruced up.

We, as the stewards of networking, need to help this process along. We need to spend more time talking about design and theory. We need to dissect protocols and help people understand how to use the tools they have rather than hoping someone will build the best mousetrap ever to solve each piece of a complicated puzzle. We need to teach people to be thinkers and problem solvers. And, yes, that does mean a bit less complaining about things like vendor code quality and VAR behavior.

Why? Because those people are empowered by a lack of knowledge. Customers aren’t idiots. They have business reasons for the things they do. Our technology needs to support their business needs. Yes, that means we need to think critically about what we’re doing. And yes, that may mean eating our words now and then to avoid a showdown about something that’s ultimately unimportant in the long run.

If we increase the amount of knowledge about the important topics like design and protocols it should make the overall level of understanding go up for everyone. That means better designs. More integrated technology. Less reliance on being force-fed the bare minimum information necessary to make things work. And that means things will run faster and much more smoothly. Which is a win for everyone.

Tom’s Take

I’ll be the first to admit that I don’t know the dirty mechanics of Frame Relay switching or how to tune OSPF Hello timers for non-standard link types. It’s a skill I don’t use every day. But I know where to find them if I need them. And I know that it can help in certain situations where you see odd protocol behavior. Likewise, I know that if I need to go design a network for someone I need to get a lot of information about business needs and build the best network I can with the constraints that I have. Things like budget and time are universal. But one of those constraints shouldn’t be lack of knowledge about protocols and good design. Those are two things that should be ingrained into anyone that wants to have a title of “senior” anything.

by networkingnerd at May 11, 2018 07:47 PM

Dyn Research (Was Renesys Blog)

SeaMeWe-3 Experiences Another Cable Break

On Thursday, May 10 at approximately 02:00 UTC, the SeaMeWe-3 (SMW-3) subsea cable suffered yet another cable break. The break disrupted connectivity between Australia and Singapore, causing latencies to spike as illustrated below in our Internet Intelligence tool, because traffic had to take a more circuitous path.

The SMW-3 cable has had a history of outages, which we have reported on multiple times in the past, including August 2017, December 2014, and January 2013.


The incident summary posted by cable owner Vocus Communications for this most recent break noted that “There is no ETR at this stage.” However, based on our observations of past outages, time to recovery has been measured on the order of weeks.

While this subsea cable is currently the only one carrying traffic from Western Australia to South East Asia, there are several additional cable projects in process that will help address this long-standing issue. The Australia-Singapore Cable (ASC) is expected to be ready for service in July 2018, and the Indigo-West Cable is expected to be ready for service in 1Q 2019. Both cables will connect Perth to Singapore. Given the great expense of repairing submarine cable breaks and the fact that a replacement cable will be live soon, it will be interesting to see if the Perth-Jakarta-Singapore segment of SMW-3 ever gets fixed at all.

SMW-3 isn’t the only cable in the region that has suffered repeated cable breaks. AAG’s link to Vietnam also suffers cable breaks with regularity. The most recent one occurred in late April, and it saw five issues in 2017.


The cable breaks experienced by AAG and the Perth-Jakarta-Singapore segment of SMW-3 have occurred in some of the busiest waters in the world for fishing and shipping, in a region that is also prone to typhoons, earthquakes, and underwater landslides. The frequency with which the SMW-3 and AAG cables break has led my colleague Doug Madory to ask which should hold the title of the “world’s breakiest cable”.



by David Belson at May 11, 2018 07:28 PM

ipSpace.net Blog (Ivan Pepelnjak)

Video: Use Network Device REST API with PowerShell

More and more network devices support REST API as the configuration method. While it’s not as convenient as having a dedicated cmdlet, it’s possible to call REST API methods (and configure or monitor network devices) directly from a PowerShell script, as Mitja Robas demonstrated during the PowerShell for Networking Engineers webinar.
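To make the idea concrete, here is a rough sketch of the same pattern in Python rather than PowerShell (the webinar itself demonstrates cmdlets like Invoke-RestMethod); the device URL, endpoint, and credentials below are made-up placeholders, not a real device API:

```python
import base64
import json
import urllib.request

def build_rest_request(url, username, password, payload=None):
    """Build an authenticated HTTP request for a device's REST API.
    GET when there's no payload, POST with a JSON body when there is."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        url, data=data, headers=headers, method="POST" if data else "GET"
    )

# Hypothetical device endpoint and credentials, for illustration only:
req = build_rest_request(
    "https://device.example.net/restconf/data/interfaces", "admin", "admin"
)
print(req.get_header("Authorization"))  # Basic YWRtaW46YWRtaW4=
# To actually send it: urllib.request.urlopen(req)
```

The point in either language is the same: a plain HTTPS client with an authentication header is all you need to drive a device’s REST API, with or without a dedicated cmdlet.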

You’ll need at least a free ipSpace.net subscription to watch the video.

by Ivan Pepelnjak ( at May 11, 2018 05:40 AM

XKCD Comics

May 10, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Layers of Single-Pane-of-Glass Abstractions Won’t Solve Your Problems

This blog post was initially sent to the subscribers of my SDN and Network Automation mailing list. Subscribe here.

We’ve been told for years how we’re over-complicating networking, and how the software-defined or intent-based whatever will remove all that complexity and remove the need for networking engineers.

What never ceases to amaze me is how all these software-defined systems are demonstrated: each one has a fancy GUI that looks great in PowerPoint and might even work in practice assuming you’re doing exactly what they demonstrated… trying to be creative could result in interesting disasters.

Read more ...

by Ivan Pepelnjak ( at May 10, 2018 06:03 AM

May 09, 2018

My Etherealmind
Network Design and Architecture

Why Content Providers don’t like Access Service Providers

Why don’t Content Providers (Over the Top) like Access Service Providers? Perhaps the title of this post should have been ‘The Power of Access Providers’, or better yet, ‘Why Some Content Providers Don’t Like Some Access Service Providers’. You will understand why by the end of the post, I promise. […]

The post Why Content Providers don’t like Access Service Providers appeared first on Cisco Network Design and Architecture | CCDE Bootcamp |

by Orhan Ergun at May 09, 2018 10:04 AM

ipSpace.net Blog (Ivan Pepelnjak)

Autumn 2018 Network Automation Course Starts on September 18th

When the Spring 2018 Building Network Automation Solutions online course started, we didn’t know whether we’d run another course in 2018, so we offered engineers who wanted an early start the Believer price.

The wait is over: the autumn 2018 course starts on September 18th. The schedule of the live sessions is already online, and we also have the first guest speakers. We’ll announce them in early June, at which point you will no longer be able to get the Enthusiast price, so register ASAP.

by Ivan Pepelnjak ( at May 09, 2018 05:45 AM

XKCD Comics

May 08, 2018

My Etherealmind

ipSpace.net Blog (Ivan Pepelnjak)

ipSpace.net Subscription Now Available with PayPal

Every second blue moon someone asks me whether they could buy a subscription with PayPal. So far, the answer has been no.

Recently we started testing whether we could use Digital River to solve a few interesting challenges we had in the past, and as they offer PayPal as a payment option, it seemed to be a perfect fit for a low-volume trial.

The only product that you can buy with PayPal during the trial is the standard subscription – just select PayPal as the payment method during the checkout process.

Finally: the first three subscribers using PayPal will get an extra 6 months of subscription.

by Ivan Pepelnjak ( at May 08, 2018 02:10 PM

Network Design and Architecture

Please don’t register to South Africa/Johannesburg CCDE Class, it is full !

Hi Everyone, I would like to inform you that the instructor-led CCDE class in Johannesburg, South Africa is full, so please don’t register for it. Having more people would reduce the time available for discussions. Those who attended any of my earlier classes know that we already have a very packed agenda, approximately 2000 […]

The post Please don’t register to South Africa/Johannesburg CCDE Class, it is full ! appeared first on Cisco Network Design and Architecture | CCDE Bootcamp |

by Orhan Ergun at May 08, 2018 09:20 AM

Different IGP and BGP Methodologies of Multi National Service Providers

Different IGP and BGP Methodologies of Multi National Service Providers. I mentioned two different IGP and BGP design approaches, used by two different multinational service providers, in my last CCDE course. Both of these operators are in Africa; one of them operates in 4 countries and the other in 5. […]

The post Different IGP and BGP Methodologies of Multi National Service Providers appeared first on Cisco Network Design and Architecture | CCDE Bootcamp |

by Orhan Ergun at May 08, 2018 08:43 AM

ipSpace.net Blog (Ivan Pepelnjak)

The Difference between Hodgepodge PoC and Production

A friend of mine who had the unfortunate “pleasure” of being exposed to one of the open-source controller platforms sent me this after reading my snarky take on bragging about what you’re doing at Something-Open-Something-Something conferences.

Read more ...

by Ivan Pepelnjak ( at May 08, 2018 06:12 AM

Keeping It Classless

Up and Running with Kubernetes and Tungsten Fabric

I have a predominantly technical background. You can show me all the slide decks you want but until I can get my hands on it, it’s not real to me. This has greatly influenced what I’m focusing on now that I’m doing more than just technical work - how to reduce the barrier to entry for people to become acquainted with a project or product.

As a result, I’ve been getting more involved with Tungsten Fabric (formerly OpenContrail). Tungsten is an open-source Software-Defined Networking platform, and a healthy candidate for building some tutorials. In addition, I’m new to the project in general, so even if only for my own benefit, a blog post summarizing a quick and hopefully easy way to get up and running with it seems quite apropos.

Introduction to the Lab Environment

We’re going to spin up a 3-node cluster in AWS EC2 running Kubernetes, using Tungsten Fabric for the networking. Why AWS instead of something like Vagrant? Simply put, a lot of advanced networking software requires more system resources than most laptops can provide. In this case, a total of four virtual machines (a three-node cluster plus an Ansible provisioning machine) running Kubernetes and Tungsten isn’t exactly “lightweight”, and that’s without any applications on top. So AWS is a good option for quickly spinning a lab up or down, all programmatically.

The lab consists of four instances (virtual machines):

  • Ansible Provisioning VM - started by CloudFormation, responsible for instantiating the other three instances, and installing Kubernetes and Tungsten on them.
  • Controller - runs Tungsten and Kubernetes controller software
  • Compute01 - runs Kubernetes Kubelet and Tungsten vRouter, as well as any apps
  • Compute02 - runs Kubernetes Kubelet and Tungsten vRouter, as well as any apps

Recently, the Tungsten wiki was updated with instructions and a CloudFormation template for spinning up this environment. CloudFormation is a service offered by AWS that lets you define a whole bunch of underlying infrastructure in text files ahead of time, so you can run a single command rather than clicking through a bunch of GUIs, and presto chango, you have a lab.

I took this work and ran with it to provide more opinionated parameters. This makes things a little simpler for our uses, so you don’t need to bother with a bunch of inputs to get to a quick Kubernetes/Tungsten cluster.

This lab also uses the relatively new Ansible provisioning playbooks for doing much of the legwork. Once CloudFormation spins up a single instance for running these playbooks, they’ll spin up additional AWS instances, and take care of installing Kubernetes and Tungsten components for us.


One advantage of using tools like CloudFormation or Terraform, as well as simpler tools like Vagrant, is that the overwhelming majority of the infrastructure complexity is defined ahead of time in text files, so you, the user, only need to do a few things to get a lot of value from this lab. That said, there are a few things you need to take care of ahead of time.

Spin up the “Stack”

CloudFormation defines infrastructure using template files. When we spin up infrastructure using CloudFormation, it refers to it all as a “Stack”. I have a Github repo where my modified CloudFormation template is located, so the first step is to clone this repo to your machine:

git clone && cd tftf

Now that we’ve got the repo cloned, we can run this command to spin up our stack. Note that we’re referring to cftemplate.yaml in this command, which is the CloudFormation template that defines our stack, located within this repo:

aws cloudformation create-stack --capabilities CAPABILITY_IAM --stack-name tf --template-body file://cftemplate.yaml

If that runs successfully, you should see it output a short JSON snippet containing the Stack ID. At this point, we can navigate to the CloudFormation console to see how the set-up activities are progressing:

You can navigate to the EC2 dashboard and click on “Instances” to see the new instance being spun up by CloudFormation:

You might ask: why only one instance? This is how the Ansible playbooks do their work. CloudFormation only needs to spin up a single instance with Ansible to run these playbooks; those playbooks then connect to the AWS API directly to spin up the remaining instances that actually run our cluster.

This means you need to be patient - it may take a few minutes for all of this to happen. Read on for details on how to know when the provisioning is “done”.

After a few minutes, some additional instances will start to appear (use the refresh button to the right):

Eventually, you’ll see a total of four instances in the dashboard - one for our initial Ansible machine spun up by CloudFormation, and the remaining three that will form our Kubernetes/Tungsten cluster:

Accessing the Cluster

While it’s possible to SSH directly to any instance, as they all have public IPs provisioned, the Ansible machine already has certificates in place to easily authenticate with the cluster instances. So, we can SSH to the Ansible machine once and find everything from there.

First, grab the public IP address or FQDN of the Ansible instance:

Then, use that to connect via SSH with the user root and the password tungsten123:

ssh root@<ansible instance public IP or FQDN>

You should be presented with a bash prompt: [root@tf-ansible ~]# on successful login.

Now that we’re on the Ansible machine, we can take a look at the Ansible log located at /root/ansible.log. This is our only indication on the progress of the rest of the installation, so make sure you take a look at this before doing anything else:

tail -f ansible.log

YMMV here. Sometimes I ran this and it was super quick, other times it took quite a long time. Such is the way of cloud.

You should see PLAY RECAP somewhere near the bottom of the output, which indicates Ansible has finished provisioning everything on the other instances. If you don’t, let the execution continue until it finishes.
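If you’d rather script this check than eyeball the log, the recap line is easy to test for. A small sketch, assuming Ansible’s standard PLAY RECAP format:

```python
import re

def recap_ok(log_text):
    """Return True if an Ansible PLAY RECAP is present and reports no
    failed or unreachable hosts. Assumes the standard recap format, e.g.
      host1 : ok=42 changed=10 unreachable=0 failed=0"""
    if "PLAY RECAP" not in log_text:
        return False  # provisioning still in progress
    counts = re.findall(r"(unreachable|failed)=(\d+)", log_text)
    return bool(counts) and all(int(n) == 0 for _, n in counts)

sample = """\
PLAY RECAP *********************************************************
controller  : ok=120 changed=58 unreachable=0 failed=0
compute01   : ok=95  changed=41 unreachable=0 failed=0
"""
print(recap_ok(sample))  # True
```

You could run this against /root/ansible.log in a loop instead of tailing it by hand; the function returns False both while the playbooks are still running and when any host failed.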

Finally, we can navigate to the Tungsten Fabric (still branded OpenContrail, don’t worry about it :) ) console by grabbing the public IP address:

Use that IP or FQDN as shown below in your web browser, and log in with the user admin and the password contrail123 (leave “domain” blank):

https://<controller public IP or FQDN>:8143/

We can use the same FQDN or IP to ssh from our Ansible instance to the controller instance. No password needed, as the Ansible instance already has SSH keys installed on the cluster instances:

ssh centos@<controller public IP or FQDN>

Destroy the Lab When Finished

If you wish to clean everything up when you’re not using it to save cost, there’s a bit of a catch. We can delete our CloudFormation stack easily enough with the appropriate command:

aws cloudformation delete-stack --stack-name tf

You should eventually see the stack status transition to DELETE_COMPLETE in the CloudFormation console.

However, as mentioned previously, CloudFormation is only responsible for, and therefore only knows about, the one Ansible instance. It will not automatically delete the other three instances spun up by Ansible. So we’ll need to go back into the EC2 console, navigate to Instances, check the boxes next to the controller and both compute instances, and select Actions > Instance State > Terminate.

You may also have to clean up unused EBS volumes as well. Make sure you delete any unused volumes from the “EBS” screen within the EC2 console. For some reason, CloudFormation isn’t cleaning these up from the Ansible instance, and I haven’t had a chance to run this issue down yet.


That’s it for now! We’ll explore this lab in much greater detail in a future blog post, including interacting with Tungsten Fabric, running applications on Kubernetes, and more.

I hope you were able to get a working Tungsten Fabric lab up and running with this guide. If you have any feedback on this guide, feel free to leave a comment, and I’m happy to improve it.

May 08, 2018 12:00 AM

May 07, 2018

My Etherealmind

ipSpace.net Blog (Ivan Pepelnjak)

Using 4-Byte BGP AS Numbers with EVPN on Junos

After documenting the basic challenges of using EBGP and 4-byte AS numbers with EVPN automatic route targets, I asked my friends working for various vendors how their implementation solves these challenges. This is what Krzysztof Szarkowicz sent me on specifics of Junos implementation:

To learn more about EVPN technology and its use in data center fabrics, watch the EVPN Technical Deep Dive webinar.

Read more ...

by Ivan Pepelnjak ( at May 07, 2018 05:47 AM


May 04, 2018

The Networking Nerd

Transitioning Away From Legacy IT

One of the more exciting things I saw at Dell Technologies World this week was the announcement by VMware that they are supporting Microsoft Azure now in addition to AWS. It’s interesting because VMware is trying to provide a proven, stable migration path for companies that want to move to the cloud but still retain their investments in VMware and legacy virtualization. But is offering a legacy transition a good idea?

Hold On For One More Day

If I were to mention VLAN 1002-1005 to networking people, they would likely jump up and tell me that I was crazy. Because those VLANs are not valid on any Cisco switches save for the Nexus line. But why? What makes these forbidden? Unless you’re studying for your CCIE you probably just know these are bad and move on.

Turns out, they are a legacy transition mechanism from the IOS-SX days. 1002 and 1004 were designed to bridge FDDI-to-Ethernet, and 1003 and 1005 did the same for Token Ring. As Greg Ferro points out here, this code was tightly bound into IOS-SX and likely couldn’t be removed for fear of breaking the OS. The reservation continued forward in all IOS branches except NX-OS, which pulled them due to lack of support for those protocols.

So, we’ve got a legacy transition mechanism causing problems for users well past the “use by” date. Token Ring was on the way out at IBM in 2001. And yet, for some reason seventeen years later I still have to worry about bridging it? Or how about the rumors that Windows skipped from version 8 to version 10 because legacy code bases assumed Windows 9 meant Windows 95? Something 23 years old forced a major version change?

We keep putting legacy bridges in place all the time to help migrate things. Virtualization isn’t the only culprit here. We’ve found all manner of things that we can do to “trick” systems into working with modern hardware. We even made one idea into a button. But we never really solve the underlying issues. We just keep creating workarounds until we’re forced to move.

The Dream Is Still Alive

As it turns out, it’s expensive to refactor code bases and update legacy software to support new hardware. We’ve hit this problem time and time again with all manner of products. I can remember when Cisco CallManager wouldn’t install on a spare server I had with the same model number as a supported machine just because the CPU was exactly 100MHz too fast. It’s frustrating to say the least.

But, we also have to realize that legacy transition mechanisms are not permanent fixes. It’s right there in the name. Transition. We put them in place because it’s cheaper in the short term while we investigate long term methods to make everything work correctly. But it’s still important to find those long term solutions. Maybe it’s a new application. Or a patch to make it work with new hardware. Sometimes, as Apple has done, it’s a warning that old software will stop working soon.

As developers, it’s important to realize that your app may last long past the date you want to stop supporting it. If you could still install Office 2000 on a desktop, I’m almost positive that someone would try it. We still have ways to install and use DOS software! If you want to ensure that your software is being used correctly and that you aren’t issuing patches for it after you’ve retired to a comfortable island with no Internet connection, make sure you find a way to ease transitions and offer new connection options to users.

For those of you that are still stuck in the morass of supporting legacy software or hardware, take a look at what you’re using it for and try to make hard choices where appropriate. If your organization is moving to the cloud, maybe now is the time to cut off your support for an application that’s too old to survive the migration. Or maybe it’s time to retire the Domain Controller That Time Forgot. But you have to do it before you’re forced to virtualize it and support it in perpetuity in AWS or Azure.

Tom’s Take

I’ll be the first to admit that legacy hardware and software are really popular. I worked with a company one time that still had an AS/400 admin on staff because of one application. It just happened to be the one that paid people. At Interop ITX this year, the CIO for Detroit mentioned that they had to bring a developer out of retirement to make sure people kept getting paid. But legacy can’t be a part of the future. You either need to find a way to move what you have while you look for something better or you need to cut it off at the knees and find a way to make those functions work somewhere else. Because you don’t want to be the last company running AS/400 payroll over token ring bridged to a Cisco switch on VLAN 1003.

by networkingnerd at May 04, 2018 07:28 PM

ipSpace.net Blog (Ivan Pepelnjak)

Automation Win: Zero-Touch Provisioning

Listening to the networking vendors, it seems that zero-touch provisioning is a no-brainer … until you try to get it working in real life, and the device you want to auto-configure supports only IP address assignment via DHCP, configuration download via TFTP, and a DHCP option that points to the configuration file.
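To illustrate how little that mechanism actually involves, here is a sketch of the server side using ISC dhcpd; the subnet, server address, and filename below are invented for illustration:

```
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.200;
  # TFTP server the device fetches its configuration from
  # (next-server, i.e. the siaddr field / DHCP option 66 equivalent)
  next-server 192.168.10.1;
  # configuration file to download (the DHCP "filename" / option 67)
  filename "switch-base.cfg";
}
```

Everything beyond that (what the device does with the file, whether it reboots, how errors are reported) is up to the device’s bootstrap code.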

As Hans Verkerk discovered when he tried to implement zero-touch provisioning with Ansible while attending the Building Network Automation Solutions course, you have to:

Read more ...

by Ivan Pepelnjak ( at May 04, 2018 03:13 PM

Network Automation with Brigade on Software Gone Wild

David Barroso was sick-and-tired of using ZX Spectrum of Network Automation and decided to create an alternative with similar functionality but a proper programming language instead of YAML dictionaries masquerading as one. The result: Brigade, an interesting network automation tool we discussed in Episode 90 of Software Gone Wild.


by Ivan Pepelnjak ( at May 04, 2018 06:46 AM